\section{Introduction}
Consider $N$ random walkers on a line. The only interaction between the walkers is that if any two walkers meet, they annihilate each other; hence these walkers are deemed vicious. This system was introduced by M. E. Fisher in his 1983 Boltzmann Medal lecture and was applied to interfacial wetting in $1+1$ dimensions since the interaction of different interfaces (walkers) drives the wetting transition~\cite{Fisher,Fisher.Huse}. In addition, the fermionic nature of vicious walkers provides a Coulomb gas description and, thereby, a link with Gaussian random matrices~\cite{Dyson,Forrester}. More specifically, Baik has proven that a particular limiting conditional distribution of the displacement of the leftmost walker is equivalent to the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE)~\cite{Baik}. Moreover, vicious walker configurations correspond to directed polymer brushes, with the vicious mechanism capturing the non-intersecting property of the polymers~\cite{Essam}. Thus, while the study of vicious walkers has attracted attention from the mathematical physics community, a number of different physical applications also drive its study.
There have been generalizations of vicious walkers to dissimilar walkers~\cite{Gelfand}, walkers with drift~\cite{Wrinkler}, and walkers with external potentials~\cite{Bray}. Here, we introduce a system of $N$ accelerating vicious walkers and study the survival probability, $s(t)$, the probability of having none of the $N$ walkers annihilated up to time $t$. In one-dimension, a randomly accelerating walker, $x(t)$, is defined by
\begin{equation}
\frac{d^2 x(t)}{dt^2}=\eta(t),
\end{equation}
where $\eta(t)$ is Gaussian noise with $\langle \eta(t)\rangle=0$ and $\langle\eta(t)\eta(t')\rangle=2D\delta(t-t')$. The corresponding Fokker-Planck equation is
\begin{equation}
\left[ D\frac{\partial^2}{\partial v^2}-v\frac{\partial}{\partial x}\right]p(x,v,t)=\frac{\partial}{\partial t}p(x,v,t),
\end{equation}
where $p(x,v,t)$ is the probability density distribution for a randomly accelerating walker in one-dimension.
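As a concrete numerical companion (a minimal sketch of ours, not taken from the text; the step size and parameter values are illustrative), the Langevin equation $\ddot{x}=\eta(t)$ can be integrated exactly over each finite step, since $v$ is a Brownian motion and $x$ its running integral:

```python
import numpy as np

def accelerating_walker(n_steps, dt=0.01, D=1.0, rng=None):
    """One trajectory of x''(t) = eta(t): over a step of length dt the pair
    (dv, dx - v*dt) is exactly jointly Gaussian with Var(dv) = 2*D*dt,
    Var(dx - v*dt) = (2/3)*D*dt**3 and Cov = D*dt**2."""
    rng = np.random.default_rng() if rng is None else rng
    cov = np.array([[2.0*D*dt,      D*dt**2],
                    [D*dt**2, 2.0*D*dt**3/3.0]])
    L = np.linalg.cholesky(cov)
    incr = rng.standard_normal((n_steps, 2)) @ L.T   # columns: (dv, zeta)
    v = np.cumsum(incr[:, 0])
    v_prev = np.concatenate(([0.0], v[:-1]))         # velocity entering each step
    x = np.cumsum(v_prev*dt + incr[:, 1])
    return x, v

rng = np.random.default_rng(0)
x, v = accelerating_walker(1000, dt=0.01, rng=rng)
# sanity checks: <v(t)^2> = 2Dt and <x(t)^2> = (2/3)Dt^3
```

Because the update uses the exact joint Gaussian of the velocity and position increments, the scheme is free of time-discretization error in the free (unabsorbed) statistics.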
A randomly accelerating walker is presumably the simplest non-Markovian
stochastic process. Given the fermionic interaction between vicious
accelerating walkers, one can explore the possibility of a non-Markovian
analogue to the Coulomb gas and, hence, a potentially new class of
non-Markovian random matrices. Recently, Fukushima and collaborators have
revisited Dyson's original Brownian motion model for random matrices and found
another non-Markovian stochastic process after generalizing the coefficients
of the matrices~\cite{Fukushima}. In addition, a randomly accelerating walker
appears in the Boltzmann weight of an extensible, semiflexible polymer of
length $L$ with non-zero bending energy~\cite{Burkhardt}. For a given displacement vector $\vec{r}(z)$ from some reference point along a contour length, $z$, the Hamiltonian, $H$, is given by
\begin{equation}
H(\vec{r}(z),\vec{u}(z);z)=\frac{\kappa}{2}\int_0^L{dz}\left(\frac{d^2\vec{r}(z)}{dz^2}\right)^2,
\end{equation}
where $\vec{u}(z)$ denotes the tangent vector and $\kappa$ characterizes the bending rigidity. Mapping the contour length $z$ to time $t$ and the displacement vector $\vec{r}$ to $\vec{x}(t)$, the equation of motion for this system corresponds to a randomly accelerating walker in the corresponding dimension. Implementing the vicious interaction between $N$ randomly accelerating walkers, therefore, corresponds to the statistics of non-intersecting semiflexible polymer brushes. In the interest of studying a mixture of flexible and semiflexible polymer brushes, one can also address a mixture of vicious accelerating and random walkers. Please see the appendix for a discussion of the $N\le3$ vicious mixed walker system.
\section{$N\le 3$ vicious accelerating walkers}
We begin by considering $N=2$ vicious accelerating walkers in one-dimension with equal diffusion constants $D_1=D_2=D$. The two accelerating walkers are governed by the equation,
\begin{eqnarray}
D\left(\frac{\partial^2}{\partial v_1^2}+\frac{\partial^2}{\partial v_2^2}\right )p(x_1,x_2,v_1,v_2,t) \nonumber\\
-\left(v_1\frac{\partial}{\partial x_1}+v_2\frac{\partial}{\partial x_2}\right )p(x_1,x_2,v_1,v_2,t)= \nonumber\\
\frac{\partial}{\partial t}p(x_1,x_2,v_1,v_2,t)
\end{eqnarray}
with the initial condition, $
p(x_1,x_2,v_1,v_2,t=0)=\delta(x_1-x_{1,i})\delta(x_2-x_{2,i})\delta(v_1)\delta(v_2)$,
with $x_{1,i}<x_{2,i}$. In addition, to compute the survival probability, $s(t)$, we implement the boundary condition,
\begin{equation}
p(x_1=x_2,v_1>v_2,t)=0
\end{equation}
such that the system ``dies'' when the two walkers meet in space and have ingoing velocities. Note that the boundary condition in $v$ is redundant since, due to the initial positions, the two walkers cannot meet with a relative outgoing velocity.
We then choose the relative coordinate system to reduce the $N=2$ vicious accelerating walkers to one random accelerating walker in the presence of an absorbing wall. This change of variables takes the form, $x=x_1-x_2$ and $v=v_1-v_2$. The Fokker-Planck equation thereby reduces to
\begin{equation}
\left[2D\frac{\partial^2}{\partial v^2}-v\frac{\partial}{\partial x}\right]p(x,v,t)=\frac{\partial}{\partial t}p(x,v,t),
\end{equation}
with the boundary condition, $p(x=0,v>0,t)=0.$
The survival probability for this process is nontrivial in that one cannot invoke the method of images as is done for the ordinary random walker. Using properties of the integral of a Brownian curve, Sinai~\cite{Sinai} proved that the asymptotic survival probability distribution is given by
\begin{equation}
s(t)\sim t^{-1/4}
\end{equation}
at long times. Note that the first-passage time distribution, $f(t)$, where the first passage time is defined by the time at which any of the two walkers meet is given by $f(t)=-\frac{ds(t)}{dt}$ such that $f(t)\sim t^{-5/4}$. In general, if the survival probability distribution is given by $s(t)\sim t^{-\alpha}$ at large times, then $f(t)\sim t^{-\beta}$ with $\beta=\alpha+1$.
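To make the relative-coordinate picture concrete, the following Monte-Carlo sketch (our own illustrative discretization, with assumed parameter values) simulates $x=x_1-x_2$, which is itself a randomly accelerating walker with diffusion constant $2D$, started at $x<0$ and killed at the origin, and compares the measured decay of $s(t)$ with the $t^{-1/4}$ law:

```python
import numpy as np

def acc_walker_alpha(n_walkers=20000, n_steps=4000, dt=0.05, D=1.0, seed=0):
    """Survival exponent of the relative coordinate x = x1 - x2: an
    accelerating walker with diffusion constant 2D, started at x = -1 and
    absorbed once x >= 0.  The v > 0 condition at the wall is automatic,
    since a crossing requires an incoming relative velocity."""
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, -1.0)
    v = np.zeros(n_walkers)
    alive = np.ones(n_walkers, bool)
    t1, t2 = n_steps // 10, n_steps
    s = {}
    for step in range(1, n_steps + 1):
        v[alive] += np.sqrt(2.0*(2.0*D)*dt) * rng.standard_normal(alive.sum())
        x[alive] += v[alive] * dt
        alive &= (x < 0.0)
        if step in (t1, t2):
            s[step] = alive.mean()
    # s(t) ~ t^-alpha between the two sampled times
    return np.log(s[t1]/s[t2]) / np.log(t2/t1), s

alpha, s = acc_walker_alpha()
# alpha should come out near Sinai's value 1/4, up to discretization bias
```

The finite time step misses some crossings, so the estimate carries a small systematic bias; a production run would use the exact joint update and extrapolate in $dt$.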
A heuristic argument, based on Sinai's approach, for $\alpha=1/4$ was given in Ref.~\cite{Schwarz}. First, a new time counter, $M$, is defined that is incremented each time the velocity crosses zero. In other words, the original time is a sum of $M$ intervals drawn from the return-time distribution of a random walk, i.e. a one-sided Levy distribution with Levy index $\mu=1/2$. Moreover, with this new counting, the position variable is a Levy flight with Levy index $\mu=1/3$. By invoking the powerful superuniversality of the Sparre-Andersen theorem in one-dimension~\cite{SparreAndersen,Feller,Majumdar}, the first-passage time distribution for the position in terms of $M$ is the same as for a random walk, i.e. $M^{-3/2}$. To convert back to the original time, one simply needs to compute the integral
\begin{equation}
f(t)\sim \int \frac{1}{M^{\frac{3}{2}}}\frac{M}{t^{\frac{3}{2}}}\exp(-M^2/4t)\, dM \sim t^{-\frac{5}{4}},
\end{equation}
where $\frac{M}{t^{3/2}}\exp(-M^2/4t)$ is, up to constants, the Levy-Smirnov limiting distribution for the sum of $M$ $\mu=1/2$ Levy variables; the substitution $M=\sqrt{t}\,u$ recovers $f(t)\sim t^{-5/4}$. We should also mention that Burkhardt~\cite{Burkhardt} made use of Marshall-Watson functions~\cite{Marshall.Watson} to solve the Laplace transform version of Eq. 6 with the absorbing boundary condition.
To numerically check the $\alpha=1/4$ result (and other results), we implement
the go-with-the-winners algorithm~\cite{Winner}. We do this because as $N$
increases, the first-passage time exponent $\beta$ increases, making it more
difficult to sample the tail of the distribution. The go-with-the-winners
algorithm iterates replica systems in parallel. Once at least one pair of
accelerating walkers has crossed paths in some fraction of the replicas, the
surviving replicas are copied over to the replicas that have already
``died''. We choose that fraction to be one-half such that each copy generated
carries a relative weight of $1/2^c$, where $c$ is the number of copies, which
is then incorporated into the survival probability. The number of replicas ranges between $1,000$ and $10,000$, and the number of runs averaged over ranges between $10$ and $40$. Fluctuations between the runs are then used for error analysis.
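The enrichment scheme can be sketched as follows (an illustrative reimplementation of ours, not the authors' code, demonstrated on a toy problem with a known answer: a single random walker absorbed at the origin, $\alpha=1/2$; for bookkeeping we multiply the weight by the surviving fraction at each cloning event, which reduces to the $1/2^c$ rule when that fraction is exactly one-half):

```python
import numpy as np

def gwtw_survival(n_rep=2000, n_steps=4000, seed=0):
    """Go-with-the-winners estimate of s(t) for a random walker started at
    x = 1 and absorbed at x <= 0.  Whenever at most half the replicas remain
    alive, survivors are cloned into the dead slots and the common weight is
    multiplied by the surviving fraction, so the tail stays populated."""
    rng = np.random.default_rng(seed)
    x = np.ones(n_rep)
    alive = np.ones(n_rep, bool)
    weight = 1.0
    s = np.empty(n_steps)
    for t in range(n_steps):
        x[alive] += rng.standard_normal(alive.sum())
        alive &= (x > 0.0)
        frac = alive.mean()
        s[t] = weight * frac
        if 0.0 < frac <= 0.5:
            # enrichment step: copy surviving replicas over the dead ones
            idx = rng.choice(np.flatnonzero(alive), size=(~alive).sum())
            x[~alive] = x[idx]
            weight *= frac
            alive[:] = True
    return s

s = gwtw_survival()
alpha = np.log(s[199] / s[-1]) / np.log(4000 / 200)
# alpha should land near the known value 1/2 for this toy problem
```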
\begin{figure}[t]
\epsfig{figure=vicious.vaw.N2.N3.eps,width=3.0in}
\caption{Log-log plot of survival probability distribution versus time for $N=2$ and $N=3$ vicious accelerating walkers in line and for a single accelerating walker in a 60$^\circ$ wedge geometry. The line denotes a survival probability exponent of $1/4$, while the dashed line denotes a survival probability exponent of $3/4$. }
\end{figure}
Having calibrated our results for the $N=2$ case, we now consider $N=3$ vicious accelerating walkers with equal diffusion constants. The Fokker-Planck equation is given by
\begin{equation}
\sum_{j=1}^3\left[D\left(\frac{\partial^2}{\partial v_j^2}\right )
-\left(v_j\frac{\partial}{\partial x_j}\right)\right]p(X,V,t)
=\frac{\partial}{\partial t}p(X,V,t),
\end{equation}
where $X=\{x_1,x_2,x_3\}$, and $V=\{v_1,v_2,v_3\}$.
We apply the change of variables $v=v_1-v_2$, $u=v_2-v_3$, $x=x_1-x_2$, $y=x_2-x_3$ such that the left-hand-side operator in the relative coordinates becomes
\begin{equation}
2D\left(\frac{\partial^2}{\partial v^2}+\frac{\partial^2}{\partial u^2}-\frac{\partial^2}{\partial v\partial u}\right)-\left(v\frac{\partial}{\partial x}+u\frac{\partial}{\partial y}\right)
\end{equation}
with absorbing boundaries at $x=0$ (for $v>0$) and at $y=0$ (for $u>0$). Again, here, the boundary conditions on the velocities are redundant. To remove the coupled term in $u$ and $v$, we perform another set of linear transforms, $u=\frac{l-q/\sqrt{3}}{2}, v=\frac{-l-q/\sqrt{3}}{2}, x=\frac{-z-w/\sqrt{3}}{2},$ and $y=\frac{z-w/\sqrt{3}}{2}$ to obtain
\begin{equation}
6D\left(\frac{\partial^2}{\partial q^2}+\frac{\partial^2}{\partial l^2}\right)-\left(q\frac{\partial}{\partial w}+l\frac{\partial}{\partial z}\right)
\end{equation}
with absorbing boundaries, $z=\pm w/\sqrt{3}$, i.e. a $60^{\circ}$ wedge in the $z-w$ plane.
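The two successive linear transforms can be checked mechanically. The following sketch (our own verification script, not part of the paper) applies both sides of the operator identity to a generic exponential test function, which suffices here because the diffusion part has constant coefficients and the drift is linear in the velocities:

```python
import sympy as sp

u, v, x, y, D, a, b, c, d = sp.symbols('u v x y D a b c d', real=True)

# inverse maps implied by the transforms in the text:
#   u=(l-q/sqrt3)/2, v=(-l-q/sqrt3)/2, x=(-z-w/sqrt3)/2, y=(z-w/sqrt3)/2
q = -sp.sqrt(3)*(u + v);  l = u - v
w = -sp.sqrt(3)*(x + y);  z = y - x

# generic exponential test function of the new variables
F = sp.exp(a*q + b*l + c*w + d*z)

# original operator: 2D(d^2/dv^2 + d^2/du^2 - d^2/dvdu) - (v d/dx + u d/dy)
lhs = 2*D*(sp.diff(F, v, 2) + sp.diff(F, u, 2) - sp.diff(F, v, u)) \
      - (v*sp.diff(F, x) + u*sp.diff(F, y))
# target operator: 6D(d^2/dq^2 + d^2/dl^2) - (q d/dw + l d/dz), applied to F
rhs = 6*D*(a**2 + b**2)*F - (q*c + l*d)*F

assert sp.simplify(lhs - rhs) == 0
```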
We have reduced three vicious accelerating walkers in one-dimension to one accelerating walker in two-dimensions in a wedge geometry. For comparative purposes, let us review the ordinary random walker in a wedge~\cite{Redner.Book}. Using the conformal mapping $z'=z^{\pi/\theta}$, one can map the wedge geometry to the upper-half plane. Then, the motion in the $x$-direction becomes unbounded, while in the $y$-direction one has the simple situation of a one-dimensional walker with an absorbing boundary condition. The survival probability distribution asymptotically in the upper-half plane is $s(t)\sim t^{-1/2}$. After an inverse conformal mapping, one arrives at the survival probability distribution $s(t)\sim t^{-\pi/(2\theta)}$.
While there is currently no analytical solution for the survival probability distribution for an accelerating walker in a $60^{\circ}$ wedge geometry, for a $90^{\circ}$ wedge, the Sparre-Andersen theorem can be invoked for the two independent directions to arrive at an asymptotic power-law survival probability distribution with $\alpha=\frac{1}{2}$. In addition, for a $180^{\circ}$ wedge, $\alpha=\frac{1}{4}$. While the Sparre-Andersen theorem is quite powerful and can easily be extended to as many independent dimensions as needed, the $60^{\circ}$ wedge geometry couples the two directions (for $N=3$, at least) and, therefore, the Sparre-Andersen theorem, as it stands, cannot be invoked.
However, to smoothly interpolate between the $90^{\circ}$ and $180^{\circ}$
cases, we conjecture that for other wedge angles, the survival probability
distribution also asymptotes to a power-law with survival probability
exponent, $\alpha$, with $\alpha$ decreasing continuously as the wedge angle
increases. To test this conjecture, we resort to numerical simulation of both the
wedge geometry for one accelerating walker and the line geometry for three
accelerating walkers. The result is presented in Fig. 1. We measure a survival
probability exponent of $\alpha=0.71\pm0.01$ for $N=3$ vicious accelerating
walkers in a line, which agrees with the $60^{\circ}$ wedge geometry result
(with essentially the same error bar). Also, referring back to the accelerating walker--Levy flight mapping implemented to demonstrate the $\alpha=\frac{1}{4}$ exponent for an absorbed accelerating walker in one-dimension, simulating a $\mu=1/3$ Levy flight in a $60^{\circ}$ wedge geometry (the region between polar angles $0^{\circ}$ and $60^{\circ}$) yields $\alpha=0.71\pm0.03$.
Figure 2 tests our conjecture that $\alpha$ continuously decreases with increasing wedge angle. Both the numerical values of $\alpha$ for the $90^{\circ}$ and $180^{\circ}$ wedges agree well with their analytical counterparts. For comparison purposes, we have also plotted the curve $\alpha=\pi/(4\theta)$, which agrees with the two analytical solutions and can be viewed as a trivial extension of the random walker solution. While the agreement looks reasonable at larger angles, the deviation becomes more apparent at smaller angles. It also appears that the divergence in $\alpha$ as $\theta$ decreases to zero is slower than $1/\theta$.
\begin{figure}[t]
\epsfig{figure=vicious.vaw.wedge.radians.eps,width=3.0in}
\caption{Survival probability exponent for a two-dimensional accelerating walker in a wedge geometry as a function of the opening angle. The line denotes $\alpha=\pi/(4\theta)$, with $\theta$ in radians. Inset: Log-log plot to emphasize the difference between the data and $\alpha=\pi/(4\theta)$. The error bars are smaller than the symbols.}
\end{figure}
\section{$N>3$ vicious accelerating walkers}
Now, we numerically address $N>3$
vicious accelerating walkers. To compare with ordinary vicious walkers, based
on the method of images, Fisher~\cite{Fisher,Fisher.Huse} considered one
compound walker in $N$ dimensions that cannot cross any of the
$x_1=x_2,x_2=x_3,...,$ or $x_{N-1}=x_N$ linear manifolds. Using the method of
images, as long as the initial positive and negative weights (corresponding to
unrestricted positive and negative walkers) are chosen such that the probability
distribution is antisymmetric under reflection within each linear manifold,
then the distribution will satisfy the absorbing boundary conditions for all
future times. These weights can be represented as a Vandermonde determinant,
which can be factorized to yield a product of $\frac{1}{2}N(N-1)$ pairings
such that $\alpha(N)=N(N-1)/4$, where $\alpha(2)=1/2$. Previous and current simulations verify this prediction; see Figure 3. By the same argument, if the survival exponent for two (one pair of) vicious accelerating walkers is $1/4$, the survival exponent for the $N$ vicious accelerating walker system should be $N(N-1)/8$. However, for vicious accelerating walkers, the method of images fails.
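For orientation (a standard result we state explicitly; it is not spelled out in the text), the images construction for ordinary vicious walkers can be written compactly in Karlin--McGregor form,
\begin{equation*}
p(X,t)=\det_{1\le j,k\le N}\left[g(x_j-x_{k,i},t)\right],\qquad g(x,t)=\frac{e^{-x^2/4Dt}}{\sqrt{4\pi D t}},
\end{equation*}
whose long-time expansion produces the Vandermonde factor $\prod_{j<k}(x_k-x_j)$ and hence, upon integrating over the ordered region, the decay $s(t)\sim t^{-N(N-1)/4}$.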
Based on the $N=2$ and $N=3$ results, combined with the fact that $N$ vicious accelerating walkers can be mapped to one accelerating walker in $N-1$ dimensions in an unbounded domain, we expect the power-law survival probability distribution to extend to $N>3$. In Figure 3 we present simulation results for the survival probability exponents of the vicious accelerating walker system up to $N=10$, as well as the naive prediction, $\alpha=N(N-1)/8$. The deviation is apparent at large $N$. We find that $N(N-1)/8$ is clearly an upper bound for the measured $N$. The non-Markovian nature of the accelerating walker enables the system to survive longer, and in a way that cannot be accounted for by individual pairings.
\begin{figure}[t]
\epsfig{figure=vicious.acc.vel.compare.fontchange.eps,width=3.0in}
\caption{Survival probability exponent $\alpha$ for vicious accelerating walkers (solid squares) and vicious Gaussian walkers (solid circles) systems up to $N=10$. The simulation results for vicious Gaussian walkers agree very well with theory prediction $\alpha=N(N-1)/4$ (black curve). However, the results for vicious accelerating walkers deviate from a method of images prediction of $\alpha=N(N-1)/8$ (blue dashed curve). }
\end{figure}
\section{Vicious Levy flights}
As mentioned previously, the vicious Gaussian walker
problem is closely related to Gaussian random matrix theory. A matrix with
random entries falls into this category as long as the entries are independent
and identically distributed (iid) variables with a finite second moment of the
corresponding distribution. A generalization of the random Gaussian matrix is
the random Levy matrix~\cite{RLM}, where the entries are drawn from a broader
distribution, namely a Levy distribution. The most important characteristic of
a Levy distribution is its heavy power-law tail: the step-size distribution behaves as
$P(S)\sim S^{-1-\mu}$ for large $S$. When $\mu\ge 2$, the variance of the
distribution is finite and, hence, the central limit theorem holds for the
distribution of the sum of independently drawn Levy variables. Correspondingly,
random Levy matrices reduce to random Gaussian matrices. However, in the regime
$\mu<2$, the variance of the distribution diverges and hence random Levy
matrices behave qualitatively differently from random Gaussian matrices. For
example, the famous Wigner-Dyson semicircular law is replaced with a density
of states that extends over the entire eigenvalue axis~\cite{RLM}.
Inspired by the connection between vicious Gaussian walkers and random
Gaussian matrices as well as the connection between Levy flights and random
accelerating walkers, we study the problem of vicious Levy flights.
Given $N$ Levy flights in one-dimension, we define the vicious interaction
between pairs. Because Levy flights are nonlocal, two Levy flights can jump over
each other without meeting at some exact point. Hence, there are two natural
ways to define the vicious interaction: to prohibit jump-overs, or to allow
jump-overs and let annihilation occur when two flights come within some
range of each other, irrespective of the ordering. The latter has been studied recently~\cite{VLF}. However, we are more interested in the former case, in which the set of Levy flights annihilates whenever a crossing occurs. In other words, a surviving system strictly maintains the initial ordering of all flights, which is the same as for vicious Gaussian walkers.
\begin{figure}[t]
\vspace{0.5cm}
\epsfig{figure=vicious.vlf.eps,width=3.0in}
\caption{Log-log plot of the survival probability distribution for $N=3$ vicious Levy flights for different values of the Levy index, $\mu$. The curve denotes a survival probability exponent of $\alpha=\frac{3}{2}$. Inset: The survival probability exponent for $N=2$ and $N=3$ vicious Levy flights as a function of the Levy index. For $\mu>2$, we obtain the vicious random walker results. }
\end{figure}
The Fokker-Planck equation for a system of $N$ one-dimensional vicious Levy flights is
\begin{equation}
\sum_{j=1}^{N}\frac{\partial^\mu}{\partial |x_j|^\mu}p(X,t)=\frac{\partial}{\partial t}p(X,t),
\end{equation}
where the normal Laplacian is replaced by the Riesz-Feller derivative of fractional order $0<\mu<2$~\cite{Podlubny,Samko}. This derivative has an integral representation, which more easily reveals its nonlocal nature. The initial condition is still $p(X,t=0)=\prod_{j=1}^N\delta(x_j-x_{j,i})$, with $x_{j,i}<x_{k,i}$ for all $j<k$. The boundary condition for the non-crossing vicious interaction described above is then $p(X,t)=0$ if $x_j\ge x_k$ for any $j<k$.
The $N=2$ case is, again, equivalent to the first-passage problem of a single
Levy flight via a transformation to relative coordinates (and
integrating out the center of mass coordinate). The only
difference with a random walker is that the absorbing boundary condition at
the origin has to be modified to an absorbing region occupying the positive
$x$-axis to preserve the non-crossing property. The first-passage property of a
Levy flight is governed by the Sparre-Andersen
theorem~\cite{SparreAndersen,Feller,Majumdar}, which implies that the first-passage time
distribution for any symmetric step-size distribution in one-dimension
asymptotes to that of a Gaussian walker. Thus, the survival probability exponent for $N=2$ is $\alpha=1/2$, independent of $\mu$. We verify this result in our simulations; see Figure 4. Note that this result is very different from the result obtained in Ref.~\cite{VLF}, where $\alpha$ depends on $\mu$ for $N=2$ and higher.
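As an illustration of this $\mu$-independence (our own sketch; the Chambers--Mallows--Stuck sampler and the parameter values are choices of ours, not the paper's), one can simulate the relative coordinate of two vicious Levy flights, i.e. a single flight started at $x=-1$ and killed once $x\ge 0$:

```python
import numpy as np

def symmetric_stable(mu, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric Levy-stable steps;
    for mu = 1 this reduces to tan(V), i.e. Cauchy-distributed steps."""
    V = rng.uniform(-np.pi/2, np.pi/2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(mu*V) / np.cos(V)**(1.0/mu)
            * (np.cos((1.0 - mu)*V) / W)**((1.0 - mu)/mu))

def levy_alpha(mu, n_walkers=20000, n_steps=2000, seed=0):
    """Survival exponent of the relative coordinate of two vicious Levy
    flights: one flight from x = -1, absorbed on the whole positive axis."""
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, -1.0)
    alive = np.ones(n_walkers, bool)
    t1 = n_steps // 10
    s1 = None
    for t in range(1, n_steps + 1):
        x[alive] += symmetric_stable(mu, alive.sum(), rng)
        alive &= (x < 0.0)
        if t == t1:
            s1 = alive.mean()
    return np.log(s1 / alive.mean()) / np.log(n_steps / t1)

# Sparre-Andersen: the measured exponent should be near 1/2 for any mu in (0, 2)
```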
We also simulate $N=3$ vicious Levy flights. Because of the linearity of
fractional derivatives, the mapping of two vicious Levy flights to a single
Levy flight with an absorbing region holds. However, due to the lack of
rotational invariance of the Riesz-Feller derivative, the wedge mapping that
holds for vicious walkers and now for vicious accelerating walkers, does not
apply to vicious Levy flights. In order to make progress, since for $N=2$
there exists a power-law distribution, we conjecture that the survival
probability distribution scales as a power-law at long times for $N>2$ and
measure $\alpha$. Figure 4 plots the survival probability exponents for $N=3$
vicious Levy flights for several different Levy indices. For $N=2$ all values
of $\mu$ yield the same survival probability exponent of 1/2, in agreement
with the Sparre-Andersen theorem. However, the $N=3$ exponents appear to vary
with $\mu$. For instance, for $\mu=1$, $\alpha=1.31\pm0.03$. While the 0.19
difference between $\mu=1.0$ and $\mu=2$ is small, the difference grows with
$N$. For example, for $N=4$ and $\mu=1$, $\alpha=2.3\pm0.1$ and for $\mu=2$,
$\alpha=2.91\pm0.09$. Based on this data, we speculate that for $N>2$,
$\alpha$ depends on $\mu$.
A few comments on the technical aspects of the simulations are in order. First, we implement an upper cut-off on the Levy steps so that, at long enough times, the survival probability distribution approaches the random walker result~\cite{Mantegna}. The convergence also depends on the Levy index. For example, for $\mu=1$ and a step-size cut-off $S_c=10-100$, the convergence to the random walker result is fast: the asymptote to a power-law beyond $t\approx 10^2$ agrees with the random walker result to within one standard deviation. In contrast, for $\mu=1.6$ and $S_c=10^9$, convergence to the random walker result is observed only beyond $t\approx 10^8$. Second, for $\mu=1$ we also generated Cauchy-distributed numbers directly and found good agreement with the power-law-generated $\mu=1$ result.
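The truncated step generator can be sketched as follows (our own inverse-CDF construction for $P(S)\sim S^{-1-\mu}$ on $[1,S_c]$; the lower cut-off at unity is an assumption of this sketch, not stated in the text):

```python
import numpy as np

def powerlaw_steps(mu, s_cut, size, rng):
    """Symmetric steps with |S| drawn from P(S) ~ S^(-1-mu) on [1, s_cut]
    by inverting the CDF F(s) = (1 - s**-mu) / (1 - s_cut**-mu)."""
    uu = rng.uniform(0.0, 1.0, size)
    mag = (1.0 - uu*(1.0 - s_cut**(-mu)))**(-1.0/mu)
    return rng.choice([-1.0, 1.0], size) * mag

rng = np.random.default_rng(0)
steps = powerlaw_steps(1.0, 100.0, 200_000, rng)
# tail check: for mu = 1, S_c = 100, P(|S| > 10) = (1/10 - 1/100)/(1 - 1/100)
```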
\section{Conclusion}
To summarize, we have generalized the vicious walker problem
in two different ways: (1) vicious accelerating walkers and (2) vicious Levy
flights as defined by non-crossing. For both generalizations, the typical
analytical technique of the method of images fails. Analytical results for
$N=2$ are readily obtainable since both problems can be mapped to the first
passage problem of a single accelerating walker or Levy flight with the
appropriate absorbing boundary or region. We demonstrate that the $N$ vicious
walker mapping to one walker in $N-1$ dimensions in a wedge geometry
generalizes to vicious accelerating walkers. We also conjecture, based on our
numerical data, that there exists an upper bound on the survival probability
exponent of $\alpha=\frac{1}{8}N(N-1)$ for $N$ vicious accelerating
walkers. An analytical calculation for the $N=3$ case, corresponding to one
accelerating walker in a two-dimensional wedge geometry, would be the next
logical step. The heuristic argument for the absorbing accelerating walker
in one-dimension, using a new time counter and Levy flights, may eventually
become useful for analyzing the two-dimensional wedge problem. There also exists a recent numerical result for the
survival probability distribution for the two-dimensional fractional Brownian
motion process, originally introduced by Kolmogorov~\cite{Kolmogorov}, in a
wedge~\cite{Jeon}. We anticipate more study of non-Markovian processes in
dimensions higher than unity in the near future. Indeed, a non-Markovian
extension of Dyson's Brownian motion model to, for example, include inertia,
may be related to $N$ vicious accelerating walkers to arrive at a new class
of random matrices. It may also be interesting to investigate other ordering
problems of randomly accelerating walkers on a line, such as the accelerating
analogue of the ``leader'' and the ``laggard'' problem~\cite{benAvraham}.
Finally, given our numerical results, we speculate that the survival probability exponent for $N$ vicious Levy flights (as defined by non-crossing) depends on $\mu$ for fixed $N>2$. We also refer to a new result where vicious Levy flights are defined as annihilating when any two Levy flights come within some range of each other, and $\alpha$ depends on $\mu$ even for $N=2$~\cite{VLF}. While the survival probability exponent in one-dimension is independent of the Levy index, as a consequence of the powerful Sparre-Andersen theorem, we anticipate that this superuniversality may be broken in dimensions higher than unity, exposing the universality class of each Levy index. In light of our results, a higher-dimensional generalization (or modification) of the Sparre-Andersen theorem should be on the forefront of at least several statistical physicists' and mathematicians' minds.
JMS would like to acknowledge M. Jeng for performing some preliminary simulations on no-crossing vicious Levy flights, Eli Hawkins for helpful discussion, and The Aspen Center of Physics where some of this work was conducted. JMS is supported by NSF-DMR-0645373.
\section{Analysis}\label{sec:proof}
In this section we provide a proof sketch of Theorem \ref{thm:main}. The main technical tool that facilitates our analysis is the convex Gaussian min-max theorem (CGMT), which is an extension of Gordon's Gaussian min-max inequality (GMT). We introduce the necessary background on the CGMT in Section~\ref{sec:CGMT}.
\subsection{Technical tool: CGMT}\label{sec:CGMT}
\subsubsection{Gordon's Min-Max Theorem (GMT)}
Gordon's Gaussian comparison inequality \cite{Gor88} compares the min-max value of two doubly indexed Gaussian processes based on how their autocorrelation functions compare. The inequality is quite general (see \cite{Gor88}), but for our purposes we only need its application to the following two Gaussian processes:
\begin{subequations}
\begin{align}
X_{\mathbf{w},\mathbf{u}} &:= \mathbf{u}^T {G} \mathbf{w} + \psi(\mathbf{w},\mathbf{u}),\\
Y_{\mathbf{w},\mathbf{u}} &:= \norm{\mathbf{w}}_2 \mathbf{g}^T \mathbf{u} + \norm{\mathbf{u}}_2 \mathbf{h}^T \mathbf{w} + \psi(\mathbf{w},\mathbf{u}),
\end{align}
\end{subequations}
where: ${G}\in\mathbb{R}^{m\times n}$, $\mathbf{g} \in \mathbb{R}^m$, and $\mathbf{h}\in\mathbb{R}^n$ all have iid standard Gaussian entries; the sets $\mathcal{S}_{\mathbf{w}}\subset\mathbb{R}^n$ and $\mathcal{S}_{\mathbf{u}}\subset\mathbb{R}^m$ are compact; and $\psi: \mathbb{R}^n\times \mathbb{R}^m \to \mathbb{R}$. For these two processes, define the following (random) min-max optimization programs, which (following \cite{Master}) we refer to as the \emph{primary optimization} (PO) problem and the \emph{auxiliary optimization} (AO) problem -- for purposes that will soon become clear.
\begin{subequations}
\begin{align}\label{eq:PO_loc}
\Phi({G})&=\min\limits_{\mathbf{w} \in \mathcal{S}_{\mathbf{w}}} \max\limits_{\mathbf{u}\in\mathcal{S}_{\mathbf{u}}} X_{\mathbf{w},\mathbf{u}},\\
\label{eq:AO_loc}
\phi(\mathbf{g},\mathbf{h})&=\min\limits_{\mathbf{w} \in \mathcal{S}_{\mathbf{w}}} \max\limits_{\mathbf{u}\in\mathcal{S}_{\mathbf{u}}} Y_{\mathbf{w},\mathbf{u}}.
\end{align}
\end{subequations}
According to Gordon's comparison inequality, for any $c\in\mathbb{R}$, it holds:
\begin{equation}\label{eq:gmt}
\mathbb{P}\left( \Phi({G}) < c\right) \leq 2 \mathbb{P}\left( \phi(\mathbf{g},\mathbf{h}) < c \right).
\end{equation}
In other words, a high-probability lower bound on the AO is a high-probability lower bound on the PO. The premise is that it is often much simpler to lower bound the AO rather than the PO. To be precise, \eqref{eq:gmt} is a slight reformulation of Gordon's original result proved in \cite{COLT} (see therein for details).
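As a sanity check on \eqref{eq:gmt} (a toy numerical illustration of ours, not part of the analysis), take $\psi\equiv 0$ and unit spheres for $\mathcal{S}_{\mathbf{w}}$ and $\mathcal{S}_{\mathbf{u}}$; then $\Phi(G)=\sigma_{\min}(G)$ while $\phi(\mathbf{g},\mathbf{h})=\|\mathbf{g}\|_2-\|\mathbf{h}\|_2$, and the inequality can be probed by Monte Carlo:

```python
import numpy as np

def gmt_check(m=50, n=20, trials=2000, c=2.0, seed=0):
    """Empirical check of P(Phi(G) < c) <= 2 P(phi(g,h) < c) in the psi = 0,
    unit-sphere case: Phi(G) = sigma_min(G), phi(g,h) = ||g|| - ||h||."""
    rng = np.random.default_rng(seed)
    po = np.empty(trials)
    ao = np.empty(trials)
    for i in range(trials):
        G = rng.standard_normal((m, n))
        po[i] = np.linalg.svd(G, compute_uv=False)[-1]   # sigma_min(G)
        ao[i] = (np.linalg.norm(rng.standard_normal(m))
                 - np.linalg.norm(rng.standard_normal(n)))
    return (po < c).mean(), 2.0*(ao < c).mean()

lhs, rhs = gmt_check()
# Gordon's inequality guarantees lhs <= rhs; both quantities concentrate
# near sqrt(m) - sqrt(n), so the margin at c = 2.0 is comfortable
```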
\subsubsection{Convex Gaussian Min-Max Theorem (CGMT)}
The proof of Theorem \ref{thm:main} builds on the CGMT \cite{COLT}.
For ease of reference we summarize here the essential ideas of the framework following the presentation in \cite{Master}; please see \cite[Section~6]{Master} for the formal statement of the theorem and further details.
The CGMT is an extension of the GMT and it asserts that the AO in \eqref{eq:AO_loc} can be used to tightly infer properties of the original (PO) in \eqref{eq:PO_loc}, including the optimal cost and the optimal solution.
According to the CGMT \cite[Theorem 6.1]{Master}, if the sets $\mathcal{S}_{\mathbf{w}}$ and $\mathcal{S}_{\mathbf{u}}$ are convex and $\psi$ is continuous \emph{convex-concave} on $\mathcal{S}_{\mathbf{w}}\times \mathcal{S}_{\mathbf{u}}$, then, for any $\nu \in \mathbb{R}$ and $t>0$, it holds
\begin{equation}\label{eq:cgmt}
\mathbb{P}\left( \abs{\Phi({G})-\nu} > t\right) \leq 2 \mathbb{P}\left( \abs{\phi(\mathbf{g},\mathbf{h})-\nu} > t \right).
\end{equation}
In words, concentration of the optimal cost of the AO problem around $\nu$ implies concentration of the optimal cost of the corresponding PO problem around the same value $\nu$. Moreover, starting from \eqref{eq:cgmt} and under strict convexity conditions, the CGMT shows that concentration of the optimal solution of the AO problem implies concentration of the optimal solution of the PO problem around the same value. For example, if the minimizers of \eqref{eq:AO_loc} satisfy $\norm{\mathbf{w}^\ast(\mathbf{g},\mathbf{h})}_2 \to \zeta^\ast$ for some $\zeta^\ast>0$, then the same holds true for the minimizers of \eqref{eq:PO_loc}: $\norm{\mathbf{w}^\ast({G})}_2 \to \zeta^\ast$ \cite[Theorem 6.1(iii)]{Master}. Thus, one can analyze the AO to infer corresponding properties of the PO, the premise being of course that the former is simpler to handle than the latter.
\subsection{Applying the CGMT to \eqref{eq:opt_reg}} \label{sec:mainproof}
In this section, we show how to apply the CGMT to \eqref{eq:opt_reg}. For convenience, we drop the subscript $\ell$ from $\widehat{\mathbf{x}}_\ell$ and simply write
\begin{equation} \label{eq:opt}
\widehat{\mathbf{x}} = \arg\min_{\mathbf{x}} \frac{1}{m} \sum_{i=1}^{m} \ell(y_i \mathbf{a}_i^T \mathbf{x}) + r\left\lVert\mathbf{x}\right\rVert_2^2,
\end{equation}
where the measurements $y_i,~i\in[m]$ follow \eqref{eq:gen_model}. By rotational invariance of the Gaussian distribution of the measurement vectors $\mathbf{a}_i,~i\in[m]$, we assume without loss of generality that $\mathbf{x}_0 = [1,0,...,0]^T$. Denoting $y_i\mathbf{a}_i^T\mathbf{x}$ by $u_i$, \eqref{eq:opt} is equivalent to the following min-max optimization:
\begin{dmath}\label{eq:mmbn}
\min_{\mathbf{u},\mathbf{x}} \max_{\pmb{\beta}} \frac{1}{m} \sum_{i=1}^{m}\ell(u_i) + \frac{1}{m}\sum_{i=1}^{m}\beta_i u_i \\
- \frac{1}{m}\sum_{i=1}^{m}\beta_i y_i \mathbf{a}_i^T \mathbf{x} + r\left\lVert\mathbf{x}\right\rVert_2^2.
\end{dmath}
Now, let us define
$$\mathbf{a}_i=[s_i;\tilde{\mathbf{a}}_i],~i\in[m]\quad\text{ and }\quad \mathbf{x}=[x_1;\tilde{\mathbf{x}}],$$ such that $s_i$ and $x_1$ are the first entries of $\mathbf{a}_i$ and $\mathbf{x}$, respectively. Note that in this new notation \eqref{eq:gen_label} becomes:
\begin{align}\label{eq:y_pf}
y_i = \begin{cases} 1 &, \text{w.p.}~~ f(s_i), \\ -1 &, \text{w.p.}~~1-f(s_i), \end{cases}
\end{align}
and
\begin{align}\label{eq:corr_pf}
\corr{\widehat{\mathbf{x}}}{\mathbf{x}_0} = \frac{\widehat{\mathbf{x}}_1}{\sqrt{\widehat{\mathbf{x}}_1^2+\|\widetilde{\widehat{\mathbf{x}}}\|_2^2}},
\end{align}
where we denote $\widehat{\mathbf{x}}=[\widehat{\mathbf{x}}_1;{\widetilde{\widehat{\mathbf{x}}}}]$.
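For intuition, the quantities above can be reproduced numerically. The sketch below (our own toy experiment: the logistic loss and the label function $f(s)=1/(1+e^{-2s})$ are illustrative assumptions, since the text leaves $\ell$ and $f$ general) fits \eqref{eq:opt} by gradient descent and evaluates the correlation in \eqref{eq:corr_pf}:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

def fit_regularized(A, y, r=0.05, lr=0.5, iters=1500):
    """Gradient descent on (1/m) sum_i log(1 + exp(-y_i a_i^T x)) + r*||x||^2,
    a concrete instance of the regularized ERM in the text."""
    m, n = A.shape
    xh = np.zeros(n)
    for _ in range(iters):
        z = y * (A @ xh)
        grad = -(A * (y * sigmoid(-z))[:, None]).mean(axis=0) + 2.0*r*xh
        xh -= lr * grad
    return xh

rng = np.random.default_rng(0)
m, n = 400, 100
x0 = np.zeros(n); x0[0] = 1.0                    # wlog x0 = e_1, as in the text
A = rng.standard_normal((m, n))
y = np.where(rng.uniform(size=m) < sigmoid(2.0*(A @ x0)), 1.0, -1.0)
xh = fit_regularized(A, y)
rho = (xh @ x0) / (np.linalg.norm(xh) * np.linalg.norm(x0))   # corr(xhat, x0)
```

The asymptotic value that $\rho$ concentrates on, as $m,n\to\infty$ proportionally, is exactly what the CGMT analysis predicts.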
Also, since $-\tilde{\mathbf{a}}_i$ has the same distribution as $\tilde{\mathbf{a}}_i$ and is independent of $(s_i,y_i)$, \eqref{eq:mmbn} is equivalently written as
\begin{dmath*}
\min_{\mathbf{u},\mathbf{x}}\max_{\pmb{\beta}} \frac{1}{m}\sum_{i=1}^{m} \ell(u_i) + \frac{1}{m}\sum_{i=1}^{m} \beta_i u_i+ \frac{1}{m}\sum_{i=1}^{m} \beta_i y_i \tilde{\mathbf{a}}_i^T \tilde{\mathbf{x}} - \frac{1}{m}\sum_{i=1}^{m} \beta_i y_i s_i x_1+r x_1^2 + r\left\lVert\tilde{\mathbf{x}}\right\rVert_2^2
\end{dmath*}
or, in matrix form:
\begin{dmath}\label{eq:normopt_PO}
\min_{\mathbf{u},\mathbf{x}}\max_{\pmb{\beta}}~ \frac{1}{m}\boldsymbol{\beta}^T\mathbf{D}_y\tilde{\mathbf{A}}\tilde{\mathbf{x}} - \frac{1}{m}x_1\pmb{\beta}^T\mathbf{D}_y\mathbf{s}+ \frac{1}{m}\pmb{\beta}^T\mathbf{u} + r x_1^2 + r\left\lVert\tilde{\mathbf{x}}\right\rVert_2^2+\frac{1}{m}\sum_{i=1}^m \ell (u_i),
\end{dmath}
where $\mathbf{D}_\mathbf{y} := {\rm{diag}}(y_1,y_2,...,y_m)$ is a diagonal matrix with $y_1,y_2,...,y_m$ on the diagonal, $\mathbf{s}=[s_1,\ldots,s_m]^T$, and $\tilde{\mathbf{A}}$ is an $m\times (n-1)$ matrix with rows $\tilde{\mathbf{a}}_i^T,~i\in[m]$.
In \eqref{eq:normopt_PO}, we recognize that the first term has the bilinear form required by the GMT in \eqref{eq:PO_loc}. The remaining terms form the function $\psi$ in \eqref{eq:PO_loc}: they are independent of $\tilde{\mathbf{A}}$ and convex-concave, as required by the CGMT. Therefore, we have expressed \eqref{eq:opt} in the desired form of a PO and, for the rest of the proof, we will analyze the probabilistically equivalent AO problem. In view of \eqref{eq:AO_loc}, this is given as follows:
\begin{dmath}\label{eq:normopt}
\min_{\mathbf{u},\mathbf{x}}\max_{\pmb{\beta}} \frac{1}{m} \left\lVert\tilde{\mathbf{x}}\right\rVert_2 \mathbf{g}^T \mathbf{D}_y \pmb{\beta} + \frac{1}{m} \left\lVert \mathbf{D}_y\pmb{\beta}\right\rVert_2\mathbf{h}^T\tilde{\mathbf{x}} - \frac{1}{m}x_1\pmb{\beta}^T\mathbf{D}_y\mathbf{s}+ \frac{1}{m}\pmb{\beta}^T\mathbf{u} +r x_1^2 + r\left\lVert\tilde{\mathbf{x}}\right\rVert_2^2+ \frac{1}{m}\sum_{i=1}^m \ell (u_i) ,
\end{dmath}
where as in \eqref{eq:AO_loc} $\mathbf{g}\sim\mathcal{N}(0,I_m)$ and $\mathbf{h}\sim\mathcal{N}(0,I_{n-1})$.
\subsection{Analysis of the Auxiliary Optimization}
Here, we show how to analyze the AO in \eqref{eq:normopt}.\footnote{There are several technical details in the proof that we omit from this proof sketch. These include: boundedness of the constraint sets in \eqref{eq:normopt}; changing the order of min-max when optimizing over the direction of $\tilde{\mathbf{x}}$ in \eqref{eq:alpha_pf}; and, uniform convergence in going from \eqref{eq:binfty} to \eqref{eq:det}.} The steps follow the recipe prescribed in \cite{COLT,Master}. To begin with, note that $y_i\in\{\pm1\}$; therefore, $\mathbf{D}_\mathbf{y}\mathbf{g} \sim\mathcal{N}(0,I_m)$ and $\left\lVert\mathbf{D}_\mathbf{y}\pmb{\beta}\right\rVert_2 = \left\lVert\pmb{\beta}\right\rVert_2$. Let us denote
$$\alpha:=\left\lVert\tilde{\mathbf{x}}\right\rVert_2\quad\text{ and }\quad\mu:=x_1.$$ We can simplify \eqref{eq:normopt} by optimizing over the direction of $\tilde{\mathbf{x}}$. This leads to the following
\begin{dmath}\label{eq:alpha_pf}
\min_{\alpha\ge0,\mu,\mathbf{u}}~\max_{\pmb{\beta}} ~ \frac{1}{m}\alpha\mathbf{g}^T\pmb{\beta} - \frac{\alpha}{m}\left\lVert\pmb{\beta}\right\rVert_2\left\lVert\mathbf{h}\right\rVert_2 - \frac{1}{m}\mu \mathbf{s}^T\mathbf{D}_{\mathbf{y}} \pmb{\beta} + \frac{1}{m}\pmb{\beta}^T \mathbf{u} +r\mu^2 + r\alpha^2 + \frac{1}{m} \sum_{i=1}^m\ell(u_i).
\end{dmath}
Next, let $\gamma := \frac{\left\lVert\pmb{\beta}\right\rVert_2}{\sqrt{m}}$ and optimize over the direction of $\boldsymbol{\beta}$ to yield
\begin{dmath}\label{eq:gamma}
\min_{\alpha\ge0,\mathbf{u},\mu}~\max_{\gamma\ge0}~\frac{\gamma}{\sqrt{m}}\left\lVert \alpha\mathbf{g}-\mu\mathbf{D}_\mathbf{y}\mathbf{s}+\mathbf{u}\right\rVert_2 - \frac{\alpha}{\sqrt{m}}\gamma\left\lVert\mathbf{h}\right\rVert_2 +r\mu^2 + r\alpha^2 + \frac{1}{m}\sum_{i=1}^m\ell(u_i).
\end{dmath}
To continue, we utilize the fact that, for all $x\ge 0$,
$\min_{\tau>0}\frac{\tau}{2} + \frac{x^2}{2\tau m} = \frac{x}{\sqrt{m}}$, with the minimum attained at $\tau^\ast=x/\sqrt{m}$.
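The identity is easy to confirm numerically (a standalone sketch; the variable names are ours):

```python
import numpy as np

# Check min_{tau>0} tau/2 + x^2/(2*tau*m) = x/sqrt(m) for x >= 0;
# the minimizer is tau* = x/sqrt(m).
x, m = 3.7, 100
taus = np.linspace(1e-3, 5.0, 200000)
values = taus / 2 + x**2 / (2 * taus * m)
assert abs(values.min() - x / np.sqrt(m)) < 1e-6
assert abs(taus[values.argmin()] - x / np.sqrt(m)) < 1e-3
```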
With this trick, the optimization over $\mathbf{u}$ becomes separable over its coordinates and \eqref{eq:gamma} can be rewritten as
\begin{dmath*}
\min_{\alpha\ge0,\tau>0,\mathbf{u},\mu}\max_{\gamma\ge0}\frac{\gamma\tau}{2} + \frac{\gamma}{2\tau m}{\left\lVert \alpha\mathbf{g}-\mu\mathbf{D}_\mathbf{y}\mathbf{s}+\mathbf{u} \right\rVert}_2 ^2+r\mu^2 + r\alpha^2 + \frac{1}{m}\sum_{i=1}^m\ell(u_i) - \frac{\alpha}{\sqrt{m}}\gamma\left\lVert\mathbf{h}\right\rVert_2.
\end{dmath*}
Equivalently, we express this in the following convenient form:
\begin{dmath} \label{eq:binfty}
\min_{\mu,\alpha\ge0,\tau>0}~\max_{\gamma\ge0}~\frac{\gamma \tau}{2}- \frac{\alpha}{\sqrt{m}}\gamma\left\lVert\mathbf{h}\right\rVert_2 +r\mu^2 + r\alpha^2 + \frac{1}{m}\sum_{i=1}^m \env{\ell}{\alpha g_i+\mu y_i s_i}{\frac{\tau}{\gamma}},
\end{dmath}
where recall the definition of the Moreau envelope:
$$
\env{\ell}{\alpha g_i+\mu y_is_i}{\frac{\tau}{\gamma}} = \min_{u_i}\frac{\gamma}{2\tau}(\alpha g_i + \mu y_i s_i-u_i)^2 + \ell(u_i).
$$
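For intuition, the Moreau envelope is straightforward to evaluate numerically for any convex loss. The sketch below (an illustrative helper of our own, not part of the formal argument) compares a brute-force evaluation against the closed form $\env{\ell}{x}{\la}=(x-1)^2/(1+2\la)$, valid for the squared loss $\ell(t)=(t-1)^2$:

```python
import numpy as np

# Numerically evaluate the Moreau envelope
#   env(x; lam) = min_u (x - u)^2 / (2*lam) + loss(u)
# by brute-force grid search (illustrative helper, not from the paper).
def moreau_env(loss, x, lam, grid=np.linspace(-10, 10, 200001)):
    return np.min((x - grid)**2 / (2 * lam) + loss(grid))

sq_loss = lambda t: (t - 1)**2
x, lam = 2.5, 0.8
closed = (x - 1)**2 / (1 + 2 * lam)   # closed form for the squared loss
assert abs(moreau_env(sq_loss, x, lam) - closed) < 1e-6
```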
Up to now, we have reduced the AO to a random min-max optimization over only four scalar variables in \eqref{eq:binfty}. For fixed $\mu,\alpha,\tau,\gamma$, a direct application of the weak law of large numbers shows that the objective function of \eqref{eq:binfty} converges in probability to the following as $m,n\rightarrow\infty$ with $\frac{m}{n}\rightarrow\delta$:
$$
\gamma\frac{\tau}{2}-\frac{\alpha\gamma}{\sqrt{\delta}}+r\mu^2 + r\alpha^2 + \mathbb{E}\left[\env{\ell}{\alpha G+\mu YS}{\frac{\tau}{\gamma}} \right],
$$
where $G,S\sim\mathcal{N}(0,1)$ and $Y\sim 2\mathrm{Bernoulli}(f(S))-1$ (in view of \eqref{eq:y_pf}).
Based on that, it can be shown (see \cite{Master,PhaseLamp} for similar arguments; details are deferred to the long version of the paper) that the random optimizers $\alpha_n$ and $\mu_n$ of \eqref{eq:binfty} converge to the deterministic optimizers $\alpha$ and $\mu$ of the following (deterministic) optimization problem (whenever these are bounded as the statement of the theorem requires):
\begin{dmath}\label{eq:det}
\min_{\alpha\ge0,\mu,\tau>0}\max_{\gamma\ge0} \gamma\frac{\tau}{2}-\frac{\alpha\gamma}{\sqrt{\delta}}+r\mu^2 + r\alpha^2 + \mathbb{E}\left[\env{\ell}{\alpha G+\mu YS}{\frac{\tau}{\gamma}} \right].
\end{dmath}
At this point, recall that $\alpha$ represents the norm of $\tilde\mathbf{x}$ and $\mu$ the value of $x_1$. Thus, in view of (i) \eqref{eq:corr_pf}, (ii) the equivalence between the PO and the AO, and, (iii) our derivations thus far we have that
\begin{align}
\lim_{n\rightarrow+\infty}\corr{\widehat{\mathbf{x}}}{\mathbf{x}_0} = \frac{\mu}{\sqrt{\mu^2+\alpha^2}},\notag
\end{align}
where $\mu$ and $\alpha$ are the minimizers in \eqref{eq:det}. The three equations in \eqref{eq:eq_main} are derived by the first-order optimality conditions of the optimization in \eqref{eq:det}. We show this next.
\subsection{First-order optimality conditions}
By direct differentiation, the first-order optimality conditions of the min-max optimization in \eqref{eq:det} are as follows:
\begin {equation}\label{eq:1}
2r\mu+\mathbb{E}\left[YS\envdx{\ell}{\alpha G + \mu S Y}{\frac{\tau}{\gamma}}\right] = 0,
\end{equation}
\begin{equation}\label{eq:2}
2 r \alpha +\mathbb{E}\left[G \envdx{\ell}{\alpha G + \mu S Y}{\frac{\tau}{\gamma}} \right]=\frac{\gamma}{\sqrt{\delta}},
\end{equation}
\begin{equation}\label{eq:3}
\frac{\gamma}{2} + \frac{1}{\gamma}\mathbb{E}\left[\envdla{\ell}{\alpha G + \mu S Y}{\frac{\tau}{\gamma}}\right]=0,
\end{equation}
\begin{equation}\label{eq:4}
-\frac{\alpha}{\sqrt{\delta}}-\frac{\tau}{\gamma^2}\mathbb{E}\left[\envdla{\ell}{\alpha G + \mu S Y}{\frac{\tau}{\gamma}}\right]+\frac{\tau}{2} = 0.
\end{equation}
Next, we show how these equations simplify to the following system of equations:
\begin{subequations}\label{eq:reg_main}
\begin{align}
\Exp\left[Y\, S \cdot\envdx{\ell}{\alpha G + \mu S Y}{\la} \right]&=-2r\mu , \label{eq:mureg_main}\\
{\la^2}\,{\delta}\,\Exp\left[\,\left(\envdx{\ell}{\alpha G + \mu S Y}{\la}\right)^2\,\right]&=\alpha^2 ,
\label{eq:alphareg_main}\\
\lambda\,\delta\,\E\left[ G\cdot \envdx{\ell}{\alpha G + \mu S Y}{\la} \right]&=\alpha(1-2r\la\delta) .
\label{eq:lambdareg_main}
\end{align}
\end{subequations}
Denote $\lambda := \frac{\tau}{\gamma}$.
First, \eqref{eq:mureg_main} follows directly from equation \eqref{eq:1}.
Second, substituting $\gamma$ from \eqref{eq:3} in \eqref{eq:4} yields $\tau=\frac{\alpha}{\sqrt{\delta}}$ or $\gamma=\frac{\alpha}{\lambda \sqrt{\delta}}$, which together with \eqref{eq:2} leads to \eqref{eq:lambdareg_main}. Finally, \eqref{eq:alphareg_main} can be obtained by substituting $\gamma = \frac{\alpha}{\lambda\sqrt{\delta}}$ in \eqref{eq:3} and using the relation:
\begin{equation*}
\envdla{\ell}{\alpha G + \mu S Y}{\lambda} = -\frac{1}{2}(\envdx{\ell}{\alpha G + \mu S Y}{\lambda})^2.
\end{equation*}
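This envelope identity can be verified by finite differences; the sketch below does so on the squared-loss envelope, whose closed form $(x-1)^2/(1+2\lambda)$ we assume (an illustrative check, not a general proof):

```python
# Finite-difference check of  d/dlam env = -(1/2) * (d/dx env)^2
# for the squared-loss envelope env(x, lam) = (x - 1)^2 / (1 + 2*lam).
env = lambda x, lam: (x - 1)**2 / (1 + 2 * lam)

x, lam, h = 2.2, 0.6, 1e-6
d_dx = (env(x + h, lam) - env(x - h, lam)) / (2 * h)
d_dlam = (env(x, lam + h) - env(x, lam - h)) / (2 * h)
assert abs(d_dlam + 0.5 * d_dx**2) < 1e-6
```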
As already mentioned in Section \ref{sec:form}, we focus on non-regularized loss functions in this paper. Setting $r=0$ in \eqref{eq:reg_main} gives the desired system of equations in \eqref{eq:eq_main}.
\section{Useful properties of Prox}
\begin{remark}[Why $\delta>1$] The theorem assumes that $\delta>1$ or equivalently that $m>n$. Here, we show that this condition is \emph{necessary} for the equations \eqref{eq:eq_main} to have a bounded solution, or equivalently for the minimizer $\widehat{\mathbf{x}}_\ell$ in \eqref{eq:gen_opt} to be bounded.
To see this, square both sides of \eqref{eq:lambda_main} and divide by \eqref{eq:alpha_main} to find that
$$
\delta = \frac{\Exp\left[\,\left(\envdx{\ell}{\alpha G + \mu S Y}{\la}\right)^2\,\right]}{\left(\E\left[ G\cdot \envdx{\ell}{\alpha G + \mu S Y}{\la} \right]\right)^2} \geq 1.
$$
The inequality follows by applying the Cauchy-Schwarz inequality and using the fact that $\E[G^2]=1$.
\end{remark}
\subsection{Proof of \eqref{eq:Stam}} \label{sec:upperboundstam}\begin{proof}
Let $Z = \sigma_{\ell}G + |S|$. Moreover, let $p(x)$, $q(y)$ and $r(z)$ denote the probability densities of $\sigma_{\ell}G$, $|S|$ and $Z$, respectively. Then:
\begin{align}
r'(z) = \int_{-\infty}^{\infty} p'(x)q(z-x)dx.\nonumber
\end{align}
Thus
\begin{align}
\frac{r'(z)}{r(z)} &= \int_{-\infty}^{\infty} \frac{p(x)q(z-x)}{r(z)}\frac{p'(x)}{p(x)}dx \nonumber \\
&=\mathbb{E}\left[\frac{p'(x)}{p(x)} \Bigg| z\right].\nonumber
\end{align}
Similarly,
\begin{align}
\frac{r'(z)}{r(z)} = \mathbb{E}\left[\frac{q'(|y|)}{q(|y|)} \Bigg | z\right].\nonumber
\end{align}
Therefore, for any constants $a$ and $b$,
\begin{align}
\mathbb{E}\left[a\frac{p'(x)}{p(x)} + b\frac{q'(|y|)}{q(|y|)}\Bigg | z\right ] = (a+b)\frac{r'(z)}{r(z)}.\nonumber
\end{align}
Subsequently,
\begin{align} \label{eq:secondmoment}
(a+b)^2 \left(\frac{r'(z)}{r(z)}\right)^2 &= \left(\mathbb{E}\left[a\frac{p'(x)}{p(x)} + b\frac{q'(|y|)}{q(|y|)}\Bigg | z\right ] \right)^2 \nonumber \\
& \le \mathbb{E}\left[\left(a\frac{p'(x)}{p(x)} + b\frac{q'(|y|)}{q(|y|)}\right)^2\Bigg | z\right ]
\end{align}
Taking expectations with respect to $z$ on both sides of \eqref{eq:secondmoment} gives:
\begin{align}
(a+b)^2 &\mathbb{E}\left[\left(\frac{r'(z)}{r(z)}\right)^2\right] \le \mathbb{E}\left[\left(a\frac{p'(x)}{p(x)} + b\frac{q'(|y|)}{q(|y|)}\right)^2\right ] \nonumber \\
&=a^2\mathbb{E}\left[\left(\frac{p'(x)}{p(x)}\right)^2\right] + b^2\mathbb{E}\left[\left(\frac{q'(|y|)}{q(|y|)}\right)^2\right],\label{eq:fi}
\end{align}
where the last line follows from $\mathbb{E}\left[\frac{p'(x)}{p(x)}\right]=0$ and
the independence of the random variables $G$ and $S$.\\
Recalling the definition of Fisher information, the following inequality is derived from \eqref{eq:fi}:
\begin{align}
(a+b)^2 \mathcal{I}(Z) &\le a^2 \mathcal{I}(\sigma_{\ell}G) + 2b^2\mathcal{I}(|S|)\nonumber\\
&= \frac{a^2}{\sigma_{\ell}^2}+2b^2 \label{eq:beforeopt}
\end{align}
Setting $a=\sigma_{\ell}^2$ and $b=\frac{1}{2}$ in \eqref{eq:beforeopt} results in:
\begin{align}
\mathcal{I}(Z) \le \frac{2}{1+2\sigma_{\ell}^2},\nonumber
\end{align}
which is the claimed bound \eqref{eq:Stam}.
\end{proof}
\begin{remark}[Upper bound on correlation]
We use the result of Theorem \ref{thm:main} to derive an upper bound on the correlation value $\corr{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0}$ that holds for all choices of continuously differentiable loss functions. For simplicity, we focus on the noiseless case (i.e., $\varepsilon=0$ in \eqref{eq:gen_model}). An interesting direction for future work is to study whether the bound of the lemma can be attained by a certain loss function.
Thus, we see that Theorem \ref{thm:main} can be useful for the design of optimal loss functions in \eqref{eq:gen_opt}.
\begin{lem}[Upper bound] Let $\ell$ be continuously differentiable and let the assumptions and notation of Theorem \ref{thm:main} hold. Fix $\varepsilon=0$ in \eqref{eq:gen_model}. Then,
$$
\lim_{n\rightarrow+\infty}\,\corr{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0}\,\leq\, \frac{1}{\sqrt{1+\frac{1}{2(\delta-1)}}}.
$$
\end{lem}
\begin{proof} In view of \eqref{eq:corr_thm}, it suffices to lower bound the ratio $\sigma_\ell:=\alpha/\mu$.
First, applying Gaussian integration by parts in \eqref{eq:lambda_main} we find that $$1=\lambda\, \delta\,\E[\envddx{\ell}{\alpha G + \mu S Y}{\la}],$$
where $\envddx{\ell}{x}{\la}$ denotes the second derivative of the Moreau-envelope function with respect to the first argument (which exists since $\ell$ is differentiable). Solving for $\lambda$ and substituting in \eqref{eq:alpha_main} gives
\begin{align}\label{eq:lower}
\alpha^2 = \frac{1}{\delta}\cdot\frac{ \E\left[\left(\envdx{\ell}{\alpha G + \mu S Y}{\la}\right)^2\right] }{\left(\,\E[\envddx{\ell}{\alpha G + \mu S Y}{\la}]\,\right)^2}.
\end{align}
The rest of the proof follows steps similar to \cite[Lem.~3.4]{MontanariHigh} and \cite[Rem.~5.3.3]{Master}.
Call $$Z:=\mu S Y=\mu |S|\quad\text{and}\quad H:=\alpha G + Z.$$ Also, for a random variable $W$ with density $p_W(w)$ that has a derivative $p_W^{'}(w)$ for \emph{every} $w$, denote its score function $\xi_W(w):=\frac{\partial}{\partial w}{\log p_W(w)}=\frac{p_W^{'}(w)}{p_W(w)}$ and consider the Fisher information (for the location family) of $W$ (e.g., \cite[Sec.~2]{barron1984monotonic})
$$
\mathcal{I}(W):=\Exp[\,(\xi_W)^2\,].
$$
Note that for $\alpha\neq 0$, $H$ has a density $p_H$ that is continuously differentiable at every $h$; in fact,
$$
p_H^{'}(h) = \int \phi_{\alpha}^{'}(u) p_{Z}(h-u)\, \mathrm{d}u,
$$
where $\phi_{\alpha}(u)=\frac{1}{\alpha\sqrt{2\pi}}e^{-\frac{u^2}{2\alpha^2}}$.
With this notation, the Cauchy-Schwarz inequality gives:
\begin{align*}
\E\left[\left(\envdx{\ell}{H}{\la}\right)^2\right]\cdot \mathcal{I}(H)
&\geq\left(\E[\envdx{\ell}{H}{\la}\cdot \xi_H]\right)^2\\
&=\left(\E[\envddx{\ell}{H}{\la}]\right)^2,
\end{align*}
where the last line follows by integration by parts.
This, when combined with \eqref{eq:lower} shows that:
\begin{align}\label{eq:lower2}
\alpha^2 \geq \frac{1}{\delta}\cdot \frac{1}{\mathcal{I}(H)}.
\end{align}
Recall that $H=\alpha G + \mu |S|$ and $G\sim\mathcal{N}(0,1)$. Now, since (e.g., \cite[Eqn.~2.13]{barron1984monotonic})
$$
\mathcal{I}(c\cdot W) = \mathcal{I}(W)/c^2,
$$
we have that
$$
\mathcal{I}(H) = \mathcal{I}\left(\mu \,\Big(\frac{\alpha}{\mu} G + |S| \Big)\right) = \frac{1}{\mu^2}\mathcal{I}\big(\sigma_\ell\, G + |S|\big),
$$
where recall that
$$
\sigma_\ell := \alpha\big/\mu.
$$
Substituting this in \eqref{eq:lower2}, we find that
\begin{align}\label{eq:lower3}
\sigma_\ell^2 \geq \frac{1}{\delta}\cdot \frac{1}{\mathcal{I}\big(\sigma_\ell\, G + |S|\big)}.
\end{align}
Next we use the following upper bound (refer to Appendix \ref{sec:upperboundstam} for the proof)
\begin{align}\label{eq:Stam}
\mathcal{I}(\sigma_\ell \,G+|S|)\leq \frac{2}{1+2\sigma_\ell^2}.
\end{align}
Combining \eqref{eq:lower3} and \eqref{eq:Stam} gives:
\begin{align}\label{eq:lower4}
\sigma_\ell^2 \geq \frac{1}{2} \frac{1}{\delta-1}.
\end{align}
The desired inequality follows directly from \eqref{eq:lower4}.
\end{proof}
\end{remark}
\section{Special cases}\label{sec:cases}
In this section, we apply the general result of Theorem \ref{thm:main} to specific popular choices of the loss function.
\subsection{Least-squares}\label{sec:LS}
By choosing $\ell(t)=(t-1)^2$ in \eqref{eq:gen_opt}, we obtain the standard least squares estimate. To see this, note that since $y_i=\pm 1$, it holds for all $i$ that
$
(y_i\mathbf{a}_i^T\mathbf{x}-1)^2 = (y_i-\mathbf{a}_i^T\mathbf{x})^2.
$
Thus, the estimator $\widehat{\mathbf{x}}$ minimizes the sum of squares of the residuals:
\begin{align}\label{eq:LS}
\widehat{\mathbf{x}}=\arg\min_\mathbf{x} \sum(y_i-\mathbf{a}_i^T\mathbf{x})^2.
\end{align}
For the choice $\ell(t)=(t-1)^2$, it turns out that we can solve the equations in \eqref{eq:eq_main} in closed form. The final result is summarized in the corollary below.
\begin{cor}[Least-squares]\label{cor:LS}
Let Assumption \ref{ass:Gaussian} hold and $\delta>1$. Let $\widehat{\mathbf{x}}$ be as in \eqref{eq:LS}. Then, in the limit of $m,n\rightarrow+\infty$, $m/n\rightarrow\delta$, Equations \eqref{eq:corr_thm} and \eqref{eq:norm_thm} hold with probability one with $\alpha$ and $\mu$ given as follows:
\begin{align}
\mu &= (1-2\epsilon)\sqrt{\frac{2}{\pi}}, \label{eq:mu_LS} \\
\alpha^2 &=\left(1-(1-2\epsilon)^2\,\frac{2}{\pi}\right)\frac{1}{\delta-1}.\label{eq:alpha_LS}
\end{align}
\end{cor}
\begin{proof} In order to get the values of $\alpha$ and $\mu$ as in the statement of the corollary, we show how to simplify Equations \eqref{eq:eq_main} for $\ell(t)=(t-1)^2$. In this case, the proximal operator admits a simple expression:
\begin{equation}
\prox{\ell}{x}{\la} = ({x+2\la})\Big/({1+2\la}).\notag
\end{equation}
Also, $\ell^{'}(t)=2(t-1)$.
Substituting these in \eqref{eq:mu_main2} gives the formula for $\mu$ as follows:
\begin{align}\notag
0 &= \E\left[YS(\alpha G + \mu SY - 1)\right] = \mu\, \E[S^2] - \E[YS]\\
&\qquad\qquad\qquad \Longrightarrow
\mu = \sqrt{\frac{2}{\pi}}(1-2\epsilon), \notag
\end{align}
where we have also used from \eqref{eq:GSY} that $\E[S^2]=1$, $\E[YS]=(1-2\varepsilon)\sqrt{\frac{2}{\pi}}$ and $G$ is independent of $S$.
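These moments are easy to confirm by simulation; below is a quick sketch of our own, with $Y$ generated by flipping $\mathrm{sign}(S)$ with probability $\varepsilon$ and illustrative parameter choices:

```python
import numpy as np

# Monte Carlo check of E[S^2] = 1 and E[Y*S] = (1 - 2*eps) * sqrt(2/pi),
# where Y = sign(S) flipped with probability eps.
rng = np.random.default_rng(1)
eps, N = 0.1, 2_000_000
S = rng.standard_normal(N)
flips = np.where(rng.random(N) < eps, -1.0, 1.0)
Y = np.sign(S) * flips
assert abs(np.mean(S**2) - 1.0) < 0.01
assert abs(np.mean(Y * S) - (1 - 2 * eps) * np.sqrt(2 / np.pi)) < 0.01
```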
Also, since $\ell^{''}(t)=2$, direct application of \eqref{eq:lambda_main3} gives
\begin{align}
1 = \lambda\delta\,\frac{2}{1+2\la}\Longrightarrow \la = \frac{1}{2(\delta-1)}.\notag
\end{align}
Finally, substituting the value of $\lambda$ in \eqref{eq:alpha_main2}, we obtain the desired value of $\alpha$ as follows:
\begin{align*}
\alpha^2 &= 4 \lambda^2 \delta\,\mathbb{E}\left[(\prox{\ell}{\alpha G + \mu S Y}{\la}-1)^2\right] \\
&= \frac{4\lambda^2}{(1+2\la)^2}\delta\,\mathbb{E}\left[(\alpha G+\mu S Y -1)^2 \right] \\
&=\frac{4\la^2\delta}{(1+2\la)^2}(\alpha^2+1 -\frac{2}{\pi}(1-2\epsilon)^2)\\
&=\frac{1}{\delta}(\alpha^2+1-\frac{2}{\pi}(1-2\epsilon)^2)\label{eq:alpha}\quad \Longrightarrow \,\eqref{eq:alpha_LS} .
\end{align*}
\end{proof}
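Corollary \ref{cor:LS} lends itself to a quick Monte Carlo sanity check (a standalone sketch with illustrative parameter choices, not part of the proof): draw a noiseless instance, solve least-squares, and compare the empirical correlation with the predicted value $\mu/\sqrt{\mu^2+\alpha^2}$.

```python
import numpy as np

# Monte Carlo check of the least-squares prediction in the noiseless case:
# mu = sqrt(2/pi), alpha^2 = (1 - 2/pi) / (delta - 1).
rng = np.random.default_rng(0)
n, delta = 400, 5.0
m = int(delta * n)

x0 = np.zeros(n); x0[0] = 1.0                    # w.l.o.g. x0 = e_1
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)                              # noiseless one-bit labels
xhat, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares solution

corr_sim = abs(xhat @ x0) / (np.linalg.norm(xhat) * np.linalg.norm(x0))
mu = np.sqrt(2 / np.pi)
alpha = np.sqrt((1 - 2 / np.pi) / (delta - 1))
corr_pred = mu / np.sqrt(mu**2 + alpha**2)
assert abs(corr_sim - corr_pred) < 0.05
```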
\begin{remark}[Least-squares: One-bit vs signed measurements]
On the one hand, Corollary \ref{cor:LS} shows that least-squares for (noisy) one-bit measurements lead to an estimator that satisfies
\begin{equation}\label{eq:norm_LS}
\lim_{n\rightarrow \infty} \Big\|\widehat{\mathbf{x}}-\frac{\mu}{\|\mathbf{x}_0\|_2}\cdot{\mathbf{x}_0}\Big\|_2^2 = \tau^2\cdot \frac{1}{\delta-1},
\end{equation}
where $\mu$ is as in \eqref{eq:mu_LS} and $\tau^2:=1-(1-2\varepsilon)^2\frac{2}{\pi}$. On the other hand, it is well-known (e.g., see references in \cite[Sec.~5.1]{Master}) that least-squares for (scaled) linear measurements with additive Gaussian noise (i.e., $y_i= \rho \mathbf{a}_i^T\mathbf{x}_0 + \sigma z_i$, $z_i\sim\mathcal{N}(0,1)$)
leads to an estimator that satisfies
\begin{align}
\lim_{n\rightarrow \infty} \|\widehat{\mathbf{x}}-\rho\cdot{\mathbf{x}_0}\|_2^2 = \sigma^2\cdot \frac{1}{\delta-1}.\label{eq:norm_LS_lin}
\end{align}
Direct comparison of \eqref{eq:norm_LS} to \eqref{eq:norm_LS_lin} suggests that least-squares with one-bit measurements performs the same as if the measurements were linear with scaling factor $\rho=\mu/\|\mathbf{x}_0\|_2$ and noise variance $\sigma^2=\tau^2=\alpha^2(\delta-1)$. This noteworthy conclusion is not new; it was established in \cite{Bri,PV15,NIPS}. We elaborate on the relation to this prior work in the following remark.
\end{remark}
\begin{remark}[Prior work]
There is a lot of recent work on the use of least-squares-type estimators for recovering signals from nonlinear measurements of the form $y_i=f(\mathbf{a}_i^T\mathbf{x}_0)$ with Gaussian vectors $\mathbf{a}_i$. The original work that suggests least-squares as a reasonable estimator in this setting is due to Brillinger \cite{Bri}. In his 1982 paper, Brillinger studied the problem in the classical statistics regime (i.e., $n$ fixed while $m\rightarrow+\infty$) and proved that the least-squares solution satisfies
$$
\lim_{m\rightarrow+\infty} \frac{1}{m}\|\widehat{\mathbf{x}}-\frac{\mu}{\|\mathbf{x}_0\|_2}\cdot{\mathbf{x}_0}\|_2^2 = \tau^2,
$$
where
\begin{align}
\mu &= \E[S f(S)],\quad\quad\qquad S\sim\mathcal{N}(0,1),\notag\\
\tau^2 &= \E[(f(S)-\mu S)^2],\label{eq:Bri}
\end{align}
and the expectations are taken with respect to $S$ and the possible randomness of $f$. Evaluating \eqref{eq:Bri} for $f(S)={\rm{BSC}}_{\varepsilon}(\mathrm{sign}(S))$ leads to the same values for $\mu$ and $\tau^2$ as in \eqref{eq:norm_LS}. In other words, \eqref{eq:norm_LS} for $\delta\rightarrow+\infty$ indeed recovers Brillinger's result. The extension of Brillinger's original work to the high-dimensional setting (both $m,n$ large) was first studied by Plan and Vershynin \cite{PV15}, who derived (non-sharp) non-asymptotic upper bounds on the performance of constrained least-squares (such as the Lasso). Shortly after, \cite{NIPS} extended this result to \emph{sharp} asymptotic predictions and to regularized least-squares. In particular, Corollary \ref{cor:LS} is a special case of the main theorem in \cite{NIPS}. Several other interesting extensions of the result of Plan and Vershynin have recently appeared in the literature, e.g., \cite{genzel2017high,goldstein2018structured,genzel2017recovering,thrampoulidis2018generalized}; however, \cite{NIPS} is the only one to give results that are sharp in the flavor of this paper. Our work extends the result of \cite{NIPS} to general loss functions beyond least-squares. The techniques of \cite{NIPS} that have guided the use of the CGMT in our context have also been recently applied in \cite{PhaseLamp} in the context of phase retrieval.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=8cm,height=6.6cm]{fig2}
\caption{Comparisons between theoretical and simulated results for the least-squares (LS) and least-absolute deviations (LAD) estimators along with the upper bound, as a function of $\delta$, for noiseless measurements ($\epsilon=0$). The LS estimator significantly outperforms the LAD for all values of $\delta$.}
\label{fig:fig2}
\end{figure}
\subsection{Least-absolute deviations} \label{sec:LAD}
By choosing $\ell(t)=|t-1|$ in \eqref{eq:gen_opt}, we obtain a least-absolute deviations estimate. Again, since $y_i=\pm 1$, it holds for all $i$ that
$
|y_i\mathbf{a}_i^T\mathbf{x}-1| = |y_i-\mathbf{a}_i^T\mathbf{x}|.
$
Thus, this choice of loss function leads to minimizing the sum of absolute residuals:
\begin{align}\label{eq:LAD}
\widehat{\mathbf{x}}=\arg\min_\mathbf{x} \sum |y_i-\mathbf{a}_i^T\mathbf{x}|.
\end{align}
As in Section \ref{sec:LS}, for $\ell(t) = |t-1|$ the proximal operator admits a simple expression, as follows:
\begin{align}
\prox{\ell}{x}{\lambda} = 1+ \soft{x-1}{\la}
\end{align}
where
$$
\soft{x}{\la} = \begin{cases}
x-\lambda, & \text{if}\ x>\lambda, \\
x+\lambda, & \text{if}\ x<-\lambda, \\
0,&\text{otherwise}.
\end{cases}
$$
is the standard soft-thresholding function.
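A quick brute-force check of this proximal-operator formula (a standalone sketch; the helper names `soft` and `prox_lad` are ours):

```python
import numpy as np

def soft(x, lam):
    # Standard soft-thresholding operator.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_lad(x, lam):
    # Claimed proximal operator of l(t) = |t - 1|.
    return 1.0 + soft(x - 1.0, lam)

# Compare with argmin_u (x - u)^2 / (2*lam) + |u - 1| on a fine grid.
lam = 0.7
u = np.linspace(-5.0, 5.0, 400001)
for x in [-2.0, 0.5, 1.0, 1.4, 3.0]:
    brute = u[np.argmin((x - u)**2 / (2 * lam) + np.abs(u - 1.0))]
    assert abs(prox_lad(x, lam) - brute) < 1e-3
```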
\subsection{Hinge-loss}
We obtain the hinge-loss estimator by setting $\ell(t) = \max(1-t,0)$ in \eqref{eq:gen_opt}. Similar to Section \ref{sec:LAD}, the proximal operator of the hinge-loss can be expressed in terms of the soft-thresholding function as follows:
\begin{align*}
\prox{\ell}{x}{\la} = 1 +\soft{x+\frac{\la}{2}-1}{\frac{\la}{2}}.
\end{align*}
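As with the LAD loss, this expression can be checked against the defining minimization by brute force (a standalone sketch; `soft` and `prox_hinge` are our own helper names):

```python
import numpy as np

def soft(x, lam):
    # Standard soft-thresholding operator.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_hinge(x, lam):
    # Claimed proximal operator of l(t) = max(1 - t, 0).
    return 1.0 + soft(x + lam / 2 - 1.0, lam / 2)

# Compare with argmin_u (x - u)^2 / (2*lam) + max(1 - u, 0) on a fine grid.
lam = 0.9
u = np.linspace(-5.0, 6.0, 400001)
for x in [-1.5, 0.2, 0.95, 1.1, 2.5]:
    brute = u[np.argmin((x - u)**2 / (2 * lam) + np.maximum(1.0 - u, 0.0))]
    assert abs(prox_hinge(x, lam) - brute) < 1e-3
```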
As already mentioned in Remark \ref{rem:threshold}, the set of minimizers of the hinge-loss is bounded (required by Theorem \ref{thm:main}) only for $\delta>\delta^\star_\varepsilon$ where $\delta^\star_\varepsilon$ is the value of the threshold in \eqref{eq:threshold}. Our numerical simulations in Figures \ref{fig:fig3} and \ref{fig:fig4} suggest that hinge-loss is robust to measurement corruptions, as for moderate to large values of $\delta$ it outperforms the LS and the LAD estimators. Theorem \ref{thm:main} opens the way to analytically confirm such conclusions, which is an interesting future direction.
\section{Conclusion}\label{sec:conc}
This paper derives \emph{sharp} asymptotic performance guarantees for a wide class of convex optimization based estimators for recovering a signal from corrupted one-bit measurements. Our general result includes as a special case the least-squares estimator that was previously studied in \cite{NIPS}. Beyond that, it applies to other popular estimators such as the LAD, Hinge-loss, logistic loss, etc. One natural and interesting research direction is finding the optimal loss function $\ell(\cdot)$ in \eqref{eq:gen_opt}. In view of Theorem \ref{thm:main}, this boils down to finding $\ell(\cdot)$ that minimizes the ratio $\alpha/\mu$ of the parameters $\alpha$ and $\mu$ that solve the system of equations in \eqref{eq:eq_main}. For this purpose, it might also be important to derive necessary and sufficient conditions that guarantee \eqref{eq:eq_main} has a unique solution.
\section{Introduction}
\subsection{Motivation}
Classical statistical signal-processing theory studies estimation problems in which the number of unknown parameters $n$ is small compared to the number of observations $m$. In contrast, modern inference problems are typically \emph{high-dimensional}, that is $n$ can be of the same order as $m$. Examples are abundant in a wide range of signal-processing applications such as medical imaging, wireless communications, recommendation systems and so on. Classical tools and theories are not applicable in these modern inference problems. As such, over the last two decades or so, the study of high-dimensional estimation problems has received significant attention.
Despite the remarkable progress in many directions, several important questions remain to be explored.
This paper studies the fundamental problem of recovering an unknown signal from (possibly corrupted) one-bit measurements in high dimensions. We focus on a rather rich class of convex optimization-based estimators that includes, for example, least-squares (LS), least-absolute deviations (LAD), logistic regression and hinge-loss as special cases. For such estimators and Gaussian measurement vectors, we compute their asymptotic performance in the high-dimensional linear regime in which $m,n\rightarrow+\infty$ and $m/n\rightarrow\delta\in(1,+\infty)$. Importantly, our results are \emph{sharp}. In contrast to existing related results, which are order-wise (i.e., they involve unknown or loose constants), this sharpness allows us to accurately compare the relative performance of different methods (e.g., LS vs.\ LAD).
It is worth mentioning that while our predictions are asymptotic, our numerical illustrations suggest that they are valid for dimensions $m$ and $n$ that are as small as a few hundreds.
\subsection{Contributions}
Our goal is to recover $\mathbf{x}_0\in\mathbb{R}^n$ from measurements $y_i=\mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0),~i=1,\ldots,m$, where $\mathbf{a}_i\in\mathbb{R}^n$ have entries iid Gaussian. The results account for possible corruptions by allowing each measurement $y_i$ to be sign-flipped with constant probability $\varepsilon\in[0,1/2]$ (see Section \ref{sec:form} for details). We study the asymptotic performance of estimators $\widehat{\mathbf{x}}_\ell$ that solve the following optimization problem for some convex loss function $\ell(\cdot)$:
\begin{align}\label{eq:intro_opt}
\widehat{\mathbf{x}}_\ell := \arg\min_\mathbf{x} \sum_{i=1}^{m} \ell(y_i\mathbf{a}_i^T\mathbf{x}).
\end{align}
When $m,n\rightarrow+\infty$ and $m/n\rightarrow\delta>\delta_\varepsilon^\star$, we show that the correlation of $\widehat{\mathbf{x}}_\ell$ with the true vector $\mathbf{x}_0$ is sharply predicted by $\sqrt{\frac{1}{1+({\alpha}/{\mu})^2}}$, where the parameters $\alpha$ and $\mu$ are the solutions to a system of three non-linear equations in three unknowns. We find that the system of equations (and thus, the value of $\alpha/\mu$) depends on the loss function $\ell(\cdot)$ through its Moreau envelope function. We prove that $\delta>1$ is necessary for the equations to have a bounded solution, but, in general, the value of the threshold $\delta_\varepsilon^\star$ depends both on the noise level $\varepsilon$ and on the loss function. For the general case where $\varepsilon \in[0,1/2]$, we propose a method to find an upper bound on the correlation, which corresponds to the maximum correlation that can be achieved by any convex loss function.
\par Although it is not the main focus of this paper, we remark that our results hold in the general case where the measurements $y_i$ are determined according to an arbitrary function $f:\mathbb{R}\rightarrow[0,1]$ (see \eqref{eq:gen_label}). Moreover, as our analysis in Appendix \ref{sec:mainproof} shows, the system of equations introduced in Section \ref{sec:SOE} can be extended to address regularized loss functions, i.e.,
\begin{align}
\sum_{i=1}^{m} \ell(y_i\mathbf{a}_i^T\mathbf{x}) + r\left\lVert\mathbf{x}\right\rVert_2^2.\nonumber
\end{align}
\par We specialize our general result to specific loss functions such as LS, LAD and hinge-loss. This allows us to numerically compare the performance of these popular estimators by simply evaluating the corresponding theoretical predictions. Our numerical illustrations corroborate our theoretical predictions. For LS, our equations can be solved in closed form and recover the result of \cite{NIPS} (see Section \ref{sec:prior}). For the hinge-loss, we show that $\delta_\varepsilon^\star$ is a decreasing function of $\varepsilon$ that approaches $+\infty$ in the noiseless case and $2$ when $\varepsilon=1/2$.
We believe that our work opens the possibility for addressing several important open questions, such
as finding the optimal choice of the loss function in \eqref{eq:intro_opt} and the value of $\delta_\varepsilon^\star$ for general loss functions.
\subsection{Prior work}\label{sec:prior}
As mentioned, over the past two decades a very long list of works has derived statistical guarantees for high-dimensional estimation problems. In particular, many of these works are concerned with convex optimization-based inference methods. Our work is most closely related to the following two lines of research.
\vspace{4pt}
\noindent\emph{(a)~Sharp asymptotic predictions for noisy linear measurements.} Most of the results in the literature of high-dimensional statistics are order-wise in nature. Sharp asymptotic predictions have only recently appeared in the literature for the case of noisy linear measurements with Gaussian measurement vectors. There are by now three different approaches that have been used (each to a different extent) for the asymptotic analysis of convex regularized estimators: (a) the one based on the approximate message passing (AMP) algorithm and its state-evolution analysis \cite{AMP,donoho2011noise,bayati2011dynamics,montanariLasso,donoho2016high}; (b) the one based on Gaussian process (GP) inequalities, specifically the convex Gaussian min-max theorem (CGMT) \cite{Sto,Cha,StoLASSO,OTH13,COLT,Master}; (c) and the ``leave-one-out'' approach \cite{karoui2013asymptotic,karoui15}. The three approaches are quite different from one another, and each comes with its own distinguishing features and limitations. A detailed comparison is beyond our scope. In this paper, we follow the GP approach and build upon the CGMT.
Since they are concerned with linear measurements, these previous works consider estimators that solve minimization problems of the form
\begin{align}\label{eq:intro_opt2}
\widehat{\mathbf{x}} := \arg\min_\mathbf{x} \sum_{i=1}^{m} \widetilde{\ell}(y_i-\mathbf{a}_i^T\mathbf{x}) + r R(\mathbf{x}).
\end{align}
Specifically, the loss function $\widetilde\ell(\cdot)$ penalizes the residual. In this paper, we extend the applicability of the CGMT to optimization problems of the form \eqref{eq:intro_opt}. For our case of signed measurements, \eqref{eq:intro_opt} is more general than \eqref{eq:intro_opt2}. To see this, note that for $y_i\in\{\pm 1\}$ and popular symmetric loss functions $\widetilde{\ell}(t)=\widetilde{\ell}(-t)$ (e.g., LS, LAD), choosing $\ell(t)=\widetilde\ell(t-1)$ in \eqref{eq:intro_opt} recovers \eqref{eq:intro_opt2}. Moreover, \eqref{eq:intro_opt} includes several other popular loss functions, such as the logistic loss and the hinge-loss, which cannot be expressed in the form \eqref{eq:intro_opt2}.
\vspace{4pt}
\noindent\emph{(b)~One-bit compressed sensing.} Our work naturally relates to the literature of one-bit compressed sensing (CS) \cite{boufounos20081}. The vast majority of performance guarantees for one-bit CS are order-wise in nature, e.g., \cite{jacques2013robust,plan2013one,plan2012robust,PV15}. To the best of our knowledge, the only existing sharp results are presented in \cite{NIPS} for Gaussian measurement vectors. Specifically, the paper \cite{NIPS} derives the asymptotic performance of regularized LS for generalized nonlinear measurements, which include signed measurements as a special case. Our work can be seen as a direct extension of \cite{NIPS} to loss functions beyond least-squares, such as the hinge-loss. In fact, the result of \cite{NIPS} for our setting is a direct corollary of our main theorem (see Section \ref{sec:LS}). As in \cite{NIPS}, our proof technique is based on the CGMT.
\vspace{4pt}
A few works consider general convex loss functions for estimating a signal from noisy measurements in high dimensions. In \cite{genzel2016nonas}, the general estimator based on losses of the form $\ell(\langle\mathbf{a}_i,\mathbf{x}\rangle,y_i)$ is studied non-asymptotically for structured signals. However, the loss function is assumed to satisfy several conditions, including restricted strong convexity, continuous differentiability in the first argument, and a derivative that is Lipschitz-continuous with respect to the second argument. The author furthermore derives sufficient conditions on the loss function that ensure the restricted strong convexity condition.
Our result in Theorem \ref{sec:lem} on the best achievable performance across all loss functions is comparable to \cite[Theorem 1]{bean2013optimal}, which also proposes a method for deriving the optimal loss function and characterizing its performance. However, those results hold for measurements of the form $y_i=\mathbf{a}_i^T\mathbf{x}_0+ \varepsilon_i$, where $\{\varepsilon_i\}_{i=1}^{m}$ are random errors independent of the $\mathbf{a}_i$'s.
\vspace{4pt}
Finally, our paper is closely related to \cite{candes2018phase,sur2019modern}, in which the authors study the high-dimensional performance of maximum-likelihood (ML) estimation for the logistic model. The ML estimator is a special case of \eqref{eq:intro_opt}, but their measurement model differs from the one considered in this paper. Also, their analysis is based on the AMP. While this paper was being prepared, we became aware of \cite{salehi2019impact}, in which the authors extend the results of \cite{sur2019modern} to regularized ML by using the CGMT. In contrast, we present results for general loss functions and a different measurement model.
\subsection{Organization and notation}
The rest of the paper is organized as follows. Section \ref{sec:form} formally introduces the problem that this paper is concerned with. We present our main results Theorems \ref{thm:main} and \ref{sec:lem} in Section \ref{sec:gen}, where we also discuss some of their implications. In Section \ref{sec:cases}, we specialize the general result of Theorem \ref{thm:main} to the LS, LAD and hinge-loss estimators. We also present numerical simulations to validate our theoretical predictions. We conclude in Section \ref{sec:conc} with several possible directions for future research. A proof sketch of Theorem \ref{thm:main} is provided in Appendix \ref{sec:proof}.
The symbols $\mathbb{P}\left(\cdot\right)$ and $\Exp\left[\cdot\right]$ denote the probability of an event and the expectation of a random variable, respectively. We use boldface notation for vectors. $\|\mathbf{v}\|_2$ denotes the Euclidean norm of a vector $\mathbf{v}$. We write $i\in[m]$ for $i=1,2,\ldots,m$.
When writing $x_* = \arg\min_x f(x),$ we let the operator $\arg\min$ return any one of the possible minimizers of $f$. For $x\in\mathbb{R}$, the Gaussian $Q$-function is defined as $Q(x)= \int_{x}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-s^2/2}\, \mathrm{d}s.$
For a random variable $H$ with density $p_H(h)$ that has a derivative $p_H^{'}(h)$ for all $h\in\mathbb{R}$, we denote its score function by $\xi_h:=\frac{\partial}{\partial h}{\log p_H(h)}=\frac{p_H^{'}(h)}{p_H(h)}$. The Fisher information of $H$ (e.g., \cite[Sec.~2]{barron1984monotonic}) is defined as
$\mathcal{I}(H):=\Exp[\,(\xi_H)^2\,].$
\section{Problem statement}\label{sec:form}
The goal is to recover the unknown vector $\mathbf{x}_0\in\mathbb{R}^n$ from $m$ noisy signed measurements $y_i$. Let $\overline{y}_i,~i\in[m]$ denote the \emph{noiseless} signed measurements $ \overline{y}_i= \mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0),~i\in[m]$, where $\mathbf{a}_i\in\mathbb{R}^n$ are the measurement vectors. We assume the following noise model in which each measurement is corrupted (i.e., sign flipped) with some constant probability $\varepsilon\in[0,1/2]$:
\begin{align}\label{eq:gen_model}
y_i = {\rm{BSC}}_{\varepsilon}(\overline{y}_i) := \begin{cases} \mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0) &, \text{w.p.}~~1-\varepsilon, \\ -\mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0) &, \text{w.p.}~~\varepsilon. \end{cases}
\end{align}
We remark that all our results remain valid in the case of (potentially) adversarial noise in which an arbitrary subset of $\varepsilon\, m$ of the noiseless measurements $\overline{y}_i$ is flipped. Nevertheless, for the rest of the paper, we focus on the measurement model in \eqref{eq:gen_model}.
This paper studies the recovery performance of estimates $\widehat{\mathbf{x}}_{\ell}$ of $\mathbf{x}_0$ that are obtained by solving the following convex optimization problem:
\begin{align}\label{eq:gen_opt}
\widehat{\mathbf{x}}_{\ell} \in \arg\min_{\mathbf{x}} \frac{1}{m} \sum_{i=1}^{m} \ell(y_i \mathbf{a}_i^T \mathbf{x}).
\end{align}
Here, $\ell:\mathbb{R}\rightarrow\mathbb{R}$ is a convex loss function, and the subscript $\ell$ on the estimate $\widehat{\mathbf{x}}_\ell$ emphasizes its dependence on the choice of the loss function. Different choices of $\ell(\cdot)$ lead to specific popular estimators, including the following:
\begin{itemize}
\item Least-squares (LS): $\ell(t)=\frac{1}{2}(t-1)^2$,
\item Least-absolute deviations (LAD): $\ell(t)=|t-1|$,
\item Logistic maximum-likelihood: $\ell(t)=\log(1+e^{-t})$,
\item Ada-boost: $\ell(t)=e^{-t}$,
\item Hinge-loss: $\ell(t)=\max\{1-t\,,\,0\}$.
\end{itemize}
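To make the estimators above concrete, the following small numerical sketch (illustrative only; the dimensions, the random seed, and the use of a generic \texttt{scipy} minimizer are our own choices, not from the paper) solves \eqref{eq:gen_opt} with the logistic loss on data generated according to \eqref{eq:gen_model} and reports the empirical correlation with $\mathbf{x}_0$:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, m, eps = 20, 1000, 0.1                     # illustrative sizes: delta = m/n = 50
x0 = rng.standard_normal(n)
A = rng.standard_normal((m, n))               # iid N(0,1) measurement vectors
y = np.sign(A @ x0)                           # noiseless one-bit measurements
y[rng.random(m) < eps] *= -1                  # flip each sign w.p. eps (BSC corruption)

B = y[:, None] * A                            # rows are y_i * a_i^T

def loss(x):                                  # (1/m) sum_i log(1 + exp(-y_i a_i^T x))
    return np.logaddexp(0.0, -B @ x).mean()

def grad(x):                                  # gradient of the logistic objective
    return -(B.T @ expit(-B @ x)) / m

x_hat = minimize(loss, np.zeros(n), jac=grad, method="BFGS").x
corr = abs(x_hat @ x0) / (np.linalg.norm(x_hat) * np.linalg.norm(x0))
```

With $\delta=50$ and $\varepsilon=0.1$, the minimizer is bounded with high probability and the measured correlation is close to one, in line with the sharp predictions developed below.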
Since we only observe sign-information, any information about the magnitude $\|\mathbf{x}_0\|_2$ of the signal $\mathbf{x}_0$ is lost. Thus, we can only hope to obtain an accurate estimate of the direction of $\mathbf{x}_0$.
We measure performance of the estimate $\widehat{\mathbf{x}}_{\ell}$ by its (absolute) correlation value to $\mathbf{x}_0$, i.e.,
\begin{align}\label{eq:corr}
\corr{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0}:=\frac{|\,{\inp{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0}}\,|}{\|\widehat{\mathbf{x}}_\ell\|_2 \|\mathbf{x}_0\|_2} \in [0,1].
\end{align}
Obviously, we seek estimates that maximize correlation. \\
Although signed measurements obtained according to \eqref{eq:gen_model} are the main focus of this paper, we remark that all results remain valid in the general case where the measurements are determined according to:
\begin{align} \label{eq:gen_label}
y_i = \begin{cases} 1 &, \text{w.p.}~~ f(\mathbf{a}_i^T\mathbf{x}_0), \\ -1 &, \text{w.p.}~~1-f(\mathbf{a}_i^T\mathbf{x}_0), \end{cases}
\end{align}
where $f : \mathbb{R}\rightarrow[0,1]$. It is straightforward to see that choosing $f(t) = \frac{1}{2}+\frac{1-2\varepsilon}{2} \mathrm{sign}(t)$ recovers \eqref{eq:gen_model}.
Moreover, as the analysis in Appendix \ref{sec:mainproof} suggests, our results extend to the broader class of optimization problems with an additional penalty on the magnitude of $\mathbf{x}$, i.e.,
\begin{align} \label{eq:opt_reg}
\widehat{\mathbf{x}}_\ell = \arg\min_\mathbf{x} \sum_{i=1}^{m} \ell(y_i\mathbf{a}_i^T\mathbf{x}) + r\left\lVert\mathbf{x}\right\rVert_2^2,
\end{align}
where $r \geq 0$ is a regularization parameter. However, the main body of this paper focuses on the case $r=0$.\\
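As a simple illustration of \eqref{eq:opt_reg}, for the least-squares loss $\ell(t)=\frac{1}{2}(t-1)^2$ the regularized problem reduces to a ridge-type linear system that can be solved directly. The sketch below is illustrative only (the sizes, seed, and ridge weight are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 30, 300, 0.5                        # illustrative sizes and ridge weight
x0 = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)                           # noiseless signed measurements

# min_x (1/2) ||diag(y) A x - 1||_2^2 + r ||x||_2^2
# stationarity: (B^T B + 2 r I) x = B^T 1,  with  B = diag(y) A
B = y[:, None] * A
x_hat = np.linalg.solve(B.T @ B + 2 * r * np.eye(n), B.T @ np.ones(m))

corr = abs(x_hat @ x0) / (np.linalg.norm(x_hat) * np.linalg.norm(x0))
```

The regularization shrinks the magnitude of $\widehat{\mathbf{x}}_\ell$ but leaves its direction, and hence the correlation, essentially unchanged for moderate $r$.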
Our main result characterizes the asymptotic value of $\corr{\widehat{\mathbf{x}}_{\ell}}{\mathbf{x}_0}$ in the linear high-dimensional regime in which the problem dimensions $m$ and $n$ grow proportionally to infinity with ${m}/{n}\rightarrow\delta\in(1,\infty).$ All our results are valid under the assumption that the measurement vectors have entries IID Gaussian.
\begin{ass}[Gaussian measurement vectors]\label{ass:Gaussian}
The vectors $\mathbf{a}_i,~i\in[m]$ have entries IID standard normal $\mathcal{N}(0,1)$.
\end{ass}
We make no further assumptions on the distribution of the true vector $\mathbf{x}_0$.
\section{Main results}\label{sec:gen}
\subsection{Moreau Envelopes}
Before presenting our main result, we need a few definitions. We write
$$\env{\ell}{x}{\la}:=\min_{v}\frac{1}{2\la}(x-v)^2 + \ell(v),$$
for the \emph{Moreau envelope function} of the loss $\ell:\mathbb{R}\rightarrow\mathbb{R}$ at $x$ with parameter $\la>0$. Note that the objective function in the minimization above is strongly convex. Thus, for all values of $x$ and $\la$, there exists a unique minimizer, which we denote by $\prox{\ell}{x}{\la}$. This is known as the \emph{proximal operator} of $\ell$ at $x$ with parameter $\la$. One important and useful property of the Moreau envelope function is that it is continuously differentiable with respect to both $x$ and $\la$ \cite{rockafellar2009variational}. We denote these derivatives as follows:
\begin{align}\notag
\envdx{\ell}{x}{\la}&:=\frac{\partial{\env{\ell}{x}{\la}}}{\partial x},\\
\envdla{\ell}{x}{\la}&:=\frac{\partial{\env{\ell}{x}{\la}}}{\partial \la}.\notag
\end{align}
The following is a well-known result that is useful for our purposes.
\begin{propo}[Derivatives of $\mathcal{M}_{\ell}$~\cite{rockafellar2009variational}]
\label{propo:der}
For a function $\ell:\mathbb{R}\rightarrow\mathbb{R}$, and all $x\in\mathbb{R}$ and $\lambda>0$, the following properties are true:
\begin{align}
\envdx{\ell}{x}{\la} &= \frac{1}{\la}{(x-\prox{\ell}{x}{\la})}, \notag \\
\envdla{\ell}{x}{\la} &= -\frac{1}{2\la^2}{(x-\prox{\ell}{x}{\la})^2}.\notag
\end{align}
If in addition $\ell$ is differentiable and $\ell^{'}$ denotes its derivative, then
\begin{align}
\envdx{\ell}{x}{\la}&= \ell^{'}(\proxl{x}{\la}),\notag\\
\envdla{\ell}{x}{\la}&= -\frac{1}{2}\left(\ell^{'}(\proxl{x}{\la})\right)^2.\notag
\end{align}
\end{propo}
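As a numerical sanity check of the proposition (an illustrative sketch; the logistic loss, the value $\la=0.7$, and the evaluation point are arbitrary choices), one can compute the Moreau envelope by one-dimensional minimization and compare a finite-difference derivative against both expressions for $\envdx{\ell}{x}{\la}$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam = 0.7                                     # arbitrary envelope parameter
ell = lambda v: np.logaddexp(0.0, -v)         # logistic loss log(1 + e^{-v})

def envelope(x):
    # M_ell(x; lam) = min_v (x - v)^2 / (2 lam) + ell(v); the loss is smooth,
    # so a bounded 1-d minimizer recovers both the value and the prox point
    res = minimize_scalar(lambda v: (x - v) ** 2 / (2 * lam) + ell(v),
                          bounds=(x - 10, x + 10), method="bounded")
    return res.fun, res.x

x, h = 0.4, 1e-4
_, prox = envelope(x)
num_dx = (envelope(x + h)[0] - envelope(x - h)[0]) / (2 * h)  # finite difference
ana_dx = (x - prox) / lam                     # first identity of the proposition
ana_dx2 = -1.0 / (1.0 + np.exp(prox))         # ell'(prox), the differentiable form
```

Both analytic expressions agree with the finite-difference value to within the accuracy of the one-dimensional minimizer.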
\subsection{A system of equations} \label{sec:SOE}
It turns out that the asymptotic performance of \eqref{eq:gen_opt} depends on the loss function $\ell$ via its Moreau envelope. Specifically, define random variables $G,S$ and $Y$ as follows (recall the definition of ${\rm{BSC}}_{\varepsilon}$ in \eqref{eq:gen_model}):
\begin{align}\label{eq:GSY}
G,S\stackrel{\text{iid}}{\sim}\mathcal{N}(0,1) \quad\text{and}\quad Y={\rm{BSC}}_{\varepsilon}(\mathrm{sign}(S)),
\end{align}
and consider the following system of non-linear equations in the three unknowns $(\mu,\alpha,\la)$ with $\alpha>0$:
\begin{subequations}\label{eq:eq_main}
\begin{align}
\Exp\left[Y\, S \cdot\envdx{\ell}{\alpha G + \mu S Y}{\la} \right]&=0 , \label{eq:mu_main}\\
{\la^2}\,{\delta}\,\Exp\left[\,\left(\envdx{\ell}{\alpha G + \mu S Y}{\la}\right)^2\,\right]&=\alpha^2 ,
\label{eq:alpha_main}\\
\lambda\,\delta\,\E\left[ G\cdot \envdx{\ell}{\alpha G + \mu S Y}{\la} \right]&=\alpha.
\label{eq:lambda_main}
\end{align}
\end{subequations}
The expectations above are with respect to the randomness of the random variables $G$, $S$ and $Y$. As we show shortly, the solution to these equations is tightly connected to the asymptotic behavior of the optimization in \eqref{eq:gen_opt}.
We remark that the equations are well defined even if the loss function $\ell$ is not differentiable. If $\ell$ is differentiable, then, using Proposition \ref{propo:der}, the equations in \eqref{eq:eq_main}
can be written equivalently as follows:
\begin{subequations}\label{eq:eq_main2}
\begin{align}
\Exp\left[Y\, S \cdot\ell^{'}\left( \proxl{\alpha G + \mu S Y}{\la} \right) \right]\label{eq:mu_main2}&=0,\\
{\la^2}\,{\delta}\,\Exp\left[\left(\ell^{'}\left( \proxl{\alpha G + \mu S Y}{\la} \right)\right)^2\right]&=\alpha^2, \label{eq:alpha_main2}\\
\lambda\,\delta\,\E\left[ G\cdot \ell^{'}\left(\proxl{\alpha G + \mu S Y}{\la}\right) \right]&=\alpha. \label{eq:lambda_main2}
\end{align}
\end{subequations}
Finally, if $\ell$ is twice differentiable, then applying integration by parts to Equation \eqref{eq:lambda_main2} results in the following reformulation of \eqref{eq:lambda_main}:
\begin{align}\label{eq:lambda_main3}
1 &= \lambda\,\delta\, \Exp\left[\frac{\ell^{''}\left( \proxl{\alpha G + \mu S Y}{\la} \right)}{1+\la\, \ell^{''}\left( \proxl{\alpha G + \mu S Y}{\la} \right)}\right].
\end{align}
Note that the systems of equations in \eqref{eq:eq_main} and \eqref{eq:eq_main2} correspond to the unregularized case, i.e., $r=0$. For the general case $r>0$, the system of equations extends to \eqref{eq:reg_main}.
\subsection{Asymptotic prediction}
We are now ready to state the main result of this paper.
\begin{thm}[General loss function]\label{thm:main}
Let Assumption \ref{ass:Gaussian} hold and fix some $\varepsilon\in[0,1/2]$ in \eqref{eq:gen_model}. Assume that $\delta>1$ is such that the set of minimizers in \eqref{eq:gen_opt} is bounded and the system of equations \eqref{eq:eq_main} has a unique solution $(\mu,\alpha,\la)$ with $\mu\neq 0$. Let $\widehat{\mathbf{x}}_\ell$ be as in \eqref{eq:gen_opt}. Then, in the limit of $m,n\rightarrow+\infty$ with $m/n\rightarrow\delta$, it holds with probability one that
\begin{align}\label{eq:corr_thm}
\lim_{n\rightarrow \infty} \corr{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0} = \sqrt{\frac{1}{1+({\alpha}/{\mu})^2}}.
\end{align}
\noindent Moreover,
\begin{align}\label{eq:norm_thm}
\lim_{n\rightarrow \infty} \Big\|\widehat{\mathbf{x}}_\ell-\mu\cdot\frac{\mathbf{x}_0}{\|\mathbf{x}_0\|_2}\Big\|_2^2 = \alpha^2.
\end{align}
\end{thm}
Theorem \ref{thm:main} holds for general loss functions. In Section \ref{sec:cases} we specialize the result to specific popular choices. We also present numerical simulations that confirm the validity of the predictions of Theorem \ref{thm:main} (see Figures \ref{fig:fig2}--\ref{fig:fig4}). Before that, in the following remarks we present a few notes on the conditions, interpretation and implications of the theorem. A proof outline is included in Appendix \ref{sec:proof}.
\begin{remark}[The role of $\mu$ and $\alpha$]
The theorem evaluates the asymptotic performance of the estimator $\widehat{\mathbf{x}}_\ell$ for a convex loss function $\ell$ in \eqref{eq:gen_opt}. According to \eqref{eq:corr_thm}, the prediction for the limiting behavior of the correlation value is given in terms of $\sigma_\ell:=\alpha\big/\mu$, where $\mu$ and $\alpha$ are part of the unique solution of \eqref{eq:eq_main}. \emph{The smaller the value of $\sigma_\ell$, the larger the correlation value.} Thus, the correlation value is fully determined by the ratio of the parameters $\alpha$ and $\mu$. Their individual roles are clarified in \eqref{eq:norm_thm}. Specifically, according to \eqref{eq:norm_thm}, $\widehat{\mathbf{x}}_{\ell}$ is a biased estimate of the true $\mathbf{x}_0$, and $\mu$ represents exactly that bias term. In other words, solving \eqref{eq:gen_opt} returns an estimator that is close to a $\mu$-scaled version of $\mathbf{x}_0$. When $\mathbf{x}_0$ and $\widehat{\mathbf{x}}_{\ell}$ are scaled appropriately, the squared Euclidean norm of their difference converges to $\alpha^2$.
\end{remark}
\begin{remark}[On the existence of a solution to \eqref{eq:eq_main}] While $\delta>1$ is a necessary condition for the equations in \eqref{eq:eq_main} to have a solution, it is \emph{not} sufficient in general. This depends on the specific choice of the loss function. For example, in Section \ref{sec:LS}, we show that for the squared loss $\ell(t)=(t-1)^2$, the equations have a unique solution iff $\delta>1$. On the other hand, for logistic regression and hinge-loss, it is argued in Remark \ref{rem:threshold} that there exists a threshold value $\delta^\star_\varepsilon:=\delta^\star(\varepsilon)>2$ such that the set of minimizers in \eqref{eq:gen_opt} is unbounded if $\delta<\delta^\star_{\varepsilon}$. Hence, the theorem does not hold for $\delta<\delta^\star_\varepsilon$.
We conjecture that for these choices of loss, the equations \eqref{eq:eq_main} are solvable iff $\delta>\delta^\star_{\varepsilon}$. Justifying this conjecture is an interesting direction for future work. More generally, we leave the study of sufficient and necessary conditions under which the equations \eqref{eq:eq_main} admit a unique solution to future work.
\end{remark}
\begin{remark}[Bounded minimizers]\label{rem:threshold}
Theorem \ref{thm:main} only holds in regimes for which the set of minimizers of \eqref{eq:gen_opt} is bounded. As we show here, this is $\emph{not}$ always the case. Specifically, consider non-negative loss functions $\ell(t)\geq 0$ with the property $\lim_{t\rightarrow+\infty} \ell(t)=0$. For example, the hinge-loss, Ada-boost and logistic loss all satisfy this property. Now, we show that for such loss functions the set of minimizers is unbounded if $\delta<\delta^\star_\varepsilon$ for some appropriate $\delta^\star_\varepsilon>2$. First, note that the set of minimizers is unbounded if the following condition holds:
\begin{align}\label{eq:sep}
\exists~\mathbf{x}_s\in\mathbb{R}^n \quad\text{such that}\quad y_i\mathbf{a}_i^T\mathbf{x}_s \geq 1, \quad \forall~i\in[m].
\end{align}
Indeed, if \eqref{eq:sep} holds, then $\mathbf{x}=c\cdot\mathbf{x}_s$ with $c\rightarrow+\infty$ drives the cost in \eqref{eq:gen_opt} to zero, its infimum; thus, the set of minimizers is unbounded. To proceed, we rely on a recent result by Cand\`es and Sur \cite{candes2018phase}, who prove that \eqref{eq:sep} holds iff \footnote{To be precise, \cite{candes2018phase} prove the statement for measurements $y_i,~i\in[m]$ that follow a logistic model. Close inspection of their proof shows that this requirement can be relaxed by appropriately defining the random variable $Y$ in \eqref{eq:threshold}.} \begin{align}\label{eq:threshold}
\delta \leq \delta^\star_{\varepsilon}:= \left(\min_{c\in\mathbb{R}}\E\left[\left(G+c\,S\,Y\right)_{-}^2\right]\right)^{-1},
\end{align}
where $G,S$ and $Y$ are random variables as in \eqref{eq:GSY} and $(t)_{-}:=\min\{0,t\}$. It can be checked analytically that $\delta^\star(\varepsilon)$ is a decreasing function of $\varepsilon$ with $\delta^\star(0^+)=+\infty$ and $\delta^{\star}(1/2)=2$. In Figure \ref{fig:thresholdfig}, we numerically evaluate the threshold value $\delta^\star_\varepsilon$ as a function of the corruption level $\varepsilon$. For $\delta<\delta^\star_\varepsilon$, the set of minimizers of \eqref{eq:gen_opt} with logistic or hinge loss is unbounded.
\end{remark}
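The threshold in \eqref{eq:threshold} is straightforward to approximate numerically. The following Monte-Carlo sketch (illustrative only; the sample size and the choice $\varepsilon=0.25$ are arbitrary) evaluates $\delta^\star_\varepsilon$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
N = 400_000                                   # Monte-Carlo sample size
G = rng.standard_normal(N)
S = rng.standard_normal(N)
eps = 0.25
Y = np.sign(S) * np.where(rng.random(N) < eps, -1.0, 1.0)   # BSC_eps(sign(S))

def obj(c):
    # E[(G + c S Y)_-^2]  with  (t)_- = min{0, t}
    return np.mean(np.minimum(G + c * S * Y, 0.0) ** 2)

res = minimize_scalar(obj, bounds=(0.0, 20.0), method="bounded")
delta_star = 1.0 / res.fun                    # threshold of the display above
```

At $c=0$ the objective equals $\E[(G)_-^2]=1/2$, so $\delta^\star_\varepsilon\geq 2$; for $\varepsilon<1/2$ the minimum over $c$ is strictly smaller and the threshold exceeds $2$, consistent with Figure \ref{fig:thresholdfig}.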
\begin{figure}
\centering
\includegraphics[width=8.2cm,height=7cm]{threshold}
\caption{The value of the threshold $\delta^\star_\varepsilon$ in \eqref{eq:threshold} as a function of probability of error $\varepsilon\in[0,1/2]$. For logistic and hinge-loss, the set of minimizers in \eqref{eq:gen_opt} is bounded (as required by Theorem \ref{thm:main}) iff $\delta>\delta^\star_\varepsilon$. See Remark \ref{rem:threshold} and \cite{candes2018phase}.}
\label{fig:thresholdfig}
\end{figure}
\begin{remark}[Why $\delta>1$] The theorem assumes that $\delta>1$ or equivalently $m>n$. Here, we show that this condition is \emph{necessary} for the equations \eqref{eq:eq_main} to have a bounded solution, or equivalently for the minimizer $\widehat{\mathbf{x}}_\ell$ in \eqref{eq:gen_opt} to be bounded.
To see this, square both sides of \eqref{eq:lambda_main} and divide by \eqref{eq:alpha_main} to find that
$$
\delta = \frac{\Exp\left[\,\left(\envdx{\ell}{\alpha G + \mu S Y}{\la}\right)^2\,\right]}{\left(\E\left[ G\cdot \envdx{\ell}{\alpha G + \mu S Y}{\la} \right]\right)^2} \geq 1.
$$
The inequality follows by applying Cauchy-Schwarz and using the fact that $\E[G^2]=1$.
\end{remark}
\begin{remark}[Solving the equations]\label{rem:FP}
Evaluating the performance of $\widehat{\mathbf{x}}_\ell$ requires solving the system of non-linear equations in \eqref{eq:eq_main}. We empirically observe (see also \cite{Master} for a similar observation) that if a solution exists, then it can be efficiently found by the following fixed-point iteration method. Let $\mathbf{v} := [\mu,\alpha,\la]^T$ and let $\mathcal{F}:\mathbb{R}^3\rightarrow\mathbb{R}^3$ be such that \eqref{eq:eq_main} is equivalent to $\mathbf{v} = \mathcal{F}(\mathbf{v})$. With this notation, initialize $\mathbf{v}_0$ and for $k\geq 0$ repeat the iterations $\mathbf{v}_{k+1} = \mathcal{F}(\mathbf{v}_k)$ until convergence.
\end{remark}
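As an illustration of the iteration, for the least-squares loss $\ell(t)=\frac{1}{2}(t-1)^2$ the Moreau-envelope derivative is $(x-1)/(1+\la)$ and the three equations reduce to explicit updates. The sketch below is our own reduction, not taken from the paper (the values of $\varepsilon$ and $\delta$ are illustrative):

```python
import numpy as np

eps, delta = 0.1, 4.0                          # illustrative corruption level and ratio
c = (1 - 2 * eps) * np.sqrt(2 / np.pi)         # E[S Y] under the BSC model

# For ell(t) = (t-1)^2 / 2 the Moreau-envelope derivative is (x-1)/(1+lam),
# and the three equations reduce to the explicit updates below.
mu, alpha, lam = 1.0, 1.0, 1.0
for _ in range(200):
    mu = c                                     # from the first equation
    lam = 1.0 / (delta - 1.0)                  # from the third equation
    alpha = np.sqrt((alpha ** 2 + 1 - mu ** 2) / delta)   # from the second equation

corr_pred = 1.0 / np.sqrt(1.0 + (alpha / mu) ** 2)
```

At the fixed point, $\mu=\Exp[SY]$, $\la=1/(\delta-1)$ and $\alpha^2=(1-\mu^2)/(\delta-1)$, consistent with the statement that the LS equations admit a closed-form solution (Section \ref{sec:LS}).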
\subsection{Performance bounds}
In this section, we use the asymptotic prediction of Theorem \ref{thm:main} to derive an upper bound on the correlation value $\corr{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0}$ that holds for \emph{all} choices of continuously differentiable loss functions. The following theorem is the key intermediate result in this direction. At a high level, the proof of the theorem involves a careful algebraic manipulation of the system of equations \eqref{eq:eq_main} and leverages properties of the Moreau envelope function.
\begin{thm}[Lower bound on $\sigma_{\ell}$]\label{sec:lem}Let $\ell$ be continuously differentiable and let the assumptions and notation of Theorem \ref{thm:main} hold. Then,
\begin{align} \label{eq:lem}
\sigma_\ell^2 \, \mathcal{I}\big(\sigma_\ell\, G + SY\big) \geq \frac{1}{\delta}.
\end{align}
\end{thm}
\begin{proof}
First, applying Gaussian integration by parts in \eqref{eq:lambda_main}, we find that $$1=\lambda\, \delta\,\E[\envddx{\ell}{\alpha G + \mu S Y}{\la}],$$
where $\envddx{\ell}{x}{\la}$ denotes the second derivative of the Moreau envelope function with respect to the first argument (which exists since $\ell$ is continuously differentiable). Solving for $\lambda$ and substituting in \eqref{eq:alpha_main} gives:
\begin{align}\label{eq:lower}
\alpha^2 = \frac{1}{\delta}\cdot\frac{ \E\left[\left(\envdx{\ell}{\alpha G + \mu S Y}{\la}\right)^2\right] }{\left(\,\E[\envddx{\ell}{\alpha G + \mu S Y}{\la}]\,\right)^2}.
\end{align}
The rest of the proof follows steps similar to \cite[Lem.~3.4]{donoho2016high} and \cite[Rem.~5.3.3]{Master}.
Call $$Z:=\mu S Y\quad\text{and}\quad W:=\alpha G + Z.$$
Note that for $\alpha\neq 0$, $W$ has a density $p_W(w)$ that is continuously differentiable at every $w$; in fact,
$$
p_W^{'}(w) = \int \phi_{\alpha}^{'}(u) p_{Z}(w-u)\, \mathrm{d}u,
$$
where $\phi_{\alpha}(u)=\frac{1}{\alpha\sqrt{2\pi}}e^{-\frac{u^2}{2\alpha^2}}$.
With this notation, the Cauchy-Schwarz inequality gives:
\begin{align*}
\E\left[\left(\envdx{\ell}{W}{\la}\right)^2\right]\cdot \mathcal{I}(W)
&\geq\left(\E[\envdx{\ell}{W}{\la}\cdot \xi_W]\right)^2\\
&=\left(\E[\envddx{\ell}{W}{\la}]\right)^2,
\end{align*}
where the last line follows by integration by parts.
This, when combined with \eqref{eq:lower} shows that:
\begin{align}\label{eq:lower2}
\alpha^2 \geq \frac{1}{\delta}\, \frac{1}{\mathcal{I}(W)}.
\end{align}
Recall that $W=\alpha G + \mu SY$ and $G\sim\mathcal{N}(0,1)$. Now, since (e.g., \cite[Eqn.~2.13]{barron1984monotonic})
$$
\mathcal{I}(c\cdot H) = \mathcal{I}(H)/c^2,
$$
we have that
$$
\mathcal{I}(W) = \mathcal{I}\left(\mu \,\Big(\frac{\alpha}{\mu} G + SY \Big)\right) = \frac{1}{\mu^2}\mathcal{I}\big(\sigma_\ell\, G + SY\big),
$$
where recall that
$$
\sigma_\ell := \alpha\big/\mu.
$$
Substituting this in \eqref{eq:lower2} gives the desired inequality in the statement of the theorem.
\end{proof}
As illustrated in Figure \ref{fig:sigma}, $\sigma_\ell^2 \, \mathcal{I}\big(\sigma_\ell\, G + SY\big)$ is an increasing function of $\sigma_{\ell}$. Therefore, the minimum admissible value of $\sigma_{\ell}$ can be derived from the inequality in \eqref{eq:lem}; it corresponds to the optimal $\sigma_{\ell}$ that can be achieved by any continuously differentiable loss function. In the following remark, this result is translated into an upper bound on the correlation.
\begin{remark}[Upper bound on correlation] \label{rem:nub}
Using \eqref{eq:lem} with $Y = {\rm{BSC}}_{\varepsilon}(\mathrm{sign}(S))$ and numerically evaluating $\mathcal{I}(\sigma_{\ell}G+SY)$ as a function of $\sigma_{\ell}$, we can find a numerical lower bound on $\sigma_{\ell}$. In view of \eqref{eq:corr_thm}, this directly yields an upper bound on $\corr{\widehat{\mathbf{x}}_\ell}{\mathbf{x}_0}$, i.e., the maximum correlation that can be achieved by any choice of (continuously differentiable) convex loss function.
In Figures \ref{fig:fig2}-\ref{fig:fig4}, the green lines show the upper bound on correlation for specific values of $\varepsilon$; see Section \ref{sec:sim} for details.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=7.9cm,height=6.6cm]{sigma}
\caption{Numerical evaluations of $\sigma_{\ell}^2\mathcal{I}(\sigma_{\ell}G+SY)$, as defined in Theorem \ref{sec:lem}, as a function of $\sigma_{\ell}$ for $\varepsilon=0$, $0.1$ and $0.25$ (recall \eqref{eq:GSY}). The quantity $\sigma_{\ell}^2\mathcal{I}(\sigma_{\ell}G+SY)$ lies in $[0,1)$ and is an increasing function of $\sigma_{\ell}$, which implies the existence and uniqueness of a minimum admissible value of $\sigma_{\ell}$ for all $\delta>1$.
}
\label{fig:sigma}
\end{figure}
\subsection{Numerical simulations}\label{sec:sim}
\begin{figure}
\centering
\includegraphics[width=7.9cm,height=6.6cm]{fig3}
\caption{Comparison between theoretical and simulated results for the LAD, LS and hinge-loss estimators, along with the numerical upper bound, as a function of $\delta$ for probability of error $\varepsilon=0.1$. The dashed line represents the value of the threshold $\delta^\star_\varepsilon$ for $\varepsilon=0.1$ (see Figure \ref{fig:thresholdfig}). For small values of $\delta$, LS outperforms the other two estimators, but the hinge-loss becomes better as $\delta$ increases.}
\label{fig:fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.9cm,height=6.6cm]{fig4}
\caption{
Comparison between theoretical and simulated results for the LAD, LS and hinge-loss estimators, along with the numerical upper bound, as a function of $\delta$ for probability of error $\varepsilon=0.25$. As in Figure \ref{fig:fig3}, the dashed line represents the value of the threshold $\delta^\star_\varepsilon$ for $\varepsilon=0.25$.}
\label{fig:fig4}
\end{figure}
We present numerical simulations that validate the predictions of Theorems \ref{thm:main} and \ref{sec:lem}. For the numerical experiments, we generate random measurements according to \eqref{eq:gen_model} and Assumption \ref{ass:Gaussian}. Without loss of generality (due to rotational invariance of the Gaussian measure), we set $\mathbf{x}_0=[1,0,...,0]^T$. We then obtain estimates $\widehat{\mathbf{x}}_\ell$ of $\mathbf{x}_0$ by numerically solving \eqref{eq:gen_opt} and measure performance by the correlation value $\corr{\hat{\mathbf{x}}_\ell}{\mathbf{x}_0}$. Throughout the experiments, we set $n=128$; the recorded values of correlation in Figures \ref{fig:fig2}--\ref{fig:fig4} are averages over $25$ independent experiments. The theoretical curves for the correlation are computed based on Theorem \ref{thm:main}. We solve the system of equations in \eqref{eq:eq_main} by the fixed-point iteration method described in Remark \ref{rem:FP}. The expectations involved in \eqref{eq:eq_main} are evaluated by Monte-Carlo estimation using $10^5$ independent samples. The numerical upper bounds in Figures \ref{fig:fig2}--\ref{fig:fig4} are derived according to Remark \ref{rem:nub}.
Comparisons between theoretical and simulated values for the LAD and LS estimators, along with the upper bound on correlation, are illustrated in Figure \ref{fig:fig2} for the noiseless case. Note that for $\varepsilon=0$, the hinge-loss has an unbounded set of minimizers for all values of $\delta$ (thus, Theorem \ref{thm:main} is not applicable). In Figure \ref{fig:fig3}, the probability of error $\epsilon$ is increased to $0.1$. Note that in this setting the hinge-loss estimator exists for $\delta> \delta^*_{0.1}\approx3$ and that it outperforms the LAD and LS estimators for large values of $\delta$. \\
In Figure \ref{fig:fig4} we present similar results for $\epsilon=0.25$.
As is evident from Figures \ref{fig:fig3} and \ref{fig:fig4}, the best estimator varies with the values of $\delta$ and $\varepsilon$.
This further underscores the value of studying the accuracy of estimators without restricting attention to a specific loss function.
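The simulation pipeline described above can be sketched in a few lines. The following is a minimal illustration (not the authors' code) for the LS estimator only, with illustrative values of $n$, $\delta$ and $\epsilon$; the measurement model is taken to be binary responses $y_i=\mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0)$, each flipped independently with probability $\epsilon$:

```python
import numpy as np

# Hypothetical sketch of the experiment: binary measurements
# y_i = sign(a_i^T x_0), flipped with probability eps, recovered
# with the least-squares (LS) estimator. Parameters are illustrative.
rng = np.random.default_rng(0)
n, delta, eps = 64, 10.0, 0.1
m = int(delta * n)

x0 = np.zeros(n)
x0[0] = 1.0                      # WLOG by rotational invariance
A = rng.standard_normal((m, n))  # Gaussian measurement matrix
y = np.sign(A @ x0)
flips = rng.random(m) < eps      # independent sign flips
y[flips] *= -1

# LS estimate: argmin ||y - A x||_2, i.e. the pseudo-inverse solution
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
corr = abs(x_hat @ x0) / (np.linalg.norm(x_hat) * np.linalg.norm(x0))
print(round(corr, 3))
```

With $\delta=10$ and $\epsilon=0.1$ the recovered direction is strongly correlated with $\mathbf{x}_0$, consistent with the curves in Figure \ref{fig:fig3}.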
\section{Partial Correlations}
Set $\ell=4$ for simplicity. \ Fix $1\leq i<j\leq4$. \ There exists a unique
ordered pair $m<n$ such that $\{i,j,m,n\}=\{1,2,3,4\}$. \ Define a $3\times3$
matrix
\[
P=\left(
\begin{array}
[c]{ccc}
1 & r_{i,mn} & r_{i,jm}\\
r_{i,mn} & 1 & r_{i,jn}\\
r_{i,jm} & r_{i,jn} & 1
\end{array}
\right)
\]
which captures all correlations among $X_{i}-X_{m}$, $X_{i}-X_{n}$,
$X_{i}-X_{j}$. \ Let $P_{ab}$ be the cofactor of the element $p_{ab}$ in the
expansion of the determinant of $P$. \ The partial correlation $r_{i,mn.j}$
between $X_{i}-X_{m}$ and $X_{i}-X_{n}$, given $X_{i}-X_{j}$, is prescribed by
\[
r_{i,mn.j}=-\frac{P_{12}}{\sqrt{P_{11}P_{22}}}=\frac{r_{i,mn}-r_{i,jm}r_{i,jn}}{\sqrt{\left( 1-r_{i,jm}^{2}\right) \left( 1-r_{i,jn}^{2}\right) }}.
\]
The notation here \cite{Afj-heu} differs from that used
elsewhere\ \cite{Prk-heu, BS-heu, Fi1-heu}. \ In words, $r_{i,mn.j}$ measures
the linear dependence of $X_{i}-X_{m}$ and $X_{i}-X_{n}$ in which the
influence of $X_{i}-X_{j}$ is removed. \ Define finally a $2\times2$ matrix
\[
R_{i,j}=\left(
\begin{array}
[c]{cc}
1 & r_{i,mn.j}\\
r_{i,mn.j} & 1
\end{array}
\right)
\]
as was to be done.
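As a quick numerical sanity check (not part of the original derivation), the cofactor definition and the closed form for $r_{i,mn.j}$ can be compared directly; the correlation values $a=r_{i,mn}$, $b=r_{i,jm}$, $c=r_{i,jn}$ below are illustrative only:

```python
import numpy as np

# Build the 3x3 matrix P from illustrative correlations and compare
# -P_12/sqrt(P_11 P_22) (cofactor form) against the closed form
# (a - b c)/sqrt((1 - b^2)(1 - c^2)).
a, b, c = 0.4, 0.3, 0.2
P = np.array([[1.0, a, b],
              [a, 1.0, c],
              [b, c, 1.0]])

def cofactor(M, i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

lhs = -cofactor(P, 0, 1) / np.sqrt(cofactor(P, 0, 0) * cofactor(P, 1, 1))
rhs = (a - b * c) / np.sqrt((1 - b**2) * (1 - c**2))
print(np.isclose(lhs, rhs))  # True
```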
Set now $\ell=5$. \ Fix $1\leq i<j\leq5$. \ There exists a unique ordered
triple $m<n<o$ such that $\{i,j,m,n,o\}=\{1,2,3,4,5\}$. \ The preceding
discussion extends naturally, supplementing the case $(m,n)$ by additional
possible cases $(m,o)$ and $(n,o)$. \ Define finally a $3\times3$ matrix
\[
R_{i,j}=\left(
\begin{array}
[c]{ccc}
1 & r_{i,mn.j} & r_{i,mo.j}\\
r_{i,mn.j} & 1 & r_{i,no.j}\\
r_{i,mo.j} & r_{i,no.j} & 1
\end{array}
\right) .
\]
We could go on for larger $\ell$, but this is all that is needed for our purposes.
Again, set $\ell=5$. \ Fix $1\leq i<j<k\leq5$. \ There exists a unique ordered
pair $m<n$ such that $\{i,j,k,m,n\}=\{1,2,3,4,5\}$. \ Define a $4\times4$
matrix
\[
Q=\left(
\begin{array}
[c]{cccc}
1 & r_{i,mn} & r_{i,jm} & r_{i,km}\\
r_{i,mn} & 1 & r_{i,jn} & r_{i,kn}\\
r_{i,jm} & r_{i,jn} & 1 & r_{i,jk}\\
r_{i,km} & r_{i,kn} & r_{i,jk} & 1
\end{array}
\right)
\]
\]
which captures all correlations among $X_{i}-X_{m}$, $X_{i}-X_{n}$,
$X_{i}-X_{j}$, $X_{i}-X_{k}$. \ Let $Q_{ab}$ be the cofactor of the element
$q_{ab}$ in the expansion of the determinant of $Q$. \ The partial correlation
$r_{i,mn.jk}$ between $X_{i}-X_{m}$ and $X_{i}-X_{n}$, given $X_{i}-X_{j}$ and
$X_{i}-X_{k}$, is prescribed by
\begin{align*}
r_{i,mn.jk} & =-\frac{Q_{12}}{\sqrt{Q_{11}Q_{22}}}\\
& =\frac{r_{i,mn}-r_{i,jm}r_{i,jn}-r_{i,km}r_{i,kn}+r_{i,km}r_{i,jn}r_{i,jk}+r_{i,jm}r_{i,kn}r_{i,jk}-r_{i,mn}r_{i,jk}^{2}}{\sqrt{\left( 1-r_{i,jm}^{2}-r_{i,km}^{2}+2r_{i,jm}r_{i,km}r_{i,jk}-r_{i,jk}^{2}\right) \left( 1-r_{i,jn}^{2}-r_{i,kn}^{2}+2r_{i,jn}r_{i,kn}r_{i,jk}-r_{i,jk}^{2}\right) }}.
\end{align*}
In words, $r_{i,mn.jk}$ measures the linear dependence of $X_{i}-X_{m}$ and
$X_{i}-X_{n}$ in which the influence of $X_{i}-X_{j}$ and $X_{i}-X_{k}$ is
removed. \ Define finally a $2\times2$ matrix
\[
R_{i,jk}=\left(
\begin{array}
[c]{cc}
1 & r_{i,mn.jk}\\
r_{i,mn.jk} & 1
\end{array}
\right)
\]
as was to be done.
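The analogous check for $r_{i,mn.jk}$, with $a=r_{i,mn}$, $b=r_{i,jm}$, $c=r_{i,jn}$, $d=r_{i,km}$, $e=r_{i,kn}$, $f=r_{i,jk}$ taking illustrative values, compares the cofactor form against the long closed-form expression displayed above:

```python
import numpy as np

# 4x4 matrix Q built from illustrative correlations; compare
# -Q_12/sqrt(Q_11 Q_22) against the expanded closed form.
a, b, c, d, e, f = 0.40, 0.30, 0.20, 0.25, 0.15, 0.10
Q = np.array([[1.0, a, b, d],
              [a, 1.0, c, e],
              [b, c, 1.0, f],
              [d, e, f, 1.0]])

def cofactor(M, i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

lhs = -cofactor(Q, 0, 1) / np.sqrt(cofactor(Q, 0, 0) * cofactor(Q, 1, 1))
num = a - b*c - d*e + d*c*f + b*e*f - a*f**2
den = np.sqrt((1 - c**2 - e**2 + 2*c*e*f - f**2)
              * (1 - b**2 - d**2 + 2*b*d*f - f**2))
print(np.isclose(lhs, num / den))  # True
```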
Set now $\ell=6$. \ Fix $1\leq i<j<k\leq6$. \ There exists a unique ordered
triple $m<n<o$ such that $\{i,j,k,m,n,o\}=\{1,2,3,4,5,6\}$. \ The preceding
discussion extends naturally, supplementing the case $(m,n)$ by additional
possible cases $(m,o)$ and $(n,o)$. \ Define finally a $3\times3$ matrix
\[
R_{i,jk}=\left(
\begin{array}
[c]{ccc}
1 & r_{i,mn.jk} & r_{i,mo.jk}\\
r_{i,mn.jk} & 1 & r_{i,no.jk}\\
r_{i,mo.jk} & r_{i,no.jk} & 1
\end{array}
\right) .
\]
We could go on for larger $\ell$, but this is all that is needed for our purposes.
\section{Small Segments}
For convenience, define
\begin{align*}
h(x,y,z) & =\frac{1-x}{4\pi}\cdot\frac{1+x-y-z}{\sqrt
{4(1-x)(1-y)-(1-x-y+z)^{2}}}\\
& =\frac{1-x}{4\pi}\cdot\frac{1+x-y-z}{\sqrt{4(1-x)(1-z)-(1-x-z+y)^{2}}}\\
& =\frac{1-x}{4\pi}\cdot\frac{1+x-y-z}{\sqrt
{(1-x)(3+x)-y(2+y)-z(2+z)+2(xy+xz+yz)}}.
\end{align*}
The latter expression, while more cumbersome, exhibits symmetry in $y$, $z$. \
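The claimed equality of the three expressions (and hence the symmetry in $y$, $z$) is easy to confirm numerically; the inputs below are illustrative correlations only:

```python
import math

# Spot check that the three displayed expressions for h(x, y, z) agree.
def h1(x, y, z):
    return (1-x)/(4*math.pi) * (1+x-y-z) / math.sqrt(
        4*(1-x)*(1-y) - (1-x-y+z)**2)

def h2(x, y, z):
    return (1-x)/(4*math.pi) * (1+x-y-z) / math.sqrt(
        4*(1-x)*(1-z) - (1-x-z+y)**2)

def h3(x, y, z):
    return (1-x)/(4*math.pi) * (1+x-y-z) / math.sqrt(
        (1-x)*(3+x) - y*(2+y) - z*(2+z) + 2*(x*y + x*z + y*z))

x, y, z = 0.3, 0.2, 0.1
print(math.isclose(h1(x, y, z), h2(x, y, z)),
      math.isclose(h1(x, y, z), h3(x, y, z)))  # True True
```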
If $\ell=2$, then $\Phi_{\ell-2}(R_{i,j})=1$ and $\Phi_{\ell-3}(R_{i,jk})=0$.
\ We have
\[
\begin{array}
[c]{ccc}
\mathbb{E}\left( M\right) =2\sqrt{\dfrac{1-\rho_{12}}{4\pi}}=\sqrt{\dfrac{1-\rho_{12}}{\pi}}, & & \mathbb{E}\left( M^{2}\right) =1.
\end{array}
\]
If $\ell=3$, then $\Phi_{\ell-2}(R_{i,j})=\frac{1}{2}$ and $\Phi_{\ell-3}(R_{i,jk})=1$. \ We have
\[
\mathbb{E}\left( M\right) =\frac{1}{\sqrt{4\pi}}\left( \sqrt{1-\rho_{12}}+\sqrt{1-\rho_{13}}+\sqrt{1-\rho_{23}}\right) ,
\]
\[
\mathbb{E}\left( M^{2}\right) =1+2h(\rho_{12},\rho_{13},\rho_{23})+2h(\rho_{13},\rho_{12},\rho_{23})+2h(\rho_{23},\rho_{12},\rho_{13}).
\]
In formula (3.6)\ for $\mathbb{E}\left( M\right) $ in \cite{Afj-heu},
$\sqrt{(\pi)}$ should be replaced by $\sqrt{(2\pi)}$. \
If $\ell=4$, then
\[
\begin{array}
[c]{ccc}
\Phi_{\ell-2}(R_{i,j})=\frac{1}{4}+\frac{1}{2\pi}\arcsin(r_{i,mn.j}), & & \Phi_{\ell-3}(R_{i,jk})=\frac{1}{2}.
\end{array}
\]
In general, $r_{i,mn.j}\neq r_{j,mn.i}$ and thus symmetry fails for
$\mathbb{E}\left( M\right) $. \ We have
\begin{align*}
\mathbb{E}\left( M\right) & =\tfrac{1}{\sqrt{4\pi}}\left[ \sqrt{1-\rho_{12}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{1,34.2})\right) +\sqrt{1-\rho_{12}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{2,34.1})\right) \right. \\
& +\sqrt{1-\rho_{13}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{1,24.3})\right) +\sqrt{1-\rho_{13}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{3,24.1})\right) \\
& +\sqrt{1-\rho_{14}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{1,23.4})\right) +\sqrt{1-\rho_{14}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{4,23.1})\right) \\
& +\sqrt{1-\rho_{23}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{2,14.3})\right) +\sqrt{1-\rho_{23}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{3,14.2})\right) \\
& +\sqrt{1-\rho_{24}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{2,13.4})\right) +\sqrt{1-\rho_{24}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{4,13.2})\right) \\
& \left. +\sqrt{1-\rho_{34}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{3,12.4})\right) +\sqrt{1-\rho_{34}}\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{4,12.3})\right) \right] ,
\end{align*}
\begin{align*}
\mathbb{E}\left( M^{2}\right) & =1+h(\rho_{12},\rho_{13},\rho_{23})+h(\rho_{12},\rho_{14},\rho_{24})+h(\rho_{13},\rho_{12},\rho_{23})+h(\rho_{13},\rho_{14},\rho_{34})\\
& +h(\rho_{14},\rho_{12},\rho_{24})+h(\rho_{14},\rho_{13},\rho_{34})+h(\rho_{23},\rho_{12},\rho_{13})+h(\rho_{23},\rho_{24},\rho_{34})\\
& +h(\rho_{24},\rho_{12},\rho_{14})+h(\rho_{24},\rho_{23},\rho_{34})+h(\rho_{34},\rho_{13},\rho_{14})+h(\rho_{34},\rho_{23},\rho_{24}).
\end{align*}
In formula (3.7)\ for $\mathbb{E}\left( M\right) $ in \cite{Afj-heu}, a
factor $1/\sqrt{2\pi}$ should be inserted in front of the summation. \
If $\ell=5$, then
\[
\Phi_{\ell-2}(R_{i,j})=\tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{i,mn.j})-\tfrac{1}{4\pi}\arccos(r_{i,mo.j})-\tfrac{1}{4\pi}\arccos(r_{i,no.j}),
\]
\[
\Phi_{\ell-3}(R_{i,jk})=\tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{i,mn.jk}).
\]
Symmetry now fails for both $\mathbb{E}\left( M\right) $ and $\mathbb{E}\left( M^{2}\right) $. \ We have
\begin{align*}
\mathbb{E}\left( M\right) & =\tfrac{1}{\sqrt{4\pi}}\left[ \sqrt{1-\rho_{12}}\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{1,34.2})-\tfrac{1}{4\pi}\arccos(r_{1,35.2})-\tfrac{1}{4\pi}\arccos(r_{1,45.2})\right) \right. \\
& +\sqrt{1-\rho_{12}}\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{2,34.1})-\tfrac{1}{4\pi}\arccos(r_{2,35.1})-\tfrac{1}{4\pi}\arccos(r_{2,45.1})\right) \\
& +\sqrt{1-\rho_{13}}\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{1,24.3})-\tfrac{1}{4\pi}\arccos(r_{1,25.3})-\tfrac{1}{4\pi}\arccos(r_{1,45.3})\right) \\
& \left. +\sqrt{1-\rho_{13}}\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{3,24.1})-\tfrac{1}{4\pi}\arccos(r_{3,25.1})-\tfrac{1}{4\pi}\arccos(r_{3,45.1})\right) +\cdots\right] ,
\end{align*}
\begin{align*}
\mathbb{E}\left( M^{2}\right) & =1+h(\rho_{12},\rho_{13},\rho_{23})\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{1,45.23})\right) +h(\rho_{12},\rho_{23},\rho_{13})\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{2,45.13})\right) \\
& +h(\rho_{12},\rho_{14},\rho_{24})\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{1,35.24})\right) +h(\rho_{12},\rho_{24},\rho_{14})\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{2,35.14})\right) \\
& +h(\rho_{12},\rho_{15},\rho_{25})\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{1,34.25})\right) +h(\rho_{12},\rho_{25},\rho_{15})\cdot\left( \tfrac{1}{4}+\tfrac{1}{2\pi}\arcsin(r_{2,34.15})\right) +\cdots,
\end{align*}
a total of $20$ terms and $61$ terms, respectively.
If $\ell=6$, then $\mathbb{E}\left( M\right) $ contains non-elementary
functions which require numerical integration (beyond our present scope). \ In
contrast,
\begin{align*}
\mathbb{E}\left( M^{2}\right) & =1+h(\rho_{12},\rho_{13},\rho_{23})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{1,45.23})-\tfrac{1}{4\pi}\arccos(r_{1,46.23})-\tfrac{1}{4\pi}\arccos(r_{1,56.23})\right) \\
& +h(\rho_{12},\rho_{23},\rho_{13})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{2,45.13})-\tfrac{1}{4\pi}\arccos(r_{2,46.13})-\tfrac{1}{4\pi}\arccos(r_{2,56.13})\right) \\
& +h(\rho_{12},\rho_{14},\rho_{24})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{1,35.24})-\tfrac{1}{4\pi}\arccos(r_{1,36.24})-\tfrac{1}{4\pi}\arccos(r_{1,56.24})\right) \\
& +h(\rho_{12},\rho_{24},\rho_{14})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{2,35.14})-\tfrac{1}{4\pi}\arccos(r_{2,36.14})-\tfrac{1}{4\pi}\arccos(r_{2,56.14})\right) \\
& +h(\rho_{12},\rho_{15},\rho_{25})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{1,34.25})-\tfrac{1}{4\pi}\arccos(r_{1,36.25})-\tfrac{1}{4\pi}\arccos(r_{1,46.25})\right) \\
& +h(\rho_{12},\rho_{25},\rho_{15})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{2,34.15})-\tfrac{1}{4\pi}\arccos(r_{2,36.15})-\tfrac{1}{4\pi}\arccos(r_{2,46.15})\right) \\
& +h(\rho_{12},\rho_{16},\rho_{26})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{1,34.26})-\tfrac{1}{4\pi}\arccos(r_{1,35.26})-\tfrac{1}{4\pi}\arccos(r_{1,45.26})\right) \\
& +h(\rho_{12},\rho_{26},\rho_{16})\cdot\left( \tfrac{1}{2}-\tfrac{1}{4\pi}\arccos(r_{2,34.16})-\tfrac{1}{4\pi}\arccos(r_{2,35.16})-\tfrac{1}{4\pi}\arccos(r_{2,45.16})\right) +\cdots,
\end{align*}
a total of $121$ terms. \ In formula (3.9)\ for $\mathbb{E}\left(
M^{2}\right) $ in \cite{Afj-heu}, a constant term $1$ should be inserted in
front of the first summation; further, the last summation should be taken over
both $k\neq i$ and $k\neq j$ (not merely $k\neq i$).
\section{Time Series}
Consider a discrete-time stationary first-order autoregressive process
\[
\begin{array}
[c]{ccccc}
X_{t}=\rho\,X_{t-1}+\sqrt{1-\rho^{2}}\cdot\varepsilon_{t}, & & -\infty<t<\infty, & & |\rho|<1
\end{array}
\]
where $\varepsilon_{t}$ is $N(0,1)$ white noise. The $\ell\times\ell$
covariance matrix $R$ has $ij^{\text{th}}$ element
\[
\rho_{ij}=\rho^{\left\vert j-i\right\vert }
\]
which leads to certain simplifications. \ Let us make the reliance of $M$ on
$\ell$ explicit. \ We have
\[
\mathbb{E}\left( M_{2}\right) =\sqrt{\dfrac{1-\rho}{\pi}},
\]
\[
\mathbb{E}\left( M_{3}\right) =\sqrt{\dfrac{1-\rho}{\pi}}+\sqrt{\dfrac{1-\rho^{2}}{4\pi}},
\]
\begin{align*}
& \mathbb{E}\left( M_{4}\right) =\tfrac{1}{4\pi}\sqrt{\tfrac{1-\rho}{\pi}}\left[ \pi+2\arcsin\left( 1-\tfrac{2}{3-\rho}\right) \right] +\tfrac{1}{2\pi}\sqrt{\tfrac{1-\rho}{\pi}}\left[ \pi+2\arcsin\left( \tfrac{1+2\rho-\rho^{2}}{\sqrt{(3-\rho)\left( 3+\rho+\rho^{2}-\rho^{3}\right) }}\right) \right] \\
& +\tfrac{1}{2\pi}\sqrt{\tfrac{1-\rho^{2}}{\pi}}\left[ \pi+2\arcsin\left( \tfrac{(1-\rho)^{2}}{\sqrt{(3-\rho)\left( 3+\rho+\rho^{2}-\rho^{3}\right) }}\right) \right] +\tfrac{1}{4\pi}\sqrt{\tfrac{1-\rho^{3}}{\pi}}\left[ \pi+2\arcsin\left( 1-\tfrac{2}{3+\rho+\rho^{2}-\rho^{3}}\right) \right]
\end{align*}
but $\mathbb{E}\left( M_{5}\right) $ is too lengthy to record here. \ In the
limit as $\rho\rightarrow0$, we obtain
\[
\frac{1}{\sqrt{\pi}},\;\;\frac{3}{2\sqrt{\pi}},\;\;\frac{3}{\sqrt{\pi}}\left[ 1-\frac{1}{\pi}\operatorname*{arcsec}(3)\right] ,\;\;\frac{5}{\sqrt{\pi}}\left[ 1-\frac{3}{2\pi}\operatorname*{arcsec}(3)\right]
\]
for $\ell=2,3,4,5$ and these are consistent with well-known values
\cite{Fi0-heu} corresponding to independent $X_{t}$. \ Figure 1 displays
$\mathbb{E}\left( M_{\ell}\right) $ as functions of $\rho$. \ The left-hand
endpoint is at $(-1,\sqrt{2/\pi})$ and $\sqrt{2/\pi}$ is unsurprisingly the
mean of a standard half-normal distribution. \ The right-hand endpoint is at
$(1,0)$. \ Associated with $\ell=3,4,5$ are maximum points with $\rho$ equal
to
\[
\frac{1-\sqrt{5}}{2}=-0.6180339887498948482045868...,
\]
\[
\begin{array}
[c]{ccc}
-0.4973597615161907364022217..., & & -0.4336476843162656141275672...
\end{array}
\]
respectively. \ Closed-form expressions for the latter two quantities remain open.
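As a sanity check (not part of the original text), the $\rho\rightarrow0$ limits listed above can be compared against Monte Carlo estimates of the expected maximum of $\ell$ independent standard normals; sample size and seed below are arbitrary:

```python
import numpy as np

# Monte Carlo spot check of the rho -> 0 limits of E(M_ell):
# the expected maxima of ell independent standard normals.
rng = np.random.default_rng(1)
samples = rng.standard_normal((10**6, 5))

arcsec3 = np.arccos(1/3)          # arcsec(3) = arccos(1/3)
limits = {
    2: 1/np.sqrt(np.pi),
    3: 3/(2*np.sqrt(np.pi)),
    4: 3/np.sqrt(np.pi) * (1 - arcsec3/np.pi),
    5: 5/np.sqrt(np.pi) * (1 - 3*arcsec3/(2*np.pi)),
}
for ell, value in limits.items():
    mc = samples[:, :ell].max(axis=1).mean()
    assert abs(mc - value) < 0.01, (ell, mc, value)
print("ok")
```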
We also have
\[
\mathbb{E}\left( M_{3}^{2}\right) =1+\frac{(1-\rho)\sqrt{(3-\rho)(1+\rho)}}{2\pi},
\]
\[
\mathbb{E}\left( M_{4}^{2}\right) =1+\tfrac{3+\sqrt{(3-\rho)\left( 3+\rho+\rho^{2}-\rho^{3}\right) }+\rho\left[ 1-2\rho-2\rho^{2}-\rho^{3}+\rho^{4}-\rho\sqrt{(3-\rho)\left( 3+\rho+\rho^{2}-\rho^{3}\right) }\right] }{2\pi\sqrt{(1+\rho)\left( 3+\rho+\rho^{2}-\rho^{3}\right) }}
\]
but $\mathbb{E}\left( M_{5}^{2}\right) $ and $\mathbb{E}\left( M_{6}^{2}\right) $ are too lengthy to record here. \ In the limit as
$\rho\rightarrow0$, we obtain
\[
1+\frac{\sqrt{3}}{2\pi},\;\;1+\frac{\sqrt{3}}{\pi},\;\;1+\frac{5\sqrt{3}}{2\pi}\left[ 1-\frac{1}{\pi}\operatorname*{arcsec}(4)\right] ,\;\;1+\frac{5\sqrt{3}}{\pi}\left[ 1-\frac{3}{2\pi}\operatorname*{arcsec}(4)\right]
\]
for $\ell=3,4,5,6$ and these again are consistent with well-known values
\cite{Fi0-heu}. \ Associated with $\ell=3,4,5,6$ are maximum points with
$\rho$ equal to
\[
1-\sqrt{2}=-0.4142135623730950488016887...,
\]
\[
\begin{array}
[c]{ccc}
-0.3879232988398265768779440..., & & -0.3599267104829689555367968...
\end{array}
\]
\[
-0.3406053067160525788737944...
\]
respectively. \ Closed-form expressions for the latter three quantities remain
open. \ Figure 2 displays $\mathbb{V}\left( M_{\ell}\right) =\mathbb{E}\left( M_{\ell}^{2}\right) -\mathbb{E}\left( M_{\ell}\right) ^{2}$ as
functions of $\rho$. \ The left-hand endpoint is at $(-1,1-2/\pi)$; the
right-hand endpoint is at $(1,1)$. \ Unlike $\mathbb{E}\left( M_{\ell
}\right) $ or $\mathbb{E}\left( M_{\ell}^{2}\right) $, the variance is
strictly increasing throughout the interval. \ An intuitive reason for such
behavior would be good to establish someday.
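The closed forms for $\mathbb{E}(M_{3}^{2})$ and $\mathbb{E}(M_{4}^{2})$ above can similarly be spot-checked by simulation under the AR(1) correlation structure $\rho_{ij}=\rho^{|j-i|}$; the value $\rho=0.5$, sample size and seed are illustrative only:

```python
import numpy as np

# Monte Carlo spot check of the closed forms for E(M_3^2) and E(M_4^2)
# under AR(1) correlations, at an illustrative rho.
rng = np.random.default_rng(2)
rho, N = 0.5, 10**6

def EM3sq(r):
    return 1 + (1-r)*np.sqrt((3-r)*(1+r)) / (2*np.pi)

def EM4sq(r):
    s = np.sqrt((3-r)*(3+r+r**2-r**3))
    num = 3 + s + r*(1 - 2*r - 2*r**2 - r**3 + r**4 - r*s)
    return 1 + num / (2*np.pi*np.sqrt((1+r)*(3+r+r**2-r**3)))

R = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
X = rng.multivariate_normal(np.zeros(4), R, size=N)
mc3 = (X[:, :3].max(axis=1)**2).mean()
mc4 = (X.max(axis=1)**2).mean()
assert abs(mc3 - EM3sq(rho)) < 0.02
assert abs(mc4 - EM4sq(rho)) < 0.02
print("ok")
```

Note the consistency checks $\mathbb{E}(M_{4}^{2})\rightarrow1+\sqrt{3}/\pi$ as $\rho\rightarrow0$ and $\mathbb{E}(M_{4}^{2})\rightarrow1$ as $\rho\rightarrow1$.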
\section{Proof of Revision}
Our general formula for $\mathbb{E}\left( M^{2}\right) $ looks somewhat
different from that presented by Afonja \cite{Afj-heu}. \ To demonstrate the
equivalence of the two formulas, it suffices to prove that if $j\neq i$,
$k\neq i$ and $k\neq j$, then
\[
\frac{1}{2\pi}r_{i,ji}\cdot\frac{r_{i,ki}-r_{i,jk}r_{i,ji}}{\sqrt
{1-r_{i,jk}^{2}}}=h\left( \rho_{ij},\rho_{ik},\rho_{jk}\right) .
\]
The left-hand side is equal to
\begin{align*}
& \dfrac{1}{2\pi}\sqrt{\dfrac{1-\rho_{ij}}{2}}\cdot\frac{\sqrt{\dfrac{1-\rho_{ik}}{2}}-\dfrac{1-\rho_{ij}-\rho_{ik}+\rho_{jk}}{\sqrt{4(1-\rho_{ij})(1-\rho_{ik})}}\sqrt{\dfrac{1-\rho_{ij}}{2}}}{\sqrt{1-\dfrac{(1-\rho_{ij}-\rho_{ik}+\rho_{jk})^{2}}{4(1-\rho_{ij})(1-\rho_{ik})}}}\\
& =\dfrac{1}{4\pi}\sqrt{1-\rho_{ij}}\cdot\frac{\sqrt{1-\rho_{ik}}-\dfrac{1-\rho_{ij}-\rho_{ik}+\rho_{jk}}{\sqrt{4(1-\rho_{ik})}}}{\sqrt{1-\dfrac{(1-\rho_{ij}-\rho_{ik}+\rho_{jk})^{2}}{4(1-\rho_{ij})(1-\rho_{ik})}}}\\
& =\dfrac{1}{4\pi}\sqrt{1-\rho_{ij}}\cdot\frac{2(1-\rho_{ik})-(1-\rho_{ij}-\rho_{ik}+\rho_{jk})}{\sqrt{4(1-\rho_{ik})-\dfrac{(1-\rho_{ij}-\rho_{ik}+\rho_{jk})^{2}}{1-\rho_{ij}}}}\\
& =\dfrac{1}{4\pi}(1-\rho_{ij})\cdot\frac{1+\rho_{ij}-\rho_{ik}-\rho_{jk}}{\sqrt{4(1-\rho_{ij})(1-\rho_{ik})-(1-\rho_{ij}-\rho_{ik}+\rho_{jk})^{2}}}
\end{align*}
which is the right-hand side, as was to be shown.
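As a numerical cross-check of this identity (with illustrative values of $\rho_{ij}$, $\rho_{ik}$, $\rho_{jk}$, and with $r_{i,ji}$, $r_{i,ki}$, $r_{i,jk}$ substituted as in the first step of the display above):

```python
import math

# Verify (1/2pi) r_{i,ji} (r_{i,ki} - r_{i,jk} r_{i,ji}) / sqrt(1 - r_{i,jk}^2)
# equals h(rho_ij, rho_ik, rho_jk), with the substitutions used in the proof.
def h(x, y, z):
    return (1-x)/(4*math.pi) * (1+x-y-z) / math.sqrt(
        4*(1-x)*(1-y) - (1-x-y+z)**2)

rij, rik, rjk = 0.3, 0.2, 0.1      # illustrative correlations
r_ji = math.sqrt((1-rij)/2)
r_ki = math.sqrt((1-rik)/2)
r_jk = (1-rij-rik+rjk) / math.sqrt(4*(1-rij)*(1-rik))
lhs = r_ji * (r_ki - r_jk*r_ji) / (2*math.pi*math.sqrt(1-r_jk**2))
print(math.isclose(lhs, h(rij, rik, rjk)))  # True
```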
\section{Proof from First Principles}
An exercise in \cite{DN-heu} suggests that formulas for $\mathbb{E}\left(
M_{2}\right) $ and $\mathbb{V}\left( M_{2}\right) $ should be derived from
\[
\max\left\{ X_{1},X_{2}\right\} =\frac{1}{2}(X_{1}+X_{2})+\frac{1}{2}\left\vert X_{1}-X_{2}\right\vert .
\]
It is instructive to similarly prove our formula for $\mathbb{E}\left( M_{3}\right) $, using instead
\begin{align*}
\max\left\{ X_{1},X_{2},X_{3}\right\} & =\max\left\{ \max\left\{ X_{1},X_{2}\right\} ,\max\left\{ X_{2},X_{3}\right\} \right\} \\
& =\frac{1}{2}\left( \max\left\{ X_{1},X_{2}\right\} +\max\left\{ X_{2},X_{3}\right\} \right) +\frac{1}{2}\left\vert \max\left\{ X_{1},X_{2}\right\} -\max\left\{ X_{2},X_{3}\right\} \right\vert \\
& =\frac{1}{4}\left( (X_{1}+X_{2})+(X_{2}+X_{3})+\left\vert X_{1}-X_{2}\right\vert +\left\vert X_{2}-X_{3}\right\vert \right) \\
& +\frac{1}{4}\left\vert (X_{1}+X_{2})-(X_{2}+X_{3})+\left\vert X_{1}-X_{2}\right\vert -\left\vert X_{2}-X_{3}\right\vert \right\vert .
\end{align*}
Define $Y=X_{1}-X_{2}$ and $Z=X_{3}-X_{2}$. \ Clearly $(Y,Z)$ is bivariate
normally distributed with vector mean zero and covariance matrix
\[
\left(
\begin{array}
[c]{cc}
2-2\rho_{12} & \rho_{13}-\rho_{12}-\rho_{23}+1\\
\rho_{13}-\rho_{12}-\rho_{23}+1 & 2-2\rho_{23}
\end{array}
\right) =\left(
\begin{array}
[c]{cc}
\sigma_{y}^{2} & \xi\,\sigma_{y}\sigma_{z}\\
\xi\,\sigma_{y}\sigma_{z} & \sigma_{z}^{2}
\end{array}
\right) .
\]
Also
\[
\frac{1}{4}\mathbb{\,E\,}\left\vert Y\right\vert =\frac{\sigma_{y}}{4}\sqrt{\frac{2}{\pi}}=\sqrt{\dfrac{1-\rho_{12}}{4\pi}},
\]
\[
\frac{1}{4}\mathbb{\,E\,}\left\vert Z\right\vert =\frac{\sigma_{z}}{4}\sqrt{\frac{2}{\pi}}=\sqrt{\dfrac{1-\rho_{23}}{4\pi}}.
\]
The four integrals (depending on signs of $Y$ and $Z$) underlying
\[
\frac{1}{4}\mathbb{\,E\,}\left\vert (Y+\left\vert Y\right\vert )-(Z+\left\vert Z\right\vert )\right\vert =\sqrt{\dfrac{1-\rho_{13}}{4\pi}}
\]
can all be evaluated (however tediously). \ Because
\[
X_{1}-X_{3}=(X_{1}-X_{2})-(X_{3}-X_{2})=Y-Z
\]
we suspect that a more elegant proof ought to be available. \ Ideas on
bridging this gap would be welcome.
In more detail, letting
\[
f(y,z)=\frac{1}{2\pi\sqrt{1-\xi^{2}}\,\sigma_{y}\sigma_{z}}\exp\left[ -\frac{1}{2\left( 1-\xi^{2}\right) }\left( \frac{y^{2}}{\sigma_{y}^{2}}-\frac{2\xi yz}{\sigma_{y}\sigma_{z}}+\frac{z^{2}}{\sigma_{z}^{2}}\right) \right]
\]
denote the bivariate normal density, we obtain
\[
\int\limits_{0}^{\infty}\int\limits_{0}^{\infty}2\left\vert y-z\right\vert
f(y,z)dy\,dz=\frac{1}{\sqrt{2\pi}}\left[ -(1-\xi)(\sigma_{y}+\sigma
_{z})+2\sqrt{\sigma_{y}^{2}-2\xi\sigma_{y}\sigma_{z}+\sigma_{z}^{2}}\right]
\]
when $Y>0$ and $Z>0$;
\[
\int\limits_{0}^{\infty}\int\limits_{-\infty}^{0}2\,z\,f(y,z)dy\,dz=\frac{(1-\xi)\sigma_{z}}{\sqrt{2\pi}}
\]
when $Y<0$ and $Z>0$;
\[
\int\limits_{-\infty}^{0}\int\limits_{0}^{\infty}2\,y\,f(y,z)dy\,dz=\frac{(1-\xi)\sigma_{y}}{\sqrt{2\pi}}
\]
when $Y>0$ and $Z<0$; and $0$ when $Y<0$ and $Z<0$. \ Adding these
contributions and dividing by $4$, we verify
\begin{align*}
\frac{1}{2\sqrt{2\pi}}\sqrt{\sigma_{y}^{2}-2\xi\sigma_{y}\sigma_{z}+\sigma_{z}^{2}} & =\frac{1}{2\sqrt{\pi}}\sqrt{(1-\rho_{12})-(\rho_{13}-\rho_{12}-\rho_{23}+1)+(1-\rho_{23})}\\
& =\sqrt{\dfrac{1-\rho_{13}}{4\pi}}
\end{align*}
as was desired.
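A Monte Carlo sketch confirming the first-quadrant integral above; the parameters $\sigma_y$, $\sigma_z$, $\xi$, the sample size and the seed are illustrative only:

```python
import numpy as np

# Monte Carlo spot check of E[2|Y-Z| ; Y>0, Z>0] for a centered
# bivariate normal, against the closed form displayed above.
rng = np.random.default_rng(3)
sy, sz, xi = 1.3, 0.8, 0.4
cov = np.array([[sy**2, xi*sy*sz], [xi*sy*sz, sz**2]])
Y, Z = rng.multivariate_normal([0.0, 0.0], cov, size=10**6).T

mc = np.mean(2*np.abs(Y - Z) * (Y > 0) * (Z > 0))
closed = (-(1-xi)*(sy+sz)
          + 2*np.sqrt(sy**2 - 2*xi*sy*sz + sz**2)) / np.sqrt(2*np.pi)
assert abs(mc - closed) < 0.02
print("ok")
```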
Calculating the variance of $M_{3}$ from first principles\ has not been
attempted. \ The variance of the median ($50\%$-tile) is also of interest,
appearing explicitly in \cite{Fi5-heu} for $\ell=3$ but under the assumption
of independence. \
An alternative probability density-based derivation of $\mathbb{E}\left(
M_{2}\right) $ and $\mathbb{V}\left( M_{2}\right) $ can be found in
\cite{Htr-heu, NK-heu}. \ See also \cite{Fi2-heu} for the expected range of a
normal sample, \cite{Fi3-heu} for the expected \textit{absolute} maximum, and
\cite{Fi4-heu} for other aspects of AR(1).
\section{Large Segments}
Assuming $\rho_{ij}$ depends only on $\left\vert j-i\right\vert =d$, Berman
\cite{Be-heu, Pi-heu, Ja-heu} proved that if either
\[
\begin{array}
[c]{ccccc}
\lim\limits_{d\rightarrow\infty}\rho(d)\cdot\ln(d)=0 & & \text{or} & & {\displaystyle\sum\limits_{d=1}^{\infty}}\rho(d)^{2}<\infty,
\end{array}
\]
then
\[
\lim\limits_{\ell\rightarrow\infty}P\left\{ \sqrt{2\ln(\ell)}(M_{\ell}-a_{\ell})\leq x\right\} =\exp\left( -e^{-x}\right)
\]
where
\[
a_{\ell}=\sqrt{2\ln(\ell)}-\frac{1}{2}\frac{\ln(\ln(\ell))+\ln(4\pi)}{\sqrt{2\ln(\ell)}}.
\]
Further, the two hypotheses on $\rho(d)$ cannot be significantly weakened.
\ This theorem clearly applies for a first-order autoregressive process,
although we note that $a_{\ell}$ does not incorporate lag-one correlation
$\rho$ at all. \ A more precise asymptotic result might do so. \
Other relevant works in the literature include \cite{R1-heu, R2-heu, WM-heu,
AG-heu, W1-heu, W2-heu, W3-heu, CBS-heu, Na-heu}. \ In particular, Figure 2 of
\cite{AG-heu} depicts the density of AR(1) maximum for $\ell=5$ and
$\rho=-9/10$, $-8/10$, \ldots, $8/10$, $9/10$. \
\section{Acknowledgements}
Raymond Kan \cite{KR-heu} symbolically evaluated the integrals in Section 5,
at my request, for the special case $\sigma_{y}=\sigma_{z}=1$ using
Mathematica. \ Enrique del Castillo \cite{CBS-heu} identified several
typographical errors in \cite{WM-heu} and provided R\ code for numerically
approximating the first two moments\ of $M_{\ell}$. \ I am grateful to both
individuals for their kindness!
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\if@compatibility\else
\RequirePackage{amsmath}
\fi
\def\makeatother\endinput{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\makeatother\endinput
\else
\@ifpackageloaded{amsmath}%
{\if@compatibility\message{amsmath already loaded}\fi\aftergroup\makeatother\endinput}
{}
\@ifpackageloaded{amstex}%
{\if@compatibility\message{amstex already loaded}\fi\aftergroup\makeatother\endinput}
{}
\@ifpackageloaded{amsgen}%
{\if@compatibility\message{amsgen already loaded}\fi\aftergroup\makeatother\endinput}
{}
\fi
\egroup
\typeout{TCILATEX defining AMS-like constructs in LaTeX 2.09 COMPATIBILITY MODE}
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi{\RIfM@\expandafter\RIfM@\expandafter\text@\else\expandafter\mbox\fi@\else\expandafter\mbox\fi}
\let\nfss@text\RIfM@\expandafter\text@\else\expandafter\mbox\fi
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\ex@.2326ex
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}%
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}%
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}%
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}%
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}}
\def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}}
\def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@}
\def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@}
\def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}}
\def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}}
\def\mathpalette\varlimsup@{}@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{%
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\@ifnextchar*{\@tagstar}{\@tag}@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false
\def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}%
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}%
\global\def\@currentlabel{#1}}
\@ifundefined{tag}{
\def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}}
}{}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\makeatother
\endinput
\section{Introduction}
In reinforcement learning (RL) \cite{sutton1998reinforcement} an agent is tasked with learning a policy that maximizes expected reward based only on its interactions with the environment.
In general, there is no guarantee that any such procedure will lead to an optimal policy; while convergence proofs exist, they only apply to a tiny and rather uninteresting class of environments. Reinforcement learning still performs well for a wide range of scenarios not covered by those convergence proofs. However, while recent successes in game-playing with deep reinforcement learning~\cite{justesen2017deep} have led to a high degree of confidence in the deep RL approach, there are still scenarios or games where deep RL fails.
Some oft-mentioned reasons why RL algorithms fail are partial observability and long time spans between actions and rewards.
But are there other causes?
In this paper, we address these questions by looking at games that are deliberately designed to be deceptive. We define deceptive games as those whose reward structure is designed to lead an agent away from an optimal policy; for example, a game in which learning to take the action that produces early rewards curtails further exploration. Deception here does not mean outright lying or presenting false information. More generally, deception is the exploitation of cognitive biases. Better and faster AIs have to make assumptions to improve their performance or to generalize over their observations (as per the no free lunch theorem, an algorithm needs to be tailored to a class of problems in order to improve performance on those problems~\cite{wolpert1997no}). These assumptions in turn make the algorithms susceptible to deceptions that subvert them. For example, evolutionary optimization approaches assume locality, i.e., that solutions close together in genome space have similar fitness; but if very bad solutions surround a very good one, an evolutionary algorithm is less likely to find it than random search.
While we are specifically looking at digital games here, the ideas we discuss are related to the question of optimization and decision making in a broader context. Many real-world problems involve some form of deception; for example, while eating sugar brings momentary satisfaction, a long-term policy of eating as much sugar as possible is not optimal in terms of health outcomes.
In a recent paper, a handful of \emph{deceptive games} were proposed, and the performance of a number of planning algorithms was tested on them~\cite{anderson2018deceptive}. It was shown that many otherwise competent game-playing agents succumbed to these deceptions, and that different types of deceptions affected different kinds of planning algorithms; for example, agents that build up a model of the effects of in-game objects are vulnerable to deceptions based on changing those effects. In this paper, we want to see how well deep reinforcement learning performs on these games, in order to gain a better understanding of the vulnerabilities of deep reinforcement learning.
\section{Background}
Reinforcement learning algorithms learn through interacting with an environment and receiving rewards~\cite{sutton1998reinforcement}. Different types of algorithms fit this bill. A core distinction is between ontogenetic algorithms, which learn within episodes from the rewards they encounter, and phylogenetic algorithms, which learn between episodes based on the aggregate reward at the end of each episode~\cite{togelius2009ontogenetic}.
For some time, reinforcement learning had few clear successes. In the last five years, however, the combination of ontogenetic RL algorithms with deep neural networks has seen significant successes, in particular in playing video games~\cite{justesen2017deep}, ranging from simple 2D arcade games~\cite{mnih2015human} to more advanced games like Dota 2 and Starcraft \cite{OpenAI_dota,alphastarblog}. This combination, generally referred to as deep reinforcement learning, is the focus of much research.
The deceptive games presented in this paper were developed for the GVGAI (General Video Game Artificial Intelligence~\cite{perez2016general}) framework. The GVGAI framework itself is based on VGDL (Video Game Description Language~\cite{ebner2013towards,Schaul2013}), a language developed to express a range of arcade games, such as Sokoban and Space Invaders. VGDL was created to encourage research into more general video game playing~\cite{levine2013general} by providing a language and an interface to a range of arcade games.
Currently the GVGAI corpus has over 150 games. The deceptive games discussed in this paper are fully compatible with the framework.
\section{Methods}
To empirically test the effectiveness of the deception in each game, we train a reinforcement learning agent and run six planning algorithms on every game. The benefit of working in GVGAI is that we can evaluate the same game implementations both with algorithms that require a forward model and with learning agents. GVGAI has a Java interface for planning agents as well as an OpenAI Gym interface for learning agents \cite{perez2016general,torrado2018deep,brockman2016openai}.
All algorithms were evaluated on each game 150 times. The agents' scores are evaluated along with play-through videos. The qualitative analysis of the videos provides key insights into the causes behind certain scores and into what an agent actually learns. The quantitative and qualitative results are then combined for the final analysis.
\subsection{Reinforcement Learning}
To test if these games are capable of deceiving an agent trained via reinforcement learning, we use Advantage Actor-Critic (A2C) to learn to play the games \cite{mnih2016asynchronous}. A2C is a good benchmark algorithm and has been shown to be capable of playing GVGAI games with some success \cite{torrado2018deep,justesen2018procedural}. A2C is a model-free, extrinsically driven algorithm, which allows us to examine the effects of different reward patterns. A2C is also relevant due to the popularity of model-free agents.
Due to the arcade nature of GVGAI games, we train on pixels with the same setup developed for the Atari Learning Environment framework \cite{bellemare13arcade}. The Atari configuration has been shown to work well for GVGAI and provides a consistent baseline with which to compare all the games \cite{torrado2018deep}. Instead of tuning the algorithms for the games, we designed the games for the algorithms. We use the OpenAI Baselines implementation of A2C \cite{baselines}. The neural network architecture is the same as the original designed by Mnih et al. \cite{mnih2016asynchronous}. The hyper-parameters are the defaults from the original paper as implemented by OpenAI: a step size of 5, no frame skipping, a constant learning rate of 0.007, the RMSProp optimizer, and 12 parallel workers.
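The core of the A2C update described above is the n-step return and advantage computation (with a step size of 5). A minimal sketch of that calculation follows; this is an illustration only, not the Baselines implementation, and the reward and value numbers are made up:

```python
def nstep_returns(rewards, values, bootstrap, gamma=0.99):
    """Discounted n-step returns and advantages for one A2C rollout.

    rewards:   rewards r_t observed over the n steps of the rollout
    values:    critic estimates V(s_t) for those same steps
    bootstrap: critic estimate of the state reached after the rollout
    """
    returns, R = [], bootstrap
    for r in reversed(rewards):
        R = r + gamma * R          # R_t = r_t + gamma * R_{t+1}
        returns.append(R)
    returns.reverse()
    # Advantages drive the actor loss; returns drive the critic loss.
    advantages = [ret - v for ret, v in zip(returns, values)]
    return returns, advantages

# A 5-step rollout (step size 5, as in our setup), with toy numbers:
rets, advs = nstep_returns([0, 0, 1, 0, 0], [0.2, 0.2, 0.5, 0.1, 0.1],
                           bootstrap=0.0, gamma=0.9)
```

Note how the single reward in the middle of the rollout propagates credit backward to the earlier steps through the discounted return.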
For each environment, we trained five different A2C agents, each starting from a different random seed. In initial testing we tried training for twenty million frames, and we found that the agents converged very quickly, normally within two million frames. We therefore standardized the experiments to train for five million frames. One stochastic environment, WaferThinMints, did not converge and might have benefited from more training time.
\subsection{Planning Agents}
For comparison with previous work and better insight into the universality of the deceptive problems posed here, we compare our results to planning algorithms. By planning agents we mean algorithms that use a forward model to search for an ideal game state. In the GVGAI planning track, each algorithm is provided with the current state and a forward model and must return the next action within a small time frame (40 milliseconds). This budget does not give the algorithm enough time to find the best action, which forces traditional planning algorithms to be somewhat greedy; for most of these games, that greed is a trap.
In this paper, we use six different planning algorithms. Three of them (aStar, greedySearch, and sampleMCTS) come directly from the GVGAI framework, while the rest (NovelTS, Return42, and YBCriber) were collected from previous GVGAI competitions. Two of these algorithms, Return42 and YBCriber, are hybrids: they use one approach for deterministic games, such as A* or Iterative Width, and a different one for stochastic games, such as random walk or MCTS. Both use hand-designed heuristics to judge game states. These hybrid algorithms also use online learning to compensate for the small time budget per frame: at each time step they try to infer the game rules from the forward model and then use that knowledge to improve the search.
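The pressure that the 40-millisecond budget puts on planning agents can be illustrated with a generic anytime planner; this sketch is not one of the competition entries, and the forward model here is a stand-in (a discounted count of "right" moves), which also shows how discounting and a tight budget push the search toward short-horizon, greedy choices:

```python
import random
import time

def anytime_plan(actions, simulate, budget_s=0.04, horizon=10):
    """Return the first action of the best rollout found within the budget."""
    best_action, best_score = None, float("-inf")
    # Seed the search so every action is evaluated at least once,
    # no matter how small the budget is.
    for a in actions:
        score = simulate([a])
        if score > best_score:
            best_action, best_score = a, score
    # Spend whatever time remains on random rollouts.
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        plan = [random.choice(actions) for _ in range(horizon)]
        score = simulate(plan)
        if score > best_score:
            best_action, best_score = plan[0], score
    return best_action

# Stand-in forward model: a discounted count of "right" moves, so the
# first action of a plan dominates its score.
def rollout_value(plan):
    return sum(0.5 ** i for i, a in enumerate(plan) if a == "right")

chosen = anytime_plan(["left", "right"], rollout_value)  # -> "right"
```

Shrinking `budget_s` degrades the planner gracefully toward a one-step greedy policy, which is exactly the behavior the deceptive games exploit.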
\section{Deceptive Games}
In our previous work, we created a suite of deceptive games to examine the effects of deceptive mechanics on agents \cite{anderson2018deceptive}. These games were designed to deceive different types of agents in different ways.
From a game design perspective, the category of deceptive games partially overlaps with ``abusive games'', as defined by Wilson and Sicart~\cite{wilson2010now}. In particular, the abuse modalities of ``unfair design'' can be said to apply to some of the games we describe below. Wilson and Sicart note that these modalities are present in many commercial games, even successful and beloved games, especially those from the 8-bit era.
This section describes some of these games in detail and defines optimal play for an agent in each game. We focus on four key categories of deception that these games exploit. We believe these categories represent general problems that learning agents face, and these simple games allow us to shine a spotlight on weaknesses that model-free deep reinforcement learning agents still exhibit. For a more comprehensive list of types of deceptions and deceptive games, see Deceptive Games \cite{anderson2018deceptive}.
The following four categories of deception will be discussed further in the discussion section: Lack of Hierarchical Understanding, Subverted Generalization, Delayed Gratification, and Delayed Reward.
\subsection{DeceptiCoins (DC)}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{decepticoins2.pdf}
\caption{DeceptiCoins Levels}
\label{fig:dc}
\end{figure}
\subsubsection{Game} DeceptiCoins, shown in Figure \ref{fig:dc}, offers an agent two paths which both lead to the win condition. The first path presents immediate points to the agent, in the form of gold coins. The second path contains more gold coins, but they are further away and may not be immediately visible to a short-sighted agent. Once the agent selects a path, it becomes trapped within that path and can only continue to the bottom exit. The levels used here are increasingly larger versions of the same challenge, but remain relatively small overall.
The optimal strategy for DeceptiCoins is to select the path with the highest overall number of points. For the levels shown in Figure \ref{fig:dc}, this is achieved by taking the right-side path, as it leads to the highest total score (i.e., more gold coins can be collected before completing the level).
\subsubsection{Goal}
The game offers a simple form of deception that targets the exploration-versus-exploitation problem that learning algorithms face. The only way for the learning agent to discover the higher reward is to completely forgo the natural reward it discovers early on. By designing differently sized levels, we can see how quickly the exploration space becomes too large.
At the same time, an agent that correctly learns about coins and navigation on the short route could then recognize that going right is superior.
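The pull of the nearer path can be made concrete with a toy calculation. The coin counts and distances below are illustrative, not taken from the actual levels; the point is that under discounting, a small nearby reward can outweigh a larger but more distant one:

```python
def discounted_path_value(coin_steps, gamma):
    """Discounted value of a path, given the time steps at which
    its coins are collected (one point per coin)."""
    return sum(gamma ** t for t in coin_steps)

# Left path: one coin collected after 2 steps.
# Right path: three coins, the first reached after 8 steps.
left = discounted_path_value([2], gamma=0.8)
right = discounted_path_value([8, 9, 10], gamma=0.8)
# With heavy discounting the greedy choice (left) looks better, even
# though the right path yields more total reward (3 coins vs. 1).
```

As the levels grow, the right path's coins move further into the future and its discounted value shrinks further, which matches the agent's failure on level 3.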
\subsubsection{Results}
The first two levels of DeceptiCoins are very small, and the agent learns the optimal strategy fairly quickly. Even so, in level 2 the agent takes several times longer to discover the optimal strategy, as expected from an agent that can only look at the rewards of individual moves. Level 3 proves to be too hard, and the agent converges on the suboptimal strategy. By comparison, a randomly initialized agent is very likely to select the easy path, since it starts next to it, before being forced to move toward the exit.
The training curve for level 3 shows a significant drop in performance at the beginning of training. The video footage suggests that the agent learns the concept of the gold coins and attempts to collect them all, but fails to understand that once it takes the easy coin it becomes trapped in the left path. The agent also moves back and forth between the paths at the beginning of the game, as if trying to decide.
\subsection{WaferThinMints (Mints)}
\begin{figure}[t!]
\centering
\includegraphics[width=.7\linewidth]{waferthinmints.pdf}
\caption{The first level of WaferThinMints}
\label{fig:wfmLevel1}
\end{figure}
\subsubsection{Game} WaferThinMints is inspired by a scene in Monty Python's \emph{The Meaning of Life}. The game presents the agent with easily obtainable points, but collecting too many leads to a loss condition. The idea is to model a situation where a repeated action does not always lead to the same outcome, or has a diminishing return over time. The levels feature mints, each of which awards a point when collected and also fills up a resource gauge on the agent. The level used is shown in Figure \ref{fig:wfmLevel1}. If the avatar's resource gauge (the green bar on the avatar) is full, defined in this case as nine mints, and the agent attempts to collect an additional mint, the agent is killed and a loss condition is reached. Losing the game also costs the agent 20 points.
A waiter (not seen in Figure \ref{fig:wfmLevel1}) moves around the board distributing mints at random. This means it is possible for the agent to get trapped when the waiter places a mint on the agent's square, forcing the agent to eat it. The agent must therefore try to avoid getting trapped.
The optimal strategy is to collect as many mints as possible without exceeding the limit of nine. The player should avoid mints early on and try not to get trapped; near the end of the game, the agent should then eat enough of the remaining mints to reach nine.
\subsubsection{Goal}
WaferThinMints is our primary example of the changing-heuristic deception. The mint goes from providing a positive reward to giving a substantial negative reward, with the only visual indication being a green bar on the avatar that represents how full the character is. The agent must learn that the value of the mints depends on that green bar. Since the bar moves with the avatar, the agent cannot simply memorize a fixed state in which to stop eating mints. The mints are distributed by the waiter and left around the board at random. To play optimally, the agent should also learn that getting full early is dangerous, because it might later get trapped and be forced to eat another mint.
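The score structure just described can be written down directly. Under the rules above (one point per mint, a limit of nine, and a 20-point penalty for over-eating; the scoring details are as we understand them from the game definition), the best achievable mint count is exactly the limit:

```python
LIMIT = 9          # mints the agent can safely eat
LOSS_PENALTY = 20  # points lost when the agent over-eats and dies

def final_score(mints_eaten):
    """Score for a game in which the agent eats `mints_eaten` mints:
    one point per mint, minus the penalty if it exceeds the limit."""
    if mints_eaten > LIMIT:
        return mints_eaten - LOSS_PENALTY
    return mints_eaten

# Eating one mint too many turns a +10 into a -10.
best = max(range(15), key=final_score)  # -> 9
```

The steep discontinuity at the tenth mint is what makes the changing heuristic so punishing for an agent that has generalized "mint = +1".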
\subsubsection{Results}
As can be seen from the training graph, this agent did not have enough time to converge completely. This points to the difficulty of learning in a noisy environment, where even a good strategy can yield a bad reward if the agent is unlucky. The noise is necessary, though: in a simpler environment with a fixed mint layout, the agent would simply memorize a path that results in a perfect score. The agent shows some improvement over time but still plays very poorly.
By observing the agent, we see that it uses location to solve this problem. At the beginning of an episode, the agent rushes to the room where the initial mints are placed, a guaranteed source of rewards. The agent mostly stays in this room, a safe place, unless chased out by the waiter's mint placement. After the initial mints, the agent attempts to avoid mints until it is trapped by them.
It is not clear whether the agent understands its fullness bar or uses the number of mints placed in the game to assess the risk of eating more. The agent seems to have learned that the mints become dangerous, but it appears to rely on odd state and location cues to decide when to eat them. This is related to the behavior we see in the game Invest. The agent is also incapable of reasoning about waiting until the end of the game to eat mints, when it is safer to do so; this is an instance of the delayed gratification deception.
\subsection{Flower (Flow)}
\begin{figure}[t]
\centering
\includegraphics[width=.8\linewidth]{flowers.pdf}
\caption{Flower level 1}
\label{fig:flower}
\end{figure}
\subsubsection{Game} Flower is a game which rewards patient agents: it offers flowers that can be collected immediately for a small number of points but grow more valuable the longer they are left uncollected. As shown in Figure \ref{fig:flower}, a few seeds are available for the agent to collect, each worth zero points. The seeds eventually grow into full flowers, and their point values grow along with them, up to ten points. Once a flower is collected, another begins to grow as soon as the agent leaves the space from which it was collected.
The optimal strategy for \textit{Flower} is to let the flowers grow to their final stage of development before collecting them.
\subsubsection{Goal}
In Flower, the agent is rewarded every time it collects a flower. To get maximum points, the agent should collect each flower the moment it matures to 10 points; this provides a better score than constantly collecting seedlings.
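The trade-off between impatient and patient harvesting can be sketched with a simple growth model. The growth schedule below (no value for the first few steps, then one point per step up to ten) is an assumption for illustration, not the exact VGDL timing; under it, the per-step reward rate peaks when each flower is harvested exactly at maturity:

```python
def reward_rate(harvest_interval, germination=5, max_value=10):
    """Average reward per step when harvesting a flower every
    `harvest_interval` steps, under an assumed growth schedule:
    worthless for `germination` steps, then +1 value per step,
    capped at `max_value`."""
    value = max(0, min(harvest_interval - germination, max_value))
    return value / harvest_interval

# The rate is maximized by harvesting at the moment of maturity
# (interval 15 here); harvesting greedily every 2 steps yields nothing.
best_interval = max(range(1, 61), key=reward_rate)
```

This is exactly the trap the A2C agent falls into: by optimizing each immediate pickup, it drives its harvest interval down and its reward rate with it.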
\subsubsection{Results}
The training graph for this game shows the agent falling for the deception, with a sudden drop-off in performance. As the agent gets better at knowing where the flowers are, the score starts to improve. Then the agent gets too good at collecting the flowers, which no longer have a chance to grow, lowering the score. Watching agent replays confirms this: the agent finds a circuit through all the flowers and then gets better at quickly moving through it. The agent falls perfectly for the deceit and has no way back unless it ignores the immediate rewards.
\subsection{Invest (Inv)}
\begin{figure}[t]
\centering
\includegraphics[width=.8\linewidth]{invest.pdf}
\caption{Invest level 1}
\label{fig:invest}
\end{figure}
\subsubsection{Game} Invest is a game where agents can forgo a portion of their already accumulated reward for the benefit of receiving a larger reward in the future. The level used is shown in figure \ref{fig:invest}. The agent begins with no points but can collect a small number of coins around the level to build up an initial amount. These points can then be ``spent'' on certain investment options. Doing so deducts a certain number of points from the agent's current score, acting as an immediate penalty, but rewards the agent with a greater number of points after some time has passed. The agent has several options for what to invest in, represented by the three human characters (referred to as bankers) in the top half of the level. Each banker has different rules: the Green banker turns 3 points into 5 after 30 ticks, the Red turns 7 into 15 after 60 ticks, and the Blue turns 5 into 10 after 90 ticks. The agent can invest with any of these bankers by simply moving onto them, after which the chosen banker takes some of the agent's points and disappears, returning a specific number of timesteps later with the agent's reward. The agent wins the game once the time limit for the level expires.
The optimal strategy for \textit{Invest} is defined as successfully investing with every banker as often as possible.
\subsubsection{Goal}
Invest is a game where the agent has to intentionally incur some negative reward to obtain a positive reward, and then wait a certain amount of time for that positive reward to arrive. This delay makes it very difficult for the reinforcement learning algorithm to assign credit to the correct action. The initial investment is immediately associated with a negative reward, and the agent then has to figure out that the reward arriving later should also be attributed to that action.
In this case, the reward is deterministic, and the challenge could be increased further by making the delay stochastic.
\subsubsection{Results}
The agent learns a very particular strategy in all five training runs. It first collects all the coins and then invests with the Green Banker. From there it runs to the far right corner and waits, with some agents always choosing the top corner and others the bottom. As soon as the Green Banker returns, the agent runs back over and reinvests, only to return to its corner and wait again. At first this seems like puzzling behavior, as a better strategy would be to sit next to the Green Banker and thereby reinvest faster and collect more points. On closer inspection, it becomes apparent that the time it takes the agent to reach the far corner correlates with the arrival of the delayed reward. It appears that the agent learned that investing with the Green Banker and then touching the far tile resulted in a large positive reward.
The size of the game board allowed the agent to embody the delay through movement and predict the arrival of the reward through how long it takes to walk across the board. It is possible that the agent would have learned to invest with the other bankers if the board was larger so the agent could have found a location associated with the delayed reward.
The training graph tells an interesting story as well. The initial random agent would accidentally invest with all three bankers and get a fairly high score despite not consistently investing with anyone. The agent quickly learns to avoid the negative reward associated with the bankers, and its score drops. It stops investing with the Blue Banker first, then the Red, and finally the Green. After it discovers how to predict the delayed reward of the Green Banker, it invests more regularly until its performance converges.
\section{Comparison with planning algorithms}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|l||cccccc|}
\hline
\textbf{Agent} & \textbf{DC 1} & \textbf{DC 2} & \textbf{DC 3} & \textbf{Inv} & \textbf{Flow} & \textbf{Mints}\\
\hline
\hline
\textbf{aStar} & \cellcolor{blue!33}3.36 & \cellcolor{blue!44}3.54 & \cellcolor{blue!16}1.33 & \cellcolor{blue!4}17.53 & \cellcolor{blue!42}604.99 & \cellcolor{blue!10}1.92\\
\textbf{greedySearch} & \cellcolor{blue!50}5.0 & \cellcolor{blue!37}3.0 & \cellcolor{blue!15}1.23 & \cellcolor{blue!0}1.0 & \cellcolor{blue!0}6.83 & \cellcolor{red!8}-5.15\\
\textbf{sampleMCTS} & \cellcolor{blue!20}2.0 & \cellcolor{blue!25}2.0 & \cellcolor{blue!25}1.99 & \cellcolor{blue!1}3.5 & \cellcolor{blue!27}392.73 & \cellcolor{blue!32}5.73\\
\hline\hline
\textbf{NovelTS} & \cellcolor{blue!21}2.1 & \cellcolor{blue!25}2.0 & \cellcolor{blue!25}2.0 & \cellcolor{blue!1}4.8 & \cellcolor{blue!21}298.51 & \cellcolor{blue!48}8.75\\
\textbf{Return42} & \cellcolor{blue!50}5.0 & \cellcolor{blue!25}2.0 & \cellcolor{blue!25}2.0 & \cellcolor{blue!43}190.12 & \cellcolor{blue!23}329.73 & \cellcolor{red!4}-2.66\\
\textbf{YBCriber} & \cellcolor{blue!50}5.0 & \cellcolor{blue!50}4.0 & \cellcolor{blue!50}4.0 & \cellcolor{blue!2}10.91 & \cellcolor{blue!21}300.73 & \cellcolor{blue!28}5.2\\
\hline\hline
\textbf{A2C} & \cellcolor{blue!50}5.0 & \cellcolor{blue!47}3.79 & \cellcolor{blue!25}2.0 & \cellcolor{blue!16}69.6 & \cellcolor{blue!16}228.86 & \cellcolor{red!10}-6.21\\
\hline
\end{tabular}
}
\caption{\small{Average score for different games using different agents. Darker blue entries have higher positive score values for that game between all the agents, while darker red entries have higher negative score values.}}
\label{tab:averageResults}
\end{table}
In this section we compare the results of some of the planning agents from the previous paper \cite{anderson2018deceptive} with the deep RL results in this paper. Table~\ref{tab:averageResults} shows the average score for all the games using six different planning agents and the trained reinforcement learning agent. Every agent plays each game around 150 times, and the average score is recorded. These are drastically different algorithms from A2C, but they provide context for how different algorithms are affected by our deceptions.
While the planning agents perform slightly better on average, this depends highly on the exact planning algorithm examined. The planning algorithms have an advantage over the reinforcement learning algorithm, as they have a forward model that can predict the result of each action. On the other hand, the small time frame for deciding the next action (40 milliseconds) does not give these algorithms enough time to find the best action.
In an important way, both RL and planning face a similar problem here. In both cases, the algorithms can only query the game environment a limited number of times. This makes it impossible to look at all possible futures and forces the algorithms to prioritize. While most planning agents rely entirely on the given forward model, some, such as Return42, also use online learning. These agents initially play with the forward model but try to learn and generalize the game rules while playing. As the game progresses, they rely more and more on those learned abstractions. In general, this is an efficient and smart strategy, but it makes them vulnerable to deceptions where the game rules change in the middle of the game, as in \textit{Wafer Thin Mints}. Here the agents may be deceived if they do not verify the result using the forward model. This is very similar to the problem A2C encounters, since the network representation tries to generalize over the states of the game.
In summary, while the best planning agents seem to be stronger than A2C, they also are subject to different forms of deceptions, dependent on how they are implemented.
\section{Discussion}
In summary, while the A2C deep reinforcement learning \cite{mnih2016asynchronous} approach performs reasonably well, it rarely achieves optimal performance in our games and is vulnerable to most of the deceptions discussed here. This stands in contrast to the fact that A2C performs quite well across a range of AI benchmarks and can be considered competitive \cite{arulkumaran2017deep,justesen2017deep}. It should also be noted that the fast-moving field of deep reinforcement learning has already produced numerous modifications that could potentially solve the games discussed here \cite{arulkumaran2017deep}. However, instead of discussing possible modifications to overcome any particular challenge presented here, we want to take a step back and refocus on the point of this exercise. We are interested in deceptions as a way to better understand the general vulnerabilities of AI approaches, and to build a more systematic understanding of the ways deep learning in particular, and AI in general, might fail. With the previous games as concrete examples in mind, we now discuss four non-exhaustive categories of deception.
\subsection{Types of Deception}
\subsubsection{Lack of Hierarchical Understanding}
The DeceptiCoin games are relatively easy to solve if one thinks about them at the right level of abstraction. DeceptiCoins can be seen as a single binary decision between one path and another. Once this is clear, one can quickly evaluate the utility of each option and pick the correct path. The deceptive element is that the problem is presented to the AI as an incredibly large search space, since it takes many steps to complete the overall meta-action. Humans are usually quite good at finding these higher levels of abstraction, so this problem might not look like much of a deception to us, but it is quite hard for an AI. The large search space, paired with the assumption that every action along the path of the larger action matters, makes it very hard to explore all possible steps until a possible reward is reached. This is similar to the famous problem in Montezuma's Revenge, where the AI could not reach the goal because its random exploration did not even get close; that problem was only recently addressed with forced exploration \cite{ecoffet2019go}.
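The scale of the problem can be made concrete with a small calculation (illustrative numbers only, not measured from the games): if the reward lies at the end of one specific corridor of $n$ grid moves and the agent picks uniformly among four actions, undirected exploration follows that corridor with probability $4^{-n}$, which vanishes long before the corridor gets meaningfully long, whereas at the abstract level the same situation is a single binary choice:

```python
# Probability that uniformly random moves (4 possible actions) follow one
# specific n-step corridor to the reward, vs. the single abstract decision.
for n in (5, 10, 20):
    print(n, 0.25 ** n)
print("abstract binary choice:", 0.5)
```

Even a 20-step corridor already puts the reward effectively out of reach of random exploration, which is why finding the right abstraction (or forcing exploration) matters so much.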
Finding a good hierarchical abstraction can actually solve the problem. For example, in DeceptiCoins we can look at the path from one point to another as one action - something that has been explored in GVGAI playing agents before.
\subsubsection{Subverted Generalization}
\textit{Wafer Thin Mints} is a game specifically designed to trick agents that generalize. Agents that simply use a forward model to plan their next step perform quite well here, as they realize that their next action will kill them. In general, however, we do not have access to a forward model, so there is a need to generalize from past experience and use induction. The fact that each mint up to the ninth gives a positive reward reinforces the idea that eating a mint is good; the tenth mint then kills you. This is not only a problem for reinforcement learning, but has been discussed in both epistemology \cite{hume1739,Russel1912} and the philosophy of AI, with the consensus that induction in general is not guaranteed to work and that there is no real way to avoid this problem. Subverted generalization is also a good example of how more advanced AIs become more vulnerable to certain deceptions: on average, generalization is a good skill to have and can make an AI much faster, right up to the point where it fails.
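The inductive trap can be stated in two lines (with a hypothetical penalty value; the game's actual death penalty differs): after nine mints, the empirical estimate of the action ``eat a mint'' is firmly positive, and precisely then the action becomes fatal:

```python
rewards = [1] * 9 + [-100]        # nine harmless mints, then a fatal tenth
seen = rewards[:9]
estimate = sum(seen) / len(seen)  # what induction from experience suggests
print("estimated value of eating a mint:", estimate)
print("actual value of the 10th mint:", rewards[9])
```

No amount of additional experience with the first nine mints improves the estimate of the tenth; only a forward model (or prior knowledge of the rule change) can.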
\subsubsection{Delayed Reward}
A central challenge in reinforcement learning is determining which actions lead to a reward \cite{Sutton1992}. One way to complicate this is to delay the payment of the reward, as we did in \textit{Invest}. The player first has to incur a negative reward to invest and then, after a certain number of time steps, receives a larger positive reward. The RL agent had two problems with Invest. First, it only ever invests with the banker with the shortest repayment time. The Red Banker would, overall, offer the best payout, but the RL agent either does not realize this relationship or does not associate the reward correctly.
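A back-of-the-envelope comparison of the bankers' per-tick return rates (using the rules as stated in the game description, and ignoring the level's time limit and the constraint of having enough points on hand) shows why the Red Banker is the better choice:

```python
# (cost, payout, delay in ticks) for each banker, as stated in the game rules
bankers = {"Green": (3, 5, 30), "Red": (7, 15, 60), "Blue": (5, 10, 90)}

for name, (cost, payout, delay) in bankers.items():
    profit = payout - cost
    print(f"{name}: +{profit} over {delay} ticks = {profit / delay:.3f} points/tick")
```

Red yields roughly 0.133 points per tick, about twice the Green Banker's 0.067, yet the agent settles on Green, presumably because Green's shorter delay makes credit assignment easier.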
Furthermore, the RL agent also seems to learn ``superstitions''. When we examined the behaviour of the trained RL agent, we saw that it invests with the Green Banker and then runs to a specific spot in the level, waiting there for the reward payout. This behaviour is then repeated: the agent runs to the banker and then back to the spot to wait for its reward. We reran the training for the RL agent and saw the same behaviour, albeit with a different spot that the agent runs to. We assume that this superstition arose because the agent initially wandered off after investing with the Green Banker and then received the reward while it was in that spot. It seems to have learned that it needs to invest with the banker, as varying this behaviour would result in no payout. But there is little pressure to move it away from its superstition of waiting for the result in a specific spot, even though the spot has no impact on the payout. In fact, the superstition makes the behaviour sub-optimal even with just the Green Banker, as running back from the far spot delays the time until the agent can invest again.
What was exciting about this behavior was that something similar was also observed in early reinforcement learning studies with animals \cite{skinner1948superstition}. Pigeons that were regularly fed by an automatic mechanism (regardless of their behaviour) developed various superstitious behaviours, such as elaborate dances and motions, which Skinner hypothesized were assumed (by the pigeon) to causally influence the food delivery. In our game, the agent seems to develop similar superstitions.
\subsubsection{Delayed Gratification}
There is a famous experiment about delayed gratification \cite{mischel1972cognitive} that confronts four-year-old children with a marshmallow and asks them not to eat it while the experimenter leaves the room. They are told that they will get a second marshmallow if they can hold off eating the first one. This task proves difficult for some children, and it is also difficult for our agent.
Flower is a game where the agent actually gets worse over time. This is because it is initially not very good at collecting the flowers, which gives the flowers time to mature. The optimal strategy would be to wait for the flowers to grow fully and then go around and collect them.
The agent learns the expected reward of collecting seeds early on but does not realize that this reward changes as its collection speed increases. When it updates its expected reward based on its new speed, it forgets that it could obtain higher rewards when it was slower.
While some of the planning algorithms perform better here, it is likely that they did not actually ``understand'' the problem but are simply much worse at collecting the flowers (like the untrained RL agent). This example demonstrates that we can design a problem where the AI gets worse over time by ``learning'' to play.
\section{Conclusion}
It appears that deep reinforcement learners are easily deceived. We have devised a set of games specifically to showcase different forms of deception and tested one of the most widely used RL algorithms, Advantage Actor-Critic (A2C), on them. In all games, the reinforcement learner failed to find the optimal policy (except on one level of one game), as it evidently fell for the various traps laid in the levels.
As the games were implemented in the GVGAI framework, it was also possible for us to compare with tree search-based planning agents, including those based on MCTS. (This is very much a comparison of apples and oranges, as the planning agents have access to a forward model and a direct object representation but are not given any training time.) For every game there is a planning agent that performs better than the A2C agent, but in most cases there is also a planning agent that performs worse. It is clear that some kinds of deception affect our reinforcement learning algorithm much more severely than they affect the planning algorithms; in particular, the subverted generalization of \textit{Wafer Thin Mints}. On the other hand, A2C performed better than most planning algorithms under the delayed reward in Invest, even though the policy it arrived at is bizarre to a human observer and suggests a warped association between cause and effect.
We look forward to testing other kinds of algorithms on these games, including phylogenetic reinforcement learning methods such as neuroevolution. We also hope that other researchers will use these games to test the susceptibility of their agents to specific deceptions.
\fontsize{9.0pt}{10.0pt} \selectfont
\section{Introduction and Statement of the Problem}
\subsection{Classical Discretizations in Fractional Calculus}
The efficient numerical solution of initial value problems with fractional differential equations
like, e.g.,
\begin{equation}
\label{eq:ivp}
D^\alpha_a y(t) = f(t, y(t)), \qquad y(a) = y_0,
\end{equation}
is a significant
computational challenge due to, among other reasons, the non-locality of fractional
differential operators.
In our formulation \eqref{eq:ivp}, $D^\alpha_a$ denotes the standard
Caputo differential operator of order $\alpha$ with starting point
$a \in \mathbb R$ \cite[Chapter 3]{Di2010},
and we assume here and throughout some other parts of this paper that $0 < \alpha < 1$
(although we explicitly point out that the generalization of our findings to the case
that $\alpha$ is a noninteger number greater than $1$ is a relatively straightforward matter).
When dealing with the problem \eqref{eq:ivp}, one usually introduces
a discretization of the interval $[a, a+T]$, say, on which the solution is sought by
defining some grid points $a = t_0 < t_1 < t_2 < \cdots < t_N = a+T$. For each
grid point $t_j$, $j = 1, 2, \ldots, N$, typical numerical methods then introduce an
approximation formula for a discretization of $D_a^\alpha y(t_j)$ based on function values of
$y$ at the grid points, replace the exact fractional derivative in eq.~\eqref{eq:ivp} by this
approximation, discard the approximation error and solve the resulting algebraic equation to
obtain an approximation for $y(t_j)$. In their standard forms, classical methods like
fractional linear multistep methods \cite{Lub:lmm-abel,Lub:discr-fc} or the Adams method
\cite{DFF2002,DFF2004} require $O(j)$ operations to compute the required approximation at the
$j$-th grid point, thus leading to an $O(N^2)$ complexity for the overall calculation of the
approximate solution at all $N$ grid points. Moreover, the construction of the algorithms
requires the entire history of the process to be in the active memory at any time, thus leading
to an $O(N)$ memory requirement. This may be prohibitive in situations like,
e.g., the simulation of the mechanical behaviour of viscoelastic materials via some
finite element code where a very large number of such differential equations needs
to be solved simultaneously \cite{HSL2019}.
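To make the cost structure concrete, the following Python sketch (our own illustration) implements the first-order fractional backward difference, i.e.\ the Gr\"unwald--Letnikov scheme, the simplest member of the class of fractional linear multistep methods: computing the approximation at grid point $t_j$ requires a sum over all $j+1$ history values, and the entire history array must stay in memory. The test function $y(t)=t^2$ with $\alpha=1/2$ is our choice; since $y(0)=0$, the Gr\"unwald--Letnikov value coincides with the Caputo derivative $\Gamma(3)/\Gamma(5/2)\,t^{3/2}$.

```python
import math

alpha = 0.5            # fractional order, 0 < alpha < 1
N, T = 1000, 1.0       # number of grid points, interval length
h = T / N
y = [(k * h) ** 2 for k in range(N + 1)]   # entire history kept in memory: O(N)

# Gruenwald-Letnikov weights via the recurrence g_k = g_{k-1} * (1 - (alpha+1)/k)
g = [1.0]
for k in range(1, N + 1):
    g.append(g[-1] * (1.0 - (alpha + 1.0) / k))

def frac_deriv(j):
    """O(j) history sum for the derivative approximation at grid point t_j."""
    return h ** (-alpha) * sum(g[k] * y[j - k] for k in range(j + 1))

# exact Caputo derivative of t^2 at t = T: Gamma(3)/Gamma(3-alpha) * T^(2-alpha)
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * T ** (2.0 - alpha)
print(frac_deriv(N), exact)
```

Evaluating the approximation at all $N$ grid points thus costs $O(N^2)$ operations in total, which is exactly the bottleneck described above.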
Numerous modifications of these basic algorithms have been proposed to resolve these
issues. Specifically (see, e.g., \cite[Section 3]{DKLMT}), one may use FFT techniques
to evaluate the sums that arise in the formulas \cite{Ga2018,HLS1985,HLS1988}, thus reducing
the overall computational complexity to $O(N \log^2 N)$; however, this approach does not improve the
memory requirements. Alternatively, nested mesh techniques \cite{DF2006,FS} can be employed;
this typically reduces the computational complexity to $O(N \log N)$, and some of these methods are
also able to cut down the active memory requirements to $O(\log N)$.
\subsection{Diffusive Representations in Discretized Fractional Calculus}
From the properties recalled above, it becomes clear that none of the
schemes mentioned so far allows one to reach the level known for traditional algorithms for
first order initial value problems that, due to their differential operators being local,
have an $O(N)$ complexity and an $O(1)$ memory requirement. However, it is possible
to achieve these performance features by using methods based on diffusive representations
for the fractional derivatives \cite{Mo1998}. Typically, such representations take the form
\begin{equation}
\label{eq:diff-rep}
D_a^\alpha y(t) = \int_0^\infty \phi(w, t) \mathrm d w
\end{equation}
where, for a fixed value of $w$, the function $\phi(w, \cdot)$ is characterized as the solution
to an initial value problem for a first order differential equation the formulation of which contains
the function $y$ whose fractional derivative is to be computed. In the presently available literature,
many different special cases of this representation are known, e.g.\ the version of Yuan and
Agrawal \cite{YA} (originally proposed in that paper for $0 < \alpha < 1$ and extended to
$1 < \alpha < 2$ in \cite{TR} and to general positive noninteger values of $\alpha$ in
\cite{Di2008}; see also \cite{SG} for further properties of this method) where the associated
initial value problem reads
\begin{subequations}
\label{eq:ya}
\begin{equation}
\label{eq:ya-ode}
\frac{\partial \phi^{\mathrm{YA}}}{\partial t} (w,t) \!
= \!- w^2 \phi^{\mathrm{YA}}\!(w,t)
\! + \! (\!-1\!)^{\lfloor \!\alpha\! \rfloor} \frac{2 \sin \pi \alpha} \pi
w^{2\alpha-2\lceil \! \alpha \! \rceil+1} y^{(\lceil\! \alpha\! \rceil)}(t),
\,\,
\phi^{\mathrm{YA}}\!(w, a) = 0,
\end{equation}
such that the function $\phi^{\mathrm{YA}}$ has the form
\begin{equation}
\label{eq:ya-phi}
\phi^{\mathrm{YA}}(w,t)
= (-1)^{\lfloor \alpha \rfloor} \frac{2 \sin \pi \alpha} \pi
w^{2\alpha-2\lceil \alpha \rceil+1} \int_a^t y^{(\lceil \alpha \rceil)}(\tau)
\exp(-(t-\tau) w^2) \mathrm d \tau.
\end{equation}
\end{subequations}
An alternative has been proposed by Chatterjee \cite{Chatt} (see also \cite{SC}) using the
initial value problem
\begin{subequations}
\label{eq:sc}
\begin{equation}
\label{eq:sc-ode}
\frac{\partial \phi^{\mathrm{C}}}{\partial t} (w,t) \!
= \! - w^{1 / (\alpha - \lceil \! \alpha \! \rceil + 1)} \phi^{\mathrm{C}}\!(w,t)
+ (\!-1\!)^{\lfloor \! \alpha \! \rfloor} \frac{\sin \pi \alpha} {\pi (\alpha - \lceil \!\alpha\!\rceil + 1)}
y^{(\lceil \!\alpha\! \rceil)}(t),
\quad
\phi^{\mathrm{C}}\!(w, a) = 0,
\end{equation}
such that the function $\phi^{\mathrm{C}}$ has the form
\begin{equation}
\label{eq:sc-phi}
\phi^{\mathrm{C}}(w,t)
= \frac{(-1)^{\lfloor \alpha \rfloor}\sin \pi \alpha}{\pi ( \alpha - \lceil \alpha \rceil + 1)}
\int_a^t y^{(\lceil \alpha \rceil)}(\tau)
\exp\left(-(t-\tau) w^{1/( \alpha - \lceil \alpha \rceil + 1)}\right)
\mathrm d \tau.
\end{equation}
\end{subequations}
In either case (or in the case of the many variants thereof that have been proposed;
cf., e.g., \cite{Ba2019,BS,Li,McL,ZCSHN}), the numerical calculation of $D^\alpha_a y(t_j)$ requires
\begin{enumerate}
\item a quadrature formula
\begin{equation}
\label{eq:qf1}
\sum_{k=1}^K \lambda_k \phi(w_k, t_j) \approx \int_0^\infty \phi(w, t_j) \mathrm d w
= D_a^\alpha y(t_j)
\end{equation}
with nodes $w_1, w_2, \ldots, w_K \in [0, \infty)$ and weights $\lambda_1, \lambda_2,
\ldots, \lambda_K \in \mathbb R$ for numerically evaluating the integral in eq.~\eqref{eq:diff-rep}
\item a standard numerical solver for the associated differential equation (e.g., a linear multistep
method) to approximately compute, for each $k \in \{ 1, 2, \ldots, K \}$, the values $\phi(w_k, t_j)$
required to evaluate the formula \eqref{eq:qf1}.
\end{enumerate}
Evidently, the run time and the memory requirements of the operation in step~1 do not depend on $j$.
Also, one can perform step 2 in an amount of time that is independent of $j$. Furthermore, if an $\ell$-step
method is used in step 2, one needs to have (approximate) information about $y(t_{j-1}), y(t_{j-2}),
\ldots, y(t_{j-\ell})$, which has to be kept in the active memory---but the amount of storage space required
for this purpose is also independent of $j$.
In summary, approaches of this type require $O(1)$ arithmetic operations per time step, i.e.\ we
have a computational cost of $O(N)$ for all $N$ time steps combined, and the required amount of
memory is $O(1)$ as desired. A further pleasant property of these methods is that they
impose no restrictions at all on the choice of the grid points $t_j$, whereas this cannot always be
achieved with the other approaches.
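As a concrete (and deliberately simple) illustration of this two-step structure, the following Python sketch applies it to Chatterjee's representation \eqref{eq:sc} for $0<\alpha<1$: a truncated trapezoidal rule plays the role of the quadrature \eqref{eq:qf1}, and an exponential-Euler step advances each $\phi^{\mathrm C}(w_k,\cdot)$ in time. The test function $y(t)=t^2$ and all grid parameters are our choices, not taken from the literature. Note that the work and memory per time step are $O(K)$, independent of the step index.

```python
import math

alpha = 0.5                                   # fractional order, 0 < alpha < 1
c = math.sin(alpha * math.pi) / (math.pi * alpha)   # coefficient in eq. (sc-ode)
dy = lambda t: 2.0 * t                        # y'(t) for the test function y(t) = t^2

W, K = 200.0, 2000                            # quadrature truncation and node count
N, T = 200, 1.0                               # time steps, final time
h, hw = T / N, W / K

nodes = [k * hw for k in range(K + 1)]
lam = [w ** (1.0 / alpha) for w in nodes]     # decay rate w^{1/alpha} of each mode
decay = [math.exp(-l * h) for l in lam]       # precomputed per-node propagators
gain = [c * h if l == 0.0 else c * (1.0 - math.exp(-l * h)) / l for l in lam]

phi = [0.0] * (K + 1)                         # initial condition phi(w_k, a) = 0
for n in range(N):                            # O(K) work and memory per time step
    src = dy((n + 0.5) * h)                   # midpoint sample of y'
    for k in range(K + 1):
        phi[k] = decay[k] * phi[k] + gain[k] * src

# trapezoidal rule for the quadrature step, cf. eq. (qf1)
approx = hw * (sum(phi) - 0.5 * phi[0] - 0.5 * phi[-1])
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * T ** (2.0 - alpha)
print(approx, exact)
```

Observe how many nodes this naive uniform quadrature needs for modest accuracy, a direct consequence of the slow algebraic decay of $\phi^{\mathrm C}$ discussed below; this is precisely the issue the present paper sets out to address.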
Thus, from a theoretical point of view, algorithms of this form
are very attractive. In practice, however, the implied constants in the $O$-terms may be very large.
This is due to the following observation \cite[Theorems 3.20(b) and 3.21(b)]{Di2010}:
\begin{proposition}
\label{prop:asymp-yac}
Let $t \in [a, a+T]$ be fixed. Then, for $w \to \infty$, we have
\[
\phi^{\mathrm{YA}}(w, t) = c^{\mathrm{YA}} w^{q_{\mathrm{YA}}} (1 + o(1))
\quad \mbox{ with } \quad
q_{\mathrm{YA}} = 2 \alpha - 2 \lceil\alpha\rceil - 1 \in (-3, -1)
\]
and
\[
\phi^{\mathrm{C}}(w, t) = c^{\mathrm{C}} w^{q_{\mathrm C}} (1 + o(1))
\quad \mbox{ with } \quad
q_{\mathrm{C}} = - \frac 1 {\alpha - \lceil\alpha\rceil + 1} < -1
\]
where $c^{\mathrm{C}}$ and $c^{\mathrm{YA}}$ are some constants independent of $w$
(that may, however, depend on $t$, $a$, $\alpha$ and $y$).
\end{proposition}
\vskip-0.5cm
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{q-vs-alpha}
\caption{\label{fig:q-vs-alpha}Behaviour of the exponents
$q_{\mathrm{YA}}$ (blue) and $q_{\mathrm{C}}$ (orange)
as introduced in Proposition \ref{prop:asymp-yac} vs.\ the
order $\alpha$ of the differential operator}
\end{figure}
From Proposition \ref{prop:asymp-yac} we can see that the integrands in eq.\ \eqref{eq:diff-rep}
decay to zero in an algebraic manner as $w \to \infty$.
Figure \ref{fig:q-vs-alpha} shows the behaviour of the exponents $q_{\mathrm{YA}}$ and
$q_{\mathrm C}$ as they depend on $\alpha$. It can be seen that the exponents are less than $-1$ for
all $\alpha \in (0,1)$. This suffices to assert that the integrals $\int_0^\infty \phi(w,t) \mathrm d w$
are convergent. On the other hand, step 1 of the algorithm outlined above requires to
numerically approximate this integral, and to this end, classical results from approximation theory
\cite{Lubinsky} imply that such an algebraic decay does not admit a very fast convergence of
such numerical methods. Indeed, as the exponent $q_{\mathrm C}$ is slightly larger than $q_{\mathrm{YA}}$ for
$\alpha \ge 1/2$ and (significantly) smaller for $\alpha < 1/2$, one may say that, overall,
Chatterjee's method has the preferable properties from this point of view (although these
properties are still far from satisfactory).
To the best of the author's knowledge, this is a feature shared by very many
algorithms based on this type of approach. Therefore, one needs a relatively large number $K$ of quadrature nodes
in eq.\ \eqref{eq:qf1} to obtain an approximation with an acceptable accuracy (with the approaches known
so far, a common choice for $K$ is in the range between 200 and 500, cf.\ \cite{Ba2019,HSL2019}).
This number $K$ clearly has a strong influence on the constants implied in the $O$-term for the
computational complexity estimate. The main goal of this paper is thus to develop a
method that is based on the same fundamental idea but leads to a function $\phi(w,t)$
which exhibits exponential decay for large $w$. This behaviour is much more pleasant from
an approximation-theoretic point of view because it allows the use of well-understood and rapidly
convergent classical techniques like Gauss-Laguerre quadrature formulas. The hope behind this
idea is that the improved convergence behaviour will permit quadrature formulas as in
eq.\ \eqref{eq:qf1} with a significantly smaller number $K$ of nodes, so that the resulting algorithms
can produce results with comparable accuracy to the known methods in a much shorter amount of time
(still proportional to $N$ but with a significantly smaller implied constant).
\section{The New Diffusive Representation and its Properties}
Our main idea is based on the following result.
\begin{theorem}
\label{thm:phinew}
For given values $a \in \mathbb R$, $T > 0$ and $\alpha \in \mathbb R_+ \setminus \mathbb N$
and a given function $y \in C^{\lceil \alpha \rceil}[a, a+T]$, let
\begin{equation}
\label{eq:def-q}
q_{\mathrm D} = \alpha - \lceil \alpha \rceil +1
\end{equation}
and
\begin{equation}
\label{eq:def-phinew}
\phi^{\mathrm D}(w, t)
= (-1)^{\lfloor \alpha \rfloor} \frac{\sin \alpha \pi}{\pi} \mathrm e^{wq_{\mathrm D}}
\int_a^t y^{(\lceil \alpha \rceil)}(\tau) \exp\left(-(t-\tau) \mathrm e^w\right) \mathrm d\tau
\end{equation}
for all $w \in \mathbb R$ and $t \in [a, a+T]$.
Then, we have the following properties:
\begin{enumerate}
\item[(a)] The value $q_{\mathrm D}$ satisfies $0 < q_{\mathrm D} < 1$.
\item[(b)] For any $w \in \mathbb R$, the function $\phi^{\mathrm D}(w, \cdot)$ solves the initial value
problem
\begin{equation}
\label{eq:ivp-phinew}
\frac{\partial \phi^{\mathrm D}}{ \partial t} (w, t)
= - \mathrm e^w \phi^{\mathrm D}(w, t)
+ (-1)^{\lfloor \alpha \rfloor} \frac{\sin \alpha \pi}{\pi}
\mathrm e^{wq_{\mathrm D}} y^{(\lceil \alpha \rceil)}(t),
\quad
\phi^{\mathrm D}(w, a) = 0
\end{equation}
for $t \in [a, a+T]$.
\item[(c)] For any $t \in [a, a+T]$,
\begin{equation}
\label{eq:phinew-caputo}
D_a^\alpha y(t) = \int_{-\infty}^\infty \phi^{\mathrm D}(w, t) \mathrm d w.
\end{equation}
\item[(d)] For any $t \in [a, a+T]$, we have $\phi^{\mathrm D}(\cdot, t) \in C^\infty(\mathbb R)$.
\item[(e)] For any $t \in [a, a+T]$,
\begin{equation}
\label{eq:asymp-phi+}
\phi^{\mathrm D}(w, t) = O(\mathrm e^{w (q_{\mathrm D}-1)})
\quad \mbox{ as } w \to \infty
\end{equation}
and
\begin{equation}
\label{eq:asymp-phi-}
\phi^{\mathrm D}(w, t) = O(\mathrm e^{w q_{\mathrm D}})
\quad \mbox{ as } w \to -\infty.
\end{equation}
\end{enumerate}
\end{theorem}
So, part (b) of Theorem \ref{thm:phinew} asserts that our function $\phi^{\mathrm D}$ solves an
initial value problem of the same type as the previously considered functions, cf.\ \eqref{eq:ya-ode} or
\eqref{eq:sc-ode}. Moreover, according to part (c), by integrating this function with respect to $w$
we obtain the fractional derivative of the given function $y$, which is in analogy with the corresponding
equation \eqref{eq:diff-rep} for the known approaches mentioned above. Note that there is a minor
difference between eqs.\ \eqref{eq:diff-rep} and \eqref{eq:phinew-caputo} in the sense that the latter
involves an integration over the entire real line whereas the former requires an integration over the
positive half line only, but from the point of view of approximation (or quadrature) theory this does not
introduce any substantial problems. (The index $\mathrm D$ in $\phi^{\mathrm D}$ and $q_{\mathrm D}$
can be interpreted to stand for ``doubly infinite integration range''.)
Thus, in these respects the new model behaves in very much the
same way as the known ones. The significant difference between the known approach and the new
one is evident from part (e) of the Theorem: It asserts (in view of the property of $q_{\mathrm D}$
shown in part (a)) that the integrand exhibits the desired exponential decay as $w \to \pm \infty$,
thus allowing, in combination with the smoothness result of part (d), a much more efficient numerical integration.
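As a quick numerical plausibility check of parts (c) and (e) (our own sketch, not part of the algorithm developed later), one can evaluate \eqref{eq:phinew-caputo} for the test function $y(t)=t^2$ with $\alpha=1/2$ (so that $q_{\mathrm D}=\alpha$), for which the inner integral in \eqref{eq:def-phinew} has a closed form. Thanks to the exponential decay, a plain trapezoidal rule on a modest truncated interval already reproduces the exact Caputo derivative $\Gamma(3)/\Gamma(5/2)\,t^{3/2}$:

```python
import math

def phi_D(w, t, alpha):
    """phi^D(w, t) from eq. (def-phinew) for y(t) = t^2 and 0 < alpha < 1,
    where q_D = alpha and the inner integral over tau is done in closed form."""
    lam = math.exp(w)
    x = lam * t
    inner = (2.0 * t * (1.0 - math.exp(-x)) / lam
             - 2.0 * (1.0 - math.exp(-x) * (1.0 + x)) / lam ** 2)
    return math.sin(alpha * math.pi) / math.pi * math.exp(alpha * w) * inner

def caputo_diffusive(t, alpha, wmax=15.0, hw=0.05):
    """Trapezoidal rule for eq. (phinew-caputo) on [-wmax, wmax]; the
    exponential decay of part (e) keeps the truncation error small."""
    n = int(round(2.0 * wmax / hw))
    s = 0.5 * (phi_D(-wmax, t, alpha) + phi_D(wmax, t, alpha))
    for k in range(1, n):
        s += phi_D(-wmax + k * hw, t, alpha)
    return hw * s

alpha, t = 0.5, 1.0
approx = caputo_diffusive(t, alpha)
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * t ** (2.0 - alpha)
print(approx, exact)
```

With only a few hundred uniformly spaced nodes the truncated trapezoidal sum already agrees with the exact value to roughly three digits, in contrast to the thousands of nodes a comparably naive quadrature needs for the algebraically decaying representations.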
\begin{proof}
Part (a) is an immediate consequence of the definition of $q_{\mathrm D}$ given in eq.~\eqref{eq:def-q}.
For part (b), we first note that the integrand in eq.\ \eqref{eq:def-phinew} is continuous by
assumption. Hence, the integral is zero for $t=a$ which implies that the initial condition given in
eq.\ \eqref{eq:ivp-phinew} is correct. Also, a standard differentiation of the integral in the definition
\eqref{eq:def-phinew} with respect to the parameter $t$ yields the differential equation.
To prove (c), we recall from \cite[Proof of Theorem 3.18]{Di2010} that
\[
D_a^\alpha y(t)
= (-1)^{\lfloor \alpha \rfloor} \frac{\sin \alpha \pi} \pi
\int_a^t \int_0^\infty \frac{\mathrm e^{-z}} z \left( \frac z {t - \tau} \right)^{q_{\mathrm D}}
y^{(\lceil \alpha \rceil)}(\tau) \mathrm d z \, \mathrm d \tau.
\]
The substitution $z = (t-\tau) \mathrm e^w$, combined with an interchange of the two integrations
(that is admissible in view of Fubini's theorem), then leads to the desired result.
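To make the substitution step explicit (a brief sketch): with $z = (t-\tau) \mathrm e^w$ we have
$\mathrm d z / z = \mathrm d w$ and $z / (t - \tau) = \mathrm e^w$, so the inner integral becomes
\[
\int_0^\infty \frac{\mathrm e^{-z}} z \left( \frac z {t - \tau} \right)^{q_{\mathrm D}} \mathrm d z
= \int_{-\infty}^\infty \mathrm e^{w q_{\mathrm D}} \exp \left( -(t-\tau) \mathrm e^w \right) \mathrm d w ,
\]
and interchanging the integration over $w$ with that over $\tau$ lets the $\tau$-integral reproduce
the function $\phi^{\mathrm D}(w, t)$ from \eqref{eq:def-phinew}.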
Statement (d) directly follows from the definition \eqref{eq:def-phinew} of the function $\phi^{\mathrm D}$.
Finally, we show that the estimates of (e) are true. To this end, let us first discuss what happens for
$w \to + \infty$. Here, we can see that
\[
\phi^{\mathrm D}(w, t) =(-1)^{\lfloor \alpha \rfloor} \frac{\sin \alpha \pi}{\pi} ( I_1 + I_2 )
\]
where
\begin{align*}
| I_1 |
&= \left| \mathrm e^{w q_{\mathrm D}}
\int_{t - w \exp(-w)}^t y^{(\lceil \alpha \rceil)}(\tau)
\exp\left(-(t-\tau) \mathrm e^w\right) \mathrm d\tau \right| \\
& \le \| y^{(\lceil \alpha \rceil)} \|_{L_\infty[a, a+T]} \mathrm e^{w q_{\mathrm D}}
\left| \int_{t - w \exp(-w)}^t
\exp\left(-(t-\tau) \mathrm e^w\right) \mathrm d\tau \right| \\
& \le \| y^{(\lceil \alpha \rceil)} \|_{L_\infty[a, a+T]} \mathrm e^{w q_{\mathrm D}}
\mathrm e^{-w} \left[ 1 - \mathrm e^{-w} \right]
< \| y^{(\lceil \alpha \rceil)} \|_{L_\infty[a, a+T]} \mathrm e^{w (q_{\mathrm D} - 1)}
\end{align*}
and
\begin{align*}
| I_2 |
&= \left| \mathrm e^{w q_{\mathrm D}}
\int_a^{t - w \exp(-w)} y^{(\lceil \alpha \rceil)}(\tau)
\exp\left(-(t-\tau) \mathrm e^w\right) \mathrm d\tau \right| \\
&\le \mathrm e^{w q_{\mathrm D}}
\max_{\tau \in [a, t - w \exp(-w)]} \exp\left(-(t-\tau) \mathrm e^w\right)
\int_a^{t - w \exp(-w)} | y^{(\lceil \alpha \rceil)}(\tau) | \mathrm d\tau \\
&\le \mathrm e^{w q_{\mathrm D}}
\mathrm e^{-w}
\int_a^{a+T} | y^{(\lceil \alpha \rceil)}(\tau) | \mathrm d\tau
= \mathrm e^{w (q_{\mathrm D}-1)}
\int_a^{a+T} | y^{(\lceil \alpha \rceil)}(\tau) | \mathrm d\tau
\end{align*}
which shows the desired result \eqref{eq:asymp-phi+} in this case; in particular,
the upper bound decays exponentially for $w \to \infty$ because $q_{\mathrm D}<1$.
Regarding the behaviour for $w \to -\infty$, we start from the representation \eqref{eq:def-phinew}
and apply a partial integration.
This yields, taking into consideration that $t \ge a$, that
\begin{align*}
| \phi^{\mathrm D}(w, t) |
&= \frac{|\sin \alpha \pi|}{\pi} \mathrm e^{wq_{\mathrm D}}
\Big | \exp \left(-(t-\tau) \mathrm e^w\right)
y^{(\lceil \alpha \rceil - 1)}(\tau) {\big|}_{\tau=a}^{\tau=t} \\
& \qquad \qquad \qquad \qquad
- \mathrm e^w \int_a^t \exp \left( -(t-\tau) \mathrm e^w \right)
y^{(\lceil \alpha \rceil - 1)}(\tau) \mathrm d \tau \Big | \\
&\le \frac{|\sin \alpha \pi|}{\pi} \mathrm e^{wq_{\mathrm D}}
\left|
y^{(\lceil \alpha \rceil - 1)}(t)
- y^{(\lceil \alpha \rceil - 1)}(a) \exp\left(-(t-a) \mathrm e^w \right)
\right| \\
& \phantom{\le} \quad {} + \frac{|\sin \alpha \pi|}{\pi} \mathrm e^{w q_{\mathrm D}}
\| y^{(\lceil \alpha \rceil - 1)} \|_{L_\infty[a, a+T]}
\left| \mathrm e^w \int_a^t \exp \left( -(t-\tau) \mathrm e^w \right) \mathrm d \tau \right| \\
&\le \frac{|\sin \alpha \pi|}{\pi}
\| y^{(\lceil \alpha \rceil - 1)} \|_{L_\infty[a, a+T]} \mathrm e^{wq_{\mathrm D}}
\left( 2 + 1 - \exp \left( -(t-a) \mathrm e^w \right) \right) \\
&\le 3 \frac{|\sin \alpha \pi|}{\pi}
\| y^{(\lceil \alpha \rceil - 1)} \|_{L_\infty[a, a+T]} \mathrm e^{wq_{\mathrm D}} ,
\end{align*}
thus proving the relation \eqref{eq:asymp-phi-} and demonstrating, in view of $q_{\mathrm D} > 0$,
that $\phi^{\mathrm D}(w, t)$ decays to zero exponentially as $w \to -\infty$.
\end{proof}
\section{The Complete Numerical Method}
Based on Theorem \ref{thm:phinew}---in particular, using the properties shown in parts (d) and (e)---we
thus proceed as follows to obtain the required approximation of $D_a^\alpha y(t_j)$, $j = 1, 2, \ldots, N$.
Splitting up the integral from eq.\ \eqref{eq:phinew-caputo} into the integrals over the negative
and over the positive half line, respectively, and introducing some obvious substitutions, we notice that
\begin{align*}
\int_{-\infty}^\infty \phi^{\mathrm D}(w, t) \mathrm d w
&= \frac 1 {q_{\mathrm D}} \int_{0}^\infty \mathrm e^{-u} \mathrm e^{u}
\phi^{\mathrm D}(-u/q_{\mathrm D}, t) \mathrm d u \\
& \phantom{=} \quad {}
+ \frac 1 {1-q_{\mathrm D}} \int_{0}^\infty \mathrm e^{-u} \mathrm e^{u}
\phi^{\mathrm D}(u/(1-q_{\mathrm D}), t) \mathrm d u.
\end{align*}
Therefore, using
\begin{equation}
\label{eq:def-phitilde}
\hat \phi^{\mathrm D}(u, t)
:= \mathrm e^u \left( \frac 1 {q_{\mathrm D}} \phi^{\mathrm D}(-u/q_{\mathrm D}, t)
+ \frac 1 {1-q_{\mathrm D}} \phi^{\mathrm D}(u/(1-q_{\mathrm D}), t) \right),
\end{equation}
we find that
\begin{equation}
D^\alpha_a y(t)
= \int_{-\infty}^\infty \phi^{\mathrm D}(w, t) \mathrm d w
= \int_0^\infty \mathrm e^{-u} \hat \phi^{\mathrm D}(u, t) \mathrm d u
\approx Q^{\mathrm{GLa}}_K [ \hat \phi^{\mathrm D}(\cdot, t)]
\end{equation}
where
\[
Q^{\mathrm{GLa}}_K [f] = \sum_{k=1}^K a^{\mathrm{GLa}}_k f(x^{\mathrm{GLa}}_k)
\]
is the $K$-point Gauss-Laguerre quadrature formula, i.e.\ the Gaussian quadrature formula for
the weight function $\mathrm e^{-u}$ on the interval $[0, \infty)$ \cite[Sections 3.6 and 3.7]{DR}.
For the sake of simplicity, we have chosen to omit from our notation for the nodes $x^{\mathrm{GLa}}_k$
and the weights $a^{\mathrm{GLa}}_k$ of the Gauss-Laguerre quadrature formula the fact that
these quantities depend on the total number $K$ of quadrature nodes.
From \cite [p.\ 227]{DR} and our Theorem \ref{thm:phinew} above, we can immediately
conclude the following result:
\begin{theorem}
\label{thm:conv-gl}
Under the assumptions of Theorem \ref{thm:phinew}, we have
\[
\lim_{K \to \infty} Q^{\mathrm{GLa}}_K [ \hat \phi^{\mathrm D}(\cdot, t) ]
= D^\alpha_a y(t)
\]
for all $t \in [a, a+T]$.
\end{theorem}
For a given number $K$ of quadrature points, it is known that the
nodes $x^{\mathrm{GLa}}_k$, $k = 1, 2, \ldots, K$, are the zeros of the
Laguerre polynomial $L_K$ of degree $K$, and the associated weights are given by
\[
a^{\mathrm{GLa}}_k = \frac{x^{\mathrm{GLa}}_k}{(K+1)^2 \, [L_{K+1}(x^{\mathrm{GLa}}_k)]^2},
\]
cf., e.g., \cite[p.\ 223]{DR}. (In our definition of the Laguerre polynomials, the normalization is such that
$\int_0^\infty \mathrm e ^{-x} (L_K(x))^2 \mathrm d x = 1$.)
From \cite[eqs.\ (6.31.7), (6.31.11) and (6.31.12)]{Sz} we know that,
at least for $K \ge 3$,
\[
\frac{2.89}{2K+1} < x^{\mathrm{GLa}}_1 < \frac 3 {2 K}
\quad \mbox{ and } \quad
2 K < x^{\mathrm{GLa}}_K < 4K + 3.
\]
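These bounds can be confirmed numerically; a small sketch using NumPy's Gauss-Laguerre routine, here for $K = 10$:

```python
import numpy as np

# Numerical illustration of the node bounds quoted above, for K = 10;
# laggauss returns the nodes in increasing order.
K = 10
x, _ = np.polynomial.laguerre.laggauss(K)
print(x[0], x[-1])  # smallest and largest node
```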
We are now in a position to describe the method for the numerical computation
of $D^\alpha_a y(t_j)$, $j = 1, 2, \ldots, N$, that we propose. In this algorithm,
the symbols $\phi_k$ and $\tilde \phi_k$ are used to denote the approximate values of
$\phi^{\mathrm D}(w_k, t_j)$ and $\phi^{\mathrm D}(\tilde w_k, t_j)$, respectively, for the
current time step, i.e.\ for the currently considered value of $j$.
Steps 1 and 2 here are merely preparatory in nature; the core of the algorithm is step 3.
\begin{quote}
Given the initial point $a$, the order $\alpha$, the grid points $t_j$, $j = 1, 2, \ldots, N$
and the number $K \in \mathbb N$ of quadrature nodes,
\begin{enumerate}
\item Set $q_{\mathrm D} \mapsfrom \alpha - \lceil \alpha \rceil + 1$.
\item For $k = 1, 2, \ldots, K$:
\begin{enumerate}
\item compute the Gauss-Laguerre nodes $x^{\mathrm{GLa}}_k$
and the associated weights $a^{\mathrm{GLa}}_k$,
\item define the auxiliary quantities $w_k \mapsfrom - x^{\mathrm{GLa}}_k / q_{\mathrm D}$
and $\tilde w_k \mapsfrom x^{\mathrm{GLa}}_k / (1 - q_{\mathrm D})$,
\item set $\phi_k \mapsfrom 0$ and $\tilde \phi_k \mapsfrom 0$
(to represent the initial condition of the differential equation \eqref{eq:ivp-phinew}
for $t = t_0 = a$).
\end{enumerate}
\item For $j = 1, 2, \ldots, N$:
\begin{enumerate}
\item Set $h \mapsfrom t_j - t_{j-1}$.
\item For $k = 1, 2, \ldots, K$:
\begin{subequations}
\begin{enumerate}
\item update the value $\phi_k$ by means of solving the associated differential equation
\eqref{eq:ivp-phinew} with, e.g., the backward Euler method, viz.\
\begin{equation}
\label{eq:bweuler}
\phi_k \mapsfrom \frac 1 {1 + h \mathrm e^{w_k}}
\left( \phi_k + h (-1)^{\lfloor \alpha \rfloor}
\frac{\sin \alpha \pi}{\pi}
\mathrm e^{w_k q_{\mathrm D}}
y^{(\lceil \alpha \rceil)}(t_{j})
\right)
\end{equation}
(note that the index $k$ used here is not the time index);
\item similarly, update the value $\tilde \phi_k$ by
\begin{equation}
\label{eq:bweuler-tilde}
\tilde \phi_k \mapsfrom \frac 1 {1 + h \mathrm e^{\tilde w_k}}
\left( \tilde \phi_k + h (-1)^{\lfloor \alpha \rfloor}
\frac{\sin \alpha \pi}{\pi}
\mathrm e^{\tilde w_k q_{\mathrm D}}
y^{(\lceil \alpha \rceil)}(t_{j})
\right).
\end{equation}
\end{enumerate}
\end{subequations}
\item Compute the desired approximate value for $D^\alpha_a y(t_j)$ using the formula
\[
D^\alpha_a y(t_j)
= \sum_{k=1}^K a^{\mathrm{GLa}}_k
\exp(x^{\mathrm{GLa}}_k)
\left( \frac 1 {q_{\mathrm D}} \phi_k
+ \frac 1 {1 - q_{\mathrm D}} \tilde \phi_k
\right) .
\]
\end{enumerate}
\end{enumerate}
\end{quote}
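To make the procedure concrete, the following Python sketch implements steps 1--3 with the backward Euler update (the function name and parameter choices are ours; for $\tilde \phi_k$ we already use the overflow-safe form of the update discussed in the remark below):

```python
import numpy as np

def caputo_diffusive(alpha, dy, a, T, N, K):
    """Sketch of the algorithm above: approximate D_a^alpha y(t_j) on the
    equispaced grid t_j = a + j*T/N via the diffusive representation,
    K-point Gauss-Laguerre quadrature and backward Euler time stepping.
    dy must return y^(ceil(alpha))(t)."""
    qD = alpha - np.ceil(alpha) + 1.0                     # step 1
    C = (-1.0) ** np.floor(alpha) * np.sin(np.pi * alpha) / np.pi
    x, w = np.polynomial.laguerre.laggauss(K)             # step 2(a)
    wk = -x / qD                                          # step 2(b)
    twk = x / (1.0 - qD)
    phi = np.zeros(K)                                     # step 2(c)
    tphi = np.zeros(K)
    t = np.linspace(a, a + T, N + 1)
    d = np.zeros(N + 1)
    for j in range(1, N + 1):                             # step 3
        h = t[j] - t[j - 1]
        f = C * dy(t[j])
        # backward Euler for phi_k; exp(w_k) < 1, so this form is safe
        phi = (phi + h * f * np.exp(wk * qD)) / (1.0 + h * np.exp(wk))
        # overflow-safe backward Euler for tilde phi_k
        e = np.exp(-twk)
        tphi = (e * tphi + h * f * np.exp(twk * (qD - 1.0))) / (e + h)
        d[j] = np.sum(w * np.exp(x) * (phi / qD + tphi / (1.0 - qD)))
    return t, d
```

For example, with $y(t) = t^{1.6}$ and $\alpha = 0.4$ the output can be compared against the exact Caputo derivative $\Gamma(2.6)/\Gamma(2.2)\, t^{1.2}$.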
The main goal of this paper is to develop a diffusive representation that can be numerically handled in
a more efficient way than traditional formulas. Therefore, our work concentrates on the aspects
related to the integral, i.e.\ on the properties of the integrand and on the associated numerical
quadrature. The solution of the
differential equation is not in the focus of our work; we only use some very simple (but nevertheless
reasonable) methods here. Our specific choice is based on the observation that the magnitude of the
constant factor with which the unknown function $\phi(w, \cdot)$ on the right-hand side of
\eqref{eq:ivp-phinew} is multiplied is such that an A-stable method should be used \cite{HW}.
Therefore, as the simplest possible choice among these methods, we have suggested the
backward Euler method in our description given above.
Alternatively, one could, e.g., use the trapezoidal method which is also A-stable. This would mean that
the formulas given in eqs.~\eqref{eq:bweuler} and \eqref{eq:bweuler-tilde} would have to be
replaced by
\begin{subequations}
\begin{align}
\label{eq:trap}
\phi_k &\mapsfrom \frac 1 {1 + h \mathrm e^{w_k} /2}
\Bigg( \left(1 - \frac h 2 \mathrm e^{w_k} \right) \phi_k \\
& \phantom{mapsfrom} {} \qquad \qquad
+ \frac h 2 (-1)^{\lfloor \alpha \rfloor}
\frac{\sin \alpha \pi}{\pi}
\mathrm e^{w_k q_{\mathrm D}}
(y^{(\lceil \alpha \rceil)}(t_{j}) + y^{(\lceil \alpha \rceil)}(t_{j-1}))
\Bigg)
\nonumber
\end{align}
and
\begin{align}
\label{eq:traptilde}
\tilde \phi_k &\mapsfrom \frac 1 {1 + h \mathrm e^{\tilde w_k} /2}
\Bigg( \left(1 - \frac h 2 \mathrm e^{\tilde w_k} \right) \tilde \phi_k \\
& \phantom{mapsfrom} {} \qquad \qquad
+ \frac h 2 (-1)^{\lfloor \alpha \rfloor}
\frac{\sin \alpha \pi}{\pi}
\mathrm e^{\tilde w_k q_{\mathrm D}}
(y^{(\lceil \alpha \rceil)}(t_{j}) + y^{(\lceil \alpha \rceil)}(t_{j-1}))
\Bigg)
\nonumber
\end{align}
\end{subequations}
respectively. In the following section, we shall report the results of our numerical experiments
for both variants.
\begin{remark}
From a formal point of view, eqs. \eqref{eq:bweuler} and \eqref{eq:bweuler-tilde} have
exactly the same structure. From a numerical perspective, however, there is a significant
difference between them that needs to be taken into account when implementing the
algorithm in finite-precision arithmetic: In view of the definitions of the quantities $w_k$ and
$\tilde w_k$ given in step 2b of the algorithm and the facts that the Gauss-Laguerre nodes
$x_k^{\mathrm{GLa}}$ are strictly positive for all $k$ and that $q_{\mathrm D} \in (0,1)$,
it is clear that $w_k < 0$ for all $k$, and hence the powers $\mathrm e^{w_k}$ and
$\mathrm e^{w_k q_{\mathrm D}}$ that occur in eq.\ \eqref{eq:bweuler} are always in the
interval $(0,1)$. It may be, if $|w_k|$ is very large, that the calculation of $\mathrm e^{w_k}$
in IEEE arithmetic results in an underflow, but this number can then safely be replaced by $0$
without causing any problems. Therefore, eq.\ \eqref{eq:bweuler} can be implemented directly
in its given form. On the other hand, using the same arguments we can see that $\tilde w_k > 0$
for all $k$, and indeed (at least if $k$ is large and/or $q_{\mathrm D}$ is close to $1$)
$\tilde w_k$ may be so large that the computation of $\mathrm e^{\tilde w_k}$ results in a fatal
overflow. For this reason, in a practical implementation, eq.\ \eqref{eq:bweuler-tilde} should
not be used in its form given above but in the equivalent form
\begin{equation}
\label{eq:bweuler-tilde2}
\tag{14c}
\tilde \phi_k \mapsfrom \frac {\mathrm e^{-\tilde w_k}} {\mathrm e^{-\tilde w_k} + h} \tilde \phi_k
+ h (-1)^{\lfloor \alpha \rfloor} \frac{\sin \alpha \pi}{\pi}
\frac {\mathrm e^{\tilde w_k (q_{\mathrm D} - 1)}} {\mathrm e^{-\tilde w_k} + h}
y^{(\lceil \alpha \rceil)}(t_{j})
\end{equation}
that avoids all potential overflows.
Evidently, an analogous comment applies to eqs.\ \eqref{eq:trap} and \eqref{eq:traptilde}.
\end{remark}
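The difference between the two forms can be demonstrated directly (the values of $\tilde w_k$, $h$ and $q_{\mathrm D}$ below are hypothetical and deliberately extreme):

```python
import numpy as np

# For very large tilde w_k, the literal form of eq. (bweuler-tilde)
# produces inf/inf = nan in IEEE double precision, whereas the
# rearranged form (bweuler-tilde2) remains well defined.
twk, h, qD = 2000.0, 1.0e-3, 0.4
with np.errstate(over="ignore", under="ignore", invalid="ignore"):
    naive = np.exp(twk * qD) / (1.0 + h * np.exp(twk))    # overflow: nan
    safe = np.exp(twk * (qD - 1.0)) / (np.exp(-twk) + h)  # harmless: 0.0
print(naive, safe)
```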
\section{Experimental Results and Conclusion}
In \cite{Di2021}, we have reported some numerical results illustrating the convergence behaviour
of the RISS method proposed by Hinze et al.\ \cite{HSL2019}. Here now we
present similar numerical results obtained with the new algorithm.
A comparison with the corresponding data shown in \cite{Di2021} reveals that, in many cases,
our new method requires a smaller number of quadrature nodes than the RISS approach
(with otherwise identical parameters) to obtain approximations of a similar quality.
A typical result is shown in Figure \ref{fig:ex1} where we have numerically computed
the Caputo derivative of order $0.4$ of the function $y(t) = t^{1.6}$ over the interval $[0,3]$.
The calculations have been performed on an equispaced grid for the interval $[0,3]$
with various different step sizes (i.e.\ with different numbers of grid points) and different
choices of the number $K$ of quadrature nodes. Both the backward Euler and the trapezoidal scheme
have been tried as the ODE solvers. The figure exhibits the maximal absolute error over all grid
points.
\begin{figure}
\vskip-0.4cm
\centering
\includegraphics[width=0.7\textwidth]{ex1}
\caption{\label{fig:ex1}Maximal errors for the calculation of $D_0^{0.4} y(t)$ with $t \in [0, 3]$ for
$y(t) = t^{1.6}$ using different step sizes for the ODE solver and different numbers of
quadrature nodes.}
\end{figure}
The findings of this example can be summarized as follows:
\begin{itemize}
\item The trapezoidal method clearly leads to a more accurate approximation than the
backward Euler method. Obviously, in view of the trapezoidal method's higher
convergence order, this behaviour is exactly what would have been expected.
\item The number of quadrature points, i.e.\ our parameter $K$, only has a very small influence
on the overall error. Therefore, one can afford to work with a relatively small value of $K$,
thus significantly reducing the computational cost, without a substantial loss of accuracy.
\item A comparison of the results for the trapezoidal method shows that a certain kind of saturation
is reached at an error level of $4.5\cdot10^{-6}$ for $K=40$, i.e.\ we do not achieve
a better accuracy even if we continue to decrease the step size for the ODE solver.
This is an indication that this level reflects the contribution of the total error caused
by the quadrature formula. If a smaller error is required, one therefore needs to use more
quadrature nodes. For example, choosing $K = 70$ leads to a saturation level
of approximately $3.2\cdot10^{-7}$. This indicates that the saturation level
might be proportional to $\exp(-c K^{0.6})$ for some constant $c > 0$, leading to the conjecture that the exponent
of $K$ in this expression could be related to the smoothness properties of the
function $y$ (note that the function $y'$ that appears in the formulas which describe our
algorithm satisfies a Lipschitz condition of order $0.6$).
The fact that this phenomenon is hardly visible if the backward Euler method is used
is due to the fact that this ODE solver has a larger error which only just about reaches this
range for the chosen step sizes. It would be possible to more clearly observe a similar behaviour
if the step sizes were reduced even more.
\end{itemize}
We have also used a number of other test cases; the behaviour has usually been very similar.
Also, the findings of \cite{Di2021} for a significantly different method based on a related fundamental
approach point into the same direction. In our future work, we will attempt to provide a thorough
analysis of the approximation properties of methods of this type that should confirm the
experimental results.
\section{Introduction}
Optical absorption spectroscopy is an important material characterization technique for identifying material species, atomic structures, reactions, and for understanding the electronic structure and light-matter interaction.\cite{kaufmann_ultraviolet_2002, zhang_characterization_2019}
Optical light wavelengths, spanning UV, visible light and near-IR (350 - 750 nm), are used to probe materials including crystalline semiconductors, molecules, metals and amorphous materials.
Due to the direct relation between structure and physical properties, \textit{in situ} UV-vis spectroscopy is commonly used to monitor the crystallization, composition, and phase transition at relevant time and length scales by tracking the evolving optical properties.\cite{zhang_situ_2019, buckner_situ_2019, babbe_optical_2020}
Its non-destructive nature and inexpensive access makes it a popular tool to probe dynamic excitation processes in materials.
The absorption process entails the interaction between incident electromagnetic waves and the material electronic structure.
Specifically in semiconductors, the electrons are excited from the valence band to the conduction band by absorbing the photon energy. Therefore, optical spectroscopy is often used to determine the optical bandgap ($E_\mathrm{g,opt}$) by considering the onset of the absorption edge.\cite{dolgonos_direct_2016}
Intrinsically, the optical bandgap is largely determined by the electronic bandgap $E_\mathrm{g,e}$ between the band edges, considering only vertical transitions due to lack of momentum transfer from the photons ($q \rightarrow 0$).
However, in some materials where direct forbidden transitions are present, $E_\mathrm{g,opt}$ can be larger than $E_\mathrm{g,e}$.
Additionally, when the electron-hole Coulomb interaction is strong, resonant exciton peaks form below the band-to-band absorption edge, leading to a below-bandgap $E_\mathrm{g,opt}$.\cite{koch_semiconductor_2006, rohlfing_excitonic_1998-1, elliott_intensity_1957, ogawa_optical_1991}
Moreover, charged defects with energy levels close to the band edges can cause the absorption edge to broaden and extend into the lower-energy range. \cite{redfield_effect_1963}
Extrinsically, the measured $E_\mathrm{g,opt}$ is influenced by many factors, including the experimental setup, the sample thickness, etc.
Different mathematical procedures for extrapolating a linear fit of the spectral tail often yield inconsistent values of $E_\mathrm{g,opt}$ even for the same sample. \cite{raciti_optical_2017, zanatta_revisiting_2019}
These factors interplay and complicate the determination of $E_\mathrm{g,opt}$.
To this end, a database of internally consistent, computed optical absorption spectra using a standardized approach will be of benefit in providing a common reference point for experiments.
By comparing with the absorption spectra of crystalline, defect-free materials, experiments can infer whether other influences, beyond band-to-band transitions, such as experimental setup and interpretation, defect formation, excitons, electron-phonon coupling etc., are present.
From a materials design perspective, these data provide an important yet currently not widely available descriptor for optoelectronic materials, the optical band gap $E_\mathrm{g,opt}$.
In this study, we developed a high-throughput computational workflow for generating optical absorption spectrum using density functional theory (DFT) as implemented in the VASP package,\cite{kresse_efficient_1996-2, kresse_efficiency_1996-2}
with projector augmented wave core potentials (reciprocal space projection). \cite{blochl_projector_1994-2}
We used the independent-particle approximation (IPA) to compute the frequency-dependent dielectric functions for more than 1000 solid materials chosen from the Materials Project.\cite{jain_commentary_2013}
Benchmarks against the random-phase approximation (RPA) and against experiments, using the Pearson correlation coefficient as the metric, show excellent agreement when a scissor shift equal to the difference between the PBE band gap and the HSE band gap is incorporated. We anticipate that the workflow and growing dataset will be useful to the general materials community, and in particular for designing optoelectronic materials.
\section{Methods}
For each material, the structure was relaxed using the PBEsol functional,\cite{perdew_restoring_2008-4,perdew_erratum_2009-2} followed by a more rigorous relaxation with the r2SCAN functional,\cite{furness_accurate_2020} as implemented in VASP.\cite{kresse_efficient_1996-2, kresse_efficiency_1996-2}
The structures were relaxed until the forces were converged to 1 meV/\AA, and the energy to 10$^{-6}$ eV.
The static DFT calculation was performed using the tetrahedron smearing method, with the final wavefunctions then used for the following optical calculation.
The optical calculation was performed within the independent-particle approximation (IPA), with two times the number of bands initially considered in the static calculation to ensure the absorption spectrum covers a sufficient energy range.
The \textit{k}-point density is chosen to be high (400/atom) to ensure convergence against the density of states.
Here, the imaginary part of the frequency-dependent microscopic dielectric function is calculated:
\begin{equation}
\begin{split}
\epsilon_2 = \frac{4\pi e^2}{\Omega} \lim_{q\rightarrow0}\frac{1}{q^2}
& \sum_{v,c,\mathrm{k}} 2 w_\mathrm{k} \delta(E_{ck} - E_{vk} - \hbar\omega) \\
& \times \braket{u_{c,k + qe_\alpha}}{u_{vk}} \braket{u_{vk}}{u_{c,k+qe_\beta}}
\end{split}
\end{equation}
where $\Omega$ is the unit cell volume, $e$ is the electron charge, $q$ is the photon momentum, $\omega$ is the photon frequency, and $c$, $v$ and $k$ denote the conduction band, valence band, and the k-point, respectively. $u$ is the electron wavefunction and the subscript denotes a k-point and band. Here, $\ket{u_{nk+qe_\alpha}}$ can be obtained using first-order perturbation theory.
The real part of the dielectric function is calculated via the Kramers-Kronig relation:
\[ \epsilon_1(\omega) = 1 +\frac{2}{\pi} \int_0^{\infty}\frac{\epsilon_2(\omega')\omega'}{\omega'^2 - \omega^2} d\omega'\]
The optical absorption coefficient relies on the extinction coefficient $\Tilde{k}$ and the refractive index $\Tilde{n}$, where
\[ \Tilde{k}^2 = \Tilde{n}^2 - \epsilon_1 \]
and $\Tilde{n}$ follows:
\[ \Tilde{n} = \frac{\epsilon_2}{2\Tilde{k}} \]
The frequency-dependent absorption coefficient depends on the extinction coefficient and the incident photon frequency:
\[ \alpha = \frac{2\omega}{c} \Tilde{k} \]
Combining the above equations, we arrive at the absorption coefficient $\alpha$ as a function of both the real and imaginary parts of the dielectric function:
\[ \alpha = \frac{2\pi E}{hc} \sqrt{2}\times \sqrt{ \sqrt{\epsilon_1^2 + \epsilon_2^2} - \epsilon_1} \]
where $E$ is the incident photon energy and $c$ is the speed of light in vacuum.
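Combining these relations in code is straightforward; a sketch (photon energies in eV, $\alpha$ in cm$^{-1}$; the function name is ours):

```python
import numpy as np

HBAR_EV_S = 6.582119569e-16   # reduced Planck constant (eV s)
C_CM_S = 2.99792458e10        # speed of light (cm/s)

def absorption_coefficient(E, eps1, eps2):
    """Absorption coefficient alpha in cm^-1 from the real (eps1) and
    imaginary (eps2) parts of the dielectric function, for photon
    energies E given in eV, following the formulas above."""
    E, eps1, eps2 = map(np.asarray, (E, eps1, eps2))
    # extinction coefficient: k = sqrt((|eps| - eps1) / 2)
    k_ext = np.sqrt(np.maximum(np.sqrt(eps1**2 + eps2**2) - eps1, 0.0) / 2.0)
    return 2.0 * (E / HBAR_EV_S) / C_CM_S * k_ext   # alpha = 2*omega*k/c
```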
At the long-wavelength limit ($q \rightarrow 0$) the dielectric matrix determines the optical properties accessible to optical probes.
In IPA, the macroscopic dielectric is approximated to be the same as the microscopic dielectric by neglecting the local field effect, hence $\epsilon_{\mathrm{mac}} \approx \mathrm{lim}_{\mathbf{q} \rightarrow 0} \epsilon_{0,0}(\mathbf{q},\omega)$.
In order to include the local field effect, a random-phase-approximation (RPA) calculation is needed, which yields $\epsilon_{\mathrm{mac}} = \left( \lim_{q \to 0} \epsilon^{-1}_{0,0}(\mathbf{q},\omega) \right)^{-1}$, i.e.\ the inverse of the head element of the inverted dielectric matrix.
The RPA calculation uses the wavefunctions $u$ and the corresponding \textit{k}-space derivatives $\partial u/ \partial k$ generated by the previous IPA calculation and computes the response function $\chi$.
\section{High-throughput workflow}
\subsection{Screening}
As a first step towards building an absorption coefficient database, we consider solids suitable for photovoltaic applications, specifically semiconductors whose absorption spectrum overlaps with visible light.
Hence we screened compounds calculated by the Materials Project with three criteria.
First, an electronic band gap between 0.3 and 3 eV, corresponding to photon wavelengths from 4100 nm (infrared) to 413 nm (violet);
Second, to ensure structural stability, the energy above hull was required to be less than 0.02 eV/atom. This is comparable to the thermal energy $k_B T$ of 25 meV at room temperature, which can give rise to entropic terms that reduce the free energy.
Finally, we limited the number of sites in the unit cell to less than 10 to ensure a manageable memory footprint for the optical calculations (Fig.~\ref{fig:workflow}).
This set of filtering results in 1112 candidates, for which the IPA absorption spectra were calculated and integrated into the Materials Project.
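The three criteria amount to a simple filter. A sketch (the dictionary keys and the example entries are our own, not the Materials Project schema):

```python
def passes_screen(mat):
    """Apply the three screening criteria described above."""
    return (0.3 <= mat["band_gap_eV"] <= 3.0          # optical-range gap
            and mat["energy_above_hull_eV"] < 0.02    # thermodynamic stability
            and mat["nsites"] < 10)                   # manageable cell size

# Fabricated example entries: only the first one passes all three tests.
materials = [
    {"band_gap_eV": 1.1, "energy_above_hull_eV": 0.00, "nsites": 2},
    {"band_gap_eV": 5.5, "energy_above_hull_eV": 0.00, "nsites": 4},  # gap too wide
    {"band_gap_eV": 1.0, "energy_above_hull_eV": 0.05, "nsites": 4},  # unstable
]
candidates = [m for m in materials if passes_screen(m)]
```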
\subsection{Workflow}
We carried out a high-throughput computational workflow on the candidates resulting from the screening step.
The workflow consists of three parts: 1) structure relaxation, 2) static calculation, and 3) optical calculation, as described in Fig.~\ref{fig:workflow}.
The structural relaxations are carried out using the PBEsol functional, followed by an additional relaxation with the r2SCAN functional to improve accuracy.
The static calculation provides a more accurate total energy and the ground state wavefunction.
The following optical calculation at the IPA level then calculates the dielectric functions, by which the absorption coefficient is derived.
To examine the importance of including the local field effects, we benchmarked the RPA calculation against IPA for a subset of the materials where experimental results are available.
The validation is shown in the following section.
\begin{figure}
\centering
\includegraphics[scale=0.24]{figures/screening.pdf}
\caption{The filtering process and the workflow steps used in this study.}
\label{fig:workflow}
\end{figure}
\section{Technical Validation}
\subsection{IPA vs. RPA}
The calculated spectra were benchmarked against experimental results, some of which are shown in Fig.~\ref{fig:benchmark}.
Even with the lower-level IPA calculations, the experimental peak shapes and intensities are captured reasonably well.
Small spectral differences are observed between the IPA and RPA at higher energies, where the local field effects are more prominent. \cite{wiser_dielectric_1963,louie_local-field_1975}
In order to numerically quantify spectral matching, we calculated the Pearson correlation coefficient between each theoretical spectrum and its experimental counterpart, using the equation:
\begin{equation}
P = \frac{\Sigma (x_i - \bar{x})(y_i -\bar{y})}{\sqrt{\Sigma(x_i -\bar{x})^2 \Sigma(y_i -\bar{y})^2}}
\end{equation}
where $x_i$ and $y_i$ are the calculated and experimental absorption coefficients respectively, and $\bar{x}$ and $\bar{y}$ are the average value of the $x_i$ and $y_i$, respectively.
The Pearson coefficient ranges between $-1$ and one, and a value closer to 1.0 indicates highly correlated spectra.
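Computationally this is a one-liner; a small sketch for two spectra sampled on a common energy grid:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between a calculated spectrum x and an
    experimental spectrum y sampled on the same energy grid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))
```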
The coefficients are plotted as a color map in Fig.~\ref{fig:pearson_co}.
For the set of materials with experimental data available, the RPA calculation shows marginal improvement in $P$ in some materials.
In most cases, the differences are negligible, suggesting that the IPA is sufficient to reproduce the spectra.
Considering the significantly higher cost of the RPA calculation, only the IPA spectrum was calculated for the remaining compounds ($>$1000).
We note that for a few materials such as \ce{MgF2} and \ce{WS2}, a sharp peak before the main onset is measured but is absent in our calculations, indicating possible formation of excitons, which can only be captured by many-body theories, for example by solving the Bethe-Salpeter equation.
\begin{figure*}
\centering
\includegraphics[scale=0.28]{figures/benchmark.png}
\caption{Example comparison between absorption spectrum by IPA, RPA and experiments. Full data sets are included in SI.}
\label{fig:benchmark}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.7]{figures/pearson_co.png}
\caption{Pearson correlation between simulated spectra and experiments}
\label{fig:pearson_co}
\end{figure*}
\subsection{HSE band gap correction}
For some materials (e.g.\ Si, GaAs, CdS, ZnSe), the onsets of the calculated IPA/RPA absorption are lower than in experiments. This is an expected outcome due to the underestimation of band gaps by the PBE functional.
To correct the spurious self-interaction in GGA, we used the HSE functional to obtain a more accurate bandgap,\cite{heyd_hybrid_2003-2,heyd_erratum_2006} which includes short-range Hartree-Fock exchange and is known to decrease the error and improve the agreement between the calculated $E_g$ and corresponding experimental values. \cite{crowley_resolution_2016, garza_predicting_2016}
The comparisons in $E_\mathrm{g}$ between HSE and PBE band gaps are shown in Fig.\ref{fig:bandgap}.
As expected, most of the HSE band gaps are above or close ($<$ 0.2 eV) to the PBE band gaps.
A few outliers include: B ($\Delta E_{g,\mathrm{HSE}}=-1.35$ eV), \ce{Al2O3} ($\Delta E_{g,\mathrm{HSE}}=-0.64$ eV), \ce{H2O} ($\Delta E_{g,\mathrm{HSE}}=-0.41$ eV), \ce{CaSO4} ($\Delta E_{g,\mathrm{HSE}}=-4.09$ eV), and \ce{Se} ($\Delta E_{g,\mathrm{HSE}}=-1.18$ eV).
Using the HSE results, the spectra are shifted by $\Delta E_{g,\mathrm{HSE}}$, the energy difference between the PBE band gap and HSE band gap.
This results in a rigid blue shift for most of the spectra, and in many cases better match the experimental data (see Figure \ref{fig:benchmark}).
A numerical description is reflected in the Pearson correlation for the shifted spectra (Fig.~\ref{fig:pearson_co}), where for materials (e.g.\ CsI, ZnO, Te) with previously lower correlation, this scissor correction improved $P$ significantly.
This indicates that, to simulate a reasonably accurate absorption spectrum, carrying out a PBE-level frequency-dependent dielectric calculation with a correction from an HSE band gap is sufficient.
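The scissor correction itself is a rigid shift of the spectrum along the energy axis; a minimal sketch (the handling of the grid edges by interpolation and zero-padding is our choice):

```python
import numpy as np

def scissor_shift(E, alpha, delta_eg):
    """Blue-shift an absorption spectrum by delta_eg (eV):
    alpha_shifted(E) = alpha(E - delta_eg), zero below the new onset."""
    return np.interp(np.asarray(E) - delta_eg, E, alpha, left=0.0)
```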
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/hse_pbe_bandgap.png}
\caption{HSE and PBE band gaps for the materials used for benchmark. The color bar represents $\Delta E_{g,\mathrm{,HSE}}$. The gray dashed line corresponds to 1:1 ratio of $E_{g,\mathrm{,HSE}}$/$E_{g,\mathrm{,PBE}}$, and the shaded region represent a $\pm 0.2$ eV window. }
\label{fig:bandgap}
\end{figure}
\section{Data Records}
The calculated spectra are available through the Materials Project website (www.materialsproject.org) and can be downloaded using the REST API. The results are also available in the form of a JSON file that can be downloaded directly from Materials Project.
\paragraph{Data representations:}
The data for each of the calculated compounds are stored in a list and are provided as a JSON file. For each compound, there are key values, such as ``energy'' and ``density'', that point to the appropriate property (Table~\ref{tab:json}). Other keys include the electronic band gap, the imaginary and real parts of the dielectric functions, as well as the number of \textit{k}-points.
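As a sketch of how such a record can be consumed (the field names follow the table of data fields below; the record content here is fabricated purely for illustration):

```python
import json

# Parse one (fabricated) record and locate the first absorbing energy.
record = json.loads("""{
    "energies": [1.0, 2.0, 3.0],
    "optical_absorption_co": [0.0, 1.0e4, 5.0e4],
    "bandgap": 1.5,
    "nkpoints": 1200
}""")
onset = next(E for E, a in zip(record["energies"],
                               record["optical_absorption_co"]) if a > 0)
print(record["bandgap"], onset)  # band gap (eV), first absorbing energy (eV)
```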
\begin{table*}[]
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
json fields & units & physical property \\ \hline
energies & eV & Absorption energy corresponding to optic wavelength \\
energy\_max & eV & The highest energy of the spectrum for this calculation \\
optical\_absorption\_co & cm$^{-1}$ & Absorption coefficient \\
average\_real\_dielectric & none & Real part of the frequency-dependent dielectric function, averaged over the tensor components \\
average\_imag\_dielectric & none & Imaginary part of the frequency-dependent dielectric function, averaged over the tensor components \\
bandgap & eV & The electronic band gap \\
nkpoints & none & The number of \textit{k}-points used in the optic calculation \\
\hline
\end{tabular}%
}
\caption{The data fields of a downloadable absorption spectrum from MP.}
\label{tab:json}
\end{table*}
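A minimal sketch of reading one such record in Python, using the field names of the table above. The record contents here are invented placeholder values, not actual Materials Project data.

```python
import json

# Placeholder record mimicking the JSON schema in the table above;
# all numerical values are made up for illustration only.
record_json = """
{
  "energies": [1.0, 2.0, 3.0],
  "energy_max": 3.0,
  "optical_absorption_co": [1.0e3, 5.0e4, 2.0e5],
  "average_real_dielectric": [4.1, 3.9, 3.5],
  "average_imag_dielectric": [0.1, 0.8, 1.6],
  "bandgap": 1.8,
  "nkpoints": 1000
}
"""

record = json.loads(record_json)

# Pair each photon energy (eV) with its absorption coefficient (cm^-1),
# keeping only points above the electronic band gap.
above_gap = [
    (e, a)
    for e, a in zip(record["energies"], record["optical_absorption_co"])
    if e > record["bandgap"]
]
```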
\section{Usage Note}
We present a database of calculated optical absorption spectra for 1,116 compounds. The data should be of broad interest for materials applications that depend on electronic structure, particularly optoelectronic and other electronic properties.
For example, we expect this database to be used in the understanding of optical absorption and in the search for new solar absorbers with unique and tailored properties.
The above use cases are facilitated by the Materials Project website interface which allows users to search for materials with associated metrics, such as stability, band gap and/or density.
With this new dataset and underlying data and software infrastructure, users will now be able to request calculated optical absorption coefficients and frequency-dependent dielectric functions. Furthermore, these capabilities open up opportunities in data-driven endeavors, such as the application of machine learning techniques to identify structural and chemical features that are key to highly absorbing materials, hence extrapolating the data beyond its current coverage and accelerating the discovery of novel optoelectronic materials.
\section{Acknowledgement}
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. We thank Rachel Woods-Robinson for helpful discussions. The work is supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under contract no. DE-AC02-05-CH11231 (Materials Project program KC23MP).
\section*{Introduction}\label{section 0}
Let $E$ and $F$ be complex Banach spaces and let $U$ be an open subset of $E$. A mapping $f\colon U\to F$ is said to be \textit{locally compact (resp., locally weakly compact, locally Rosenthal, locally Asplund)} if every point $x\in U$ has a neighborhood $U_x\subseteq U$ such that $f(U_x)$ is relatively compact (resp., relatively weakly compact, Rosenthal, Asplund) in $F$.
R. M. Aron and M. Schottenloher \cite{AroSch-76} proved that every locally compact holomorphic mapping $f\colon E\to F$ admits a factorization of the form $f=T\circ g$, where $G$ is a Banach space, $T\colon G\to F$ belongs to the ideal of compact operators and $g\colon E\to G$ is holomorphic. Analogous results were stated by R. Ryan \cite{Rya-88}, M. Lindstr\"om \cite{Lin-89} and N. Robertson \cite{Rob-92} for locally weakly compact holomorphic mappings, locally Rosenthal holomorphic mappings and locally Asplund holomorphic mappings, respectively, with $T$ belonging to the corresponding ideal of bounded linear operators.
In \cite{GonGut-00}, M. Gonz\'alez and J. M. Guti\'errez provided a unified approach to this problem by introducing a new class of holomorphic mappings which allowed them to extend the aforementioned factorizations to every closed surjective operator ideal.
Since many operator ideals $\I$ can be naturally associated to polynomial ideals $\P$, R. Aron, G. Botelho, D. Pellegrino and P. Rueda \cite{AroBotPelRue-10} initiated a research program whose objective was to relate those holomorphic mappings $f$ that admit a factorization $f=T\circ g$, where $T$ belongs to an operator ideal $\I$ and $g$ is holomorphic, with those $f$ whose derivatives belong to the associated composition polynomial ideal $\I\circ\P$.
Our aim is to present here some advances in this research program on the factorization of bounded holomorphic mappings $f=T\circ g$, where $T$ is in an operator ideal $\I$ and $g$ is a bounded holomorphic mapping (see Section 3 in \cite{AroBotPelRue-10}).
This paper is organized as follows. In Section \ref{section 1}, we set up a linearization theorem of bounded holomorphic mappings due to J. Mujica \cite{Muj-91} that will be a key tool to obtain our results.
In the spirit of the definition of operator ideal, we introduce in Section \ref{section 2} the concept of ideal of (bounded) holomorphic mappings. Then, we adapt to the holomorphic setting a method to produce operator ideals from a given operator ideal $\I$ and so we obtain ideals of bounded holomorphic mappings by composition with linear operators of $\I$. This useful technique has been applied with the same purpose in different settings as, for example, polynomial and multilinear \cite{BotPelRue-07}, polynomial and holomorphic \cite{AroBotPelRue-10}, polynomial \cite{AroRue-12} and Lipschitz settings \cite{AchRueSanYah-16, Saa-17}.
In Section \ref{section 3}, we see that some known ideals of holomorphic mappings and bounded holomorphic mappings can be produced by this procedure as, for instance, ideals of holomorphic mappings with local range or (global) range of bounded type.
Furthermore, we introduce and analyze new ideals of bounded holomorphic mappings such as $p$-integral and $p$-nuclear holomorphic mappings with $1\leq p<\infty$. We prove that these types of ideals are generated by the method of composition, but we also give some examples of ideals of bounded holomorphic mappings that cannot be produced in this way.
The study of holomorphic mappings on Banach spaces which have relatively (weakly) compact range was initiated by J. Mujica in \cite{Muj-91} and completed by J. M. Sepulcre and the last two authors in \cite{JimRuiSep-22}. We prove here that every $p$-nuclear ($p$-integral) holomorphic mapping from $U$ to $F$ has relatively compact (weakly compact) range.
On the other hand, the dual ideal $\I^\mathrm{dual}(E,F)$ of an operator ideal $\I(E,F)$ is the operator ideal formed by all bounded linear operators $T\colon E\to F$ such that its adjoint operator $T^*$ belongs to $\I(F^*,E^*)$. This procedure to produce ideals of linear operators has been applied to generate ideals of other types of mappings: polynomials and multilinear mappings between Banach spaces \cite{Pie-83, FloGar-03}, homogeneous polynomials between Banach spaces \cite{BotCalMor-14} and Lipschitz mappings from metric spaces into Banach spaces \cite{AchRueSanYah-16, Saa-17}.
Motivated by these works and with the aid of the transpose of a bounded holomorphic mapping, we carry out in Section \ref{section 4} a similar study of the dual procedure in the setting of bounded holomorphic mappings. To be more precise, we introduce the bounded-holomorphic dual $(\I^{\H^\infty})^\mathrm{dual}$ of an operator ideal $\I$ and prove that $(\I^{\H^\infty})^\mathrm{dual}$ is not only a bounded-holomorphic ideal but also belongs to the class of composition ideals, that is, the bounded holomorphic mappings belonging to $(\I^{\H^\infty})^\mathrm{dual}$ are exactly those that admit a factorization through $\I^\mathrm{dual}$.
We refer to the monograph by A. Pietsch \cite{Pie-80} for the theory of operator ideals, the book by J. Diestel, H. Jarchow and A. Tonge \cite{DisJarTon-95} for the theory of integral and nuclear operators, and the book by J. Mujica \cite{Muj-86} for the theory of holomorphic mappings on infinite-dimensional spaces.
\section{Preliminaries}\label{section 1}
From now on, $E$ and $F$ will denote complex Banach spaces and $U$ an open subset of $E$.
Let us recall that a mapping $f\colon U\to F$ is said to be \textit{holomorphic} if for each $a\in U$, there exist an open ball $B(a,r)$ with center at $a$ and radius $r>0$ contained in $U$ and a sequence of continuous $m$-homogeneous polynomials $(P_m)_{m\in\N_0}$ from $E$ into $F$ such that
$$
f(x)=\sum_{m=0}^\infty P_m(x-a),
$$
where the series converges uniformly for $x\in B(a,r)$.
A mapping $P\colon E\to F$ is said to be a \textit{continuous $m$-homogeneous polynomial} if there exists a continuous $m$-linear mapping $A\colon E^m\to F$ such that $P(x)=A(x,\stackrel{(m}{\ldots},x)$ for all $x\in E$.
If $U\subseteq E$ and $V\subseteq F$ are open sets, $\H(U,V)$ will represent the set of all holomorphic mappings from $U$ to $V$. We will denote by $\H(U,F)$ the linear space of all holomorphic mappings from $U$ into $F$, and by $\H^\infty(U,F)$ the subspace of all $f\in\H(U,F)$ such that $f(U)$ is bounded in $F$.
In the case $F=\C$, we will write $\H(U)$ and $\H^\infty(U)$ instead of $\H(U,\C)$ and $\H^\infty(U,\C)$, respectively.
It is known that the linear space $\H^\infty(U)$, equipped with the uniform norm:
$$
\left\|f\right\|_\infty=\sup\left\{\left|f(x)\right|\colon x\in U\right\}\qquad \left(f\in\H^\infty(U)\right),
$$
is a dual Banach space. We denote by $\G^\infty(U)$ the \textit{geometric predual of $\H^\infty(U)$}. Let us recall that $\G^\infty(U)$ is the norm-closed linear hull in $\H^\infty(U)^*$ of the set $\left\{\delta(x)\colon x\in U\right\}$ of evaluation functionals defined by
$$
\delta(x)(f)=f(x)\qquad \left(f\in\H^\infty(U)\right).
$$
The following linearization theorem by J. Mujica \cite{Muj-91} will be an essential tool to establish our results.
Given a complex Banach space $E$, we will denote by $B_E$, $S_E$ and $E^*$ the closed unit ball, the unit sphere and the dual space of $E$, respectively. If $E$ and $F$ are Banach spaces, $\L(E,F)$ denotes the Banach space of all continuous linear operators from $E$ into $F$ with the canonical operator norm.
\begin{theorem}\cite[Theorem 2.1]{Muj-91}\label{teo0}
Let $E$ be a complex Banach space and $U$ an open subset of $E$.
\begin{enumerate}
\item $\H^\infty(U)$ is isometrically isomorphic to $\G^\infty(U)^*$, under the mapping $J_U\colon\H^\infty(U)\to\G^\infty(U)^*$ given by
$$
J_U(f)(\gamma)=\gamma(f)\qquad \left(\gamma\in\G^\infty(U),\; f\in\H^\infty(U)\right).
$$
\item The mapping $g_U\colon U\to\G^\infty(U)$ defined by $g_U(x)=\delta(x)$ for all $x\in U$ is holomorphic with $g_U(U)\subseteq S_{\G^\infty(U)}$.
\item For each complex Banach space $F$ and each mapping $f\in\H^\infty(U,F)$, there exists a unique operator $T_f\in\L(\G^\infty(U),F)$ such that $T_f\circ g_U=f$.
Furthermore, $\left\|T_f\right\|=\left\|f\right\|_\infty$.
\item The mapping $f\mapsto T_f$ is an isometric isomorphism from $\H^\infty(U,F)$ onto $\L(\G^\infty(U),F)$.$\hfill\qed$
\end{enumerate}
\end{theorem}
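In other words, part (3) states that every $f\in\H^\infty(U,F)$ factors isometrically through the predual $\G^\infty(U)$ in the form
$$
U\stackrel{g_U}{\longrightarrow}\G^\infty(U)\stackrel{T_f}{\longrightarrow}F,\qquad f=T_f\circ g_U,\qquad \left\|T_f\right\|=\left\|f\right\|_\infty.
$$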
We now recall some concepts and results of the theory of operator ideals which have been borrowed from \cite{DisJarTon-95, Pie-80}.
An operator ideal $\I(E,F)$ (see definition in \cite[1.1.1]{Pie-80}) is said to be:
\begin{enumerate}
\item \textit{closed} if $\I(E,F)$ is closed in $\L(E,F)$ \cite[Section 4.2]{Pie-80}.
\item \textit{injective} if given an operator $T\in\L(E,F)$, a Banach space $G$ and an injective operator $\iota\in\L(F,G)$, we have that $T\in\I(E,F)$ whenever $\iota\circ T\in\I(E,G)$ \cite[Section 4.6]{Pie-80}.
\item \textit{surjective} if given an operator $T\in\L(E,F)$, a Banach space $G$ and a surjective operator $\pi\in\L(G,E)$, we have that $T\in\I(E,F)$ whenever $T\circ\pi\in\I(G,F)$ \cite[Section 4.7]{Pie-80}.
\end{enumerate}
An operator $T\in\L(E,F)$ is said to be \textit{compact (resp., separable, weakly compact, Rosenthal, Asplund)} if $T(B_E)$ is relatively compact (resp., separable, relatively weakly compact, Rosenthal, Asplund) in $F$. We denote by $\F(E,F)$, $\overline{\F}(E,F)$, $\K(E,F)$, $\S(E,F)$, $\W(E,F)$, $\Ro(E,F)$ and $\A(E,F)$ the ideals of bounded finite-rank linear operators, approximable linear operators (i.e., operators which are the norm limits of bounded finite-rank operators), compact linear operators, bounded separable linear operators, weakly compact linear operators, Rosenthal linear operators and Asplund linear operators from $E$ into $F$, respectively. The following inclusions are known:
\begin{align*}
\F(E,F)&\subseteq\overline{\F}(E,F)\subseteq\K(E,F)\subseteq\W(E,F)\subseteq\Ro(E,F)\cap\A(E,F),\\
\K(E,F)&\subseteq\S(E,F).
\end{align*}
\section{Ideal and composition ideal of bounded holomorphic mappings}\label{section 2}
Inspired by the preceding definitions, we introduce the concept of an ideal of (bounded) holomorphic mappings and its different properties.
\begin{definition}\label{def-ideal}
An ideal of holomorphic mappings (or simply, a holomorphic ideal) is a subclass $\I^{\H}$ of $\H$ such that for any complex Banach space $E$, any open subset $U$ of $E$ and any complex Banach space $F$, the components
$$
\I^{\H}(U,F):=\I^{\H}\cap\H(U,F)
$$
satisfy the following three properties:
\begin{enumerate}
\item[(I1)] $\I^{\H}(U,F)$ is a linear subspace of $\H(U,F)$.
\item[(I2)] For any $g\in\H(U)$ and $y\in F$, the mapping $g\cdot y\colon x\mapsto g(x)y$ from $U$ to $F$ is in $\I^{\H}(U,F)$.
\item[(I3)] The ideal property: If $H,G$ are complex Banach spaces, $V$ is an open subset of $H$, $h\in\H(V,U)$, $f\in\I^{\H}(U,F)$ and $S\in\L(F,G)$, then $S\circ f\circ h$ is in $\I^{\H}(V,G)$.
\end{enumerate}
An ideal of bounded holomorphic mappings (or simply, a bounded-holomorphic ideal) is a subclass $\I^{\H^\infty}$ of $\H^\infty$ of the form $\I^{\H^\infty}=\I^{\H}\cap\H^\infty$, where $\I^{\H}$ is a holomorphic ideal.
A bounded-holomorphic ideal $\I^{\H^\infty}$ is said to be:
\begin{enumerate}
\item closed if each component $\I^{\H^\infty}(U,F)$ is a closed subspace of $\H^\infty(U,F)$ with the topology of the supremum norm.
\item injective if for any mapping $f\in\H^\infty(U,F)$, any complex Banach space $G$ and any isometric linear embedding $\iota\colon F\to G$, we have that $f\in\I^{\H^\infty}(U,F)$ whenever $\iota\circ f\in\I^{\H^\infty}(U,G)$.
\item surjective if for any mapping $f\in\H^\infty(U,F)$, any open subset $V$ of a complex Banach space $G$ and any surjective mapping $\pi\in\H(V,U)$, we have that $f\in\I^{\H^\infty}(U,F)$ whenever $f\circ \pi\in\I^{\H^\infty}(V,F)$.
\end{enumerate}
A bounded-holomorphic ideal $\I^{\H^\infty}$ is said to be normed (Banach) if there exists a function $\left\|\cdot\right\|_{\I^{\H^\infty}}\colon\I^{\H^\infty}\to\mathbb{R}_0^+$ such that for every complex Banach space $E$, every open subset $U$ of $E$ and every complex Banach space $F$, the following three conditions are satisfied:
\begin{enumerate}
\item[(N1)] $(\I^{\H^\infty}(U,F),\left\|\cdot\right\|_{\I^{\H^\infty}})$ is a normed (Banach) space with $ \left\|f\right\|_\infty\leq\left\|f\right\|_{\I^{\H^\infty}}$ for all $f\in\I^{\H^\infty}(U,F)$.
\item[(N2)] $\left\|g\cdot y\right\|_{\I^{\H^\infty}}=\left\|g\right\|_\infty\left\|y\right\|$ for every $g\in\H^\infty(U)$ and $y\in F$.
\item[(N3)] If $H,G$ are complex Banach spaces, $V$ is an open subset of $H$, $h\in\H(V,U)$, $f\in\I^{\H^\infty}(U,F)$ and $S\in\L(F,G)$, then $\left\|S\circ f\circ h\right\|_{\I^{\H^\infty}}\leq \left\|S\right\|\left\|f\right\|_{\I^{\H^\infty}}$.
\end{enumerate}
\end{definition}
\begin{comment}
It is possible to propose the following two stronger versions of the ideal property (I3); in both cases, the uniform norms of the bounded holomorphic mapping $h$ or the bounded linear operator $T$ would not appear to be involved to compute the norms of $S\circ f\circ h$ or $S\circ f\circ T$, as also occurs in (N3):
\begin{enumerate}
\item If $H,G$ are complex Banach spaces, $V$ is an open subset of $H$, $h\in\H(V,U)$, $f\in\I^{\H^\infty}(U,F)$ and $S\in\L(F,G)$, then $S\circ f\circ h$ is in $\I^{\H^\infty}(V,G)$ with $\left\|S\circ f\circ h\right\|_{\I^{\H^\infty}}\leq \left\|S\right\|\left\|f\right\|_{\I^{\H^\infty}}$.
\item If $H,G$ are complex Banach spaces, $T\in\L(H,E)$, $f\in\I^{\H^\infty}(U,F)$ and $S\in\L(F,G)$, then $S\circ f\circ \left.T\right|_{T^{-1}(U)}$ is in $\I^{\H^\infty}(T^{-1}(U),G)$ with $\left\|S\circ f\circ \left.T\right|_{T^{-1}(U)}\right\|_{\I^{\H^\infty}}\leq \left\|S\right\|\left\|f\right\|_{\I^{\H^\infty}}$.
\end{enumerate}
\end{comment}
We now recall the composition method to produce ideals of holomorphic mappings. This linear method has been applied in different contexts: multilinear, polynomial, Lipschitz and holomorphic (see, for example, \cite{AchRueSanYah-16, AroBotPelRue-10, BotPelRue-07,GonGut-00, Saa-17}).
\begin{definition}\label{comp-ideal}
Let $E,F$ be complex Banach spaces and $U$ an open set in $E$. Given an operator ideal $\I$, a mapping $f\in\H(U,F)$ (resp. $f\in\H^\infty(U,F)$) belongs to the composition ideal $\I\circ\H$ (resp. $\I\circ\H^\infty$), and we write $f\in\I\circ\H(U,F)$ (resp. $f\in\I\circ\H^\infty(U,F)$), if there are a complex Banach space $G$, an operator $T\in\I(G,F)$ and a mapping $g\in\H(U,G)$ (resp. $g\in\H^\infty(U,G)$) such that $f=T\circ g$.
If $(\I,\left\|\cdot\right\|_\I)$ is a normed operator ideal, we denote
$$
\left\|f\right\|_{\I\circ\H^\infty}=\inf\left\{\left\|T\right\|_\I\left\|g\right\|_\infty\right\},
$$
where the infimum is extended over all representations of $f$ as above.
\end{definition}
The following result states the linearization of the members of the composition ideal $\I\circ\H^\infty$. The first part was proved in \cite{AroBotPelRue-10}, but we have included it here for the convenience of the reader.
\begin{theorem}\cite[Theorem 3.2]{AroBotPelRue-10}\label{ideal}
Let $\I$ be an operator ideal and $f\in\H^\infty(U,F)$. The following conditions are equivalent:
\begin{enumerate}
\item $f$ belongs to $\I\circ\H^\infty(U,F)$.
\item Its linearization $T_f$ is in $\I(\G^\infty(U),F)$.
\end{enumerate}
If $(\I,\left\|\cdot\right\|_\I)$ is a normed operator ideal, we have that $\left\|f\right\|_{\I\circ\H^\infty}=\left\|T_f\right\|_\I$ and the infimum $\left\|f\right\|_{\I\circ\H^\infty}$ is attained at $T_f\circ g_U$ (\textit{Mujica's factorization of $f$}). Furthermore, the mapping $f\mapsto T_f$ is an isometric isomorphism from $(\I\circ\H^\infty(U,F),\left\|\cdot\right\|_{\I\circ\H^\infty})$ onto $(\I(\G^\infty(U),F),\left\|\cdot\right\|_\I)$.
\end{theorem}
\begin{proof}
$(1)\Rightarrow(2)$: If $f\in\I\circ\H^\infty(U,F)$, then there are a complex Banach space $G$, a mapping $g\in\H^\infty(U,G)$ and an operator $T\in\I(G,F)$ such that $f=T\circ g$. Since $f=T_f\circ g_U$ and $g=T_g\circ g_U$ by Theorem \ref{teo0}, it follows that $T_f\circ g_U=T\circ T_g\circ g_U$, which implies that $T_f=T\circ T_g$ since the linear span of $g_U(U)$ is dense in $\G^\infty(U)$, and thus $T_f\in\I(\G^\infty(U),F)$ by the ideal property of $\I$. Further, if $(\I,\left\|\cdot\right\|_\I)$ is normed, we have
$$
\left\|T_f\right\|_\I=\left\|T\circ T_g\right\|_\I\leq\left\|T\right\|_\I\left\|T_g\right\|=\left\|T\right\|_\I\left\|g\right\|_\infty,
$$
and taking the infimum over all representations of $f$, we deduce that $\left\|T_f\right\|_\I\leq\left\|f\right\|_{\I\circ\H^\infty}$.
$(2)\Rightarrow(1)$: If $T_f\in\I(\G^\infty(U),F)$, then $f=T_f\circ g_U\in\I\circ\H^\infty(U,F)$ since $\G^\infty(U)$ is a complex Banach space and $ g_U\in\H^\infty(U,\G^\infty(U))$ by Theorem \ref{teo0}. Moreover, if $(\I,\left\|\cdot\right\|_\I)$ is normed, we have
$$
\left\|f\right\|_{\I\circ\H^\infty}=\left\|T_f\circ g_U\right\|_{\I\circ\H^\infty}\leq\left\|T_f\right\|_\I\left\| g_U\right\|_\infty=\left\|T_f\right\|_\I.
$$
Finally, the last assertion of the statement easily follows by applying the above proof and Theorem \ref{teo0}.
\end{proof}
We now see that some properties of the operator ideal $\I$ are transferred to the composition ideal $\I\circ\H^\infty$.
\begin{corollary}\label{new2}
If $\I$ is a closed (resp., normed, Banach) operator ideal, then $\I\circ\H^\infty$ is a closed (resp., normed, Banach) bounded-holomorphic ideal.
\end{corollary}
\begin{proof}
Let $\I$ be an operator ideal. We have:
\begin{enumerate}
\item[(I1)] If $\alpha_1,\alpha_2\in\mathbb{C}$ and $f_1,f_2\in\I\circ\H^\infty(U,F)$, then $T_{\alpha_1f_1+\alpha_2f_2}=\alpha_1T_{f_1}+\alpha_2T_{f_2}\in\I(\G^\infty(U),F)$ by Theorems \ref{teo0} and \ref{ideal}. Hence $\alpha_1f_1+\alpha_2f_2\in\I\circ\H^\infty(U,F)$ by Theorem \ref{ideal}. Therefore $\I\circ\H^\infty(U,F)$ is a linear subspace of $\H^\infty(U,F)$.
\item[(I2)] Given $g\in\H^\infty(U)$ and $y\in F$, we can write $g\cdot y=T_{g\cdot y}\circ g_U$, where $g_U\in\H^\infty(U,\G^\infty(U))$ and $T_{g\cdot y}\in\F(\G^\infty(U),F)$ by Theorem \ref{teo0} and \cite[Theorem 2.1]{JimRuiSep-22}. This tells us that $g\cdot y\in \F\circ\H^\infty(U,F)$ and, since $\F\subseteq\I$ always holds, we conclude that $g\cdot y\in\I\circ\H^\infty(U,F)$.
\item[(I3)] Let $H,G$ be complex Banach spaces, $V$ be an open subset of $H$, $h\in\H(V,U)$, $f\in\I\circ\H^\infty(U,F)$ and $S\in\L(F,G)$. By \cite[Corollary 1.4]{JimRuiSep-22}, there exists a unique operator $\widehat{h}\in\L(\G^\infty(V),\G^\infty(U))$ such that $\widehat{h}\circ g_V=g_U\circ h$. Furthermore, $\left\|\widehat{h}\right\|=1$. Since
$$
S\circ f\circ h=S\circ(T_f\circ g_U)\circ h=(S\circ T_f\circ\widehat{h})\circ g_V,
$$
with $S\circ T_f\circ\widehat{h}\in\I(\G^\infty(V),G)$ and $g_V\in\H^\infty(V,\G^\infty(V))$, we have $S\circ f\circ h\in\I\circ\H^\infty(V,G)$.
\end{enumerate}
This proves that $\I\circ\H^\infty(U,F)$ is a bounded-holomorphic ideal.
We now show that $\I\circ\H^\infty(U,F)$ is closed whenever $\I$ is so. Let $f\in\H^\infty(U,F)$ and let $(f_n)_{n\in\N}$ be a sequence in $\I\circ\H^\infty(U,F)$ such that $\left\|f_n-f\right\|_\infty\to 0$ as $n\to\infty$. Since $T_{f_n}\in\I(\G^\infty(U),F)$ by Theorem \ref{ideal} and $\left\|T_{f_n}-T_f\right\|=\left\|T_{f_n-f}\right\|=\left\|f_n-f\right\|_\infty$ for all $n\in\N$, the closedness of $\I$ yields $T_f\in\I(\G^\infty(U),F)$, and thus $f\in\I\circ\H^\infty(U,F)$ by Theorem \ref{ideal}.
Assume now that the operator ideal $(\I,\left\|\cdot\right\|_\I)$ is normed. We have:
\begin{enumerate}
\item[(N1)] Since $\left\|f\right\|_{\I\circ\H^\infty}=\left\|T_f\right\|_\I$ for all $f\in\I\circ\H^\infty(U,F)$ by Theorem \ref{ideal}, it easily follows that $\left\|\cdot\right\|_{\I\circ\H^\infty}$ is a norm on $\I\circ\H^\infty(U,F)$ and
$$
\left\|f\right\|_\infty=\left\|T_f\right\|\leq \left\|T_f\right\|_\I=\left\|f\right\|_{\I\circ\H^\infty}
$$
for all $f\in\I\circ\H^\infty(U,F)$.
\item[(N2)] Given $g\in\H^\infty(U)$ and $y\in F$, we have
$$
\left\|g\right\|_\infty\left\|y\right\|=\left\|g\cdot y\right\|_\infty\leq \left\|g\cdot y\right\|_{\I\circ\H^\infty},
$$
and conversely, since $g\cdot y=M_y\circ g$, where $M_y\in\F(\mathbb{C},F)\subseteq\I(\mathbb{C},F)$ is the operator defined by $M_y(\lambda)=\lambda y$ for all $\lambda\in\mathbb{C}$, we have
$$
\left\|g\cdot y\right\|_{\I\circ\H^\infty}\leq\left\|g\right\|_\infty\left\|M_y\right\|=\left\|g\right\|_\infty\left\|y\right\|.
$$
\item[(N3)] Following the above proof of (I3), we have
\begin{align*}
\left\|S\circ f\circ h\right\|_{\I\circ\H^\infty}&=\left\|(S\circ T_f\circ\widehat{h})\circ g_V\right\|_{\I\circ\H^\infty}\\
&\leq \left\|S\circ T_f\circ\widehat{h}\right\|_\I \left\|g_V\right\|_\infty\\
&\leq \left\|S\right\|\left\|T_f\right\|_\I\left\|\widehat{h}\right\|\left\|g_V\right\|_\infty\\
&=\left\|S\right\|\left\|f\right\|_{\I\circ\H^\infty}.
\end{align*}
\end{enumerate}
So, we have proved that the ideal $(\I\circ\H^\infty(U,F),\left\|\cdot\right\|_{\I\circ\H^\infty})$ is normed.
Finally, since $(\I\circ\H^\infty(U,F),\left\|\cdot\right\|_{\I\circ\H^\infty})$ is isometrically isomorphic to $(\I(\G^\infty(U),F),\left\|\cdot\right\|_\I)$ by Theorem \ref{ideal}, it follows that $(\I\circ\H^\infty(U,F),\left\|\cdot\right\|_{\I\circ\H^\infty})$ is a Banach ideal whenever $(\I,\left\|\cdot\right\|_\I)$ is so.
\end{proof}
We finish this section with some properties of bounded-holomorphic ideals.
\begin{proposition}
Let $\I$ and $\J$ be two operator ideals. We have:
\begin{enumerate}
\item If $\I\circ\H^\infty(U,F)\subseteq\J\circ\H^\infty(U,F)$, then $\I(\G^\infty(U),F)\subseteq\J(\G^\infty(U),F)$.
\item If $\I\circ\H^\infty(U,F)=\H^\infty(U,F)$, then $\I(\G^\infty(U),F)=\L(\G^\infty(U),F)$.
\item If the identity operator $\mathrm{id}_F\in\I(F,F)$, then $\I\circ\H^\infty(U,F)=\H^\infty(U,F)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let $T\in\I(\G^\infty(U),F)$. Then $f:=T\circ g_U\in\I\circ\H^\infty(U,F)$. This implies that $T_f=T$ by Theorem \ref{teo0}. Since $f\in\J\circ\H^\infty(U,F)$ by hypothesis, it follows that $T=T_f\in\J(\G^\infty(U),F)$ by Theorem \ref{ideal}.
(2) Let $T\in\L(\G^\infty(U),F)$. Then $f:=T\circ g_U\in\H^\infty(U,F)$ and $T_f=T$ by Theorem \ref{teo0}. Hence $f\in\I\circ\H^\infty(U,F)$, and thus $T=T_f\in\I(\G^\infty(U),F)$ by Theorem \ref{ideal}.
(3) Let $f\in\H^\infty(U,F)$. Then we have $f=\mathrm{id}_F\circ f\in\I\circ\H^\infty(U,F)$.
\end{proof}
\section{Examples of composition ideals of bounded holomorphic mappings}\label{section 3}
We have divided this section into four parts. In Subsection \ref{subsection 1}, we will recall that some known ideals of holomorphic mappings $f$ with local range of bounded type (for example, with compact, weakly compact, Rosenthal or Asplund range) are generated by composition of a holomorphic mapping $g$ with an operator $T$ in the corresponding operator ideal $\I$.
If $f$ is in addition bounded, one cannot guarantee in general that the mapping $g$ is also bounded. However, this is possible if we consider some smaller classes of such ideals. To be more precise, in Subsection \ref{subsection 2} we will show that the ideals of bounded holomorphic mappings with (global) range of bounded type are generated by composition of a bounded holomorphic mapping $g$ with an operator $T$ in $\I$.
Finally, motivated by the ideals of $p$-integral operators and $p$-nuclear operators between Banach spaces for $1\leq p<\infty$ (see, for example, \cite{DisJarTon-95}), we will introduce in Subsections \ref{subsection 3} and \ref{subsection 4} the analogs in the holomorphic setting and state that such ideals of bounded holomorphic mappings are generated by the method of composition.
\subsection{Holomorphic mappings with local range of bounded type}\label{subsection 1}
Let $\H_{k}(U,F)$ (resp., $\H_{w}(U,F)$, $\H_{r}(U,F)$, $\H_{a}(U,F)$) denote the linear subspace of all locally compact (resp., locally weakly compact, locally Rosenthal, locally Asplund) mappings of $\H(U,F)$. In the bounded case, we write
$$
\H^\infty_i(U,F)=\H_i(U,F)\cap\H^\infty(U,F)\qquad (i=k,w,r,a).
$$
It is clear that
$$
\H_k(U,F)\subseteq\H_w(U,F)\subseteq\H_r(U,F)\cap\H_a(U,F).
$$
\begin{proposition}\label{Hi}
For $i=k,w,r,a$, $\H_i(U,F)$ is a holomorphic ideal and $\H^\infty_i(U,F)$ is a bounded-holomorphic ideal.
\end{proposition}
\begin{proof}
Let $i=k,w,r,a$. Clearly, $\H_i(U,F)$ is a linear subspace of $\H(U,F)$. Given $g\in\H(U)$ and $y\in F$, it is clear that $g\cdot y\in\H(U,F)$ with $(g\cdot y)(U)=g(U)y$. Since $g$ is locally bounded by \cite[Lemma 5.6]{Muj-86}, every point $x\in U$ has a neighborhood $U_x\subseteq U$ such that $g(U_x)$ is bounded in $\C$, that is, relatively compact in $\C$. Hence $(g\cdot y)(U_x)=g(U_x)y$ is relatively compact in $F$ and thus $g\cdot y\in\H_k(U,F)\subseteq\H_i(U,F)$.
To prove the ideal property of $\H_i(U,F)$, let $H,G$ be complex Banach spaces, $V$ be an open subset of $H$, $h\in\H(V,U)$, $f\in\H_i(U,F)$ and $S\in\L(F,G)$. Let $x\in V$. Then there exists a neighborhood $U_{h(x)}\subseteq U$ of $h(x)$ such that $f(U_{h(x)})$ is relatively compact (resp., relatively weakly compact, Rosenthal, Asplund) in $F$. Denote $V_x=h^{-1}(U_{h(x)})$. Hence $S(f(h(V_x)))$ is relatively compact (resp., relatively weakly compact, Rosenthal, Asplund) in $G$, and thus $S\circ f\circ h\in\H_i(V,G)$. This proves that $\H_i(U,F)$ is a holomorphic ideal. Hence $\H^\infty_i(U,F)$ is a bounded-holomorphic ideal.
\end{proof}
Some known results show that the holomorphic ideals $\H_i(U,F)$ for $i=k,w,r,a$ are generated by the method of composition with an operator ideal. Namely, we have
\begin{align*}
\K\circ\H(E,F)=\H_k(E,F)\qquad&\text{\cite{AroSch-76}},\\
\W\circ\H(E,F)=\H_w(E,F)\qquad&\text{\cite{Rya-88}},\\
\Ro\circ\H(E,F)=\H_r(E,F)\qquad&\text{\cite{Lin-89}},\\
\A\circ\H(E,F)=\H_a(E,F)\qquad&\text{\cite{Rob-92}}.
\end{align*}
More generally, let $\U$ be a closed surjective operator ideal and let $\mathcal{C}_{\U}(F)$ be the collection of all $A\subseteq F$ such that $A\subseteq T(B_G)$ for some complex Banach space $G$ and some operator $T\in\U(G,F)$. Let $\H^\U(E,F)$ denote the space of all $f\in\H(E,F)$ such that each $x\in E$ has a neighborhood $V_x\subseteq E$ with $f(V_x)\in\mathcal{C}_\U(F)$. In \cite[Theorem 6]{GonGut-00}, it is proved that
$$
\U\circ\H(E,F)=\H^\U(E,F).
$$
See also the paper \cite{AroBotPelRue-10} for a study of such spaces as associated to composition ideals of polynomials.
\subsection{Holomorphic mappings with range of bounded type}\label{subsection 2}
Let $\H^\infty_{\K}(U,F)$ (resp., $\H^\infty_{\W}(U,F)$, $\H^\infty_{\Ro}(U,F)$, $\H^\infty_{\A}(U,F)$) denote the linear space of all holomorphic mappings $f\colon U\to F$ such that $f(U)$ is relatively compact (resp., relatively weakly compact, Rosenthal, Asplund) in $F$. Note that $f(U)$ is a bounded subset of $F$ in all the cases.
We will also consider the spaces:
\begin{align*}
\H^\infty_{\F}(U,F)&=\left\{f\in\H^\infty(U,F)\colon \lin(f(U))\text{ is finite-dimensional in }F\right\},\\
\H^\infty_{\overline{\F}}(U,F)&=\left\{f\in\H^\infty(U,F)\colon \exists (f_n)_{n=1}^\infty\subseteq\H_\F^\infty(U,F)\;\text{with}\; \left\|f_n-f\right\|_\infty\to 0\right\},\\
\H^\infty_{\S}(U,F)&=\left\{f\in\H^\infty(U,F)\colon f(U)\text{ is separable in }F\right\}.
\end{align*}
Clearly, we have the inclusions:
\begin{align*}
\H^\infty_{\F}(U,F)&\subseteq\H^\infty_{\overline{\F}}(U,F)\subseteq\H^\infty_\K(U,F)\subseteq\H^\infty_\W(U,F)\subseteq\H^\infty_\Ro(U,F)\cap\H^\infty_\A(U,F),\\
\H^\infty_{\K}(U,F)&\subseteq\H^\infty_{\S}(U,F),
\end{align*}
and
$$
\H^\infty_{\I}(U,F)\subseteq\H^\infty_{i}(U,F)\qquad ((\I,i)=(\K,k),(\W,w),(\Ro,r),(\A,a)).
$$
\begin{proposition}\label{HI}
For $\I=\F,\K,\S,\overline{\F},\W,\Ro,\A$, the set $\H^\infty_\I(U,F)$ is a bounded-holomorphic ideal. Furthermore, we have:
\begin{enumerate}
\item $\H^\infty_{\I}(U,F)$ is not closed if and only if $\I=\F$.
\item $\H^\infty_{\I}(U,F)$ is injective and surjective whenever $\I=\K,\S,\W,\Ro,\A$.
\end{enumerate}
\end{proposition}
\begin{proof}
The first assertion follows with a proof similar to that of Proposition \ref{Hi}. Applying \cite[Corollary 2.11]{JimRuiSep-22} and the fact that $\I$ is not closed for the operator norm if and only if $\I=\F$, we deduce the equivalence in $(1)$.
Let $\I=\K,\S,\W,\Ro,\A$ and $f\in\H^\infty(U,F)$. On the one hand, assume that $\iota\circ f\in\H^\infty_{\I}(U,G)$ for any complex Banach space $G$ and any isometric linear embedding $\iota\colon F\to G$. Since $\iota\circ T_f=T_{\iota\circ f}\in\I(\G^\infty(U),G)$ by \cite[Corollary 2.11]{JimRuiSep-22} and the operator ideal $\I$ is injective (see, for example, \cite[p. 471]{GonGut-00}), it follows that $T_f\in\I(\G^\infty(U),F)$, thus $f\in\H_{\I}^\infty(U,F)$ by \cite[Corollary 2.11]{JimRuiSep-22}, and this proves that $\H^\infty_{\I}(U,F)$ is injective.
On the other hand, suppose that $f\circ \pi\in\H^\infty_{\I}(V,F)$, where $V$ is an open subset of a complex Banach space $G$ and $\pi\in\H(V,U)$ is surjective. By \cite[Corollary 1.4]{JimRuiSep-22}, there exists a unique operator $\widehat{\pi}\in\L(\G^\infty(V),\G^\infty(U))$ such that $g_U\circ\pi=\widehat{\pi}\circ g_V$. Since $T_f\circ \widehat{\pi}\in\L(\G^\infty(V),F)$ and $T_f\circ\widehat{\pi}\circ g_V=T_f\circ g_U\circ\pi=f\circ\pi$, we deduce that $T_{f\circ\pi}=T_f\circ\widehat{\pi}$ by Theorem \ref{teo0}. Since $T_f\circ\widehat{\pi}=T_{f\circ \pi}\in\I(\G^\infty(V),F)$ by \cite[Corollary 2.11]{JimRuiSep-22} and the operator ideal $\I$ is surjective \cite[p. 471]{GonGut-00}, we have that $T_f\in\I(\G^\infty(U),F)$, hence $f\in\H_{\I}^\infty(U,F)$ by \cite[Corollary 2.11]{JimRuiSep-22}, and thus $\H^\infty_{\I}(U,F)$ is surjective.
\end{proof}
We next see that the preceding bounded-holomorphic ideals are generated by composition with the corresponding operator ideal.
\begin{proposition}\label{new}
For $\I=\F,\overline{\F},\K,\S,\W,\Ro,\A$, we have $\H^\infty_{\I}(U,F)=\I\circ\H^\infty(U,F)$ and $\left\|f\right\|_\infty=\left\|f\right\|_{\I\circ\H^\infty}$ for all $f\in\H^\infty_{\I}(U,F)$.
\end{proposition}
\begin{proof}
Let $\I=\F,\overline{\F},\K,\S,\W,\Ro,\A$. Note first that $\H^\infty_{\I}(U,F)=\I\circ\H^\infty(U,F)$. Indeed, if $f\in\H^\infty_{\I}(U,F)$, then $f=T_f\circ g_U$ where $T_f\in\I(\G^\infty(U),F)$ by \cite[Corollary 2.11]{JimRuiSep-22}, and thus $f\in\I\circ\H^\infty(U,F)$. Conversely, if $f\in\I\circ\H^\infty(U,F)$, then $f=T\circ g$ for some complex Banach space $G$, $g\in\H^\infty(U,G)$ and $T\in\I(G,F)$. If $\I=\F$ or $\I=\overline{\F}$, then $f$ has finite rank or $f$ can be approximated by bounded finite-rank holomorphic mappings, respectively. If $\I=\K,\S,\W,\Ro,\A$, since $g(U)$ is bounded in $G$, it follows that $f(U)=T(g(U))$ is relatively compact (resp., separable, relatively weakly compact, Rosenthal, Asplund) in $F$. Hence $f\in\H^\infty_{\I}(U,F)$, as required.
Now, if $\left\|\cdot\right\|_\I$ denotes the operator norm, recall that the mappings
\begin{align*}
f\in(\H^\infty_{\I}(U,F),\left\|\cdot\right\|_\infty)&\mapsto T_f\in(\I(\G^\infty(U),F),\left\|\cdot\right\|_\I),\\
f\in(\I\circ\H^\infty(U,F),\left\|\cdot\right\|_{\I\circ\H^\infty})&\mapsto T_f\in(\I(\G^\infty(U),F),\left\|\cdot\right\|_\I),
\end{align*}
are isometric isomorphisms by \cite[Corollary 2.11]{JimRuiSep-22} and Theorem \ref{ideal}, respectively. Hence we have
$$
\left\|f\right\|_\infty=\left\|T_f\right\|_\I=\left\|f\right\|_{\I\circ\H^\infty}
$$
for all $f\in\H^\infty_{\I}(U,F)$.
\end{proof}
\subsection{$p$-integral holomorphic mappings}\label{subsection 3}
Following \cite[p. 93]{DisJarTon-95}, given two Banach spaces $E$, $F$ and $1\leq p\leq\infty$, we denote by $\I_p(E,F)$ the Banach space of all $p$-integral linear operators $T\colon E\to F$ with the norm
$$
\iota_p(T)=\inf\left\{\left\|A\right\|\left\|B\right\|\right\},
$$
where the infimum is taken over all $p$-integral factorizations $(A,I^\mu_{\infty,p},B)$ of $T$ of the form
$$
\kappa_F\circ T=A\circ I^\mu_{\infty,p}\circ B\colon E\stackrel{B}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{A}{\rightarrow}F^{**},
$$
where $(\Omega,\Sigma,\mu)$ is a probability measure space, $A\in\L(L_p(\mu),F^{**})$ and $B\in\L(E,L_\infty(\mu))$. As usual, $I^\mu_{\infty,p}\colon L_\infty(\mu)\to L_p(\mu)$ is the formal identity, and $\kappa_F\colon F\to F^{**}$ is the canonical isometric embedding.
Let $p^*$ denote the conjugate index of $p\in [1,\infty]$ defined by $p^*=p/(p-1)$ if $p\neq 1$, $p^*=\infty$ if $p=1$, and $p^*=1$ if $p=\infty$.
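For orientation, note that the formal identity itself is $p$-integral with $\iota_p(I^\mu_{\infty,p})\leq 1$: taking $E=L_\infty(\mu)$, $F=L_p(\mu)$, $B=\mathrm{id}_{L_\infty(\mu)}$ and $A=\kappa_{L_p(\mu)}$, we obtain the trivial factorization
$$
\kappa_{L_p(\mu)}\circ I^\mu_{\infty,p}=\kappa_{L_p(\mu)}\circ I^\mu_{\infty,p}\circ\mathrm{id}_{L_\infty(\mu)},
$$
with $\left\|A\right\|\left\|B\right\|\leq 1$.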
The concept of $p$-integral linear operator motivates us to introduce the holomorphic analog as follows.
\begin{definition}\label{def-bsLio}
Let $E,F$ be complex Banach spaces, $U$ an open subset of $E$ and $1\leq p\leq\infty$. A mapping $f\colon U\to F$ is said to be $p$-integral holomorphic if there exist a probability measure space $(\Omega,\Sigma,\mu)$, an operator $T\in\L(L_p(\mu),F^{**})$ and a mapping $g\in\H^\infty(U,L_\infty(\mu))$ giving rise to the commutative diagram:
$$
\begin{tikzpicture}
\node (U) {$U$};
\node (F) [right of=U] {$F$};
\node (F**) [right of=F] {$F^{**}$};
\node (L) [below of=U] {$L_\infty(\mu)$};
\node (Lp) [below of=F**] {$L_p(\mu)$};
\draw[->] (U) to node {$f$} (F);
\draw[->] (F) to node {$\kappa_F$} (F**);
\draw[->] (U) to node [swap] {$g$} (L);
\draw[->] (L) to node {$I^\mu_{\infty,p}$} (Lp);
\draw[->] (Lp) to node [swap] {$T$} (F**);
\end{tikzpicture}
$$
The triple $(T,I^\mu_{\infty,p},g)$ is called a $p$-integral holomorphic factorization of $f$. We denote
$$
\iota^{\H^\infty}_p(f)=\inf\left\{\left\|T\right\|\left\|g\right\|_\infty\right\},
$$
where the infimum is extended over all such factorizations of $f$. Let $\I^{\H^\infty}_p(U,F)$ denote the set of all $p$-integral holomorphic mappings from $U$ into $F$.
\end{definition}
Adapting to the holomorphic setting some techniques introduced in the linear context (see \cite[Theorem 5.2]{DisJarTon-95}) and applied in the Lipschitz context (see \cite[Proposition 2.6]{AchRueSanYah-16}), we will establish the following result.
\begin{proposition}\label{integral-2}
$\left(\I^{\H^\infty}_p(U,F),\iota^{\H^\infty}_p\right)$ is a Banach ideal of bounded holomorphic mappings.
\end{proposition}
\begin{proof}
(N1): We first prove that $\I^{\H^\infty}_p(U,F)\subseteq\H^\infty(U,F)$ with $\left\|f\right\|_\infty\leq\iota^{\H^\infty}_p(f)$ if $f\in\I^{\H^\infty}_p(U,F)$. Indeed, given a $p$-integral holomorphic factorization of $f$:
$$
\kappa_F\circ f=T\circ I^\mu_{\infty,p}\circ g\colon U \stackrel{g}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}F^{**},
$$
we have
$$
\left\|f(x)\right\|=\left\|\kappa_F(f(x))\right\|=\left\|T(I^\mu_{\infty,p}(g(x)))\right\|\leq\left\|T\right\|\left\|g(x)\right\|\leq\left\|T\right\|\left\|g\right\|_\infty
$$
for all $x\in U$, and thus $f$ is bounded with $\left\|f\right\|_\infty\leq\left\|T\right\|\left\|g\right\|_\infty$. The factorization of $f$ was arbitrary, so $\left\|f\right\|_\infty\leq\iota^{\H^\infty}_p(f)$. On the other hand, $\kappa_F\circ f=T\circ I^\mu_{\infty,p}\circ g\in\H(U,F^{**})$ by \cite[p. 39, Exercise 5.A]{Muj-86} since $T\circ I^\mu_{\infty,p}\in\L(L_\infty(\mu),F^{**})$ and $g\in\H(U,L_\infty(\mu))$. Now, since $\kappa_F\circ f\in\H(U,F^{**})$ and $\kappa_F^{-1}\in\L(\kappa_F(F),F)$, it follows that $f\in\H(U,F)$ again by applying the cited exercise.
We now prove that $(\I^{\H^\infty}_p(U,F),\iota^{\H^\infty}_p)$ is a normed space. Given $\lambda\in\C$, the triple $(\lambda T,I^\mu_{\infty,p},g)$ is a $p$-integral holomorphic factorization of $\lambda f$. Then $\lambda f\in\I^{\H^\infty}_p(U,F)$ and
$$
\iota^{\H^\infty}_p(\lambda f)\leq\left\|\lambda T\right\|\left\|g\right\|_\infty=\left|\lambda\right|\left\|T\right\|\left\|g\right\|_\infty.
$$
It follows that $\iota^{\H^\infty}_p(\lambda f)=0=\left|\lambda\right|\iota^{\H^\infty}_p(f)$ if $\lambda=0$, and that $\iota^{\H^\infty}_p(\lambda f)\leq\left|\lambda\right|\iota^{\H^\infty}_p(f)$ if $\lambda\neq 0$. Then, for $\lambda\neq 0$, we have $\iota^{\H^\infty}_p(f)=\iota^{\H^\infty}_p(\lambda^{-1}\lambda f)\leq\left|\lambda\right|^{-1}\iota^{\H^\infty}_p(\lambda f)$, hence $\left|\lambda\right|\iota^{\H^\infty}_p(f)\leq\iota^{\H^\infty}_p(\lambda f)$ and so $\iota^{\H^\infty}_p(\lambda f)=\left|\lambda\right|\iota^{\H^\infty}_p(f)$.
Let $f_1,f_2\in\I^{\H^\infty}_p(U,F)$ and $\varepsilon>0$. For $i=1,2$, we can find a probability space $(\Omega_i,\Sigma_i,\mu_i)$, the formal identity $I^{\mu_i}_{\infty,p,i}\colon L_\infty(\mu_i)\to L_p(\mu_i)$, a mapping $g_i\in\H^\infty(U,L_\infty(\mu_i))$ with $\left\|g_i\right\|_\infty\leq 1$ and an operator $T_i\in\L(L_p(\mu_i),F^{**})$ such that $\kappa_F\circ f_i$ factors as
$$
\kappa_F\circ f_i=T_i\circ I^{\mu_i}_{\infty,p,i}\circ g_i\colon U \stackrel{g_i}{\rightarrow}L_\infty(\mu_i)\stackrel{I^{\mu_i}_{\infty,p,i}}{\rightarrow}L_p(\mu_i)\stackrel{T_i}{\rightarrow}F^{**}
$$
satisfying $\left\|T_i\right\|<\iota^{\H^\infty}_p(f_i)+\varepsilon/2$ (starting from any factorization with $\left\|T_i\right\|\left\|g_i\right\|_\infty<\iota^{\H^\infty}_p(f_i)+\varepsilon/2$, it suffices to replace $g_i$ by $g_i/\left\|g_i\right\|_\infty$ and $T_i$ by $\left\|g_i\right\|_\infty T_i$). We may also assume that $\Omega_1\cap\Omega_2=\emptyset$. Taking $\Omega:=\Omega_1\cup\Omega_2$ and
$$
\Sigma:=\left\{S\subseteq\Omega\colon S\cap\Omega_i\in\Sigma_i, \, i=1,2\right\},
$$
define the probability measure $\mu$ on $\Sigma$ by
$$
\mu(S)=\frac{\left\|T_1\right\|\mu_1(S\cap\Omega_1)+\left\|T_2\right\|\mu_2(S\cap\Omega_2)}{\left\|T_1\right\|+\left\|T_2\right\|}.
$$
Define also $T\colon L_p(\mu)\to F^{**}$ and $g\colon U\to L_\infty(\mu)$ by
\begin{align*}
T(s)&=T_1(\left.s\right|_{\Omega_1})+T_2(\left.s\right|_{\Omega_2}),\\
g(x)&=g_1(x)\cdot\chi_{\Omega_1}+g_2(x)\cdot\chi_{\Omega_2},
\end{align*}
where $\chi_{\Omega_i}$ is the characteristic function of $\Omega_i\subseteq\Omega$ for $i=1,2$. Clearly, $T$ is linear. Assume $1\leq p<\infty$. Writing $\left\|T_i\right\|=\left\|T_i\right\|^{1/p^*}\left\|T_i\right\|^{1/p}$ and using H\"older's inequality with the exponents $p^*$ and $p$, we obtain
\begin{align*}
\left\|T(s)\right\|
&\leq\left\|T_1\right\|\left\|\left.s\right|_{\Omega_1}\right\| _{L_p(\mu_1)}+\left\|T_2\right\|\left\|\left.s\right|_{\Omega_2}\right\| _{L_p(\mu_2)}\\
&=\left\|T_1\right\|^{1/p^*}\left\|T_1\right\|^{1/p}\left\|\left.s\right|_{\Omega_1}\right\| _{L_p(\mu_1)}+\left\|T_2\right\|^{1/p^*}\left\|T_2\right\|^{1/p}\left\|\left.s\right|_{\Omega_2}\right\| _{L_p(\mu_2)}\\
&\leq\left(\left\|T_1\right\|+\left\|T_2\right\|\right)^{1/p^*}\left(\left\|T_1\right\|\left\|\left.s\right|_{\Omega_1}\right\|^p_{L_p(\mu_1)}+\left\|T_2\right\|\left\|\left.s\right|_{\Omega_2}\right\|^p_{L_p(\mu_2)}\right)^{1/p}\\
&=\left(\left\|T_1\right\|+\left\|T_2\right\|\right)^{1/p^*}\left(\left\|T_1\right\|+\left\|T_2\right\|\right)^{1/p}\left\|s\right\| _{L_p(\mu)}\\
&=\left(\left\|T_1\right\|+\left\|T_2\right\|\right)\left\|s\right\| _{L_p(\mu)}
\end{align*}
for all $s\in L_p(\mu)$, where the penultimate equality uses the identity
$$
\left\|T_1\right\|\left\|\left.s\right|_{\Omega_1}\right\|^p_{L_p(\mu_1)}+\left\|T_2\right\|\left\|\left.s\right|_{\Omega_2}\right\|^p_{L_p(\mu_2)}=\left(\left\|T_1\right\|+\left\|T_2\right\|\right)\left\|s\right\|^p_{L_p(\mu)},
$$
which is immediate from the definition of $\mu$. Hence $T\in\L(L_p(\mu),F^{**})$ with $\left\|T\right\|\leq\left\|T_1\right\|+\left\|T_2\right\|$. Moreover, $g\in\H^\infty(U,L_\infty(\mu))$ with $\left\|g\right\|_\infty\leq 1$: since $g_1(x)\cdot\chi_{\Omega_1}$ and $g_2(x)\cdot\chi_{\Omega_2}$ have disjoint supports,
$$
\left\|g(x)\right\|_{L_\infty(\mu)}\leq\max\left\{\left\|g_1(x)\right\|_{L_\infty(\mu_1)},\left\|g_2(x)\right\|_{L_\infty(\mu_2)}\right\}\leq\max\left\{\left\|g_1\right\|_\infty,\left\|g_2\right\|_\infty\right\}\leq 1
$$
for all $x\in U$. For each $x\in U$, we have
\begin{align*}
T\circ I^\mu_{\infty,p} \circ g(x)
&=T\circ I^\mu_{\infty,p}\left(g_1(x)\cdot\chi_{\Omega_1}+g_2(x)\cdot\chi_{\Omega_2}\right)\\
&=T\left(I^\mu_{\infty,p}(g_1(x)\cdot\chi_{\Omega_1})+I^\mu_{\infty,p}(g_2(x)\cdot\chi_{\Omega_2})\right)\\
&=\sum_{i=1}^2 T_i\left(\left.\left(I^{\mu}_{\infty,p}(g_1(x)\cdot\chi_{\Omega_1})+I^{\mu}_{\infty,p}(g_2(x)\cdot\chi_{\Omega_2})\right)\right|_{\Omega_i}\right)\\
&=T_1\circ I^{\mu_1}_{\infty,p,1}\circ g_1(x)+T_2\circ I^{\mu_2}_{\infty,p,2}\circ g_2(x)\\
&=\kappa_F\circ f_1(x)+\kappa_F\circ f_2(x)\\
&=\kappa_F\circ(f_1+f_2)(x),
\end{align*}
and thus $\kappa_F\circ(f_1+f_2)=T\circ I^\mu_{\infty,p}\circ g$. Hence $f_1+f_2\in\I^{\H^\infty}_p(U,F)$ and
$$
\iota^{\H^\infty}_p(f_1+f_2)\leq\left\|T\right\|\left\|g\right\|_\infty\leq\left\|T_1\right\|+\left\|T_2\right\|\leq\iota^{\H^\infty}_p(f_1)+\iota^{\H^\infty}_p(f_2)+\varepsilon .
$$
Since $\varepsilon$ was arbitrary, it follows that $\iota^{\H^\infty}_p(f_1+f_2)\leq\iota^{\H^\infty}_p(f_1)+\iota^{\H^\infty}_p(f_2)$, as desired. The case $p=\infty$ is proved similarly.
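In fact, for $p=\infty$ no H\"older step is needed: since $\left\|T_i\right\|\left\|\left.s\right|_{\Omega_i}\right\|_{L_\infty(\mu_i)}\leq\left\|T_i\right\|\left\|s\right\|_{L_\infty(\mu)}$ for $i=1,2$, we directly obtain
$$
\left\|T(s)\right\|\leq\left\|T_1\right\|\left\|\left.s\right|_{\Omega_1}\right\|_{L_\infty(\mu_1)}+\left\|T_2\right\|\left\|\left.s\right|_{\Omega_2}\right\|_{L_\infty(\mu_2)}\leq\left(\left\|T_1\right\|+\left\|T_2\right\|\right)\left\|s\right\|_{L_\infty(\mu)}
$$
for all $s\in L_\infty(\mu)$, and the rest of the argument carries over verbatim.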
To show that the norm $\iota^{\H^\infty}_p$ is complete, it is enough to check that if $(f_n)_{n\in\N}$ is a sequence in $\I^{\H^\infty}_p(U,F)$ such that $\sum_{n=1}^\infty\iota^{\H^\infty}_p(f_n)<\infty$, then the series $\sum_{n=1}^\infty f_n$ converges in $(\I^{\H^\infty}_p(U,F),\iota^{\H^\infty}_p)$.
Since $\left\|\cdot\right\|_\infty\leq\iota^{\H^\infty}_p(\cdot)$ and $(\H^\infty(U,F),\left\|\cdot\right\|_\infty)$ is a Banach space, there exists $f:=\sum_{n=1}^\infty f_n\in\H^\infty(U,F)$. We claim that $f\in\I^{\H^\infty}_p(U,F)$ and $\iota^{\H^\infty}_p(f)\leq\sum_{n=1}^\infty\iota^{\H^\infty}_p(f_n)$. Indeed, let $\varepsilon>0$. For each $n\in\N$, we can find a probability space $(\Omega_n,\Sigma_n,\mu_n)$, the formal identity $I^{\mu_n}_{\infty,p,n}\colon L_\infty(\mu_n)\to L_p(\mu_n)$, a mapping $g_n\in\H^\infty(U,L_\infty(\mu_n))$ with $\left\|g_n\right\|_\infty\leq 1$ and an operator $T_n\in\L(L_p(\mu_n),F^{**})$ such that $\kappa_F\circ f_n$ factors as
$$
\kappa_F\circ f_n=T_n\circ I^{\mu_n}_{\infty,p,n}\circ g_n\colon U \stackrel{g_n}{\rightarrow}L_\infty(\mu_n)\stackrel{I^{\mu_n}_{\infty,p,n}}{\rightarrow}L_p(\mu_n)\stackrel{T_n}{\rightarrow}F^{**}
$$
with $\left\|T_n\right\|<\iota^{\H^\infty}_p(f_n)+\varepsilon/2^n$ (normalizing a suitable factorization: replace $g_n$ by $g_n/\left\|g_n\right\|_\infty$ and $T_n$ by $\left\|g_n\right\|_\infty T_n$). Let $(\Omega,\Sigma)$ be the direct sum measurable space of the $(\Omega_n,\Sigma_n)$, that is, $\Omega:=\cup_{n\in\N}\Omega_n$ and $\Sigma:=\left\{S\subseteq\Omega\colon S\cap\Omega_n\in\Sigma_n, \, \forall n\in\N\right\}$, where the $\Omega_n$'s are pairwise disjoint. Define a probability measure $\mu$ on $\Sigma$ by
$$
\mu(S)=\frac{\sum_{n=1}^\infty\left\|T_n\right\|\mu_n(S\cap\Omega_n)}{\sum_{n=1}^{\infty}\left\|T_n\right\|}\qquad (S\in\Sigma).
$$
Define $T\colon L_p(\mu)\to F^{**}$ and $g\colon U\to L_\infty(\mu)$ by
$$
T(s)=\sum_{n=1}^{\infty}T_n(\left.s\right|_{\Omega_n}),\qquad g(x)=\sum_{n=1}^{\infty}g_n(x)\cdot\chi_{\Omega_n}.
$$
Clearly, $T$ is linear. If $p=\infty$, we have $\left\|T\right\|\leq\sum_{n=1}^{\infty}\left\|T_n\right\|<\infty$. For $1\leq p<\infty$, writing $\left\|T_n\right\|=\left\|T_n\right\|^{1/p^*}\left\|T_n\right\|^{1/p}$ and applying H\"older's inequality as in the two-summand case, we have
$$
\left\|T(s)\right\|
\leq\left(\sum_{n=1}^{\infty}\left\|T_n\right\|\right)^{1/p^*}\left(\sum_{n=1}^{\infty}\left\|T_n\right\|\left\|\left.s\right|_{\Omega_n}\right\|^p_{L_p(\mu_n)}\right)^{1/p}
=\left\|s\right\| _{L_p(\mu)}\sum_{n=1}^{\infty}\left\|T_n\right\|
$$
for all $s\in L_p(\mu)$, using that $\sum_{n=1}^{\infty}\left\|T_n\right\|\left\|\left.s\right|_{\Omega_n}\right\|^p_{L_p(\mu_n)}=\left(\sum_{n=1}^{\infty}\left\|T_n\right\|\right)\left\|s\right\|^p_{L_p(\mu)}$ by the definition of $\mu$. Hence $T\in\L(L_p(\mu),F^{**})$ and $\left\|T\right\|\leq\sum_{n=1}^{\infty}\left\|T_n\right\|\leq\sum_{n=1}^\infty\iota^{\H^\infty}_p(f_n)+\varepsilon$. Moreover, $g$ is well defined and bounded with $\left\|g\right\|_\infty\leq 1$: since the functions $g_n(x)\cdot\chi_{\Omega_n}$ have pairwise disjoint supports,
$$
\left\|g(x)\right\|_{L_\infty(\mu)}
\leq\sup_{n\in\N}\left\|g_n(x)\right\|_{L_\infty(\mu_n)}
\leq \sup_{n\in\N}\left\|g_n\right\|_\infty\leq 1
$$
for all $x\in U$. Furthermore, $g$ is holomorphic: for each $h\in L_1(\mu)$, the function $x\mapsto\int_\Omega g(x)h\,d\mu=\sum_{n=1}^\infty\int_{\Omega_n}g_n(x)h\,d\mu$ is holomorphic on $U$ as the sum of a uniformly convergent series of holomorphic functions (note that $\sum_{n=1}^\infty\left\|h\chi_{\Omega_n}\right\|_{L_1(\mu)}=\left\|h\right\|_{L_1(\mu)}$), and $L_1(\mu)$ is a separating subspace of $L_\infty(\mu)^*$; since $g$ is bounded and weakly holomorphic with respect to a separating subspace of the dual, it is holomorphic. Hence $g\in\H^\infty(U,L_\infty(\mu))$. For each $x\in U$, we have
\begin{align*}
T\circ I^\mu_{\infty,p} \circ g(x)
&=T\circ I^\mu_{\infty,p}\left(\sum_{n=1}^{\infty}g_n(x)\cdot\chi_{\Omega_n}\right)\\
&=T\left(\sum_{n=1}^{\infty}I^\mu_{\infty,p}(g_n(x)\cdot\chi_{\Omega_n})\right)\\
&=\sum_{m=1}^{\infty}T_m\left(\left.\left(\sum_{n=1}^{\infty}I^\mu_{\infty,p}(g_n(x)\cdot\chi_{\Omega_n})\right)\right|_{\Omega_m}\right)\\
&=\sum_{m=1}^{\infty}T_m\circ I^{\mu_m}_{\infty,p,m}\circ g_m(x)\\
&=\sum_{m=1}^{\infty}\kappa_F\circ f_m(x)=\kappa_F\circ f(x),
\end{align*}
and thus $\kappa_F\circ f=T\circ I^\mu_{\infty,p}\circ g$. Hence $f\in\I^{\H^\infty}_p(U,F)$ and
$$
\iota^{\H^\infty}_p(f)\leq\left\|T\right\|\left\|g\right\|_\infty\leq\sum_{n=1}^\infty\iota^{\H^\infty}_p(f_n)+\varepsilon .
$$
By the arbitrariness of $\varepsilon$, we infer that $\iota^{\H^\infty}_p(f)\leq\sum_{n=1}^\infty\iota^{\H^\infty}_p(f_n)$ and this proves our claim.
We now show that $f$ is the $\iota^{\H^\infty}_p$-limit of the sequence $(\sum_{k=1}^n f_k)_{n\in\N}$. For each $n\in\N$, define $t_n\colon L_p(\mu)\to F^{**}$ by $t_n(s)=\sum_{k=n+1}^\infty T_k(\left.s\right|_{\Omega_k})$. Clearly, $t_n\in\L(L_p(\mu),F^{**})$ with $\left\|t_n\right\|\leq\sum_{k=n+1}^\infty\left\|T_k\right\|$, and so $\lim_{n\to\infty}\left\|t_n\right\|=0$. It is easy to see that $\kappa_F\circ(f-\sum_{k=1}^n f_k)=t_n\circ I^\mu_{\infty,p}\circ g$. Then $\iota^{\H^\infty}_p(f-\sum_{k=1}^n f_k)\leq\left\|t_n\right\|\left\|g\right\|_\infty$, and therefore $\lim_{n\to\infty}\iota^{\H^\infty}_p(f-\sum_{k=1}^n f_k)=0$, as desired.
(N2): We now prove that $g\cdot y\in\I^{\H^\infty}_p(U,F)$ with $\iota^{\H^\infty}_p(g\cdot y)=\left\|g\right\|_\infty\left\|y\right\|$ whenever $g\in\H^\infty(U)$ and $y\in F$. Fix a point $x_0\in U$ and take $\Omega=\{x_0\}$, $\Sigma=\{\Omega,\emptyset\}$ and $\mu\colon\Sigma\to\R$ defined by $\mu(\Omega)=1$ and $\mu(\emptyset)=0$. Then $(\Omega,\Sigma,\mu)$ is a probability space. Clearly, $L_\infty(\mu)$ and $L_p(\mu)$ consist only of constant functions.
Let $g\in\H^\infty(U)$ and $y\in F$. Define $T\in\L(L_p(\mu),F^{**})$ and $h\in\H^\infty(U,L_\infty(\mu))$ by $T(t\mathbf{1})=t\kappa_F(y)$ for all $t\in\C$ and $h(x)=g(x)\mathbf{1}$ for all $x\in U$, where $\mathbf{1}(\omega)=1$ for all $\omega\in\Omega$. It is clear that
$$
(\kappa_F\circ (g\cdot y))(x)= g(x)\kappa_F(y)=g(x)T(\mathbf{1})=T(g(x)\mathbf{1})=T\circ I^\mu_{\infty,p}(g(x)\mathbf{1})=T\circ I^\mu_{\infty,p}\circ h(x)
$$
for all $x\in U$. Then $g\cdot y\in\I^{\H^\infty}_p(U,F)$ and $\iota^{\H^\infty}_p(g\cdot y)\leq\left\|T\right\|\left\|h\right\|_\infty=\left\|y\right\|\left\|g\right\|_\infty$. Conversely, we have $\left\|y\right\|\left\|g\right\|_\infty=\left\|g\cdot y\right\|_\infty\leq\iota^{\H^\infty}_p(g\cdot y)$ by (N1).
(N3): We now prove the ideal property of $\I^{\H^\infty}_p(U,F)$. Let $H,G$ be complex Banach spaces, $V$ be an open subset of $H$, $h\in\H(V,U)$, $f\in\I_p^{\H^\infty}(U,F)$ and $S\in\L(F,G)$. Consider a typical $p$-integral holomorphic factorization of $f$:
$$
\kappa_F\circ f=T_0\circ I^\mu_{\infty,p}\circ g_0\colon U \stackrel{g_0}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T_0}{\rightarrow}F^{**}.
$$
Since $\kappa_G\circ S=S^{**}\circ \kappa_F$, putting $T=S^{**}\circ T_0\in\L(L_p(\mu),G^{**})$ and $g=g_0\circ h\in\H^\infty(V,L_\infty(\mu))$, we obtain
$$
\kappa_G\circ S\circ f\circ h=T\circ I^\mu_{\infty,p}\circ g\colon V\stackrel{g}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}G^{**}.
$$
Hence $S\circ f\circ h\in\I^{\H^\infty}_p(V,G)$ with
$$
\iota^{\H^\infty}_p(S\circ f\circ h)\leq\left\|T\right\|\left\|g\right\|_\infty\leq\left\|S^{**}\right\|\left\|T_0\right\|\left\|g_0\right\|_\infty=\left\|S\right\|\left\|T_0\right\|\left\|g_0\right\|_\infty,
$$
and taking the infimum over all $p$-integral holomorphic factorizations of $f$ yields $\iota^{\H^\infty}_p(S\circ f\circ h)\leq\left\|S\right\|\iota^{\H^\infty}_p(f)$.
\end{proof}
We now study the linearization of $p$-integral holomorphic mappings.
\begin{proposition}\label{integral}
Let $1\leq p\leq\infty$ and $f\in\H^\infty(U,F)$. Then $f\colon U\to F$ is $p$-integral holomorphic if and only if its linearization $T_f\colon\G^\infty(U)\to F$ is $p$-integral. In this case,
$$
\iota_p(T_f)=\iota_p^{\H^\infty}(f).
$$
Furthermore, the mapping $f\mapsto T_f$ is an isometric isomorphism from $(\I_p^{\H^\infty}(U,F),\iota_p^{\H^\infty})$ onto $(\I_p(\G^\infty(U),F),\iota_p)$.
\end{proposition}
\begin{proof}
If $f\colon U\to F$ is $p$-integral holomorphic, then we have
$$
\kappa_F\circ f=T\circ I^\mu_{\infty,p}\circ g\colon U\stackrel{g}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}F^{**},
$$
where $(T,I^\mu_{\infty,p},g)$ is a $p$-integral holomorphic factorization of $f$. Applying Theorem \ref{teo0}, we obtain
$$
\kappa_F\circ T_f\circ g_U=T\circ I^\mu_{\infty,p}\circ T_g\circ g_U\colon U\stackrel{ g_U}{\rightarrow}\G^\infty(U)\stackrel{T_g}
{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}F^{**}.
$$
By the denseness of $\lin( g_U)$ in $\G^\infty(U)$, we deduce that
$$
\kappa_F\circ T_f=T\circ I^\mu_{\infty,p}\circ T_g\colon\G^\infty(U)\stackrel{T_g}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}F^{**}
$$
and therefore $T_f\colon\G^\infty(U)\to F$ is $p$-integral. Furthermore, we have
$$
\iota_p(T_f)\leq\left\|T\right\|\left\|T_g\right\|=\left\|T\right\|\left\|g\right\|_\infty
$$
and taking the infimum over all $p$-integral holomorphic factorizations of $f$, we deduce
$$
\iota_p(T_f)\leq\iota_p^{\H^\infty}(f).
$$
Conversely, if $T_f\colon\G^\infty(U)\to F$ is $p$-integral, we have
$$
\kappa_F\circ T_f=T\circ I^\mu_{\infty,p}\circ S\colon\G^\infty(U)\stackrel{S}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}F^{**}
$$
where $(T,I^\mu_{\infty,p},S)$ is a $p$-integral factorization of $T_f$. Note that $g:=S\circ g_U\in\H^\infty(U,L_\infty(\mu))$, and since
$$
\kappa_F\circ f=T\circ I^\mu_{\infty,p}\circ g\colon U\stackrel{g}{\rightarrow}L_\infty(\mu)\stackrel{I^\mu_{\infty,p}}{\rightarrow}L_p(\mu)\stackrel{T}{\rightarrow}F^{**},
$$
we conclude that $f$ is $p$-integral holomorphic. Furthermore, we have
$$
\iota_p^{\H^\infty}(f)\leq\left\|T\right\|\left\|g\right\|_\infty=\left\|T\right\|\left\|S\right\|,
$$
and taking the infimum over all $p$-integral factorizations of $T_f$, we deduce
$$
\iota_p^{\H^\infty}(f)\leq\iota_p(T_f).
$$
To prove the last assertion of the statement, it suffices to show that the mapping $f\mapsto T_f$ from $\I_p^{\H^\infty}(U,F)$ to $\I_p(\G^\infty(U),F)$ is surjective. Take $T\in\I_p(\G^\infty(U),F)$. Then $T=T_f$ for some $f\in\H^\infty(U,F)$ by Theorem \ref{teo0}. Hence $T_f\in\I_p(\G^\infty(U),F)$, and this implies that $f\in\I_p^{\H^\infty}(U,F)$ by the above proof.
\end{proof}
A first application of Proposition \ref{integral} shows that the bounded-holomorphic ideal $\I_p^{\H^\infty}$ is generated by composition with the operator ideal $\I_p$.
\begin{corollary}\label{now}
Let $1\leq p\leq\infty$. Then $\I_p^{\H^\infty}(U,F)=\I_p\circ\H^\infty(U,F)$ and $\iota_p^{\H^\infty}(f)=\left\|f\right\|_{\I_p\circ\H^\infty}$ for all $f\in\I_p^{\H^\infty}(U,F)$.
\end{corollary}
\begin{proof}
We first see that $\I_p^{\H^\infty}(U,F)=\I_p\circ\H^\infty(U,F)$. Indeed, if $f\in\I_p^{\H^\infty}(U,F)$, then $f=T_f\circ g_U$ by Theorem \ref{teo0}, where $T_f\in\I_p(\G^\infty(U),F)$ by Proposition \ref{integral}, and thus $f\in\I_p\circ\H^\infty(U,F)$. Conversely, if $f\in\I_p\circ\H^\infty(U,F)$, then $f=T\circ g$ for some complex Banach space $G$, $g\in\H^\infty(U,G)$ and $T\in\I_p(G,F)$. It follows that $T_f\circ g_U=T\circ T_g\circ g_U$ which implies that $T_f=T\circ T_g$. Then $T_f\in\I_p(\G^\infty(U),F)$ by the ideal property of $\I_p$, and thus $f\in\I_p^{\H^\infty}(U,F)$ by Proposition \ref{integral}.
Now, taking into account that the mappings
\begin{align*}
f\in(\I_p^{\H^\infty}(U,F),\iota_p^{\H^\infty})&\mapsto T_f\in(\I_p(\G^\infty(U),F),\iota_p),\\
f\in(\I_p\circ\H^\infty(U,F),\left\|\cdot\right\|_{\I_p\circ\H^\infty})&\mapsto T_f\in(\I_p(\G^\infty(U),F),\iota_p),
\end{align*}
are isometries by Proposition \ref{integral} and Theorem \ref{ideal}, respectively, we conclude that
$$
\iota_p^{\H^\infty}(f)=\iota_p(T_f)=\left\|f\right\|_{\I_p\circ\H^\infty}
$$
for all $f\in\I_p^{\H^\infty}(U,F)$.
\end{proof}
We next see that the classes $\I^{\H^\infty}_p(U,F)$ increase with $p$; in particular, every $1$-integral holomorphic mapping is $p$-integral holomorphic for every $1\leq p\leq\infty$.
\begin{corollary}
Let $1\leq p<q\leq\infty$. Then $\I^{\H^\infty}_p(U,F)\subseteq\I^{\H^\infty}_q(U,F)$ and $\iota^{\H^\infty}_q(f)\leq\iota^{\H^\infty}_p(f)$ for each $f\in\I^{\H^\infty}_p(U,F)$.
\end{corollary}
\begin{proof}
If $f\in\I^{\H^\infty}_p(U,F)$, then $T_f\in\I_p(\G^\infty(U),F)$ with $\iota_p(T_f)=\iota_p^{\H^\infty}(f)$ by Proposition \ref{integral}. Since $\I_p(\G^\infty(U),F)\subseteq\I_q(\G^\infty(U),F)$ with $\iota_q(T)\leq\iota_p(T)$ for all $T\in\I_p(\G^\infty(U),F)$ by \cite[Proposition 5.1]{DisJarTon-95}, it follows that $T_f\in\I_q(\G^\infty(U),F)$. Hence $f\in\I^{\H^\infty}_q(U,F)$ with $\iota_q(T_f)=\iota_q^{\H^\infty}(f)$ by Proposition \ref{integral}, and further $\iota^{\H^\infty}_q(f)=\iota_q(T_f)\leq\iota_p(T_f)=\iota^{\H^\infty}_p(f)$.
\end{proof}
We finish our study of $p$-integral holomorphic mappings with a description of their ranges.
\begin{corollary}
Let $1\leq p<\infty$. Every $p$-integral holomorphic mapping $f\colon U\to F$ has relatively weakly compact range.
\end{corollary}
\begin{proof}
Let $f\in\I_p^{\H^\infty}(U,F)$. Then $T_f\in\I_p(\G^\infty(U),F)$ by Proposition \ref{integral}, hence $T_f\in\W(\G^\infty(U),F)$ by \cite[Proposition 5.5 and Theorem 2.17]{DisJarTon-95}, and thus $f\in\H^\infty_\W(U,F)$ by \cite[Theorem 2.7]{JimRuiSep-22}.
\end{proof}
\subsection{$p$-nuclear holomorphic mappings}\label{subsection 4}
Given Banach spaces $E$, $F$ and $1\leq p<\infty$, we denote by $\Nu_p(E,F)$ the Banach space of all $p$-nuclear linear operators $T\colon E\to F$ with the norm
$$
\nu_p(T)=\inf\left\{\left\|A\right\|\left\|M_\lambda\right\|\left\|B\right\|\right\},
$$
where the infimum is taken over all such $p$-nuclear factorizations $(A,M_\lambda,B)$ of $T$ in the form
$$
T=A\circ M_{\lambda}\circ B\colon E\stackrel{B}{\rightarrow}\ell_\infty\stackrel{M_{\lambda}}{\rightarrow}\ell_p\stackrel{A}{\rightarrow}F,
$$
where $A\in\L(\ell_p,F)$, $B\in\L(E,\ell_\infty)$ and $M_\lambda\in\L(\ell_\infty,\ell_p)$ is a diagonal operator induced by a sequence $\lambda\in\ell_p$ (see \cite[p. 111]{DisJarTon-95}).
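Let us point out that such a diagonal operator satisfies $\left\|M_\lambda\right\|=\left\|\lambda\right\|_p$: indeed,
$$
\left\|M_\lambda(s)\right\|_p=\left(\sum_{n=1}^\infty\left|\lambda_n s_n\right|^p\right)^{1/p}\leq\left\|\lambda\right\|_p\left\|s\right\|_\infty\qquad(s\in\ell_\infty),
$$
with equality for $s=(1,1,1,\dots)$, so the norm $\nu_p(T)$ may equivalently be written as $\inf\left\{\left\|A\right\|\left\|\lambda\right\|_p\left\|B\right\|\right\}$ over all such factorizations.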
In analogy with this concept, we introduce the following variant in the holomorphic setting.
\begin{definition}
Let $E,F$ be complex Banach spaces, $U$ an open subset of $E$ and $1\leq p<\infty$. A mapping $f\colon U\to F$ is said to be $p$-nuclear holomorphic if there exist an operator $T\in\L(\ell_p,F)$, a mapping $g\in\H^\infty(U,\ell_\infty)$ and a diagonal operator $M_\lambda\in\L(\ell_\infty,\ell_p)$ induced by a sequence $\lambda\in\ell_p$ such that $f=T\circ M_\lambda\circ g$, that is, the following diagram commutes:
$$
\begin{tikzpicture}
\node (U) {$U$};
\node (F) [right of=U] {$F$};
\node (li) [below of=U] {$\ell_\infty$};
\node (lp) [below of=F] {$\ell_p$};
\draw[->] (U) to node {$f$} (F);
\draw[->] (U) to node [swap] {$g$} (li);
\draw[->] (li) to node {$M_\lambda$} (lp);
\draw[->] (lp) to node [swap] {$T$} (F);
\end{tikzpicture}
$$
The triple $(T,M_\lambda,g)$ is called a $p$-nuclear holomorphic factorization of $f$. We set
$$
\nu^{\H^\infty}_p(f)=\inf\left\{\left\|T\right\|\left\|M_\lambda\right\|\left\|g\right\|_\infty\right\},
$$
where the infimum is extended over all such factorizations of $f$. Let $\Nu^{\H^\infty}_p(U,F)$ denote the set of all $p$-nuclear holomorphic mappings from $U$ into $F$.
\end{definition}
\begin{proposition}\label{ideal nuclear}
$\left(\Nu^{\H^\infty}_p(U,F),\nu^{\H^\infty}_p\right)$ is a Banach ideal of bounded holomorphic mappings.
\end{proposition}
\begin{proof}
We first prove that $\left\|f\right\|_\infty\leq\nu^{\H^\infty}_p(f)$ whenever $f\in\Nu^{\H^\infty}_p(U,F)$. Let $(T,M_\lambda,g)$ be a $p$-nuclear holomorphic factorization of $f$. Clearly, $f=T\circ M_\lambda\circ g\in\H^\infty(U,F)$ with $\left\|f\right\|_\infty=\left\|T\circ M_\lambda\circ g\right\|_\infty\leq\left\|T\right\|\left\|M_\lambda\right\|\left\|g\right\|_\infty$ and passing to the infimum yields $\left\|f\right\|_\infty\leq\nu^{\H^\infty}_p(f)$.
Note that $L_p(\mu)$ and $L_\infty(\mu)$ coincide with the spaces $\ell_p$ and $\ell_\infty$ of complex sequences, respectively, whenever $\mu$ is the counting measure on $\N$. Hence the proof of Proposition \ref{integral-2} can be adapted to prove that $(\Nu^{\H^\infty}_p(U,F),\nu^{\H^\infty}_p)$ is a Banach bounded-holomorphic ideal.
\end{proof}
A study on $p$-nuclear holomorphic mappings similar to that of the preceding subsection on $p$-integral holomorphic mappings is developed next.
\begin{proposition}\label{nuclear}
Let $1\leq p<\infty$ and $f\in\H^\infty(U,F)$. Then $f\colon U\to F$ is $p$-nuclear holomorphic if and only if its linearization $T_f\colon\G^\infty(U)\to F$ is $p$-nuclear. In this case,
$$
\nu_p(T_f)=\nu_p^{\H^\infty}(f).
$$
Furthermore, the mapping $f\mapsto T_f$ is an isometric isomorphism from $(\Nu_p^{\H^\infty}(U,F),\nu_p^{\H^\infty})$ onto $(\Nu_p(\G^\infty(U),F),\nu_p)$.
\end{proposition}
\begin{proof}
If $f\colon U\to F$ is $p$-nuclear holomorphic, then we have
$$
f=T\circ M_{\lambda}\circ g\colon U\stackrel{g}{\rightarrow}\ell_\infty\stackrel{M_{\lambda}}{\rightarrow}\ell_p\stackrel{T}{\rightarrow}F,
$$
where $(T,M_\lambda,g)$ is a $p$-nuclear holomorphic factorization of $f$. Using Theorem \ref{teo0}, we obtain
$$
T_f\circ g_U
=T\circ M_{\lambda}\circ T_g\circ g_U\colon U\stackrel{ g_U}{\rightarrow}\G^\infty(U)\stackrel{T_g}{\rightarrow}\ell_\infty\stackrel{M_{\lambda}}{\rightarrow}\ell_p\stackrel{T}{\rightarrow}F.
$$
By the denseness of $\lin( g_U)$ in $\G^\infty(U)$, we deduce that
$$
T_f=T\circ M_{\lambda}\circ T_g\colon\G^\infty(U)\stackrel{T_g}{\rightarrow}\ell_\infty\stackrel{M_{\lambda}}{\rightarrow}\ell_p\stackrel{T}{\rightarrow}F,
$$
and therefore $T_f\colon\G^\infty(U)\to F$ is $p$-nuclear. Furthermore, we have
$$
\nu_p(T_f)\leq\left\|T\right\|\left\|M_{\lambda}\right\|\left\|T_g\right\|=\left\|T\right\|\left\|M_{\lambda}\right\|\left\|g\right\|_\infty
$$
and since the $p$-nuclear holomorphic factorization of $f$ was arbitrary, we obtain
$$
\nu_p(T_f)\leq\nu_p^{\H^\infty}(f).
$$
Conversely, if $T_f\colon\G^\infty(U)\to F$ is $p$-nuclear, we have
$$
T_f=T\circ M_{\lambda}\circ S\colon\G^\infty(U)\stackrel{S}{\rightarrow}\ell_\infty\stackrel{M_{\lambda}}{\rightarrow}\ell_p\stackrel{T}{\rightarrow}F,
$$
where $(T,M_\lambda,S)$ is a $p$-nuclear factorization of $T_f$. Note that $g:=S\circ g_U\in\H^\infty(U,\ell_\infty)$ and since
$$
f=T\circ M_\lambda\circ g\colon U\stackrel{g}{\rightarrow}\ell_\infty\stackrel{M_{\lambda}}{\rightarrow}\ell_p\stackrel{T}{\rightarrow}F,
$$
we conclude that $f$ is $p$-nuclear holomorphic. Furthermore, we infer that
$$
\nu_p^{\H^\infty}(f)\leq\left\|T\right\|\left\|M_{\lambda}\right\|\left\|g\right\|_\infty=\left\|T\right\|\left\|M_{\lambda}\right\|\left\|S\right\|,
$$
and this ensures that
$$
\nu_p^{\H^\infty}(f)\leq\nu_p(T_f).
$$
To prove the last assertion of the statement, it suffices to show that the mapping $f\mapsto T_f$ from $\Nu_p^{\H^\infty}(U,F)$ to $\Nu_p(\G^\infty(U),F)$ is surjective. Take $T\in\Nu_p(\G^\infty(U),F)$. Then $T=T_f$ for some $f\in\H^\infty(U,F)$ by Theorem \ref{teo0}. Hence $T_f\in\Nu_p(\G^\infty(U),F)$, and this implies that $f\in\Nu_p^{\H^\infty}(U,F)$ by the above proof.
\end{proof}
\begin{corollary}
Let $1\leq p<\infty$. Then $\Nu_p^{\H^\infty}(U,F)=\Nu_p\circ\H^\infty(U,F)$ and $\nu_p^{\H^\infty}(f)=\left\|f\right\|_{\Nu_p\circ\H^\infty}$ for all $f\in\Nu_p^{\H^\infty}(U,F)$.
\end{corollary}
\begin{proof}
If $f\in\Nu_p^{\H^\infty}(U,F)$, then $f=T_f\circ g_U$ by Theorem \ref{teo0}, where $T_f\in\Nu_p(\G^\infty(U),F)$ by Proposition \ref{nuclear}, and thus $f\in\Nu_p\circ\H^\infty(U,F)$. Conversely, if $f\in\Nu_p\circ\H^\infty(U,F)$, then $f=T\circ g$ for some complex Banach space $G$, $g\in\H^\infty(U,G)$ and $T\in\Nu_p(G,F)$. It follows that $T_f\circ g_U=T\circ T_g\circ g_U$ which implies that $T_f=T\circ T_g$. Then $T_f\in\Nu_p(\G^\infty(U),F)$ by the ideal property of $\Nu_p$, and thus $f\in\Nu_p^{\H^\infty}(U,F)$ by Proposition \ref{nuclear}. This proves that $\Nu_p^{\H^\infty}(U,F)=\Nu_p\circ\H^\infty(U,F)$.
Now, taking into account that the mappings
\begin{align*}
f\in(\Nu_p^{\H^\infty}(U,F),\nu_p^{\H^\infty})&\mapsto T_f\in(\Nu_p(\G^\infty(U),F),\nu_p),\\
f\in(\Nu_p\circ\H^\infty(U,F),\left\|\cdot\right\|_{\Nu_p\circ\H^\infty})&\mapsto T_f\in(\Nu_p(\G^\infty(U),F),\nu_p),
\end{align*}
are isometries by Proposition \ref{nuclear} and Theorem \ref{ideal}, respectively, we conclude that
$$
\nu_p^{\H^\infty}(f)=\nu_p(T_f)=\left\|f\right\|_{\Nu_p\circ\H^\infty}
$$
for all $f\in\Nu_p^{\H^\infty}(U,F)$.
\end{proof}
\begin{corollary}
Let $1\leq p<q<\infty$. Then $\Nu^{\H^\infty}_p(U,F)\subseteq\Nu^{\H^\infty}_q(U,F)$ and $\nu^{\H^\infty}_q(f)\leq\nu^{\H^\infty}_p(f)$ for each $f\in\Nu^{\H^\infty}_p(U,F)$.
\end{corollary}
\begin{proof}
If $f\in\Nu^{\H^\infty}_p(U,F)$, then $T_f\in\Nu_p(\G^\infty(U),F)$ with $\nu_p(T_f)=\nu_p^{\H^\infty}(f)$ by Proposition \ref{nuclear}. Since $\Nu_p(\G^\infty(U),F)\subseteq\Nu_q(\G^\infty(U),F)$ with $\nu_q(T)\leq\nu_p(T)$ for all $T\in\Nu_p(\G^\infty(U),F)$ by \cite[Corollary 5.24 (b)]{DisJarTon-95}, it follows that $T_f\in\Nu_q(\G^\infty(U),F)$. Hence $f\in\Nu^{\H^\infty}_q(U,F)$ with $\nu_q(T_f)=\nu_q^{\H^\infty}(f)$ by Proposition \ref{nuclear}, and so $\nu^{\H^\infty}_q(f)=\nu_q(T_f)\leq\nu_p(T_f)=\nu^{\H^\infty}_p(f)$.
\end{proof}
\begin{corollary}
Let $1\leq p<\infty$. Every $p$-nuclear holomorphic mapping $f\colon U\to F$ has relatively compact range.
\end{corollary}
\begin{proof}
Let $f\in\Nu_p^{\H^\infty}(U,F)$. Then $T_f\in\Nu_p(\G^\infty(U),F)$ by Proposition \ref{nuclear}, hence $T_f\in\K(\G^\infty(U),F)$ by \cite[Corollary 5.24 (a)]{DisJarTon-95}, and thus $f\in\H^\infty_\K(U,F)$ by \cite[Theorem 2.2]{JimRuiSep-22}.
\end{proof}
As in the linear case \cite[Theorem 5.27]{DisJarTon-95} and in the Lipschitz case \cite[Theorem 2.12]{Saa-17}, $p$-nuclear holomorphic mappings admit the following factorization.
\begin{corollary}\label{nuclear-integral}
Let $1\leq p<\infty$ and $f\in\H^\infty(U,F)$. Then $f\in\Nu_p^{\H^\infty}(U,F)$ if and only if there exist a Banach space $G$, an operator $T\in\K(G,F)$ and a mapping $g\in\I_p^{\H^\infty}(U,G)$ such that $f=T\circ g$.
\end{corollary}
\begin{proof}
If $f\in\Nu_p^{\H^\infty}(U,F)$, then $T_f\in\Nu_p(\G^\infty(U),F)$ by Proposition \ref{nuclear}. Then Theorem 5.27 in \cite{DisJarTon-95} shows that there exist a complex Banach space $G$, an operator $T\in\K(G,F)$ and an operator $S\in\I_p(\G^\infty(U),G)$ such that $T_f=T\circ S$. Hence we have $f=T_f\circ g_U=T\circ S\circ g_U=T\circ g$, where $g=S\circ g_U\in\I_p^{\H^\infty}(U,G)$ by Proposition \ref{integral-2}.
Conversely, if there exist a Banach space $G$, an operator $T\in\K(G,F)$ and a mapping $g\in\I_p^{\H^\infty}(U,G)$ such that $f=T\circ g$, then $T_f\circ g_U=T\circ T_g\circ g_U$, which gives $T_f=T\circ T_g$, where $T_g\in\I_p(\G^\infty(U),G)$ by Proposition \ref{integral}. Hence $T_f\in\Nu_p(\G^\infty(U),F)$ by \cite[Theorem 5.27]{DisJarTon-95}, and so $f\in\Nu_p^{\H^\infty}(U,F)$ by Proposition \ref{nuclear}.
\end{proof}
Next, we study the inclusion relationships between the new classes of bounded holomorphic mappings considered. In a clear parallel to the linear case, we have the following.
\begin{corollary}
Let $1\leq p<\infty$.
\begin{enumerate}
\item $\Nu^{\H^\infty}_p(U,F)\subseteq\I^{\H^\infty}_p(U,F)$ and $\iota^{\H^\infty}_p(f)\leq\nu^{\H^\infty}_p(f)$ for all $f\in\Nu^{\H^\infty}_p(U,F)$.
\item If $F$ is finite-dimensional, then $\Nu^{\H^\infty}_p(U,F)=\I^{\H^\infty}_p(U,F)$ with $\nu^{\H^\infty}_p(f)=\iota^{\H^\infty}_p(f)$ for all $f\in\Nu^{\H^\infty}_p(U,F)$.
\end{enumerate}
\end{corollary}
\begin{proof}
(1) If $f\in\Nu^{\H^\infty}_p(U,F)$, then $T_f\in\Nu_p(\G^\infty(U),F)$ with $\nu_p(T_f)=\nu_p^{\H^\infty}(f)$ by Proposition \ref{nuclear}. Since $\Nu_p(\G^\infty(U),F)\subseteq\I_p(\G^\infty(U),F)$ with $\iota_p(T)\leq\nu_p(T)$ for all $T\in\Nu_p(\G^\infty(U),F)$ by \cite[Corollary 5.24 (c)]{DisJarTon-95}, it follows that $T_f\in\I_p(\G^\infty(U),F)$. Hence $f\in\I^{\H^\infty}_p(U,F)$ with $\iota_p(T_f)=\iota_p^{\H^\infty}(f)$ by Proposition \ref{integral}, and further $\iota^{\H^\infty}_p(f)=\iota_p(T_f)\leq\nu_p(T_f)=\nu^{\H^\infty}_p(f)$.
(2) Assume that $F$ is finite-dimensional. If $f\in\I^{\H^\infty}_p(U,F)$, then $T_f\in\I_p(\G^\infty(U),F)$ with $\iota_p(T_f)=\iota_p^{\H^\infty}(f)$ by Proposition \ref{integral}. Hence $T_f\in\Nu_p(\G^\infty(U),F)$ with $\nu_p(T_f)=\iota_p(T_f)$ by \cite[Theorem 5.26]{DisJarTon-95}. It follows that $f\in\Nu^{\H^\infty}_p(U,F)$ with $\nu^{\H^\infty}_p(f)=\nu_p(T_f)$ by Proposition \ref{nuclear}, and so $\nu^{\H^\infty}_p(f)=\iota^{\H^\infty}_p(f)$.
\end{proof}
We show that $\Nu^{\H^\infty}_p\neq\I^{\H^\infty}_p$ in the following example.
\begin{example}
Let $(\Omega,\Sigma,\mu)$ be a finite measure space and let $1\leq p<\infty$. The formal inclusion $i_p\colon L_\infty(\mu)\to L_p(\mu)$ is $p$-integral but not $p$-nuclear (see \cite[p. 113]{DisJarTon-95}). Let $g\in\H^\infty(L_\infty(\mu),L_\infty(\mu))$. Clearly, $f:=i_p\circ g\in \I^{\H^\infty}_p(L_\infty(\mu),L_p(\mu))$ by Corollary \ref{now}, but $f\notin\Nu^{\H^\infty}_p(L_\infty(\mu),L_p(\mu))$. Indeed, notice that $T_f=i_p\circ T_g$ by Theorem \ref{teo0}, and if $f$ were in $\Nu^{\H^\infty}_p(L_\infty(\mu),L_p(\mu))$, then we would have that $i_p\circ T_g=T_f\in\Nu_p(\G^\infty(L_\infty(\mu)),L_p(\mu))$ by Proposition \ref{nuclear}. Since $g$ was arbitrary and the mapping $g\mapsto T_g$ is an isometric isomorphism from $\H^\infty(L_\infty(\mu),L_\infty(\mu))$ onto $\L(\G^\infty(L_\infty(\mu)),L_\infty(\mu))$ by Theorem \ref{teo0}, it follows that $i_p\circ T\in\Nu_p(\G^\infty(L_\infty(\mu)),L_p(\mu))$ for all $T\in\L(\G^\infty(L_\infty(\mu)),L_\infty(\mu))$. This implies that $i_p\in\Nu_p(L_\infty(\mu),L_p(\mu))$, a contradiction.
\end{example}
In general, a bounded-holomorphic ideal $\I^{\H^\infty}$ does not coincide with $\I\circ\H^\infty$, as we see below.
\begin{examples}
The ideal $\H_w^\infty$ of locally weakly compact bounded holomorphic mappings does not coincide with $\W\circ\H^\infty$. For example, let $\interior{\D}$ be the open unit disc in $\mathbb{C}$, and let $f\colon\interior{\D}\to c_0$ be the mapping defined by $f(z)=(z^n)_{n=1}^\infty$. By Example 3.2 in \cite{Muj-91}, $f$ belongs to $\H_k^\infty(\interior{\D},c_0)$ but not to $\H_w^\infty(\interior{\D},c_0)$. Hence $T_f$ fails to belong to $\W(\G^\infty(\interior{\D}),c_0)$ by \cite[Proposition 3.4 (b)]{Muj-91}, and so, by Theorem \ref{ideal}, $f$ is not in $\W\circ\H^\infty(\interior{\D},c_0)$. The same example shows that, in general, $\H_k^\infty\neq\K\circ\H^\infty$ (see Example 3.3 in \cite{AroBotPelRue-10}).
\end{examples}
\section{Dual ideal of bounded holomorphic mappings}\label{section 4}
According to \cite[Section 4.4]{Pie-80}, the dual ideal $\I^\mathrm{dual}$ of an operator ideal $\I$ is defined, for Banach spaces $E$ and $F$, by
$$
\I^\mathrm{dual}(E,F)=\left\{T\in\L(E,F)\colon T^*\in\I(F^*,E^*)\right\},
$$
where $T^*$ is the adjoint operator of $T$. It is well known that $\I^\mathrm{dual}$ is also an operator ideal. Moreover, if $(\I,\left\|\cdot\right\|_\I)$ is a normed or Banach operator ideal, then so is $\I^\mathrm{dual}$ when equipped with the norm
$$
\left\|T\right\|_{\I^\mathrm{dual}}=\left\|T^*\right\|_\I.
$$
In order to introduce the concept of bounded-holomorphic dual of an operator ideal, we will first need a holomorphic variant of the concept of adjoint operator between Banach spaces.
\begin{definition}\cite{AroSch-76,Rya-88}
Let $E,F$ be complex Banach spaces and $U$ an open subset of $E$. The transpose of a bounded holomorphic mapping $f\colon U\to F$ is the mapping $f^t\colon F^*\to\H^\infty(U)$ defined by
$$
f^t(y^*)=y^*\circ f\qquad (y^*\in F^*).
$$
\end{definition}
It is easy to show (see, for example, \cite[Proposition 1.11]{JimRuiSep-22}) that $f^t$ is a continuous linear operator with $\left\|f^t\right\|=\left\|f\right\|_\infty$. Moreover, $f^t=J_U^{-1}\circ (T_f)^*$, where $J_U\colon\H^\infty(U)\to\G^\infty(U)^*$ is the isometric isomorphism defined in Theorem \ref{teo0}.
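Let us sketch why the identity $f^t=J_U^{-1}\circ (T_f)^*$ holds, assuming (as in Theorem \ref{teo0}) that $J_U(h)(g_U(x))=h(x)$ for all $h\in\H^\infty(U)$ and $x\in U$. For every $y^*\in F^*$ and $x\in U$ we have
$$
(T_f)^*(y^*)(g_U(x))=y^*(T_f(g_U(x)))=y^*(f(x))=f^t(y^*)(x)=J_U(f^t(y^*))(g_U(x)),
$$
and since the linear span of $g_U(U)$ is dense in $\G^\infty(U)$, we conclude that $(T_f)^*=J_U\circ f^t$.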
\begin{definition}
Given an operator ideal $\I$, the bounded-holomorphic dual of $\I$ is the set
$$
(\I^{\H^\infty})^\mathrm{dual}(U,F)=\left\{f\in\H^\infty(U,F)\colon f^t\in\I(F^*,\H^\infty(U))\right\}.
$$
If $(\I,\left\|\cdot\right\|_\I)$ is a normed operator ideal, define
$$
\left\|f\right\|_{(\I^{\H^\infty})^\mathrm{dual}}=\left\|f^t\right\|_\I.
$$
\end{definition}
The next result shows that the bounded holomorphic mappings belonging to the bounded-holomorphic dual of an operator ideal $\I$ are exactly those that factor through $\I^\mathrm{dual}$.
\begin{theorem}\label{teo-dual}
Let $\I$ be an operator ideal. Then the transpose $f^t$ of a bounded holomorphic mapping $f$ belongs to $\I$ if and only if $f$ admits a factorization $f=T\circ g$ where $g$ is a bounded holomorphic mapping and the adjoint operator $T^*$ of the bounded linear operator $T$ belongs to $\I$, that is,
$$
(\I^{\H^\infty})^\mathrm{dual}=\I^\mathrm{dual}\circ\H^\infty.
$$
Moreover, if $(\I,\left\|\cdot\right\|_\I)$ is a normed operator ideal, then
$$
\left\|f\right\|_{(\I^{\H^\infty})^\mathrm{dual}}=\left\|f\right\|_{\I^\mathrm{dual}\circ\H^\infty}
$$
for all $f\in(\I^{\H^\infty})^\mathrm{dual}$.
\end{theorem}
\begin{proof}
Let $f\in(\I^{\H^\infty})^\mathrm{dual}(U,F)$. Then $f\in\H^\infty(U,F)$ and $f^t\in\I(F^{*},\H^\infty(U))$. By Theorem \ref{teo0}, there exists $T_f\in\L(\G^\infty(U),F)$ such that $f=T_f\circ g_U$. Since $(T_f)^*=J_U\circ f^t\in\I(F^*,\G^\infty(U)^*)$, the ideal property of $\I$ yields that $T_f\in\I^\mathrm{dual}(\G^{\infty}(U),F)$. Hence $f\in\I^\mathrm{dual}\circ\H^\infty(U,F)$ with $\left\|f\right\|_{\I^\mathrm{dual}\circ\H^\infty}=\left\|T_f\right\|_{\I^\mathrm{dual}}$ by Theorem \ref{ideal}, and this proves the inclusion
$$
(\I^{\H^\infty})^\mathrm{dual}(U,F)\subseteq\I^\mathrm{dual}\circ\H^\infty(U,F).
$$
Furthermore, we have
\begin{align*}
\left\|f\right\|_{\I^\mathrm{dual}\circ\H^\infty}&=\left\|T_f\right\|_{\I^\mathrm{dual}}=\left\|(T_f)^*\right\|_{\I}=\left\|J_U\circ f^t\right\|_\I\\
&\leq\left\|J_U\right\|\left\|f^t\right\|_{\I}=\left\|f^t\right\|_{\I}=\left\|f\right\|_{(\I^{\H^\infty})^\mathrm{dual}}.
\end{align*}
Conversely, let $f\in\I^\mathrm{dual}\circ\H^\infty(U,F)$. Then there are a complex Banach space $G$, a mapping $g\in\H^\infty(U,G)$ and an operator $T\in\I^\mathrm{dual}(G,F)$ such that $f=T\circ g$. Given $y^*\in F^*$, we have
$$
f^t(y^*)=(T\circ g)^t(y^*)=y^*\circ(T\circ g)=(y^*\circ T)\circ g=T^*(y^*)\circ g=g^t(T^*(y^*))=(g^t\circ T^*)(y^*),
$$
and thus $f^t=g^t\circ T^*$. Since $T^*\in\I(F^*,G^*)$ and $g^t\in\L(G^*,\H^\infty(U))$, we obtain that $f^t\in\I(F^*,\H^\infty(U))$. Hence $f\in(\I^{\H^\infty})^\mathrm{dual}(U,F)$ and this shows that
$$
\I^\mathrm{dual}\circ\H^\infty(U,F)\subseteq(\I^{\H^\infty})^\mathrm{dual}(U,F).
$$
Moreover, we have
\begin{align*}
\left\|f\right\|_{(\I^{\H^\infty})^\mathrm{dual}}&=\left\|f^t\right\|_{\I}=\left\|g^t\circ T^*\right\|_{\I}\\
&\leq\left\|g^t\right\|\left\|T^*\right\|_{\I}=\left\|g\right\|_\infty\left\|T\right\|_{\I^\mathrm{dual}},
\end{align*}
and taking the infimum over all representations $T\circ g$ of $f$, we conclude that
$$
\left\|f\right\|_{(\I^{\H^\infty})^\mathrm{dual}}\leq\left\|f\right\|_{\I^\mathrm{dual}\circ\H^\infty}.
$$
\end{proof}
Theorems 2.1, 2.2, 2.6 and 2.7 in \cite{JimRuiSep-22} can be deduced from our preceding results.
\begin{corollary}
Let $\I=\F,\overline{\F},\K,\W$. Then a bounded holomorphic mapping belongs to $\H^\infty_{\I}(U,F)$ if and only if its transpose belongs to $\I(F^*,\H^\infty(U))$.
\end{corollary}
\begin{proof}
We have
$$
\H^\infty_{\I}=\I\circ\H^\infty=\I^\mathrm{dual}\circ\H^\infty=(\I^{\H^\infty})^\mathrm{dual},
$$
where the first equality follows from Proposition \ref{new}, the second from \cite[Proposition 4.4.7]{Pie-80} and the third from Theorem \ref{teo-dual}.
\end{proof}
With a proof similar to the above but replacing \cite[Proposition 4.4.7]{Pie-80} by \cite[Theorem 5.15]{DisJarTon-95}, we obtain the following result on 1-integral holomorphic mappings.
\begin{corollary}
A bounded holomorphic mapping belongs to $\I_1^{\H^\infty}(U,F)$ if and only if its transpose belongs to $\I_1(F^*,\H^\infty(U))$. $\hfill\qed$
\end{corollary}
Regarding dual ideals, general representation theorems using topological tensor products provide useful tools in the linear case, and even in the Lipschitz and multilinear cases. For future research, it would be interesting to study a holomorphic analogue of this kind of dual representation. It could provide another point of view, closer in spirit to the Defant--Floret book \cite{DefFlo-93} with its focus on tensor products, which could open the door to more general approaches.\\
\textbf{Funding:} This research was partially supported by project UAL-FEDER grant UAL2020-FQM-B1858, by Junta de Andaluc\'{\i}a grants P20$\_$00255 and FQM194, and by Ministerio de Ciencia e Innovación grant PID2021-122126NB-C31. \\
\textbf{Data Availability Statement:} There are no data associated with this research.\\
\textbf{Competing Interests:} The authors declare that they have no conflict of interest.\\
\textbf{Author Contributions:} All authors contributed to the study conception and design. The first draft of the manuscript was written by Antonio Jim\'enez-Vargas and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
\section{Introduction and statement of the main results}
A classical result of Katok \cite{katok} states that for integers $k \geq 2$ the space $S_{2k}(\Gamma)$ of cusp forms of weight $2k$ for a cofinite discrete subgroup $\Gamma \subset \mathrm{SL}_2(\R)$ is generated by a family of Poincar\'e series associated with primitive hyperbolic matrices $\gamma \in \Gamma$. Explicitly, these Poincar\'e series are defined by\footnote{We use a slightly different normalization than Katok \cite{katok} to simplify our formulas.}
\begin{align}\label{fkgamma}
f_{k,\gamma}(z) = - \frac{D_\gamma^{k-\frac{1}{2}}}{\pi}\sum_{g \in \Gamma_\gamma \backslash\Gamma}\frac{1}{(Q_{\gamma }\circ g)(z,1)^{k}},
\end{align}
where $\Gamma_\gamma = \{\pm \gamma^n: n \in \mathbb{Z}\}$\footnote{We will assume throughout that $-1 \in \Gamma$.}, $Q_\gamma(x,y) = cx^2+(d-a)xy-by^2$ is the binary quadratic form corresponding to $\gamma = \left(\begin{smallmatrix}a & b \\ c & d \end{smallmatrix}\right)$, $D_\gamma = \mathrm{tr}(\gamma)^2-4$ is the discriminant of $Q_\gamma$, and $\Gamma$ acts on binary quadratic forms in the usual way. These hyperbolic Poincar\'e series have many other interesting applications, the most prominent one being Kohnen's \cite{kohnen} construction of the holomorphic kernel function for the Shimura correspondence.
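For a concrete illustration of these notions, consider $\Gamma = \mathrm{SL}_2 (\Z)$ and the primitive hyperbolic matrix $\gamma = \left(\begin{smallmatrix}2 & 1 \\ 1 & 1 \end{smallmatrix}\right)$. Then
$$
Q_\gamma(x,y) = x^2 - xy - y^2, \qquad D_\gamma = \mathrm{tr}(\gamma)^2 - 4 = 5,
$$
and the roots of $Q_\gamma(z,1)=0$ are the two real fixed points $\frac{1\pm\sqrt{5}}{2}$ of $\gamma$, which are the endpoints of the geodesic semi-circle $S_\gamma$ appearing below.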
Katok \cite{katok} also gave a beautiful geometric formula for (the imaginary part of) the geodesic cycle integrals of the cusp forms $f_{k,\gamma}(z)$. If $\gamma,\sigma \in \Gamma$ are primitive hyperbolic elements which have positive trace and which are not conjugacy equivalent, and $S_{\gamma}$ denotes the geodesic semi-circle in $\mathbb{H}$ connecting the two real fixed points of $\gamma$, then Katok's formula \cite[Theorem~3]{katok} states that
\begin{align} \label{katokformula}
\mathrm{Im} \left(\int_{z_0}^{\sigma. z_0} f_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz \right) = (D_\gamma D_\sigma)^{\frac{k - 1}{2}} \sum_{p \in [S_\gamma]\cap [S_\sigma]} \mu_p^{k} P_{k - 1}( \cos \theta_p),
\end{align}
where $z_0 \in \mathbb{H}$ and the path of integration can be chosen arbitrarily, the sum runs over the finitely many intersection points $p$ of the closed geodesics $[S_\gamma] = \Gamma_\gamma \backslash S_\gamma$ and $[S_\sigma] = \Gamma_\sigma \backslash S_{\sigma}$ in $\Gamma \setminus \mathbb{H}$, and $P_r$ denotes the $r$-th Legendre polynomial. Moreover, $\theta_p = \theta_p(\gamma,\sigma) \in [0, \pi]$ denotes the \emph{intersection angle} at $p$, which is measured counterclockwise from the tangent at $S_{\gamma}$ to the tangent at $S_{\sigma}$ at $p$, and $\mu_p = \mu_p(\gamma,\sigma) \in \{\pm 1\}$ denotes the \emph{sign of the intersection} at $p$, which is defined as follows: let $g \in \Gamma$ be chosen such that the intersection point $p$ corresponds to the intersection point of $S_\gamma$ and $S_{g \sigma g^{-1}}$ in $\mathbb{H}$, and suppose that $S_\gamma$ and $S_{g \sigma g^{-1}}$ are oriented clockwise (which means that the lower left entries of $\gamma$ and $g\sigma g^{-1}$ are positive). Then $\mu_p(\gamma,\sigma) = +1$ if the left endpoint of $S_{g \sigma g^{-1}}$ lies between the two endpoints of $S_\gamma$, and $\mu_p(\gamma,\sigma) = -1$ otherwise. The sign of $\mu_p$ changes if the orientation of either $S_\gamma$ or $S_{g \sigma g^{-1}}$ is reversed. Note that $\theta_p(\gamma,\sigma)$ does not depend on the orientation of $S_\gamma$ or $S_\sigma$, but it does depend on the order of $\gamma,\sigma$, that is, we have $\theta_p(\sigma,\gamma) = \pi - \theta_p(\gamma,\sigma)$. Similarly, we have $\mu_p(\gamma,\sigma) = -\mu_p(\sigma,\gamma)$.
More recently, Matsusaka~\cite{matsusaka} investigated the (homogenized) cycle integrals of certain modular integrals of weight $2$ for $\mathrm{SL}_2 (\Z)$ with rational period functions. These modular integrals were constructed by Duke, Imamo\={g}lu, and T\'oth in \cite{dit,ditmodularintegrals}, and are defined\footnote{Again, our normalization differs from \cite{matsusaka,dit,ditmodularintegrals}.} for primitive hyperbolic $\gamma \in \mathrm{SL}_2 (\Z)$ by
\begin{align}\label{ditmodularintegral}
F_{\gamma}(z) = \frac{2\sqrt{D_\gamma}}{\pi}\sum_{n = 0}^\infty \left(\int_{z_0}^{\gamma z_0}j_n(\tau)\frac{d\tau}{Q_\gamma(\tau,1)}\right)e^{2\pi i n z},
\end{align}
where $j_n(\tau)$ denotes the unique weakly holomorphic modular function for $\mathrm{SL}_2 (\Z)$ whose Fourier expansion has the shape $j_n(\tau) = q^{-n}+O(q)$ with $q = e^{2 \pi i \tau}$. It was shown in \cite{dit} that the series defining $F_\gamma(z)$ converges to a holomorphic function on $\mathbb{H}$, and satisfies for any $\sigma \in \mathrm{SL}_2 (\Z)$ the transformation formula
\begin{align}\label{ditcocycle}
r_\gamma(\sigma,z) = (F_{\gamma}|_{2}\sigma)(z) - F_{\gamma}(z) = \frac{2\sqrt{D_\gamma}}{\pi}\sum_{\substack{g \in \Gamma_\gamma\backslash \Gamma \\ w_{g^{-1}\gamma g}' < \sigma^{-1}. i \infty < w_{g^{-1}\gamma g}}}\frac{\mathrm{sign}(Q_\gamma\circ g)}{(Q_\gamma\circ g)(z,1)},
\end{align}
where $w_\gamma' < w_\gamma$ denote the two real fixed points of $\gamma$, and we put $\mathrm{sign}(Q) = \mathrm{sign}(A)$ for a binary quadratic form $Q(x, y) = Ax^2 + Bxy + Cy^2$. Note that the sum on the right-hand side is finite. The function $\sigma \mapsto r_\gamma(\sigma,z)$ defines a holomorphic weight $2$ cocycle for $\mathrm{SL}_2 (\Z)$ with values in the rational functions on $\mathbb{C}$, and the function $F_\gamma(z)$ is called a \emph{modular integral} of weight $2$ for $r_{\gamma}(\sigma,z)$. Matsusaka proved the remarkable formula
\begin{align}\label{matsusakaformula}
\mathrm{Im}\left(\lim_{n \to \infty}\int_{\sigma^n. z_0}^{\sigma^{n+1}.z_0}F_{\gamma}(z)dz\right) = \sum_{p \in [S_\gamma]\cap[S_\sigma]}1,
\end{align}
where $z_0 \in \mathbb{H}$ and the path of integration can be chosen arbitrarily, and $\gamma,\sigma \in \mathrm{SL}_2 (\Z)$ are primitive hyperbolic matrices with positive trace which are not conjugacy equivalent (compare \cite[Corollary~3.8, Theorem~3.3]{matsusaka}). Note that the homogenization of the cycle integral on the left-hand side is necessary to make it independent of $z_0$, and a conjugacy class invariant in $\sigma$. The right-hand side of the formula \eqref{matsusakaformula} counts the number of intersections of $[S_\gamma]$ and $[S_\sigma]$ in $\mathrm{SL}_2 (\Z) \backslash \mathbb{H}$, which by \cite[Theorem~3]{ditlinking} can also be interpreted as the linking number of certain modular knots associated with $\gamma$ and $\sigma$.
Notice that there is a striking similarity between Matsusaka's formula \eqref{matsusakaformula} and (the formal specialization to $k = 1$ of) Katok's formula \eqref{katokformula}.
Motivated by this observation, in the present work we extend Matsusaka's formula \eqref{matsusakaformula} to certain modular integrals of higher weight $2k$ (with $k \geq 2$) for cofinite discrete subgroups $\Gamma \subset \mathrm{SL}_2(\R)$, by evaluating their homogenized cycle integrals in terms of intersection angles of geodesics in $\Gamma \backslash \mathbb{H}$, much in the spirit of Katok's formula \eqref{katokformula}.
The modular integrals we consider here were introduced by Parson\footnote{In fact, our definition is not precisely Parson's, but differs from her original Poincar\'e series by a cusp form. Moreover, we use a different normalization than Parson to match our normalization of \eqref{fkgamma}.} in \cite{parson}, and are defined for integers $k \geq 2$ and primitive hyperbolic $\gamma \in \Gamma$ by
\begin{align}\label{parsonmodularintegral}
F_{k,\gamma}(z) := - \frac{D_\gamma^{k-\frac{1}{2}}}{\pi}\sum_{g \in \Gamma_\gamma \backslash\Gamma}\frac{\mathrm{sign}(Q_\gamma \circ g)}{(Q_{\gamma }\circ g)(z,1)^{k}}.
\end{align}
A direct computation shows that the holomorphic function $F_{k,\gamma}(z)$ satisfies for any $\sigma\in \Gamma$ the transformation law
\begin{align}\label{parsoncocycle}
r_{k,\gamma}(\sigma,z) := (F_{k,\gamma}|_{2k}\sigma)(z) - F_{k,\gamma}(z) = \frac{2D_\gamma^{k-\frac{1}{2}}}{\pi}\sum_{\substack{g \in \Gamma_\gamma \backslash\Gamma \\ w_{g^{-1}\gamma g}' < \sigma^{-1}. i \infty < w_{g^{-1}\gamma g}}}\frac{\mathrm{sign}(Q_\gamma\circ g)}{(Q_\gamma\circ g)(z,1)^{k}}.
\end{align}
The sum on the right-hand side is finite. In particular, the map $\sigma \mapsto r_{k,\gamma}(\sigma,z)$ is a holomorphic weight $2k$ cocycle for $\Gamma$ with values in the rational functions on $\mathbb{C}$, and $F_{k,\gamma}(z)$ is a modular integral for $r_{k,\gamma}(\sigma,z)$.
Notice that the cocycle $r_{\gamma}(\sigma,z)$ in \eqref{ditcocycle} is the specialization to $k = 1$ of the cocycle $r_{k,\gamma}(\sigma,z)$ in \eqref{parsoncocycle}. Hence, we may view the weight $2$ modular integral $F_{\gamma}(z)$ defined in \eqref{ditmodularintegral} as the $k = 1$ analog of Parson's modular integral $F_{k,\gamma}(z)$ defined in \eqref{parsonmodularintegral}. However, $F_{k,\gamma}(z)$ does not converge for $k = 1$, although it is probably possible to extend the definition \eqref{parsonmodularintegral} to $k = 1$ using Hecke's trick, and to show that $F_{1,\gamma}(z) = F_{\gamma}(z)$.
Our main result is the following geometric formula for (the imaginary part of) the cycle integrals of Parson's modular integrals $F_{k,\gamma}(z)$. It is an analog of Katok's formula \eqref{katokformula} for modular integrals, and a higher weight analog of Matsusaka's formula \eqref{matsusakaformula}.
\begin{thm}\label{mainresult}
Let $k \geq 2$ be an integer and let $\gamma$, $\sigma$ be primitive hyperbolic elements in $\Gamma$ with positive trace which are not conjugacy equivalent. We have
\begin{equation} \label{modintkatok}
\mathrm{Im} \left( \lim_{n \to \infty}\int_{\sigma^n.z_0}^{\sigma^{n+1}.z_0} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz \right) = (D_\gamma D_\sigma)^{\frac{k - 1}{2}} \sum_{p \in [S_\gamma]\cap [S_\sigma]} \mu_p^{k - 1} P_{k - 1}( \cos \theta_p),
\end{equation}
where the notation is as in \eqref{katokformula}.
\end{thm}
The proof of Theorem~\ref{mainresult} will be given in Section~\ref{proofs} below. We would like to give a quick proof of the fact that the limit on the left-hand side exists and is independent of $z_0$. First, note that $\sigma^n.z_0$ converges to $w_\sigma$ as $n \to \infty$, independently of the choice of $z_0 \in \mathbb{H}$ (if we assume for the moment that $\sigma$ has positive trace and positive lower left entry; see \cite[Lemma 2.7]{matsusaka}). Now the function $x\mapsto\displaystyle\int_{x}^{\sigma.x} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz$ is Lipschitz continuous as $x$ approaches $w_\sigma$. Indeed, note that \begin{align} \begin{split}\label{Lipschitz}
&\left\vert \int_{x_0}^{\sigma.x_0} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz - \int_{x_1}^{\sigma.x_1} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz \right\vert \\
&= \left\vert \int_{x_0}^{x_1} r_{k,\gamma} (\sigma, z) Q_\sigma(z, 1)^{k - 1} \, dz \right\vert \leq L_{k, \gamma, \sigma} \vert x_0 - x_1 \vert,
\end{split}
\end{align}
for some constant $L_{k,\gamma,\sigma} > 0$ and $x_0,x_1$ close to $w_\sigma$, as $r_{k,\gamma}(\sigma, z)$ is holomorphic at $w_\sigma$ if $\gamma$ and $\sigma$ are not conjugacy equivalent, so $\vert r_{k,\gamma}(\sigma, z) Q_\sigma(z, 1)^{k - 1} \vert$ is bounded in a neighbourhood of $w_\sigma$. This implies that $\displaystyle\int_{\sigma^n.z_0}^{\sigma^{n+1}.z_0} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz$ is a Cauchy sequence. Moreover, if we put $x_0 = \sigma^n z_0$ and $x_1 = \sigma^n z_1$ in \eqref{Lipschitz} and take the limit as $n \to \infty$, we see that the left-hand side in Theorem~\ref{mainresult} is independent of $z_0$. Note that this also implies that the homogenized cycle integral of $F_{k,\gamma}(z)$ is a conjugacy class invariant in $\sigma$. We would also like to remark that the right-hand side in Theorem~\ref{mainresult} is a finite sum, which can be explicitly computed (numerically) as explained in \cite{rickards}.
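To illustrate Theorem~\ref{mainresult}, note that in the smallest case $k = 2$ we have $\mu_p^{k-1} = \mu_p$ and $P_1(x) = x$, so the formula \eqref{modintkatok} reduces to
$$
\mathrm{Im} \left( \lim_{n \to \infty}\int_{\sigma^n.z_0}^{\sigma^{n+1}.z_0} F_{2,\gamma}(z) Q_\sigma(z, 1) dz \right) = \sqrt{D_\gamma D_\sigma} \sum_{p \in [S_\gamma]\cap [S_\sigma]} \mu_p \cos \theta_p,
$$
a signed sum of the cosines of the intersection angles.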
Prior to Matsusaka's formula (\ref{matsusakaformula}), Duke, Imamo\={g}lu, and T\'oth proved its ``parabolic version'', which expresses the number of intersections of the net of the geodesics equivalent to $S_\gamma$ with the non-compact geodesic $S_{- d / c}$ from $- \frac{d}{c}$ to $i \infty$ as the central value of the twisted $L$-function
\begin{align*}
L_{\gamma}\left( s, - \frac{d}{c} \right) = \sum_{n = 1}^\infty \frac{a_\gamma(n) e^{ - 2\pi i\frac{d}{c} n}}{n^s}, \quad (c,d) = 1, \; c > 0, \quad \mathrm{Re}(s) \gg 1,
\end{align*}
of the modular integral $F_\gamma(z) = \sum_{n = 1}^\infty a_\gamma(n) e^{2 \pi i n z}$ in (\ref{ditmodularintegral}). The explicit formula reads \begin{equation} \label{ditintersectionformula}
\frac{1}{2\pi} \; \mathrm{Re} L_{\gamma}\left(1, - \frac{d}{c} \right) = \sum_{p \in [S_\gamma] \; \cap \; S_{- d / c}} 1,
\end{equation}
where $[S_\gamma]$ denotes the equivalence class of the geodesic $S_\gamma$ under the action of $\mathrm{SL}_2 (\Z)$ (see \cite[Theorem 5.2]{ditlinking}). We generalize (\ref{ditintersectionformula}) to the higher weight case.
The Parson Poincar\'e series $F_{k,\gamma}(z)$ defined in \eqref{parsonmodularintegral} has a Fourier expansion of the shape
\[
F_{k,\gamma}(z) = \sum_{\substack{n \in \frac{1}{N}\mathbb{Z}, n > 0}}a_{k,\gamma}(n)e^{2\pi i n z},
\]
where $N$ denotes the width of the cusp $\infty$ with respect to the group $\Gamma$. The Fourier coefficients $a_{k,\gamma}(n)$ can be explicitly computed in terms of Kloosterman sums and Bessel functions as in \cite[Theorem~3.1]{parson}, or in terms of cycle integrals of weakly holomorphic modular forms as in \cite[Theorem~3]{ditmodularintegrals}. We define the twisted $L$-function of $F_{k,\gamma}(z)$ by
\begin{align*}
L_{k,\gamma}\left( s, - \frac{d}{c} \right) = \sum_{n \in \frac{1}{N}\mathbb{Z}, n > 0}\frac{a_{k,\gamma}(n) e^{- 2\pi i\frac{d}{c} n}}{n^s}, \quad (c,d) = 1, \; c > 0, \quad \mathrm{Re}(s) \gg 1.
\end{align*}
At its central value, the twisted $L$-function satisfies the following analog of (\ref{ditintersectionformula}).
\begin{thm}\label{mainresult2}
Let $k \geq 3$ be an odd integer, let $\gamma$ be a hyperbolic element with positive trace, and let $d, c$ be coprime integers with $c > 0$. We have \begin{align} \label{intersectionformulahigherk}
(-1)^{\frac{k - 1}{2}} \frac{(k - 1)!}{(2\pi)^k} \; \mathrm{Re} L_{k,\gamma}\left(k, - \frac{d}{c} \right) = D_\gamma^{\frac{k - 1}{2}} \sum_{p \in [S_\gamma] \; \cap \; S_{- d / c}} P_{k - 1} \left( \cos \theta_p \right),
\end{align}
where $\theta_p$ denotes the angle of intersection at the point $p$ between the geodesic in the equivalence class of $S_\gamma$ and the geodesic $S_{- d / c}$.
\end{thm}
The left-hand side of \eqref{intersectionformulahigherk} can be written as the imaginary part of the cycle integral of $F_{k,\gamma}(z)$ along the non-compact geodesic from $-d/c$ to $i\infty$, so Theorem~\ref{mainresult2} can be viewed as a ``parabolic'' analog of Theorem~\ref{mainresult}.
\section{The proofs of Theorem~\ref{mainresult} and Theorem~\ref{mainresult2}}\label{proofs}
\subsection{Proof of Theorem~\ref{mainresult}}
The proof is similar to the proof of Katok's formula \eqref{katokformula}; compare \cite[Theorem~3]{katok}. The main difference is that we cannot directly unfold the Parson Poincar\'e series, since it is not modular.
Up to interchanging $\gamma$ with $\gamma^{-1}$ and $\sigma$ with $\sigma^{-1}$ we may assume that the lower left entries of $\gamma$, $\sigma$ are positive. First, we rewrite
\[
\int_{\sigma^n.z_0}^{\sigma^{n + 1}.z_0} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz = \int_{z_0}^{\sigma.z_0} (c_n z + d_n)^{-2k} F_{k,\gamma}(\sigma^n.z) Q_\sigma(z, 1)^{k - 1} dz,
\]
where we put $\sigma^n = \SmallMatrix{*}{*}{c_n}{d_n}$. A direct computation shows that
\begin{equation} \label{lemmaintersect}
\mathrm{sign}((Q_\gamma \circ \sigma^{-n}) (1, 0)) \to \mathrm{sign}(Q_\gamma(w_\sigma', 1)) \; \; \mathrm{as} \; n \to + \infty;
\end{equation}
\end{equation}
see \cite[Lemma 2.7]{matsusaka}. If the two geodesics $S_\gamma$ and $S_\sigma$ intersect, we have $\mathrm{sign}(Q_\gamma(w_\sigma', 1)) = - \mu_p(\gamma, \sigma)$.
Now the Parson Poincar\'e series becomes \begin{align*}
(c_n z + d_n)^{-2k} F_{k,\gamma}(\sigma^n.z) &= - \frac{D_\gamma^{k - 1 / 2}}{\pi} \sum_{g \in \Gamma_\gamma \setminus \Gamma} \frac{\mathrm{sign}(Q_\gamma \circ g)}{(c_n z + d_n)^{2k} (Q_\gamma \circ g)(\sigma^n.z, 1)^k} \\
&= - \frac{D_\gamma^{k - 1 / 2}}{\pi}\sum_{g \in \Gamma_\gamma \setminus \Gamma} \frac{\mathrm{sign}(Q_\gamma \circ g \sigma^{-n})}{(Q_\gamma \circ g)(z, 1)^k} \\
&\to - \frac{D_\gamma^{k - 1 / 2}}{\pi} \sum_{g \in \Gamma_\gamma \setminus \Gamma} \frac{\mathrm{sign}((Q_\gamma \circ g)(w_\sigma', 1))}{(Q_\gamma \circ g)(z, 1)^k}, \; \mathrm{as} \; n \to + \infty,
\end{align*}
where in the last line we applied (\ref{lemmaintersect}). The resulting series is modular of weight $2k$ for the group $\Gamma_\sigma=\{\pm \sigma^n: n\in \mathbb{Z}\}$. Hence, the cycle integral $$\int_{z_0}^{\sigma.z_0} \sum_{g \in \Gamma_\gamma \setminus \Gamma} \frac{\mathrm{sign}((Q_\gamma \circ g)(w_\sigma', 1))}{(Q_\gamma \circ g)(z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz$$
is independent of the choice of $z_0$.
Now a typical unfolding argument yields \begin{align*}
&\int_{z_0}^{\sigma.z_0} \sum_{g \in \Gamma_\gamma \setminus \Gamma} \frac{\mathrm{sign}((Q_\gamma \circ g)(w_\sigma', 1))}{(Q_\gamma \circ g)(z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz \\
&= \sum_{g \in \Gamma_\gamma \setminus \Gamma / \Gamma_\sigma} \sum_{m \in \mathbb{Z}} \int_{\sigma^m.z_0}^{\sigma^{m + 1}.z_0} \frac{\mathrm{sign}((Q_\gamma \circ g)(w_\sigma', 1))}{(Q_\gamma \circ g) (z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz \\
&= \sum_{g \in \Gamma_\gamma \setminus \Gamma / \Gamma_\sigma} \int_{S_\sigma} \frac{\mathrm{sign}((Q_\gamma \circ g)(w_\sigma', 1))}{(Q_\gamma \circ g) (z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz.
\end{align*}
Subtracting the complex conjugate from the last equation gives the sum of integrals $$\sum_{g \in \Gamma_\gamma \setminus \Gamma / \Gamma_\sigma} \displaystyle\int_{C(\sigma)} \frac{\mathrm{sign}((Q_\gamma \circ g)(w_\sigma', 1))}{(Q_\gamma \circ g) (z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz$$ over the circle $C(\sigma)$ through the roots of $Q_\sigma(z, 1) = 0$. The integrands are all meromorphic, with poles only at the real roots of $(Q_\gamma \circ g)(z,1)$. Hence, by Cauchy's theorem, the integrals vanish if the geodesic connecting the roots of $(Q_\gamma \circ g) (z,1)$ does not intersect $S_\sigma$. Thus, we are left with
\[
\frac{D_\gamma^{k - 1 / 2}}{\pi} \sum_{p \in [S_\gamma] \cap [S_\sigma]} \mu_p(\gamma, \sigma) \displaystyle\int_{C(\sigma)} \frac{1}{(Q_\gamma \circ g) (z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz,
\]
as $\mu_p(\gamma, \sigma) = - \mathrm{sign}((Q_\gamma \circ g) (w_\sigma', 1))$ if the geodesics $S_\sigma$ and $S_{g^{-1}\gamma g}$ intersect.
The integrals evaluate to $$\displaystyle\int_{C(\sigma)} \frac{1}{(Q_\gamma \circ g) (z, 1)^k} Q_\sigma(z, 1)^{k - 1} dz = 2 \pi i D_\gamma^{- k / 2} D_\sigma^{\frac{k - 1}{2}} \mu_p^k P_{k - 1} (\cos \theta_p);$$
see \cite[p. 478]{katok}. This finishes the proof of Theorem~\ref{mainresult}.
\subsection{Proof of Theorem~\ref{mainresult2}}
The proof is a careful application of \cite[Lemma 2]{katok}, which asserts that for odd $k \in \mathbb{Z}, k \geq 3$, and any $A, B, C \in \R$ with $D = B^2 - 4AC > 0$, we have \begin{equation} \label{legendreeva}
\int_{-\infty}^{\infty} \frac{t^{k - 1}}{(-At^2 + Bit + C)^k} dt = \begin{cases}0, &AC > 0, \\ (-1)^{\frac{k + 1}{2}} \mathrm{sign}(A) 2 \pi D^{- k / 2} P_{k - 1} \left( \frac{B}{\sqrt{D}} \right), &AC < 0. \end{cases}
\end{equation}
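As a concrete instance, in the smallest admissible case $k = 3$ we have $P_2(x) = \frac{1}{2}(3x^2 - 1)$, so for $AC < 0$ the evaluation \eqref{legendreeva} reads
$$
\int_{-\infty}^{\infty} \frac{t^{2}}{(-At^2 + Bit + C)^3}\, dt = \mathrm{sign}(A)\, \frac{2\pi}{D^{3/2}} \cdot \frac{3B^2 - D}{2D},
$$
since $(-1)^{\frac{k+1}{2}} = 1$ for $k = 3$.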
Consider the integral $\displaystyle\int_{- \frac{d}{c}}^{i \infty} F_{k, \gamma}(z) (cz + d)^{k - 1} dz$. A standard argument gives that \begin{equation*}
\int_{- \frac{d}{c}}^{i \infty} F_{k, \gamma}(z) (cz + d)^{k - 1} dz = \left( \frac{c}{2 \pi } \right)^k \Gamma(k) \frac{i^k}{c} L_{k, \gamma}\left( k, -\frac{d}{c} \right).
\end{equation*}
The imaginary part of the integral is equal to \begin{align*}
\mathrm{Im} \int_{- \frac{d}{c}}^{i \infty} F_{k, \gamma}(z) (cz + d)^{k - 1} dz &= c^{k - 1} \mathrm{Im} \int_{0}^{i \infty} F_{k, \gamma}\left(z - \frac{d}{c} \right) z^{k - 1} dz \\
&= - c^{k - 1} \frac{D_\gamma^{k-\frac{1}{2}}}{\pi} \sum_{g \in \Gamma_\gamma \setminus \Gamma} \mathrm{Im} \left( \int_{0}^{i \infty} \frac{\mathrm{sign}(Q_\gamma \circ g) z^{k - 1}}{(Q_\gamma \circ g)(z - d / c, 1)^k} dz \right),
\end{align*}
where we can exchange sum and integral by Fubini's theorem, as the integrand is absolutely convergent.
Fix $g \in \Gamma_\gamma \setminus \Gamma$ for the moment and write $Az^2 + Bz + C$ for $(Q_\gamma \circ g)(z, 1)$. The quadratic form
\[
(Q_\gamma\circ g)(z - d / c, 1) = Az^2 + \left(B - 2 A d / c \right)z + \left( A \left( d / c \right)^2 - B d / c + C \right) = A'z^2 + B'z + C'
\]
intersects with the non-compact geodesic $S_{- d / c}$ if and only if
\[
A' C' = A\left( A \left( d / c \right)^2 - B d / c + C \right) < 0.
\]
With (\ref{legendreeva}), we get
\begin{align*}
\mathrm{Im} \left( \int_{0}^{i\infty} \frac{\mathrm{sign}(A') z^{k - 1}}{\left(A'z^2 + B'z + C'\right)^k} dz \right)
&= \frac{(-1)^{\frac{k - 1}{2}}}{2} \int_{-\infty}^{\infty} \frac{\mathrm{sign}(A') t^{k - 1}}{\left(-A't^2 + B'it + C'\right)^k} dt \\
&= - \pi D_\gamma^{- k / 2} P_{k - 1} \left( \frac{B'}{\sqrt{D_\gamma}} \right)
\end{align*}
if and only if the geodesic $S_{g\gamma g^{-1}}$ intersects the non-compact geodesic $S_{- d / c}$ associated to the quadratic form $cz + d$. Otherwise, the integral evaluates to zero by (\ref{legendreeva}). By \cite[Proposition~2.2]{rickards}, the intersection angle $\theta_p$ between these two geodesics is given by $\cos \theta_p = \frac{Bc - 2Ad}{c \sqrt{D_\gamma}}$. This finishes the proof.
\section{Additional Remarks}
In this section, we present some further properties and possible applications of the homogenized cycle integrals and periods of Parson's modular integrals $F_{k,\gamma}(z)$.
\subsection{Explicit representation of the cycle integral} As the Parson Poincar\'e series is no longer modular, the cycle integral $\displaystyle\int_{z_0}^{\sigma.z_0} F_{k, \gamma} (z) Q_\sigma(z, 1)^{k - 1} dz$ depends on the choice of the point $z_0$ and is a complicated function of $z_0$. Only by taking the homogenization do we obtain a conjugacy class invariant object independent of the choice of $z_0$, which has the nice representation on the right hand side of (\ref{modintkatok}).
Explicitly, we have \begin{align*}
&\int_{z_0}^{\sigma.z_0} F_{k, \gamma}(z) Q_\sigma(z, 1)^{k - 1} dz = \int_{i\infty}^{\sigma.i \infty} F_{k, \gamma}(z) Q_\sigma(z, 1)^{k - 1} dz \\
&+ \sum_{n = 0}^{2k - 2} \sum_{\substack{g \in \Gamma_\gamma \setminus \Gamma, \\ w_{g \gamma g^{-1}}' < \sigma^{-1}.i \infty < w_{g \gamma g^{-1}}}} \rho_{n, g, \gamma, \sigma}(z_0) \;_2 F_1\left(k, 2k - 1 - n; 2k; 1 - \frac{z_0 - w_Q}{z_0 - w_Q'} \right),
\end{align*}
where $\rho_{n, g, \gamma, \sigma}(z_0)$ is a rational function given by $\rho_{n, g, \gamma, \sigma}(z_0) = \frac{\Gamma(2k - n - 1)}{\Gamma(2k)} \frac{\partial_z^n Q_\sigma(z, 1)^{k - 1} \vert_{z = z_0}}{(z_0 - w_{g \gamma g^{-1}}')^{2k - 1 - n}}$. One can also see from this representation that the integral converges as $z_0 \to w_\sigma$.
To prove this, we use $$\int_{z_0}^{\sigma.z_0} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz = \int_{i \infty}^{\sigma.i \infty} F_{k,\gamma}(z) Q_\sigma(z, 1)^{k - 1} dz + \int_{i \infty}^{z_0} r_{k,\gamma}(\sigma, z) Q_\sigma(z, 1)^{k - 1} dz.$$
This follows from differentiating both sides in $z_0$ and observing that both sides agree as $z_0 \to i \infty$. Rewriting the polynomial $Q_\sigma(z, 1)^{k - 1} = \sum_{n = 0}^{2k - 2} a_{n, \sigma}(z_0) (z - z_0)^n$ in its Taylor expansion about $z_0$, we obtain a finite sum of integrals $$\int_{i \infty}^{z_0} r_{k,\gamma}(\sigma, z) Q_\sigma(z, 1)^{k - 1} dz = \sum_{n = 0}^{2k - 2} a_{n, \sigma}(z_0) \sum_{\substack{Q \sim Q_\gamma, \\ w_Q' < \sigma^{-1}.i \infty < w_Q}} \int_{z_0}^{i \infty} \frac{(z - z_0)^n}{(z - w_Q)^k(z - w_Q')^k} dz,$$ which can be evaluated in closed form. Standard integral transformations give \begin{align}
\begin{split} \label{integralevaluation}
&\int_{z_0}^{i\infty}\frac{(z-z_0)^{n}}{(z-w_Q)^k(z-w_Q')^k}dz = \\
&\frac{\Gamma(2k - n - 1) \Gamma(n + 1)}{\Gamma(2k)} (z-w_Q')^{n - 2k + 1} \ _2 F_1\left(k, 2k - n - 1, 2k; 1-\frac{z-w_Q}{z-w_Q'}\right).
\end{split}
\end{align}
That the right hand side of (\ref{integralevaluation}) is symmetric in $w_Q'$ and $w_Q$ also follows from Pfaff's transformation $\ _2F_1(c-a,b,c;z/(z-1)) = (1-z)^b \ _2 F_1(a,b,c;z)$.
The integral evaluation (\ref{integralevaluation}) can also be used to prove that the weight $2 - 2k$ cocycle \begin{align*}
R_\gamma(\sigma, z) &= \frac{(-2 \pi i)^{2k - 1} D^{k - 1 / 2}}{\pi (2k - 1)! \binom{2k - 2}{k - 1}} \sum_{w_Q' < - \frac{d}{c} < w_Q} \frac{1}{\vert Q(1, 0) \vert (z - w_Q')} \ _2 F_1\bigg(k,1,2k;1-\frac{z-w_Q}{z-w_Q'}\bigg) \\
&+ \frac{(-2 \pi i)^{2k - 1}}{ (2k - 2)!} \frac{i}{c^{2k-1}} \sum_{n = 0}^{2k-2}\binom{2k-2}{n}i^n \left(\frac{c}{2 \pi} \right)^{n + 1}\Gamma(n + 1) L_{k, \gamma}(n+1,a/c) (cz+d)^{n}
\end{align*}
for $\sigma = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \Gamma$ and $Q$ running over the equivalence class $[Q_\gamma]$ is a $(2k - 1)$-th primitive of $r_{k,\gamma}(\sigma, z)$, i.e. $\mathcal{D}^{2k - 1} R_\gamma(\sigma, z) = r_{k,\gamma}(\sigma, z)$ with $\mathcal{D} = \frac{1}{2\pi i}\frac{\partial}{\partial z}$. The Fourier coefficients $a_{k, \gamma}(n)$ can be explicitly calculated as in \cite[Theorem~3]{parson}.
\subsection{Periods of modular integrals} In this subsection we let $\Gamma = \mathrm{SL}_2 (\Z)$. Kohnen and Zagier~\cite{kohnenzagier} studied the periods of the cusp forms $f_{k,\gamma}(z)$ defined in~\eqref{fkgamma}, and showed that certain linear combinations of these periods are rational numbers.
A natural follow-up question to Theorem \ref{mainresult} would be to study the periods of the Parson Poincar\'e series, i.e. $p_n(F_{k, \gamma}) = \displaystyle\int_0^\infty F_{k, \gamma}(it) t^n \; dt$ for $0 \leq n \leq 2k-2$. As in~\cite{kohnenzagier} we define the symmetrizations
\[
F_{k, \gamma}^{+}(z) = F_{k,\gamma}(z) + F_{k, \gamma'}(z), \qquad F_{k, \gamma}^{-}(z) = i( F_{k, \gamma}(z) - F_{k, \gamma'}(z)),
\]
where $\gamma' = \left( \begin{smallmatrix} -1 & 0 \\ 0 & 1 \end{smallmatrix} \right) \gamma \left( \begin{smallmatrix} -1 & 0 \\ 0 & 1 \end{smallmatrix} \right)$. We split the period polynomial $$p(F_{k, \gamma})(x) = \displaystyle\int_{0}^{i\infty} F_{k, \gamma} (z)(x-z)^{2k-2}dz = \sum_{n=0}^{2k-2} i^{-n+1}\binom{2k-2}{n}p_n(F_{k, \gamma})x^{2k-2-n}$$ of $F_{k, \gamma}$ into its even and odd part as $p(F_{k, \gamma}) = ip^+(F_{k, \gamma})+p^-(F_{k, \gamma})$ with
\begin{align*}
p^+(F_{k, \gamma})(x) &= \sum_{\substack{0 \leq n \leq 2k-2 \\ n \text{ even}}}(-1)^{n/2}\binom{2k-2}{n}p_n(F_{k, \gamma})x^{2k-2-n}, \\
p^-(F_{k, \gamma})(x) &= \sum_{\substack{0 < n < 2k-2 \\ n \text{ odd}}}(-1)^{(n-1)/2}\binom{2k-2}{n}p_n(F_{k, \gamma})x^{2k-2-n}.
\end{align*}
By closely following the proof of \cite[Theorem 5]{kohnenzagier}, one obtains
\begin{align} \label{thm5ratperiods}
\begin{split}
&p^+(F_{k,\gamma}^+)(x) + p^-(F_{k,\gamma}^-)(x) \\
&\quad \doteq -2\sum_{\substack{[a,b,c] \in [Q_\gamma] \\ a < 0 < c}}(ax^2-bx + c)^{k-1} - \frac{2D^{k-1/2}\zeta_{Q_\gamma}(k)}{\binom{2k-2}{k-1}(2k-1)\zeta(2k)}(x^{2k-2}-1),
\end{split}
\end{align}
where $\zeta_{Q_\gamma}(s)$ is the $\zeta$-function associated with $Q_\gamma$ as in \cite[p. 222]{kohnenzagier}, $\zeta(s)$ is the Riemann $\zeta$-function, and $\doteq$ means equality up to a non-zero multiplicative constant\footnote{The formulas \eqref{thm5ratperiods} and \eqref{thm4ratperiods} are correct if we normalize $F_{k,\gamma}$ as in \cite{kohnenzagier}. Since our normalization of $F_{k,\gamma}$ is different, we get some simple but unpleasant extra factors.}.
In particular, the periods $p_n(F_{k,\gamma}^+)$ for even $0< n < 2k-2$ and the periods $p_n(F_{k,\gamma}^-)$ for odd $0 < n < 2k-2$ are rational.
We let $F_{k,D}(z) = \sum_{D_\gamma = D}F_{k,\gamma}(z)$ where the sum ranges over a system of representatives $\gamma$ of the conjugacy classes of primitive hyperbolic elements in $\mathrm{SL}_2 (\Z)$ with discriminant $D_\gamma = D$. From (\ref{thm5ratperiods}) it follows that for odd $k \in \mathbb{Z}$ we have
\begin{align} \label{thm4ratperiods}
\begin{split}
p^+(F_{k, D})
& \doteq -\sum_{\substack{[a,b,c] \in \mathcal{Q}_D \\ a < 0 < c}}(ax^2+bx + c)^{k-1} - \frac{D^{k-1/2}\zeta(k)L_D(k)}{\binom{2k-2}{k-1}(2k-1)\zeta(2k)}(x^{2k-2}-1),
\end{split}
\end{align}
where $L_D(s)$ is the Dirichlet $L$-function associated to the Kronecker symbol $\left( \frac{D}{\cdot} \right)$. Formula (\ref{thm4ratperiods}) is the analog of \cite[Theorem 4]{kohnenzagier}. In particular, the periods $p_n(F_{k,D})$ for even $0 < n < 2k-2$ are rational and satisfy the symmetry $p_{2k-2-n}(F_{k, D}) = p_n(F_{k, D})$.
\subsection{The homogenized cycle integral as an ``inner product''} By unfolding and \cite[Proposition~7]{kohnen}, one can also show that
\begin{align*}
& \lim_{n \to \infty} \int_{\sigma^n.z_0}^{\sigma^{n + 1}.z_0} F_{k, \gamma}(z) Q_\sigma(z, 1)^{k - 1} dz \\
&\qquad \doteq \int_{\Gamma \setminus \mathbb{H}} \sum_{g \in \Gamma_\gamma\backslash \Gamma} \sum_{h \in \Gamma_\sigma\backslash \Gamma} \frac{\mathrm{sign}((Q_{\gamma}\circ g)(w_{h \sigma h^{-1}}', 1))}{(Q_\gamma \circ g)(z, 1)^k (Q_\sigma\circ h)(\overline{z}, 1)^k} y^{2k} \frac{dxdy}{y^2}.
\end{align*}
Since $w_{h \sigma h^{-1}}' = h^{-1}.w_{\sigma}'$, the integrand is $\Gamma$-invariant.
One should compare this with the fact that the cycle integral $\displaystyle\int_{z_0}^{\sigma.z_0} f_{k, \gamma}(z) Q_\sigma(z, 1)^{k - 1} dz$ is up to constants equal to the Petersson inner product $\langle f_{k, \sigma}, f_{k, \gamma} \rangle$ of the hyperbolic Poincar\'e series (\ref{fkgamma}).
\subsection{Equidistribution of intersection angles} A possible application of Theorem \ref{mainresult} could be to give another proof of the fact that the intersection angles $\theta_p$ of a fixed geodesic $[S_\gamma]$ with the geodesics of discriminant $D$ (for $\mathrm{SL}_2 (\Z)$) equidistribute with respect to the measure $\frac{1}{2} \sin \theta \; d \theta$ as $D \to + \infty$. This was conjectured by Rickards \cite[Conjecture~4.2]{rickards} and recently proved by Jung and Sardari \cite{jungsardari}. To show this, it suffices to prove that the values $\cos \theta_p$ equidistribute with respect to the Lebesgue measure on $[-1, 1]$. The Legendre polynomials $P_{k-1}$ with $k \geq 1$ form a complete orthogonal system of $L^2([-1, 1])$. For $k > 1$ even, the Weyl sum on the right hand side of (\ref{katokformula}) can be estimated using the Shimura theory of cusp forms \cite{kohnen} and non-trivial bounds on the Fourier coefficients of half-integral weight cusp forms \cite{iwaniec}. A Siegel-type bound for the number of intersections can be obtained using (\ref{matsusakaformula}), \cite[Theorems~3.3 and 4.7, Remark~4.8]{matsusaka}, and some elementary considerations on continued fractions. For $k > 1$ odd, this approach does not work due to the appearance of the $\mu_p$-factor, so the Weyl sum here is given by the right hand side of (\ref{modintkatok}). However, it appears to be difficult to estimate the traces of the homogenized cycle integrals. We plan to come back to this in the future.
\subsection*{Acknowledgments} We are indebted to Özlem Imamo\={g}lu for suggesting the topic of the paper to us and for many insightful discussions. Moreover, we thank \'Arp\'ad T\'oth and Toshiki Matsusaka for helpful discussions. The first author was supported by SNF project 200021\_185014 and the second author was supported by SNF projects 200021\_185014 and PZ00P2\_202210.
\section{Introduction}
The exponential growth of the Internet \cite{Faloutsos1999} and
the World-Wide-Web \cite{Broder2000} confronts us with
information overload: we face too much data and too many data
sources, making us unable to find the relevant results. As a
consequence, we need automated ways to deal with the data.
Recently, a lot of
work has been done in this field. The two main directions of the
research are correlation-based methods~\cite{Balab97,Pazzani99}
and spectral methods~\cite{Maslov00}. A~good overview of the
achieved results can be found
in~\cite{Herlocker04,Adomavicius05}.
Despite the amount of work done, the problem is not yet
satisfactorily solved, as both the prediction accuracy and
the computational complexity can be improved further. In this
letter we propose a~new method based on diffusion of the users'
opinions in an object-to-object network. This method can be used
for any data where users evaluate objects on an integer scale.
Using data from a real recommender application (GroupLens
project) we show that the present model performs better than
the standard recommendation methods. In addition, a Green
function method is proposed here to further reduce computation
in some cases.
\section{The model}
In the input data, we label the total number of users as $M$ and
the total number of objects as $N$ (since we focus here on
movie recommendation, instead of the general term \emph{object}
we often use the term \emph{movie}). To make a better
distinction between these two groups, for user-related indices
we use lower case letters $i,j,k,\dots$ and for movie-related
indices we use Greek letters $\alpha,\beta,\gamma,\dots$. We
assume that users' assessments are given in the integer scale
from 1~(very bad) to 5~(very good). We denote the rating of
user $i$ for movie $\alpha$ by $v_{i\alpha}$ and the number of
movies rated by user $i$ by $k_i$. The rating data can be
described by the weighted bipartite graph where the link between
user $i$ and movie $\alpha$ is formed when user $i$ has already
rated movie $\alpha$ and the link weight is $v_{i\alpha}$. Such
a bipartite graph can give rise to two different types of graphs
(often called \emph{projections}): object-to-object and
user-to-user. A~general discussion on information networks can
be found in~\cite{Newman03}, projections of bipartite graphs are
closely investigated in~\cite{Tao07,Yu07}.
The recommendation process starts with preparation of a
particular object-to-object projection of the input data.
Projections usually lead to a loss of information. In order to
eliminate this phenomenon, instead of merely creating a link
between two movies, we link the ratings given to this pair of
movies. As a result we obtain 25 separate connections (channels)
for each movie pair. This is illustrated in fig.~\ref{fig-links}
on an example of a user who has rated three movies; as a result,
three links are created between the given movies. When we process
data from all users, their contributions accumulate into an
aggregate representation of the input
data: a weighted movie-to-movie network. From the
methodological point of view, this model is similar to the
well-known Quantum Diffusion process (see~\cite{Ian98,Kim03}).
\begin{figure}
\onefigure{QDfig-1}
\caption{Graphical representation of the links created by a user
who has rated only movies 1 (rating~5), 2 (rating~3), and 3
(rating~4).}
\label{fig-links}
\end{figure}
To each user we need to assign a weight. In general, if user $i$
has rated $k_i$ movies, $k_i(k_i-1)/2$ links in the network are
created (or fortified). If we set the user weight to
$1/(k_i-1)$, the total contribution of user $i$ is directly
proportional to $k_i$, and this is a plausible
premise.\footnote{Here one can recall the famous set of
equations for PageRank $G(i)$ of webpage $i$. It has the form
$G(i)=\alpha+(1-\alpha)\sum_{j\sim i} G(j)/k_j$, where the
subscript $j$ runs over all the webpages that contain a link to
webpage $i$ ($j\sim i$), for details see~\cite{Brin98}. Here a
similar scaling of the contributions by the inverse of the node
degree arises. By a numerical solution of the set, one obtains
values $G(i)$ which are essential for the Google search
algorithm.}
Since the users who have seen only one movie add no links to the
movie-to-movie network, the divergence of the weight $1/(k_i-1)$
at $k_i=1$ is not an obstacle.
Since between each pair of movies $(\alpha,\beta)$ we create
multiple links, it is convenient to write their weights as a
$5\times5$ matrix $\mathsf{W}_{\alpha\beta}$. Each rating can be
represented by a column vector in 5-dimensional space: rating
$v_{i\alpha}=1$ we represent as
$\vect{v}_{i\alpha}=(1,0,0,0,0)^T$, rating $v_{i\alpha}=2$ as
$\vect{v}_{i\alpha}=(0,1,0,0,0)^T$, and so forth. If the vote
has not been given yet, we set
$\vect{v}_{i\alpha}=(0,0,0,0,0)^T$. Then using the linking
scheme from fig.~\ref{fig-links} and the user weights
$1/(k_i-1)$ we write
\begin{equation}
\label{W-matrix}
\mathsf{W}_{\alpha\beta}=\sum_{i=1}^M
\frac{\vect{v}_{i\alpha}\vect{v}_{i\beta}^T}{k_i-1},
\end{equation}
where we sum contributions from all users. In this way we
convert the original data represented by a weighted bipartite
graph into a weighted object-to-object network.
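For concreteness, the accumulation in eq.~(\ref{W-matrix}) can be sketched in a few lines of Python (a toy illustration with made-up ratings; the one-hot column vectors are represented by their indices rather than explicitly):

```python
from collections import defaultdict

# Toy ratings (made up for illustration): user -> {movie: rating in 1..5}.
ratings = {
    "u1": {0: 5, 1: 3, 2: 4},   # the situation of fig. 1
    "u2": {0: 4, 1: 3},
}

def build_W(ratings):
    """Accumulate the 5x5 weight matrices W_{alpha beta} of eq. (2):
    a user with k rated movies adds 1/(k - 1) to the entry addressed
    by his two ratings, for every ordered pair of rated movies."""
    W = defaultdict(lambda: [[0.0] * 5 for _ in range(5)])
    for votes in ratings.values():
        k = len(votes)
        if k < 2:               # single-movie users create no links
            continue
        w = 1.0 / (k - 1)
        for a in votes:
            for b in votes:
                if a != b:
                    W[(a, b)][votes[a] - 1][votes[b] - 1] += w
    return W

W = build_W(ratings)
print(W[(0, 1)][4][2])  # u1: rating 5 on movie 0, rating 3 on movie 1 -> 0.5
```

Note that the symmetry $\mathsf{W}_{\alpha\beta}=\mathsf{W}_{\beta\alpha}^T$ comes out automatically from summing over ordered movie pairs.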
The non-normalized weights $\mathsf{W}_{\alpha\beta}$ form a
symmetric matrix $\mathsf{W}$ with dimensions $5N\times5N$. By
the column normalization of $\mathsf{W}$ we obtain a
non-symmetric matrix $\Omega$. It describes a diffusion process
on the underlying network with the outgoing weights from any
node in the graph normalized to unity (see also a similar
diffusion-like process in~\cite{Ou2007} and the PageRank
algorithm\footnote{Incidentally, the PageRank algorithm normalizes
the flux outgoing from a node in a similar way and thus it also
represents diffusion or a random walk. If one chooses the row
normalization instead, the resulting process is equivalent to
heat conduction in the network.}).
Now we shall investigate the equation
\begin{equation}
\label{stationary}
\Omega\vect{h}=\vect{h},
\end{equation}
where $\vect{h}$ is a $5N$-dimensional vector (the first 5
elements correspond to movie 1, next 5 elements to movie 2,
etc.). Denote by $n_{\alpha s}$ ($\alpha=1,\dots,N$, $s=1,\dots,5$)
the number of times movie $\alpha$ has been rated with the
rating $s$. Here we exclude the votes given by the users who
have rated only one movie because these users do not contribute
to $\Omega$. It is easy to prove that the vector
\begin{equation}
\vect{h}^*=(n_{11},\dots,n_{15},\dots,
n_{N1},\dots,n_{N5})^T
\end{equation}
is a solution of eq.~(\ref{stationary}). Moreover, the solution
is unique up to multiplication by a constant and, as we will see
later, all vectors of the form $\lambda\vect{h}$,
$\lambda\neq0$, lead to identical predictions. Denoting by
$L:=\mathsf{1}-\Omega$ the Laplace matrix, the aforementioned
uniqueness of $\vect{h}^*$ is equivalent to
$\text{rank}(L)=5N-1$, which we prove in the following
paragraph. It is worthwhile to emphasize that the unique
solution $\vect{h}^*$ reproduces some features of the original
input data, which strongly supports rationality and relevance of
the construction of $\Omega$.
Using elementary row/column operations one can shift all the
rows/columns corresponding to the zero-rows/zero-columns of
$\Omega$ to the bottom and right of $L$, leading to
$\bigl(\begin{smallmatrix}
L' & O\\
O & \mathsf{1}
\end{smallmatrix}\bigr)$, where $O$ and $\mathsf{1}$ are the
zero and the identity matrix. The dimension of $\mathsf{1}$ we
label as $D$, the dimension of $L'$ is then $5N-D$. The matrix
$L'$ has four properties: (i) All its diagonal elements are 1.
(ii) All its non-diagonal elements lie in the range $[-1,0]$.
(iii) The sum of each column is zero. (iv) In each row, there is
at least one non-diagonal nonzero element. One can prove that
the rank of any matrix with these four properties is equal to
its dimension minus one, $5N-D-1$ in this case. Since
$\text{rank}(\mathsf{1})=D$, together we have
$\text{rank}(L)=\text{rank}(L')+\text{rank}(\mathsf{1})=5N-1$.
Details of the proof will be shown in an extended paper.
The matrix $\Omega$ encodes the connections between different
ratings in the movie-to-movie network and can yield a
recommendation for a particular user. Since the matrix
represents only the aggregated information, in order to
recommend for a particular user, we need to utilize opinions
expressed by this user. We do so by imposing these ratings as
fixed elements of $\vect{h}$ in eq.~(\ref{stationary}). These
fixed elements can be considered as a boundary condition of the
given diffusion process; they influence our expectations on
unexpressed ratings. In other words, large weights in $\Omega$
represent strong patterns in user ratings (\emph{e.g.} most of
those who rated movie X with 5 gave 3 to movie Y) and diffusion
of the ratings expressed by a particular user in the
movie-to-movie network makes use of these patterns.
The discussion above leads us to the equation
\begin{equation}
\label{recommendation}
\Omega_i\vect{h}_i=\vect{h}_i,
\end{equation}
where $\Omega_i:=\Omega$ for the rows corresponding to the
movies unrated by user $i$ and $\Omega_i:=\mathsf{1}$ for the
remaining rows. Such a definition keeps the entries corresponding
to the movies rated by user $i$ fixed. The solution of
eq.~(\ref{recommendation}) can be numerically obtained in a
simple iterative way. We start with $\vect{h}_i^{(0)}$ where
elements corresponding to the movies rated by user $i$ are set
according to these ratings and the remaining elements are set to
zero. Then by the iteration equation
$\vect{h}_i^{(n+1)}=\Omega_i\vect{h}_i^{(n)}$ we
propagate the opinions already expressed by user $i$ over the
network, eventually leading to the stationary solution
$\vect{h}_i$. Intermediate results $\vect{h}_i^{(n)}$ contain
information about the movies unrated by user $i$, which can give
rise to a recommendation. We obtain the rating prediction as the
standard weighted average. For example, if for a given movie in
$\vect{h}_i$ we obtain the 5-tuple $(0.1,0.2,0.4,0.3,0.0)^T$,
the rating prediction is $\hat v=2.9$. Notice that if a user
has rated no movies, we have to use a different method (for
example the movie average introduced later) to make a
prediction. This feature is common for recommender systems
producing personalized predictions.
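To make the procedure concrete, the following self-contained Python sketch (toy data, variable names and the `predict` helper are ours) builds $\mathsf{W}$, column-normalizes it to $\Omega$, and performs one diffusion step for a hypothetical target user:

```python
# Toy illustration (made-up ratings, not the GroupLens data):
# two training users and one target user who has only rated movie 0.
train = {"u1": {0: 5, 1: 3, 2: 4}, "u2": {0: 4, 1: 3}}
target = {0: 5}
N = 3
dim = 5 * N

# Non-normalized matrix W of eq. (2), flattened to 5N x 5N.
W = [[0.0] * dim for _ in range(dim)]
for votes in train.values():
    k = len(votes)
    if k < 2:                       # single-movie users add no links
        continue
    for a in votes:
        for b in votes:
            if a != b:
                W[5 * a + votes[a] - 1][5 * b + votes[b] - 1] += 1.0 / (k - 1)

# Column normalization yields the diffusion matrix Omega.
Omega = [row[:] for row in W]
for c in range(dim):
    colsum = sum(W[r][c] for r in range(dim))
    if colsum > 0:
        for r in range(dim):
            Omega[r][c] = W[r][c] / colsum

# One diffusion step h^(1) = Omega_i h^(0), keeping the target
# user's expressed ratings fixed as a boundary condition.
h0 = [0.0] * dim
for a, v in target.items():
    h0[5 * a + v - 1] = 1.0
h1 = [sum(Omega[r][c] * h0[c] for c in range(dim)) for r in range(dim)]
for a in target:                    # boundary rows stay fixed
    for s in range(5):
        h1[5 * a + s] = h0[5 * a + s]

def predict(h, movie):
    """Weighted average of the 5-tuple belonging to one movie."""
    t = h[5 * movie: 5 * movie + 5]
    tot = sum(t)
    return sum((s + 1) * t[s] for s in range(5)) / tot if tot else None

print(predict(h1, 1), predict(h1, 2))
```

In this toy network the target user's rating 5 for movie 0 diffuses into predictions 3.0 for movie 1 and 4.0 for movie 2, reproducing the pattern set by the training user u1.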
\section{Avoiding the iterations}
While simple, the iterative way to solve
eq.~(\ref{recommendation}) has one important drawback: the
iterations have to be made for every user separately.
Consequently, the computational complexity of the algorithm is
high. To get rid of this difficulty we rewrite
eq.~(\ref{recommendation}) as $L\vect{h}_i=\vect{j}_i$, again
$L=\mathsf{1}-\Omega$. Here the external flux $\vect{j}_i$ is
nonzero only for the elements representing the boundary
condition of user $i$.
The solution $\vect{h}_i$ can be formally written in the form
$\vect{h}_i=\mathsf{G}\vect{j}_i$. This resembles the well-known
Green function approach: once $\mathsf{G}$ is known,
$\vect{h}_i$ can be found by a simple matrix multiplication.
While the source term $\vect{j}_i$ is not a~priori known, we can
get rid of it by reshuffling the movies and grouping the
boundary elements in
$\vect{h}_i$. After this formal manipulation we obtain
\begin{equation}
\label{green-start}
\binom{\vect{h}_i^\ab{B}}{\vect{h}_i^\ab{F}}=
\begin{pmatrix}
\mathsf{G}_\ab{BB} & \mathsf{G}_\ab{BF}\\
\mathsf{G}_\ab{FB} & \mathsf{G}_\ab{FF}
\end{pmatrix}
\binom{\vect{j}_i^\ab{B}}{\vect{0}},
\end{equation}
where B stands for \emph{boundary} and F for \emph{free}. Now it
follows that
$\vect{h}_i^\ab{B}=\mathsf{G}_\ab{BB}\vect{j}_i^\ab{B}$
and $\vect{h}_i^\ab{F}=\mathsf{G}_\ab{FB}\vect{j}_i^\ab{B}$,
leading us to the final result
\begin{equation}
\label{solution}
\vect{h}_i^\ab{F}=
\mathsf{G}_\ab{FB}\mathsf{G}_\ab{BB}^{-1}\vect{h}_i^\ab{B}.
\end{equation}
Since most users have rated only a small part of all $M$ movies,
the dimension of $\mathsf{G}_\ab{BB}$ is usually much smaller
than that of $\mathsf{G}$ and thus the inversion
$\mathsf{G}_\ab{BB}^{-1}$ is cheap.
The last missing point is that since $L$ is singular (as we have
mentioned, $\text{rank}(L)=5N-1$), $\mathsf{G}$ cannot
be obtained by inverting $L$. Hence we use the
\emph{Moore-Penrose pseudoinverse}~\cite{Penrose1955}
\begin{equation}
\label{G-form} \mathsf{G}=L^\dag=\lim_{k\to\infty}
\big[\mathsf{1}+\Omega+\Omega^2+\dots+\Omega^k-
k\vect{w}_\ab{R}\vect{w}_\ab{L}\big],
\end{equation}
where $\vect{w}_\ab{R}$ and $\vect{w}_\ab{L}$ are the right and
left eigenvectors of $\Omega$, respectively, both corresponding to
the eigenvalue 1. For practical purposes, the infinite summation
in eq.~(\ref{G-form}) can be truncated at a finite value $k$.
\section{Personal polarization}
Before the described method can be used in real life examples,
there is one important technical problem. Each user has a
different style of rating---some people tend to be very strict
and on average give low marks, some people prefer to give either
1 or 5, some don't like to give low marks, and so forth. Thus,
ratings cannot be grouped together in matrices
$\mathsf{W}_{\alpha\beta}$ in the straightforward and na\"ive
way we described before, for they mean different things to
different people.
To deal with this phenomenon, which we refer to as personal
polarization, \emph{unification} of ratings from different users
is used before summing users' contributions in the
object-to-object network. Consequently, before reporting
resulting predictions to a user, the output of the algorithm has
to be shifted back to the user's scale and \emph{personalization}
is needed.
To characterize the rating profile of user $i$ we use the mean
$\mu_i$ and the standard deviation $\sigma_i$ of the votes given
by him, and we compare these values with the mean $m_i$ and the
standard deviation $s_i$ of the ratings given by all users.
Notably, the quantities $m_i$ and $s_i$ take into account only
the movies rated by user $i$---if a user has a low average
rating because he has been rating only bad movies, there is no
need to manipulate his ratings. To conform a user rating profile
to the society rating profile we use the linear transformation
\begin{equation}
\label{unification}
u_{i\alpha}=m_i+(v_{i\alpha}-\mu_i)\,\frac{s_i}{\sigma_i}.
\end{equation}
Personalization of the predicted value is done by the inverse
formula $v_{i\alpha}=\mu_i+(u_{i\alpha}-m_i)\sigma_i/s_i$. We
can notice that while $v_{i\alpha}$ is an integer value,
$u_{i\alpha}$ is a real number. Nevertheless, one can obtain its
vector representation in the straightforward way: \emph{e.g.}
$u=3.7$ is modelled by the vector $(0,0,0.3,0.7,0)^T$; the
weighted mean corresponding to this vector is equal to the input
value $3.7$.
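A minimal Python sketch of the unification-personalization pair and of the vector representation of a real-valued rating (our own illustration; the function names and the clamping choice are ours):

```python
def unify(v, mu, sigma, m, s):
    """Eq. (8): map a rating v of a user with profile (mu, sigma)
    onto the society profile (m, s)."""
    return m + (v - mu) * s / sigma

def personalize(u, mu, sigma, m, s):
    """Inverse of unify: map a unified value back to the user's scale."""
    return mu + (u - m) * sigma / s

def to_vector(u):
    """Represent a real-valued unified rating u in [1, 5] as a 5-tuple
    whose weighted mean equals u, e.g. 3.7 -> (0, 0, 0.3, 0.7, 0)."""
    vec = [0.0] * 5
    lo = max(1, min(int(u), 4))   # clamp so that u = 5 stays representable
    frac = u - lo
    vec[lo - 1] = 1.0 - frac
    vec[lo] = frac
    return vec

print(to_vector(3.7))   # approximately [0, 0, 0.3, 0.7, 0]
```

By construction, `personalize(unify(v, ...), ...)` returns the original rating, and the weighted mean of `to_vector(u)` reproduces `u`.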
\section{Benchmark methods}
In correlation-based methods, rating correlations between users
are quantified and utilized to obtain predictions. We present
here one implementation of such a method, which serves as a
benchmark for the proposed diffusion model. The correlation
$C_{ij}$ between users $i$ and $j$ is calculated with Pearson's
formula
\begin{equation}
\label{correlation}
C_{ij}=
\frac{\sum_{\alpha}^*(v_{i\alpha}-\mu_i)(v_{j\alpha}-\mu_j)}
{\sqrt{\sum_{\alpha}^*(v_{i\alpha}-\mu_i)^2}
\sqrt{\sum_{\alpha}^*(v_{j\alpha}-\mu_j)^2}},
\end{equation}
where we sum over all movies rated by both $i$ and $j$ (to
indicate this, a star is added to the summation symbols);
$C_{ij}:=0$ when users $i$ and $j$ have no movies in common. Due
to the data sparsity, the number of user pairs with zero
correlation can be high and the resulting prediction performance
poor. To deal with this effect, in~\cite{Laureti07} it is
suggested to replace the zero correlations by the society
average of $C_{ij}$. In the numerical tests presented in this
Letter the resulting improvement was small and thus we use
eq.~(\ref{correlation}) in its original form. Finally,
the predictions are obtained using the formula
\begin{equation}
\label{corr-pred}
\hat v_{i\alpha}=\mu_i+\sum\nolimits_j'
\frac{C_{ij}}{\sum_k'C_{ik}}\,(v_{j\alpha}-\mu_j).
\end{equation}
Here we sum over the users who have rated movie $\alpha$ (prime
symbols added to the sums indicate this); the term
$\sum\nolimits_k' C_{ik}$ serves as a normalization factor.
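The benchmark of eqs.~(\ref{correlation}) and (\ref{corr-pred}) can be sketched as follows (a toy illustration with made-up ratings; `mean`, `pearson`, and `predict` are our own helper names):

```python
from math import sqrt

# Toy ratings (made up): user -> {movie: rating}.
ratings = {
    "i": {0: 5, 1: 3, 2: 4},
    "j": {0: 4, 1: 2, 2: 5},
    "k": {0: 2, 2: 3},
}

def mean(votes):
    return sum(votes.values()) / len(votes)

def pearson(vi, vj):
    """Eq. (9): Pearson correlation over the commonly rated movies,
    with each user's deviation taken from his overall mean."""
    common = set(vi) & set(vj)
    if not common:
        return 0.0
    mi, mj = mean(vi), mean(vj)
    num = sum((vi[a] - mi) * (vj[a] - mj) for a in common)
    di = sqrt(sum((vi[a] - mi) ** 2 for a in common))
    dj = sqrt(sum((vj[a] - mj) ** 2 for a in common))
    return num / (di * dj) if di and dj else 0.0

def predict(ratings, i, movie):
    """Eq. (10): correlation-weighted average of the deviations of
    the users who have rated the movie."""
    mi = mean(ratings[i])
    others = [j for j in ratings if j != i and movie in ratings[j]]
    norm = sum(pearson(ratings[i], ratings[j]) for j in others)
    if not norm:
        return mi
    return mi + sum(pearson(ratings[i], ratings[j])
                    * (ratings[j][movie] - mean(ratings[j]))
                    for j in others) / norm

print(predict(ratings, "k", 1))
```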
As a second benchmark method we use recommendation by the movie
average (MA) where one has $\hat v_{i\alpha}=m_{\alpha}$,
$m_{\alpha}$ is the average rating of movie $\alpha$. This
method is not personalized (for a~given object, all users obtain
the same prediction) and has an inferior performance. As it is
very fast and easy to implement, it is still widely used.
Notably, when the unification-personalization scheme is employed
together with MA, the predictions get personalized. As we will
see later, in this way the prediction performance is increased
considerably without a notable impact on the computation
complexity.
\section{Numerical results}
To test the proposed method based on opinion diffusion (OD) we
use the GroupLens project data, available at
\texttt{www.grouplens.org}. The total number of users is
$M=943$, the total number of movies is $N=1\,682$, and the
ratings are integer values from 1 to 5. The number of given
ratings is 100\,000, corresponding to a voting-matrix density of
around 6\%.
To test the described methods, a randomly selected 10\% of the
available data is transferred to the probe file $\mathcal{P}$,
and the remaining 90\% is used as an input data for the
recommendation. Then we make a prediction for all entries
contained in the probe and measure the difference between the
predicted value $\hat v_{i\alpha}$ and the actual value
$v_{i\alpha}$. For an aggregate review of the prediction
performance we use two common quantities: \emph{root mean square
error} (RMSE) and \emph{mean absolute error} (MAE). They are
defined as
\begin{subequations}
\begin{eqnarray}
\label{errors}
\mathrm{MAE}&=&
\frac1n\sum_{\mathcal{P}}
\lvert v_{i\alpha}-\hat v_{i\alpha}\rvert,\\
\mathrm{RMSE}&=&
\bigg[\frac1n\sum_{\mathcal{P}}
(v_{i\alpha}-\hat v_{i\alpha})^2\bigg]^{1/2},
\end{eqnarray}
\end{subequations}
where the summations go over all user-movie pairs $(i,\alpha)$
included in the probe $\mathcal{P}$ and $n$ is the number of
these pairs in each probe dataset. To obtain better
statistics, the described procedure can be repeated many times
with different selections of the probe data. We used 10
repetitions and, in addition to the averages of MAE and RMSE, we
also computed the standard deviations of both quantities.
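The two error measures defined above amount to a few lines of Python (our own sketch; the probe pairs are made up):

```python
from math import sqrt

def mae(pairs):
    """Mean absolute error over (actual, predicted) probe pairs."""
    return sum(abs(v - p) for v, p in pairs) / len(pairs)

def rmse(pairs):
    """Root mean square error over (actual, predicted) probe pairs."""
    return sqrt(sum((v - p) ** 2 for v, p in pairs) / len(pairs))

probe = [(4, 3.5), (2, 2.5), (5, 4.0)]   # toy (v, v_hat) pairs
print(mae(probe), rmse(probe))           # 2/3 and sqrt(1/2)
```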
Contrary to expectations, fig.~\ref{fig-iterations} shows that
the prediction performance worsens slightly when more than one
iteration of eq.~(\ref{recommendation}) is used to obtain the
prediction. This is probably due to overfitting---starting from
the second iteration, our expectations are influenced not only
by the actually expressed ratings but also by our expectations
about unexpressed ratings obtained in previous iteration steps.
Nevertheless, as it will be shown later, the performance
achieved by the first iteration is good and justifies validity
of the proposed model. In the following paragraphs we use only
one iteration to obtain the predictions. Consequently, the Green
function method introduced above is not necessary---we decided
to expose it in this paper because it can be useful with other
datasets.
\begin{figure}
\onefigure[scale=0.3]{iterations}
\caption{Prediction performance for the predictions
$\hat v_{i\alpha}$ obtained by iterations of
eq.~(\ref{recommendation}) using various numbers of iterations
steps.}
\label{fig-iterations}
\end{figure}
\begin{table}
\caption{Comparison of the three recommendation methods: movie
average (MA), correlation-based method (CB), and opinion
diffusion (OD). Presented values are averages obtained using
10~different probes; standard deviations are approximately
$0.01$ in all investigated cases.}
\label{tab-comparison}
\begin{center}
\begin{tabular}{ccccc}
\hline\hline
& \multicolumn{2}{c}{no unification} &
\multicolumn{2}{c}{with unification}\\
method & RMSE & MAE & RMSE & MAE\\
\hline
MA & $1.18$ & $0.91$ & $1.01$ & $0.79$\\
CB & $1.09$ & $0.86$ & $1.09$ & $0.86$\\
OD & $1.00$ & $0.80$ & $0.93$ & $0.73$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
In table~\ref{tab-comparison} we compare the prediction accuracy
of the movie-average method (MA), the correlation-based method
(CB), and the opinion diffusion method (OD). To measure the
prediction performance we use both RMSE and MAE as defined
above. All three methods are tested both with and without
the unification-personalization scheme. In accordance
with expectations, for MA and OD the performance with
unification included is better than without it; for the
simplest tested method, MA, the difference is particularly
large. By contrast, CB is largely insensitive to the
unification procedure, and when we drop the multiplication by
$\sigma_i/s_i$ from the unification-personalization process
given by eq.~(\ref{unification}), the difference disappears
completely (which can also be confirmed analytically). From
the prediction performances shown in
table~\ref{tab-comparison} we conclude that the diffusion
method clearly outperforms the other two in all tested cases
(RMSE/MAE, with/without unification). When computational
complexity is taken into account, it can be shown that for
$M>N$ the proposed method is more efficient than
correlation-based methods (though, of course, less efficient
than using the movie average).
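For reference, the two error measures used in this comparison can be written compactly (a minimal sketch; the function and variable names are ours, not from the paper):

```python
from math import sqrt

def mae(pred, true):
    """Mean absolute error over the probe set."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def rmse(pred, true):
    """Root mean square error; penalizes large errors more than MAE."""
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# e.g. predictions [4.0, 3.0, 5.0] against true ratings [4.0, 3.0, 3.0]:
# the errors are (0, 0, 2), so MAE = 2/3 and RMSE = sqrt(4/3)
```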
\section{Conclusion}
We have proposed a novel recommendation method based on the
diffusion of opinions expressed by a user over the
object-to-object network. To account for the rating
polarization effect, we have introduced the
unification-personalization approach as an additional layer of
the recommender system. To reduce the computational cost for
some datasets, a Green function method has been introduced. The
proposed method has been compared with two standard
recommendation algorithms and has achieved consistently better
results. Notably, it is executable even for the large dataset
(17\,770 movies, 480\,189 users) released by Netflix (a DVD
rental company, see \texttt{www.netflixprize.com}). In
addition, our model is essentially tuning-free---it does not
require extensive testing and optimization to produce
high-quality output. This is good news for practitioners.
\acknowledgements
This work is partially supported by the Swiss National Science
Foundation (project 205120-113842). We acknowledge SBF
(Switzerland) for financial support through project C05.0148
(Physics of Risk); T. Zhou acknowledges NNSFC (No.~10635040).
We also acknowledge the computing resources provided by the
Swiss National Supercomputing Center.
\section{Introduction}
In nuclear physics one often faces the problem of determining long-wavelength
properties of nuclei, such as binding energies, radii, or responses to low-momentum
probes. One approach would be to evaluate the relevant operators between exact nuclear
wave functions obtained from solutions of the many-body Schroedinger equation.
Because the NN potential is strong, characterized by anomalously large NN
scattering lengths, and highly repulsive at very short distances, this task becomes
exponentially more difficult as the nucleon number increases. Among available quasi-exact
methods, the variational and Green's function Monte Carlo work of the Argonne group has perhaps set
the standard \cite{argonne}, yielding accurate results throughout most of the $1p$ shell.
Effective theory (ET) potentially offers an alternative, a method that limits the numerical difficulty
of a calculation by restricting it to a finite Hilbert space (the $P$- or
``included''-space), while correcting the bare Hamiltonian $H$ (and other operators) for the
effects of the $Q$- or ``excluded''-space.
Calculations using the effective Hamiltonian $H^{eff}$ within $P$ reproduce
the results using $H$ within $P+Q$, over the domain of overlap.
That is, the effects of $Q$ on $P$-space
calculations are absorbed into $P(H^{eff}-H)P$.
One interesting challenge for ET is the case of a $P$-space basis of harmonic
oscillator (HO) Slater determinants. This is a special basis for nuclear physics
because of center-of-mass separability: if all Slater determinants containing
up to $N$ oscillator quanta are retained, $H^{eff}$ will be translationally invariant (assuming
$H$ is). Such bases are also important because of powerful shell-model (SM) techniques that
have been developed for iterative diagonalization and for evaluating inclusive responses. The larger
$P$ can be made, the smaller the effects of $H^{eff}-H$. If one could fully develop
harmonic-oscillator based effective theory (HOBET), it would provide a prescription for
eliminating the SM's many uncontrolled approximations, while retaining the
model's formidable numerical apparatus.
The long-term goal is a HOBET resembling standard effective field theories (EFTs) \cite{weinberg,savage}. That is, for a given choice of $P$, the effective interaction would be a sum of a long-distance
``bare" interaction whose form would be determined by chiral symmetry, augmented by
some general effective interaction that accounts for the excluded Q space. That effective
interaction would be expanded systematically and in some natural way, with the parameters
governing the strength of successive terms
determined by directly fitting to experiment. There would be no need to introduce or integrate
out any high-momentum NN potential, an unnecessary intermediate
effective theory between QCD and the SM scale.
One prerequisite for such an approach is the demonstration that a systematic expansion for
the HOBET effective interaction exists. This paper explores this issue, making use of numerically
generated effective interaction matrix elements for the deuteron, obtained by solving
the Bloch-Horowitz (BH) equation for the Argonne $v_{18}$ potential, an example of a
potential with a relatively
hard core ($\gtrsim 2$ GeV) \cite{av18}.
The BH $H^{eff}$ is a Hermitian but energy-dependent
Hamiltonian satisfying
\begin{eqnarray}
H^{eff} = H &+& H {1 \over E - Q H} Q H \nonumber \\
H^{eff} |\Psi_P \rangle = E |\Psi_P \rangle ~~~&&~~~ |\Psi_P \rangle
= (1-Q) |\Psi \rangle.
\label{BH}
\end{eqnarray}
Here $H$ is the bare Hamiltonian and $E$ and $\Psi$ are the exact eigenvalue and wave function
(that is, the solution of the Schroedinger equation in the full $P+Q$ space).
$E$ is negative for a bound state. Because $H^{eff}$
depends on the unknown exact eigenvalue $E$, Eqs. (\ref{BH}) must be solved self-consistently,
state by state,
a task that in practice proves to be relatively straightforward. If this is done, the $P$-space
eigenvalue will be the exact energy $E$ and the $P$-space wave function $\Psi_P$ will be the
restriction of the exact wave function $\Psi$ to $P$. This implies a nontrivial normalization
and nonorthogonality of the restricted ($P$-space) wave functions.
If $P$ is enlarged, new components are added to the existing ones,
and for a sufficiently large $P$ space, the norm approaches one. This convergence is slow
for potentials like $v_{18}$, with many shells being required before norms near one are
achieved \cite{song,luu}. Observables calculated with the restricted wave
functions and the appropriate effective operators are independent of the choice of
$P$, of course. All of these properties follow from physics encoded in $H^{eff}$.
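The self-consistency requirement can be made concrete in a toy two-state model with hypothetical matrix elements: for a one-dimensional $P$ space the BH equation reduces to the scalar condition $E = H_{PP} + H_{PQ}^2/(E - H_{QQ})$, which a fixed-point iteration solves.

```python
import math

# Toy 2x2 model: P = {|0>}, Q = {|1>}; all matrix elements are hypothetical.
H_PP, H_PQ, H_QQ = -1.0, 0.5, 2.0

# Self-consistent solution of E = H_PP + H_PQ^2 / (E - H_QQ)
E = H_PP                      # starting guess: bare P-space energy
for _ in range(50):           # fixed-point iteration
    E = H_PP + H_PQ**2 / (E - H_QQ)

# Exact ground state of the full 2x2 Hamiltonian, for comparison
E_exact = 0.5 * (H_PP + H_QQ - math.sqrt((H_PP - H_QQ)**2 + 4 * H_PQ**2))
```

The converged $E$ coincides with the exact ground-state eigenvalue of the full two-state Hamiltonian, the toy analogue of reproducing the $P+Q$ result within $P$.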
In HOBET $P$ and thus $H^{eff}$ are functions of the oscillator parameter $b$ and the
number of included HO quanta $\Lambda_P$. In this paper I study the behavior
of matrix elements
$\langle \alpha | H^{eff} | \beta \rangle$ generated for the Argonne $v_{18}$ potential, as both $b$
and $\Lambda_P$ are varied. In particular, $\Lambda_P$ is allowed to run from
very high values to the ``shell-model" scale of 8 $\hbar \omega$, in order
to test whether the physics above a specified scale can be efficiently absorbed
into the coefficients of some systematic expansion, e.g., one
analogous to the contact-gradient expansions employed in EFTs
(which are generally formulated in plane wave bases).
There are reasons to expect that the HOBET effective interaction could prove more
complicated:
\begin{itemize}
\item An effective theory defined by a subset of HO Slater determinants
is effectively an expansion around a typical momentum scale $q \sim 1/b$. That is, the $P$-space
omits both long-wavelength and short-wavelength degrees of freedom. The former are
connected with the overbinding of the HO, while the latter are due to absence
in $P$ of the strong, short-range NN interaction. As
any systematic expansion of the effective interaction must simultaneously address
both problems, the form of the effective interaction cannot be as simple as a
contact-gradient expansion (which would be appropriate if the missing physics were only
short-ranged).
\item The relative importance of the missing long-wavelength and short-wavelength excitations
is governed by the binding energy, $|E|$, with the former increasing as $|E| \rightarrow 0$.
These long-range interactions allow nuclear states to de-localize, minimizing the kinetic
energy. But nuclei are weakly bound -- binding energies are very small compared
to the natural scales set by the scalar and vector potentials in nuclei. One concludes that
the effective interaction must depend delicately on $|E|$.
\item An effective theory is generally considered successful if it can reproduce the lowest energy
excitations in $P$. But one asks for much more when one seeks to accurately represent
the effective interaction, which governs all of the spectral properties within $P$. The HO appears
to be an especially difficult case in which to attempt such a representation.
The kinetic energy operator in the HO has strong off-diagonal components which
raise or lower the nodal quantum number, and thus connect
Slater determinants containing $\Lambda_P$ quanta with those containing $\Lambda_P \pm 2$.
This means that $P$ and $Q$ are strongly coupled through low-energy excitations, a
situation that is usually problematic for an effective theory.
\end{itemize}
All of these problems involve the interplay, governed by $|E|$, of $QT$ (delocalization)
and $QV$ (corrections for short-range repulsion).
The explicit energy dependence
of the BH equation proves to be
a great advantage in resolving the problems induced by this interplay,
leading to a natural factorization of the long- and short-range contributions to the
effective interaction, and thereby to a successful systematic representation of the effective
interaction. (Conversely, techniques such as Lee-Suzuki \cite{suzuki} will intermingle these effects
in a complex way and obscure the underlying simplicity of the effective interaction.)
The result is an energy-dependent
contact-gradient expansion at N$^3$LO that
reproduces the entire effective interaction to an accuracy of about a few keV. The contact-gradient expansion is defined in a way that is appropriate to the HO, eliminating operator mixing and
producing a simple dependence on nodal quantum numbers. The coefficients
in the expansion play the role of generalized Talmi integrals.
The long-range physics residing in $Q$ can be isolated analytically and
expressed in terms of a single parameter, $\kappa = \sqrt{2 |E|/\hbar \omega}$, remarkably
the ratio of an observable ($|E|$) to a parameter one chooses in defining the ET. The
dependence of $H^{eff}$ on $\kappa$ is determined by summing $QT$ to
all orders. The resulting
$H^{eff}$ is defined by $\kappa$ and by the coefficients of the short-ranged expansion.
This
same parameter governs almost all of the state dependence that enters when
one seeks to describe multiple states. Thus it appears that there is a systematic,
rapidly converging representation for $H^{eff}$ in HOBET that could be used to
describe a set of nuclear states. The short-range parameters in that representation
are effectively state-independent, as the state-dependence usually attacked with
techniques like Lee-Suzuki is isolated in $\kappa$.
\section{Long- and short-wavelength separations in $H^{eff}$}
In Refs. \cite{song,luu} a study was done of the evolution of matrix elements
$\langle \alpha | H^{eff} | \beta \rangle$, for the deuteron and for $^3$He/$^3$H,
from the $\Lambda_P \rightarrow \infty$ limit, where $H^{eff} \rightarrow H$, down to
$\Lambda_P$ characteristic of the shell model (SM), e.g., small $P$ spaces with 4, 6, or 8
$\hbar \omega$ excitations, relative to the naive $1s$-shell ground state.
As noted above, this definition of $P$ in terms of the total quanta in HO Slater
determinants maintains center-of-mass separability and thus
leads to an $H^{eff}$ that is translationally invariant, just like $H$. Indeed, the HO
basis is the only set of compact wave functions with this attractive
property.
But this choice leads to a more complicated ET, as $P$ excludes both
short-distance and long-distance components of wave functions. This problem was first
explored in connection with the nonperturbative behavior of $H^{eff}$: the need
to simultaneously correct for the missing long- and short-distance behavior of
$\Psi_P$ is the reason one cannot tune $P$ to make
$H^{eff}$ converge rapidly. For example, while it is possible to ``pull'' more of the
missing short-range physics into $P$ by choosing a small $b$, this adjustment
produces a more compact state with very large $Q$-space corrections to the kinetic
energy. Conversely, one can tune $b$ to large values
to improve the description of the nuclear tail, but at the cost of missing even more of
the short-range physics. At no value of $b$ are both problems handled well: Fig. \ref{fig_1}
shows that a poor minimum is reached at some intermediate $b$, with a $10 \hbar \omega$
``bare'' calculation failing to bind the deuteron.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{luu_fig3_prl}
\end{center}
\caption{(Color online) Deuteron ground-state convergence for ``bare'' calculations
in small $P$-spaces, which omit all effects
due to the multiple scattering of $V$ in $Q$. The three curves on the upper right
were calculated from the standard BH equation, which identifies the bare interaction
as $P(T+V)P$. These calculations fail
to bind the deuteron, even with $\Lambda_P=10$, for all values of the HO size
parameter $b$: the $P$-space estimate for $V$ is poor if $b$ is much above 1~fm,
while the estimate for $T$ is poor if $b$ is below that value. The lower four curves were evaluated
for the bare interaction of the reordered BH given by Eq. (\ref{BHnew}), which incorporates
the long-range effects of $QT$ to all orders, building in the correct asymptotic form of
the wave function. This allows one to reduce $b$ to small values,
pulling most of the effects of $V$ into $P$, without distorting the long-distance behavior
of the wave function or, therefore, the estimate for $T$. Rather remarkably, this bare calculation reproduces the correct
binding energy for $P$ spaces as small as $\Lambda_P$=6. That is, by the combination of
the summation of $QT$ to all orders and the adjustment of $b$ to an optimal value
characteristic of the hard core radius of $v_{18}$, the effective interaction contribution
can be driven to such small values that it can be ignored.}
\label{fig_1}
\end{figure}
The solution found to this hard-core correlation/extended state quandary is an
a priori treatment of the overbinding of the harmonic oscillator. The BH equation is
rewritten in a form that allows the relative kinetic energy operator to be
summed to all orders. (This form was introduced in the first of Refs. \cite{luu}; a
detailed derivation can be found in the Appendix of the third of these references. The
kinetic energy sum can be done analytically for calculations performed in a
Jacobi basis.) This reordered BH equation has the form
\begin{equation}
H^{eff} = H + HQ {1 \over E-QH} QH =
{E \over E -TQ} \left[ T -T {Q \over E}T + V + V {1 \over E-QH} QV \right] {E \over E-QT}
\label{BHnew}
\end{equation}
where the bare $H$ is the sum of the relative kinetic energy and a two-body interaction
\begin{equation}
H = {1 \over 2} \sum_{i,j=1}^A \left(T_{ij} + V_{ij} \right), \mathrm{~~~with~~} T_{ij} = {({\bf p}_i-{\bf p}_j)^2 \over 2 A M}.
\end{equation}
This effective interaction is to be evaluated between a finite basis of Slater determinants
$| \alpha \rangle \in P$, which is equivalent to evaluating the Hamiltonian
\begin{equation}
\widetilde{H}^{eff} \equiv T -T {Q \over E}T + V + V {1 \over E-QH} QV
\end{equation}
between the states
\begin{equation}
|\widetilde{\alpha} \rangle \equiv {E \over E-QT} |\alpha \rangle
\end{equation}
By summing $QT$ to all orders, the proper behavior at large $r$ can be built in, which
then allows $b$ to be adjusted, without affecting
the long-wavelength properties of the wave function. Fig. \ref{fig_1}, from Ref. \cite{luu}, shows that the
resulting decoupling of the long- and short-wavelength physics can greatly improve
convergence: a ``bare'' 6 $\hbar \omega$ calculation that neglects all contributions of $QV$
gives an excellent binding energy. This decoupling
of $QV$ and $QT$ is also important
in finding a systematic expansion for $H^{eff}$.
This reorganization produces an $H^{eff}$ with
three terms operating between HO Slater determinants,
\begin{eqnarray}
\langle \alpha | T {E \over E-QT} | \beta \rangle=\langle \alpha | {E \over E-TQ} T | \beta \rangle~&\stackrel{\mathrm{nonedge}}{\longrightarrow}&~\langle \alpha | T | \beta \rangle \nonumber \\
\langle \alpha | {E \over E-TQ} V {E \over E-QT} | \beta \rangle~&\stackrel{\mathrm{nonedge}}{\longrightarrow}&~\langle \alpha | V | \beta \rangle \nonumber \\
\langle \alpha | {E \over E-TQ} V {1 \over E-QH} QV {E \over E-QT} | \beta \rangle~&\stackrel{\mathrm{nonedge}}{\longrightarrow}&~\langle \alpha | V {1 \over E-QH} QV | \beta \rangle.
\label{wh:eq4}
\end{eqnarray}
The ladder properties of $QT$ make
$E/(E-QT)$ the identity operator except when it acts on an $|\alpha \rangle$
with energy $\Lambda_P\hbar \omega$ or $(\Lambda_P-1) \hbar \omega$. These are
called the edge states. For nonedge states, the new grouping
of terms in $H^{eff}$
reduces to the expressions on the right-hand side of Eq. (\ref{wh:eq4}), the
conventional components of
$H^{eff}$. Thus the summation over $QT$ alters only the subset of matrix elements
of $H^{eff}$ involving edge states, leaving all others unaffected.
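This edge-state structure is easy to verify numerically in a schematic single-channel model (hypothetical numbers): with $T$ tridiagonal in the nodal quantum number, $QT$ annihilates every $P$-space state except the one(s) adjacent to $Q$, so $E/(E-QT)$ acts as the identity on the nonedge states and modifies only the edge state.

```python
import numpy as np

# Schematic single-channel model: 6 nodal states, P = first 4, Q = last 2.
# T and E are hypothetical; only the ladder structure of T matters here.
N, nP, E = 6, 4, -2.0
T = np.zeros((N, N))
for i in range(N - 1):
    T[i, i + 1] = T[i + 1, i] = 1.0   # T raises/lowers the nodal number by 1
Q = np.diag([0.0] * nP + [1.0] * (N - nP))   # projector on excluded states

M = E * np.linalg.inv(E * np.eye(N) - Q @ T)  # the operator E/(E - QT)

basis = np.eye(N)
nonedge_ok = all(np.allclose(M @ basis[n], basis[n]) for n in range(nP - 1))
edge_changed = not np.allclose(M @ basis[nP - 1], basis[nP - 1])
# nonedge_ok: E/(E-QT) is the identity away from the P-Q boundary;
# edge_changed: the edge state acquires components in Q (the extended tail)
```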
Figure \ref{fig_2} shows the extended tail of the relative two-particle wave function that is induced by $E/(E-QT)$ acting on an edge HO state \cite{luu}. As
will become apparent from later expressions, this tail has the proper exponential fall-off,
\begin{equation}
\sim {e^{-\kappa r} \over \kappa r}
\end{equation}
where $\kappa= \sqrt{2|E|/\hbar \omega}$ and $r=|\vec{r}_1-\vec{r}_2|/(\sqrt{2}\,b)$ is the
dimensionless Jacobi coordinate, not the Gaussian tail of the HO. At small $r$ the wave function
is basically unchanged (apart from normalization).
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{luu_fig2_prl}
\end{center}
\caption{(Color online) A comparison of the radial wave functions for the HO state $|n l\rangle$ (dashed)
and for the extended
state $E/(E-QT)\, |n l\rangle$ (solid), for $(n,l)=(6,0)$
in a $\Lambda_P=10$ deuteron calculation. The extended tail of the
latter is apparent. Note that the normalization of the extended
state has been adjusted to match that of $|nl\rangle$ at $r$=0, in order to show that the
shapes differ only at large $r$. Thus a depletion of the extended state at small $r$
is not apparent in this figure.}
\label{fig_2}
\end{figure}
\section{The HOBET Effective Interaction}
Contact-gradient expansions
are used in approaches like EFT to correct for the exclusion of short-range (high-momentum)
interactions. The most general scalar interaction is constructed, consistent with Hermiticity,
parity conservation, and time-reversal invariance, as an expansion in the momentum. Such
an interaction for the two-nucleon system, expanded to order N$^3$LO (or up to six gradients),
is shown in Table \ref{table:1}. (Later these operators will be slightly modified for HOBET.)
\begin{table}
\centering
\caption{Contact-gradient expansion for relative-coordinate two-particle matrix elements. Here
$\stackrel{\rightarrow}{D^2_M} = (\stackrel{\rightarrow}{\nabla} \otimes \stackrel{\rightarrow}{\nabla})_{2M}$,
$\stackrel{\rightarrow}{D^0_0} = [ (\sigma(1) \otimes \sigma(2))_2 \otimes D^2]_{00}$,
$\stackrel{\rightarrow}{F^3_M} = (\stackrel{\rightarrow}{\nabla} \otimes \stackrel{\rightarrow}{D^2})_{3M}$,
$\stackrel{\rightarrow}{F^1_M}= [ (\sigma(1) \otimes \sigma(2))_2 \otimes F^3]_{1M}$,
$\stackrel{\rightarrow}{G^4_M} = (\stackrel{\rightarrow}{D^2} \otimes \stackrel{\rightarrow}{D^2})_{4M},$
$\stackrel{\rightarrow}{G^2_M} = [(\sigma(1) \otimes \sigma(2))_2 \otimes G^4]_{2M}$,
and the scalar product of tensor operators is defined as $A^J \cdot B^J = \sum_{M=-J}^{M=J} (-1)^M A^J_M B^J_{-M}$.}
\label{table:1}
{\footnotesize
\begin{tabular}{|c||c|c|c|c|}
\hline
Transitions & LO & NLO & NNLO & N$^3$LO \\ \hline
${}^3S_1 \leftrightarrow {}^3S_1$ \T & $a_{LO}^{3S1} \delta({\bf r})$ & $a_{NLO}^{3S1} (\stackrel{\leftarrow}{\nabla^2} \delta({\bf r}) + \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2})$ & $a_{NNLO}^{3S1, 22} \stackrel{\leftarrow}{\nabla^2} \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2}$ & $a_{N^3LO}^{3S1, 42} (\stackrel{\leftarrow}{\nabla^4} \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2}+ \stackrel{\leftarrow}{\nabla^2} \delta({\bf r}) \stackrel{\rightarrow}{\nabla^4})$ \\
or ${}^1S_0 \leftrightarrow {}^1S_0$ \B & & & $a_{NNLO}^{3S1, 40} (\stackrel{\leftarrow}{\nabla^4} \delta({\bf r}) + \delta({\bf r}) \stackrel{\rightarrow}{\nabla^4})$ &
$a_{N^3LO}^{3S1, 60} (\stackrel{\leftarrow}{\nabla^6} \delta({\bf r}) + \delta({\bf r}) \stackrel{\rightarrow}{\nabla^6}$) \\ \hline
${}^3S_1 \leftrightarrow {}^3D_1$ \T & & $a_{NLO}^{SD} (\delta({\bf r}) \stackrel{\rightarrow}{D^0}+ \stackrel{\leftarrow}{D^0} \delta({\bf r}))$ & $a_{NNLO}^{SD, 22} (\stackrel{\leftarrow}{\nabla^2} \delta({\bf r}) \stackrel{\rightarrow}{D^0} + \stackrel{\leftarrow}{D^0} \delta({\bf r})\stackrel{\rightarrow}{\nabla^2} )$ & $a_{N^3LO}^{SD, 42} ( \stackrel{\leftarrow}{\nabla^4} \delta({\bf r}) \stackrel{\rightarrow}{D^0} +\stackrel{\leftarrow}{D^0} \delta({\bf r})\stackrel{\rightarrow}{\nabla^4})$\\
& & & $a_{NNLO}^{SD, 04} (\delta({\bf r}) \stackrel{\rightarrow}{\nabla^2} \stackrel{\rightarrow}{D^0} + \stackrel{\leftarrow}{D^0}\stackrel{\leftarrow}{\nabla^2} \delta({\bf r}))$ & $a_{N^3LO}^{SD, 24} ( \stackrel{\leftarrow}{\nabla^2}\delta({\bf r}) \stackrel{\rightarrow}{\nabla^2} \stackrel{\rightarrow}{D^0} +\stackrel{\leftarrow}{D^0} \stackrel{\leftarrow}{\nabla^2} \delta({\bf r})\stackrel{\rightarrow}{\nabla^2} )$\\
\B & & & & $a_{N^3LO}^{SD, 06} ( \delta({\bf r}) \stackrel{\rightarrow}{\nabla^4} \stackrel{\rightarrow}{D^0} +\stackrel{\leftarrow}{D^0} \stackrel{\leftarrow}{\nabla^4} \delta({\bf r}))$\\ \hline
${}^1D_2 \leftrightarrow {}^1D_2$ \T & & & $a_{NNLO}^{1D2} \stackrel{\leftarrow}{D^2} \cdot \delta({\bf r})\stackrel{\rightarrow}{D^2}$ & $a_{N^3LO}^{1D2} (\stackrel{\leftarrow}{D^2} \stackrel{\leftarrow}{\nabla^2} \cdot \delta({\bf r})\stackrel{\rightarrow}{D^2} + \stackrel{\leftarrow}{D^2} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2} \stackrel{\rightarrow}{D^2})$ \\
or ${}^3 D_J \leftrightarrow {}^3 D_J$\B & & & & \\ \hline
${}^3D_3 \leftrightarrow {}^3G_3$ \T & & & & $a_{N^3LO}^{DG} ( \stackrel{\leftarrow}{D^2} \cdot \delta({\bf r})\stackrel{\rightarrow}{G^2} + \stackrel{\leftarrow}{G^2} \cdot \delta({\bf r}) \stackrel{\rightarrow}{D^2})$ \B \\ \hline
${}^1P_1 \leftrightarrow {}^1P_1$ \T& & $a_{NLO}^{1P1} \stackrel{\leftarrow}{\nabla^{}} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^{}}$ & $a_{NNLO}^{1P1} (\stackrel{\leftarrow}{\nabla^{}} \stackrel{\leftarrow}{\nabla^2} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^{}} + \stackrel{\leftarrow}{\nabla^{}} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2} \stackrel{\rightarrow}{\nabla^{}})$ & $a_{N^3LO}^{1P1,33} \stackrel{\leftarrow}{\nabla^{}} \stackrel{\leftarrow}{\nabla^2} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2} \stackrel{\rightarrow}{\nabla^{}} $ \\
or ${}^3P_J \leftrightarrow {}^3P_J \B$ & & & & $a_{N^3LO}^{1P1,51} (\stackrel{\leftarrow}{\nabla^{}} \stackrel{\leftarrow}{\nabla^4} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^{}} + \stackrel{\leftarrow}{\nabla^{}} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^4} \stackrel{\rightarrow}{\nabla^{}})$ \\ \hline
${}^3P_2 \leftrightarrow {}^3F_2$ \T & & & $a_{NNLO}^{PF} (\stackrel{\leftarrow}{\nabla^{}} \cdot \delta({\bf r}) \stackrel{\rightarrow}{F^1} + \stackrel{\leftarrow}{F^1} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^{}})$
& $a_{N^3LO}^{PF, 33} (\stackrel{\leftarrow}{\nabla^{}} \stackrel{\leftarrow}{\nabla^2} \cdot \delta({\bf r}) \stackrel{\rightarrow}{F^1} + \stackrel{\leftarrow}{F^1} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2} \stackrel{\rightarrow}{\nabla^{}}) $ \\
\B & & & & $a_{N^3LO}^{PF, 1 5} (\stackrel{\leftarrow}{\nabla^{}} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^2}\stackrel{\rightarrow}{F^1} + \stackrel{\leftarrow}{F^1} \stackrel{\leftarrow}{\nabla^2} \cdot \delta({\bf r}) \stackrel{\rightarrow}{\nabla^{}})$ \\ \hline
${}^1F_3 \leftrightarrow {}^1F_3$ \T & & & & $a_{N^3LO}^{1F3} \stackrel{\leftarrow}{F^3} \cdot \delta({\bf r}) \stackrel{\rightarrow}{F^3} $ \\
or ${}^3F_J \leftrightarrow {}^3F_J$\B & & & & \\
\hline \hline
\end{tabular}}
\end{table}
The ``data'' for testing such an expansion for HOBET are
deuteron matrix elements $\langle \alpha | P(H^{eff}-H)P | \beta \rangle$ evaluated
as in Refs. \cite{song,luu} for $v_{18}$. I take an 8$\hbar \omega$ $P$-space ($\Lambda_{P}$ = 8).
The evolution of the matrix elements will be followed as
contributions from scattering in $Q$ are integrated out progressively, starting with the highest
energy contributions. To accomplish this, the
contribution to $H^{eff}$
coming from excitations in $Q$ up to a scale $\Lambda > \Lambda_{P}$ is defined as
$H^{eff}(\Lambda)$, obtained by explicitly summing over all states in $Q$
up to that scale:
\begin{equation}
H^{eff}(\Lambda) \equiv H + H {1 \over E-Q_\Lambda H} Q_\Lambda H~~~~Q_\Lambda \equiv
\sum_{\alpha=\Lambda_P+1}^\Lambda |\alpha \rangle \langle \alpha|~~~~Q_{\Lambda_P} \equiv 0.
\end{equation}
Thus $H^{eff} = H^{eff} (\Lambda \rightarrow \infty)$ and $H^{eff}(\Lambda_P)=H$.
The quantity
\begin{equation}
\Delta(\Lambda) \equiv H^{eff}-H^{eff}(\Lambda) =
H {1 \over E-Q H} QH - H {1 \over E-Q_\Lambda H} Q_\Lambda H
\label{delta}
\end{equation}
represents the contributions to
$H^{eff}$ involving excitations in $Q$ above the scale $\Lambda$. For $\Lambda \gg \Lambda_{P}$, one expects $\Delta(\Lambda)$
to be small and well represented by a LO
interaction. As $\Lambda$ runs to values closer to $\Lambda_{P}$, one would
expect to find that NLO, NNLO, N$^3$LO, .... contributions become successively more
important. If one could formulate
some expansion that continues to accurately reproduce the various matrix
elements of $\Delta(\Lambda)$ as
$\Lambda \rightarrow \Lambda_P$, then a successful expansion for the
HOBET effective interaction $\Delta(\Lambda_P) = H^{eff}-H$ would be in hand.
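This running can be mimicked in a toy matrix model (hypothetical Hamiltonian and a fixed reference energy $E$), using the $P$-space form $P H^{eff}(\Lambda) P = H_{PP} + H_{PQ_\Lambda}(E - H_{Q_\Lambda Q_\Lambda})^{-1} H_{Q_\Lambda P}$:

```python
import numpy as np

# Toy model: 6 states, P = first 2; H and E are hypothetical.
n, nP = 6, 2
H = np.diag(np.arange(n, dtype=float))
for i in range(n):
    for j in range(n):
        if i != j:
            H[i, j] = 0.5 / (1 + abs(i - j))   # smooth off-diagonal coupling
E = -1.0   # fixed reference energy below the excluded spectrum

def heff_PP(lam):
    """P-space block of H^eff(lam): Q-space scattering summed up to lam."""
    q = list(range(nP, lam))
    if not q:
        return H[:nP, :nP].copy()
    HQQ = H[np.ix_(q, q)]
    HPQ = H[np.ix_(range(nP), q)]
    return H[:nP, :nP] + HPQ @ np.linalg.solve(E * np.eye(len(q)) - HQQ, HPQ.T)

# Delta(lam) = H^eff - H^eff(lam): the physics above the scale lam
norms = [np.linalg.norm(heff_PP(n) - heff_PP(lam)) for lam in range(nP, n + 1)]
# norms starts at ||H^eff - H|| and falls to zero once all of Q is included
```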
Figure \ref{fig_simple}a is a plot of $\Delta(\Lambda)$ for the 15 $^3S_1$ matrix elements in the chosen $P$-space.
For typical matrix elements $\Delta(\Lambda_P)=H^{eff}-H
\sim -12$ MeV -- a great deal of the deuteron binding comes from the $Q$-space.
Five of the matrix elements involve bra or ket edge states.
The evolution of these contributions with $\Lambda$ appears to be less regular
than is observed for nonedge-state matrix elements.
One can test whether the results shown in Fig. \ref{fig_simple}a can be reproduced in a contact-gradient
expansion. At each $\Lambda$ the coefficients
$a_{LO}^{3S1}(\Lambda)$, $a_{NLO}^{3S1}(\Lambda)$, etc., would be determined
from the lowest-energy ``data,''
those matrix elements
$\langle \alpha | \Delta(\Lambda) | \beta \rangle$ carrying the fewest HO quanta.
Thus, in LO, $a_{LO}^{3S1}(\Lambda)$
would be determined from the $(n^\prime,n)=(1,1)$ matrix element. The remaining
14 $P$-space matrix elements are then predicted, not fit; in NNLO four coefficients would
be determined from the (1,1), (1,2), (1,3), and (2,2) matrix elements, and
eleven predicted.
Figures \ref{fig_simple}b-d show the residuals -- the differences between the predicted and calculated matrix elements.
For successive LO, NLO, and NNLO calculations, the scale at which
residuals in $\Delta$ are significant, say greater than 10 keV, is brought down successively,
e.g., from an initial $\sim 100 \hbar \omega$, to $\sim 60 \hbar \omega$ (LO), to $\sim 30 \hbar \omega$
(NLO), and finally to $\sim 20 \hbar \omega$ (NNLO), except for matrix elements involving edge
states. There the improvement is not significant, with noticeable deviations remaining at
$\sim 100 \hbar \omega$ even at NNLO. This irregularity indicates a flaw in the underlying
physics of this approach -- specifically, the use of a short-range expansion for $H^{eff}$
when important contributions to $H^{eff}$ come from long-range interactions in $Q$.
This flaw must be repaired before a systematic expansion can succeed.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=7.5cm]{3s1_bare_H}
\includegraphics[width=7.5cm]{3s1_lo_nog}
\includegraphics[width=7.5cm]{3s1_nlo_nog}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=7.5cm]{3s1_nnlo_nog}
\includegraphics[width=7.5cm]{3s1_n3lo_nog}
\includegraphics[width=7.5cm]{3s1_rms_nog}
\end{center}
\end{minipage}
\caption{In a) the contributions to $H^{eff}-H$ from excitations in $Q$ above $\Lambda$ are plotted
for a calculation with $\Lambda_P=8$ and $b$=1.7 fm. Each line describes the running
of one of the 15 independent $P$-space
matrix elements $\langle n^\prime l^\prime=0 | H^{eff}-H | n l=0 \rangle$, $1 \leq n \leq n^\prime \leq 5$.
Ten of the matrix elements are between nonedge states (solid), four connect the $n^\prime=5$
edge state to the $n$=1,2,3,4
nonedge states (dashed), and one is the diagonal $n^\prime=n=5$ edge-edge case (dot dashed).
b)-e) show the residuals for naive LO, NLO, NNLO, and N$^3$LO fits (see text). f) shows the RMS
deviation for the set of $P$-space matrix elements.
The expected systematic improvement with increasing order is apparent only for matrix elements between nonedge states.}
\label{fig_simple}
\end{figure}
\subsection{The contact-gradient expansion for HOBET}
The gradient with respect to the dimensionless coordinate $\vec{r} \equiv (\vec{r}_1-\vec{r}_2) / b\sqrt{2}$ is denoted by $\overrightarrow{\nabla}$.
The coefficients $a_{LO}, a_{NLO}, ...$ in Table \ref{table:1} then carry the dimensions of MeV.
The contact-gradient expansion defined in Table \ref{table:1} is that commonly used in plane-wave bases,
where one expands around $\vec{k}=0$ with
\begin{equation}
\left. \overrightarrow{\nabla}^2 \exp{i \vec{k} \cdot \vec{r}} ~\right|_{\vec{k}=0} = 0.
\end{equation}
HOBET begins with a lowest-energy $1s$ Gaussian wave packet with a characteristic
momentum $\sim 1/b$. An analogous definition of gradients
such that
\begin{equation}
\overrightarrow{\nabla}^2 \psi_{1s}(b) = 0
\end{equation}
is obtained by redefining each operator appearing in Table \ref{table:1} by
\begin{equation}
O \rightarrow \bar{O} \equiv e^{r^2/2} O e^{r^2/2}.
\label{bar}
\end{equation}
The gradients appearing in the operators of Table \ref{table:1}
then act on polynomials in $r$. This leads to two
attractive properties. First is the removal of operator mixing. Once
$a_{LO}^{3S1}$ is fixed in LO to the $(n^\prime,n)=(1,1)$ matrix element, this quantity remains fixed in NLO, NNLO, etc. Higher-order terms make no contributions to this matrix element.
Similarly, $a_{NLO}$, once fixed to the $(1,2)$ matrix element, is unchanged
in NNLO. That is, the NLO results contain the LO results, and so on.
Second, this definition gives the HOBET effective interaction a simple
dependence on nodal quantum numbers,
\begin{equation}
\overrightarrow{\nabla}^2 \sim -4(n-1), ~~~~~~~~ \overrightarrow{\nabla}^4 \sim 16(n-1)(n-2).
\end{equation}
(The Appendix describes this expansion in some detail.) In each channel, this dependence
agrees with the plane-wave result in lowest contributing order, but otherwise differs in
terms of relative order $1/n$. This HO form of the contact-gradient expansion is connected
with standard Talmi integrals \cite{talmi}, generalized for nonlocal potentials, e.g.,
\begin{eqnarray}
a_{LO} &\sim& \int^\infty_0 \int^\infty_0 e^{-r_1^2} \left[V(r_1,r_2)\right] e^{-r_2^2} r_1^2 r_2^2 dr_1 dr_2 \nonumber \\
a_{NLO} &\sim& \int^\infty_0 \int^\infty_0 e^{-r_1^2} \left[r_1^2 V(r_1,r_2) \right] e^{-r_2^2} r_1^2 r_2^2 dr_1 dr_2 = \int^\infty_0 \int^\infty_0 e^{-r_1^2} \left[ V(r_1,r_2) r_2^2 \right] e^{-r_2^2} r_1^2 r_2^2 dr_1 dr_2
\end{eqnarray}
and so on.
\subsection{Identifying terms with the contact-gradient expansion}
The next question is the association of the operators in Table \ref{table:1} with an appropriate
set of terms in $H^{eff}-H$, so that the difficulties apparent in Fig. \ref{fig_simple} are avoided.
The reorganized BH equation of Eq. (\ref{BHnew})
\begin{eqnarray}
H^{eff} & = &
{E \over E -TQ} \left[ T -T {Q \over E}T + V + V {1 \over E-QH} QV \right] {E \over E-QT} \nonumber \\
& \rightarrow &{E \over E -TQ} \biggl[ T -T {Q \over E}T + V + \sum_{ i=LO, NLO, ...} \bar{O}^i \biggr] {E \over E-QT}
\label{final}
\end{eqnarray}
isolates
$V (E-QH)^{-1} QV$, a term that is sandwiched between short-range operators that scatter
to high-energy states: one anticipates this term can be successfully represented by a short-range expansion
like the contact-gradient expansion. This identification is made here and tested later in this
paper.
Clearly, this reorganization affects only the edge-state matrix elements. As the fitting of coefficients
uses matrix elements of low $(n^\prime,n)$, none of which involves edge states, the
coefficients are unchanged. But every matrix element involving edge states now includes
the effects of rescattering by $QT$ to all orders. Thus a procedure for
evaluating these matrix elements is needed.
\subsection{Matrix element evaluation}
There are several alternatives for evaluating Eq. (\ref{final}) for edge states. One of these
exploits the tri-diagonal form of $QT$ for the deuteron. If $|n l \rangle$ is an edge state in
$P$ then
\begin{equation}
{E \over E-QT} |n~ l \rangle = |n~ l \rangle + {1 \over E-QT} QT |n~ l \rangle = |n ~l \rangle
+ \sqrt{n(n+l+1/2)} {1 \over -\kappa^2 - {2 Q T \over \hbar \omega}} |n+1 ~l \rangle
\label{st1}
\end{equation}
where $E<0$ for a bound state. The dimensionless parameter $\kappa=\sqrt{2|E| \over \hbar \omega}$ depends on the ratio of the binding energy $|E|$
to the HO energy scale. Note that the second vector on the right in Eq. (\ref{st1}) lies entirely in $Q$. Now
\begin{eqnarray}
{2 \over \hbar \omega}QT |n+1~ l\rangle &=& (2n+l+3/2) |n+1~l\rangle + \sqrt{(n+1)(n+l+3/2)} |n+2~l\rangle \nonumber \\
{2 \over \hbar \omega}QT |n+2~l\rangle &=& \sqrt{(n+1)(n+l+3/2)} |n+1~l \rangle +
(2n+l+7/2) |n+2~l\rangle + \sqrt{(n+2)(n+l+5/2)} |n+3~l\rangle \nonumber \\
{2 \over \hbar \omega}QT |n+3~l\rangle &=& \sqrt{(n+2)(n+l+5/2)} |n+2~l\rangle+
(2n+l+11/2) |n+3~l\rangle+\sqrt{(n+3)(n+l+7/2)} |n+4~l \rangle \nonumber \\
{2 \over \hbar \omega}QT | n+4~l\rangle &=& ...
\end{eqnarray}
So the operator $2 QT/\hbar \omega$ in the basis $\{ |n+i~l\rangle,~ i=1, 2, \ldots \}$ has the form
\begin{equation}
{2 \over \hbar \omega} QT=\left( \begin{array}{ccccc} \alpha_1 & \beta_1 & 0 & 0 & \\ \beta_1 & \alpha_2 &
\beta_2 & 0 & \\ 0 & \beta_2 & \alpha_3 & \beta_3 & \cdots \\ 0 & 0 & \beta_3 & \alpha_4 & \\
& & \vdots & & \end{array} \right)
\end{equation}
where
\begin{equation}
\alpha_i = \alpha_i(n,l) = 2n + 2i + l-1/2,~~~~\beta_i =\beta_i(n,l) = \sqrt{(n+i)(n+i+l+1/2)}~.
\end{equation}
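As a small numerical sketch (the function name and truncation parameter $k$ are illustrative, not part of the formalism), the truncated tridiagonal representation can be assembled directly from these coefficients:

```python
# Assemble the k x k truncation of 2QT/(hbar omega) in the basis
# {|n+i l>, i = 1..k}, using alpha_i(n,l) and beta_i(n,l) as defined above.
import numpy as np

def qt_matrix(n, l, k):
    alpha = np.array([2 * n + 2 * i + l - 0.5 for i in range(1, k + 1)])
    beta = np.array([np.sqrt((n + i) * (n + i + l + 0.5)) for i in range(1, k + 1)])
    # symmetric tridiagonal: alpha on the diagonal, beta on the off-diagonals
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
```

For example, for the deuteron edge state with $n=5$, $l=0$, the leading entries are $\alpha_1 = 11.5$ and $\beta_1 = \sqrt{39}$.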
As is well known, if this representation of the operator $2 QT/\hbar \omega$ is truncated after
$k$ steps, the $2k-1$ nonzero coefficients \{$\alpha_i,\beta_i$\} determine the $2k-1$ operator moments
of the starting vector $|n+1~ l\rangle$,
\begin{equation}
\langle n+1~l | \left( {2 QT \over \hbar \omega} \right)^i |n+1~l \rangle,~~~i=1,\ldots,2k-1.
\end{equation}
A standard formula exists \cite{haydock} for the moments expansion of the Green's function acting on the first vector $|n+1~l\rangle$ of such a
tri-diagonal matrix, allowing us to write
\begin{equation}
\sqrt{n(n+l+1/2)} {1 \over -\kappa^2 - {2 QT \over \hbar \omega}} |n+1~ l\rangle= \widetilde{g}_1(-\kappa^2;n,l) |n+1~l\rangle + \widetilde{g}_2(-\kappa^2;n,l) |n+2~l\rangle + \widetilde{g}_3(-\kappa^2;n,l) |n+3~l \rangle + \cdots
\end{equation}
The coefficients \{$\widetilde{g}_i$\} can be obtained from an auxiliary set of continued fractions \{$g_i^\prime$\}
that are determined by downward recursion
\begin{eqnarray}
g_k^\prime(-\kappa^2;n,l) &\equiv& {1 \over -\kappa^2 - \alpha_k(n,l)} \nonumber \\
g_{i-1}^\prime(-\kappa^2;n,l) &=& {1 \over -\kappa^2 - \alpha_{i-1}(n,l) - \beta_{i-1}(n,l)^2 g_i^\prime(-\kappa^2;n,l)},~~i=k,\ldots,2.
\end{eqnarray}
From these continued fractions the needed coefficients can be computed from the algebraic relations
\begin{eqnarray}
\widetilde{g}_1(-\kappa^2;n,l) &=& \sqrt{n(n+l+1/2)} g_1^\prime(-\kappa^2;n,l) \nonumber \\
\widetilde{g}_i(-\kappa^2;n,l) &=& \widetilde{g}_{i-1}(-\kappa^2;n,l) \beta_{i-1} g_i^\prime(-\kappa^2;n,l),~~ i=2,...,k
\end{eqnarray}
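A minimal numerical sketch of this recursion (helper names and the sample values $\kappa^2=1$, $n=5$, $l=0$ are illustrative): since the continued fractions reproduce the first column of the inverse of the truncated tridiagonal matrix, the result can be cross-checked against a direct linear solve.

```python
# Downward continued-fraction recursion for g'_i, followed by the
# algebraic relations yielding the expansion coefficients g~_1..g~_k.
import numpy as np

def g_tilde(kappa2, n, l, k):
    alpha = [2 * n + 2 * i + l - 0.5 for i in range(1, k + 1)]
    beta = [np.sqrt((n + i) * (n + i + l + 0.5)) for i in range(1, k + 1)]
    gp = [0.0] * (k + 1)                      # gp[i] holds g'_i, i = 1..k
    gp[k] = 1.0 / (-kappa2 - alpha[k - 1])
    for i in range(k, 1, -1):                 # downward recursion, i = k..2
        gp[i - 1] = 1.0 / (-kappa2 - alpha[i - 2] - beta[i - 2] ** 2 * gp[i])
    gt = [0.0] * (k + 1)
    gt[1] = np.sqrt(n * (n + l + 0.5)) * gp[1]
    for i in range(2, k + 1):
        gt[i] = gt[i - 1] * beta[i - 2] * gp[i]
    return np.array(gt[1:])

def g_tilde_direct(kappa2, n, l, k):
    # cross-check: sqrt(n(n+l+1/2)) times the first column of (-kappa^2 - M)^(-1),
    # where M is the truncated tridiagonal representation of 2QT/(hbar omega)
    alpha = np.array([2 * n + 2 * i + l - 0.5 for i in range(1, k + 1)])
    beta = np.array([np.sqrt((n + i) * (n + i + l + 0.5)) for i in range(1, k + 1)])
    M = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    return np.sqrt(n * (n + l + 0.5)) * np.linalg.solve(-kappa2 * np.eye(k) - M, e1)
```

On the same truncation the two routes agree exactly; convergence in $k$ must still be checked for the physical application.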
Defining $\widetilde{g}_0(-\kappa^2;n,l) \equiv 1$, it follows that
\begin{eqnarray}
{E \over E-QT} |n~l\rangle &=& \sum_{i=0}^{k \rightarrow \infty} \widetilde{g}_i(-\kappa^2;n,l) |n+i~l\rangle, ~~\mathrm{edge~state} \nonumber \\
&=& |n~l\rangle,~~~~~~~~~~~~~~~~\mathrm{otherwise}
\label{gf}
\end{eqnarray}
where it is understood that $k$ is made large enough so that the moments expansion for
the Green's function is accurate throughout the region in coordinate space where
$E/(E-QT) |n~l\rangle$ is needed. Note that the first line of Eq. (\ref{gf}) can be viewed as the
general result if one defines
\begin{equation}
\widetilde{g}_i(-\kappa^2;n,l) \equiv 0,~~i=1,\ldots,k,~~\mathrm{if}~|n~l\rangle ~\mathrm{is~not~an~edge~state}.
\end{equation}
(For $A \ge 3$ one would be treating the $3(A-1)$-dimensional HO, with the role of
the spherical harmonics replaced by the corresponding hyperspherical harmonics.)
Eq. (\ref{gf}) can now be used to evaluate the various terms in Eq. (\ref{final}). \\
\noindent
{\it Matrix elements for the contact-gradient operators:} The matrix elements have the general form
\begin{equation}
\langle n' l' | {E \over E-TQ} \bar{O} {E \over E-QT} | n l \rangle = \sum_{i,j=0} \widetilde{g}_j(-\kappa^2;n',l') \widetilde{g}_i(-\kappa^2;n,l) \langle n'+j~ l' | \bar{O} |n+i~ l \rangle
\end{equation}
where $\bar{O}$ is formed from gradients acting on the bra and ket, evaluated at $\vec{r}$=0.
The general matrix element (any partial wave) is worked out in the Appendix. For example,
one needs for $S$-wave channels the relation
\begin{equation}
(\vec{\nabla}^2)^p e^{r^2/2} R_{nl=0}(r) Y_{00}(\Omega_r) \big|_{\vec{r} \rightarrow 0} = (-4)^p ~{(n-1)! \over (n-1-p)!} ~ {1 \over \pi} \left[ {\Gamma(n+1/2) \over
(n-1)!}\right]^{1/2}
\end{equation}
from which it follows
\begin{eqnarray}
&& \langle n^\prime (l^\prime=0~S=1) J=1 |{E \over E -TQ} \biggl[ \sum_{ i=LO, ...,N^3LO} \bar{O}^{3S1,i} \biggr] {E \over E-QT} | n (l=0~S=1) J=1 \rangle = \nonumber \\
&&{2 \over \pi^2} \sum_{i,j=0} \widetilde{g}_j(-\kappa^2;n^\prime,l^\prime=0) \widetilde{g}_i(-\kappa^2;n,l=0) \left[ {\Gamma(n^\prime+j +1/2) \Gamma(n+i+1/2) \over (n^\prime+j-1)! (n+i-1)!} \right] \biggl[ a_{LO}^{3S1}
-4\left((n^\prime+j-1)+(n+i-1)\right) a_{NLO}^{3S1} \nonumber \\
&& +16\left\{(n^\prime+j-1)(n+i-1)a_{NNLO}^{3S1,22} + \left((n^\prime+j-1)(n^\prime+j-2)+
(n+i-1)(n+i-2)\right)a_{NNLO}^{3S1,40}\right\} \nonumber \\
&& -64 \left\{ (n^\prime+j-1)(n+i-1) \bigl( (n^\prime+j-2)+(n+i-2) \bigr) a_{N^3LO}^{3S1,42} \right. \nonumber \\
&& + \left. \left((n^\prime+j-1)(n^\prime+j-2)(n^\prime+j-3)+(n+i-1)(n+i-2)(n+i-3)\right) a_{N^3LO}^{3S1,60} \right\} \biggr].
\end{eqnarray}
For nonedge states, $\widetilde{g}_i\equiv 0$ for $i \ge 1$, while $\widetilde{g}_0 \equiv 1$. Thus it is apparent
that the net consequence of the rearrangement of the BH equation and the identification of the
contact-gradient expansion with $V(E-QH)^{-1}QV$ is effectively a renormalization of the coefficients of that expansion
for the edge HO states. That renormalization is governed by $\kappa^2 =2|E|/\hbar \omega$, e.g.,
\begin{eqnarray}
a_{LO}(n',l',n,l) \rightarrow a_{LO}^\prime(E;n',l',n,l) = a_{LO}(n',l',n,l) \sum_{i,j=0} \widetilde{g}_j(-\kappa^2;n',l') \widetilde{g}_i(-\kappa^2;n,l) \nonumber \\
\times \left[ {\Gamma(n'+j+1/2)
\Gamma(n+i+1/2) \over \Gamma(n'+1/2) \Gamma(n+1/2)} \right]^{1/2}
\left[{(n'-1)! (n-1)! \over (n'+j-1)! (n+i-1)!} \right]^{1/2} .
\label{primes}
\end{eqnarray}
This renormalization is large, typically a reduction in strength by a factor of 2--4,
for $|E|$=2.224 MeV, and also remains substantial for more deeply bound systems, as
will be illustrated later. (The binding energy for this purpose is defined relative to the lowest particle
breakup channel, the first extended state.)
The effects encoded into $|\widetilde{\alpha} \rangle$ by summing $QT$ to all orders are
nontrivial: they depend on a nonperturbative strong interaction parameter $|E|$ as well
as $QT$, and they alter
effective matrix elements of the strong potential. For a given choice of $\Lambda_P$,
the renormalization depends
on a single parameter, $2|E|/\hbar \omega$, not on $|E|$ or $b$ separately.
In the plane-wave limit $b \rightarrow \infty$, this parameter is driven to $\infty$,
so that $a_{LO}^\prime \rightarrow a_{LO}$. No renormalization is required in this limit.
The dependence on $|E|$ is discussed in more detail later, including its connection
to the state-dependence
inherent in effective theory. \\
\noindent
{\it Matrix elements of the relative kinetic energy:} The relative kinetic energy operator couples
$P$ and $Q$ via strong matrix elements that grow as $n$. As Ref. \cite{luu} discusses, this coupling
causes difficulties with perturbative expansions in $H$ even in the case of $P$ spaces that
contain almost all of the wave function (e.g., $\Lambda_P
\sim$ 70). There is always a portion of the wave function
tail at large $r$ that is nonperturbative, involving matrix elements of $T$ that
exceed $\Lambda_P \hbar \omega/2$.
The kinetic energy contribution is
\begin{equation}
\langle \alpha | T+T {1 \over E-QT} QT | \beta \rangle = \langle \widetilde{\alpha} | T-T{Q \over E} T | \widetilde{\beta} \rangle = \langle \alpha |T |\widetilde{\beta} \rangle = \langle \widetilde{\alpha} | T | \beta \rangle
\end{equation}
where the last two terms show that the transformation to states $|\widetilde{\alpha} \rangle =
E/(E-QT) |\alpha \rangle$ reduces the calculation of the rescattering to that of a bare matrix element.
It follows from this expression
\begin{equation}
\langle n^\prime~l | T + T {1 \over E-QT} QT |n~l \rangle = \langle n^\prime~l | T | n~l \rangle + { \hbar \omega \over 2} \delta_{n^\prime n} \sqrt{n(n+l+1/2)} \widetilde{g}_1(-\kappa^2;n,l).
\end{equation}
Thus, rescattering via $QT$ alters the diagonal matrix element of the effective interaction
for edge states, as determined by
$\widetilde{g}_1(-\kappa^2; n,l)$.\\
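As a rough numerical sketch (the sample values $\hbar\omega=15$ MeV, $\kappa^2=1$, $n=5$, $l=0$, and the helper name are illustrative assumptions), the edge-state shift can be evaluated by obtaining $\widetilde{g}_1$ from a direct solve of the truncated tridiagonal system:

```python
# Edge-state correction (hbar omega/2) sqrt(n(n+l+1/2)) g~_1(-kappa^2; n,l),
# with g~_1 taken as the first component of (-kappa^2 - 2QT/hbar omega)^(-1)
# acting on |n+1 l>, on a k-state truncation of the tridiagonal matrix.
import numpy as np

def edge_kinetic_shift(kappa2, n, l, hbar_omega, k=60):
    alpha = np.array([2 * n + 2 * i + l - 0.5 for i in range(1, k + 1)])
    beta = np.array([np.sqrt((n + i) * (n + i + l + 0.5)) for i in range(1, k + 1)])
    M = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    c = np.sqrt(n * (n + l + 0.5))
    g1 = c * np.linalg.solve(-kappa2 * np.eye(k) - M, e1)[0]
    return 0.5 * hbar_omega * c * g1
```

The shift is negative for a bound state (the resolvent $(-\kappa^2 - 2QT/\hbar\omega)^{-1}$ is negative definite) and vanishes as $\kappa^2 \rightarrow \infty$, consistent with the limit noted earlier.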
\noindent
{\it Matrix elements of the bare potential:} The $P$-space matrix element of $V$ becomes
$\langle \widetilde{\alpha} | V | \widetilde{\beta} \rangle$ which, as is illustrated in Fig. \ref{fig_2}, involves
an integral over a wave function that, apart from normalization, differs from the
HO only in the tail, where the potential is weak. It can be evaluated by generating the
wave functions $| \widetilde{\alpha} \rangle$ and $|\widetilde{\beta} \rangle$ as HO expansions,
\begin{equation}
\left[ \sum_{j=0} \widetilde{g}_j(-\kappa^2;n^\prime,l^\prime) \langle n^\prime+j~l^\prime | \right]
~V~\left[ \sum_{i=0} |n+i~l \rangle \widetilde{g}_i(-\kappa^2;n,l) \right] = \sum_{i,j=0}\widetilde{g}_j(-\kappa^2;n^\prime,l^\prime) \widetilde{g}_i(-\kappa^2;n,l) \langle n^\prime+j~l^\prime |
~V~ |n+i~l \rangle .
\label{barev}
\end{equation}
though the alternative Green's function expression, discussed below, is simpler. \\
\noindent
{\it Use of the free Green's function:}
An alternative to an expansion in an HO basis is generation of $|\widetilde{\alpha}\rangle $ with
the free (modified Helmholtz) Green's function. For any $P$-space state $|n~l\rangle$,
\begin{equation}
(E-QT) |\widetilde{\alpha}\rangle = E |\alpha \rangle~\Rightarrow~ (E-T) |\widetilde{\alpha}\rangle = E |\alpha \rangle - PT |\widetilde{\alpha}\rangle
\end{equation}
That is, both $E-QT$ and $E-T$ project $|\widetilde{\alpha}\rangle$ back into the $P$-space. The
free Green's function equation can be written
\begin{eqnarray}
(E-T) |\widetilde{\alpha} \rangle &=& P \left[ E - T {E \over E-QT} \right] P | \alpha \rangle \nonumber \\
&=& \left[ P {1 \over E-T} P \right]^{-1} | \alpha \rangle.
\label{gf2}
\end{eqnarray}
Either of the driving terms on the right-hand side is easy to manipulate. The second expression
requires inversion of a $P$-space matrix, one most easily calculated
in momentum space, as the HO is its own Fourier transform and as the resulting momentum-space integrals can be done in
closed form. This form was used in
the three-body calculations of Ref. \cite{tom}.
Here I will use the first expression above, rewriting the right-hand-side driving term
in terms of $ |\alpha_{nlm_l} \rangle$,
\begin{eqnarray}
&&P \left[ E - T {E \over E-QT} \right] P |\alpha \rangle = {\hbar \omega \over 2} \biggl[ \biggl(-\kappa^2 - (2n+l-1/2) - \widetilde{g}_1(-\kappa^2;n,l) \sqrt{n(n+l+1/2)} \biggr) |n~l~m_l\rangle \nonumber \\
&& - \sqrt{(n-1)(n+l-1/2)} |n-1~l~m_l\rangle - \sqrt{n(n+l+1/2)} P |n+1~l ~m_l\rangle \biggr] \equiv {\hbar \omega \over 2} |\alpha_{nlm_l} \rangle,
\label{greens}
\end{eqnarray}
where the driving term has been kept general, valid for either edge or nonedge states: the latter provide a
helpful numerical check, verifying that an HO wave function is recovered, in such cases, from
the expression below. For an edge state, $\widetilde{g}_1$ is nonzero and $P|n+1~l\rangle \equiv 0$;
for a nonedge state, $\widetilde{g}_1=0$ and $P$=1. Labeling the corresponding edge state
as $|\widetilde{\alpha}_{nlm_l} \rangle$,
\begin{eqnarray}
\langle \vec{r} |\widetilde{\alpha}_{nlm_l} \rangle &=& \int d^3 \vec{r}^{~\prime} {1 \over 4 \pi |\vec{r}-\vec{r}^{~\prime}|} e^{-\kappa |\vec{r}-\vec{r}^{~\prime}|} ~ \langle \vec{r}^{~\prime}|\alpha_{nlm_l}\rangle
\nonumber \\
&=& -Y_{lm}(\Omega_r) \biggl[ {1 \over \sqrt{r}}~ I_{l+1/2}(\kappa r) \int_r^\infty d^3\vec{r}^{~\prime} (r^\prime)^{3/2} K_{l+1/2}(\kappa r^\prime)~\langle \vec{r}^{~\prime}|\alpha_{nlm_l}\rangle \nonumber \\
&&+ {1 \over \sqrt{r}} ~K_{l+1/2}(\kappa r) \int_0^r d^3\vec{r}^{~\prime} (r^\prime)^{3/2} I_{l+1/2}(\kappa r^\prime)~\langle \vec{r}^{~\prime}|\alpha_{nlm_l}\rangle \biggr]
\label{eq:partial-wave}
\end{eqnarray}
where $I$ and $K$ denote the standard modified Bessel functions.
By expressing the HO radial wave functions in terms of the underlying Laguerre polynomials and integrating the polynomials term by term,
alternative expressions are obtained for the various quantities previously expressed as expansions
in the $\widetilde{g}_i$. This is detailed in the Appendix.
One finds, for example,
\begin{eqnarray}
\langle \vec{r}=0|\widetilde{\alpha}_{nlm_l} \rangle &=& \delta_{l,0} ~\delta_{m_l,0} ~ \sqrt{{(n-1)! \Gamma(n+1/2) \over 2 \pi}} ~\sum_{k=0}^{n} {(-2)^k \over k! (n-k)! \Gamma(k+3/2)}~ \times \nonumber \\
&&\biggl[(n-k)(\kappa^2 + 3n-3/2-k
+\widetilde{g}_1(-\kappa^2;n,0) \sqrt{n(n+1/2)})
+ P[n+1,l=0] n(n+1/2) \biggr] ~ \times \nonumber \\
&& \biggl[-\sqrt{2}~ \kappa ~\Gamma(k+3/2)~ {}_1F_1[k+3/2;3/2;\kappa^2/2] + k!~ {}_1F_1[k+1;1/2;\kappa^2/2] \biggr]
\end{eqnarray}
where ${}_1F_1$ is Kummer's confluent hypergeometric function, and where
$P[n+1,l=0] =1$ if $|n+1~l\rangle$ is in $P$, and 0 otherwise. Similar expressions can be
derived to handle all of the operators $\bar{O}$ appearing in the contact-gradient expansion
(see the Appendix).
\subsection{Numerical tests}
In this subsection channel-by-channel N$^3$LO results are presented for $H^{eff}$
based on Eqs. (\ref{bar}) and (\ref{final}),
which isolate a short-range operator that plausibly can be
accurately and systematically expanded via contact-gradient operators.
For $\Lambda_P=8$, the fitting procedure determines all N$^3$LO
coefficients from nonedge matrix elements,
leaving all edge matrix elements and a substantial set of nonedge matrix
elements unconstrained. Thus one can use these matrix elements to test whether the
expansion systematically accounts for the ``data,'' the set of numerically generated $v_{18}$ matrix elements of $H^{eff}$. One test is the running of the results
as a function of $\Lambda$: a systematic progression through LO, NLO, etc., operators should
be observed as $\Lambda$ is lowered to the SM scale. A second test is the ``Lepage plot'' \cite{lepage},
which displays residual errors in matrix elements: if the improvement is systematic, these
residual errors should reflect the nodal-quantum-number dependence of the operators that would
correct these results, in next order.
Eq. (\ref{final}) includes ``bare" terms -- the matrix elements $\langle \alpha | T | \widetilde{\beta} \rangle$
and $\langle \widetilde{\alpha} | V | \widetilde{\beta} \rangle$ -- and a term involving repeated
scattering by $H$ in $Q$, but sandwiched between the short-range operator $QV$.
To test the dependence on $\Lambda$, the rescattering term is decomposed in the manner of Eq. (\ref{delta}),
\[
\Delta_{QT}(\Lambda) = {E \over E-TQ} \left[V {1 \over E-QH} QV - V{1 \over E-Q_\Lambda H}
Q_\Lambda V \right] {E \over E-QT}, \]
to isolate the contribution of scattering above the
scale $\Lambda$. $\Delta_{QT}(\Lambda)$ is evaluated numerically for $v_{18}$ at each required $\Lambda$.
The long-wavelength summation is always done to
all orders -- the running with $\Lambda$ thus reflects the behavior
of the short-range piece, $V (E-QH)^{-1} QV$.
The full $P$-space effective interaction is obtained as $\Lambda \rightarrow \Lambda_P$.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3s1_bare}
\includegraphics[width=8cm]{3s1_lo}
\includegraphics[width=8cm]{3s1_nlo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3s1_nnlo}
\includegraphics[width=8cm]{3s1_n3lo}
\includegraphics[width=8cm]{3s1_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_simple}, but for the $QT$-summed reordering of $H^{eff}$. The contributions to the
effective interaction from excitations in $Q$ above $\Lambda$, denoted $\Delta_{QT}(\Lambda)$ in
the text, are plotted. Each line gives the running of a $P$-space
matrix element.
b)-e) show the residuals for LO, NLO, NNLO, and N$^3$LO fits (see text). f) shows the RMS
deviation for the set of $P$-space matrix elements. The improvement with increasing order is
systematic and rapid: at N$^3$LO the RMS deviation for unconstrained matrix elements as
$\Lambda \rightarrow \Lambda_P$ is about 3 keV. That is, the entire effective interaction is
reproduced to a few parts in 10$^4$. }
\label{fig_3s1}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1s0_bare}
\includegraphics[width=8cm]{1s0_lo}
\includegraphics[width=8cm]{1s0_nlo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1s0_nnlo}
\includegraphics[width=8cm]{1s0_n3lo}
\includegraphics[width=8cm]{1s0_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_3s1}, but for the $^1S_0$ channel. The N$^3$LO results are seen
to reproduce the entire effective interaction to the accuracy of about a keV, or one part in
$10^4$. }
\label{fig_1s0}
\end{figure}
As outlined before, coefficients are fitted to the longest-wavelength information.
For example, in $S$ channels, $a_{LO}$
is fixed to the $(n^\prime,n)$ = (1,1) matrix element; the absence of operator mixing then
guarantees this coefficient remains fixed, as higher order terms
are evaluated. The single $a_{NLO}$ coefficient is fixed to (2,1) (or equivalently (1,2)); $a_{NNLO}^{22}$ and $a_{NNLO}^{40}$ are determined from (2,2) and (3,1); and finally
$a_{N^3LO}^{42}$ and $a_{N^3LO}^{60}$ are fixed to (3,2) and (4,1). So at N$^3$LO there
are a total of 6 parameters. This procedure
is repeated for a series of $\Lambda$ ranging from 140 to $\Lambda_P=8$. The results in each
order, and the improvement order by order, are thus obtained as a function of $\Lambda$.
$P$ contains 15 independent matrix elements in the ${}^3S_1-{}^3S_1$ channel,
nine of which play no role in the fitting: these test whether the improvement is
systematic.
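The absence of operator mixing that underlies this sequential fit can be illustrated schematically for nonedge $S$-wave matrix elements (where only $\widetilde{g}_0 = 1$ survives), using the polynomial structure of the N$^3$LO expression given earlier. The couplings below are invented for illustration, and the $(n^\prime,n)$-dependent prefactor, which divides out of each fit step, is set to 1.

```python
# Schematic sequential fit exploiting the absence of operator mixing.
# bracket(m, n, a) is the polynomial structure of the S-wave N3LO bracket;
# a = [a_LO, a_NLO, a_NNLO22, a_NNLO40, a_N3LO42, a_N3LO60].
def bracket(m, n, a):
    x, y = m - 1, n - 1
    return (a[0]
            - 4 * (x + y) * a[1]
            + 16 * (x * y * a[2] + (x * (x - 1) + y * (y - 1)) * a[3])
            - 64 * (x * y * ((x - 1) + (y - 1)) * a[4]
                    + (x * (x - 1) * (x - 2) + y * (y - 1) * (y - 2)) * a[5]))

truth = [-62.5, -1.4, -5.5e-2, -1.2e-2, -5.8e-4, -1.4e-4]   # invented couplings
data = {(m, n): bracket(m, n, truth) for m in range(1, 5) for n in range(1, 5)}

# Each matrix element determines exactly one new coupling, and couplings
# fixed in lower orders never shift: fit them one at a time.
fit = [0.0] * 6
fit[0] = data[(1, 1)]                                       # LO from (1,1)
fit[1] = (fit[0] - data[(1, 2)]) / 4                        # NLO from (1,2)
fit[2] = (data[(2, 2)] - bracket(2, 2, fit[:2] + [0] * 4)) / 16    # NNLO^22
fit[3] = (data[(3, 1)] - bracket(3, 1, fit[:3] + [0] * 3)) / 32    # NNLO^40
fit[4] = -(data[(3, 2)] - bracket(3, 2, fit[:4] + [0] * 2)) / 128  # N3LO^42
fit[5] = -(data[(4, 1)] - bracket(4, 1, fit[:5] + [0])) / 384      # N3LO^60
```

Because higher-order operators make no contribution to the matrix elements used at lower order, each `fit[i]` reproduces the input coupling, mirroring the statement above that coefficients fixed at low order remain fixed.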
Figures \ref{fig_3s1} and \ref{fig_1s0} show the results for ${}^3S_1-{}^3S_1$ and ${}^1S_0-{}^1S_0$.
Panel a) shows the evolution of the matrix elements $\langle \alpha | \Delta_{QT}(\Lambda)
| \beta \rangle$ for each of the 15 independent matrix elements. Matrix
elements involving only nonedge states, a single edge state, or two edge states
are denoted by solid, dashed, and dash-dotted lines, respectively. Progressively more
binding is recovered as $\Lambda \rightarrow \Lambda_P$. In
the $^3S_1-{}^3S_1$ case, the contribution at $\Lambda_P$ is
$\sim$ 12-14 MeV for nonedge matrix elements, $\sim$ 7-8 MeV for
matrix elements with one edge state, and $\sim$ 5 MeV for the $\langle n=5 l=0 | \Delta_{QT}(\Lambda_P)
| n=5 l=0 \rangle$ double-edge matrix element.
Panels b)-e) show the residuals -- the difference
between the matrix elements of $\Delta_{QT}(\Lambda_P)$ and those of the contact-gradient
potential of Eq. (\ref{final}) -- from LO through N$^3$LO. The trajectories correspond to
the unconstrained matrix elements (14 in LO, 9 in N$^3$LO): the fitted matrix elements
produce the horizontal line at 0. Unlike the naive approach in Fig. \ref{fig_simple}, the improvement
is now systematic in all matrix elements. In the $^3S_1-{}^3S_1$ channel, a LO treatment
effectively removes all contributions in $Q$ above $\Lambda \sim$ 60; NLO lowers this
scale to $\sim$ 40, and NNLO is $\sim$ 20. The magnitude of N$^3$LO
residuals at $\Lambda_P$ is typically
$\lesssim$ 2 keV -- the entire effective interaction can be represented by Eq. (\ref{final}) to
an accuracy of about 0.01\%. Panel f) shows the root mean square (RMS) deviation among the
unconstrained matrix elements, and the rapid order-by-order improvement.
The pattern repeats in the $^1S_0-{}^1S_0$ channel, where the convergence (in terms
of the size of the residuals) is somewhat faster. The
N$^3$LO RMS deviation among the unconstrained matrix elements at $\Lambda_P$ is
$\sim$ 0.5 keV.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{sd_bare}
\includegraphics[width=8cm]{sd_nlo}
\end{center}
\caption{As in Figs. \ref{fig_3s1} and \ref{fig_1s0}, but for the $^3S_1-{}^3D_1$ channel. As with the cases
described before, the N$^3$LO results remain accurate at the few keV level, as the
integration is brought down to the shell-model scale, $\Lambda \rightarrow \Lambda_P$. }
\label{fig_sd}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{sd_nnlo}
\includegraphics[width=8cm]{sd_n3lo}
\includegraphics[width=8cm]{sd_rms}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3d1_bare}
\includegraphics[width=8cm]{3d1_nnlo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3d1_n3lo}
\includegraphics[width=8cm]{3d1_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3D_1$ channel. }
\label{fig_3d1}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3d2_bare}
\includegraphics[width=8cm]{3d2_nnlo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3d2_n3lo}
\includegraphics[width=8cm]{3d2_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3D_2$ channel. }
\label{fig_3d2}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3d3_bare}
\includegraphics[width=8cm]{3d3_nnlo}
\end{center}
\caption{As in Fig. \ref{fig_3d1} and \ref{fig_3d2}, but for the $^3D_3$ channel. This ``stretched"
configuration generates much larger residuals than the other $l=2$ channels.
Consequently a calculation to N$^4$LO would be needed to reduce typical matrix element
errors to $\sim$ 10 keV, in the limit $\Lambda \rightarrow \Lambda_P$.}
\label{fig_3d3}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3d3_n3lo}
\includegraphics[width=8cm]{3d3_n4lo}
\includegraphics[width=8cm]{3d3_rms}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1d2_bare}
\includegraphics[width=8cm]{1d2_nnlo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1d2_n3lo}
\includegraphics[width=8cm]{1d2_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_3s1}, but for the $^1D_2$ channel. }
\label{fig_1d2}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{dg_bare}
\includegraphics[width=8cm]{dg_n3lo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{dg_n4lo}
\includegraphics[width=8cm]{dg_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3D_3-{}^3G_3$ channel. The N$^4$LO
contribution is also shown. }
\label{fig_dg}
\end{figure}
The remaining positive-parity channels that can be constrained at N$^3$LO are given
in Figs. \ref{fig_sd} through \ref{fig_dg}: ${}^3S_1-{}^3D_1$ (leading order contribution NLO);
${}^1D_2-{}^1D_2$, ${}^3D_1-{}^3D_1$, ${}^3D_2-{}^3D_2$, and ${}^3D_3-{}^3D_3$
(NNLO); and ${}^3D_3-{}^3G_3$ (N$^3$LO).
Table \ref{table:2} gives the resulting fitted couplings at N$^3$LO for all contributing
channels, along with numerical results for the root-mean-square $Q$-space contributions
to $\Delta_{QT}(\Lambda_P)$ and the root-mean-square residuals (the deviation between
the contact-gradient prediction and $\Delta_{QT}(\Lambda_P)$ for the remaining
unconstrained effective-interaction matrix elements). The quality of the agreement found
in the $^1S_0$ and $^3S_1$ channels is generally typical -- residuals at the few-keV
level -- though there are some exceptions and some general patterns that emerge.
One of these is the tendency of the triplet channels with spin and angular momentum aligned
($^3S_1-{}^3S_1$, $^3P_2-{}^3P_2$, $^3D_3-{}^3D_3$, and $^3F_4-{}^3F_4$) to exhibit larger
residuals than the remaining $S$, $P$, $D$, and $F$ channels, respectively. The
$^3D_3-{}^3D_3$ channel, which has contributions at NNLO and N$^3$LO,
stands out as the most difficult channel, with a residual of 122 keV, one to two
orders of magnitude greater than the typical scale of N$^3$LO residuals.
\begin{table*}
\begin{center}
\caption{The effective interaction for LO through N$^3$LO, with $\Lambda_P=8$ and $b$=1.7 fm.$^\dagger$}
\label{table:2}
\begin{tabular}{||l||c||c||c|c||c|c|c||c|c||}
\hline
Channel & \multicolumn{7}{c||}{Couplings (MeV)} & $\langle$M.E.$\rangle_{RMS}$ (MeV) & $\langle$Resid.$\rangle_{RMS}$ (keV) \\
\hline \hline
& $a_{LO}^S$ & $a_{NLO}^S$ & $a_{NNLO}^{S,22}$ & $a_{NNLO}^{S,40}$ & $a_{N^3LO}^{S,42}$ &
$a_{N^3LO}^{S,60}$ & & & \\
${}^1S_0-{}^1S_0$ & -32.851 & -2.081E-1 & -2.111E-3 & -1.276E-3 & -7.045E-6 & -1.8891E-6 & & 7.94 & 0.53 \\
${}^3S_1-{}^3S_1$ & -62.517 & -1.399 & -5.509E-2 & -1.160E-2 & -5.789E-4 & -1.444E-4 & & 11.97 & 2.71\\
\hline
& & $a_{NLO}^{SD}$ & $a_{NNLO}^{SD,22}$ & $a_{NNLO}^{SD,04}$ & $a_{N^3LO}^{SD,42}$ &
$a_{N^3LO}^{SD,24}$ & $a_{N^3LO}^{SD,06}$ & & \\
${}^3S_1 - {}^3D_1$ & &2.200E-1 & 1.632E-2 & 2.656E-2 & 2.136E-4 & 3.041E-4 & -1.504E-4 & 0.160 & 2.45 \\ \hline
& & & $a_{NNLO}^D$ & & $a_{N^3LO}^D$ & & & & \\
${}^1D_2 -{}^1D_2$ & & & -6.062E-3 & & -1.189E-4 & & & 0.027 & 1.21 \\
${}^3D_1-{}^3D_1$ & & & -1.034E-2 & & -1.532E-4 & & & 0.051 & 2.27 \\
${}^3D_2-{}^3D_2$ & & & -3.048E-2 & & -5.238E-4 & & & 0.141& 1.20 \\
${}^3D_3-{}^3D_3$ & & & -9.632E-2 & & -4.355E-3 & & & 0.303 & 122$^\ddagger$ \\ \hline
& & & & & $a_{N^3LO}^{SD}$ & & & &\\
${}^3D_3-{}^3G_3$ & & & & & 3.529E-4 & & & 0.012 & 12.2$^\ddagger$ \\ \hline
& & $a_{NLO}^P$ & $a_{NNLO}^P$ & &$a_{N^3LO}^{P,33}$ & $a_{N^3LO}^{P,51}$ & & & \\
${}^1P_1-{}^1P_1$ & & -8.594E-1 & -7.112E-3 & & -6.822E-5 & 1.004E-5 & & 0.694 & 0.11 \\
${}^3P_0-{}^3P_0$ & & -1.641 & -1.833E-2 & & -2.920E-4 & -1.952E-4 & & 1.283 & 2.26 \\
${}^3P_1-{}^3P_1$ & & -1.892 & -1.588E-2 & & -1.561E-4 & -6.737E-6 & & 1.526 & 0.08 \\
${}^3P_2-{}^3P_2$ & & -4.513E-1 & -1.257E-2 & & -5.803E-4 & -1.421E-4 & & 0.285 & 5.61 \\ \hline
& & & $a_{NNLO}^{PF}$ & & $a_{N^3LO}^{PF,33}$ & $a_{N^3LO}^{PF,15}$ & & & \\
${}^3P_2-{}^3F_2$ & & & -4.983E-3 & & 1.729E-5 & -5.166E-5 & & 0.034 & 1.43 \\ \hline
& & & & & $a_{N^3LO}^F$ & & & & \\
${}^1F_3-{}^1F_3$ & & & & & -3.135E-4 & & & 0.007 & 1.03 \\
${}^3F_2-{}^3F_2$ & & & & & -8.537E-4 & & & 0.020 & 2.34 \\
${}^3F_3-{}^3F_3$ & & & & & -2.647E-4 & & & 0.006 & 0.61 \\
${}^3F_4-{}^3F_4$ & & & & & -5.169E-4 & & & 0.008 & 6.23 \\
\hline \hline
\end{tabular}
\end{center}
\flushleft{$^\dagger$ The appropriate LO, NLO, and NNLO interactions are obtained by truncating the
table at the desired order. \\
$^\ddagger$ An N$^4$LO calculation in the ${}^3D_3-{}^3D_3$ channel
yields $a_{N^4LO}^{3D3,44}$=-2.510E-4 MeV
and $a_{N^4LO}^{3D3,62}$ = -7.550E-5 MeV, and reduces $\langle \mathrm{Resid.} \rangle_{RMS}$ to 22.3 keV; and in the $^3D_3-{}^3G_3$ channel yields
$a_{N^4LO}^{DG,44}$ = -2.141E-5 MeV and $a_{N^4LO}^{DG,26}$ = 1.180E-5 MeV and
reduces $\langle \mathrm{Resid.} \rangle_{RMS}$ to 3.26 keV.}
\end{table*}
Figures \ref{fig_1p1} through \ref{fig_3f} show the convergence for the various channels involving
odd-parity states and contributing through N$^3$LO:
$^1P_1-{}^1P_1$, $^3P_J-{}^3P_J$, $^3P_2-{}^3F_2$, $^1F_3-{}^1F_3$, and $^3F_J-{}^3F_J$.
While the spin-aligned channels show slightly larger residuals, overall the RMS errors at
N$^3$LO are at the one-to-few keV level. Thus a simple and essentially exact
representation for the effective interaction exists.\\
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1p1_bare}
\includegraphics[width=8cm]{1p1_nlo}
\end{center}
\caption{As in Fig. \ref{fig_3s1}, but for the $^1P_1$ channel. }
\label{fig_1p1}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1p1_nnlo}
\includegraphics[width=8cm]{1p1_n3lo}
\includegraphics[width=8cm]{1p1_rms}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3p0_bare}
\includegraphics[width=8cm]{3p0_nlo}
\end{center}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3P_0$ channel. }
\label{fig_3p0}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3p0_nnlo}
\includegraphics[width=8cm]{3p0_n3lo}
\includegraphics[width=8cm]{3p0_rms}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3p1_bare}
\includegraphics[width=8cm]{3p1_nlo}
\end{center}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3P_1$ channel. }
\label{fig_3p1}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3p1_nnlo}
\includegraphics[width=8cm]{3p1_n3lo}
\includegraphics[width=8cm]{3p1_rms}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3p2_bare}
\includegraphics[width=8cm]{3p2_nlo}
\end{center}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3P_2$ channel. }
\label{fig_3p2}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3p2_nnlo}
\includegraphics[width=8cm]{3p2_n3lo}
\includegraphics[width=8cm]{3p2_rms}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{pf_bare}
\includegraphics[width=8cm]{pf_nnlo}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{pf_n3lo}
\includegraphics[width=8cm]{pf_rms}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_3s1}, but for the $^3P_2-{}^3F_2$ channel. }
\label{fig_pf}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1f3_bare}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{1f3_n3lo}
\end{center}
\end{minipage}
\caption{The lowest contributing order to the $^1F_3$ channel is N$^3$LO. $\Delta_{QT}(\Lambda)$
and the N$^3$LO residuals for the five unconstrained matrix elements are shown. }
\label{fig_1f3}
\end{figure}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3f2_bare}
\includegraphics[width=8cm]{3f3_bare}
\includegraphics[width=8cm]{3f4_bare}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{3f2_n3lo}
\includegraphics[width=8cm]{3f3_n3lo}
\includegraphics[width=8cm]{3f4_n3lo}
\end{center}
\end{minipage}
\caption{As in Fig. \ref{fig_1f3}, but for the $^3F_J-{}^3F_J$ channels. As has been noted in other
cases, the stretched $^3F_4$ case has the largest residual.}
\label{fig_3f}
\end{figure}
\noindent
{\it Expansion parameters, naturalness:} The approach followed here differs from EFT,
where the formalism is based on an explicit expansion parameter, the ratio of the momentum
to a momentum cutoff. The input into the present calculation is a set of numerical matrix
elements of an iterated, nonrelativistic potential operating in $Q$. Potentials like $v_{18}$
are also effectively regulated at small $r$ by some assumed form, e.g., a Gaussian,
matched smoothly to the region in $r$ that is constrained by scattering data. Thus
there are no singular potentials iterating in $Q$.
Intuitively it is clear that the convergence apparent in Table \ref{table:2} is connected
with the range of hard-core interactions (once edge states are transformed by summing $T$).
A handwaving argument can be made by assuming rescattering in $Q$ effectively
generates a potential of the form
\[ V_0 e^{-r_{12}^2/a^2}, \]
where $r_{12} =|\vec{r_1}-\vec{r_2}|$.
This ansatz is local, so there is some arbitrariness in mapping it onto contact-gradient
expansion coefficients, which correspond to the most general nonlocal potential. But
a sensible prescription is to equate terms with equivalent
powers of $r^2$, in the bra and ket, when taking HO matrix elements of this potential. Then one finds,
for S-wave channels
\begin{equation}
a(m',m) \equiv a_{N^{m'+m}LO}^{S,2m'2m} = {1 \over 4^{m'+m} m'! m!} {(2m'+2m+1)!! \over
(2m'+1)!! (2m+1)!!} V_0 \left[ {\pi a^2 \over a^2+ 2b^2} \right]^{3/2} \left[ {a^2 \over a^2 + 2b^2} \right]^{m'+m}
\end{equation}
where $b$ is the oscillator parameter. (The notation is such that, e.g., $a(m'=3,m=0)=
a_{N^3LO}^{S,60}$.) The last factor is thus the expansion parameter:
if the range of the hard-core physics residing in $Q$ is small compared to the natural nuclear
size scale $b$, then each additional order in the expansion should be suppressed by
$\sim (a/b)^2$.
One can use this crude ansatz to assess whether the convergence shown in Table \ref{table:2}
is natural, or within expectations. The LO and NLO $^1S_0-{}^1S_0$ results effectively determine
$V_0$ and $a$; thus the strengths of the four NNLO and N$^3$LO couplings
can be predicted relative to those of $a_{LO}$ and $a_{NLO}$. The predicted hierarchy
$a_{LO}:a_{NLO}:a_{NNLO}^{22}:a_{NNLO}^{40}:a_{N^3LO}^{42}:
a_{N^3LO}^{60}$ of
\[ 1:6.3 \times 10^{-3}:6.7\times 10^{-5}:2.0\times 10^{-5}:3.0\times 10^{-7}:4.2\times 10^{-8} \]
matches the relative strengths of the couplings in the table quite well,
\[1:6.3 \times 10^{-3}:6.4\times 10^{-5}:3.9\times 10^{-5}:2.1\times10^{-7}:5.7\times 10^{-8} ,\]
including qualitatively reproducing the ratios of the two NNLO and two N$^3$LO
coefficients. The parameters derived from $a_{LO}$ and $a_{NLO}$ are $a \sim 0.39$ fm
and $V_0 \sim -1.5$ GeV. In the $^1S_0$ channel the bare Argonne $v_{18}$ potential at small $r$
can be approximated by a Gaussian with $a \sim 0.33$ fm and $V_0 \sim 3.0$ GeV. So again
the crude estimates of range and even the strength are not unreasonable. [Note that the signs of the
two $V_0$s are correct -- the $P$-space lacks the appropriate short-range repulsion and thus
samples the iterated bare potential at small $r$, a contribution that then must be subtracted
off when $H^{eff}$ is evaluated.]
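As a check of this arithmetic, the predicted hierarchies can be generated directly from the Gaussian-ansatz coefficient formula above (a minimal sketch; the function names are ours, $b = 1.7$ fm is the oscillator parameter used elsewhere in this paper, and $a \sim 0.75$ fm is the $^3S_1$ value found below):

```python
from math import factorial

def dfact(k):
    """Double factorial k!! (with 1!! = (-1)!! = 1)."""
    r = 1
    while k > 1:
        r *= k
        k -= 2
    return r

def ratio(mp, m, a, b):
    """a(m',m)/a(0,0) from the Gaussian-ansatz formula for S-wave channels."""
    x = a**2 / (a**2 + 2*b**2)      # the expansion parameter
    M = mp + m
    return (dfact(2*M + 1) / (dfact(2*mp + 1) * dfact(2*m + 1))
            / (4**M * factorial(mp) * factorial(m))) * x**M

b = 1.7                             # oscillator parameter [fm]
for label, a in [("1S0", 0.39), ("3S1", 0.75)]:   # fitted Gaussian ranges [fm]
    print(label, ["%.2e" % ratio(mp, m, a, b)
                  for (mp, m) in [(1, 0), (1, 1), (2, 0), (2, 1), (3, 0)]])
```

With $a = 0.39$ fm this yields $6.4\times 10^{-3}$, $6.8\times 10^{-5}$, $2.1\times 10^{-5}$, $3.1\times 10^{-7}$, and $4.4\times 10^{-8}$, in line with the predicted $^1S_0$ hierarchy quoted above.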
A similar exercise in the $^3S_1$ channel yields the predicted hierarchy
\[ 1:2.2 \times 10^{-2}:8.3\times 10^{-4}:2.5\times 10^{-4}:13.1\times 10^{-6}:1.9\times 10^{-6} \]
which compares with the coupling ratios calculated from Table \ref{table:2}
\[ 1:2.2 \times 10^{-2}:8.8\times 10^{-4}:1.9\times 10^{-4}:9.3\times 10^{-6}:2.3\times 10^{-6}. \]
The convergence is very regular but slower: in this case the effective Gaussian parameter
needed to describe this trend is $a \sim 0.75$ fm. The overall strength, $V_0 \sim -0.42$ GeV,
differs substantially from that found for the $^1S_0$ channel, though the underlying
$v_{18}$ potentials for $^3S_1-{}^3S_1$ and $^1S_0-{}^1S_0$ scattering are quite similar
(see Fig. \ref{fig_av18}).
The $^3S_1-{}^3S_1$ behavior is similar to that found in the other spin-aligned channels,
such as $^3D_3-{}^3D_3$ and $^3P_2-{}^3P_2$, where the scattering in $Q$ includes
contributions from the tensor force. The tensor force contributes to the LO s-wave coupling
through intermediate $D$-states in $Q$, e.g.,
\[ \langle n' l'=0 | V_{SD}Q | n'' l=2 \rangle {1 \over \langle E \rangle} \langle n'' l=2 |Q V_{SD} |n l=0\rangle, \]
as the product of two tensor operators has an s-wave piece. The radial dependence
of $V_{SD}$ for $v_{18}$, shown in Fig. \ref{fig_av18}, is significantly
more extended than in central-force $^3S_1-{}^3S_1$ and $^1S_0-{}^1S_0$ cases. This has the
consequences that (1) the mean excitation energy $\langle E \rangle$ for $^3S_1-{}^3D_1$ will
be lower (enhancing the importance of the tensor force) and (2) the
$P$-space $\langle ^3S_1 | H^{eff} | ^3S_1 \rangle$ matrix element will reflect the extended
range.
Once this point is appreciated -- that the effective expansion parameters are naturally
channel-dependent because of effects like the tensor force -- the results shown in Table
\ref{table:2} are very pleasing:
\begin{itemize}
\item In each channel the deduced couplings $a_{LO},~a_{NLO},~a_{NNLO},~a_{N^3LO},...$
evolve in a very orderly, or natural, fashion: one can reliably predict the size of the
next omitted term. The convergence appears related to an effective range characterizing
scattering in $Q$.
\item The convergence varies from channel to channel, but this variation reflects underlying
physics, such as the role of the tensor force, governing the channel's range.
One does not find, nor perhaps should one expect to find,
some single parameter $p/\Lambda$ to characterize convergence independent of channel.
\item The convergence is very satisfactory in all channels: the measure used in Table
\ref{table:2}, $\langle$Resid.$\rangle_{RMS}$, is an {\it exceedingly} conservative one,
as discussed below. But even by this standard, in only one channel ($^3D_3-{}^3D_3$)
do the RMS residual discrepancies among unconstrained matrix elements exceed
$\sim$ 10 keV. Given the arguments above, it is perfectly sensible to work to order NNLO
in rapidly-converging channels like $^1S_0-{}^1S_0$ and N$^4$LO in slowly converging
channels like $^3D_3-{}^3D_3$. As noted in the table, at N$^4$LO the residual in the
$^3D_3-{}^3D_3$ channel is reduced to 22 keV.
\end{itemize}
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{av18}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=8cm]{lepage3s1}
\end{center}
\end{minipage}
\caption{(Color online) The left panel shows the radial dependence of the Argonne $v_{18}$ potential in the $^1S_0-{}^1S_0$,
$^3S_1-{}^3S_1$, and $^3S_1-{}^3D_1$ (tensor) channels. The last is clearly more extended.
The right panel is a ``Lepage plot" displaying fractional errors as a function of the order of the
calculation, on log scales. The steepening of the slope with order is the sign of a well behaved, converging effective theory.}
\label{fig_av18}
\end{figure}
\noindent
{\it Convergence and the ``Lepage" plot:} The procedure often followed in an
effective theory is to use information about the low-lying excitations to parameterize an
effective Hamiltonian, which is then used to predict properties of other states near the
ground state. In contrast, the goal here has been to characterize the entire effective
interaction to high accuracy. As described below, the residual errors in the procedure
are typically dominated by matrix elements with the largest $n$ and $n^\prime$, corresponding
to minor components in the deuteron ground state, for example. The difference in the
deuteron binding energy using exact matrix elements of $H^{eff}$
versus using the N$^3$LO expansion is quite small ($\sim$ 40 eV).
Order-by-order improvement should be governed by
nodal quantum numbers. For example, in LO in the $^3S_1$ channel
the omitted NLO term would be
\begin{equation}
-{8 a_{NLO}^{3S1} \over \pi^2} (n'+n-2) \left[ {\Gamma[n^\prime+1/2] \Gamma[n+1/2]
\over (n^\prime-1)! (n-1)!} \right]^{1/2} \stackrel{n^\prime, n~\mathrm{large}}{\longrightarrow}
-{4 a_{NLO}^{3S1} \over \pi^2} \left[(4 n^\prime-1)(4n-1)\right]^{1/4} (n^\prime+n-2)
\end{equation}
Thus the fractional error associated with the omission of the NLO terms relative to LO should
be linear in the sum of the nodal
quantum numbers, if the expansion is capturing the correct physics.
That is, the expected
absolute (e.g., in keV) error for ($n^\prime,n$)=(5,5) would be about 16 times that for (1,2).
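The quoted factor of 16 follows directly from the exact Gamma-function form of the NLO term (a sketch; `nlo_term` is our name, and the overall coupling $8 a_{NLO}^{3S1}/\pi^2$ is dropped since it cancels in the ratio):

```python
from math import lgamma, exp

def nlo_term(np_, n):
    """Magnitude of the omitted NLO term, overall coupling dropped:
    (n'+n-2) * [Gamma(n'+1/2) Gamma(n+1/2) / ((n'-1)! (n-1)!)]**(1/2)."""
    log_bracket = 0.5 * (lgamma(np_ + 0.5) + lgamma(n + 0.5)
                         - lgamma(np_) - lgamma(n))   # note log((n-1)!) = lgamma(n)
    return (np_ + n - 2) * exp(log_bracket)

print(nlo_term(5, 5) / nlo_term(1, 2))   # ~16, as stated in the text
```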
In higher orders this distinction between large and small $n$ grows. At LO+NLO, the
expected fractional errors in
matrix elements from omitted NNLO terms would be quadratic in $n$ and $n'$: the explicit functional
dependence is no longer simple as there are two NNLO operators, and one would not know
{\it a priori} the relevant quadratic combination of $n^\prime$ and $n$ governing the
error. At NNLO the fractional error would be a cubic polynomial in
$n$ and $n^\prime$.
While beyond LO the expected fractional errors have a dependence on both $n$ and $n^\prime$,
it is still helpful to display results as a 2D ``Lepage plot" using $n+n^\prime$ -- proportional to the
average $\langle p^2 \rangle$ of bra and ket -- as the variable. Such a plot makes clear whether
improved fits in an effective theory are systematic -- that is, due to a correct description
of the underlying physics, not just additional parameters.
The use of a single parameter, $n+n^\prime$, of course maps multiple matrix
elements onto the same $x$ coordinate, even though the ET indicates this is a bit too simple
beyond LO. Nevertheless, the right panel in Fig. \ref{fig_av18} still shows rather nicely
that the nuclear
effective interactions problem is a
very well behaved effective theory. In LO the residual errors do map onto the
single parameter $n+n^\prime$ to very good accuracy,
and the residual error is linear. The steepening of
the convergence with order is consistent with the expected progression from linear to
quadratic to cubic behavior in nodal quantum numbers.
By NNLO, errors in unconstrained matrix elements for
small $n+n^\prime$ are tiny, compared to those with high $n+n^\prime$. That is, the expansion
converges most rapidly for matrix elements between long-wavelength states, as it should.
However, improvement is substantial and systematic everywhere, including at the
largest $n+n^\prime$.
\section{Properties and Energy Dependence of the Effective Interaction}
The results of the previous section demonstrate the existence of a simple systematic
operator expansion for the HOBET effective interaction. Its behavior order-by-order
and in the Lepage plot indicates that the short-wavelength physics is being
efficiently captured in the associated operator coefficients.
The error measure used in the N$^3$LO fit is
dominated by the absolute errors in matrix elements involving
the highest nodal quantum numbers: these matrix elements are large even though they
may not play a major role in determining low-lying eigenvalues.
(It might have been better to use the fractional error
in matrix elements, a measure that would be roughly independent of $n^\prime$ and $n$.)
Other possible measures of error are the
ground state energy; the first energy-moment of the effective interaction matrix (analogous
to the mean eigenvalue in the SM); the fluctuation between neighboring
eigenvalues of that matrix (analogous to the level spacing in the SM); and the overlap of
the eigenfunctions of that matrix with the exact eigenfunctions (analogous to wave function
overlaps in the SM).
The N$^3$LO interaction in the coupled $^3S_1-{}^3D_1$ channel
produces a ground state energy accurate to $\sim$ 40 eV;
a spectral first moment accurate to 1.81 keV; an RMS average deviation in the level spacing
of 3.52 keV; and wave function overlaps that are unity to better than four significant digits.
As rescattering in $Q$ contributes $\sim$ -10 MeV to eigenvalues, the accuracy of the
N$^3$LO representation of the effective interaction is, by these spectral measures, on the order
of 0.01\%.
As the best excited-state techniques in nuclear physics currently
yield error bars of about 100 keV for the lightest nontrivial nuclei, this representation of the
two-body effective interaction is effectively exact \cite{argonne,nocore}.
The approach requires one to sum $QT$ to all orders, producing a result that
depends explicitly on $|E|$ -- which in this context should be measured relative to the
first breakup channel. While the associated effects increase with decreasing $|E|$, it will
be shown later that the renormalization is substantial
even for well-bound
nuclear states.
The deuteron is definitely not an extreme case. The effects are also sensitive to
the choice of $P$, through $b$, which
controls the mean momentum within $P$ -- a small $b$ reduces the missing
hard-core physics, but exacerbates the problems at long wavelengths, and
conversely. Figure \ref{fig_1} suggests factor-of-two changes
in the $Q$-space contribution to the deuteron binding energy can result from
$\sim$ 20\% changes in $b$. At the outset, the dependence on $|E|$ and $b$ seems
like a difficulty for
nuclear physics, as modest changes in these parameters alter predictions.
One of the marvelous properties of the HO is that the $QT$ sum can be done.
The two effects discussed above turn out to be
governed by a single parameter, $\kappa$. The associated effects are nonperturbative
in {\it both} $QT$ and $QV$. In the case of $QT$ an explicit sum to all orders is
done. The effects are also implicitly
nonperturbative in $QV$, because of the dependence on $|E|$. This is why the BH
approach is so powerful: because $|E|$ is determined self-consistently, it is simple
to incorporate this physics directly into the iterative process (which has been shown to
converge very rapidly in the HOBET test cases A=2 and 3).
When this is done, one finds that $\kappa$ affects results in three ways:
\begin{itemize}
\item the rescattering of $QT$ to all orders, $T (E-QT)^{-1} QT$, is absorbed into a
new ``bare" matrix element
$\langle \alpha | T | \widetilde{\beta}(\kappa) \rangle$;
\item the new ``bare" matrix element $\langle \widetilde{\alpha}(\kappa) | V | \widetilde{\beta}(\kappa) \rangle$
captures the effects of $QT$ in all orders on the contribution first-order in $V$; and
\item the matrix elements of the short-range operators $\bar{O}$, which contain all the multiple scattering of $QV$, are similarly modified, $\langle \widetilde{\alpha}(\kappa) | \bar{O} | \widetilde{\beta}(\kappa) \rangle$.
\end{itemize}
So far the discussion has focused on the problem of a single bound state of fixed
binding energy $|E|$, the deuteron ground state. No discussion has occurred
of expectations for problems in which multiple
bound states, each with a different $H^{eff}(|E|)$, might arise. But
1) the dependence of $H^{eff}(|E|)$ on $\kappa$ arises already in the single-state case, which
was not {\it a priori} obvious; and 2) state dependence (energy dependence in the
case of BH) must arise in the case of multiple states, as this is the source of the
required nonorthogonality of states when restricted to $P$, a requirement for a
proper effective theory. So a question clearly arises about the connection between
the explicit $\kappa$ dependence found for fixed $|E|$, and the additional
energy dependence that might occur for a spectrum of states.
Because other techniques, like Lee-Suzuki, have been used to address problem 2),
it is appropriate to first stress the relationship between $\kappa$ and the strong interaction
parameters provided in Table \ref{table:2}. The choice $\Lambda_P$=8 is helpful, as
it shows there is no relation. Every short-range coefficient arising through order N$^3$LO
was determined from nonedge matrix elements: the fitting procedure matches the
coefficients to the set of matrix elements with $n^\prime+n \leq 5$, and there are no edge
states satisfying this constraint. Nothing in the treatment of the strong interaction ``knows"
about edge states. This then makes clear how efficiently $\kappa$ captures
the remaining missing physics. Without $\kappa$ one would have, in the contact-gradient
expansion to
N$^3$LO, a total of 78 poorly reproduced edge-state matrix elements, 10 of which
would be $S$-state matrix elements with errors typically of several MeV. With $\kappa$ --
a parameter nature (and the choice of $b$) determines -- all of the 78 matrix elements are
properly reproduced,
consistent with the general $\sim$ keV accuracy of the N$^3$LO description of $H^{eff}$.
Suppose someone were to prefer an $H^{eff}$ free of any dependence on $|E|$, again in the
context of an isolated state of energy $|E|$. Could this be done? Yes, but at the cost
of a cumbersome theory that obscures the remarkably simple physics behind the
proper description of the edge state matrix elements. Suppose one wanted merely
to fix the five $^3S_1-{}^3S_1$ edge state matrix elements, those where
$n^\prime=5$ couples to $n$=1, 2, 3, 4,
and 5. One could introduce operators corresponding to the coefficients
\[ a_{N^4LO}^{S,80},~a_{N^5LO}^{S,82},~a_{N^6LO}^{S,84},~a_{N^7LO}^{S,86},~a_{N^8LO}^{S,88} \]
to correct these matrix elements. It is clear all five couplings would be needed -- that's
the price one would pay for mocking up long-range physics (a long series of high-order
Talmi integrals) with a set of short-range operators of this sort.
This would be a rather poorly motivated exercise:
\begin{itemize}
\item The problems in these matrix elements have nothing to do with high-order generalized
Talmi integrals of the strong potential, as was demonstrated
in the previous section.
\item This approach does not ``heal" the effective theory: the poor running of
matrix elements would remain. There would be no systematic improvement, for all
matrix elements, as a function of $\Lambda$, as one progresses from LO, to NLO, etc.
The five parameters introduced above would remove the numerical discrepancies
at $\Lambda_P$, but not fix the running as a function of $\Lambda$, even for just
the edge-state matrix elements.
\item This approach amounts to parameter fitting, in contrast to the systematic
improvement demonstrated in the Lepage plot. The parameter $a_{N^4LO}^{S,80}$
introduced to fix the $n=1$ to $n=5$ matrix element will not properly correct
the $n=2$ to $n=5$ matrix element, as the underlying physics has nothing to do
with the $r_1^8 r_2^0$-weighted Talmi integral of any potential.
\item If $\Lambda_P$ is increased, the number of such edge-state matrix elements that will
need to be corrected by the fictitious potential increases.
This contrasts with the approach where $|E|$ is
explicitly referenced: there the number of short-range coefficients needed to
characterize $Q$ will decrease (that is, the LO, NLO, ... expansion becomes more
rapidly convergent), while $\kappa$ remains the single parameter
governing the renormalization of those coefficients for edge-state matrix elements.
\end{itemize}
While these reasons are probably sufficient to discard any such notion of building
a $\kappa$-independent $H^{eff}$, consider now the consequences of changing
$b$ -- which after all is an arbitrary choice. The short-range coefficients in Table
\ref{table:2} will change: there is an underlying dependence on $QV(\vec{r}_{12}/b)$. This governs
natural variations in the coefficients -- one could estimate those variations based on some
picture of the range of multiple scattering in $Q$, as was done in the ``naturalness"
discussion. But there would be additional changes in the ratios of
edge to nonedge matrix elements, reflecting the changes in $\kappa$. This would
induce unnatural evolution in $b$ in any $\kappa$-independent potential. That is, the
fake potential would look fake as $b$ is changed.
The arguments above apply equally well to the case of the
state-dependence associated with techniques like Lee-Suzuki. To an accuracy of about
95\%, the $\kappa$-dependence isolated in $H^{eff}$ is {\it also} the state-dependence
that one encounters when $|E|$ is changed. This is a lovely result: the natural
$\kappa$-dependence that is already present in the case of a short-range
expansion of $H^{eff}$ for a fixed state, also gives us ``for free" the BH state-dependence. The result
is not at all surprising, physically: changes in $|E|$ will alter the balance between $QT$ and
$QV$, and that is precisely the physics that was disentangled by introducing $\kappa$.
Mathematically, it is also not surprising: changing $b$ at fixed $|E|$ alters $\kappa$,
just as changes in $|E|$ for fixed $b$ would. Thus all
of the $QT$ effects identified above, in considering a single state, must also arise
when one considers spectral properties.
This argument depends on showing that other, implicit energy dependence in $H^{eff}$
is small compared to the explicit dependence captured in $\kappa$. Such implicit
dependence can reside in only one place, the fitted short-range coefficients.
\subsection{Energy Dependence}
The usual procedure for solving the BH equation,
\[ H^{eff} = H + HQ {1 \over E-QH} QH ,\]
involves steps to ensure self-consistency. As
the energy appearing in the Green's function is the energy of the state being calculated,
self-consistency requires iteration on this energy until convergence
is achieved: an initial guess for $E$ yields an $H^{eff}(E)$ and thus an eigenvalue $E^\prime$,
which then can be used in a new calculation of the interaction $H^{eff}(E^\prime)$.
This procedure is iterated
until the eigenvalue corresponds to the energy used in calculating $H^{eff}$. In practice,
the convergence is achieved quite rapidly,
typically after about five cycles.
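The self-consistency loop is easy to illustrate on a toy model (a sketch, not the HOBET code: a fixed $6\times 6$ Hermitian $H$ with $P$ chosen as the lowest two states; `heff` is the exact BH expression):

```python
import numpy as np

# Toy full-space Hamiltonian: P = first 2 states, Q = remaining 4.
H = np.diag([-2.0, -1.0, 3.0, 4.0, 5.0, 6.0])
H[0, 2:] = H[2:, 0] = 0.3           # weak P-Q coupling
H[1, 2:] = H[2:, 1] = 0.3

P, Q = slice(0, 2), slice(2, 6)

def heff(E):
    """Bloch-Horowitz: H_eff(E) = PHP + PHQ (E - QHQ)^(-1) QHP."""
    G = np.linalg.inv(E * np.eye(4) - H[Q, Q])
    return H[P, P] + H[P, Q] @ G @ H[Q, P]

E = -2.0                            # initial guess for the ground-state energy
for _ in range(20):                 # iterate to self-consistency
    E = np.linalg.eigvalsh(heff(E))[0]

print(E, np.linalg.eigvalsh(H)[0])  # fixed point matches the exact ground state
```

The fixed point reproduces the lowest eigenvalue of the full $H$ to machine precision; with weak $P$--$Q$ coupling a handful of cycles suffices, consistent with the rapid convergence noted above.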
As the BH procedure produces a Hermitian $H^{eff}$, this energy dependence is
essential in building into the formalism the correct relationship between the $P$-space
and full-space wave functions, that the former are the restrictions of the latter (and
thus cannot form an orthonormal set). This relationship
allows the wave function to evolve smoothly to the
exact result, in form and in normalization, as $\Lambda_P \rightarrow \infty$.
Generally this energy dependence remains implicit because the BH equation is solved
numerically: one obtains distinct sets of matrix elements $\langle \alpha | H^{eff}(E_i) | \beta \rangle$
for each state $i$, but the functional dependence on $E_i$ is not immediately apparent.
But that is not the case in the present treatment, where an
analytic representation for the effective interaction has been obtained.
While significant energy-dependent effects governed by $\kappa$ have been isolated,
additional sources remain in the case of a spectrum of bound states. The identified energy-dependent terms are
\begin{itemize}
\item $\langle \alpha | T {E \over E-QT} | \beta \rangle = \langle \alpha |T|\widetilde{\beta}(\kappa) \rangle$;
\item $\langle \alpha | {E \over E-TQ} V {E \over E-QT} | \beta \rangle = \langle \widetilde{\alpha}(\kappa) |V|\widetilde{\beta}(\kappa) \rangle$; and
\item $\langle \alpha | {E \over E-TQ} \bar{O} {E \over E-QT} | \beta \rangle = \langle \widetilde{\alpha}(\kappa) |\bar{O}|\widetilde{\beta}(\kappa) \rangle$
\end{itemize}
The implicit energy dependence not yet isolated resides in the coefficients of the contact-gradient
expansion,
\begin{itemize}
\item $\langle \alpha | V {E \over E-QH} QV | \beta \rangle = \langle \alpha | \bar{O}(E)|\beta \rangle$.
\end{itemize}
To isolate this dependence, one must repeat the program that was executed for the deuteron ground state at a variety
of energies, treating $H^{eff}(|E|)$ as a function of $|E|$. The resulting variations in the extracted coefficients
will then determine the size of the implicit energy dependence. Of course, all of the explicit energy
dependence is treated as before, using the appropriate $\kappa.$
The simplest of the explicit terms is the ``bare" kinetic energy \[ \langle n^\prime l | T | \widetilde{n} \widetilde{ l}(\kappa) \rangle \equiv \langle n^\prime l | T + T {1 \over E-QT} QT |n l \rangle = \langle n^\prime l | T | n l \rangle + {\hbar \omega \over 2} \delta_{n^\prime n} \sqrt{n(n+l+1/2)} ~\widetilde{g}_1(-\kappa^2;n,l), \]
where effects only arise in the double-edge-state case. Two limits define the range of
variation. As $|E| \rightarrow \infty$, $\widetilde{g}_1 \rightarrow 0$, so
the edge-state matrix element takes on its bare value, $(2n+l-1/2) \hbar \omega/2$.
Similarly one can show $\sqrt{n(n+l+1/2)}~\widetilde{g}_1(-\kappa^2;n,l) \rightarrow -n$ as the binding energy $|E|$
approaches zero. Thus for small binding, the matrix element approaches $(n+l-1/2) \hbar \omega/2$.
Thus the range is a broad one, $n \hbar \omega/2$, about 35 MeV for the parameters used in
this paper. The behavior between these limits
can be calculated. The results over 20 MeV in binding are shown in the upper left
panel of Fig. \ref{fig_edep} for $S$, $P$, and $D$ states. One finds that even deeply
bound (E=-20 MeV) states have very significant corrections due to $QT$: the scattering in $Q$
reduces the edge-state kinetic energy matrix elements by (2-3) $\hbar \omega/2$, which
serves to lower the energy of the bound state. The kinetic energy decreases monotonically
as $|E| \rightarrow 0$.
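The two limits are easy to evaluate numerically (a sketch; $\hbar\omega$ is inferred from $b = 1.7$ fm assuming the convention $\hbar\omega = (\hbar c)^2/(m_N c^2 b^2)$ with the nucleon mass, which reproduces the quoted $\sim$ 35 MeV range):

```python
hbarc = 197.327        # MeV fm
mN = 938.92            # MeV, nucleon mass (assumed convention)
b = 1.7                # fm, oscillator parameter
hw = hbarc**2 / (mN * b**2)          # oscillator energy, ~14.3 MeV

def T_bare(n, l):
    """Diagonal kinetic energy: the |E| -> infinity limit."""
    return (2*n + l - 0.5) * hw / 2

def T_zero(n, l):
    """The |E| -> 0 limit for an edge state."""
    return (n + l - 0.5) * hw / 2

n, l = 5, 0            # the S-wave edge state
print(T_bare(n, l) - T_zero(n, l))   # range n*hw/2, ~35 MeV
```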
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=9cm]{ke_e}
\includegraphics[width=9cm]{v_e}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=9cm]{n3lo_e}
\end{center}
\caption{Contributions to $H^{eff}$ with explicit energy dependence, for $P$ defined
by $\Lambda_P=8$ and $b=1.7$ fm.
The upper left panel shows the diagonal ``bare" kinetic energy term $\langle \alpha | T |
\widetilde{\alpha} \rangle$ for the edge states $| \alpha \rangle = | n=5~ l=0 \rangle$,
$| n=4 ~l=1 \rangle$, and $| n=4~ l=2 \rangle$. The dots indicate the limiting values
for very large and very small binding energies. The kinetic energy plotted is dimensionless,
given in terms of $\hbar \omega/2$. The lower left panel gives the matrix elements
of the bare potential $V$ between $^3S_1$ edge states, as a function of $E$.
The upper right panel shows the evolution of the
quantities $a_{LO}^\prime(E;n',l',n,l)/a_{LO}(n',l',n,l)$, $a_{NLO}^\prime(E;n',l',n,l)/
a_{NLO}(n',l',n,l)$, etc., through N$^3$LO for the diagonal matrix element with $| n=5~ l=0 \rangle.$
The general softening of such matrix elements is apparent, for small binding energy -- repeated
scattering by $T$ through high-energy oscillator states in $Q$
spreads the wave function and thus reduces the effects of the strong
potential at short range. This effect is carried by the edge states, because their renormalization
is affected by the missing long-range physics. See the text for further discussion.}
\label{fig_edep}
\end{minipage}
\end{figure}
The second $\kappa$-dependent term, the ``bare" potential energy
$\langle \widetilde{\alpha}(\kappa) |V|\widetilde{\beta}(\kappa) \rangle$,
is displayed over the same range in the lower left
panel of Fig. \ref{fig_edep} for the five $^3S_1-{}^3S_1$ edge-state matrix elements.
These matrix elements are again quite sensitive to $|E|$, varying by 2-3 MeV over the
20 MeV range displayed in the figure.
The third $\kappa$-dependence is the renormalization of the contact-gradient
coefficients for edge states,
\begin{equation}
\langle n^\prime ~l^\prime | {E \over E-TQ} \bar{O} {E \over E-QT} | n~l\rangle =
\sum_{i,j=0} \widetilde{g}_j(-\kappa^2;n',l') \widetilde{g}_i(-\kappa^2;n,l)~\langle n^\prime+j~ l^\prime |~ \bar{O}~ | n+i~l \rangle
\label{ren}
\end{equation}
Here $\bar{O}$ is fixed, while the explicit energy dependence carried by the
$\widetilde{g}_i$ (i.e., the effects of the interplay between $QT$ and $QV$) is evaluated.
The upper right panel in Fig. \ref{fig_edep} gives the result for the diagonal
edge-state matrix element, $| n^\prime~l^\prime \rangle=|n~l \rangle = |5~0 \rangle$.
As has been seen in other cases, the reduction due to the $QT-QV$ interplay is substantial
throughout the illustrated 20 MeV range. Thus the large effects
observed for the deuteron, a relatively weakly bound state, are in fact generic.
But weakly bound states are more strongly affected, with the differences between the
corrections for the double-edge states changing by a factor of nearly two between
$|E|$=20 MeV and $|E| \sim 0$ MeV.
The results for
single-edge-state matrix elements are similar, but the
changes are smaller by a factor of two.
In doing these calculations, some care is needed in going to the limit of very small binding energies. One can show for edge states
\begin{equation}
\widetilde{g}_i(-\kappa^2;n,l) \stackrel{\mathrm{small}~\kappa}{\longrightarrow} (-1)^i
\left[ {\Gamma(n+l+1/2) (n-1+i)! \over \Gamma(n+l+1/2+i) (n-1)! }\right]^{1/2}
\end{equation}
If one uses this in Eq. (\ref{ren}) with $\kappa \equiv 0$, one finds that
\[ {\sum_{i=0} \widetilde{g}_i(0;n,l) \langle \vec{r}=0~ |~n+i~l \rangle \over \langle \vec{r}=0~ |~n~l \rangle} \]
oscillates (for an edge state) between 0 and 1, with every increment in $i$. However,
a nonzero $\kappa^2$ acts as a convergence factor. If it is quite small, but
not zero, the ratio then goes smoothly to 1/2. Consequently, as Fig. \ref{fig_edep} shows,
$a_{LO}^\prime/a_{LO} \rightarrow$ 1/4 in the limit of small, but nonzero $\kappa$.
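This behavior can be checked numerically (a sketch; it assumes the standard 3D oscillator radial wave functions, for which $\langle \vec{r}=0\,|\,n~l=0 \rangle \propto [\Gamma(n+1/2)/(n-1)!]^{1/2}$, so that each term in the sum reduces to $(-1)^i$; a factor $t^i$ with $t \rightarrow 1^-$ stands in for the small-but-nonzero-$\kappa$ convergence factor):

```python
from math import lgamma, exp

def g_tilde0(i, n, l):
    """Small-kappa limit of g_i for an edge state (formula above)."""
    sign = -1.0 if i % 2 else 1.0
    return sign * exp(0.5 * (lgamma(n + l + 0.5) + lgamma(n + i)
                             - lgamma(n + l + 0.5 + i) - lgamma(n)))

def psi0(n):
    """|<r=0 | n, l=0>| up to an n-independent constant (standard HO radial w.f.)."""
    return exp(0.5 * (lgamma(n + 0.5) - lgamma(n)))

n = 5                                  # the S-wave edge state
terms = [g_tilde0(i, n, 0) * psi0(n + i) / psi0(n) for i in range(2000)]
print([round(sum(terms[:k+1])) for k in range(6)])        # partial sums oscillate 1,0,1,0,...
t = 0.99                               # convergence factor, t -> 1
print(sum(term * t**i for i, term in enumerate(terms)))   # smoothly approaches 1/2
```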
The effects illustrated in Fig. \ref{fig_edep} -- the three effects explicitly governed
by $\kappa$ -- are associated with the coupling between $P$ and $Q$ generated
by $T$. Because this operator connects states with $\Delta n = \pm 1$,
there is no large energy scale associated with excitations.
As the effects are encoded into a subset of the matrix elements,
the overall impact of the $\kappa$ dependence on spectral properties is, at this
point, still not obvious.
This leaves us with one remaining term that, qualitatively, seems quite different,
\begin{equation}
V {1 \over E-QH} QV \leftrightarrow \{a_{LO}(|E|),~ a_{NLO}(|E|),~ a_{NNLO}(|E|),~a_{N^3LO}(|E|),...\}.
\label{constante}
\end{equation}
Here the energy dependence is implicit, encoded in the parameters fitted to the
lowest energy matrix elements of $H^{eff}$. The underlying potentials are dominated
by strong, short-ranged potentials, much larger than nuclear binding energies. Thus
the implicit ratio governing this energy dependence -- $|E|$ vs. the strength of
the hard-core potential -- is a small parameter. For this reason one
anticipates that the resulting energy dependence might be gentler than in the
cases just explored.
After repeating the fitting procedure over a range of energies, one obtains the results
shown in Fig. \ref{fig_aofe}. Because the energy variation is quite small, results are
provided only for the channels that contribute in low order,
$^1S_0$, $^3S_1$, $^1P_1$ and $^3P_J$. The variation is very modest
and regular, varying inversely with $|E|$ and
well fit by the assumption (motivated by the form of $V (E-QH)^{-1} QV$)
\[ a(E) = {a(10~\mathrm{MeV}) \over 1 + \alpha |E|}. \]
The variation is typically at the level of a few percent, over 20 MeV.
The progression in the slopes within each channel, order by order, corresponds to
expectations: the lowest order terms, which account for the hardest part of the scattering
in $Q$, have the weakest dependence on $|E|$. Comparisons between channels also
reflect expectations. In the earlier discussion of naturalness, the
rapid convergence in the $^1S_0$ channel, order by order, was consistent with
very short range interactions in $Q$. Accordingly, $a_{LO}^{1S0}$
varies by just 0.72\% over a 10 MeV interval, and $a_{NLO}^{1S0}$ by 1.10\%.
This channel contrasts with the $^3S_1$ channel, where convergence
in the contact-gradient expansion is slower, consistent with somewhat longer range
interactions in $Q$. For the $^3S_1$ case one finds 2.64\% variations in $a_{LO}(^3S_1)$ and
5.17\% variations in $a_{NLO}(^3S_1)$ per 10 MeV interval.
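Because $a(10~\mathrm{MeV})/a(E)$ is linear in $|E|$, extracting $\alpha$ is an ordinary linear least-squares problem. The sketch below uses synthetic placeholder numbers (not the actual HOBET couplings) purely to illustrate the fitting form:

```python
# Sketch of the fitting form a(E) = a0 / (1 + alpha*|E|), where a0 is
# the normalization quoted at the reference energy. The coefficient
# values below are synthetic placeholders, not the actual HOBET
# couplings; the point is that a0/a(E) is linear in |E|, so alpha
# follows from a straight-line fit.
import numpy as np

alpha_true, a0 = 0.003, -5.2             # hypothetical parameters
E = np.linspace(0.0, 20.0, 9)            # |E| grid in MeV
a_of_E = a0 / (1.0 + alpha_true * E)     # synthetic "fitted couplings"

# a0/a(E) = 1 + alpha*|E|  ->  slope recovers alpha, intercept is 1
slope, intercept = np.polyfit(E, a0 / a_of_E, 1)
print(slope, intercept)
```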
Are such variations of any numerical significance, compared to the explicit
variations isolated in $\kappa$? That is, if one were to determine
a HOBET interaction directly from bound-state properties of light nuclei, would
the neglect of this implicit energy dependence lead to significant errors in binding
energies? One can envision doing such a fit over bound-state data spanning
$\sim$ 10 MeV, finding the couplings as a function of $|E|$, so that the error induced by
using average energy-independent couplings $a_{LO}(|\bar{E}|)$ can be
assessed. These errors would reflect variations in the matrix
elements to which these couplings are fit, following the procedures previously described.
Such a study showed that only two channels exhibited
drifts $\Delta$ in excess of 15 keV over 10 MeV,
\[ a_{LO}^{1S0}: \Delta \sim \pm 21 \mathrm{~keV}~~~a_{LO}^{3S1}: \Delta \sim \pm 148
\mathrm{~keV}~~~
a_{NLO}^{3S1}: \Delta \sim \pm 32 \mathrm{~keV} \]
One concludes that the $^3S_1$ channel is, by a large factor, the dominant source of
implicit energy dependence in the HOBET interaction.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=9cm]{strong_s_e}
\end{center}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=9cm]{strong_p_e}
\end{center}
\end{minipage}
\caption{The calculated energy dependence of derived coefficients for the
contact-gradient expansion are indicated by the markers, for the various $S-S$
and $P-P$ channels. Over a 10 MeV interval typical of bound-state nuclear
spectra, variations are typically at the few percent level. The continuous lines represent
simple linear fits, $a(10~\mathrm{MeV})/a(E) = 1 + \alpha |E|$, to the results. The fit
is generally excellent.}
\label{fig_aofe}
\end{figure}
This allows one to do a more quantitative calculation that focuses on the most difficult
channel ($^3S_1$) and compares the relative sizes of the $\kappa$-dependent and
implicit energy dependences, as reflected in changes in the matrix $H^{eff}(|E|)$.
Thus this matrix is constructed at
$|E|=10$ MeV and at $|E| \sim 0$ MeV (including the coupling to $^3D_1$), and
changes in global quantities
of that matrix over 10 MeV are
examined: shifts in the first moment (the average eigenvalue), the RMS
shifts of levels relative to the first moment (related to the stability of level splittings),
and wave function overlaps. The four energy-dependent effects discussed here are
separately turned on and off.
Thus this exercise should provide a good test of the relative
importance of these effects. The results are shown in Table \ref{table:3}.
\begin{table}
\begin{center}
\caption{Spectral property variations in $H^{eff}(E)$ over 10 MeV}
\label{table:3}
\begin{tabular}{|l|c|c|c|c|}
\hline
Term & Parameter & 1st Moment Shift (MeV) & RMS Level Variation (MeV) & Wave Function Overlaps \\
\hline \hline
$\langle \alpha |T | \widetilde{\beta} \rangle$ & $\kappa$ & 2.554 & 1.107 & 95.75-99.74\% \\
$\langle \widetilde{\alpha} | V | \widetilde{\beta} \rangle$ & $\kappa$ & 0.272 & 0.901 & 99.35-99.82\% \\
$\langle \widetilde{\alpha} | \bar{O} | \widetilde{\beta} \rangle$ & $\kappa$ & -0.239 & 0.957 & 99.51-99.99\% \\
$\langle \alpha | \bar{O}(E) | \beta \rangle$ & implicit & 0.135 & 0.107 & 99.95-100\% \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
Despite the selection of the worst channel, $^3S_1$,
the implicit energy dependence is small, both intrinsically and in comparison
with the explicit energy dependence embedded in $\kappa$. The implicit
dependence in the first moment -- a quantity important to absolute binding energies -- is
5\% that of the explicit dependence in $\langle \alpha |T | \widetilde{\beta} \rangle$. The
RMS shifts in levels relative to the first moment are at the $\sim$ 1 MeV level for each of
the explicit ($\kappa$-dependent) terms, but $\sim$ 100 keV for the implicit term. Wave function overlaps show
almost no dependence on the implicit term,
exceeding 99.95\% in all cases: variations 10-100 times larger arise
from the analytical terms in $\kappa$.
Thus a simple representation of the HOBET effective interaction exists:
\begin{itemize}
\item The requirements for a state of fixed $|E|$ are a series of short-range
coefficients and a single parameter $\kappa$ that governs long-range corrections
residing in $Q$,
including certain terms that couple $QV$ and $QT$. By various measures explored here, an
N$^3$LO expansion is accurate to about a few keV.
\item The $\kappa$ dependence found for a state of definite energy $|E|$
also captures almost all of the energy
dependence resulting from varying $|E|$, the state-dependence in BH.
Even in the most troublesome channel, calculations show that $\sim$ 95\%
of the energy dependence associated with changes in $|E|$ is explicit.
It appears that neglect of the implicit energy dependence would induce errors of $\lesssim 100$ keV,
for a spectrum spanning $\pm$ 10 MeV. This kind of error would be within the uncertainties
of the best {\it ab initio} excited-state methods for light (p-shell) nuclei,
such as Green's function Monte Carlo \cite{argonne}
or large-basis no-core SM diagonalizations \cite{nocore}.
\item If better results are desired, the program described here can be extended to include
the implicit energy dependence. The expansion around an average energy $E_0$
\[ V{1 \over E-QH} QV = V \left[ {1 \over E_0-QH} - {1 \over E_0-QH} (E-E_0) {1 \over E_0-QH} + \cdots \right] QV \]
generates the correction linear in $E$ that is seen numerically. This second term is
clearly quite small, explicitly suppressed by the ratio of scales discussed above.
But, in any troublesome channel, the second term could be represented by
contact-gradient operators of low order, with the contribution suppressed by an
overall factor of $(E-E_0)$.
\end{itemize}
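The first-order truncation of the resolvent expansion above can be checked numerically on a matrix stand-in for $QH$ (a random symmetric matrix, used purely for illustration, with $E_0$ chosen far from its eigenvalues):

```python
# Numerical sketch of the expansion: (E - QH)^{-1} ~ G0 - (E-E0) G0 G0,
# with G0 = (E0 - QH)^{-1}, valid when |E - E0| is small compared to
# the distance from E0 to the spectrum of QH. "QH" here is a random
# symmetric matrix, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
QH = (A + A.T) / 2                       # hypothetical "QH" stand-in
E0, E = 50.0, 50.5                       # E0 far from eigenvalues of QH

I = np.eye(5)
G0 = np.linalg.inv(E0 * I - QH)
exact = np.linalg.inv(E * I - QH)
first_order = G0 - (E - E0) * (G0 @ G0)  # keep the first two terms

# residual is O((E-E0)^2), much smaller than dropping the linear term
print(np.max(np.abs(exact - first_order)))
```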
\section{Discussion and Summary}
One of the important motivations for trying to formulate an effective theory for nuclei in
a harmonic oscillator basis is the prospect of incorporating into the approach some
of the impressive numerical technology of approaches like the SM.
Numerical techniques could be used to solve significant $P$-space problems,
formulated in spaces, such as completed $N \hbar \omega$ bases, that preserve
the problem's translational invariance.
My collaborators and I made an initial effort to construct a HOBET some years ago, using
a contact-gradient expansion modeled after EFT. We performed a shell-by-shell integration,
in the spirit of
a discrete renormalization group method, but encountered several abnormalities connected
with the running of the coefficients of the expansion.
Subsequent numerical work in which we studied
individual matrix elements revealed the problems illustrated in Fig. \ref{fig_simple}.
These problems -- the difficulty of representing $Q$-space contributions that are
both long range and short range -- are not only important for HOBET, but also are
responsible for the lack of convergence of perturbative expansions of the effective
interaction. Fig. \ref{fig_1} provides one example. In other work \cite{luu} we have shown that
convergent expansions in the bare interaction (deuteron) or g-matrix ($^3$He/$^3$H)
for $H^{eff}$ do exist, if the long-range part of this problem is first solved, as we have done here.
Thus the current paper returns to the problem of constructing a contact-gradient
expansion for the effective interaction, taking into account what has been learned since
the first, less successful effort.
This paper introduces a form for that expansion that eliminates operator mixing,
simplifying the fitting of coefficients and guaranteeing that coefficients determined in
a given order remain fixed when higher-order terms are added. Thus the N$^3$LO
results presented here contain the results of all lower orders. The expansion is
one in nodal quantum numbers, and is directly connected with traditional Talmi
integral expansions, generalized for nonlocal interactions.
Convergence does vary from channel to channel, but in each channel the order-by-order
convergence is very regular. Each new order brings down the
scale $\Lambda$ at which deviations appear, and in each new order the Lepage plot
steepens, showing that the omitted physics does have the expected
dependence on higher-order polynomials in $(n^\prime, n)$.
The channel-by-channel variations in convergence reflect similar
behavior seen in EFT approaches, where the need for alternative power counting
schemes has been noted to
account for this behavior. From a practical standpoint, however,
the N$^3$LO results are effectively exact: in the most difficult channel, $^3S_1$,
measures of the quality of the matrix $H^{eff}$ yielded results on the order of (1-3) keV.
The summation over $QT$ yields a simple result, but one that is quite remarkable
in that long-range physics is governed by a single parameter $\kappa$, which depends
on the ratio of $|E|$ and $\hbar \omega$. Despite all of the attractive analytic properties
of the HO as a basis for bound states, its unphysical binding at large $r$ has been viewed
as a shortcoming. But the ladder properties of the HO in fact
allow an exact summation of $QT$. It seems unlikely that any other bound-state
basis would allow the coupling of $P$ and $Q$ by $T$ to be exactly removed. That is,
the HO basis may be the only one that allows the long-range physics in $Q$ to be
fully isolated, and thus subtracted systematically. In this sense it may be the optimal basis
for correctly describing the asymptotic behavior of the wave function. Note, in particular,
that the right answer is not going to result from using ``improved" single-particle
bases: $\kappa$ depends on $|E|$, not on single-particle energies of some mean field.
The effects associated with $\kappa$ are large, typically shifting edge-state matrix elements
by several MeV, and altering spectral measures, like the first energy moment of $H^{eff}$,
by similar amounts. This dependence, if not isolated, destroys the
systematic order-by-order improvement important to HOBET, as Fig. \ref{fig_simple}
clearly illustrates.
The explicit energy dependence captured in $\kappa$ accounts for almost
all of the energy dependence of $H^{eff}(|E|)$. In more complicated calculations this
dependence, in the
BH formulation used here, generates the state-dependence that allows ET wave functions
to have the proper relationship to the exact wave functions, namely that the former are the
$P$-space restrictions of the latter.
While in principle additional energy dependence important to this
evolution resides in
$V (E-QH)^{-1} QV$ and thus in the coefficients of the contact-gradient expansion, in
practice this residual implicit energy dependence was found to be very weak. This
dependence was examined channel by channel, and its impact on global properties of
$H^{eff}(|E|)$ was determined for the most troublesome channel, $^3S_1$. Even in this channel, the
impact of the remaining implicit energy dependence on $H^{eff}(|E|)$ spectral properties such
as the
first moment, eigenvalue spacing, and wave function overlaps, was found to
be quite small compared to the explicit dependence isolated in $\kappa$.
This is physically very reasonable:
$QT$ generates nearest-shell couplings between $P$ and $Q$, so that excitation scales
are comparable to typical nuclear binding energies. Thus this physics, extracted
and expressed as a function of $\kappa$, should be sensitive to binding energies.
In contrast, $V (E-QH)^{-1} QV$
involves large scales associated with the hard core, and thus should be relatively insensitive
to variations in $|E|$. In the $^3S_1$ channel, the explicit dependence
captured in $\kappa$ is about 20 times larger than the implicit energy dependence buried
in the contact-gradient coefficients. Numerically, the latter could cause drifts on the
order of 100 keV over 10 MeV intervals. Thus, to an excellent approximation, one could
treat these coefficients as constants in fitting the properties of low-lying spectra.
Alternatively, the HOBET procedure for accounting for this implicit energy dependence
has been described, and could be used in any troublesome channel.
The weakness of the implicit energy dependence will certainly
simplify future HOBET efforts to determine
$H^{eff}(E,b,\Lambda_P)$ directly from data (rather than from an NN potential like $v_{18}$).
Indeed, such an effort will be the next step in the program. The
approach outlined here is an attractive starting point, as it can be shown that
the states $|\widetilde{\alpha}(\kappa) \rangle$ become asymptotic plane-wave states, when
$E$ is positive. Thus the formalism relates bound and continuum states through a
common set of strong-interaction coefficients operating in a finite orthogonal basis.
The relationship between current work and some
more traditional treatments of the $H^{eff}$ for model-based approaches, like the SM,
should be mentioned.
Efforts like those of Kuo and Brown are often based on the division $H=H_0+(V-V_0)$,
where $H_0$ is the HO Hamiltonian \cite{kuobrown}. Such a division would allow the same BH
reorganization done here: $QT$ and $Q(H_0-V_0)$ are clearly equivalent. But in
practice terms are, instead, organized in perturbation theory according to $H_0$, i.e., so that
Green's functions involve single-particle energies. This would co-mingle the long-
and short-range physics in a very complicated way. In addition, often the definitions of $Q$
and $P$ used in numerical calculations are not those of the HO: instead, a plane-wave
momentum cut is often used, which simplifies the calculations but introduces
uncontrolled errors. Either this approximation (plane waves
are diagonal in $T$) or the use of perturbation theory (because of the co-mingling) would
appear to make it impossible to separate long- and short-range physics correctly, as has
been done here.
Another example is $V_{low~k}$, in which a softer two-nucleon potential is derived
by integration over high-momentum states \cite{achim}. This is a simpler description of $Q$ than
arises in bound-state basis problems, like those considered here: the division between
$P$ and $Q$ is a specified momentum, and $T$ is diagonal. There would be no
analog of the $\kappa$-dependence
found in HOBET. However, HOBET and $V_{low~k}$ may have an interesting relationship.
Effective operators for HOBET and for EFT approaches (which also employ a momentum
cutoff) agree in lowest contributing order. When there are differences in higher order, it would
seem that these differences must vanish by taking the appropriate limit, namely
the limit of the HOBET $Q(b,\Lambda_P)$ where $b \rightarrow \infty$
while $\Lambda_P/b$ is kept fixed. This keeps the average $\langle p^2 \rangle$ of the
last included shell fixed, while forcing the number of shells to infinity and the shell splitting to zero.
Numerically it would be sufficient to approach this limit, so that $Q$ resembles the plane-wave
limit over a distance characteristic of the nuclear size.
It is a reasonable conjecture that $V_{low~k}$ would emerge from such
a limit of the HOBET $H^{eff}(b,\Lambda_P)$. It would follow that all $\kappa$
dependence should vanish in that limit. It would be interesting to try to verify these
conjectures in future work, and to study the evolution of the HOBET effective interaction
coefficients as this limit is taken.
The state-dependence of effective interactions is sometimes treated
in nuclear physics by the method of Lee and Suzuki. One form of Lee-Suzuki
produces a Hermitian energy-independent interaction. While it is always possible to
find such an $H$ to reproduce eigenvalues, it is clear that basic wave function requirements of an
effective theory -- that the included wave functions correspond to restrictions of the
true wave functions to $P$ -- are not consistent with such an $H$.
Another form produces
an energy-independent but non-Hermitian $H$. This can be done consistently in an
effective theory.
However, the results presented here make it difficult to motivate such a transformation.
It appears that the state-dependence is almost entirely attributable to the interplay between
$QT$ and $QV$, removed here analytically in terms of a function
of one parameter $\kappa$, which relates the bound-state momentum scale (in $\hbar \omega$)
to a state's energy. There is no obvious benefit in obscuring this simple dependence
in a numerical transformation of the potential, given that the Lee-Suzuki method is not
easy to implement. The physics is far more transparent in the BH formulation, and the
self-consistency required in BH makes the use of an energy-dependent potential as
easy as an energy-independent one. More to the point, the necessary $\kappa$ dependence
is already encoded in the potential for a single state of definite energy $|E|$ -- thus
no additional complexity is posed by the state-dependence of the potential.
I thank Tom Luu and Martin Savage for helpful discussions. This work was supported in
part by the Office of Nuclear Physics and SciDAC, US Department of Energy.
\section{References}
\section{Introduction}
At present, Earth hosts $235$ countries (sovereign states; provinces are
disregarded), containing a few million cities in total, in which
more than 6 billion humans live, speaking about $7000$ languages. All of
this life takes place over an area of some $510$
million km$^2$. Figure 1 is plotted utilizing the present empirical
data \cite{six} for the relative population of the countries (divided
by the world population, thick line) in rank order; i.e., the most
populous country (China) has the rank one, the next populous country
(India) has the rank two, and the third populous country (USA) has
the rank three, etc., where the corresponding relative area of the
countries (divided by the area of the Earth) is designated by the
plot in the thin line. The area plot depicts fluctuation; yet, it
roughly follows the population line. The inset (Figure 1) is for the
population density in capita per km$^2$, which shows that the
population density is almost constant (about $18.5$ capita per
km$^2$) over the countries.
Our essential aim in the present contribution is to show that the time
evolution of the world with her cities, languages and countries,
etc., might be governed by two opposite processes: random
multiplicative noise for growth in size and fragmentation for spread
in number and extinction. Secondly, we aim at obtaining a wide
panorama for the world (cities, languages, countries and their
distributions, lifetimes, etc.) in terms of a single simulation,
where the related results are obtained at the same time. The model is
developed in \cite{three}, where the cities and the languages are treated
differently and as connected; languages split since cities split,
etc. (For a quantitative method for the formation of the languages,
please see references in \cite{three}.) Results for the size
distribution functions, the probability distribution functions (PDF)
and various other functions for both the cities and the languages
are found to be in good agreement with the empirical data. Yet, the
results for the language families (in \cite{three}) are considerably
far from reality.
In the present work, our focus is on the families; here the size
distributions of the families for both the cities (countries) and
the languages are given besides their distributions over the number
of their members, etc. In \cite{three}, the number of the language
families was not changing in time and the city families were not
considered at all. Here, both the offspring cities and the languages
may create new families; in other words, the current families may
fragment as explained in Sec. 3.3. Secondly, we apply random
punctuation for the cities and random change of the languages, which
was not followed in \cite{three}. We now also study bilinguals.\cite{seven}
Thus, we present here a richer panorama of the world,
where all of the results are obtained in the same simulation, at
the same time for many parameters.
The following section is the model, and the next one is the applications and
results. The last section is devoted to discussion and conclusion.
Appendix is a brief description of the model, which is given
extensively in \cite{three}.
\section{Model:}
This section is the definition (2.1.) and a brief review (2.2.-2.4.)
of the model, where also introduced are the meaning of the relevant
concepts and the parameters, with the symbols in capital letters for
the cities and those in lower case for the languages. The subscript
fam is used for the families.
The initial world has $M(0)$ ancestors for the cities $(I)$ and $m(0)$ ancestors for the languages
$(i)$, with $M(0)\neq m(0)$. Each city has a random size $(P_I(0))$
and she speaks one of the initial languages, which is selected
randomly. So, $P_I(0)$ is the population of each ancestor city, and
$p_i(0)$ is the number of people speaking each ancestor language.
(It is clear that the total number of the citizens and the speakers
is the same and it is equal to the initial world population.)
\subsection{Definition:}
Populations of the cities grow in time $t$, with a random rate $R_I \le R$,
where $R$ is universal within a random multiplicative noise process,
$$P_I(t) = (1 + R_I)P_I(t - 1) . \eqno(1)$$
As the initial cities grow in population the initial languages grow
in size $(p_i(t))$, where the cities (and consequently, the
languages) fragment in the meantime. If a random number (between 0
and 1; defined differently at each time step $t$)
for a city is larger than some
$G$ close to $1$, then the city becomes extinct (random elimination,
punctuation); otherwise, if it is smaller than some small $H$, the
city splits after growing, with the splitting ratio (fragmentation,
mutation factor) $S$: If the current number of habitants of a city
$I$ is $P_I(t)$, $SP_I(t)$ many members form another population
and $(1-S)P_I(t)$ many survive within the same city. The number of
the cities $M(t)$ increases by one if one city splits; if any two of
them split at $t$, then $M(t)$ increases by two, etc. When a city is
generated she speaks with probability $h_f$ a new language, with
probability $h_s$ a randomly selected current language, and with the
remaining probability $1 - h_f - h_s$ the old language (of the
mother city).
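A minimal sketch of these dynamics, using the parameter names above. The numerical values of $R$, $G$, $H$ and $S$ follow the Set-1 style numbers quoted in Sect. 3.1; the uniform draw for $R_I$ and the values of $h_f$ and $h_s$ are illustrative assumptions:

```python
# Minimal sketch of the Sec. 2.1 dynamics: random multiplicative
# growth (rate R_I <= R), abrupt elimination (random number > G),
# fragmentation with ratio S (random number < H), and language
# inheritance with probabilities h_f / h_s / (1 - h_f - h_s).
import random

random.seed(1)
R, G, H, S = 0.0075, 0.9992, 0.006, 0.5   # Set-1 style parameters
h_f, h_s = 0.01, 0.01                     # hypothetical language-split odds

# 1000 ancestor cities with random sizes, each speaking its own language
cities = [[random.uniform(1, 1000), lang] for lang in range(1000)]
n_langs = 1000

for t in range(300):                      # shortened run, for illustration
    survivors = []
    for pop, lang in cities:
        r = random.random()
        if r > G:                         # abrupt elimination (punctuation)
            continue
        pop *= 1 + random.uniform(0, R)   # random growth, assuming R_I uniform
        if r < H:                         # fragmentation with splitting ratio S
            child_pop = S * pop
            pop -= child_pop
            u = random.random()
            if u < h_f:                   # offspring speaks a brand-new language
                child_lang, n_langs = n_langs, n_langs + 1
            elif u < h_f + h_s:           # ... or a random current language
                child_lang = random.choice(cities)[1]
            else:                         # ... or the mother city's language
                child_lang = lang
            if child_pop >= 1:
                survivors.append([child_pop, child_lang])
        if pop >= 1:                      # extinction below unit size
            survivors.append([pop, lang])
    cities = survivors

print(len(cities), int(sum(pop for pop, _ in cities)))
```

With these parameters the fragmentation term dominates the light punctuation, so both the number of cities and the total population grow roughly exponentially, as described in Sect. 3.1.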
\subsection{Lifetimes for cities or languages:}
Lifetime is the difference between the number of the time step at
which a city or a language is generated and that one at which the
given agent became extinct. The agent becomes extinct if its
size becomes less than unity in terms of fragmentation or if it
is randomly eliminated (with $G<1$). If all the cities which were
speaking a given language are eliminated (by any means), then we
consider the given language(s) as eliminated. And, if all the
members of a family become extinct (by any means), then we consider
the family as extinct. The age of a living agent (at the present) is
considered as the time passed from the time of their formation up to
now.
\subsection{Family trees for the cities or the languages:}
We construct the family trees for the cities and the languages as follows:
We assume that the initial cities and the initial languages have
different families; i.e., we have $F(0)$ many city families and
$f(0)$ many language families at t=0. We label each city by these
numbers, i.e. the city family number and the language family
number, which may not be the same later (for example, due to long
and mass immigration, as in reality). In this manner, we are
able to compute the number of the members of each family, as well as
their sizes at the present time, etc. (The given labels may be also
utilized to trace the generation level of the offspring agents.) It
is obvious that the unification (merging) of the cities or the
languages are kept out of the present scope.
\subsection{ Bilinguals:}
Some citizens of a given city (country) may select another language
(other than the common or official language of the home city, home
country, i.e., the mother tongue) to speak, where several reasons
may be decisive. We consider here the size distribution of the
second languages (bilinguals \cite{seven}), where we assume that an adult
(speaking a language $i$ as a mother tongue) selects one of the current
languages if this language $k$ is bigger than the mother language,
$p_k > p_i$. Then
$$p'(t)_k \propto p(t)_i(p(t)_k - p(t)_i)\lambda r'_i , \eqno(2)$$
where, $p(t)_i < p(t)_k$ and the prime denotes the second
language. In Eq. (2) $r'_i$ is a random number which is uniformly
distributed between zero and one. So; $0\le \lambda r'_i
<\lambda$ for a given $\lambda$, which is proportional to the
percentage (up to randomness) of the population of the language
$i$ the speakers of which select $k$ as the second language, and
$\lambda$ is taken as universal. It is obvious that, $\lambda$ has the
unit per capita (person) and as the size difference $(p(t)_k -
p(t)_i)$ increases, the language $k$ becomes more favorite and the
related percentage $((p(t)_k - p(t)_i)\lambda r'_i)$ increases.
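A sketch of this selection rule follows; the language sizes and the value of $\lambda$ below are illustrative placeholders, and since Eq. (2) fixes only a proportionality, the function is taken as an equality for illustration:

```python
# Sketch of the bilingual rule of Eq. (2): speakers of language i adopt
# a larger language k as a second language in proportion to
# p_i * (p_k - p_i) * lambda * r', with r' uniform in [0, 1).
# Sizes and lambda are illustrative placeholders.
import random

random.seed(2)
lam = 1e-10                      # per-capita constant, hypothetical value
p = {"i": 2e8, "k": 1e9}         # mother-tongue sizes (speakers)

def second_language_speakers(p_i, p_k, lam):
    """Number of i-speakers adopting k as a second language (needs p_k > p_i)."""
    if p_k <= p_i:
        return 0.0
    return p_i * (p_k - p_i) * lam * random.random()

print(second_language_speakers(p["i"], p["k"], lam))
```

As the size gap $p_k - p_i$ grows, the adopted percentage grows with it, reproducing the "more favorite" behavior described above.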
\section{Applications and results:}
The parameters for the rates of growth (Sect. 2, with the symbols in
capital letters for the cities and those in lower case for the
languages) have units involving time: here, the number of the
interaction tours (time steps) may be chosen arbitrarily (without following
historical time, since we do not have historical data to match
with); and the parameters (with units) may be refined accordingly.
Yet, our initial conditions (with the given initial parameters for
ancestors) may be considered as corresponding to some $10,000$ years
ago from now. So the unit for our time steps may be taken as (about)
$5$ years, since we consider $2,000$ time steps for the evolution.
After some period of the evolution in time we (reaching the present)
stop the computation and calculate PDF for size, and for some other
functions such as extinction frequency, lifetime, etc. (for the
cities or the languages and their families, etc.).
Empirical criteria for our results are: i) The number of the living
cities (towns, villages, etc.) and that of the living languages may
be different; but, total size for the present time must be the same
for both cases (and also for the families of the cities or the
languages), where the mentioned size is the world population (Eqn.
(2)). ii) World population increases exponentially with
time.\cite{four,five,six} iii) At present, the biggest language (Mandarin
Chinese) is used by about $1.025$ billion people and world
population (as, the prediction made by United Nations) is $6.5$
billion in $2005$, (and will be about $10$ billion in $2050$)
\cite{four,five,six}; so the ratio of the size for the biggest language
to (the total size, i.e.,) world population must be (about) $1:6.5$.
iv) Size distribution for the present time must be a power law with exponent $-1$ for
the cities (Pareto-Zipf law), and may be considered a slightly
asymmetric log-normal for the languages. We first consider the
cities (Sect. 3.1), later we study the languages (Sect. 3.2), with
the lifetimes, etc., in all; and the families are considered finally
(Sect. 3.3).
\subsection{Cities:}
The initial world population ($W(0)$) is about $M(0)P_I(0)/2$, since
the average of uniform random numbers between zero and unity is 1/2.
Thus, we assume power law zero for the initial distribution of the
cities or the languages over size.
We tried many smooth (Gauss, exponential, etc.) initial distributions
(not shown); and, all of them underwent similar time evolutions
within 2,000 time steps, under the present processes of the random
multiplication for growth, and random fragmentation for spread and
origination and extinction, where we utilized also various
combinations of the parameters $H$ and $G$. We tried also delta
distribution, which is equivalent to assuming a single ancestor, for
the initial case; it also evolved into a power law about --1 (with
different set of parameters, not shown) in time. Since we do not
have real data for the initial time, we tried several parameters for
$M(0)$ (=1,000; 300; 50, etc.) and for $P_I(0)$ (=1,000; 500) at
$t=0$. In all of them, it is observed that the city distribution at
present (Pareto-Zipf law) is independent of the initial (probable)
distributions, disregarding some extraordinary ones. Please note
that, similar results may be obtained (not shown) for $M(0)=1$,
i.e., single ancestor.
{\it Evolution:} As $t$ increases, the cities start
to be organized; and within about 200 time steps, we obtain a picture
of the simulated world similar to the present one, where
the distribution of the cities over population is considerably far
from randomness. With time, the number of the cities ($M(t)$) and
the population of the world ($W(t)$) increase exponentially with
different exponents.\cite{three} Please note that these simulations
have about two
million cities and the world population comes out as about 4.5
billion at $t=2000$ (present time, the year 2000), for $M(0)=1000$,
$P(0)\le 1000$, $R=0.0075$, $H=0.006$, $G=0.9992$ and $S=0.5$.
With another set of the parameters; for $R=0.0073$, $H=0.004$ and
$G=1$ (keeping other parameters same as before) we have about 450
thousand ($M(2000)$) cities with $6.7$ billion total citizens
($W(2000)$), etc.
In Figure 2 the circles (open for $t=320$ and solid
for $t=2000$) represent the time evolution of the size distribution (PDF) of
the cities, all of which split and grow with the same parameters
(Set 1), where $G=0.9992$. Thus the cities here undergo abrupt (punctuated)
elimination, which is not considered in
\cite{three}; yet the results are not much different, because the
punctuation applied here is light (low): it is
not strong enough to disturb the running processes, and its
negative effect (decreasing the numbers) is offset by the positive
effect of the fragmentation (which increases the numbers). Note
that in Fig. 2 the dashed
arrow has slope $-1$, which indicates the (empirical) Pareto-Zipf
law for the cities.
Furthermore, we observe that, as the initial
cities multiply by fragmentation, the initial random
distribution becomes log-normal at intermediate times (as
the parabolic fit indicates, for $t=320$ for example) and develops
a power-law tail with exponent $-1$ (for big sizes) at the present time.
The right inset in Fig. 2 shows the distribution of the world
population ($P$) at $t=320$ (dashed line) and $t=2000$ (solid line)
over the cities ($C$), which are rank-ordered along the horizontal
axis. Note that, in the figure and in the inset, both axes are
logarithmic. The plots in the inset (Fig. 2,
right) show that the world population (vertical axis, $P$)
increases slightly more rapidly than the number of the cities (horizontal
axis, $C$). So Fig. 2 may be considered a
summary of the history of the evolution of the cities (or the
languages, see Section 3.2), where two opposite random processes
underlie the evolution: the multiplicative noise and the
fragmentation.
{\it Lifetimes:} We obtain the time distributions of the cities (lifetimes
of the extinct cities and ages of the living ones, not shown here) as
decreasing exponentials (disregarding the cases of small numbers of
ancestors and high punctuation), as given in the related figures in
\cite{three}. The probability (density) functions of the
lifetimes are also exponential (not shown), which means that the
cities populate the time-distribution plots in exponential order: more
cities at small $t$ and fewer cities at big $t$, for a given total number
of time steps.
\subsection{Languages:}
We guess that at the very beginning there were many simple languages
(composed of a few simple words and rules), spoken by numerous
small human groups (families, tribes, etc.).
As people came together in towns, these primary languages might
have united. Yet, we find that the initial configuration is not (very)
relevant for the present size distribution of the languages (as
in the case of the cities; see Sect. 3.1). Moreover, we may
obtain similar target configurations for different evolution
parameters (not shown). Within the present approach, the ancestor
cities and the ancestor languages are associated randomly, since
the languages, with their words, grammatical rules, etc., might have
been formed randomly (\cite{three}, and references therein), the
societies grew and fragmented randomly (as mentioned in Sect. 2), and
new cities randomly formed new languages or changed their language,
selecting a new one randomly. We find that the index $i$
(roughly) decreases as $I$ increases, for small $I$ (not shown). We
also compute the distribution of the present languages over the
present cities, which follows a power law with exponent $-1$ (not shown). It
is worth remarking that younger cities prefer younger
languages; that is, the new cities (or the new
countries which are composed of the new cities) emerge mostly with
new languages. Secondly, as $t$ increases the indices $I$ and $i$
increase, and the plot of $I$ versus $i$ extends upward and
moves rightward, since the number of the current languages ($m(t)$)
and the number of cities speaking a given language both increase (as
a result of the fragmentation of the cities).
Furthermore, we
compute the number (abundance) of speakers of the present
languages ($p_i(t)$ in Eq. (2)) (not shown), where we have a few
thousand ($m(t=2000)=7587$) living languages. Within this
distribution of the present languages over the speakers, we find a
power law with exponent $-1$ (not shown). It is worth remarking that
older languages have more speakers; in reality, (Mandarin)
Chinese, Indian, etc., are big and old languages. For example, we
have about one billion people speaking language number 1, which
is one of the oldest languages of the world, fewer people
speaking language number 2, etc.
In Figure 2 we display the PDFs of the size distributions of the
languages at $t=320$ (historical, open squares) and $t=2000$
(present, solid squares), where the number of the ancestor languages
($m(0)$) is 300. We plotted several similar curves for $m(0)=1$,
i.e., for the case where a single ancestor language is spoken in
all ancestor cities, and obtained similar results (not shown).
A splitting rate and a splitting ratio for the languages are not defined
here, since the languages split as a result of the splitting of the cities;
the splitting ratio of a fragmenting language comes out as the
ratio of the population of the new city (which creates a new
language) to the total population of the cities which speak the
fragmented language. Note that in the plots (Fig. 2) for the
languages at the present time (solid squares, $t=2000$) we have a
slightly asymmetric Gaussian for big sizes, as the parabolic fit (dashed
line) indicates, and an enhancement for the small languages,
in agreement with reality \cite{eight}. Fig. 2 may thus be considered
a summary of the history of the evolution of the cities and the
languages.
We think that the (random) elimination of a language (with all of its speakers) is not
realistic (excluding the small languages with small numbers of
speakers), and it is not recorded in history for recent
times. On the other hand, changing (replacing) one language by another
may be realistic. In the case of random (light) elimination
(i.e., changing the language for a current one), the fragmentation
rate may accordingly be increased to reproduce the empirical
number of the languages at present. In other words, the
number of the languages increases through $h_f$ and decreases through $h_s$,
which may be considered punctuation for the languages, with $1 -
h_s< g$. In Fig. 2 (and in other related figures) we utilized $h_f
=0.0013$ and $h_s=0.0002$.
The lifetimes of the languages and the related probability
densities are decreasing exponentials (as for the cities), which
means that many languages (cities) become extinct soon after they emerge,
while the remaining ones live long (not shown), as in reality. See
Fig. 2 in \cite{eight} for the related empirical data. The
(negative) exponent of the present decay is about $0.0007$ per time
step over $2000$ time steps.
\subsection{Families of the cities (countries) or the languages and the bilinguals:}
We obviously do not know how the city (language) families \cite{eightplus} were
distributed over the cities (languages) initially, since we do not
have any historical record on the issue. Yet, we found that
the initial conditions for the cities (languages) are almost
irrelevant for the present results. We considered several
initial conditions for the city families and the language families,
which are discussed in \cite{eight} to some extent in empirical
terms.
We think that the numbers of the city families and the language families were roughly the
same initially (even though the numbers of the cities and the languages
might have been different), and we take $F(0)=30$ and $f(0)=18$ (for
$M(0)=1000$ and $m(0)=300$).
Figures 3 and 4 are for the city families and
the language families, with $H_{\rm fam-f} =0.0005$, $H_{\rm fam-s} =0.0003$
$(G_{\rm fam}=1 - H_{\rm fam-s})$ and $h_{\rm fam-f} =0.0004$, $h_{\rm fam-s} =0.0001 \; (g_{\rm fam}=1 -
h_{\rm fam-s})$, respectively; the other parameters are as before.
Figs. 3 and 4 may be considered in good agreement with the
empirical plots in Fig. 1 (and in refs. \cite{four,five,six}). Note
that the families evolve in time here (with the parameters
given for the related fragmentation in this paragraph), which is not
considered in \cite{three}.
For the bilinguals we assume that a small fraction $\lambda$ of the
citizens selects a second language out of the bigger ones, with a
selection probability that increases with the difference in sizes.
Figure 5 is the PDF of the distribution of the bilinguals over the
relative population (relative to the total), where Eq. (3) is utilized with
the present languages (Figs. 2 and 4) for $\lambda=0.01$. We
observe in Fig. 5 that a few big languages are favored as the second language
by the majority (about 90\%) of the speakers, for the given $\lambda$.
We think that selecting big languages as second languages may help
to increase the sizes of the big languages further.
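The second-language rule can be sketched as follows. Eq. (3) is not reproduced in this excerpt, so the exact selection weight is an assumption here: we take it proportional to the size difference between the candidate language and the speaker's own, so that only bigger languages can be selected, and the biggest are strongly favored.

```python
import random

def pick_second_language(sizes, own_index, rng):
    """Sketch: a speaker of language `own_index` picks a second language
    among the strictly bigger ones, with weight proportional to the size
    difference (an assumed form of Eq. (3), not taken from the paper).
    Returns None if the speaker's language is already the biggest."""
    own = sizes[own_index]
    weights = [max(s - own, 0.0) for s in sizes]  # only bigger languages count
    if sum(weights) == 0.0:
        return None
    return rng.choices(range(len(sizes)), weights=weights)[0]
```

For sizes like $[1000, 100, 10]$ a speaker of the smallest language selects the biggest one with probability $990/1080\approx0.92$, reproducing the observation that a few big languages collect most of the bilinguals.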
\section{Discussion and conclusion:}
Starting with random initial conditions and utilizing many
parameters in two random processes (the multiplicative noise for
growth and the fragmentation for the generation and extinction of the
cities or the languages), we obtained several
regularities (in the size and time distributions, etc.), all of
which may be considered in good agreement with
the empirical data.
We find that the results are (almost)
independent of the initial conditions, disregarding some
extraordinary ones. Furthermore, punctuation (besides fragmentation)
eliminates the ancestors with time. For $G\neq1$ (with $G\approx1$), we
need a longer time to mimic the target configuration (if the other parameters
are kept the same as before), where newly generated cities or
languages may be inserted, in terms of fragmentation, provided
$G_{\rm critical}\le G$ and $1\le G+H$.
Many cities or languages become extinct in their youth, and
fewer become extinct as they grow old. In other words,
languages or cities either become extinct with a short lifetime (soon
after their generation), or they hardly become extinct later and
live long (which may be considered a kind of natural selection).
We consider this result (which is observed in reality
\cite{eight}) an important prediction of the present model; we
had obtained similar results for the evolution of biological species
\cite{nine}, and it may originate from the present random
multiplicative noise and fragmentation processes.
It might be objected by the reader that
the model has many parameters. First, each of them is needed to capture
some measure of the related evolution in reality. Secondly, since we
find that the initial conditions are (almost) irrelevant for the
present results, many parameters (those at $t=0$) may be ignored.
Thirdly, the most important parameter in the model is $R$ (the rate
of population growth), and $H$ is implicitly related to $R$, since
more cities need to be established (per unit time) as the world
population increases. The punctuation parameter $G$ may also be
considered implicitly dependent on $R$, since the
probability of the emergence and spread of wars, illnesses, etc.
increases as the world population increases. The rates for the
languages ($h$, $g$, etc.) may likewise depend (implicitly) on $R$,
since more new languages (per unit time) are needed with the increasing
world population, etc. The rates for the families certainly depend
on the number of their current members, which may
(ultimately) be controlled by $R$. Geometrically speaking, the area
under each plot of the size distribution of the cities, languages,
language families, and city families (countries) must always equal
the world population at any time $t$; this constraint constitutes
the bridge for the given implicit dependence of the rate parameters
(of the considered functions) on $R$.
As a final remark, we claim that the
original model may also be useful to predict the historical size
distribution of the cities: we predict that the initial distribution
of the cities over population becomes parabolic in log-log scale at some
intermediate time $t$ ($<2000$) and turns into a
power law with exponent $-1$ as time goes on (i.e., at the present time, $t=2000$).
This distribution may be checked against archaeological
data for the ancient cities (towns), as a subject of a potential
field of science, namely physical history; there, the time evolution
of the mentioned distribution into the power law $-1$ may also be
followed.
\section{APPENDIX}
In the present model we have (with the symbols in capital letters
for the cities, those in lower case for the languages, and the
subscript fam for the families) $M(0)$ ancestors for the
cities ($I$) and $m(0)$ ancestors for the languages ($i$), with
$M(0) \neq m(0)$. Each city has a random size ($P_I(0)$) and she
speaks one of the initial languages, selected randomly. So
$P_I(0)$ is the population of each ancestor city, and $p_i(0)$ is
the number of people speaking each ancestor language. It is clear
that the total number of citizens equals the total number of speakers (for
any $t$), and both equal the current world population;
$$W(t) = \sum^{M(t)}_{I=1} P_I(t) = \sum^{m(t)}_{i=1} p_i(t) . \eqno(A)$$
The cities have fixed growth rates ($R_I$), which are distributed
randomly over the ancestors and are not changed later (the offspring
carry the same growth rate as their ancestors); the maximum value ($R$)
of the growth rates is the same constant for all of the cities (and
hence for the world).
Furthermore, we have $F(0)$ initial city families and $f(0)$ initial
language families, with $F(0)\ne f(0)$. Note that all of the
introduced parameters are physical quantities, which represent
several situations in reality.
The time evolution of the cities (and of
the languages and their families) is considered in terms of two
coupled random processes, the multiplicative random noise \cite{one} and the
random fragmentation \cite{two}; here the
cities and the languages (and their families) are treated as wholes
and the individuals are ignored. The cities grow in number by
splitting (with constant ratio $S=1/2$), where the fragmentation
rate is $H$; the languages, the city families and the language
families follow them accordingly, with their own fragmentation rates.
If a new city forms a new language (rate $h_f$), the
language of the home city is fragmented; here the splitting ratio
($S$) is the ratio of the population of this new city to the total
population of the cities which speak the old language. Obviously
$h_f$ is small ($h_f\approx0$); yet many new languages may
emerge at each time step, since many new cities emerge in the
meantime, so $h_f$ becomes important. On the other hand, a new (or an
old) city may change her language and select one of the current
languages as the new one (with $h_s\approx0$, for all), where
colonization may take place or teachers may teach the new language
\cite{three}, etc. In this case the size of the old (new) language decreases
(increases) by the population of the city in question. A language
spoken by many cities has a higher chance of being selected by a new
city; so big languages are favored when a new
language is selected.
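The language rule for a newly split city can be sketched as follows. The size-proportional weighting used for adopting an existing language is an assumed concrete form of "big languages are favored"; the paper does not fix it in this excerpt.

```python
import random

def language_of_new_city(home_lang, lang_sizes, h_f=0.0013, h_s=0.0002, rng=None):
    """Sketch of the language rule for a newly split city: with
    probability h_f it founds a new language (fragmenting the home
    language), with probability h_s it adopts an existing language
    (big languages favored, here via size-proportional weights, an
    assumed form), and otherwise it keeps the home language."""
    rng = rng or random.Random()
    u = rng.random()
    if u < h_f:                                   # found a brand-new language
        lang_sizes.append(0.0)                    # new language starts empty here
        return len(lang_sizes) - 1
    if u < h_f + h_s and sum(lang_sizes) > 0.0:   # adopt: big languages favored
        return rng.choices(range(len(lang_sizes)), weights=lang_sizes)[0]
    return home_lang                              # keep the home language
```

Since $h_f$ and $h_s$ are both small, almost every new city keeps its home language; yet, because many cities split per step, the few per-step language births accumulate, as argued in the text.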
We consider the countries (city families) as follows. When a city is
newly generated, she establishes a new country with probability
$H_{\rm fam-f}$ (a state; we know
many historical examples where each city was a state (city-state)
and many new countries started with a new city); with probability
$H_{\rm fam-s}$ she is colonized (i.e., changes
country); and with the remaining probability $1 - H_{\rm fam-f} -
H_{\rm fam-s}$ she continues to survive within the home country. Obviously,
when a city (or a group of cities, due to the present
randomness) starts a new country, the old country is
fragmented. Secondly, not only the newly generated cities but also
the old ones may be colonized; a country with all of her cities
may also be colonized (conquered), as we know from many examples in
history. A similar treatment covers the language families, with the
parameters $h_{\rm fam-f}$, $h_{\rm fam-s}$ and $1 - h_{\rm fam-f} - h_{\rm fam-s}$ (the
probabilities of starting a new language family, of changing the
language (and hence the language family, while surviving within the
home country, i.e., being culturally colonized), and of continuing
to speak a language which belongs to the home language family,
respectively).
Note that fragmentation causes new agents
to emerge (birth) and at the same time drives them to extinction
through splitting; any agent with fewer than one member
is considered extinct. The number of cities increases,
decreases, or fluctuates about $M(0)$: it increases for relatively big
$H$ (high fragmentation) and $G$ (low elimination), decreases for small
$H$ (low fragmentation) and $G$ (high elimination), and fluctuates
for $H + G = 1$ (fragmentation and elimination in balance).
Of these we regard only the first case, where we have (for
$1<H+G$) an increase in the number of cities, and we disregard the
others. We try several numbers of ancestors $M(0)$, with sizes
$P_I(0)$, and we assign new random growth rates to the new
cities, which are not changed later, just as the growth rates of the
ancestors are kept fixed throughout the time evolution.
Obviously $H=1=G$ gives gradual evolution of the cities, with
regular fragmentation at rate $H$ (and with some $S$) at each
time step $t$. This case is kept out of the present scope, because
we consider it (historically) unrealistic.
It is worth stressing that elimination (punctuation, $G<1$) plays a role
opposite to that of fragmentation ($H$) and growth ($R$) in the
evolution: $H$ and $R$ drive the evolution forward, while $G$
sets it back. The present competition is thus between
$H$ and $R$ on one side and $G$ on the other, where two criteria are
crucial. First, for a given
number of time steps, $R$, $M(0)$, etc., there is a critical
value of $G$: for $G_{\rm critical} <G \cong 1$ the cities survive, while
for smaller values of $G$ (i.e., for $G\cong G_{\rm critical}$) the cities may
become extinct altogether. (For similar cases in the competition
between species in biology, one may see [10].) Secondly, the sum of $H$
and $G$ is a decisive parameter for the evolution. If, for a given
$G$ (with $G_{\rm critical} <G$), $H+G=1$, then the number of cities
neither increases nor decreases but oscillates about $M(0)$,
since (almost) the same number of cities emerges (by $H$) and
becomes extinct (by $G$) at each time step; this is intermediate
elimination. If $H+G<1$, then the cities decrease
in number with time and we have high (strong, heavy) elimination.
Only for $1<H+G$ (with $G\neq1$) do we have low (weak, light)
elimination of the cities, where their number increases (yet more
slowly than for $G=1$). In summary, only light punctuation
of the cities may be historically realistic, and it does not affect the
evolution and size distribution of the cities, as we observed in
many runs (not shown) in which we increased the fragmentation rate ($H$) and the
population growth rate ($R$) to compensate the negative effects of
punctuation on the number of cities and on the world population,
respectively. Yet the ancestor cities, i.e., those of age
$t$ at any time $t$, decay more quickly in time as $G$ decreases
(punctuation increases), since eliminated offspring may be
replaced by newly generated ones, but the
ancestor cities cannot be rebuilt. Clearly, punctuation
of a city (together with all of its citizens) is realistic, as many
(regrettable) examples occurred during many wars.
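The three elimination regimes can be checked with a pure counting sketch. The per-step rules here (independent elimination, then splitting among survivors) are an assumed reading of the model, not the original code:

```python
import random

def city_count(M0, H, G, steps, seed=0):
    """Count-only sketch: per step, each city survives with probability G
    and a surviving city splits (adding one city) with probability H.
    The expected multiplier per step is G*(1+H); to first order in the
    small rates this exceeds unity exactly when 1 < H + G."""
    rng = random.Random(seed)
    m = M0
    for _ in range(steps):
        survivors = sum(1 for _ in range(m) if rng.random() < G)
        m = survivors + sum(1 for _ in range(survivors) if rng.random() < H)
    return m

# Light elimination (1 < H+G): the number of cities grows;
# heavy elimination (H+G < 1): it decays toward extinction;
# H+G = 1: it oscillates about M(0).
```

Running the sketch with, e.g., $H=0.01$, $G=0.995$ ($1<H+G$, growth) and $H=0.002$, $G=0.99$ ($H+G<1$, decay) reproduces the first two regimes described above.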
\section{Introduction}
\label{intro}
Could the graviton have a non-zero rest mass? Observations show that this is a possibility. One of the most accurate bounds on the mass of the graviton comes from observations of planetary motion in the solar system: variations of Kepler's third law comparing the orbits of Earth and Mars lead to $m_g < 7.8 \times 10^{-55}g$ \cite{Talmadge88}. Another bound comes from the analysis of galaxy clusters, which leads to $m_g < 2 \times 10^{-62}g$ \cite{Goldhaber74}; this is considerably more restrictive but less robust, due to the uncertainties in the content of the universe on large scales. Studying the rotation curves of galactic disks, \cite{JC} found that the graviton mass must satisfy $m_g \ll 10^{-59}g$ in order to obtain a galactic disk with a scale length of $b\sim10$ kpc.
The above tests are obtained from static fields, based on deviations from Newtonian gravity. In the weak-field limit it has been proposed \cite{Finn2002} to constrain $m_g$ using data on the orbital decay of binary pulsars. From the binary pulsars PSR B1913+16 (the Hulse-Taylor pulsar) and PSR B1534+12 one finds the limit $m_g < 1.4 \times 10^{-52}g$, which is weaker than the static-field bounds.
It is worth recalling that a mass term introduced via a Pauli-Fierz (PF) term in the linearized approximation produces a theory whose predictions do not reduce to those of general relativity as $m_g \rightarrow 0$. This is the so-called van Dam-Veltman-Zakharov discontinuity \cite{Veltman1970}. Moreover, Minkowski space as background metric is unstable in the PF theory \cite{Gruzinov2005}. However, there is no reason to prefer the PF term over any other non-PF quadratic term.
It is important to emphasize that these mass terms have no clear extrapolation to strong fields. A way to achieve this was proposed by Visser \cite{visser98}. To generalize the theory to strong fields, Visser makes use of two metrics: the dynamical metric ($g_{\mu\nu}$) and a non-dynamical background metric ($\left( g_0\right) _{\mu\nu}$), which are connected by the mass term. Although adding a prior geometry is not in accordance with the usual foundations underlying Einstein gravity, Visser's construction keeps intact the principle of equivalence (at least in its weak form) and general covariance. Some interesting physical features emerge from the theory, such as extra polarization states of the gravitational waves \cite{wayne2004}.
In the present article we explore some aspects not treated by Visser in his original paper. In most astrophysical studies the Minkowski metric is the most appropriate choice for the background metric. However, in the study of cosmology this kind of metric cannot be used as it stands, and some prior considerations regarding the background metric are needed. Since this problem emerges from the coupling of the two metrics and the energy conservation condition, we analyze an alternative interpretation of this condition. We also show that this interpretation is in accordance with the equivalence principle and naturally recovers special relativity in the absence of gravitational sources. Arguments in favor of a Minkowskian background metric in Visser's theory are also considered.
This paper is organized as follows: in section \ref{sec:1} we show how to introduce a mass for the graviton through a non-PF term. We present the strong field extrapolation as given by Visser in section \ref{sec:2}. In section \ref{sec:3} we show that the theory is not in accordance with a Minkowski background metric in the study of cosmology. In section \ref{sec:4} we re-interpret the stress-energy conservation in order to keep Minkowski as background in any case. In particular, we show that our re-interpretation is in accordance with the equivalence principle. In section \ref{sec:5} we show why Minkowski is the most natural choice to the background metric. We briefly study some cosmological consequences of our interpretation of the energy-momentum conservation in section \ref{sec:6}. And finally, we present our conclusions in the last section.
\section{The linearized approximation}
\label{sec:1}
The action of a massive gravity in weak field limit may be given by
\begin{equation}\label{actionweak}
I=\int d^4x \bigg\{ \frac{1}{2}\left[h^{\mu\nu}{\Box}^2 h_{\mu\nu}
-\frac{1}{2}h\Box^2h\right] - \frac{1}{2}\frac{m_g^2c^2}{\hbar^2}\left[ h^{\mu\nu}h_{\mu\nu}-\frac{1}{2}h^2\right]+\frac{8\pi G}{c^4}h^{\mu\nu}T_{\mu\nu} \bigg\} ,
\end{equation}
where the first term is the linearization of the usual Einstein-Hilbert Lagrangian and the second term is a non-PF mass term for the graviton. This fact is essential for a well-behaved classical limit as the graviton mass goes to zero. From equation (\ref{actionweak}) we obtain the field equation in the weak-field regime
\begin{equation}\label{weakfieldeq}
\Box^2\left[ h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}h\right]-\frac{m_g^2c^2}{\hbar^2}\left[ h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}h\right]=-\frac{16\pi G}{c^4}T_{\mu\nu}
\end{equation}
or
\begin{equation}\label{weakfieldtwo}
\left( \Box^2-\frac{m_g^2c^2}{\hbar^2}\right) \bar{h}_{\mu\nu}=-\frac{16\pi G}{c^4}T_{\mu\nu}
\end{equation}
where
\begin{equation}
\bar{h}_{\mu\nu}=h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}h
\end{equation}
Equation (\ref{weakfieldtwo}) is of Klein-Gordon type. Note that in the limit $m_g \rightarrow 0$ this equation gives the weak-field equations of general relativity, and hence the newtonian potential in the non-relativistic limit.
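As a check not spelled out in the text, for a static source of mass $M$ the non-relativistic limit of (\ref{weakfieldtwo}) can be solved explicitly (with $\phi$ the Newtonian-limit potential; the overall normalization is the conventional one and is assumed here): the Klein-Gordon operator turns the Coulomb form into a Yukawa form,
\[
\left(\nabla^2-\frac{m_g^2c^2}{\hbar^2}\right)\phi = 4\pi G\rho
\qquad\Longrightarrow\qquad
\phi(r)=-\frac{GM}{r}\,e^{-m_g c\,r/\hbar} ,
\]
so the graviton Compton wavelength $\hbar/(m_g c)$ sets the range of the interaction; the static bounds quoted in the Introduction probe precisely this scale.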
Taking the mass term as above we obtain the condition
\begin{equation}\label{gauge}
\partial_\nu\bar{h}^{\mu\nu}=0.
\end{equation}
as a natural consequence of energy conservation \cite{visser98}, instead of as a gauge condition as in general relativity. But, as we will see later, this is not the case when one considers strong fields.
\section{The Visser's strong field equations}
\label{sec:2}
Following Visser, the extrapolation of the mass term in equation (\ref{actionweak}) to strong fields can be made by introducing a background metric $g_0$, which is not subject to a dynamical equation. The mass term for strong fields is then given by the action
\begin{eqnarray}\label{actiostrong}
I_{mass} &=& \frac{1}{2}\frac{m_g^2c^2}{\hbar^2}\int d^4x\sqrt{-g_0}\bigg\{( g_0^{-1})^{\mu\nu}( g-g_0)_{\mu\sigma}( g_0^{-1})^{\sigma\rho}( g-g_0)_{\rho\nu} \nonumber
\\ & & -\frac{1}{2}\left[( g_0^{-1})^{\mu\nu}( g-g_0)_{\mu\nu}\right]^2\bigg\}
\end{eqnarray}
that recovers the action (\ref{actionweak}) when we consider the weak field limit:
\begin{equation}
g_{\mu\nu}=(g_0)_{\mu\nu}+h_{\mu\nu},~~~|h|<<1.
\end{equation}
Then, the full action considered by Visser is
\begin{equation}\label{fullaction}
I=\int d^4x\bigg[ \sqrt{-g}\frac{c^4R(g)}{16\pi G}+{\cal{L}}_{mass}(g,g_0)+{\cal{L}}_{matter}(g)\bigg]
\end{equation}
in which the background metric appears only in the mass term for the graviton. The equations of motion that follow from (\ref{fullaction}) may be written in the form of the Einstein equations
\begin{equation}\label{eqmotion}
G_{\mu\nu}=-\frac{8\pi G}{c^4}\left[ T_{\mu\nu}+T_{\mu\nu}^{mass}\right],
\end{equation}
where the contribution of the mass term appears as an extra contribution to the stress-energy tensor, namely
\begin{eqnarray}\label{extrastress-energy}
T^{\mu\nu}_{mass} &=& -\frac{m_g^2c^6}{8\pi G\hbar^2} \bigg\{ \left( g_0^{-1}\right)^{\mu\sigma}\big[ \left( g-g_0\right)_{\sigma\rho} \nonumber
\\ & & -\frac{1}{2}(g_0)_{\sigma\rho}\left( g_0^{-1}\right) ^{\alpha\beta}\left( g-g_0\right) _{\alpha\beta}\big]\left( g_0^{-1}\right)^{\rho\nu}\bigg\}.
\end{eqnarray}
Following equation (\ref{gauge}), the natural extrapolation to strong fields is
\begin{equation}\label{massconserv}
\nabla_\nu T^{\mu\nu}_{mass} = 0.
\end{equation}
\section{Visser's field equations with a Minkowski background metric}
\label{sec:3}
As pointed out by Visser \cite{visser98}, the most sensible choice for almost all astrophysical applications is to take $g_0$ as the Minkowski metric. However, some problems appear when we consider this kind of background in cosmology.
To show how these problems emerge, we take the Robertson-Walker metric as the dynamical one and consider $k=0$ for simplicity:
\begin{equation}\label{frw}
ds^2=c^2dt^2-a^2(t)\left[ dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)\right].
\end{equation}
For the background metric, we take the following class of metrics:
\begin{equation}\label{backmetric}
ds_0^2=b^2_0(t)c^2dt^2-a^2_0(t)\left[ dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)\right].
\end{equation}
Using these two metrics in the mass tensor and applying (\ref{massconserv}) we obtain
\begin{eqnarray}\label{aoboa}
\frac{\dot{a}}{a}\bigg[\bigg(\frac{a}{a_0b_0}\bigg)^2+\frac{1}{4}\bigg(\frac{a}{a_0}\bigg)^4+\frac{1}{4b_0^4}-\frac{1}{b_0^2}\bigg]+\frac{1}{2}\frac{\dot{a}_0}{a_0}\bigg(\frac{a}{a_0b_0}\bigg)^2 \nonumber \\ -\frac{\dot{b}_0}{b_0}\bigg[\frac{1}{2}\bigg(\frac{a}{a_0b_0}\bigg)^2+\frac{1}{3b_0^4}-\frac{2}{3b^2_0}\bigg]=0,
\end{eqnarray}
where dots represent time derivatives.
Thus $a_0(t)$, $b_0(t)$ and the scale factor $a(t)$ are related by the differential equation (\ref{aoboa}). For example, if we choose the background metric to be Minkowski ($a_0=b_0=1$), we obtain that the dynamical metric must be static (Minkowski) as well. Obviously this is not the case in an expanding Universe, for example.
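Explicitly, setting $a_0=b_0=1$ (so that $\dot{a}_0=\dot{b}_0=0$) in (\ref{aoboa}) leaves
\[
\frac{\dot{a}}{a}\left[a^2+\frac{a^4}{4}-\frac{3}{4}\right]=0 ,
\]
so either $\dot{a}=0$ or $a$ sits at the constant root of the bracket; in both cases the scale factor is constant and no expansion is possible.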
So, in this case, we cannot take Minkowski and we need some particular choice for the background metric. Some of the possible choices are discussed in Visser's paper \cite{visser98}.
If a consistent gravitation theory is based on a prior metric, we expect such a metric to be compatible with any astrophysical case. Since the problems regarding the Minkowskian background metric arise from the condition (\ref{massconserv}), we explore an alternative interpretation of the energy-momentum conservation in the remainder of the paper. This interpretation is intended to keep Minkowski as the background metric in any astrophysical study within Visser's theory.
\section{The energy-momentum conservation revisited}
\label{sec:4}
From the field equations of Visser's theory we may adopt an alternative energy-momentum conservation condition. Taking the divergence of (\ref{eqmotion}), the left-hand side vanishes identically by the Bianchi identity, and from the right-hand side we get
\begin{equation}\label{new conservation}
\nabla_\nu\left[ T^{\mu\nu}+T_{mass}^{\mu\nu}\right]=0
\end{equation}
We will verify whether this equation is in accordance with the equation of motion of a freely falling test particle describing a geodesic and, therefore, whether it is in accordance with the equivalence principle. In the well-known Rosen bimetric theory of gravitation \cite{rosen1973}, for example, the importance of the field equations being in accordance with the geodesic equation, which is obtained independently, was pointed out.
To proceed, we adopt the perfect-fluid energy-momentum tensor:
\begin{equation}\label{perfect fluid}
T^{\mu\nu}=(\rho+p)U^\mu U^\nu+pg^{\mu\nu}.
\end{equation}
Substituting this into equation (\ref{new conservation}) we have
\begin{equation}
\left[(\rho+p)U^\mu U^\nu+pg^{\mu\nu}\right]_{;\nu} =-T^{\mu\nu}_{mass~;\nu}
\end{equation}
\begin{equation}\label{cons mass}
\left[ (\rho+p)U^\nu\right]_{;\nu}U^\mu+(\rho+p){U^\mu}_{;\nu}U^\nu=-T^{\mu\nu}_{mass~;\nu}
\end{equation}
where ``$;$'' denotes the covariant derivative.
Multiplying (\ref{cons mass}) by $U_\mu$ and using
\begin{equation}\label{proper vel}
U^\mu U_\mu=1,
\end{equation}
we obtain
\begin{equation}\label{cons mass 2}
\left[ (\rho+p)U^\nu\right]_{;\nu}+(\rho+p)U_\mu {U^\mu}_{;\nu}U^\nu=-T^{\mu\nu}_{mass~;\nu}U_{\mu}.
\end{equation}
Manipulating (\ref{proper vel}) we have
\begin{equation}\label{proper vel 2}
{U^\mu}_{;\nu}U^{\nu}=-U^\mu {U^\nu}_{;\nu};
\end{equation}
from which we can rewrite equation (\ref{cons mass 2}) as
\begin{equation}\label{cons mass 3}
\left[ (\rho+p)U^\nu\right]_{;\nu}-(\rho+p)U^\mu U_\mu {U^\nu}_{;\nu}=-T^{\mu\nu}_{mass~;\nu}U_{\mu}.
\end{equation}
From (\ref{proper vel}) we can find that the second term in the left-hand-side of (\ref{cons mass 3}) is zero, therefore
\begin{equation}\label{cons mass 4}
\left[ (\rho+p)U^\nu\right]_{;\nu}=-T^{\mu\nu}_{mass~;\nu}U_{\mu}.
\end{equation}
Now substituting (\ref{cons mass 4}) in (\ref{cons mass}) we get
\begin{equation}
-U_{\alpha}T^{\alpha\nu}_{mass~;\nu}U^{\mu}+(\rho+p) {U^\mu}_{;\nu}U^\nu=-T^{\mu\nu}_{mass~;\nu}.
\end{equation}
Since the strong field equations are in accordance with the geodesic equation, we have
\begin{equation}
{U^\mu}_{;\nu}U^\nu=0
\end{equation}
which can be rewritten as
\begin{equation}
\frac{d^2x^\mu}{d\tau^2}+\Gamma^\mu_{\alpha\nu}\frac{dx^\alpha}{d\tau}\frac{dx^\nu}{d\tau}=0.
\end{equation}
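As a simple illustrative check (ours, not part of the original derivation), one can verify symbolically that a straight line satisfies the geodesic equation above when flat space is written in curvilinear coordinates, where the Christoffel symbols are nonzero:

```python
import sympy as sp

# Illustrative check (ours): in flat 2d space written in polar coordinates,
# ds^2 = dr^2 + r^2 dtheta^2, the geodesic equation is solved by the
# straight line x = 1, y = tau, i.e. r = sqrt(1 + tau^2), theta = arctan(tau).
tau = sp.Symbol('tau', real=True)
r = sp.sqrt(1 + tau**2)
th = sp.atan(tau)

# Nonzero Christoffel symbols: Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r
geo_r = sp.diff(r, tau, 2) - r * sp.diff(th, tau)**2
geo_th = sp.diff(th, tau, 2) + 2 * sp.diff(r, tau) * sp.diff(th, tau) / r

assert sp.simplify(geo_r) == 0 and sp.simplify(geo_th) == 0
```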
The last equation can be obtained independently by considering a freely falling test particle and the equivalence principle, just as in general relativity. Therefore, we conclude that for the energy-momentum conservation condition (\ref{new conservation}) to be in accordance with the equivalence principle, the following relation for the mass term must be respected:
\begin{equation}\label{mass term condition}
T^{\mu\nu}_{mass~;\nu}=U^{\mu}U_\alpha T^{\alpha \nu}_{mass~;\nu}.
\end{equation}
If we adopt the four-velocity in the rest frame,
\begin{equation}\label{propervelnew}
U_\mu=(1,0,0,0),
\end{equation}
then the only potentially non-null component of the divergence of the mass term is the one with $\mu=0$; the spatial components $T^{i\nu}_{mass~;\nu}$ must vanish.
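This rest-frame statement can be verified with a short symbolic computation (ours, purely illustrative); the vector $V^\mu$ stands in for the otherwise unspecified divergence $T^{\mu\nu}_{mass~;\nu}$:

```python
import sympy as sp

# Symbolic sketch (ours): impose condition (mass term condition) in the rest
# frame with a Minkowski background, signature (+,-,-,-).  V^mu stands in for
# the otherwise unspecified divergence T^{mu nu}_{mass ; nu}.
V = sp.Matrix(sp.symbols('V0 V1 V2 V3'))
eta = sp.diag(1, -1, -1, -1)
U_lower = sp.Matrix([1, 0, 0, 0])        # U_mu in the rest frame
U_upper = eta.inv() * U_lower            # U^mu = eta^{mu alpha} U_alpha

# Condition: V^mu = U^mu (U_alpha V^alpha).  The residual shows that V^0 is
# unconstrained while every spatial component is forced to vanish.
residual = list(V - U_upper * (U_lower.T * V)[0])
assert residual == [0, V[1], V[2], V[3]]
```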
Note that the condition imposed on the mass term, (\ref{mass term condition}), does not depend on the specific form of the tensor $T^{\mu\nu}_{mass}$, so expression (\ref{new conservation}) is valid for any second-rank tensor ``interacting'' with the perfect fluid.
\section{Arguments in favor of a Minkowski background}
\label{sec:5}
A classical theory of gravity with a massive graviton apparently needs a background metric for the propagation of this particle. But what is the best physical choice for the background metric? In the Rosen theory \cite{rosen1973} the second metric is a flat metric that describes the inertial forces. We will analyze this issue in Visser's theory.
To do that, we take the field equations (\ref{eqmotion}) in the absence of gravitational source:
\begin{equation}
G^{\mu\nu}=\frac{8\pi G}{c^4}T^{\mu\nu}_{mass}.
\end{equation}
In this particular case, following the treatment given in this paper, the covariant divergence yields:
\begin{equation}\label{vac cons}
\nabla_\nu T^{\mu\nu}_{mass}=0.
\end{equation}
Since the mass tensor is constructed from the dynamical metric and the background metric (and not from derivatives of the metrics), we can conclude that the \textit{simplest way} of satisfying (\ref{vac cons}) is:
\begin{equation}\label{nulity of g0}
\nabla_\nu(g_0)^{\mu\nu}=0
\end{equation}
since the covariant divergence of $g_{\mu\nu}$ vanishes by construction of the covariant derivative. Then, the natural solution of (\ref{nulity of g0}) is:
\begin{equation}\label{metrics equality}
(g_0)_{\mu\nu}=g_{\mu\nu},
\end{equation}
which, by the construction of the mass term (\ref{extrastress-energy}), leads to
\begin{equation}
T^{\mu\nu}_{mass}=0
\end{equation}
and therefore
\begin{equation}\label{vac sol}
G_{\mu\nu}=0.
\end{equation}
In the absence of gravitational sources the simplest solution of (\ref{vac sol}) is:
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}
\end{equation}
where $\eta_{\mu\nu}$ is the Minkowski metric and by (\ref{metrics equality}) we get:
\begin{equation}
(g_0)_{\mu\nu}=\eta_{\mu\nu}.
\end{equation}
Our result may be summarized by saying that in the absence of gravitational sources the two metrics coincide and we have only one flat metric: Minkowski. This is in fact a simplicity criterion, since we expect to recover the results of special relativity in the absence of gravitation. Take, for example, our energy-momentum conservation condition (\ref{new conservation}): if the background metric is Minkowski, then when the dynamical metric is Minkowski too we naturally recover energy-momentum conservation as given in special relativity:
\begin{equation}
\partial_\nu(T^{\mu\nu})=0,
\end{equation}
since the mass term vanishes.
If the background metric is not Minkowski, special relativity is not recovered, because the mass term would not vanish, owing to the coupling between the two metrics.
With all these features, the basis of the theory is very close to the foundations of general relativity.
\section{Cosmological consequences?}
\label{sec:6}
To illustrate the condition (\ref{new conservation}), let us consider the simple case of matter in the form of an ideal pressureless fluid, i.e., a cloud of dust particles:
\begin{equation}
T^{\mu\nu}=\rho U^\mu U^\nu,
\end{equation}
with the Robertson-Walker metric as the dynamical metric and Minkowski as the background one. Then, applying the condition (\ref{new conservation}), we have
\begin{equation}\label{rho evol}
\dot{\rho}+\left[ 3\rho+\frac{3m_g^2c^6}{16\pi G\hbar^2}(4a^2+a^4-3)\right] \frac{\dot{a}}{a}=0,
\end{equation}
and equation (\ref{cons mass}) is automatically satisfied.
Solving (\ref{rho evol}) we obtain the evolution of the energy density as a function of the scale factor:
\begin{equation}\label{dust evolution}
\rho(a)=\frac{\rho_0}{a^3}-\frac{3m_g^2c^6}{8\pi G\hbar^2}\left( \frac{a^4}{14}+\frac{2a^2}{5}-\frac{1}{2}\right),
\end{equation}
here, the first term is the evolution of the energy density as calculated in general relativity, and the additional term is due to the mass term. This may provide an interesting treatment of the graviton mass in cosmological scenarios, since we can interpret it as a fluid and perhaps explain some observational effects that have been attributed to the cosmological constant, quintessence and other exotic fluids \cite{marcio2006}.
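As a consistency check (ours, not part of the original derivation), equation (\ref{dust evolution}) can be verified against (\ref{rho evol}) with a computer algebra system; below, $K$ abbreviates the combination $3m_g^2c^6/(16\pi G\hbar^2)$ appearing in (\ref{rho evol}), and the scale factor $a$ is used as the evolution variable, so that $\dot{a}$ cancels out:

```python
import sympy as sp

# Consistency check (ours) that (dust evolution) solves (rho evol).  Here
# K abbreviates 3 m_g^2 c^6 / (16 pi G hbar^2), so the prefactor in the
# solution, 3 m_g^2 c^6 / (8 pi G hbar^2), equals 2K.  With the scale factor
# a as evolution variable, dot(rho) = rho'(a) * dot(a), and dot(a)/a drops
# out of the continuity equation.
a, rho0, K = sp.symbols('a rho_0 K', positive=True)

rho = rho0 / a**3 - 2 * K * (a**4 / 14 + 2 * a**2 / 5 - sp.Rational(1, 2))

residual = sp.diff(rho, a) + (3 * rho + K * (4 * a**2 + a**4 - 3)) / a
assert sp.simplify(residual) == 0
```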
\\
\\
Another interesting feature emerges from our treatment. It is not possible to obtain a de Sitter solution for the vacuum.
Einstein gravity has a family of solutions given by:
\begin{equation}
G_{\mu\nu}-\Lambda g_{\mu\nu}=-\frac{8\pi G}{c^4}T_{\mu\nu}
\end{equation}
that is in accordance with the conservation laws for any small constant $\Lambda$. The vacuum solution of this equation with the Robertson-Walker metric with $k=0$, gives us the de Sitter space-time:
\begin{equation}
ds^2=dt^2-e^{2(\Lambda/3)^{1/2}t}\left[dr^2+r^2(d\theta^2+\sin^2\theta\, d\phi^2)\right].
\end{equation}
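As an illustrative check (ours, with one common sign convention: signature $(+,-,-,-)$ and $R_{\mu\nu}=R^{\rho}{}_{\mu\rho\nu}$), one can verify symbolically that this metric satisfies $G_{\mu\nu}=\Lambda g_{\mu\nu}$:

```python
import sympy as sp

# Symbolic check (ours, not part of the paper) that the de Sitter metric
# solves G_{mu nu} = Lambda g_{mu nu}.  Conventions: signature (+,-,-,-),
# Ricci tensor R_{mu nu} = R^rho_{mu rho nu}.
t, r, th, ph = sp.symbols('t r theta phi', real=True)
Lam = sp.Symbol('Lambda', positive=True)
x = [t, r, th, ph]
A = sp.exp(2 * sp.sqrt(Lam / 3) * t)          # squared scale factor e^{2Ht}

g = sp.diag(1, -A, -A * r**2, -A * r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(n)) / 2
         for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(b, c):
    """Ricci tensor component R_{bc} = R^a_{bac}."""
    expr = 0
    for a in range(n):
        expr += sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][a][b], x[c])
        for d in range(n):
            expr += Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][a][b]
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, lambda b, c: ricci(b, c))
R = sp.simplify(sum(ginv[b, c] * Ric[b, c] for b in range(n) for c in range(n)))
Ein = Ric - R * g / 2                          # Einstein tensor G_{bc}

assert all(sp.simplify(Ein[b, c] - Lam * g[b, c]) == 0
           for b in range(n) for c in range(n))
```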
If we add a cosmological constant in the vacuum equations of Visser gravity:
\begin{equation}\label{eqmotion with lambda}
G_{\mu\nu}-\Lambda g_{\mu\nu}=-\frac{8\pi G}{c^4} T_{\mu\nu}^{mass},
\end{equation}
and taking the covariant divergence, from the right-hand side we recover equation (\ref{vac cons}). Since the background metric is Minkowski, from (\ref{aoboa}) the dynamical metric $g_{\mu\nu}$ is Minkowski too, and we obtain
\begin{equation}
G_{\mu\nu}=T_{\mu\nu}^{mass}=0,
\end{equation}
and from (\ref{eqmotion with lambda}) we have
\begin{equation}
\Lambda=0.
\end{equation}
Thus, in order to have consistency, $\Lambda$ must be rigorously zero. Since the background metric needs to be Minkowski, the cosmological vacuum solution in Visser's theory is the static flat Minkowski space-time; alternatively, one could consider some kind of time-dependent cosmological parameter (like $\Lambda(t)$). For this last alternative, we would have a coupling equation like (\ref{rho evol}), which would describe the evolution of the energy density of the vacuum component.
\section{Conclusion}
\label{sec:7}
Our interpretation of energy-momentum conservation in Visser's massive gravity is in accordance with the equivalence principle and naturally recovers the results of special relativity in the absence of gravitational sources.
The point of view considered in this paper allows us to adopt Minkowski as the background metric in Visser's theory in all astrophysical cases, including cosmology.
This new interpretation may lead to interesting cosmological results, since one can construct a cosmological model in a theory with massive gravitons on a Minkowski background. Additional contributions to the cosmological fluids will appear due to the modifications in the interaction potential, which may offer a way of treating the dark-energy problem. The analysis of the theory in the absence of gravitational sources leads us to exclude the de Sitter space-time as a vacuum solution of the massive gravity, since a constant $\Lambda$ term is rigorously zero in a flat background.
Another interesting feature is that our interpretation of energy conservation in strong fields is independent of the form of the tensor which interacts with the perfect-fluid tensor, so it can be applied to other models with additional energy-momentum contributions.
\begin{acknowledgements}
MESA would like to thank the Brazilian Agency FAPESP for
support (grant 06/03158-0). ODM and JCNA would like to thank the Brazilian
agency CNPq for partial support (grants 305456/2006-7 and 303868/2004-0
respectively).
\end{acknowledgements}
\section{Introduction}
An important property of Yang-Mills theory is that it contains
Wilson loop operators labeled by irreducible representations of the
gauge group $G$ \cite{Wilson}. Their product is controlled by the
representation ring of $G$ and therefore determines $G$ uniquely.
The work of Goddard, Nuyts, and Olive \cite{GNO} on magnetic sources
can be reinterpreted \cite{KWH} as saying that Yang-Mills theory
admits another class of loop operators labeled by irreducible
representations of the Langlands-dual group ${{}^LG}$. Such operators
are called 't Hooft loop operators. The Montonen-Olive duality
conjecture \cite{MO} states that ${\mathcal N}=4$ super-Yang-Mills theory
with gauge group $G$ is isomorphic to ${\mathcal N}=4$ super-Yang-Mills
theory with gauge group ${{}^LG}$, and this isomorphism exchanges Wilson
and 't Hooft loop operators. This conjecture therefore predicts that
the product of 't Hooft loop operators is controlled by the
representation ring of ${{}^LG}$.
This implication of the Montonen-Olive conjecture has been verified
in \cite{KW} for suitably supersymmetrized versions of 't Hooft
loops. The idea is to twist ${\mathcal N}=4$ SYM theory into a 4d Topological
Field Theory (TFT), so that either Wilson or 't Hooft loop operators
become topological observables. One can show then that the product
of loop operators is independent of the distance between them, and
in fact loop operators form a commutative ring. In the case of
Wilson loop operators, it is straightforward to show that this ring
is the representation ring of $G$. In the case of 't Hooft loop
operators, it has effectively been argued in \cite{KW} that the ring
is the $K^0$-group of the category of equivariant perverse sheaves
on the affine Grassmannian $Gr_G$. It has been shown by Lusztig
\cite{Lusztig} that this ring is the representation ring of ${{}^LG}$; a
categorification of this statement, known as the geometric Satake
correspondence, has been proved in \cite{Ginz,MV1,MV2}. As explained
in \cite{KW}, the geometric Satake correspondence can also be
interpreted in physical terms, by replacing loop operators with line
operators.
Yang-Mills theory also admits mixed Wilson-'t Hooft loop operators.
As explained in \cite{KWH}, they are labeled by elements of the set
$$
{\widehat\Lambda}(G)/{\mathcal W}=(\Lambda_w(G)\oplus \Lambda_w({{}^LG}))/{\mathcal W},
$$
where $\Lambda_w(G)$ is the weight lattice of $G$ and ${\mathcal W}$ is the
Weyl group (which is the same for $G$ and ${{}^LG}$). It is natural to
ask what controls the product of such more general operators. The
answer must somehow unify the representation theory of $G$ and
${{}^LG}$. In this paper we partially answer this question. A natural
framework for it is the holomorphic-topological twisted version of
the ${\mathcal N}=4$ SYM theory described in \cite{htft}, since it admits
Wilson-'t Hooft loop operators labeled by arbitrary elements of
${\widehat\Lambda}/{\mathcal W}$.\footnote{In the topological field theory described in
\cite{KW}, depending on the choice of a BRST operator, either Wilson
or 't Hooft loop operators may exist, but not both at the same time.
In what follows we will refer to this TFT as the GL-twisted theory,
where GL stands for ``geometric Langlands''.} As explained in
\cite{htft}, Wilson-'t Hooft loop operators in the twisted theory
form a commutative ring, and this ring is abstractly isomorphic to
the Weyl-invariant part of the group algebra ${\widehat\Lambda}(G)$. But this does
not completely determine the operator product, since we do not yet
know which element of the group algebra corresponds to a particular
element of the set ${\widehat\Lambda}(G)/{\mathcal W}$ labeling Wilson-'t Hooft loop
operators.
In this paper we determine the answer for $G=PSU(2)$ and $G=SU(2)$
assuming S-duality, and then verify the prediction in a special case
by a direct gauge-theory computation at weak coupling. We also
outline a procedure for computing the product of Wilson-'t Hooft
loop operators for arbitrary $G$. The procedure is very similar to
that for 't Hooft operators in \cite{KW}. As in \cite{KW}, an
important role is played by the fact that loop operators can be
promoted to line operators, i.e. ``open'' analogs of loop operators.
While loop operators form a commutative ring, line operators form a
monoidal category (i.e. an additive category with a ``tensor
product''). We argue below that the ring of loop operators can be
thought of as the $K^0$-group of the category of line operators. The
Montonen-Olive duality predicts that these categories for gauge
groups $G$ and ${{}^LG}$ are equivalent. In some sense, this can be
viewed as the classical limit of the geometric Satake
correspondence, but $G$ and ${{}^LG}$ enter more symmetrically. As
discussed in the concluding section, this conjecture, when
interpreted in mathematical terms, has previously appeared in
\cite{BFM}.
\section{A brief review of the Hitchin moduli space}
In this preliminary section we review some basic facts about the
moduli space of Hitchin equations ${{\mathcal M}_H}(G,C)$ and the sigma-model
with target ${{\mathcal M}_H}(G,C)$. The reader familiar with this material may
skip this section. A more detailed discussion may be found in
\cite{KW}.
Given a gauge group $G$, let us consider a principal $G$-bundle $E$
over a Riemann surface $C$, a connection $A$ on $E$, and a 1-form
$\phi$ with values in ${\rm ad}(E)$. The Hitchin equations are
$$F-i\, \phi{\wedge} \phi=0,\quad D\phi=0,\quad D\star \phi=0,$$
where $D=d+iA$ is the covariant differential, $F=-iD^2$ is the
curvature of $A$, and $\star$ is the Hodge star operator. The space
of solutions of these equations modulo gauge transformations is known
as the Hitchin moduli space and will be denoted ${{\mathcal M}_H}(G,C)$ or simply
${{\mathcal M}_H}$ (we suppress $E$ from the notation, because we regard
${{\mathcal M}_H}(G,C)$ as a disconnected sum of components corresponding to all
possible topological types of $E$).
A crucial fact for us is that ${\cal M}_H$ is a hyperk\"ahler
manifold. In particular, it has three complex structures $I,J,K$
satisfying $IJ=K$. One way to describe these complex structures
explicitly is to specify holomorphic coordinates on ${{\mathcal M}_H}$. For a
local complex coordinate $z$ on $C$ we write
$$A=A_z dz+A_{{\bar z}}d{{\bar z}},\quad \phi=\phi_z dz +\phi_{{\bar z}}d{{\bar z}}.$$
For the complex structure $I$ the holomorphic coordinates are
$A_{{\bar z}}$ and $\phi_z.$ For the complex structure $J$ the
holomorphic coordinates are $A_{{\bar z}}+i\phi_{{\bar z}}$ and $A_z+i\phi_z.$
Finally, the complex structure $K$ is defined by the quaternion
relation $K=IJ.$ In the present paper we mostly work with complex
structure $I$ and use notation ${\cal M}_{Higgs}(G,C)$ for ${\cal
M}_H(G,C)$ with this choice of complex structure. The reason for
this notation is that ${{\mathcal M}_H}$ equipped with the complex structure $I$
is naturally identified with the moduli space of Higgs bundles, i.e.
pairs $({\mathcal E},\varphi)$, where ${\mathcal E}$ is a holomorphic $G_{\mathbb C}$ bundle,
and $\varphi$ is a holomorphic section of $K_C\otimes {\rm ad}({\mathcal E})$.
This identification maps the triple $(E,A,\phi)$ to the holomorphic
$G_{\mathbb C}$-bundle defined by the $(0,1)$ part of $D$ and the
holomorphic Higgs field $\varphi=\phi^{1,0}$. Note that the subset
of ${\mathcal M}_{Higgs}(G,C)$ given by $\varphi=0$ is the moduli space of
stable holomorphic $G_{\mathbb C}$ bundles, which we will denote ${\mathcal M}(G,C)$.
In the complex structure $J$ the Hitchin moduli space can be
identified with the moduli space of flat $G_{\mathbb C}$ connections on $C$;
this moduli space was denoted ${\mathcal M}_{flat}(G,C)$ in \cite{KW}. But
this identification will not play a role in this paper.
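The quaternionic relations $I^2=J^2=K^2=-1$, $K=IJ$ invoked above can be modeled concretely on flat $\mathbb{R}^4\cong\mathbb{H}$; the following sketch (ours, purely illustrative) checks them for the standard quaternionic structure:

```python
import numpy as np

# Concrete model (ours) of a hyperkahler triple of complex structures: left
# multiplication by the unit quaternions i, j on R^4 = H gives matrices
# I, J with I^2 = J^2 = -1, and K := I J is the third complex structure.
I = np.array([[0, -1, 0,  0],
              [1,  0, 0,  0],
              [0,  0, 0, -1],
              [0,  0, 1,  0]])
J = np.array([[0,  0, -1, 0],
              [0,  0,  0, 1],
              [1,  0,  0, 0],
              [0, -1,  0, 0]])
K = I @ J

for M in (I, J, K):
    assert np.array_equal(M @ M, -np.eye(4, dtype=int))   # M^2 = -1
assert np.array_equal(I @ J, -(J @ I))                    # I and J anticommute
```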
Consider now the supersymmetric sigma-model with target ${{\mathcal M}_H}$. Since
${{\mathcal M}_H}$ is hyperk\"ahler, such a sigma-model has ${\mathcal N}=(4,4)$
supersymmetry. One may twist this sigma-model into a topological
field theory by picking a pair of complex structures $(J_+,J_-)$ on
${{\mathcal M}_H}(G,C)$. For $J_+=J_-$ one gets a B-model, while for $J_+=-J_-$
one gets an A-model. In this paper we will be mostly interested in
the special case $J_+=J_-=I$, i.e. the B-model in complex structure
$I$.
Given a topological twist of the sigma-model, one can consider the
corresponding category of topological branes. This is a category of
boundary conditions for the sigma-model on a worldsheet of the form
${\mathbb R}\times {\rm I}$ where ${\rm I}$ is the unit interval. The
boundary conditions are required to be invariant with respect to the
BRST operator of the twisted model. Equivalently, one may say that
the boundary conditions are required to preserve one complex
supercharge (in the untwisted theory). But since the untwisted model
has $(4,4)$ supersymmetry, there also exist branes which preserve
two complex supercharges. Such branes are compatible with more than
one topological twist. In this paper we will encounter
$(B,B,B)$-branes, which are B-branes in complex structures $I,J,K,$
as well as $(B,A,A)$-branes which are of B-type in complex
structure $I$ and of $A-$type in the other two complex structures.
\section{Holomorphic-topological twist of ${\mathcal N}=4$ SYM}\label{twist}
Let us recall how one can twist ${\mathcal N}=4$ gauge theory on
$\Sigma\times C$ into a holomorphic-topological theory \cite{htft}
which upon reduction gives the B-model on $\Sigma$ with target
${\mathcal M}_{Higgs}(G,C)$. It is convenient to treat ${\mathcal N}=4$ SYM as ${\mathcal N}=2$
SYM with a hypermultiplet in the adjoint representation. The theory
has $SU(2)_R\times U(1)_N\times U(1)_B$ symmetry. The holonomy group
is $U(1)_C\times U(1)_\Sigma$. One twists the $U(1)_C$ action by a
suitable linear combination of $U(1)_R\subset SU(2)_R$ and $U(1)_B$,
and twists $U(1)_\Sigma$ by $U(1)_N$.
The resulting field theory has the following bosonic fields: the
gauge field $A$, the adjoint Higgs field $\varphi=\Phi_w dw\in
K_\Sigma\otimes {\rm ad}(E)$, the adjoint Higgs field $q=q_{\bar z} d{\bar z}\in
{\bar K}_C\otimes {\rm ad}(E),$ and the adjoint Higgs field ${{\tilde q}}\in {\rm ad}(E)$. Here
$K_\Sigma$ and $K_C$ are the pull-backs of the canonical line
bundles of $\Sigma$ and $C$ to $\Sigma\times C$. We also define
$\Phi_{\bar w}=\Phi_w^\dag$ and $q_z=q_{\bar z}^\dag$.
The fermionic fields are the ``gauginos''
$\lambda_w,{\bar\lambda}_{\bar w},\lambda_z, {\bar\lambda}_z, \lambda_{{\bar z} w},
{\bar\lambda}_{{\bar z} {\bar w}}, \lambda_{w{\bar w}},{\bar\lambda}_{w{\bar w}}$ and the
``quarks'' $\psi_{\bar w}, {\overline{\chi}}_w, \psi_{\bar z}, {\overline{\chi}}_{\bar z}, \chi_{z{\bar w}},
{\overline{\psi}}_{zw}, \chi_{z{\bar z}}, {\overline{\psi}}_{z{\bar z}}.$ The fermions are all in the
adjoint representation.
The field content depends on complex structures of $C$ and $\Sigma$.
The dependence on the complex structure on $C$ is inescapable, but
the dependence on the complex structure on $\Sigma$ is merely an
artifact of our way of presentation. It is possible to combine
fields with holomorphic and anti-holomorphic indices into
form-valued fields on $\Sigma$ so that the dependence on the complex
structure on $\Sigma$ is eliminated \cite{htft}.
In order to specify the theory completely, one has to pick a BRST
operator. The twisted theory has two BRST operators $Q_\ell$ and
$Q_r$ which square to zero and anticommute, so the most general BRST
operator is
$$
Q=u Q_\ell +v Q_r,
$$
where $u,v$ are homogeneous coordinates on ${\mathbb P}^1$. It is often
convenient to work with an affine coordinate $t=v/u$ taking values
in ${\mathbb C}\cup \{\infty\}$. To get a theory which is topological on
$\Sigma$ and holomorphic on $C$, one needs to assume that $u$ and
$v$ are both nonzero, i.e. $t\neq 0,\infty$ \cite{htft}.\footnote{If
$t=0$ or $t=\infty$, the twisted theory is holomorphic on both $C$
and $\Sigma$. Such a theory does not admit line operators which we
are interested in.} The precise choice of $t$ then does not matter
\cite{htft}; we let $t=i$ from now on.
The action of the twisted theory can be written as a sum of a
BRST-exact piece and a piece which is independent of the gauge
coupling $e^2$ and the $\theta$-parameter (after a rescaling of
fermions). Therefore semiclassical computations in the twisted
theory are exact \cite{htft}. We will use this important fact
throughout the rest of the paper.
The path-integral of the twisted theory localizes on $Q$-invariant
field configurations. The conditions of $Q$-invariance imply, among
other things, that the complex connection ${\mathcal A}=A+i\varphi+i\varphi^\dag$
has a curvature ${\mathcal F}$ whose only nonzero components are along
$\Sigma$. In the limit when the volume of $C$ goes to zero, the
equations simplify and imply the Hitchin equations for $A_z$ and
$q_z$
$$
F_{z{\bar z}}-i[q_z,q_{\bar z}]=0,\quad D_{\bar z} q_z=0
$$
as well as
$$
D_{\bar z} {{\tilde q}}^\dag=0,
$$
which implies that ${{\tilde q}}$ is generically zero. Thus in this limit the
field theory reduces to a sigma-model with target
${\mathcal M}_{Higgs}(G,C)$. There are further equations which say that this
sigma-model is a B-model in the natural complex structure (the one
which we denote $I$).
The Montonen-Olive duality, as usually defined, maps $G$ to ${{}^LG}$
and maps \cite{KW,htft} the BRST operator at $t=i$ to another BRST
operator with
$$
{{}^Lt}=\frac{|\tau|}{\tau}t.
$$
But since the phase of $t$ can be changed by an automorphism of the
theory (an R-symmetry transformation), one can redefine the
Montonen-Olive duality so that it leaves $t$ invariant. We adopt
this definition of Montonen-Olive duality from now on.
In this paper we mostly focus on the case when $\Sigma$ has a flat
metric. Then the twist along $\Sigma$ is a trivial operation, and
the theory can be regarded as twisted only along $C$. In the limit
${\rm vol}(C)\rightarrow 0$ it becomes equivalent to an untwisted supersymmetric
sigma-model with target ${{\mathcal M}_H}(G,C)$. Since ${{\mathcal M}_H}$ is hyperk\"ahler,
this sigma-model has ${\mathcal N}=(4,4)$ supersymmetry, i.e. it has two
left-moving and two right-moving complex supercharges, as well as
their complex conjugates. The BRST operator defined above is a
particular linear combination of these supercharges. The BRST
operator of the GL twisted theory considered in \cite{KW} is another
such linear combination (depending on a single complex parameter
$t$). Both kinds of BRST operators can be included into a more
general three-parameter family of BRST operators \cite{KW}.
\section{Wilson-'t Hooft operators in the twisted theory}\label{wh}
\subsection{Definition}
In any gauge theory one can define various loop operators: Wilson,
't Hooft, and Wilson-'t Hooft. The Wilson loop operator in
representation $R$ is usually defined as
$$
W_R(\gamma)={\rm Tr}_R\, P\exp i\int_\gamma A
$$
where $\gamma$ is a closed curve. Instead of labeling the operator
by an irreducible representation, one can label it by the orbit of
its highest weight under the Weyl group. The 't Hooft loop operator
is a disorder operator defined by the requirement that near a curve
$\gamma$ the gauge field has a singularity of a Dirac-monopole kind.
Such singularities are labeled by conjugacy classes of homomorphisms
from $U(1)$ to $G$, which is equivalent to saying that they are
labeled by orbits of the Weyl group in the coweight lattice
$\Lambda_{cw}$ of $G$. More generally, Wilson-'t Hooft operators are
labeled by Weyl orbits in the product $\Lambda_w(G)\times
\Lambda_{cw}(G)$ \cite{KWH}.
In the ${\mathcal N}=4$ SYM theory there are more possibilities for loop
operators, since one can construct them not only from gauge fields,
but also from other fields. By imposing natural symmetry
requirements (namely, the geometric symmetries and supersymmetry),
one can cut down on the number of possibilities.
In the twisted ${\mathcal N}=4$ theory we have to require that loop operators
be BRST-invariant. For $t=i$, we see that none of the components of
$A$ are BRST-invariant. But we also see that ${\mathcal A}_w=A_w+i\Phi_w$ and
${\mathcal A}_{\bar w}=A_{\bar w}+i\Phi_{\bar w}$ are BRST-invariant. Hence if $\gamma$ is a
closed curve on $\Sigma$ and $p$ is a point on $C$, the Wilson
operator
$$
W_R(\gamma,p)={\rm Tr}_R\, P\exp i \int_{\gamma\times p}{\mathcal A}
$$
is BRST-invariant.
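To make the path-ordered exponential concrete, here is a toy numerical sketch (ours, not from the paper) for $G=SU(2)$ with an ordinary hermitian connection; in the twisted theory $A$ would be replaced by the complexified connection ${\mathcal A}$, but the path-ordering mechanics is identical:

```python
import numpy as np

# Toy numerical sketch (ours) of a Wilson loop trace for G = SU(2): the
# path-ordered exponential is approximated by an ordered product of
# short-segment factors (1 + i A ds) along the loop, and we take the trace
# in the fundamental representation.
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def wilson_trace(A_of_s, steps=40000):
    """Trace of P exp(i * integral of A(s) ds over [0,1]), discretized."""
    U = np.eye(2, dtype=complex)
    ds = 1.0 / steps
    for k in range(steps):
        U = (np.eye(2) + 1j * A_of_s((k + 0.5) * ds) * ds) @ U
    return np.trace(U)

# For the constant connection A = pi * sz the holonomy is exp(i pi sz) = -1,
# so the fundamental Wilson loop trace is -2 (up to discretization error).
W = wilson_trace(lambda s: np.pi * sz)
assert abs(W + 2) < 1e-3
```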
By MO duality, there should also be BRST-invariant 't Hooft
operators at $t=i$.\footnote{This is unlike the GL twisted theory,
where for $t=i$ only Wilson operators are BRST-invariant.} Indeed,
if $\gamma$ is given by the equation $x^1={\rm Re\hskip0.1em} w=0$ and we require
the gauge field to have a Dirac-like singularity in the
$x^1,x^2,x^3$ plane:
\begin{equation}\label{HopF}
F\sim \star_3 d\left(\frac{\mu}{2r}\right)
\end{equation}
for some $\mu\in{\mathfrak g}$, then the condition of $Q$-invariance requires
$\Phi_w$ to be singular as well:
\begin{equation}\label{Hopphi}
\Phi_w\sim \frac{\mu}{2r}.
\end{equation}
It is a plausible guess that such a disorder operator is mapped to
the Wilson operator by the MO duality.
Finally, we may consider more general Wilson-'t Hooft loop operators
which source both electric and magnetic fields. Roughly speaking,
they are products of Wilson and 't Hooft operators. To define a WH
loop operator more precisely, let it be localized at $x^{1,2,3}=0$.
Then we require the components of the curvature in the $123$ plane
to have a singularity as in (\ref{HopF}), the real part of $\Phi_w$
to have a singularity as in (\ref{Hopphi}), and insert into the
path-integral a factor
$$
{\rm Tr}_{R}\, P\exp i \int_{\gamma\times p}{\mathcal A}
$$
where $R$ is an irreducible representation of the stabilizer
subgroup $G_\mu\subset G$ of $\mu$. This definition makes sense
because in the infinitesimal neighborhood of $\gamma$ the component
of ${\mathcal A}$ tangent to $\gamma$ must lie in the centralizer subalgebra
${\mathfrak g}_\mu\subset{\mathfrak g}$ of $\mu$ \cite{KWH}. One may describe $R$ by
specifying its highest weight $\nu$, which is defined up to an
action of the subgroup of the Weyl group which preserves $\mu$. The
net result is that the WH operator is labeled by a pair
$(\mu,\nu)\in \Lambda_{cw}(G)\times \Lambda_w(G)$ defined up to the
action of the Weyl group ${\mathcal W}$. We will denote the abelian group
$\Lambda_{cw}(G)\times \Lambda_w(G)$ by ${\widehat\Lambda}(G)={\widehat\Lambda}({{}^LG})$. The WH
operator labeled by the Weyl-equivalence class of $(\mu,\nu)$ will
be denoted $WT_{\mu,\nu}(\gamma,p)$.
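As a toy illustration (ours) of this labeling set in rank one, take pairs $(\mu,\nu)\in{\mathbb Z}\times{\mathbb Z}$ with ${\mathcal W}={\mathbb Z}_2$ acting by simultaneous negation:

```python
# Toy enumeration (ours) of the labeling set for Wilson-'t Hooft operators in
# rank one: pairs (mu, nu) in Z x Z modulo the Weyl group Z/2 acting by
# simultaneous negation.
def weyl_orbit(mu, nu):
    """Canonical representative of the orbit {(mu, nu), (-mu, -nu)}."""
    return max((mu, nu), (-mu, -nu))

labels = {weyl_orbit(m, n) for m in range(-1, 2) for n in range(-1, 2)}
# (0, 0) is Weyl-invariant; the remaining eight pairs form four 2-element orbits.
assert len(labels) == 5
```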
There is a natural action of the S-duality group on ${\widehat\Lambda}(G)$. It is
a natural conjecture that this is how the S-duality group acts on
the corresponding WH operators. One of the goals of this paper is to
test this conjecture.
Note that all our loop operators are localized at points on $C$. If
we take the volume of $\Sigma$ to be small compared to that of $C$,
then the twisted theory reduces to an effective 2d field theory on
$C$, and in this effective 2d field theory our loop operators behave
in all ways like local operators. There are no BRST-invariant
operators which are localized on loops in $C$.
\subsection{Basic properties}
As explained in \cite{htft}, in the twisted theory all correlators
depend holomorphically on coordinates on $C$ and are invariant under
arbitrary diffeomorphisms of $\Sigma$. This puts strong constraints
on the correlators of WH loop operators. We will be mostly
interested in the Operator Product Expansion of WH loop operators.
That is, we will assume that $\Sigma$ is flat, pick a pair of points
$p,p'\in C$ and a pair of straight lines $\gamma$ and $\gamma'$ on
$\Sigma$ and consider a pair of WH operators localized on
$\gamma\times p$ and $\gamma'\times p'$. So far, we have assumed
that the curve on which the WH operator is localized is closed; if
we want to maintain this, we may assume that $\Sigma$ locally looks
like a cylinder with a flat metric; since the theory is
diffeomorphism-invariant along $\Sigma$, the only thing that matters
is that both $\gamma$ and $\gamma'$ are closed and isotopic to each
other. One may also consider WH operators localized on lines rather
than closed curves; we will return to this possibility later.
Consider now a correlator involving these WH loop operators. If
$\gamma$ and $\gamma'$ do not have common points, then there is no
singularity as one takes the limit where $p$ coincides with $p'$. If
$z$ is a local complex coordinate on $C$ centered at $p$, then the
correlator is a holomorphic function of $z(p')$ in the neighborhood
of zero. By continuity, this implies that even when $\gamma$ and
$\gamma'$ coincide, the correlator is a holomorphic function of $z$.
Therefore the Operator Product of any two WH operators is
nonsingular. More generally, this conclusion holds for any two
BRST-invariant loop operators in the twisted theory which are
localized on $C$.
Given this result, we can define a commutative algebra of loop
operators, simply by taking the coincidence limit. For Wilson and 't
Hooft loop operators this result can be more easily obtained using
the GL twisted theory of \cite{KW}, but here we see that it holds
for general loop operators in the holomorphic-topological twisted
theory.
At this stage it is natural to ask whether the subspace spanned by
WH operators is closed with respect to the operator product. More
optimistically, one could hope that WH operators form a basis in the
space of loop operators in the twisted theory, and therefore the
vector space spanned by them is automatically closed with respect to
the operator product. We will argue below that both statements are
true, {\it if only closed loops are considered}.
\subsection{Line versus loop operators}
As emphasized in \cite{KW}, one may also consider analogs of Wilson
and 't Hooft operators localized on open curves instead of loops.
The endpoints of a curve must lie on the boundaries of the
four-manifold. Such ``operators'' are called line operators in
\cite{KW}. We put the word ``operators'' in quotes because they do
not act on the Hilbert space of the theory; rather, they alter the
definition of the Hilbert space of the theory.
To be concrete, suppose $\Sigma={\mathbb R}\times X_1$, where $X_1$ is
either $S^1$ or an interval $I$. We regard ${\mathbb R}$ as the time
direction. Consider a Wilson line operator $W_R(\gamma,p)$, where
$\gamma\subset \Sigma$ has the form ${\mathbb R}\times q$ for some $q\in
X_1$. Insertion of such a Wilson line operator means that the
Hilbert space of the gauge theory has to be modified: instead of
gauge-invariant wave-functions on the space of fields on $X_1\times
C$, one has to consider gauge-invariant elements of the tensor
product of the space of all wave-functions and the representation
space of $R$. Similarly, when we insert an open 't Hooft operator,
we have to change the class of fields on which the wave-functions are
defined.
While loop operators form a commutative algebra, line operators form
a category. A morphism between line operators ${\mathsf A}$ and ${\mathsf B}$ is a
local BRST-invariant operator inserted at a junction of ${\mathsf A}$ and
${\mathsf B}$. Composition of morphisms is defined in an obvious way. There
is also an obvious structure of a complex vector space on the space
of morphisms and an obvious way to define a sum of line operators.
Thus line operators form an additive ${\mathbb C}$-linear category.
The distinction between line and loop operators has played some role
in \cite{KW} and it is even more important in the context of the
holomorphic-topological theory, as we will see below.
It is often convenient to relax the condition that local operators
inserted at the junction of two line operators be BRST-invariant,
and define the space of morphisms to be the space of all local
operators. This space is graded by the ghost number and is acted
upon by the BRST-differential. Thus the set of morphisms between any
two line operators has the structure of a complex of vector spaces,
and composition of morphisms is compatible with the differentials.
That is, line operators form a differential graded category
(DG-category). This viewpoint is convenient for keeping track of the
dependence of various correlators on parameters, such as the
insertion point on $C$ (see below).
There is one more important operation for line operators in the
twisted theory: an associative tensor product. In other words, the
category of line operators is a monoidal category. The product is
defined by taking two line operators ``side-by-side'' on $\Sigma$
and ``fusing'' them together. The product of line operators need not
be commutative, in general. But for Wilson-'t Hooft line operators
it is commutative because of a discrete symmetry: parity reversal.
Indeed, consider the twisted gauge theory on ${\mathbb R}\times{\mathbb R}\times C$,
where we regard the first copy of ${\mathbb R}$ as time and the second one
as space. It is easy to check that spatial reflection $x\rightarrow -x$ is a
symmetry of the theory.\footnote{This is particularly obvious from a
2d viewpoint, as any B-model is parity-invariant.} Furthermore,
Wilson-'t Hooft line operators are invariant under this symmetry.
Therefore, we can change the order of WH line operators on the
spatial line by a symmetry transformation.
\subsection{Remarks on TFT in arbitrary dimension}
A similar discussion applies to the GL twisted theory considered in
\cite{KW}, and in fact to any topological field theory in any number
of dimensions. That is, in any TFT line operators form a monoidal
${\mathbb C}$-linear additive category.
In the case of a TFT in dimension $d>3$ the fusion product is
necessarily symmetric, because there is no diffeomorphism-invariant
way to order line operators. In dimension $d=3$ there may be
nontrivial braiding, so in general the category of line operators is
braided rather than symmetric. A well-known example is the
Chern-Simons theory \cite{WittenCS}, where the category of Wilson
line operators is equivalent to the category of representations of a
quantum group. In dimension $d=2$ the monoidal structure need not be
either symmetric or braided, in general.
In this paper we are dealing with a holomorphic-topological field
theory rather than a TFT, and the ``topological'' part of the
manifold is two-dimensional. From the abstract viewpoint the
situation is very much like in a 2d TFT, because every line operator
in the twisted gauge theory on $\Sigma\times C$ can be regarded as a
line operator in the B-model on $\Sigma$ with target
${\mathcal M}_{Higgs}(G,C)$. But the converse is not necessarily true,
because line operators in gauge theory are local on $C$, while line
operators in the B-model on $\Sigma$ are not subject to this
constraint. (Below we will construct a large class of examples of line
operators in the B-model which do not lift to ordinary line
operators in the gauge theory.) To enforce locality, one has to keep
track of the dependence of all correlators on the insertion point
$p\in C$ of the line operator. To put it differently, if we denote
by ${\mathsf V}(q,p)$ the Hilbert space of the twisted theory on ${\mathbb R}\times
X_1 \times C$ with an insertion of a line operator at $q\times p\in
X_1\times C$, then for fixed $q$ this family of vector spaces can be
thought of as a holomorphic vector bundle ${\mathsf V}_q$ over $C$.
Similarly, spaces of morphisms between different line operators can
be thought of as holomorphic vector bundles over $C$.
To make precise the idea of a ``holomorphically varying space of
morphisms'', it is very convenient to take the viewpoint that the
space of morphisms is a differential graded vector space, i.e. a
complex. Let ${\mathsf W}(p)$ be the vector space of all (not necessarily
BRST-invariant) local operators inserted at the junction of two line
operators ${\mathsf A}$ and ${\mathsf B}$, both located at a point $p\in C$. The
space ${\mathsf W}(p)$ is graded by the ghost number and carries the
BRST-differential $Q$. The complexes ${\mathsf W}(p)$ fit into a complex of
smooth vector bundles ${\mathsf W}$ on $C$. Let us tensor this complex of
vector bundles with the Dolbeault complex of $C$. The resulting
space of sections is acted upon by both $Q$ and ${\bar\partial}$ and
carries all the information about the dependence of morphisms on
$p$. ``Holomorphic dependence'' means simply that ${\bar\partial}$ is
$Q$-exact, and therefore acts trivially on the cohomology of $Q$.
We can put our discussion of line operators in a more general
perspective by noting that $n$-dimensional TFTs form an $n$-category.
1-Morphisms in this $n$-category are codimension-1 walls separating
a pair of TFTs. We will call codimension-1 walls 1-walls, for short.
1-walls themselves form an $(n-1)$-category: 2-morphisms are
codimension-2 walls which separate different 1-walls between the
same pair of TFTs. And so on.
If we consider all 1-walls between a pair of identical TFTs, they
can be ``fused'' together. This gives a kind of monoidal structure
on the $(n-1)$-category of 1-walls. In this $(n-1)$-category there is a
unit object: the ``trivial 1-wall'' which is equivalent to no wall
at all. 2-walls living on the trivial 1-wall form a monoidal
$(n-2)$-category with a unit, and so on. Thus the line operators considered
above belong to a rather special variety: they live on a trivial
$(n-2)$-wall which lives on a trivial $(n-3)$-wall, etc. For example, in
the GL twisted theory at $t=i$ Wilson line operators form a category
which is equivalent to the category of finite-dimensional
representations of $G$. Gukov and Witten also considered nontrivial
2-walls in this theory and line operators living on such 2-walls
\cite{GW}.
Boundary conditions for an $n$-dimensional TFT also fit into this
general scheme: they are 1-morphisms between a given TFT and an
``empty'' TFT. For this reason they form an $n-1$ category (which is
not monoidal, in general). A special case of this is the well-known
fact that D-branes in a 2d TFT form a category.
In connection with possible 2-dimensional generalizations of the
Geometric Langlands Duality, it would be interesting to understand
the 3-category of boundary conditions for the GL twisted ${\mathcal N}=4$
SYM, as well as the monoidal 3-category of 1-walls in the same
theory. The latter acts on the former. These 3-categories appear to
be suitable 2d generalizations of the derived category of
${\mathcal M}_{flat}(G,C)$ and the representation category of $G$,
respectively.
\subsection{Deformations of line operators}
In the case of the GL twisted theory at $t=i$ the product of two
parallel Wilson loop operators $W_{R_1}$ and $W_{R_2}$ is a Wilson
loop operator $W_{R_1\otimes R_2}$. This means that Wilson loop
operators form a closed algebra, which happens to be commutative and
associative. Wilson loop operators corresponding to irreducible
representations of $G$ form a basis in this algebra. A similar
statement holds for Wilson line operators: the subcategory of Wilson
line operators is closed with respect to the monoidal structure,
i.e. it is a symmetric monoidal category, and any Wilson line
operator is isomorphic to a direct sum of Wilson line operators
corresponding to irreducible representations of $G$. By S-duality,
similar statements hold for 't Hooft operators in the GL twisted
theory (for $t=1$).
At $t=i$ any line operator in the GL-twisted theory is isomorphic to
a Wilson line operator for some $R$ (which can be reducible). One
way to see it is to first classify line operators with the right
bosonic symmetries in the untwisted theory (this has been done in
\cite{KWH}) and then impose the condition of BRST-invariance. A
similar statement holds for 't Hooft operators at $t=1$.
One consequence of this is that there are no infinitesimal
deformations of Wilson line operators in the GL-twisted theory. This
can also be checked directly. From the mathematical viewpoint,
infinitesimal deformations of a line operator ${\mathsf A}$ are classified
by degree-1 cohomology of the complex ${\rm Hom}({\mathsf A},{\mathsf A})$. One can check
that this cohomology is trivial by considering BRST-invariant local
operators which can be inserted at a point of the Wilson line ${\mathsf A}$.
For line operators in the holomorphic-topological twisted theory the
situation is more complicated. The difficulty is that twisting
breaks $SO(3)$ rotational symmetry used in \cite{KWH} down to
$U(1)$. A generic Wilson-'t Hooft operator (i.e. one which is not purely
electric or purely magnetic) also preserves only the rotational symmetry
in the $z$-plane (which is present when $C\simeq {\mathbb C}$).
The simplest question one can ask in this regard is whether there
are infinitesimal deformations of a Wilson-'t Hooft line operator.
One obvious deformation arises from varying the insertion point on
$C$. For a Wilson line $W_R(p)$, it is easy to exhibit the degree-1
endomorphism corresponding to such a deformation. It is a fermionic
field
$$
\Gamma_z=\lambda_z+{\bar\lambda}_z.
$$
It is BRST-invariant and can be inserted into a Wilson line in any
representation $R$. The corresponding infinitesimal deformation of
$W_R(p)$ is obtained as follows. First, we apply the descent
procedure to $\Gamma_z$, i.e. look for a boson $\Delta_z$ such that
$$
{\mathcal D}_\Sigma \Gamma_z=\delta \Delta_z.
$$
Note the covariant differential on the left-hand side. Usually, the
descent procedure is applied to gauge-invariant operators, in which
case one uses the ordinary de Rham differential. In our case, the operator
becomes gauge-invariant only after insertion into a Wilson line, and
this requires replacing the ordinary differential with the covariant
one. The descent equation is solved by
$$
\Delta_z={\mathcal F}_{zw} dw+{\mathcal F}_{z{\bar w}} d{\bar w}.
$$
The deformed Wilson operator is
$$
{\rm Tr}_R P\exp\left(i\int {\mathcal A}+\Delta_z \epsilon^z\right)
$$
where $\epsilon^z$ is an infinitesimal parameter. It is easy to see that
this is the same as a Wilson operator evaluated at a nearby point,
shifted from $p$ by a vector $\epsilon^z\partial_z.$
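To make the last statement explicit, one can expand the shifted Wilson operator to first order in $\epsilon^z$. Shifting the insertion point changes the connection restricted to the line by (up to conventions for the field strength)
$$
\delta{\mathcal A}=\epsilon^z\partial_z{\mathcal A}_w\,dw+\epsilon^z\partial_z{\mathcal A}_{\bar w}\,d{\bar w}
=\epsilon^z\left({\mathcal F}_{zw}\,dw+{\mathcal F}_{z{\bar w}}\,d{\bar w}\right)+{\mathcal D}_\Sigma\left(\epsilon^z{\mathcal A}_z\right).
$$
The second term is an infinitesimal gauge transformation, which does not affect the trace of the holonomy around a closed loop; the remaining piece is precisely $\Delta_z\epsilon^z$.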
Similarly, given any two line operators and a degree-$1$ morphism
between them, one can construct their ``bound state'', which is a
deformation of the direct sum of the two line operators. In
homological algebra, this is known as the mapping cone construction.
In section 6.1 we will encounter examples of the mapping cone
construction involving less obvious deformations of Wilson-'t Hooft
line operators, which do not correspond to changing the insertion point on $C$.
\subsection{Line operators and K-theory}
The existence of nontrivial deformations suggests that the category
of Wilson-'t Hooft line operators may not be closed with respect to
the tensor product. But we will argue below that the space of
Wilson-'t Hooft {\it loop} operators is closed with respect to the
product. Therefore it is important to understand the relationship
between loop and line operators. We would like to argue here that
loop operators should be thought of as elements of the $K^0$-group
of the category of line operators. The closure of the space of
Wilson-'t Hooft loop operators under operator product suggests that
these operators form a basis for the $K^0$-group of the category of
line operators, but we will not try to prove this here.
First, let us recall the definition of the $K^0$-group of a
DG-algebra ${\mathcal A}$. A finitely-generated projective DG-module over
${\mathcal A}$ is any DG-module which can be obtained from free DG-modules of
finite rank using the following three operations: shift of grading,
cone, and taking a direct summand. Consider a free abelian group
generated by the isomorphism classes of finitely-generated
projective DG-modules and quotient it by the relations
$$
M\sim (-1)^n M[n]
$$
for any integer $n$, and
$$
M_1\oplus M_2\sim M
$$
for any exact sequence of DG-modules
$$
0\rightarrow M_1\rightarrow M\rightarrow M_2\rightarrow 0.
$$
This quotient group is $K^0({\mathcal A})$.
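To see how coarse this invariant is, consider the simplest example ${\mathcal A}={\mathbb C}$, regarded as a DG-algebra with zero differential. Finitely-generated projective DG-modules are then bounded complexes of finite-dimensional vector spaces, and the two relations imply that the class of a complex $M$ is determined by its Euler characteristic:
$$
[M]=\chi(M)\,[{\mathcal A}],\qquad \chi(M)=\sum_n (-1)^n\dim H^n(M),
$$
so that $K^0({\mathbb C})\simeq{\mathbb Z}$. In particular, complexes with the same Euler characteristic define the same $K^0$-class.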
The definition of the $K^0$-group of a small DG-category ${\mathfrak A}$ is
similar.\footnote{A small category is a category whose objects are
members of a set rather than a class. We sincerely hope that line
operators in a twisted gauge theory form a set.} The idea is to
think about a category as an ``algebra with several objects''. A
DG-module ${\mathfrak M}$ over a small DG-category ${\mathfrak A}$ is a DG-functor
from ${\mathfrak A}$ to the DG-category of complexes of vector spaces. In
more detail, it is a collection of DG-modules ${\mathfrak M}({\mathsf A})$ over the
DG-algebras ${\rm Hom}_{\mathfrak A}({\mathsf A},{\mathsf A})$ for all ${\mathsf A}\in Ob({\mathfrak A})$ and
DG-morphisms from the complex ${\rm Hom}_{\mathfrak A}({\mathsf A},{\mathsf B})$ to the complex
${\rm Hom}({\mathfrak M}({\mathsf A}),{\mathfrak M}({\mathsf B}))$ for any ${\mathsf A},{\mathsf B}\in Ob({\mathfrak A})$. These
data should satisfy some fairly obvious compatibility conditions.
The analog of a free rank-1 DG-module is a presentable DG-module
${\mathfrak M}_{\mathsf B}$ corresponding to an object ${\mathsf B}$ of ${\mathfrak A}$. Given any
${\mathsf B}\in Ob({\mathfrak A})$, we let ${\mathfrak M}_{\mathsf B}({\mathsf A})={\rm Hom}_{\mathfrak A}({\mathsf B},{\mathsf A})$. It has
an obvious DG-module structure over the DG-algebra
${\rm Hom}_{\mathfrak A}({\mathsf A},{\mathsf A})$. A finitely-generated projective DG-module over
${\mathfrak A}$ is a DG-module which is obtained from presentable modules by
the operations of shift, cone, and taking a direct summand. To get
the $K^0$-group of ${\mathfrak A}$, we consider the free abelian group
generated by isomorphism classes of finitely-generated projective
DG-modules and quotient it by the relations coming from shift of
grading and short exact sequences of DG-modules.
Now let ${\mathfrak A}$ be the DG-category of line operators. A presentable
DG-module corresponding to a line operator ${\mathsf B}$ is a module
${\mathfrak M}_{\mathsf B}$ such that ${\mathfrak M}_{\mathsf B}({\mathsf A})$ is the space of local operators
which can be inserted at the joining point of line operators ${\mathsf B}$
and ${\mathsf A}$. There is a special line operator: the Wilson line
corresponding to the trivial representation of $G$. It is a unit
object with respect to the monoidal structure on ${\mathfrak A}$. The space
of local operators which can be inserted at such a trivial line
operator is the same as the space of ``bulk'' local operators.
A loop operator is a line operator with no insertions of local
operators and with the endpoints identified. A convenient geometry
to study a loop operator ${\mathsf A}$ is to take $\Sigma=S^1_\tau\times
S^1_\sigma$, where $S^1_\tau$ is regarded as the compactified
Euclidean time and $S^1_\sigma$ is the compactified spatial
direction. We consider an arbitrary number of insertions of line
operators, one of which is our ${\mathsf A}$. All line operators are taken
to ``run'' along the $\tau$ direction and are located at fixed
$\sigma$. We also allow arbitrary local insertions at all line
operators except ${\mathsf A}$. This includes bulk local operator
insertions, which may be regarded as local operators sitting on the
trivial line operator. If all such correlators are unchanged when
one replaces ${\mathsf A}$ with another loop operator ${\mathsf A}'$, it is natural
to identify ${\mathsf A}$ and ${\mathsf A}'$. We claim that this happens if
${\mathfrak A}$-modules ${\mathfrak M}_{\mathsf A}$ and ${\mathfrak M}_{{\mathsf A}'}$ are in the same $K^0$
class.
To see this, let us reformulate the set-up slightly. First of all,
we can lump all line operators except ${\mathsf A}$ and all bulk local
operators into a single line operator ${\mathsf B}$ with a single insertion.
It is easy to see that the Hilbert space of the twisted gauge theory
on ${\mathbb R}\times S^1_\sigma\times C$ is the homology of the complex
${\rm Hom}_{\mathfrak A}({\mathsf A},{\mathsf B})$. Equivalently, we can say that it is the
homology of ${\mathfrak M}_{\mathsf A}({\mathsf B})$. The local operator inserted into ${\mathsf B}$
can be thought of as an endomorphism $T$ of the complex
${\mathfrak M}_{\mathsf A}({\mathsf B})$, and the correlator is the supertrace of $T$. It is
obvious that shifting the grading of ${\mathfrak M}_{\mathsf A}$ by $n$ changes the
supertrace by a factor $(-1)^n$. The other equivalence relation has
to do with short exact sequences of ${\mathfrak A}$-modules. If ${\mathfrak M}_{\mathsf A}$ is
the middle term of a short exact sequence
$$
0\rightarrow {\mathfrak M}_1\rightarrow {\mathfrak M}_{\mathsf A}\rightarrow {\mathfrak M}_2\rightarrow 0,
$$
then we have a short exact sequence of complexes
$$
0\rightarrow {\mathfrak M}_1({\mathsf B})\rightarrow {\mathfrak M}_{\mathsf A}({\mathsf B}) \rightarrow {\mathfrak M}_2({\mathsf B})\rightarrow 0
$$
and the corresponding long exact sequence in homology. The
endomorphism $T$ of ${\mathfrak M}_{\mathsf A}({\mathsf B})$ induces an endomorphism $\mathcal T$ of
this long exact sequence, regarded as a complex of vector spaces. We
may assume that both $T$ and $\mathcal T$ are of degree zero, since
otherwise all supertraces vanish for trivial reasons. Now the
statement that the supertrace of $T$ depends only on the $K^0$-class
of ${\mathsf A}$ is equivalent to the statement that the supertrace of
$\mathcal T$ vanishes. But this is an immediate consequence of
exactness: if $d$ denotes the differential in the long exact
sequence, and ${\mathcal R}$ denotes the sum of all terms in the long exact
sequence regarded as a graded vector space, then by exactness one
can write
$$
\mathcal T =d{\mathcal P}+{\mathcal P} d,
$$
for some linear map ${\mathcal P}:{\mathcal R}\rightarrow{\mathcal R}$ of degree $-1$. The supertrace
of the anticommutator of two odd endomorphisms of a graded vector
space obviously vanishes.
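Indeed, for homogeneous endomorphisms the supertrace satisfies the graded cyclicity property ${\rm Str}(AB)=(-1)^{|A||B|}{\rm Str}(BA)$. Since $d$ and ${\mathcal P}$ are both odd, this gives
$$
{\rm Str}\,{\mathcal T}={\rm Str}(d{\mathcal P})+{\rm Str}({\mathcal P} d)={\rm Str}(d{\mathcal P})-{\rm Str}(d{\mathcal P})=0.
$$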
\subsection{Line operators as functors on branes}
We have seen that in the twisted theory line operators form a
monoidal ${\mathbb C}$-linear category, or, better, a monoidal DG-category.
As in \cite{KW}, it is useful to think of objects of this category
as functors on the category of B-branes on ${\mathcal M}_{Higgs}(G,C)$. This
makes the monoidal structure more obvious: it is simply given by the
composition of functors.\footnote{Alternatively, one can regard a
B-brane as a 1-morphism between an empty theory and the B-model on
${\mathcal M}_{Higgs}(G,C)$, regarded as objects of the 2-category of 2d
TFTs, and one can regard a line operator as a 1-morphism from the
B-model to itself. Then the action of the line operator on the brane
is given by the composition of 1-morphisms.}
It is particularly simple to describe the functor corresponding to a
Wilson line operator $W_R(p)$. It tensors every B-brane on
${\mathcal M}_{Higgs}(G,C)$ by a holomorphic vector bundle $R({\mathcal E}(p))$, where
${\mathcal E}(p)$ is a restriction to $p\in C$ of the universal $G$-bundle
${\mathcal E}$ on ${\mathcal M}_{Higgs}(G,C)\times C$ \cite{KW}.
The functor corresponding to an 't Hooft operator is a Hecke
transformation, as explained in \cite{KW}. Let us recall what a
Hecke transformation is in the case $G=U(N)$. Instead of a principal
$U(N)$-bundle, it is convenient to work with a holomorphic vector
bundle $E$ associated via the tautological $N$-dimensional
representation of $U(N)$. A Hecke transformation of $E_-=E$ at a
point $p\in C$ is another holomorphic vector bundle $E_+$ of the
same rank which is isomorphic to $E_-$ on $C\backslash p$. One can
always choose a basis of holomorphic sections $f_1,\ldots,f_N$ of
$E_-$ near $p$ so that $E_+$ is locally generated by
$$
s_1=z^{-\mu_1} f_1,\ldots, s_N=z^{-\mu_N} f_N,
$$
where $\mu_1,\ldots,\mu_N$ are integers. The integers
$\mu_1,\ldots,\mu_N$ are well-defined modulo permutation and can be
thought of as a coweight of $U(N)$ modulo the action of the Weyl group.
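The simplest illustration is $G=U(1)$, where $E_-$ is a line bundle and $\mu\in{\mathbb Z}$. A local generating section $f$ of $E_-$ is replaced by $s=z^{-\mu}f$, i.e. $E_+$ is the sheaf of sections of $E_-$ with a pole of order at most $\mu$ at $p$:
$$
E_+=E_-\otimes{\mathcal O}(\mu\,p),\qquad \deg E_+=\deg E_-+\mu.
$$
In this abelian case the Hecke transformation simply shifts the degree of the line bundle, and the space of modifications with fixed $\mu$ is a single point.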
For fixed $E_-$ and $\mu$, the space of allowed $E_+$ is a
finite-dimensional submanifold ${\mathcal C}_\mu$ of the infinite-dimensional
affine Grassmannian $GL(N,{\mathbb C}((z)))/GL(N,{\mathbb C}[[z]])$, where
${\mathbb C}((z))$ is the field of formal Laurent series and ${\mathbb C}[[z]]$ is
the ring of formal Taylor series. Specifically, ${\mathcal C}_\mu$ is the
orbit of the matrix
$$
Z_\mu(z)={\rm diag}(z^{-\mu_1},\ldots,z^{-\mu_N})
$$
under the left action of $GL(N,{\mathbb C}[[z]])$. This describes how 't
Hooft transformations act on structure sheaves of points on
${\mathcal M}(G,C)\subset {\mathcal M}_{Higgs}(G,C)$. One can similarly define the
transformation of a more general point with a nontrivial Higgs
field, see \cite{KW} for details. One can also define how 't
Hooft/Hecke operators act on more general objects of the category of
B-branes, but we will not need this here.
For a general gauge group $G$, the situation is similar. One defines
the affine Grassmannian $Gr_G$ as the quotient $G((z))/G[[z]]$,
where $G((z))$ is the group of $G_{\mathbb C}$-valued Laurent series and
$G[[z]]$ is the group of $G_{\mathbb C}$-valued Taylor series. $Gr_G$ is a
union of Schubert cells ${\mathcal C}_\mu$ labeled by the elements of
$\Lambda_{cw}(G)/{\mathcal W}$. For a fixed coweight $\mu$ the space of Hecke
transformations of a holomorphic $G$-bundle $E_-$ is the
corresponding Schubert cell ${\mathcal C}_\mu$.
The functor corresponding to a general Wilson-'t Hooft operator is a
combination of a Hecke transformation and tensoring with a certain
holomorphic vector bundle on ${\mathcal C}_\mu$. For simplicity, let us only
consider the case when the initial B-brane is a point $E_-$ of
${\mathcal M}(G,C)\subset {\mathcal M}_{Higgs}(G,C)$. Recall that the ``electric''
part of the Wilson-'t Hooft operator can be described by a
representation $R$ of the group $H$ which is the stabilizer subgroup
of the coweight $\mu$ (under the adjoint representation). Clearly,
the electric degree of freedom will live in some vector bundle over
${\mathcal C}_\mu$. This bundle is associated via $R$ to a certain principal
$H$-bundle over ${\mathcal C}_\mu$.
To determine this bundle, note that over ${\mathcal C}_\mu$ there is a
principal $G$-bundle whose fiber can be identified with the fiber of
$E_+$ over $z=0$. A formal definition, in the case $G=U(N)$, is as
follows. ${\mathcal C}_\mu$ can be thought of as the set of equivalence
classes of matrix functions of the form
$$
F(z) Z_\mu(z) G(z), \quad F(z), G(z) \in GL(N,{\mathbb C}[[z]]), \quad
Z_\mu(z)={\rm diag}(z^{-\mu_1},\ldots, z^{-\mu_N})
$$
under the right action of $GL(N,{\mathbb C}[[z]])$. Let us now replace
$GL(N,{\mathbb C}[[z]])$ with its subgroup $GL_0(N,{\mathbb C}[[z]])$ consisting of
matrix functions which are identity at $z=0$. The set of equivalence
classes of matrices $F(z)Z_\mu(z)G(z)$ under the right
$GL_0(N,{\mathbb C}[[z]])$ action is clearly a principal $G$-bundle
${\mathcal P}_\mu$ over ${\mathcal C}_\mu$. This $G$-bundle has a reduction to a
principal $P$-bundle ${\mathcal Q}_\mu$, where $P$ is the parabolic subgroup
whose quotient by its unipotent radical is $H_{\mathbb C}$. (This
reflects the fact that the gauge group is broken down to $H$ near
$z=0$). Explicitly, ${\mathcal Q}_\mu$ consists of
$GL_0(N,{\mathbb C}[[z]])$-equivalence classes of matrix functions
$F(z)Z_\mu(z)G(z)$ such that $G(0)\in P$. The group $H$ acts by
right multiplication. The $H_{\mathbb C}$-bundle we are after is the
quotient of ${\mathcal Q}_\mu$ by the unipotent radical of $P$.
In the next section, we will discuss in detail Wilson-'t Hooft line
operators for $G=PSU(2)$; as a preparation, let us describe the
relevant vector bundles over ${\mathcal C}_\mu$ in the case when $\mu$ is the
smallest nontrivial coweight. The Schubert cell ${\mathcal C}_\mu$ in this
case is simply ${\mathbb P}^1=PSU(2)/U(1)=PSL(2,{\mathbb C})/B$, where $B$ is the
Borel subgroup of $G_{\mathbb C}=PSL(2,{\mathbb C})$. The $B$-bundle in question is
simply the tautological bundle $G_{\mathbb C}\rightarrow G_{\mathbb C}/B$, and the $H=U(1)$
bundle is the Hopf bundle. The coweight (resp. weight) lattice of
$PSL(2,{\mathbb C})$ is isomorphic to the lattice of integers (resp. even
integers). The electric degree of freedom of a Wilson-'t Hooft line
operator with $\mu=1$ and $\nu \in 2{\mathbb Z}$ by definition takes
values in the fiber of the line bundle $L$ associated with the Hopf
bundle via a $U(1)$ representation of charge $\nu$. Since the Hopf
bundle is the circle bundle of the line bundle ${\mathcal O}(-1)$ over
${\mathbb P}^1,$ we conclude that $L={\mathcal O}(-\nu).$
As a rule, a functor from the derived category of $X$ to itself is
``representable'' by an object of the derived category of $X\times
X$. It is not known whether this is the case for all reasonable
functors, but it is certainly true for functors corresponding to
line operators. To show this, let $\Sigma\simeq {\mathbb R}^2$, and suppose
for simplicity that the line operator has the shape of a straight
line. Using the ``folding trick'' we can regard the field theory on
${\mathbb R}^2$ with an insertion of a straight line operator as a product
of two copies of the same field theory on a half-plane, with a
particular boundary condition. The product of two copies of a
B-model with target $X$ is a B-model with target $X\times X$, and
the boundary condition corresponds to a B-brane on $X\times X$. This
B-brane represents the functor corresponding to the line operator.
For example, in the case of Wilson line operator $W_R(p)$, the
corresponding object is the diagonal of ${\mathcal M}_{Higgs}(G,C)\times
{\mathcal M}_{Higgs}(G,C)$ equipped with the holomorphic vector bundle
$R({\mathcal E}_p)$.
The ``folding trick'' reduces the study of line operators to the
study of boundary conditions. There is a converse trick which
reduces the study of boundary conditions to the study of line
operators. Consider a B-model on a strip $I\times {\mathbb R}$ with some
boundary conditions $\alpha$ and $\beta$. We can identify the
$\alpha$ and $\beta$ boundaries and replace $I$ with $S^1$, with an
insertion of a line operator. If we think of $\beta$ as a 1-wall
between our B-model and the empty theory, and of $\alpha$ as the
1-wall between the empty theory and the B-model, then the line
operator is obtained by fusing together these 1-walls to get a
1-wall between the B-model and itself. We will call this the
``gluing trick''.
One application of the ``gluing trick'' is to produce new examples
of line operators from known boundary conditions. Given any two
B-branes on ${\mathcal M}_{Higgs}(G,C)$ we may produce a line operator in the
B-model with target ${\mathcal M}_{Higgs}(G,C)$. However, this construction
is not local on $C$ and does not produce new line operators in the
twisted gauge theory. For example, if we start with boundary
conditions for the B-model which can be lifted to the gauge theory,
then ``gluing'' them produces a 3-wall in the gauge theory rather
than a 1-wall. Only upon further compactification on $C$ does one
get a line operator in the 2d TFT.
We have argued above that any correlator involving a loop operator
${\mathsf A}$ and any other loop, line, or local operator depends only on
the $K^0$-class of ${\mathsf A}$. It was assumed that $\Sigma=S^1\times
S^1$. One may ask if the statement remains true if $\Sigma={\mathbb R}\times
I$ with suitable boundary conditions. Since the ``gluing trick''
replaces any pair of boundary conditions with a line operator, the
answer appears to be ``yes''. But we have to keep in mind that line
operators produced by the ``gluing trick'' are not local on $C$.
Therefore, to apply the above reasoning we need to work with a
different $K^0$-group: the $K^0$-group of the category of all line
operators in the B-model with target ${\mathcal M}_{Higgs}(G,C)$.
\subsection{The algebra of loop operators and
S-duality}\label{s:sduality}
We have argued above that loop operators form a commutative algebra.
To identify this algebra, one can use the fact that the gauge theory
becomes abelian in the infrared, if the Higgs field has a generic
expectation value (with all eigenvalues distinct). More precisely,
the gauge group is broken down to a semi-direct product of the
maximal torus $T$ of $G$ and the Weyl group ${\mathcal W}$. Loop operators in
such a theory are labeled by Weyl-invariant combinations of loop
operators in the abelian gauge theory with gauge group $T$. The
latter are labeled by electric and magnetic charges, i.e. by
elements of ${\rm Hom}(T,U(1))=\Lambda_w(G)$ and
${\rm Hom}(U(1),T)=\Lambda_{cw}(G)$. The algebra structure is also
obvious: under the operator product, electric and magnetic charges simply
add up, so the algebra of loop operators is isomorphic to the
Weyl-invariant part of the group algebra of
$\Lambda_w(G)\oplus\Lambda_{cw}(G)={\widehat\Lambda}(G)$.
This reasoning may seem suspect, because a vacuum with a particular
expectation value of a Higgs field is not BRST-invariant, and if we
try to integrate over all expectation values, we have to include
vacua where nonabelian gauge symmetry is restored. One can give a
more careful argument as follows. Let us consider again the case
where $M=\Sigma\times C$, and $\Sigma$ has a nonempty boundary. From
the viewpoint of the effective field theory on $\Sigma$, the theory
``abelianizes'' in the limit where the Higgs field $q_z$ is large
and all of its eigenvalues are distinct. The problem is that one has
to integrate over all values of $q_z$, including those where some of
the eigenvalues coincide. To argue that we can perform the
computation locally in the target space ${\mathcal M}_{Higgs}(G,C)$, recall
that in the B-model the path-integral localizes on constant maps.
Therefore if we impose a boundary condition which keeps $q_z$ away
from the dangerous region, we can be sure that the dangerous regions
of the target space will not contribute. For example, one can take a
boundary condition corresponding to a B-brane which is a generic
fiber of the Hitchin fibration. If $\partial\Sigma$ has several
components, it is sufficient to impose such a boundary condition
only on one component of the boundary.
It is known how the S-duality group acts on the algebra of loop
operators \cite{KWH}. The generator $T,$ which shifts $\tau\rightarrow
\tau+1,$ does not change the magnetic charge $\mu\in\Lambda_{cw}(G)$
and acts on the electric charge $\nu\in\Lambda_w(G)$ by
$$
\nu\rightarrow\nu+\mu.
$$
Here we regard $\mu$ as an element of ${\mathfrak t}^*$ using the
identification of ${\mathfrak t}$ and ${\mathfrak t}^*$, defined by the canonical
metric on ${\mathfrak t}$ (the Killing metric is normalized so that short
coroots have length $\sqrt 2$). The shift of the electric charge is
due to the Witten effect \cite{witteneffect}. The generator $S$
which exchanges $G$ and ${{}^LG}$ conjecturally acts by
$$
(\mu, \nu)\rightarrow ( {\mathfrak R}\cdot \mu, {\mathfrak R}\cdot \nu) \begin{pmatrix} 0 &
-1/\sqrt{n_{\mathfrak g}} \\ \sqrt{n_{\mathfrak g}} & 0\end{pmatrix}.
$$
Here ${\mathfrak R}$ is a certain orthogonal transformation which squares to
an element of the Weyl group \cite{GNO,AKS}. For simply-laced groups
one can define Montonen-Olive duality so that ${\mathfrak R}=1$.
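For instance, for a simply-laced group with ${\mathfrak R}=1$ and $n_{\mathfrak g}=1$ the action of $S$ reduces to
$$
(\mu,\nu)\rightarrow (\nu,-\mu),
$$
which maps a purely electric charge $(0,\nu)$ to a purely magnetic one $(\nu,0)$, in accord with the expectation that S-duality exchanges Wilson and 't Hooft operators.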
These results, however, do not yet allow us to compute the OPE of
any two given Wilson-'t Hooft operators. To do that, one needs to
know which element of the group algebra of ${\widehat\Lambda}(G)$ corresponds to
any particular Wilson-'t Hooft operator. Recall that the space of WH
operators has a natural basis labeled by elements of
$$
{\widehat\Lambda}(G)/{\mathcal W}.
$$
So what we are looking for is a basis for the Weyl-invariant part of
the group algebra of ${\widehat\Lambda}(G)$ labeled by this set.
The most obvious such basis is obtained simply by taking an element
of ${\widehat\Lambda}(G)$ in a particular Weyl-equivalence class and averaging it
over the Weyl group. Such basis elements correspond to loop
operators in the abelian gauge theory with particular electric and
magnetic charges.\footnote{Averaging over the Weyl group reflects
the fact that the gauge group is really a semidirect product of the
Weyl group and the maximal torus of $G$.} But this is not the basis
we are looking for. For example, consider a Wilson operator for an
irreducible representation $R_\nu$ with highest weight $\nu$. From
the viewpoint of the effective abelian gauge theory, it is a sum of
Wilson operators with electric charges given by decomposing $R_\nu$
with respect to the maximal torus of $G$. All weights of $R_\nu$
appear in this decomposition, not just the weights which are in the
Weyl-orbit of the highest weight. Similarly, in the phase with the
broken nonabelian gauge symmetry an 't Hooft operator corresponding
to a coweight $\mu$ of $G$ decomposes as a sum over weights of the
representation ${{}^LR}_\mu$ of the dual group. The explanation of this
phenomenon is more subtle than for Wilson operators and involves
``monopole bubbling'' \cite{KW}.
In the case $G=PSU(2)$ (or $G=SU(2)$), the desired basis is uniquely
determined by imposing S-duality. To simplify notation, let us
identify the group algebra of $\Lambda_{cw}(PSU(2))\simeq{\mathbb Z}$ with
the space of polynomials of $x,x^{-1}$, and the group algebra of
$\Lambda_w(PSU(2))\simeq 2\cdot{\mathbb Z}$ with the space of polynomials of
$y^2,y^{-2}$. The Weyl group acts by $x\rightarrow x^{-1}, y\rightarrow y^{-1}$.
Then the algebra of WH loop operators can be identified with the
space of Weyl-invariant polynomials of $x,x^{-1},y^2,y^{-2}$ (for
$G=PSU(2)$) or of $x^2,x^{-2}, y, y^{-1}$ (for $G=SU(2)$). We know
already that the Wilson loop in the representation
with highest weight $n\in{\mathbb Z}$ corresponds to the polynomial
$$
WT_{0,n}=y^{n}+y^{n-2}+\ldots+y^{-n}.
$$
Here $n$ is an arbitrary integer if $G=SU(2)$ and an even integer if
$G=PSU(2)$. Similarly, the 't Hooft loop labeled by the coweight
$m\in{\mathbb Z}$ corresponds to the polynomial
$$
WT_{m,0}=x^m+x^{m-2}+\ldots+ x^{-m},
$$
where $m\in{\mathbb Z}$ if $G=PSU(2)$ and $m\in 2\cdot {\mathbb Z}$ if $G=SU(2)$.
This is, of course, compatible with the Montonen-Olive duality,
which acts by
$$
(m,n)\mapsto (n,-m).
$$
Moreover, any pair $(m,n)\in {\widehat\Lambda}(G)$ can be brought to the form
$(m',0)$ by an S-duality transformation. This determines the
polynomial corresponding to an arbitrary Wilson-'t Hooft operator
for $G=PSU(2)$ or $G=SU(2)$:
$$
WT_{m,n}=x^m y^n+x^{m-2a} y^{n-2b}+x^{m-4a} y^{n-4b}+\ldots + x^{-m}
y^{-n}.
$$
Here the integers $a,b$ are defined by the condition that $m/n=a/b$,
$a$ and $b$ have the same signs as $m$ and $n$, respectively, and
the fraction $a/b$ is reduced.
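The prescription for $WT_{m,n}$ is mechanical, so it can be checked by a short computation. The sketch below (our own illustration, not part of the original text) encodes each Laurent monomial $x^{e_x}y^{e_y}$ as an exponent pair, generates the terms of $WT_{m,n}$, and confirms the special cases quoted above together with Weyl invariance:

```python
from math import gcd

def wt_terms(m, n):
    """Exponent pairs (e_x, e_y) of the monomials of WT_{m,n}, i.e.
    x^m y^n + x^{m-2a} y^{n-2b} + ... + x^{-m} y^{-n}, where a/b is the
    reduced fraction m/n with the signs of a, b matching those of m, n."""
    if m == 0 and n == 0:
        return {(0, 0)}
    g = gcd(abs(m), abs(n))          # note gcd(k, 0) == k in Python
    a, b = m // g, n // g            # reduced fraction, signs preserved
    return {(m - 2 * j * a, n - 2 * j * b) for j in range(g + 1)}

# Special cases from the text:
assert wt_terms(1, 0) == {(1, 0), (-1, 0)}            # WT_{1,0} = x + 1/x
assert wt_terms(0, 2) == {(0, 2), (0, 0), (0, -2)}    # WT_{0,2}
assert wt_terms(2, 2) == {(2, 2), (0, 0), (-2, -2)}

# Weyl invariance: the set of exponents is symmetric under negation.
for m, n in [(3, 2), (1, 4), (4, 0), (0, 5)]:
    terms = wt_terms(m, n)
    assert {(-p, -q) for p, q in terms} == terms
```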
For higher-rank groups, S-duality is not sufficient to fix the
basis. This is because electric and magnetic charges need not be
linearly dependent for higher-rank gauge groups.
In the next section, we will test some predictions of S-duality for
the gauge group $PSU(2)$ by a direct computation of the OPE of WH
loop operators at weak coupling. The same method could be used to
determine the OPE of WH loop operators for higher-rank groups, but
the computations become very complicated.
\section{OPE at weak coupling}\label{ope}
\subsection{Semiclassical quantization of Wilson-'t Hooft operators}
To compute the OPE of a pair of Wilson-'t Hooft line operators we
will follow the same method as in \cite{KW}. We will quantize the
twisted gauge theory on a manifold with boundaries $C\times I\times
{\mathbb R}$, with suitable boundary conditions and with two insertions of
Wilson-'t Hooft operators. From the 2d viewpoint, the boundary
conditions correspond to B-branes on ${\mathcal M}_{Higgs}(G,C)$. The problem
reduces to the supersymmetric quantum mechanics on the space of zero
modes of the gauge theory. In principle, one has to study the limit
where the two operators approach each other, but in the twisted
theory this last step is not necessary, if the line operators are
sitting at the same point on $C$.
As in \cite{KW}, it is convenient to choose the branes so that in
the absence of Wilson-'t Hooft line operators the Hilbert space of
the twisted gauge theory is one-dimensional. One possible choice is
to take the brane $\alpha$ at $y=0$ to be the 0-brane at a point $r$
of ${\mathcal M}_{Higgs}(G,C)$ with vanishing Higgs field. The brane $\beta$
at $y=1$ will be the trivial line bundle on ${\mathcal M}_{Higgs}(G,C)$. Both
of these branes are of type $(B,B,B)$. The classical space of vacua
in this case consists of a single point $r$, with no zero modes, so
the Hilbert space is one-dimensional. Alternatively, as in
\cite{KW}, one could take two branes of type $(B,A,A)$ intersecting
at a single point. The former choice is somewhat easier, so we will
stick to it, but in practice there is not much difference between
the two.
Having chosen the boundary conditions, we can assign to any
collection ${\mathsf A},{\mathsf B},\ldots,$ of WH line operators the graded vector
space ${\mathfrak H}_{\alpha\beta}({\mathsf A},{\mathsf B},\ldots)$, or better yet the
corresponding BRST complex. Note that this assignment need not be
invariant with respect to S-duality. This is because the choice of
branes necessarily breaks the S-duality group. Neither is this
assignment compatible with the monoidal structure on the category of
line operators. That is, it is not true, in general, that
${\mathfrak H}_{\alpha\beta}({\mathsf A},{\mathsf B})$ is isomorphic to
$$
{\mathfrak H}_{\alpha\beta}({\mathsf A})\otimes {\mathfrak H}_{\alpha\beta}({\mathsf B}).
$$
This is in contrast with the situation in the GL-twisted theory
\cite{KW}.
The ultimate reason for this difference is that the twisted gauge
theory we are dealing with is not topological, but only
holomorphic-topological. Suppose we fix the location of the line
operator ${\mathsf B}$, but vary the location of ${\mathsf A}$ on $I\times C$. The
BRST-complex ${\mathfrak H}_{\alpha\beta}({\mathsf A},{\mathsf B})$ is a differential graded
vector bundle over $I\times C$ with a connection along $I$ and a
${\bar\partial}$ operator along $C$. If one fixes $p\in C$ and varies
$y\in I$ (without colliding with ${\mathsf B}$), then the BRST complexes are
all naturally isomorphic. But there is no isomorphism between
complexes corresponding to different $p$.
In the GL-twisted theory, one can choose all line operators to be at
the same point on $X_1$ and different points on $C$. Because line
operators are local along $C$, the supersymmetric quantum mechanics
describing this situation decomposes as a product of supersymmetric
quantum-mechanical systems corresponding to each line operator. This
implies that the quantum Hilbert space also factorizes.
In the holomorphic-topological field theory, if we want to study the
OPE, we have to work with all line operators inserted at the same
point on $C$ (but different points on $X_1$), and the arguments like
in the previous paragraph do not apply.
For simplicity, let us begin with the case where all line operators
are either Wilson or 't Hooft, with no ``mixed'' ones. When
quantizing the theory at weak coupling, the roles of Wilson and 't
Hooft operators are very different. 't Hooft operators directly
affect the equations for the BRST-invariant configurations whose
solutions determine the space of bosonic zero modes. A Wilson
operator corresponds to inserting an extra degree of freedom, which
couples weakly to the gauge fields, and can be treated
perturbatively.
The first step is to ignore the Wilson operators completely. As
explained in \cite{KW}, 't Hooft operators are line operators of
type $(B,A,A)$, i.e. they can be viewed either as line operators in
the B-model on ${\mathcal M}_{Higgs}(G,C)$, or in the A-model on
${\mathcal M}_{flat}(G,{\mathbb C})$. When $\Sigma$ is flat and has no boundary, we
can regard the twisted gauge theory on $\Sigma\times C$ as a
supersymmetric sigma-model with $(4,4)$ supersymmetry. The
introduction of boundaries (either of A or B types) breaks $3/4$ of
supercharges and effectively eliminates one of the spatial
directions, so we end up with a supersymmetric quantum mechanics
with $N=2$ supersymmetry. The corresponding supersymmetry algebra
has a single complex supercharge $Q$ satisfying
$$
Q^2=0,\qquad \{Q,Q^\dag\}=2H,
$$
where $H$ is the Hamiltonian. $Q$-cohomology can be identified with
the space of supersymmetric ground states, i.e. states satisfying
$$
Qa=Q^\dag a=0.
$$
Strictly speaking, this is guaranteed only when the target space of
the supersymmetric quantum mechanics is compact. In the case of
interest to us, the target space is the Schubert cell ${\mathcal C}_\mu$ (if
there is a single 't Hooft operator), or a product of several
Schubert cells, which are noncompact unless all coweights are
minuscule \cite{KW}. From the physical viewpoint, the correct
version of $Q$-cohomology is the $L^2$-cohomology, and we will
assume some version of Hodge theory works for the $L^2$-cohomology.
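The identification of $Q$-cohomology with supersymmetric ground states can be illustrated in a finite-dimensional toy model (ours, purely illustrative): for any nilpotent operator $Q$ on a Hermitian vector space, Hodge theory identifies $\ker H$, with $H={ \frac{1}{2} }\{Q,Q^\dag\}$, with $\ker Q/\mathrm{im}\,Q$:

```python
import numpy as np

# Toy supercharge on C^3: Q e2 = e1, Q e1 = Q e3 = 0.
Q = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
assert np.allclose(Q @ Q, 0)                  # Q^2 = 0

H = 0.5 * (Q @ Q.T + Q.T @ Q)                 # {Q, Q†} = 2H

# Supersymmetric ground states = zero eigenvectors of H.
evals = np.linalg.eigvalsh(H)
n_ground = int(np.sum(np.isclose(evals, 0.0)))

# Q-cohomology: dim ker Q - dim im Q (im Q sits inside ker Q since Q^2 = 0).
rank_Q = np.linalg.matrix_rank(Q)
dim_ker_Q = Q.shape[1] - rank_Q
n_cohomology = dim_ker_Q - rank_Q

assert n_ground == n_cohomology == 1          # both pick out the e3 direction
```

On a noncompact target the analogous statement requires the $L^2$ condition, which is precisely the assumption made in the text.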
There are two well-known kinds of $N=2$ supersymmetric quantum
mechanics (SQM). $N=2$ SQMs of the first kind are classified by a
choice of a Riemannian target and a flat vector bundle $V$ over it;
its space of states is the space of differential forms with values
in $V$, and the corresponding operator $Q$ is the twisted de Rham
differential. This is the kind of effective SQM which appears when
considering 't Hooft operators as line operators in the A-model
\cite{KW}. It is clear that this SQM is not suitable for the
B-model, because once we include the Wilson operators, the bundle
over ${\mathcal C}_\mu$ will not be flat. Also, in the B-model the BRST
operator $Q$ is likely to be a Dolbeault-type operator.
$N=2$ SQMs of the second kind look more promising: they are
classified by a choice of a K\"ahler target space and a holomorphic
vector bundle over it. The space of states is the space of
differential forms of type $(0,p)$ with values in a holomorphic
vector bundle $W$, and $Q$ acts as the Dolbeault operator.
In the next section we will perform the reduction to an SQM in some
detail and show that in the absence of Wilson operators $W$ is the
bundle of forms of type $(p,0)$ (for any $p$). But we can deduce
this result in a simpler way by making use of both A and B-models.
Indeed, if we take as our boundary conditions branes of type
$(B,A,A)$, we can interpret the space of ground states of the SQM in
terms of either model. For the Dolbeault cohomology of $W$ to be
isomorphic to the de Rham cohomology of ${\mathcal C}_\mu$, $W$ has to be the
bundle
$$
\oplus_p \Omega^{p,0}({\mathcal C}_\mu).
$$
In the presence of a Wilson line, this also has to be tensored with
the holomorphic vector bundle corresponding to the Wilson line.
\subsection{Bosonic zero modes}
Our next task is to analyze bosonic zero modes in the presence of 't
Hooft operators. The BPS equations are simply the Bogomolny
equations, if the boundary conditions are suitably chosen \cite{KW}.
In fact, it has been shown in \cite{KW} that if in the absence of an
't Hooft operator the solution is unique, then in the presence of 't
Hooft operators the moduli space of solutions is ${\mathcal C}_\mu$ (for a
single 't Hooft operator), or a tower of several Schubert cells
${\mathcal C}_{\mu_i}$ fibered over each other (for several 't Hooft
operators). So the bosonic zero modes span the tangent space to
${\mathcal C}_\mu$ or its generalization. However, it is useful to have an
explicit description of the tangent space in terms of solutions of
linearized Bogomolny equations in order to identify the fermionic
zero modes.
Recall that $w=y+ix^0$ with $y \in [0,1],\, x^0 \in {\mathbb R},$ is a
complex coordinate on $\Sigma$, while $z$ is a complex coordinate on
a closed Riemann surface $C$. For $t=i$ the BRST-invariant
``holomorphic connection'' on $\Sigma$ is \begin{equation} {\cal
A}=(A_w+i\Phi_w)dw+(A_{{\overline{w}}}+i\Phi_{{\overline{w}}})d{\overline{w}} \label{holcon}
\end{equation} We further define the ``anti-holomorphic connection'' \begin{equation} {\check
A}=(A_w-i\Phi_w)dw+(A_{{\overline{w}}}-i\Phi_{{\overline{w}}})d{\overline{w}} \label{anticon}
\end{equation} and introduce corresponding covariant differentials in the
adjoint representation:
$${\cal D}={\partial} +i[{\cal A},\cdot ]=
D-[\Phi,\cdot ],\quad {\check {\cal D}}={\partial} +i[{\check A},\cdot ]=D+[\Phi,\cdot ].$$
Note that holomorphic and anti-holomorphic connections are related
by Hermitean conjugation:
$$ {\cal A}^{{\dagger}}={\check A}.$$
We set background $q_z$ and ${{\tilde q}}$ to zero. Then it can be shown analogously
to \cite{KW} that variations of these fields are also zero.
Therefore, the complete set of BPS equations is obtained by setting to zero the
BRST variations of gauginos. These are written down\footnote{In comparing with
\cite{htft} exchange $z$ and $w.$} in \cite{htft}.
Let us first consider one of the ``real'' BPS equations: \begin{equation} -i\Bigl(D_w
\Phi_{{\overline{w}}} +D_{{\overline{w}}}\Phi_w\Bigr)=g_{w{\overline{w}}}g^{z{\overline{z}}} \Bigl(F_{z {\overline{z}}}-
i[q_z,q_{{\overline{z}}}]+2g_{z{\overline{z}}}[{{\tilde q}},{{\tilde q}}^{{\dagger}}]\Bigr)
\label{odin} \end{equation} where $w=y+ix^0$ and $z=x^1+ix^2.$ Variation of
(\ref{odin}) gives
\begin{equation} -i D_w \left(\delta \Phi_{{\overline{w}}}\right)-i
D_{{\overline{w}}} \left(\delta \Phi_{w}\right)+[\delta A_w,
\Phi_{{\overline{w}}}]+[\delta A_{{\overline{w}}}, \Phi_w]
=g_{w{\overline{w}}}g^{z{\overline{z}}}\Bigl(D_z \delta A_{{\overline{z}}}-D_{{\overline{z}}} \delta
A_z\Bigr) \label{vari} \end{equation} where $2D_z=D_1-iD_2.$ We further assume
that all fields are independent of time $x^0$ and that background
fields $A_0=\Phi_0=0,$ so that $D_w=D_{{\overline{w}}}={ \frac{1}{2} } D_y$ and
$\Phi_w=\Phi_{{\overline{w}}}={ \frac{1}{2} } \Phi_y.$ Then, (\ref{vari}) becomes \begin{equation}
-D_y(\delta \Phi_w +\delta\Phi_{{\overline{w}}})+ i[\Phi_y,\delta A_w +
\delta A_{{\overline{w}}}]+2ig_{w{\overline{w}}}g^{z{\overline{z}}} D_z(\delta
A_{{\overline{z}}})-2ig_{w{\overline{w}}}g^{z{\overline{z}}} D_{{\overline{z}}}(\delta A_z)=0
\label{varii} \end{equation}
Now we impose a gauge-fixing condition: \begin{equation} D_y\left(\delta {\cal
A}_w+\delta {\cal A}_{{\overline{w}}}\right)+ [\Phi_y,\delta {\cal A}_w
+\delta {\cal A}_{{\overline{w}}}]+ 4g_{w{\overline{w}}}g^{z{\overline{z}}} D_{z}(\delta
A_{\bar z})=0 \label{gauge} \end{equation}
From (\ref{varii}) and (\ref{gauge}) follows \begin{equation}
\frac{g^{w{\overline{w}}}}{2}{\cal D}_y \left( \delta {\check A}_w +\delta
{\check A}_{{\overline{w}}} \right) +2g^{z{\overline{z}}} D_{{\overline{z}}}\bigl(\delta
A_z\bigr)=0 \label{variii} \end{equation} Taking hermitean conjugate of
(\ref{variii}) gives \begin{equation} \frac{g^{w{\overline{w}}}}{2}{\check {\cal D}}_y \left(
\delta {\cal A}_w +\delta {\cal A}_{{\overline{w}}} \right) +2g^{z{\overline{z}}}
D_z\bigl(\delta A_{{\overline{z}}}\bigr)=0 \label{newvariii} \end{equation} Next we
consider the complex BPS equations: \begin{equation} {\cal F}_{{\overline{z}} w}=0,\quad
{\cal F}_{{\overline{z}} {\overline{w}}}=0 \label{dva} \end{equation} Variation of these two
equations gives \begin{equation} D_{{\overline{z}}} \left( \delta {\cal A}_w
\right)-{\cal D}_w(\delta A_{{\overline{z}}})=0
\label{variv} \end{equation} and \begin{equation} D_{{\overline{z}}} \left( \delta {\cal A}_{{\overline{w}}}
\right)-{\cal D}_{{\overline{w}}}(\delta A_{{\overline{z}}})=0 \label{varv} \end{equation}
The sum of (\ref{variv}) and (\ref{varv}) gives \begin{equation}
D_{{\overline{z}}}\left(\delta {\cal A}_w + \delta {\cal A}_{{\overline{w}}}
\right)-{\cal D}_y\delta A_{{\overline{z}}}=0
\label{varvi} \end{equation} Taking hermitean conjugate of (\ref{varvi}) we
obtain \begin{equation} D_z\left( \delta {\check A}_w +\delta {\check A}_{{\overline{w}}}
\right)- {\check {\cal D}}_y\left(\delta A_z\right)=0 \label{varvii} \end{equation}
We conclude that $T{\cal M}$ splits into two parts. Holomorphic
bosonic modes from the first part satisfy Dirac-like equation: \begin{equation}
O_1:=\left(
\begin{tabular}{cc}
${\cal D}_y$ & $ 2D_{{\overline{z}}} $\\
$2 g^{z {\overline{z}}} D_z$ & $-g^{w{\overline{w}}}{\check {\cal D}}_y$\\
\end{tabular}
\right) \left(
\begin{tabular}{c}
$-\delta A_{{\overline{z}}}$ \\
$
{ \frac{1}{2} }\left(\delta {\cal A}_w +\delta {\cal A}_{{\overline{w}}}\right)$ \\
\end{tabular}
\right)=0 \label{holrecast} \end{equation} We impose boundary conditions \begin{equation}
\delta A_{{\overline{z}}}(0)=0,\quad \delta {\cal A}_w(1) +\delta {\cal
A}_{{\overline{w}}}(1)=0 \label{boscond} \end{equation}
The difference of (\ref{variv}) and (\ref{varv}) as well as
variation of the second ``real'' BPS condition ${\cal F}_{w {\overline{w}}}=0$
give equations for the remaining holomorphic
bosonic variation $\delta {\cal A}_w -\delta{\cal A}_{{\overline{w}}}$:
\begin{equation} D_{{\overline{z}}} \left(\delta {\cal A}_w -\delta {\cal
A}_{{\overline{w}}}\right)=0, \quad
{\cal D}_y\left(\delta {\cal A}_w -\delta {\cal A}_{{\overline{w}}}\right)=0.
\label{otherhol} \end{equation} Analogously, ${\overline T}{\cal M}$ splits
into two parts. Some of the anti-holomorphic bosonic zero modes
satisfy Dirac-like equation:
\begin{equation} O_2:=\left(
\begin{tabular}{cc}
${\check {\cal D}}_y$ & $ 2D_z $\\
$2 g^{z {\overline{z}}} D_{{\overline{z}}}$ & $- g^{w{\overline{w}}}{\cal D}_y$\\
\end{tabular}
\right) \left(
\begin{tabular}{c}
$-\delta A_z$ \\
${ \frac{1}{2} }\left(
\delta {\check A}_w +\delta {\check A}_{{\overline{w}}}\right)$ \\
\end{tabular}
\right)=0 \label{antirecast} \end{equation} We impose boundary conditions \begin{equation}
\delta A_{z}(0)=0,\quad \delta {\check A}_w(1) +\delta {\check
A}_{{\overline{w}}}(1)=0 \label{boscondii} \end{equation}
The remaining anti-holomorphic bosonic variation $\delta {\check
A}_w -\delta{\check A}_{{\overline{w}}}$ satisfy \begin{equation} D_{z} \left(\delta
{\check A}_w -\delta {\check A}_{{\overline{w}}}\right)=0, \quad
{\check {\cal D}}_y\left(\delta {\check A}_w -\delta {\check A}_{{\overline{w}}}\right)=0.
\label{otheranti} \end{equation}
There are no non-trivial solutions of (\ref{otherhol}) and
(\ref{otheranti}), and we conclude that $T{\cal M}$ (resp. ${\overline
T} {\cal M}$) is defined as the kernel of the operator $O_1$ (resp.
$O_2$).
\subsection{Fermionic zero modes}
The gaugino equations of motion are: \begin{equation}
D_z({\overline{\lambda}}_{{\overline{w}}})+D_{{\overline{w}}}({\overline{\lambda}}_z)+[\Phi_{{\overline{w}}}, \lambda_z]=0
\label{fermi} \end{equation}
\begin{equation} D_z(\lambda_{w})+D_{w}(\lambda_z)+[\Phi_{w},{\overline{\lambda}}_z]=0
\label{fermii} \end{equation}
\begin{equation} D_w{\overline{\lambda}}_{{\overline{w}}} +[\lambda_w,\Phi_{{\overline{w}}}] -g_{w {\overline{w}}} g^{z
{\overline{z}}} D_{{\overline{z}} } {{\overline{\lambda}}}_z=0 \label{fermiii} \end{equation}
\begin{equation} D_{{\overline{w}}}\lambda_{w} +[{\overline{\lambda}}_{{\overline{w}}},\Phi_{w}] -g_{w {\overline{w}}} g^{z
{\overline{z}}} D_{{\overline{z}} } {\lambda}_z=0 \label{fermiv} \end{equation}
The sum of (\ref{fermi}) and (\ref{fermii}) gives (recall that
$\Phi_w=\Phi_{{\overline{w}}}={ \frac{1}{2} } \Phi_y$ and $D_w=D_{{\overline{w}}}={ \frac{1}{2} } D_y$)
\begin{equation} 2 D_z(\lambda_w +{\overline{\lambda}}_{{\overline{w}}})+ D_y(\lambda_z
+{\overline{\lambda}}_z)+[\Phi_y,(\lambda_z +{\overline{\lambda}}_z)]=0 \label{fermv}
\end{equation} Meanwhile, the sum of (\ref{fermiii}) and (\ref{fermiv}) gives
\begin{equation} 2 g^{z {\overline{z}}} D_{{\overline{z}}}(\lambda_z +{\overline{\lambda}}_z) -g^{w {\overline{w}}}
D_y(\lambda_w +{\overline{\lambda}}_{{\overline{w}}})+g^{w{\overline{w}}} [\Phi_y,(\lambda_w
+{\overline{\lambda}}_{{\overline{w}}})]=0 \label{fermvi} \end{equation}
The two equations (\ref{fermv}) and (\ref{fermvi}) can be recast as
a Dirac-like equation: \begin{equation} \left(
\begin{tabular}{cc}
${\check {\cal D}}_y$ & $ 2D_z $\\
$2 g^{z {\overline{z}}} D_{{\overline{z}}}$ &
$- g^{w {\overline{w}}}{\cal D}_y$\\
\end{tabular}
\right) \left(
\begin{tabular}{c}
$\lambda_z +{\overline{\lambda}}_z$ \\
$\lambda_w+{\overline{\lambda}}_{{\overline{w}}}$ \\
\end{tabular}
\right)=0 \label{recastii} \end{equation} Similarly, the difference of
equations (\ref{fermi}) and (\ref{fermii}) combines with the
difference of equations (\ref{fermiii}) and (\ref{fermiv}) into
another Dirac-like equation: \begin{equation} \left(
\begin{tabular}{cc}
${\cal D}_y$ & $ 2D_z $\\
$2 g^{z {\overline{z}}} D_{{\overline{z}}}$ &
$- g^{w {\overline{w}}}{\check {\cal D}}_y$\\
\end{tabular}
\right) \left(
\begin{tabular}{c}
${\overline{\lambda}}_z-{\lambda}_z$ \\
${\overline{\lambda}}_{{\overline{w}}}-{\lambda}_w$ \\
\end{tabular}
\right)=0 \label{morerecast} \end{equation} We impose boundary conditions at
$y=0$ or $y=1$: \begin{equation} \lambda_z(0)+{\overline{\lambda}}_z(0)=0,\quad
\lambda_w(1)+{\overline{\lambda}}_{{\overline{w}}}(1)=0 \label{bry} \end{equation} \begin{equation}
\lambda_z(1)-{\overline{\lambda}}_z(1)=0,\quad \lambda_w(0)-{\overline{\lambda}}_{{\overline{w}}}(0)=0
\label{morebry} \end{equation} Note that (\ref{bry}) are BRST invariant
boundary conditions; moreover, they are BRST variations of the bosonic
boundary conditions (\ref{boscondii}). Meanwhile, the BRST variation
of (\ref{morebry}) gives
$$q_z T {{\tilde q}}^{{\dagger}}{|}_{y=1}=0$$
which is zero in the background we consider, i.e. with $q_z=0$ and
${{\tilde q}}=0.$ Comparing (\ref{recastii}) with equations of motion for the
anti-holomorphic bosonic zero modes (\ref{antirecast}), we conclude
that solutions of (\ref{recastii}) are in one-to-one correspondence
with elements of ${\overline T}{\cal M}.$
Eq. (\ref{morerecast}) has no nontrivial solutions for the following
reason. Let us denote by $O$ the operator in (\ref{morerecast}). In
addition to (\ref{morebry}) we impose boundary conditions on ghost
number $-1$ fermions \begin{equation} {\lambda}_{{\overline{z}} w}(0)-{\overline{\lambda}}_{{\overline{z}}
{\overline{w}}}(0)=0,\quad {\lambda}_{w {\overline{w}}}(1)-{\overline{\lambda}}_{w {\overline{w}}}(1)=0
\label{morebryii} \end{equation} The boundary conditions (\ref{morebry}) and
(\ref{morebryii}) are chosen so that in computing the hermitean
conjugate $O^{{\dagger}}$ we can drop boundary terms obtained from
integration by parts. Then we find \begin{equation} O^{{\dagger}}
O=-\Bigl(2\Delta_C+{ \frac{1}{2} } \Delta_{\Sigma}\Bigr)I_{2\times 2}
\label{proof} \end{equation} where $\Delta_C=g^{z
{\overline{z}}}(D_{{\overline{z}}}D_z+D_zD_{{\overline{z}}})$ and $\Delta_{\Sigma}=g^{w
{\overline{w}}}({\check {\cal D}}_y{\cal D}_y+{\cal D}_y {\check {\cal D}}_y).$ In
obtaining (\ref{proof}) we used BPS equations for the background
fields. Since both $-\Delta_C$ and $-\Delta_{\Sigma}$ are
nonnegative operators, the kernel of the operator $O$ must be
annihilated by both Laplacians. This implies, in particular, that
${\overline{\lambda}}_z-{\lambda}_z$ is constant on the interval $y \in[0,1].$ However,
such a mode is necessarily zero due to the boundary conditions
(\ref{morebry}).
Equations of motion for matter fermions (using the ${\mathcal N}=2$ language)
are \begin{equation} \left(
\begin{tabular}{cc}
${\check {\cal D}}_y$ & $2 D_{{\overline{z}}}$ \\
$2 g^{z {\overline{z}}} D_{z}$ & $-g^{w {\overline{w}}}{\cal D}_y$\\
\end{tabular}
\right) \left(
\begin{tabular}{c}
$\psi_{{\overline{z}}} +{\overline{\chi}}_{{\overline{z}}}$ \\
$-\left(\psi_{{\overline{w}}}+{\overline{\chi}}_w\right)$ \\
\end{tabular}
\right)=0 \label{recastiii} \end{equation} and \begin{equation} \left(
\begin{tabular}{cc}
${\cal D}_y$ & $2D_{{\overline{z}}}$ \\
$2 g^{z {\overline{z}}} D_{z}$ & $- g^{w {\overline{w}}}{\check {\cal D}}_y$\\
\end{tabular}
\right) \left(
\begin{tabular}{c}
${\overline{\chi}}_{{\overline{z}}}-\psi_{{\overline{z}}}$ \\
$\psi_{{\overline{w}}}-{\overline{\chi}}_w$ \\
\end{tabular}
\right)=0 \label{recastiiii} \end{equation}
We impose the following boundary conditions at $y=0$ or $y=1$: \begin{equation}
\psi_{{\overline{w}}}(1) -{\overline{\chi}}_{w}(1)=0,\quad
{\overline{\chi}}_{{\overline{z}}}(0)-\psi_{{\overline{z}}}(0)=0 \label{bryii} \end{equation} \begin{equation}
\psi_{{\overline{w}}}(0) +{\overline{\chi}}_{w}(0)=0,\quad
{\overline{\chi}}_{{\overline{z}}}(1)+\psi_{{\overline{z}}}(1)=0 \label{newbryii} \end{equation} Note that
(\ref{newbryii}) are BRST invariant; meanwhile, the BRST variation of
(\ref{bryii}) gives
$$D_{{\overline{z}}}{{\tilde q}}^{{\dagger}}{|}_{y=0}=0,\quad
{\cal D}_y {{\tilde q}}^{{\dagger}}{|}_{y=1}=0
$$
Matter fermions (\ref{recastiiii}) belong to $T{\cal M},$ as can be
seen by comparing with (\ref{holrecast}). Eq. (\ref{recastiii}) has
no nontrivial solutions. The proof is similar to that for operator
$O$ above. Let us denote by $O'$ the operator in (\ref{recastiii}).
In addition to (\ref{newbryii}) we impose boundary conditions on
ghost number $-1$ fermions \begin{equation} \chi_{z {\overline{w}}}(0)+{\overline{\psi}}_{z
w}(0)=0,\quad \chi_{z {\overline{z}}}(1)+{\overline{\psi}}_{z {\overline{z}}}(1)=0
\label{newmorebryii} \end{equation} The boundary conditions (\ref{newbryii})
and (\ref{newmorebryii}) are chosen so that when computing the hermitean
conjugate ${O'}^{{\dagger}}$ we can drop boundary terms obtained from
integration by parts. Then we use BPS equations for the background
to show \begin{equation} {O'}^{{\dagger}} O'=-\Bigl(2\Delta_C+{ \frac{1}{2} }
\Delta_{\Sigma}\Bigr)I_{2\times 2} \label{proofii} \end{equation} where
$\Delta_C=g^{z {\overline{z}}}(D_{{\overline{z}}}D_z+D_zD_{{\overline{z}}})$ and
$\Delta_{\Sigma}=g^{w {\overline{w}}}({\check {\cal D}}_y{\cal D}_y+{\cal D}_y
{\check {\cal D}}_y).$ Since both $-\Delta_C$ and $-\Delta_{\Sigma}$ are
non-negative operators, the kernel of the operator $O'$ must be
annihilated by both Laplacians. This implies, in particular, that
${\overline{\chi}}_{{\overline{z}}}+\psi_{{\overline{z}}}$ is constant on the interval $y
\in[0,1].$ However, such a mode is necessarily zero due to boundary
conditions (\ref{newbryii}).
The result of this analysis is that fermionic zero modes span
$T{\mathcal M}\oplus {\overline T}{\mathcal M}$. Therefore the Hilbert space of the
effective SQM is the space of $L^2$ sections of the vector bundle
$$
\oplus_p \Lambda^p\left(T^*{\mathcal M}\oplus {\overline
T^*}{\mathcal M}\right)=\oplus_{p,q}\Omega^{p,q}({\mathcal M}).
$$
From the formulas for the BRST transformation we see that the BRST
variations of the bosonic zero modes are precisely the fermionic zero
modes spanning ${\overline T}{\mathcal M}$, while the BRST variations of the
fermionic zero modes vanish. This means that the BRST operator acts as
the Dolbeault operator.
\section{OPE of Wilson-'t Hooft operators for $G=PSU(2)$}
In this section we study in detail the OPE of WH loop operators in
the special case $G=PSU(2)$. The main goal is to test the
predictions of S-duality explained in section \ref{s:sduality}.
\subsection{OPE of a Wilson and an 't Hooft operator}
Let us begin by considering the OPE of a Wilson and an 't Hooft
operator. The most naive approach is to regard an 't Hooft operator
as creating a classical field configuration, and analyze the
electric degree of freedom corresponding to the Wilson operator in
this classical background. As explained above, the field singularity
at the insertion point of an 't Hooft operator $T_\mu$ breaks the
gauge group $G=PSU(2)$ down to its subgroup $H=U(1)$, so it seems that
all we have to do is to decompose the representation $R$ associated
to the Wilson operator into irreducibles with respect to $H$. If we
label representations of $PSU(2)$ by an even integer $n$ which is
twice the isospin, and denote the magnetic charge of the 't Hooft
operator by $m\in {\mathbb N}$, then the OPE at weak coupling appears to be
$$
T_m \cdot W_n = WT_{m,n}+WT_{m,n-2}+\ldots +WT_{m,-n}.
$$
But this contradicts S-duality, which requires that there be a
symmetry under $n\rightarrow -m, m\rightarrow n$. In fact, S-duality predicts that
the OPE also contains contributions from WH operators with smaller
magnetic charge. As explained in \cite{KW}, this is due to
``monopole bubbling'': the magnetic charge of an 't Hooft operator
can decrease by $2$ when it absorbs a BPS monopole. Such a process is
possible because the moduli space of solutions of the Bogomolny
equations is noncompact for $m>1$; configurations with smaller
magnetic charge can be associated with points at infinity. The naive
argument ignored monopole bubbling and therefore missed all such
contributions.
This explanation also suggests that for $m=1$, where the moduli
space is simply ${\mathbb P}^1$ and therefore is compact, the naive argument
is valid. To compare this with the S-duality predictions, we follow
the procedure outlined in section \ref{s:sduality}. To the loop
operators $T_1$ and $W_n$ one associates Laurent polynomials
$$
WT_{1,0}(x)=x+x^{-1},\quad WT_{0,n}(y)=y^n+y^{n-2}+\ldots + y^{-n}.
$$
To the WH loop operator $WT_{1,k}$ one associates the Laurent
polynomial
$$
WT_{1,k}(x,y)=x y^k+x^{-1}y^{-k}.
$$
We see that
\begin{equation}\label{opeone}
WT_{1,0}(x)WT_{0,n}(y)=\sum_{j=0}^n WT_{1,n-2j}(x,y),
\end{equation}
in agreement with the naive formula.
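Since eq. (\ref{opeone}) is a finite Laurent-polynomial identity, it can be verified mechanically. The sketch below (our own check, not part of the original text) represents a Laurent polynomial as a map from exponent pairs $(e_x,e_y)$ to integer coefficients and confirms the identity for small $n$:

```python
from collections import Counter

def mul(p, q):
    """Product of Laurent polynomials stored as Counter{(e_x, e_y): coeff}."""
    r = Counter()
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            r[(a + d, b + e)] += c * f
    return r

T1 = Counter({(1, 0): 1, (-1, 0): 1})                        # WT_{1,0} = x + 1/x
W = lambda n: Counter({(0, n - 2 * j): 1 for j in range(n + 1)})  # WT_{0,n}
WT1 = lambda k: Counter({(1, k): 1, (-1, -k): 1})            # WT_{1,k}

for n in range(1, 7):
    lhs = mul(T1, W(n))
    rhs = sum((WT1(n - 2 * j) for j in range(n + 1)), Counter())
    assert lhs == rhs    # T_1 · W_n = sum_j WT_{1, n-2j}
```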
This example also provides a nice illustration of the difference
between line and loop operators. Recall that the Hilbert space
${\mathfrak H}(A)$ associated to the line operator $WT_{1,k}$ is the space of
sections of the differential graded vector bundle
$$
{\mathcal O}(-k)\otimes \oplus_{p,q} \Omega^{p,q}
$$
over the Schubert cell ${\mathcal C}_\mu\simeq {\mathbb P}^1$, with the differential
being the Dolbeault differential. Here the first factor comes from
the electric degree of freedom, and the rest comes from fermionic
zero modes. Instead of this differential graded vector bundle, we
can think of the corresponding coherent sheaf
$$
{\mathcal O}(-k)\otimes \Omega^*({\mathbb P}^1)={\mathcal O}(-k)+{\mathcal O}(-k-2).
$$
The SQM Hilbert space is the Dolbeault resolution of this coherent
sheaf, so instead of thinking about the BRST cohomology, we can
think about the cohomology of this sheaf. Thus the sum of the WH
line operators on the right-hand side of eq. (\ref{opeone})
corresponds to the coherent sheaf
$$
\left({\mathcal O}(-n)+{\mathcal O}(-n+2)+\ldots +{\mathcal O}(n)\right)\otimes
\Omega^*({\mathbb P}^1).
$$
On the other hand, the product of a Wilson operator $W_n$ and an 't
Hooft operator $T_1$ gives a trivial vector bundle of rank $n+1$
over ${\mathbb P}^1$, tensored with $\Omega^*({\mathbb P}^1)$. Clearly, the equality
between left-hand side and right-hand side of eq. (\ref{opeone})
does not hold on the level of line operators, because
\begin{equation}\label{ineq}
{\mathcal O}\otimes {\mathbb C}^{n+1}\neq {\mathcal O}(-n)+{\mathcal O}(-n+2)+\ldots +{\mathcal O}(n).
\end{equation}
But the equality does hold on the level of K-theory.\footnote{We are
grateful to Roman Bezrukavnikov for providing the following
argument.} To see this, we will exhibit a filtration of ${\mathcal O}\otimes
{\mathbb C}^{n+1}$ whose associated graded sheaf is precisely the right-hand
side of eq. (\ref{ineq}). Recall that ${\mathbb P}^1=G_{\mathbb C}/B$, where $G_{\mathbb C}=SL(2,{\mathbb C})$
and $B$ is the group of upper-triangular matrices with unit
determinant. The fiber of the trivial vector bundle $V$ of rank
$n+1$ carries the representation of $G_{\mathbb C}$ of isospin $n$; for
example, we can realize it by thinking of the fiber of $V$ as the
space of homogeneous degree-$n$ polynomials in variables $u$ and
$v$, which we denote ${\mathcal D}_n(u,v)$. $SL(2,{\mathbb C})$ acts on it by linear
substitutions. To define a filtration on $V$, we can specify a
$B$-invariant filtration on ${\mathcal D}_n(u,v)$. The obvious filtration is
to take $F_k$ to be the subspace of ${\mathcal D}_n(u,v)$ consisting of
polynomials of degree $k$ or lower in $u$, with $k$ ranging from $0$
to $n$. It is easy to check that $F_k$ is $B$-invariant for any $k$.
Obviously, $F_k/F_{k-1}$ is one-dimensional and the maximal torus of
$SL(2,{\mathbb C})$ acts on it with weight $2k-n$. Hence $V$ acquires a
filtration of length $n+1$ whose $k$-th graded quotient is ${\mathcal O}(2k-n)$.
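As a simple illustration, take $n=1$. The fiber of $V$ is ${\mathcal D}_1(u,v)$, spanned by $u$ and $v$, and the filtration is
$$
F_0=\langle v\rangle \subset F_1=\langle u,v\rangle={\mathcal D}_1(u,v).
$$
The maximal torus acts on $F_0$ with weight $-1$ and on $F_1/F_0=\langle u\rangle$ with weight $+1$, so the graded quotients are ${\mathcal O}(-1)$ and ${\mathcal O}(1)$. Thus ${\mathcal O}\otimes{\mathbb C}^2$ and ${\mathcal O}(-1)+{\mathcal O}(1)$ define the same class in K-theory, as claimed.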
\subsection{OPE of WH operators with minuscule coweights}
In this subsection we compute the product of WH operators with the smallest
nontrivial coweights (for $G=PSU(2)$). The weights may be arbitrary.
This case is very special, because when the WH operators are not
coincident, the moduli space of Bogomolny equations is compact. This
happens because the smallest nontrivial coweight of $G=PSU(2)$ is
minuscule.\footnote{The corresponding representation of ${{}^LG}=SU(2)$
has the property that all its weights lie in a single Weyl orbit.}
Therefore the monopole bubbling is absent, as discussed in
\cite{KW}. The main difficulty is to determine the behavior of the
zeromode wavefunctions in the limit when the two WH operators
coincide.
Let us recall what the moduli space of Bogomolny equations looks
like for two noncoincident WH operators with $\mu=1$ located at the
same point on $C$ \cite{KW}. It is a Hirzebruch surface $F_2$ which
is a fibration of ${\mathbb P}^1$ over ${\mathbb P}^1$. One can think of it as a
blow-up of the weighted projective plane ${\WW\PP}_{112}$ at the
${\mathbb Z}_2$-orbifold point. This blow-up is associated with moving the
WH operators apart in the $y$ directions. Thus ${\WW\PP}_{112}$ is the
coincidence limit of the moduli space. The orbifold point
corresponds to the trivial solution of the Bogomolny equations
(without the monopole singularity), while the complement of the
orbifold point is isomorphic to $T{\mathbb P}^1$ and corresponds to
solutions of the Bogomolny equations with one singularity of
coweight $\mu=\pm 2$. This implies \cite{KW} that the product of two
WH operators with coweight $\mu=\pm 1$ may contain WH operators with
coweight $\mu=\pm 2$ and WH operators with coweight $\mu=0$. To
understand which WH operators appear in the product, one has to
understand the zeromode wavefunctions in the coincidence limit.
Those which remain spread-out on the complement of the orbifold
point correspond to WH operators with $\mu=\pm 2$, while those which
concentrate in the neighborhood of the exceptional divisor
correspond to WH operators with $\mu=0$.
As explained above, the wavefunctions of the effective SQM in the
presence of WH operators are square-integrable forms on the moduli
space with values in a certain holomorphic line bundle which satisfy
the equations
\begin{equation}\label{eqsdolbope}
{\bar D}\rho=0,\qquad {\bar D}^\dag\rho=0,
\end{equation}
where ${\bar D}$ is the covariant Dolbeault differential\footnote{
See sections 6.3 and 6.4 for the appropriate ${\bar D}$.}.
In the coincidence limit, the K\"ahler metric on the moduli space
degenerates so that in the neighborhood of the orbifold point it
becomes a flat metric on ${\mathbb C}^2/{\mathbb Z}_2$. More generally, when the WH
operators are close to each other, the metric in the neighborhood of
the exceptional divisor is well-approximated by a hyperk\"ahler
metric on the blow-up of ${\mathbb C}^2/{\mathbb Z}_2$ \cite{KW}. This is because
this region in the moduli space corresponds to solutions of the
Bogomolny equations which are trivial everywhere except in a small
neighborhood of a point on $C\times I$ (the point at which one of
the WH operators is inserted). Such solutions are arbitrarily well
approximated by patching together solutions on ${\mathbb R}^3$ with the
trivial solution on $C\times I$. Therefore the metric will be
arbitrarily well approximated by the metric on the moduli space of
Bogomolny equations on ${\mathbb R}^3$, which is hyperk\"ahler.
The blow-up of ${\mathbb C}^2/{\mathbb Z}_2$ is isomorphic to $T^*{\mathbb P}^1$ and has a
unique asymptotically flat hyperk\"ahler metric: the Eguchi-Hanson
metric. Therefore, one can produce approximate solutions of
equations (\ref{eqsdolbope}) on $F_2$ by first solving them on the
Eguchi-Hanson space and on $T{\mathbb P}^1$, assuming square-integrability
in both cases, and patching them with the zero solution on the
remainder of $F_2$. The solutions coming from the Eguchi-Hanson
space will represent contributions to the zeromode Hilbert space
from WH operators with $\mu=0$, while the solutions coming from
$T{\mathbb P}^1$ will represent contributions from $\mu=\pm 2$.
The contribution to the product of WH operators coming from $T{\mathbb P}^1$
will be called the ``bulk'' contribution, while the one coming from
$T^*{\mathbb P}^1$ will be called the ``bubbled'' contribution, because it
is due to monopole bubbling. The ``bulk'' contribution is rather
trivial and in fact can be determined without any computations: the
magnetic charges of the singularities simply add up, the same
applies to the electric charges, and therefore the bulk contribution
must be simply
$$
WT_{2,2m+2k}.
$$
The ``bubbled'' contributions are much more subtle and will be
determined below by solving the equations (\ref{eqsdolbope}) on
$T^*{\mathbb P}^1$. We will also solve the same equations on $T{\mathbb P}^1$, not
because it is required to determine the operator product, but
because this computation will provide a consistency check on our
approach, see section 6.5.
As a preliminary step, let us exhibit the predictions of $SL(2,{\mathbb Z})$
duality for the product of WH operators with coweight $\mu=\pm 1$:
$$
WT_{1,2m}\cdot
WT_{1,2k}=WT_{2,2m+2k}+WT_{0,2m-2k}-WT_{0,0}-WT_{0,2m-2k-2}
$$
Here $m$ and $k$ are integers, and we assume $m\neq k$. We can
simplify our problem a bit by noting that by applying the
$T$-transformation several times, we can reduce to the case $k=0$,
in which case the duality predicts that for $m\neq 0$ we have
\begin{equation}\label{maineqtotest}
WT_{1,2m}\cdot WT_{1,0}=WT_{2,2m}+WT_{0,2m}-WT_{0,0}-WT_{0,2m-2}
\end{equation}
The ``bulk'' contribution is as expected, while the ``bubbled''
contributions are far from obvious. Note that some of the
coefficients are negative, unlike for 't Hooft operators in
\cite{KW}. This is because we are working in the K-theory of the
category of line operators, where negative signs occur naturally.
Similar manipulations in the case $m=0$ lead to
\begin{equation}\label{opeKW}
T_1\cdot T_1=T_2+T_0.
\end{equation}
This is S-dual to the fact that the tensor square of the defining
representation of $SU(2)$ is a sum of the adjoint representation
(corresponding to the 't Hooft operator $T_2$) and the trivial
representation (corresponding to $T_0$). This prediction was checked
in \cite{KW} for the GL-twisted theory. Briefly speaking, in the
GL-twisted theory we are looking for harmonic square-integrable
forms on $F_2$ and study their behavior in the limit when $F_2$
degenerates to ${\WW\PP}_{112}$. Since topologically $F_2$ is the same as
${\mathbb P}^1\times{\mathbb P}^1$, and harmonic forms can be interpreted in
topological terms (as cohomology classes), we know a priori that the
dimension of the space of harmonic forms is the same as the
dimension of $H^*({\mathbb P}^1\times{\mathbb P}^1,{\mathbb C})$, which is four. It is also
well-known that there is a unique square-integrable harmonic form on
the Eguchi-Hanson space (in degree 2). Therefore the Eguchi-Hanson
space contributes one state, and $T{\mathbb P}^1$ contributes three states.
The latter states arise precisely from the quantization of the
moduli space of the Bogomolny equations with a single singularity of
coweight $\mu=\pm 2$. This leads to the formula (\ref{opeKW}), as
predicted by S-duality.
The case $m\neq 0$ is different in two respects. First of all, we
have to consider forms with values in a holomorphic line bundle
${\mathcal L}$. Second, the equations we have to solve (\ref{eqsdolbope})
involve the Dolbeault operator rather than the de Rham operator.
To fix ${\mathcal L}$, let us use the same boundary conditions as before,
i.e. assume that the boundary condition on which $WT_{1,2m}$ acts
corresponds to a particular $PSU(2)$ bundle on $C$. Then the line
bundle on $F_2$ is the pull-back of ${\mathcal O}(-2m)$ from the base
${\mathbb P}^1$. (This is because the electric degree of freedom is
associated, via weight $2m$, with the $U(1)$ bundle coming from the
first Hecke transformation and does not care about the second Hecke
transformation. The base ${\mathbb P}^1$ is the parameter space for the
first Hecke transformation, while the fiber ${\mathbb P}^1$ is the parameter
space for the second Hecke transformation.) Therefore, in the
``bulk'' part of the computation, ${\mathcal L}$ is simply the pull-back of
${\mathcal O}(-2m)$ from the base of $T{\mathbb P}^1$ to the total space.
Similarly, in the ``bubbled'' part of the computation the line
bundle is a pull-back of ${\mathcal O}(-2m)$ from the base of $T^*{\mathbb P}^1$ to
the total space. To see this, we can make use of an explicit
description of $F_2$ as a K\"ahler quotient of ${\mathbb C}^4$ by $U(1)^2$
\cite{KW}. Let the coordinates on ${\mathbb C}^4$ be $u,v,b,$ and $b'$. The
first $U(1)$ action has weights $1,1,2,0$, and the second $U(1)$
action has weights $0,0,1,1$. The moment map equations are
$$
|u|^2+|v|^2+2|b|^2=1,\quad |b|^2+|b'|^2=d,
$$
where $d$ is assumed to be positive and smaller than $1/2$. These
equations imply that $u$ and $v$ cannot vanish simultaneously and
can be regarded as homogeneous coordinates on ${\mathbb P}^1$. Therefore the
map $(u,v,b,b')\mapsto (u,v)$ defines a fibration over ${\mathbb P}^1$. Its
fiber is also a ${\mathbb P}^1$ with homogeneous coordinates $b$ and $b'$.
To degenerate $F_2$ into ${\WW\PP}_{112}$ one needs to take the limit
$d\rightarrow 1/2$. The exceptional divisor is given by $b'=0$. The
neighborhood of the exceptional divisor is the subset given by
$b\neq 0$. We can see that it is a copy of $T^*{\mathbb P}^1$ by letting
$a=b'/b$. Since $u,v,a$ have zero weights with respect to the second
$U(1)$ and since every orbit of the second $U(1)$ action contains a
unique representative with ${\rm arg}(b)=0$, we conclude that the
subset $b\neq 0$ can be identified with the K\"ahler quotient of
${\mathbb C}^3$ parameterized by $u,v,a$ by the first $U(1)$. Since the
weights of these variables are $1,1,-2$, and $u$ and $v$ cannot
vanish simultaneously, this quotient is the total space of the line
bundle ${\mathcal O}(-2)$ over ${\mathbb P}^1$, which is the same as $T^*{\mathbb P}^1$.
Now, the line bundle ${\mathcal L}$ on $F_2$ can be defined as the quotient
of the space of quintuples $u,v,b,b',\rho$ by the $({\mathbb C}^*)^2$ action
with weights $$(1,0),(1,0),(2,1),(0,1),(-2m,0).$$ The variable
$\rho$ parameterizes the fiber of ${\mathcal L}$. When we restrict to the
subset $b\neq 0$, we may forget about the second ${\mathbb C}^*$, and
replace $b$ and $b'$ with $a=b'/b$. Thus the restriction of ${\mathcal L}$ to
this subset is the quotient of the space of quadruples $u,v,a,\rho$
by the ${\mathbb C}^*$ action with weights $1,1,-2,-2m$. This is clearly the
total space of the line bundle over $T^*{\mathbb P}^1$ which is a pull-back
of ${\mathcal O}(-2m)$ on the ${\mathbb P}^1$ base.
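As a check of this identification, one may restrict further to the zero section $a=0$: what remains is the quotient of triples $u,v,\rho$ by the ${\mathbb C}^*$ action with weights $1,1,-2m$, which is precisely the total space of ${\mathcal O}(-2m)$ over the base ${\mathbb P}^1$.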
\subsection{Wavefunctions on $T{\mathbb P}^1.$}
Let $u,v,b$ be homogeneous coordinates on $T{\mathbb P}^1$, with ${\mathbb C}^*$
weights $1,1,2$. Let us work in the patch $u\neq 0$ and define
inhomogeneous ``coordinates'' \begin{equation} z=\frac{v}{u},\quad
w=\frac{\sqrt{b}}{u} \label{defi} \end{equation} We put the word
``coordinates'' in quotation marks, because $w$ is defined up to a
sign and is not really a good coordinate. The good coordinate is
$w^2$.
Our goal is to solve equations (\ref{eqsdolbope}) on $T{\mathbb P}^1$, i.e.
to find harmonic representatives of the $L^2$ Dolbeault cohomology
groups
$$
H^p(\Omega^q(-2m),T{\mathbb P}^1),\qquad p,q=0,1,2.
$$
Here
$$
\Omega^q(-2m)=\Omega^q\otimes {\mathcal O}(-2m).
$$
The sum of these cohomology groups is nothing but the vector space
${\mathfrak H}({\mathsf A})$, where ${\mathsf A}$ is the WH operator $WT_{2,2m}$. In section 6.5 we
will use the knowledge of ${\mathfrak H}({\mathsf A})$ for this and other WH
operators on the r.h.s. of eq. (\ref{maineqtotest}) to make a
consistency check on our computations.
\subsubsection{The metrics}
While we do not know the precise form of the K\"ahler metric on
$T{\mathbb P}^1$ coming from the Bogomolny equations, it is tightly
constrained by symmetry considerations. Indeed, $PSU(2)$ gauge
transformations act on the moduli space by isometries which preserve
the complex structure, and the orbits have real codimension $1$,
therefore the most general ansatz will depend on functions of a
single variable. The $PSU(2)$ action in question acts on $u,v$ as a
two-dimensional projective representation, and acts trivially on
$b$. Using this, it is easy to show that the most general
$PSU(2)$-invariant $(1,1)$-form on $T{\mathbb P}^1$ is
$$J=f_1(\lambda) e_1 {\wedge} {\overline{e}}_1+ f_2(\lambda) e_2 {\wedge} {\overline{e}}_2$$
where \begin{equation} e_1=\frac{dw}{w}-{\overline{z}} e_2,\quad e_2= \frac{dz}{1+{|} z
{|}^2} \label{defii} \end{equation} and $f_1,f_2$ are functions of the
$PSU(2)$ invariant \begin{equation} \lambda=\frac{{|} w {|}^2}{1+{|}
z{|}^2} \label{defiii} \end{equation} The K\"ahler condition $dJ=0$ implies
$f_1=-\lambda f_2'$, so that the geometry is specified in terms of a
single function $f_2(\lambda)$ on $[0,\infty)$. Its behavior at zero
is constrained by the requirement that the metric be smooth at
$w=0$. Its behavior at infinity is constrained by the requirement
that after one-point compactification of $T{\mathbb P}^1$ the neighborhood
of infinity looks like ${\mathbb C}^2/{\mathbb Z}_2$ with a flat metric. These two
conditions are equivalent to
\begin{equation} f_2\rightarrow \frac{1}{2\lambda} \quad {\rm for} \quad
\lambda\rightarrow \infty,\quad f_2\rightarrow {\rm const} \quad {\rm for} \quad
\lambda \rightarrow 0. \label{asymp} \end{equation}
The standard Fubini-Study metric on ${\WW\PP}_{112}$ corresponds to
specific $f_1,f_2$ with these asymptotics:
$$f_1=\frac{(\sqrt{1+8\lambda^2}-1)^2}{4 \lambda^2 \sqrt{1+8\lambda^2}},
\quad f_2=\frac{\sqrt{1+8\lambda^2}-1}{4 \lambda^2}$$
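One can check directly that this pair satisfies the K\"ahler condition $f_1=-\lambda f_2'$. Writing $s=\sqrt{1+8\lambda^2}$, so that $s'=8\lambda/s$, one finds
$$-\lambda f_2'=\frac{s-1}{2\lambda^2}-\frac{2}{s}=\frac{s^2-s-4\lambda^2}{2\lambda^2 s}
=\frac{1+4\lambda^2-s}{2\lambda^2 s},$$
which agrees with $f_1=(s-1)^2/(4\lambda^2 s)$ upon using $s^2=1+8\lambda^2$.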
Let us consider the line bundle ${\mathcal O}(n)$ over $T{\mathbb P}^1$. The $PSU(2)$
action on $T{\mathbb P}^1$ lifts to a $PSU(2)$ action on ${\mathcal O}(n)$ if $n$ is
even, or to an $SU(2)$ action if $n$ is odd. We are mainly
interested in even $n$. In a unitary trivialization, the most
general $SU(2)$-invariant connection on ${\mathcal O}(n)$ is: \begin{equation}
A^{(n)}=\frac{\lambda f'_{(n)}}{f_{(n)}}e_1-\frac{n}{2}{\overline{z}} e_2,\quad {\overline A}^{(n)}=-\frac{\lambda f'_{(n)}}{f_{(n)}}{\overline{e}}_1+
\frac{n}{2}z {\overline{e}}_2 \label{conn} \end{equation} and covariant differentials
are defined as
$$ D={\partial}+A^{(n)},\quad {\overline {D}}={\overline {\partial}} +{\overline A}^{(n)}$$
For $n=-2m,\, m\in {\mathbb Z}$ the function $f_{(-2m)}$ has the following
asymptotics:
\begin{equation} f_{(-2m)} \rightarrow \lambda^{m} \quad
{\rm for} \quad \lambda\rightarrow \infty,\quad f_{(-2m)}\rightarrow 1
\quad {\rm for} \quad \lambda \rightarrow 0. \label{asympii} \end{equation} The
asymptotic at $\lambda\rightarrow \infty$ is chosen in such a way
that the norm of the holomorphic section $w^{-2m}$ approaches a
constant, i.e. we go to the unitary trivialization \begin{equation}
s_{unit}=s_{hol}(1+|z|^2)^mf_{(-2m)}(\lambda) \label{transf} \end{equation} and
require the pointwise norm ${|} s_{unit} {|}^2$ to approach a
constant. The reason is that in the neighborhood of the orbifold
point $u=v=0$, $w^{-2m}$ represents a section which transforms
trivially between the two charts $u\ne 0$ and $v\ne 0,$ and provides
a local holomorphic trivialization of ${\mathcal O}(-2m)$. We would like its
norm neither to diverge nor to become zero at the orbifold point.
The postulated behavior at $\lambda \rightarrow 0$ ensures that the
connection is smooth at the zero section of $T{\mathbb P}^1$.
\subsubsection{A heuristic argument}
Since solving partial differential equations is hard, it is useful
to have some idea about the kind of solutions one expects to find.
There is a heuristic argument, explained to us by Roman
Bezrukavnikov, which gives the dimensions of the cohomology groups
we are after. Let us start with the case $m=0$ where we already know
the structure of solutions \cite{KW}: all of cohomology is of type
$(p,p)$, and there is a single solution for $p=0,1,2$:
$$
H^p(T{\mathbb P}^1,\Omega^q)=\delta_{pq}V_1
$$
where $V_{2j+1}$ stands for the $(2j+1)$-dimensional
irreducible representation of $SL(2,{\mathbb C})$.
Next we note that the line bundle ${\mathcal O}(2)$ corresponds to the
divisor $D$, where $D$ is the zero section of $T{\mathbb P}^1$. Hence we
have a short exact sequence of coherent sheaves on $T{\mathbb P}^1$:
$$
0\rightarrow {\mathcal O}\rightarrow {\mathcal O}(D)\rightarrow N_D\rightarrow 0,
$$
where $N_D$ is the normal bundle of $D$. This gives a long exact
sequence for sheaf cohomology groups. We are of course interested
not in sheaf cohomology groups, but in $L^2$ Dolbeault cohomology of
the corresponding line bundles. But let us cheat and ignore this
distinction. Then the long exact sequence implies
$$
H^0(T{\mathbb P}^1,{\mathcal O}(2))=V_1+V_3, \quad H^i(T{\mathbb P}^1,{\mathcal O}(2))=0,\ i=1,2.
$$
Similarly, if we tensor the short exact sequence with the sheaf
$\Omega^i$, $i=1,2$, and then write down the corresponding long
exact sequences, we infer:
$$
H^0(T{\mathbb P}^1,\Omega^1(2))=V_1, \quad H^0(T{\mathbb P}^1,\Omega^2(2))=0, \quad
H^i(T{\mathbb P}^1,\Omega^j(2))=0,\ i,j=1,2.
$$
Having determined all relevant cohomology groups for $m=-1$, we can
move on to $m=-2$ and write down a short exact sequence involving
${\mathcal O}(4)\simeq {\mathcal O}(2D)$:
$$
0\rightarrow {\mathcal O}(D)\rightarrow {\mathcal O}(2D)\rightarrow N_D(D)\rightarrow 0,
$$
which implies
\begin{equation}
H^0(T{\mathbb P}^1,{\mathcal O}(4))=V_1+V_3+V_5,\quad H^0(T{\mathbb P}^1,\Omega^1(4))=V_3+V_1+V_3,
\end{equation}
$$H^0(T{\mathbb P}^1,\Omega^2(4))=V_1,$$
with all higher cohomologies vanishing. Continuing in this fashion,
we can determine cohomology groups for all negative $m$. We find
that only degree-0 cohomology is nonvanishing. If we let $k=-m>0$,
then degree-0 cohomology groups are
\begin{eqnarray}
&H^0(T{\mathbb P}^1,{\mathcal O}(2k))=\sum_{j=0}^k V_{2j+1},\\
&H^0(T{\mathbb P}^1,\Omega^1(2k))=V_{2k-1}+\sum_{j=1}^{k-1}V_{2j-1}+
\sum_{j=1}^{k-1}V_{2j+1},\\
&H^0(T{\mathbb P}^1,\Omega^2(2k))=\sum_{j=0}^{k-2}V_{2j+1}.
\end{eqnarray}
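As a consistency check, these formulas give
$$\dim H^0\bigl(T{\mathbb P}^1,{\mathcal O}(2k)\bigr)=(k+1)^2,\quad
\dim H^0\bigl(T{\mathbb P}^1,\Omega^1(2k)\bigr)=2k^2-1,\quad
\dim H^0\bigl(T{\mathbb P}^1,\Omega^2(2k)\bigr)=(k-1)^2,$$
so the Euler characteristic $\sum_{p,q}(-)^{p+q}\dim H^p(\Omega^q(2k))$, to which only $p=0$ contributes, equals $(k+1)^2-(2k^2-1)+(k-1)^2=3$ independently of $k$. This matches the $m=0$ case, where the three classes with $h^{p,p}=1$, $p=0,1,2$, each contribute $+1$.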
If $m>0$, we can find cohomology groups by applying
Kodaira-Serre duality to the results for $m<0$:
$$
H^p(T{\mathbb P}^1,\Omega^q(2m))=H^{2-p}(T{\mathbb P}^1,\Omega^{2-q}(-2m)).
$$
Thus for positive $m$ only degree-$2$ cohomology is nontrivial.
Below we write down an explicit basis for degree-0 cohomology groups
and check that all elements of the basis are square-integrable. By
Kodaira-Serre duality, this also verifies the predictions for
degree-$2$ cohomology. We have not been able to prove that
degree-$1$ $L^2$ cohomology groups vanish for all $m$. We only
checked that degree-$1$ $L^2$ cohomology, if it exists, does not
contain irreducible $PSL(2,{\mathbb C})$ representations of dimensions $1$
and $3$. (For larger $PSL(2,{\mathbb C})$ representations, the analysis
becomes very complicated, and we were not able to push it through.)
\subsubsection{$H^0(T{\mathbb P}^1,{\mathcal O}(2k)),\quad k>0.$} First we find holomorphic sections
of the line bundle ${\mathcal O}(2k)$ on $T{\mathbb P}^1$. In a holomorphic
trivialization these sections are $ b^{k-j}P_{2j}(u,v)$ for
$j=0,\ldots, k$ where $P_{2j}(u,v)$ is a homogeneous polynomial of
degree $2j$ in variables $u,v$. For each $j$ these sections
transform in a representation $V_{2j+1}.$
To see that all these sections are
in $L^2$ we go to the unitary trivialization (\ref{transf}) and
compute the norm. For $s_{hol}=w^{2(k-j)} z^p$ with $p\le 2j \le 2k$
we find \begin{equation} \int_{T{\mathbb P}^1}{|} s_{unit} {|}^2 \,
f_1({\lambda})f_2({\lambda}) e_1{\wedge} {\overline{e}}_1 {\wedge} e_2 {\wedge} {\overline{e}}_2=
\frac{\pi}{2}\int \frac{d{\lambda}}{{\lambda}} {\lambda}^{2(k-j)}\, f_1\, f_2\,
f_{(2k)}^2
\, \int \frac{{|} z {|}^{2p} dz d{\overline{z}}}{\bigl(1+{|} z{|}^2\bigr)^{2+2j}}
\label{norma} \end{equation} where we used (\ref{defii}) and (\ref{transf}).
The $z$-integral is convergent for $p\le 2j$, while the integral
over ${\lambda}$ is finite since the integrand behaves \footnote{We use
the asymptotics (\ref{asymp}) and (\ref{asympii}) of
$f_1,f_2,f_{(2k)}$.} at infinity as $\frac{1}{{\lambda}^{2j+3}}$ and at
zero as ${\lambda}^{1+2(k-j)}$ for $j=0,\ldots,k.$
\subsubsection{$H^0(T{\mathbb P}^1,\Omega^1(2k)),\quad k>0.$} Next we find holomorphic
sections of the vector bundle $\Omega^1(2k)$ on $T{\mathbb P}^1$. Again it
is easy to do it in a holomorphic trivialization. $2k-1$ sections
pulled back from the base transform in a representation $V_{2k-1}$:
$$\rho_{hol}=uvP_{2k-2}(u,v)\bigl({du\over u}-{dv\over v}\bigr).$$
All these sections are square-integrable. Indeed, in the chart
$u\neq 0$ they are of the form $\rho_{hol}=z^pdz$ for
$p=0,\ldots,2k-2$ and their norm is finite: \begin{equation} \int_{T{\mathbb P}^1}\rho_{unit}{\wedge}
{\overline{*\rho_{unit}}}= {\pi \over 2}\int {d{\lambda} \over {\lambda}} f_1\,
f_{(2k)}^2
\, \int {{|} z {|}^{2p} dz d{\overline{z}} \over \bigl(1+{|}
z{|}^2\bigr)^{2k}}<\infty .
\label{normaii} \end{equation}
Also, there are holomorphic sections of the form
$$\rho_{hol}=\bigl(db-b{du\over u}-b{dv \over v}\bigr)F_{2k-2}(u,v,b)+
b{\tilde F}_{2k-2}(u,v,b)\bigl({du\over u}-{dv\over v}\bigr),$$ where
$F_{2k-2}$ and ${\tilde F}_{2k-2}(u,v,b)$ must satisfy (to ensure
non-singular behavior)
$${\tilde F}_{2k-2}-F_{2k-2}=ug_{2k-3}(u,v,b),\quad
{\tilde F}_{2k-2}+F_{2k-2}=v{\tilde g}_{2k-3}(u,v,b).$$
We further write
$$g_{2k-3}(u,v,b)=\sum_{j=1}^{k-1} b^{k-1-j} P_{2j-1}(u,v),\quad
{\tilde g}_{2k-3}(u,v,b)=\sum_{j=1}^{k-1} b^{k-1-j} {\tilde P}_{2j-1}(u,v).$$ So
the total number of mixed-type sections is $4\sum_{j=1}^{k-1}j=2k(k-1).$
To see that these sections decompose as
$\sum_{j=1}^{k-1}\Bigl( V_{2j+1}+V_{2j-1}\Bigr)$ we write them in a
unitary trivialization (in the chart $u\neq 0$)
$$\rho_{unit}=w^2f_{(2k)}\bigl(1+{|} z {|}^2\bigr)^{-k}
\Bigl(\bigl(z {\tilde g}_{2k-3}-g_{2k-3}\bigr)e_1-\bigl({\overline{z}} g_{2k-3}+{\tilde g}_{2k-3}\bigr)e_2
\Bigr).$$
where
$${\tilde g}_{2k-3}=\sum_{j=1}^{k-1}w^{2k-2-2j}\sum_{n=0}^{2j-1}a^{(j)}_n z^{2j-1-n},\quad
g_{2k-3}=\sum_{j=1}^{k-1}w^{2k-2-2j}\sum_{n=0}^{2j-1}c^{(j)}_n z^{2j-1-n}$$
Then, $\rho_{unit}$ is brought to the form
$$\rho_{unit}=w^{2k}f_{(2k)}(\lambda)\bigl(1+{|} z{|}^2\bigr)^{-k}\Biggl(
e_1\sum_{j=1}^{k-1}w^{-2j}\Bigl(a^{(j)}_0z^{2j}-
c^{(j)}_{2j-1}+\sum_{n=0}^{2j}\beta^{(j)}_n z^{2j-n}\Bigr)+$$
$$(-){{\overline{w}}\over w}e_2\sum_{j=1}^{k-1}{1\over w^{2j-2}{|} w{|}^{2}}
\Bigl(a^{(j)}_0z^{2j-1}+c^{(j)}_{2j-1}{\overline{z}}+\sum_{n=1}^{2j-1}(a^{(j)}_n+
c^{(j)}_{n-1}{|} z {|}^2) z^{2j-1-n}\Bigr)\Biggr)$$
where
$$\beta^{(j)}_0=a^{(j)}_0,\quad \beta^{(j)}_{2j}=-c^{(j)}_{2j-1},\quad \beta^{(j)}_n=a^{(j)}_n-c^{(j)}_{n-1},\quad n=1,\ldots,2j-1
$$
Now recall that $e_1$ and ${{\overline{w}}\over w}e_2$ are $SL(2,{\mathbb C})$ invariant (1,0) forms,
and $\lambda={{|} w {|}^2 \over 1+{|} z{|}^2}$ is $SL(2,{\mathbb C})$ invariant.
We see that the $e_1$ piece in $\rho_{unit}$ transforms as $\sum_{j=1}^{k-1}V_{2j+1},$ i.e.
for each $j=1,\ldots, k-1$
$$w^{-2j}\sum_{n=0}^{2j}\beta^{(j)}_n z^{2j-n}$$
transforms as $V_{2j+1}.$
The $e_2$ piece in $\rho_{unit}$ transforms as $\sum_{j=1}^{k-1}V_{2j-1}$ if
we impose $2j+1$ constraints for each $j=1,\ldots,k-1$:
$$a^{(j)}_0=0,\quad c^{(j)}_{2j-1}=0,\quad a^{(j)}_n=c^{(j)}_{n-1}, \quad n=1,\ldots, 2j-1.$$
All these sections are in $L^2.$ Indeed, the norm of each section
in $V_{2j+1}$ is not greater than
$${\pi \over 2}\int {d{\lambda} \over {\lambda}} f_2 \, f_{(2k)}^2
\, \int {{|} z^{2j} w^{2(k-j)} {|}^{2} dz d{\overline{z}} \over \bigl(1+{|} z{|}^2\bigr)^{2k+2}},$$
where $j=1,\ldots, k-1.$
Using (\ref{defiii}) the integral is
brought to the form
$${\pi \over 2}\int {d{\lambda} \over {\lambda}} {\lambda}^{2(k-j)}\, f_2\, f_{(2k)}^2
\, \int {{|} z {|}^{4j} dz d{\overline{z}} \over \bigl(1+{|} z{|}^2\bigr)^{2+2j}}, $$
which is finite in the relevant range, i.e. for $j=1,\ldots,k-1.$
Analogously, the norm of each section
in $V_{2j-1}$ is not greater than
$${\pi \over 2}\int {d{\lambda} \over {\lambda}} f_1 \, f_{(2k)}^2 {\lambda}^{2(k-j)}
\, \int {{|} z{|}^{2(2j-2)} dz d{\overline{z}} \over \bigl(1+{|} z{|}^2\bigr)^{2j}},$$
which is finite in the relevant range, i.e. for $j=1,\ldots,k-1.$
\subsubsection{$H^0\bigl(T{\mathbb P}^1,\Omega^2(2k)\bigr),\quad k>0.$} Finally we find
holomorphic sections of the line bundle $\Omega^2(2k)$ on $T{\mathbb P}^1$.
In a holomorphic trivialization they are
$$ \rho_{hol}=F_{2k-4}(u,v,b)\bigl(vdu-udv\bigr){\wedge} \bigl(db-b{du\over u}-b{dv \over v}\bigr),$$
where
$$F_{2k-4}=\sum_{j=0}^{k-2}b^{k-2-j}\, P_{2j}(u,v).$$
In a unitary trivialization they have the form
$$\rho_{unit}=w^{2k}{f_{(2k)}(\lambda)\over \lambda}\bigl(1+{|} z {|}^2\bigr)^{-k}\,\sum_{j=0}^{k-2}
w^{-2j}\, \sum_{p=0}^{2j} a^{(j)}_p z^{2j-p}\, e_1 {\wedge} \left(e_2
{{\overline{w}} \over w}\right),$$ so we conclude that they transform in a
representation $\sum_{j=0}^{k-2}V_{2j+1}.$
All these sections have
finite $L^2$ norm, since for $j=0,\ldots,k-2$ and $p=0,\ldots, 2j$ we find
$${\pi \over 2}\int {d{\lambda} \over {\lambda}} {\lambda}^{2(k-j-1)}\, f_{(2k)}^2
\, \int {{|} z {|}^{2p} dz d{\overline{z}} \over \bigl(1+{|} z{|}^2\bigr)^{2+2j}} < \infty.$$
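As the simplest example, for $k=2$ the sum contains only the $j=0$ term, so there is a single section
$$\rho_{hol}=c\,\bigl(vdu-udv\bigr){\wedge} \bigl(db-b{du\over u}-b{dv \over v}\bigr),\qquad c\in {\mathbb C},$$
transforming as $V_1$, in agreement with $H^0(T{\mathbb P}^1,\Omega^2(4))=V_1$ obtained from the heuristic argument above.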
\subsection{Wavefunctions on $T^*{\mathbb P}^1.$}
We regard $T^*{\mathbb P}^1$ as the total space of the line bundle ${\mathcal O}(-2)$
over ${\mathbb P}^1$ and use homogeneous coordinates $u,v,b'$ with ${\mathbb C}^*$
weights $1,1,-2$. In the patch $u\neq 0$ we define inhomogeneous
coordinates \begin{equation} z={v\over u},\quad w'=\sqrt{b'}u \label{newdefi} \end{equation}
Our goal is to compute the $L^2$ Dolbeault cohomology of the bundles
$\Omega^i(-2m)$, $i=0,1,2$.
\subsubsection{Metrics}
The most general $SU(2)$-invariant K\"ahler form on $T^*{\mathbb P}^1$ is
$$J=f_1(x) e_1' {\wedge} {\overline{e}}'_1+ f_2(x) e_2 {\wedge} {\overline{e}}_2,$$
where \begin{equation} e_1'={dw'\over w'}+{\overline{z}} e_2, \quad e_2={dz\over 1+{|} z
{|}^2} \label{newdefii} \end{equation} and $f_1,f_2$ are functions of the
$SU(2)$ invariant \begin{equation} x:={|} w' {|}^2 (1+{|} z{|}^2).
\label{newdefiii} \end{equation} From $dJ=0$ we find $f_1=x f_2'$ so that
geometry is specified in terms of a single function $f_2(x)$, which
we take to be a positive function with the following asymptotics:
\begin{equation} f_2\rightarrow x \quad {\rm for} \quad x \rightarrow \infty,\quad
f_2\rightarrow {\rm const} \quad {\rm for} \quad x \rightarrow 0.
\label{newasymp} \end{equation} The first condition ensures that at $x
\rightarrow \infty$ the metric becomes flat. The second condition is
required so that for $x=0$ the metric is nonsingular.
Next consider the line bundle ${\mathcal O}(2k)$ over $T^*{\mathbb P}^1$. In a
unitary trivialization the connection on this bundle is \begin{equation}
A^{(2k)}={x f'_{(2k)}\over f_{(2k)}}e_1'-k{\overline{z}} e_2,\quad {\overline A}^{(2k)}=-{x f'_{(2k)}\over f_{(2k)}}{\overline{e}}_1'+k z {\overline{e}}_2
\label{newconn} \end{equation}
and covariant differentials are defined as
$$ D={\partial}+A^{(2k)},\quad {\overline {D}}={\overline {\partial}} +{\overline A}^{(2k)}$$
For $k=-m,\, m\in {\mathbb Z},$ the function $f_{(-2m)}$ has the
asymptotics \begin{equation} f_{(-2m)} \rightarrow x^{-m} \quad {\rm for} \quad x
\rightarrow \infty,\quad f_{(-2m)}\rightarrow 1 \quad {\rm for} \quad x
\rightarrow 0. \label{newasympii} \end{equation} The behavior for $x
\rightarrow \infty$ is chosen in such a way that asymptotically the
holomorphic section $w'^{2m}$ of ${\mathcal O}(-2m)$ has constant pointwise
norm. The reason for this choice is that in the limit $x \rightarrow
\infty$, $w'^{2m}$ extends to a section which transforms
trivially between the two charts $u\ne 0$ and $v\ne 0.$ The behavior
for $x \rightarrow 0$ ensures that we have a nonsingular metric when
restricting to the zero section of $T^*{\mathbb P}^1$.
\subsubsection{A heuristic argument}
Again we begin with a heuristic argument. The sheaf ${\mathcal O}(2)$ on
$T^*{\mathbb P}^1$ can be identified with ${\mathcal O}(-D)$, where $D$ is the zero
section $b'=0$. A short exact sequence of sheaves
\begin{equation}\label{shortEH}
0\rightarrow {\mathcal O}(-D)\rightarrow {\mathcal O}\rightarrow {\mathcal O}_D\rightarrow 0
\end{equation}
implies a long exact sequence for sheaf cohomology. Let us assume
that it holds also for $L^2$ Dolbeault cohomology. We also recall
\cite{KW} that for $m=0$ the only square-integrable solution of
equations (\ref{eqsdolbope}) on $T^*{\mathbb P}^1$ is of type $(1,1)$, so
$$
H^1(T^*{\mathbb P}^1,\Omega^1)=V_1,
$$
and all other $L^2$ Hodge numbers on $T^*{\mathbb P}^1$ vanish. Then the
long exact sequence coming from (\ref{shortEH}) and its relatives
obtained by tensoring (\ref{shortEH}) with $\Omega^i$ imply
$$
H^1(T^*{\mathbb P}^1,{\mathcal O}(2))=V_1,\quad H^1(T^*{\mathbb P}^1,\Omega^2(2))=V_1,\quad
H^1(T^*{\mathbb P}^1,\Omega^1(2))=V_3,
$$
and all other cohomologies for $m=-1$ vanish. Now that we know
cohomology for $m=-1$, we can tensor (\ref{shortEH}) with
$\Omega^i(2)$ and determine cohomology for $m=-2$, etc. In this way
we obtain the following predictions for dimensions of $L^2$
cohomology groups for $k=-m>0$:
$$H^1\bigl(T^*{\mathbb P}^1,\Omega^1(2k)\bigr)=V_1+V_{2k-1}+V_{2k+1}+
2\sum_{j=1}^{k-2}V_{2j+1},\quad k\ge 3,$$
$$ H^1\bigl(T^*{\mathbb P}^1,\Omega^1(2)\bigr)=V_3,\quad H^1\bigl(T^*{\mathbb P}^1,\Omega^1(4)\bigr)=V_1+V_3+V_5,$$
$$H^1\bigl(T^*{\mathbb P}^1,\Omega^2(2k)\bigr)=H^1\bigl(T^*{\mathbb P}^1,{\mathcal O}(2k)\bigr)=
\sum_{j=0}^{k-1}V_{2j+1}.$$ The results for $k<0$ are obtained by
Kodaira-Serre duality; in fact, from the above formulas it is easy
to see that cohomology groups depend only on $|k|$.
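As a consistency check on these predictions, the dimensions are
$$\dim H^1\bigl(T^*{\mathbb P}^1,{\mathcal O}(2k)\bigr)=\dim H^1\bigl(T^*{\mathbb P}^1,\Omega^2(2k)\bigr)=k^2,\qquad
\dim H^1\bigl(T^*{\mathbb P}^1,\Omega^1(2k)\bigr)=2k^2+1,$$
where the last formula holds for all $k\ge 1$. Hence the Euler characteristic $\sum_{p,q}(-)^{p+q}\dim H^p(\Omega^q(2k))$, to which only $p=1$ contributes, equals $-k^2+(2k^2+1)-k^2=1$ independently of $k$, matching the $m=0$ result $H^1(T^*{\mathbb P}^1,\Omega^1)=V_1$.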
Below we will find exactly the right number of square-integrable
solutions of (\ref{eqsdolbope}) in cohomological degree $1$, with
the correct transformation properties under $PSU(2)$. We also
checked that in degree zero (and by Kodaira-Serre duality, in degree
$2$) all $L^2$ cohomology vanishes, just as the long exact sequence
predicts. We have not been able to verify that we have found all
square-integrable solutions of (\ref{eqsdolbope}) in degree $1$. We
only checked that if other solutions in degree $1$ exist, they
cannot transform in $PSU(2)$ representations of dimensions $1$ and
$3$.
\subsubsection{$H^1\bigl(\Omega^1(2k)\bigr),\quad k>0.$} The most general ansatz
(in the unitary trivialization and in the chart $u\neq 0$) for the
component of the $(n+1)$-plet in $H^1\bigl(\Omega^1(2k)\bigr)$ with
the $PSU(2)$ isospin projection $J_3=-(n/2)$ is: \begin{equation}
\omega={f_{(2k)}\over (1+ {|} z {|}^2)^{k}} w'^{-2k} \, \Biggl(a
e_1' {\wedge} {\overline{e}}'_1 +be_2{\wedge} {\overline{e}}_2+c{{\overline{w}}'\over w'} e_1'{\wedge}
{\overline{e}}_2+ d{w'\over {\overline{w}}'} e_2{\wedge} {\overline{e}}'_1\Biggr), \label{start}
\end{equation} where
$$a=\sum_{p=0}^n \, a_p(x) w'^{n-p}\,({\overline{w}}' {\overline{z}})^p$$
and the functions $b,c,d$ have a similar form. We have used that
$e'_1$ and ${w'\over {\overline{w}}'}e_2$ are $SU(2)$-invariant $(1,0)$
forms. Various terms in $a$ correspond to different ways of building
up the component of an $(n+1)$-plet with $J_3=-(n/2),$ i.e.
$u^n,u^{n-1}{\overline{v}},\ldots ,{\overline{v}}^n$.
Imposing $D(*\omega)=0$ and ${\overline {D}}(\omega)=0$ we found that
non-trivial cohomology groups come from using two simple special
cases of the general ansatz (\ref{start}).
${\bf I.}~~~$The first simplified ansatz has the form: \begin{equation}
\omega={w'^{-2k} f_{(2k)}\over \bigl( 1 +{|}
z{|}^2\bigr)^{k}}\Omega_n, \label{ans} \end{equation} where
$$\Omega_n=w'^n\Biggl(a(x) e_1' {\wedge} {\overline{e}}'_1
+b(x) e_2{\wedge} {\overline{e}}_2+d(x){\overline{z}} e_2{\wedge} {\overline{e}}'_1\Biggr).$$ From
${\overline {D}} \omega=0$ we find \begin{equation} a-xb'+d=0.\label{haha} \end{equation} Meanwhile,
$D(*\omega)=0$ gives \begin{equation} 2k{f_1\over f_2}b-xd'-2xd{f'_{(2k)}\over
f_{(2k)}} +(n-2k)\Bigl({f_1b\over f_2}-d\Bigr)=0, \label{hahaii} \end{equation}
\begin{equation} x\left({f_2 a\over f_1}\right)'-{f_1\over f_2}b+
\Bigl(n-2k+2x{f'_{(2k)}\over f_{(2k)}}\Bigr){f_2\over f_1}a=0.
\label{hahaiii} \end{equation}
${\bullet ~~~}$Let us first assume $n=2k$, then a linear combination
of (\ref{hahaii}) and (\ref{hahaiii}) gives
$$\Bigl(2k{f_2\over f_1}a-d\Bigr)'+
2{f'_{(2k)}\over f_{(2k)}}\Bigl(2k{f_2\over f_1}a-d\Bigr)=0,$$ which
can be integrated to express $d$ in terms of $a$ as \begin{equation}
d=2k{f_2\over f_1}a-{C_0\over f_{(2k)}^2} \label{uraii} \end{equation} where
$C_0$ is an integration constant. {}From (\ref{hahaiii}) $b$ can
also be expressed in terms of $a$ and its derivative, so that the
system (\ref{haha}-\ref{hahaiii}) reduces to a second-order
inhomogeneous differential equation: \begin{equation} -x^2 {f_2\over
f_1}\phi''-\left(x\left({xf_2\over f_1}\right)'+ 2x^2{f_2\over
f_1}{f'_{(2k)}\over f_{(2k)}}\right)\phi'+ \left(2k+{f_1\over f_2}-
2x\left({xf_2\over f_1}{f'_{(2k)}\over f_{(2k)}}\right)'\right)\phi=
{C_0\over f_{(2k)}^2}, \label{finii} \end{equation} where $\phi={f_2 a \over
f_1}.$ Near $x \rightarrow \infty$ (\ref{finii}) becomes
$$-x^2\phi''-(1+2k)x\phi'+(1+2k)\phi={C_0\over x^{2k}},$$
and its general solution behaves at infinity as \begin{equation} \phi={C_0\over
(1+2k)x^{2k}}+C_1x+{C_2\over x^{1+2k}} \label{infsolii} \end{equation} where
$C_1$ and $C_2$ parameterize the general solution of the homogeneous
equation.
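As a consistency check (ours, not spelled out in the text), the behavior (\ref{infsolii}) follows directly from the large-$x$ equation:

```latex
% Homogeneous part: substitute \phi = x^r.
\[
-x^2\phi''-(1+2k)x\phi'+(1+2k)\phi=0,\qquad \phi=x^{r}
\;\Longrightarrow\;
r^2+2kr-(2k+1)=0
\;\Longrightarrow\;
r=1\ \ \text{or}\ \ r=-(1+2k).
\]
% Particular solution: substitute \phi = A x^{-2k}.
\[
A\bigl[-2k(2k+1)+2k(1+2k)+(1+2k)\bigr]=C_0
\quad\Longrightarrow\quad
A={C_0\over 1+2k}.
\]
```

The two indicial roots reproduce the $C_1 x$ and $C_2 x^{-(1+2k)}$ terms, and the particular solution fixes the coefficient $C_0/(1+2k)$.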
Near $x\rightarrow 0$ (\ref{finii}) becomes
$$-x\phi''+\phi'+2kx\phi=C_0 x,$$
and its general solution behaves at zero as \begin{equation} \phi={C_0\over
2k}+C_3x^2+C_4x^2\log x, \label{zerosolii} \end{equation} where $C_3$ and $C_4$
parameterize the general solution of the homogeneous equation.
To ensure that $\omega$ is well-behaved near the origin we must
choose $C_4=0.$ This is always possible. Starting from any decaying solution
of the inhomogeneous equation at infinity
$$\phi_{inhom}={C_0\over (1+2k)x^{2k}}+{{{\tilde C}}_2\over x^{2k+1}}$$
we may always add a decaying solution of the homogeneous equation so
that
$$\phi=\phi_{inhom}+{C_2\over x^{2k+1}}$$
continues to small $x$ in the desired way, i.e. $C_4=0.$
Finally we note that $\omega$ has a finite $L^2$ norm:
$$
{\pi \over 2}\int {dx \over x} f_{(2k)}^2
\, \int {dz d{\overline{z}} \over
\bigl(1+{|} z{|}^2\bigr)^{2+2k}}\left(a^2 {f_2\over f_1}+
b^2 {f_1\over f_2}+d^2{|} z {|}^2\right).$$
Indeed, using the asymptotics at $x\rightarrow \infty$
$$a \sim {1\over x^{2k}},\quad b \sim {1\over x^{2k}}, \quad
d \sim {1\over x^{2k}}$$ we find that the integral converges for $2k \ge
2.$ We conclude that using the ansatz (\ref{ans}) for $n=2k$ we have found a
well-behaved $(2k+1)$-plet with finite $L^2$ norm.
$\bullet~~~~ n\le 2k-2,\, n>0$
Let us consider (\ref{haha}-\ref{hahaiii}) with $n\ne 2k.$ Then a
linear combination of (\ref{hahaii}) and (\ref{hahaiii}) can be
integrated to express $d$ in terms of $a$ as \begin{equation} d=n{f_2\over
f_1}a-{C_0 x^{2k-n} \over f_{(2k)}^2}, \label{uraiii} \end{equation} where
$C_0$ is an integration constant. From equation (\ref{hahaiii}) $b$
can also be expressed in terms of $a$ and its derivative, so that
the system (\ref{haha}-\ref{hahaiii}) reduces to a second-order
inhomogeneous differential equation: \begin{equation} -x^2 {f_2\over f_1}\phi''-
\left(x\left({xf_2\over f_1}\right)'+ 2x^2{f_2\over
f_1}{f'_{(2k)}\over f_{(2k)}}+{xf_2(n-2k)\over f_1} \right)\phi'+
\label{finiii} \end{equation}
$$\left(n+{f_1\over f_2}-
2x\left({xf_2\over f_1}{f'_{(2k)}\over f_{(2k)}}\right)'+
(2k-n)x\left({f_2\over f_1}\right)'\right)\phi= {C_0 x^{2k-n}\over
f_{(2k)}^2},$$ where $\phi={f_2 a \over f_1}.$ Near $x\rightarrow
\infty$ (\ref{finiii}) becomes
$$-x^2\phi''-(1+n)x\phi'+(1+n)\phi={C_0\over x^n},$$
and its general solution at infinity is \begin{equation} \phi={C_0\over
(1+n)x^n}+C_1x+{C_2\over x^{1+n}} \label{infsoliii} \end{equation} where $C_1$
and $C_2$ parameterize the general solution of the homogeneous
equation.
We must set $C_1=0$ to obtain $\omega$ with a finite $L^2$ norm for
$n>0$:
$$\int_{T^*{\mathbb P}^1} \omega {\wedge} *{\overline \omega}\sim
\int {dx \over x^{1+n}} \, \int {dz d{\overline{z}} \over \left(1+{|}
z{|}^2\right)^{2+n}}.$$ Near $x\rightarrow 0$ (\ref{finiii})
becomes
$$-x^2\phi''+(2k-n+1)x\phi'+2(n-2k)\phi=C_0 x^{2k-n+2}.$$
For $n < 2k-2$ the general solution near zero is \begin{equation} \phi={C_0
x^{2k-n+2}\over 2(n-2k)}+C_3x^2+C_4x^{2k-n}, \label{zerosoliii} \end{equation}
and for $n=2k-2$ \begin{equation} \phi=-{C_0 x^{4}\over 4}+C_3x^2+C_4 x^2 \log x,
\label{zerosoliv} \end{equation} where $C_3$ and $C_4$ parameterize the general
solution of the homogeneous equation.
For $n=2k-2$ there is a good solution if $C_4=0.$ For
even\footnote{Recall that $x^2$ is a good coordinate, but $x$ is
not, so odd powers of $x$ are ill-behaved.} $n$ such that $n<2k-2$
there is a good solution if $C_3=0$. Such solutions always exist.
Starting from any decaying solution of the inhomogeneous equation at infinity
$$\phi_{inhom}={C_0\over (1+n)x^n}+{{{\tilde C}}_2\over x^{n+1}}$$
we may always add a decaying solution of the homogeneous equation so
that
$$\phi=\phi_{inhom}+{C_2\over x^{n+1}}$$
continues to small $x$ in the desired way, i.e. $C_4=0$ or $C_3=0.$
We conclude that using the ansatz (\ref{ans}) we found a
well-behaved $(n+1)$-plet with a finite norm for even $n$ such that
$n>0$ and $n\le 2k-2.$
${\bf II.}~~~$The second simplified ansatz has the form: \begin{equation}
\omega={w'^{n-2k} f_{(2k)}\over \bigl( 1 +{|} z{|}^2\bigr)^{k}}
d(x){w'\over {\overline{w}}'} e_2 {\wedge} {\overline{e}}'_1. \label{ansnew} \end{equation} Imposing
$D(*\omega)=0$ and ${\overline {D}}(\omega)=0$ gives \begin{equation}
d(x)={x^{2k-n-1}\over f_{(2k)}^2}. \label{solnew} \end{equation} For $x
\rightarrow 0$ $\omega$ is well-behaved if $n$ is even and
satisfies the inequality $n\le 2k-4.$ Also, this solution has finite
$L^2$ norm for $n>0$:
$$\int \omega {\wedge} *\overline{\omega} \sim
\int {dx \over x^{n+3}}
\, \int {dz d{\overline{z}} \over
\bigl(1+{|} z{|}^2\bigr)^{2+n}}.$$
\subsubsection{$H^1\bigl({\mathcal O}(2k)\bigr)$ and $H^1\bigl(\Omega^2(2k)\bigr),\quad k>0$}
For $k>0$ we start from an ansatz (in the unitary trivialization and
in the chart $u\neq 0$) for the component of the $(n+1)$-plet in
$H^1\bigl({\mathcal O}(2k)\bigr)$ with $J_3=-(n/2)$: \begin{equation} \omega={f_{(2k)}\over
(1+ {|} z {|}^2)^{k}} w'^{n-2k} \, \Biggl(\beta(x) {\overline{e}}'_1
+\alpha(x) {{\overline{w}}'\over w'} {\overline{e}}_2 \Biggr). \label{startferm} \end{equation}
Imposing ${\overline {D}}(\omega)=0$ and $D(*\omega)=0$ gives the following
result. For even $n$ such that $0\le n\le 2k-2$
$$\omega=w'^n \left({{\overline{w}}'\over w'}\right)^{k}
{x^{k-n}\over f_{(2k)}f_2}{\overline{e}}'_1$$ belongs to
$H^1\bigl({\mathcal O}(2k)\bigr),$ has finite $L^2$ norm and is well-behaved
for $x\rightarrow 0.$
The component of the $(n+1)$-plet in $H^1\bigl(\Omega^2(2k)\bigr)$
with $J_3=-(n/2)$ can be found analogously. For even $n$ such that
$0 \le n\le 2k-2$
$$\omega=w'^n \left({{\overline{w}}'\over w'}\right)^{k-1}
{x^{k-n-1}f_1 \over f_{(2k)}}e_1'{\wedge} e_2{\wedge} {\overline{e}}'_1$$ belongs to
$H^1\bigl(\Omega^2(2k)\bigr),$ has finite $L^2$ norm and is
well-behaved for $x\rightarrow 0.$
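For convenience we note (this count is implicit in the text) that the even values $n=0,2,\ldots,2k-2$ assemble into exactly the predicted multiplet content:

```latex
\[
\bigoplus_{n=0,2,\ldots,2k-2} V_{n+1}
= V_1\oplus V_3\oplus\cdots\oplus V_{2k-1}
= \bigoplus_{j=0}^{k-1} V_{2j+1},
\]
```

in agreement with the dimensions predicted above for both $H^1\bigl({\mathcal O}(2k)\bigr)$ and $H^1\bigl(\Omega^2(2k)\bigr)$.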
\subsection{Testing S-duality}
We are now ready to perform a test of the S-duality prediction
(\ref{maineqtotest}). Summing up all cohomology groups
$H^p(T^*{\mathbb P}^1,\Omega^q(-2m))$ with the sign $(-1)^{p+q}$, we find
the ``bubbled'' contribution to the zeromode Hilbert space:
$$
V_{2m+1}-V_1-V_{2m-1},
$$
where $V_{2j+1}$ is the $(2j+1)$-dimensional representation of
$PSL(2,{\mathbb C})$. This corresponds to the sum of Wilson loops
$$
W_{2m}-W_0-W_{2m-2},
$$
in precise agreement with the S-duality prediction
(\ref{maineqtotest}).
As a consistency check on our computation, let us consider the Euler
characteristics of the graded vector spaces ${\mathfrak H}({\mathsf A},{\mathsf B},\ldots)$
associated to the left-hand side and right-hand side of eq.
(\ref{eqsdolbope}). According to our computations, the ``bulk''
contribution to the Euler characteristic of the right-hand side is
$$
1+(2m+1)-(2m-1)=3.
$$
The ``bubbled'' contribution is
$$
(2m+1)-1-(2m-1)=1.
$$
Therefore the Euler characteristic of the right-hand side is $4$. We
can compute the Euler characteristic of the left-hand side by moving
the WH operators so that they are inserted at the same point on the
interval $I$ but at different points on $C$.\footnote{Unlike in
\cite{KW}, there is no natural flat connection on the sheaf of the
zeromode Hilbert spaces ${\mathfrak H}({\mathsf A},{\mathsf B},\ldots)$, and in principle the
stalk of this sheaf might depend on the locations of the insertion
points. Nevertheless, while the dimensions of the individual graded
components might jump, the Euler characteristic must be constant.}
If the WH operators are inserted at different points on $C$, the
space of zero modes factorizes, and so does the Euler
characteristic. The Hilbert space ${\mathfrak H}(WT_{1,0})$ is purely even
and two-dimensional. The Hilbert space ${\mathfrak H}(WT_{1,2m})$ is
$$
\oplus_{p,q=0}^1 H^p({\mathbb P}^1,\Omega^q(-2m)),
$$
and its Euler characteristic is $2$ for any $m$. Therefore the Euler
characteristic of the left-hand side is also $4$.
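The value $2$ quoted for the Euler characteristic of ${\mathfrak H}(WT_{1,2m})$ can be checked (a standard computation, included here for convenience) from Riemann-Roch on ${\mathbb P}^1$, using $\chi\bigl({\mathbb P}^1,{\mathcal O}(d)\bigr)=d+1$ and $\Omega^1_{{\mathbb P}^1}\cong{\mathcal O}(-2)$:

```latex
\[
\sum_{p,q=0}^{1}(-1)^{p+q}\dim H^p\bigl({\mathbb P}^1,\Omega^q(-2m)\bigr)
=\chi\bigl({\mathcal O}(-2m)\bigr)-\chi\bigl({\mathcal O}(-2m-2)\bigr)
=(1-2m)-(-2m-1)=2.
\]
```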
\section{Concluding remarks}
As mentioned in the introduction, 't Hooft line operators can be
interpreted mathematically as objects of the category of equivariant
perverse sheaves on the affine Grassmannian $Gr_G$. Then the algebra
of loop operators can be identified with the K-theory of this
category, and the S-duality prediction is equivalent to the
geometric Satake correspondence. 't Hooft loop operators labeled by
coweights of $G$ define a distinguished basis in the $K^0$-group.
It was suggested by R.~Bezrukavnikov that the algebra of Wilson-'t
Hooft loop operators can be similarly interpreted as the $K^0$-group
of the equivariant derived category of coherent sheaves on a certain
subset $\Lambda_G$ of the cotangent bundle of $Gr_G$. $\Lambda_G$ is
defined as the union of the conormal bundles to the Schubert cells
in $Gr_G$ and is invariant under the left $G[[z]]$ action on $Gr_G$.
Just like $Gr_G$ parameterizes Hecke transformations of holomorphic
$G$-bundles, $\Lambda_G$ parameterizes Hecke transformations of
Higgs bundles. Thus any object of the $G[[z]]$-equivariant derived
category of $\Lambda_G$ can be used to define a functor from the
derived category of ${\mathcal M}_{Higgs}(G,C)$ to itself and can be thought
of as a line operator. It was proved in \cite{BFM} that the
$K^0$-group of $D^b_{eq}(\Lambda_G)$ is the Weyl-invariant part of
the group algebra of ${\widehat\Lambda}(G)$, in agreement with the physical
arguments. Further, it was conjectured in \cite{BFM} that the
obvious invariance of ${\widehat\Lambda}(G)$ under the exchange of $G$ and ${{}^LG}$
comes from an equivalence between categories $D^b_{eq}(\Lambda_G)$
and $D^b_{eq}(\Lambda_{{}^LG})$. From the physical viewpoint, this
conjecture means that the categories of line operators for $G$ and
${{}^LG}$ are equivalent and thus follows from the S-duality conjecture.
Note also that the physical definition of the Wilson-'t Hooft loop
operator suggests that there is a distinguished basis in the
K-theory of $D^b_{eq}(\Lambda_G)$ labeled by elements of
${\widehat\Lambda}(G)/{\mathcal W}$, and that the S-duality group acts on this basis in a
natural way. The mathematical significance of this basis remains
unclear. Moreover, Wilson-'t Hooft {\it line} operators should
correspond to some distinguished objects in $D^b_{eq}(\Lambda_G).$
It was conjectured by R. Bezrukavnikov that these distinguished
objects are certain perverse coherent sheaves on $\Lambda_G$.
\section*{Acknowledgments}
We would like to thank R.~Bezrukavnikov, A.~Braverman, S.~Gukov,
M.~Finkelberg, I.~Mirkovic, L.~Positselski and E.~Witten for
discussions. We are especially grateful to R.~Bezrukavnikov for
valuable advice without which this work would not be possible. We
would like to express our thanks to the Aspen Center for Physics for
hospitality. A.K. is also grateful to the Independent University of
Moscow for staying open during the winter holidays of 2006-2007 and
thereby providing an opportunity to share some preliminary results
with interested mathematicians and to receive their feedback. This
work was supported in part by the DOE grant DE-FG03-92-ER40701.
\section{Introduction}
In hadrons consisting of $u$ and $d$ quarks there are two crucially
important properties of QCD: chiral symmetry and confinement. Their
interrelations and mechanisms are not yet understood. What we do know
theoretically is that at zero temperature and density in the confining
phase chiral symmetry must necessarily be spontaneously broken in
the vacuum \cite{hooft}. Another conceptual and closely related issue
is the generation of hadron mass in the light quark sector. It was
considered almost self-evident that such a mass generation proceeds
via the chiral symmetry breaking in the vacuum and that the most important
characteristic that determines the hadron mass is the quark condensate
of the vacuum. Indeed, it is firmly established both phenomenologically
and on the lattice that to leading order the pion mass squared is
proportional to the bare quark mass and the quark condensate \cite{GOR}.
In the baryon sector the very absence of the chiral partner to the nucleon
implies that its mass is at least mostly related to the spontaneous breaking
of chiral symmetry in the vacuum. This fact is supported by the Ioffe
formula \cite{ioffe} that connects, though not rigorously, the nucleon
mass with the quark condensate. Another obvious sign of the strong
dynamical chiral symmetry breaking effects in the nucleon is the
large pion-nucleon coupling constant. Indeed, it is well understood that
the coupling of the Goldstone bosons to the nucleon is a direct consequence
of the spontaneous chiral symmetry breaking and is a basis for nucleon chiral
perturbation theory \cite{bcpt}. One more strong evidence for the chiral
symmetry breaking in the nucleon is its large axial charge, $g_A = 1.26$.
A main message
of this talk is that the mass generation mechanism in excited hadrons
is essentially different: the quark condensate of the vacuum becomes
less and less important with the excitation, and the chiral as well as
the $U(1)_A$ symmetries eventually get
approximately restored in the given hadron, even though they are
strongly broken in the vacuum. This is referred to as effective restoration
of chiral symmetry, for a review see ref. \cite{G3}.
It is important to precisely characterize what is implied under effective
restoration of chiral and $U(1)_A$ symmetry in excited hadrons. A
mode of symmetry is defined only by the properties of the vacuum.
If a symmetry is spontaneously broken in the vacuum, then it is the
Nambu-Goldstone mode and the whole spectrum of excitations on the
top of the vacuum is in the Nambu-Goldstone mode. However, it may happen
that the role of the chiral symmetry breaking condensates becomes
progressively less important higher in the spectrum, because the
valence quarks decouple from the quark condensates. This means
that the chiral symmetry breaking effects become less and less
important in the highly excited states and asymptotically the
states approach the regime where their properties are determined
by the underlying unbroken chiral symmetry (i.e. by the symmetry
in the Wigner-Weyl mode). This effective restoration
in excited hadrons should not
be confused with the chiral symmetry restoration in the vacuum at
high temperature/density. In the latter case the quark vacuum becomes
trivial and the system is in the Wigner-Weyl mode. In the former case
the symmetry is always broken in the vacuum, however this symmetry breaking
in the vacuum gets irrelevant in the highly excited states.
\section{Empirical
evidence for chiral restoration in excited nucleons}
The nucleon excitation spectrum is shown in Fig. 1. Only well-established
states (i.e. without stars in boxes) should be seriously considered. It is
well seen that there is no chiral partner to the nucleon. This necessarily
implies that chiral symmetry is strongly broken in the nucleon and consequently
is realized nonlinearly \cite{wein}. Obvious approximate parity doublets
are observed in the region 1.7 GeV and higher.
The absence of parity doublets for the
lowest-lying states and their apparent appearance for (highly) excited states
was taken in refs. \cite{G1,CG1,CG2,G2} as evidence for chiral restoration
in excited baryons, for a review see ref. \cite{G3}.
The parity doublets in the 1.7 GeV region have
been assigned to the $(0,1/2)+(1/2,0)$ representation of the parity-chiral
group because there are no approximately degenerate doublets in the same mass
region in the spectrum of the delta-resonance \cite{CG2,G3}.
A clear testable prediction of
the chiral symmetry restoration scenario is an existence of chiral partners
of the well established high-lying resonances $N(2190)$ and $N(2600)$.
A dedicated
experimental search of these missing states can be undertaken \cite{S}.
A similar situation takes place in the $\Delta$ spectrum.
\begin{figure}[hb]
\begin{center}
\includegraphics[height=8cm,angle=-90,clip=]{nspectrum.ps}
\caption{Low- and high-lying nucleons. Those states which are
not yet established are marked by ** or * according to the PDG classification.}
\label{ns}
\end{center}
\end{figure}
While these parity doubling patterns are impressive, they are still only
suggestive,
because so far no other complementary experimental data would independently
tell us that these parity doublets are due to effective chiral symmetry
restoration. Strict chiral restoration in a given baryon would imply that its
diagonal axial charge is zero and hence the diagonal coupling constant to
the pion must vanish \cite{G3,CG3,GN1,JPS1,JPS2}. This is one of the most
important implications of the chiral symmetry restoration and is reviewed below.
Assume that we have a free $I=1/2$ chiral doublet $B$ in the $(0,1/2)+(1/2,0)$
representation and there are no chiral symmetry breaking terms. This
doublet is a column \cite{LEE}
\begin{equation}
B = \left(\begin{array}{c}
B_+\\
B_-
\end{array} \right),
\label{doub}
\end{equation}
\noindent
where the bispinors
$B_+$ and $B_-$ have positive and negative parity, respectively.
The chiral transformation law
under the $(0,1/2) \oplus (1/2,0)$ representation
provides a mixing of two fields $B_+$ and $B_-$ \footnote{Note that
the axial transformation given in \cite{LEE} is incorrect as it
breaks chiral symmetry of the kinetic term. The correct axial
transformation is given in ref. \cite{G3}.}
\begin{equation}
B \rightarrow
\exp \left( \imath \frac{\theta^a_V \tau^a}{2}\right)B; ~~
B \rightarrow
\exp \left( \imath \frac{\theta^a_A\tau^a}{2} \sigma_1
\right)B.
\label{VAD}
\end{equation}
\noindent
Here $\sigma_i$ is a Pauli matrix that acts in the $2 \times 2$
space of the parity doublet.
Then the chiral-invariant Lagrangian of the free parity doublet is given as
\begin{equation}
\mathcal{L}_0 = i \bar{B} \gamma^\mu \partial_\mu B - m_0 \bar{B}
B \nonumber \\
= i \bar{B}_+ \gamma^\mu \partial_\mu B_+ +
i \bar{B}_- \gamma^\mu \partial_\mu B_-
- m_0 \bar{B}_+ B_+ - m_0 \bar{B}_- B_- .
\label{lag}
\end{equation}
\noindent
Alternative forms for this Lagrangian can be found in refs. \cite{TKU,TIT}.
A crucial
property of this Lagrangian is that the fermions
$B_+$ and $B_-$ are exactly degenerate and
have a nonzero chiral-invariant mass $m_0$. In contrast, for
usual (Dirac) fermions chiral symmetry in the Wigner-Weyl mode
restricts particles to be massless, hence they acquire
their mass only in the Nambu-Goldstone mode of chiral symmetry
due to chiral symmetry breaking in the vacuum (i.e. via the coupling
with the quark condensate of the vacuum). The chiral parity doublets
have their chiral-invariant mass term already in the Wigner-Weyl mode
and this mass term has no relation with the quark condensate.
From the axial transformation law (\ref{VAD}) one can read off the
axial charge matrix, which is $\gamma_5 \sigma_1$. Hence the diagonal axial
charges of the opposite parity baryons are exactly 0, $g_+^A = g_-^A = 0$,
while the off-diagonal axial charge is 1, $ |g_{+-}^A| = |g_{-+}^A| = 1$.
This is another crucial property that distinguishes the parity
doublets from the Dirac fermions where $g^A = 1$.
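Schematically (our spelling-out of the statement above), the vanishing of the diagonal charges is simply the vanishing of the diagonal entries of $\sigma_1$ in the parity space $(B_+,B_-)$:

```latex
\[
\sigma_1=\begin{pmatrix}0&1\\[2pt]1&0\end{pmatrix}
\quad\Longrightarrow\quad
g^A_{\pm}\propto(\sigma_1)_{\pm\pm}=0,
\qquad
|g^A_{+-}|=|g^A_{-+}|=|(\sigma_1)_{+-}|=1.
\]
```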
The axial vector current conservation,
$q^\mu \langle B_\pm | A_\mu | B_\pm \rangle = 0$,
translates this axial charge matrix via the Goldberger-Treiman relation
into the $\pi B_{\pm}B_{\pm}$ coupling constants which are zero. Hence a small
(vanishing) value of the pion-baryon coupling constant taken together
with the large baryon mass would tell us that the origin of this mass is not
due to chiral symmetry breaking in the vacuum.
An experimental verification of the smallness of the diagonal axial
charges or smallness of the pion-baryon coupling constants would be a direct
verification of the chiral symmetry restoration scenario in excited nucleons.
It is unclear, however, how to measure experimentally these quantities.
There is rich experimental data on strong decays of excited hadrons. It
turnes out that the chiral restoration implies a very strong
selection rule \cite{G4}. Namely, it predicts that if chiral symmetry is
completely
restored in a given excited nucleon ($B$), then it cannot decay into the
$\pi N$ channel,
i.e. the coupling constant $f_{BN\pi}$ must vanish. This selection rule
is based exclusively on general properties of chiral symmetry and
hence is model-independent.
Let us prove this selection rule.
Assume that a $\pi N$ decay of an exact chiral doublet
is possible. Then there must be a self-energy contribution
$B_\pm \rightarrow \pi N \rightarrow B_\pm$ into its mass. Then
the axial rotation (\ref {VAD}) would require that the S-wave
$\pi N$ state transforms into the P-wave $\pi N$ state. However,
in the Nambu-Goldstone mode the axial rotations of the pion and
nucleon states are fixed - these are the nonlinear axial transformations
\cite{JPS1,JPS2}. Given these well known axial transformation
properties of the Goldstone boson and nucleon \cite{wein} it is not possible
to rotate the S-wave
$\pi N$ state into the P-wave $\pi N$ state. Therefore, there cannot be
any $\pi N$ self-energy component in $B_\pm$.
Hence a decay
$B_\pm \rightarrow \pi N$ is forbidden.
However, a
decay of the exact chiral doublet into e.g. $N\rho$ or $N\pi\pi$ is not
forbidden.
Hence, if a state is a member of an approximate chiral multiplet, then its
decay into $N \pi$ must be strongly suppressed,
$(f_{B N\pi}/f_{NN\pi})^2 \ll 1$.
If, in contrast, the excited baryon has no chiral partner, then its
mass, like in the nucleon case, is exclusively due to chiral symmetry breaking
in the vacuum. Its axial charge should be comparable with the nucleon
axial charge. Then nothing forbids its strong decay into $N\pi$. One then
expects that the decay coupling constant should be of the same order
as the pion-nucleon coupling constant. These two extreme cases suggest
that a magnitude of the $BN\pi$ decay constant can be used as an indicator
of the mass origin.
\begin{table}
\begin{center}
\caption{Chiral multiplets of excited nucleons.
Comment: There
are two possibilities to assign the chiral representation:
$(1/2,0) \oplus (0,1/2)$ or $(1/2,1) \oplus (1,1/2)$ because
there is a possible chiral pair in the $\Delta$ spectrum
with the same spin with similar mass. }
\begin{tabular}{|llll|} \hline
Spin & Chiral multiplet & Representation &
$(f_{B_+N\pi}/f_{NN\pi})^2 - (f_{B_-N\pi}/f_{NN\pi})^2$ \\ \hline
1/2& $N_+(1440 ) - N_-(1535)$ & $(1/2,0) \oplus (0,1/2)$ &
0.15 - 0.026 \\
1/2& $N_+(1710) - N_-(1650)$ & $(1/2,0) \oplus (0,1/2)$ &
0.0030 - 0.026 \\
3/2& $N_+(1720) - N_-(1700)$ & $(1/2,0) \oplus (0,1/2)$ &
0.023 - 0.13 \\
5/2&$N_+(1680) - N_-(1675)$ & $(1/2,0) \oplus (0,1/2)$ &
0.18 - 0.012 \\
7/2&$N_+(?) - N_-(2190)$ & see comment &
? - 0.00053 \\
9/2&$N_+(2220) - N_-(2250)$ &
see comment &
0.000022 - 0.0000020 \\
11/2&$N_+(?) - N_-(2600)$ & see comment &
? - 0.000000064 \\
\hline
\hline
3/2& $ N_-(1520)$ & no chiral partner &
2.5 \\
\hline
\end{tabular}
\end{center}
\label{t3}
\end{table}
The decay constants
$f_{BN\pi}$ can be extracted from the $B \rightarrow N + \pi$ decay widths,
see e.g. \cite{CK,RB}. The pion-nucleon coupling constant is well-known,
$f_{NN\pi} =1.0$. In Table 1 we show ratios $(f_{BN\pi}/f_{NN\pi})^2$ for
all well-established states. It is well seen that this ratio is $\sim 0.1$
or smaller for approximate $J=1/2,3/2,5/2$ parity doublets. For the high-spin
states this ratio is practically vanishing. This is consistent with the
recent demonstration of the large J-rate of chiral restoration within the only
known exactly solvable
confining and chirally-symmetric model \cite{WG}.
From Fig. 1 one can see that the only well established
excited state which has no
obvious chiral partner is $3/2^-, ~ N(1520)$. It decays
very strongly into $N \pi$, indeed. This implies that the nature of the mass of
this state is rather different from that of the approximate parity doublets.
One observes a 100\% correlation
of the spectroscopic patterns with the $N \pi$ decays, as predicted
by the chiral symmetry restoration.
The Fig. 1 and the Table 1 suggest that the lowest
approximate chiral doublet is $N(1440) - N(1535)$. If correct, the
diagonal axial charges of these states must be small.
While it is impossible to measure these charges experimentally,
this can be done on the lattice. The axial charge of $N(1535)$ has just been
measured by Takahashi and Kunihiro and they report it to be
surprisingly small, smaller than 0.2 \cite{TK}. Certainly lattice studies
of other states are welcome.
\section{Symmetries in excited mesons}
\begin{figure}
\begin{center}
\includegraphics[height=5cm,clip=]{mesons.eps}
\caption{Masses (in GeV) of the well established states from PDG
(circles) and
new $\bar n n$
states from the proton-antiproton annihilation (strips). Note
that the well-established states include $f_0(1500), f_0(1710)$, which
are the glueball and $\bar s s$ states with some mixing and hence are
irrelevant from the chiral symmetry point of view. Similarly, the
$f_0(980), a_0(980)$ mesons most probably are not $\bar n n$ states and
also should be excluded from the consideration. The same is true for
$\eta(1475)$, which is an $\bar s s$ state, and for $\eta(1405)$, whose
nature is unknown.}
\label{lear}
\end{center}
\end{figure}
Fig. 2 shows the spectra of the well established mesons from the PDG and
new, not yet confirmed $\bar n n$ states from the partial wave analysis
\cite{BUGG1,BUGG2} of $\bar p p$ annihilation at LEAR (CERN). Obvious
high symmetry of the high-lying $\bar n n$ states is seen. These data
have been analysed in ref. \cite{G5} and it turned out that the
high-lying $\bar n n$ mesons perfectly fit all possible linear
chiral multiplets of both $SU(2)_L \times SU(2)_R$ and $U(1)_A$ groups
with a few still missing states. In particular, the chiral symmetry
predicts a duplication of some of the $J > 0$ states with the given
quantum numbers, which is indeed observed in data.
If the chiral symmetry is
indeed responsible for positive-negative parity degeneracy of the states,
then there should be chiral multiplets for the high-spin states at the
levels $M \sim 2$ GeV, $M \sim 2.3$ GeV and, possibly, at $M \sim 1.7$ GeV.
These states are presently missing in refs. \cite{BUGG1,BUGG2} and it would
be extraordinarily important to find them or to reliably exclude them. Note
that such high-spin parity doublets are well seen in the nucleon spectrum -
see Fig. 1.
The chiral and $U(1)_A$ symmetries can connect only states
with the same spin. Certainly we observe larger degeneracy, the states
with different spins are also degenerate. The large degeneracy might be
understood if, on top of chiral and $U(1)_A$ restorations, a principal
quantum number $ N = n + J$ existed.
There are suggestions in the literature to explain this large degeneracy
without resorting to chiral symmetry, assuming the $\vec J = \vec L +\vec S$
coupling scheme and that there is a principal
quantum number $ N = n + L$, where $L$ is the {\it conserved}
orbital angular momentum
in the quark-antiquark system \cite{af,kl,sv}. This suggestion is hard to
reconcile with the Lorentz and chiral symmetries, however \cite{GN}.
\section{Chirally symmetric and confining solvable model}
There exists only one known manifestly chirally-symmetric and confining
model in four dimensions that is solvable \cite{Orsay}, sometimes
called Generalized Nambu and Jona-Lasinio model (GNJL). This model
can be considered as a generalization of the 1+1 dimensional
't Hooft model, that is QCD in the large $N_c$ limit \cite{HOOFT}.
It is postulated within the GNJL model that there
exists a linear confining potential of the Coulomb type in four dimensions.
The chiral symmetry breaking
and the properties of the Goldstone bosons have
been obtained from the solution of the Schwinger-Dyson and Bethe-Salpeter
equations \cite{Adler:1984ri,Alkofer:1988tc,BR,BN,COT,W}. The complete
spectrum of $\bar q q$ mesons has been calculated only recently,
in ref. \cite{WG},
which exhibits restoration of the chiral symmetry.
\begin{figure}
\begin{center}
\includegraphics[height=5.cm]{J_0a.ps}
\includegraphics[height=5.cm,clip=]{J_1a.ps}
\includegraphics[height=5.cm,clip=]{J_2a.ps}
\caption{Spectra of the $\bar q q$ mesons in the $(1/2,1/2)_a$ representations.}
\end{center}
\end{figure}
Part of the spectra is shown in Fig. 3 and a fast chiral restoration with
increasing $J$ is observed, while a slow rate is seen with respect
to the radial quantum number $n$. It is possible to see directly a mechanism
of the chiral restoration. The chiral symmetry breaking Lorentz-scalar
dynamical mass of quarks $M(q)$ arises via self-interaction loops and
vanishes fast at large momenta. When one increases the spin of the hadron
$J$, or its radial quantum number $n$, one also increases the typical
momentum of valence quarks. Consequently, the chiral symmetry violating
dynamical mass of quarks becomes small and chiral symmetry gets approximately
restored. This mechanism of chiral restoration is in accord with a general
semiclassical analysis \cite{G2,G3,GNR}.
A higher degeneracy is recovered for $J \rightarrow \infty$. In this
limit, all states with the same $J$ and $n$ fall into the reducible representation
$[(0,1/2) \oplus (1/2,0)] \times [(0,1/2) \oplus (1/2,0)]$,
hence the quantum loop effects
become irrelevant and all possible states with different quark
chiralities become equivalent.
\medskip
Support of the Austrian Science Fund through grant
P19168-N16 is acknowledged.
\section{Introduction}\label{sec1}
\setcounter{equation}{0}
In the \textit{classical} theory of linear particle transport, the incremental probability that a particle at position ${\bm x} = (x,y,z)$ will experience a collision while traveling an incremental distance $ds$ in the background material is given by
\bal\label{1.1}
dp = \sigma_t({\bm x})ds\,,
\nal
where $\sigma_t$ represents the macroscopic total cross section \cite{case}.
The implicit assumption is that $\sigma_t$ is independent of the particle's direction-of-flight ${\bf \Omega}$ and of the particle's free-path $s$, defined as the distance traveled by the particle since its previous interaction (birth or scattering).
This assumption leads to the particle flux being exponentially attenuated (Beer-Lambert law).
We remark that extending the results discussed here to include energy- or frequency-dependence is straightforward.
The theory of \textit{nonclassical} particle transport employs a generalized form of the linear Boltzmann equation to model processes in which the particle flux is \textit{not} attenuated exponentially.
This area has been significantly researched in recent years
\cite{lar_07,larvas_11,fragou_10,vaslar_14a,deo_14,siam_15,aml_16,frasun_16,vassla_17,cam_17,larfra_17,jctt_18,deon_18a,deon_18b}.
Originally introduced to describe photon transport in the Earth's cloudy atmosphere \cite{davmar_10,kryber_13,davxu_14,xudav_16,davxu_18,davxu_18b}, it has found its way to other applications, including nuclear engineering \cite{vaslar_09,vas_13,vaslar_14b,vassla_16,vaskry_17}
and computer graphics
\cite{wren_17,jarabo,bitterli}.
Furthermore, an analogous theory yielding a similar kinetic equation has been independently derived for the periodic Lorentz gas
by Marklof and Str\"ombergsson \cite{marstr_10,marstr_11,marstr_14,marstr_15}
and by Golse (cf.~\cite{gol_12}).
The nonclassical transport equation allows the \textit{nonclassical} macroscopic total cross section $\Sigma_t$ to be a function of the particle's free-path, and is defined in an extended phase space that includes $s$ as an independent variable.
If we define
\bal\label{1.2}
P({\bm x},{\bf \Omega},s)ds =\left(
\begin{array}{l}
\text{the probability that a particle released at position ${\bm x}$ in the}\\
\text{direction ${\bf \Omega}$ will experience its next collision while traveling}\\
\text{an incremental interval between $s$ and $s+ds$}
\end{array}
\right)\,,
\nal
then we can define the ensemble average
\bal\label{1.3}
p(s) = \left< P({\bm x},{\bf \Omega},s) \right>_{({\bm x},{\bf \Omega},\mathcal{R})}
\nal
over all ``release positions" ${\bm x}$ in a realization of the system, all directions ${\bf \Omega}$, and all possible realizations $\mathcal{R}$.
In this case, $p(s)$ represents the free-path distribution function, and the nonclassical cross section $\Sigma_t(s)$ satisfies
\bal\label{1.4}
p(s) = \Sigma_t(s) e^{-\int_0^s\Sigma_t(s')ds'}\,.
\nal
It is possible to extend this definition to include angular-dependent free-path distributions and cross sections \cite{vaslar_14a}, but in this paper we will restrict ourselves to the case given by \cref{1.4}.
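As a concrete (and purely illustrative) numerical check of \cref{1.4}, the Python sketch below takes a hypothetical nonexponential free-path distribution $p(s) = s\,e^{-s}$, builds $\Sigma_t(s) = p(s)/\int_s^\infty p(s')ds'$, and verifies that $\Sigma_t(s)\,e^{-\int_0^s \Sigma_t(s')ds'}$ reproduces $p(s)$:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical nonexponential free-path distribution p(s) = s e^{-s};
# its survival function is  int_s^inf p(s') ds' = (1 + s) e^{-s}.
p = lambda s: s * np.exp(-s)
survival = lambda s: (1.0 + s) * np.exp(-s)
Sigma_t = lambda s: p(s) / survival(s)       # nonclassical total cross section

# Verify  p(s) = Sigma_t(s) * exp(-int_0^s Sigma_t(s') ds').
for s_val in (0.5, 1.0, 3.0):
    grid = np.linspace(0.0, s_val, 20001)
    optical_depth = trapezoid(Sigma_t(grid), grid)
    assert abs(Sigma_t(s_val) * np.exp(-optical_depth) - p(s_val)) < 1e-6
```

For this particular $p(s)$ the check can also be done by hand: $\Sigma_t(s) = s/(1+s)$, so $\int_0^s \Sigma_t(s')ds' = s - \ln(1+s)$ and the identity holds exactly.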
The steady-state, one-speed nonclassical transport equation with isotropic scattering can be written as \cite{larvas_11}
\bsub\label[pluraleq]{1.5}
\bal
&\frac{\partial}{\partial s} \Psi({\bm x}, {\bf \Omega}, s) + {\bf \Omega}\cdot{\bf \nabla}\Psi({\bm x}, {\bf \Omega}, s) + \Sigma_t(s) \Psi({\bm x}, {\bf \Omega}, s) = \label{1.5a}\\
&\qquad\qquad\delta(s)\left[\frac{c}{4\pi} \int_{4\pi} \int_0^{\infty} \Sigma_t(s') \Psi({\bm x}, {\bf \Omega}', s')ds' d\Omega' + \frac{Q({\bm x})}{4\pi}\right]\,,
\hspace{5pt} {\bm x}\in V,\,\, {\bf \Omega} \in 4\pi,\,\,0<s, \nonumber
\nal
where $\Psi$ is the nonclassical angular flux, $c$ is the scattering ratio, and $Q$ is an isotropic internal source.
The Dirac delta function $\delta(s)$ on the right-hand side of \cref{1.5a} represents the fact that a particle that has just undergone scattering or been born will have its free-path value (distance since previous interaction) set to $s =0$.
If we consider vacuum boundaries, \cref{1.5a} is subject to the boundary condition
\bal
\Psi({\bm x},{\bf \Omega}, s) = 0\,, \quad {\bm x} \in \partial V,\,\, {\bm n} \cdot {\bf \Omega} <0,\,\, 0<s\,.
\nal
\nsub
We remark that, if $\Sigma_t(s)$ is assumed to be independent of $s$, then $\Sigma_t(s) = \sigma_t$ and the free-path distribution in \cref{1.4} reduces to the exponential
\bal\label{1.6}
p(s) = \sigma_t e^{-\sigma_t s}\,.
\nal
In this case, \cref{1.5a} can be shown to reduce to the corresponding classical linear Boltzmann equation
\bsub\label[pluraleq]{1.7}
\bal
{\bf \Omega}\cdot{\bf \nabla}\Psi_c({\bm x}, {\bf \Omega}) + \sigma_t \Psi_c({\bm x}, {\bf \Omega}) = \frac{c}{4\pi} \int_{4\pi} \sigma_t\Psi_c({\bm x}, {\bf \Omega}')d\Omega' + \frac{Q({\bm x})}{4\pi}\,,
\hspace{10pt} {\bm x}\in V,\,\, {\bf \Omega} \in 4\pi\, ,
\nal
with vacuum boundary condition given by
\bal
\Psi_c({\bm x},{\bf \Omega}) = 0\,, \quad {\bm x} \in \partial V,\,\, {\bm n} \cdot {\bf \Omega} <0\,.
\nal
\nsub
Here, the classical angular flux $\Psi_c$ is given by
\bal\label{1.8}
\Psi_c({\bm x},{\bf \Omega}) = \int_0^\infty \Psi({\bm x},{\bf \Omega},s)ds\,.
\nal
Recently, a spectral approach was developed to represent the nonclassical flux as a series of Laguerre polynomials in the variable $s$ \cite{jcp_20}.
The resulting equation has the form of a classical transport equation that can be solved in a deterministic fashion using traditional methods.
Specifically, the nonclassical solution was obtained using the conventional discrete ordinates (S$_N$) formulation \cite{lewis} and a source iteration (SI) scheme \cite{adalar_02}.
However, for highly scattering systems the spectral radius of the transport problem can get arbitrarily close to unity \cite{lewis}, and numerical acceleration becomes important.
The goal of this paper is to introduce transport synthetic acceleration techniques, namely S$_2$ synthetic acceleration (S$_2$SA), to speed up the solution of the nonclassical spectral S$_N$ equations.
We also present numerical results that confirm the benefit of using this approach; to our knowledge, this is the first time such acceleration methods are applied to this class of nonclassical spectral problems.
The remainder of the paper is organized as follows.
In \cref{sec2}, we present the nonclassical spectral S$_N$ equations for slab geometry.
We discuss transport synthetic acceleration in \cref{sec3} and present an iterative method to efficiently solve the nonclassical problem.
Numerical results are given in \cref{sec4} for problems with both exponential (\cref{sec4.1}) and nonexponential (\cref{sec4.2}) choices of $p(s)$.
We conclude with a brief discussion in \cref{sec5}.
\section{Nonclassical Spectral S$_N$ Equations in Slab Geometry}\label{sec2}
\setcounter{equation}{0}
In this section we briefly sketch out the derivation of the one-speed nonclassical spectral S$_N$ equations in slab geometry.
For a detailed derivation, we direct the reader to the work presented in \cite{jcp_20}.
In slab geometry, \cref{1.5} can be written as
\bsub\label[pluraleq]{2.1}
\bal
&\frac{\partial}{\partial s} \Psi(x, \mu, s) + \mu \frac{\partial}{\partial x} \Psi(x, \mu, s) + \Sigma_t(s) \Psi(x, \mu, s) = \label{2.1a}\\
&\qquad\delta(s)\left[\frac{c}{2} \int_{-1}^1 \int_0^{\infty} \Sigma_t(s') \Psi(x, \mu', s')ds' d\mu' + \frac{Q(x)}{2}\right]\,,
\hspace{5pt} 0<x<X,\,\, -1<\mu<1,\,\,0<s, \nonumber\\
&\Psi(0, \mu, s) = 0\,, \quad 0<\mu\leq1\,, 0< s\,,\\
&\Psi(X, \mu, s) = 0\,, \quad -1\leq\mu<0\,, 0<s\,,
\nal
\nsub
where $\mu$ is the direction cosine, i.e., the cosine of the angle between the particle's direction of flight and the $x$-axis.
\Cref{2.1a} can be written in an equivalent ``initial value" form:
\bsub\label[pluraleq]{2.2}
\bal
&\frac{\partial}{\partial s} \Psi(x, \mu, s) + \mu \frac{\partial}{\partial x} \Psi(x, \mu, s) + \Sigma_t(s) \Psi(x, \mu, s) = 0\,,\\
&\Psi(x,\mu,0) = \frac{c}{2} \int_{-1}^1 \int_0^{\infty} \Sigma_t(s') \Psi(x, \mu', s')ds' d\mu' + \frac{Q(x)}{2}\,.\label{2.2b}
\nal
\nsub
Note that, because the scattering and the internal source are isotropic, the right-hand side of \cref{2.2b} does not depend on $\mu$.
Defining $\psi$ such that
\bal\label{2.3}
\Psi(x, \mu, s) \equiv \psi(x,\mu,s)e^{-\int_0^s \Sigma_t(s')ds'}\,,
\nal
we can rewrite the nonclassical problem as
\bsub\label[pluraleq]{2.4}
\bal
&\frac{\partial}{\partial s}\psi(x, \mu, s) + \mu \frac{\partial}{\partial x} \psi(x, \mu, s) = 0,\label{2.4a}\\
& \psi(x, \mu, 0) = \frac{c}{2} \int_{-1}^1 \int_0^{\infty} p(s')\psi(x, \mu', s')ds'd\mu' + \frac{Q(x)}{2}\,,
\nal
where $p(s)$ is given by \cref{1.4}.
This problem has the vacuum boundary conditions
\bal
&\psi(0, \mu, s) = 0\,, \quad 0<\mu\leq1\,, 0< s\,,\\
&\psi(X, \mu, s) = 0\,, \quad -1\leq\mu<0\,, 0<s\,.
\nal
\nsub
Next, we write $\psi$ as a truncated series of Laguerre polynomials in $s$:
\bal\label{2.5}
\psi(x, \mu, s) = \sum_{m=0}^M \psi_m(x, \mu) L_m(s),
\nal
where $L_m(s)$ is the Laguerre polynomial of order $m$ and $M$ is the expansion (truncation) order.
The Laguerre polynomials $\{ L_m(s)\}$ are orthogonal with respect to the weight function $e^{-s}$ and satisfy $\frac{d}{ds}L_m(s) = \left(\frac{d}{ds}-1\right)L_{m-1}(s)$ for $m>0$ \cite{hoc_72}.
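Both properties are straightforward to verify numerically; the sketch below uses NumPy's Laguerre utilities to check the derivative identity and, via Gauss-Laguerre quadrature, the orthogonality relation $\int_0^\infty e^{-s}L_m(s)L_k(s)\,ds = \delta_{mk}$:

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre, laggauss

# Derivative identity: d/ds L_m = (d/ds - 1) L_{m-1} for m > 0.
s = np.linspace(0.0, 10.0, 101)
for m in range(1, 8):
    Lm, Lm1 = Laguerre.basis(m), Laguerre.basis(m - 1)
    assert np.allclose(Lm.deriv()(s), Lm1.deriv()(s) - Lm1(s))

# Orthogonality with respect to the weight e^{-s}.  The nodes/weights from
# laggauss already absorb the weight function, so the sum below approximates
# int_0^inf e^{-s} L_m(s) L_k(s) ds, which is exact here (low-degree polynomials).
nodes, weights = laggauss(40)
for m in range(6):
    for k in range(6):
        inner = np.sum(weights * Laguerre.basis(m)(nodes) * Laguerre.basis(k)(nodes))
        assert abs(inner - (1.0 if m == k else 0.0)) < 1e-8
```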
We introduce this expansion in the nonclassical problem and perform the following steps \cite{jcp_20}: (i) multiply \cref{2.4a} by $e^{-s}L_m(s)$; (ii) integrate from $0$ to $\infty$ with respect to $s$; and (iii) use the properties of the Laguerre polynomials to simplify the result.
This procedure returns the following nonclassical spectral problem:
\bsub
\bal
&\mu \frac{\partial}{\partial x} \psi_m(x, \mu) + \psi_m(x, \mu) = S(x) + \frac{Q(x)}{2} - \sum_{j=0}^{m-1} \psi_j(x, \mu), \quad m = 0, 1, ..., M\,,\\
&\psi_m(0, \mu) = 0\,, \quad 0<\mu\leq1\,,m = 0, 1, ..., M\,,\\
&\psi_m(X, \mu) = 0\,, \quad -1\leq\mu<0\,,m = 0, 1, ..., M\,,
\nal
where the in-scattering term $S(x)$ (the scattering source) is given by
\bal
S(x) = \frac{c}{2} \int_{-1}^1 \sum_{k=0}^M \psi_k(x, \mu')\left[\int_0^{\infty}p(s) L_k(s)ds\right] d\mu'\,.
\nal
\nsub
The nonclassical angular flux $\Psi$ is recovered from \cref{2.3,2.5}.
The classical angular flux $\Psi_c$ is obtained using \cref{1.8}, such that
\bal
\Psi_c(x,\mu) = \int_0^\infty \Psi(x,\mu,s) ds = \sum_{m=0}^M \psi_m(x,\mu)\int_0^\infty L_m(s)e^{-\int_0^s\Sigma_t(s')ds'}ds\,.
\nal
Finally, using the discrete ordinates formulation \cite{lewis}, we can write the nonclassical spectral S$_N$ equations
\bsub\label[pluraleq]{2.8}
\bal
&\mu_n \frac{d}{d x} \psi_{m,n}(x) + \psi_{m,n}(x) = S(x) + \frac{Q(x)}{2} - \sum_{j=0}^{m-1} \psi_{j,n}(x), \label{2.8a} \\
& \hspace{250pt} m = 0, 1, ..., M,\, n = 1, 2, ..., N\,,\nonumber\\
&\psi_{m,n}(0) = 0\,, \quad m = 0, 1, ..., M,\,n = 1, 2, ..., \frac{N}{2}\,,\\
&\psi_{m,n}(X) = 0\,, \quad m = 0, 1, ..., M,\,n = \frac{N}{2}+1, ..., N\,,\\
& S(x) = \frac{c}{2} \sum_{n=1}^N \omega_n \sum_{k=0}^M \psi_{k,n}(x)\left[\int_0^{\infty}p(s) L_k(s)ds\right]\,,\\
&\Psi_{c_n}(x) = \sum_{m=0}^M \psi_{m,n}(x)\int_0^\infty L_m(s)e^{-\int_0^s\Sigma_t(s')ds'}ds\,, \quad n = 1, 2, ..., N\,. \label{2.8e}
\nal
\nsub
Here, the direction cosine $\mu$ has been discretized into $N$ discrete values $\mu_n$.
Thus, $\psi_{m,n}(x) = \psi_{m}(x,\mu_n)$, $\Psi_{c_n}(x) = \Psi_c(x,\mu_n)$, and the angular integral has been approximated by the angular quadrature formula with weights $\omega_n$.
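As a small sanity check of the angular discretization (with $N=16$, the value used later in the numerical experiments), the snippet below builds the Gauss-Legendre quadrature set $\{\mu_n,\omega_n\}$ and confirms that it reproduces simple moments of $\mu$ over $[-1,1]$:

```python
import numpy as np

N = 16
mu, w = np.polynomial.legendre.leggauss(N)   # nodes mu_n and weights w_n on [-1, 1]

# The weights integrate 1 over [-1, 1] to 2, and odd moments of mu vanish.
assert abs(np.sum(w) - 2.0) < 1e-12
assert abs(np.sum(w * mu)) < 1e-12
# The rule is exact for polynomials up to degree 2N-1, e.g. int mu^2 dmu = 2/3.
assert abs(np.sum(w * mu**2) - 2.0 / 3.0) < 1e-12
```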
\section{Source Iteration and Synthetic Acceleration}\label{sec3}
\setcounter{equation}{0}
To solve the nonclassical spectral S$_N$ equations using standard source iteration \cite{adalar_02}, we lag the scattering source on the right-hand side of \cref{2.8a}:
\bsub
\bal
\mu_n \frac{d}{d x} \psi_{m,n}^{i+1}(x) + \psi_{m,n}^{i+1}(x) = S^i(x) + \frac{Q(x)}{2} - \sum_{j=0}^{m-1} \psi_{j, n}^{i+1}(x),
\nal
where $i$ is the iteration index and
\bal
S^i(x) = \frac{c}{2} \sum_{n=1}^N \omega_n \sum_{k=0}^M \psi_{k,n}^i(x) \left[\int_0^{\infty}p(s) L_k(s)ds\right]\,.
\nal
\nsub
In order to accelerate the convergence of this approach, the iterative scheme is broken into multiple stages.
Standard synthetic acceleration methods consist of two stages.
The first stage is a single transport sweep.
The second stage is error-correction, which uses an approximation of the error equation to estimate the error at each iteration.
Our synthetic acceleration scheme has the following steps:
\begin{itemize}
\item[1.] \textit{Determine the new ``half iterate" $\psi^{i+\frac{1}{2}}$ (solution estimate) using one transport sweep.}
This is done by solving
\bal\label{3.2}
\mu_n \frac{d}{d x} \psi_{m,n}^{i+\frac{1}{2}}(x) + \psi_{m,n}^{i+\frac{1}{2}}(x) = S^i(x) + \frac{Q(x)}{2} - \sum_{j=0}^{m-1} \psi_{j, n}^{i+\frac{1}{2}}(x).
\nal
\item[2.] \textit{Approximate the error $\epsilon^{i+1}$ in this half iterate using an approximation to the error equation (error estimate).}
To do that, we first subtract \cref{3.2} from the exact equation \cref{2.8a}, then add and subtract $\psi_{k,n}^{i+\frac{1}{2}}(x)$ to the in-scattering term on the right-hand side.
This yields
\bsub\label[pluraleq]{3.3}
\bal
\mu_n \frac{d}{d x} \left(\psi_{m,n}(x)-\psi_{m,n}^{i+\frac{1}{2}}(x)\right) &+ \left(\psi_{m,n}(x)-\psi_{m,n}^{i+\frac{1}{2}}(x)\right) = \\
&\qquad\qquad \left(S(x)-S^i(x)\right) - \sum_{j=0}^{m-1} \left(\psi_{j,n}(x)-\psi_{j,n}^{i+\frac{1}{2}}(x)\right)\,, \nonumber
\nal
where
\bal
S(x)-S^{i}(x) = \frac{c}{2} \sum_{n=1}^N \omega_n \sum_{k=0}^M &\left(\psi_{k,n}(x)-\psi_{k,n}^{i+\frac{1}{2}}(x)+ \right.\\
&\qquad\qquad\left.\psi_{k,n}^{i+\frac{1}{2}}(x)-\psi_{k,n}^i(x)\right)\left[\int_0^{\infty}p(s) L_k(s)ds\right]\,.\nonumber
\nal
\nsub
Defining the error $\epsilon^{i+1}_{m,n}$ as
\bal
\epsilon^{i+1}_{m,n}(x) \equiv \psi_{m,n}(x) - \psi_{m,n}^{i+\frac{1}{2}}(x)\,,
\nal
we rewrite \cref{3.3} as
\bsub\label[pluraleq]{3.5}
\bal
\mu_n \frac{d}{d x} \epsilon^{i+1}_{m,n}(x) + \epsilon^{i+1}_{m,n}(x) - S^{{i+1},\epsilon}(x) =\label{3.5a} \left(S^{i+\frac{1}{2}}(x)-S^i(x)\right)- \sum_{j=0}^{m-1}\epsilon^{i+1}_{j,n}(x)\,,
\nal
with
\bal
S^{{i+1},\epsilon}(x)= \frac{c}{2} \sum_{n=1}^N \omega_n \sum_{k=0}^M \epsilon^{i+1}_{k,n}(x) \left[\int_0^{\infty}p(s)L_k(s)ds\right]\,.
\nal
\nsub
We solve \cref{3.5} and obtain the error estimate $\epsilon^{i+1}$.
\item[3.] \textit{Correct the solution estimate using the error estimate.}
The corrected solution estimate $\psi^{i+1}$ is given by
\bal
\psi^{i+1}_{m,n}(x) = \psi^{i+\frac{1}{2}}_{m,n}(x) + \epsilon^{i+1}_{m,n}(x)\,.
\nal
\item[4.] \textit{Check for convergence and loop back if necessary.}
\end{itemize}
\noindent We remark that this transport synthetic acceleration procedure accelerates each one of the $M+1$ Laguerre moments of the angular flux.
In this paper, we have chosen to approximate the error estimate in \cref{3.5} by setting $N=2$, thus applying S$_2$ synthetic acceleration (S$_2$SA).
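The two-stage structure above can be illustrated on a toy fixed-point problem $\phi = cA\phi + q$, in which a single matrix-vector product stands in for the transport sweep and a crude low-order operator stands in for the S$_2$ error equation. Everything in this sketch (the matrix, the coarse operator, the problem size) is hypothetical and chosen only so that the example is self-contained; it is not the discretization used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 50, 0.99
A = rng.random((n, n))
A /= A.sum(axis=1, keepdims=True)        # row-stochastic: spectral radius of cA is c
A_lo = np.full((n, n), 1.0 / n)          # crude low-order approximation of A
q = np.ones(n)

def solve(accelerate, tol=1e-6, max_it=100_000):
    phi = np.zeros(n)
    for it in range(1, max_it + 1):
        phi_half = c * A @ phi + q                       # stage 1: one "sweep"
        if accelerate:                                   # stage 2: low-order error solve
            rhs = c * A_lo @ (phi_half - phi)
            eps = np.linalg.solve(np.eye(n) - c * A_lo, rhs)
            phi_new = phi_half + eps                     # stage 3: correct the estimate
        else:
            phi_new = phi_half
        if np.linalg.norm(phi_new - phi) <= tol * np.linalg.norm(phi_new):
            return phi_new, it
        phi = phi_new
    raise RuntimeError("no convergence")

phi_si, it_si = solve(accelerate=False)
phi_sa, it_sa = solve(accelerate=True)
assert np.allclose(phi_si, phi_sa, rtol=1e-3)
assert it_sa < it_si                                     # acceleration pays off
```

Because the low-order operator captures the slowly converging (flat) error mode exactly in this toy setting, the accelerated scheme converges in a couple of sweeps, while plain source iteration needs many hundreds of iterations at $c=0.99$.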
\section{Numerical Results}\label{sec4}
\setcounter{equation}{0}
In this section we provide numerical results that confirm the benefit of using transport synthetic acceleration for the iterative numerical solution of the nonclassical spectral S$_N$ equations (\ref{2.8}).
For validation purposes, we first apply this nonclassical approach to solve a transport problem with an exponential $p(s)$, which leads to classical transport.
Then, we proceed to solve a nonclassical transport problem that mimics diffusion, with a nonexponential $p(s)$.
For all numerical experiments in this section we use the Gauss-Legendre angular quadrature \cite{burden} with $N=16$ for \cref{2.8} and $N=2$ for \cref{3.5}, thus solving the nonclassical spectral S$_{16}$ equations using S$_2$ synthetic acceleration.
We discretize the spatial variable into 200 elements and use the linear discontinuous Galerkin finite element method \cite{adams}.
Furthermore, the improper integrals $\int_0^\infty (\cdot)ds$ in these equations are calculated numerically in the same fashion as in \cite{jcp_20}: the upper limit is truncated to $1.5$ times the length of the slab, and a Gauss-Legendre quadrature is used to evaluate them.
Here, we set the order of this quadrature to $M$, the same order as the Laguerre expansion.
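As an illustration of this numerical treatment (with the illustrative values $X=20$ and $M=50$), the sketch below evaluates the moments $\int_0^\infty p(s)L_k(s)\,ds$ for the exponential case $p(s)=e^{-s}$ by mapping a Gauss-Legendre rule onto the truncated interval $[0,\,1.5X]$; by orthogonality the exact values are $\delta_{k0}$:

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

X, M = 20.0, 50                     # slab length and quadrature order (illustrative)
s_max = 1.5 * X                     # truncated upper limit of the improper integral
x, w = np.polynomial.legendre.leggauss(M)
s = 0.5 * s_max * (x + 1.0)         # map nodes from [-1, 1] to [0, s_max]
ws = 0.5 * s_max * w                # scaled weights

p = np.exp(-s)                      # exponential free-path distribution, sigma_t = 1
for k in range(5):
    coeffs = np.zeros(k + 1); coeffs[k] = 1.0          # basis vector selecting L_k
    moment = np.sum(ws * p * lagval(s, coeffs))        # ~ int_0^inf p(s) L_k(s) ds
    exact = 1.0 if k == 0 else 0.0                     # orthogonality w.r.t. e^{-s}
    assert abs(moment - exact) < 1e-6
```

The truncation error here is of order $e^{-1.5X}$, which is negligible for the slab lengths considered.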
The stopping criterion adopted is that the relative deviations between two consecutive estimates of the classical scalar flux
\bal\label{4.1}
\Phi(x) = \sum_{n=1}^N\omega_n\Psi_{c_n}(x)
\nal
at each point of the spatial discretization grid must be smaller than or equal to a prescribed positive constant $\xi$.
For all our calculations we fix $\xi=10^{-6}$, such that the stopping criterion is given by
\bal
\frac{||\Phi^{i+1}(x) - \Phi^{i}(x)||}{||\Phi^{i}(x)||} \leq \xi\,.
\nal
\subsection{Exponential $p(s)$}
\label{sec4.1}
To validate the approach, we use the nonclassical method to solve a transport problem in which $p(s)$ is given by the exponential function provided in \cref{1.6}.
This yields \cite{larvas_11}
\bal\label{4.3}
\Sigma_t(s) = \frac{p(s)}{\int_s^\infty p(s')ds'} = \frac{\sigma_t e^{-\sigma_t s}}{\int_s^\infty \sigma_t e^{-\sigma_t s'}ds'} = \sigma_t \text{ (independent of $s$).}
\nal
In this case, the flux $\Psi_{c_n}$ given by \cref{2.8e} should match the one obtained by solving the corresponding \textit{classical} S$_N$ transport problem
\bsub\label[pluraleq]{4.4}
\bal
&\mu_n\frac{d}{dx}\Psi_{c_n}(x) + \sigma_t\Psi_{c_n}(x) = \frac{c}{2}\sigma_t\sum_{n=1}^N\omega_n\Psi_{c_n}(x) + \frac{Q(x)}{2}\,,\quad 0<x<X\,, n=1,2,...,N\,,\\
&\Psi_{c_n}(0) = 0\,, \quad n=1,2,...,\frac{N}{2}\,,\\
&\Psi_{c_n}(X) = 0\,, \quad n=\frac{N}{2}+1,...,N\,.
\nal
\nsub
Let us consider a slab of length $X=20$, total cross section $\sigma_t = 1.0$, scattering ratio $c=0.999$, and internal source $Q(x) = 1.0$, and let us assume
the truncation order of the Laguerre expansion to be $M=10$.
\Cref{fig1} depicts the scalar flux obtained when solving the nonclassical (\ref{2.8}) and classical (\ref{4.4}) problems.
As expected, the solutions match each other.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.1]{fig1}
\caption{Scalar flux generated by the solution of classical and nonclassical transport equations}
\label{fig1}
\end{figure}
\begin{table}[!b]
\begin{center}
\caption{Convergence Data for Nonclassical Transport with Exponential $p(s)$}\label{tab1}
\begin{tabular}{||c||c|c||c|c||}
\hline
\hline
& \multicolumn{2}{|c||}{\textbf{Number of}} & \multicolumn{2}{c||}{\textbf{Spectral}}\\
& \multicolumn{2}{|c||}{\textbf{Iterations}} & \multicolumn{2}{c||}{\textbf{Radius}} \\
\cline{2-5}
\vspace{-10pt} &&&&\\
$\mathbf{c}$ & \textbf{SI} & \textbf{S$_2$SA} & \textbf{SI} & \textbf{S$_2$SA} \\
\hline
\vspace{-10pt} &&&&\\
0.8 & 56 & 6 & 0.7997 & 0.1328 \\
0.9 & 110 & 6 & 0.8997 & 0.1565 \\
0.99 & 906 & 6 & 0.9899 & 0.1748 \\
0.999 & 6439 & 6 & 0.9989 & 0.1685\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Next, we compare the iteration count and spectral radius for stand-alone source iteration (SI) and transport synthetic acceleration (S$_2$SA) for the nonclassical method.
Once again, we set $Q(x)=1.0$, and $p(s)$ and $\Sigma_t(s)$ are given respectively by Eqs.~(\ref{1.6}) and (\ref{4.3}), with $\sigma_t =1.0$.
However, this time we increase the domain size to $X= 200$.
We assume the truncation order of the Laguerre expansion to be $M=50$, and vary the scattering ratio $c$ from 0.8 to 0.999.
\Cref{tab1} presents the number of iterations and the spectral radius for each case.
As expected, we observe a significant reduction in the spectral radius and iteration count, with the number of iterations decreasing by 3 orders of magnitude for the highest scattering ratio of $c=0.999$.
\subsection{Nonexponential $p(s)$}
\label{sec4.2}
Let us consider the diffusion equation in a homogeneous slab
\bsub\label[pluraleq]{4.5}
\bal
-\frac{1}{3\sigma_t}\frac{d^2}{dx^2} \phi(x) + (1-c)\sigma_t \phi(x) = Q(x)\,,
\nal
with Marshak boundary conditions \cite{bell}
\bal
&\phi(0)-\frac{2}{3\sigma_t}\frac{d}{dx}\phi(0) = 0\,,\\
&\phi(X)+\frac{2}{3\sigma_t}\frac{d}{dx}\phi(X) = 0\,.
\nal
\nsub
Here, $\phi$ is the (diffusion) scalar flux.
If the free-path distribution $p(s)$ is given by the nonexponential function
\bal\label{4.6}
p(s) = 3\sigma_t^2 s\, e^{-\sqrt{3}\sigma_t s},
\nal
it has been shown that the collision-rate density $\sigma_t\phi(x)$ of the diffusion problem, given by \cref{4.5}, will match the \textit{nonclassical} collision-rate density \cite{siam_15,aml_16}
\bal
f(x) = \int_0^\infty\Sigma_t(s)\int_{-1}^1\Psi(x,\mu,s)d\mu ds,
\nal
where $\Psi(x,\mu,s)$ is the solution of the nonclassical problem given by \cref{2.1}, and
\bal\label{4.8}
\Sigma_t(s) = \frac{p(s)}{\int_s^\infty p(s')ds'} = \frac{3\sigma_t^2 s e^{-\sqrt{3}\sigma_t s}}{\int_s^\infty 3\sigma_t^2 s' e^{-\sqrt{3}\sigma_t s'} ds'} = \frac{3\sigma_t^2 s}{1+\sqrt{3}\sigma_t s}\,.
\nal
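The algebra leading to \cref{4.8} is easy to double-check numerically; the sketch below (with $\sigma_t = 1$) verifies that $p(s)$ in \cref{4.6} is normalized and that $p(s)/\int_s^\infty p(s')ds'$ agrees with the closed form:

```python
import numpy as np

sigma_t = 1.0
a = np.sqrt(3.0) * sigma_t
p = lambda s: 3.0 * sigma_t**2 * s * np.exp(-a * s)

s = np.linspace(0.0, 40.0, 400001)
ps = p(s)
# p is a normalized free-path distribution: int_0^inf p(s) ds = 1
# (trapezoidal rule; the tail beyond s = 40 is negligible).
norm = float(np.sum((ps[1:] + ps[:-1]) * np.diff(s)) / 2.0)
assert abs(norm - 1.0) < 1e-6

# Sigma_t(s) from its definition matches the closed form of Eq. (4.8):
# int_s^inf p(s') ds' = (1 + sqrt(3) sigma_t s) exp(-sqrt(3) sigma_t s).
survival = (1.0 + a * s) * np.exp(-a * s)
Sigma_closed = 3.0 * sigma_t**2 * s / (1.0 + a * s)
assert np.allclose(ps / survival, Sigma_closed)
```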
Once again, let us consider a slab of length $X=20$, $\sigma_t = 1.0$, $c=0.999$, and $Q(x) = 1.0$.
\Cref{fig2} shows a comparison between the collision-rate densities of the diffusion problem (\cref{4.5}) and the nonclassical spectral S$_N$ method (\cref{2.8}), with the latter being given by
\bal
f(x) = \sum_{n=1}^N \omega_n \sum_{m=0}^{M} \psi_{m,n}(x)\int_0^\infty p(s)L_m(s)ds,
\nal
where $M=10$.
As in the previous case, the solutions match as expected.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.1]{fig2}
\caption{Collision-rate density generated with diffusion and nonclassical approaches}
\label{fig2}
\end{figure}
\begin{table}[!b]
\begin{center}
\caption{Convergence Data for Nonclassical Transport with Nonexponential $p(s)$}\label{tab2}
\begin{tabular}{||c||c|c||c|c||}
\hline
\hline
& \multicolumn{2}{|c||}{\textbf{Number of}} & \multicolumn{2}{c||}{\textbf{Spectral}}\\
& \multicolumn{2}{|c||}{\textbf{Iterations}} & \multicolumn{2}{c||}{\textbf{Radius}} \\
\cline{2-5}
\vspace{-10pt} &&&&\\
$\mathbf{c}$ & \textbf{SI} & \textbf{S$_2$SA} & \textbf{SI} & \textbf{S$_2$SA} \\
\hline
\vspace{-10pt} &&&&\\
0.8 & 56 & 6 & 0.7997 & 0.1538 \\
0.9 & 110 & 7 & 0.8997 & 0.1811 \\
0.99 & 906 & 6 & 0.9899 & 0.1885 \\
0.999 & 6443 & 6 & 0.9989 & 0.1802\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
At this point, we compare the iteration count and spectral radius for stand-alone source iteration (SI) and transport synthetic acceleration (S$_2$SA).
We set $Q(x)=1.0$, and $p(s)$ and $\Sigma_t(s)$ are given respectively by \cref{4.6,4.8}, with $\sigma_t =1.0$.
Once more, we increase the domain size to $X= 200$, and assume
the truncation order of the Laguerre expansion to be $M=50$.
\Cref{tab2} presents the number of iterations and the spectral radius for different choices of the scattering ratio $c$.
Similar to the previous case, there is a reduction in both the spectral radius and iteration count, with a decrease of 3 orders of magnitude in the iterations for the highest scattering ratio.
\section{Discussion}\label{sec5}
\setcounter{equation}{0}
We have introduced a transport synthetic acceleration procedure that speeds up the source iteration scheme for the solution of the one-speed nonclassical spectral S$_N$ equations in slab geometry.
Specifically, we used S$_2$ synthetic acceleration to solve nonclassical spectral S$_{16}$ equations for problems involving exponential and nonexponential free-path distributions.
The numerical results successfully confirm the advantage of the method; to our knowledge, this is the first time a numerical acceleration approach is used in this class of nonclassical spectral problems.
Moreover, although we assumed monoenergetic transport and isotropic scattering for simplicity, extending the method to include energy dependence and anisotropic scattering should not lead to significant additional theoretical difficulties.
When compared to stand-alone SI, S$_2$SA yields a significant reduction in number of iterations (up to three orders of magnitude) and spectral radii.
The values of the spectral radius for stand-alone SI remain virtually unchanged for the exponential and nonexponential cases for a fixed value of the scattering ratio $c$.
However, all spectral radii for S$_2$SA are larger in the nonexponential case than in the exponential case for the same value of $c$, increasing from $6.5\%$ (when $c = 0.999$) to $13.6\%$ (when $c=0.8$).
In fact, we do not see spectral radius values that are exactly consistent with those found when applying corresponding techniques to the classical S$_N$ transport equation \cite{adalar_02}.
This can be attributed to the fact that the nonclassical equation contains an altogether different scattering term, which depends on the free-path $s$.
Although a full convergence analysis is beyond the scope of this paper, we shall perform it in a future work in order to investigate this feature.
\section*{Acknowledgements}
J.~K.~Patel and R.~Vasques acknowledge support under award number NRC-HQ-84-15-G-0024 from the Nuclear Regulatory Commission.
This study was financed in part by the Coordena\c{c}\~ao de
Aperfei\c{c}oamento de Pessoal de N\'ivel Superior - Brasil (CAPES) -
Finance Code 001.
L.~R.~C.~Moraes and R.~C.~Barros also would like to express their gratitude to the support of Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico - Brasil (CNPq) and Funda\c{c}\~ao Carlos Chagas Filho de Amparo \`a Pesquisa do Estado do Rio de Janeiro - Brasil (FAPERJ).
\subsection*{Details of the Datasets} \label{section-appendix-datasets}
\begin{itemize}
\item \textbf{IMDB}:
Plot keywords of movies are provided by the IMDB. Following \cite{DBLP:conf/www/WangJSWYCY19}, we use the bag-of-words representation of plot keywords to denote movie features, corresponding to a 1,537-dimensional feature for each movie. Director/actor features are the average representations of the movies that they directed/acted in, and are both 1,537-dimensional.
\item \textbf{OGB-MAG}:
Open Graph Benchmark (OGB) \cite{DBLP:conf/nips/HuFZDRLCL20} contains a diverse set of challenging benchmark datasets for graph machine learning research. Leaderboards are set up for each dataset and state-of-the-art models are ranked based on their performance. Moreover, all the models are listed with open-sourced implementation to reproduce the results. OGB-MAG is a heterogeneous academic network in OGB, where each paper is associated with a 128-dimensional Word2Vec feature. For nodes that do not have features, we generate their features by the metapath2vec \cite{DBLP:conf/kdd/DongCS17} model.
As a result, the feature of each author/field/institution node corresponds to a 128-dimensional vector. The feature of each paper is the concatenation of the given 128-dimensional Word2Vec feature and the generated 128-dimensional structural feature, corresponding to a 256-dimensional vector.
\item \textbf{OAG-Venue}:
We use the pre-processed graph in the Computer Science (CS) domain extracted from Open Academic Graph (OAG) by \cite{DBLP:conf/www/HuDWS20} to conduct experiments\footnote{HGT authors only shared the graph in the CS domain.}. Features of all types of nodes are given in the OAG dataset. Specifically, the feature of each paper is a 768-dimensional vector, corresponding to the weighted combination of each word's representation in the paper's title. Each word's representation and the attention score are obtained from a pre-trained XLNet \cite{DBLP:conf/nips/YangDYCSL19}. The feature of each author is the average of his/her published paper representations, corresponding to a 768-dimensional vector as well. The features of other types of nodes are generated by the metapath2vec model to reflect the heterogeneous graph structure, whose dimensions are all set to 400. One potential issue with the OAG dataset in \cite{DBLP:conf/www/HuDWS20} is information leakage, since target nodes and the nodes with ground truth are connected with edges. To solve this issue, we remove all the edges between paper nodes and nodes with ground truth that we aim to predict. Specifically, the classification task on OAG-Venue is to predict the published venues of papers, so we remove all edges between paper nodes and venue nodes in the original OAG dataset. We select venues associated with no less than 200 papers to conduct experiments. In total, there are 241 venues in OAG-Venue, making the task a 241-class classification problem.
\item \textbf{OAG-L1-Field}:
The classification task on OAG-L1-Field is to predict the $L1$-level field that each paper belongs to, so we remove all the edges between paper nodes and field nodes in the original OAG dataset. We select fields associated with no less than 100 papers to conduct experiments. In total, there are 52 fields in OAG-L1-Field, making the task a 52-class classification problem.
\end{itemize}
\subsection*{Selection of Dropout and Learning Rate} \label{section-appendix-hyper-parameters}
On IMDB, the dropout and learning rate are searched in $\left[0.0, 0.1, \cdots, 0.9\right]$ and $\left[0.001, 0.005, 0.01\right]$, respectively. On OGB-MAG, we search the dropout and learning rate in $\left[0.0, 0.1, 0.2, 0.3, 0.4, 0.5\right]$ and $\left[0.001, 0.01\right]$. On OAG-Venue and OAG-L1-Field, the dropout and learning rate are searched in $\left[0.0, 0.1, 0.2, 0.3\right]$ and $\left[0.001, 0.01\right]$, respectively. The settings of dropout and learning rate on all the methods are shown in \tabref{tab:hyperparameters}.
\subsection*{Node Clustering} \label{section-appendix-node_clustering}
On the small-scale dataset, we feed the learned representations of all the movie nodes into the k-means algorithm to obtain the clustering performance of different models. On large-scale datasets, it is infeasible to feed all the paper nodes into the k-means algorithm. Therefore, we first select the top-five classes of papers in the testing set, then randomly select 1,000 papers from each class, finally obtaining 5,000 papers. We then feed the selected 5,000 paper nodes into the k-means algorithm to get the clustering results. The number of clusters is equal to the number of real classes in each dataset (i.e., 3 for IMDB, and 5 for OGB-MAG, OAG-Venue and OAG-L1-Field).
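A self-contained sketch of this evaluation step is given below. The embeddings are synthetic stand-ins for learned node representations, and a plain Lloyd's k-means (with a deterministic farthest-point initialization) replaces the library implementation; with well-separated classes, each true class should land in a single cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned node representations": 3 well-separated classes.
k, dim, per_class = 3, 16, 100
true_centers = rng.normal(scale=10.0, size=(k, dim))
emb = np.vstack([c + rng.normal(size=(per_class, dim)) for c in true_centers])
labels = np.repeat(np.arange(k), per_class)

def kmeans(x, k, iters=50):
    """Plain Lloyd's algorithm with deterministic farthest-point initialization."""
    cent = x[:1]
    for _ in range(k - 1):                                  # farthest-point init
        d = np.linalg.norm(x[:, None] - cent[None], axis=2).min(axis=1)
        cent = np.vstack([cent, x[d.argmax()]])
    for _ in range(iters):                                  # Lloyd updates
        assign = np.linalg.norm(x[:, None] - cent[None], axis=2).argmin(axis=1)
        cent = np.stack([x[assign == j].mean(axis=0) for j in range(k)])
    return assign

assign = kmeans(emb, k)
for j in range(k):      # each true class ends up entirely in one cluster
    assert len(set(assign[labels == j])) == 1
```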
\subsection*{Link Prediction} \label{section-appendix-link_prediction}
Due to the huge number of edges on large-scale datasets, it is infeasible to do link prediction on all the edges. Therefore, we adjust the number of sampled edges on the datasets. In particular, 3\%, 1\% and 1\% of the edges are sampled as training, validation and testing sets on OGB-MAG, respectively. Correspondingly, 15\%, 5\% and 5\% on OAG-Venue, and 30\%, 10\% and 10\% on OAG-L1-Field. Each edge in the training set is associated with five randomly sampled negative edges, and each edge in the validation or testing sets is associated with a randomly sampled negative edge.
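The negative-sampling step can be sketched as follows (the toy edge list and all names are hypothetical): for each positive edge, negatives are drawn by corrupting the tail endpoint with a random node, rejecting true edges:

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes = 1000
pos_edges = [(u, (u + 1) % num_nodes) for u in range(200)]   # toy positive edges
pos_set = set(pos_edges)

def sample_negatives(edge, k):
    """Corrupt the tail of a positive edge k times, via simple rejection sampling."""
    u, _ = edge
    negs = []
    while len(negs) < k:
        v = int(rng.integers(num_nodes))
        if (u, v) not in pos_set and v != u:
            negs.append((u, v))
    return negs

# Five negatives per training edge (one per validation/testing edge).
train_negs = {e: sample_negatives(e, 5) for e in pos_edges}
assert all(len(v) == 5 for v in train_negs.values())
assert all(n not in pos_set for negs in train_negs.values() for n in negs)
```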
\bibliographystyle{IEEEtran}
\section{Introduction}\label{section-1}
\IEEEPARstart{H}{eterogeneous} graphs are pervasive in real-world scenarios, such as academic networks, e-commerce and social networks \cite{DBLP:journals/sigkdd/SunH12,DBLP:journals/tkde/ShiLZSY17}.
Learning rich information in heterogeneous graphs such as meaningful node representations could facilitate various tasks, including node classification \cite{DBLP:conf/kdd/DongCS17,DBLP:conf/www/ZhangXKLMZ18}, node clustering \cite{DBLP:conf/aaai/LiKRY19}, link prediction \cite{DBLP:conf/icdm/DongTWTCRC12,DBLP:conf/aaai/LiSCLTL20} and item recommendation \cite{DBLP:journals/tkde/ShiHZY19}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=1.0\columnwidth]{figures/motivation.png}
\caption{
Motivation illustration. For a target node (see the central paper node), it is necessary to explicitly learn disparate representations of the target node with respect to different relations. Moreover, it is also important to learn the representations of relations and fuse the relation-aware representations of target nodes semantically.
}
\label{fig:motivation}
\end{figure}
In heterogeneous graphs, nodes are usually connected with different types of neighbors via different types of relations. Taking the heterogeneous academic graph in \figref{fig:motivation} as an example, a target paper node is connected with author, paper, field and venue nodes via the "is written by", "is cited by", "belongs to" and "is published at" relations, respectively. Different types of relations can reflect disparate characteristics of the target nodes. For instance, the "belongs to" relation often reveals the paper's research topic, and the "is published at" relation tends to indicate the paper's technical quality. Therefore, it is essential to explicitly capture the underlying semantics of relations and learn \textit{relation-aware} node representations by maintaining a group of relation-specific node representations that encode more fine-grained information. The learned relation-aware node representations reflect node characteristics with regard to each specified relation and allow different relations to contribute adaptively to various downstream tasks. When estimating the popularity of papers, the "belongs to" relation, which conveys the trending research topics, would be more important. When we want to infer the paper category, the "is published at" relation may help more because it reflects whether a paper is more relevant to theoretical analysis or applied sciences.
However, this task is challenging: it requires a framework that considers the roles of nodes and relations in heterogeneous graphs in a collaborative manner.
We group the relevant existing methods into the following three major categories.
The first category applies Graph Neural Networks (GNNs) to representation learning on graphs.
Recent GNNs, including GCN \cite{DBLP:conf/iclr/KipfW17}, GraphSAGE \cite{DBLP:conf/nips/HamiltonYL17} and GAT \cite{DBLP:conf/iclr/VelickovicCCRLB18}, have shown the superiority in modelling graph-structured data.
However, most GNNs were designed for homogeneous graphs with a single type of nodes and a single type of edges, and they cannot directly handle the different types of nodes and relations in heterogeneous graphs.
The second category designs specialized GNNs to learn node representations in heterogeneous graphs. \cite{DBLP:conf/www/WangJSWYCY19} designed HAN, which leverages pre-defined meta-paths and the attention mechanism to learn on heterogeneous graphs. \cite{DBLP:conf/kdd/ZhangSHSC19} presented HetGNN to consider the heterogeneity of node features and neighbors using Bi-LSTMs. \cite{DBLP:conf/aaai/HongGLYLY20} designed type-aware attention layers in HetSANN to operate on neighboring nodes and associated edges of different types. \cite{DBLP:conf/www/HuDWS20} introduced HGT to investigate heterogeneous graphs using type-specific parameters based on the Transformer \cite{DBLP:conf/nips/VaswaniSPUJGKP17}. Although these methods provide some insights into heterogeneous graph learning, they primarily capture the characteristics of nodes; the relation semantics, which are also essential, have not been explicitly studied yet.
The third category models the properties of relations, which also carry essential information in graphs. RGCN \cite{DBLP:conf/esws/SchlichtkrullKB18} was proposed to deal with multiple relations in knowledge graphs. \cite{DBLP:conf/icdm/ZhuZPZW19} designed RSHN, which learns on a constructed edge-centric coarsened line graph to tackle the relation diversity. \cite{DBLP:conf/aaai/LuSH019} presented RHINE to handle the affiliation and interaction relations. \cite{DBLP:conf/kdd/CenZZYZ019} introduced GATNE to model different types of relations between users and items. \cite{DBLP:conf/sigir/JinG0JL20} proposed MBGCN to capture multi-typed user behaviors by embedding propagation layers. While promising, these relation-centered methods still fail to explicitly learn relation semantics, and the features of nodes with different types are not well discriminated.
To this end, we propose a \textbf{R}elation-aware \textbf{H}eterogeneous \textbf{G}raph \textbf{N}eural \textbf{N}etwork (R-HGNN), to learn not only fine-grained node representations according to different types of relations, but also the semantic representations of relations. Specifically, we first design a graph convolution module to propagate information on each relation-specific graph separately and learn node representation specified to the corresponding relation. Then, we present a cross-relation message passing module to improve the interactions of node representations across different relations. Next, the semantic representations of relations are explicitly learned layer by layer to guide the node representation learning process. Finally, to facilitate downstream tasks, the relation-aware node representations are semantically aggregated into a compact representation based on the learned relation representations. Extensive experiments are conducted on various graph tasks and the results show that our approach outperforms existing methods consistently among all the tasks.
Our key contributions include:
\begin{itemize}
\item
We propose a relation-aware node representation learning method. For each node, we derive a fine-grained representation from a group of relation-specific node representations, where each relation-specific representation reflects the characteristics of the node with regard to a specified relation.
\item Accompanied with the relation-aware node representation learning process, we also provide a parallel relation representation learning module to learn the semantic representations of relations and guide the node representation learning process collaboratively.
\item A semantic fusing module is proposed to aggregate relation-aware node representations into a compact representation to facilitate downstream tasks, considering the semantic characteristics of relations.
\end{itemize}
The rest of this paper is organized as follows:
\secref{section-2} summarizes previous research related to the studied problem.
\secref{section-3} formalizes the studied problem.
\secref{section-4} presents the framework and introduces each component of our model.
\secref{section-5} evaluates the proposed model through experiments.
Finally, \secref{section-6} concludes the entire paper.
\section{Related work}\label{section-2}
This section reviews existing literature related to our work and points out the differences between previous studies and our research.
\textbf{Graph Mining.}
Over the past decades, a great number of efforts have been made on graph mining. Classical methods based on manifold learning mainly focus on reconstructing graphs, such as Locally Linear Embedding (LLE) \cite{roweis2000nonlinear} and Laplacian Eigenmaps (LE) \cite{DBLP:conf/nips/BelkinN01}. Inspired by the Skip-gram model \cite{DBLP:conf/nips/MikolovSCCD13}, more advanced methods were proposed to learn representations of nodes in the network, including DeepWalk \cite{DBLP:conf/kdd/PerozziAS14}, node2vec \cite{DBLP:conf/kdd/GroverL16} and metapath2vec \cite{DBLP:conf/kdd/DongCS17}. These methods first adopt a random-walk strategy to generate sequences of nodes and then use Skip-gram to maximize the co-occurrence probability of nodes in the same sequence.
However, the above methods only exploit the graph topology and cannot take node attributes into account, resulting in inferior performance. They are outperformed by the recently proposed GNNs, which handle node attributes and the graph structure simultaneously.
\textbf{Graph Neural Networks.}
Recent years have witnessed the success of applying GNNs in various applications, such as node classification \cite{DBLP:conf/iclr/KipfW17,DBLP:conf/nips/HamiltonYL17}, graph classification \cite{DBLP:conf/iclr/XuHLJ19}, traffic prediction \cite{DBLP:conf/ijcai/YuYZ18}, and recommendation systems \cite{DBLP:conf/kdd/YingHCEHL18,DBLP:conf/sigir/Wang0WFC19,DBLP:conf/sigir/0001DWLZ020}. GNNs first propagate information among nodes and their neighbors, and then provide node representations by aggregating the received information. Generally, GNNs could be divided into spectral-based and spatial-based methods. As a spectral-based method, GCN \cite{DBLP:conf/iclr/KipfW17} introduces a localized first-order approximation and performs convolution in the Fourier domain. As spatial-based methods, GraphSAGE \cite{DBLP:conf/nips/HamiltonYL17} propagates information in the graph domain directly and utilizes different functions to aggregate neighbors' information. GAT \cite{DBLP:conf/iclr/VelickovicCCRLB18} leverages the attention mechanism to adaptively select more important neighbors.
However, most existing GNNs were designed for homogeneous graphs, and could not handle different types of nodes and relations in heterogeneous graphs.
\textbf{Relational Graph Learning.}
In recent years, there have been several attempts to investigate relations in graphs. \cite{DBLP:conf/esws/SchlichtkrullKB18} presented RGCN to model the relations in knowledge graphs by employing specialized transformation matrices for each relation type. \cite{DBLP:conf/icdm/ZhuZPZW19} designed RSHN to handle various relations by first building an edge-centric coarsened line graph to describe relation semantic information and then using the learned relation representations to aggregate neighboring nodes. \cite{DBLP:conf/aaai/LuSH019} first conducted empirical studies to divide relations into two categories, i.e., the affiliation relations and the interaction relations, and then introduced RHINE to deal with these relations. \cite{DBLP:conf/kdd/CenZZYZ019} proposed GATNE to capture the multi-type interactions between users and items, which supports both transductive and inductive learning.
In the field of recommendation systems, several studies focused on the multi-behavior recommendation problem where multiple behaviors of users could be seen as different relations between users and items. \cite{DBLP:conf/cikm/ZhangMCX20} introduced MGNN to learn both behavior-shared and behavior-specific embeddings for users and items to model the collective effects of multi-typed behaviors. \cite{DBLP:conf/sigir/JinG0JL20} first constructed a graph to represent multi-behavior data and then designed MBGCN to learn the strength as well as semantics of behaviors by embedding propagation layers. \cite{DBLP:conf/aaai/XiaHXDZYPB21} proposed KHGT to study on knowledge-aware multi-behavior graph, which captured both type-specific user-item interactive patterns and cross-type behavior dependencies. Although the above methods focused on the modelling of relations, the semantic representations of relations are still not explicitly learned. Moreover, the features associated with different types of nodes in heterogeneous graphs are not well discriminated.
\textbf{Heterogeneous Graph Learning.}
Recently, a number of efforts aimed to design GNNs for heterogeneous graphs learning.
\cite{DBLP:conf/www/WangJSWYCY19} presented HAN to learn the importance of neighbors and multiple hand-designed meta-paths based on an attention mechanism. \cite{DBLP:conf/www/0004ZMK20} considered the intermediate nodes in meta-paths and proposed MAGNN to aggregate the intra-meta-path and inter-meta-path information.
HetGNN \cite{DBLP:conf/kdd/ZhangSHSC19} first adopts a random walk strategy to sample neighbors and then uses specialized Bi-LSTMs to integrate heterogeneous node features and neighboring nodes. \cite{DBLP:conf/aaai/HongGLYLY20} designed HetSANN to learn on different types of neighboring nodes as well as the associated edges through type-aware attention layers, which could directly encode the graph information via a dedicated attention mechanism.
Based on the architecture of Transformer \cite{DBLP:conf/nips/VaswaniSPUJGKP17}, \cite{DBLP:conf/www/HuDWS20} introduced HGT to learn the characteristics of different nodes and relations with type-specific parameters.
However, the above methods are mostly developed by following the propagation mechanism of node representations, and the role of relations has not been comprehensively exploited yet.
Different from the above-mentioned methods, we consider the role of relations to improve the learning of more fine-grained node representations in heterogeneous graphs. In particular, our approach collaboratively learns both relation-aware node representations and semantic representations of relations.
\section{Preliminaries}
\label{section-3}
This section provides the definitions of heterogeneous graphs as well as the formalization of the studied problem.
\subsection{Definitions}
\begin{definition}
\textbf{Heterogeneous Graph}. A heterogeneous graph is defined as $\mathcal{G}=\left(\mathcal{V},\mathcal{E},\mathcal{A},\mathcal{R}\right)$ with a node type mapping function $\phi : \mathcal{V} \rightarrow \mathcal{A}$ and an edge type mapping function $\psi : \mathcal{E} \rightarrow \mathcal{R}$, where $\mathcal{V}$, $\mathcal{E}$, $\mathcal{A}$ and $\mathcal{R}$ correspond to the set of nodes, edges, node types and edge types, respectively.
Each node $v \in \mathcal{V}$ and each edge $e \in \mathcal{E}$ belong to one specific type in $\mathcal{A}$ and $\mathcal{R}$, i.e., $\phi(v) \in \mathcal{A}$ and $\psi(e) \in \mathcal{R}$. Each heterogeneous graph has multiple node or edge types such that $|\mathcal{A}| + |\mathcal{R}| > 2$.
\end{definition}
\textbf{Example.} As shown in \figref{fig:motivation}, a heterogeneous academic graph contains multiple types of nodes (i.e., papers, authors, fields and venues) and relations (e.g., "is written by" relation between papers and authors, "belongs to" relation between papers and fields).
\begin{definition}
\textbf{Relation}.
A relation represents the connecting pattern of the source node, the corresponding edge and the target node.
Specifically, for an edge $e=(u,v)$ linked from source node $u$ to target node $v$, the corresponding relation is denoted by $\left \langle \phi(u), \psi(e), \phi(v) \right \rangle$.
Naturally, the inverse relation is represented by $\left \langle \phi(v), \psi(e)^{-1}, \phi(u) \right \rangle$ in this paper.
\end{definition}
\textbf{Example.} In \figref{fig:motivation}, the relations consist of "is written by", "is cited by", "belongs to" and "is published at". In this paper, we study the role of relations and learn relation-aware node representations.
\subsection{Problem Formalization}
Given a heterogeneous graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{A},\mathcal{R})$, representation learning on a heterogeneous graph aims to learn a function $f: \mathcal{V} \rightarrow \mathbb{R}^d$ that embeds each node into a $d$-dimensional representation with $d \ll |\mathcal{V}|$.
The learned representations should capture both node features and relation information to facilitate various tasks, such as node classification, node clustering and link prediction.
\section{Methodology}\label{section-4}
This section first presents the framework of the proposed model and then introduces each component step by step.
\subsection{Framework of the Proposed Model}
The framework of the proposed model is shown in \figref{fig:framework}. It takes a sampled graph $\mathcal{G}$ for the target node, together with node feature matrices, as the input and provides the low-dimensional node representation $\bm{h}_v$ for each $v \in \mathcal{V}$ as the output, which can be applied in various downstream tasks.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=2.0\columnwidth]{figures/framework.png}
\caption{Framework of the proposed model. R-HGNN could collaboratively learn relation-aware node representations for target node $P_1$ (i.e., $\bm{h}_{P_1,r_1}^L$, $\bm{h}_{P_1,r_2}^L$ and $\bm{h}_{P_1,r_3}^L$) as well as the relation representations for $r_1$, $r_2$ and $r_3$ (i.e., $\bm{h}_{r_1}^L$, $\bm{h}_{r_2}^L$ and $\bm{h}_{r_3}^L$). Finally, a compact representation $\bm{h}_{P_1}$ for node $P_1$ is provided to facilitate downstream tasks.}
\label{fig:framework}
\end{figure*}
The proposed model consists of four components: relation-specific node representation learning, cross-relation message passing, relation representation learning and relation-aware representations fusing.
Initially, the sampled heterogeneous graph for the target node is decomposed into multiple relation-specific graphs based on the types of relations.
The first component performs graph convolution to learn unique node representations from each relation-specific graph separately.
The second component establishes connections to improve the interactions of node representations across different relations.
The third component explicitly learns relation representations in a layer-wise manner and uses them to guide the node representation learning process.
The fourth component aggregates the relation-aware node representations into a compact representation, considering the semantic characteristics of relations, to facilitate downstream tasks such as node classification, node clustering and link prediction.
\subsection{Relation-specific Node Representation Learning}\label{section-3-relation_specific_node_representation_laerning}
As shown in \figref{fig:motivation}, in heterogeneous graphs, a target node is usually associated with multiple relations.
Existing heterogeneous graph learning methods are primarily designed by following the propagation mechanism of node representations, while the role of relations is not explicitly exploited.
Therefore, we aim to learn node representations considering the relation-aware characteristics, which indicates that each node is associated with a relation-specific representation to reflect the characteristics of the node with regard to the corresponding relation.
To learn the relation-specific node representation for a target node, we first decompose the heterogeneous graph $\mathcal{G}$ into multiple relation-specific graphs $\left\{ \mathcal{G}_r, r \in \mathcal{R}\right\}$ based on the relation types.
Note that the inverse relation $r^{-1}$ is also added in graph $\mathcal{G}_r$ to allow the two connected nodes to propagate information to each other.
Then, we design a dedicated graph convolution module to learn unique node representations from each relation-specific graph. Finally, we present a weighted residual connection to combine the target node features and the aggregated neighboring information adaptively. Details of the two modules are introduced as follows.
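The decomposition step described above can be sketched as follows (a minimal sketch; edges are modelled as plain (source, relation, target) triples, and the function name is ours):

```python
from collections import defaultdict

def decompose(edges):
    """Split a heterogeneous edge list into relation-specific graphs.
    `edges` holds (source, relation, target) triples; each G_r also keeps
    the inverse edges so both endpoints can exchange messages."""
    graphs = defaultdict(list)
    for u, r, v in edges:
        graphs[r].append((u, v))
        graphs[r].append((v, u))  # inverse relation r^{-1}, kept in G_r
    return dict(graphs)

edges = [("P1", "is_written_by", "A1"), ("P1", "belongs_to", "F1")]
g = decompose(edges)
```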
\textbf{Relation-specific Convolution.}
The process of the convolution on each relation-specific graph is shown in \figref{fig:single_relation_convolution}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.98\columnwidth]{figures/single_relation_convolution.png}
\caption{Illustration of the convolution on relation-specific graph $\mathcal{G}_{r_3}$. Activation functions are omitted for simplicity.}
\label{fig:single_relation_convolution}
\end{figure}
Mathematically, we first project source node $u$, target node $v$ and relation $\psi(e)$\footnote{Since each edge type is unique in heterogeneous graphs, we use edge type $\psi(e)$ to denote the relation $\left \langle \phi(u), \psi(e), \phi(v) \right \rangle$ for simplicity, unless otherwise stated.} into their latent spaces with node-type and relation-type specific transformation matrices via the following equations,
\begin{equation}
\bm{c}_{u,{\psi(e)}}^l = \bm{W}_{\phi(u)}^l \bm{h}_{u,{\psi(e)}}^{l-1},
\end{equation}
\begin{equation}
\bm{c}_{v,\psi(e)}^l = \bm{W}_{\phi(v)}^l \bm{h}_{v,\psi(e)}^{l-1},
\end{equation}
\begin{equation}
\bm{c}_{\psi(e)}^l = \bm{W}_{\psi(e)}^l \bm{h}_{\psi(e)}^{l-1},
\end{equation}
where $\bm{W}_{\phi(u)}^l$, $\bm{W}_{\phi(v)}^l$ and $\bm{W}_{\psi(e)}^l$ are type-specific trainable transformation matrices for source node $u$, target node $v$ and relation $\psi(e)$ at the $l$-th layer, respectively.
$\bm{h}_{u,\psi(e)}^{l-1}$, $\bm{h}_{v,{\psi(e)}}^{l-1}$ and $\bm{h}_{\psi(e)}^{l-1}$ are the representations of source node $u$, target node $v$ and relation $\psi(e)$ in the corresponding relation-specific graph at layer $l-1$. We set $\bm{h}_{u,\psi(e)}^0$, $\bm{h}_{v,\psi(e)}^0$ and $\bm{h}_{\psi(e)}^0$ to their original features $\bm{x}_u$, $\bm{x}_v$ and $\bm{x}_{\psi(e)}$ initially. The original node features $\bm{x}_u$ and $\bm{x}_v$ are usually given by the graph. The original relation feature $\bm{x}_{\psi(e)}$ is represented in one-hot encoding, where the entry of 1 corresponds to the relation type. Then we calculate the normalized importance of source node $u$ to target node $v$ by
\begin{equation}
s_{v,u}^{\psi(e),l} = LeakyReLU\left({\bm{c}_{\psi(e)}^l}^\top \left[\bm{c}_{v,\psi(e)}^l, \bm{c}_{u,{\psi(e)}}^l \right]\right),
\end{equation}
\begin{equation}
\alpha_{v,u}^{\psi(e),l} = \frac{\exp{\left(s_{v,u}^{\psi(e),l}\right)}}{\sum_{u^\prime \in \mathcal{N}_{\psi(e)}(v)} \exp{\left(s_{v,u^\prime}^{\psi(e),l}\right)}},
\end{equation}
where $\left[\cdot,\cdot\right]$ is the concatenation operation, $\mathcal{N}_{\psi(e)}(v)$ denotes the set of $v$'s neighbors with relation $\psi(e)$.
Finally, the information of $v$'s neighbors with relation $\psi(e)$ is aggregated with the learned importance as follows,
\begin{equation}
\widetilde{\bm{z}}_{v,\psi(e)}^l = ReLU\left(\sum_{u \in \mathcal{N}_{\psi(e)}(v)} \alpha_{v,u}^{\psi(e),l} \cdot \bm{c}_{u,{\psi(e)}}^l\right).
\end{equation}
It is worth mentioning that previous research designs trainable parameters in each layer separately to aggregate neighbor information, which may weaken the correlations between layers. To improve the layer relevance, we leverage the relation representations, which are propagated in a layer-wise manner (elaborated later in \secref{section-3-relation_propagate}), to calculate the importance of the target node's neighbors. Since the relation representations are propagated layer by layer, the interactions between adjacent layers can be enhanced.
\textbf{Weighted Residual Connection.} In addition to aggregating neighbor information by the relation-specific convolution, the features of the target node are also assumed to be important, because they inherently reflect the node properties. However, simply combining the target node features and the neighbor information via summation could not distinguish their different importance.
Hence, we combine the target node features and the aggregated neighbor information via a residual connection \cite{DBLP:conf/cvpr/HeZRS16} with trainable weight parameters by
\begin{equation}
\bm{z}_{v,\psi(e)}^l = \lambda_{\phi(v)}^l \cdot \widetilde{\bm{z}}_{v,\psi(e)}^l + \left(1-\lambda_{\phi(v)}^l\right) \cdot \bm{W}_{\phi(v),align}^l \bm{h}_{v,\psi(e)}^{l-1},
\end{equation}
where $\lambda_{\phi(v)}^l$ controls the combination importance and is trainable during the training process. $\bm{W}_{\phi(v),align}^l$ is used to align the dimensions of $\widetilde{\bm{z}}_{v,\psi(e)}^l$ and $\bm{h}_{v,\psi(e)}^{l-1}$. Due to the trainable importance, our model can aggregate the target node features and neighbor information adaptively.
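Putting the equations above together, the relation-specific convolution and the weighted residual connection can be sketched for a single attention head in plain Python (a simplified sketch with toy weights; all function names are ours):

```python
import math

def matvec(W, x):                               # W: list of rows
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def relation_conv(W_src, W_dst, c_rel, h_dst, neighbors, lam, W_align):
    """One relation-specific convolution step followed by the weighted
    residual connection; `neighbors` maps source nodes to their features,
    `c_rel` is the projected relation representation."""
    c_v = matvec(W_dst, h_dst)                  # project target node
    c_us = {u: matvec(W_src, h) for u, h in neighbors.items()}
    # attention score: relation vector dotted with the concatenation [c_v, c_u]
    scores = {u: leaky_relu(sum(a * b for a, b in zip(c_rel, c_v + c_u)))
              for u, c_u in c_us.items()}
    m = max(scores.values())                    # numerically stable softmax
    exps = {u: math.exp(s - m) for u, s in scores.items()}
    total = sum(exps.values())
    # attention-weighted neighbour aggregation with ReLU
    agg = [max(0.0, sum(exps[u] / total * c_us[u][i] for u in c_us))
           for i in range(len(c_v))]
    # weighted residual with the aligned target features
    aligned = matvec(W_align, h_dst)
    return [lam * a + (1 - lam) * b for a, b in zip(agg, aligned)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
z = relation_conv(I2, I2, [0.5] * 4, [1.0, 2.0],
                  {"u1": [1.0, 0.0], "u2": [0.0, 1.0]}, 0.7, I2)
# both neighbours get equal attention here, so z = [0.65, 0.95]
```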
\subsection{Cross-relation Message Passing}
\label{section-3-cross_relation_message_passing}
Through the above relation-specific node representation learning component, we obtain multiple representations of the target node, each specific to a relation type. In fact, the different relations that the target node interacts with tend to be correlated with each other. Treating the node representations under each relation in isolation would neglect the relation interactions and lead to inferior performance, which is empirically validated in \secref{section-4-ablation_study}. Therefore, it is necessary to propagate messages across different relations to provide more informative node representations. However, naive pooling operations across relations are inadequate because they fail to discern the node representations with regard to different relation types. In this paper, we establish connections among node representations to improve message passing across different relations and automatically distinguish the importance of relations.
Formally, let $\mathcal{R}(v)$ denote the set of relations that node $v$ is associated with. Given the learned node representations of $v$ with respect to each relation, i.e., $\left\{\bm{z}_{v,r}^l, r \in \mathcal{R}(v)\right\}$, the message passing between relation $\psi(e)$ and the relations in $\mathcal{R}(v)$ is implemented by
\begin{equation}
\bm{h}_{v,\psi(e)}^l = \sum_{r \in \mathcal{R}(v)} \beta_{\psi(e),r}^l \cdot \bm{z}_{v,r}^l,
\end{equation}
where $\beta_{\psi(e),r}^l$ is the normalized relevance of relation $r$ to relation $\psi(e)$ at the $l$-th layer, and it is calculated by
\begin{equation}
\beta_{\psi(e),r}^l = \frac{\exp{\left(LeakyReLU\left({\bm{q}_{\psi(e)}^l}^\top\bm{z}_{v,r}^l\right)\right)}}{\sum_{r^\prime \in \mathcal{R}(v)} \exp{\left(LeakyReLU\left({\bm{q}_{\psi(e)}^l}^\top\bm{z}_{v,r^\prime}^l\right)\right)}}.
\end{equation}
$\bm{q}_{\psi(e)}^l$ is the trainable attention vector specific to relation $\psi(e)$ at the $l$-th layer, which is used to control the information flow between relation $\psi(e)$ and the relations in $\mathcal{R}(v)$. By establishing the connections of node representations across relations, each unique representation specific to the corresponding relation could contact with other relations and become more informative.
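A minimal sketch of this cross-relation message passing (names are ours; `z` holds the per-relation representations $\bm{z}_{v,r}^l$ and `q` the attention vectors $\bm{q}_{\psi(e)}^l$):

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def cross_relation_pass(z, q):
    """For each relation psi, mix the per-relation node representations
    z[r] with attention weights driven by psi's trainable vector q[psi]."""
    out = {}
    for psi, q_psi in q.items():
        scores = {r: leaky_relu(sum(a * b for a, b in zip(q_psi, z_r)))
                  for r, z_r in z.items()}
        m = max(scores.values())                # numerically stable softmax
        beta = {r: math.exp(s - m) for r, s in scores.items()}
        total = sum(beta.values())
        out[psi] = [sum(beta[r] / total * z[r][i] for r in z)
                    for i in range(len(z[psi]))]
    return out

z = {"r1": [1.0, 0.0], "r2": [0.0, 1.0]}
q = {"r1": [1.0, 0.0], "r2": [0.0, 1.0]}
h = cross_relation_pass(z, q)
# h["r1"] leans towards z["r1"] because q["r1"] aligns with it
```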
\subsection{Relation Representation Learning} \label{section-3-relation_propagate}
As illustrated in \secref{section-1}, relations reflect the connecting patterns of nodes and carry rich information. Therefore, in addition to learning relation-aware node representations through the specialized node representation propagation mechanism of the above two components, our approach is also founded on the semantic characteristics of relations, which are rarely studied in existing methods. It is worth noticing that although existing methods like HAN, RGCN and HGT design trainable parameters specific to meta-paths or relations to capture such characteristics, they do not focus on the role of relations explicitly, which means that the semantic representations of different relations are ignored.
To explicitly learn the relation semantic representations, we propose a general propagation mechanism for relation representations, which could be formalized as
\begin{equation}
\bm{h}_{\psi(e)}^l = PROPAGATE^l(\bm{h}_{\psi(e)}^{l-1}, \bm{extra}).
\label{equ:general_propagate_relation}
\end{equation}
$PROPAGATE^l(\cdot)$ represents the propagation mechanism that updates relation representations at layer $l$, which could be implemented by a succinct linear propagation or a sophisticated gated updating propagation \cite{DBLP:journals/neco/HochreiterS97}. $\bm{extra}$ denotes the inputs other than the relation representation $\bm{h}_{\psi(e)}^{l-1}$, such as the hidden states of relations in the gated updating propagation. In this work, we implement the relation propagation mechanism without considering $\bm{extra}$ and rewrite \equref{equ:general_propagate_relation} as follows,
\begin{equation}
\bm{h}_{\psi(e)}^l = \bm{W}_{\psi(e),upd}^l \bm{h}_{\psi(e)}^{l-1} + \bm{b}_{\psi(e),upd}^l,
\end{equation}
where $\bm{W}_{\psi(e),upd}^l$ and $\bm{b}_{\psi(e),upd}^l$ are the trainable parameters that update the representation of relation $\psi(e)$ at layer $l$. The relation representations can not only capture the semantic characteristics of relations, but also guide the learning process of node representations, making nodes and relations collaboratively learned (introduced in \secref{section-3-relation_specific_node_representation_laerning}).
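The update above is a plain affine map applied layer by layer; a minimal sketch (the toy weights are ours):

```python
def propagate_relation(W_upd, b_upd, h_rel):
    """One layer-wise affine update of a relation representation."""
    return [sum(w * x for w, x in zip(row, h_rel)) + b
            for row, b in zip(W_upd, b_upd)]

# stack two layers: the output of layer l-1 feeds layer l
h = [1.0, 0.0]  # one-hot initial relation feature
for W, b in [([[2.0, 0.0], [0.0, 2.0]], [0.1, 0.1])] * 2:
    h = propagate_relation(W, b, h)
# h is now [4.3, 0.3]
```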
\subsection{Relation-aware Representations Fusing}
We define an R-HGNN layer as the composition of the aforementioned three components and stack $L$ such layers to receive information from multi-hop neighbors. Finally, the $L$ layers provide relation-aware node representations for target node $v$, i.e., $\left\{ \bm{h}_{v,r}^L, r \in \mathcal{R}(v)\right\}$, as well as the representations of relations associated with $v$, that is, $\left\{ \bm{h}_{r}^L, r \in \mathcal{R}(v)\right\}$. For downstream tasks, a compact node representation is usually required. One could apply simple pooling operations (e.g., average or max pooling) to the relation-aware representations to obtain the compact representation, but such operations fail to consider the importance of node representations across different relations (empirically validated in \secref{section-4-ablation_study}). Therefore, we design a semantic fusing component that aggregates the relation-aware node representations into a compact node representation to facilitate various downstream tasks.
In particular, this component takes $\left\{ \bm{h}_{v,r}^L, r \in \mathcal{R}(v)\right\}$ and $\left\{ \bm{h}_{r}^L, r \in \mathcal{R}(v)\right\}$ as inputs, and provides a compact representation $\bm{h}_v$ for node $v$ by a relation-aware attention mechanism via the following equations,
\begin{equation}
\gamma_{v,r} = \frac{\exp{\left(LeakyReLU\left(\left({\bm{V}_{r}\bm{h}_{v,r}^L}\right)^\top\bm{E}_{r}\bm{h}_{r}^L\right)\right)}}{\sum_{r^\prime \in \mathcal{R}(v)}\exp{\left(LeakyReLU\left(\left({\bm{V}_{r^\prime}\bm{h}_{v,r^\prime}^L}\right)^\top\bm{E}_{r^\prime}\bm{h}_{r^\prime}^L\right)\right)}},
\end{equation}
\begin{equation}
\bm{h}_{v} = \sum_{r \in \mathcal{R}(v)}\gamma_{v,r} \cdot \bm{V}_{r}\bm{h}_{v,r}^L,
\end{equation}
where $\gamma_{v,r}$ represents the learned importance of relation $r$ to node representation $\bm{h}_{v}$. $\bm{V}_{r}$ and $\bm{E}_{r}$ represent the transformation matrices for node representation $\bm{h}_{v,r}^L$ and relation representation $\bm{h}_{r}^L$, respectively. Finally, the compact node representation $\bm{h}_{v}$ is obtained through weighted aggregation using the learned relation importance, and can be applied in downstream tasks. It is worth noticing that the relation-aware node representations are semantically aggregated by considering the relation representations, and such a design is in line with the motivation of our approach, that is, investigating the role of relations to improve the learning of node representations.
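The fusing step can be sketched as follows (a minimal sketch; identity matrices stand in for the trainable $\bm{V}_r$ and $\bm{E}_r$, and all names are ours):

```python
import math

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def fuse(h_node, h_rel, V, E):
    """Aggregate the relation-aware node representations into one compact
    vector, with attention guided by the learned relation representations."""
    proj = {r: matvec(V[r], h) for r, h in h_node.items()}
    scores = {r: leaky_relu(sum(a * b for a, b in
                                zip(proj[r], matvec(E[r], h_rel[r]))))
              for r in h_node}
    m = max(scores.values())                    # numerically stable softmax
    gamma = {r: math.exp(s - m) for r, s in scores.items()}
    total = sum(gamma.values())
    dim = len(next(iter(proj.values())))
    return [sum(gamma[r] / total * proj[r][i] for r in proj)
            for i in range(dim)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
h_v = fuse({"r1": [1.0, 0.0], "r2": [0.0, 1.0]},
           {"r1": [1.0, 1.0], "r2": [1.0, 1.0]},
           {"r1": I2, "r2": I2}, {"r1": I2, "r2": I2})
# equal scores here, so h_v averages the two projections: [0.5, 0.5]
```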
\subsection{End-to-End Learning Process}
We build the proposed R-HGNN by first stacking $L$ R-HGNN layers to learn relation-aware node representations and then employing the relation-aware representations fusing component to integrate the multiple representations into a compact one. We also adopt the multi-head attention mechanism to enhance the model capacity and make the training process more stable, where the outputs of different heads are combined via concatenation. The proposed R-HGNN can be trained in an end-to-end manner with the following strategies.
\textbf{Semi-supervised learning strategy}. For tasks where the labels are available (e.g., node classification), R-HGNN could be optimized by minimizing the following cross entropy loss,
\begin{equation}
\label{equ:semi_supervised_loss}
\mathcal{L} = - \sum_{v \in \mathcal{V}_{label}} \sum_{c=1}^{C} y_{v,c} \cdot \log \hat{y}_{v,c},
\end{equation}
where $\mathcal{V}_{label}$ denotes the set of labeled nodes. $y_{v,c}$ and $\hat{y}_{v,c}$ represent the ground truth and the predicted probability of node $v$ at the $c$-th dimension, respectively. $\hat{y}_{v,c}$ can be obtained from a classifier (e.g., a single-layer neural network) that takes $\bm{h}_v$ as the input and provides $\hat{\bm{y}}_v$ as the output.
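The loss above can be sketched directly (a minimal sketch; the one-hot labels and toy probabilities are ours):

```python
import math

def cross_entropy(y_true, y_pred):
    """Summed cross-entropy over labeled nodes: y_true holds one-hot rows,
    y_pred the predicted class probabilities from the classifier."""
    return -sum(y * math.log(p)
                for yt, yp in zip(y_true, y_pred)
                for y, p in zip(yt, yp))

# two labeled nodes, two classes
loss = cross_entropy([[1, 0], [0, 1]], [[0.8, 0.2], [0.4, 0.6]])
```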
\textbf{Unsupervised learning strategy}. For tasks without node labels (e.g., link prediction), R-HGNN could be optimized by minimizing the binary cross entropy with the negative sampling strategy of Skip-gram \cite{DBLP:conf/nips/MikolovSCCD13}, which takes the representations of paired nodes as inputs,
\begin{equation}
\label{equ:unsupervised_loss}
\mathcal{L} = - \sum_{(v, u) \in \Omega_P} \log \sigma \left(\bm{h}_v^\top \bm{h}_u\right) - \sum_{(v^\prime, u^\prime) \in \Omega_N} \log \sigma \left(-\bm{h}_{v^\prime}^\top \bm{h}_{u^\prime}\right),
\end{equation}
where $\sigma(\cdot)$ is the sigmoid activation function, and $\Omega_P$ and $\Omega_N$ denote the sets of positive (observed) and negative (sampled) node pairs, respectively.
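Eq.~(\ref{equ:unsupervised_loss}) can be sketched directly; the toy representations and pair sets below are illustrative assumptions:

```python
import numpy as np

def skipgram_link_loss(H, pos_pairs, neg_pairs):
    """Binary cross entropy with negative sampling (Eq. unsupervised_loss).

    H: (n, d) node representations; pos_pairs / neg_pairs: lists of (v, u)
    index pairs drawn from Omega_P and Omega_N, respectively.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    loss = 0.0
    for v, u in pos_pairs:            # observed pairs should score high
        loss -= np.log(sigmoid(H[v] @ H[u]))
    for v, u in neg_pairs:            # sampled non-pairs should score low
        loss -= np.log(sigmoid(-(H[v] @ H[u])))
    return loss

H = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
loss = skipgram_link_loss(H, pos_pairs=[(0, 1)], neg_pairs=[(0, 2)])
```

Swapping the positive and negative pair sets yields a strictly larger loss on this toy example, confirming that the objective rewards embeddings where linked nodes are close.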
\subsection{Analysis of the Proposed Model}
Here we give the complexity analysis and summarize the advantages of R-HGNN as follows.
\textbf{Model Complexity}.
Our R-HGNN is highly efficient and could be easily parallelized.
Let $N_{in}^l$ and $N_{out}^l$ denote the input and output dimensions of node representations at the $l$-th layer, and let $R_{in}^l$ and $R_{out}^l$ denote the input and output dimensions of relation representations at the $l$-th layer. Let $L$ denote the number of stacked R-HGNN layers and $d$ denote the dimension of the final compact node representation. The time complexity of calculating the relation-specific node representations with respect to relation $r \in \mathcal{R}$ is linear in the numbers of nodes and edges in the relational graph $\mathcal{G}_r$, and can be written as $\mathcal{O}(\alpha_1 |\mathcal{V}_r| + \beta |\mathcal{E}_r| + \gamma)$, where $|\mathcal{V}_r|$ and $|\mathcal{E}_r|$ are the numbers of nodes and edges in $\mathcal{G}_r$, $\alpha_1=N_{out}^l(N_{in}^l + |\mathcal{R}|)$, $\beta=R_{in}^l N_{out}^l$ and $\gamma=R_{in}^l R_{out}^l$. Hence, the relation-specific node representations can be calculated in parallel across nodes and relations. When fusing the relation-aware representations into the compact node representation, the time complexity is linear in the number of nodes in the heterogeneous graph $\mathcal{G}$, and can be written as $\mathcal{O}(\alpha_2 |\mathcal{V}|)$ with $\alpha_2=d|\mathcal{R}|(R_{out}^L + N_{out}^L)$.
\textbf{Model Superiority}.
1) R-HGNN can learn directly on the heterogeneous graph by first decomposing the original graph into several relation-specific graphs naturally and then learning node representations via the dedicated node representation learning component. By exploiting the inherent structure of the heterogeneous graph, our model leverages the information of all the nodes and achieves more comprehensive node representations.
2) R-HGNN discerns node representations with respect to different relation types, where each relation-specific representation reflects the node's characteristics under that relation. R-HGNN also explicitly captures the role of relations by learning relation semantics, which guide the learning of relation-aware node representations. This design improves the model's learning ability and captures more fine-grained information.
3) R-HGNN has the advantage of discovering the more important relations, which is beneficial for heterogeneous graph analysis. According to the learned importance of each relation-specific node representation, we can intuitively observe which relations contribute more to the downstream task and analyze the results better.
\section{Experiments}\label{section-5}
This section evaluates the performance of the proposed method by experiments on various graph learning tasks, including node classification, node clustering, node visualization and link prediction.
\begin{table*}[!htbp]
\centering
\caption{Statistics of the datasets.}
\label{tab:dataset_description}
\resizebox{2\columnwidth}{!}
{
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Datasets & Nodes & Edges & Features & Feature Extraction & Split Strategy & Split Sets \\ \hline
IMDB & \begin{tabular}[c]{@{}c@{}}\# Movie (M): 4,076\\ \# Director (D): 1,999\\ \# Actor (A): 5,069\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# M-D: 4,076\\ \# M-A: 12,228\end{tabular} & \begin{tabular}[c]{@{}c@{}}M:1,537\\ D:1,537\\ A:1,537\end{tabular} & \begin{tabular}[c]{@{}c@{}}M:bag-of-words of keywords\\ D:average of directed movies\\ A:average of acted movies\end{tabular} & \begin{tabular}[c]{@{}c@{}}Random Split\\ (following \cite{DBLP:conf/www/WangJSWYCY19})\end{tabular} & \begin{tabular}[c]{@{}c@{}}Train: 817\\ Validation: 407\\ Test: 2,852\end{tabular} \\ \hline
OGB-MAG & \begin{tabular}[c]{@{}c@{}}\# Paper (P): 736,389\\ \# Author (A): 1,134,649\\ \# Field (F): 59,965\\ \# Institution (I): 8,740\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# P-A: 7,145,660\\ \# P-P: 5,416,271\\ \# P-F: 7,505,078\\ \# A-I: 1,043,998\end{tabular} & \begin{tabular}[c]{@{}c@{}}P:256\\ A:128\\ F:128\\ I:128\end{tabular} & \begin{tabular}[c]{@{}c@{}}P:Word2Vec \& metapath2vec\\ A:metapath2vec\\ F:metapath2vec\\ I:metapath2vec\end{tabular} & \begin{tabular}[c]{@{}c@{}}Time-based Split\\ (following \cite{DBLP:conf/nips/HuFZDRLCL20})\end{tabular} & \begin{tabular}[c]{@{}c@{}}Train: 629,571\\ Validation: 64,879\\ Test: 41,939\end{tabular} \\ \hline
OAG-Venue & \begin{tabular}[c]{@{}c@{}}\# Paper (P): 166,065\\ \# Author (A): 510,189\\ \# Field (F): 45,717\\ \# Institution (I): 9,079\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# P-A: 477,676\\ \# P-P: 851,644\\ \# P-F: 1,700,497\\ \# A-I: 612,872\end{tabular} & \begin{tabular}[c]{@{}c@{}}P:768\\ A:768\\ F:400\\ I:400\end{tabular} & \begin{tabular}[c]{@{}c@{}}P:XLNet\\ A:average of published papers\\ F:metapath2vec\\ I:metapath2vec\end{tabular} & \begin{tabular}[c]{@{}c@{}}Time-based Split\\ (following \cite{DBLP:conf/www/HuDWS20})\end{tabular} & \begin{tabular}[c]{@{}c@{}}Train: 106,058\\ Validation: 24,255\\ Test: 35,752\end{tabular} \\ \hline
OAG-L1-Field & \begin{tabular}[c]{@{}c@{}}\# Paper (P): 119,483\\ \# Author (A): 510,189\\ \# Venue (V): 6,934\\ \# Institution (I): 9,079\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# P-A: 340,959\\ \# P-P: 329,703\\ \# P-V: 119,483\\ \# A-I: 612,872\end{tabular} & \begin{tabular}[c]{@{}c@{}}P:768\\ A:768\\ V:400\\ I:400\end{tabular} & \begin{tabular}[c]{@{}c@{}}P:XLNet\\ A:average of published papers\\ V:metapath2vec\\ I:metapath2vec\end{tabular} & \begin{tabular}[c]{@{}c@{}}Time-based Split\\ (following \cite{DBLP:conf/www/HuDWS20})\end{tabular} & \begin{tabular}[c]{@{}c@{}}Train: 81,071\\ Validation: 16,439\\ Test: 21,973\end{tabular} \\ \hline
\end{tabular}
}
\end{table*}
\subsection{Description of Datasets}
We conduct experiments on four real-world datasets, containing a small-scale dataset (IMDB) and three large-scale datasets (OGB-MAG, OAG-Venue and OAG-L1-Field).
\begin{itemize}
\item \textbf{IMDB}\footnote{https://data.world/data-society/imdb-5000-movie-dataset}:
Following \cite{DBLP:conf/www/WangJSWYCY19}, we extract a subset of IMDB and construct a heterogeneous graph containing movies (M), directors (D) and actors (A). The movies are divided into three categories: Action, Comedy and Drama. The movie features are the bag-of-words representations of the plot keywords. Director/actor features are the average representations of the movies that they directed/acted in. We use the random split strategy in \cite{DBLP:conf/www/WangJSWYCY19} to split the dataset.
\item \textbf{OGB-MAG}:
OGB-MAG \cite{DBLP:conf/nips/HuFZDRLCL20} is a heterogeneous academic network extracted from the Microsoft Academic Graph (MAG), consisting of papers (P), authors (A), fields (F) and institutions (I). Papers are published at 349 different venues. Each paper is associated with a Word2Vec feature. The other node types have no input features, so we adopt the metapath2vec \cite{DBLP:conf/kdd/DongCS17} model to generate them. We use the time-based split strategy in \cite{DBLP:conf/nips/HuFZDRLCL20} to conduct experiments.
\item \textbf{OAG-Venue}:
OAG-Venue \cite{DBLP:conf/www/HuDWS20} is a heterogeneous graph in the Computer Science (CS) domain, which consists of papers (P), authors (A), fields (F) and institutions (I). The papers are published at 241 different venues. Paper features are obtained from a pre-trained XLNet \cite{DBLP:conf/nips/YangDYCSL19}, and the feature of each author is the average of his/her published paper representations. The features of the other node types are generated by the metapath2vec \cite{DBLP:conf/kdd/DongCS17} model. The dataset is split by the time-based split strategy in \cite{DBLP:conf/www/HuDWS20}.
\item \textbf{OAG-L1-Field}:
OAG-L1-Field \cite{DBLP:conf/www/HuDWS20} is another heterogeneous graph in the CS domain, containing papers (P), authors (A), venues (V) and institutions (I). The papers belong to 52 different $L1$-level fields. The feature extraction and split strategy are the same as those used in OAG-Venue.
\end{itemize}
Statistics of the datasets are summarized in \tabref{tab:dataset_description}.
Please refer to the
\section{Proofs for Section \ref{sec:Learning Robust Model Results in Better OOD Generalization}}\label{app:proof in Learning Robust Model Results in Better OOD Generalization}
\subsection{Proofs for Section \ref{sec:Robustness Corresponds with Better OOD Generalization}}\label{app:proof in Robustness Corresponds with Better OOD Generalization}
\subsubsection{Proof of Theorem \ref{thm:ood generalization upper bound}}
To start the proof of Theorem \ref{thm:ood generalization upper bound}, we need the following lemma.
\begin{lemma}
\label{lem:equivalence}
For any $\text{\boldmath{$w$}}$ and $r$, we have
\begin{equation}
\small
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) = \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right].
\end{equation}
\end{lemma}
\begin{proof}
Let $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}) = \boldsymbol{x} + \arg\max_{\{\boldsymbol{\delta}: \|\boldsymbol{\delta}\|_{\infty} \leq r\}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$ with $\boldsymbol{x}$ is an input data. The existence of $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ is guaranteed by the continuity of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$. $P_{r}$ is the distribution of $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ with $\boldsymbol{x}\sim P_{0}$. Then
\begin{equation}
\small
\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] = \mathbb{E}_{P_{r}}[f(\text{\boldmath{$w$}}, \boldsymbol{x})].
\end{equation}
Since
\begin{equation}
\small
\mathsf{W}_{\infty}(P_{0}, P_{r}) \leq \mathbb{E}_{P_{0}}[\|\boldsymbol{x} - T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\|_{\infty}] \leq r,
\end{equation}
we have
\begin{equation}
\small
\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] \leq \sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}).
\end{equation}
On the other hand, let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. Due to Kolmogorov's theorem, $P^{*}_{r}$ can be distribution of some random vector $\boldsymbol{z}$, due to the definition of $\mathsf{W}_{\infty}$-distance, we have $\|\boldsymbol{z} - \boldsymbol{x}\|_{\infty}\leq r$ holds almost surely. Then we conclude
\begin{equation}
\small
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) = R_{P^{*}_{r}}(\text{\boldmath{$w$}}) = \mathbb{E}_{P^{*}_{r}}[f(\text{\boldmath{$w$}}, \boldsymbol{z})] \leq \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right].
\end{equation}
Thus, we get the conclusion.
\end{proof}
This lemma shows that a distributional perturbation measured in the $\mathsf{W}_{\infty}$-distance is equivalent to an input perturbation. Hence we can study $\mathsf{W}_{\infty}$-distributional robustness through $\ell_{\infty}$-input robustness. The basic tool for our proof is the covering number, which is defined as follows.
\begin{definition}\citep{wainwright2019}
An $r$-cover of $(\mathcal{X}, \|\cdot\|_{p})$ is any point set $\{\boldsymbol{u}_{i}\}\subseteq\mathcal{X}$ such that for any $\boldsymbol{u}\in\mathcal{X}$, there exists $\boldsymbol{u}_{i}$ satisfying $\|\boldsymbol{u} - \boldsymbol{u}_{i}\|_{p}\leq r$. The covering number $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{p})$ is the cardinality of the smallest $r$-cover.
\end{definition}
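A concrete way to see the definition at work is a greedy cover of a finite sample: scan the points and open a new center only when a point is not yet covered. This is an illustrative sketch (the greedy construction and the sample are our own, not part of the proof):

```python
import numpy as np

def greedy_cover(points, r):
    """Greedy r-cover of a finite point set under the l_inf norm."""
    centers = []
    for p in points:
        # open a new center only if p is farther than r from every center
        if not any(np.max(np.abs(p - c)) <= r for c in centers):
            centers.append(p)
    return centers

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))       # samples from [-1, 1]^2
cover = greedy_cover(X, r=0.25)
# every sample lies within l_inf distance r of some center
assert all(any(np.max(np.abs(p - c)) <= 0.25 for c in cover) for p in X)
```

The number of centers returned upper-bounds how finely the sample fills the space, mirroring the role $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{\infty})$ plays in the proof below.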
Now we are ready to give the proof of Theorem \ref{thm:ood generalization upper bound} which is motivated by \citep{xu2012robustness}.
\begin{proof}[Proof of Theorem \ref{thm:ood generalization upper bound}]
We can construct an $r$-cover of $(\mathcal{X}, \|\cdot\|_{2})$ such that $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{2})\leq (2d_{0})^{(2D/r^{2} + 1)} = N$, because $\mathcal{X}$ can be covered by a polytope with $\ell_{2}$-diameter smaller than $2D$ and $2d_{0}$ vertices; see Theorem 0.0.4 in \citep{vershynin2018} for details. Due to the geometrical structure, we also have $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{\infty})\leq (2d_{0})^{(2D/r^{2} + 1)}$. Then there exists a cover $(C_{1}, \cdots, C_{N})$ of $(\mathcal{X}, \|\cdot\|_{\infty})$ in which the $C_{i}$ are pairwise disjoint and $\|\boldsymbol{u} - \boldsymbol{v}\|_{\infty} \leq r$ for any $\boldsymbol{u}, \boldsymbol{v}\in C_{i}$. It can be constructed by $C_{i} = \hat{C}_{i}\bigcap\left(\bigcup_{j=1}^{i-1}\hat{C}_{j}\right)^{c}$, where $(\hat{C}_{1},\cdots, \hat{C}_{N})$ covers $(\mathcal{X}, \|\cdot\|_{\infty})$ and the diameter of each $\hat{C}_{i}$ is smaller than $r$, since $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{\infty}) \leq N$. Let $A_{j} = \{\boldsymbol{x}_{i}: \boldsymbol{x}_{i}\in C_{j}\}$, and let $|A_{j}|$ be the cardinality of $A_{j}$. Due to Lemma \ref{lem:equivalence}, we have
\begin{equation}
\label{eq:generalization decomposition}
\small
\begin{aligned}
\left|\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r_{0})}R_{P}(\text{\boldmath{$w$}}) - R_{P_{n}}(\text{\boldmath{$w$}}) \right| & = \left|\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] - R_{P_{n}}(\text{\boldmath{$w$}})\right| \\
& = \left|\sum\limits_{j=1}^{N}\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\mid \boldsymbol{x}\in C_{j}\right]P_{0}(C_{j}) - R_{P_{n}}(\text{\boldmath{$w$}})\right| \\
& \leq \left|\sum\limits_{j=1}^{N}\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\mid \boldsymbol{x}\in C_{j}\right]\frac{|A_{j}|}{n} - \frac{1}{n}\sum\limits_{i=1}^{n}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})\right|\\
& + \left|\sum\limits_{j=1}^{N}\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\mid \boldsymbol{x}\in C_{j}\right]\left(\frac{|A_{j}|}{n} - P_{0}(C_{j})\right)\right| \\
& \leq \left|\frac{1}{n}\sum\limits_{j=1}^{N}\sum\limits_{\boldsymbol{x}_{i}\in C_{j}}\sup_{\boldsymbol{x}\in C_{j} + B_{\infty}(\textbf{0}, r_{0})}|f(\text{\boldmath{$w$}}, \boldsymbol{x}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})|\right| + M\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right| \\
& \overset{a}{\leq} \frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq 2r}\left|f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})\right| + M\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right| \\
& \leq \epsilon + M\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right|.
\end{aligned}
\end{equation}
Here $a$ is due to $C_{j} + B_{\infty}(\textbf{0}, r) \subseteq B_{\infty}(\boldsymbol{x}_{i}, 2r)$ when $\boldsymbol{x}_{i}\in C_{j}$, since the $\ell_{\infty}$-diameter of $C_{j}$ is smaller than $r$. The last inequality is due to the $(2r, \epsilon, P_{n}, \infty)$-robustness of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$. On the other hand, due to Proposition A6.6 in \citep{van2000weak}, we have
\begin{equation}
\label{eq:multinomial concentration}
\small
\mathbb{P}\left(\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right|\geq \theta\right) \leq 2^{N}\exp\left(\frac{-n\theta^{2}}{2}\right).
\end{equation}
Combining this with \eqref{eq:generalization decomposition} and the value of $N$, we get the conclusion.
\end{proof}
\subsubsection{Proof of Theorem \ref{thm:ood generalization upper bound l2}}
The proof of Theorem \ref{thm:ood generalization upper bound l2} differs slightly from that of Theorem \ref{thm:ood generalization upper bound}. The reason is that any out-distribution $P$ constrained in $B_{\mathsf{W}_{\infty}}(P_{0}, r)$ corresponds to OOD data contained, almost surely, in an $\ell_{\infty}$-ball around the in-distribution data; see Lemma \ref{lem:equivalence} for a rigorous statement. Hence, in Theorem \ref{thm:ood generalization upper bound} we can utilize the $\ell_{\infty}$-robustness of the model to derive OOD generalization under the $\mathsf{W}_{\infty}$-distance.
However, in the regime of the $\mathsf{W}_{2}$-distance, roughly speaking, the transformed OOD data $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ are only contained in an $\ell_{2}$-ball around $\boldsymbol{x}$ in expectation. Thus, Lemma \ref{lem:equivalence} is invalid under the $\mathsf{W}_{2}$-distance.
\par
To discuss OOD generalization under the $\mathsf{W}_{2}$-distance, we need a more delicate characterization of the distributions $P\in B_{\mathsf{W}_{2}}(P_{0}, r)$. First, we need the following lemma.
\begin{lemma}\label{lem:optimal}
For any $r$ and $\text{\boldmath{$w$}}$, let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. Then, there exists a mapping $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ such that $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\sim P^{*}_{r}$ with $\boldsymbol{x}\sim P_{0}$.
\end{lemma}
\begin{proof}
The proof of Theorem 6 in \citep{sinha2018certifying} shows that
\begin{equation}
\small
R_{P^{*}_{r}}(\text{\boldmath{$w$}}) = \sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) = \inf_{\lambda\geq 0}\sup_{P, \pi\in (P, P_{0})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}d\pi(\boldsymbol{x}, \boldsymbol{z}) + \lambda r\right).
\end{equation}
We next show that the supremum over $\pi$ in the last equality is attained by the joint distribution of $(T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}), \boldsymbol{x})$, which implies our conclusion. For any $\lambda > 0$, we have
\begin{equation}
\small
\sup_{P, \pi\in (P, P_{0})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}d\pi(\boldsymbol{x}, \boldsymbol{z})\right) \leq \int_{\mathcal{X}}\sup_{\boldsymbol{x}}\left(f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}\right)dP_{0}(\boldsymbol{z}),
\end{equation}
since the supremum on the left-hand side is taken over both $P$ and $\pi$. On the other hand, let $P(\cdot\mid \boldsymbol{z})$ be a regular conditional distribution on $\mathcal{X}$ given $\boldsymbol{z}$, and let $\boldsymbol{x}(\cdot)$ be a measurable function on $\mathcal{X}$. Since $P(\cdot\mid \boldsymbol{z})$ is measurable,
\begin{equation}
\small
\begin{aligned}
\sup_{P, \pi\in (P, P_{0})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}d\pi(\boldsymbol{x}, \boldsymbol{z})\right) & \geq \sup_{P(\cdot\mid \boldsymbol{z})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}dP(\boldsymbol{x}\mid \boldsymbol{z})dP_{0}(\boldsymbol{z})\right) \\
& \geq \sup_{\boldsymbol{x}(\cdot)}\left(\int_{\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}(\boldsymbol{z})) - \lambda\|\boldsymbol{x}(\boldsymbol{z}) - \boldsymbol{z}\|^{2}dP_{0}(\boldsymbol{z})\right) \\
& \geq \int_{\mathcal{X}}\sup_{\boldsymbol{x}}\left(f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}\right)dP_{0}(\boldsymbol{z}).
\end{aligned}
\end{equation}
Thus, we get the conclusion.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:ood generalization upper bound l2}]
Similar to the proof of Theorem \ref{thm:ood generalization upper bound}, we can construct a disjoint cover $(C_{1}, \cdots, C_{N})$ of $(\mathcal{X}, \|\cdot\|_{2})$ such that $N\leq (2d_{0})^{(2\epsilon^{2}D/r^{2} + 1)}$ and the $\ell_{2}$-diameter of each $C_{i}$ is smaller than $r/\epsilon$. Let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$; by Lemma \ref{lem:optimal}, we have
\begin{equation}
\label{eq:sup bound}
\small
\begin{aligned}
\sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) & = R_{P^{*}_{r}}(\text{\boldmath{$w$}}) \\
& = \mathbb{E}_{P_{0}}\left[f(\text{\boldmath{$w$}}, T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}))\right] \\
& = \mathbb{E}_{P_{0}}\left[f(\text{\boldmath{$w$}}, T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}))\left(\textbf{1}_{T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\in B_{2}(\boldsymbol{x}, r / \epsilon)} + \textbf{1}_{T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\notin B_{2}(\boldsymbol{x}, r / \epsilon)}\right)\right] \\
& \leq \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] + M\mathbb{P}(T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\notin B_{2}(\boldsymbol{x}, r / \epsilon)).
\end{aligned}
\end{equation}
Due to the definition of $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$, by Markov's and Jensen's inequalities, we have
\begin{equation}
\small
\left(\frac{r}{\epsilon}\right) \mathbb{P}(T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\notin B_{2}(\boldsymbol{x}, r / \epsilon)) \leq \int_{\mathcal{X}}\|T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}) - \boldsymbol{x}\| dP_{0}(\boldsymbol{x}) \leq \left(\int_{\mathcal{X}}\|T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}) - \boldsymbol{x}\|^{2} dP_{0}(\boldsymbol{x})\right)^{1/2} = \mathsf{W}_{2}(P_{0}, P^{*}_{r}) \leq r.
\end{equation}
Plugging this into \eqref{eq:sup bound} and using the definition of the Wasserstein distance, we have
\begin{equation}
\label{eq:upper bound on optimal}
\small
\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] \leq \sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) \leq \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] + M\epsilon.
\end{equation}
Similar to the proof of Theorem \ref{thm:ood generalization upper bound}, since the model is $(2r/\epsilon, \epsilon, P_{n}, 2)$-robust, we have
\begin{equation}
\small
\left|\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] - R_{P_{n}}(\text{\boldmath{$w$}})\right|\leq \epsilon + M\sqrt{\frac{(2d_{0})^{(2\epsilon^{2}D/r^{2} + 1)}\log{2} + 2\log{(1/\theta)}}{n}}
\end{equation}
holds with probability at least $1 - \theta$. Combining this with \eqref{eq:upper bound on optimal}, we get the conclusion.
\end{proof}
\subsection{Proofs for Section \ref{sec:robust training}}\label{app:proof in robust training}
The proof of Theorem \ref{thm:convergence} is the same for both $p\in\{2, \infty\}$; we take $p=\infty$ as an example. Before giving the proof, we first state a lemma that characterizes the convergence rate of the first inner loop in Algorithm \ref{alg:sgd}.
\begin{lemma}
\label{lem:convergence}
For any $\text{\boldmath{$w$}}, \boldsymbol{x}\in\{\boldsymbol{x}_{i}\}$, and $r$, there exists $\boldsymbol{\delta}^{*}\in\arg\max_{\{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty}\leq r\}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$ such that
\begin{equation}
\small
\|\boldsymbol{\delta}_{K + 1} - \boldsymbol{\delta}^{*}\|^{2} \leq \left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}\|\boldsymbol{\delta}_{1} - \boldsymbol{\delta}^{*}\|^{2}
\end{equation}
when $\boldsymbol{\delta}_{k + 1} = \emph{\text{Proj}}_{B_{\infty}(\emph{\textbf{0}}, r)}\left(\boldsymbol{\delta}_{k} +\eta_{\boldsymbol{x}}\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k})\right)$ with $\eta_{\boldsymbol{x}} = 1 /L_{22}$.
\end{lemma}
\begin{proof}
The existence of $\boldsymbol{\delta}^{*}$ is due to the continuity of $f(\text{\boldmath{$w$}}, \cdot)$. Then
\begin{equation}
\label{eq:distance descent}
\small
\begin{aligned}
f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}^{*}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k + 1}) & = f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}^{*}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}) + f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k + 1})\\
& \overset{a}{\leq} \langle \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}), \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k}\rangle -\frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2}\\
& + \langle\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}), \boldsymbol{\delta}_{k} - \boldsymbol{\delta}_{k + 1}\rangle + \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} \\
& = \langle\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}), \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k + 1}\rangle - \frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} + \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} \\
& \overset{b}{\leq} L_{22}\langle \boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}, \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k + 1}\rangle - \frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} + \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} \\
& = L_{22}\langle \boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}, \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k}\rangle - \frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} - \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2},
\end{aligned}
\end{equation}
where $a$ is due to the $L_{22}$-Lipschitz continuity of $\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x})$ and the strong concavity of $f(\text{\boldmath{$w$}}, \boldsymbol{x} + \cdot)$, and $b$ follows from the property of projection (see Lemma 3.1 in \citep{bubeck2014convex}). Then we get
\begin{equation}
\small
\begin{aligned}
\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}^{*}\|^{2} & = \|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} + \|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} + 2\langle\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}, \boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\rangle \\
& \leq \left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2}
\end{aligned}
\end{equation}
by plugging \eqref{eq:distance descent} into the above equality and using $f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}^{*}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k + 1}) \geq 0$. Thus, we get the conclusion.
\end{proof}
This lemma shows that the inner loop in Algorithm \ref{alg:sgd} can efficiently approximate the worst-case perturbation for any $\text{\boldmath{$w$}}_{t}$ and $\boldsymbol{x}_{i}$. Now we are ready to give the proof of Theorem \ref{thm:convergence}.
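The inner loop is projected gradient ascent on $\boldsymbol{\delta}$ over the $\ell_{\infty}$-ball with step size $\eta_{\boldsymbol{x}} = 1/L_{22}$. The following sketch runs it on a toy strongly concave objective (our own illustrative choice, not the paper's loss), where the maximizer over $B_{\infty}(\textbf{0}, r)$ is known in closed form:

```python
import numpy as np

# toy strongly concave objective: f(delta) = -0.5 * ||delta - c||^2,
# maximized over the l_inf ball of radius r as in the inner loop of Algorithm sgd
c = np.array([0.3, -2.0, 0.8])
r, L22 = 1.0, 1.0                      # ball radius, smoothness (eta_x = 1/L22)

grad = lambda d: -(d - c)              # gradient of f at delta
proj = lambda d: np.clip(d, -r, r)     # projection onto B_inf(0, r)

delta = np.zeros_like(c)
for _ in range(200):                   # delta_{k+1} = Proj(delta_k + eta_x * grad)
    delta = proj(delta + (1.0 / L22) * grad(delta))

# for this objective the constrained maximizer is clip(c, -r, r)
assert np.allclose(delta, np.clip(c, -r, r), atol=1e-6)
```

The iterates contract toward the maximizer geometrically, matching the $(1 - \mu_{\boldsymbol{x}}/L_{22})^{K}$ rate in the lemma.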
\par
We need the following lemma, which is Theorem 6 in \citep{rakhlin2012making}.
\begin{lemma}
\label{lem:concentration}
Let $\{\xi_{1},\cdots, \xi_{t}\}$ be a martingale difference sequence with a uniform upper bound $b$. Let $V_{t} = \sum_{j=1}^{t}\mathsf{Var}(\xi_{j}\mid\mathcal{F}_{j - 1})$, where $\mathcal{F}_{j}$ is the $\sigma$-field generated by $\{\xi_{1},\cdots,\xi_{j}\}$. Then for every $a$ and $v>0$,
\begin{equation}
\small
\mathbb{P}\left(\bigcup_{s\leq t}\left(\left\{\sum\limits_{j=1}^{t}\xi_{j} \geq a\right\}\bigcap \left\{V_{t} \leq v \right\}\right)\right) \leq \exp\left(\frac{-a^{2}}{2(v + ba)}\right).
\end{equation}
\end{lemma}
This is a type of Bennett's inequality, which is sharper than the Azuma--Hoeffding inequality when the variance $v$ is much smaller than the uniform bound $b$.
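A quick numerical comparison illustrates the gap; the constants below are illustrative, not from the paper:

```python
import math

# Bennett-type bound of Lemma (concentration) vs. the Azuma--Hoeffding bound
# exp(-a^2 / (2 t b^2)) for a length-t martingale difference sequence with
# uniform bound b and total conditional variance v
b, v, a, t = 1.0, 0.01, 0.5, 100
bennett = math.exp(-a**2 / (2 * (v + b * a)))   # Lemma bound
azuma = math.exp(-a**2 / (2 * t * b**2))        # Azuma--Hoeffding bound
assert bennett < azuma                          # sharper when v << b
```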
\subsubsection{Proof of Theorem \ref{thm:convergence}}
\begin{proof}
With a slight abuse of notation, let $r(p) = r$ and define $g(\text{\boldmath{$w$}}, \boldsymbol{x}) = \sup_{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty} \leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$. Lemma A.5 in \citep{nouiehed2019solving} implies that $g(\text{\boldmath{$w$}}, \boldsymbol{x})$ has an $\left(L_{11} + \frac{L_{12}L_{21}}{\mu_{\boldsymbol{x}}}\right)$-Lipschitz continuous gradient with respect to $\text{\boldmath{$w$}}$ for any fixed $\boldsymbol{x}$. Then $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})$ has an $L = L_{11} + \frac{L_{12}L_{21}}{\mu_{\boldsymbol{x}}}$-Lipschitz continuous gradient. Let $\boldsymbol{x}^{*}\in \boldsymbol{x} + \arg\max_{\{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty}\leq r\}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$; due to the Lipschitz gradient of $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})$,
\begin{equation}
\small
\begin{aligned}
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) & \leq \langle\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}), \text{\boldmath{$w$}}_{t + 1} - \text{\boldmath{$w$}}_{t}\rangle + \frac{L}{2}\|\text{\boldmath{$w$}}_{t + 1} - \text{\boldmath{$w$}}_{t}\|^{2} \\
& = -\eta_{\text{\boldmath{$w$}}_{t}}\langle\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}), \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\rangle + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}L}{2}\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\|^{2} \\
& = -\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|^{2} + \eta_{\text{\boldmath{$w$}}_{t}}\langle\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}), \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\rangle \\
& + \eta_{\text{\boldmath{$w$}}_{t}}\langle\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}),\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*})\rangle + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}L}{2}\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\|^{2}.
\end{aligned}
\end{equation}
Here the last equality is due to $\nabla_{\text{\boldmath{$w$}}}g(\text{\boldmath{$w$}}, \boldsymbol{x}) = \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}^{*})$ (similar to Danskin's theorem; see Lemma A.5 in \citep{nouiehed2019solving}), where $\boldsymbol{x}^{*}_{i_{t}}$ is the local maximum approximated by $\boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K}$ in Lemma \ref{lem:convergence}. Taking expectation over $\text{\boldmath{$w$}}_{t + 1}$ conditioned on $\text{\boldmath{$w$}}_{t}$ on both sides of the above inequality, applying Jensen's inequality, and combining Lemma \ref{lem:convergence} with $\eta_{\text{\boldmath{$w$}}_{t}} = 1/(\mu_{\text{\boldmath{$w$}}}t)$, we obtain
\begin{equation}
\label{eq:convergence in exp}
\small
\begin{aligned}
\mathbb{E}[\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1})] - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) & \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) -\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|^{2} \\
& + \mathbb{E}\left[\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\|\right] + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L}{2} \\
& \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) -\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|^{2} + \eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}\mathbb{E}\left[\|\boldsymbol{\delta}_{1} - \boldsymbol{\delta}_{i_{t}}^{*}\|^{2}\right] + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L}{2} \\
& \leq \left(1 - 2\mu_{\text{\boldmath{$w$}}}\eta_{\text{\boldmath{$w$}}_{t}}\right)\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right) + \eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L \\
& = \left(1 - \frac{2}{t}\right)\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right) + \frac{G^{2}L}{\mu_{\text{\boldmath{$w$}}}^{2}t^{2}}.
\end{aligned}
\end{equation}
Here the third inequality is because
\begin{equation}
\small
\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}\|\boldsymbol{\delta}_{1} - \boldsymbol{\delta}_{i_{t}}^{*}\|^{2} \leq \eta_{\text{\boldmath{$w$}}_{t}}G\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}4d_{0}r^{2} \leq \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L}{2},
\end{equation}
for any $\boldsymbol{\delta}_{i_{t}}^{*}$, since
\begin{equation}
\small
K\log{\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)} \leq -K\frac{\mu_{\boldsymbol{x}}}{L_{22}} \leq \log{\left(\frac{GL}{8T\mu_{\text{\boldmath{$w$}}}d_{0}r^{2}}\right)}.
\end{equation}
Then by induction,
\begin{equation}
\small
\begin{aligned}
\mathbb{E}[\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1})] - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) & \leq \frac{G^{2}L}{\mu^{2}_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{1}{j^{2}}\prod_{k=j + 1}^{t}\left(1 - \frac{2}{k}\right) \\
& = \frac{G^{2}L}{\mu^{2}_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{1}{j^{2}}\frac{(j - 1)j}{(t - 1)t} \\
& \leq \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}}.
\end{aligned}
\end{equation}
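As a standalone sanity check (not part of the proof), the telescoping identity $\prod_{k=j+1}^{t}(1 - 2/k) = (j-1)j/((t-1)t)$ used in the second step, together with the resulting $1/t$ bound, can be verified numerically with exact rational arithmetic:

```python
from fractions import Fraction

def telescoped(j, t):
    # Product of (1 - 2/k) = (k - 2)/k for k = j+1, ..., t, computed exactly.
    prod = Fraction(1)
    for k in range(j + 1, t + 1):
        prod *= Fraction(k - 2, k)
    return prod

for t in [5, 10, 50]:
    for j in range(2, t + 1):
        # Closed form used in the proof: (j - 1) j / ((t - 1) t).
        assert telescoped(j, t) == Fraction((j - 1) * j, (t - 1) * t)
    # The weighted sum sum_j 1/j^2 * (j-1)j/((t-1)t) is at most 1/t.
    s = sum(Fraction(1, j ** 2) * telescoped(j, t) for j in range(2, t + 1))
    assert s <= Fraction(1, t)
```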
Thus, taking $t = T$ (for $T \geq 2$) gives the first conclusion, convergence in expectation. For the second conclusion, define $\xi_{t} = \langle\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}),\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*})\rangle$. Then the Cauchy--Schwarz inequality implies that
\begin{equation}
\small
|\xi_{t}| \leq \|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*})\| \leq 2G^{2}.
\end{equation}
Similarly to \eqref{eq:convergence in exp}, for $t \geq 2$,
\begin{equation}
\label{eq:high probablity bound}
\small
\begin{aligned}
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) & \leq \left(1 - 2\mu_{\text{\boldmath{$w$}}}\eta_{\text{\boldmath{$w$}}_{t}}\right)\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right) + \eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L + 2\eta_{\text{\boldmath{$w$}}_{t}}\xi_{t} \\
& \leq \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} + \frac{2}{\mu_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{\xi_{j}}{j}\prod_{k = j + 1}^{t}\left(1 - \frac{2}{k}\right)\\
& = \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} + \frac{2}{\mu_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{1}{j}\frac{(j - 1)j}{(t - 1)t}\xi_{j} \\
& = \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} + \frac{2}{\mu_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{(j - 1)}{(t - 1)t}\xi_{j}.
\end{aligned}
\end{equation}
Since the second term in the last line is upper bounded by $\sum_{j=2}^{t}\xi_{j}$, which is a sum of martingale differences with $|\xi_{j}| \leq 2G^{2}$, a direct application of the Azuma--Hoeffding inequality for bounded martingale differences (Corollary 2.20 in \citep{wainwright2019}) gives an $\mathcal{O}(1/\sqrt{t})$ convergence rate with high probability. However, we can sharpen this rate via Bennett's inequality (Proposition 3.19 in \citep{duchi2016lecture}), because the conditional variance of $\xi_{j}$ decreases over training.
We now bound the conditional variance of $\sum_{j=2}^{t}(j - 1)\xi_{j}$. Let $\mathcal{F}_{j}$ be the $\sigma$-field generated by $\{\text{\boldmath{$w$}}_{1}, \cdots, \text{\boldmath{$w$}}_{j}\}$. Since $\mathbb{E}[\xi_{j} \mid \mathcal{F}_{j - 1}] = 0$, we have
\begin{equation}
\small
\begin{aligned}
\mathsf{Var}\left(\sum_{j=2}^{t}(j - 1)\xi_{j}\mid \mathcal{F}_{j - 1}\right) & = \sum_{j=2}^{t}(j - 1)^{2}\mathsf{Var}\left(\xi_{j}\mid \mathcal{F}_{j - 1}\right) \\
& = \sum_{j=2}^{t}(j - 1)^{2}\mathbb{E}\left[\xi_{j}^{2} \mid \mathcal{F}_{j - 1}\right] \\
& \leq 4G^{2}\sum_{j=2}^{t}(j - 1)^{2} \|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j})\|^{2} \\
& \leq 8G^{2}L\sum_{j=2}^{t}(j - 1)^{2} \left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right),
\end{aligned}
\end{equation}
where the first inequality follows from the Cauchy--Schwarz inequality and the last inequality holds because
\begin{equation}
\small
\begin{aligned}
\tilde{R}_{P_{n}}\left(\text{\boldmath{$w$}}^{*}\right) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) & \leq \tilde{R}_{P_{n}}\left(\text{\boldmath{$w$}} - \frac{1}{L}\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \\
& \leq -\left\langle
\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}), \frac{1}{L}\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right\rangle + \frac{L}{2}\left\|\frac{1}{L}\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right\|^{2} \\
& = -\frac{1}{2L}\left\|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right\|^{2},
\end{aligned}
\end{equation}
for any $\text{\boldmath{$w$}}$. By applying Lemma \ref{lem:concentration}, provided $T\geq 4$ and $0 < \theta < 1 / e$, with probability at least $1 - \theta$, for all $t \leq T$,
\begin{equation}
\small
\begin{aligned}
& \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\\
& \leq \frac{8G}{\mu_{\text{\boldmath{$w$}}}(t - 1)t}\max\left\{\sqrt{2L\sum_{j=2}^{t}(j - 1)^{2} \left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right)}, G(t - 1)\sqrt{\log{\left(\frac{\log{T}}{\theta}\right)}}\right\}\sqrt{\log{\left(\frac{\log{T}}{\theta}\right)}} + \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} \\
& \leq \frac{8G\sqrt{\log{(\log{(T/\theta)})}}}{\mu_{\text{\boldmath{$w$}}}(t - 1)t}\sqrt{2L\sum_{j=2}^{t}(j - 1)^{2} \left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right)} + \frac{(8\mu_{\text{\boldmath{$w$}}}G^{2}\log{(\log{(T/\theta)})} + G^{2}L)}{t\mu_{\text{\boldmath{$w$}}}^{2}}.
\end{aligned}
\end{equation}
Then an upper bound on the first term in the last inequality yields our conclusion. Note that if $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})$ is of order $\mathcal{O}(1 / (j - 1))$, the conclusion follows. To see this, we look for a sufficiently large constant $a$ such that $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) \leq a / t$. For $t = 1$ this clearly holds whenever $a\geq G^{2} / (2\mu_{\text{\boldmath{$w$}}})$, due to the PL inequality and the bounded gradient. For $t\geq 2$, we find such an $a$ by induction. Let $b = 8G\sqrt{2L\log{(\log{(T/\theta)})}}/\mu_{\text{\boldmath{$w$}}}$ and $c = (8\mu_{\text{\boldmath{$w$}}}G^{2}\log{(\log{(T/\theta)})} + G^{2}L) / \mu_{\text{\boldmath{$w$}}}^{2}$. A satisfactory $a$ must satisfy
\begin{equation}
\small
\begin{aligned}
\frac{a}{t} \geq \frac{b}{(t - 1)t}\sqrt{a\sum\limits_{j=2}^{t}(j - 1)} + \frac{c}{t}
= \frac{b}{(t - 1)t}\sqrt{\frac{at(t - 1)}{2}} + \frac{c}{t} \geq\frac{1}{t}\left(b\sqrt{\frac{a}{2}} + c\right).
\end{aligned}
\end{equation}
This requirement is equivalent to $a - b\sqrt{a/2} - c \geq 0$; solving this quadratic inequality in $\sqrt{a}$ yields
\begin{equation}
\small
a \geq \left(\frac{b + \sqrt{b^{2} + 8c}}{2\sqrt{2}}\right)^{2}.
\end{equation}
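As a quick standalone check of this algebra (not part of the proof), one can verify numerically that the threshold $\left((b + \sqrt{b^{2} + 8c})/(2\sqrt{2})\right)^{2}$ is exactly the root of $a - b\sqrt{a/2} - c$, and that any larger $a$ satisfies the inequality strictly; the sample values of $b$ and $c$ below are arbitrary:

```python
import math

def g(a, b, c):
    # Left-hand side of the quadratic-type inequality: a - b*sqrt(a/2) - c.
    return a - b * math.sqrt(a / 2) - c

for b, c in [(1.0, 1.0), (3.0, 0.5), (10.0, 7.0)]:
    a_star = ((b + math.sqrt(b * b + 8 * c)) / (2 * math.sqrt(2))) ** 2
    # The threshold itself satisfies the inequality with equality ...
    assert abs(g(a_star, b, c)) < 1e-9
    # ... and any larger a satisfies it strictly.
    for scale in [1.5, 2.0, 10.0]:
        assert g(scale * a_star, b, c) > 0
```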
By taking
\begin{equation}
\small
a \geq 2\left(\frac{2b^{2} + 8c}{8}\right) \geq \left(\frac{b + \sqrt{b^{2} + 8c}}{2\sqrt{2}}\right)^{2},
\end{equation}
we get
\begin{equation}
\small
a \geq \frac{64G^{2}L\log{(\log{(T/\theta)})}}{\mu_{\text{\boldmath{$w$}}}^{2}} + \frac{(16\mu_{\text{\boldmath{$w$}}}G^{2}\log{(\log{(T/\theta)})} + G^{2}L)}{\mu_{\text{\boldmath{$w$}}}^{2}} = \frac{G^{2}\log{(\log{(T/\theta)})}(64L + 16\mu_{\text{\boldmath{$w$}}}) + G^{2}L}{\mu_{\text{\boldmath{$w$}}}^{2}},
\end{equation}
by the definitions of $b$ and $c$. Hence, we get the conclusion by taking $t=T$.
\end{proof}
\subsubsection{Proof of Proposition \ref{pro:robustness}}\label{app:proof of proposition robustness}
\begin{proof}
From the definition of $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})$, for any $r\geq 0$, we have
\begin{equation}
\small
\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}(f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})) \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon.
\end{equation}
On the other hand
\begin{equation}
\small
\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}(f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})) \leq R_{P_{n}}(\text{\boldmath{$w$}}) \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon.
\end{equation}
Summing the two inequalities above, we get
\begin{equation}
\small
\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}|f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})| \leq \frac{1}{n}\sum\limits_{i=1}^{n}\left(\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - \inf_{\|\boldsymbol{\delta}\|_{p}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})\right) \leq 2\epsilon.
\end{equation}
Then the conclusion is verified.
\end{proof}
\section{Proofs for Section \ref{sec:pretrain improves ood}}
\subsection{Proof of Theorem \ref{thm:pretrain generalize}}\label{app:proof of theorem pretrain generalize}
\begin{proof}
We have $r(\infty) = r$ in this theorem. The key is to bound $|\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}_{\text{pre}})- \sup_{Q\in B_{\mathsf{W}_{\infty}}(Q_{0}, r)}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})|$; the triangle inequality and Hoeffding's inequality then imply the conclusion. Let $P^{*}_{r}\in \arg\max_{\{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)\}}R_{P}(\text{\boldmath{$w$}}_{\text{pre}})$. For any given $\boldsymbol{x}$, by the continuity of $f(\text{\boldmath{$w$}}_{\text{pre}},\cdot)$ and similarly to Lemma \ref{lem:equivalence}, we can define $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}) = \boldsymbol{x} + \arg\max_{\{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty}\leq r\}}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x} + \boldsymbol{\delta})$. Then, by Lemma \ref{lem:equivalence},
\begin{equation}
\small
R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) = \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x} + \boldsymbol{\delta})\right].
\end{equation}
Thus, $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}) \sim P^{*}_{r}$ when $\boldsymbol{x}\sim P_{0}$. By Kolmogorov's theorem, we can find $\boldsymbol{z}\sim Q_{0}$, and we let $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\sim Q^{*}_{r}$. By the definition of the $\mathsf{W}_{\infty}$-distance, one can verify $\mathsf{W}_{\infty}(Q_{0}, Q^{*}_{r})\leq r$ as well as $R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \epsilon_{\text{pre}}$. Note that $0\leq f(\text{\boldmath{$w$}}_{\text{pre}}, \cdot) \leq M$; then
\begin{equation}
\label{eq:tv distance}
\small
\begin{aligned}
\left|R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) - R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}})\right| & = \left|\int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x})dP^{*}_{r}(\boldsymbol{x}) - \int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x})dQ^{*}_{r}(\boldsymbol{x})\right| \\
& = \left|\int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}))dP_{0}(\boldsymbol{x}) - \int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}))dQ_{0}(\boldsymbol{x})\right| \\
& \leq \int_{\mathcal{X}}\left|f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}))\right|\left|dP_{0}(\boldsymbol{x}) - dQ_{0}(\boldsymbol{x})\right| \\
& \leq M\int_{\mathcal{X}}\left|dP_{0}(\boldsymbol{x}) - dQ_{0}(\boldsymbol{x})\right| \\
& = 2M\mathsf{TV}(P_{0}, Q_{0}).
\end{aligned}
\end{equation}
The last equality is from the definition of total variation distance \citep{villani2008optimal}. Thus a simple triangle inequality implies that
\begin{equation}
\label{eq:tv bound}
\small
R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \left|R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) - R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}})\right| + R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \epsilon_{\text{pre}} + 2M\mathsf{TV}(P_{0}, Q_{0}).
\end{equation}
Next, we give the concentration result for $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\text{pre}})$. By definition, it can be rewritten as $R_{P_{n}^{*}}(\text{\boldmath{$w$}}_{\text{pre}})$, where $P_{n}^{*}$ is the empirical distribution on $\{T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}_{i})\}$. Since $0 \leq f(\text{\boldmath{$w$}}_{\text{pre}}, \cdot) \leq M$ and $\{T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}_{i})\}$ are i.i.d.\ draws from $P^{*}_{r}$, the Azuma--Hoeffding inequality (Corollary 2.20 in \citep{wainwright2019}) shows that with probability at least $1 - \theta$,
\begin{equation}
\small
\begin{aligned}
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\text{pre}}) - R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) & = \frac{1}{n}\sum\limits_{i=1}^{n}f(\text{\boldmath{$w$}}_{\text{pre}}, T(\boldsymbol{x}_{i})) - R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq M\sqrt{\frac{\log{(1/\theta)}}{2n}}.
\end{aligned}
\end{equation}
Hence we get our conclusion.
\end{proof}
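The deviation term $M\sqrt{\log(1/\theta)/(2n)}$ in the proof above decays at the usual $1/\sqrt{n}$ rate; a small numeric illustration (the values of $M$, $n$, and $\theta$ are arbitrary):

```python
import math

def hoeffding_term(M, n, theta):
    # Deviation term M * sqrt(log(1/theta) / (2n)) from the bound above.
    return M * math.sqrt(math.log(1 / theta) / (2 * n))

# The term shrinks as 1/sqrt(n): quadrupling n halves the bound.
b1 = hoeffding_term(M=1.0, n=1000, theta=0.05)
b2 = hoeffding_term(M=1.0, n=4000, theta=0.05)
assert abs(b2 - b1 / 2) < 1e-12
```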
\subsection{Proof of Theorem \ref{thm:pretrain generalize l2}}
With a slight abuse of notation, we write $r(2) = r/\epsilon_{\text{pre}}$ simply as $r$ in this proof, and let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. By Lemma \ref{lem:optimal}, there exists $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x})\sim P^{*}_{r}$ with $\boldsymbol{x}\sim P_{0}$. By Kolmogorov's theorem, we can find $\boldsymbol{z}\sim Q_{0}$. Letting $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\sim Q^{*}_{r}$, we see
\begin{equation}
\small
\begin{aligned}
\mathsf{W}_{2}(Q_{0}, Q^{*}_{r})^{2} & \leq \int_{\mathcal{X}}\|\boldsymbol{z} - T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\|^{2}dQ_{0}(\boldsymbol{z}) \\
& \leq \int_{\mathcal{X}}\|\boldsymbol{z} - T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\|^{2}\left|dQ_{0}(\boldsymbol{z}) - dP_{0}(\boldsymbol{z})\right| + \int_{\mathcal{X}}\|\boldsymbol{z} - T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\|^{2} dP_{0}(\boldsymbol{z}) \\
& \leq D^{2}\int_{\mathcal{X}}\left|dQ_{0}(\boldsymbol{z}) - dP_{0}(\boldsymbol{z})\right| + r^{2} \\
& = 2D^{2}\mathsf{TV}(P_{0}, Q_{0}) + r^{2}.
\end{aligned}
\end{equation}
Thus $R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \epsilon_{\text{pre}}$. Similarly to \eqref{eq:tv distance} and \eqref{eq:tv bound}, we get the conclusion.
\section{Hyperparameters}\label{app:hyp on adv}
\begin{table*}[htbp]
\centering
\scalebox{0.9}{
\begin{minipage}{0.5\linewidth}\label{tbl:hyper adv cifar}
\caption{Hyperparameters of adversarial training on \texttt{CIFAR10}.}
\vspace{-0.1in}
\begin{tabular}{c c c c}
\hline
Hyperparam & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$\\
\hline
Learning Rate & 0.1 & 0.1 & 0.1 \\
Momentum & 0.9 & 0.9 & 0.9 \\
Batch Size & 128 & 128 & 128 \\
Weight Decay & 5e-4 & 5e-4 & 5e-4 \\
Epochs & 200 & 200 & 200 \\
Inner Loop Steps & - & 8 & 8 \\
Perturbation Size & - & 2/12 & 2/255 \\
Perturbation Step Size & - & 1/24 & 1/510 \\
\hline
\end{tabular}
\end{minipage}
\hspace{0.2in}
\begin{minipage}{0.5\linewidth}\label{tbl:hyper adv imagenet}
\caption{Hyperparameters of adversarial training on \texttt{ImageNet}.}
\vspace{-0.1in}
\begin{tabular}{c c c c}
\hline
Hyperparam & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$\\
\hline
Learning Rate & 0.1 & 0.1 & 0.1 \\
Momentum & 0.9 & 0.9 & 0.9 \\
Batch Size & 512 & 512 & 512 \\
Weight Decay & 5e-4 & 5e-4 & 5e-4 \\
Epochs & 100 & 100 & 100 \\
Inner Loop Steps & - & 3 & 3 \\
Perturbation Size & - & 0.25 & 2/255 \\
Perturbation Step Size & - & 0.05 & 1/510 \\
\hline
\end{tabular}
\end{minipage}
}
\end{table*}
\begin{table*}[htbp]
\caption{Hyperparameters of adversarial training on $\text{BERT}$ base model.}
\vspace{-0.1in}
\label{tbl:hyper}
\centering
\scalebox{0.8}{
{
\begin{tabular}{c c c c}
\hline
Hyperparam & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$\\
\hline
Learning Rate & 3e-5 & 3e-5 & 3e-5 \\
Batch Size & 32 & 32 & 32 \\
Weight Decay & 0 & 0 & 0 \\
Hidden Layer Dropout Rate & 0.1 & 0.1 & 0.1 \\
Attention Probability Dropout Rate & 0.1 & 0.1 & 0.1 \\
Max Epochs & 10 & 10 & 10 \\
Learning Rate Decay & Linear & Linear & Linear\\
Warmup Ratio & 0 & 0 & 0 \\
Inner Loop Steps & - & 3 & 3 \\
Perturbation Size & - & 1.0 & 0.001 \\
Perturbation Step Size & - & 0.1 & 0.0005 \\
\hline
\end{tabular}}}
\end{table*}
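The Adv-$\ell_{\infty}$ columns in the tables above correspond to a PGD-style inner loop: several signed-gradient ascent steps of size equal to the perturbation step size, each projected back into the $\ell_{\infty}$ ball of the given perturbation size. A minimal \texttt{numpy} sketch on a toy quadratic loss (the loss, gradient, and data here are illustrative placeholders, not the networks used in the experiments):

```python
import numpy as np

def linf_pgd(grad_fn, x, r, eta, steps):
    """Inner maximization: `steps` signed-gradient ascent steps of size `eta`,
    each clipped back into the l_inf ball of radius `r` around `x`."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = delta + eta * np.sign(grad_fn(x + delta))
        delta = np.clip(delta, -r, r)  # projection onto the l_inf ball
    return delta

# Toy loss f(x) = 0.5 * ||x - target||^2 with analytic gradient (placeholder).
target = np.array([1.0, -1.0])
grad = lambda x: x - target

x = np.zeros(2)
# Values mirroring the CIFAR10 Adv-l_inf column: r = 2/255, step 1/510, 8 steps.
delta = linf_pgd(grad, x, r=2 / 255, eta=1 / 510, steps=8)
assert np.all(np.abs(delta) <= 2 / 255 + 1e-12)
```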
\section{Ablation Study}
\label{app:perturbation}
\subsection{Effect of Perturbation Size}\label{app:perturbation size}
We study the effect of the perturbation size $r$ in adversarial training, which appears in the bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2}.
We vary the perturbation size $r$ in $\{2^{-5}/12, 2^{-4}/12, 2^{-3}/12, 2^{-2}/12, 2^{-1}/12, 2^{0}/12, 2^{1}/12, 2^{2}/12, 2^{3}/12, 2^{4}/12, 2^{5}/12, 2^{6}/12, 2^{7}/12\}$ for Adv-$\ell_{2}$ and in $\{2^{-4}/255, 2^{-3}/255, 2^{-2}/255, 2^{-1}/255, 2^{0}/255, 2^{1}/255, 2^{2}/255, 2^{3}/255, 2^{4}/255\}$ for Adv-$\ell_{\infty}$.
The perturbation step size $\eta_{\boldsymbol{x}}$ in Algorithm \ref{alg:sgd} is set to be $r/4$ \citep{salman2020adversarially}.
Experiments are conducted on $\texttt{CIFAR10}$ and the settings follow those in Section \ref{sec:Experiments on Image Classification}.
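For reference, the two perturbation-size grids and the $r/4$ step-size rule above can be written out programmatically (a restatement of the sweep, nothing more):

```python
# Perturbation-size grids from the ablation, with step size r/4
# (the heuristic from Salman et al. cited in the text).
l2_grid = [2 ** x / 12 for x in range(-5, 8)]      # Adv-l2: 2^{-5}/12 ... 2^{7}/12
linf_grid = [2 ** x / 255 for x in range(-4, 5)]   # Adv-l_inf: 2^{-4}/255 ... 2^{4}/255
step_sizes = {r: r / 4 for r in l2_grid + linf_grid}

assert len(l2_grid) == 13 and len(linf_grid) == 9
assert step_sizes[2 / 255] == (2 / 255) / 4
```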
\par
The results are shown in Figures \ref{fig:adv_l2_r} and \ref{fig:adv_linf_r}. Over the studied ranges, the accuracy on OOD data from all categories exhibits a similar trend as $r$ increases: it first increases and then decreases. This is consistent with our discussion in Section \ref{sec:Experiments on Image Classification} that there is an optimal perturbation size $r$ for improving OOD generalization via adversarial training. For data corrupted under the types Fog, Bright, and Contrast, adversarial training degrades the performance in Table \ref{tbl:adversarial training on image}. We speculate that this is because these three corruption types rescale the input pixels to smaller values, so the same perturbation size $r$ amounts to a relatively large perturbation. Thus, following the discussion in Section \ref{sec:Experiments on Image Classification} on the optimal $r$ for improving OOD generalization, we suggest conducting adversarial training with a smaller perturbation size to defend against these three types of corruption. Figures \ref{fig:adv_l2_r} and \ref{fig:adv_linf_r} also show that smaller perturbation sizes yield better performance for these three corruption types.
\subsection{Effect of the Number of Training Samples}\label{app:number of training samples}
We study the effect of the number of training samples, as bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} suggest that more training samples lead to better OOD generalization.
We construct five subsets of \texttt{CIFAR10} containing 10000, 20000, 30000, 40000, and 50000 training samples, respectively.
The other settings follow those in Section \ref{sec:Experiments on Image Classification}.
The results are shown in Figures \ref{fig:adv_l2_num} and \ref{fig:adv_linf_num}.
\begin{figure*}[htbp]\centering
\subfloat[Clean.]{\includegraphics[width=0.198\textwidth]{./pic/image-c/clean.JPEG}}
\subfloat[Gauss.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/gauss.JPEG}}
\subfloat[Shot.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/shot.JPEG}}
\subfloat[Impulse.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/impulse.JPEG}}
\vspace{-0.1in}
\\
\subfloat[Defocus.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/defocus.JPEG}}
\subfloat[Glass.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/glass.JPEG}}
\subfloat[Motion.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/motion.JPEG}}
\subfloat[Zoom.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/zoom.JPEG}}
\vspace{-0.1in}
\\
\subfloat[Snow.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/snow.JPEG}}
\subfloat[Frost.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/frost.JPEG}}
\subfloat[Fog.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/fog.JPEG}}
\subfloat[Bright.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/bright.JPEG}}
\vspace{-0.1in}
\\
\subfloat[Contrast.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/contrast.JPEG}}
\subfloat[Elastic.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/elastic.JPEG}}
\subfloat[Pixel.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/pixel.JPEG}}
\subfloat[JPEG.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/jpeg.JPEG}}
\vspace{-0.1in}
\caption{
15 types of artificially constructed corruptions from four categories: Noise, Blur, Weather, and Digital from the \texttt{ImageNet-C} dataset \citep{hendrycks2018benchmarking}.
Each corruption has five severity levels; the images shown here are at severity 5.}
\label{fig:imagenet-c}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Motion.png}}
\\
\vspace{-0.2in}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Frost.png}}
\\
\vspace{-0.2in}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Contrast.png}}
\\
\vspace{-0.2in}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/JPEG.png}}
\caption{Accuracy of Adv-$\ell_{2}$ on \texttt{CIFAR10-C} over various perturbation sizes. The $x$-axis means the perturbation size is $2^{x}/12$.}
\label{fig:adv_l2_r}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Motion.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Frost.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Contrast.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/JPEG.png}}
\vspace{-0.2in}
\caption{Accuracy of Adv-$\ell_{\infty}$ on \texttt{CIFAR10-C} over various perturbation sizes. The $x$-axis means the perturbation size is $2^{x}/255$.}
\label{fig:adv_linf_r}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Motion.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Frost.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Contrast.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/JPEG.png}}
\caption{Accuracy of Adv-$\ell_{2}$ on \texttt{CIFAR10-C} over various numbers of training samples.}
\label{fig:adv_l2_num}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Motion.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Frost.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Contrast.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/JPEG.png}}
\caption{Accuracy of Adv-$\ell_{\infty}$ on \texttt{CIFAR10-C} over various numbers of training samples.}
\label{fig:adv_linf_num}
\end{figure*}
\section{Introduction}
In the machine learning community, the training and test distributions are often not identically distributed. Because of this mismatch, it is desirable to learn a model that generalizes well on out-of-distribution (OOD) data even though it is trained on data from a single distribution. OOD generalization is empirically studied in \citep{hendrycks2019using,hendrycks2020many,hendrycks2020pretrained} by
evaluating the performance of the model on test sets that are close to the original training samples. However, the theoretical understanding of these empirical OOD generalization behaviors remains unclear.
\par
Intuitively, OOD generalization measures the performance of a model on data from a shifted distribution around the original training distribution \citep{hendrycks2018benchmarking}. This is equivalent to distributional robustness \citep{namkoong2019reliable,shapiro2017distributionally}, which measures the model's robustness to perturbations of the training distribution. Inspired by this, we study OOD generalization by using the Wasserstein distance to measure the shift between distributions (Definition \ref{def: model robustness}). We theoretically find that if a model is robust to input perturbations on training samples (namely, an input-robust model), it also generalizes well on OOD data.
\par
The connection between input-robustness and OOD generalization inspires us to seek an input-robust model, since it generalizes well on OOD data. Thus, we consider adversarial training (AT)~\citep{madry2018towards}, as \citet{athalye2018obfuscated} show that a model is input-robust if it can defend against adversarial perturbations \citep{szegedy2013intriguing}.
Mathematically, AT can be formulated as a minimax optimization problem and solved by the multi-step
SGD algorithm \citep{nouiehed2019solving}. Under mild assumptions, we prove that the convergence rate of this multi-step SGD for AT is
$\tilde{\mathcal{O}}(1/T)$
both in expectation and in high probability,
where $T$ is the number of training steps and $\tilde{\mathcal{O}}(\cdot)$ is defined in the paragraph of notations. Then, combining the convergence result with the relationship between input-robustness and OOD generalization, we theoretically show that for the model adversarially trained with $n$ training samples for $T$ steps, its excess risk on the OOD data is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$, which guarantees its performance on the OOD data.
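As an illustration of this minimax formulation, the multi-step SGD solver alternates $K$ projected gradient-ascent steps on the perturbation with one descent step on the parameters. The toy $\ell_{2}$-constrained objective below is an illustrative stand-in for the paper's setting; all names and constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(64, 3))   # toy inputs x_i (placeholder data)
w = np.ones(3)                    # model parameters

# Toy loss f(w, x) = 0.5 * (w . x)^2, with analytic gradients in w and in x.
def grad_w(w, x):
    return (w @ x) * x

def grad_x(w, x):
    return (w @ x) * w

r, eta_x, K, mu_w = 0.1, 0.025, 8, 10.0   # illustrative constants
for t in range(1, 201):
    x = data[rng.integers(len(data))]
    # Inner maximization: K projected gradient-ascent steps on delta.
    delta = np.zeros(3)
    for _ in range(K):
        delta = delta + eta_x * grad_x(w, x + delta)
        norm = np.linalg.norm(delta)
        if norm > r:
            delta *= r / norm     # project onto the l2 ball of radius r
    # Outer minimization: one SGD step with the eta_{w_t} = 1/(mu_w * t)
    # schedule used in the convergence analysis.
    w = w - (1.0 / (mu_w * t)) * grad_w(w, x + delta)

assert np.all(np.isfinite(w))
```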
\par
Besides models trained from scratch, we also study the OOD generalization of pre-trained models on downstream tasks, as the paradigm of first pre-training on a large-scale dataset and then fine-tuning on downstream tasks has recently achieved remarkable performance in both the computer vision (CV)~\citep{hendrycks2019using,kornblith2019better} and natural language processing (NLP) \citep{devlin2019bert} domains. Given the aforementioned relationship between input-robustness and OOD generalization, we theoretically show that a pre-trained model that is more robust to input perturbations also provides a better initialization for generalization on downstream OOD data. Thus, we suggest conducting adversarial pre-training as in \citep{salman2020adversarially,hendrycks2019using,utrera2020adversarially} to improve OOD generalization on downstream tasks.
\par
We conduct various experiments on both image classification (IC) and natural language understanding (NLU) tasks to verify our theoretical findings.
\par
For the IC task, we conduct AT on \texttt{CIFAR10} \citep{krizhevsky2009learning} and \texttt{ImageNet} \citep{deng2009imagenet}, and then evaluate the OOD generalization of these models on the corrupted OOD data \texttt{CIFAR10-C} and \texttt{ImageNet-C} \citep{hendrycks2018benchmarking}. For NLU tasks, we similarly conduct AT as in \citep{zhu2019freelb} on the datasets \texttt{SST-2}, \texttt{IMDB}, \texttt{MNLI} and \texttt{STS-B}.
Then we follow the strategy in \citep{hendrycks2020pretrained} to evaluate the OOD generalization.
Empirical results on both IC and NLU tasks verify that AT improves OOD generalization.
\par
To see the effect of the initialization provided by an input-robust pre-trained model, we adversarially pre-train a model on \texttt{ImageNet} to improve the input-robustness, and then fine-tune the pre-trained model on \texttt{CIFAR10}.
Empirical results show that this initialization enhances the OOD generalization on downstream tasks after fine-tuning.
Another interesting observation is that for language models, standard pre-training by masked language modeling \citep{devlin2019bert,liu2019roberta} improves the input-robustness of the model.
Besides, models pre-trained with more training samples and update steps
are more input-robust. This may also explain the better OOD generalization on downstream tasks~\citep{hendrycks2020pretrained} of these models.
\paragraph{Notations.}
For vector $\boldsymbol{x}\in\mathbb{R}^{d_{0}}$, $\|\boldsymbol{x}\|_{p}$ is its $\ell_{p}$-norm, and
its $\ell_{2}$-norm is simplified as $\|\boldsymbol{x}\|$.
$\mathcal{P}(\mathcal{X})$ is the set of probability measures on metric space $(\mathcal{X}, \|\cdot\|_{p})$ with $\mathcal{X} \subseteq \mathbb{R}^{d_{0}}$.
$\mathcal{O}(\cdot)$ denotes the order of a number, and $\tilde{\mathcal{O}}(\cdot)$ additionally hides a poly-logarithmic factor in problem parameters, e.g.,
$\mathcal{O}(M_1\log{d_{0}}) = \tilde{\mathcal{O}}(M_{1})$.
For $P, Q\in\mathcal{P}(\mathcal{X})$,
let $(P, Q)$ denote the set of their couplings (measures on $\mathcal{X}\times \mathcal{X}$ with marginals $P$ and $Q$).
The $p$-th ($p<\infty$) Wasserstein distance \citep{villani2008optimal} between $P$ and $Q$ is
\begin{equation}
\label{eq:w distance}
\small
\mathsf{W}_{p}(P, Q) = \left(\inf_{\pi\in(P, Q)}\mathbb{E}_{(\boldsymbol{u}, \boldsymbol{v})\sim \pi}\left[\|\boldsymbol{u} - \boldsymbol{v}\|^{p}_{p}\right]\right)^{\frac{1}{p}}.
\end{equation}
When $p=\infty$, the $\infty$-Wasserstein distance is $\mathsf{W}_{\infty}(P, Q) = \lim_{p\to\infty}\mathsf{W}_{p}(P, Q)$.
In the sequel, the $p$-Wasserstein distance is abbreviated as $\mathsf{W}_{p}$-distance.
The total variation distance \citep{villani2008optimal} is another distributional distance, defined as
\begin{equation}\label{eq:tv}
\small
\mathsf{TV}(P, Q) = \frac{1}{2}\int_{\mathcal{X}}\left|dP(\boldsymbol{x}) - dQ(\boldsymbol{x})\right|.
\end{equation}
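For intuition, both distances above are easy to compute in simple discrete cases. The sketch below is an illustrative NumPy implementation (the toy samples and the function names \texttt{wasserstein\_p\_1d} and \texttt{tv\_distance} are ours); it uses the fact that in one dimension the optimal coupling between two equal-size empirical measures pairs sorted samples.

```python
import numpy as np

def wasserstein_p_1d(u, v, p):
    """W_p distance between two equal-size 1-D empirical measures.
    In 1-D, the optimal coupling pairs sorted samples."""
    diffs = np.abs(np.sort(u) - np.sort(v))
    if np.isinf(p):
        return diffs.max()                      # W_inf as the limit of W_p
    return (np.mean(diffs ** p)) ** (1.0 / p)

def tv_distance(P, Q):
    """TV(P, Q) = 0.5 * sum_x |P(x) - Q(x)| for discrete P, Q."""
    return 0.5 * np.abs(np.asarray(P) - np.asarray(Q)).sum()

u = np.array([0.0, 1.0, 2.0])
v = np.array([0.5, 1.5, 2.5])                   # u shifted by 0.5
print(wasserstein_p_1d(u, v, 2))                # 0.5: a pure shift of the measure
print(tv_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))   # 0.1
```

Since every sorted pair differs by exactly $0.5$, the $\mathsf{W}_p$-distance equals $0.5$ for every $p$ here, matching the intuition that the Wasserstein distance of a translated measure is the translation length.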
\section{Related Work}
\paragraph{OOD Generalization.}
OOD generalization measures a model's ability to extrapolate beyond the training distribution~\citep{hendrycks2018benchmarking}, and
has been widely explored in both CV~\citep{recht2019imagenet,schneider2020improving,salman2020unadversarial} and NLP domains~\citep{tu2020empirical,lohn2020estimating}.
\citet{hendrycks2018benchmarking} observe that the naturally trained models are sensitive to artificially constructed OOD data. They also find that adversarial logit pairing \citep{kannan2018adversarial} can improve a model's performance on noisy corrupted OOD data.
\citet{hendrycks2020pretrained} also empirically find that pre-trained language models
generalize well on downstream OOD data.
But the theoretical understanding behind these observations remains unclear.
\paragraph{Adversarial Training.}
Adversarial training \citep{madry2018towards} was proposed to improve input-robustness
by dynamically constructing augmented adversarial samples \citep{szegedy2013intriguing,goodfellow2015explaning}
with projected gradient descent during training.
In this paper, we first show the close relationship between OOD generalization and distributional robustness \citep{ben2013robust,shapiro2017distributionally}, and then
explore the OOD generalization by
connecting input-robustness
and distributional robustness.
\par
The works most related to ours are
\citep{sinha2018certifying,lee2018minimax,volpi2018generalizing}.
They also use AT to train distributionally robust models under the Wasserstein distance, but their results are restricted to a specialized AT objective with an additional regularizer.
The regularizer can be impractical due to its large penalty parameter.
Moreover, their bounds are built upon the entropy integral and increase with model capacity, which can be meaningless for high-dimensional models.
On the other hand, our bound is
(i) based on the input-robustness, regardless of how it is obtained; and (ii) irrelevant to model capacity.
\paragraph{Pre-Training.}
Pre-trained models transfer knowledge from the pre-training stage to downstream tasks,
and are widely used in both CV \citep{kornblith2019better} and NLP \citep{devlin2019bert} domains.
For instance, \citet{dosovitskiy2020image,brown2020language,radford2021learning} pre-train the transformer-based models on large-scale datasets, and obtain remarkable results on downstream tasks.
Standard pre-training is empirically found to help
reduce the uncertainty of the model for both image data \citep{hendrycks2019using,hendrycks2020many}
and textual data~\citep{hendrycks2020pretrained}.
Adversarial pre-training is explored in \citep{hendrycks2019using} and \citep{salman2020adversarially}, and is shown to improve the robustness and the generalization on downstream tasks, respectively.
In this work, we theoretically analyze the OOD generalization on downstream tasks from the perspective of the input-robustness of the pre-trained model.
\section{Adversarial Training Improves OOD Generalization}
\label{sec:Learning Robust Model Results in Better OOD Generalization}
In this section, we first show that the input-robust model can generalize well on OOD data after specifying the definition of OOD generalization.
Then, to learn a robust model, we suggest adversarial training (AT) \citep{madry2018towards}.
Under mild conditions, we prove a $\tilde{\mathcal{O}}(1/T)$ convergence rate for AT both in expectation and in high probability.
With this, we show that the excess risk of an adversarially trained model on OOD data is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$ where $n$ is the number of training samples.
\subsection{Input-Robust Model Generalizes on OOD Data}
\label{sec:Robustness Corresponds with Better OOD Generalization}
Suppose
$\{(\boldsymbol{x}_{i}, y_{i})\}$ is the
training set with
$n$ i.i.d. training samples $\{\boldsymbol{x}_{i}\}$ and their labels $\{y_{i}\}$.
We assume the training sample distribution $P$
has compact support $\mathcal{X}\subseteq \mathbb{R}^{d_{0}}$, thus there exists $D > 0$, such that $\forall \boldsymbol{u}, \boldsymbol{v}\in\mathcal{X}$, $\|\boldsymbol{u} - \boldsymbol{v}\|_{1}\leq D$.
For training sample $\boldsymbol{x}$ and its label $y$,
the loss on $(\boldsymbol{x}, y)$ with model parameter $\text{\boldmath{$w$}}$ is $\mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))$, which is continuous and differentiable in both $\text{\boldmath{$w$}}$ and $(\boldsymbol{x}, y)$.
Besides, we assume $0 \leq \mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))\leq M$ for a constant $M$, without loss of generality.
We represent the expected risk under training distribution $P$ and label distribution $P_{y\mid \boldsymbol{x}}$
\footnote{$P_{y\mid \boldsymbol{x}_{i}}(\cdot)=\textbf{1}_{\{\cdot = y_{i}\}}$ where $\textbf{1}_{\{\cdot = y_{i}\}}$ is the indicator function.}
as $R_{P}(\text{\boldmath{$w$}}) = \mathbb{E}_{P}[\mathbb{E}_{P_{y\mid \boldsymbol{x}}}[\mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))]]$.
For simplicity of notation, let $\mathbb{E}_{P_{y\mid \boldsymbol{x}}}[\mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))] = f(\text{\boldmath{$w$}}, \boldsymbol{x})$ in the sequel,
e.g., $f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}) = \mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}_{i}, y_{i}))$.
\par
Intuitively, OOD generalization is determined by the performance of the model on a shifted distribution close to the training data-generating distribution $P_{0}$ \citep{hendrycks2018benchmarking,hendrycks2020pretrained}.
Thus, defining OOD generalization requires a distributional distance that measures the shift between distributions.
We use the Wasserstein distance, as in \citep{sinha2018certifying}.
\par
Let
$P_{n}(\cdot)=\frac{1}{n}\sum_{i=1}^{n}\textbf{1}_{\{\cdot = \boldsymbol{x}_{i}\}}$ be the empirical distribution, and $B_{\mathsf{W}_{p}}(P_{0}, r) = \{P: \mathsf{W}_{p}(P_{0}, P) \leq r\}$.
Then we define the OOD generalization error as
\begin{equation}
\label{eq:ood gen}
\small
\mathcal{E}_{\text{gen}}^{\text{ood}}(p, r) = \left|\sup_{P\in B_{\mathsf{W}_{p}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) - R_{P_{n}}(\text{\boldmath{$w$}})\right|,
\end{equation}
under the $\mathsf{W}_{p}$-distance with $p\in\{2, \infty\}$. Extension to other $p < \infty$ is straightforward by generalizing the analysis for $p=2$.
Note that \eqref{eq:ood gen} reduces to the generalization error on in-distribution data when $r=0$.
\begin{definition}\label{def: model robustness}
A model is $(r, \epsilon, P, p)$-input-robust, if
\begin{equation}
\small
\mathbb{E}_{P}\left[\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}|f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x})|\right] \leq \epsilon.
\end{equation}
\end{definition}
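Definition \ref{def: model robustness} can be estimated empirically: the inner supremum is approximated by projected gradient ascent on the input, and the expectation by an average over training samples. The NumPy sketch below is an illustration under our own assumptions (a toy sigmoid loss, $p=2$, and the name \texttt{robustness\_eps} are ours); the ascent only yields a lower bound on the true supremum.

```python
import numpy as np

def f(w, x):
    """Toy loss f(w, x) = sigmoid(w . x); illustration only."""
    return 1.0 / (1.0 + np.exp(-w @ x))

def grad_x(w, x):
    s = f(w, x)
    return s * (1.0 - s) * w

def robustness_eps(w, xs, r, steps=50, lr=0.1):
    """Estimate E_{P_n}[sup_{||d||_2 <= r} |f(w, x+d) - f(w, x)|]
    by projected gradient ascent; a lower bound on the true sup."""
    total = 0.0
    for x in xs:
        d = np.zeros_like(x)
        for _ in range(steps):
            d = d + lr * grad_x(w, x + d)   # ascend on f
            n = np.linalg.norm(d)
            if n > r:
                d *= r / n                  # project onto B_2(0, r)
        total += abs(f(w, x + d) - f(w, x))
    return total / len(xs)

w = np.array([1.0, -1.0])
xs = [np.array([0.5, 0.2]), np.array([-0.3, 0.4])]
print(robustness_eps(w, xs, r=0.05) <= robustness_eps(w, xs, r=0.5))  # True
```

A smaller estimate at a fixed radius $r$ corresponds to a more input-robust model (a smaller $\epsilon$ in the definition).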
With the input-robustness in Definition~\ref{def: model robustness},
the following Theorems~\ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} give the generalization bounds on the OOD data drawn from $Q\in B_{\mathsf{W}_{p}}(P_{0}, r_{0})$ with $p\in\{2, \infty\}$.
\begin{theorem}\label{thm:ood generalization upper bound}
If a model is $(2r, \epsilon, P_{n}, \infty)$-input-robust, then with probability at least $1 - \theta$,
\begin{equation}
\label{eq:ood bound linf}
\small
\begin{aligned}
\mathcal{E}_{\text{\emph{gen}}}^{\text{\emph{ood}}}(\infty, r_{0})
\leq \epsilon + M\sqrt{\frac{(2d_{0})^{\frac{2D}{r^2} + 1}\log{2} + 2\log{(\frac{1}{\theta}})}{n}},
\end{aligned}
\end{equation}
for any $r_{0}\leq r$. Here $D$ is the $\ell_{1}$-diameter of data support $\mathcal{X}$ with dimension $d_{0}$, and $M$ is an upper bound of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$.
\end{theorem}
\begin{theorem}
\label{thm:ood generalization upper bound l2}
If a model is $(2r/\epsilon, \epsilon, P_{n}, 2)$-input-robust, then with probability at least $1 - \theta$,
\begin{equation}
\label{eq:ood bound l2}
\small
\begin{aligned}
\!\!\!\!\! \mathcal{E}_{\text{\emph{gen}}}^{\text{\emph{ood}}}(2, r_{0})
\!\leq \! (M\!+\!1)\epsilon \! + \! M\sqrt{\frac{(2d_{0})^{\frac{2\epsilon^{2}D}{r^2}\! + \! 1}\log{2} \!+\! 2\log{(\frac{1}{\theta})}}{n}},
\end{aligned}
\end{equation}
for any $r_{0}\leq r$, where the notations follow Theorem \ref{thm:ood generalization upper bound}.
\end{theorem}
\begin{remark}
When $r_{0}=0$, the bounds in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} become the generalization bounds on in-distribution data.
\end{remark}
\par
\begin{remark}
The $\epsilon$ in Theorem \ref{thm:ood generalization upper bound l2} cannot be infinitely small, as the model is required to be robust in $B(\boldsymbol{x}_{i}, 2r/\epsilon)$ for each $\boldsymbol{x}_{i}$. Specifically, when $\epsilon \to 0$, the robust region $B(\boldsymbol{x}_{i}, 2r/\epsilon)$ covers the data support $\mathcal{X}$, so the model has an almost constant output on $\mathcal{X}$.
\end{remark}
\begin{remark}
The bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} become vacuous when $r$ is large. Thus, our results cannot be applied to OOD data from distributions far away from the original training distribution. For example, ImageNet-R \citep{hendrycks2020many} consists of data from different renditions, e.g., photo vs.\ cartoon, where most pixels vary, leading to a large $\|\boldsymbol{u} - \boldsymbol{v}\|_{p}^{p}$ in \eqref{eq:w distance}, and thus a large distributional distance.
\end{remark}
The proofs of Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} are in Appendix \ref{app:proof in Robustness Corresponds with Better OOD Generalization}.
Lemmas \ref{lem:equivalence} and \ref{lem:optimal} in Appendix \ref{app:proof in Learning Robust Model Results in Better OOD Generalization} show that the OOD data concentrate around the in-distribution data with high probability. Thus, the robustness of the model on training samples guarantees its generalization on OOD data.
The observations from Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} are summarized as follows.
\begin{enumerate}
\item
The right-hand sides of bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} imply that a more input-robust model (i.e., one with a larger $r$ and a smaller $\epsilon$ in Definition \ref{def: model robustness}) has a smaller OOD generalization bound, and thus better performance on OOD data.
\item For both \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2},
a larger number of training samples $n$ results in smaller upper bounds. This indicates that in a high-dimensional data regime with a large feature dimension $d_{0}$ of data and diameter $D$ of data support, more training samples can compensate for generalization degradation caused by large $d_{0}$ and $D$.
\item The bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} are independent of the model capacity.
Compared with other uniform convergence generalization bounds which increase with the model capacity (e.g., Rademacher complexity \citep{yin2019rademacher} or entropy integral \citep{sinha2018certifying}), our bounds are superior for models with high capacity.
\end{enumerate}
\subsection{Adversarial Training Improves Input-Robustness}\label{sec:robust training}
\begin{algorithm}[t!]
\caption{Multi-Step SGD.}
\label{alg:sgd}
\textbf{Input:} Number of training steps $T$, learning rate for model parameters $\eta_{\text{\boldmath{$w$}}_{t}}$ and adversarial input $\eta_{\boldsymbol{x}}$, two initialization points $\text{\boldmath{$w$}}_{1}, \boldsymbol{\delta}_{1}$, constant $p\in\{2, \infty\}$ and perturbation size $r$.\\
\textbf{Return} $\text{\boldmath{$w$}}_{T + 1}$.
\begin{algorithmic}[1]
\FOR {$t=1, \cdots, T$}
\STATE {Uniformly sample $i_{t}$ from $\{1,\cdots, n\}$.}
\FOR {$k=1, \cdots, K$}
\STATE{$\boldsymbol{\delta}_{k + 1} = \text{Proj}_{B_{p}(\textbf{0}, r)}\left(\boldsymbol{\delta}_{k} \!+\! \eta_{\boldsymbol{x}}\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} \!+\! \boldsymbol{\delta}_{k})\right)$.}
\ENDFOR
\STATE {$\text{\boldmath{$w$}}_{t + 1} = \text{\boldmath{$w$}}_{t} - \eta_{\text{\boldmath{$w$}}_{t}}\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K + 1})$.}
\ENDFOR
\end{algorithmic}
\end{algorithm}
As is justified in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}, the input-robust model can generalize on OOD data.
Thus we consider
training an input-robust model with the following objective
\begin{equation}
\small
\begin{aligned}
& \min_{\text{\boldmath{$w$}}}\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p) = \min_{\text{\boldmath{$w$}}}\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r(p)}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})\\
& \!=\! \min_{\text{\boldmath{$w$}}}\frac{1}{n}\sum\limits_{i=1}^{n}[\underbrace{f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})}_{\text{clean acc}} \!+\! \sup_{\|\boldsymbol{\delta}\|_{p}\leq r(p)}\underbrace{(f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} \!+\! \boldsymbol{\delta}) \!-\! f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}))}_{\text{input-robustness}}], \label{eq:objective}
\end{aligned}
\end{equation}
which is from AT~\citep{madry2018towards}, and can be decomposed into the clean accuracy term and the input-robustness term.
We consider $p\in\{2, \infty\}$ as in Section \ref{sec:Robustness Corresponds with Better OOD Generalization}, with $r(2) \!=\! 2r/\epsilon_{0}, r(\infty) \!=\! 2r$ for any given small constant $\epsilon_{0}$.
\par
Besides the general assumptions in Section~\ref{sec:Robustness Corresponds with Better OOD Generalization}, we also use the following mild assumptions in this subsection.
\begin{assumption}
\label{ass:Lip continuous}
The loss $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ satisfies the following Lipschitz smoothness conditions
\begin{equation}
\small
\begin{aligned}
\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{1}, \boldsymbol{x}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{2}, \boldsymbol{x})\| & \leq L_{11}\|\text{\boldmath{$w$}}_{1} - \text{\boldmath{$w$}}_{2}\|, \\
\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{1}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{2})\| & \leq L_{12}\|\boldsymbol{x}_{1} - \boldsymbol{x}_{2}\|, \\
\|\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{1}, \boldsymbol{x}) - \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{2}, \boldsymbol{x})\| & \leq L_{21}\|\text{\boldmath{$w$}}_{1} - \text{\boldmath{$w$}}_{2}\|, \\
\|\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{1}) - \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{2})\| & \leq L_{22}\|\boldsymbol{x}_{1} - \boldsymbol{x}_{2}\|.
\end{aligned}
\end{equation}
\end{assumption}
\begin{assumption}
\label{ass:grad_bound}
$\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x})\|$ is upper bounded by $G$.
\end{assumption}
\begin{assumption}
\label{ass:PL inequality}
For $p\in\{2, \infty\}$, $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)$ in \eqref{eq:objective} satisfies the PL-inequality:
\begin{equation}
\small
\!\!\frac{1}{2}\|\nabla_{\text{\boldmath{$w$}}} \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)\|^{2} \!\geq\! \mu_{\text{\boldmath{$w$}}}\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p) \!-\! \inf_{\text{\boldmath{$w$}}}\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)\right).
\end{equation}
For any $\text{\boldmath{$w$}}$ and training sample $\boldsymbol{x}_{i}$, $f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})$ is $\mu_{\boldsymbol{x}_{i}}$-strongly concave in $\boldsymbol{\delta}$ for $\|\boldsymbol{\delta}\|_{p} \leq r(p)$:
\begin{equation}
\small
f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}},\boldsymbol{x}_{i}) \leq \langle \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}), \boldsymbol{\delta}\rangle - \frac{\mu_{\boldsymbol{x}_{i}}}{2}\|\boldsymbol{\delta}\|^{2},
\end{equation}
where $\mu_{\text{\boldmath{$w$}}}$ and $\mu_{\boldsymbol{x}_{i}}$ are constants.
\end{assumption}
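As a sanity check of the PL-inequality in Assumption \ref{ass:PL inequality}, it can be verified in closed form for a toy quadratic objective (our own example, not the paper's adversarial objective $\tilde{R}_{P_{n}}$): for $R(\text{\boldmath{$w$}}) = a\|\text{\boldmath{$w$}} - \boldsymbol{c}\|^{2}$ with $a>0$, we have $\frac{1}{2}\|\nabla R(\text{\boldmath{$w$}})\|^{2} = 2a^{2}\|\text{\boldmath{$w$}} - \boldsymbol{c}\|^{2} = 2a\,(R(\text{\boldmath{$w$}}) - \inf R)$, so PL holds with $\mu_{\text{\boldmath{$w$}}} = 2a$.

```python
import numpy as np

# Toy objective R(w) = a * ||w - c||^2 (a > 0), with inf_w R = 0.
# Then 0.5 * ||grad R(w)||^2 = 2 a^2 ||w - c||^2 = 2a * (R(w) - inf R),
# so the PL-inequality holds exactly with mu_w = 2a.
a = 1.5
c = np.array([1.0, -2.0])
R = lambda w: a * np.sum((w - c) ** 2)
grad_R = lambda w: 2.0 * a * (w - c)
mu_w = 2.0 * a

w = np.array([3.0, 0.5])
lhs = 0.5 * np.sum(grad_R(w) ** 2)
rhs = mu_w * (R(w) - 0.0)
print(lhs >= rhs - 1e-9)  # True (in fact equality for this quadratic)
```

For a quadratic the inequality is tight; for over-parameterized networks it holds only approximately, as the cited works show.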
Assumptions \ref{ass:Lip continuous} and \ref{ass:grad_bound} are widely used in minimax optimization problems~\citep{nouiehed2019solving,sinha2018certifying}.
The PL-inequality in Assumption \ref{ass:PL inequality} means that although the objective may be non-convex in $\text{\boldmath{$w$}}$, all its stationary points are global minima.
This has recently been observed or proved for over-parameterized neural networks \citep{xie2017diversity,du2019gradient,allen2019convergence,liu2020toward}.
The local strong concavity in Assumption \ref{ass:PL inequality} is reasonable when the perturbation size $\|\boldsymbol{\delta}\|_{p}$ is small.
\par
To solve the minimax optimization problem \eqref{eq:objective}, we consider the multi-step stochastic gradient descent (SGD) in Algorithm \ref{alg:sgd}~\citep{nouiehed2019solving}. $\text{Proj}_{A}(\cdot)$ in Algorithm \ref{alg:sgd} is the $\ell_{2}$-projection operator onto
$A$.
Note that the update rule of $\boldsymbol{\delta}_{k}$ in Algorithm \ref{alg:sgd} differs from that in PGD adversarial training \citep{madry2018towards}, where $\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{k})$ in Line 4 is replaced with its sign.
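The loop structure of Algorithm \ref{alg:sgd} can be sketched in runnable NumPy on a toy one-dimensional regression loss; all concrete choices below (the loss $(wx-y)^2$, the constants, and the random data) are our illustrative assumptions, and this toy inner problem is convex rather than strongly concave in $\boldsymbol{\delta}$, so the ascent simply pushes the perturbation to the ball's boundary.

```python
import numpy as np

def proj(d, r, p):
    """l2-projection of d onto the ball B_p(0, r), p in {2, inf}."""
    if p == np.inf:
        return np.clip(d, -r, r)
    n = np.linalg.norm(d)
    return d if n <= r else d * (r / n)

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=200)
ys = 2.0 * xs                                   # ground-truth parameter is 2
w, r, K, T, eta_x = 0.0, 0.1, 5, 1000, 0.5

for t in range(1, T + 1):
    i = rng.integers(len(xs))                   # sample i_t
    d = np.zeros(1)
    for _ in range(K):                          # inner maximization over delta
        g_x = 2.0 * (w * (xs[i] + d[0]) - ys[i]) * w
        d = proj(d + eta_x * g_x, r, 2)
    g_w = 2.0 * (w * (xs[i] + d[0]) - ys[i]) * (xs[i] + d[0])
    w -= g_w / t                                # eta_w_t = 1/(mu_w t) with mu_w = 1
print(round(w, 2))                              # typically close to 2
```

Replacing the inner gradient with its sign (and using $p=\infty$ in \texttt{proj}) would recover the PGD-style update mentioned above.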
\par
The following theorem gives the convergence rate of Algorithm \ref{alg:sgd} both in expectation and in high probability.
\begin{theorem}
\label{thm:convergence}
Let $\text{\boldmath{$w$}}_{t}$ be updated by
Algorithm \ref{alg:sgd},
$p\!\in\!\{2,\infty\}$, $\eta_{\text{\boldmath{$w$}}_{t}}\!=\!\frac{1}{\mu_{\text{\boldmath{$w$}}}t}$, $\eta_{\boldsymbol{x}}\!=\!\frac{1}{L_{22}}$,
$K\geq \frac{L_{22}}{\mu_{\boldsymbol{x}}}\log{\left(\frac{8T\mu_{\text{\boldmath{$w$}}}d_{0}r^{2}(p)}{GL}\right)}$, where $\mu_{\boldsymbol{x}} = \min_{1\leq i \leq n}\mu_{\boldsymbol{x}_{i}}$ and $L = L_{11} + \frac{L_{12}L_{21}}{\mu_{\boldsymbol{x}}}$.
Under Assumptions \ref{ass:Lip continuous}, \ref{ass:grad_bound}, and \ref{ass:PL inequality},
we have
\begin{equation}
\small
\mathbb{E}[\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{T + 1}, p)] - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p) \leq \frac{G^{2}L}{T\mu^{2}_{\text{\boldmath{$w$}}}},
\end{equation}
and with probability at least $1 - \theta$,
\begin{equation}\label{eq:convergence in probability}
\small
\begin{aligned}
\tilde{R}_{P_{n}}&(\text{\boldmath{$w$}}_{T + 1}, p) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p) \\
& \leq \frac{G^{2}\log{(\log{(T/\theta)})}(64L + 16\mu_{\text{\boldmath{$w$}}}) + G^{2}L}{T\mu_{\text{\boldmath{$w$}}}^{2}},
\end{aligned}
\end{equation}
for $0< \theta < 1/e$, $T\geq 4$, with $\text{\boldmath{$w$}}^{*} \in\arg\min_{\text{\boldmath{$w$}}}\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)$.
\end{theorem}
This theorem shows that Algorithm \ref{alg:sgd} is able to find the global minimum of the adversarial objective \eqref{eq:objective} both in expectation and in high probability.
Specifically, the convergence rate of Algorithm~\ref{alg:sgd} is $\mathcal{O}(1/\lceil T/K\rceil) = \mathcal{O}(K/T) = \tilde{\mathcal{O}}(1/T)$, since the number of inner loop steps $K$ is $\mathcal{O}(\log{(Td_0r(p)^2)})$, which increases with the feature dimension of input data $d_{0}$ and the size of perturbation $r$. The proof of Theorem \ref{thm:convergence} is in Appendix \ref{app:proof in robust training}.
\par
The following Proposition~\ref{pro:robustness} (proof is in Appendix \ref{app:proof of proposition robustness}) shows that the model trained by Algorithm \ref{alg:sgd} has a small error on clean training samples, and satisfies the
condition of input-robustness
in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}.
\begin{proposition}
\label{pro:robustness}
If $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p) \leq \epsilon$ for model parameter $\text{\boldmath{$w$}}$ and a constant $\epsilon$, then $R_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon$, and $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ is $(r(p), 2\epsilon, P_{n}, p)$-input-robust.
\end{proposition}
\par
According to Theorem \ref{thm:convergence} and Proposition \ref{pro:robustness},
after $T$ training steps of Algorithm \ref{alg:sgd}, we can obtain an $(r(p), \tilde{\mathcal{O}}(1/T), P_{n}, p)$-input-robust model when $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p)$ is close to zero.
Thus, combining Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}, we get the following corollary which shows that the adversarially trained model generalizes on OOD data.
\begin{corollary}
\label{cor:excess risk}
For $p\in\{2, \infty\}$, with the same notations as in
Theorems \ref{thm:ood generalization upper bound} and \ref{thm:convergence},
if $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p)\leq \epsilon_{0}$, then with probability at least $1 - \theta$,
\begin{equation*}\label{eq:excess risk bound l2}
\small
\begin{aligned}
& \sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r/\epsilon_{0})} R_{P}(\text{\boldmath{$w$}}_{T + 1}, 2) \leq (2M + 3)\epsilon_{0} \\
& + (2M\! + \!3)\!\left(\frac{G^{2}\log{(\log{(2T/\theta)})}(64L + 16\mu_{\text{\boldmath{$w$}}}) + G^{2}L}{T\mu_{\text{\boldmath{$w$}}}^{2}}\right)\\
& + \!M\sqrt{\frac{(2d_{0})^{\frac{2\epsilon_{0}^{2}D}{r^{2}} \!+\! 1}\log{2} \!+\! 2\log{(2/\theta)}}{n}},
\end{aligned}
\end{equation*}
and
\begin{equation*}\label{eq:excess risk bound linf}
\small
\begin{aligned}
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)} & R_{P}(\text{\boldmath{$w$}}_{T + 1}, \infty) \leq 3\epsilon_{0} \\
& + \frac{G^{2}\log{(\log{(2T/\theta)})}(192L + 48\mu_{\text{\boldmath{$w$}}}) + 3G^{2}L}{T\mu_{\text{\boldmath{$w$}}}^{2}} \\
& + M\sqrt{\frac{2d_{0}^{\frac{2D}{r^{2}} + 1}\log{2} + 2\log{(2/\theta)}}{n}},
\end{aligned}
\end{equation*}
for any $0 < \theta < 1/e$ and $T\geq 4$.
\end{corollary}
\par
This corollary is obtained directly by combining Theorems \ref{thm:ood generalization upper bound}, \ref{thm:ood generalization upper bound l2}, \ref{thm:convergence}, and Proposition \ref{pro:robustness}.
It shows that the excess risk (i.e., the left-hand sides of the two inequalities above) of the adversarially trained model on OOD data is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$ after $T$ steps.
The dependence of the bounds on quantities such as the input data dimension $d_{0}$ and the $\ell_{1}$-diameter $D$ of the data support $\mathcal{X}$ is inherited from the OOD generalization bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2}, and from the convergence rate \eqref{eq:convergence in probability}.
\section{Robust Pre-Trained Model has Better Initialization on Downstream Tasks}\label{sec:pretrain improves ood}
The paradigm of ``first pre-train and then fine-tune'' has been widely explored recently \citep{radford2021learning,hendrycks2020pretrained}.
In this section, we theoretically show that the input-robust pre-trained model provides an initialization that generalizes on downstream OOD data.
\par
Assume the $m$ i.i.d. samples $\{\boldsymbol{z}_{i}\}$ in the pre-training stage are from distribution $Q_{0}$.
For a small constant $\epsilon_{\text{pre}}$ and given $r(2) = r/\epsilon_{\text{pre}}, r(\infty) = r$,
the following Theorems \ref{thm:pretrain generalize} and \ref{thm:pretrain generalize l2}
show that the pre-trained model with a small excess risk on OOD data in the pre-training stage also generalizes on downstream OOD data. The proofs are in Appendix \ref{app:proof of theorem pretrain generalize}.
\begin{theorem}
\label{thm:pretrain generalize}
If $\sup_{Q\in B_{\mathsf{W}_{\infty}}(Q_{0}, r(\infty))}R_{Q}(\text{\boldmath{$w$}}_{\emph{\text{pre}}})\leq \epsilon_{\emph{\text{pre}}}$, then
\begin{equation}\label{eq:initialized error linf}
\small
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r(\infty))}R_{P}(\text{\boldmath{$w$}}_{\emph{\text{pre}}}) \leq \epsilon_{\emph{\text{pre}}} + 2M\mathsf{TV}(P_{0}, Q_{0}),
\end{equation}
and with probability at least $1 - \theta$,
\begin{equation}
\small
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\emph{\text{pre}}}, \infty) \leq \epsilon_{\emph{\text{pre}}}\! + \!2M\mathsf{TV}(P_{0}, Q_{0}) + M\sqrt{\frac{\log{(1/\theta)}}{2n}}.
\end{equation}
\end{theorem}
\begin{theorem}
\label{thm:pretrain generalize l2}
If $\sup_{Q\in B_{\mathsf{W}_{2}}(Q_{0}, r_{0})}R_{Q}(\text{\boldmath{$w$}}_{\emph{\text{pre}}})\leq \epsilon_{\emph{\text{pre}}}$ with $r_{0}= \sqrt{2D^{2}\mathsf{TV}(P_{0}, Q_{0}) + r(2)^{2}}$, then
\begin{equation}\label{eq:initialized error l2}
\small
\sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r(2))}R_{P}(\text{\boldmath{$w$}}_{\emph{\text{pre}}}) \leq \epsilon_{\emph{\text{pre}}} + 2M\mathsf{TV}(P_{0}, Q_{0}).
\end{equation}
\end{theorem}
\par
\begin{remark}
Self-supervised pre-training (e.g., masked language modeling in BERT \citep{devlin2019bert}) can also be covered by the formulation of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ in Section \ref{sec:Robustness Corresponds with Better OOD Generalization},
if we take the label distribution $P_{y \mid \boldsymbol{x}}$ to be that of the artificially constructed labels (e.g., masked tokens in BERT).
\end{remark}
When we implement fine-tuning on downstream tasks, the model is initialized by $\text{\boldmath{$w$}}_{\text{pre}}$.
Combining the results in
Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}
(an input-robust model has small OOD generalization error)
with Theorems \ref{thm:pretrain generalize} and \ref{thm:pretrain generalize l2},
we conclude that an input-robust pre-trained model
has a small excess risk on the OOD data in the pre-training stage, and thus
generalizes on the OOD data of downstream tasks. Specifically, \eqref{eq:initialized error linf} and \eqref{eq:initialized error l2} show that
the initial OOD excess risk in the fine-tuning stage, $\sup_{P\in B_{\mathsf{W}_{p}}(P_{0}, r(p))}R_{P}(\text{\boldmath{$w$}}_{\text{pre}})$, is determined by the terminal OOD excess risk in the pre-training stage, $\sup_{Q\in B_{\mathsf{W}_{p}}(Q_{0}, r(p))}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})$, and the total variation distance $\mathsf{TV}(P_{0}, Q_{0})$.
The intuition is that if $\text{\boldmath{$w$}}_{\text{pre}}$ generalizes well on distributions around $Q_{0}$, and $P_{0}$ is close to $Q_{0}$ under the total variation distance, then $\text{\boldmath{$w$}}_{\text{pre}}$ generalizes on downstream OOD data.
\par
To satisfy the condition $\sup_{Q\in B_{\mathsf{W}_{p}}(Q_{0}, r(p))}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})\leq \epsilon_{\text{pre}}$ in Theorems \ref{thm:pretrain generalize} and \ref{thm:pretrain generalize l2}, we can use adversarial pre-training.
Corollary \ref{cor:excess risk} implies $\epsilon_{\text{pre}}=\mathcal{O}(1/\sqrt{m})$ after sufficient adversarial pre-training. Thus, a large number of training samples $m$ in the adversarial pre-training stage improves the OOD generalization on downstream tasks, as $\epsilon_{\text{pre}}=\mathcal{O}(1/\sqrt{m})$ appears in the bounds \eqref{eq:initialized error linf} and \eqref{eq:initialized error l2}.
\par
\citet{radford2021learning,hendrycks2020pretrained} empirically verify that a standardly pre-trained model also generalizes well on downstream OOD data. It has been shown that sufficient standard training with gradient-based algorithms can also find the most input-robust model under some mild conditions \citep{soudry2018implicit,lyu2019gradient}. Thus, $\sup_{Q\in B_{\mathsf{W}_{\infty}}(Q_{0}, r(p))}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})\leq \epsilon_{\text{pre}}$ can hold even for a standardly pre-trained model. However, standard training converges to the most input-robust model much more slowly than AT, e.g., for the
linear model \citep{soudry2018implicit,li2019inductive}.
Hence, to efficiently learn an input-robust model in the pre-training stage, we suggest adversarial pre-training.
\section{Experiments}
\begin{table*}[t!]
\caption{Clean and corruption accuracy (\%) of ResNet34 on \texttt{CIFAR10-C} and \texttt{ImageNet-C} using standard training and adversarial training under both $\ell_{2}$-norm and $\ell_{\infty}$-norm.}
\label{tbl:adversarial training on image}
\centering
\scalebox{0.645}{
{
\begin{tabular}{l|c|c|ccc|cccc|cccc|cccc|c}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multirow{2}{*}{Clean} & \multicolumn{3}{c|}{Noise} & \multicolumn{4}{c|}{Blur} & \multicolumn{4}{c|}{Weather} & \multicolumn{4}{|c|}{Digital} & \multirow{2}{*}{Avg.} \\
& & & Gauss & Shot & Impulse & Defocus & Glass & Motion & Zoom & Snow & Frost & Fog & Bright & Contrast & Elastic & Pixel & JPEG & \\ \hline
\multirow{3}{*}{\texttt{CIFAR10-C}} & Std & 94.82 & 34.75 & 40.43 & 25.45 & 59.85 & 48.95 & 67.58 & 63.85 & 73.31 & 62.87 & \textbf{67.03} & \textbf{90.69} & \textbf{36.83} & 76.00 & 42.89 & 75.84 & 57.75 \\
& Adv-$\ell_{2}$ & 94.93 & 70.39 & 74.24 & 45.17 & 72.77 & 71.34 & 73.51 & 80.26 & 83.28 & 81.36 & 51.08 & 89.37 & 19.49 & 83.39 & 79.78 & \textbf{89.52} & 71.00 \\
& Adv-$\ell_{\infty}$ & 93.48 & \textbf{80.18} & \textbf{80.80} & \textbf{62.73} & \textbf{77.71} & \textbf{77.10} & \textbf{75.46} & \textbf{82.47} & \textbf{83.45} & \textbf{82.32} & 41.00 & 88.15 & 16.10 & \textbf{83.82} & \textbf{85.98} & 89.36 & \textbf{73.78} \\ \hline
\multirow{3}{*}{\texttt{ImageNet-C}} & Std & 74.01 & 18.97 & 18.39 & 12.98 & 6.32 & 9.76 & 11.49 & 9.37 & 8.78 & 12.98 & 6.21 & 33.74 & 4.31 & 18.29 & 23.91 & 29.08 & 14.97 \\
& Adv-$\ell_{2}$ & 73.66 & \textbf{30.13} & \textbf{28.93} & \textbf{25.05} & \textbf{32.91} & 25.61 & \textbf{34.50} & 32.84 & \textbf{27.39} & \textbf{33.82} & \textbf{36.52} & \textbf{62.18} & \textbf{31.73} & 42.91 & 47.86 & 51.55 & \textbf{36.26} \\
& Adv-$\ell_{\infty}$ & 68.36 & 25.94 & 25.61 & 21.17 & 24.56 & \textbf{32.81} & 32.20 & \textbf{34.57} & 26.70 & 33.47 & 11.22 & 56.07 & 12.34 & \textbf{47.67} & \textbf{57.32} & \textbf{59.10} & 33.38 \\ \hline
\end{tabular}}}
\end{table*}
\subsection{Adversarial Training Improves OOD Generalization}\label{sec:at improves ood}
In this section, we verify our conclusion in Section \ref{sec:Learning Robust Model Results in Better OOD Generalization} that OOD generalization can be improved by AT (Corollary \ref{cor:excess risk}).
\subsubsection{Experiments on Image Classification}
\label{sec:Experiments on Image Classification}
\paragraph{Data.} We use the following benchmark datasets.
\begin{itemize}
\item \texttt{CIFAR10} \citep{krizhevsky2009learning} has 50,000 color training images from 10 object classes. \texttt{CIFAR10-C} simulates OOD color images with 15 types of common visual corruptions and serves as a benchmark for the OOD generalization of models trained on \texttt{CIFAR10}. Each type of corruption has five severity levels, each with 10,000 validation samples. The 15 corruption types are divided into 4 groups: Noise, Blur, Weather, and Digital.
\item \texttt{ImageNet} \citep{deng2009imagenet} contains over 1 million color training images from 1,000 categories. Similar to \texttt{CIFAR10-C}, \texttt{ImageNet-C} serves as an OOD benchmark with 15 types of corruptions. Each type of corruption has five severity levels, each with 50,000 validation samples. A visualization of \texttt{ImageNet-C} is in Figure \ref{fig:imagenet-c} in Appendix.
\end{itemize}
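The per-corruption evaluation protocol above can be sketched as follows (a hypothetical sketch: \texttt{toy\_loader} and \texttt{model\_acc} are stand-ins for the real \texttt{CIFAR10-C} data pipeline and the trained ResNet34, and only a few corruption names are listed):

```python
import numpy as np

# Sketch of the per-corruption evaluation protocol on CIFAR10-C.
# `toy_loader` and `model_acc` are hypothetical stand-ins: a real run
# would load the severity-5 splits and evaluate the trained ResNet34.
CORRUPTIONS = ["gaussian_noise", "shot_noise", "impulse_noise", "defocus_blur"]

def model_acc(images, labels):
    # placeholder forward pass: random predictions over the 10 classes
    preds = np.random.default_rng(0).integers(0, 10, size=len(labels))
    return float((preds == labels).mean())

def eval_corruptions(loader, severity=5):
    """Return {corruption type: accuracy} at the given severity level."""
    return {name: model_acc(*loader(name, severity)) for name in CORRUPTIONS}

def toy_loader(name, severity):
    rng = np.random.default_rng(abs(hash(name)) % (2**32))
    return rng.normal(size=(100, 32, 32, 3)), rng.integers(0, 10, size=100)

accs = eval_corruptions(toy_loader)
avg_acc = sum(accs.values()) / len(accs)   # the "Avg." column in Table 1
```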
\paragraph{Setup.}
The model used in this subsection is ResNet34 \citep{he2016deep}. To verify that adversarial training helps improve OOD performance, we conduct Algorithm \ref{alg:sgd} on \texttt{CIFAR10}, \texttt{ImageNet} and evaluate the model on \texttt{CIFAR10-C} and \texttt{ImageNet-C}, respectively. The number of inner loop steps $K$ is 8 for \texttt{CIFAR10}, and 3 for \texttt{ImageNet}. The models are trained by SGD with momentum.
The number of training epochs is 200 for \texttt{CIFAR10}, and 100 for \texttt{ImageNet}.
The learning rate starts from 0.1 and decays by a factor 0.2 at epochs 60, 120, 160 (resp. 30, 60, 90) for \texttt{CIFAR10} (resp. \texttt{ImageNet}).
Detailed hyperparameters are in Appendix \ref{app:hyp on adv}.
\par
We compare adversarial training under $\ell_{2}$- and $\ell_{\infty}$-norm (respectively abbreviated as ``Adv-$\ell_{2}$'' and ``Adv-$\ell_{\infty}$'') against standard training (abbreviated as ``Std'').
For Adv-$\ell_{\infty}$, we replace $\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{k})$ in Line 4 of Algorithm~\ref{alg:sgd} with its sign, as in \citep{madry2018towards},
in order to find stronger adversarial perturbations~\citep{goodfellow2015explaning}.
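The two inner loops can be sketched as follows (a minimal numpy sketch on a toy loss; the function names are hypothetical and not part of Algorithm~\ref{alg:sgd} itself): the $\ell_{\infty}$ variant steps in the direction of the gradient sign, the $\ell_{2}$ variant in the direction of the normalized gradient, each followed by projection back onto the corresponding ball.

```python
import numpy as np

def pgd_perturb(grad_fn, x, r, steps, step_size, norm="linf"):
    """K-step PGD sketch: ascend the loss around x within an r-ball.

    `grad_fn(x)` returns the input gradient of the loss. For norm="linf"
    the update uses the sign of the gradient (as in Madry et al.); for
    norm="l2" it uses the normalized gradient. Each step projects delta
    back onto the corresponding ball.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)
        if norm == "linf":
            delta = np.clip(delta + step_size * np.sign(g), -r, r)
        else:
            delta = delta + step_size * g / (np.linalg.norm(g) + 1e-12)
            n = np.linalg.norm(delta)
            if n > r:                      # project onto the l2 ball
                delta *= r / n
    return x + delta

# toy loss f(x) = 0.5 ||x||^2 has input gradient x, so PGD pushes x outward
x0 = np.array([0.3, -0.2])
x_adv = pgd_perturb(lambda z: z, x0, r=0.1, steps=8, step_size=0.05)
```

On this toy loss the perturbation saturates the $\ell_{\infty}$-ball and the loss strictly increases, which is the intended behavior of the inner maximization.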
\paragraph{Main Results.}
In Table \ref{tbl:adversarial training on image}, for each type of corruption, we report the test accuracy on \texttt{CIFAR10-C} under the strongest corruption severity level 5\footnote{Lighter severity levels exhibit similar trends, but with smaller performance gaps between adversarial and standard training.}.
For \texttt{ImageNet-C}, we report the average test accuracy of five severity levels as in \citep{hendrycks2018benchmarking}.
We also report the test accuracy on \texttt{CIFAR10} and \texttt{ImageNet} in the column of ``Clean'' for comparison.
As can be seen, Adv-$\ell_{2}$ and Adv-$\ell_{\infty}$ improve the average accuracy on OOD data, especially under the corruption types Noise and Blur. This supports our finding in Section~\ref{sec:Learning Robust Model Results in Better OOD Generalization} that AT helps the model generalize to OOD data.
Though AT improves OOD generalization on all corruption types for \texttt{ImageNet-C}, it degrades the performance
on data corrupted by Fog, Bright, and Contrast in \texttt{CIFAR10-C}.
We speculate this is because these three corruptions intrinsically rescale the adversarial perturbation;
see Appendix \ref{app:perturbation size} for a detailed discussion.
\paragraph{Ablation Study.}
We study the effect of perturbation size $r$ and the number of training samples $n$ for adversarial training
in bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2}.
Due to the space limit, we put the implementation details and results in Appendix \ref{app:perturbation}.
\par
The results for the effect of perturbation size $r$ are in Figures~\ref{fig:adv_l2_r}-\ref{fig:adv_linf_r} in Appendix \ref{app:perturbation size}.
As can be seen, the accuracy on the OOD data \texttt{CIFAR10-C} first increases and then decreases as $r$ grows.
This is because the upper bounds on the excess risk in \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} are determined by both the clean accuracy and the input-robustness,
and increasing the perturbation size $r$ improves input-robustness but harms clean accuracy~\citep{raghunathan2019adversarial}.
Specifically, when $r$ is small, the clean accuracy is relatively stable and the robustness term dominates,
so the overall OOD performance increases with $r$.
When $r$ is relatively large, however, a larger $r$ yields better robustness but worse clean accuracy, and can lead to worse overall OOD performance.
Thus, to achieve optimal performance on OOD data, the perturbation size $r$ should be chosen properly rather than increased continually.
\par
The results for the effect of the number of training samples $n$ are in Figures~\ref{fig:adv_l2_num}-\ref{fig:adv_linf_num} in Appendix \ref{app:number of training samples}.
The accuracy on OOD data increases with the number of training samples, which is consistent with our findings in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}.
\subsubsection{Experiments on Natural Language Understanding}\label{sec:Experiments on Natural Language Understanding}
\begin{table}[t!]
\caption{Performance of $\text{BERT}$ base model on NLU tasks using standard training and adversarial training under both $\ell_{2}$-norm and $\ell_{\infty}$-norm.}
\label{tbl:adversarial training on text}
\centering
\scalebox{0.6}{
{
\begin{tabular}{c|cc|ccc}
\hline
Dataset & Train & Test & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$ \\
\hline
\multirow{8}{*}{\texttt{STS-B}} & \multirow{2}{*}{Images} & Images & 98.38 & 97.81 & 96.39 \\
& & MSRvid & 89.52(-8.86) & \textbf{90.61}(-7.20) & 90.09(-6.30) \\
\cline{2-6}
& \multirow{2}{*}{MSRvid} & MSRvid & 98.55 & 97.45 & 96.65 \\
& & Images & \textbf{84.12}(-14.43) & 83.63(-13.82) & 83.11(-13.54) \\
\cline{2-6}
& \multirow{2}{*}{Headlines} & Headlines & 97.59 & 96.73 & 95.75 \\
& & MSRpar & 62.07(-35.52) & 64.48(-32.25) & \textbf{67.67}(-28.08) \\
\cline{2-6}
& \multirow{2}{*}{MSRpar} & MSRpar & 97.55 & 97.33 & 97.55 \\
& & Headlines & 75.58(-21.97) & 75.27(-22.06) & \textbf{76.12}(-21.43) \\
\hline
\multirow{4}{*}{\texttt{SST-2}; \texttt{IMDb}} & \multirow{2}{*}{SST-2}& SST-2 & 93.57 & 93.57 & 93.92 \\
& & IMDb & 90.06(-3.51) & \textbf{91.50}(-2.07) & 91.32(-2.60) \\
\cline{2-6}
& \multirow{2}{*}{IMDb} & IMDb & 94.36 & 94.88 & 94.68 \\
& & SST-2 & 87.00(-7.36) & \textbf{88.53}(-6.35) & 88.07(-6.61) \\
\hline
\multirow{3}{*}{\texttt{MNLI}} & \multirow{3}{*}{Telephone} & Telephone & 83.01 & 83.16 & 82.90 \\
& & Letters & 82.45(-0.56) & 83.76(+0.60) & \textbf{84.07}(+1.17) \\
& & Face-to-face & 81.56(-1.45) & \textbf{83.59}(+0.43) & \textbf{83.59}(+0.69) \\
\hline
\end{tabular}}}
\end{table}
\paragraph{Data.}
As in \citep{hendrycks2020pretrained}, we use three pairs of datasets as the original and OOD datasets for NLU tasks.
\begin{itemize}
\item \texttt{SST-2} \citep{socher2013recursive} and \texttt{IMDb} \citep{maas2011learning} are sentiment analysis datasets,
with pithy expert and full-length lay movie reviews, respectively.
As in \citep{hendrycks2020pretrained}, we train on one dataset and evaluate on the other,
and report the accuracy of the binary sentiment predicted by the model for each review.
\item \texttt{STS-B} consists of texts from different genres and sources.
It requires the model to predict the textual similarity between pairs of sentences \citep{cer2017semeval}.
As in \citep{hendrycks2020pretrained}, we use four sources from two genres: MSRpar (news), Headlines (news); MSRvid (captions), Images (captions). The evaluation metric is the Pearson correlation coefficient.
\item \texttt{MNLI} is a textual entailment dataset which contains sentence pairs from different genres of text \citep{williams2018broad}.
We select training samples from two genres of transcribed text (Telephone and Face-to-Face) and the other of written text (Letters) as in \citep{hendrycks2020pretrained}, and report the classification accuracy.
\end{itemize}
\paragraph{Setup.} For a pre-trained language model, e.g., BERT,
each input token
is encoded as a one-hot vector and then mapped into a continuous embedding space.
Instead of adding perturbations to the one-hot vectors,
we construct adversarial samples in the word embedding space, as in \citep{zhu2019freelb}.
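A minimal sketch of this embedding-space construction (all names and sizes here are hypothetical; a real implementation would operate on BERT's embedding layer and obtain $\boldsymbol{\delta}$ by backpropagation):

```python
import numpy as np

# Sketch of embedding-space perturbation for text (all names hypothetical).
# Discrete token ids are looked up in an embedding table, and the
# adversarial perturbation delta is added to the continuous embeddings
# rather than to the one-hot token vectors.
vocab_size, emb_dim = 50, 8
rng = np.random.default_rng(0)
emb_table = rng.normal(size=(vocab_size, emb_dim))

def embed(token_ids, delta=None):
    e = emb_table[token_ids]               # (seq_len, emb_dim)
    return e if delta is None else e + delta

tokens = np.array([3, 17, 42])
delta = 0.001 * rng.normal(size=(len(tokens), emb_dim))  # embedding perturbation
e_clean = embed(tokens)
e_adv = embed(tokens, delta)               # fed to the rest of the model
```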
\par
The backbone model is the base version of $\text{BERT}$ \citep{devlin2019bert} which has been widely used in the NLP community.
We conduct AT in the fine-tuning stage to see its effectiveness on OOD generalization.
The models are trained by AdamW \citep{loshchilov2018decoupled} for 10 epochs.
Detailed hyperparameters are in Appendix \ref{app:hyp on adv}.
As in Section \ref{sec:Experiments on Image Classification},
we compare Adv-$\ell_{2}$ and Adv-$\ell_{\infty}$ with Std.
\paragraph{Main Results.}
In Table \ref{tbl:adversarial training on text},
we report the results on in-distribution data and OOD data, and the gap between them (in brackets), as in \citep{hendrycks2020pretrained}. The gaps are reported to remove the interference of the general benefit of AT itself, since \citep{zhu2019freelb} showed that AT can improve the generalization of models on in-distribution textual data.
\par
As can be seen, adversarially trained models perform similarly or even better than standardly trained models on in-distribution data, while significantly better on OOD data especially for \texttt{MNLI}. The smaller gaps between in-distribution and OOD data support our finding that AT can be used to improve OOD generalization.
\subsection{Robust Pre-Trained Model Improves OOD Generalization}
\label{expt:pretrain}
In Section \ref{sec:pretrain improves ood}, we theoretically showed that
an input-robust pre-trained model gives a better initialization for fine-tuning on downstream tasks, in terms of OOD generalization.
In this section, we empirically show that this better initialization also leads to better OOD generalization after fine-tuning on image classification tasks.
\begin{table*}[htbp!]
\caption{Clean and corruption accuracy (\%) of ResNet34 on \texttt{CIFAR10-C} with no pre-training, standard pre-training, and adversarial pre-training under $\ell_{2}$-norm and $\ell_{\infty}$-norm.}
\label{tbl:adversarial pre-training}
\centering
\scalebox{0.64}{
{
\begin{tabular}{l|c|c|ccc|cccc|cccc|cccc|c}
\hline
\multirow{2}{*}{Fine-Tuning} & \multirow{2}{*}{Pre-Training} & \multirow{2}{*}{Clean} & \multicolumn{3}{c|}{Noise} & \multicolumn{4}{c|}{Blur} & \multicolumn{4}{c|}{Weather} & \multicolumn{4}{|c|}{Digital} & \multirow{2}{*}{Avg.} \\
& & & Gauss & Shot & Impulse & Defocus & Glass & Motion & Zoom & Snow & Frost & Fog & Bright & Contrast & Elastic & Pixel & JPEG & \\ \hline
\multirow{4}{*}{Std} & No & 95.21 & 40.55 & 40.64 & 19.91 & 83.21 & 67.77 & 77.86 & 90.31 & 80.71 & 77.91 & 67.27 & \textbf{90.88} & 48.14 & 80.80 & 81.99 & 80.84 & 68.59 \\
& Std & 94.65 & 41.25 & 42.91 & 22.58 & 85.19 & 71.03 & 78.49 & \textbf{90.82} & 82.78 & \textbf{80.04} & 67.66 & 89.97 & 45.70 & \textbf{83.89} & 82.03 & \textbf{80.99} & 69.69 \\
& Adv-$\ell_{2}$ & 95.06 & \textbf{45.10} & \textbf{50.58} & 27.57 & 87.27 & \textbf{72.95} & 79.08 & 90.57 & \textbf{83.29} & 77.25 & 65.41 & 90.15 & \textbf{50.41} & 82.81 & 78.01 & 78.95 & 70.63 \\
& Adv-$\ell_{\infty}$ & 94.30 & 40.94 & 46.42 & \textbf{29.39} & \textbf{87.60} & 70.79 & \textbf{81.44} & 90.69 & 82.77 & 79.28 & \textbf{68.84} & 89.19 & 45.29 & 83.59 & \textbf{83.13} & 80.86 & \textbf{70.68} \\ \hline
\multirow{4}{*}{Adv-$\ell_{2}$} & No & 94.43 & 56.82 & 60.58 & 29.34 & 85.44 & 71.67 & 81.80 & 90.08 & 83.68 & 80.37 & 61.68 & 89.96 & 34.76 & 83.76 & 85.16 & 83.24 & 71.89 \\
& Std & 94.09 & 57.64 & 60.96 & 26.35 & 86.78 & \textbf{73.52} & 82.16 & 90.46 & 82.12 & 80.64 & 62.58 & 88.98 & 34.68 & 84.29 & 83.42 & 83.42 & 71.87 \\
& Adv-$\ell_{2}$ & 94.45 & \textbf{58.98} & \textbf{62.99} & \textbf{35.08} & 87.07 & 72.29 & 81.66 & 91.07 & 83.53 & 81.38 & 62.82 & 89.52 & \textbf{39.53} & 84.35 & \textbf{86.60} & 88.55 & 73.69 \\
& Adv-$\ell_{\infty}$ & 95.25 & 58.64 & 62.18 & 29.86 & \textbf{88.15} & 73.00 & \textbf{82.95} & \textbf{91.98} & \textbf{84.76} & \textbf{83.86} & \textbf{64.76} & \textbf{91.00} & 37.35 & \textbf{84.65} & 86.57 & \textbf{88.59} & \textbf{73.89} \\ \hline
\multirow{4}{*}{Adv-$\ell_{\infty}$} & No & 92.46 & 80.91 & 81.69 & 52.00 & 79.58 & 80.94 & 77.42 & 80.21 & 80.57 & 79.35 & 35.41 & 83.15 & 18.06 & 83.51 & 87.79 & 87.44 & 72.54 \\
& Std & 92.05 & 80.21 & 81.06 & \textbf{63.02} & 77.94 & 77.80 & 75.60 & 80.04 & \textbf{83.77} & 81.22 & 41.57 & \textbf{89.94} & \textbf{19.04} & 82.39 & 85.49 & \textbf{88.76} & 73.86 \\
& Adv-$\ell_{2}$ & 92.55 & \textbf{81.96} & \textbf{82.86} & 58.95 & \textbf{80.51} & \textbf{82.66} & \textbf{78.21} & \textbf{86.56} & 81.49 & 81.10 & 42.07 & 89.76 & 18.56 & \textbf{84.58} & \textbf{88.53} & 88.05 & \textbf{75.06} \\
& Adv-$\ell_{\infty}$ & 92.28 & 81.74 & 82.37 & 56.96 & 80.34 & 81.90 & 77.94 & 85.76 & 81.48 & \textbf{81.70} & \textbf{42.99} & 89.00 & 18.45 & 84.50 & 88.07 & 87.50 & 74.71 \\ \hline
\end{tabular}}}
\end{table*}
\paragraph{Setup.}
Following \citep{salman2020adversarially}, we pre-train the model on \texttt{ImageNet} and then fine-tune it on \texttt{CIFAR10}.
To obtain an input-robust model in the pre-training stage, we adversarially pre-train the model.
We compare adversarial pre-training (Adv-$\ell_{2}$ and Adv-$\ell_{\infty}$) against standard pre-training and no pre-training
as in Section \ref{sec:Experiments on Image Classification}.
In the fine-tuning stage, the data from \texttt{CIFAR10} are resized to $224\times224$ as in \citep{salman2020adversarially}. We also compare standard fine-tuning and adversarial fine-tuning under both $\ell_{2}$- and $\ell_{\infty}$-norm. After fine-tuning, we evaluate the OOD generalization on \texttt{CIFAR10-C}. The other settings are the same as in Section \ref{sec:Experiments on Image Classification}.
\paragraph{Main Results.}
The results
are shown in Table \ref{tbl:adversarial pre-training}.
As can be seen,
for all fine-tuning methods,
adversarially pre-trained models consistently achieve better performance on OOD data
than standardly pre-trained models or models without pre-training.
Thus, the initialization from the adversarially pre-trained input-robust model leads to
better OOD generalization on downstream tasks after fine-tuning. In addition, standard pre-training slightly improves the OOD generalization compared with no pre-training
when we conduct Adv-$\ell_{\infty}$ fine-tuning or standard fine-tuning.
We also observe that for all four kinds of pre-training,
adversarial fine-tuning under $\ell_{\infty}$-norm has better performance than $\ell_{2}$-norm.
This agrees with the observations in Section~\ref{sec:Experiments on Image Classification}.
Note that the results of
models without pre-training are different from those in Table \ref{tbl:adversarial training on image} due to the resized input data.
\subsection{Discussion}
\label{sec:discussion}
It is shown in \citep{hendrycks2020pretrained} that the language model BERT~\citep{devlin2019bert} pre-trained on large corpus generalizes well on downstream OOD data, and RoBERTa~\citep{liu2019roberta} pre-trained with more training data and updates generalizes even better than BERT.
We speculate this is because (i) sufficient pre-training obtains an input-robust model, as discussed in Section \ref{sec:pretrain improves ood}, and this better initialization leads to better OOD generalization after fine-tuning, as observed in Section~\ref{expt:pretrain}; and (ii) the objective of masked language modeling predicts the masked (perturbed) input tokens and thus encourages a certain amount of input-robustness.
In this section, we empirically show
that a model initialized from BERT has higher input-robustness than a randomly initialized model.
Moreover, RoBERTa is pre-trained with more training samples and updating steps than BERT,
and a model initialized from it is more robust to input perturbations.
\paragraph{Setup.} We compare the input-robustness of the base versions of pre-trained language model BERT
\citep{devlin2019bert} and RoBERTa
\citep{liu2019roberta},
against a randomly initialized model whose
parameters
are independently sampled from $\mathcal{N}(0, 0.02^{2})$ \citep{wolf2020transformers}.
The three models have exactly the same structure.
Compared with $\text{BERT}$, $\text{RoBERTa}$ is pre-trained on a larger corpus for more updating steps.
Experiments are performed on \texttt{MRPC} and \texttt{CoLA} datasets from the GLUE benchmark \citep{wang2018glue},
with 3.7k and 8.5k training samples, respectively.
Similar to Section \ref{sec:Experiments on Natural Language Understanding},
we add adversarial perturbations in the embedding space,
constructed by 3 steps of an $\ell_{\infty}$-norm attack.
The perturbation size is 0.001 and the perturbation step size is 0.0005.
Since the last classification layer of $\text{BERT}$ or $\text{RoBERTa}$ is randomly initialized during downstream task fine-tuning,
we study the difference in the hidden states of the last Transformer layer
before the classification layer.
Denote $\textbf{h}, \textbf{h}_{\text{per}}\in \mathbb{R}^{128\times 768}$ as the hidden states from the original input and the adversarially perturbed input, respectively. We use the $\ell_{2}$-norm $\|\textbf{h}_{\text{per}} - \textbf{h}\|$ and the cosine
similarity $\langle\textbf{h}, \textbf{h}_{\text{per}}\rangle/(\|\textbf{h}\|\|\textbf{h}_{\text{per}}\|)$ to measure the difference. The cosine similarity is used to alleviate the potential interference caused by the scale of $\textbf{h}$ over different pre-trained models. The results are in Figure \ref{fig:comp}.
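The two metrics can be computed as follows (a minimal numpy sketch; the synthetic $128\times 768$ arrays stand in for the real BERT/RoBERTa hidden states):

```python
import numpy as np

def hidden_state_diff(h, h_per):
    """l2 norm and cosine similarity between flattened hidden states."""
    h, h_per = h.ravel(), h_per.ravel()
    l2 = float(np.linalg.norm(h_per - h))
    cos = float(h @ h_per / (np.linalg.norm(h) * np.linalg.norm(h_per)))
    return l2, cos

# synthetic 128 x 768 hidden states standing in for real model outputs
rng = np.random.default_rng(0)
h = rng.normal(size=(128, 768))
h_per = h + 0.01 * rng.normal(size=(128, 768))   # mildly perturbed states
l2, cos = hidden_state_diff(h, h_per)
```

Flattening before the inner product makes the cosine similarity a single scalar per sample, insensitive to the overall scale of $\textbf{h}$, which is the reason it complements the $\ell_{2}$-norm here.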
\begin{figure}[htbp]
\centering
\subfloat{
\includegraphics[width=0.3\textwidth]{pic/robustness/legend.pdf}}
\vspace{-0.1in}
\\
\addtocounter{subfigure}{-1}
\subfloat[\texttt{MRPC.}\label{fig:mrpc_norm}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/mrpc_norm.pdf}}
\subfloat[\texttt{CoLA.}\label{fig:cola_norm}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/cola_norm.pdf}}\\
\subfloat[\texttt{MRPC.}\label{fig:mrpc}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/mrpc.pdf}}
\subfloat[\texttt{CoLA.}\label{fig:cola}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/cola.pdf}}
\vspace{-0.05in}
\caption{Difference of hidden states in the last Transformer layer between the original input and adversarially perturbed input, measured by $\ell_{2}$-norm and cosine similarity. The models compared are randomly initialized model, $\text{BERT}$, and $\text{RoBERTa}$.
The datasets used are \texttt{MRPC} and \texttt{CoLA} from the GLUE benchmark.
The dashed lines in the upper and bottom figures are respectively the mean of $\|\textbf{h}_{\text{per}} - \textbf{h}\|$ and $\langle\textbf{h}, \textbf{h}_{\text{per}}\rangle/(\|\textbf{h}\|\|\textbf{h}_{\text{per}}\|)$ from all samples in a dataset.}
\label{fig:comp}
\vspace{-0.1in}
\end{figure}
\paragraph{Main Results.}
The histograms of $\|\textbf{h}_{\text{per}} - \textbf{h}\|$ and $\langle\textbf{h}, \textbf{h}_{\text{per}}\rangle/(\|\textbf{h}\|\|\textbf{h}_{\text{per}}\|)$ over all training samples
in \texttt{MRPC} and \texttt{CoLA}
are shown in Figures \ref{fig:mrpc_norm}, \ref{fig:cola_norm} and Figures \ref{fig:mrpc}, \ref{fig:cola}, respectively.
We can observe that (i) $\text{BERT}$ is more robust than the randomly initialized model,
indicating that the masked language modeling objective and
sufficient pre-training improve input-robustness and lead to better OOD performance after fine-tuning;
and (ii) $\text{RoBERTa}$ is more input-robust than $\text{BERT}$, which implies that
more training samples and updating steps in the pre-training stage improve input-robustness.
Combined with the empirical observation that a more input-robust pre-trained model also leads to better OOD generalization on downstream tasks (Section~\ref{expt:pretrain}),
observations (i) and (ii) may also explain the finding in \citep{hendrycks2020pretrained} that $\text{BERT}$ generalizes worse on downstream OOD data than $\text{RoBERTa}$, but much better than a model without pre-training.
\section{Conclusion}
In this paper, we explore the relationship between the robustness and OOD generalization of a model.
We theoretically show that an input-robust model can generalize well on OOD data
under a definition of OOD generalization via the Wasserstein distance. Thus, for a model trained from scratch,
we suggest adversarial training to improve its input-robustness, which results in better OOD generalization.
Under mild conditions, we show that the excess risk on OOD data of an adversarially trained model is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$.
For the framework of first pre-training and then fine-tuning,
we show that a pre-trained input-robust model provides a theoretically good initialization which empirically improves OOD generalization after fine-tuning. Various experiments on CV and NLP verify our theoretical findings.
\section{Proofs for Section \ref{sec:Learning Robust Model Results in Better OOD Generalization}}\label{app:proof in Learning Robust Model Results in Better OOD Generalization}
\subsection{Proofs for Section \ref{sec:Robustness Corresponds with Better OOD Generalization}}\label{app:proof in Robustness Corresponds with Better OOD Generalization}
\subsubsection{Proof of Theorem \ref{thm:ood generalization upper bound}}
To start the proof of Theorem \ref{thm:ood generalization upper bound}, we need the following lemma.
\begin{lemma}
\label{lem:equivalence}
For any $\text{\boldmath{$w$}}$ and $r$, we have
\begin{equation}
\small
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) = \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right].
\end{equation}
\end{lemma}
\begin{proof}
Let $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}) = \boldsymbol{x} + \arg\max_{\{\boldsymbol{\delta}: \|\boldsymbol{\delta}\|_{\infty} \leq r\}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$, where $\boldsymbol{x}$ is an input point. The existence of $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ is guaranteed by the continuity of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$. Let $P_{r}$ be the distribution of $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ with $\boldsymbol{x}\sim P_{0}$. Then
\begin{equation}
\small
\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] = \mathbb{E}_{P_{r}}[f(\text{\boldmath{$w$}}, \boldsymbol{x})].
\end{equation}
Since
\begin{equation}
\small
\mathsf{W}_{\infty}(P_{0}, P_{r}) \leq \operatorname*{ess\,sup}_{\boldsymbol{x}\sim P_{0}}\|\boldsymbol{x} - T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\|_{\infty} \leq r,
\end{equation}
we have
\begin{equation}
\small
\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] \leq \sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}).
\end{equation}
On the other hand, let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. By Kolmogorov's theorem, $P^{*}_{r}$ is the distribution of some random vector $\boldsymbol{z}$, and by the definition of the $\mathsf{W}_{\infty}$-distance, there is a coupling of $(\boldsymbol{z}, \boldsymbol{x})$ under which $\|\boldsymbol{z} - \boldsymbol{x}\|_{\infty}\leq r$ holds almost surely. Then we conclude
\begin{equation}
\small
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) = R_{P^{*}_{r}}(\text{\boldmath{$w$}}) = \mathbb{E}_{P^{*}_{r}}[f(\text{\boldmath{$w$}}, \boldsymbol{z})] \leq \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right].
\end{equation}
Thus, we get the conclusion.
\end{proof}
This lemma shows that a distributional perturbation measured in the $\mathsf{W}_{\infty}$-distance is equivalent to an input perturbation. Hence, we can study $\mathsf{W}_{\infty}$-distributional robustness through $\ell_{\infty}$-input-robustness. The basic tool for our proof is the covering number, defined as follows.
\begin{definition}\citep{wainwright2019}
An $r$-cover of $(\mathcal{X}, \|\cdot\|_{p})$ is any point set $\{\boldsymbol{u}_{i}\}\subseteq\mathcal{X}$ such that for any $\boldsymbol{u}\in\mathcal{X}$, there exists $\boldsymbol{u}_{i}$ with $\|\boldsymbol{u} - \boldsymbol{u}_{i}\|_{p}\leq r$. The covering number $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{p})$ is the cardinality of the smallest $r$-cover.
\end{definition}
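For intuition on the scaling of this quantity, consider the unit cube $[0,1]^{d_{0}}$ under the $\ell_{\infty}$-norm (an illustrative example, not used in the proof below): placing grid points with spacing $2r$ along each coordinate yields an $r$-cover, so
\begin{equation}
\small
\mathcal{N}(r, [0,1]^{d_{0}}, \|\cdot\|_{\infty}) \leq \left\lceil \tfrac{1}{2r} \right\rceil^{d_{0}},
\end{equation}
i.e., $\log\mathcal{N}$ grows linearly in the dimension $d_{0}$.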
Now we are ready to give the proof of Theorem \ref{thm:ood generalization upper bound} which is motivated by \citep{xu2012robustness}.
\begin{proof}[Proof of Theorem \ref{thm:ood generalization upper bound}]
We can construct an $r$-cover of $(\mathcal{X}, \|\cdot\|_{2})$ with $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{2})\leq (2d_{0})^{(2D/r^{2} + 1)} = N$, because $\mathcal{X}$ can be covered by a polytope with $\ell_{2}$-diameter smaller than $2D$ and $2d_{0}$ vertices; see Theorem 0.0.4 of \citep{vershynin2018} for details. Since $\|\cdot\|_{\infty}\leq\|\cdot\|_{2}$, the same point set is also an $r$-cover in the $\ell_{\infty}$-norm, so $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{\infty})\leq (2d_{0})^{(2D/r^{2} + 1)}$. Then there exists a cover $(C_{1}, \cdots, C_{N})$ of $(\mathcal{X}, \|\cdot\|_{\infty})$ such that the $C_{i}$ are pairwise disjoint and $\|\boldsymbol{u} - \boldsymbol{v}\|_{\infty} \leq r$ for any $\boldsymbol{u}, \boldsymbol{v}\in C_{i}$. Such a cover can be constructed as $C_{i} = \hat{C}_{i}\bigcap\left(\bigcup_{j=1}^{i-1}\hat{C}_{j}\right)^{c}$, where $(\hat{C}_{1},\cdots, \hat{C}_{N})$ is a cover of $(\mathcal{X}, \|\cdot\|_{\infty})$ in which each $\hat{C}_{i}$ has diameter smaller than $r$; this is possible since $\mathcal{N}(r, \mathcal{X}, \|\cdot\|_{\infty}) \leq N$. Let $A_{j} = \{\boldsymbol{x}_{i}: \boldsymbol{x}_{i}\in C_{j}\}$ and let $|A_{j}|$ denote its cardinality. By Lemma \ref{lem:equivalence}, we have
\begin{equation}
\label{eq:generalization decomposition}
\small
\begin{aligned}
\left|\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r_{0})}R_{P}(\text{\boldmath{$w$}}) - R_{P_{n}}(\text{\boldmath{$w$}}) \right| & = \left|\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] - R_{P_{n}}(\text{\boldmath{$w$}})\right| \\
& = \left|\sum\limits_{j=1}^{N}\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\mid \boldsymbol{x}\in C_{j}\right]P_{0}(C_{j}) - R_{P_{n}}(\text{\boldmath{$w$}})\right| \\
& \leq \left|\sum\limits_{j=1}^{N}\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\mid \boldsymbol{x}\in C_{j}\right]\frac{|A_{j}|}{n} - \frac{1}{n}\sum\limits_{i=1}^{n}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})\right|\\
& + \left|\sum\limits_{j=1}^{N}\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r_{0}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\mid \boldsymbol{x}\in C_{j}\right]\left(\frac{|A_{j}|}{n} - P_{0}(C_{j})\right)\right| \\
& \leq \left|\frac{1}{n}\sum\limits_{j=1}^{N}\sum\limits_{\boldsymbol{x}_{i}\in C_{j}}\sup_{\boldsymbol{x}\in C_{j} + B_{\infty}(\textbf{0}, r_{0})}|f(\text{\boldmath{$w$}}, \boldsymbol{x}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})|\right| + M\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right| \\
& \overset{a}{\leq} \frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq 2r}\left|f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})\right| + M\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right| \\
& \leq \epsilon + M\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right|.
\end{aligned}
\end{equation}
Here $a$ holds because $C_{j} + B_{\infty}(\textbf{0}, r) \subseteq B_{\infty}(\boldsymbol{x}_{i}, 2r)$ when $\boldsymbol{x}_{i}\in C_{j}$, since the $\ell_{\infty}$-diameter of $C_{j}$ is smaller than $r$. The last inequality is due to the $(2r, \epsilon, P_{n}, \infty)$-robustness of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$. On the other hand, by Proposition A6.6 in \citep{van2000weak}, we have
\begin{equation}
\label{eq:multinomial concentration}
\small
\mathbb{P}\left(\sum\limits_{j=1}^{N}\left|\frac{|A_{j}|}{n} - P_{0}(C_{j})\right|\geq \theta\right) \leq 2^{N}\exp\left(\frac{-n\theta^{2}}{2}\right).
\end{equation}
Combining this with \eqref{eq:generalization decomposition} and plugging in the value of $N$, we obtain the conclusion.
\end{proof}
\subsubsection{Proof of Theorem \ref{thm:ood generalization upper bound l2}}
The proof of Theorem \ref{thm:ood generalization upper bound l2} differs slightly from that of Theorem \ref{thm:ood generalization upper bound}. An out-distribution $P$ constrained in $B_{\mathsf{W}_{\infty}}(P_{0}, r)$ only corresponds to OOD data that are almost surely contained in an $\ell_{\infty}$-ball around the in-distribution data; see Lemma \ref{lem:equivalence} for a rigorous statement. Hence, in Theorem \ref{thm:ood generalization upper bound} we can use the $\ell_{\infty}$-robustness of the model to derive the OOD generalization bound under the $\mathsf{W}_{\infty}$-distance.
In the regime of the $\mathsf{W}_{2}$-distance, however, the transformed OOD data $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ are, roughly speaking, contained in an $\ell_{2}$-ball around $\boldsymbol{x}$ only in expectation. Thus, Lemma \ref{lem:equivalence} is invalid under the $\mathsf{W}_{2}$-distance.
\par
To discuss the OOD generalization under the $\mathsf{W}_{2}$-distance, we need a delicate characterization of the distributions $P\in B_{\mathsf{W}_{2}}(P_{0}, r)$. First, we need the following lemma.
\begin{lemma}\label{lem:optimal}
For any $r$ and $\text{\boldmath{$w$}}$, let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. Then, there exists a mapping $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$ such that $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\sim P^{*}_{r}$ with $\boldsymbol{x}\sim P_{0}$.
\end{lemma}
\begin{proof}
The proof of Theorem 6 in \citep{sinha2018certifying} shows that
\begin{equation}
\small
R_{P^{*}_{r}}(\text{\boldmath{$w$}}) = \sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) = \inf_{\lambda\geq 0}\sup_{P, \pi\in \Pi(P, P_{0})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}d\pi(\boldsymbol{x}, \boldsymbol{z}) + \lambda r\right).
\end{equation}
We next show that the supremum over $\pi$ in the last equality is attained by the joint distribution of $(T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}), \boldsymbol{x})$, which implies our conclusion. For any $\lambda > 0$, we have
\begin{equation}
\small
\sup_{P, \pi\in \Pi(P, P_{0})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}d\pi(\boldsymbol{x}, \boldsymbol{z})\right) \leq \int_{\mathcal{X}}\sup_{\boldsymbol{x}}\left(f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}\right)dP_{0}(\boldsymbol{z}),
\end{equation}
since the supremum on the left-hand side is taken over both $P$ and $\pi$. On the other hand, let $P(\cdot\mid \boldsymbol{z})$ be a regular conditional distribution on $\mathcal{X}$ given $\boldsymbol{z}$, and let $\boldsymbol{x}(\cdot)$ be a measurable function on $\mathcal{X}$. Since $P(\cdot\mid \boldsymbol{z})$ is measurable,
\begin{equation}
\small
\begin{aligned}
\sup_{P, \pi\in \Pi(P, P_{0})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}d\pi(\boldsymbol{x}, \boldsymbol{z})\right) & \geq \sup_{P(\cdot\mid \boldsymbol{z})}\left(\int_{\mathcal{X}\times\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}dP(\boldsymbol{x}\mid \boldsymbol{z})dP_{0}(\boldsymbol{z})\right) \\
& \geq \sup_{\boldsymbol{x}(\cdot)}\left(\int_{\mathcal{X}}f(\text{\boldmath{$w$}}, \boldsymbol{x}(\boldsymbol{z})) - \lambda\|\boldsymbol{x}(\boldsymbol{z}) - \boldsymbol{z}\|^{2}dP_{0}(\boldsymbol{z})\right) \\
& \geq \int_{\mathcal{X}}\sup_{\boldsymbol{x}}\left(f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}\right)dP_{0}(\boldsymbol{z}).
\end{aligned}
\end{equation}
Thus, we get the conclusion.
\end{proof}
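To make Lemma \ref{lem:optimal} concrete, the Lagrangian relaxation above suggests computing $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{z})$ per sample by maximizing $f(\text{\boldmath{$w$}}, \boldsymbol{x}) - \lambda\|\boldsymbol{x} - \boldsymbol{z}\|^{2}$, as in \citep{sinha2018certifying}. The sketch below is a minimal illustration under assumed toy choices (a linear loss and a plain gradient-ascent solver), not the paper's implementation:

```python
import numpy as np

def wrm_map(grad_f, z, lam, steps=200, lr=0.1):
    """Gradient ascent on phi(x) = f(w, x) - lam * ||x - z||^2, starting from z.
    Returns one sample of the transport map under the Lagrangian relaxation."""
    x = z.copy()
    for _ in range(steps):
        x = x + lr * (grad_f(x) - 2.0 * lam * (x - z))
    return x

# Toy linear loss f(w, x) = c^T x; then the maximizer is x = z + c / (2 lam).
c = np.array([1.0, -2.0])
z = np.zeros(2)
x_star = wrm_map(lambda x: c, z, lam=1.0)
print(x_star)  # close to c / 2 = [0.5, -1.0]
```

For a strictly concave penalized objective such as this one, the iteration contracts to the unique maximizer, matching the closed form.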
\begin{proof}[Proof of Theorem \ref{thm:ood generalization upper bound l2}]
Similar to the proof of Theorem \ref{thm:ood generalization upper bound}, we can construct a disjoint cover $(C_{1}, \cdots, C_{N})$ of $(\mathcal{X}, \|\cdot\|_{2})$ such that $N\leq (2d_{0})^{(2\epsilon^{2}D/r^{2} + 1)}$ and the $\ell_{2}$-diameter of each $C_{i}$ is smaller than $r/\epsilon$. Let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. By Lemma \ref{lem:optimal}, we have
\begin{equation}
\label{eq:sup bound}
\small
\begin{aligned}
\sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) & = R_{P^{*}_{r}}(\text{\boldmath{$w$}}) \\
& = \mathbb{E}_{P_{0}}\left[f(\text{\boldmath{$w$}}, T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}))\right] \\
& = \mathbb{E}_{P_{0}}\left[f(\text{\boldmath{$w$}}, T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}))\left(\textbf{1}_{T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\in B_{2}(\boldsymbol{x}, r / \epsilon)} + \textbf{1}_{T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\notin B_{2}(\boldsymbol{x}, r / \epsilon)}\right)\right] \\
& \leq \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] + M\mathbb{P}(T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\notin B_{2}(\boldsymbol{x}, r / \epsilon)).
\end{aligned}
\end{equation}
Due to the definition of $T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})$, Markov's inequality and the Cauchy-Schwarz inequality give
\begin{equation}
\small
\left(\frac{r}{\epsilon}\right) \mathbb{P}(T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x})\notin B_{2}(\boldsymbol{x}, r / \epsilon)) \leq \int_{\mathcal{X}}\|T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}) - \boldsymbol{x}\| dP_{0}(\boldsymbol{x}) \leq \left(\int_{\mathcal{X}}\|T_{r}^{\text{\boldmath{$w$}}}(\boldsymbol{x}) - \boldsymbol{x}\|^{2} dP_{0}(\boldsymbol{x})\right)^{1/2} = \mathsf{W}_{2}(P_{0}, P^{*}_{r}) \leq r.
\end{equation}
Plugging this into \eqref{eq:sup bound} and using the definition of the Wasserstein distance, we have
\begin{equation}
\label{eq:upper bound on optimal}
\small
\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] \leq \sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) \leq \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] + M\epsilon.
\end{equation}
Similar to the proof of Theorem \ref{thm:ood generalization upper bound}, since the model is $(2r/\epsilon, \epsilon, P_{n}, 2)$-robust, we have
\begin{equation}
\small
\left|\mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{2}\leq r/\epsilon}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})\right] - R_{P_{n}}(\text{\boldmath{$w$}})\right|\leq \epsilon + M\sqrt{\frac{(2d_{0})^{(2\epsilon^{2}D/r^{2} + 1)}\log{2} + 2\log{(1/\theta)}}{n}}
\end{equation}
holds with probability at least $1 - \theta$. Combining this with \eqref{eq:upper bound on optimal}, we get the conclusion.
\end{proof}
\subsection{Proofs for Section \ref{sec:robust training}}\label{app:proof in robust training}
The proof of Theorem \ref{thm:convergence} is the same for $p\in\{2, \infty\}$; we take $p=\infty$ as an example. Before providing the proof, we first give a lemma characterizing the convergence rate of the first inner loop in Algorithm \ref{alg:sgd}.
\begin{lemma}
\label{lem:convergence}
For any $\text{\boldmath{$w$}}, \boldsymbol{x}\in\{\boldsymbol{x}_{i}\}$, and $r$, there exists $\boldsymbol{\delta}^{*}\in\arg\max_{\{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty}\leq r\}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$ such that
\begin{equation}
\small
\|\boldsymbol{\delta}_{K + 1} - \boldsymbol{\delta}^{*}\|^{2} \leq \left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}\|\boldsymbol{\delta}_{1} - \boldsymbol{\delta}^{*}\|^{2}
\end{equation}
when $\boldsymbol{\delta}_{k + 1} = \emph{\text{Proj}}_{B_{\infty}(\emph{\textbf{0}}, r)}\left(\boldsymbol{\delta}_{k} +\eta_{\boldsymbol{x}}\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k})\right)$ with $\eta_{\boldsymbol{x}} = 1 /L_{22}$.
\end{lemma}
\begin{proof}
The existence of $\boldsymbol{\delta}^{*}$ follows from the continuity of $f(\text{\boldmath{$w$}}, \cdot)$ on the compact ball $B_{\infty}(\textbf{0}, r)$. Then
\begin{equation}
\label{eq:distance descent}
\small
\begin{aligned}
f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}^{*}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k + 1}) & = f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}^{*}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}) + f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k + 1})\\
& \overset{a}{\leq} \langle \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}), \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k}\rangle -\frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2}\\
& + \langle\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}), \boldsymbol{\delta}_{k} - \boldsymbol{\delta}_{k + 1}\rangle + \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} \\
& = \langle\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k}), \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k + 1}\rangle - \frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} + \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} \\
& \overset{b}{\leq} L_{22}\langle \boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}, \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k + 1}\rangle - \frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} + \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} \\
& = L_{22}\langle \boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}, \boldsymbol{\delta}^{*} - \boldsymbol{\delta}_{k}\rangle - \frac{\mu_{\boldsymbol{x}}}{2}\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} - \frac{L_{22}}{2}\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2},
\end{aligned}
\end{equation}
where $a$ follows from the $L_{22}$-Lipschitz continuity of $\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x})$ and the strong concavity of $f$ in $\boldsymbol{x}$, and $b$ follows from the property of the projection (see Lemma 3.1 in \citep{bubeck2014convex}). Then we get
\begin{equation}
\small
\begin{aligned}
\|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}^{*}\|^{2} & = \|\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}\|^{2} + \|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2} + 2\langle\boldsymbol{\delta}_{k + 1} - \boldsymbol{\delta}_{k}, \boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\rangle \\
& \leq \left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)\|\boldsymbol{\delta}_{k} - \boldsymbol{\delta}^{*}\|^{2}
\end{aligned}
\end{equation}
by plugging \eqref{eq:distance descent} into the above equality and using $f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}^{*}) - f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}_{k + 1}) \geq 0$. Thus, we get the conclusion.
\end{proof}
This lemma shows that the inner loop in Algorithm \ref{alg:sgd} can efficiently approximate the worst-case perturbation for any $\text{\boldmath{$w$}}_{t}$ and $\boldsymbol{x}_{i}$. Now we are ready to give the proof of Theorem \ref{thm:convergence}.
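For concreteness, the inner loop analyzed in Lemma \ref{lem:convergence} can be sketched directly. The quadratic objective below is an assumed toy stand-in for $f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$, chosen so that $\mu_{\boldsymbol{x}} = 1$ and $L_{22} = 4$ are known exactly and the lemma's rate can be checked numerically:

```python
import numpy as np

def project_linf(delta, r):
    """Projection onto B_inf(0, r): coordinate-wise clipping."""
    return np.clip(delta, -r, r)

def inner_loop(grad_f, delta1, r, eta, K):
    """Projected gradient ascent: delta_{k+1} = Proj(delta_k + eta * grad_f(delta_k))."""
    delta = delta1.copy()
    for _ in range(K):
        delta = project_linf(delta + eta * grad_f(delta), r)
    return delta

# Toy strongly concave objective f(delta) = -0.5 * delta^T A delta + b^T delta,
# with mu_x = min eig(A) = 1 and L22 = max eig(A) = 4.
A = np.diag([1.0, 4.0])
b = np.array([10.0, 10.0])          # drives the maximizer outside the ball
grad_f = lambda d: -A @ d + b
r = 1.0
delta_star = np.array([r, r])       # constrained maximizer: corner of the ball

d1 = np.zeros(2)
dist1 = np.linalg.norm(d1 - delta_star) ** 2
K = 20
dK = inner_loop(grad_f, d1, r, eta=1 / 4.0, K=K)   # eta_x = 1 / L22
distK = np.linalg.norm(dK - delta_star) ** 2
print(distK <= (1 - 1 / 4.0) ** K * dist1)          # the rate claimed by the lemma
```

On this toy problem the iterate reaches the corner of the ball within a few steps, well inside the geometric envelope $(1 - \mu_{\boldsymbol{x}}/L_{22})^{K}$.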
\par
We need the following lemma, which is Theorem 6 in \citep{rakhlin2012making}.
\begin{lemma}
\label{lem:concentration}
Let $\{\xi_{1},\cdots, \xi_{t}\}$ be a martingale difference sequence uniformly bounded by $b$. Let $V_{t} = \sum_{j=1}^{t}\mathsf{Var}(\xi_{j}\mid\mathcal{F}_{j - 1})$, where $\mathcal{F}_{j}$ is the $\sigma$-field generated by $\{\xi_{1},\cdots,\xi_{j}\}$. Then for every $a, v>0$,
\begin{equation}
\small
\mathbb{P}\left(\bigcup_{s\leq t}\left(\left\{\sum\limits_{j=1}^{s}\xi_{j} \geq a\right\}\bigcap \left\{V_{s} \leq v \right\}\right)\right) \leq \exp\left(\frac{-a^{2}}{2(v + ba)}\right).
\end{equation}
\end{lemma}
This is a Bennett-type inequality, which is sharper than the Azuma-Hoeffding inequality when the variance $v$ is much smaller than the uniform bound $b$.
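A quick numerical comparison (with arbitrary illustrative values, not tied to the constants of the theorem) shows the gap between the two bounds in the small-variance regime:

```python
import math

def bennett_tail(a, v, b):
    """Bennett-type bound from the lemma: exp(-a^2 / (2(v + b*a)))."""
    return math.exp(-a**2 / (2 * (v + b * a)))

def azuma_tail(a, t, b):
    """Azuma-Hoeffding for t differences bounded by b: exp(-a^2 / (2*t*b^2))."""
    return math.exp(-a**2 / (2 * t * b**2))

t, b = 1000, 1.0      # many steps, uniform bound b
a, v = 50.0, 10.0     # total conditional variance v much smaller than t * b^2

print(bennett_tail(a, v, b))   # tiny: the bound adapts to the variance
print(azuma_tail(a, t, b))     # much larger: only uses the uniform bound
```

Here the variance-adaptive bound is smaller by many orders of magnitude, which is exactly the regime exploited in the proof below.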
\subsubsection{Proof of Theorem \ref{thm:convergence}}
\begin{proof}
With a slight abuse of notation, let $r(p) = r$ and define $g(\text{\boldmath{$w$}}, \boldsymbol{x}) = \sup_{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty} \leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$. Lemma A.5 in \citep{nouiehed2019solving} implies that $g(\text{\boldmath{$w$}}, \boldsymbol{x})$ has an $(L_{11} + \frac{L_{12}L_{21}}{\mu_{\boldsymbol{x}}})$-Lipschitz continuous gradient with respect to $\text{\boldmath{$w$}}$ for any fixed $\boldsymbol{x}$. Then $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})$ has an $L$-Lipschitz continuous gradient with $L = L_{11} + \frac{L_{12}L_{21}}{\mu_{\boldsymbol{x}}}$. Let $\boldsymbol{x}^{*}\in \boldsymbol{x} + \arg\max_{\{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty}\leq r\}}f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta})$. By the Lipschitz continuity of the gradient of $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})$,
\begin{equation}
\small
\begin{aligned}
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) & \leq \langle\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}), \text{\boldmath{$w$}}_{t + 1} - \text{\boldmath{$w$}}_{t}\rangle + \frac{L}{2}\|\text{\boldmath{$w$}}_{t + 1} - \text{\boldmath{$w$}}_{t}\|^{2} \\
& = -\eta_{\text{\boldmath{$w$}}_{t}}\langle\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}), \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\rangle + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}L}{2}\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\|^{2} \\
& = -\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|^{2} + \eta_{\text{\boldmath{$w$}}_{t}}\langle\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}), \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\rangle \\
& + \eta_{\text{\boldmath{$w$}}_{t}}\langle\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}),\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*})\rangle + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}L}{2}\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\|^{2}.
\end{aligned}
\end{equation}
Here the last equality is due to $\nabla_{\text{\boldmath{$w$}}}g(\text{\boldmath{$w$}}, \boldsymbol{x}) = \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}^{*})$ (a Danskin-type result; see Lemma A.5 in \citep{nouiehed2019solving}), where $\boldsymbol{x}^{*}_{i_{t}}$ is the local maximum approximated by $\boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K}$ in Lemma \ref{lem:convergence}. Taking the expectation over $\text{\boldmath{$w$}}_{t + 1}$ conditional on $\text{\boldmath{$w$}}_{t}$ on both sides of the above equation, and applying Jensen's inequality, Lemma \ref{lem:convergence}, and $\eta_{\text{\boldmath{$w$}}_{t}} = 1/\mu_{\text{\boldmath{$w$}}}t$, we obtain
\begin{equation}
\label{eq:convergence in exp}
\small
\begin{aligned}
\mathbb{E}[\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1})] - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) & \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) -\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|^{2} \\
& + \mathbb{E}\left[\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K})\|\right] + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L}{2} \\
& \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) -\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|^{2} + \eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}\mathbb{E}\left[\|\boldsymbol{\delta}_{1} - \boldsymbol{\delta}_{i_{t}}^{*}\|^{2}\right] + \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L}{2} \\
& \leq \left(1 - 2\mu_{\text{\boldmath{$w$}}}\eta_{\text{\boldmath{$w$}}_{t}}\right)\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right) + \eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L \\
& = \left(1 - \frac{2}{t}\right)\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right) + \frac{G^{2}L}{\mu_{\text{\boldmath{$w$}}}^{2}t^{2}}.
\end{aligned}
\end{equation}
Here the third inequality is because
\begin{equation}
\small
\eta_{\text{\boldmath{$w$}}_{t}}\|\nabla \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}\|\boldsymbol{\delta}_{1} - \boldsymbol{\delta}_{i_{t}}^{*}\|^{2} \leq \eta_{\text{\boldmath{$w$}}_{t}}G\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)^{K}4d_{0}r^{2} \leq \frac{\eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L}{2},
\end{equation}
for any $\boldsymbol{\delta}_{i_{t}}^{*}$, since
\begin{equation}
\small
K\log{\left(1 - \frac{\mu_{\boldsymbol{x}}}{L_{22}}\right)} \leq -K\frac{\mu_{\boldsymbol{x}}}{L_{22}} \leq \log{\left(\frac{GL}{8T\mu_{\text{\boldmath{$w$}}}d_{0}r^{2}}\right)}.
\end{equation}
Then by induction,
\begin{equation}
\small
\begin{aligned}
\mathbb{E}[\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1})] - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) & \leq \frac{G^{2}L}{\mu^{2}_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{1}{j^{2}}\prod_{k=j + 1}^{t}\left(1 - \frac{2}{k}\right) \\
& = \frac{G^{2}L}{\mu^{2}_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{1}{j^{2}}\frac{(j - 1)j}{(t - 1)t} \\
& \leq \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}}.
\end{aligned}
\end{equation}
Thus, taking $t=T$ gives the first conclusion, convergence in expectation, for $T\geq 2$. For the second conclusion, define $\xi_{t} = \langle\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}),\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*})\rangle$. Then the Cauchy-Schwarz inequality implies that
\begin{equation}
\small
|\xi_{t}| \leq \|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t})\|\|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}}^{*})\| \leq 2G^{2}.
\end{equation}
Similar to \eqref{eq:convergence in exp}, for $t \geq 2$,
\begin{equation}
\label{eq:high probablity bound}
\small
\begin{aligned}
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) & \leq \left(1 - 2\mu_{\text{\boldmath{$w$}}}\eta_{\text{\boldmath{$w$}}_{t}}\right)\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right) + \eta_{\text{\boldmath{$w$}}_{t}}^{2}G^{2}L + 2\eta_{\text{\boldmath{$w$}}_{t}}\xi_{t} \\
& \leq \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} + \frac{2}{\mu_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{\xi_{j}}{j}\prod_{k = j + 1}^{t}\left(1 - \frac{2}{k}\right)\\
& = \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} + \frac{2}{\mu_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{1}{j}\frac{(j - 1)j}{(t - 1)t}\xi_{j} \\
& = \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} + \frac{2}{\mu_{\text{\boldmath{$w$}}}}\sum\limits_{j=2}^{t}\frac{(j - 1)}{(t - 1)t}\xi_{j}.
\end{aligned}
\end{equation}
The second term in the last line is a weighted sum of the martingale differences $\xi_{j}$ with $|\xi_{j}| \leq 2G^{2}$, so the Azuma-Hoeffding inequality for bounded martingale differences (Corollary 2.20 in \citep{wainwright2019}) already gives a $\mathcal{O}(1/\sqrt{t})$ convergence rate with high probability. However, we can sharpen this rate via a Bennett-type inequality (Proposition 3.19 in \citep{duchi2016lecture}), because the conditional variance of $\xi_{j}$ decreases over training.
Consider the conditional variance of $\sum_{j=2}^{t}(j - 1)\xi_{j}$. Let $\mathcal{F}_{j}$ be the $\sigma$-field generated by $\{\text{\boldmath{$w$}}_{1}, \cdots, \text{\boldmath{$w$}}_{j}\}$. Since $\mathbb{E}[\xi_{j}\mid\mathcal{F}_{j - 1}] = 0$, we have
\begin{equation}
\small
\begin{aligned}
\mathsf{Var}\left(\sum_{j=2}^{t}(j - 1)\xi_{j}\mid \mathcal{F}_{j - 1}\right) & = \sum_{j=2}^{t}(j - 1)^{2}\mathsf{Var}\left(\xi_{j}\mid \mathcal{F}_{j - 1}\right) \\
& = \sum_{j=2}^{t}(j - 1)^{2}\mathbb{E}\left[\xi_{j}^{2} \mid \mathcal{F}_{j - 1}\right] \\
& \leq 4G^{2}\sum_{j=2}^{t}(j - 1)^{2} \|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j})\|^{2} \\
& \leq 8G^{2}L\sum_{j=2}^{t}(j - 1)^{2} \left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right),
\end{aligned}
\end{equation}
where the first inequality follows from the Cauchy-Schwarz inequality and the last inequality holds because
\begin{equation}
\small
\begin{aligned}
\tilde{R}_{P_{n}}\left(\text{\boldmath{$w$}}^{*}\right) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) & \leq \tilde{R}_{P_{n}}\left(\text{\boldmath{$w$}} - \frac{1}{L}\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \\
& \leq -\left\langle
\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}), \frac{1}{L}\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right\rangle + \frac{L}{2}\left\|\frac{1}{L}\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right\|^{2} \\
& = -\frac{1}{2L}\left\|\nabla\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})\right\|^{2},
\end{aligned}
\end{equation}
for any $\text{\boldmath{$w$}}$. Applying Lemma \ref{lem:concentration}, as long as $T\geq 4$ and $0 < \theta < 1 / e$, with probability at least $1 - \theta$, for all $t \leq T$,
\begin{equation}
\small
\begin{aligned}
& \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\\
& \leq \frac{8G}{\mu_{\text{\boldmath{$w$}}}(t - 1)t}\max\left\{\sqrt{2L\sum_{j=2}^{t}(j - 1)^{2} \left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right)}, G(t - 1)\sqrt{\log{\left(\frac{\log{T}}{\theta}\right)}}\right\}\sqrt{\log{\left(\frac{\log{T}}{\theta}\right)}} + \frac{G^{2}L}{t\mu^{2}_{\text{\boldmath{$w$}}}} \\
& \leq \frac{8G\sqrt{\log{(\log{(T/\theta)})}}}{\mu_{\text{\boldmath{$w$}}}(t - 1)t}\sqrt{2L\sum_{j=2}^{t}(j - 1)^{2} \left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})\right)} + \frac{(8\mu_{\text{\boldmath{$w$}}}G^{2}\log{(\log{(T/\theta)})} + G^{2}L)}{t\mu_{\text{\boldmath{$w$}}}^{2}}.
\end{aligned}
\end{equation}
Then, an upper bound on the first term in the last inequality gives our conclusion. Note that if $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{j}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})$ is of order $\mathcal{O}(1 / (j - 1))$, the conclusion follows. To see this, we look for a constant $a$ large enough that $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{t + 1}) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}) \leq a / t$. This clearly holds for $t = 1$ when $a\geq G^{2} / 2\mu_{\text{\boldmath{$w$}}}$, due to the PL inequality and the bounded gradient. For $t\geq 2$, we find such an $a$ by induction. Let $b = 8G\sqrt{2L\log{(\log{(T/\theta)})}}/\mu_{\text{\boldmath{$w$}}}$ and $c = (8\mu_{\text{\boldmath{$w$}}}G^{2}\log{(\log{(T/\theta)})} + G^{2}L) / \mu_{\text{\boldmath{$w$}}}^{2}$. A satisfactory $a$ must satisfy
\begin{equation}
\small
\begin{aligned}
\frac{a}{t} \geq \frac{b}{(t - 1)t}\sqrt{a\sum\limits_{j=2}^{t}(j - 1)} + \frac{c}{t}
= \frac{b}{(t - 1)t}\sqrt{\frac{at(t - 1)}{2}} + \frac{c}{t} \geq\frac{1}{t}\left(b\sqrt{\frac{a}{2}} + c\right).
\end{aligned}
\end{equation}
It therefore suffices that $a - b\sqrt{a/2} - c \geq 0$. Solving this quadratic inequality in $\sqrt{a}$ gives
\begin{equation}
\small
a \geq \left(\frac{b + \sqrt{b^{2} + 8c}}{2\sqrt{2}}\right)^{2}.
\end{equation}
By taking
\begin{equation}
\small
a \geq 2\left(\frac{2b^{2} + 8c}{8}\right) \geq \left(\frac{b + \sqrt{b^{2} + 8c}}{2\sqrt{2}}\right)^{2},
\end{equation}
we get
\begin{equation}
\small
a \geq \frac{64G^{2}L\log{(\log{(T/\theta)})}}{\mu_{\text{\boldmath{$w$}}}^{2}} + \frac{16\mu_{\text{\boldmath{$w$}}}G^{2}\log{(\log{(T/\theta)})} + 2G^{2}L}{\mu_{\text{\boldmath{$w$}}}^{2}} = \frac{G^{2}\log{(\log{(T/\theta)})}(64L + 16\mu_{\text{\boldmath{$w$}}}) + 2G^{2}L}{\mu_{\text{\boldmath{$w$}}}^{2}},
\end{equation}
by the values of $b$ and $c$. Hence, taking $t=T$ gives the conclusion.
\end{proof}
\subsubsection{Proof of Proposition \ref{pro:robustness}}\label{app:proof of proposition robustness}
\begin{proof}
From the definition of $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}})$, for any $r\geq 0$, we have
\begin{equation}
\small
\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}(f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})) \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon.
\end{equation}
On the other hand
\begin{equation}
\small
\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}(f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})) \leq R_{P_{n}}(\text{\boldmath{$w$}}) \leq \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon.
\end{equation}
Summing the two inequalities above, we get
\begin{equation}
\small
\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}|f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})| \leq \frac{1}{n}\sum\limits_{i=1}^{n}\left(\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - \inf_{\|\boldsymbol{\delta}\|_{p}\leq r}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})\right) \leq 2\epsilon.
\end{equation}
Then the conclusion is verified.
\end{proof}
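Proposition \ref{pro:robustness} suggests a direct empirical diagnostic: estimate $\frac{1}{n}\sum_{i}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}|f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})|$ and compare it with $2\epsilon$. The helper below is a hypothetical sketch, not the paper's procedure; sampling corners of the $\ell_{\infty}$-ball only lower-bounds the supremum:

```python
import numpy as np

def robustness_gap(f, xs, r, n_dirs=64, rng=None):
    """Estimate (1/n) sum_i sup_{||delta||_inf <= r} |f(x_i + delta) - f(x_i)|
    by sampling corners of the l_inf ball (a lower bound on the true supremum)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    gaps = []
    for x in xs:
        deltas = r * rng.choice([-1.0, 1.0], size=(n_dirs, x.shape[0]))
        gaps.append(max(abs(f(x + d) - f(x)) for d in deltas))
    return float(np.mean(gaps))

# Toy loss f(x) = ||x||_1: the gap is at most r * d_0 (here 0.1 * 2 = 0.2).
f = lambda x: float(np.abs(x).sum())
xs = [np.array([0.5, -0.2]), np.array([1.0, 1.0])]
gap = robustness_gap(f, xs, r=0.1)
print(gap)  # compare against 2 * epsilon from the proposition
```

For losses that are differentiable in the input, the corner sampling would normally be replaced by the projected-gradient inner loop of Algorithm \ref{alg:sgd}.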
\section{Proofs for Section \ref{sec:pretrain improves ood}}
\subsection{Proof of Theorem \ref{thm:pretrain generalize}}\label{app:proof of theorem pretrain generalize}
\begin{proof}
We have $r(\infty) = r$ in this theorem. The key step is to bound $|\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}_{\text{pre}})- \sup_{Q\in B_{\mathsf{W}_{\infty}}(Q_{0}, r)}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})|$; the triangle inequality and Hoeffding's inequality then imply the conclusion. Let $P^{*}_{r}\in \arg\max_{\{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)\}}R_{P}(\text{\boldmath{$w$}}_{\text{pre}})$. For any given $\boldsymbol{x}$, by the continuity of $f(\text{\boldmath{$w$}}_{\text{pre}},\cdot)$ and similar to Lemma \ref{lem:equivalence}, we can take $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}) = \boldsymbol{x} + \arg\max_{\{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_{\infty}\leq r\}}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x} + \boldsymbol{\delta})$. Then, by Lemma \ref{lem:equivalence},
\begin{equation}
\small
R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) = \mathbb{E}_{P_{0}}\left[\sup_{\|\boldsymbol{\delta}\|_{\infty}\leq r}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x} + \boldsymbol{\delta})\right].
\end{equation}
Thus, $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}) \sim P^{*}_{r}$ when $\boldsymbol{x}\sim P_{0}$. By Kolmogorov's extension theorem, we can construct $\boldsymbol{z}\sim Q_{0}$, and we let $Q^{*}_{r}$ be the distribution of $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})$. By the definition of the $\mathsf{W}_{\infty}$-distance, one can verify that $\mathsf{W}_{\infty}(Q_{0}, Q^{*}_{r})\leq r$ and hence $R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \epsilon_{\text{pre}}$. Note that $0\leq f(\text{\boldmath{$w$}}_{\text{pre}}, \cdot) \leq M$; then
\begin{equation}
\label{eq:tv distance}
\small
\begin{aligned}
\left|R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) - R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}})\right| & = \left|\int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x})dP^{*}_{r}(\boldsymbol{x}) - \int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, \boldsymbol{x})dQ^{*}_{r}(\boldsymbol{x})\right| \\
& = \left|\int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}))dP_{0}(\boldsymbol{x}) - \int_{\mathcal{X}}f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}))dQ_{0}(\boldsymbol{x})\right| \\
& \leq \int_{\mathcal{X}}\left|f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}))\right|\left|dP_{0}(\boldsymbol{x}) - dQ_{0}(\boldsymbol{x})\right| \\
& \leq M\int_{\mathcal{X}}\left|dP_{0}(\boldsymbol{x}) - dQ_{0}(\boldsymbol{x})\right| \\
& = 2M\mathsf{TV}(P_{0}, Q_{0}).
\end{aligned}
\end{equation}
The last equality follows from the definition of the total variation distance \citep{villani2008optimal}. Thus a simple triangle inequality implies that
\begin{equation}
\label{eq:tv bound}
\small
R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \left|R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) - R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}})\right| + R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \epsilon_{\text{pre}} + 2M\mathsf{TV}(P_{0}, Q_{0}).
\end{equation}
Next we give the concentration result for $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\text{pre}})$. By its definition, $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\text{pre}})$ can be rewritten as $R_{P_{n}^{*}}(\text{\boldmath{$w$}}_{\text{pre}})$, where $P_{n}^{*}$ is the empirical distribution on $\{T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}_{i})\}$. Since $0 \leq f(\text{\boldmath{$w$}}_{\text{pre}}, \cdot) \leq M$ and $\{T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}_{i})\}$ are i.i.d. draws from $P^{*}_{r}$, Hoeffding's inequality (Corollary 2.20 in \citep{wainwright2019}) shows that with probability at least $1 - \theta$,
\begin{equation}
\small
\begin{aligned}
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\text{pre}}) - R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) & = \frac{1}{n}\sum\limits_{i=1}^{n}f(\text{\boldmath{$w$}}_{\text{pre}}, T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x}_{i})) - R_{P^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq M\sqrt{\frac{\log{(1/\theta)}}{2n}}.
\end{aligned}
\end{equation}
Hence we get our conclusion.
\end{proof}
\subsection{Proof of Theorem \ref{thm:pretrain generalize l2}}
Abusing notation slightly, we write $r$ for $r(2) = r/\epsilon_{\text{pre}}$ throughout this proof, and let $P^{*}_{r}\in\arg\max_{P\in B_{\mathsf{W}_{2}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}})$. By Lemma \ref{lem:optimal}, there exists $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{x})\sim P^{*}_{r}$ with $\boldsymbol{x}\sim P_{0}$. By Kolmogorov's extension theorem, we can construct $\boldsymbol{z}\sim Q_{0}$. Letting $Q^{*}_{r}$ be the distribution of $T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})$, we see
\begin{equation}
\small
\begin{aligned}
\mathsf{W}_{2}(Q_{0}, Q^{*}_{r})^{2} & \leq \int_{\mathcal{X}}\|\boldsymbol{z} - T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\|^{2}dQ_{0}(\boldsymbol{z}) \\
& \leq \int_{\mathcal{X}}\|\boldsymbol{z} - T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\|^{2}\left|dQ_{0}(\boldsymbol{z}) - dP_{0}(\boldsymbol{z})\right| + \int_{\mathcal{X}}\|\boldsymbol{z} - T^{\text{\boldmath{$w$}}_{\text{pre}}}_{r}(\boldsymbol{z})\|^{2} dP_{0}(\boldsymbol{z}) \\
& \leq D^{2}\int_{\mathcal{X}}\left|dQ_{0}(\boldsymbol{z}) - dP_{0}(\boldsymbol{z})\right| + r^{2} \\
& = 2D^{2}\mathsf{TV}(P_{0}, Q_{0}) + r^{2}.
\end{aligned}
\end{equation}
Thus $R_{Q^{*}_{r}}(\text{\boldmath{$w$}}_{\text{pre}}) \leq \epsilon_{\text{pre}}$. Arguing as in \eqref{eq:tv distance} and \eqref{eq:tv bound}, we get the conclusion.
\section{Hyperparameters}\label{app:hyp on adv}
\begin{table*}[htbp]
\centering
\scalebox{0.9}{
\begin{minipage}{0.5\linewidth}
\caption{Hyperparameters of adversarial training on \texttt{CIFAR10}.}\label{tbl:hyper adv cifar}
\vspace{-0.1in}
\begin{tabular}{c c c c}
\hline
Hyperparam & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$\\
\hline
Learning Rate & 0.1 & 0.1 & 0.1 \\
Momentum & 0.9 & 0.9 & 0.9 \\
Batch Size & 128 & 128 & 128 \\
Weight Decay & 5e-4 & 5e-4 & 5e-4 \\
Epochs & 200 & 200 & 200 \\
Inner Loop Steps & - & 8 & 8 \\
Perturbation Size & - & 2/12 & 2/255 \\
Perturbation Step Size & - & 1/24 & 1/510 \\
\hline
\end{tabular}
\end{minipage}
\hspace{0.2in}
\begin{minipage}{0.5\linewidth}
\caption{Hyperparameters of adversarial training on \texttt{ImageNet}.}\label{tbl:hyper adv imagenet}
\vspace{-0.1in}
\begin{tabular}{c c c c}
\hline
Hyperparam & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$\\
\hline
Learning Rate & 0.1 & 0.1 & 0.1 \\
Momentum & 0.9 & 0.9 & 0.9 \\
Batch Size & 512 & 512 & 512 \\
Weight Decay & 5e-4 & 5e-4 & 5e-4 \\
Epochs & 100 & 100 & 100 \\
Inner Loop Steps & - & 3 & 3 \\
Perturbation Size & - & 0.25 & 2/255 \\
Perturbation Step Size & - & 0.05 & 1/510 \\
\hline
\end{tabular}
\end{minipage}
}
\end{table*}
\begin{table*}[htbp]
\caption{Hyperparameters of adversarial training on $\text{BERT}$ base model.}
\vspace{-0.1in}
\label{tbl:hyper}
\centering
\scalebox{0.8}{
{
\begin{tabular}{c c c c}
\hline
Hyperparam & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$\\
\hline
Learning Rate & 3e-5 & 3e-5 & 3e-5 \\
Batch Size & 32 & 32 & 32 \\
Weight Decay & 0 & 0 & 0 \\
Hidden Layer Dropout Rate & 0.1 & 0.1 & 0.1 \\
Attention Probability Dropout Rate & 0.1 & 0.1 & 0.1 \\
Max Epochs & 10 & 10 & 10 \\
Learning Rate Decay & Linear & Linear & Linear\\
Warmup Ratio & 0 & 0 & 0 \\
Inner Loop Steps & - & 3 & 3 \\
Perturbation Size & - & 1.0 & 0.001 \\
Perturbation Step Size & - & 0.1 & 0.0005 \\
\hline
\end{tabular}}}
\end{table*}
\section{Ablation Study}
\label{app:perturbation}
\subsection{Effect of Perturbation Size}\label{app:perturbation size}
We study the effect of the perturbation size $r$ of adversarial training, which appears
in bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2}.
We vary the perturbation size $r$ in $\{2^{-5}/12, 2^{-4}/12, 2^{-3}/12, 2^{-2}/12, 2^{-1}/12, 2^{0}/12, 2^{1}/12, 2^{2}/12, 2^{3}/12, 2^{4}/12, 2^{5}/12, 2^{6}/12, 2^{7}/12\}$ for Adv-$\ell_{2}$ and in $\{2^{-4}/255, 2^{-3}/255, 2^{-2}/255, 2^{-1}/255, 2^{0}/255, 2^{1}/255, 2^{2}/255, 2^{3}/255, 2^{4}/255\}$ for Adv-$\ell_{\infty}$.
The perturbation step size $\eta_{\boldsymbol{x}}$ in Algorithm \ref{alg:sgd} is set to $r/4$ \citep{salman2020adversarially}.
Experiments are conducted on $\texttt{CIFAR10}$ and the settings follow those in Section \ref{sec:Experiments on Image Classification}.
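For reference, the sweep described above can be generated as follows (a small sketch; the variable names are ours, purely for illustration):

```python
# Perturbation-size grids used in this ablation; the inner step size is r/4,
# following Salman et al. (2020).
r_l2   = [2.0 ** k / 12  for k in range(-5, 8)]   # Adv-l2:   2^-5/12 ... 2^7/12
r_linf = [2.0 ** k / 255 for k in range(-4, 5)]   # Adv-linf: 2^-4/255 ... 2^4/255
steps  = {r: r / 4 for r in r_l2 + r_linf}        # eta_x = r/4 for every r
```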
\par
The results are shown in Figures \ref{fig:adv_l2_r} and \ref{fig:adv_linf_r}. Over the studied ranges, the accuracy on the OOD data from all categories exhibits a similar trend as $r$ increases: it first increases and then decreases. This is consistent with our discussion in Section \ref{sec:Experiments on Image Classification} that there is an optimal perturbation size $r$ for improving OOD generalization via adversarial training. For data corrupted under the types Fog, Bright, and Contrast, adversarial training degrades the performance in Table \ref{tbl:adversarial training on image}. We speculate that this is because these three corruption types rescale the input pixel values to smaller values, so the same perturbation size $r$ leads to a relatively larger perturbation.
Since there is an optimal $r$ for improving OOD generalization,
we suggest conducting adversarial training with a smaller perturbation size to defend against these three types of corruption.
Figures \ref{fig:adv_l2_r} and \ref{fig:adv_linf_r} also show
that smaller perturbation sizes perform better for these three corruption types.
\subsection{Effect of the Number of Training Samples}\label{app:number of training samples}
We study the effect of the number of training samples, as bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} suggest that more training samples lead to better OOD generalization.
We construct five training subsets of \texttt{CIFAR10} with 10000, 20000, 30000, 40000, and 50000 samples, respectively.
The other settings follow those in Section \ref{sec:Experiments on Image Classification}.
The results are shown in Figures \ref{fig:adv_l2_num} and \ref{fig:adv_linf_num}.
\begin{figure*}[htbp]\centering
\subfloat[Clean.]{\includegraphics[width=0.198\textwidth]{./pic/image-c/clean.JPEG}}
\subfloat[Gauss.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/gauss.JPEG}}
\subfloat[Shot.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/shot.JPEG}}
\subfloat[Impulse.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/impulse.JPEG}}
\vspace{-0.1in}
\\
\subfloat[Defocus.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/defocus.JPEG}}
\subfloat[Glass.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/glass.JPEG}}
\subfloat[Motion.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/motion.JPEG}}
\subfloat[Zoom.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/zoom.JPEG}}
\vspace{-0.1in}
\\
\subfloat[Snow.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/snow.JPEG}}
\subfloat[Frost.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/frost.JPEG}}
\subfloat[Fog.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/fog.JPEG}}
\subfloat[Bright.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/bright.JPEG}}
\vspace{-0.1in}
\\
\subfloat[Contrast.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/contrast.JPEG}}
\subfloat[Elastic.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/elastic.JPEG}}
\subfloat[Pixel.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/pixel.JPEG}}
\subfloat[JPEG.]{\includegraphics[width=0.195\textwidth]{./pic/image-c/jpeg.JPEG}}
\vspace{-0.1in}
\caption{
15 types of artificially constructed corruptions from four categories (Noise, Blur, Weather, and Digital) in the \texttt{ImageNet-C} dataset \citep{hendrycks2018benchmarking}.
Each corruption has five severity levels; figures at severity 5 are shown here.}
\label{fig:imagenet-c}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Motion.png}}
\\
\vspace{-0.2in}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Frost.png}}
\\
\vspace{-0.2in}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Contrast.png}}
\\
\vspace{-0.2in}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2/JPEG.png}}
\caption{Accuracy of Adv-$\ell_{2}$ on \texttt{CIFAR10-C} over various perturbation sizes. The $x$-axis means the perturbation size is $2^{x}/12$.}
\label{fig:adv_l2_r}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Motion.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Frost.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Contrast.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv/JPEG.png}}
\vspace{-0.2in}
\caption{Accuracy of Adv-$\ell_{\infty}$ on \texttt{CIFAR10-C} over various perturbation sizes. The $x$-axis means the perturbation size is $2^{x}/255$.}
\label{fig:adv_linf_r}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Motion.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Frost.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Contrast.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_l2_num/JPEG.png}}
\caption{Accuracy of Adv-$\ell_{2}$ on \texttt{CIFAR10-C} over various numbers of training samples.}
\label{fig:adv_l2_num}
\end{figure*}
\begin{figure*}[htbp]\centering
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Gauss.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Shot.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Impulse.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Defocus.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Glass.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Motion.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Zoom.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Snow.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Frost.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Fog.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Bright.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Contrast.png}}
\vspace{-0.2in}
\\
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Elastic.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/Pixel.png}}
\subfloat{\includegraphics[width=0.33\textwidth]{./pic/adv_num/JPEG.png}}
\caption{Accuracy of Adv-$\ell_{\infty}$ on \texttt{CIFAR10-C} over various numbers of training samples.}
\label{fig:adv_linf_num}
\end{figure*}
\section{Introduction}
In the machine learning community, the training and test distributions are often not identically distributed. Due to this mismatch, it is desirable to learn a model that generalizes well on out-of-distribution (OOD) data even though it is trained on data from a single distribution. OOD generalization is empirically studied in \citep{hendrycks2019using,hendrycks2020many,hendrycks2020pretrained} by
evaluating the performance of the model on test sets whose distributions are close to that of the original training samples. However, the theoretical understanding of these empirical OOD generalization behaviors remains unclear.
\par
Intuitively, OOD generalization measures the performance of a model on data from a shifted distribution around the original training distribution \citep{hendrycks2018benchmarking}. This is equivalent to distributional robustness \citep{namkoong2019reliable,shapiro2017distributionally}, which measures a model's robustness to perturbations of the training data distribution. Inspired by this, we study OOD generalization by utilizing the Wasserstein distance to measure the shift between distributions (Definition \ref{def: model robustness}). We theoretically find that if a model is robust to input perturbations on training samples (namely, an input-robust model), it also generalizes well on OOD data.
\par
The connection between input-robustness and OOD generalization inspires us to seek an input-robust model, since such a model generalizes well on OOD data. We thus consider adversarial training (AT)~\citep{madry2018towards},
as
\citet{athalye2018obfuscated} show that a model is input-robust if it defends against adversarial perturbations \citep{szegedy2013intriguing}.
Mathematically, AT can be formulated as a minimax optimization problem and solved by the multi-step
SGD algorithm \citep{nouiehed2019solving}. Under mild assumptions, we prove that the convergence rate of this multi-step SGD for AT is
$\tilde{\mathcal{O}}(1/T)$
both in expectation and in high probability,
where $T$ is the number of training steps and $\tilde{\mathcal{O}}(\cdot)$ is defined in the Notations paragraph. Then, combining the convergence result with the relationship between input-robustness and OOD generalization, we theoretically show that for a model adversarially trained with $n$ training samples for $T$ steps, its excess risk on OOD data is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$, which guarantees its performance on OOD data.
\par
Besides models trained from scratch, we also study the OOD generalization of pre-trained models
on downstream tasks,
as
the paradigm of first pre-training on a large-scale dataset and then fine-tuning on downstream tasks
has recently achieved remarkable performance in both the computer vision (CV)~\citep{hendrycks2019using,kornblith2019better} and natural language processing (NLP)~\citep{devlin2019bert} domains. Given the aforementioned relationship between input-robustness and OOD generalization, we theoretically show that a pre-trained model that is more robust to input perturbations also provides a better initialization for generalization on downstream OOD data. Thus, we suggest conducting adversarial pre-training, as in \citep{salman2020adversarially,hendrycks2019using,utrera2020adversarially}, to
improve OOD generalization on downstream tasks.
\par
We conduct various experiments on both image classification (IC) and natural language understanding (NLU) tasks to verify our theoretical findings.
\par
For the IC task, we conduct AT on \texttt{CIFAR10} \citep{krizhevsky2009learning} and \texttt{ImageNet} \citep{deng2009imagenet}, and then evaluate the OOD generalization of these models on the corrupted OOD data \texttt{CIFAR10-C} and \texttt{ImageNet-C} \citep{hendrycks2018benchmarking}. For the NLU tasks, we similarly conduct AT as in \citep{zhu2019freelb} on the \texttt{SST-2}, \texttt{IMDB}, \texttt{MNLI}, and \texttt{STS-B} datasets.
Then we follow the strategy in \citep{hendrycks2020pretrained} to evaluate the OOD generalization.
Empirical results on both IC and NLU tasks verify that AT improves OOD generalization.
\par
To see the effect of the initialization provided by an input-robust pre-trained model, we adversarially pre-train a model on \texttt{ImageNet} to improve the input-robustness, and then fine-tune the pre-trained model on \texttt{CIFAR10}.
Empirical results show that this initialization enhances the OOD generalization on downstream tasks after fine-tuning.
Another interesting observation is that for language models, standard pre-training by masked language modeling \citep{devlin2019bert,liu2019roberta} already improves the input-robustness of the model.
Moreover, models pre-trained with more training samples and more update steps
are more input-robust, which may also explain the better OOD generalization on downstream tasks~\citep{hendrycks2020pretrained} of these models.
\paragraph{Notations.}
For vector $\boldsymbol{x}\in\mathbb{R}^{d_{0}}$, $\|\boldsymbol{x}\|_{p}$ is its $\ell_{p}$-norm, and
its $\ell_{2}$-norm is simplified as $\|\boldsymbol{x}\|$.
$\mathcal{P}(\mathcal{X})$ is the set of probability measures on metric space $(\mathcal{X}, \|\cdot\|_{p})$ with $\mathcal{X} \subseteq \mathbb{R}^{d_{0}}$.
$\mathcal{O}(\cdot)$ denotes the order of a quantity, and $\tilde{\mathcal{O}}(\cdot)$ hides a poly-logarithmic factor in the problem parameters, e.g.,
$\mathcal{O}(M_{1}\log{d_{0}}) = \tilde{\mathcal{O}}(M_{1})$.
For $P, Q\in\mathcal{P}(\mathcal{X})$,
let $(P, Q)$ denote the set of couplings of $P$ and $Q$, i.e., probability measures on $\mathcal{X}\times \mathcal{X}$ with marginals $P$ and $Q$.
The $p$-th ($p<\infty$) Wasserstein distance \citep{villani2008optimal} between $P$ and $Q$ is
\begin{equation}
\label{eq:w distance}
\small
\mathsf{W}_{p}(P, Q) = \left(\inf_{\pi\in(P, Q)}\mathbb{E}_{(\boldsymbol{u}, \boldsymbol{v})\sim \pi}\left[\|\boldsymbol{u} - \boldsymbol{v}\|^{p}_{p}\right]\right)^{\frac{1}{p}}.
\end{equation}
When $p=\infty$, the $\infty$-Wasserstein distance is $\mathsf{W}_{\infty}(P, Q) = \lim_{p\to\infty}\mathsf{W}_{p}(P, Q)$.
In the sequel, the $p$-Wasserstein distance is abbreviated as $\mathsf{W}_{p}$-distance.
The total variation distance \citep{villani2008optimal} is another distance between distributions, defined as
\begin{equation}\label{eq:tv}
\small
\mathsf{TV}(P, Q) = \frac{1}{2}\int_{\mathcal{X}}\left|dP(\boldsymbol{x}) - dQ(\boldsymbol{x})\right|.
\end{equation}
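As a concrete illustration of \eqref{eq:w distance} and \eqref{eq:tv}, the following sketch (our own, not from the paper) computes both distances for one-dimensional empirical and discrete distributions; the sorted-coupling formula for $\mathsf{W}_{p}$ is specific to one dimension:

```python
import numpy as np

def wasserstein_p(x, y, p):
    """W_p between two equal-size 1-D empirical distributions.
    In one dimension the monotone (sorted-sample) coupling is optimal."""
    d = np.abs(np.sort(x) - np.sort(y))
    return d.max() if p == np.inf else float(np.mean(d ** p)) ** (1.0 / p)

def total_variation(pmf_p, pmf_q):
    """TV distance between two pmfs on a common finite support."""
    return 0.5 * np.abs(np.asarray(pmf_p) - np.asarray(pmf_q)).sum()

# shifting every sample by a constant c gives W_p = |c| for every p
x = np.array([0.0, 1.0, 2.0, 3.0])
assert wasserstein_p(x, x + 0.5, 2) == 0.5
assert wasserstein_p(x, x + 0.5, np.inf) == 0.5
```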
\section{Related Work}
\paragraph{OOD Generalization.}
OOD generalization measures a model's ability to extrapolate beyond the training distribution~\citep{hendrycks2018benchmarking}, and
has been widely explored in both CV~\citep{recht2019imagenet,schneider2020improving,salman2020unadversarial} and NLP domains~\citep{tu2020empirical,lohn2020estimating}.
\citet{hendrycks2018benchmarking} observe that naturally trained models are sensitive to artificially constructed OOD data. They also find that adversarial logit pairing \citep{kannan2018adversarial} can improve a model's performance on noise-corrupted OOD data.
\citet{hendrycks2020pretrained} further empirically find that pre-trained language models
generalize on downstream OOD data.
However, the theoretical understanding behind these observations remains unclear.
\paragraph{Adversarial Training.}
Adversarial training \citep{madry2018towards} is proposed to improve input-robustness
by dynamically constructing augmented adversarial samples \citep{szegedy2013intriguing,goodfellow2015explaning}
via projected gradient descent during training.
In this paper, we first show the close relationship between OOD generalization and distributional robustness \citep{ben2013robust,shapiro2017distributionally}, and then
explore the OOD generalization by
connecting input-robustness
and distributional robustness.
\par
The works most related to ours are
\citep{sinha2018certifying,lee2018minimax,volpi2018generalizing}.
They also use AT to train distributionally robust models under the Wasserstein distance, but their results are restricted to a specialized AT objective with an additional regularizer,
which can be impractical due to its large penalty parameter.
Moreover, their bounds are built upon the entropy integral and grow with model capacity, which can render them meaningless for high-dimensional models.
In contrast, our bound is
\paragraph{Pre-Training.}
Pre-trained models transfer the knowledge in the pre-training stage to downstream tasks,
and are widely used in both CV \citep{kornblith2019better} and NLP \citep{devlin2019bert} domains.
For instance, \citet{dosovitskiy2020image,brown2020language,radford2021learning} pre-train the transformer-based models on large-scale datasets, and obtain remarkable results on downstream tasks.
Standard pre-training is empirically found to help
reduce model uncertainty for both image data \citep{hendrycks2019using,hendrycks2020many}
and textual data~\citep{hendrycks2020pretrained}.
Adversarial pre-training is explored in \citep{hendrycks2019using} and \citep{salman2020adversarially}, and is shown to improve the robustness and the generalization on downstream tasks, respectively.
In this work, we theoretically analyze the OOD generalization on downstream tasks from the perspective of the input-robustness of the pre-trained model.
\section{Adversarial Training Improves OOD Generalization}
\label{sec:Learning Robust Model Results in Better OOD Generalization}
In this section, after specifying the definition of OOD generalization, we first show that an input-robust model can generalize well on OOD data.
Then, to learn a robust model, we suggest adversarial training (AT) \citep{madry2018towards}.
Under mild conditions, we prove a $\tilde{\mathcal{O}}(1/T)$ convergence rate for AT both in expectation and in high probability.
With this, we show that the excess risk of an adversarially trained model on OOD data is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$ where $n$ is the number of training samples.
\subsection{Input-Robust Model Generalizes on OOD Data}
\label{sec:Robustness Corresponds with Better OOD Generalization}
Suppose
$\{(\boldsymbol{x}_{i}, y_{i})\}_{i=1}^{n}$ is the
training set with
$n$ i.i.d.\ training samples $\{\boldsymbol{x}_{i}\}$ and their labels $\{y_{i}\}$.
We assume the training distribution $P_{0}$
has compact support $\mathcal{X}\subseteq \mathbb{R}^{d_{0}}$; thus there exists $D > 0$ such that $\|\boldsymbol{u} - \boldsymbol{v}\|_{1}\leq D$ for all $\boldsymbol{u}, \boldsymbol{v}\in\mathcal{X}$.
For a training sample $\boldsymbol{x}$ with label $y$,
the loss on $(\boldsymbol{x}, y)$ under model parameter $\text{\boldmath{$w$}}$ is $\mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))$, which is continuous and differentiable in both $\text{\boldmath{$w$}}$ and $(\boldsymbol{x}, y)$.
Besides, we assume $0 \leq \mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))\leq M$ for a constant $M$, without loss of generality.
We represent the expected risk under a distribution $P$ and the label distribution $P_{y\mid \boldsymbol{x}}$
\footnote{$P_{y\mid \boldsymbol{x}_{i}}(\cdot)=\textbf{1}_{\{\cdot = y_{i}\}}$ where $\textbf{1}_{\{\cdot = y_{i}\}}$ is the indicator function.}
as $R_{P}(\text{\boldmath{$w$}}) = \mathbb{E}_{P}[\mathbb{E}_{P_{y\mid \boldsymbol{x}}}[\mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))]]$.
For simplicity of notation, let $\mathbb{E}_{P_{y\mid \boldsymbol{x}}}[\mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}, y))] = f(\text{\boldmath{$w$}}, \boldsymbol{x})$ in the sequel,
e.g., $f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}) = \mathcal{L}(\text{\boldmath{$w$}}, (\boldsymbol{x}_{i}, y_{i}))$.
\par
Intuitively, OOD generalization is determined by the performance of the model on a shifted distribution close to the training data-generating distribution $P_{0}$ \citep{hendrycks2018benchmarking,hendrycks2020pretrained}.
Defining OOD generalization thus requires a distance between distributions.
We use the Wasserstein distance, as in \citep{sinha2018certifying}.
\par
Let
$P_{n}(\cdot)=\frac{1}{n}\sum_{i=1}^{n}\textbf{1}_{\{\cdot = \boldsymbol{x}_{i}\}}$ be the empirical distribution, and $B_{\mathsf{W}_{p}}(P_{0}, r) = \{P: \mathsf{W}_{p}(P_{0}, P) \leq r\}$.
Then we define the OOD generalization error as
\begin{equation}
\label{eq:ood gen}
\small
\mathcal{E}_{\text{gen}}^{\text{ood}}(p, r) = \left|\sup_{P\in B_{\mathsf{W}_{p}}(P_{0}, r)}R_{P}(\text{\boldmath{$w$}}) - R_{P_{n}}(\text{\boldmath{$w$}})\right|,
\end{equation}
under the $\mathsf{W}_{p}$-distance with $p\in\{2, \infty\}$. The extension to other OOD generalization errors with $p < \infty$ is straightforward by generalizing the analysis for $p=2$.
Note that \eqref{eq:ood gen} reduces to the generalization error on in-distribution data when $r=0$.
\begin{definition}\label{def: model robustness}
A model is $(r, \epsilon, P, p)$-input-robust, if
\begin{equation}
\small
\mathbb{E}_{P}\left[\sup_{\|\boldsymbol{\delta}\|_{p}\leq r}|f(\text{\boldmath{$w$}}, \boldsymbol{x} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}}, \boldsymbol{x})|\right] \leq \epsilon.
\end{equation}
\end{definition}
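Definition \ref{def: model robustness} can be estimated empirically. The sketch below is a toy illustration with a hypothetical smooth loss $f$ (not the paper's model); random search over the ball only lower-bounds the supremum, so the returned value is a lower estimate of $\epsilon$ for a given radius $r$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(w, x):
    # toy stand-in for f(w, x) = E_{y|x}[L(w, (x, y))]: a smooth logistic loss
    return float(np.log1p(np.exp(-w @ x)))

def robustness_eps(w, X, r, p=np.inf, trials=200):
    """Monte-Carlo (lower) estimate of the epsilon in Definition 1:
    E_{P_n}[ sup_{||delta||_p <= r} |f(w, x + delta) - f(w, x)| ]."""
    total = 0.0
    for x in X:
        if p == np.inf:
            deltas = rng.uniform(-r, r, size=(trials, x.size))
        else:  # p == 2: random directions with random radii inside the l2 ball
            d = rng.normal(size=(trials, x.size))
            d /= np.linalg.norm(d, axis=1, keepdims=True)
            deltas = d * rng.uniform(0.0, r, size=(trials, 1))
        total += max(abs(f(w, x + dl) - f(w, x)) for dl in deltas)
    return total / len(X)

w = np.array([1.0, -1.0])
X = rng.normal(size=(5, 2))
e_small, e_big = robustness_eps(w, X, 0.01), robustness_eps(w, X, 0.5)
assert e_small < e_big   # a smaller radius r yields a smaller epsilon
```

Since this $f$ is 1-Lipschitz in the scalar $\text{\boldmath{$w$}}^{\top}\boldsymbol{x}$, the estimate for radius $r$ is at most $\|\text{\boldmath{$w$}}\|_{1}\,r$ under $\ell_{\infty}$ perturbations, which matches the intuition that smaller $r$ gives smaller $\epsilon$.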
With the input-robustness in Definition~\ref{def: model robustness},
the following Theorems~\ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} give the generalization bounds on the OOD data drawn from $Q\in B_{\mathsf{W}_{p}}(P_{0}, r_{0})$ with $p\in\{2, \infty\}$.
\begin{theorem}\label{thm:ood generalization upper bound}
If a model is $(2r, \epsilon, P_{n}, \infty)$-input-robust, then with probability at least $1 - \theta$,
\begin{equation}
\label{eq:ood bound linf}
\small
\begin{aligned}
\mathcal{E}_{\text{\emph{gen}}}^{\text{\emph{ood}}}(\infty, r_{0})
\leq \epsilon + M\sqrt{\frac{(2d_{0})^{\frac{2D}{r^2} + 1}\log{2} + 2\log{(\frac{1}{\theta}})}{n}},
\end{aligned}
\end{equation}
for any $r_{0}\leq r$. Here $D$ is the $\ell_{1}$-diameter of data support $\mathcal{X}$ with dimension $d_{0}$, and $M$ is an upper bound of $f(\text{\boldmath{$w$}}, \boldsymbol{x})$.
\end{theorem}
\begin{theorem}
\label{thm:ood generalization upper bound l2}
If a model is $(2r/\epsilon, \epsilon, P_{n}, 2)$-input-robust, then with probability at least $1 - \theta$,
\begin{equation}
\label{eq:ood bound l2}
\small
\begin{aligned}
\!\!\!\!\! \mathcal{E}_{\text{\emph{gen}}}^{\text{\emph{ood}}}(2, r_{0})
\!\leq \! (M\!+\!1)\epsilon \! + \! M\sqrt{\frac{(2d_{0})^{\frac{2\epsilon^{2}D}{r^2}\! + \! 1}\log{2} \!+\! 2\log{(\frac{1}{\theta})}}{n}},
\end{aligned}
\end{equation}
for any $r_{0}\leq r$, where the notations follow Theorem \ref{thm:ood generalization upper bound}.
\end{theorem}
\begin{remark}
When $r_{0}=0$, the bounds in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} become the generalization bounds on in-distribution data.
\end{remark}
\par
\begin{remark}
The $\epsilon$ in Theorem \ref{thm:ood generalization upper bound l2} cannot be infinitely small, as the model is required to be robust in $B(\boldsymbol{x}_{i}, 2r/\epsilon)$ for each $\boldsymbol{x}_{i}$. Specifically, as $\epsilon \to 0$, the robust region $B(\boldsymbol{x}_{i}, 2r/\epsilon)$ covers the data support $\mathcal{X}$, so the model has an almost constant output on $\mathcal{X}$.
\end{remark}
\begin{remark}
The bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} become vacuous when $r$ is large. Thus, our results cannot be applied to OOD data from distributions far away from the original training distribution. For example, ImageNet-R \citep{hendrycks2020many} consists of data from different renditions, e.g., photo vs.\ cartoon, where most pixels vary, leading to a large $\|\boldsymbol{u} - \boldsymbol{v}\|_{p}^{p}$ in \eqref{eq:w distance} and thus a large distributional distance.
\end{remark}
The proofs of Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} are in Appendix \ref{app:proof in Robustness Corresponds with Better OOD Generalization}.
Lemmas \ref{lem:equivalence} and \ref{lem:optimal} in Appendix \ref{app:proof in Learning Robust Model Results in Better OOD Generalization} show that the OOD data concentrate around the in-distribution data with high probability. Thus, the robustness of the model on training samples guarantees generalization on OOD data.
The observations from Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2} are summarized as follows.
\begin{enumerate}
\item
The right-hand sides of bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} imply that a more input-robust model (i.e., a larger $r$ and a smaller $\epsilon$ in Definition \ref{def: model robustness}) has smaller OOD generalization bound, and thus better performance on OOD data.
\item For both \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2},
a larger number of training samples $n$ results in a smaller upper bound. This indicates that in a high-dimensional regime with a large feature dimension $d_{0}$ and a large diameter $D$ of the data support, more training samples can compensate for the generalization degradation caused by large $d_{0}$ and $D$.
\item The bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} are independent of the model capacity.
Compared with other uniform convergence generalization bounds which increase with the model capacity (e.g., Rademacher complexity \citep{yin2019rademacher} or entropy integral \citep{sinha2018certifying}), our bounds are superior for models with high capacity.
\end{enumerate}
\subsection{Adversarial Training Improves Input-Robustness}\label{sec:robust training}
\begin{algorithm}[t!]
\caption{Multi-Step SGD.}
\label{alg:sgd}
\textbf{Input:} Number of training steps $T$, learning rates $\eta_{\text{\boldmath{$w$}}_{t}}$ for the model parameters and $\eta_{\boldsymbol{x}}$ for the adversarial input, two initialization points $\text{\boldmath{$w$}}_{1}, \boldsymbol{\delta}_{1}$, constant $p\in\{2, \infty\}$, and perturbation size $r$.\\
\textbf{Return} $\text{\boldmath{$w$}}_{T + 1}$.
\begin{algorithmic}[1]
\FOR {$t=1, \cdots, T$}
\STATE {Uniformly sample $i_{t}$ from $\{1,\cdots, n\}$.}
\FOR {$k=1, \cdots, K$}
\STATE{$\boldsymbol{\delta}_{k + 1} = \text{Proj}_{B_{p}(\textbf{0}, r)}\left(\boldsymbol{\delta}_{k} \!+\! \eta_{\boldsymbol{x}}\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} \!+\! \boldsymbol{\delta}_{k})\right)$.}
\ENDFOR
\STATE {$\text{\boldmath{$w$}}_{t + 1} = \text{\boldmath{$w$}}_{t} - \eta_{\text{\boldmath{$w$}}_{t}}\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{K + 1})$.}
\ENDFOR
\end{algorithmic}
\end{algorithm}
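As a minimal illustration of Algorithm \ref{alg:sgd}, the following NumPy sketch instantiates the inner projected-gradient-ascent loop and the outer SGD step for logistic regression on a toy two-blob dataset; the data, loss, and hyperparameters are ours, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def proj(delta, r, p):
    """Projection onto the l_p ball B_p(0, r), for p in {2, inf}."""
    if p == np.inf:
        return np.clip(delta, -r, r)
    n = np.linalg.norm(delta)
    return delta if n <= r else delta * (r / n)

def multi_step_sgd(X, y, T=500, K=8, r=0.1, p=np.inf, eta_w=0.1, eta_x=0.025):
    """Algorithm 1 for the logistic loss f(w, x) = log(1 + exp(-y * w.x)):
    K projected gradient-ascent steps on delta, then one descent step on w."""
    w = np.zeros(X.shape[1])
    for _ in range(T):
        i = rng.integers(len(X))                  # uniformly sample i_t
        x, delta = X[i], np.zeros(X.shape[1])
        for _ in range(K):                        # inner maximization over delta
            grad_x = -y[i] * w / (1.0 + np.exp(y[i] * w @ (x + delta)))
            delta = proj(delta + eta_x * grad_x, r, p)
        grad_w = -y[i] * (x + delta) / (1.0 + np.exp(y[i] * w @ (x + delta)))
        w -= eta_w * grad_w                       # outer minimization over w
    return w

# toy demo: two Gaussian blobs with labels in {-1, +1}
X = np.vstack([rng.normal(2.0, 1.0, (100, 2)), rng.normal(-2.0, 1.0, (100, 2))])
y = np.array([1] * 100 + [-1] * 100)
w = multi_step_sgd(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

With $\text{\boldmath{$w$}}_{1} = \boldsymbol{0}$, the first inner loop is a no-op (the input gradient vanishes), after which the adversarial perturbation becomes active as $\text{\boldmath{$w$}}$ moves away from the origin.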
As justified by Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}, an input-robust model can generalize on OOD data.
Thus, we consider
training an input-robust model with the following objective
\begin{equation}
\small
\begin{aligned}
& \min_{\text{\boldmath{$w$}}}\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p) = \min_{\text{\boldmath{$w$}}}\frac{1}{n}\sum\limits_{i=1}^{n}\sup_{\|\boldsymbol{\delta}\|_{p}\leq r(p)}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})\\
& \!=\! \min_{\text{\boldmath{$w$}}}\frac{1}{n}\sum\limits_{i=1}^{n}[\underbrace{f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i})}_{\text{clean acc}} \!+\! \sup_{\|\boldsymbol{\delta}\|_{p}\leq r(p)}\underbrace{(f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} \!+\! \boldsymbol{\delta}) \!-\! f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}))}_{\text{input-robustness}}], \label{eq:objective}
\end{aligned}
\end{equation}
which is from AT~\citep{madry2018towards}, and can be decomposed into the clean accuracy term and the input-robustness term.
We consider $p\in\{2, \infty\}$ as in Section \ref{sec:Robustness Corresponds with Better OOD Generalization}, with $r(2) \!=\! 2r/\epsilon_{0}, r(\infty) \!=\! 2r$ for any given small constant $\epsilon_{0}$.
\par
Besides the general assumptions in Section~\ref{sec:Robustness Corresponds with Better OOD Generalization}, we also use the following mild assumptions in this subsection.
\begin{assumption}
\label{ass:Lip continuous}
The loss $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ satisfies the following Lipschitz smoothness conditions
\begin{equation}
\small
\begin{aligned}
\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{1}, \boldsymbol{x}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}_{2}, \boldsymbol{x})\| & \leq L_{11}\|\text{\boldmath{$w$}}_{1} - \text{\boldmath{$w$}}_{2}\|, \\
\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{1}) - \nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{2})\| & \leq L_{12}\|\boldsymbol{x}_{1} - \boldsymbol{x}_{2}\|, \\
\|\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{1}, \boldsymbol{x}) - \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{2}, \boldsymbol{x})\| & \leq L_{21}\|\text{\boldmath{$w$}}_{1} - \text{\boldmath{$w$}}_{2}\|, \\
\|\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{1}) - \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{2})\| & \leq L_{22}\|\boldsymbol{x}_{1} - \boldsymbol{x}_{2}\|.
\end{aligned}
\end{equation}
\end{assumption}
\begin{assumption}
\label{ass:grad_bound}
$\|\nabla_{\text{\boldmath{$w$}}}f(\text{\boldmath{$w$}}, \boldsymbol{x})\|$ is upper bounded by $G$.
\end{assumption}
\begin{assumption}
\label{ass:PL inequality}
For $p\in\{2, \infty\}$, $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)$ in \eqref{eq:objective} satisfies the PL-inequality:
\begin{equation}
\small
\!\!\frac{1}{2}\|\nabla_{\text{\boldmath{$w$}}} \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)\|^{2} \!\geq\! \mu_{\text{\boldmath{$w$}}}\left(\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p) \!-\! \inf_{\text{\boldmath{$w$}}}\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)\right).
\end{equation}
For any $\text{\boldmath{$w$}}$ and training sample $\boldsymbol{x}_{i}$, $f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta})$ is $\mu_{\boldsymbol{x}_{i}}$-strongly concave in $\boldsymbol{\delta}$ for $\|\boldsymbol{\delta}\|_{p} \leq r(p)$:
\begin{equation}
\small
f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i} + \boldsymbol{\delta}) - f(\text{\boldmath{$w$}},\boldsymbol{x}_{i}) \leq \langle \nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}, \boldsymbol{x}_{i}), \boldsymbol{\delta}\rangle - \frac{\mu_{\boldsymbol{x}_{i}}}{2}\|\boldsymbol{\delta}\|^{2},
\end{equation}
where $\mu_{\text{\boldmath{$w$}}}$ and $\mu_{\boldsymbol{x}_{i}}$ are constants.
\end{assumption}
Assumptions \ref{ass:Lip continuous} and \ref{ass:grad_bound} are widely used in minimax optimization problems~\citep{nouiehed2019solving,sinha2018certifying}.
The PL-inequality in Assumption \ref{ass:PL inequality} means that although $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ may be non-convex in $\text{\boldmath{$w$}}$, all stationary points are global minima.
This has recently been observed or proved for over-parameterized neural networks \citep{xie2017diversity,du2019gradient,allen2019convergence,liu2020toward}.
The local strong concavity in Assumption \ref{ass:PL inequality} is reasonable when the perturbation size $\|\boldsymbol{\delta}\|_{p}$ is small.
\par
To solve the minimax optimization problem \eqref{eq:objective}, we consider the multi-step stochastic gradient descent (SGD) in Algorithm \ref{alg:sgd}~\citep{nouiehed2019solving}, where $\text{Proj}_{A}(\cdot)$ is the $\ell_{2}$-projection operator onto $A$.
Note that the update rule of $\boldsymbol{\delta}_{k}$ in Algorithm \ref{alg:sgd} differs from that in PGD adversarial training \citep{madry2018towards}, where $\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{k})$ in Line 4 is replaced with its sign.
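As a rough sketch of the inner/outer loop (not the exact Algorithm \ref{alg:sgd}: the toy loss $f(\text{\boldmath{$w$}},\boldsymbol{x})=\tfrac{1}{2}\|\text{\boldmath{$w$}}-\boldsymbol{x}\|^{2}$, step sizes, and constants below are assumptions for illustration), the $\ell_{2}$-projected gradient ascent on $\boldsymbol{\delta}$ followed by a descent step on $\text{\boldmath{$w$}}$ can be written as:

```python
import numpy as np

def proj_l2(delta, r):
    """Proj_A: project delta back onto the l2-ball of radius r."""
    norm = np.linalg.norm(delta)
    return delta if norm <= r else delta * (r / norm)

def adv_sgd_step(w, x, r, eta_x, eta_w, K):
    """One outer step of the multi-step SGD sketch for the toy loss
    f(w, x) = 0.5 * ||w - x||^2, with both gradients written by hand."""
    delta = np.zeros_like(x)
    for _ in range(K):                      # inner maximization over delta
        grad_x = (x + delta) - w            # nabla_x f(w, x + delta)
        delta = proj_l2(delta + eta_x * grad_x, r)
    grad_w = w - (x + delta)                # nabla_w f(w, x + delta_K)
    return w - eta_w * grad_w, delta

w = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
w_new, delta = adv_sgd_step(w, x, r=0.1, eta_x=0.5, eta_w=0.1, K=5)
```

The projection keeps $\|\boldsymbol{\delta}\|_{2}\leq r$ throughout, and the outer step moves $\text{\boldmath{$w$}}$ toward the worst-case perturbed sample.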
\par
The following theorem gives the convergence rate of Algorithm \ref{alg:sgd} both in expectation and in high probability.
\begin{theorem}
\label{thm:convergence}
Let $\text{\boldmath{$w$}}_{t}$ be updated by
Algorithm \ref{alg:sgd},
$p\!\in\!\{2,\infty\}$, $\eta_{\text{\boldmath{$w$}}_{t}}\!=\!\frac{1}{\mu_{\text{\boldmath{$w$}}}t}$, $\eta_{\boldsymbol{x}}\!=\!\frac{1}{L_{22}}$,
$K\geq \frac{L_{22}}{\mu_{\boldsymbol{x}}}\log{\left(\frac{8T\mu_{\text{\boldmath{$w$}}}d_{0}r^{2}(p)}{GL}\right)}$, where $\mu_{\boldsymbol{x}} = \min_{1\leq i \leq n}\mu_{\boldsymbol{x}_{i}}$ and $L = L_{11} + \frac{L_{12}L_{21}}{\mu_{\boldsymbol{x}}}$.
Under Assumptions \ref{ass:Lip continuous}, \ref{ass:grad_bound}, and \ref{ass:PL inequality},
we have
\begin{equation}
\small
\mathbb{E}[\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{T + 1}, p)] - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p) \leq \frac{G^{2}L}{T\mu^{2}_{\text{\boldmath{$w$}}}},
\end{equation}
and with probability at least $1 - \theta$,
\begin{equation}\label{eq:convergence in probability}
\small
\begin{aligned}
\tilde{R}_{P_{n}}&(\text{\boldmath{$w$}}_{T + 1}, p) - \tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p) \\
& \leq \frac{G^{2}\log{(\log{(T/\theta)})}(64L + 16\mu_{\text{\boldmath{$w$}}}) + G^{2}L}{T\mu_{\text{\boldmath{$w$}}}^{2}},
\end{aligned}
\end{equation}
for $0< \theta < 1/e$, $T\geq 4$, with $\text{\boldmath{$w$}}^{*} \in\arg\min_{\text{\boldmath{$w$}}}\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}, p)$.
\end{theorem}
This theorem shows that Algorithm \ref{alg:sgd} is able to find the global minimum of the adversarial objective \eqref{eq:objective} both in expectation and in high probability.
Specifically, the convergence rate of Algorithm~\ref{alg:sgd} is $\mathcal{O}(1/\lceil T/K\rceil) = \mathcal{O}(K/T) = \tilde{\mathcal{O}}(1/T)$, since the number of inner-loop steps $K$ is $\mathcal{O}(\log{(Td_0r(p)^2)})$, which increases with the input feature dimension $d_{0}$ and the perturbation size $r$. The proof of Theorem \ref{thm:convergence} is in Appendix \ref{app:proof in robust training}.
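For intuition on how $K$ scales, the lower bound $K\geq \frac{L_{22}}{\mu_{\boldsymbol{x}}}\log\left(8T\mu_{\text{\boldmath{$w$}}}d_{0}r^{2}(p)/(GL)\right)$ from Theorem \ref{thm:convergence} can be evaluated numerically (all constants below are arbitrary illustrative values, not estimates for any real model):

```python
import math

def inner_steps(L22, mu_x, T, mu_w, d0, r_p, G, L):
    """Smallest integer K satisfying the inner-loop bound
    K >= (L22 / mu_x) * log(8 * T * mu_w * d0 * r(p)^2 / (G * L))."""
    return math.ceil((L22 / mu_x) * math.log(8 * T * mu_w * d0 * r_p ** 2 / (G * L)))

# K grows only logarithmically in d0, so the overall rate stays O(K/T) = O~(1/T)
k_small = inner_steps(L22=10, mu_x=1, T=1000, mu_w=0.1, d0=10, r_p=0.1, G=1, L=10)
k_large = inner_steps(L22=10, mu_x=1, T=1000, mu_w=0.1, d0=10**4, r_p=0.1, G=1, L=10)
```

Increasing $d_{0}$ by three orders of magnitude only multiplies the required $K$ by a modest factor, consistent with the logarithmic dependence.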
\par
The following Proposition~\ref{pro:robustness} (proved in Appendix \ref{app:proof of proposition robustness}) shows that the model trained by Algorithm \ref{alg:sgd} has a small error on clean training samples and satisfies the input-robustness condition in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}.
\begin{proposition}
\label{pro:robustness}
If $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon$ for $\text{\boldmath{$w$}}$ and a constant $\epsilon$, then $R_{P_{n}}(\text{\boldmath{$w$}}) \leq \epsilon$, and $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ is $(r(p), 2\epsilon, P_{n}, p)$-input-robust.
\end{proposition}
\par
According to Theorem \ref{thm:convergence} and Proposition \ref{pro:robustness},
after $T$ training steps in Algorithm \ref{alg:sgd}, we can obtain a $(r(p), \tilde{\mathcal{O}}(1/T), P_{n}, p)$-input-robust model when $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*})$ is close to zero.
Thus, combining Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}, we get the following corollary which shows that the adversarially trained model generalizes on OOD data.
\begin{corollary}
\label{cor:excess risk}
For $p\in\{2, \infty\}$, with the same notations as in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:convergence},
if $\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}^{*}, p)\leq \epsilon_{0}$, then with probability at least $1 - \theta$,
\begin{equation*}\label{eq:excess risk bound l2}
\small
\begin{aligned}
& \sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r/\epsilon_{0})} R_{P}(\text{\boldmath{$w$}}_{T + 1}, 2) \leq (2M + 3)\epsilon_{0} \\
& + (2M\! + \!3)\!\left(\frac{G^{2}\log{(\log{(2T/\theta)})}(64L + 16\mu_{\text{\boldmath{$w$}}}) + G^{2}L}{T\mu_{\text{\boldmath{$w$}}}^{2}}\right)\\
& + \!M\sqrt{\frac{(2d_{0})^{\frac{2\epsilon_{0}^{2}D}{r^{2}} \!+\! 1}\log{2} \!+\! 2\log{(2/\theta)}}{n}},
\end{aligned}
\end{equation*}
and
\begin{equation*}\label{eq:excess risk bound linf}
\small
\begin{aligned}
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r)} & R_{P}(\text{\boldmath{$w$}}_{T + 1}, \infty) \leq 3\epsilon_{0} \\
& + \frac{G^{2}\log{(\log{(2T/\theta)})}(192L + 48\mu_{\text{\boldmath{$w$}}}) + 3G^{2}L}{T\mu_{\text{\boldmath{$w$}}}^{2}} \\
& + M\sqrt{\frac{2d_{0}^{\frac{2D}{r^{2}} + 1}\log{2} + 2\log{(2/\theta)}}{n}},
\end{aligned}
\end{equation*}
for any $0< \theta < 1/e$ and $T\geq 4$.
\end{corollary}
\par
This corollary follows directly from Theorems \ref{thm:ood generalization upper bound}, \ref{thm:ood generalization upper bound l2}, \ref{thm:convergence}, and Proposition \ref{pro:robustness}.
It shows that the excess risk (i.e., the left-hand sides of the two inequalities above) of the adversarially trained model on OOD data is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$ after $T$ steps.
The dependence of the bounds on hyperparameters such as the input data dimension $d_{0}$ and the $\ell_{1}$-diameter $D$ of the data support $\mathcal{X}$ comes from the OOD generalization bounds \eqref{eq:ood bound linf}, \eqref{eq:ood bound l2}, and the convergence rate \eqref{eq:convergence in probability}.
\section{Robust Pre-Trained Model has Better Initialization on Downstream Tasks}\label{sec:pretrain improves ood}
The paradigm of ``first pre-train and then fine-tune'' has been widely explored recently \citep{radford2021learning,hendrycks2020pretrained}.
In this section, we theoretically show that the input-robust pre-trained model provides an initialization that generalizes on downstream OOD data.
\par
Assume the $m$ i.i.d. samples $\{\boldsymbol{z}_{i}\}$ in the pre-training stage are drawn from distribution $Q_{0}$.
For a small constant $\epsilon_{\text{pre}}$ and given $r(2) = r/\epsilon_{\text{pre}}$ and $r(\infty) = r$,
the following Theorems \ref{thm:pretrain generalize} and \ref{thm:pretrain generalize l2}
show that the pre-trained model with a small excess risk on OOD data in the pre-training stage also generalizes on downstream OOD data. The proofs are in Appendix \ref{app:proof of theorem pretrain generalize}.
\begin{theorem}
\label{thm:pretrain generalize}
If $\sup_{Q\in B_{\mathsf{W}_{\infty}}(Q_{0}, r(\infty))}R_{Q}(\text{\boldmath{$w$}}_{\emph{\text{pre}}})\leq \epsilon_{\emph{\text{pre}}}$, then
\begin{equation}\label{eq:initialized error linf}
\small
\sup_{P\in B_{\mathsf{W}_{\infty}}(P_{0}, r(\infty))}R_{P}(\text{\boldmath{$w$}}_{\emph{\text{pre}}}) \leq \epsilon_{\emph{\text{pre}}} + 2M\mathsf{TV}(P_{0}, Q_{0}),
\end{equation}
and with probability at least $1 - \theta$,
\begin{equation}
\small
\tilde{R}_{P_{n}}(\text{\boldmath{$w$}}_{\emph{\text{pre}}}, \infty) \leq \epsilon_{\emph{\text{pre}}}\! + \!2M\mathsf{TV}(P_{0}, Q_{0}) + M\sqrt{\frac{\log{(1/\theta)}}{2n}}.
\end{equation}
\end{theorem}
\begin{theorem}
\label{thm:pretrain generalize l2}
If $\sup_{Q\in B_{\mathsf{W}_{2}}(Q_{0}, r_{0})}R_{Q}(\text{\boldmath{$w$}}_{\emph{\text{pre}}})\leq \epsilon_{\emph{\text{pre}}}$ with $r_{0}= \sqrt{2D^{2}\mathsf{TV}(P_{0}, Q_{0}) + r(2)^{2}}$, then
\begin{equation}\label{eq:initialized error l2}
\small
\sup_{P\in B_{\mathsf{W}_{2}}(P_{0}, r(2))}R_{P}(\text{\boldmath{$w$}}_{\emph{\text{pre}}}) \leq \epsilon_{\emph{\text{pre}}} + 2M\mathsf{TV}(P_{0}, Q_{0}).
\end{equation}
\end{theorem}
\par
\begin{remark}
The self-supervised pre-training (e.g., masked language modeling in BERT \citep{devlin2019bert}) can also be included into the $f(\text{\boldmath{$w$}}, \boldsymbol{x})$ in Section \ref{sec:Robustness Corresponds with Better OOD Generalization},
if we take label $y\sim P_{y \mid \boldsymbol{x}}$ as the distribution of the artificially constructed labels (e.g., masked tokens in BERT).
\end{remark}
When fine-tuning on downstream tasks, the model is initialized with $\text{\boldmath{$w$}}_{\text{pre}}$.
Combining the results in
Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}
(an input-robust model has small OOD generalization error)
with Theorems \ref{thm:pretrain generalize} and \ref{thm:pretrain generalize l2},
we conclude that the input-robust model
has small excess risk on the OOD data in the pre-training stage, and thus
generalizes on the OOD data of downstream tasks. Specifically, \eqref{eq:initialized error linf} and \eqref{eq:initialized error l2} show that
the initial OOD excess risk in the fine-tuning stage, $\sup_{P\in B_{\mathsf{W}_{p}}(P_{0}, r(p))}R_{P}(\text{\boldmath{$w$}}_{\text{pre}})$, is determined by the terminal OOD excess risk in the pre-training stage, $\sup_{Q\in B_{\mathsf{W}_{p}}(Q_{0}, r(p))}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})$, and the total variation distance $\mathsf{TV}(P_{0}, Q_{0})$.
The intuition is that if $\text{\boldmath{$w$}}_{\text{pre}}$ generalizes well on distributions around $Q_{0}$, and $P_{0}$ is close to $Q_{0}$ under the total variation distance, then $\text{\boldmath{$w$}}_{\text{pre}}$ generalizes on downstream OOD data.
\par
To satisfy the condition $\sup_{Q\in B_{\mathsf{W}_{p}}(Q_{0}, r(p))}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})\leq \epsilon_{\text{pre}}$ in Theorems \ref{thm:pretrain generalize} and \ref{thm:pretrain generalize l2}, we can use adversarial pre-training.
Corollary \ref{cor:excess risk} implies $\epsilon_{\text{pre}}=\mathcal{O}(1/\sqrt{m})$ after sufficient adversarial pre-training. Since $\epsilon_{\text{pre}}$ appears in the bounds \eqref{eq:initialized error linf} and \eqref{eq:initialized error l2}, a large number of training samples $m$ in the adversarial pre-training stage improves the OOD generalization on downstream tasks.
\par
\citet{radford2021learning,hendrycks2020pretrained} empirically verify that standardly pre-trained models also generalize well on downstream OOD data. It has been shown that, under mild conditions, sufficient standard training with gradient-based algorithms can also find the most input-robust model \citep{soudry2018implicit,lyu2019gradient}. Thus, $\sup_{Q\in B_{\mathsf{W}_{p}}(Q_{0}, r(p))}R_{Q}(\text{\boldmath{$w$}}_{\text{pre}})\leq \epsilon_{\text{pre}}$ can hold even for a standardly pre-trained model. However, standard training converges to the most input-robust model much more slowly than AT, e.g., for linear models \citep{soudry2018implicit,li2019inductive}.
Hence, to efficiently learn an input-robust model in the pre-training stage, we suggest adversarial pre-training.
\section{Experiments}
\begin{table*}[t!]
\caption{Clean and corruption accuracy (\%) of ResNet34 on \texttt{CIFAR10-C} and \texttt{ImageNet-C} using standard training and adversarial training under both $\ell_{2}$-norm and $\ell_{\infty}$-norm.}
\label{tbl:adversarial training on image}
\centering
\scalebox{0.645}{
{
\begin{tabular}{l|c|c|ccc|cccc|cccc|cccc|c}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multirow{2}{*}{Clean} & \multicolumn{3}{c|}{Noise} & \multicolumn{4}{c|}{Blur} & \multicolumn{4}{c|}{Weather} & \multicolumn{4}{|c|}{Digital} & \multirow{2}{*}{Avg.} \\
& & & Gauss & Shot & Impulse & Defocus & Glass & Motion & Zoom & Snow & Frost & Fog & Bright & Contrast & Elastic & Pixel & JPEG & \\ \hline
\multirow{3}{*}{\texttt{CIFAR10-C}} & Std & 94.82 & 34.75 & 40.43 & 25.45 & 59.85 & 48.95 & 67.58 & 63.85 & 73.31 & 62.87 & \textbf{67.03} & \textbf{90.69} & \textbf{36.83} & 76.00 & 42.89 & 75.84 & 57.75 \\
& Adv-$\ell_{2}$ & 94.93 & 70.39 & 74.24 & 45.17 & 72.77 & 71.34 & 73.51 & 80.26 & 83.28 & 81.36 & 51.08 & 89.37 & 19.49 & 83.39 & 79.78 & \textbf{89.52} & 71.00 \\
& Adv-$\ell_{\infty}$ & 93.48 & \textbf{80.18} & \textbf{80.80} & \textbf{62.73} & \textbf{77.71} & \textbf{77.10} & \textbf{75.46} & \textbf{82.47} & \textbf{83.45} & \textbf{82.32} & 41.00 & 88.15 & 16.10 & \textbf{83.82} & \textbf{85.98} & 89.36 & \textbf{73.78} \\ \hline
\multirow{3}{*}{\texttt{ImageNet-C}} & Std & 74.01 & 18.97 & 18.39 & 12.98 & 6.32 & 9.76 & 11.49 & 9.37 & 8.78 & 12.98 & 6.21 & 33.74 & 4.31 & 18.29 & 23.91 & 29.08 & 14.97 \\
& Adv-$\ell_{2}$ & 73.66 & \textbf{30.13} & \textbf{28.93} & \textbf{25.05} & \textbf{32.91} & 25.61 & \textbf{34.50} & 32.84 & \textbf{27.39} & \textbf{33.82} & \textbf{36.52} & \textbf{62.18} & \textbf{31.73} & 42.91 & 47.86 & 51.55 & \textbf{36.26} \\
& Adv-$\ell_{\infty}$ & 68.36 & 25.94 & 25.61 & 21.17 & 24.56 & \textbf{32.81} & 32.20 & \textbf{34.57} & 26.70 & 33.47 & 11.22 & 56.07 & 12.34 & \textbf{47.67} & \textbf{57.32} & \textbf{59.10} & 33.38 \\ \hline
\end{tabular}}}
\end{table*}
\subsection{Adversarial Training Improves OOD Generalization}\label{sec:at improves ood}
In this section, we verify our conclusion in Section \ref{sec:Learning Robust Model Results in Better OOD Generalization} that OOD generalization can be improved by AT (Corollary \ref{cor:excess risk}).
\subsubsection{Experiments on Image Classification}
\label{sec:Experiments on Image Classification}
\paragraph{Data.} We use the following benchmark datasets.
\begin{itemize}
\item \texttt{CIFAR10} \citep{krizhevsky2009learning} contains 50,000 color training images from 10 object classes. \texttt{CIFAR10-C} simulates OOD color images with 15 types of common visual corruptions, and serves as a benchmark to verify the OOD generalization of models trained on \texttt{CIFAR10}. Each type of corruption has five levels of severity, and each severity has 10,000 validation samples. The 15 types of corruptions are divided into 4 groups: Noise, Blur, Weather, and Digital.
\item \texttt{ImageNet} \citep{deng2009imagenet} contains color images with over 1 million training samples from 1,000 categories. Similar to \texttt{CIFAR10-C}, \texttt{ImageNet-C} serves as a benchmark of OOD data with 15 types of corruptions. Each type of corruption has five levels of severity, each with 50,000 validation samples. A visualization of \texttt{ImageNet-C} is in Figure \ref{fig:imagenet-c} in the Appendix.
\end{itemize}
\paragraph{Setup.}
The model used in this subsection is ResNet34 \citep{he2016deep}. To verify that adversarial training helps improve OOD performance, we run Algorithm \ref{alg:sgd} on \texttt{CIFAR10} and \texttt{ImageNet}, and evaluate the models on \texttt{CIFAR10-C} and \texttt{ImageNet-C}, respectively. The number of inner-loop steps $K$ is 8 for \texttt{CIFAR10} and 3 for \texttt{ImageNet}. The models are trained by SGD with momentum.
The number of training epochs is 200 for \texttt{CIFAR10} and 100 for \texttt{ImageNet}.
The learning rate starts from 0.1 and decays by a factor of 0.2 at epochs 60, 120, 160 (resp. 30, 60, 90) for \texttt{CIFAR10} (resp. \texttt{ImageNet}).
Detailed hyperparameters are in Appendix \ref{app:hyp on adv}.
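The step schedule above can be sketched as a small helper (a hypothetical re-implementation for illustration, not the training script itself):

```python
def lr_at(epoch, base_lr=0.1, factor=0.2, milestones=(60, 120, 160)):
    """Step learning-rate schedule: multiply base_lr by `factor`
    once for each milestone the current epoch has passed."""
    return base_lr * factor ** sum(epoch >= m for m in milestones)
```

For example, the CIFAR10 schedule stays at 0.1 until epoch 60, then drops to 0.02, 0.004, and finally 0.0008 after epoch 160.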
\par
We compare adversarial training under $\ell_{2}$- and $\ell_{\infty}$-norm (respectively abbreviated as ``Adv-$\ell_{2}$'' and ``Adv-$\ell_{\infty}$'') against standard training (abbreviated as ``Std'').
For Adv-$\ell_{\infty}$, we replace $\nabla_{\boldsymbol{x}}f(\text{\boldmath{$w$}}_{t}, \boldsymbol{x}_{i_{t}} + \boldsymbol{\delta}_{k})$ in Line 4 of Algorithm~\ref{alg:sgd} with the sign of it as in \citep{madry2018towards},
in order to find stronger adversarial perturbation~\citep{goodfellow2015explaning}.
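A minimal sketch of this sign-based $\ell_{\infty}$ update (the gradient values and sizes below are hypothetical stand-ins):

```python
import numpy as np

def pgd_linf_step(delta, grad_x, alpha, r):
    """One l_inf PGD step: move along sign(grad_x) with step size
    alpha, then clip back into the l_inf-ball of radius r."""
    return np.clip(delta + alpha * np.sign(grad_x), -r, r)

delta = np.zeros(3)
grad = np.array([0.3, -2.0, 0.001])
delta = pgd_linf_step(delta, grad, alpha=0.01, r=0.03)
```

Using the sign makes every coordinate move by the full step size $\alpha$ regardless of the gradient's magnitude, which tends to produce stronger $\ell_{\infty}$-bounded perturbations than the raw gradient.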
\paragraph{Main Results.}
In Table \ref{tbl:adversarial training on image}, for each type of corruption, we report the test accuracy on \texttt{CIFAR10-C} under the strongest corruption severity level 5\footnote{Lighter severity levels exhibit similar trends, but with smaller performance gaps between adversarial and standard training.}.
For \texttt{ImageNet-C}, we report the average test accuracy of five severity levels as in \citep{hendrycks2018benchmarking}.
We also report the test accuracy on \texttt{CIFAR10} and \texttt{ImageNet} in the column of ``Clean'' for comparison.
As can be seen, Adv-$\ell_{2}$ and Adv-$\ell_{\infty}$ improve the average accuracy on OOD data, especially under the corruption types Noise and Blur. This supports our finding in Section~\ref{sec:Learning Robust Model Results in Better OOD Generalization} that AT makes the model generalize on OOD data.
Though AT improves the OOD generalization on all corruption types for \texttt{ImageNet-C}, it degrades the performance on data corrupted by Fog, Bright, and Contrast in \texttt{CIFAR10-C}.
We speculate this is because these three corruptions intrinsically rescale the adversarial perturbation,
and refer readers to Appendix \ref{app:perturbation size} for a detailed discussion.
\paragraph{Ablation Study.}
We study the effect of perturbation size $r$ and the number of training samples $n$ for adversarial training
in bounds \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2}.
Due to the space limit, we put the implementation details and results in Appendix \ref{app:perturbation}.
\par
The results for the effect of perturbation size $r$ are in Figures~\ref{fig:adv_l2_r}-\ref{fig:adv_linf_r} in Appendix \ref{app:perturbation size}.
As can be seen, the accuracy on OOD data \texttt{CIFAR10-C} first increases and then decreases with an increasing $r$.
This is because the excess-risk upper bounds in \eqref{eq:ood bound linf} and \eqref{eq:ood bound l2} depend on both the clean accuracy and the input-robustness, and a larger perturbation size $r$ improves the input-robustness but harms the clean accuracy~\citep{raghunathan2019adversarial}.
When the perturbation size $r$ is small, the clean accuracy is relatively stable and the robustness gain dominates,
so the overall OOD performance increases with $r$.
When $r$ is relatively large, the loss in clean accuracy outweighs the gain in robustness, which can lead to worse overall OOD performance.
Thus, to achieve the optimal performance on OOD data, we should properly choose the perturbation size $r$ rather than continually increasing it.
\par
The results for the effect of the number of training samples $n$ are in Figures~\ref{fig:adv_l2_num}-\ref{fig:adv_linf_num} in Appendix \ref{app:number of training samples}.
The accuracy on OOD data increases with the number of training samples, which is consistent with our findings in Theorems \ref{thm:ood generalization upper bound} and \ref{thm:ood generalization upper bound l2}.
\subsubsection{Experiments on Natural Language Understanding}\label{sec:Experiments on Natural Language Understanding}
\begin{table}[t!]
\caption{Performance of $\text{BERT}$ base model on NLU tasks using standard training and adversarial training under both $\ell_{2}$-norm and $\ell_{\infty}$-norm.}
\label{tbl:adversarial training on text}
\centering
\scalebox{0.6}{
{
\begin{tabular}{c|cc|ccc}
\hline
Dataset & Train & Test & Std & Adv-$\ell_{2}$ & Adv-$\ell_{\infty}$ \\
\hline
\multirow{8}{*}{\texttt{STS-B}} & \multirow{2}{*}{Images} & Images & 98.38 & 97.81 & 96.39 \\
& & MSRvid & 89.52(-8.86) & \textbf{90.61}(-7.20) & 90.09(-6.30) \\
\cline{2-6}
& \multirow{2}{*}{MSRvid} & MSRvid & 98.55 & 97.45 & 96.65 \\
& & Images & \textbf{84.12}(-14.43) & 83.63(-13.82) & 83.11(-13.54) \\
\cline{2-6}
& \multirow{2}{*}{Headlines} & Headlines & 97.59 & 96.73 & 95.75 \\
& & MSRpar & 62.07(-35.52) & 64.48(-32.25) & \textbf{67.67}(-28.08) \\
\cline{2-6}
& \multirow{2}{*}{MSRpar} & MSRpar & 97.55 & 97.33 & 97.55 \\
& & Headlines & 75.58(-21.97) & 75.27(-22.06) & \textbf{76.12}(-21.43) \\
\hline
\multirow{4}{*}{\texttt{SST-2}; \texttt{IMDb}} & \multirow{2}{*}{SST-2}& SST-2 & 93.57 & 93.57 & 93.92 \\
& & IMDb & 90.06(-3.51) & \textbf{91.50}(-2.07) & 91.32(-2.60) \\
\cline{2-6}
& \multirow{2}{*}{IMDb} & IMDb & 94.36 & 94.88 & 94.68 \\
& & SST-2 & 87.00(-7.36) & \textbf{88.53}(-6.35) & 88.07(-6.61) \\
\hline
\multirow{3}{*}{\texttt{MNLI}} & \multirow{3}{*}{Telephone} & Telephone & 83.01 & 83.16 & 82.90 \\
& & Letters & 82.45(-0.56) & 83.76(+0.60) & \textbf{84.07}(+1.17) \\ & & Face-to-face & 81.56(-1.45) & \textbf{83.59}(+0.43) & \textbf{83.59}(+0.69) \\
\hline
\end{tabular}}}
\end{table}
\paragraph{Data.}
As in \citep{hendrycks2020pretrained}, we use three pairs of datasets as the original and OOD datasets for NLU tasks.
\begin{itemize}
\item \texttt{SST-2} \citep{socher2013recursive} and \texttt{IMDb} \citep{maas2011learning} are sentiment analysis datasets,
with pithy expert and full-length lay movie reviews, respectively.
As in \citep{hendrycks2020pretrained}, we train on one dataset and evaluate on the other.
Then we report the accuracy of a review's binary sentiment predicted by the model.
\item \texttt{STS-B} consists of texts from different genres and sources.
It requires the model to predict the textual similarity between pairs of sentences \citep{cer2017semeval}.
As in \citep{hendrycks2020pretrained}, we use four sources from two genres: MSRpar (news), Headlines (news); MSRvid (captions), Images (captions). The evaluation metric is Pearson's correlation coefficient.
\item \texttt{MNLI} is a textual entailment dataset which contains sentence pairs from different genres of text \citep{williams2018broad}.
We select training samples from two genres of transcribed text (Telephone and Face-to-Face) and the other of written text (Letters) as in \citep{hendrycks2020pretrained}, and report the classification accuracy.
\end{itemize}
\paragraph{Setup.} For a pre-trained language model, e.g., BERT,
each input token is encoded as a one-hot vector and then mapped into a continuous embedding space.
Instead of adding perturbations to the one-hot vectors,
we construct adversarial samples in the word embedding space as in \citep{zhu2019freelb}.
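A minimal sketch of perturbing in the embedding space rather than the one-hot space (the vocabulary size, embedding dimension, and stand-in gradient below are assumptions for illustration, not FreeLB's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: vocabulary of 100 tokens, 16-dimensional embeddings
embedding_table = rng.normal(size=(100, 16))
token_ids = np.array([5, 17, 42])                # a tokenized input

# instead of perturbing the one-hot vectors, perturb the looked-up embeddings
embeds = embedding_table[token_ids]              # (seq_len, dim)
grad_embeds = rng.normal(size=embeds.shape)      # stand-in for nabla_x f
r, alpha = 0.001, 0.0005
delta = np.clip(alpha * np.sign(grad_embeds), -r, r)
perturbed = embeds + delta                       # fed to the encoder layers
```

Perturbing continuous embeddings keeps the attack differentiable, whereas perturbing one-hot token vectors would leave the discrete vocabulary.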
\par
The backbone model is the base version of $\text{BERT}$ \citep{devlin2019bert} which has been widely used in the NLP community.
We conduct AT in the fine-tuning stage to see its effectiveness on OOD generalization.
The models are trained by AdamW \citep{loshchilov2018decoupled} for 10 epochs.
Detailed hyperparameters are in Appendix \ref{app:hyp on adv}.
As in Section \ref{sec:Experiments on Image Classification},
we compare Adv-$\ell_{2}$ and Adv-$\ell_{\infty}$ with Std.
\paragraph{Main Results.}
In Table \ref{tbl:adversarial training on text},
we report the results on in-distribution data and OOD data, and the gap between them (in brackets) as in \citep{hendrycks2020pretrained}. The gaps in brackets alleviate the interference from the general benefits of AT itself, since \citep{zhu2019freelb} showed that AT can improve the generalization of models on in-distribution textual data.
\par
As can be seen, adversarially trained models perform similarly to or even better than standardly trained models on in-distribution data, while performing significantly better on OOD data, especially for \texttt{MNLI}. The smaller gaps between in-distribution and OOD data support our finding that AT can be used to improve OOD generalization.
\subsection{Robust Pre-Trained Model Improves OOD Generalization}
\label{expt:pretrain}
Previously in Section \ref{sec:pretrain improves ood}, we theoretically showed that
an input-robust pre-trained model provides a better initialization for fine-tuning on downstream tasks, in terms of OOD generalization.
In this section, we empirically show that this better initialization also leads to better OOD generalization after fine-tuning on image classification tasks.
\begin{table*}[htbp!]
\caption{Clean and corruption accuracy (\%) of ResNet34 on \texttt{CIFAR10-C} with no pre-training, standard pre-training, and adversarial pre-training under $\ell_{2}$-norm and $\ell_{\infty}$-norm.}
\label{tbl:adversarial pre-training}
\centering
\scalebox{0.64}{
{
\begin{tabular}{l|c|c|ccc|cccc|cccc|cccc|c}
\hline
\multirow{2}{*}{Fine-Tuning} & \multirow{2}{*}{Pre-Training} & \multirow{2}{*}{Clean} & \multicolumn{3}{c|}{Noise} & \multicolumn{4}{c|}{Blur} & \multicolumn{4}{c|}{Weather} & \multicolumn{4}{|c|}{Digital} & \multirow{2}{*}{Avg.} \\
& & & Gauss & Shot & Impulse & Defocus & Glass & Motion & Zoom & Snow & Frost & Fog & Bright & Contrast & Elastic & Pixel & JPEG & \\ \hline
\multirow{4}{*}{Std} & No & 95.21 & 40.55 & 40.64 & 19.91 & 83.21 & 67.77 & 77.86 & 90.31 & 80.71 & 77.91 & 67.27 & \textbf{90.88} & 48.14 & 80.80 & 81.99 & 80.84 & 68.59 \\
& Std & 94.65 & 41.25 & 42.91 & 22.58 & 85.19 & 71.03 & 78.49 & \textbf{90.82} & 82.78 & \textbf{80.04} & 67.66 & 89.97 & 45.70 & \textbf{83.89} & 82.03 & \textbf{80.99} & 69.69 \\
& Adv-$\ell_{2}$ & 95.06 & \textbf{45.10} & \textbf{50.58} & 27.57 & 87.27 & \textbf{72.95} & 79.08 & 90.57 & \textbf{83.29} & 77.25 & 65.41 & 90.15 & \textbf{50.41} & 82.81 & 78.01 & 78.95 & 70.63 \\
& Adv-$\ell_{\infty}$ & 94.30 & 40.94 & 46.42 & \textbf{29.39} & \textbf{87.60} & 70.79 & \textbf{81.44} & 90.69 & 82.77 & 79.28 & \textbf{68.84} & 89.19 & 45.29 & 83.59 & \textbf{83.13} & 80.86 & \textbf{70.68} \\ \hline
\multirow{4}{*}{Adv-$l_{2}$} & No & 94.43 & 56.82 & 60.58 & 29.34 & 85.44 & 71.67 & 81.80 & 90.08 & 83.68 & 80.37 & 61.68 & 89.96 & 34.76 & 83.76 & 85.16 & 83.24 & 71.89 \\
& Std & 94.09 & 57.64 & 60.96 & 26.35 & 86.78 & \textbf{73.52} & 82.16 & 90.46 & 82.12 & 80.64 & 62.58 & 88.98 & 34.68 & 84.29 & 83.42 & 83.42 & 71.87 \\
& Adv-$\ell_{2}$ & 94.45 & \textbf{58.98} & \textbf{62.99} & \textbf{35.08} & 87.07 & 72.29 & 81.66 & 91.07 & 83.53 & 81.38 & 62.82 & 89.52 & \textbf{39.53} & 84.35 & \textbf{86.60} & 88.55 & 73.69 \\
& Adv-$\ell_{\infty}$ & 95.25 & 58.64 & 62.18 & 29.86 & \textbf{88.15} & 73.00 & \textbf{82.95} & \textbf{91.98} & \textbf{84.76} & \textbf{83.86} & \textbf{64.76} & \textbf{91.00} & 37.35 & \textbf{84.65} & 86.57 & \textbf{88.59} & \textbf{73.89} \\ \hline
\multirow{4}{*}{Adv-$\ell_{\infty}$} & No & 92.46 & 80.91 & 81.69 & 52.00 & 79.58 & 80.94 & 77.42 & 80.21 & 80.57 & 79.35 & 35.41 & 83.15 & 18.06 & 83.51 & 87.79 & 87.44 & 72.54 \\
& Std & 92.05 & 80.21 & 81.06 & \textbf{63.02} & 77.94 & 77.80 & 75.60 & 80.04 & \textbf{83.77} & 81.22 & 41.57 & \textbf{89.94} & \textbf{19.04} & 82.39 & 85.49 & \textbf{88.76} & 73.86 \\
& Adv-$\ell_{2}$ & 92.55 & \textbf{81.96} & \textbf{82.86} & 58.95 & \textbf{80.51} & \textbf{82.66} & \textbf{78.21} & \textbf{86.56} & 81.49 & 81.10 & 42.07 & 89.76 & 18.56 & \textbf{84.58} & \textbf{88.53} & 88.05 & \textbf{75.06} \\
& Adv-$\ell_{\infty}$ & 92.28 & 81.74 & 82.37 & 56.96 & 80.34 & 81.90 & 77.94 & 85.76 & 81.48 & \textbf{81.70} & \textbf{42.99} & 89.00 & 18.45 & 84.50 & 88.07 & 87.50 & 74.71 \\ \hline
\end{tabular}}}
\end{table*}
\paragraph{Setup.}
Following \citep{salman2020adversarially}, we pre-train the model on \texttt{ImageNet} and then fine-tune it on \texttt{CIFAR10}.
To obtain an input-robust model in the pre-training stage, we adversarially pre-train the model.
We compare adversarial pre-training (Adv-$\ell_{2}$ and Adv-$\ell_{\infty}$) against standard pre-training and no pre-training,
as in Section \ref{sec:Experiments on Image Classification}.
In the fine-tuning stage, the data from \texttt{CIFAR10} are resized to $224\times224$ as in \citep{salman2020adversarially}. We also compare standard fine-tuning and adversarial fine-tuning under both $\ell_{2}$- and $\ell_{\infty}$-norm. After fine-tuning, we evaluate the OOD generalization on \texttt{CIFAR10-C}. The other settings are the same as in Section \ref{sec:Experiments on Image Classification}.
\paragraph{Main Results.}
The results
are shown in Table \ref{tbl:adversarial pre-training}.
As can be seen,
for all fine-tuning methods,
adversarially pre-trained models consistently achieve better performance on OOD data
than standardly pre-trained models or models without pre-training.
Thus, the initialization from the adversarially pre-trained input-robust model leads to
better OOD generalization on downstream tasks after fine-tuning. In addition, standard pre-training slightly improves the OOD generalization compared with no pre-training
under Adv-$\ell_{\infty}$ or standard fine-tuning.
We also observe that for all four kinds of pre-training,
adversarial fine-tuning under $\ell_{\infty}$-norm has better performance than $\ell_{2}$-norm.
This agrees with the observations in Section~\ref{sec:Experiments on Image Classification}.
Note that the results of
models without pre-training are different from those in Table \ref{tbl:adversarial training on image} due to the resized input data.
\subsection{Discussion}
\label{sec:discussion}
It is shown in \citep{hendrycks2020pretrained} that the language model BERT~\citep{devlin2019bert} pre-trained on large corpus generalizes well on downstream OOD data, and RoBERTa~\citep{liu2019roberta} pre-trained with more training data and updates generalizes even better than BERT.
We speculate this is because (i) sufficient pre-training yields an input-robust model as discussed in Section \ref{sec:pretrain improves ood}, and this better initialization leads to better OOD generalization after fine-tuning as observed in Section~\ref{expt:pretrain}; and (ii) the masked language modeling objective predicts masked (perturbed) input tokens and thus encourages a certain amount of input-robustness.
In this section, we empirically show
that the model initialized by BERT has higher input-robustness than a randomly initialized model.
Besides, compared with BERT, RoBERTa is pre-trained with more training samples and updating steps
and the model initialized by it is more robust to input perturbations.
\paragraph{Setup.} We compare the input-robustness of the base versions of pre-trained language model BERT
\citep{devlin2019bert} and RoBERTa
\citep{liu2019roberta},
against a randomly initialized model whose
parameters
are independently sampled from $\mathcal{N}(0, 0.02^{2})$ \citep{wolf2020transformers}.
The three models have exactly the same structure.
Compared with $\text{BERT}$, $\text{RoBERTa}$ is pre-trained on a larger corpus for more updating steps.
Experiments are performed on \texttt{MRPC} and \texttt{CoLA} datasets from the GLUE benchmark \citep{wang2018glue},
with 3.7k and 8.5k training samples, respectively.
Similar to Section \ref{sec:Experiments on Natural Language Understanding},
we add adversarial perturbations in the embedding space.
We use 3 steps of $\ell_{\infty}$-norm attack to construct the perturbation.
The perturbation size is 0.001 and the perturbation step size is 0.0005.
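The multi-step $\ell_{\infty}$ attack described above can be sketched as follows. This is a minimal NumPy illustration only; the quadratic toy loss and its gradient are placeholders for the actual model loss on the embeddings, not the paper's implementation:

```python
import numpy as np

def linf_attack(x, grad_fn, eps=0.001, step=0.0005, n_steps=3):
    """Multi-step ell_inf attack: take signed-gradient ascent steps on the
    loss and project back into the eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + step * np.sign(g)           # signed gradient ascent
        x_adv = np.clip(x_adv, x - eps, x + eps)    # ell_inf projection
    return x_adv

# toy quadratic loss L(x) = 0.5*||x - t||^2, so grad L = x - t
# (a stand-in for the model's loss gradient w.r.t. the embeddings)
t = np.zeros(4)
x0 = np.full(4, 0.01)
x_adv = linf_attack(x0, lambda x: x - t)
assert np.max(np.abs(x_adv - x0)) <= 0.001 + 1e-12  # stays in the eps-ball
```

The projection step is what distinguishes this from plain gradient ascent: after every update the perturbation is clipped back into the $\ell_\infty$ ball of radius 0.001.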
Since the last classification layer of $\text{BERT}$ or $\text{RoBERTa}$ is randomly initialized during downstream task fine-tuning,
we study the difference in the hidden states of the last Transformer layer
before the classification layer.
Denote $\textbf{h}, \textbf{h}_{\text{per}}\in \mathbb{R}^{128\times 768}$ as the hidden states from the original input and the adversarially perturbed input, respectively. We use the $\ell_{2}$-norm $\|\textbf{h}_{\text{per}} - \textbf{h}\|$ and the cosine
similarity $\langle\textbf{h}, \textbf{h}_{\text{per}}\rangle/(\|\textbf{h}\|\|\textbf{h}_{\text{per}}\|)$ to measure the difference. The cosine similarity is used to alleviate the potential interference caused by the scale of $\textbf{h}$ over different pre-trained models. The results are in Figure \ref{fig:comp}.
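The two difference measures can be computed as in the sketch below; the uniform scaling used as a "perturbation" here is only a stand-in for an actual adversarially perturbed hidden state:

```python
import numpy as np

def hidden_state_diff(h, h_per):
    """ell_2 distance and cosine similarity between flattened hidden states."""
    h, h_per = h.ravel(), h_per.ravel()
    l2 = np.linalg.norm(h_per - h)
    cos = np.dot(h, h_per) / (np.linalg.norm(h) * np.linalg.norm(h_per))
    return l2, cos

h = np.ones((128, 768))                   # stand-in for last-layer hidden states
l2, cos = hidden_state_diff(h, 1.01 * h)  # small uniform "perturbation"
assert np.isclose(cos, 1.0)               # parallel states: cosine similarity 1
```

As the example shows, the cosine similarity is invariant under rescaling of the hidden states, which is why it is used alongside the scale-sensitive $\ell_2$ norm.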
\begin{figure}[htbp]
\centering
\subfloat{
\includegraphics[width=0.3\textwidth]{pic/robustness/legend.pdf}}
\vspace{-0.1in}
\\
\addtocounter{subfigure}{-1}
\subfloat[\texttt{MRPC.}\label{fig:mrpc_norm}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/mrpc_norm.pdf}}
\subfloat[\texttt{CoLA.}\label{fig:cola_norm}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/cola_norm.pdf}}\\
\subfloat[\texttt{MRPC.}\label{fig:mrpc}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/mrpc.pdf}}
\subfloat[\texttt{CoLA.}\label{fig:cola}]{
\includegraphics[width=0.24\textwidth]{pic/robustness/cola.pdf}}
\vspace{-0.05in}
\caption{Difference of hidden states in the last Transformer layer between the original input and adversarially perturbed input, measured by $\ell_{2}$-norm and cosine similarity. The models compared are randomly initialized model, $\text{BERT}$, and $\text{RoBERTa}$.
The datasets used are \texttt{MRPC} and \texttt{CoLA} from the GLUE benchmark.
The dashed lines in the upper and bottom figures are respectively the mean of $\|\textbf{h}_{\text{per}} - \textbf{h}\|$ and $\langle\textbf{h}, \textbf{h}_{\text{per}}\rangle/(\|\textbf{h}\|\|\textbf{h}_{\text{per}}\|)$ from all samples in a dataset.}
\label{fig:comp}
\vspace{-0.1in}
\end{figure}
\paragraph{Main Results.}
The histograms of $\|\textbf{h}_{\text{per}} - \textbf{h}\|$ and $\langle\textbf{h}, \textbf{h}_{\text{per}}\rangle/(\|\textbf{h}\|\|\textbf{h}_{\text{per}}\|)$ from all training samples
in \texttt{MRPC} and \texttt{CoLA}
are shown in Figure \ref{fig:mrpc_norm}, \ref{fig:cola_norm} and Figure \ref{fig:mrpc}, \ref{fig:cola}, respectively.
We can observe that (i) $\text{BERT}$ is more robust than the randomly initialized model,
indicating that the masked language modeling objective and
sufficient pre-training improve input-robustness and lead to better OOD performance after fine-tuning;
(ii) $\text{RoBERTa}$ is more input-robust compared with $\text{BERT}$, which implies
that more training samples and updating steps in the pre-training stage improve the input-robustness.
Combined with the empirical observation that a more input-robust pre-trained model leads to better OOD generalization on downstream tasks (Section~\ref{sec:pretrain improves ood}),
the above observations (i) and (ii) may also explain the finding in \citep{hendrycks2020pretrained} that $\text{BERT}$ generalizes worse on downstream OOD data than $\text{RoBERTa}$, but much better than the model without pre-training.
\section{Conclusion}
In this paper, we explore the relationship between the robustness and OOD generalization of a model.
We theoretically show that the input-robust model can generalize well on OOD data
under the definition of OOD generalization via Wasserstein distance. Thus, for a model trained from scratch,
we suggest using adversarial training to improve the input-robustness of the model which results in better OOD generalization.
Under mild conditions, we show that the excess risk on OOD data of an adversarially trained model is upper bounded by $\tilde{\mathcal{O}}(1/\sqrt{n} + 1/T)$.
For the framework of first pre-training and then fine-tuning,
we show that a pre-trained input-robust model provides a theoretically good initialization which empirically improves OOD generalization after fine-tuning. Various experiments on CV and NLP verify our theoretical findings.
\section{Introduction}
The quantum Monte Carlo method~\cite{VONDERLINDEN199253,Pollet2012}
is widely used in various fields of physics as a non-perturbative tool of analysis.
In a path integral formalism using Lagrangian,
a partition function is written in terms of the integral of the Boltzmann weight
$\mathrm{e}^{-S}$ over field variables, where $S$ is an action.
When the action is a real-valued function,
the Boltzmann weight is regarded as a probability density function.
This ensures that quantum expectation values of physical observables
can be estimated by importance sampling of the Boltzmann weight.
However, the positivity of the Boltzmann weight is violated
in many physically interesting systems:
the Hubbard model, finite-density quantum chromodynamics (QCD), QCD with a $\theta$-term, matrix superstring models, and any system defined by the Schwinger-Keldysh formalism, which describes real-time dynamics, for instance \cite{PhysRevB.41.9301, PhysRevLett.94.170201,Muroya:2003qs,deForcrand:2010ys,Vicari:2008jw,Krauth:1998xh, PhysRevD.75.045007, PhysRevLett.117.081602}.
In these cases,
the number of samples required to obtain statistically significant results
grows exponentially with the system size.
In non-relativistic fermionic systems,
a frequently used way to apply the quantum Monte Carlo method is
to introduce a bosonic auxiliary field
through the Hubbard-Stratonovich transformation~\cite{Blankenbecler:1981jt,Scalapino:1981ju,SUGIYAMA19861}.
After integrating out the fermion fields,
we will obtain an effective action of the auxiliary field.
Since the effective action involves a logarithm of a fermion determinant,
the positivity is not guaranteed except in a few cases
where the action has the particle-hole symmetry~\cite{PhysRevB.31.4403},
the Kramers symmetry~\cite{PhysRevB.71.155115},
or Majorana positivity~\cite{Li:2014tla,Li:2016gte,PhysRevLett.116.250601},
for instance.
A promising approach to evade the sign problem
is the complex Langevin method~\cite{Klauder:1983sp,Parisi:1984cs},
which is an extension of the stochastic quantization to systems with complex-valued actions.
An advantage of this method is that it scales well with the system size,
and thus the computational cost is similar to that of
the usual quantum Monte Carlo method without the sign problem.
On the other hand, it is known that this method sometimes gives incorrect answers
even when the statistical average of a physical observable converges.
In the recent decade,
ways to judge the reliability of the complex Langevin method have been extensively
studied~\cite{Aarts:2009uq,Aarts:2011ax,Nishimura:2015pba,Nagata:2015uga,Nagata:2016vkn,Salcedo:2016kyy,Aarts:2017vrv,Nagata:2018net,Scherzer:2018hid,Scherzer:2019lrh,Cai:2021nTV},
and criteria that can be computed in actual simulations have been proposed,
using the boundary terms~\cite{Aarts:2009uq,Aarts:2011ax,Scherzer:2018hid,Scherzer:2019lrh}
and the probability distribution of the drift term~\cite{Nishimura:2015pba,Nagata:2016vkn}.
While it is still difficult to predict when the complex Langevin method fails
without performing numerical simulations,
we can eliminate wrongly convergent results thanks to these criteria.
In the context of cold fermionic atoms,
the complex Langevin method is applied to
rotating bosons~\cite{Hayata:2014kra},
polarized fermions~\cite{PhysRevD.98.054507, Rammelmuller:2018hnk, 10.21468/SciPostPhys.9.1.014, PhysRevA.103.043330},
unpolarized fermions with contact repulsive interactions~\cite{Loheac.PhysRevD.95.094502}
and mass imbalanced fermions~\cite{Rammelmuller.PhysRevD.96.094506}
to study ground state energy, thermodynamic quantities and Fulde-Ferrell-Larkin-Ovchinnikov-type pairings
(see also a recent review~\cite{Berger:2019odf}).
In this paper,
we consider spatially one-dimensional, spin-1/2 polarized fermions with contact attractive interactions, a system known as the Gaudin-Yang model~\cite{RevModPhys.85.1633},
and compute the single-particle energy of spin-down fermions in a spin-up Fermi sea,
which is referred to as the Fermi polaron energy.
Recently, the single-particle excitation spectra of Fermi polarons were experimentally measured in higher dimensional atomic systems~\cite{PhysRevLett.102.230402,PhysRevLett.103.170402,Koschorreck2012,Cetina96,PhysRevLett.118.083602,PhysRevLett.125.133401,PhysRevX.10.041019,PhysRevA.103.053314}
(also see a recent review~\cite{Massignan_2014} for Fermi polarons).
While an analytic formula for the polaron energy in one dimension is obtained exactly at zero temperature
based on the thermodynamic Bethe ansatz method~\cite{doi:10.1063/1.1704798},
no analytical solutions are known at finite temperature (note that the numerical results for finite-temperature properties of polarized gases within the thermodynamic Bethe ansatz were reported in Ref.~\cite{PhysRevA.94.031604}).
The one-dimensional Fermi polarons have been studied
with several theoretical methods, such as the Brueckner-Hartree-Fock~\cite{PhysRevA.84.033607}, {\it T}-matrix~\cite{PhysRevLett.111.025302,atoms9010018}, and variational~\cite{YSong2019,Mistakidis_2019} approaches.
In this study,
we demonstrate that a microscopic quantity, that is, the polaron energy is efficiently computed by the complex Langevin method.
This paper is organized as follows.
In Sec.~\ref{sec:GY},
we derive a lattice action of the Gaudin-Yang model.
In Sec.~\ref{sec:CLM},
we review how to compute physical quantities using the complex Langevin method.
In Sec.~\ref{sec:Observables},
we show a way to extract the ground state energy in the spin-down channel
from a two-point imaginary time Green's function.
In Sec.~\ref{sec:results},
we present the numerical results.
Section~\ref{sec:summary} is devoted to the summary of this paper.
In this work, $k_{\rm B}$ and $\hbar$ are taken to be unity.
\section{The Gaudin-Yang model}\label{sec:GY}
We consider a one-dimensional two-component Fermi gas
with contact attractive interactions
which is known as the Gaudin-Yang model~\cite{PhysRevLett.19.1312,GAUDIN196755}.
The Hamiltonian is given by
\begin{align}
\hat{H}
=
\sum_{p,\sigma}
\qty(\frac{p^2}{2} - \mu_\sigma)
\hat{c}_{p,\sigma}^\dag
\hat{c}_{p,\sigma}
-
g
\sum_{p,p',q}
\hat{c}_{p+\frac{q}{2},\uparrow}^\dag
\hat{c}_{-p+\frac{q}{2},\downarrow}^\dag
\hat{c}_{-p'+\frac{q}{2},\downarrow}
\hat{c}_{p'+\frac{q}{2},\uparrow},
\label{Hamiltonian}
\end{align}
where
$\hat{c}_{p,\sigma}$ and $\hat{c}_{p,\sigma}^{\dag}$
are fermionic annihilation/creation operators with
momentum $p$ and spin $\sigma=\uparrow,\downarrow$, respectively.
In this work, the atomic mass is taken to be unity.
The coupling constant $g$ is related to
a scattering length in one dimension $a$ as $g=\frac{2}{a} > 0$~\cite{RevModPhys.85.1633}.
The chemical potential of spin-$\sigma$ fermions
is denoted by $\mu_\sigma$.
For convenience, we introduce an average chemical potential
$\mu=(\mu_{\uparrow}+\mu_{\downarrow})/2$
and a fictitious Zeeman field
$h=(\mu_{\uparrow}-\mu_{\downarrow})/2$.
The grand canonical partition function is given by
$Z = \Tr\bqty{
\mathrm{e}^{-\beta \qty(\hat{H}-\sum_{\sigma}\mu_{\sigma}\hat{N}_\sigma)}
}$
with $\beta$ being an inverse temperature and a number operator
$\hat{N}_\sigma = \sum_{p}\hat{c}_{p,\sigma}^\dag \hat{c}_{p,\sigma}$.
The path-integral representation of $Z$ reads
\begin{align}
Z = \int \prod_{\sigma} \mathcal{D}\psi^*_\sigma \mathcal{D}\psi_\sigma \ \mathrm{e}^{-S},
\end{align}
where action $S$ is given by
\begin{align}
S =
\int_0^\beta \dd{\tau} \int \dd{x}
\Bigg[
&\sum_{\sigma=\uparrow, \downarrow}
\psi^*_{\sigma}(x,\tau) \qty(
\frac{\partial}{\partial \tau}
- \frac{1}{2}\frac{\partial^2}{\partial x^2}
- \mu_\sigma
)
\psi_\sigma(x,\tau) \notag \\
&\qquad\qquad
- g
\psi^*_\uparrow(x,\tau)
\psi^*_\downarrow(x,\tau)
\psi_\downarrow(x,\tau)
\psi_\uparrow(x,\tau)
\Bigg].
\label{action}
\end{align}
Here $\psi_\sigma(x,\tau)$ and $\psi^*_\sigma(x,\tau)$
are a Grassmann field and its complex conjugate, respectively.
While the action~\eqref{action} is given in a continuous spacetime,
one should perform a lattice regularization appropriately
to carry out numerical simulations.
We write the lattice spacings of the temporal and spatial directions
as $a_\tau$ and $a_x$, respectively, and their ratio as $r = a_\tau/a_x^2$.
We also introduce lattice quantities by
\begin{align}
\bar{\mu}_\sigma \equiv \mu_\sigma a_x^2, \quad
\bar{g} \equiv g a_x, \quad
\bar{\psi}_{\sigma,j,n} \equiv \psi_\sigma(ja_x,na_\tau) a_x^{1/2},
\end{align}
where $n$ and $j$ are integers that satisfy
$0 \leq n < N_\tau$ and $0 \leq j < N_x$.
The inverse temperature and the spatial length of the lattice are
given by $\beta = T^{-1} = N_\tau a_\tau$ and $L = N_x a_x$, respectively.
With these notations, we consider a lattice action:
\begin{align}
S_\text{lat}
=
&\sum_{j,n} \sum_{\sigma=\uparrow,\downarrow}
\qty(
\bar{\psi}^*_{\sigma;j,n} \bar{\psi}_{\sigma;j,n}
-
\bar{\psi}^*_{\sigma;j,n+1}
e^{-\bar{\phi}_{j,n} + \bar{\mu}_\sigma}
\bar{\psi}_{\sigma;j,n}
+
\frac{r}{2}
(\bar{\psi}^*_{\sigma;j+1,n} - \bar{\psi}^*_{\sigma;j,n})
(\bar{\psi}_{\sigma;j+1,n} - \bar{\psi}_{\sigma;j,n})
) \notag \\
&+\sum_{j,n} \frac{\cosh(\bar{\phi}_{j,n}) - 1}{\bar{g}},
\label{lattice action}
\end{align}
where $\bar{\phi}_{j,n}$ is a bosonic auxiliary field.
As shown in Ref.~\cite{Alexandru:2018brw},
the lattice action~\eqref{lattice action} correctly converges to the continuum one
as long as the matching conditions
\begin{align}
\frac{g a_\tau}{a_x}
=
\qty(\frac{f_2}{f_0} - \frac{f_1^2}{f_0^2})
\mathrm{e}^{\bar{\mu}_\uparrow + \bar{\mu}_\downarrow},
\quad
%
\mu_\sigma a_\tau
= \frac{f_1}{f_0} \mathrm{e}^{\bar{\mu}_\sigma} - 1
\end{align}
are satisfied, where $f_k$ is a $\bar{g}$-dependent constant given by
\begin{align}
f_k
=
\int_{-\infty}^\infty \dd{\bar{\phi}}
\mathrm{e}^{-\frac{\cosh(\bar{\phi}) - 1}{\bar{g}}} \mathrm{e}^{k\bar{\phi}}.
\end{align}
In practice, it is sufficient to use an approximate form of the matching conditions
\begin{align}
\bar{g} \simeq \frac{ga_\tau}{a_x}, \quad
\bar{\mu}_\sigma \simeq \mu_\sigma a_\tau - \frac{ga_\tau}{2a_x},
\end{align}
which are obtained as the first order approximation
in the expansion in terms of $a_\tau$.
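As a sanity check, the first-order matching conditions can be compared against the exact ones by evaluating the integrals $f_k$ numerically. The sketch below uses a simple rectangle-rule quadrature with an arbitrary grid choice; for a small lattice coupling $\bar{g}=0.01$ the leading-order relations hold at the percent level:

```python
import numpy as np

def f(k, gbar, phi_max=6.0, n=200001):
    """Rectangle-rule estimate of f_k = ∫ exp(-(cosh φ - 1)/gbar + k φ) dφ."""
    phi = np.linspace(-phi_max, phi_max, n)
    w = np.exp(-(np.cosh(phi) - 1.0)/gbar + k*phi)
    return np.sum(w) * (phi[1] - phi[0])

gbar = 0.01
# exact matching: g a_tau/a_x = (f2/f0 - f1^2/f0^2) e^{mu_up + mu_dn};
# to first order in a_tau this reduces to g a_tau/a_x ≈ gbar
ratio = f(2, gbar)/f(0, gbar) - (f(1, gbar)/f(0, gbar))**2
assert abs(ratio/gbar - 1.0) < 0.05
# mu a_tau = (f1/f0) e^{mu_bar} - 1 reduces to mu_bar ≈ mu a_tau - gbar/2,
# i.e. log(f1/f0) ≈ gbar/2
assert abs(np.log(f(1, gbar)/f(0, gbar)) - gbar/2.0) < 1e-3
```

The deviations from the first-order relations are $\mathcal{O}(\bar{g})$ relative corrections, consistent with the expansion in $a_\tau$ mentioned above.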
After integrating out the fermion fields,
the partition function and the effective action of the auxiliary field read
\begin{align}
Z
&=
\int \prod_{j,n} \dd{\bar{\phi}_{j,n}}
\mathrm{e}^{-S_\text{eff}[\bar{\phi}]},
\label{partitionfunction}
\end{align}
where the effective action of the auxiliary field is given by
\begin{align}
S_\text{eff}[\bar{\phi}]
&=
\sum_{j,n} \frac{\cosh\bar{\phi}_{j,n} - 1}{\bar{g}}
- \sum_{\sigma} \log \det
\qty[
I
+
e^{N_\tau \bar{\mu}_\sigma}
B^{-1}C_{N_\tau-1}
\cdots
B^{-1}C_{0}
], \label{effective action}\\
%
B_{j,j'}
&=
-\frac{r}{2}
\qty(
\delta_{j-1,j'} + \delta_{j+1,j'}
)
+
\qty(
1 + r
)
\delta_{j,j'}, \quad
%
(C_n)_{j,j'}
=
\delta_{j,j'} e^{-\bar{\phi}_{j,n}},
\end{align}
where $I$ is the $N_x \times N_x$ identity matrix.
Since we consider a naive finite difference as
an approximation of the second order derivative with respect to $x$,
the eigenvalues of $B$ are
$1+2r \sin^2\frac{\pi k}{N_x}, (k=0,1,\cdots, N_x-1)$.
It has been argued in Refs.~\cite{Blankenbecler:1981jt, Alexandru:2018brw} that
this naive lattice action converges too slowly to the continuum limit,
and that the behavior can be improved by replacing the eigenvalues of $B$ by
\begin{align}
\lambda_k = \exp(\frac{r}{2} \qty(\frac{2\pi k}{N_x})^2).
\end{align}
After this replacement, the form of $B$ is given by
\begin{align}
B_{j,j'}
=
\frac{1}{N_x}
\sum_{k=-\lfloor N_x/2 \rfloor}^{\lfloor N_x/2 \rfloor}
\lambda_k
\cos\qty(\frac{2\pi k}{N_x}(j-j')).
\end{align}
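The spectrally improved $B$ can be built directly from the replaced eigenvalues via the discrete Fourier transform, as in the following sketch (the lattice size and anisotropy are arbitrary illustrative values); by construction its spectrum is exactly $\{\lambda_k\}$:

```python
import numpy as np

Nx, r = 8, 0.1
q = 2.0*np.pi*np.fft.fftfreq(Nx)        # signed lattice momenta 2*pi*k/Nx
lam = np.exp(0.5*r*q**2)                # improved eigenvalues lambda_k
F = np.fft.fft(np.eye(Nx), axis=0)      # DFT matrix, F[k, j] = e^{-2 pi i k j / Nx}
B = (np.conj(F.T) @ np.diag(lam) @ F).real / Nx   # B = F^dagger Lambda F / Nx
assert np.allclose(B, B.T)                        # real symmetric circulant
assert np.allclose(np.sort(np.linalg.eigvalsh(B)), np.sort(lam))
```

Since $\lambda_k$ depends only on $q^2$, the imaginary parts cancel pairwise and $B$ comes out real and symmetric, with matrix elements depending only on $j-j'$.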
A notable point is that
the effective action~\eqref{effective action}
involves the logarithm of the fermion determinant,
which can be complex in general.
Therefore, this term may cause the sign problem when the Zeeman field $h$ is nonzero,
in which case the Monte Carlo method can be
difficult to apply to this system.
\section{Complex Langevin method}\label{sec:CLM}
The complex Langevin method (CLM)~\cite{Klauder:1983sp,Parisi:1984cs}
is an extension of stochastic quantization,
which is usually applicable only to real-valued actions.
In the CLM, we first complexify the auxiliary field
$\bar{\phi}_{n,k}$
and extend the domain of definition of $S_\text{eff}$ to the complex space.
For such a complex field,
we consider a fictitious time evolution described by
the complex Langevin equation:
\begin{align}
\bar{\phi}^\eta_{n,k}(t + \Delta t) =
\bar{\phi}^\eta_{n,k}(t)
- \frac{\partial S_\text{eff}}{\partial \bar{\phi}^\eta_{n,k}} \Delta t
+ \eta_{n,k}(t) \sqrt{\Delta t},
\end{align}
where $\eta_{n,k}(t)$ is a real Gaussian noise.
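A minimal one-variable sketch of this update is given below, assuming a toy Gaussian action $S = a\phi^2/2$ with a complex coefficient $a$ (not the lattice action of this paper), for which the exact answer $\expval{\phi^2} = 1/a$ is known and reproduced by the complexified Langevin evolution:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0 + 1.0j                   # complex coefficient of the toy action S = a phi^2 / 2
dt, n_steps, n_eq = 0.01, 200_000, 20_000
phi = 0.0 + 0.0j                 # complexified field variable
acc, n_meas = 0.0 + 0.0j, 0
for step in range(n_steps):
    drift = -a * phi                                         # -dS/dphi at complex phi
    phi = phi + drift*dt + rng.normal(0.0, np.sqrt(2.0*dt))  # real Gaussian noise
    if step >= n_eq:                                         # discard thermalization
        acc += phi**2
        n_meas += 1
est = acc / n_meas
assert abs(est - 1.0/a) < 0.15   # exact <phi^2> = 1/a = 0.5 - 0.5i
```

Note that the noise is real while the drift is complex, so the field wanders into the complex plane; for this linear drift the process is stable and converges to the correct expectation value.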
Assuming that the system described by
the complex Langevin equation reaches equilibrium at $t = t_\text{eq}$,
an average of a physical observable $O(\bar{\phi})$ can be defined as
\begin{align}
\expval{O(\bar{\phi})}
\equiv
\lim_{T\to\infty}\frac{1}{T}
\int_{t_\text{eq}}^{t_\text{eq} + T}
\dd{t}
\expval{O(\bar{\phi}^\eta(t))}_\eta,
\label{CLMaverage}
\end{align}
with the average over the noise
$\expval{O(\bar{\phi}^\eta(t))}_\eta$
being
\begin{align}
\expval{O(\bar{\phi}^\eta(t))}_\eta
\equiv
\frac{
\int
\prod_{n,k,t}
\dd{\eta}_{n,k}(t)
O(\bar{\phi}^\eta(t))
\mathrm{e}^{-\frac{1}{4} \sum_{n,k,t} \eta_{n,k}(t)^2}
}{
\int
\prod_{n,k,t}
\dd{\eta}_{n,k}(t)
\mathrm{e}^{-\frac{1}{4} \sum_{n,k,t} \eta_{n,k}(t)^2}
}.
\end{align}
We note that
$\expval{\eta_{n,k}(t) \eta_{n',k'}(t')}_\eta
=
2\delta_{nn'}\delta_{kk'}\delta_{tt'}$,
in particular.
Although we expect that
the mean value
$\expval{O(\bar{\phi})}$
is equivalent to the quantum expectation value calculated with the original action, i.e.,
$\int \prod_{n,k} \dd{\bar{\phi}}_{n,k}
O(\bar{\phi}) \mathrm{e}^{-S_\text{eff}}
/
\int \prod_{n,k} \dd{\bar{\phi}}_{n,k}
\mathrm{e}^{-S_\text{eff}}$
in the limit $\Delta t \to 0$,
this is not correct in general.
There are extensive studies~\cite{Aarts:2009uq,Aarts:2011ax,Nishimura:2015pba,Nagata:2015uga,Nagata:2016vkn,Salcedo:2016kyy,Aarts:2017vrv,Nagata:2018net,Scherzer:2018hid,Scherzer:2019lrh,Cai:2021nTV}
on when the CLM is justified,
and criteria for determining whether a CLM result is reliable have been proposed.
One practical criterion
that can be relied on in actual numerical simulations
is discussed from the viewpoint of the probability distribution of the drift term~\cite{Nishimura:2015pba,Nagata:2016vkn}.
In our case,
it is sufficient to consider the magnitude of the drift term given by
\begin{align}
v^\eta \equiv
\max_{n,k} \qty|
\frac{\partial S_\text{eff}}{\partial \bar{\phi}^\eta_{n,k}}
| \label{drift_magnitude}
\end{align}
and its distribution.
According to the criterion, the CLM is reliable
if the probability distribution of $v^\eta$ shows an exponential decay.
\section{Observables}\label{sec:Observables}
The number density of spin-$\sigma$ fermions is given by
\begin{align}
n_\sigma
=
\frac{T}{L}\frac{\partial}{\partial \mu_\sigma} \log Z
=
\frac{1}{L}
\frac{1}{Z}
\int \prod_{j,n} \dd{\bar{\phi}_{j,n}}
\tr \qty[
\frac{1}
{
I + e^{-N_\tau \bar{\mu}_\sigma}
C_{0}^{-1}B
\cdots
C_{N_\tau-1}^{-1}B
}
]
\mathrm{e}^{-S_\text{eff}[\bar{\phi}]}.
\end{align}
The particle number density on a lattice unit is defined by
$\bar{n}_\sigma = n_\sigma a_x$.
From now on,
we regard the spin-down fermions as the minority.
Typical temperature and momentum scales are given by the Fermi scales
which are determined by the density of spin-up fermions:
\begin{align}
T_\text{F} = \frac{\pi^2 n_\uparrow}{2},
\quad
p_\text{F} = \pi n_\uparrow.
\end{align}
In lattice simulations,
we can compute the dimensionless combinations $T/T_\text{F}$ and $p_\text{F}a$ as follows:
\begin{align}
\frac{T}{T_\text{F}} = \frac{2}{\pi^2 \bar{n}_\uparrow^2 N_\tau r},
\quad
p_\text{F}a = \frac{2\pi r \bar{n}_\uparrow}{\bar{g}}.
\end{align}
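These combinations are straightforward to evaluate; a small sketch with arbitrary illustrative lattice numbers (not values from the runs reported below):

```python
import numpy as np

def fermi_units(nbar_up, Ntau, r, gbar):
    """T/T_F and p_F a from lattice quantities, following the formulas above."""
    T_over_TF = 2.0 / (np.pi**2 * nbar_up**2 * Ntau * r)
    pF_a = 2.0 * np.pi * r * nbar_up / gbar
    return T_over_TF, pF_a

# illustrative inputs: nbar_up = 0.5, Ntau = 40, r = 0.1, gbar = 0.1
T_over_TF, pF_a = fermi_units(nbar_up=0.5, Ntau=40, r=0.1, gbar=0.1)
assert np.isclose(T_over_TF, 2.0/np.pi**2)   # = 2/pi^2 for these inputs
assert np.isclose(pF_a, np.pi)
```

Both outputs are dimensionless, so they can be compared directly across lattices with different $N_\tau$ and $a_x$.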
In order to calculate the polaron energy,
we consider the two-point Green's function:
\begin{align}
G(p,\tau)
\equiv
\frac{1}{Z}
\Tr[
e^{-\beta\hat{K}}
\Tprod_\tau
\qty(
\hat{c}^\dagger_{\downarrow,p}(\tau)
\hat{c}_{\downarrow,0}(0)
)
],
\quad
(-\beta \leq \tau \leq \beta),
\end{align}
where $\hat{K} \equiv \hat{H}-\sum_{\sigma}\mu_{\sigma}\hat{N}_\sigma$ and $\Tprod_\tau$ is the imaginary-time-ordered product.
Hereinafter we restrict ourselves to $\tau > 0$.
We write the eigenvalues and eigenstates of $\hat{K}$ as
$\hat{K}\ket{n} = K_n\ket{n}$.
In particular, $K_0 < K_1 < \cdots$.
We also assume that the ground state $\ket{0}$ is not degenerate.
Expanding the trace by the eigenstates,
the correlation function reads
\begin{align}
G(p,\tau)
=
\frac{
\sum_{nm}
e^{-(\beta-\tau) \Delta K_n - \tau \Delta K_m}
\mel{n}{\hat{c}^\dagger_{\downarrow,p}}{m}
\mel{m}{\hat{c}_{\downarrow,p}}{n}
}{
\sum_n
e^{-\beta \Delta K_n}
},
\end{align}
where $\Delta K_n \equiv K_n - K_0$.
In the low temperature limit $\beta \to \infty$,
only the ground state contributes to the summation over $n$.
Thus, we find
\begin{align}
\tilde{G}(p,\tau)
\equiv
\lim_{\beta\to\infty}
G(p,\tau)
=
\sum_m
e^{-\tau \Delta K_m}
\mel{0}{\hat{c}^\dagger_{\downarrow,p}}{m}
\mel{m}{\hat{c}_{\downarrow,p}}{0}.
\end{align}
Since the matrix elements appearing in the above expression
do not depend on $\tau$,
the correlator behaves like
\begin{align}
\tilde{G}(p,\tau)
=
A_0 e^{-\tau E_0} + A_1 e^{-\tau E_1} + \dots,
\end{align}
where $A_0, A_1, \dots$ are $\tau$-independent constants,
and $E_0, E_1, \dots$ are energies of the ground state and excited states.
In particular, the energy of the ground state can be extracted by
\begin{align}
E_0(p)
=
\frac{1}{a_\tau}
\lim_{\tau \to \infty} R(p,\tau),
\quad
R(p,\tau) \equiv
\log
\frac{\tilde{G}(p,\tau)}
{\tilde{G}(p,\tau+a_\tau)}, \label{ratio of green functions}
\end{align}
keeping $\tau \ll \beta$.
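The plateau extraction can be illustrated on synthetic data; the sketch below assumes a two-state correlator with made-up amplitudes and energies, and recovers the ground-state energy from the ratio at large $\tau$:

```python
import numpy as np

a_tau = 0.05
A0, A1, E0, E1 = 1.0, 0.4, 0.7, 2.3             # made-up amplitudes and energies
tau = a_tau * np.arange(121)
G = A0*np.exp(-E0*tau) + A1*np.exp(-E1*tau)     # synthetic two-state correlator
R = np.log(G[:-1] / G[1:])                      # R(tau) = log G(tau)/G(tau + a_tau)
E_est = R[-1] / a_tau                           # plateau value at the largest tau
assert abs(E_est - E0) < 1e-3                   # excited-state contamination decayed
```

The contamination from the excited state decays like $e^{-(E_1-E_0)\tau}$, which is why the ratio is read off at the largest available $\tau$ (while still keeping $\tau \ll \beta$ in an actual finite-temperature simulation).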
The polaron energy $U$ is defined by
\begin{align}
U \equiv E_0(0) + \mu_\downarrow. \label{def_polaron_energy}
\end{align}
The polaron energy is the shift of single-particle energy from that in the case of free fermions
due to the interaction between majority (spin-up fermions) and minority (spin-down fermions).
We note that
the polaron energy at zero temperature is calculated exactly
based on thermodynamic Bethe ansatz~\cite{doi:10.1063/1.1704798}:
\begin{align}
\frac{U}{T_\text{F}}
=
-\frac{2}{\pi}\qty[
\frac{1}{p_\text{F}a}
+ \tan^{-1}\qty(\frac{1}{p_\text{F}a})
+ \qty(
\frac{\pi}{2} + \tan^{-1}\qty(\frac{1}{p_\text{F}a})
) \frac{1}{(p_\text{F}a)^2}
].
\label{exact polaron energy}
\end{align}
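For reference, a direct implementation of this closed-form expression (the numerical arguments are only illustrative):

```python
import numpy as np

def polaron_energy_T0(pF_a):
    """Exact zero-temperature polaron energy U/T_F (thermodynamic Bethe ansatz)."""
    x = 1.0 / pF_a
    return -(2.0/np.pi) * (x + np.arctan(x) + (np.pi/2.0 + np.arctan(x)) * x**2)

# the shift is negative (attractive) and vanishes as p_F a -> infinity
assert polaron_energy_T0(1.0) < polaron_energy_T0(10.0) < 0.0
```

At $p_\text{F}a = 1$ the formula collapses to $U/T_\text{F} = -(2/\pi)(1+\pi) \approx -2.64$, a convenient check point.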
In lattice calculations,
the polaron energy is obtained as follows.
From the form of the effective action,
the lattice expression of the inverse Green's function reads
\begin{align}
G^{-1}
\equiv
\begin{pmatrix}
B & 0 & 0 & \cdots & 0 & \mathrm{e}^{\bar{\mu}_\downarrow}C_{N_\tau-1} \\
-\mathrm{e}^{\bar{\mu}_\downarrow}C_0 & B & 0 & \cdots & 0 & 0 \\
0 & -\mathrm{e}^{\bar{\mu}_\downarrow}C_1 & B & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & -\mathrm{e}^{\bar{\mu}_\downarrow}C_{N_\tau-2} & B
\end{pmatrix}.
\label{inverseGreenfunction}
\end{align}
From straightforward algebra,
each component of $G$ reads
\begin{alignat}{2}
G_{jj}
&= \frac{B^{-1}}
{
I
+
\mathrm{e}^{N_\tau\bar{\mu}_\downarrow}
B^{-1}C_{j-1}
\cdots
B^{-1}C_{0}
B^{-1}C_{N_\tau-1}
\cdots
B^{-1}C_j
}, \\
G_{kj}
&=
\begin{cases}
B^{-1}C_{k-1} \cdots B^{-1}C_jG_{jj}, \quad &(j+1 \leq k \leq N_\tau-1), \\
-B^{-1}C_{k-1} \cdots B^{-1}C_0 B^{-1}C_{N_\tau-1} \cdots B^{-1}C_jG_{jj}, \quad &(0 \leq k \leq j-1).
\end{cases}
\end{alignat}
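The closed-form equal-time block can be checked against a dense inversion of the full block matrix on a tiny lattice. The sketch below uses a randomly chosen mock auxiliary field and the naive periodic $B$; sizes and parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nx, mu, r = 3, 4, -0.2, 0.1
phi = rng.normal(0.0, 0.3, size=(Nt, Nx))           # mock auxiliary field
# naive periodic kinetic matrix B and the diagonal matrices C_n
B = (1.0 + r)*np.eye(Nx) - (r/2.0)*(np.roll(np.eye(Nx), 1, axis=0)
                                    + np.roll(np.eye(Nx), -1, axis=0))
C = [np.diag(np.exp(-phi[n])) for n in range(Nt)]

# assemble the block matrix G^{-1} with antiperiodic temporal structure
M = np.zeros((Nt*Nx, Nt*Nx))
for n in range(Nt):
    M[n*Nx:(n+1)*Nx, n*Nx:(n+1)*Nx] = B
for n in range(Nt - 1):
    M[(n+1)*Nx:(n+2)*Nx, n*Nx:(n+1)*Nx] = -np.exp(mu)*C[n]
M[0:Nx, (Nt-1)*Nx:Nt*Nx] = np.exp(mu)*C[Nt-1]
G = np.linalg.inv(M)

# closed form: G_00 = (I + e^{Nt mu} B^{-1}C_{Nt-1} ... B^{-1}C_0)^{-1} B^{-1}
Binv = np.linalg.inv(B)
T = np.eye(Nx)
for n in reversed(range(Nt)):                        # B^{-1}C_{Nt-1} ... B^{-1}C_0
    T = T @ Binv @ C[n]
G00 = np.linalg.inv(np.eye(Nx) + np.exp(Nt*mu)*T) @ Binv
assert np.allclose(G[:Nx, :Nx], G00)                 # matches the dense inverse
```

The agreement follows from eliminating the off-diagonal blocks row by row, which is exactly the derivation behind the component formulas above.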
The momentum representation of $G_{ij}$ is calculated by the discrete Fourier transformation.
Therefore, if the temporal lattice size $N_\tau$ is sufficiently large,
$\tilde{G}(p,\tau)$ is approximately given by
\begin{align}
\tilde{G}(2\pi k/N_x, na_\tau)
\simeq
\frac{1}{N_x} \sum_{k',l'=0}^{N_x-1}
\mathrm{e}^{\frac{2\pi i}{N_x}k(k'-l')}
(G_{0n})_{k'l'}.
\end{align}
\section{Numerical results}\label{sec:results}
We performed complex Langevin simulations on $(N_\tau, N_x) = (40, 60), (80, 60)$ lattices.
The anisotropy is set to $r = 0.1$.
There are three dimensionless parameters
that characterize the Gaudin-Yang model:
$\beta \mu$, $\beta h$ and $\lambda \equiv \sqrt{\beta} g$.
We fixed the dimensionless coupling constant by $\lambda = 2$,
and swept the average chemical potential and the Zeeman field between
$-1.2 \leq \beta \mu \leq 1.2, \, 0 \leq \beta h \leq 12$ for $N_\tau = 40$
and $-2.4 \leq \beta \mu \leq 2.4, \, 0 \leq \beta h \leq 24$ for $N_\tau = 80$,
respectively.
We set the Langevin step-size to $\Delta t = 0.01$,
and saved configurations of the auxiliary field every 0.02 units of Langevin time.
For every parameter set, we took 5001 samples.
Error bars shown below are $1\sigma$ statistical errors
calculated by the Jackknife method,
where the bin sizes are $0.3 - 1.2$ in units of Langevin time
depending on parameters and observables.
In every Langevin step,
the magnitude of the drift term (\ref{drift_magnitude})
is calculated and stored,
from which the probability distribution $P(v^\eta)$ of the drift term
can be drawn.
In Fig.~\ref{fig:drift},
a typical result of the probability distribution $P(v^\eta)$ is shown.
It is normalized so that the integral of the distribution is 1.
In each simulation,
we confirmed that $P(v^\eta)$ falls off faster than linearly in the log-log plot,
which means that $P(v^\eta)$ decays at least exponentially;
hence our CLM calculation is reliable.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.75\linewidth]{fig/drift.pdf}
\caption{The histogram of the drift term for the $N_\tau=40$ lattice
at $\beta\mu=0$, $\beta h = 3$.}
\label{fig:drift}
\end{figure}
We also investigated the eigenvalues
of the matrix
\begin{align}
G^{-1}_{\sigma, {\rm red}}
=
I
+
e^{N_\tau \bar{\mu}_\sigma}
B^{-1}C_{N_\tau-1}
\cdots
B^{-1}C_{0},
\label{reducedinversGreenfunction}
\end{align}
which is the reduced matrix of the inverse Green's function,
Eq. (\ref{inverseGreenfunction}),
appearing in the effective action on the lattice,
Eq. (\ref{effective action}),
as an effective fermionic matrix.
We calculate the eigenvalues
$w_\sigma$ of this matrix
from a single configuration
for several values of $\beta h$ from $0$ to $12$,
with the other parameters fixed to $N_\tau=40$ and $\beta\mu=-1.2$.
The imaginary part of $w_\sigma$ is negligibly small,
and hereinafter we discuss the real part of the
eigenvalues.
The numerical results of eigenvalues
$\log {\rm Re}[w_\sigma]$
of the matrices
$G^{-1}_{\uparrow, {\rm red}}$ and
$G^{-1}_{\downarrow, {\rm red}}$
are shown
in Fig. \ref{fig:eigenvalue}.
Red circles and blue squares correspond to the eigenvalues of $G^{-1}_{\uparrow, {\rm red}}$ and $G^{-1}_{\downarrow, {\rm red}}$, respectively.
In the case of $\beta h=0$,
$w_\uparrow$ is exactly the same as $w_\downarrow$ because
$G^{-1}_{\uparrow, {\rm red}}= G^{-1}_{\downarrow, {\rm red}}$.
As $\beta h$ increases,
the range of the eigenvalues of $G^{-1}_{\uparrow, {\rm red}}$ tends to broaden,
while that of $G^{-1}_{\downarrow, {\rm red}}$ tends to narrow.
A notable point of this eigenvalue analysis is
that the eigenvalues of $G^{-1}_{\sigma, {\rm red}}$
are always larger than 1, i.e., $\log({\rm Re}[w])>0$,
even in the case of large $\beta h$, corresponding to a large population imbalance.
This result indicates that the integrand of the partition function (\ref{partitionfunction}) is always positive, and thus no sign problem occurs in the parameter region of our calculations.
Note that this is a numerical finding in our setup,
and we do not prove that
the sign problem never occurs in the Gaudin-Yang model with population imbalance.
We note that the sign problem may occur in other situations within the Hamiltonian~\eqref{Hamiltonian} or the action~\eqref{action}, for example, considering other values of masses, chemical potentials, coupling constants, lattice parameters, and dimensions.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.75\linewidth]{fig/x060_t040_r0.100_1.0000_0.0000_-0.3000_h_2.0000eigenvalue.pdf}
\caption{The eigenvalues of matrix $G^{-1}_{\sigma, {\rm red}}$ with several values of $h$ in the case of
$N_\tau=40$ and $\beta\mu=-1.2$.
Red circles and blue squares correspond to $G^{-1}_{\uparrow, {\rm red}}$ and $G^{-1}_{\downarrow, {\rm red}}$, respectively.
}
\label{fig:eigenvalue}
\end{figure}
In Fig.~\ref{fig:particle number},
we show dimensionless quantities $T/T_\text{F}$, $1/p_\text{F}a$ and $n_\downarrow/n_\uparrow$,
which are typical indicators of the temperature, the interaction strength
and the population imbalance, respectively.
The ratio of particle numbers $n_\downarrow/n_\uparrow$ becomes significantly small
when $\beta h \gg 1 \, (\beta \mu_\uparrow \gg \beta \mu_\downarrow)$ as expected.
In that case, $T/T_\text{F}$ and $1/p_\text{F}a$ are also small
since $T_\text{F}$ and $p_\text{F}$ are proportional to $n_\uparrow$.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.45\linewidth]{fig/temperature_Nt=40.pdf}
\includegraphics[width=0.45\linewidth]{fig/temperature_Nt=80.pdf}
\includegraphics[width=0.45\linewidth]{fig/pFa_Nt=40.pdf}
\includegraphics[width=0.45\linewidth]{fig/pFa_Nt=80.pdf}
\includegraphics[width=0.45\linewidth]{fig/imbalance_Nt=40.pdf}
\includegraphics[width=0.45\linewidth]{fig/imbalance_Nt=80.pdf}
\caption{Dimensionless physical parameters $T/T_\text{F}$, $1/p_\text{F}a$ and the ratio $n_{\downarrow}/n_{\uparrow}$ of particle numbers
for $N_\tau =40$ (left) and $N_\tau = 80$ (right).}
\label{fig:particle number}
\end{figure}
For each parameter,
we computed the ratio of Green's functions $R(0,na_\tau)$
defined in Eq.~\eqref{ratio of green functions} at zero momentum.
Numerical results on an $N_\tau = 40$ lattice at $\beta\mu=0$
for a single configuration are shown in Fig.~\ref{fig:E0}.
The qualitative behavior of $R(0,na_\tau)$ at other $N_\tau$ and $\beta \mu$
is the same as in these results.
In the parameter region we swept,
$R(0,na_\tau)$ has a plateau at intermediate imaginary time,
which suggests that the energy spectrum is gapped
from any possible excited states.
In our analysis,
we extract the single-particle ground-state energy $E_0(0)$ by
\begin{align}
E_0(0)\simeq \frac{1}{a_\tau}R(p,\tau=(N_\tau-2)a_\tau) \label{1particle_energy}
\end{align}
because the long-time limit in
Eq. (\ref{ratio of green functions})
cannot be taken on a finite lattice.
After calculating the single-particle energy (\ref{1particle_energy}),
the polaron energy $U$ is obtained by Eq. (\ref{def_polaron_energy}).
In Fig.~\ref{fig:polaron energy},
we show the polaron energy on $N_\tau = 40$ and $80$ lattices.
As the temporal lattice size $N_\tau$ becomes large, the system approaches the continuum limit.
The color of each point represents the statistical average of $T/T_\text{F}$.
The lowest temperature is $T/T_\text{F} \simeq 0.08$ for $N_\tau = 40$
and $T/T_\text{F} \simeq 0.07$ for $N_\tau = 80$, respectively.
The ratio of particle numbers $n_\downarrow/n_\uparrow$ varies from
$1.0 \times 10^{-5}$ to $1.5 \times 10^{-1}$ for $N_\tau = 40$
and
$8.4 \times 10^{-8}$ to $1.0 \times 10^{-1}$ for $N_\tau = 80$, respectively.
The solid line indicates the exact result
at zero temperature shown in Eq.~\eqref{exact polaron energy}.
For a fixed $N_\tau$,
the numerical results show behavior similar to Eq.~\eqref{exact polaron energy}
as a function of $1/p_\text{F}a$,
even though they also depend on $T/T_\text{F}$ and $n_\downarrow/n_\uparrow$.
Moreover, the numerical results tend to be close to the exact result at zero temperature when we take the continuum limit. Our result suggests that the polaron energy is insensitive to the temperature and the impurity concentration.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.75\linewidth]{fig/E0.pdf}
\caption{The $\tau$-dependence of the ratio of Green's functions
$R(0,\tau)$
on an $N_\tau = 40$ lattice at $\beta\mu=0$.
$n$ denotes the number for the discretized imaginary time $\tau=na_\tau$.
Each point in this plot is obtained for a single configuration.}
\label{fig:E0}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.75\linewidth]{fig/polaron_energy.pdf}
\caption{The polaron energy computed by the CLM.
The solid line is the exact result at $T=0$ shown in Eq.~\eqref{exact polaron energy}.}
\label{fig:polaron energy}
\end{figure}
\section{Summary}\label{sec:summary}
We have studied excitation properties of Fermi polarons at finite temperature for the attractive Gaudin-Yang model
with large population imbalances using the complex Langevin method,
a non-perturbative approach free from the sign problem.
We have performed numerical simulations
for several values of the chemical potential $\beta \mu$
and the Zeeman field $\beta h$,
the dimensionless control parameters of the model,
and found that our simulations cover a wide range of
temperature $T/T_\text{F}$,
coupling strength $1/p_\text{F}a$,
and population imbalance $n_\downarrow/n_\uparrow$.
We have computed the polaron energy as a function of $1/p_\text{F}a$.
While our results are still away from the zero-temperature and single-polaron limits,
the computed polaron energy shows a $(1/p_\text{F}a)$-dependence similar
to the exact result in those limits.
The complex Langevin method works well in the Gaudin-Yang model even in the presence of population imbalance.
Practically, within our setup,
the probability distribution of the drift term
always shows an exponential fall-off,
which means that the problem of wrong convergence does not occur.
Moreover, our eigenvalue analysis shows that the integrand of the path integral
remains positive throughout our simulations.
However, it is known that
the sign problem is severe
in higher dimensions \cite{PhysRevA.82.053621}.
Thus, the behavior of the probability distribution of the drift term and
of the eigenvalues in higher dimensions
will be investigated in future work.
One interesting application of the complex Langevin method is
to study the transition from the degenerate Fermi-polaron regime
to the classical Boltzmann-gas regime of a unitary spin-imbalanced Fermi gas,
which was found to be sharp
in a cold-atom experiment using ${}^6$Li Fermi gases
in a three-dimensional box potential~\cite{PhysRevLett.122.093401}.
Also, it is interesting to explore an inhomogeneous pairing phase~\cite{10.21468/SciPostPhys.9.1.014,PhysRevA.103.043330} and in-medium bound states~\cite{PhysRevResearch.1.033177}, which cannot be addressed by quantum Monte Carlo simulation due to the sign problem in the mass- and population-imbalanced systems.
In order to discuss such phenomena,
we need a more elaborate estimation of systematic errors.
The work in this direction will be presented elsewhere.
\section*{Acknowledgments}
The authors are grateful to Tetsuo Hatsuda, Kei Iida, and Yuya Tanizaki for fruitful discussion.
T.\ M.\ D. was supported by Grant-in-Aid for Early-Career Scientists (No. 20K14480).
H.\ T. was supported by Grants-in-Aid for Scientific Research from JSPS (No.\ 18H05406).
S.\ T.\ was supported by the RIKEN Special Postdoctoral Researchers Program.
This work was partly supported by RIKEN iTHEMS Program.
\section{Introduction}
\input{introduction}
\section{Related Work}
\input{relatework}
\section{FineAction Dataset}
\input{dataset}
\section{Experiment}
\input{experiment}
\section{Conclusion}
\input{conclusion}
\subsection{Dataset Preparation}\label{Preparation}
First,
we build up a pool of action categories from the existing benchmarks for video collection and annotation.
Since ActivityNet \cite{caba2015activitynet} and Kinetics \cite{CarreiraQuo} contain a wide range of action categories from sports to daily activities,
we choose the union set of their action categories as our pool to increase class diversity.
Second, to select suitable action classes with clear temporal boundaries, we conduct filtering based on this pool by the following steps.
(1) We remove the generic action categories (such as \textit{Standing}),
since these actions are general with ambiguous temporal boundaries.
(2) We decompose the coarse action categories into a number of fine-grained actions.
For example,
\textit{Layup drill in basketball} can be divided into fine-grained action labels such as
\textit{dribble basketball},
\textit{pass basketball},
\textit{dunk basketball},
etc.
\textbf{Action taxonomy}.
Based on these rules,
we finally generate 106 action classes within a new taxonomy of three-level granularity.
This taxonomy consists of
4 top-level categories,
17 middle-level categories,
and 106 bottom-level categories.
Figure \ref{fig:UI} (a) shows the sub-tree of \textit{Personal Care} in this taxonomy,
and the full taxonomy can be found in the supplementary material.
It is worth noting that,
the taxonomy we built differs from the ones in ActivityNet \cite{caba2015activitynet} and HACS Segments \cite{zhao2019hacs}
due to its fine-grained characteristics.
For example,
\textit{Makeup} is actually a middle-level action that consists of 7 bottom-level actions in our taxonomy,
as shown in Figure \ref{fig:UI} (a), while
it is a bottom-level action in ActivityNet.
In practice, thanks to our more specific action definition, our taxonomy is more reasonable and suitable for temporal action localization in videos.
\textbf{Video collection}.
Based on our action taxonomy,
we progressively collect videos from both existing benchmarks and the Internet.
First,
we manually select videos from the related categories of the existing video datasets,
i.e.,
YouTube8M \cite{abu2016youtube},
Kinetics400 \cite{CarreiraQuo},
and
FCVID \cite{jiangfcvid}.
Note that,
for the bottom-level action categories that do not appear in these datasets,
we use the corresponding middle-level action categories for selection.
Second,
we annotate these videos (the procedure is described in the next section),
and count the number of instances per category.
If the number of instances in a bottom-level category is smaller than 200,
we use keyword search to crawl YouTube videos and increase the number of instances of this action.
Finally, we remove video duplicates by computing the pair-wise video similarity with deep features.
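The de-duplication step can be sketched as a greedy pass over pair-wise cosine similarities of the deep features. This is an illustrative sketch, not the pipeline actually used to build the dataset; the 0.95 threshold and the 2-D toy features are assumed values:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def dedup(features, threshold=0.95):
    """Greedy de-duplication: keep a video only if it is not too similar to any kept one."""
    kept = []
    for i, f in enumerate(features):
        if all(cosine(f, features[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# toy 2-D "deep features": videos 0 and 1 are near-duplicates
features = [[1.0, 0.0], [1.0, 0.01], [0.0, 1.0]]
kept = dedup(features)
```

In practice the features would be pooled CNN embeddings of each video, and the threshold would be tuned on labeled duplicate pairs.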
\subsection{Dataset Annotation}
After data collection,
we perform manual annotation of action instances for each video.
Specifically,
given a video,
the trained annotator is asked to find the start and the end of each action instance as well as associating action tag with this instance.
To boost annotation efficiency and consistency,
we design an annotation tool as shown in Figure \ref{fig:UI} (c).
First,
the annotator loads a video into \textit{video browsing area},
for quickly previewing the entire video.
Second,
the annotator carefully checks the video frames in \textit{frame selection area},
and determines the start and the end of each action instance.
Third,
the annotator operates various buttons (e.g., adding, deleting, and modifying labels) of \textit{operation menu area} for detailed annotation.
Finally,
the annotator makes a double check for the annotation results on \textit{label display area}.
We design a rigorous guidance and checking procedure to ensure the annotation quality.
First,
we develop a guide to give the specific definition of each action class, which can ensure the annotation consistency of temporal boundaries among different annotators.
Specifically, we provide the start and end description of each action category with image-text definition.
For sport activities,
we use their definition and explanation from Wikipedia and/or professional sport manuals.
For other daily activities,
a team of professional researchers in TAL are asked to summarize the clear boundary definitions to avoid annotation bias.
Figure \ref{fig:UI} (b) shows an example of \textit{Paint Eye Shadow} in our annotation guide.
Second,
we arrange another annotator team to check all the annotated videos in multiple rounds to increase annotation quality.
Finally,
the professional researchers are asked to make the sampling inspection to further alleviate inaccurate and/or inappropriate annotations.
\begin{figure*}[!htp]
\begin{center}
\includegraphics[width=\linewidth]{Image/statis.pdf}
\end{center}
\vspace{-3mm}
\caption{Number of instances per category. We plot the instance distribution of all the bottom-level categories in each top-level category. All the plots exhibit the natural long-tailed distribution.}
\label{fig:category_num}
\vspace{-5mm}
\end{figure*}
\subsection{Dataset Statistics}
We first compare our FineAction with the large-scale datasets in TAL,
i.e.,
ActivityNet \cite{caba2015activitynet} and HACS Segment \cite{zhao2019hacs}.
As shown in Table \ref{tab:compare},
FineAction has the comparable data size but exhibits more fine-grained properties than ActivityNet and HACS Segment.
For example,
the fine-grained actions often happen in a short period.
Hence,
the average duration of temporal instance in our FineAction (7.1s) is much shorter than the one in ActivityNet (49.2s) and HACS Segment (33.2s).
Moreover,
we further make comparison in terms of instance-duration distribution in Figure \ref{fig:time}.
The distribution of our FineAction is different from that of ActivityNet and HACS Segments,
i.e.,
most instances are shorter than 2s in our FineAction,
while
most instances are longer than 15s in ActivityNet and HACS Segments.
Finally,
several fine-grained actions can happen simultaneously.
Hence,
11.5\% of temporal segments have multiple action labels with overlaps in our FineAction,
while all the temporal segments are tagged with a single label in ActivityNet and HACS Segment.
Hence, these multi-label annotations would greatly increase the challenge of our FineAction.
Then, we compare our FineAction with other fine-grained temporal datasets.
As shown in Table \ref{tab:compare},
most of these existing benchmarks focus on narrow domains such as sports (THUMOS-14 and FineGym V1.0) or kitchen activities (MPII Cooking and EPIC-Kitchens),
while our FineAction contains multiple types of activities with rich diversity.
In addition, the recent large-scale fine-grained datasets mainly work on other video tasks,
e.g., action recognition (FineGym V1.0) and anticipation (EPIC-Kitchens).
THUMOS-14 is the dataset most similar to our FineAction for TAL.
However, it is quite small, with a limited number of videos and action instances.
These comparisons demonstrate that our FineAction is unique, with the distinct properties of fine-grained action classes, multi-label and dense annotations, relatively large-scale capacity, and rich action diversity.
Finally,
we show the number of instances for each action category in Figure \ref{fig:category_num}.
For each top-level category,
we visualize the instance distribution over the corresponding bottom-level categories.
The number of instances exhibits the natural long-tailed distribution,
which also arises new challenges and opportunities for temporal action detection.
\subsection{Dataset Properties}
\textbf{Dataset challenges}.
Compared with the existing benchmarks,
our FineAction dataset shares the following distinct challenges.
1) {\em Fine-grained action categories with a long-tailed distribution}.
It is difficult to recognize these detailed actions,
if the models mainly focus on distinguishing background and action categories without learning subtle motion patterns.
2) {\em Densely annotated instances with short temporal duration}.
It leads to the difficulties in precisely localizing temporal boundaries,
since various short actions are densely distributed in the entire video.
3) {\em Temporal segments with multiple action labels in overlaps}.
It is challenging to recognize different actions in a single segment,
if the models are designed without concurrent action analysis and/or action relation learning.
4) {\em Diverse action classes with a wide range of semantics}.
Without specific prior knowledge, it is hard to design TAL models to perform well on various fine-grained human actions.
\textbf{High quality}.
FineAction is progressively annotated in a careful and efficient procedure.
Our annotation teams are rigidly supervised by professional researchers.
With our customized annotation tool and guidance,
action instances are correctly labeled with precise temporal boundaries.
Besides,
our researchers make multiple calibrations to remove annotation bias as much as possible.
\textbf{Richness and diversity}.
From the perspective of action definition,
FineAction establishes a new taxonomy which contains fine-grained actions from sports to daily activities.
Hence, it is a preferable benchmark to understand detailed human actions in the realistic scenarios.
From the perspective of data collection,
FineAction is collected from various data resources (including both existing benchmarks and Internet videos) via rigid quality control.
Such richness brings new opportunity to develop powerful deep learning models for TAL.
\subsection{Dataset and Metrics}\label{Evaluation}
\textbf{FineAction benchmark}.
To build a solid TAL benchmark,
we manually split the videos into the training set (50$\%$), validation set (25$\%$), and testing set (25$\%$).
In total, FineAction contains 57,752 training instances from 8,440 videos,
24,236 validation instances from 4,174 videos and 21,336 testing instances from 4,118 videos.
Unless otherwise mentioned,
we report the results by training on the training set and testing on the validation set.
We withhold the annotations of the testing set in the public release and encourage researchers to submit their testing results to our server for evaluation.
\textbf{Evaluation metrics.} In line with previous temporal action localization tasks, we use Average Recall (AR) to evaluate the generated proposals, calculated under different tIoU thresholds [0.5:0.05:0.95].
We measure the relation between AR and the Average Number (AN) of proposals, denoted
as AR@AN.
Furthermore, we also calculate the area (AUC) under the AR vs. AN curve as another evaluation metric, similar to ActivityNet dataset, where AN ranges from 0 to 100.
For the temporal action localization task, mean Average Precision (mAP) is a conventional evaluation metric, where Average Precision (AP) is calculated for each action category.
For our FineAction, mAP with tIoU thresholds [0.5:0.05:0.95] is used to compute the average mAP.
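The temporal IoU underlying all of these metrics, together with the threshold grid [0.5:0.05:0.95], can be sketched as follows (an illustrative helper, not the benchmark's official evaluation code):

```python
def t_iou(p, g):
    """Temporal IoU between proposal p = (start, end) and ground truth g = (start, end)."""
    inter = max(0.0, min(p[1], g[1]) - max(p[0], g[0]))
    union = (p[1] - p[0]) + (g[1] - g[0]) - inter
    return inter / union if union > 0 else 0.0

# the tIoU threshold grid [0.5:0.05:0.95] used for AR@AN and average mAP
TIOU_THRESHOLDS = [0.5 + 0.05 * i for i in range(10)]
```

AR@AN then counts, at each threshold, the fraction of ground-truth instances matched by at least one of the top-AN proposals, and AUC integrates AR over AN.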
\textbf{Feature extraction. }
For feature encoding, we utilize two-stream architecture \cite{Simonyan2014Two} to extract features for video frames.
In our experiments, two separate I3D \cite{CarreiraQuo} models are pre-trained on Kinetics \cite{CarreiraQuo} from consecutive RGB frames and optical-flow frames, respectively.
The I3D model takes non-overlapping snippets of 16 stacked RGB or optical-flow frames as input and extracts 2048-dimensional features for each stream.
For the fusion of modality, we concatenate the features of RGB and Flow.
By default, we rescale the feature sequence of input videos to length $L$ = 100 by linear interpolation following \cite{Lin_2018_ECCV}. We also investigate the effect of rescaled length in our ablation study.
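The rescaling step can be sketched as plain 1-D linear interpolation. This is an illustrative, dependency-free version applied to a single feature channel; in practice it would run over each of the 2048 (or 4096 after fusion) channels:

```python
def rescale(seq, L):
    """Linearly interpolate a feature sequence (list of floats) to a fixed length L."""
    n = len(seq)
    if n == 1 or L == 1:
        return [seq[0]] * L
    out = []
    for i in range(L):
        pos = i * (n - 1) / (L - 1)  # fractional position in the original sequence
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        w = pos - lo
        out.append(seq[lo] * (1.0 - w) + seq[hi] * w)
    return out
```

For example, `rescale([0.0, 2.0], 3)` fills in the midpoint, and any input length is mapped to exactly $L$ snippet features.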
\begin{table}[t]
\centering
\begin{tabular}{c|c|cc}
\hline
\textbf{Dataset}& \textbf{AR@100} & \textbf{Classifier} & \textbf{mAP@0.5} \\
\hline
THUMOS14& 47.7 & UNet &38.8 \\
ActivityNet-1.3 &75.0 & TSN & 50.1 \\
HACS Segment &70.9 & SlowFast & 52.5 \\
\hline
FineAction&24.3 & TSN & 14.4 \\
\hline
\end{tabular}
\caption{Performance evaluation of BMN method on different benchmarks.}
\label{tab:SOAT comparation}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{c|rccc}
\hline
\textbf{Length} & {\textbf{AR@5}} & {\textbf{AR@10}} & {\textbf{AR@100}} & {\textbf{AUC}}\\
\hline
100 & 9.99 & 12.84 &24.34 & 19.19\\
150 & 10.56 & 13.99 & 26.89 & 21.06\\
200 & 10.69 &14.41 & 28.57 & 22.16 \\
250 & 11.11 & 14.81 & 29.56 & 22.94\\
\hline
\end{tabular}
\caption{Influence of feature sequence length.}
\label{tab:length}
\vspace{-3mm}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{c|cccc}
\hline
\textbf{Instance Duration} & 0-2 s &2-5 s& 5-10 s&\textgreater 10 s \\
\hline
\textbf{AR@100} &7.46& 32.58 &51.24 &71.73\\
\hline
\end{tabular}
\caption{Localization performance on different duration of instances.}
\label{tab:error_time}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{c|ccc}
\hline
\textbf{FineAction} & \textbf{AR@100} & \textbf{Avg. Duration} &\textbf{Num}\\
\hline
straighten hair & 1.6 & 1.17 s & 1934\\
apply eyebrows & 1.7 & 1.13 s & 3428\\
dig hole & 2.2 & 0.83 s & 544\\
\hline
freestyle relay & 88.4 & 53.12 s & 345\\
breaststroke & 94.5 & 53.59 s & 211\\
play the harp & 96.2 & 69.21 s & 268\\
\hline
\end{tabular}
\caption{Top-3 worst and top-3 best localization performance of categories on the FineAction with BMN. }
\vspace{-3mm}
\label{tab:error_class}
\end{table}
\subsection{Action Detection Results}
We report the performance of the state-of-the-art methods whose codes are publicly available, including BMN \cite{lin2019bmn}, DBG \cite{lin2020fast}, and G-TAD \cite{xu2020g}, on two main tasks for temporal action localization.
To ensure a fair comparison, we keep the video features on the same scale for all the methods.
Table \ref{tab:SOAT} left shows the results on \textit{Action Proposal Generation}, in terms of AR@AN with SNMS at IoU thresholds [0.50:0.05:0.95], where SNMS stands for Soft-NMS.
Table \ref{tab:SOAT} right shows the results of \textit{Action Detection}, in terms of mAP at IoU thresholds [0.50:0.05:0.95].
To assign global classification results to the proposals, we adopt the top-1 video-level classification results generated by TSN \cite{TSN19}, and we use the confidence scores of BMN proposals to rank the detection results.
It clearly shows that the performances of these existing methods on our FineAction are comparable to each other on both tasks, and that the RGB and Flow modalities are complementary to a certain degree.
To further compare our FineAction with other benchmarks, we present the performance of the BMN method on different benchmarks in Table \ref{tab:SOAT comparation}.
The performance on our dataset is far lower than on the other benchmarks for both action proposal generation and action detection, demonstrating the challenges of our FineAction dataset.
\subsection{Ablation Studies}
\textbf{Study on sequence length}.
We investigate the length of the input feature sequence on the FineAction dataset.
We use BMN \cite{lin2019bmn} as the baseline for evaluation, and the results are reported in Table~\ref{tab:length}.
As expected, increasing the sequence length improves localization performance as $L$ grows from 100 to 200, and the performance tends to saturate as $L$ grows from 200 to 250.
We conjecture that BMN can leverage longer sequences to aggregate more detailed context for localization.
However, an overly long sequence limits the batch size and can hamper network training.
Hence, appropriately increasing the sequence length can improve fine-grained temporal action localization.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Image/error0.pdf}
\end{center}
\vspace{-3mm}
\caption{ Error analysis on FineAction. (a) \textbf{Left}: the error distribution over the number of predictions per video. G means the number of Ground-Truth instances. \textbf{Right}: the impact of error types, measured by the improvement gained from resolving a particular type of error. (b) Visualization of typical failure cases on FineAction. }
\label{fig:error}
\end{figure*}
\textbf{Error analysis.}
We analyze typical errors in FineAction by using BMN with the protocol in \cite{alwassel2018diagnosing}.
As shown in Figure \ref{fig:error} (a),
Background Error and Localization Error are the two main sources of false positives.
We can conclude that the BMN method often generates invalid proposals and incorrect boundaries in our FineAction,
perhaps due to the fine-grained and multi-labeled action instances.
To further demonstrate it, we visualize the typical failure cases in Figure \ref{fig:error} (b), using a soccer game video.
As expected, many detected proposals are invalid and fail to distinguish action from background, which confirms the challenges of our FineAction dataset.
\textbf{Which categories are more challenging?}
After carefully analyzing the typical errors, we delve into which categories are more challenging in our FineAction.
First, we divide the instances into four levels according to duration,
namely \textit{0-2s, 2-5s, 5-10s, \textgreater 10s}, and present the average recall with 100 BMN proposals for each level.
As shown in Table \ref{tab:error_time},
instances in the \textit{0-2s} level have the worst performance,
even though this level covers the largest proportion of our FineAction dataset (Figure \ref{fig:time}).
It clearly illustrates the localization difficulty in short instances.
Moreover,
we show the action categories with the top-3 worst and top-3 best localization performance in Table \ref{tab:error_class}.
We can see that,
the 3 worst categories (\textit{straighten hair}, \textit{apply eyebrows} and \textit{dig hole}) are highly fine-grained with much shorter duration.
Hence,
it is quite challenging to detect these actions,
even with a large number of training instances.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{Image/metric.pdf}
\end{center}
\vspace{-5mm}
\caption{
An illustration of the proposed new metric. In this example, the IoU of the ground truth moment and two proposals (\textit{Proposal 1} and \textit{Proposal 2}) are both 0.5.}
\label{fig:metric}
\vspace{-4mm}
\end{figure}
\begin{table}[t]
\small
\centering
\begin{tabular}{l|ccc}
\hline
\textbf{Method} & \textbf{Modality} & \textbf{AUC old} &\textbf{AUC new}\\
\hline
\multirow{3}{*}{BMN} & RGB & 17.49 & 16.42 \\
& Flow & 18.94 & 17.94 \\
& RGB+Flow & 19.19 & 18.19 \\
\hline
\multirow{3}{*}{DBG} & RGB & 15.48 & 14.53\\
& Flow & 17.70 & 16.74 \\
& RGB+Flow & 17.24 & 16.32 \\
\hline
\multirow{3}{*}{G-TAD} & RGB & 16.06 & 14.90 \\
& Flow & 17.09 & 15.97 \\
& RGB+Flow & 17.65 & 16.54 \\
\hline
\end{tabular}
\caption{Comparison of different SOTA methods on new metric at IoU [0.50:0.05:0.95]. Results are worse on the new metric, which can better evaluate the performance for shorter instance.}
\label{tab:new_metric}
\vspace{-4mm}
\end{table}
\textbf{Rethink the evaluation metrics for shorter instances.}
As mentioned in Section \ref{Evaluation}, we argue that the prevailing evaluation metric, AR@AN under tIoU, is unreliable for shorter instances.
For the short ground truth,
it is preferable to generate a short proposal with the similar scale,
instead of producing a long proposal to cover the ground truth blindly.
For example, as shown in Figure \ref{fig:metric}, both \textit{Proposal 1} and \textit{Proposal 2} meet the IoU=0.5 condition, i.e., these two proposals are regarded equally under the original AR@AN IoU.
However, since the temporal boundaries of the \textit{Proposal 2} are further from the ground truth, we propose a new metric as follows, which penalizes more on \textit{Proposal 2}:
\begin{equation}
IoU_{new}=IoU\cdot \alpha_s \cdot \alpha_e,
\label{eq:new IoU}
\end{equation}
where $\alpha_s=1-\mathrm{abs}(g_t^s-p^s)$ and $\alpha_e=1-\mathrm{abs}(g_t^e-p^e)$.
Both $p^s,p^e$ and $g_t^s,g_t^e$ are normalized to 0-1 by dividing the whole video length.
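A direct transcription of Eq.~\eqref{eq:new IoU} reproduces the example of Figure \ref{fig:metric}; this is an illustrative sketch rather than the official evaluation code, and the boundary values below are toy numbers:

```python
def iou_new(p, g, video_len):
    """Eq. (new IoU): tIoU scaled by the boundary-offset penalties alpha_s and alpha_e.

    p and g are (start, end) in seconds; offsets are normalized by the video length.
    """
    inter = max(0.0, min(p[1], g[1]) - max(p[0], g[0]))
    union = (p[1] - p[0]) + (g[1] - g[0]) - inter
    iou = inter / union if union > 0 else 0.0
    ps, pe = p[0] / video_len, p[1] / video_len
    gs, ge = g[0] / video_len, g[1] / video_len
    alpha_s = 1.0 - abs(gs - ps)
    alpha_e = 1.0 - abs(ge - pe)
    return iou * alpha_s * alpha_e

# both proposals have tIoU = 0.5 against g = (40, 60) in a 100 s video,
# but the tighter Proposal 1 scores higher under the new metric
g = (40.0, 60.0)
score1 = iou_new((45.0, 55.0), g, 100.0)  # Proposal 1: short, close boundaries
score2 = iou_new((30.0, 70.0), g, 100.0)  # Proposal 2: long, far boundaries
```

Under the old metric both proposals tie at 0.5; under the new one the distant boundaries of \textit{Proposal 2} are penalized more heavily.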
As shown in Table \ref{tab:new_metric},
the new metric yields lower scores than the old one and is more suitable for fine-grained annotations. It also shows that action localization for shorter instances still faces great challenges.
\subsection{Action Datasets}
Action recognition aims at classifying action in a trimmed video clip.
Due to its potential value in realistic applications, it has been widely explored in the past decades, from handcrafted methods \cite{Laptev2008Learning,wang2013dense} to deep learning models \cite{CarreiraQuo,Du2015Learning,Simonyan2014Two}.
In particular, more challenging datasets were proposed subsequently, including UCF101 \cite{soomro2012ucf101}, Kinetics \cite{CarreiraQuo}, ActivityNet \cite{caba2015activitynet}, Moments in Time \cite{monfort2019moments}, and others \cite{kuehne2011hmdb,idrees2017thumos,yeung2018every,zhao2019hacs,sigurdsson2016hollywood}.
With fast development of these large-scale benchmarks,
deeply-learned video models exhibit high accuracy by learning spatial and temporal evolution of actions.
Unlike early datasets of action recognition, which mainly focus on action classification,
more challenging datasets are proposed for other tasks such as temporal action detection and spatio-temporal action detection.
Temporal action detection in untrimmed videos is crucial to inspection of massive videos uploaded on the Internet in real world.
Several datasets with fine granularity of classes have been presented in the narrow domains.
For example,
THUMOS Challenge 2014 \cite{idrees2017thumos} includes 413 untrimmed videos on 20 sport actions, which was extended into Multi-THUMOS \cite{yeung2018every} with 65 action classes.
FineGym V1.0 \cite{shao2020finegym} provides temporal annotations with a three-level semantic hierarchy for gymnastic videos.
Other datasets, such as MPII Cooking \cite{rohrbach2012database,rohrbach2016recognizing} and EPIC-Kitchens \cite{damen2018scaling}, mainly focus on kitchen scenes.
Models trained on such domain-specific datasets may not generalize well to daily activities.
Conversely, some datasets like ActivityNet-v1.3 \cite{caba2015activitynet} and HACS Segment \cite{zhao2019hacs} were designed to include more general, every-day activities.
ActivityNet-v1.3 \cite{caba2015activitynet} includes 20K untrimmed videos with 23K temporal action annotations, and HACS Segment \cite{zhao2019hacs} contains 49k untrimmed videos with 122k temporal action annotations.
But these datasets lack fine-grained annotations for daily activities.
Spatio-temporal action detection datasets, such as UCF Sports \cite{rodriguez2008action}, UCF101-24 \cite{soomro2012ucf101}, J-HMDB \cite{jhuang2013towards}, DALY \cite{weinzaepfel2016towards}, AVA \cite{gu2018ava} and AVA-Kinetics \cite{li2020ava}, typically evaluate spatio-temporal action detection for short videos with frame-level action annotations.
These benchmarks pay more attention to spatial information with frame-level and clip-level detectors, which cannot fully utilize temporal information. Recently, MultiSports \cite{MultiSports21} presented a spatio-temporal action detection dataset on sports actions.
Alternatively,
we mainly focus on temporal localization of detailed human action instances in this work.
\subsection{Temporal Action Localization}
Temporal action localization aims at localizing the start and end points of action clips from the entire untrimmed video with full observation.
It has been widely investigated in the literature.
For example, \cite{zhao2017temporal} leverages actionness as guidance to obtain temporal intervals and then classifies these intervals by learning temporal structure and action completeness.
\cite{Shou2016Temporal} localizes actions with three stages of action proposal generation, proposal classification, and proposal regression.
\cite{ChaoRethinking,XuR} adapt Faster R-CNN \cite{ren2015faster} for temporal action localization.
\cite{Lin_2018_ECCV} generates proposals via learning starting and ending probability using a temporal convolutional network.
Graph-based methods \cite{Zeng_2019_ICCV} and \cite{bai2020boundary} apply graph convolutional networks to model relations among different proposals for action classification and localization.
Recently \cite{tan2021relaxed} presents a simple and end-to-end learnable framework for direct action proposal generation, by re-purposing a Transformer-alike architecture.
\begin{table*}[h]
\small
\centering
\begin{minipage}{.67\textwidth}
\footnotesize
\begin{tabular}{c|cccccc}
\hline
{\textbf{Database}} &{\textbf{Category}} & {\textbf{Video}} & {\textbf{Instance}} &{\textbf{Overlap}} & {\textbf{Duration}} &{\textbf{Action type}}\\
\hline
MPII Cooking & 65 &45 &5,609 & 0.1\% & 11.1 m &\multirow{2}{*}{kitchens}\\
EPIC-Kitchens & 4,025 &700 &89,979 &28.1\% & 3.1 s & \\
\hline
FineGym V1.0 & 530 & 303 &32,697 & 0.0\% &1.7 s &\multirow{2}{*}{sports}\\
THUMOS14 & 20 &413 & 6,316 &17.5\% & 4.3 s & \\
\hline
ActivityNet & 200 &19,994 & 23,064 &0.0\% & 49.2 s &\multirow{3}{*}{{daily events}}\\
HACS Segment& 200 & 49,485 & 122,304 &0.0\% & 33.2 s & \\
FineAction (Ours) & 106 & 16,732 & 103,324 & 11.5\% & 7.1 s & \\
\hline
\end{tabular}
\vspace{-3mm}
\caption{Comparison with Related Benchmarks. Our FineAction is unique due to its fine-grained action classes, multi-label and dense annotations, relatively large-scale capacity, and rich action diversity. }
\vspace{-5mm}
\label{tab:compare}
\end{minipage}
\hfill
\begin{minipage}{.3\textwidth}
\includegraphics[width=1.\textwidth]{Image/time.png}
\vspace{-5mm}
\captionof{figure}{Comparison of instance duration.}
\vspace{-3mm}
\label{fig:time}
\end{minipage}
\end{table*}
\section{Introduction}
Brain-machine interfaces (BMIs) acquire signals generated by neurons, use them to decode users' intentions, and convert them into commands that can be used to control actuators like robotic arms and prostheses. They are essential tools in motor rehabilitation by restoring the movement ability of amputees and tetraplegic patients \cite{ajiboye_restoration_2017,collinger_7_2013,hochberg_reach_2012,hotson_individual_2016}.
To achieve the objective, it is essential to develop accurate algorithms to decode motor function from neural signals. Linear decoders, including linear regression \cite{collinger_7_2013}, linear discriminant analysis \cite{hotson_individual_2016}, and variants of Kalman filters \cite{ajiboye_restoration_2017,nason_real-time_2020,hochberg_reach_2012} have been developed to perform arm and hand control. To further improve the accuracy, nonlinear decoders such as recurrent neural network \cite{sussillo_recurrent_2012,hosman_bci_2019} and feed forward neural network \cite{glaser_machine_2020,willsey_real-time_2021} are actively investigated.
While neural networks are powerful, they come at the cost of high computational complexity, implying high energy consumption in hardware implementations \cite{sze_efficient_2017}.
For BMIs that perform inference in the implanted device to reduce the wireless communication overhead of transmitting large volumes of raw data, power efficiency is a critical concern, both to limit heat generation and to protect the surrounding tissue. As a result, highly energy-efficient approaches that match the accuracy achieved by artificial neural networks (ANNs) should be designed.
Spiking neural networks (SNNs), a computing paradigm inspired by biological neural networks, have potential for achieving energy-efficient computation by leveraging sparsity introduced by the asynchronous feature of the neurons \cite{10.3389/fnins.2018.00774}.
While SNNs have been heavily studied to solve classification tasks~\cite{wu_spatio-temporal_2018,zheng_going_2020}, they are perfect candidates for solving regression tasks with spatio-temporal inputs and outputs, such as motor decoding, thanks to their nature of taking time series as input and generating time series output.
There are three main ways to construct SNNs.
1) ANN-to-SNN conversion: an ANN is trained first and then converted to an SNN.
Typically, the conversion relies on rate coding and mimics the computation of the ANN by setting the parameters of the SNN so that the spiking neurons' firing rates correlate with the activations of the original ANN~\cite{rueckauer_conversion_2017}. This conversion approach has shown performance similar to its ANN counterparts.
2) Unsupervised learning such as spike-timing-dependent plasticity (STDP): a biologically plausible approach relying on local learning rules~\cite{masquelier_unsupervised_2007}. However, without global supervision, it has so far not achieved state-of-the-art performance in terms of accuracy and energy efficiency. 3) SNN backpropagation such as spatio-temporal backpropagation (STBP)~\cite{wu_spatio-temporal_2018,zheng_going_2020}: by applying a surrogate function in the backward pass to approximate the derivative of the spike activity, an error backpropagation path can be established for gradient-descent training. This approach exploits the temporal structure of the input and has demonstrated good performance in classification tasks. In this work, we construct the SNN using the SNN backpropagation method.
In this paper, we propose a novel low-complexity SNN, trained with an improved STBP method, to predict two-finger velocities in an open-loop setting for implantable BMIs. The main contributions are:
\begin{itemize}
\item We propose and open source\footnote{https://github.com/liaoRichard/SNN-for-Finger-Velocity-iBMI} a novel low-complexity SNN based on leaky-integrate-and-fire (LIF) neurons for continuous-time finger velocity decoding in low-power implantable BMIs.
\item We demonstrate that the STBP SNN training strategy can be enhanced with techniques such as the neuron reset-by-subtraction scheme and trainable decay factors to achieve the same level of performance as a state-of-the-art ANN decoder.
\item We validate the accuracy of the proposed SNN on two datasets recorded from the hand area of the primary motor cortex of non-human primates, for open-loop decoding tasks with two individual finger groups, in a streaming fashion that mimics a real-time decoding scenario.
\item We perform a computational complexity analysis of the SNN and compare it with both the state-of-the-art ANN decoder and the ANN-converted SNN. The proposed SNN requires only $6.8\%$ of the computation operations and $9.4\%$ of the memory accesses of the ANN, indicating its potential for more energy-efficient hardware implementation.
\end{itemize}
\vspace{-5pt}
\section{Methods}
\vspace{-5pt}
\subsection{Neuron model} \label{sec:neuron_model}
We use an adjusted LIF neuron model for its simplicity in hardware implementation~\cite{izhikevich_which_2004}. As demonstrated in \cite{izhikevich_which_2004}, the LIF neuron model offers a good trade-off between cognitive capability and computational complexity, making it a suitable candidate for implementation on resource-constrained embedded platforms. The implemented neuron model is described in \eqref{eq:mem_upd}, \eqref{eq:inp_current}, and \eqref{eq:spike_fun}.
\vspace{-2pt}
\begin{equation}
\label{eq:mem_upd}
u^{l}(t) = \tau (u^{l}(t-1)-s^{l}(t-1)V_{th}) + I^{l}(t)
\vspace{-4pt}
\end{equation}
\begin{equation}
\label{eq:inp_current}
I^{l}(t) = \sum_{i=0}^{M-1} w_{i}^{l} s_{i}^{l-1}(t)
\vspace{-4pt}
\end{equation}
\begin{equation}
\label{eq:spike_fun}
s^{l}(t) = \begin{cases}
1 & u^{l}(t) \ge V_{th} \\
0 & u^{l}(t) < V_{th}
\end{cases}
\vspace{-4pt}
\end{equation}
In \eqref{eq:mem_upd}, $u^{l}$ represents the membrane potential in the $l$th layer and $I^{l}$ represents the input current expressed by \eqref{eq:inp_current}. $V_{th}$ denotes the threshold voltage of the neuron and $M$ is the number of input connections of the neuron. $s$ is the spike status defined by \eqref{eq:spike_fun}; $s$ equals one when the neuron fires. $\tau$ is the decay factor of the leaky neuron. The LIF neuron integrates the input current while its membrane potential leaks at a rate set by $\tau$. When the membrane potential exceeds the threshold voltage, the neuron fires and the membrane potential is reduced by $V_{th}$. Inspired by \cite{tan_improved_2021}, we apply the reset-by-subtract scheme instead of the reset-to-zero scheme to avoid the information loss caused by over-adjustment after spiking.
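Concretely, one update step of a layer of LIF neurons with the reset-by-subtract scheme can be sketched in NumPy as follows (an illustrative sketch, not the actual training code; the default threshold of 0.4 matches the value used later in training):

```python
import numpy as np

def lif_step(u_prev, s_prev, spikes_in, w, tau, v_th=0.4):
    """One time step of a layer of LIF neurons with reset-by-subtract.

    u_prev    : (N,) membrane potentials at t-1
    s_prev    : (N,) spikes emitted at t-1 (1.0 or 0.0)
    spikes_in : (M,) binary spikes from the previous layer at t
    w         : (N, M) synaptic weights
    tau       : (N,) per-neuron decay factors in [0, 1]
    """
    i_t = w @ spikes_in                        # input current: additions only, inputs are 0/1
    u = tau * (u_prev - s_prev * v_th) + i_t   # leaky membrane update with reset-by-subtract
    s = (u >= v_th).astype(float)              # fire when the threshold is crossed
    return u, s
```

Because the reset subtracts $V_{th}$ rather than zeroing the potential, any excess charge above threshold is carried over to the next time step.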
\begin{figure}[t]
\centering\includegraphics[width=\linewidth]{figures/network_single_col_font_8.pdf}
\caption{Proposed SNN unfolded for training (a) and in inference (b)}
\label{fig:network_arch}
\vspace{-15pt}
\end{figure}
\subsection{Network architecture}
We use the spiking band power (SBP) proposed in \cite{nason_low-power_2020} as the input to the network. SBP is a neural feature for motor prediction, defined as the average of the absolute value of the \SIrange{300}{1000}{\hertz} band-pass-filtered signal. The proposed network handles SBP features averaged in time frames from 96 channels.
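For illustration, the SBP feature of a single channel could be computed along the following lines (a SciPy sketch under our own assumptions, namely a 4th-order Butterworth filter and non-overlapping frames; the exact filtering pipeline of the recordings is not specified here):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spiking_band_power(raw, fs, frame_len_s=0.05, band=(300.0, 1000.0)):
    """Average of the absolute band-pass-filtered signal per time frame.

    raw : (n_samples,) single-channel recording
    fs  : sampling rate in Hz (must exceed twice the band's upper edge)
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw)             # zero-phase band-pass filtering
    frame = int(frame_len_s * fs)
    n_frames = len(filtered) // frame
    trimmed = np.abs(filtered[: n_frames * frame])
    return trimmed.reshape(n_frames, frame).mean(axis=1)   # one SBP value per frame
```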
Our proposed SNN architecture is depicted in Fig.~\ref{fig:network_arch}. It consists of four fully connected layers inspired by \cite{willsey_real-time_2021}. Unlike \cite{willsey_real-time_2021}, where the authors use a convolutional layer as the input layer to capture the temporal information from multiple time frames, our network directly receives the features from a single time frame as input.
This choice has been made to exploit an intrinsic property of SNNs, i.e., the membrane potential of each neuron acts as a memory element, and stores temporal information from previous inputs.
The output layer uses two non-spiking neurons that follow the membrane equation $u(t) = \tau u(t-1) + I(t) $. The membrane voltages $u(t)$ are then taken directly as the continuous-valued outputs for the prediction of two finger velocities for every input.
Except for the first layer of the network whose input current is the weighted sum of the SBP features, in all hidden layers, the total input current $I$ of each LIF neuron is obtained as the weighted sum of the input spikes whose values are either 1 or 0, as described in \eqref{eq:inp_current} and \eqref{eq:spike_fun}. Therefore, one of the advantages of using an SNN instead of a conventional ANN is that the input current $I$ can be calculated as additions of weights instead of Multiply-Accumulate (MAC) operations thanks to the 1-bit binary input spikes. Except for the last output layer, all hidden neurons fire only when the membrane potentials exceed the threshold, introducing a high degree of sparsity in the intermediate features. The reduced computational complexity as well as the high degree of sparsity are the two main sources of efficiency of the proposed SNN compared to a conventional ANN.
Batch normalization is applied between successive layers to improve convergence. To handle the extra temporal dimension of the SNN and to avoid both gradient vanishing and gradient explosion by balancing the firing rate, we use the special threshold-based batch normalization introduced in \cite{zheng_going_2020}.
Dropout is used for regularization during training; it is applied only along the spatial dimension, and a new dropout mask is randomly generated at each time step.
\vspace{-10pt}
\subsection{Training methods} \label{sec:train_method}
The spiking function of SNN neurons is not directly differentiable, which has long been an obstacle to direct backpropagation in SNNs. As suggested in \cite{shrestha_slayer_2018,wu_spatio-temporal_2018}, using a surrogate function to approximate the derivative of the spike activity allows the gradient to propagate back through the neurons. In this work, we use the square surrogate function defined by \eqref{eq:surrogate_fun}.
\vspace{-4pt}
\begin{equation}
\label{eq:surrogate_fun}
\frac{\partial s}{\partial u} = \begin{cases}
1 & \text{if } |u(t) - V_{th}| < 0.5 \\
0 & \text{otherwise}
\end{cases}
\vspace{-4pt}
\end{equation}
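The forward spike and its square surrogate derivative can be written out explicitly (a NumPy sketch; in the actual PyTorch training this pair would be wrapped in a custom autograd function so that the surrogate replaces the true derivative in the backward pass):

```python
import numpy as np

def spike_forward(u, v_th=0.4):
    """Hard threshold used in the forward pass: 1 if u >= v_th, else 0."""
    return (u >= v_th).astype(float)

def spike_surrogate_grad(u, v_th=0.4):
    """Square surrogate of the spike derivative: a unit window around the threshold."""
    return (np.abs(u - v_th) < 0.5).astype(float)
```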
The training process is based on the publicly available PyTorch implementation of the STBP training method introduced in \cite{wu_spatio-temporal_2018}. Support for the reset-by-subtract scheme, trainable decay factors, and non-spiking output layers was added to the training framework.
Instead of defining the decay factor $\tau$ as a hyperparameter shared by all neurons, we make it a trainable parameter of each neuron and optimize it at training time; the value of $\tau$ is clamped between 0 and 1 during the forward pass. With different decay factors, each neuron behaves independently when receiving input spikes, showing a different sensitivity to events received in the past; this feature increases the expressiveness of the SNN~\cite{fang_incorporating_nodate,yin_accurate_2021}.
Fig.~\ref{fig:network_arch}a shows the unfolded network during the training process. STBP allows backpropagation through both the temporal and the spatial dimension. The network is unfolded for ten time frames during training; we experimentally observed that using more than ten frames does not improve the accuracy of the network, so ten frames were used as the upper bound of our exploration. Additionally, we discard the first two frames in the loss calculation, because the network has not yet converged to a stable prediction. The loss is then backpropagated through the spatial and temporal dimensions of this unfolded network. Training samples are generated with a sliding window of 10 time frames and an overlap of 9 frames; the resulting 10-frame samples are shuffled for training.
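The sliding-window sample generation can be sketched as follows (array shapes are illustrative, with 96 channels and two velocity targets as in this work):

```python
import numpy as np

def make_training_windows(features, targets, win=10, step=1):
    """Cut streaming data into overlapping windows of `win` time frames.

    features : (T, n_channels) SBP features, one row per time frame
    targets  : (T, 2) finger velocities
    Returns (n_windows, win, n_channels) and (n_windows, win, 2) arrays;
    with step=1, consecutive windows overlap by win-1 frames.
    """
    n = (len(features) - win) // step + 1
    xs = np.stack([features[i * step : i * step + win] for i in range(n)])
    ys = np.stack([targets[i * step : i * step + win] for i in range(n)])
    return xs, ys
```

During training, the loss would then be computed only on the last eight frames of each 10-frame window.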
During the inference process, the network is not unfolded, and all neurons maintain and update their internal state through the whole process. One time frame is used once per inference. This operating scenario mimics the situation where a real-time prediction task runs on data that are streamed continuously to the network.
We applied the AdamW optimizer \cite{loshchilov_decoupled_2018} with a learning rate of $2\times10^{-3}$ and a weight decay of $1\times10^{-2}$. The batch size used during training is 128. The membrane threshold $V_{th}$ is set to $0.4$. These hyperparameters were determined by grid search.
Similarly to \cite{willsey_real-time_2021}, we use the time-integrated mean square error as the loss function during training, while the Pearson correlation coefficient is used to evaluate the performance of the SNN, as it is commonly used for comparing neural decoding algorithms \cite{nason_low-power_2020,willsey_real-time_2021}.
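For reference, the Pearson correlation coefficient between a predicted and a true velocity trace can be computed as follows (a minimal NumPy version; per-finger coefficients are typically averaged):

```python
import numpy as np

def pearson_r(pred, true):
    """Pearson correlation coefficient between two 1-D velocity traces."""
    p = pred - pred.mean()
    t = true - true.mean()
    return float((p * t).sum() / np.sqrt((p * p).sum() * (t * t).sum()))
```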
\vspace{-2pt}
\section{Results and discussion}
\vspace{-5pt}
\subsection{Dataset}
\vspace{-3pt}
We evaluated the proposed SNN on two datasets recorded from non-human primates performing two-degree-of-freedom finger tasks. The datasets contain the positions and velocities of two fingers, together with the SBP features from 96 channels.
Dataset A, also used in~\cite{willsey_real-time_2021}, contains $\SI{817}{s}$ of data. Dataset B is the open-source dataset also used in~\cite{nason_real-time_2020}, containing $\SI{610}{s}$ of data.
The first $80\%$ of the data are used for training and the remaining $20\%$ for validation. The SNN is trained for 24 epochs on Dataset A and 23 epochs on Dataset B. The velocities and SBP features are averaged within time frames before processing; the frame sizes are $\SI{50}{ms}$ and $\SI{32}{ms}$, as introduced in \cite{willsey_real-time_2021,nason_real-time_2020}. The averaged SBP features are standardized, by removing the mean and scaling to unit standard deviation, before being fed into the SNN. The predicted velocities are likewise standardized using training-set statistics and mapped back to the original scale at inference. An inference is performed for every time frame to generate the two finger velocities in a streaming fashion.
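The standardization and its inversion can be sketched as follows (an illustrative helper; the class name is ours, not part of the released code):

```python
import numpy as np

class Standardizer:
    """z-score data with training-set statistics; invert for predictions."""

    def fit(self, x):
        # statistics are computed on the training split only
        self.mean, self.std = x.mean(axis=0), x.std(axis=0)
        return self

    def transform(self, x):
        return (x - self.mean) / self.std

    def inverse_transform(self, z):
        # recover standardized predictions to the original scale
        return z * self.std + self.mean
```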
\input{figures/fig_abl_study_tau_distr}
\subsection{Performance comparison}
The improvements achieved by the proposed trainable decay factor and reset-by-subtract scheme are quantified by an ablation study performed on Dataset A; the results are depicted in Fig.~\ref{fig:abl_study}.
The trainable decay factor and the reset-by-subtract scheme jointly improve the correlation coefficient to over 0.74. Fig.~\ref{fig:tau_distr} shows the distribution of the decay factors in each layer before and after training.
The trained decay factors cover a wide range of values, allowing diverse neuron dynamics, which may improve the expressiveness of the network, as suggested by~\cite{fang_incorporating_nodate}.
The results of our SNN experiments are summarized in Table \ref{tab:corr}. For comparison, we reproduced the KF predictor \cite{nason_real-time_2020} and the NN predictor \cite{willsey_real-time_2021} using the parameters reported in the original papers. We also compare with the SNN converted from the ANN using the SNN toolbox~\cite{rueckauer_conversion_2017}. The proposed network reaches a better correlation coefficient than the linear KF and achieves the same level of performance as the state-of-the-art ANN decoder.
The velocities predicted by the proposed SNN and the ANN are plotted in Fig.~\ref{fig:vel_curve} together with the true velocity of one of the fingers, showing a good match.
\noindent\begin{table}[b]
\begin{minipage}[t]{.43\columnwidth}
\caption{Performance \\ comparison.}
\label{tab:corr}
\vspace{-6pt}
\setlength{\tabcolsep}{2.6pt}
\renewcommand{\arraystretch}{1}
\small
\begin{threeparttable}
\begin{tabular}{@{}llr@{}}
\toprule
Dataset & A & B \\ \midrule
KF \cite{nason_real-time_2020} & 0.601 & 0.459 \\
ANN \cite{willsey_real-time_2021} & 0.724$^\dagger$ & 0.582 \\
Converted & \multirow{2}{*}{0.740} & \multirow{2}{*}{0.590} \\
SNN & & \\
\textbf{Proposed} & \multirow{2}{*}{\textbf{0.745}} & \multirow{2}{*}{\textbf{0.582}} \\
\textbf{SNN} & & \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item $^\dagger$ Reproduced.
\end{tablenotes}
\end{threeparttable}
\end{minipage}
\hfill
\begin{minipage}[t]{.56\columnwidth}
\caption{Operation \\ comparison.}
\label{tab:comp}
\vspace{-6pt}
\setlength{\tabcolsep}{2.8pt}
\renewcommand{\arraystretch}{1.2}
\small
\begin{threeparttable}
\begin{tabular}{@{}lrrr@{}}
\toprule
\multirow{2}{*}{Operation} & ANN &Convert. & \textbf{Prop.} \\
& \cite{willsey_real-time_2021} & SNN & \textbf{SNN}\\
\midrule
MAC & 529K & 5K & \textbf{25K} \\
ADD & - & 865K & \textbf{33K} \\
\hline
Total ops $^\dagger$ & 529K & 293K & \textbf{36K} \\
\hline
Mem access & 2116K & 2615K & \textbf{199K} \\ \bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item $^\dagger$ Total MAC operations. 3 ADDs count as 1 MAC~\cite{horowitz_11_2014}.
\end{tablenotes}
\end{threeparttable}
\end{minipage}
\end{table}
\input{figures/fig_vel_curve}
\input{figures/fig_spike_rate}
\vspace{-10pt}
\subsection{Computation complexity analysis}
SNNs are appealing for their potential to enable efficient hardware implementations \cite{10.3389/fnins.2018.00774}. One primary source of efficiency is sparsity. We evaluated the spike rate of the neurons in each layer and the average spike count per layer to quantify the sparsity of our network. The results are presented in Fig.~\ref{fig:spike_rate}.
Most of the neurons in all layers have a spiking probability well below $50\%$.
The average spike rates are $26\%$, $24\%$, and $9\%$ for the three spiking layers, respectively. In each inference, the 770 neurons of the whole network generate 150 spikes on average.
Due to the rate approximation, the SNN converted from the ANN requires multiple simulation steps on the same input (19 steps per prediction in this case) to achieve sufficient accuracy. In total, around 3000 spikes are used per inference, as shown in Fig.~\ref{fig:ann_2_snn}. The ANN-converted SNN thus uses 20 times more spikes than the proposed one, implying longer latency and higher energy consumption in a hardware implementation.
\input{figures/fig_ann_converted_snn}
In conventional ANNs, the weighted-sum computation for each input to a neuron requires a MAC operation, whereas in the proposed SNN, since the spike status is either 1 or 0, this is replaced by addition operations, which require much less power~\cite{horowitz_11_2014}. Here, for the comparison of the total number of operations, we conservatively count three additions as one MAC operation, following the ratio of the energy costs of floating-point operations reported in~\cite{horowitz_11_2014}. For a fair comparison between the proposed SNN and the ANN, it is important to remark that the SNN needs to update its membrane potentials as described in \eqref{eq:mem_upd} once per inference, which amounts to one additional MAC operation per neuron. In this comparison, three memory loads and one store are assumed for each MAC, while two loads and one store are assumed for each addition.
The average spike rates in Fig.~\ref{fig:spike_rate} are used for the analysis. Additions and the corresponding memory accesses are not executed when there is no spike. The results are summarized in Table~\ref{tab:comp}.
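This accounting can be reproduced with a short script. The hidden-layer sizes below are our assumption (three hidden layers of 256 neurons plus 2 output neurons, consistent with the stated total of 770 neurons), so the numbers are indicative only; under these assumptions the estimate lands close to the 25K MAC, 33K ADD, and 36K total figures reported in Table~\ref{tab:comp}:

```python
def snn_op_estimate(layers, spike_rates):
    """Estimate per-inference operation counts for a fully connected SNN.

    layers      : neuron counts per layer, e.g. [96, 256, 256, 256, 2]
    spike_rates : average firing probability of each spiking hidden layer
    """
    # First layer: real-valued SBP inputs require true MACs;
    # every neuron also performs one membrane-update MAC per inference.
    macs = layers[0] * layers[1] + sum(layers[1:])
    # Hidden connections: an addition is executed only when an input spike arrives.
    adds = sum(rate * layers[i + 1] * layers[i + 2]
               for i, rate in enumerate(spike_rates))
    total = macs + adds / 3.0   # 3 ADDs counted as 1 MAC
    return macs, adds, total
```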
Thanks to the enhanced STBP training and the high level of sparsity, the proposed SNN requires only $6.8\%$ of the operations and $9.4\%$ of the memory accesses of the ANN while achieving the same level of accuracy.
\vspace{-3pt}
\section{Conclusion}
\vspace{-3pt}
In this paper, we present a spiking neural network and its training method for decoding continuous-valued finger velocities in implantable BMI applications. The proposed network is trained with STBP backpropagation enhanced by trainable decay factors and the reset-by-subtract technique, improving accuracy while keeping the computational complexity low.
We compared the performance of the proposed SNN, a Kalman filter, an ANN, and an SNN converted from the ANN on two datasets for open-loop finger decoding tasks. The proposed SNN achieves the same level of correlation coefficient as the state-of-the-art decoders, while requiring a significantly lower spike count than the SNN converted from the ANN and only $6.8\%$ of the operations and $9.4\%$ of the memory accesses of the ANN decoder, indicating its potential for energy-efficient hardware implementation.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction}
The {\it oriented diffeomorphism group} of an ordered link $L = \{L_1, \ldots , L_n\} \subset S^3$ consists of all orientation preserving diffeomorphisms of $S^3$ that preserve the link setwise. We denote this group $ { \sf {Diff} }(L)$. The action of $ { \sf {Diff} }(L)$ on the components of $L$ defines a homomorphism from $ { \sf {Diff} }(L)$ to the symmetric group ${\mathbb S}_n$; its image is denoted ${\mathbb S}(L)$. A basic question asks whether every subgroup $H \subset {\mathbb S}_n$ arises as ${\mathbb S}(L)$ for some $n$--component link. We provide obstructions. Our examples of groups that do not arise are the alternating groups, ${\mathbb A}_n$, for $n\ge 6$. \smallskip
\noindent{\bf Theorem.} {\it If $n \ge 6$, then there does not exist an ordered $n$--component link $L$ that satisfies ${\mathbb S}(L) = {\mathbb A}_n$.}
\smallskip
The study of symmetries of links is usually placed in the context of an extension of the symmetric group called the {\it Whitten group}:
\[
\Gamma_n = {\mathbb Z}_2 \oplus \big( ({\mathbb Z}_2)^n \rtimes {\mathbb S}_n \big).
\]
In the semidirect product, ${\mathbb S}_n$ acts on $({\mathbb Z}_2)^n$ by permuting the coordinates. As we will describe in the next section, the ${\mathbb Z}_2$ factors keep track of the orientations of $S^3$ and the components of $L$. The question of which subgroups of $\Gamma_n$ arise from links was first considered by Fox and Whitten in the mid-1960s, first appearing in print in 1969~\cite{MR242146}. The theorem stated above provides the first examples of groups that cannot arise.
\smallskip
\noindent{\bf Summary of proof.} The basic idea of our approach is as follows. For a given link $L$ there is a Jaco--Shalen--Johannson (JSJ) decomposition of the complement of $L$ into hyperbolic and Seifert fibered components $\{C_i\}$. This decomposition is unique up to isotopy. We first observe that if ${\mathbb S}(L)$ does not contain an index two subgroup, then one of the $C_i$ (say $C_1$) is invariant under the action of $ { \sf {Diff} }(L)$ up to isotopy.
If $C_1$ is hyperbolic, we can replace the action of $ { \sf {Diff} }(L)$ restricted to $C_1$ with a finite group of isometries of $C_1$. We then use a reembedding of $C_1$ into $S^3$ (as first described by Budney in~\cite{MR2300613}) to extend that action to $S^3$. It follows from results such as~\cite{MR2178962} that the action on $S^3$ is conjugate to a linear action. We then find that ${\mathbb S}(L)$ is a quotient of a finite subgroup of $\textrm{SO}(4)$. Finally, a group theoretic analysis reduces the problem to the simpler one of considering quotients of finite subgroups of $\textrm{SO}(3)$, which are enumerated.
In contrast to the hyperbolic case, if $C_1$ is Seifert fibered, then the diffeomorphism group of $C_1$ itself is large, sufficiently so that we can construct enough symmetries of $L$ to show that ${\mathbb S}(L) = {\mathbb S}_n$.
\smallskip
\noindent{\bf Outline.} Section~\ref{sec:generalities} describes the general theory of intrinsic symmetry groups of oriented links, as first considered by Fox and Whitten~\cite{MR242146}. Sections~\ref{sec:knots} and~\ref{sec:links} describe the classical case of knots, $n=1$, and results for the case of $n=2$.
Section~\ref{sec:fully} presents prime, non-split links, with full symmetry group for all $n$.
In Section~\ref{sec:jsw-trees} we describe JSJ-decompositions, the associated tree diagrams, and prove that in the case of ${\mathbb S}(L) = {\mathbb A}_n$, some component of the decomposition is fixed (up to isotopy) by the action of the diffeomorphism group.
Section~\ref{sec:reembed} explains how that distinguished component can be reembedded into $S^3$ as the complement of a link. The reembedding is used in Section~\ref{sec:hyperbolic} to show that if the fixed component is hyperbolic, then ${\mathbb S}(L)$ is a subgroup of a quotient of a finite subgroup of $SO(4)$. Finally, in Section~\ref{sec:seifertfibered} we present the Seifert fibered case. In the concluding Section~\ref{sec:questions}, we present a few questions and include an example of a 4--component link $L$ with ${\mathbb S}(L) = {\mathbb A}_4$.
\smallskip
\noindent{\bf Notational comment.} We are calling the groups studied here the {\it intrinsic symmetry groups} of links. The {\it symmetry group} of a link consists of the group of diffeomorphisms of $S^3$ that leave the link invariant, modulo isotopy. Even for knots, these symmetry groups include, for instance, all dihedral groups.
\smallskip
\noindent{\it Acknowledgments} I have benefited from comments from Ryan Budney, whose work provides a backdrop for our approach. Nathan Dunfield provided an example of a four-component link $L$ with ${\mathbb S}(L) = {\mathbb A}_4$. I was also helped by discussions with Jim Davis, Allan Edmonds, Charlie Frohman, Michael Larsen, Swatee Naik and Dylan Thurston.
\section{The general setting of oriented links}\label{sec:generalities}
We now describe the general theory of {\it intrinsic symmetry groups} of links. This theory was initially developed by Fox and was first presented by Whitten in~\cite{MR242146}. To be precise, we will momentarily consider links in 3--manifolds that are diffeomorphic to $S^3$, rather than work specifically with $S^3$. In this setting we have the following definition: an {\it $n$--component link}\ is an ordered $(n+1)$--tuple of oriented manifolds, $L = (S, L_1, L_2, \ldots , L_n)$, where $S$ is diffeomorphic to $S^3$ and the $L_i$ are disjoint submanifolds of $S$, each diffeomorphic to $S^1$. The set of $n$--component links will be denoted $\mathcal{L}_n$.
Given a second link $L' = (S', L_1', L_2', \ldots , L_n')$, an {\it orientation preserving diffeomorphism} from $L$ to $L'$ is an orientation preserving diffeomorphism $F \colon\thinspace S \to S'$ such that $F(L_i) = L'_i$ as oriented manifolds for all $i$.
For any oriented manifold $M$, $-M$ denotes its orientation reverse. Let ${\mathbb Z}_2$ be the cyclic group of order two written multiplicatively: ${\mathbb Z}_2 = \{1, -1\}$. If $\epsilon = -1 \in {\mathbb Z}_2$, we will let $\epsilon M = -M$, and if $\epsilon = 1 \in {\mathbb Z}_2$, we will let $\epsilon M = M$.
The group ${\mathbb Z}_2 \oplus ({\mathbb Z}_2)^n$ acts on $\mathcal{L}_n$ by changing the orientations of the factors. The symmetric group ${\mathbb S}_n$ acts on $\mathcal{L}_n$ by permuting the component knots. These actions do not commute, but together define an action on the set of knots by the {\it Whitten group}:
\[
\Gamma_n = {\mathbb Z}_2 \oplus \big( ({\mathbb Z}_2)^n \rtimes {\mathbb S}_n \big).
\]
In this semi-direct product, ${\mathbb S}_n$ acts on the $n$--fold product by permuting the coordinates.
To be precise, given an element $s = \big(\eta, (\epsilon_1, \ldots , \epsilon_n), \rho \big) \in \Gamma_n$ and an $n$--component link $L$, we let
\[ sL =( \eta S, \epsilon_1 L_{\rho(1)}, \cdots , \epsilon_n L_{\rho(n)}) .\] Notice that these group actions are defined to be on the left. Thus, elements in ${\mathbb S}_n$ are multiplied right-to-left.
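For concreteness, $\Gamma_n$ has order $2 \cdot 2^n \cdot n!$; in particular $|\Gamma_2| = 16$. As a sample computation, let $n = 2$, let $\tau \in {\mathbb S}_2$ denote the transposition, and let $s = \big(-1, (1,-1), \tau \big)$. Then
\[ sL = (-S, L_2, -L_1),\]
so $s$ reverses the ambient orientation, interchanges the two components, and reverses the orientation of the component placed second.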
\begin{definition} For a link $L \in \mathcal{L}_n$, the {\it intrinsic symmetry group} of $L$ is the subgroup $\Sigma(L) = \{ s \in \Gamma_n\ \big| \ sL \cong L\} \subset \Gamma_n$. Note that ``$\cong$'' indicates the existence of an orientation and order preserving diffeomorphism.
\end{definition}
There are two fundamental questions regarding such link symmetries. \vskip.05in
\noindent{\bf Problem 1.} Given an $n$--component link $L$, determine $\Sigma(L)$.\vskip.05in
\noindent{\bf Problem 2.} For each subgroup $H \subset \Gamma_n$, does there exist an $n$--component link $L$ such that $\Sigma(L) = H$?
\smallskip
The first can be effectively answered for low-crossing number links with programs such as Snappy~\cite{SnapPy}. The second is the focus of this paper; we present the first examples of groups that cannot arise as the symmetry group of a link.
\subsection{Restricting to the oriented category and basic observations}
There is a canonical index two subgroup $\overline{\Gamma}_n \subset {\Gamma}_n$ consisting of elements of the form
\[ (1, (\epsilon_1, \ldots, \epsilon_n), \rho).\] This subgroup maps onto ${\mathbb S}_n$. We leave it to the reader to verify the following, which implies that any constraint on what groups occur as ${\mathbb S}(L)$ places a constraint on what groups can arise as $\Sigma(L)$.
\begin{theorem} The image of $\Sigma(L) \cap \overline{\Gamma}_n$ in ${\mathbb S}_n$ is precisely ${\mathbb S}(L)$. \end{theorem}
After the initial sections of this paper, we will be restricting our work to orientation preserving diffeomorphisms of $S^3$ and will work with unoriented links. We will use the following conventions which were summarized in the introduction.
\begin{enumerate}
\item Links will all be of the form $L = (S^3, L_1, L_2 , \ldots ,L_n)$ where $S^3$ has some fixed orientation and the $L_i$ are disjoint unoriented submanifolds, each diffeomorphic to $S^1$.
\item We will consider diffeomorphisms of the link that are orientation preserving on $S^3$ and that possibly permute the set of $L_i$.
\item The set of such diffeomorphisms will be denoted $ { \sf {Diff} }(L)$.
\item Given $F \in { \sf {Diff} }(L)$ we have $(S^3, F(L_1), F(L_2), \ldots , F(L_n)) = (S^3, L_{\rho(1)} , L_{\rho(2)} \ldots , L_{\rho(n)})$ for some $\rho \in {\mathbb S}_n$. This defines a homomorphism $\Phi \colon\thinspace { \sf {Diff} }(L) \to {\mathbb S}_n$.
\item The image $\Phi$ in ${\mathbb S}_n$ is denoted ${\mathbb S}(L)$.
\end{enumerate}
\section{Examples: knots}\label{sec:knots}
Before restricting to the orientation preserving diffeomorphism group, in this section and the next we will summarize what is known in general for links of one and of two components. Then, in Section~\ref{sec:fully}, we show that for all $n$ there is a prime, non-splittable link with
$\Sigma(L) = \Gamma_n$.
Let $n=1$. The symmetric group ${\mathbb S}_1$ is trivial and thus the first Whitten group is $\Gamma_1 \cong {\mathbb Z}_2 \oplus {\mathbb Z}_2$. The knots $(1,-1)K, (-1,1)K,$ and $(-1,-1)K$ have been called the {\it reverse}, $K^r$, the {\it mirror image}, $m(K)$, and the {\it reversed mirror image}, $m(K)^r$, respectively. (Older references have called the reverse of $K$ the {\it inverse}. The name ``reverse'' is used to distinguish it from the concordance inverse, which is represented by the reversed mirror image.) Figure~\ref{fig:trefoils} illustrates the possibilities. A detailed account of the key results in the study of knot symmetries is contained in~\cite{MR867798}. Here is a brief summary.
The group $\Gamma_1$ has five subgroups: the entire group, the trivial subgroup, and the three subgroups containing exactly one of the nontrivial elements of $\Gamma_1$. Each is realized as $\Sigma(K)$ for some knot $K$.
\begin{itemize}
\item The unknot and the figure eight knot, $4_1$, have full symmetry group. Such knots are called {\it fully amphicheiral}.
\item The trefoil knot is reversible. Dehn showed that it does not equal its mirror image, a fact that can now be proved using such invariants as the signature or the Jones polynomial. Thus, $3_1$ is {\it reversible}.
\item Trotter~\cite{MR0158395} proved the existence of non-reversible knots. His examples in~\cite{MR0158395} have nonzero signature and thus have trivial symmetry group. We say that such knots are {\it chiral}. Hartley~\cite{MR683753} proved that $9_{32}$ is non-reversible and since it has non-zero signature, it too is chiral.
\item Kawauchi~\cite{MR559040} proved that $K = 8_{17}$ is non-reversible. It is easily seen that $K = m(K)^r$, and thus $8_{17}$ is {\it negative amphicheiral}.
\item The simplest example of a low-crossing-number knot that is non-reversible and for which $K = m(K)$ is $12a_{147}$, which was detected with the program SnapPy. (Presumably the general techniques developed by Hartley in~\cite{MR683753} would also show that this knot is not reversible.) More complicated examples of such {\it positive amphicheiral} knots were first discovered by Trotter.
\end{itemize}
\begin{figure}[h]
\labellist
\pinlabel {\text{{$K$}}} at 100 -35
\pinlabel {\text{{$(1,-1)K = K^r$}}} at 500 -35
\pinlabel {\text{{$(-1,1)K = m(K)$}}} at 800 -35
\pinlabel {\text{{$(-1,-1)K = m(K)^r$}}} at 1150 -35
\endlabellist
\includegraphics[scale=.3]{trefoil2} \hskip.6in \includegraphics[scale=.3]{trefoil4} \hskip.6in
\includegraphics[scale=.3]{trefoil1} \hskip.6in \includegraphics[scale=.3]{trefoil3}
\vskip.15in
\caption{Symmetries of knots.}
\label{fig:trefoils}
\end{figure}
\section{Two-component links}\label{sec:links}
Here we summarize the results of~\cite{MR2909632, MR2909631} concerning two-component links. We have that $\Gamma_2 = {\mathbb Z}_2 \oplus \big( ({\mathbb Z}_2)^2 \rtimes {\mathbb S}_2 \big)$ is of order 16. In~\cite{MR2909632, MR2909631} the authors describe the 27 conjugacy classes of subgroups of $\Gamma_2$. They then show that tables of prime, non-splittable links provide examples of links realizing 21 of these subgroups. One of the missing subgroups is $\Gamma_2$ itself. This is clearly the symmetry group of the unlink; in a note on MathOverflow~\cite{rbudney}, Budney showed that $\Gamma_2$ is the symmetry group of a non-splittable Brunnian link. We will expand on that example in the next section.
To conclude this section, we list the subgroups that are currently not known to be the symmetry groups of two-component links. Let $\tau$ denote the transposition in ${\mathbb S}_2$.
\begin{itemize}
\item $\left< \ (1, (-1,1)\ \tau ) \right> \cong {\mathbb Z}_4$. \vskip.05in
\item $\left< \ (1, (-1,1))\ ,\ (-1, (1,1)) \ \right> \cong {\mathbb Z}_2 \oplus {\mathbb Z}_2$. \vskip.05in
\item $\left< \ (1, (1,-1))\ ,\ (-1, (-1,1)) \ \right> \cong {\mathbb Z}_2 \oplus {\mathbb Z}_2$. \vskip.05in
\item $\left< \ (-1, (-1,1)) \ ,\ (1, (-1,1)\tau) \ \right> \cong D_4$, the dihedral group of order 8. \vskip.05in
\item $\left< \ (1, (1,-1)) \ ,\ (1, (-1,1)) \ ,\ (-1, (1,1)) \ \right> \cong {\mathbb Z}_2 \oplus {\mathbb Z}_2 \oplus {\mathbb Z}_2$.\vskip.05in
\end{itemize}
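As a quick computational check, not part of the argument, one can realize $\Gamma_2 = {\mathbb Z}_2 \oplus \big( ({\mathbb Z}_2)^2 \rtimes {\mathbb S}_2 \big)$ concretely and verify the orders and isomorphism types of the subgroups listed above. In the sketch below the tuple encoding, and the convention that the transposition $\tau$ acts on $({\mathbb Z}_2)^2$ by swapping coordinates, are our modeling assumptions.

```python
from itertools import product

# Model Gamma_2 = Z_2 + ((Z_2)^2 semidirect S_2) as tuples (e, (a, b), s):
# e, a, b in {1, -1}; s in {0, 1}, with s = 1 the transposition tau,
# which we assume acts on (Z_2)^2 by swapping the two coordinates.
IDENTITY = (1, (1, 1), 0)

def mul(g, h):
    e1, (a1, b1), s1 = g
    e2, (a2, b2), s2 = h
    if s1:                      # tau twists the second factor before combining
        a2, b2 = b2, a2
    return (e1 * e2, (a1 * a2, b1 * b2), s1 ^ s2)

def span(gens):
    """Subgroup generated by `gens`, computed by naive closure."""
    elems = {IDENTITY}
    changed = True
    while changed:
        changed = False
        for g in list(elems):
            for h in gens:
                p = mul(g, h)
                if p not in elems:
                    elems.add(p)
                    changed = True
    return elems

gamma2 = span([(e, (a, b), s)
               for e, a, b, s in product((1, -1), (1, -1), (1, -1), (0, 1))])

z4 = span([(1, (-1, 1), 1)])                                # first bullet
klein = span([(1, (-1, 1), 0), (-1, (1, 1), 0)])            # second bullet
d4 = span([(-1, (-1, 1), 0), (1, (-1, 1), 1)])              # fourth bullet
z2cubed = span([(1, (1, -1), 0), (1, (-1, 1), 0), (-1, (1, 1), 0)])  # fifth
```

Here $|\Gamma_2| = 16$; the four generated subgroups have orders $4$, $4$, $8$, $8$, with the first containing an element of order four, the second and fourth consisting of involutions, and the third nonabelian, consistent with ${\mathbb Z}_4$, ${\mathbb Z}_2 \oplus {\mathbb Z}_2$, a dihedral group of order 8, and $({\mathbb Z}_2)^3$.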
\section{Fully amphicheiral links for all $n$} \label{sec:fully} In Figure~\ref{fig:companion} we illustrate a knot $K$ in a solid torus $D$. Two parallel strands of $K$ are tied in a knot $J$, where $J$ is chosen to be fully amphicheiral; the figure eight knot would be sufficient. As oriented pairs, we have $(D,K) \cong (-D,K) \cong (D, -K) \cong (-D, -K)$.
\begin{figure}[h]
\labellist
\pinlabel {\text{\Large{$ J$}}} at 180 58
\endlabellist
\includegraphics[scale=.32]{companion}
\vskip.15in
\caption{Companion.}
\label{fig:companion}
\end{figure}
Budney's example~\cite{rbudney} of a two-component link $L$ with full symmetry group $\Sigma(L) = \Gamma_2$ is formed from the Hopf link by replacing neighborhoods of each component with copies of $(D,K)$. An example of a three-component link with full symmetry group is built in the same way, starting with the Borromean link. Notice that in both these examples, the links are Brunnian.
Problem~\ref{prob:brunnian} in Section~\ref{sec:questions} asks: Does there exist a Brunnian link with four or more components with full symmetry group?
We conclude this section with an elementary observation.
\begin{theorem}\label{thm:fullsym} For every $n$ there exists a prime, non-splittable link $L$ for which $\Sigma(L) \cong \Gamma_n$.
\end{theorem}
\begin{proof}
To form an $n$--component link with full symmetry group, proceed as follows. Starting with the knot $J$, form a link by replacing $J$ with $n$ parallel copies of $J$; formally, form the $(n,0)$--companion of $J$. Next, replace a neighborhood of each component of that link with a copy of $(D,K)$. Innermost circle arguments, dating to the work of Schubert~\cite{MR72482}, can be used to show that this link $L$ is prime and non-splittable.
\end{proof}
\section{Torus decompositions and tree diagrams} \label{sec:jsw-trees} A principal tool in understanding knot and link complements is the Jaco-Shalen-Johannson torus decomposition, which we refer to as the JSJ--decomposition. An excellent resource is~\cite{MR2300613}, which contains details for the results we summarize here.
Let $X$ be the complement of a non-splittable link $L$ in $S^3$. The JSJ-decomposition of $X$ is given by a finite family of disjoint incompressible embedded tori, $\{T_i\}$, with the property that each component of the complement of $\cup T_i$ either has a complete hyperbolic structure or is Seifert fibered. There is the additional condition that no $T_i$ is boundary parallel and that no two of the $T_i$ are parallel. Up to isotopy, there is a unique minimal set $\{T_i\}$ with these properties; this set provides the JSJ-decomposition.
We can associate a finite tree Tr($L$) to this decomposition, as follows. Let the components of $X \setminus \cup T_i$ be denoted $\{C_j\}$. The vertices of Tr($L$) correspond to the $C_j$. Two vertices are joined by an edge if the closures of the corresponding $C_j$ intersect; there is one edge for each $T_i$. When possible, we will use the names $C_j$ and $T_i$ to denote the vertices and edges. We will say that a component $C_j$ {\it contains a component} $L_k$ of $L$ if $L_k$ is in the closure of $C_j$.
\subsection{The subtrees $ {\text{Tr}}_L(K) $ and $\widehat{\text{Tr}}(L)$}
Let $K$ be a component of $L$. Its orbit under the action of $ { \sf {Diff} }(L)$ is a sublink of $L$, $ \{K_1, \ldots, K_m\}$, where $K_1 = K$. Each $K_i$ is contained in a vertex of $\text{Tr}(L)$. The set of such vertices is denoted $\{D_1, \ldots , D_k\}$. (Since the action of $ { \sf {Diff} }(L)$ on the set of $K_i$ is transitive, each $D_j$ contains the same number of the $K_i$. In particular, $k$ divides $m$.) Later we will expand on this observation.
The vertices $ \{D_1, \ldots , D_k\}$ in $\text{Tr}(L)$ span a unique minimal subtree, which we denote $ {\text{Tr}}_L(K)$. In the case that the action of $ { \sf {Diff} }(L)$ is transitive on $L$, the orbit of $K$ is all of $L$, and we write $\widehat{\text{Tr}}(L) = {\text{Tr}}_L(K)$. (Notice that $\widehat{\text{Tr}}(L)$ need not equal $\text{Tr}(L)$; for instance, vertices of $\text{Tr}(L)$ of valence one that do not contain components of $L$ are not included in $\widehat{\text{Tr}}(L)$.)
\begin{theorem} If $ { \sf {Diff} }(L)$ acts transitively on $L$, then the tree $\widehat{\text{Tr}}(L)$ either contains exactly one vertex, or its valence one vertices are precisely the set $ \{D_1, \ldots , D_k\}$.
\end{theorem}
\begin{proof}
It is an elementary observation that in the subtree of a tree spanned by the set of vertices $\{D_j\}$, the only vertices of valence one correspond to elements in the set $\{D_j\}$ and that if there is more than one $D_j$, then at least one of them is a vertex of valence one. We need to see that each $D_j$ has valence one.
Suppose that the vertex $D_1$ is of valence one in $\widehat{\text{Tr}}(L)$ and that it contains $L_1$. Let $D_2$ be another vertex, and suppose it contains $L_2$. There is an element $F \in { \sf {Diff} }(L)$ such that $F(L_1) = L_2$. The map $F$ is isotopic relative to $L$ to a diffeomorphism $F'$ that preserves the JSJ-decomposition. This $F'$ induces an automorphism of $\text{Tr}(L)$ that leaves $\widehat{\text{Tr}}(L)$ invariant. Thus, there is an automorphism of $\widehat{\text{Tr}}(L)$ that carries $D_1$ to $D_2$. It follows that $D_2$ is of valence one in $\widehat{\text{Tr}}(L)$.
\end{proof}
\subsection{The group $ { \sf {Diff} }^*(L)$.} Fix a JSJ-decomposition of $S^3 \setminus L$.
\begin{definition} We define $ { \sf {Diff} }^*(L) \subset { \sf {Diff} }(L)$ to be the subgroup consisting of elements that leave the JSJ-decomposition invariant.
\end{definition}
\begin{theorem}The image of $ { \sf {Diff} }^*(L) $ in ${\mathbb S}_n$ equals ${\mathbb S}(L)$.
\end{theorem}
\begin{proof} Given an element in ${\mathbb S}(L)$, there is a diffeomorphism $F\in { \sf {Diff} }(L)$ that maps to it. We have that $F$ is isotopic relative to $L$ to an element $F' \in { \sf {Diff} }^*(L)$. The map $F'$ induces the same permutation of the components of $L$ as does $F$.
\end{proof}
\begin{theorem} In the case that $ { \sf {Diff} }^*(L)$ acts transitively on the components of $L$, the action of $ { \sf {Diff} }^*(L)$ on $\widehat{\text{Tr}}(L)$ factors through an action of ${\mathbb S}(L)$ on $\widehat{\text{Tr}}(L)$.
\end{theorem}
\begin{proof} An automorphism of a tree is completely determined by its action on the valence one vertices of the tree. We leave this elementary observation to the reader.
\end{proof}
\subsection{The structure of $\widehat{\text{Tr}}(L)$ when ${\mathbb S}(L) = {\mathbb A}_n$ }
In Figure~\ref{fig:tree1} we provide an example of a labeled tree to serve as a model for the discussion that follows.
\begin{figure}[h]
\labellist
\pinlabel {\text{\Large{$D_1$}}} at 20 20
\pinlabel {\text{\Large{$D_2$}}} at 86 20
\pinlabel {\text{\Large{$D_3$}}} at 152 20
\pinlabel {\text{\Large{$D_4$}}} at 218 20
\pinlabel {\text{\Large{$D_5$}}} at 285 20
\pinlabel {\text{\Large{$D_6$}}} at 349 20
\pinlabel {\text{\Large{$C_1$}}} at 93 148
\pinlabel {\text{\Large{$C_2$}}} at 290 148
\pinlabel {\text{\Large{$C_0$}}} at 188 244
\endlabellist
\includegraphics[scale=.48]{tree1}
\vskip.15in
\caption{Tree diagram for sublink $K$ of $L$ on which $ { \sf {Diff} }(L)$ acts transitively.}
\label{fig:tree1}
\end{figure}
\begin{lemma} If ${\mathbb S}(L) = {\mathbb A}_n$ and $\widehat{\text{Tr}}(L)$ contains more than one vertex, then each $D_i$ contains exactly one component of $L$ and the number of vertices in the set $\{D_i\}$ is $n$.
\end{lemma}
\begin{proof} Suppose that $ D_1$ contains $L_1$ and $L_2$ and that $D_2$ contains $L_3$ and $ L_4$. Then the permutation $(1\, 2\, 3) \in {\mathbb A}_n$ cannot be induced by an automorphism of $\widehat{\text{Tr}}(L)$: such an automorphism would have to carry $D_1$ to itself (since $L_1 \mapsto L_2$) and also to $D_2$ (since $L_2 \mapsto L_3$).
\end{proof}
\begin{theorem}\label{thm:alt2} If ${\mathbb S}(L) = {\mathbb A}_n$, $n\ge 3$, then $\widehat{\text{Tr}}(L)$ is a rooted tree with either exactly one vertex, $C$, or with $n$ vertices of valence one. In the second case, there is a unique vertex with valence greater than 2; the tree $\widehat{\text{Tr}}(L)$ is built from that high valence vertex $C$ by attaching $n$ linear branches, all of the same length. The vertex $C$ is invariant under the action of ${\mathbb A}_n$ on $\widehat{\text{Tr}}(L)$.
\end{theorem}
\begin{proof}
Figure~\ref{fig:tree3} is a schematic of a tree. We are asserting that $\widehat{\text{Tr}}(L)$ is of this form.
\begin{figure}[h]
\labellist
\pinlabel {\text{\Large{$D_1$}}} at 20 20
\pinlabel {\text{\Large{$D_2$}}} at 86 20
\pinlabel {\text{\Large{$D_3$}}} at 152 20
\pinlabel {\text{\Large{$D_4$}}} at 218 20
\pinlabel {\text{\Large{$D_5$}}} at 285 20
\pinlabel {\text{\Large{$C$}}} at 153 242
\endlabellist
\includegraphics[scale=.48]{tree3}
\vskip.15in
\caption{Possible tree diagram $\widehat{\text{Tr} } (L)$ for a five component link $L$ for which ${\mathbb S}(L) = {\mathbb A}_5$.}
\label{fig:tree3}
\end{figure}
We have seen that each $D_i$ contains precisely one $L_i$ and these are the valence one vertices of $\widehat{\text{Tr} } (L)$.
A tree with more than two valence one vertices always contains some vertex with valence greater than 2. It remains to show that there is a unique such vertex of valence greater than two. (For an example of the sort of tree we need to rule out, build a tree from two copies of the graph illustrated in Figure~\ref{fig:tree3} by joining the roots with a single edge.)
An elementary exercise shows that for any tree on which ${\mathbb A}_n$ acts and for which the action of ${\mathbb A}_n$ on vertices of valence one is transitive, there is an invariant vertex or edge: proceed by induction, removing all valence one vertices and their adjacent edges from the tree.
We next observe that in the case that the symmetry group is ${\mathbb A}_n$, there must be an invariant vertex. The action of the symmetry group of the tree is transitive on its valence one vertices, so if there is an invariant edge, some elements must reverse that edge. It follows that the subgroup of the symmetry group that does not reverse the edge is index two. But ${\mathbb A}_n$ does not contain an index 2 subgroup for $n\ge 3$.
\end{proof}
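The final step of the proof, that ${\mathbb A}_n$ has no subgroup of index 2 for $n \ge 3$, can be confirmed computationally for small $n$: an index-2 subgroup would be normal and would contain every square $g^2$, so it suffices to check that the squares already generate ${\mathbb A}_n$. A sketch, assuming the sympy library is available:

```python
from sympy.combinatorics import AlternatingGroup, PermutationGroup

# An index-2 subgroup contains every square g**2; if the squares
# generate all of A_n, no index-2 subgroup can exist.
no_index_two = {}
for n in range(3, 7):
    A = AlternatingGroup(n)
    squares = PermutationGroup([g**2 for g in A.elements])
    no_index_two[n] = (squares.order() == A.order())
```

The same argument works in general: each 3--cycle $c$ satisfies $c = (c^2)^2$, and the 3--cycles generate ${\mathbb A}_n$.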
\subsection{The structure of the core $C$ in the case that ${\mathbb S}(L) = {\mathbb A}_n$.}
Suppose that ${\mathbb S}(L) = {\mathbb A}_n$. Then by Theorem~\ref{thm:alt2} there is a core $C$ in the JSJ--decomposition of $L$. This core is acted on by $ { \sf {Diff} }^*(L)$. The boundary of $C$ is the union of two sets of tori, $\{T_1, \ldots , T_n\} \cup \{S_1, \ldots , S_m\}$. Each $T_i$ bounds a submanifold $W_i \subset S^3$ that contains the link component $L_i$ and does not contain $C$. A schematic appears in Figure~\ref{fig:tree2}. In this diagram we have included extra edges showing that $ \widehat{\text{Tr}}(L)$ might be a proper subtree of $ {\text{Tr}}(L)$ and that $C$ might have more than $n$ boundary components.
\begin{figure}[h]
\labellist
\pinlabel {\text{\Large{$L_1$}}} at 20 30
\pinlabel {\text{\Large{$L_2$}}} at 86 30
\pinlabel {\text{\Large{$L_3$}}} at 152 30
\pinlabel {\text{\Large{$L_4$}}} at 218 30
\pinlabel {\text{\Large{$L_5$}}} at 285 30
\pinlabel {\text{\Large{$C$}}} at 153 252
\endlabellist
\includegraphics[scale=.48]{tree2}
\vskip.15in
\caption{Possible tree diagram $\widehat{\text{Tr} } (L)$ for a five component link $L$ on which $ { \sf {Diff} }^*(L)$ acts transitively.}
\label{fig:tree2}
\end{figure}
Let $ { \sf {Diff} }(C)$ be the diffeomorphism group of the core $C$. It contains a subgroup $ { \sf {Diff} }(C,T)$ that leaves invariant the set of $T_i$. This group maps to ${\mathbb S}_n$ via its action on $\{T_i\}$.
\begin{theorem} In the case that ${\mathbb S}(L) = {\mathbb A}_n$ with $n\ge 5$, with core $C$, the group $ { \sf {Diff} }(C,T)$ acts on $\{T_i\}$ as either ${\mathbb A}_n$ or ${\mathbb S}_n$.
\end{theorem}
\begin{proof} It is clear that the action contains ${\mathbb A}_n$. The only subgroups of ${\mathbb S}_n$ that contain ${\mathbb A}_n$ are ${\mathbb A}_n$ and ${\mathbb S}_n$.
\end{proof}
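The group-theoretic fact used here can be checked directly for small $n$: since $[{\mathbb S}_n : {\mathbb A}_n] = 2$, adjoining any single odd permutation to ${\mathbb A}_n$ generates all of ${\mathbb S}_n$, so no intermediate subgroup exists. A computational sketch (sympy assumed):

```python
from sympy.combinatorics import (AlternatingGroup, SymmetricGroup,
                                 PermutationGroup)

# Any subgroup strictly containing A_n contains an odd permutation,
# and A_n together with one odd permutation generates S_n.
results = {}
for n in (3, 4, 5):
    A, S = AlternatingGroup(n), SymmetricGroup(n)
    odd = [g for g in S.elements if not A.contains(g)]
    results[n] = all(
        PermutationGroup(list(A.generators) + [g]).order() == S.order()
        for g in odd)
```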
Notice that it might happen that there are elements of $ { \sf {Diff} }(C,T)$ that do not map to elements of ${\mathbb A}_n$; it is possible that not every action on $C$ extends to $S^3$.
\section{Reembeddings}\label{sec:reembed}
Reembeddings appear in two different ways in our proof. In the case of $C$ hyperbolic, we embed $C$ in $S^3$ as a link complement. In the Seifert fibered case, we embed $C$ into a closed Seifert fibered space as the complement of a set of regular fibers. In this section, we describe the embedding into $S^3$.
In the previous section, some of the (torus) boundary components of the core $C$ were denoted $T_i$. We will now see that by using reembeddings we can view these $T_i$, along with the other boundary components $S_i$ of $C$, as peripheral tori for a link in $S^3$. This is presented in~\cite{MR2300613}, where Budney gave a reembedding theorem for submanifolds of $S^3$. Here we present a slightly enhanced version of that result, keeping track of boundary curves. First we set up some notation.
Let $X\subset S^3$ be a compact connected submanifold with one of its boundary components a torus $T$. The complement of $T$ consists of two spaces, $Y_1$ and $Y_2$. We have $H_1(Y_1) \cong {\mathbb Z} \cong H_1(Y_2)$. We assume $X \subset Y_1$. When needed, we will write these as $Y_1(X,T)$ and $Y_2(X,T)$.
We have that $\ker( H_1(T) \to H_1(Y_1)) \cong {\mathbb Z}$. The generator can be represented by a simple closed curve we denote $l$. Similarly, a representative of $\ker( H_1(T) \to H_1(Y_2)) \cong {\mathbb Z}$ is denoted $m$. There is no natural orientation for these choices. However, we can assume that they are oriented so that the intersection number of $m$ and $l$ is 1 with respect to the orientation of $T$ viewed as the boundary of $Y_1$. We can also assume that $m$ and $l$ intersect transversely in exactly one point. With this setup, we have the following.
\begin{theorem} There exists an orientation preserving embedding $F\colon\thinspace X \to S^3$ such that $ F(T)$ is the boundary of a tubular neighborhood of a knot in $S^3$ having meridian $F(m)$ and longitude $F(l)$.
\end{theorem}
\begin{proof}
An embedded torus in $S^3$ bounds (on one side or the other) a solid torus which we denote $W$. If $Y_2 = W$, then $m$ is the meridian of $W$ and $F$ can be taken to be the identity.
If $Y_1 = W$, then form the boundary union $Z = Y_1 \cup W'$, where $W'$ is a solid torus, attached so that its meridian is identified with $m$ and its longitude is identified with $l$. Then $Z$ is the union of two solid tori and the choice of identification ensures that $H_1(Z) = 0$. Thus, $Z \cong S^3$.
\end{proof}
\begin{corollary} \label{cor:embed2}Suppose that $X \subset S^3$ is a compact manifold with boundary a union of tori $\{T_1, \ldots , T_k\}$. There exists a link $L = \{L_1, \ldots , L_k\}$ and an orientation preserving homeomorphism $F\colon\thinspace X\to S^3 \setminus \nu(L)$, where $\nu(L)$ is an open tubular neighborhood. Furthermore, it can be assumed that $F$ preserves meridians and longitudes.\end{corollary}
\begin{corollary} \label{cor:embed3} With $X \subset S^3$ and $L$ as in Corollary~\ref{cor:embed2}, suppose that a diffeomorphism $g\colon\thinspace S^3 \to S^3$ satisfies $g(X) = X$. Then the diffeomorphism of $F(X)$ given as the composition $F \circ g \circ F^{-1}$ extends to a diffeomorphism of $(S^3, L)$.
\end{corollary}
\noindent{\bf Note.} Not every diffeomorphism of $X$ determines a diffeomorphism of $L$. It is essential here that the diffeomorphism of $X$ extends to $S^3$.
\subsection{Summary theorem}
\begin{theorem} Suppose that ${\mathbb S}(L) = {\mathbb A}_n$. Then there is a link $( L'_1, \ldots, L'_n, J_1, \ldots, J_m)$ whose complement is diffeomorphic to $C$ and is either hyperbolic or Seifert fibered. The mapping class group of this link has a subgroup that preserves $(L_1', \ldots, L_n')$. The image of this subgroup in ${\mathbb S}_n$ is either ${\mathbb A}_n$ or ${\mathbb S}_n$.
\end{theorem}
\begin{proof} To prove this using the previous results, we need to show that a JSJ-decomposition exists; that is, that $L$ is non-splittable. If $L$ does split, it splits as the union of non-split sublinks, say $D_1, \ldots , D_k$, where each $D_i$ is contained in a ball that does not intersect the other $D_j$. The transitivity of the ${\mathbb A}_n$--action implies that the $D_i$ are identical links. Thus, we can write $D_i = \{ D_i^1, \ldots , D_i^m\}$ for some $m$ that is independent of $i$.
If $m =1$, then $L$ consists of $n$ copies of a knot $J$, each copy in a separate ball. In this case, the symmetry group would be ${\mathbb S}_n$. If $m =n$, then we are in the nonsplit case, as desired.
Finally, if $1 < m < n$, then any element of $ { \sf {Diff} }(L)$ that carries $D_1^1$ to $D_2^1$ must carry $D_1^2$ to some $D_2^i$. But not every element of ${\mathbb A}_n$ behaves in this way.
\end{proof}
To complete the proof that ${\mathbb A}_n$, $n \ge 6$, is not the intrinsic symmetry group of any link, we consider the hyperbolic and Seifert fibered cases separately.
\section{The case of $C$ hyperbolic.}\label{sec:hyperbolic}
We use the notion of {\it core} as in the previous section.
\begin{theorem}If ${\mathbb A}_n \subset {\mathbb S}(L)$ and the core $C$ is hyperbolic, then some finite subgroup of $SO(4)$ has ${\mathbb A}_n$ as a quotient.
\end{theorem}
\begin{proof}
For each element $\phi \in { \sf {Diff} }(C, \partial C)$ that extends to $S^3$, let $\phi'$ denote an isometry that is isotopic to $\phi$ relative to the boundary. Note that $\phi$ and $\phi'$ act in the same way on the set of components of $\partial C$. The set of such $\phi'$ generates a subgroup of Isom($C$). This is necessarily a finite group, $H$. The group $H$ contains the subgroup $H' \subset H$ that leaves invariant the set $\{L_1', \ldots , L_n'\}$. The image of $H'$ in ${\mathbb S}_n$ contains ${\mathbb A}_n$. By restricting to a further subgroup $H''$, we can assume the image is precisely ${\mathbb A}_n$.
By results such as~\cite{MR2416241, MR2178962}, any finite subgroup of Diff($S^3$), such as $H''$, is isomorphic to a subgroup of $SO(4)$.
\end{proof}
\begin{corollary} If ${\mathbb A}_n \subset {\mathbb S}(L)$ then $n \le 5$.
\end{corollary}
\begin{proof} This follows from the results of the next subsection. \end{proof}
\subsection{The only noncyclic simple group that is a quotient of a subgroup of $SO(4)$ is ${\mathbb A}_5$.}
We prove somewhat more than this.
\begin{theorem} If $A$ is a nonabelian simple group and a subgroup $H \subset SO(4)$ surjects onto $A$, then $A\cong {\mathbb A}_5$.
\end{theorem}
Denote the surjection from $H$ to $A$ by $\phi\colon\thinspace H \to A$. We begin by recalling the structure of $SO(4)$.
The set of unit quaternions is homeomorphic to $S^3$ and as a Lie group is isomorphic to $SU(2)$. Quotienting by $\pm 1$ yields a 2--fold cover $SU(2) \to SO(3)$.
Let $x$ and $y$ be unit quaternions and view elements $v \in {\mathbb R}^4$ as quaternions. Then $x$ and $y$ define an element $\psi_{x,y} \in SO(4)$ by $\psi_{x,y} (v) = xvy^{-1}$. The assignment $(x,y) \mapsto \psi_{x,y}$ is a homomorphism that yields a 2--fold covering $SU(2) \times SU(2) \to SO(4) $. Hence, $SO(4) \cong (SU(2) \times SU(2)) / \left<(-1,-1)\right> $.
There is a 2--fold covering space $q\colon\thinspace (SU(2) \times SU(2))/ \left< (-1,-1)\right> \to SO(3) \times SO(3)$. We thus have the following diagram.
\[
\begin{diagram}
\node{(SU(2) \times SU(2))/ \left<(-1,-1)\right>} \arrow[2]{e,t}{\cong} \arrow{s,l}{2-\text{fold cover}, \text{$q$}}
\node[2]{SO(4)} \\
\node{SO(3) \times SO(3)} \end{diagram}
\]
We will write elements of $SO(4)$ and of $SO(3)\times SO(3)$ as equivalence classes of pairs of unit quaternions.
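The quaternionic description of $SO(4)$ can be illustrated numerically. The sketch below (the function names and the numpy dependency are our choices) checks that $v \mapsto xvy^{-1}$ preserves the norm, and that $(x,y)$ and $(-x,-y)$ define the same element of $SO(4)$, consistent with quotienting by $\left<(-1,-1)\right>$.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

rng = np.random.default_rng(7)

def random_unit():
    v = rng.normal(size=4)
    return v / np.linalg.norm(v)

x, y = random_unit(), random_unit()
v = rng.normal(size=4)

# psi_{x,y}(v) = x v y^{-1}; for a unit quaternion y, y^{-1} = qconj(y)
image = qmul(qmul(x, v), qconj(y))
norm_preserved = np.isclose(np.linalg.norm(image), np.linalg.norm(v))

# (x, y) and (-x, -y) give the same linear map of R^4
same_rotation = np.allclose(image, qmul(qmul(-x, v), qconj(-y)))
```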
\begin{lemma} The map $\phi$ induces a surjection $\phi' \colon\thinspace q(H) \subset SO(3) \times SO(3) \to A$.\end{lemma}
\begin{proof} If the map $q\colon\thinspace H \to q(H)$ is an isomorphism, then this is trivially true. It is possible that $q\colon\thinspace H \to q(H)$ is two-to-one, which can occur if and only if the central element $(1,-1) \in H$. In this case $q(H) \cong H/\left< (1,-1)\right>$. Since $A$ is nonabelian and simple, the image of $(1,-1)$ in $A$ is trivial. \end{proof}
\begin{lemma} Let $G \subset SO(3) \times SO(3)$. Let $G_1$ and $G_2$ be the images of the projections of $G$ onto the first and second factors of the product. If $\phi' \colon\thinspace G \to A$ is a surjection, where $A$ is nonabelian and simple, then a subgroup of $G_1$ or $G_2$ maps onto $A$. In particular, $A$ is a quotient of a finite subgroup of SO$(3)$. \end{lemma}
\begin{proof} Let $F = G \cap ( \text{SO}(3) \times \{1\})$. We have that $F$ is a normal subgroup of $G$ and thus, $\phi'(F) = A$ or $\phi'(F) = \{1\}$. In the first case, we are done, so assume that $\phi'(F) = \{1\}$.
We now define a surjective homomorphism $\phi''\colon\thinspace G_2 \to A$. Given $y \in G_2$, there exists an element $x \in G_1$ such that $(x,y) \in G$. Set $\phi''( y) = \phi'( (x,y))$. To see that this is well-defined, notice that if $(x_1, y) \in G$ and $(x_2, y) \in G$, then $x_1 x_2^{-1} \in F$. Thus $ \phi'( (x_1 ,y)) = \phi'( (x_2 ,y)) $. It is easily checked that $\phi''$ is surjective and is a homomorphism.
\end{proof}
\begin{lemma} The group ${\mathbb A}_5$ is the only finite noncyclic simple group contained in SO($3$).
\end{lemma}
\begin{proof}
The finite subgroups of SO$(3)$ are classified. Here is the list of possibilities.
\begin{itemize}
\item $A_n \cong {\mathbb Z}_n$: cyclic groups.
\item $D_n$: dihedral groups.
\item $E_6 \cong {\mathbb A}_4$: tetrahedral group.
\item $E_7 \cong {\mathbb S}_4$: octahedral group.
\item $E_8 \cong {\mathbb A}_5$: icosahedral group.
\end{itemize}
The subgroups of dihedral groups are either dihedral or cyclic, and hence are not noncyclic simple groups. The groups ${\mathbb A}_4$ and ${\mathbb S}_4$ and their subgroups are ruled out because the smallest nonabelian simple group is ${\mathbb A}_5$, of order 60.
\end{proof}
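The role of ${\mathbb A}_5$ in the lemma can be double-checked computationally: ${\mathbb A}_5$ is simple (the normal closure of every nontrivial element is the whole group), while ${\mathbb A}_4$, for instance, is not. A sketch, with sympy assumed available:

```python
from sympy.combinatorics import (AlternatingGroup, Permutation,
                                 PermutationGroup)

A5 = AlternatingGroup(5)
# A_5 is simple: every nontrivial normal closure is all of A_5.
a5_simple = all(
    A5.normal_closure(PermutationGroup([g])).order() == 60
    for g in A5.elements if g != A5.identity)

# By contrast, A_4 is not simple: a double transposition normally
# generates the Klein four-group, a proper normal subgroup.
A4 = AlternatingGroup(4)
v4 = A4.normal_closure(PermutationGroup([Permutation(0, 1)(2, 3)]))
a4_not_simple = (v4.order() == 4)
```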
\section{The case of $C$ Seifert fibered.}\label{sec:seifertfibered}
We begin with a basic example.
\begin{example} \label{ex:sfexamp}Consider the $(n+2)$-component link $L$ formed as follows. Let $T$ be a standardly embedded torus in $S^3$ and form the $(np, nq)$--torus link on $T$, with $q > p >1$ relatively prime. Add to this the cores of the two solid tori bounded by $T$. There is a Seifert fibration of $S^3$ in which the torus link consists of regular fibers and the two cores are the singular fibers of type $p/q$ and $q/p$.
We leave it to the reader to confirm that for this link, $\Sigma(L) \cong {\mathbb Z}_2 \oplus {\mathbb S}_{n}$. It should be clear how the components of the $(np, nq)$--torus link can be freely permuted. The ${\mathbb Z}_2$ arises from a diffeomorphism that reverses all the components.
Two exercises arise here. The first is to show that every symmetry fixes the two core circles. The second is to show that the complement of this link is homeomorphic to the complement of $n+2$ fibers of the Hopf fibration of $S^3$.
More examples can be built from this one. Let $J\subset S^1 \times B^2$ be a knot for which $\partial (S^1 \times B^2)$ is incompressible in the complement of $J$. A new link can be formed by replacing neighborhoods of the components of $L$ with copies of $(S^1 \times B^2, J)$. Then the symmetry group of this new link will be isomorphic to either $ {\mathbb Z}_2 \oplus {\mathbb S}_{n-2}$ or $ {\mathbb S}_{n-2}$, depending on the symmetry type of $J$.
\end{example}
\subsection{$C$ is the complement of regular fibers in a closed Seifert manifold}
\begin{example} Figure~\ref{fig:seifertfibered} provides a schematic of one possible case in which the core $C$ is Seifert fibered. Some of the labels in the diagram will be explained later. A link $L$ can be formed by filling each $T_i$ with a pair $(S^1 \times B^2, J_i)$ and filling the $S_i$ with either solid tori or nontrivial knot complements. There are constraints required for this to produce a link in $S^3$, and we do not assert that every Seifert fibered core is of this form. We illustrate it to provide a good model to have in mind as we develop the notation and arguments that follow. Another good model is provided by Example~\ref{ex:sfexamp}.
\begin{figure}[h]
\labellist
\pinlabel {\text{\Large{$T_1$}}} at 25 -10
\pinlabel {\text{\Large{$f$}}} at 15 40
\pinlabel {\text{\Large{$T_l$}}} at 65 -10
\pinlabel {\text{\Large{$T_{l+1}$}}} at 122 -10
\pinlabel {\text{\Large{$T_p$}}} at 160 -10
\pinlabel {\text{\Large{$S_i$}}} at 210 -10
\pinlabel {\text{\Large{$S_j$}}} at -10 80
\endlabellist
\includegraphics[scale=.80]{seifertfibers2}
\vskip.15in
\caption{Possible Seifert fibered core $C$.}
\label{fig:seifertfibered}
\end{figure}
Notice that the complement of this link is homeomorphic to the complement of the link formed by giving the parallel strands a full twist. In this case, all the components, including the horizontal one, are fibers of the Hopf fibration of $S^3$. More generally we have the following.
\end{example}
\begin{theorem} The core $C$ is diffeomorphic to the complement of a set of regular fibers in a closed Seifert manifold.
\end{theorem}
\begin{proof} Build a manifold $M$ by attaching solid tori to the boundary components of $C$ so that each longitude is identified with the fiber of the fibration of $C$. Then the Seifert fibration of $C$ extends to $M$ and the cores of the solid tori are regular fibers.
\end{proof}
We now fix this choice of $M$ and its Seifert fibration.
\subsection{Notation and a basis for $H_1(T_i)$}
For each $T_i$, there is a basis of $H_1(T_i)$ represented by a pair of curves, $\{f_i, g_i\}$: since $T_i$ bounds the solid torus neighborhood of a regular fiber, we let $f_i$ denote the fiber and let $g_i$ denote the meridian of the solid torus.
Each torus $T_i$ bounds a submanifold of $S^3$ that contains the component $L_i$; denote it by $W_i$. All the pairs $(W_i,L_i)$ are diffeomorphic, so we choose one and denote it $(W,K)$, with boundary $T$. We have that $T$ contains a canonical longitude that is null-homologous in $W$, which we denote $\lambda$; choose a second curve intersecting it once and denote it $\mu$.
We now see that $(S^3, L)$ is built from $C$ by attaching copies of $W$ to the $T_i$ using attaching maps we denote $G_i$. (Other manifolds have to be attached along the other boundary components of $C$, which we have denoted $S_i$.) Denote the images of $\{ \lambda, \mu \}$ under $G_i$ by $\{ \lambda_i , \mu_i \} $.
\begin{theorem} The intersection number of $\lambda_i$ with $f_i$ is nonzero. \end{theorem}
\begin{proof} Our proof depends on the uniqueness of the fibrations of Seifert fibered manifolds, up to isotopy. This does not hold for all Seifert manifolds (e.g.~$S^1\times B^2$), but Waldhausen~\cite{MR235576} proved that if the Seifert fibered manifold is sufficiently large, that is, if it contains an incompressible surface that is not boundary parallel, then the fibration is unique. (See also the reference~\cite{MR0426001}.) In the case that the three-manifold has four or more boundary components, it is clearly sufficiently large: the preimage of a circle in the base space that separates two of the boundary components from the others is an incompressible torus that is not boundary parallel.
We now claim that the $\lambda_i$ are not fibers of the fibration. Consider $ i \ne j$ and the pair $\lambda_i$ and $\lambda_j$. Any element of $ { \sf {Diff} }(L)$ that maps $L_i$ to $L_j$ carries $\lambda_i$ to $\pm\lambda_j$. Self-homeomorphisms of Seifert fibered spaces with more than three boundary components preserve fibers up to isotopy, so if $\lambda_i$ is a fiber, then $\lambda_j$ is also a fiber.
Suppose that $\lambda_i$ and $\lambda_j$ are both fibers. Then there is a vertical annulus $A$ in $C$ joining $\lambda_i$ to $\lambda_j$. There are also surfaces $B_i$ and $B_j$ in $W_i$ and $W_j $ with boundaries $\lambda_i$ and $\lambda_j$. The union of $A$ with $B_i \cup B_j$ is a closed surface in $S^3$. There is also a curve on $T_i$ meeting this surface in exactly one point. This is impossible in $S^3$.
\end{proof}
\subsection{Maps between the $T_i$.}
Without loss of generality, we will focus on $T_1$ and $T_2$. We denote a chosen element in $ { \sf {Diff} }(L)$ that carries $L_1$ to $L_2$ by $F$. Note that we can assume $F(f_1) = f_2$, $F(\lambda_1) = \lambda_2$, and $F(\mu_1) = \mu_2$. However, maps of $C$ do not necessarily preserve the $g_i$. We can assume that $F(g_1) = g_2 + wf_2$ for some $w$.
For both values of $i$ we have constants so that
\[ \lambda_i = \alpha_i f_i + \beta_i g_i \hskip.5in \mu_i = \delta_i f_i + \gamma_i g_i.\]
Applying $F$ to the equations with $i=1$ and renaming the constants, we have
\[ \lambda_1= \alpha f_1 + \beta g_1 \hskip.5in \mu_1= \delta f_1+ \gamma g_1 \hskip.3in \text{and}\]
\[ \lambda_2= (\alpha f_2 + \beta g_2) + \beta w f_2 \hskip.5in \mu_2= ( \delta f_2+ \gamma g_2) + \gamma wf_2.\]
\subsection{Constructing the transposition}
\begin{theorem}\label{thm:transpose}There is a diffeomorphism $G $ of $C$ that interchanges $T_1$ and $T_2$ and is the identity on all other boundary components of $C$. The map $G$ can be chosen so that it carries $f_1$ to $f_2$ and $f_2$ to $f_1$, and satisfies $G(g_1) = g_2 + wf_2$ and $G(g_2) = g_1 -wf_1$.
\end{theorem}
\begin{proof}
Using the fact that the $T_i$ are boundaries of neighborhoods of regular fibers, there is a diffeomorphism of $C$ that interchanges $T_1$ and $T_2$ and preserves the pairs $\{f_i, g_i\}$. This map can be assumed to be the identity on the other boundary components.
There is a vertical annulus in $C$ joining $f_1$ and $f_2$. We can perform a $w$--fold twist along this annulus. The twist is the identity on all boundary components other than $T_1$ and $T_2$; on those it preserves $f_1$ and $f_2$, maps $g_1$ to $g_1 -wf_1$, and maps $g_2$ to $g_2 + wf_2$. Composing the interchange with this twist yields the desired $G$.
\end{proof}
\subsection{Main theorem in Seifert fibered case}
\begin{theorem} If ${\mathbb A}_n \subset {\mathbb S}(L)$ and the associated core $C$ is Seifert fibered, then there is an element $H \in { \sf {Diff} }(L)$ which transposes $L_1$ and $L_2$. It follows that ${\mathbb S}(L) = {\mathbb S}_n$. \end{theorem}
\begin{proof} The map $G$ given in Theorem~\ref{thm:transpose} satisfies
\[ G(\lambda_1) = \alpha f_2 + \beta(g_2 +wf_2) \hskip.2in \text{and} \hskip.2in G(\mu_1) = \delta f_2 + \gamma (g_2 +wf_2) .
\]
It also satisfies
\[ G(\lambda_2) = \alpha f_1 + \beta(g_1 -wf_1) +\beta w f_1 \hskip.2in \text{and} \hskip.2in G(\mu_2) = \delta f_1 + \gamma (g_1 -wf_1) +\gamma w f_1.
\]
Simplifying shows that $G(\lambda_1) = \lambda_2$, $G(\mu_1) = \mu_2$, $G(\lambda_2) = \lambda_1$, and $G(\mu_2) = \mu_1$. Thus $G$ interchanges the attaching maps of $W$ to $T_1$ and $T_2$, and so extends as desired.
\end{proof}
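The simplification in the last step can be verified symbolically, treating the homology classes on the $T_i$ as formal linear combinations of the basis curves. A sketch (sympy assumed available):

```python
from sympy import symbols, expand

alpha, beta, delta, gamma, w = symbols('alpha beta delta gamma w')
f1, g1, f2, g2 = symbols('f1 g1 f2 g2')

# The classes on T_1 and T_2, as in the text.
lam1 = alpha*f1 + beta*g1
mu1 = delta*f1 + gamma*g1
lam2 = (alpha*f2 + beta*g2) + beta*w*f2
mu2 = (delta*f2 + gamma*g2) + gamma*w*f2

# G interchanges the fibers and satisfies
# g1 -> g2 + w*f2,  g2 -> g1 - w*f1.
def G(c):
    return c.subs({f1: f2, f2: f1, g1: g2 + w*f2, g2: g1 - w*f1},
                  simultaneous=True)

swaps = (expand(G(lam1) - lam2) == 0 and expand(G(mu1) - mu2) == 0
         and expand(G(lam2) - lam1) == 0 and expand(G(mu2) - mu1) == 0)
```

The check confirms that $G$ swaps the pairs $(\lambda_1, \mu_1)$ and $(\lambda_2, \mu_2)$.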
\section{Questions}\label{sec:questions}
\begin{enumerate}
\item Figure~\ref{fig:a4} illustrates a 4--component link $L$ satisfying $ {\mathbb S}(L) = {\mathbb A}_4$. This example was found by Nathan Dunfield using SnapPy~\cite{SnapPy}. In the SnapPy enumeration of links, it is $L12a2007$. Does there exist a 5--component link $L$ with ${\mathbb S}(L) = {\mathbb A}_5$? (The group ${\mathbb S}_4$ has 11 conjugacy classes of subgroups. Each is ${\mathbb S}(L)$ for some link. The case of ${\mathbb A}_4$ was the most difficult to find.)
\begin{figure}[h]
\includegraphics[scale=.15]{A4-2}
\caption{A link with intrinsic symmetry group $A_4$.}
\label{fig:a4}
\end{figure}
\item \label{question:fw} The Fox-Whitten group $\Gamma_n$ maps onto ${\mathbb S}_n$, and thus the obstructions we have developed here provide obstructions to groups $G \subset \Gamma_n$ from being oriented intrinsic symmetry groups of links. Can the techniques used here provide finer obstructions in the oriented case?
\item As a particular example of Question~\ref{question:fw}, can any of the unknown cases for $2$--component links described in Section~\ref{sec:links} be eliminated as possible intrinsic symmetry groups?
\item If a subgroup $H\subset {\mathbb S}_n$ or $H \subset \Gamma_n$ is the intrinsic symmetry group for a link, is it the intrinsic symmetry group of a non-split link or of an irreducible link?
\item \label{prob:brunnian} A natural class of links consists of Brunnian links; these are non-splittable but become the unlink upon removing any one of the components. The links produced in Theorem~\ref{thm:fullsym} having symmetry group ${\mathbb S}_n$ are not Brunnian. The examples of 2--component and 3--component links with ${\mathbb S}(L) = {\mathbb S}_n$ that precede the proof of that theorem are Brunnian. Hence, we ask: for all $n\ge 3$, does there exist a Brunnian link $L$ with ${\mathbb S}(L) = {\mathbb S}_n$?
\item Another class to consider is alternating links, and presumably there are strong constraints on ${\mathbb S}(L)$ for these.
\item Let $M$ be a compact three-manifold with $n$ torus boundary components, $\partial_i(M)$. Choose a basis of $H_1(\partial_i(M))$ for each $i$. One can form a Whitten-like group $\Omega_n ={\mathbb Z}_2 \oplus (G^n \rtimes {\mathbb S}_n)$ where $G$ is the automorphism group of ${\mathbb Z} \oplus {\mathbb Z}$. Each manifold $M$ gives rise to a subgroup of $\Omega_n$. What subgroups arise in this way? This is particularly interesting in the case that the interior of $M$ has a complete hyperbolic structure.
\item The previous question can be modified. Given a subgroup $H \subset {\mathbb S}_n$, is there a complete hyperbolic three-manifold $M$ with $n$ cusps such that $H$ represents the permutations of the cusps that are realized by isometries of $M$? In relation to this, Paoluzzi and Porti~\cite{MR2493374} proved that every finite group is the isometry group of the complement of a hyperbolic link in $S^3$. Notice that their isometries need not extend to $S^3$. Applying their construction to a subgroup of ${\mathbb S}_n$ does not produce an $n$ component link.
\end{enumerate}
\section{Introduction}
\label{sec:intro}
Camera calibration deals with finding the five intrinsic (focal length, image sensor format, and principal point) and six extrinsic (rotation, translation) parameters of a specific camera.
\begin{figure}
\centering
{\includegraphics[width=0.95\columnwidth]{images/cm.png}}
\caption{Our method estimates extrinsic (baseline, pitch and translation) and intrinsic (focal length and principal point offset) parameters using pre-trained Inception-v3~\cite{szegedy2016rethinking} and the proposed Camera Projection Loss.}
\label{fig:flowdiagram}
\end{figure}
Most of the existing methods usually ignore the underlying mathematical formulation of the camera model and instead propose an end-to-end framework to directly estimate the desired parameters~\cite{workman2015deepfocal,rong2016radial,hold2018perceptual, lopez2019deep, zhai2016detecting, detone2016deep,bogdan2018deepcalib,zhang2020deepptz,workman2016horizon,barreto2006unifying}. Thus they are difficult to interpret for real-world applications and have so far mainly been able to estimate the focal length of the camera from a single image only~\cite{detone2016deep,bogdan2018deepcalib,zhang2020deepptz}.
The major contributions of our work are as follows:
\begin{itemize}
\item To the best of our knowledge, this work is the first learning-based method to jointly estimate both intrinsic and extrinsic camera parameters including camera baseline, disparity, pitch, translation, focal length and principal point offset.
\item The existing learning based approaches~\cite{detone2016deep, workman2015deepfocal, zhang2020deepptz} have not been applied to the estimation of all $10$ camera parameters due to the lack of a suitable dataset. We address this limitation by generating a synthetic dataset from two towns in the CARLA~\cite{Dosovitskiy17} simulator, consisting of $49$ different camera settings.
\item Unlike existing methods, instead of designing an end-to-end solution to directly estimate the desired parameters~\cite{detone2016deep}, we propose a new representation that expresses the camera model equations as a neural network in a multi-task learning (MTL) framework.
\item We propose a novel \emph{camera projection loss} (CPL) that combines analytical equations with the learning framework.
\end{itemize}
\section{Proposed Method}
\label{sec:pm}
We propose to utilize multi-task learning by incorporating mathematical equations through a new loss function embedded as a neural network for better representation while learning.
We train a convolutional neural network to predict the extrinsic and intrinsic camera parameters. To achieve this, we use dependent regressors that share a common network architecture as the feature extractor. We use an Inception-v3~\cite{szegedy2016rethinking} pretrained on ImageNet~\cite{russakovsky2015imagenet}
as a feature extractor followed by the Lambda layers for loss computation with $13$ regressors, 10 of which correspond to the camera parameters while 3 correspond to the 3D point cloud. Instead of training these regressors to predict the focal length, principal point, baseline, pitch, and translation, we use proxy variables that are not visible in the image and are dependent on each other. This allows us to directly relate our method with the mathematical foundations of multi-view geometry~\cite{Hartley:2003:MVG:861369} resulting in better performance.
\textbf{Camera Model}: The camera model that we consider is the perspective projection model based on the pinhole camera~\cite{faugeras1993three}. In this work, we rely on the projection of 2D image points $(u,v)$ to 3D world points $(X, Y, Z)$ as a reference. This leaves us with the estimation of $13$ free parameters: focal length ($f_x$, $f_y$), principal point ($u_0$, $v_0$), disparity ($d$), baseline ($b$), pitch ($\theta_p$), translation $\mathbf{t} = \{t_x, t_y, t_z\}$ and 3D coordinates $(X, Y, Z)$.
\textbf{Parameterization}: As revealed by previous work ~\cite{workman2015deepfocal,workman2016horizon,hold2018perceptual}, an adequate parameterization of the variables to predict can benefit convergence and final performance of the network. For the case of camera calibration, parameters such as the focal length or the tilt angles are difficult to interpret from the image content. Instead, they can be better represented by proxy parameters that are directly observable in the image. We use 2D to 3D projection as a proxy for our parameters.
A 2D point in image coordinate system is projected to camera coordinate and then to world coordinate system and the process can be explained by the following:
\begin{align}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \sim \begin{bmatrix}\begin{pmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}_{3\times 1}^T & 1\end{pmatrix}\end{bmatrix}^{-1} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix},
\label{eq:4}
\end{align}
\begin{align}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \sim \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}_{3\times 1}^T & 1\end{pmatrix}^{-1} \begin{pmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix}^{-1} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}.
\label{eq:5}
\end{align}
where $\mathbf{R}$ is the camera rotation matrix. The image point $(u, v)$ to camera point $(x_{cam}, y_{cam}, z_{cam})$ transformation can be performed as follows (assuming skew = 0):
\begin{align}
\begin{pmatrix} y_{cam} \\ z_{cam} \\ x_{cam} \end{pmatrix} \sim \begin{pmatrix} \frac{1}{f_x} & 0 & \frac{-u_0}{f_x} \\ 0 & \frac{1}{f_y} & \frac{-v_0}{f_y} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
\label{eq:6}
\end{align}
\begin{subequations}
\begin{equation}
\label{eq-6a}
y_{cam} = \frac{u}{f_x} - \frac{u_0}{f_x} = \frac{u - u_0}{f_x}
\end{equation}
\begin{equation}
\label{eq-6b}
z_{cam} = \frac{v}{f_y} - \frac{v_0}{f_y} = \frac{v - v_0}{f_y}
\end{equation}
\begin{equation}
\label{eq-6c}
x_{cam} = 1
\end{equation}
\end{subequations}
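A quick round-trip check of these three relations, sketched in Python (the function names are ours, not from the paper; the intrinsics used are arbitrary illustrative values):

```python
def pixel_to_camera(u, v, f_x, f_y, u_0, v_0):
    """Normalized camera-frame coordinates of pixel (u, v),
    with x_cam = 1 along the optical axis (Eqs. 6a-6c)."""
    y_cam = (u - u_0) / f_x
    z_cam = (v - v_0) / f_y
    return y_cam, z_cam, 1.0

def camera_to_pixel(y_cam, z_cam, f_x, f_y, u_0, v_0):
    """Inverse map: re-applying the intrinsic matrix recovers the pixel."""
    return f_x * y_cam + u_0, f_y * z_cam + v_0
```

Composing the two functions returns the original pixel for any choice of focal lengths and principal point, confirming that the scalar equations above are just the inverse intrinsic matrix written out.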
Similarly, for camera to world transformation we have:
\begin{align}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \sim \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}_{3\times 1}^T & 1\end{pmatrix} \begin{pmatrix} x_{cam} \\ y_{cam} \\ z_{cam} \\ 1 \end{pmatrix}
\label{eq:7}
\end{align}
\begin{align}
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} \sim \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta\end{pmatrix} \begin{pmatrix} x_{cam} \\ y_{cam} \\ z_{cam} \end{pmatrix} + \begin{pmatrix} x \\ y \\ z \end{pmatrix}
\label{eq:8}
\end{align}
\begin{subequations}
\begin{equation}
\label{eq-8a}
X = x_{cam} * \cos\theta + z_{cam} * \sin\theta + x
\end{equation}
\begin{equation}
\label{eq-8b}
Y = y_{cam} + y
\end{equation}
\begin{equation}
\label{eq-8c}
Z = -x_{cam} * \sin\theta + z_{cam} * \cos\theta + z
\end{equation}
\end{subequations}
To project a point from image to camera coordinate:
\begin{subequations}
\begin{equation}
\label{eq-4a}
x_{cam} = f_x * b / d
\end{equation}
\begin{equation}
\label{eq-4b}
y_{cam} = -(x_{cam} / f_x) * (u - u_0)
\end{equation}
\begin{equation}
\label{eq-4c}
z_{cam} = (x_{cam} / f_y) * (v_0 - v)
\end{equation}
\end{subequations}
$x_{cam}$ works as a proxy for $f_x$, baseline and disparity, while $y_{cam}$ works as a proxy for $f_x$, $u$ and $u_0$, and $z_{cam}$ works as a proxy for $f_y$, $v$ and $v_0$. Using $x_{cam}$, $y_{cam}$ and $z_{cam}$ from Eq.~\ref{eq-4a}, Eq.~\ref{eq-4b} and Eq.~\ref{eq-4c} respectively, points can be projected to the world coordinate system using:
\begin{subequations}
\begin{equation}
\label{eq-5a}
X = x_{cam} * \cos(\theta_p) + z_{cam} * \sin(\theta_p) + t_x
\end{equation}
\begin{equation}
\label{eq-5b}
Y = y_{cam} + t_y
\end{equation}
\begin{equation}
\label{eq-5c}
Z = -x_{cam} * \sin(\theta_p) + z_{cam} * \cos(\theta_p) + t_z
\end{equation}
\end{subequations}
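The two stages above chain into a single back-projection. A minimal Python sketch (the function name and numeric values are ours, purely illustrative of the equations):

```python
import math

def image_to_world(u, v, f_x, f_y, u_0, v_0, b, d, theta_p, t_x, t_y, t_z):
    """Back-project pixel (u, v) with stereo disparity d into world
    coordinates: depth from the baseline, then a pitch rotation plus
    translation into the world frame."""
    # Image -> camera: stereo depth along the optical axis
    x_cam = f_x * b / d
    y_cam = -(x_cam / f_x) * (u - u_0)
    z_cam = (x_cam / f_y) * (v_0 - v)
    # Camera -> world: rotate by pitch theta_p, then translate
    X = x_cam * math.cos(theta_p) + z_cam * math.sin(theta_p) + t_x
    Y = y_cam + t_y
    Z = -x_cam * math.sin(theta_p) + z_cam * math.cos(theta_p) + t_z
    return X, Y, Z
```

At the principal point, with zero pitch and translation, the world point lies on the optical axis at the stereo depth $f_x b / d$, as expected.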
$X$ works as a proxy for pitch and $t_x$ while $Y$ works as a proxy for $t_y$ and $Z$ works as a proxy for pitch and $t_z$.
\textbf{Camera Projection Loss}: When a single architecture is trained to predict parameters with different magnitudes, special care must be taken to weigh the loss components so that the estimation of certain parameters does not dominate the learning process.
We note that for the case of camera calibration, instead of optimizing the camera parameters separately, a single metric based on the 2D to 3D projection of points can be used.
Given two images with known parameters $\omega=(f_x, f_y, u_0,\\ v_0, b, d, \theta_p, t_x, t_y, t_z, X, Y, Z)$ and a prediction of such parameters
given by the network $\hat{\omega}$=($f_x^{'}, f_y^{'}, u_0^{'}, v_0^{'}$, $b^{'}$, $d^{'}$, $\theta_p^{'}$, $t_x^{'}$, $t_y^{'}$, $t_z^{'}$, $X^{'}$, $Y^{'}$, $Z^{'}$), we get the projected point in world coordinate system through Eq.~\ref{eq-4a} - Eq.~\ref{eq-5c}. Loss is computed between actual $\omega$ and predicted $\hat{\omega}$ using:
\begin{equation}
\label{eq-6}
L({\omega}, \hat{\omega}) = \frac{1}{n}\sum_{i=1}^{n}\left|{\omega}_i - \hat{\omega}_i\right|
\end{equation}
\textbf{Separating sources of loss errors}: The proposed loss solves the task balancing problem by expressing different errors in terms of a single measure. However, using several camera parameters to predict the 3D points introduces a new problem during learning: the deviation of a point from its ideal projection can be attributed to more than one parameter. In other words, an error from one parameter can backpropagate through the camera projection loss to other parameters.
To avoid this problem, we disentangle the camera projection loss, evaluating it individually for each parameter similar to~\cite{lopez2019deep}:
\begin{align*}
\begin{split}
L_{f_x} &= L((f_x, f_y^{GT}, u_0^{GT}, v_0^{GT}, b^{GT}, d^{GT}, \theta_p^{GT},\\ &t_x^{GT}, t_y^{GT}, t_z^{GT}, X^{GT}, Y^{GT}, Z^{GT}),\omega) \\
L_{f_y} &= L((f_x^{GT}, f_y, u_0^{GT}, v_0^{GT}, b^{GT}, d^{GT}, \theta_p^{GT},\\ &t_x^{GT}, t_y^{GT}, t_z^{GT}, X^{GT}, Y^{GT}, Z^{GT}),\omega) \\
\ldots \\
L_{Z} &= L((f_x^{GT}, f_y^{GT}, u_0^{GT}, v_0^{GT}, b^{GT}, d^{GT}, \theta_p^{GT},\\ &t_x^{GT}, t_y^{GT}, t_z^{GT}, X^{GT}, Y^{GT}, Z),\omega)
\end{split}
\end{align*}
\begin{align}
\label{eq-7}
\begin{split}
L^{*} &= \frac{L_{f_x} + L_{f_y} + L_{u_0} + ... + L_{Z}}{13}
\end{split}
\end{align}
The loss function is further normalized to avoid unnecessary bias due to one or more error terms by introducing a weight $\alpha_i$ for each of the parameters. This bias is introduced by the heterogeneous ranges of the various parameters. The weights $\alpha_i$ are learned adaptively during the training process. The updated loss function is defined as:
\begin{align}
\label{eq-9}
\begin{split}
L^{*} &= \alpha_{f_x}L_{f_x} + \alpha_{f_y}L_{f_y} + \alpha_{u_0}L_{u_0} + ... + \alpha_{Z}L_{Z}
\end{split}
\end{align}
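The disentangling scheme above can be sketched for the ten camera parameters as follows. This is a simplified stand-in, not the paper's implementation: uniform $\alpha_i$ replace the adaptively learned weights, a single fixed example pixel replaces the batch of image points, and all names are ours.

```python
import math

def project(params, uv=(320.0, 240.0)):
    """Back-project the pixel uv given the ten parameters
    (f_x, f_y, u_0, v_0, b, d, theta_p, t_x, t_y, t_z)."""
    f_x, f_y, u_0, v_0, b, d, theta_p, t_x, t_y, t_z = params
    u, v = uv
    x_cam = f_x * b / d
    y_cam = -(x_cam / f_x) * (u - u_0)
    z_cam = (x_cam / f_y) * (v_0 - v)
    return (x_cam * math.cos(theta_p) + z_cam * math.sin(theta_p) + t_x,
            y_cam + t_y,
            -x_cam * math.sin(theta_p) + z_cam * math.cos(theta_p) + t_z)

def disentangled_cpl(pred, gt, alphas=None):
    """Substitute one predicted parameter at a time into the ground-truth
    vector before projecting, so each error term is attributable to a
    single parameter; sum the weighted per-parameter projection MAEs."""
    n = len(gt)
    alphas = [1.0 / n] * n if alphas is None else alphas
    gt_world = project(gt)
    loss = 0.0
    for i in range(n):
        mixed = list(gt)
        mixed[i] = pred[i]  # isolate the error contributed by parameter i
        pw = project(mixed)
        loss += alphas[i] * sum(abs(p - g) for p, g in zip(pw, gt_world)) / 3.0
    return loss
```

With this construction, perturbing a single parameter changes exactly one loss term, so its gradient cannot leak into the other regressors through the shared projection.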
\begin{figure}
\centering
\begin{tabular}{cc}
{\includegraphics[width=0.35\columnwidth]{Carla/Town1/LeftRGB_00000.png}} &
{\includegraphics[width=0.35\columnwidth]{Carla/Town2/LeftRGB_00000.png}} \\
(a) & (b) \\
{\includegraphics[width=0.35\columnwidth]{tsinghuaDaimlerDataset_2014-12-04_082614_000018089_leftImg8bit.png}} &
{\includegraphics[width=0.35\columnwidth]{tsinghuaDaimlerDataset_2014-12-04_082614_000018404_leftImg8bit.png}}\\
(c) & (d)\\
\end{tabular}
\caption{Some representative images from the synthetic and real datasets. (a) Town 1 - CARLA (b) Town 2 - CARLA (c-d) Tsinghua-Daimler.}
\label{fig:foobar}
\end{figure}
\section{Results and Evaluation}
\begin{table*}[t]
\caption{Normalized MAE in predicted parameters on the CVGL test set comprising 23,796 images.}
\centering
{
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{ l|c|c|c|c|c|c|c|c|c|c }
\hline
& $f_x$ & $f_y$ & $u_0$ & $v_0$ & $b$ & $d$ & $t_x$ & $t_y$ & $t_z$ & $\theta_p$\\
\hline
Average~\cite{workman2015deepfocal} & 0.840 & 0.786 & 0.432 & 0.542 & 6.552 & 3.607 & 6.552 & 9.372 & 5.361 & 0.744\\
Deep-Homo~\cite{detone2016deep} & 0.062 & 0.062 & 0.008 & 0.008 & 0.156 & 0.065 & 0.156 & 0.161 & 0.155 & 0.045\\
MTL-CPL-U (Ours) & 0.935 & 0.685 & 0.892 & 0.737 & 0.938 & 0.423 & 0.400 & 0.329 & 0.432 & 1.060\\
MTL-Baseline (Ours) & 0.030 & 0.029 & 0.017 & 0.007 & \textbf{0.057} & 0.013 & \textbf{0.064} & \textbf{0.076} & \textbf{0.071} & 0.024\\
MTL-CPL-A (Ours) & \textbf{0.022} & \textbf{0.022} & \textbf{0.004} & \textbf{0.006} & 0.093 & \textbf{0.007} & 0.097 & 0.116 & 0.098 & \textbf{0.017}\\
\hline
\end{tabular}
\end{adjustbox}
\label{table:1}
}
\end{table*}
\begin{table*}[t]
\caption{Normalized MAE in predicted parameters on the Tsinghua-Daimler test set comprising 2,914 images. For this experiment, we performed only a forward pass, without any transfer learning or training.}
\centering
{
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{ l|c|c|c|c|c|c|c|c|c|c }
\hline
& $f_x$ & $f_y$ & $u_0$ & $v_0$ & $b$ & $d$ & $t_x$ & $t_y$ & $t_z$ & $\theta_p$\\
\hline
Average~\cite{workman2015deepfocal} & 0.994 & 0.991 & 0.969 & 0.951 & 112.438 & \textbf{0.492} & 10.843 & 271.935 & 13.798 & \textbf{982.413}\\
Deep-Homo~\cite{detone2016deep} & 0.958 & 0.958 & 0.946 & 0.895 & 9.985 & 1.233 & 0.166 & 27.141 & 0.862 & 2746.994\\
MTL-CPL-U (Ours) & \textbf{0.872} & \textbf{0.888} & \textbf{0.782} & \textbf{0.795} & \textbf{0.081} & 1.271 & \textbf{0.147} & \textbf{23.836} & \textbf{0.635} & 7700.968\\
MTL-Baseline (Ours) & 0.957 & 0.958 & 0.944 & 0.893 & 18.323 & 1.258 & 1.035 & 32.946 & 0.999 & 2418.250\\
MTL-CPL-A (Ours) & 0.938 & 0.938 & 0.946 & 0.895 & 14.182 & 1.259 & 0.727 & 30.640 & 1.418 & 1995.353\\
\hline
\end{tabular}
\end{adjustbox}
\label{table:2}
}
\end{table*}
\subsection{Datasets}
\textbf{Synthetic Data}: We trained and evaluated our proposed approach by generating a new CVGL Camera Calibration dataset using Town 1 and Town 2 of CARLA~\cite{Dosovitskiy17} Simulator.
The dataset consists of $49$ camera configurations, with Town 1 having $25$ configurations and Town 2 having $24$. The parameters modified for generating the configurations include $fov$, $x$, $y$, $z$, pitch, yaw, and roll. Here, $fov$ is the field of view, ($x$, $y$, $z$) is the translation and (pitch, yaw, roll) is the rotation between the cameras. The total number of image pairs is $79,320$, of which $18,083$ belong to Town 1 and $61,237$ to Town 2; the difference in the number of images is due to the lengths of the tracks.
\textbf{Real Data}: We have used a recent Cyclist Detection dataset~\cite{li2016new} for evaluating our approach on real world data.
We used the test set provided by the authors, containing $2,914$ images, by first deriving the right image from the left and disparity images and then using the pair as input to compare the different methods.
\textbf{Implementation Details}: Our loss is implemented and trained using Keras~\cite{ketkar2017introduction}, an open-source deep learning framework. All networks are trained on a GeForce GTX $1050$ Ti GPU for $200$ epochs with early stopping, using the ADAM optimizer~\cite{KingmaADAM2015} with a Mean Absolute Error (MAE) loss function, a base learning rate $\eta$ of ${10}^{-3}$, and a batch size of $16$.
The Lambda layer exists in Keras so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models. In the proposed architecture, Lambda layers have been utilized for basic operations including addition, subtraction, multiplication, division, negation, cosine and sine. The intuition is to incorporate mathematical equations in the learning framework.
We have used Normalized Mean Absolute Error for evaluation as follows:
\begin{equation}
\label{eq-60}
NMAE(y, \hat{y}) = \frac{MAE(y, \hat{y})}{\frac{1}{n}\sum_{i=1}^{n}|y_{i}|} = \frac{MAE(y, \hat{y})}{mean(|y|)}
\end{equation}
where $y$ and $\hat{y}$ are the target and estimated values respectively.
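The metric is straightforward to implement; a short sketch (function name ours):

```python
def nmae(y_true, y_pred):
    """Normalized MAE: the mean absolute error divided by the mean
    absolute target value, so that parameters with heterogeneous
    ranges can be compared on a common scale."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return mae / (sum(abs(t) for t in y_true) / n)
```

For example, `nmae([1, 2, 3], [1, 2, 6])` gives 0.5: an MAE of 1 against a mean absolute target value of 2.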
\subsection{Comparative Analysis}
\textbf{Experimental Setup}: We compared our proposed method with two state-of-the-art approaches, namely Average field of view~\cite{workman2015deepfocal} and Deep-Homo~\cite{detone2016deep}. Average field of view~\cite{workman2015deepfocal} is a baseline approach: given a query image, it uses the average field of view of the training set as the prediction. Deep-Homo~\cite{detone2016deep} estimates an 8-degree-of-freedom homography between two images. We modified Deep-Homo~\cite{detone2016deep} to predict the required 13 parameters for comparison purposes, as by default it only predicts 8 values corresponding to the four corners (the 4-point parameterization), which are then converted into the homography matrix. For the purpose of the ablative study, we also created three variants of our multi-task learning approach, namely MTL-Baseline, MTL-CPL-U, and MTL-CPL-A. MTL-Baseline does not include any additional layers to incorporate camera model equations; instead, it is an end-to-end learning architecture based on mean absolute error (MAE). It has 13 regressors sharing a common feature extractor, directly predicting the required values. MTL-Baseline is implemented to study the effect of the proposed camera projection loss. We also used two variants of the camera projection loss, one with uniform weighting (MTL-CPL-U) in the loss function and the other with adaptive weighting (MTL-CPL-A) to balance the heterogeneous ranges of the calibration parameters.
\textbf{Error Analysis on generated data}: We compare the normalized mean absolute error (NMAE) of each of the parameters for all the methods with our proposed approach. It can be seen from Table~\ref{table:1} that for baseline ($b$) and translation ($t_x, t_y, t_z$) the MTL-Baseline approach resulted in the minimum NMAE values, while for all other parameters MTL-CPL-A performs better, due to the bias in the loss introduced by the heterogeneous ranges of values among parameters. For all parameters, MTL-based methods have the lowest NMAE values. This indicates that incorporating camera model geometry in the learning framework not only yields a more interpretable model but also outperforms the state-of-the-art methods.
\textbf{Error Analysis on real data}: For this experiment, we did not train on the Tsinghua-Daimler dataset but only performed a forward pass to test generalizability, and the results further strengthen our argument. It can be seen from Table~\ref{table:2} that our proposed multi-task learning approach, MTL-CPL-U, outperforms the other methods on $8$ out of $10$ parameters. For disparity ($d$) and pitch ($\theta_p$), MTL-CPL-U resulted in higher NMAE values due to the bias in the loss introduced by the heterogeneous ranges of values among parameters. For all the remaining parameters our proposed multi-task learning approach resulted in the minimum NMAE values, which further supports our argument for incorporating camera model geometry in the learning framework.
\section{Conclusion}
Our proposed approach, an amalgam of deep learning and closed-form analytical solutions, outperforms several baselines, including CNN-based methods, on both synthetic and real data.
Thus this paper offers a significantly new idea in the areas of machine learning and computer vision. Although we have applied the proposed solution to the estimation of camera calibration parameters, it offers a general framework for many similar problems, for example tracking using a Kalman filter, homography estimation, and many other applications we plan to explore in future work.
\bibliographystyle{IEEEbib}
\section{Abstract}
We report on the development of new code to support the beam waveguide antenna mount types in AIPS, which will allow polarisation analysis
of observations made using these antennas.
Beam wave-guide antennas in VLBI are commonly communication antennas that have been repurposed (e.g. Warkworth, Yamaguchi).
The mount type affects the differential phase between the left and the
right hand circular polarisations (LHC and RHC) for different points
on the sky.
We demonstrate that the corrections for the Warkworth beam wave guide antenna can be applied.
\section{Telescope Mounts}
Mount types are a combination of the focus position and the drive
type. In Radio Astronomy there are six more or less commonly used
focus positions, and three drive types.
\subsection{Focus positions}
Radio telescopes use a much more limited set of focus positions compared to optical telescopes.
Figure \ref{fig:mounts} indicates where the focii are.
Here we list them, along with examples of the codes for telescopes which use them (in brackets).
Images of these are
shown in figure 1 of eLBA memo 9.
\begin{itemize}
\item Prime focus (PKS/MED). Focus at the site of the secondary mirror
(sub-reflector).
\item Cassegrain focus (VLBA/most). Focus after the secondary
(hyperboloid) mirror.
\item Gregorian focus (EFF). Prime Focus before the secondary
(ellipsoid) mirror, secondary focus after the secondary mirror.
\item Folded Cassegrain focus (CED). Focus bolted to the elevation axis,
after the tertiary mirror.
\item Reduced Nasmyth focus (JCMT). Focus on the elevation axis, but
bolted to the azimuth floor, after the tertiary mirror.
\item Full Nasmyth focus (PV/YEB40). Focus bolted to the azimuth axis
floor, after the fourth mirror.
\item Beam Wave Guide focus (WARK/YAMA). Focus bolted to the floor of the antenna support structure, after the fifth (or more) mirror.
\end{itemize}
\begin{figure}[htb]
\begin{center}
\epsfig{file=mounts3.eps,width=10cm}
\caption{Mount types, with the minimum number of mirrors (fixed or rotating with respect to the prime focus), for Prime, Cassegrain, Folded and reduced Nasmyth, Full Nasmyth and Beam Wave Guide. The first three (for an Alt-Az antenna) follow the Cassegrain feed angle rotation, the reduced and full Nasmyth introduce a dependence on $\pm$Elevation and the Beam Wave Guide introduces a dependence on $\pm$Azimuth.}
\label{fig:mounts}
\end{center}
\end{figure}
Fixed mirrors after the fourth, in a Nasmyth system, swap the mount type
between the equivalent of a reduced and a full Nasmyth, so that any
system with an odd number of mirrors is a `reduced' and an even number
of mirrors is equivalent to a `full' Nasmyth system in our context. In
a similar fashion the expressions for the folded Cassegrain and prime
focus solutions are essentially the same, being either one or three
reflections from the on-sky view.
The difference between a Cassegrain and a Gregorian image is a
rotation (of $180^\circ$), so these types are degenerate after the absolute
position angle calibration.
Furthermore at the correlator, the `on sky' left or right is combined
with the same to produce the parallel hand output. This is effectively
the same as adding another mirror to the optical chain for those
optics with an odd number of mirrors. For all expected cases,
therefore, we will be correlating either Cassegrain, Nasmyth or Beam WaveGuide foci, combined with the drive type of the antenna.
Nasmyth is Right or Left handed, depending on the orientation of M3. Likewise M5 can be Right or Left handed, but in all cases we have seen they are R where M3 is L, and vice versa.
\subsection{Drive configuration}
There are three drive configurations used in VLBI. This also affects the
rotation of the feeds as seen from the sky.
\begin{itemize}
\item HA-DEC (WST), for Hour Angle -- Declination. Also called
equatorial. It produces no change of feed angle as the source passes
across the sky. As HA-DEC mounts require an asymmetrical structural
design they are unsuited to the support of very heavy
structures. Their advantage is that motion is required in only
one axis to track an object as Earth rotates.
\item ALT-AZ (VLBA/most), for Altitude -- Azimuth. Also called
Az-El. It rotates the telescope pointing by the parallactic angle as
the source passes across the sky. It is the most common variety of
Radio telescope mount. It is symmetric so can support very large
antennae, but at the zenith small angular changes on the sky can
require very large angular changes in the azimuth angle. This effect
is called the `keyhole'.
\item EW (HOB), for East -- West. It rotates the telescope by the
co-parallactic angle as the source passes across the sky. An EW mount
places the focus very high, therefore it is prone to flexing and
poor pointing. Their advantage is that they can track across the sky at
high speeds and without gaps (i.e. it does not have a `keyhole' at the
zenith). They are primarily used for tracking Low Earth Orbit
satellites.
\end{itemize}
The parallactic angle, sometimes called the position angle, of an
object is the angle between the celestial pole (north or south
depending on location), the object and the zenith (the point in the
sky directly overhead). It is not to be confused with the similarly
named angle also known as the convergence angle, which is the
difference in the angular direction to an object at a distance, seen
from two points of view, i.e. the parallax. The expression for the
parallactic angle as we use it is:
\[
\chi_p = {\rm arctan}[\frac{{\rm sin}(\Theta)\ {\rm cos}(l)}
{{\rm cos}(\delta){\rm sin}(l) -
{\rm cos}(\Theta){\rm cos}(l){\rm sin}(\delta)}]
\]
The co-parallactic angle is:
\[
\chi_c = {\rm arctan}[\frac{{\rm cos}(\Theta)}
{{\rm sin}(\delta){\rm sin}(\Theta)}]
\]
The Nasmyth angle is almost the same as the parallactic angle, but
with the relative rotation of the fourth mirror included:
\[
\chi_n = \chi_p \pm E
\]
The BWG angle is almost the same as the Nasmyth angle, but
with the relative rotation of the fifth mirror included:
\[
\chi_b = \chi_n \mp A
\]
where $\Theta$ is the hour angle, and $\delta$ the declination, of the
source. $l$ is the latitude of the telescope, $E$ is the
elevation and $A$ is the azimuth.
As all BWGs bring the focus back into the pedestal, the signs of $E$ and $A$ are opposite; in principle this need not be the case.
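The angle chain above can be sketched numerically. All angles are in radians; we use the two-argument arctangent to resolve the quadrant of the quotient (an implementation choice on our part), a `right_handed` flag selects the $\pm$ sign, and the function names are ours:

```python
import math

def parallactic_angle(ha, dec, lat):
    """chi_p for hour angle ha, declination dec and telescope
    latitude lat (all in radians)."""
    return math.atan2(math.sin(ha) * math.cos(lat),
                      math.cos(dec) * math.sin(lat)
                      - math.cos(ha) * math.cos(lat) * math.sin(dec))

def nasmyth_angle(ha, dec, lat, elev, right_handed=True):
    """chi_n = chi_p +/- E, the sign set by the handedness of M3."""
    sign = 1.0 if right_handed else -1.0
    return parallactic_angle(ha, dec, lat) + sign * elev

def bwg_angle(ha, dec, lat, elev, az, right_handed=True):
    """chi_b = chi_n -/+ A: the azimuth term carries the opposite
    sign to the elevation term, as M5 has the opposite handedness to M3."""
    sign = 1.0 if right_handed else -1.0
    return nasmyth_angle(ha, dec, lat, elev, right_handed) - sign * az
```

On the meridian (hour angle zero) the parallactic angle vanishes for a source south of the zenith, so the Nasmyth and BWG corrections reduce to the bare $\pm E$ and $\mp A$ terms.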
Each mirror adds a swap of polarisation from LHC to RHC. Therefore, in
principle, the number of reflections before the horn is very important
in any optical system, as each mirror also swaps the feed angle
polarisation vector on the sky.
However when the detected LHC or RHC polarisations are relabelled to
represent the on-sky value, an effective mirror is added to produce
the parallel or cross hand outputs. This means that only Cassegrain, Nasmyth or BWG models are required.
\begin{footnotesize}
\begin{table}[htb]
\begin{center}
\begin{tabular}{lcr}
&&{Mount Number}\\
Focus and Drive&Label&AIPS\\
\hline
Cassegrain and Alt-Az& ALAZ &0\\
Any and Equatorial & EQUA &1\\
Any and Orbiting& ORBI &2\\
\multicolumn{3}{c}{New mount types}\\
Prime focus and EW & EW-\,- &3\\
Right hand Nasmyth and Alt-Az& NS-R &4\\
Left hand Nasmyth and Alt-Az& NS-L &5\\
Right hand BWG and Alt-Az& BW-R &6\\
Left hand BWG and Alt-Az& BW-L &7\\
\hline
\end{tabular}
\caption{Mount Types}
\end{center}
\end{table}
\end{footnotesize}
\section{The Beam Wave Guide configuration}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{DrawingsReflectorFeed.eps}
\caption{Schematic of the Warkworth 30-m radio telescope optics BWG optics.}
\label{fig:pf1}
\end{figure}
The feed in a beam wave guide system sits on the floor of the pedestal, thus avoiding movement and related issues. The image of the sky has to pass over two rotating mirrors, here assumed to be M3 and M5. Other static mirrors can be and often are introduced, but the only important fact is the rotational behaviour of the mirrors that move.
For example, for a ``folded Cassegrain'' (only Ceduna, to our knowledge) M3 rotates with the elevation axis, so no extra term is introduced and this behaves as a Cassegrain (if the mount is Alt-Az).
Therefore the difference between the instrumental
polarisation angle of a BWG mount and the parallactic angle is plus or minus the elevation angle and minus or plus the azimuthal angle.
This assumes that the BWG directs the image of the sky back to the central axis; if it does not the sign of the Azimuth angle changes.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{V255-Solutions.eps}
\caption{The impact of the R/L phase difference for V255AG, for a number of calibrators, before (Yellow) and after (Green) the application of the mount feed rotation correction, with CLCOR. The antenna mount types are AT, CD, MP, PA all Alt-Az, Hobart EW-mount and Warkworth right handed Beam Wave Guide.}
\label{fig:step1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{V255-SOLS-ZOOM.eps}
\caption{Zoom in on the R/L phase difference for V255AG, after removing a constant R/L phase offset using one scan. Imperfections in the R/L calibration are noticeable. Parkes has an offset (so requires using a different calibration scan) and a ripple, whilst Ceduna and Hobart show slow wandering. This may be due to poor polarisation purity.
\label{fig:step2}}
\end{figure}
\pagebreak
\section*{Acknowledgments}\end{small}}
\newcommand{\mbox{{\raisebox{-0.4ex}{$\stackrel{>}{{\scriptstyle\sim}}$}}}}{\mbox{{\raisebox{-0.4ex}{$\stackrel{>}{{\scriptstyle\sim}}$}}}}
\newcommand{\mbox{{\raisebox{-0.4ex}{$\stackrel{<}{{\scriptstyle\sim}}$}}}}{\mbox{{\raisebox{-0.4ex}{$\stackrel{<}{{\scriptstyle\sim}}$}}}}
\setlength{\topmargin}{-1.5cm}
\title[Should we believe UV-mm SED modelling?]{Should we believe the results of UV-mm galaxy SED modelling?}
\author[C.~C. Hayward and D.~J.~B. Smith]{
\parbox[t]{\textwidth}{
Christopher C. Hayward$^{1,2}$\thanks{E-mail: cchayward@caltech.edu}\thanks{Moore Prize Postdoctoral Scholar in Theoretical Astrophysics}
and Daniel J.~B.~Smith$^3$
}
\vspace*{6pt} \\
$^1$TAPIR 350-17, California Institute of Technology, 1200 E. California Boulevard, Pasadena, CA 91125, USA \\
$^2$Heidelberger Institut f\"ur Theoretische Studien, Schloss--Wolfsbrunnenweg 35, 69118 Heidelberg, Germany \\
$^3$Centre for Astrophysics, Science \& Technology Research Institute, University of Hertfordshire, Hatfield, Herts, AL10 9AB, UK
}
\begin{document}
\date{Submitted to MNRAS}
\pagerange{1--24} \pubyear{2014}
\maketitle
\label{firstpage}
\begin{abstract}
Galaxy spectral energy distribution (SED) modelling is a powerful tool, but constraining how well it is able to infer the true values for galaxy
properties (e.g. the star formation rate, SFR) is difficult because independent determinations are often not available. However, galaxy simulations can provide
a means of testing SED modelling techniques. Here, we present a numerical experiment in which we apply the SED
modelling code \textsc{magphys}\xspace to ultraviolet (UV)--millimetre (mm) synthetic photometry generated from hydrodynamical simulations of an isolated disc galaxy and a major galaxy
merger by performing three-dimensional dust radiative transfer. We compare the properties inferred from the SED modelling with the true
values and find that \textsc{magphys}\xspace recovers
most physical parameters of the simulated galaxies well. In particular, it recovers consistent parameters irrespective of the viewing angle, with smoothly varying results for
neighbouring time steps of the simulation, even though each viewing angle and time step is modelled independently.
The notable exception to this rule occurs when we use an SMC-type intrinsic dust extinction curve
in the radiative transfer calculations. In this case, the two-component dust model used by \textsc{magphys}\xspace is unable to effectively correct for the attenuation of the
simulated galaxies, which leads to potentially significant errors (although we obtain only marginally acceptable fits in this case).
Overall, our results give confidence in the ability of SED modelling to infer physical properties of galaxies, albeit with some caveats.
\end{abstract}
\begin{keywords}
dust, extinction --- galaxies: fundamental parameters --- galaxies: ISM --- galaxies: stellar content --- infrared: galaxies --- radiative transfer.
\end{keywords}
\section{Introduction} \label{S:intro}
A galaxy's spectral energy distribution (SED) encodes much information about the galaxy, including its star formation history (SFH);
its stellar, gas, and metal content; and the physical conditions of its interstellar medium (ISM). The number of galaxies, both local and high-redshift,
with well-sampled ultraviolet (UV) to millimetre (mm) integrated SEDs has increased rapidly in recent years, and much effort is being made to attempt to
extract galaxy properties from these SEDs. Accurately extracting physical properties, such as stellar mass and star formation rate (SFR),
from galaxy SEDs is crucial to answer many open questions in galaxy formation, including the following: what is the SFH of the universe?
How do properties such as the SFR, metallicity, and gas fraction depend on redshift and galaxy mass? What processes
quench star formation in galaxies? Furthermore, knowledge of galaxies' physical properties is often necessary to compare observations with theoretical models
because models typically do not directly predict observables.
The simplest method to determine some property of a galaxy is to use a single photometric data point. For example, if the redshift is known, SFRs are
commonly derived from UV, H$\alpha$, or 24-\micron~photometry \citep{Kennicutt:1998review}, and stellar mass can be derived from a galaxy's
near-infrared (NIR) flux \citep[e.g.][]{Bell:2001}. However, such methods require various simplifying assumptions and can suffer from significant systematics
and degeneracies.
Use of multiple data points simultaneously can yield more information and break some degeneracies, such as that between age and metallicity.
In a technique known as SED modelling or stellar population synthesis\footnote{In this work, we use the more-general term `SED modelling' rather than
`stellar population synthesis' because the former can include additional sources of radiation, such as active galactic nuclei (AGN) and dust, beyond direct
stellar emission.}
(e.g. \citealt{Leitherer:1999,Bolzonella:2000}; \citealt[][hereafter BC03]{Bruzual:2003ck}; \citealt{LeBorgne:2004};
\citealt{Maraston:2005}; \citealt{Burgarella:2005,Cunha:2008cy,Kotulla:2009,Kriek:2009,Noll:2009,Conroy:2010b,Serra:2011};
see \citealt{Walcher:2011} and \citealt{Conroy:2013} for reviews), a galaxy is treated as the sum of its parts:
as input, one must use template SEDs for single-age stellar populations (SSPs), which depend on the age and metallicity of the stellar population,
the stellar initial mass function (IMF), and the stellar libraries used.
By assuming an SFH and metallicity, which may be a function of age, the total SED of the stellar population can be calculated. In addition to
stellar emission, nebular emission lines \citep[e.g.][]{Charlot:2001} and AGN emission \citep[e.g.][]{Burgarella:2005,Noll:2009,Berta:2013} can also be included. Dust attenuation
can be treated using an empirical attenuation curve \citep[e.g.][]{Calzetti:1994im,Calzetti:2000iy,Calzetti:1997hu} or a simple analytic model
\citep*[e.g.][hereafter CF00]{Charlot:2000bd}.
A large set of templates is generated by varying the model parameters, and, in principle, the parameters of the template SED that best fits the
UV--NIR photometry can be used to infer physical properties of the galaxy, including SFR, age, metallicity, and stellar mass.
The above discussion has largely ignored infrared (IR) emission from dust, but this, too, can be used to infer galaxy properties.
Indeed, if IR data are unavailable, it is possible to mistake a heavily obscured, rapidly star-forming galaxy for a passive galaxy \citep[e.g.][]{Carter:2009}.
The simplest way to interpret a galaxy's far-IR (FIR) through mm emission\footnote{For simplicity and in accordance with convention, throughout
this work, we will use the terms `IR' or `MIR--mm' to denote the wavelength range 8--1000 \micron, which accounts for the bulk of the dust emission.}
is to fit one or more modified blackbodies to the SED, and in doing so
infer the IR luminosity (which can be used to estimate the SFR), effective dust temperature(s), and dust mass (see sections 4.2 and 4.3 of \citealt*{Casey2014}).
IR SED models, such as those of \citet{Dale:2002} or \citet{Draine:2007sings}, can also be used.
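As a minimal illustration of the modified-blackbody approach, the sketch below fits a single-temperature grey body $S_\nu \propto \nu^\beta B_\nu(T)$ (with $\beta$ fixed at 2) to mock FIR fluxes; the band set, normalisation, and flux values are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

H, K = 6.626e-34, 1.381e-23   # SI Planck and Boltzmann constants

def modified_blackbody(nu, log_norm, temp, beta=2.0):
    """S_nu proportional to nu**beta * B_nu(T); normalisation in log_norm."""
    b_nu = nu**3 / np.expm1(H * nu / (K * temp))
    return 10.0**log_norm * nu**beta * b_nu

# Mock fluxes in five FIR bands (wavelengths in micron; values invented)
nu = 2.998e8 / (np.array([100.0, 160.0, 250.0, 350.0, 500.0]) * 1e-6)  # Hz
flux = modified_blackbody(nu, -60.0, 30.0)          # noiseless fake "data"

# Fit the log-normalisation and temperature, keeping beta fixed at 2
popt, _ = curve_fit(lambda n, a, t: modified_blackbody(n, a, t),
                    nu, flux, p0=[-60.0, 25.0])
```

With noiseless mock data generated from the same model, the fit recovers the input temperature; with real photometry the inferred $T$ is only an effective dust temperature, for the reasons discussed above.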
Because the UV--NIR and MIR--mm regions of an SED yield complementary information,
it is preferable to use both simultaneously. An example of a simple method to do this is to infer the SFR by using a combination of the UV or H$\alpha$ luminosity
and the IR luminosity to account for both unobscured and obscured star formation (e.g. \citealt{Kennicutt:2007,Kennicutt:2009iv};
\citealt*{Relano:2009}; \citealt{Wuyts:2011a,Wuyts:2011b,Reddy:2012, Lanz:2013}).
A more sophisticated approach is to use all available data when fitting SEDs. One method is to perform radiative transfer calculations
assuming some simple galaxy geometry \citep[e.g.][]{Silva:1998ju,Efstathiou:2000,Granato:2000,Popescu:2000,
Tuffs:2004,Siebenmorgen:2007,Groves:2008,Michalowski:2010masses,Michalowski:2010production}.
Alternatively, one can take a more empirical approach and treat the FIR SED as a sum of modified blackbodies \citep[e.g.][hereafter dC08]{Cunha:2008cy}
or use IR SED templates \citep[e.g.][]{Noll:2009}.
Regardless of the manner in which the IR SED is obtained, the luminosity of the IR SED should be equal to the luminosity absorbed by the dust.
This requirement is necessary for the SED model to be self-consistent and can also enable the model to be more constraining because it
explicitly links the UV--NIR and MIR--mm regions of the SED.
The SED modelling methods described above are very flexible, and they can be applied to large samples of galaxies
\citep[e.g.][]{Kauffmann:2003,Brinchmann:2004,Gallazzi:2005,Salim:2007,Cunha:2010by,Smith:2012}.
However, there are multiple simplifying assumptions and uncertainties inherent in the models \citep[e.g.][]{Conroy:2009ks,Conroy:2010a,Conroy:2010b}, some of which
we will discuss here.
The SEDs depend strongly on the IMF, but the shape of the IMF and whether it is universal are both actively debated (e.g. \citealt{Dave:2008};
\citealt*{vanDokkum:2008,vanDokkum:2010,vanDokkum:2011,Conroy:2012,Hopkins:2013IMF,Narayanan:2012IMF,Narayanan:2013};
\citealt{Hayward:2013number_counts}; see \citealt{Bastian:2010vo} for a review).
The SSP templates are also uncertain; one particular area of disagreement is the treatment of thermally pulsating asymptotic giant branch (TP-AGB) stars
\citep{Maraston:2005,Maraston:2006fn,Kriek:2010wr,Henriques:2011,Zibetti:2013}. SED modelling codes typically assume relatively simple SFHs, and changing the assumed
form can significantly affect the results of the modelling \citep[e.g.][]{Michalowski:2012,Michalowski2014}.
Furthermore, the models must necessarily assume simple geometries. In many cases, the dust is treated as a foreground screen or mixed slab.
Some models include somewhat more complicated geometries in that they allow the young stars and older stars to be attenuated by different amounts
(e.g. \citealt{Silva:1998ju}; CF00; dC08; \citealt{Groves:2008}), but this geometry is still only a crude approximation to reality.
Spatial variation in metallicity is typically not accounted for (except perhaps indirectly in the form of an age dependence), and uncertain dust composition can significantly
affect the amount and wavelength dependence of the attenuation, the shape of the dust emission SED, and the inferred dust mass. Finally, treating the FIR emission
as one or more modified blackbodies may be problematic if one's goal is to infer physical quantities rather
than simply describe the SED \citep[e.g.][]{Shetty:2009a,Shetty:2009b,Hayward:2012smg_bimodality,Kelly:2012,Smith:2013},
but more sophisticated models, such as those of \citet{Dale:2002} and \citet{Draine:2007sings}, may be able to yield physical insight.
Given the complexity of and assumptions inherent in SED modelling, it is desirable to compare the results of different methods and, if possible, to test
how well the methods can recover the galaxy properties that they intend to recover.
There have been extensive efforts to `internally validate' SED modelling methods, i.e. search for systematics and uncertainties that are inherent in the methods
using either a sample of synthetic SEDs constructed using the same assumptions that are inherent in the SED modelling codes
or samples of real galaxies (dC08; \citealt{Walcher:2008,
Giovannoli:2011,Boquien:2012,Buat:2012,Buat2014,Smith:2012}; see sections 4.4.4 and 4.5.3 of \citealt{Walcher:2011} for an
extensive discussion). `External validation' of the quantities recovered by SED modelling is possible for only a few quantities, such as the SFR and mass-to-light ratio; however,
when there is a discrepancy between e.g. the SFR inferred from the FIR luminosity and SED modelling, it is not clear \textit{a priori} which SFR value is more
accurate \citep[e.g.][]{Hayward2014,Utomo2014}.
Fortunately, it is possible to test some (but definitely not all) of the assumptions inherent in SED modelling by applying SED modelling to synthetic SEDs generated
from semi-analytical models (e.g. \citealt{Lee:2009}; \citealt{Trager:2009}; \citealt*{Pforr:2012,Pforr:2013}; \citealt{Mitchell2013})
or simulations (e.g. \citealt{Wuyts:2009kr,Lanz2014,Michalowski2014}; Torrey et al., submitted) as a type of
controlled numerical experiment. \citet{Wuyts:2009kr} were the first to perform such tests using hydrodynamical simulations.
They were able to investigate discrepancies caused by mismatches between the true SFH in the simulations and that assumed,
different amounts of attenuation for stars of different ages and AGN, metallicity variations, and AGN contamination unaccounted for in the SED modelling.
However, their method for calculating the photometry was relatively simple: first, it is not clear that the \citet{Calzetti:2000iy} attenuation curve, which is
an empirically derived attenuation curve that is meant to be applied to integrated galaxy SEDs of starburst galaxies, should be applied to attenuate
individual lines-of-sight within galaxies. Furthermore, because \citeauthor{Wuyts:2009kr} did not perform radiative transfer, they could not investigate the
effects of scattering. Additionally, they did not account for the obscuration of young stellar clusters on sub-resolution scales. Finally,
because \citeauthor{Wuyts:2009kr} did not calculate dust re-emission, they restricted their SED modelling to synthetic optical--NIR photometry. As we explain
below, we avoid these limitations by performing dust radiative transfer on hydrodynamical simulations.
Now that radiative transfer is routinely applied to three-dimensional (3-D) hydrodynamical simulations (e.g. \citealt{Jonsson:2006}; \citealt*{Jonsson:2010sunrise};
\citealt{Wuyts:2009b,Wuyts:2010,Bush:2010,Narayanan:2010dog,Narayanan:2010smg,Hayward:2011smg_selection,Hayward:2012smg_bimodality,
Hayward:2013number_counts,Hayward2014,Snyder:2011,Snyder:2013})
and increasingly sophisticated SED modelling is applied to observed galaxies (e.g. dC08; \citealt{Cunha:2010by,daCunha:2010b,Buat:2012,Smith:2012,Lanz:2013}),
it is appropriate to revisit the work of \citet{Wuyts:2009kr}; we do so here by performing radiative transfer on hydrodynamical simulations
of a disc galaxy and a galaxy major merger and applying the SED modelling method of dC08, \textsc{magphys}\xspace, to the synthetic photometry. Because \textsc{magphys}\xspace is now very
commonly used \citep[e.g.][]{Cunha:2010by,daCunha:2010b,Wijesinghe:2011,Rowlands:2012,Rowlands2014,Rowlands2014b,Smith:2012,Banerji:2013,Berta:2013,Fu:2013,
Gruppioni:2013,Ivison:2013,Lanz:2013,Bitsakis:2014,Delvecchio:2014,Presotto:2014,Toft:2014}, this work should be of great relevance to many researchers.
This approach is critical for our ability to interpret the results of SED fitting. In addition to the aforementioned, our approach has several further advantages over previous attempts
to validate SED modelling methods. For example, because each line of sight to a galaxy is uniquely affected by dust, it is possible that our ability to infer its properties is viewing-angle dependent.
It is difficult to address this issue using real galaxies, although dC08 attempted to do so statistically by using the ratio of the projected major and minor axes as a crude
proxy for viewing angle; this test revealed no evidence for bias in the averaged values of different \textsc{magphys}\xspace parameters across a sample of 1658
Infrared Astronomical Satellite \citep[IRAS;][]{Neugebauer:1984}-selected galaxies. In contrast, our
approach to generating emergent SEDs at different viewing angles for the same temporal snapshot enables us to address this issue directly. Second, it is highly likely that the extent
of our ability to infer the properties of a galaxy depends on the evolutionary stage of that galaxy (e.g. the length of time since the most recent burst of star formation); this
method of external validation enables us to quantify this effect over timescales in excess of a gigayear, a task that is clearly impossible using real galaxies.
One concern with this method of validation is whether the simulations resemble real galaxies well enough that the test is relevant. This concern is one
reason that we utilise idealised simulations in which the progenitor galaxies are constructed `by hand', as our goal is not to form galaxies \emph{ab initio} but
rather to perform a controlled numerical experiment on simulated galaxies with reasonable properties. This approach helps to ensure that the properties
of the simulated galaxies (e.g. the radial and vertical profiles of the discs, gas fraction, and metallicity) are similar to those of real galaxies.
Furthermore, other works that used the same hydrodynamical and radiative transfer codes and similar initial conditions have demonstrated that the SEDs
agree with those of real galaxies: \citet{Jonsson:2010sunrise} found that for a variety of colour-colour plots spanning the UV through submm, the simulated discs
typically occupied regions that are also occupied by real galaxies from the SIRTF Nearby Galaxies Survey \citep[SINGS;][]{Kennicutt:2003,Dale:2007} sample (but the
full variation in real galaxies' colours was not captured by the simulations, likely because of the limited parameter space spanned by the simulations and because no early-type
or interacting galaxies were simulated). \citet{Lanz2014} used a library of $\sim$12 000 synthetic SEDs of simulated isolated and interacting disc galaxies to fit
the UV--FIR SEDs of a subset of isolated and interacting galaxies \citep[originally presented in][]{Lanz:2013} from the \emph{Spitzer} Interacting Galaxies Survey
(N. Brassington et al., in preparation). They found that most of the real galaxy SEDs were reasonably well-fit by one or more of the simulated galaxy SEDs.\footnote{\citeauthor{Lanz2014} compared some of the physical properties inferred from the observed SEDs using \textsc{magphys}\xspace with the properties of the corresponding
best-fitting simulated SEDs. However, they did not directly apply \textsc{magphys}\xspace to the simulated SEDs and thus only validated \textsc{magphys}\xspace in an indirect manner. Instead,
the focus of \citet{Lanz2014} was a comparison of observed interacting galaxy SEDs with SEDs predicted from simulations.
Here, we present a more direct and more detailed investigation of the ability of \textsc{magphys}\xspace to recover the properties of simulated galaxy SEDs.}
Similarly, the simulated high-redshift galaxy SEDs of \citet{Hayward:2011smg_selection,Hayward:2012smg_bimodality,Hayward:2013number_counts}
provide acceptable fits to the SEDs of 24-\micron-selected starbursts and AGN (Roebuck et al., in preparation). Thus, we are confident that the simulations
are sufficiently reasonable for the purposes of this work.
The remainder of this paper is organised as follows: in Section \ref{S:methods}, we describe the combination of hydrodynamical simulations and dust radiative transfer used
to create the synthetic photometry and the SED modelling code of dC08, \textsc{magphys}\xspace. Section \ref{S:example_fit} presents an example \textsc{magphys}\xspace fit to a synthetic SED.
Sections \ref{S:isolated_disc} and \ref{S:merger} discuss the results of applying \textsc{magphys}\xspace to the SEDs calculated for the isolated disc and galaxy merger simulations,
respectively, using the default \textsc{sunrise}\xspace parameters. Section \ref{S:systematic_uncertainties} investigates the influence of potential sources of systematic error,
such as the treatment of dust attenuation, in the SED modelling procedure. In Section \ref{S:discussion}, we discuss some implications of our results.
Section \ref{S:conclusions} presents our conclusions.
\section{Methods} \label{S:methods}
To investigate the effectiveness of SED modelling, we first generated synthetic UV--mm SEDs by performing dust radiative transfer on hydrodynamical
simulations of an isolated disc galaxy and a major galaxy merger. We then applied \textsc{magphys}\xspace to the synthetic photometry and compared the
physical parameter values inferred by \textsc{magphys}\xspace with the true values for the simulated galaxies.
Note that the comparison was performed in a blind fashion; CCH generated the synthetic photometry and provided it to DJBS without the
corresponding physical parameter values. Then, DJBS fit the synthetic photometry and provided the inferred parameter values to CCH for comparison.
No modifications to the simulations or SED modelling procedure were made after this comparison was performed, and each snapshot was modelled
independently (i.e. \textsc{magphys}\xspace did not `know' that different viewing angles correspond to the same galaxy or that successive snapshots are in any way related).
Now, we present the key details of our method for calculating SEDs from hydrodynamical simulations and the SED modelling code \textsc{magphys}\xspace.
\subsection{Calculating SEDs of simulated galaxies} \label{S:sims}
This work uses a combination of 3-D \textsc{gadget-3}\xspace \citep{Springel:2001gadget,Springel:2005gadget} smoothed-particle
hydrodynamics\footnote{Recently, various authors \citep[e.g.][]{Agertz:2007,Springel:2010arepo,Bauer:2012,Keres:2012,Sijacki:2012,Vogelsberger:2012}
have highlighted issues with the standard formulation
of SPH that may cause the results of simulations performed using the technique to be inaccurate. However, for the type of idealised simulations
used in this work, the standard form of SPH yields results that are very similar to those of the more-accurate moving-mesh hydrodynamics
technique \citep{Hayward2014arepo}.}
galaxy simulations and the {\sc Sunrise}\footnote{\textsc{sunrise}\xspace is publicly available at \url{http://code.google.com/p/sunrise/}.}
\citep{Jonsson:2006sunrise,Jonsson:2010sunrise} Monte Carlo dust radiative transfer code to calculate synthetic SEDs of the simulated galaxies.
The methods have been described in detail elsewhere \citep[e.g.][]{Jonsson:2006,Jonsson:2010sunrise,Hayward:2011smg_selection,
Hayward:2012smg_bimodality}, so we only summarise them briefly here.
The \textsc{gadget-3}\xspace simulations include star formation following a volume-density-dependent Kennicutt-Schmidt relation \citep{Schmidt:1959,Kennicutt:1998} with
a low-density cutoff, a sub-resolution prescription for the multiphase ISM \citep[which implicitly includes supernova feedback;][]{Springel:2003},
and a model for black hole accretion and thermal AGN feedback \citep{Springel:2005feedback}. The current work utilises two \textsc{gadget-3}\xspace simulations.
One is a simulation of an isolated disc galaxy, the \textsf{vc3} model of \citet{Cox:2006}. The initial conditions consist of a dark matter halo and a rotationally supported
exponential disc of gas and stars. The dark matter halo has a \citet{Hernquist:1990} profile with an effective concentration of 9, spin parameter $\lambda = 0.033$,
and circular velocity $V_{200} = 160$ km s$^{-1}$. The exponential disc has a radial scale length of 3.9 kpc, a total mass of $5.6 \times 10^{10} ~\mathrm{M_{\odot}}$, and
an initial gas fraction of 40 per cent. The second simulation is the \textsf{vc3vc3e} model of \citet{Cox:2006}, which is a merger of two of the previously described disc
galaxies. For this simulation, the two disc galaxies were initialised on a parabolic orbit with a pericentric passage distance of 5 kpc and an initial separation of 100 kpc.
The two discs were initially oriented such that their spin axes are specified by the spherical coordinates $(\theta, \phi) = (30^{\circ}, 60^{\circ})$ and $(-30^{\circ},45^{\circ})$
(the `e' orbit of \citealt{Cox:2006}). The masses and gravitational softening lengths for the baryonic (dark matter) particles are $3.9 \times 10^5 ~\mathrm{M_{\odot}}$ and 100 pc
($7.6 \times 10^6 ~\mathrm{M_{\odot}}$ and 200 pc), respectively. Please see \citet{Cox:2006} for further details of the \textsc{gadget-3}\xspace simulations.
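The volume-density-dependent star formation law with a low-density cutoff can be written schematically as below; the exponent, threshold, and efficiency are illustrative assumptions, not the actual \textsc{gadget-3}\xspace parameters:

```python
def sfr_density(rho, rho_thresh=1.0e-24, index=1.5, efficiency=1.0e-4):
    """Schematic volume-density star formation law with a low-density cutoff.

    rho_SFR is proportional to rho**index above rho_thresh and zero below it.
    The exponent, threshold, and efficiency here are illustrative only,
    not the values used in the simulations.
    """
    return efficiency * rho**index if rho >= rho_thresh else 0.0
```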
At 10-Myr intervals, we saved snapshots of the \textsc{gadget-3}\xspace simulations and processed them with \textsc{sunrise}\xspace, which calculates the emission from the star and black hole
particles present in the \textsc{gadget-3}\xspace simulations, propagates the emission through the dusty ISM, and calculates the IR re-emission from dust.
The default \textsc{sunrise}\xspace assumptions and parameters used in this work are identical to those used by \citet{Jonsson:2010sunrise}, except that we include
AGN emission as first introduced in \citet{Younger:2009}.
Star particles with ages $>10$ Myr were assigned {\sc starburst99} \citep[SB99;][]{Leitherer:1999,Vazquez:2005} SSP SED templates according to their
ages and metallicities. Younger star particles were assigned templates from \citeauthor{Groves:2008} (\citeyear{Groves:2008}; see also \citealt{Dopita:2005,
Dopita:2006b,Dopita:2006c}), which include emission from the HII and photodissociation
regions that surround young star clusters. The black hole particles were assigned luminosity-dependent templates from \citet{Hopkins:2007}, which are based on
observations of un-reddened quasars. Because the luminosity of the black hole particle(s) is determined self-consistently from the accretion rate in the
\textsc{gadget-3}\xspace simulations, the AGN contribution varies significantly with time; see Section \ref{S:AGN} for details.
The dust density distribution was
calculated by projecting the \textsc{gadget-3}\xspace metal density onto a 3-D adaptive mesh refinement grid and assuming a dust-to-metal density ratio of 0.4
\citep{Dwek:1998,James:2002}. \textsc{sunrise}\xspace calculates dust absorption and scattering using a Monte Carlo method.
Our default dust model is the Milky Way (MW) $R_V=3.1$ model of \citet{Weingartner:2001} as updated by \citet{Draine:2007kk}.
The energy absorbed by the dust
is re-emitted in the IR. \textsc{sunrise}\xspace calculates the emission assuming the dust is in thermal equilibrium (except for half of the PAHs with grain size $<100$ \AA;
see \citealt{Jonsson:2010sunrise} for details). To do so, the code determines the thermal equilibrium dust temperature for each grid cell
and grain species by solving the following equation \citep[e.g.][]{Misselt:2001,Jonsson:2010GPU}:
\begin{equation} \label{eq:dust_T}
\int \sigma_j(\lambda) B(\lambda,T_{ij}) d\lambda = \int I_i(\lambda) \sigma_j(\lambda) d\lambda,
\end{equation}
where $\sigma_j$ is the dust absorption cross section for grain species\footnote{A grain `species' refers to grains of a single size and composition.}
$j$, $I_i(\lambda)$ is the local radiation field intensity in the $i$th grid cell\footnote{The
local radiation field includes contributions both from the attenuated emission from stars and AGN and IR emission that is re-radiated by dust; the latter
source can be significant in e.g. the nuclear regions of starbursts, in which the optical depths can be extremely high.},
$T_{ij}$ is the equilibrium temperature of grain species $j$ in the $i$th grid cell\footnote{Note that in principle, in a given radiative transfer calculation, there can be
$i \times j$ distinct dust temperatures. In the simulations presented in this work, $i$ can be as large as $\sim 10^6$ and $j = 220$; thus, the total number
of distinct dust temperatures in a single radiative transfer calculation can be $\sim 10^8$.}, and
$B(\lambda,T_{ij})$ is the Planck function. Because $I_i(\lambda)$ includes a contribution from dust emission, equation (\ref{eq:dust_T}) must be solved iteratively.
Once the equilibrium dust temperatures are determined, the total SED emitted by dust in grid cell $i$ is calculated using
\begin{equation} \label{eq:dust_SED}
L_{\lambda,i} = 4 \pi \sum_j \sigma_j(\lambda) B(\lambda,T_{ij}),
\end{equation}
and a final radiative transfer step is performed to calculate spatially resolved dust emission SEDs for each viewing angle.
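As an illustration, the thermal-equilibrium condition of equation (\ref{eq:dust_T}) can be solved numerically for a single grain species in a single cell; the grey (wavelength-independent) cross section, wavelength grid, and temperature bracket below are toy inputs, not the \textsc{sunrise}\xspace values:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_lambda(lam, temp):
    """B_lambda(T) in SI units; lam in metres."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * K * temp))

def equilibrium_temperature(lam, sigma, intensity):
    """Solve the absorption-emission balance for one grain species/cell.

    lam: wavelength grid [m]; sigma: absorption cross section on that grid;
    intensity: local radiation-field intensity on that grid (toy inputs).
    """
    absorbed = trapezoid(intensity * sigma, lam)
    balance = lambda t: trapezoid(sigma * planck_lambda(lam, t), lam) - absorbed
    return brentq(balance, 2.7, 2000.0)   # bracket: CMB to grain sublimation
```

For a grey grain bathed in a $T = 50$ K blackbody field, the solver returns 50 K, as it must; with a realistic $\sigma_j(\lambda)$ the equilibrium temperature differs from the colour temperature of the field.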
\ctable[
caption = {Viewing angles (i.e. camera positions) used in the \textsc{sunrise}\xspace radiative transfer calculations.\label{tab:cameras}},
center,
notespar,
doinside=\small,
]{ccc}{
\tnote[a]{Number used to identify viewing angles in Figs. \ref{fig:disc_attenuation} and \ref{fig:merger_attenuation}.}
\tnote[b,c]{Camera positions ($\theta$ and $\phi$ denote the polar and azimuthal angles, respectively) in spherical coordinates.
The isolated disc galaxy and merger orbit lie in the $xy$-plane. Thus, angle 1 provides a face-on view of the disc galaxy.}
}{
\FL
Angle number\tmark[a] & $\theta$\tmark[b] & $\phi$\tmark[c] \NN
& (deg) & (deg) \ML
1 & 0 & 0 \NN
2 & 73.4 & 0 \NN
3 & 73.4 & 120 \NN
4 & 73.4 & 240 \NN
5 & 124.8 & 0 \NN
6 & 124.8 & 120 \NN
7 & 124.8 & 240 \LL
}
The results of the \textsc{sunrise}\xspace calculations are spatially resolved UV--mm SEDs (i.e. integral field unit spectrograph-like data) for the simulated galaxies
viewed from seven different cameras. To sample uniformly in solid angle, the positions were selected by uniformly sampling the cosine of the polar angle, $\cos \theta$,
starting at the north pole and excluding the south pole ($\cos \theta = \{-1/3, 1/3, 1\}$). For each $\cos \theta$ value except for $\cos \theta = 1$, for which all azimuthal
angles are equivalent, the azimuthal angle $\phi$ was sampled uniformly ($\phi = \{0, 2\pi/3, 4\pi/3\}$).
The camera positions in spherical coordinates are specified in Table \ref{tab:cameras}.
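The placement scheme can be written compactly as below, using the $\cos\theta$ values quoted above (Table \ref{tab:cameras} lists the camera angles used in the calculations):

```python
import numpy as np

# Uniform-in-solid-angle camera placement: sample cos(theta) uniformly,
# excluding the south pole, and sample phi uniformly on each ring
# (all azimuths are equivalent for the face-on camera, cos(theta) = 1).
cameras = []
for cos_theta in (1.0, 1.0 / 3.0, -1.0 / 3.0):
    theta = float(np.degrees(np.arccos(cos_theta)))
    phis = (0.0,) if cos_theta == 1.0 else (0.0, 120.0, 240.0)
    cameras.extend((theta, phi) for phi in phis)
```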
We calculated the integrated photometry by
summing the SEDs of all pixels and convolving with the appropriate filter response curves. We assumed that the simulated galaxies are at redshift $z = 0.1$.
In this work, we used the bands that were used for the initial
\textit{Herschel} ATLAS \citep[\textit{H}-ATLAS;][]{Eales:2010} investigations because one of the motivations of the work was to validate the SED modelling approach used in
\citet{Smith:2012} for 250\,$\mu$m-selected galaxies with $r$-band Sloan Digital Sky Survey \citep[SDSS;][]{York:2000} counterparts from \citet{Smith:2011}. The
likelihood-ratio cross-matching in \citet{Smith:2011} was performed in order to associate redshift information primarily from SDSS and the Galaxy and Mass Assembly \citep[GAMA;][]{Driver:2011}
survey with the \textit{H}-ATLAS sources, and to leverage the matched multi-wavelength photometry from GAMA for the purposes of fitting SEDs \citep[we refer the
interested reader to][for further details]{Driver:2011}. Specifically, we used simulated photometry in the near- and far-UV bands of the Galaxy Evolution Explorer satellite
\citep[GALEX;][]{Martin:2005}, the \textit{ugriz} bands from the SDSS,
the \textit{YJHK} bands from the UK Infrared Deep Sky Survey \citep[UKIDSS;][]{Lawrence:2007}, and FIR data from IRAS at 60 $\micron$,
\textit{Herschel} PACS \citep{Poglitsch:2010} at 100 and 160 $\micron$, and \textit{Herschel}
SPIRE \citep[][]{Griffin:2010} at 250, 350, and 500 $\micron$. Note that at all times, the photometry
is integrated over the entire system (i.e. both galaxies in the merger).
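The band-convolution step can be sketched as follows, assuming a simple mean-flux-density convention, $\int f_\lambda R\,{\rm d}\lambda / \int R\,{\rm d}\lambda$; the actual filter curves and any photon-counting weights are not reproduced here:

```python
import numpy as np
from scipy.integrate import trapezoid

def band_flux(lam, f_lam, lam_filt, response):
    """Filter-convolved mean flux density over a band.

    lam/f_lam: the galaxy SED; lam_filt/response: the filter curve.
    The mean-flux-density convention is a simplifying assumption.
    """
    # Resample the filter onto the SED grid; zero outside the bandpass
    resp = np.interp(lam, lam_filt, response, left=0.0, right=0.0)
    return trapezoid(f_lam * resp, lam) / trapezoid(resp, lam)
```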
Because the goal of this work is to test the efficacy of SED modelling under ideal conditions and investigate
intrinsic systematic uncertainties rather than those that arise from noisy or limited data, we did not add any noise to the photometry. However, for the purposes of
applying \textsc{magphys}\xspace, we assumed uncertainties identical to those assumed in \citet{Smith:2012}, amounting to 0.2\,mag in the FUV and NUV bands, 0.1\,mag in the
\textit{ugrizYJHK} bands, 20 per cent at 60 $\micron$, 10 (20) per cent at 100 (160) $\micron$ and 15 per cent at 250, 350 and 500 $\micron$. Although in
\citet{Smith:2012}, these uncertainties were motivated by absolute calibration uncertainties and hard-to-quantify aperture effects
\citep[e.g.][]{Hill:2011}, they are arbitrary for this investigation (but are similar to the quantifiable model uncertainties in the radiative transfer calculations;
\citealt{Lanz2014}).
\subsection{SED modelling} \label{S:dC08}
As previously discussed, we performed SED modelling using \textsc{magphys}\xspace (dC08), which is now very commonly used for interpreting observed galaxy SEDs
(see example references in Section \ref{S:intro}).
We used the version described in \citet{Smith:2012}, which was used for the \textit{H}-ATLAS analysis therein; here, we summarise the most relevant details,
and we refer the reader to dC08 for full details of the method.
\textsc{magphys}\xspace fits galaxy SEDs using a Bayesian approach to determine posterior distributions for the fit parameters. In this manner, median-likelihood values
for physical properties of a galaxy, such as the SFR, are inferred.
The emission from stars for a given IMF, SFH, and metallicity is determined using the `CB07' (unpublished) version of the BC03 SSPs.\footnote{The CB07 templates are the default templates used in \textsc{magphys}\xspace. However, it has recently been discovered that the CB07 models over-correct
for the contribution of TP-AGB stars \citep{Zibetti:2013}. For this reason, it is now possible to use the BC03 models in \textsc{magphys}\xspace.}
Dust attenuation
is treated via the method of CF00; in this approach, young stars (with age $< 10^7$ yr) are more attenuated than older stars to account
for stars being born in dense molecular clouds. All stars are attenuated by an effective optical depth $\hat{\tau}_{\lambda}^{\rm ISM}$,
which is given by equation (4) from dC08:
\begin{equation}
\hat{\tau}_{\lambda}^{\rm{ISM}} = \mu \hat{\tau}_{V} (\lambda/5000~{\rm{\AA}})^{-0.7}.
\end{equation}
The young stars are further attenuated by effective optical depth $\hat{\tau}_{\lambda}^{\rm BC}$,
which is given by equation (3) of dC08:
\begin{equation}
\hat{\tau}_{\lambda}^{\rm{BC}} = (1-\mu) \hat{\tau}_{V} (\lambda/5000~{\rm{\AA}})^{-1.3}.
\end{equation}
In the above equations, $\hat{\tau}_V = \hat{\tau}_{V}^{\rm ISM} + \hat{\tau}_{V}^{\rm BC}$ is the total effective optical depth
to the young stars and $\mu = \hat{\tau}_V^{\rm ISM}/(\hat{\tau}_V^{\rm BC} + \hat{\tau}_V^{\rm ISM})$ is the fraction of the total optical
depth contributed by the `diffuse ISM'. The specific power-law indices adopted in the above equations were motivated by fitting
the CF00 model to observations of local starburst galaxies. Note that, assuming the dust has MW-, LMC- or SMC-type properties,
the CF00 model can be well reproduced with a discrete-cloud geometry (see CF00 for full details).
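Because the two CF00 attenuation components are simple power laws in wavelength, they are straightforward to evaluate directly. The following sketch (not part of \textsc{magphys}\xspace itself; the values $\hat{\tau}_V = 1.5$ and $\mu = 0.3$ are illustrative defaults, whereas in \textsc{magphys}\xspace both are free parameters of the fit) computes the effective optical depths seen by old and young stars:

```python
import numpy as np

def cf00_optical_depths(lam_angstrom, tau_v=1.5, mu=0.3):
    """Effective optical depths of the CF00 two-component model.

    tau_v and mu default to illustrative values; in MAGPHYS both are
    free parameters of the fit.
    """
    lam = np.atleast_1d(lam_angstrom) / 5000.0   # normalise to 5000 A
    tau_ism = mu * tau_v * lam ** -0.7           # diffuse ISM (all stars)
    tau_bc = (1.0 - mu) * tau_v * lam ** -1.3    # birth clouds (young stars)
    return tau_ism, tau_bc

# Attenuation in magnitudes at 5500 A (V band): old stars see only the
# diffuse ISM, while young stars see both components.
tau_ism, tau_bc = cf00_optical_depths(5500.0)
A_old = 1.086 * tau_ism[0]
A_young = 1.086 * (tau_ism[0] + tau_bc[0])
```

The factor 1.086 converts an optical depth to magnitudes of attenuation ($A = 2.5 \log_{10} e \, \tau$).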
The FIR dust emission is treated as a sum of multiple optically thin\footnote{The assumption of optical thinness in the FIR is likely to be reasonable for all but the most
extreme local galaxies, and modelling normal galaxies was the original purpose for which \textsc{magphys}\xspace was designed. However, this assumption may be
problematic for extremely IR-luminous, highly obscured galaxies, such as submm galaxies \citep{Hayward:2012smg_bimodality,
Rowlands2014}.}
modified blackbodies with different normalizations, dust temperatures, and dust emissivity indices, $\beta$. We do not discuss the implementation in full detail here,
instead referring the interested reader to dC08. However, we will briefly highlight some salient features of the \textsc{magphys}\xspace\ dust implementation, in which some of the parameters
are fixed based on observational constraints and some are allowed to vary. The FIR emission is dominated
by `warm' and `cold' grains in thermal equilibrium, decomposed into the birth-cloud and diffuse-ISM components of the CF00 model.
The birth clouds and diffuse ISM both have warm-dust components, which are treated as modified blackbodies with $\beta = 1.5$; the temperature of the warm birth cloud component, $T_{\rm W}^{\rm BC}$,
is allowed to vary within the prior range $30 \le T_{\rm W}^{\rm BC} \le 60$\,K, whereas the warm diffuse-ISM component has a fixed temperature of 45\,K.
In contrast, only the diffuse ISM has a cold-dust component, which is represented as a modified blackbody with $\beta = 2.0$ and variable temperature $15 \le T_{\rm C}^{\rm ISM} \le 25$\,K.
For the purpose of calculating dust masses,
the dust emissivity is normalised at $\kappa_{850~\mu {\rm m}} = 0.77$ g$^{-1}$ cm$^2$ \citep{Dunne:2000}.
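An optically thin modified blackbody of the kind used for these components can be sketched as follows (an illustrative implementation, not the \textsc{magphys}\xspace code; constants are in cgs units, and the returned value is per unit dust mass up to an overall normalisation):

```python
import numpy as np

# Physical constants in cgs units
H, C, KB = 6.626e-27, 2.998e10, 1.381e-16

def modified_blackbody(lam_um, temp, beta, kappa850=0.77):
    """Optically thin modified blackbody: kappa(lambda) * B_lambda(T).

    kappa850 is the emissivity normalisation in cm^2 g^-1 at 850 um
    (the Dunne et al. 2000 value adopted by MAGPHYS).
    """
    lam_cm = lam_um * 1e-4                        # micron -> cm
    kappa = kappa850 * (850.0 / lam_um) ** beta   # power-law emissivity
    planck = (2.0 * H * C**2 / lam_cm**5) / np.expm1(H * C / (lam_cm * KB * temp))
    return kappa * planck

# Warm birth-cloud component (beta = 1.5, prior 30-60 K) and cold
# diffuse-ISM component (beta = 2.0, prior 15-25 K), evaluated at 250 um:
warm = modified_blackbody(250.0, 45.0, 1.5)
cold = modified_blackbody(250.0, 20.0, 2.0)
```

Summing such components with different normalisations, temperatures, and $\beta$ values yields the composite FIR SED described above.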
Given the SED components described above, dC08 generated libraries of template SEDs, including a set of 25 000 stellar population models with a wide
variety of SFHs (which have the general form of an exponentially declining component with superimposed bursts), metallicities, and dust attenuation.
dC08 also generated a separate
set of 50 000 dust SED templates with a range of dust temperatures and relative contributions of different dust components (see dC08 for the
details of the sampling and the assumed prior distributions for the model parameters).
The separate stellar population and dust emission SED templates are combined to yield UV--mm SEDs.
One of the parameters that describes the stellar population template SEDs is the fraction of the absorbed luminosity that is absorbed by the diffuse ISM
rather than the birth clouds, $f_{\mu}^{\rm SFH}$. Similarly, a parameter for the dust emission template SEDs is the fraction of the IR luminosity that is
emitted by dust in the diffuse ISM, $f_{\mu}^{\rm IR}$. When \textsc{magphys}\xspace\ combines the stellar population and dust emission template SEDs, to make the
SEDs self-consistent (i.e. to satisfy the `energy balance' criterion), it requires that
\begin{equation}
f_{\mu}^{\rm SFH} = f_{\mu}^{\rm IR} \pm \delta f_{\mu},
\label{eq:df}
\end{equation}
where $\delta f_{\mu} = 0.15$. Strict equality is not required to account for uncertainties from e.g.
viewing angle, and dC08 found that $\delta f_{\mu} = 0.15$ was sufficient to yield good fits to observed galaxy SEDs.
This condition requires that the UV--NIR and MIR--mm emission are self-consistent; thus, the availability of UV--mm constraints is leveraged more fully
by the fitting procedure than it would be if the UV--NIR and MIR--mm components were treated in isolation.
Applying the condition specified in equation (\ref{eq:df}) to all possible combinations of the 25 000 stellar population and 50 000 dust emission template SEDs
yields a library of millions of UV--mm SED templates. \textsc{magphys}\xspace then fits galaxy SEDs in a Bayesian manner
using the $\chi^2$ estimator to determine the goodness-of-fit (see \citealt{Kauffmann:2003} for an early application of such a technique).
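The energy-balance pairing and $\chi^2$-weighted posterior construction just described can be illustrated schematically. The toy sketch below uses far smaller template libraries than the actual 25\,000 stellar and 50\,000 dust templates, and uniform random stand-ins rather than the real dC08 priors; it is meant only to show the structure of the calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy template libraries: f_mu drawn uniformly as a stand-in for the
# dC08 priors (the real libraries are far larger).
f_mu_sfh = rng.uniform(0.0, 1.0, 2000)
f_mu_ir = rng.uniform(0.0, 1.0, 2000)

# Energy-balance pairing: keep combinations with |f_SFH - f_IR| <= 0.15.
delta_f = 0.15
pairs = np.abs(f_mu_sfh[:, None] - f_mu_ir[None, :]) <= delta_f
balance_fraction = pairs.mean()  # fraction of combinations retained

# Bayesian weighting: each template combination gets weight exp(-chi2/2);
# the median-likelihood parameter value is read off the weighted CDF.
chi2 = rng.chisquare(10, size=5000)    # stand-in chi^2 values
param = rng.uniform(0.0, 1.0, 5000)    # stand-in parameter values
w = np.exp(-0.5 * (chi2 - chi2.min()))
order = np.argsort(param)
cdf = np.cumsum(w[order]) / w.sum()
median_likelihood = param[order][np.searchsorted(cdf, 0.5)]
```

For uniform $f_\mu$ priors, the energy-balance cut retains a fraction $1 - (1 - \delta f_{\mu})^2 \approx 0.28$ of the combinations; in practice the retained fraction depends on the actual prior distributions.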
Because \textsc{magphys}\xspace uses $\chi^2$ for the SED fitting, each datum must have an associated uncertainty
estimate to appear in the denominator of the $\chi^2$ calculation. For real data, this uncertainty can include several different components, such as photon
shot noise, calibration uncertainties, and aperture effects, which are clearly not applicable to our model
(which is noise-free, precisely calibrated, and includes integrated photometry). As noted in Section \ref{S:sims}, we arbitrarily adopt uncertainties in each band
identical to those used in the \textsc{magphys}\xspace fits for {\it H}-ATLAS galaxies in \citet{Smith:2012}. This is potentially problematic because if the simulated SEDs were
perfectly represented in the \textsc{magphys}\xspace libraries, this would result in extremely small values of best-fit $\chi^2$, a point to which we shall return when discussing
our results below.
\begin{figure*}
\centering
\includegraphics[width=1.95\columnwidth,trim=0cm 0cm 0cm 0.65cm,clip]{./plots/nslss_raff_z0_sn113_sedplot.eps}
\caption{The top panel shows an example of the \textsc{magphys}\xspace SED fits for the $t - t({\rm SFR_{max}}) = -0.5$ Gyr snapshot (between first passage and coalescence)
of the merger simulation observed from seven viewing angles. The black points are the simulated photometry. The cyan lines correspond to the
`observed' SEDs of the simulated galaxy for each
of the seven viewing angles, and the grey line indicates the true input (unattenuated) SED. The green, blue, and red lines denote the best-fitting total
output (i.e. the sum of the attenuated emission from stars and the dust emission), unattenuated
stellar, and dust SEDs, respectively, yielded by \textsc{magphys}\xspace for each of the seven viewing angles. The bottom panel shows the residuals in the photometry,
$(L_{\rm true}-L_{\textsc{magphys}\xspace})/\sigma$. For each
of the viewing angles, \textsc{magphys}\xspace yields an acceptable fit to the photometry, and the true intrinsic stellar SED is recovered reasonably well. The larger
residuals near \textit{i} band occur because H$\alpha$ falls in this band at $z = 0.1$, and emission lines are not accounted for in our \textsc{magphys}\xspace modelling;
this has the effect of leaving a positive residual in \textit{i} band and affecting the neighbouring residuals (because neighbouring bands are not independent in SED modelling).
In the MIR, the \textsc{magphys}\xspace SEDs vary strongly with viewing angle, and the true SEDs are not recovered for most of the viewing angles. In the FIR shortward of
the observed-frame 60-$\micron$ data point, \textsc{magphys}\xspace underestimates the true SEDs.}
\label{fig:sed_example}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth,trim=0cm 0cm 0cm 0.35cm,clip]{./plots/fitresults_isolated.eps}
\caption{Results of applying \textsc{magphys}\xspace to the synthetic integrated photometry for the isolated disc simulation. Each panel shows
the evolution of a \textsc{magphys}\xspace parameter vs. simulation snapshot time $t$ in Gyr. The coloured lines indicate median-likelihood values inferred from \textsc{magphys}\xspace, and different
colours denote different viewing angles (in order of angle number as specified in Table \ref{tab:cameras}, red, blue, orange, light blue, pink, purple and yellow).
The shaded regions represent the median of the symmetrized 16th and 84th percentiles of the cumulative frequency distribution about the median of each set of parameters
averaged over the seven viewing angles. When possible
(not all \textsc{magphys}\xspace parameters correspond to physical parameters of the simulations), the true values from the simulation are plotted as a solid green line
(in panels b\xspace, c\xspace, d\xspace, e\xspace, f\xspace, and g\xspace).
The panels are as follows: (a)\xspace $\chi^2$ value for the best-fitting SED, where the dashed line indicates the threshold for an acceptable fit from \citet{Smith:2012};
(b)\xspace $V$-band attenuation ($A_{\mathrm{V}}$);
(c)\xspace stellar mass; (d)\xspace total luminosity of the dust emission; (e)\xspace dust mass; (f)\xspace specific SFR; (g)\xspace SFR;
(h)\xspace fraction of luminosity absorbed by the diffuse ISM in \textsc{magphys}\xspace ($f_{\mu}^{\rm SFH}$); (i)\xspace total $V$-band optical depth in \textsc{magphys}\xspace ($\hat{\tau}_{\mathrm{V}}$);
(j)\xspace $V$-band optical depth of the diffuse ISM in \textsc{magphys}\xspace ($\hat{\tau}_{\mathrm{V,ISM}}$);
(k)\xspace cold-dust temperature parameter of \textsc{magphys}\xspace ($T_{\rm C}^{\rm ISM}$); and (l)\xspace warm-dust temperature parameter of \textsc{magphys}\xspace ($T_{\rm W}^{\rm BC}$). In the last two panels,
the dotted lines represent the limits imposed by the assumed priors.
Most parameters are recovered well; see the text for details.}
\label{fig:disc}
\end{figure*}
\section{Results} \label{S:results}
\subsection{Example fit} \label{S:example_fit}
Fig. \ref{fig:sed_example} shows the results of applying \textsc{magphys}\xspace to the $t - t({\rm SFR_{max}}) = -0.5$ Gyr snapshot of the merger simulation. The spread in the photometric points at a given wavelength
reflects the viewing-angle-dependent variation in dust attenuation, which is self-consistently computed for the simulated galaxy through dust radiative transfer. In the UV,
the output SEDs for the seven cameras span a range of $\sim 0.5$ dex in luminosity over the different viewing angles. For the most-obscured viewing angle, the observed NUV luminosity is an order
of magnitude fainter than the intrinsic luminosity. Longward of $\sim1~\micron$, the variation of the photometry with viewing angle and the attenuation are considerably smaller (although still non-negligible).
For each of the seven viewing angles, \textsc{magphys}\xspace yields an acceptable fit to the photometry, although the models underpredict the $i$-band data points because at $z = 0.1$
(the assumed redshift of the
simulated galaxy), the high-equivalent-width H$\alpha$ emission line, which is not considered in this implementation of \textsc{magphys}\xspace, falls roughly in the centre of that band's
transmission function.\footnote{Interestingly, this systematic bias is also seen in the {\it H}-ATLAS \textsc{magphys}\xspace\ analysis of 250-\micron-selected galaxies in \citet{Smith:2012}.}
Encouragingly, \textsc{magphys}\xspace is able to recover the intrinsic (unattenuated) stellar SED to within $\sim 0.1-0.4$ dex. This success indicates that the
CF00 two-component dust attenuation model used in \textsc{magphys}\xspace is effective at correcting for the effects of dust attenuation (for this particular SED; we present
an example in which this is not the case in Section \ref{S:dust}). In the simulations, the dust attenuation for a given
viewing angle depends on the 3-D spatial distribution of sources of emission and dust, spatial variations in the stellar populations, and dust scattering into and out of the given
line of sight. Differential extinction is significant because for a given line of sight, there is, in principle, a unique line-of-sight optical depth for each stellar particle; thus, a
two-component model is surely a crude approximation to the actual geometry of the simulated galaxy. Consequently, it is impressive that the dust attenuation correction
is as effective as it is.
\begin{figure}
\centering
\includegraphics[trim=265 0 265 0,clip,width={0.85\columnwidth}]{./plots/compare_av_isolated.eps}
\caption{$A_{\mathrm{V}}$ values of the \textsc{magphys}\xspace best-fitting SEDs vs. the true $A_{\mathrm{V}}$ values for the isolated disc simulation. The points are coloured according to the viewing angle, as specified
in the legend. For viewing angles for which the true $A_{\mathrm{V}}$ value is relatively low (angles 1, 5, 6, and 7), \textsc{magphys}\xspace tends to overestimate the $A_{\mathrm{V}}$ values. Conversely, for viewing angles
with higher $A_{\mathrm{V}}$ values (angles 2, 3, and 4), \textsc{magphys}\xspace tends to underestimate $A_{\mathrm{V}}$. The average offset between the \textsc{magphys}\xspace and true $A_{\mathrm{V}}$ values is $0.006 \pm 0.129$.}
\label{fig:disc_attenuation}
\end{figure}
Using these input data, \textsc{magphys}\xspace is significantly less effective at recovering the true dust SED in the MIR, primarily because of the lack of
`observations' at wavelengths in the range of $\sim2-50~\micron$: the MIR SED is poorly recovered there for most viewing angles.
At these wavelengths, the variation in the best-fitting SEDs for each viewing angle output by \textsc{magphys}\xspace
is greater than an order of magnitude in luminosity (whereas the variation in the true SED is negligible). In the FIR between $\sim 25$ \micron~and the 60-$\micron$ data point,
all of the best-fitting SEDs for this snapshot under-predict the true SED \citep[although this is not necessarily the case for other snapshots, this trend was also noted by][]{Ciesla:2014}.
As noted in \citet{Smith:2012}, these difficulties are not
unexpected because the only constraints on the MIR SED in the absence of MIR observations come from the prior on the MIR component of the dust SED library (which is
chosen at random and thus deliberately broad) and the energy balance criterion. This latter constraint is also weakened in the MIR regime as a result of the small contribution
of the hot dust component to the total dust luminosity.
The uncertainty in the MIR highlights the fundamentally phenomenological (rather than physical) nature of the model for the IR emission: for a given total energy absorbed, the dust
emission does not depend \textit{a priori} on the SED of the absorbed light. In reality, the shape of the radiation field that heats the dust, which can vary significantly throughout a galaxy,
affects the dust-temperature distribution. For this reason, as noted in dC08 and \citet{Smith:2012}, the efficacy of the dust emission model in \textsc{magphys}\xspace at observationally un-sampled
wavelengths (particularly in the MIR) is limited. The model may be useful for recovering the IR luminosity and dust mass (we address these possibilities below), but it should
not be used to interpret the detailed physical state of the dust or to make predictions for regions of the SED that are unconstrained by the available photometry. Using a more
physically motivated model for dust emission, such as that of \citet{Draine:2007sings}, may alleviate this problem \citep{Ciesla:2014}.
\subsection{Isolated disc} \label{S:isolated_disc}
Fig. \ref{fig:disc} shows the time evolution of various quantities for the isolated disc simulation. In each panel, the thin non-green lines indicate the median-likelihood
values output by \textsc{magphys}\xspace (except for the $\chi^2$ and $A_{\mathrm{V}}$ panels, which show the values for the best-fitting SED);
different colours correspond to different viewing angles, and the shaded region
represents an estimate of the typical uncertainty about the median (specifically, it represents the median of the symmetrized 16th and 84th percentiles of the cumulative frequency
distribution about the median of each set of parameters, averaged over the seven viewing angles). The thick green lines represent the true values of the quantities
for the simulations (when possible; not all \textsc{magphys}\xspace parameters have a direct physical counterpart in the simulations). See the figure legend for details of the parameters
shown in each panel.
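The shaded-band statistic just described can be computed as follows; this is a minimal numpy sketch in which synthetic Gaussian draws stand in for the \textsc{magphys}\xspace posterior samples (the actual \textsc{magphys}\xspace output is a discrete PDF per parameter):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior samples of one parameter for the seven viewing
# angles (hypothetical Gaussian posteriors for illustration only).
samples = [rng.normal(10.0, 0.5, 2000) for _ in range(7)]

half_widths = []
for s in samples:
    p16, med, p84 = np.percentile(s, [16, 50, 84])
    # Symmetrized 16th-84th percentile half-width about the median:
    half_widths.append(0.5 * ((med - p16) + (p84 - med)))

# Typical uncertainty: average the half-widths over the viewing angles.
typical_uncertainty = np.mean(half_widths)
```

Note that the symmetrized half-width reduces algebraically to $(p_{84} - p_{16})/2$; writing it as the average of the two one-sided widths makes the symmetrization explicit.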
At all times during this isolated disc simulation, acceptable fits can be found, which is to say that the $\chi^2$ values (shown
in panel a\xspace) are always below the threshold value for an acceptable fit (shown as the horizontal dashed line). This threshold value of $\chi^2$ was derived
in \citet{Smith:2012} on the basis that it corresponds to the $\chi^2$ value above which there is a probability of less than one percent that the best fit is consistent
with the model given the seventeen bands of photometry available, their total errors, and a statistical estimate of the number of free parameters in the model
(see \citealt{Smith:2012} for the technical details of this derivation). In Section \ref{S:methods}, we have already mentioned the arbitrary nature of the photometric
errors that we have adopted in this study (given the absence of e.g. calibration uncertainties and aperture effects in our simulated photometry). That the best-fit
$\chi^2$ values are non-negligible ($1 < \chi^2 < 20$) highlights that there are differences between the SEDs emergent from the
simulation and the \textsc{magphys}\xspace fitting libraries (which is not surprising because it is unlikely that the simple treatment of dust attenuation used in \textsc{magphys}\xspace can perfectly capture
the relatively complex source and dust geometry of the simulated galaxies). As we shall discuss
below, the generally reasonable parameter estimates that \textsc{magphys}\xspace derives, relative to the known simulated values, suggest that the $\chi^2$ threshold is
sufficiently large to allow us to confidently recover reasonable SED fits for the simulated SEDs.
\begin{figure}
\centering
\includegraphics[width={\columnwidth}]{./plots/cmd.eps}
\caption{SDSS $u-r$ colour vs. $r$-band absolute AB magnitude for the simulated disc galaxy (dashed line) and galaxy merger (solid line). Various times
of interest are marked, as described in the legend. The grey segment of the solid line indicates the time period of the merger simulation during which
\textsc{magphys}\xspace does not yield acceptable fits to the simulated SEDs. The green dot-dashed line indicates the optimal separator between the blue cloud and red sequence from
\citet{Baldry:2006}. For most of the duration of both simulations, the simulated galaxies are within the blue cloud. After the final starburst (red diamond),
the simulated merger continues to approach the green valley. Because the merger simulation was terminated $\sim0.5$ Gyr after the starburst, there is not sufficient time for it to
move to the red sequence.}
\label{fig:cmd}
\end{figure}
The physical evolution of the isolated disc is simple:
because there is no gas accretion in this idealised simulation, as time progresses, the gas content is depleted, the SFR decreases, and $M_{\star}$ increases.
The time evolution of the various simulation quantities is qualitatively recovered by the SED modelling: the physical and fitted values of
$A_{\mathrm{V}}$ (panel b\xspace), dust luminosity $L_\mathrm{d}$ (panel d\xspace), $M_\mathrm{dust}$ (panel e\xspace), sSFR (panel f\xspace), and SFR (panel g\xspace) all decrease with time, whereas both the physical and fitted
values of $M_{\star}$ (panel c\xspace) increase with time.
As well as the general trends, it is worth noting that the output parameters vary smoothly within the errors between adjacent time snapshots.
This is reassuring because \textsc{magphys}\xspace\ fits each snapshot (and viewing angle) independently without knowledge that the snapshots\slash angles are related; the lack of discontinuities
in the derived parameters offers considerable support for the reliability of the parameters that \textsc{magphys}\xspace\ produces.
However, the quantitative agreement between the physical and fitted parameters is more varied. $L_\mathrm{d}$ (panel d\xspace) is recovered exceptionally
well because the simulated photometry samples the FIR SED well, in particular around the peak \citep[e.g.][]{Smith:2013}, and the model SEDs typically provide good fits to the
simulated photometry. Although the inferred and true SFR\footnote{The \textsc{magphys}\xspace SFRs plotted in this work correspond to SFRs averaged over the past
100 Myr, although our results are almost identical if we instead consider \textsc{magphys}\xspace SFRs with 10 Myr averaging. The value for the simulations is the `instantaneous' SFR, i.e.
the sum of the SFRs of the individual gas particles, which are calculated based on their gas
densities and the assumed sub-resolution star formation prescription. Consequently, the SFR value for the simulations corresponds to an average over a shorter
timescale (i.e. less than the maximum time step, 5 Myr) than the \textsc{magphys}\xspace values. If the SFR varies significantly on $10-100$ Myr timescales, this difference
could lead to discrepancies between the \textsc{magphys}\xspace and simulation values even if \textsc{magphys}\xspace recovers the SFH exactly. However, for most times in the simulations,
this effect is minor.} values (panel g\xspace) agree well at early times, the true SFR is increasingly overestimated as the simulation
progresses; the overestimate can be as much as $\sim 0.2$ dex.
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth,trim=0cm 0cm 0cm 0.35cm,clip]{./plots/fitresults_fiducial.eps}
\caption{Similar to Fig. \ref{fig:disc}, but for the galaxy major merger simulation. The x-axis in each panel indicates the time relative to the peak of the starburst induced at
coalescence. This convention thus provides some insight into the physical state of the system at a given time. The qualitative evolution and values of the various physical parameters are recovered
very well, even during the coalescence-induced starburst phase, when the fits are formally unacceptable. As for the isolated disc case, the dust mass is systematically underestimated.}
\label{fig:merger}
\end{figure*}
The dust mass\footnote{The dust emissivities assumed by the two codes differ: in \textsc{magphys}\xspace, the emissivity is normalised by $\kappa_{850 ~{\rm \mu m}} = 0.77$ g$^{-1}$
cm$^{2}$ \citep{Dunne:2000}, whereas the MW dust model used in the simulations has $\kappa_{850 ~{\rm \mu m}} = 0.38$ g$^{-1}$ cm$^{2}$. Consequently, we multiply the dust
masses output by \textsc{magphys}\xspace by two to account for this difference.} (panel e\xspace) is systematically underestimated by $\sim 0.2 - 0.3$ dex.
This underestimation is at least partially due to the assumption in the simulations that the cold phase of the sub-resolution ISM has a negligible volume filling factor and
thus does not absorb photons. Consequently, dust contained in the cold phase does not emit light and cannot be recovered. In Section \ref{S:mp_off},
we discuss this issue in more detail.
Within the uncertainties, the inferred and true stellar masses (panel c\xspace) agree throughout the simulation. However, at early times, the median-likelihood values
can be less than the true values by $\sim0.1-0.3$ dex.
Because the stellar mass is well-recovered and the SFR is slightly overestimated, the specific SFR (panel f\xspace) is also overestimated slightly.
For some viewing angles, $A_{\mathrm{V}}$ (panel b\xspace) tends to be underestimated, whereas for others, it is typically overestimated. This is indicated more clearly in
Fig. \ref{fig:disc_attenuation}, which shows the $A_{\mathrm{V}}$ recovered by \textsc{magphys}\xspace versus the
true $A_{\mathrm{V}}$. For less-attenuated (closer to face-on) viewing angles (angles 1, 5, 6, and 7), $A_{\mathrm{V}}$ is slightly overestimated, whereas for
more-attenuated (closer to edge-on) viewing angles, $A_{\mathrm{V}}$ is slightly underestimated by \textsc{magphys}\xspace. On average, $A_{\mathrm{V}}$ is recovered to within $0.006 \pm 0.129$.
For a given viewing angle, the inferred and true $A_{\mathrm{V}}$ values typically differ by less than 0.2 magnitudes.
Although the other plotted quantities do not have direct physical analogues in the simulations, their time evolution is also of interest. Panels (k)\xspace and (l)\xspace show
the \textsc{magphys}\xspace dust temperatures $T_{\mathrm{C}}^{\mathrm{ISM}}$ and $T_{\mathrm{W}}^{\mathrm{BC}}$ versus time. $T_{\mathrm{C}}^{\mathrm{ISM}}$ and $T_{\mathrm{W}}^{\mathrm{BC}}$ both
tend to decrease as the simulation progresses. This decrease reflects the shifting of the simulated galaxy's SED to longer wavelengths with time because
the strong decrease in luminosity coupled with a relatively weak decrease in the dust mass results in colder dust (see the discussion in \citealt{Hayward:2011smg_selection}).
The median likelihood values for $T_{\mathrm{C}}^{\mathrm{ISM}}$ ($T_{\mathrm{W}}^{\mathrm{BC}}$) are in the range $\sim 18-23$ ($33-46$) K, and the uncertainty, which is more significant than the variation
with viewing angle, is $\sim 2$ ($5-10$) K.
The total $V$-band optical depth, $\hat{\tau}_{\mathrm{V}}$ (panel i\xspace), and the $V$-band optical depth contributed by the diffuse ISM, $\hat{\tau}_{\mathrm{V,ISM}}$ (panel j\xspace),
both remain relatively constant over time.
At all times, the diffuse ISM is optically thin and the total effective optical depth (birth clouds plus diffuse ISM) is $\sim 1-2$, although there is significant variation with both
time and viewing angle, and the uncertainty is relatively large. As the simulation progresses and the sSFR decreases, the fraction of the luminosity absorbed
by the diffuse ISM, $f_{\mu}^{\rm SFH}$ (panel h\xspace), increases. The diffuse ISM absorbs of order half of the total luminosity ($f_{\mu}^{\rm SFH} \sim 0.3-0.6$).
The variation with viewing angle is indicated by the differences among the \textsc{magphys}\xspace parameter values at a given time. For most parameters, the variation
is less than the \textsc{magphys}\xspace uncertainties\footnote{Note that none of the viewing angles are edge-on (but three have relatively high inclinations
of $73.4$ deg). Had we used an edge-on camera, the overall variation among
viewing angles would certainly be greater. However, it is unlikely that our conclusions regarding the importance of viewing angle variation would change
qualitatively, and the probability of observing real disc galaxies almost perfectly edge-on is low.}
(i.e. the coloured lines lie within the shaded region). The notable exceptions are $\hat{\tau}_{\mathrm{V,ISM}}$ and, to a lesser extent,
$f_{\mu}^{\rm SFH}$. Physically, $\hat{\tau}_{\mathrm{V,ISM}}$ should vary with viewing angle because as the disc is viewed closer to edge-on, the typical column depths along the
line of sight are greater. For the same reason, $f_{\mu}^{\rm SFH}$, which is the fraction of the absorbed stellar light that is absorbed by the diffuse ISM rather than
the birth clouds, should also vary with viewing angle. The physical viewing-angle-dependent variation in the obscuration is demonstrated by panel (b)\xspace
of Fig. \ref{fig:disc} and further highlighted in Fig. \ref{fig:disc_attenuation}; as discussed above, the true $A_{\mathrm{V}}$ values (the green lines) typically differ by only $\sim 0.2$\,mag from the best-fit estimates.
\subsection{Major galaxy merger} \label{S:merger}
We now turn to the evolution of the major galaxy merger simulation. This merger exhibits the characteristic evolution of major mergers that induce strong
starbursts (not all orbits result in such starbursts; see \citealt{Cox:2006}).
Because the progenitor galaxies are not initialised with bulges, which tend to stabilise the discs, a starburst with maximum SFR of $\sim 60 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$ is
induced at first pericentric passage ($t - t({\rm SFR_{max}}) \sim -1.1$ Gyr). Subsequently, the SFR decreases below the initial value. As the progenitor disc galaxies
approach final coalescence ($t - t({\rm SFR_{max}}) \sim 0$ Gyr), a starburst that is even stronger than that at first passage is induced by the tidal torques exerted
by the galaxies upon one another. The SFR and $L_\mathrm{d}$ briefly exceed $100 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$ and $10^{12} ~\mathrm{L_{\odot}}$, respectively (i.e. the simulated galaxy would be
classified as an ultraluminous IR galaxy, ULIRG). Shortly after the peak of the starburst, the AGN contribution (see Section \ref{S:AGN}) is maximal; the AGN can contribute
as much as 75 per cent of the total UV--mm luminosity \citep[see e.g.][for a detailed study of a real merger-induced starburst in a ULIRG exhibiting AGN activity]{Smith:2010}.
During the final starburst, a significant fraction of the available gas is consumed. Shock heating
and AGN feedback heat the bulk of the remaining gas (see e.g. \citealt{Hayward2014arepo} for details). Consequently, the SFR plummets from $\sim 100 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$
to less than $\sim 0.5 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$, and the AGN emission decreases rapidly.
To put the simulated merger in context, the time evolution of the simulated merger in the SDSS $u-r$ colour versus $r$-band absolute AB magnitude $M_r$
colour-magnitude diagram (CMD) is shown in Fig. \ref{fig:cmd}. For completeness, the time evolution of the isolated disc is also shown. During most of the duration of the simulations,
the galaxies are in the blue-cloud region of the CMD \citep[see e.g.][]{Baldry:2004,Baldry:2006,Darg:2010}. After the starburst that is induced at final coalescence
of the merging galaxies, the simulated merger continues its evolution towards the green valley, the locus of which is denoted by the green dot-dashed line \citep[from][]{Baldry:2006}.
Because the simulation was terminated $\sim0.5$ Gyr after the peak of the starburst, the system does not transition onto the red sequence.
Our aim is to investigate how well \textsc{magphys}\xspace can fit the SEDs of actively star-forming, IR-luminous galaxies, for which the full panchromatic
capabilities of \textsc{magphys}\xspace can be utilised. Thus, the fact that the simulated merger does not enter the red sequence is irrelevant for the purposes of this work.
It is worthwhile to note that the simulated galaxies occupy regions of the CMD that are populated by real galaxies \citep[e.g.][]{Baldry:2004,Baldry:2006,Darg:2010}.
This is true even in the phase of the merger simulation during which \textsc{magphys}\xspace is unable to yield an acceptable fit to the simulated SEDs; this time period is indicated
by the grey segment of the solid line in Fig. \ref{fig:cmd}.
Fig. \ref{fig:merger} shows the results of applying \textsc{magphys}\xspace to the SEDs of the merger simulation; the panels are the same as in Fig. \ref{fig:disc}.
As was the case for the isolated disc, \textsc{magphys}\xspace qualitatively recovers the true evolution of the physical parameters except
for the stellar mass and dust mass for a short time near merger coalescence. For example, the times and amplitudes of the starbursts
are captured exceptionally well. The success of \textsc{magphys}\xspace at inferring the time evolution of the merger is reassuring and perhaps even surprising because (1)
the version of \textsc{magphys}\xspace used here was designed to treat relatively normal local galaxies, not ULIRGs; (2) \textsc{magphys}\xspace does not include emission from AGN, which is
significant at some times (near coalescence) during this merger simulation (the issue of AGN contamination will be discussed in detail in Section \ref{S:AGN});
(3) \textsc{magphys}\xspace treats each viewing angle and
time snapshot individually without knowledge of one another; and (4) because of the first
two reasons, \textsc{magphys}\xspace does not formally achieve a good fit to the SEDs during the coalescence stage of the merger [i.e. for $-0.1 \la t - t({\rm SFR_{max}}) \la 0.2$ Gyr, the
$\chi^2$ value is greater than the threshold for an acceptable fit; see panel {a\xspace}].
For most of the snapshots and viewing angles, the values inferred by \textsc{magphys}\xspace for most of the parameters are consistent with the true values within the uncertainties.
Because the FIR photometry is typically well fit by \textsc{magphys}\xspace, $L_\mathrm{d}$ (panel d\xspace) is recovered extremely well. The SFR (panel g\xspace) is also typically
recovered well; note that this is not necessarily a consequence of the excellent recovery of $L_\mathrm{d}$ because \textsc{magphys}\xspace\ includes a possible contribution to the
dust luminosity from evolved stars that are not linked with the most recent burst of star formation.
In the pre-coalescence phase [$-1 \la t - t({\rm SFR_{max}}) \la -0.2$ Gyr], the SFR tends to be overestimated slightly (by $\la 0.1$ dex), and in the post-starburst
phase, it can be overestimated by as much as 0.6 dex (which is a smaller factor than would occur if a simple conversion from $L_{\rm IR}$ were used; see \citealt{Hayward2014}),
irrespective of whether we consider \textsc{magphys}\xspace SFRs averaged over 10 or 100 Myr (i.e. the \textsc{magphys}\xspace default 100 Myr SFR-averaging timescale is not the source of this discrepancy).
The stellar mass (panel c\xspace) is recovered to within $\sim 0.2$ dex except during the final coalescence\slash starburst phase, when the fits are statistically unacceptable.
At early times, it is systematically underestimated.
In Sections \ref{S:methods} and \ref{S:isolated_disc}, we discussed the choice of $\chi^2$ threshold that we use to identify bad fits, a threshold that is exceeded during
the coalescence phase at the time of the peak starburst and AGN activity. That the threshold is exceeded here offers further support for our (necessarily somewhat arbitrary) choice of photometric errors:
using this $\chi^2$ threshold, we are able to get an acceptable fit to $\sim 95$ per cent of the snapshots, and it is only during the $\sim 5$ per cent of the simulation when the
starburst and AGN activity are most intense that we are unable to derive a good fit to the simulated photometry. This time period is when the physics of the galaxy and the \textsc{magphys}\xspace library are
most discrepant, and visual inspection of the `best-fitting' SEDs suggests that these fits should be rejected.\footnote{This effect also raises the tantalising possibility of using poor fits as a means
of identifying sources that have undergone recent mergers, though as discussed in \citet{Smith:2012}, there are several other possible reasons for poor fits (e.g. errors in the photometry, incorrect
cross-identification, and\slash or artificially narrow prior libraries).} To summarise, although we adopt uncertainties on the simulated
data out of necessity for the purposes of applying \textsc{magphys}\xspace rather than to mimic the physical effects that blight real data, the results that they produce seem at
least plausible, and the resulting threshold value appears to function broadly as expected.
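The acceptance criterion itself is simple; as a minimal sketch (the band fluxes, uncertainties, and threshold value below are placeholders, not the actual values used by \textsc{magphys}\xspace or in our analysis):

```python
import numpy as np

def chi2(f_obs, f_model, sigma):
    """Standard chi^2 between observed and model-predicted fluxes."""
    f_obs, f_model, sigma = map(np.asarray, (f_obs, f_model, sigma))
    return float(np.sum(((f_obs - f_model) / sigma) ** 2))

def fit_is_acceptable(f_obs, f_model, sigma, chi2_threshold):
    """Flag a fit as acceptable if chi^2 lies below the adopted threshold.

    chi2_threshold is a placeholder; in practice it is set from the
    expected chi^2 distribution for the number of bands fitted.
    """
    return chi2(f_obs, f_model, sigma) <= chi2_threshold
```

In our analysis this cut is applied per snapshot and viewing angle, and a fit exceeding the threshold is discarded rather than interpreted.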
Returning to the recovered parameters, the sSFR (panel f\xspace) is typically recovered within the uncertainties, although at early times,
the median-likelihood values from \textsc{magphys}\xspace can be as much as $\sim 0.5$ dex greater
than the true values because of the underestimate of $M_{\star}$ at these times.
The sSFR is slightly overestimated in the post-starburst phase because
of the overestimate of the SFR at these times.
The dust mass (panel e\xspace) is systematically underestimated by $\sim 0.2-0.5$ dex.
However, as for the isolated disc, a significant part of the underestimate is because in the simulation,
by construction, the dust in the `cold phase' of the sub-resolution ISM does not
absorb or emit radiation. We investigate and discuss this issue in detail in
Section \ref{S:mp_off}.
\begin{figure}
\centering
\includegraphics[trim=265 0 265 0,clip,width={0.85\columnwidth}]{./plots/compare_av_fiducial.eps} \\
\includegraphics[trim=530 0 0 0,clip,width={0.85\columnwidth}]{./plots/compare_av_fiducial.eps}
\caption{Median-likelihood $A_{\mathrm{V}}$ values from \textsc{magphys}\xspace versus true $A_{\mathrm{V}}$
for all snapshots of the merger simulation. In the top panel, the points
are coloured according to the viewing angle, whereas in the bottom panel,
the colours indicate the time relative to the peak of the final starburst. For
most of the simulation, \textsc{magphys}\xspace recovers the true $A_{\mathrm{V}}$ to within $\sim 0.2$
dex. However, near the time of the final starburst (the cyan points in the lower
panel), $A_{\mathrm{V}}$ can be underestimated by as much as $\sim 1$ magnitude.
Unlike for the isolated disc simulation, there is no significant viewing-angle
dependence.}
\label{fig:merger_attenuation}
\end{figure}
The time evolution of $A_{\mathrm{V}}$ is shown in panel (b)\xspace, but how well it is recovered
can be read more easily from Fig. \ref{fig:merger_attenuation}. For most of the
merger simulation, the true $A_{\mathrm{V}}$ is recovered to within $\sim 0.2$ mag. The average
offset is $\Delta A_{\mathrm{V}} = A_{\mathrm{V, true}} - A_{\mathrm{V, \textsc{magphys}\xspace}} = 0.106 \pm 0.213$ mag.
The upper panel of Fig. \ref{fig:merger_attenuation} indicates that, unlike for the
isolated disc simulation, there is no significant viewing-angle dependence because
there are no `special' viewing angles for this highly asymmetric system.
The lower panel shows that there is no systematic offset between the true and
inferred $A_{\mathrm{V}}$ values, except for during the time period near the final starburst (indicated by the light-blue symbols),
when the $A_{\mathrm{V}}$ tends to be underestimated by \textsc{magphys}\xspace, sometimes by greater than 1 magnitude; however, as we have previously mentioned, we are unable to
derive acceptable fits to the photometry during this stage of the merger.
The evolution of the \textsc{magphys}\xspace parameters that have no direct physical analogue in the simulations still provides some interesting insights into the physical
state of the simulated galaxies. The cold (panel k\xspace) and warm (panel l\xspace) dust temperatures are in the range $\sim 20-25$ and $\sim 36 - 60$ K, respectively.
Thus, they tend to be higher for the merger than for the isolated disc. The formal uncertainties are similar to those for the isolated disc, $\sim 1-2$ and $\sim 5-10$
K for $T_{\mathrm{C}}^{\mathrm{ISM}}$ and $T_{\mathrm{W}}^{\mathrm{BC}}$, respectively. Both temperatures increase sharply during the starburst induced at first pericentric
passage ($t - t({\rm SFR_{max}}) \sim -1.15$ Gyr); this behaviour reflects the increase in effective dust temperature (i.e. the shift of the IR SED
peak to shorter wavelength; e.g. \citealt{Smith:2013}) that is caused in starbursts primarily by the simultaneous sharp increase in luminosity and decrease in dust mass
\citep{Hayward:2011smg_selection}.\footnote{This effect is also seen around the peak associated with the merger coalescence, but the unacceptably high $\chi^2$
values during this stage of the simulation preclude any physical interpretation.} Although it is encouraging that the evolution of the \textsc{magphys}\xspace dust temperatures reflects
this physical effect, this result should be interpreted with caution because of the significant error bars associated with $T_{\mathrm{C}}^{\mathrm{ISM}}$ and $T_{\mathrm{W}}^{\mathrm{BC}}$ and the proximity of the
median-likelihood values to the bounds on the temperature priors (dotted grey horizontal lines in panels k\xspace and l\xspace).
The effective optical depths (panels i\xspace and j\xspace) are especially interesting. For most of the simulation, $\hat{\tau}_{\mathrm{V,ISM}} \la 1$. Interestingly, $\hat{\tau}_{\mathrm{V,ISM}}$ peaks during both starbursts, which is
physically reasonable because the emission at those times is dominated by relatively compact, obscured starbursts. The variation with viewing angle is small, but
it is greater than the formal uncertainty on $\hat{\tau}_{\mathrm{V,ISM}}$. The typical total optical depth $\hat{\tau}_{\mathrm{V}}$ is $\sim 2$, but this value varies significantly with time and viewing angle
and is very uncertain (i.e. at fixed time and viewing angle, the confidence interval can span the range $\hat{\tau}_{\mathrm{V}} \sim 0-4$).
The fraction of the total luminosity absorbed by the diffuse ISM, $f_{\mu}^{\rm SFH}$ (panel h\xspace), decreases sharply during the starbursts.
This result is consistent with the physical expectation that in starbursts, the dust luminosity is dominated by highly obscured young stars. This
is certainly true in the simulations, and it is impressive that the \textsc{magphys}\xspace parameter evolution reflects this effect, even when the fits are formally
unacceptable during the starburst induced at merger coalescence. However, the decrease in $f_{\mu}^{\rm SFH}$ during the starbursts may simply
be a consequence of the assumed prior because by construction, only the birth clouds contain dust with $T > 45$ K.
As for the isolated disc case, most of the modelled parameters show little or no
viewing-angle dependence once the uncertainties are taken into account. The only exception is $\hat{\tau}_{\mathrm{V,ISM}}$; as
explained above, this quantity should vary with viewing angle, whereas
quantities such as the SFR and stellar mass should not.
\ctable[
caption = {\textsc{sunrise}\xspace runs used to investigate systematic uncertainties.\label{tab:runs}},
center,
star,
notespar,
doinside=\small,
]{ll}{
\tnote[a]{Run designation.}
\tnote[b]{Description of the assumptions used in the \textsc{sunrise}\xspace calculations.}
}{
\FL
Designation\tmark[a] & Description\tmark[b] \ML
{\sf fiducial}\xspace & {\sc starburst99} SSP templates, AGN emission enabled, MW-type dust, default (clumpy) sub-resolution ISM model \NN
{\sf AGN-off}\xspace & AGN emission disabled (solely in the radiative transfer calculations; see footnote \ref{footnote:agn_off}) \NN
{\sf LMC-dust}\xspace & LMC-type dust used instead of MW-type dust \NN
{\sf SMC-dust}\xspace & SMC-type dust used instead of MW-type dust \NN
{\sf alternate-ISM}\xspace & Alternate sub-resolution ISM model (no sub-resolution clumpiness) \LL
}
\subsection{Systematic uncertainties} \label{S:systematic_uncertainties}
In this section, we present tests in which we performed additional \textsc{sunrise}\xspace radiative transfer calculations on the merger simulation. In each test, we varied
one of the assumptions in the \textsc{sunrise}\xspace calculation and kept all others identical to those of the default-parameter run. The assumptions used for these tests are summarised
in Table \ref{tab:runs}. These tests enable us to characterise how our ignorance of the underlying `microphysics', such as the details of stellar evolution and the dust grain
composition, affects the accuracy of the SED modelling.
\begin{figure}
\centering
\includegraphics[width={0.99\columnwidth}]{./plots/agn_fraction.eps} \\
\caption{Fractional contribution of the AGN(s) to the total 1-1000 $\micron$ luminosity versus time. The AGN contribution is most significant during the time
periods shortly after first pericentric passage and final coalescence. The maximum AGN contribution, $\sim 75$ per cent of the 1-1000 $\micron$ luminosity,
occurs $\sim 100$ Myr after the coalescence-induced starburst.}
\label{fig:agn_frac}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim= 0 377 531 11,clip,width={0.75\columnwidth}]{./plots/fitresults_agn_off.eps}\\
\includegraphics[trim= 354 377 177 0,clip,width={0.75\columnwidth}]{./plots/fitresults_agn_off.eps}
\caption{Selected results for the {\sf AGN-off}\xspace test. At the time of maximum AGN luminosity
[$t - t({\rm SFR_{max}}) \sim 0.2$ Gyr], the $\chi^2$ values for the best-fitting model (top panel) are less than for the {\sf fiducial}\xspace run, which demonstrates that AGN contamination hinders the ability
of \textsc{magphys}\xspace to obtain a satisfactory fit at the time when the AGN is most active. Note that the stellar mass (bottom panel) is recovered more accurately
than for the {\sf fiducial}\xspace run, which indicates that AGN contamination partially causes the overestimate of the stellar mass at that time in the {\sf fiducial}\xspace run.}
\label{fig:agn_off}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0 377 531 11,clip,width={0.75\columnwidth}]{./plots/fitresults_lmcavg_10.eps} \\
\includegraphics[trim=354 377 177 0,clip,width={0.75\columnwidth}]{./plots/fitresults_lmcavg_10.eps}
\caption{Selected results for the {\sf LMC-dust}\xspace test.
When LMC-type dust is used to calculate mock SEDs of the simulated merger, the $\chi^2$ values (top panel) and stellar mass (bottom panel) differ considerably from the {\sf fiducial}\xspace
case; for all other parameters, the time evolution does not differ significantly from the {\sf fiducial}\xspace case. For the {\sf LMC-dust}\xspace run, the $\chi^2$ values are slightly higher. The
median-likelihood stellar mass values are systematically $\sim 0.1$ dex greater, but for most of the evolution of the merger, they are still consistent with the true values.}
\label{fig:lmc}
\end{figure}
\subsubsection{AGN contamination} \label{S:AGN}
The {\sf fiducial}\xspace \textsc{sunrise}\xspace runs include emission from the AGN particles in the \textsc{gadget-3}\xspace simulation. The AGN luminosity varies considerably with time because it is
determined by the rate of gas inflow to the nuclear region(s) of the merging galaxies. The fractional contribution of the AGN(s) to the total 1-1000 $\micron$ luminosity
is shown in Fig. \ref{fig:agn_frac}. The contribution is most significant during the time periods shortly after the starbursts induced at first pericentric passage and final
coalescence, when the fractional contribution reaches $\sim 25$ and $\sim 75$ per cent, respectively. Thus, near those times, the AGN emission has a significant effect on the
simulated SEDs (see \citealt{Snyder:2013} for a detailed study). Because \textsc{magphys}\xspace does not include a treatment of AGN emission, it is possible that the AGN emission
can affect the ability of \textsc{magphys}\xspace to obtain a satisfactory fit and infer accurate parameters during the time periods of the simulation when the AGN contribution is significant.
Thus, it is worthwhile to check how the results differ when the AGN emission is not included in the radiative transfer calculations.
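Computing this fractional contribution amounts to integrating the AGN and total SEDs over the 1-1000 $\micron$ range; a minimal sketch (function and variable names are illustrative, not from the actual \textsc{sunrise}\xspace analysis):

```python
import numpy as np

def _integrate(x, y):
    """Trapezoidal integral of y(x)."""
    x, y = np.asarray(x), np.asarray(y)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def agn_fraction(wavelength_um, l_lambda_agn, l_lambda_total):
    """Fractional AGN contribution to the integrated 1-1000 micron luminosity."""
    w = np.asarray(wavelength_um)
    mask = (w >= 1.0) & (w <= 1000.0)
    l_agn = _integrate(w[mask], np.asarray(l_lambda_agn)[mask])
    l_tot = _integrate(w[mask], np.asarray(l_lambda_total)[mask])
    return l_agn / l_tot
```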
\begin{figure*}
\centering
\includegraphics[trim=0 377 531 11,clip,width={0.50\columnwidth}]{./plots/fitresults_smc_bar.eps}
\includegraphics[trim=354 377 177 0,clip,width={0.50\columnwidth}]{./plots/fitresults_smc_bar.eps}
\includegraphics[trim=354 188 177 189,clip,width={0.50\columnwidth}]{./plots/fitresults_smc_bar.eps}
\includegraphics[trim=531 188 0 189,clip,width={0.50\columnwidth}]{./plots/fitresults_smc_bar.eps}
\caption{Selected results for the {\sf SMC-dust}\xspace test. When SMC-type dust is used, the quality of the \textsc{magphys}\xspace fits decreases considerably, as indicated by the systematically
greater $\chi^2$ values (leftmost panel) compared with the {\sf fiducial}\xspace case. For much of the evolution of the merger, the fits are only marginally acceptable or unacceptable.
The median-likelihood stellar mass values (second panel from left) agree much less well with the true values than for the {\sf fiducial}\xspace run; although the fits are formally acceptable
for $-0.8 \la t - t({\rm SFR_{max}}) \la -0.1$ Gyr, \textsc{magphys}\xspace underestimates the stellar mass by as much as $\sim 0.3$ dex. The inferred SFR (third panel from left) can be severely incorrect: for
$-1.6 \la t - t({\rm SFR_{max}}) \la -1.2$ Gyr, \textsc{magphys}\xspace infers an SFR of zero for most viewing angles when the true SFR is $\sim 20 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$, which is likely because \textsc{magphys}\xspace
attributes all of the FIR emission to the diffuse ISM rather than the stellar birth clouds (i.e. $f_{\mu}^{\rm SFH} = 1$, as indicated in the rightmost panel).}
\label{fig:smc}
\end{figure*}
For the merger simulation, we performed a \textsc{sunrise}\xspace run in which we artificially set the AGN luminosity to zero (the {\sf AGN-off}\xspace run); by comparing the results of this run with the {\sf fiducial}\xspace
run, we can determine how AGN contamination affects the \textsc{magphys}\xspace results.\footnote{\label{footnote:agn_off}Because we used the same \textsc{gadget-3}\xspace simulation, which includes black hole
accretion and thermal AGN feedback, this \textsc{sunrise}\xspace calculation is technically not physically self-consistent. However, the virtue of this test is that it enables us to quantify the impact
of the AGN emission on the simulated galaxy SEDs with all else (e.g. the SFH and galaxy geometry, which would be altered had we disabled black hole accretion and AGN feedback
in the \textsc{gadget-3}\xspace simulation) being equal. See \citealt{Snyder:2013} for a detailed analysis of similar tests.} For most parameters, the \textsc{magphys}\xspace results for the {\sf fiducial}\xspace and
{\sf AGN-off}\xspace runs do not differ significantly. However, the differences in the $\chi^2$ values and recovered stellar masses are of interest. The time evolution of these two quantities for
the {\sf AGN-off}\xspace run are shown in Fig. \ref{fig:agn_off}. The $\chi^2$ values at the peak of the starburst and AGN activity [$-0.1 \la t - t({\rm SFR_{max}}) \la 0.2$ Gyr] are less when the
AGN emission is disabled, which indicates that AGN contamination is part of the reason that \textsc{magphys}\xspace did not yield satisfactory fits for the {\sf fiducial}\xspace run during that
phase of the merger (however, the fits are still formally unacceptable at this time for the {\sf AGN-off}\xspace run). For the {\sf AGN-off}\xspace run, the stellar mass is recovered more accurately
than for the {\sf fiducial}\xspace case (compare panel c\xspace of Fig. \ref{fig:merger} and the bottom panel of Fig. \ref{fig:agn_off}).
Thus, AGN contamination partially causes the overestimate of the stellar mass in the {\sf fiducial}\xspace run during the coalescence phase of the merger.
Still, it is reassuring that although \textsc{magphys}\xspace does not account for AGN emission, most of the recovered parameters are robust to AGN contamination.
Even when the AGN contributes $\sim 25$ per cent of the UV--mm luminosity [at $t - t({\rm SFR_{max}}) \sim -1$ Gyr], \textsc{magphys}\xspace is able to obtain acceptable fits to the photometry
and recover the parameters accurately.\footnote{Note that the lack of MIR photometry may be partially responsible for the
robustness of the results to AGN contamination. However, in a similar test, \citet{Michalowski2014} included mock MIR photometry and also found that the stellar
masses were robust to AGN contamination.}
\subsubsection{Dust grain composition} \label{S:dust}
The dust properties are another source of uncertainty in the SED modelling procedure because the dust composition and grain-size distribution
affect the attenuation curve and shape of the dust SED. Dust in the ISM is a complex topic (see \citealt{Draine:2003araa} for a review): even for the
Milky Way,
Large Magellanic Cloud (LMC), and Small Magellanic Cloud (SMC), it is far from trivial to determine the detailed dust properties in a given region of the ISM.
Furthermore, the dust properties are likely very different in different regions of the ISM of a galaxy; for example, it is possible that typical grain sizes are greater
in higher-density regions \citep[e.g.][]{Kelly:2012}. Naturally, dust in high-redshift galaxies is even less well understood than dust in local galaxies, and its
properties may vary significantly with redshift because of, e.g., differences in the timescales of the various dust-production channels
\citep[e.g.][]{Valiante:2009,Michalowski:2010production}. Indeed, there is some observational evidence that dust in high-redshift galaxies differs from that in the Milky Way
(e.g. \citealt{Buat:2011,Buat:2012}; \citealt*{Kriek:2013}; \citealt{Aller2014}). Thus, dust is a potentially significant uncertainty inherent in SED modelling that cannot be ignored.
Because we typically do not have a detailed understanding of a galaxy's dust properties when fitting its SED, an empirically supported attenuation curve is typically assumed;
at best, one can use a flexible attenuation curve parameterisation, as is done in \textsc{magphys}\xspace, or multiple attenuation curves to help characterise the significance of this uncertainty.
We have investigated this uncertainty by varying the intrinsic dust properties used in the \textsc{sunrise}\xspace calculation,
which affect both the effective attenuation curve and the shape of the FIR SED. In addition to the default
MW model, we performed \textsc{sunrise}\xspace runs in which the \citet{Draine:2007kk} LMC and SMC models were used. Because the attenuation curve used by
\textsc{magphys}\xspace was not changed, these tests mimic the situation in which the dust properties assumed when fitting a galaxy's SED do not correspond to the
true dust properties of the galaxy.
\begin{figure*}
\centering
\includegraphics[width=1.95\columnwidth,trim=0cm 0cm 0cm 0.65cm,clip]{./plots/nslss_raff_z0_sn016_sedplot.eps}
\caption{Example SED fits for the $t - t({\rm SFR_{max}}) = -1.47$ Gyr snapshot of the {\sf SMC-dust}\xspace run; see the
caption of Fig. \ref{fig:sed_example} for a complete description of what is plotted. Unlike
for the {\sf fiducial}\xspace case, the intrinsic SEDs inferred by \textsc{magphys}\xspace (blue lines) differ significantly
from the true intrinsic SED (grey line), even though \textsc{magphys}\xspace yields acceptable fits
to the synthetic photometry. Consequently, for some parameters (e.g. the SFR),
\textsc{magphys}\xspace recovers the true values poorly.}
\label{fig:smc_sed}
\end{figure*}
Fig. \ref{fig:lmc} shows selected results for the {\sf LMC-dust}\xspace run.
The results for when LMC-type dust was used in the radiative transfer calculations
are similar to those of the {\sf fiducial}\xspace case, for which the MW dust model
was used. The $\chi^2$ values (top panel)
for the {\sf LMC-dust}\xspace run tend to be greater than for the
corresponding snapshot of the {\sf fiducial}\xspace run, but the fits are still acceptable
except during the near-coalescence phase of the merger.
The only parameter that differs noticeably is the stellar mass: the median-likelihood values yielded by \textsc{magphys}\xspace for the {\sf LMC-dust}\xspace run
are marginally ($\sim 0.1$ dex) greater than for the {\sf fiducial}\xspace run.
Fig. \ref{fig:smc} presents selected results for the {\sf SMC-dust}\xspace case, in which SMC-type
dust was used in the radiative transfer calculations rather than the default
MW-type dust. In this case, the $\chi^2$ values (leftmost panel) are significantly
greater than for the {\sf fiducial}\xspace case, and for most mock SEDs, \textsc{magphys}\xspace
yields fits that are only marginally acceptable or unacceptable.
For almost all snapshots, the median-likelihood values for the stellar mass
(second panel from left) differ from the true values by $\sim 0.2-0.4$ dex, even
when the fits are formally acceptable [e.g. $-0.8 \la t - t({\rm SFR_{max}}) \la -0.2$ Gyr].
The median-likelihood values for the SFR (third panel from left) and sSFR (not shown)
can also differ significantly. Most notably, for most mock SEDs from the time
period $-1.6 \la t - t({\rm SFR_{max}}) \la -1.2$ Gyr, when the fits are still formally acceptable (although
the $\chi^2$ values are often very close to the threshold for an acceptable fit),
\textsc{magphys}\xspace infers an SFR of zero, but the true
value is $\sim20 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$. The reason for this considerable error can be
understood from the rightmost panel of Fig. \ref{fig:smc}, which shows
$f_{\mu}^{\rm SFH}$, the fraction of stellar luminosity that is absorbed by the diffuse ISM
rather than the birth clouds. When the inferred SFR is zero, $f_{\mu}^{\rm SFH} = 1$, which
indicates that \textsc{magphys}\xspace has attributed the considerable FIR luminosity
($L_{\rm IR} > 10^{11} ~\mathrm{L_{\odot}}$; see panel d\xspace of Fig. \ref{fig:merger}) exclusively
to older stellar populations.
To understand the origin of this discrepancy, it is
instructive to investigate how well \textsc{magphys}\xspace is able to recover the true intrinsic
SEDs. Fig. \ref{fig:smc_sed} shows an example of the SED fits for the $t - t({\rm SFR_{max}}) = -1.47$
Gyr snapshot of the {\sf SMC-dust}\xspace case. This figure is similar to Fig.
\ref{fig:sed_example}, which shows SED fits for the {\sf fiducial}\xspace case (refer to the
caption of Fig. \ref{fig:sed_example} for full details regarding what is plotted). For all viewing angles,
\textsc{magphys}\xspace yields acceptable fits to the synthetic photometry. However, the intrinsic
SEDs inferred by \textsc{magphys}\xspace (blue lines) differ considerably from the true intrinsic
SED (grey line). \textsc{magphys}\xspace tends to underestimate (overestimate) the intrinsic UV
(optical through NIR) emission. Because the true intrinsic SED is not recovered
well by \textsc{magphys}\xspace (because the attenuation curve inferred by \textsc{magphys}\xspace differs significantly from the true
attenuation curve; see below), it is unsurprising that the parameter values
yielded by \textsc{magphys}\xspace can differ significantly from the true values. We suggest that this may be less likely to occur for
real observations of actual galaxies than it is in our simulations because the inevitable addition of photometric measurement
errors should ensure larger $\chi^2$ values, thereby making these discrepant fits unacceptable. Indeed, it may also be possible
to `tune' the arbitrary photometric errors assumed in the fitting to alleviate this potential issue, but we make no attempt to do so here.
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth,trim=0cm 0cm 0cm 0.77cm,clip]{./plots/nslss_raff_z0_sn016_ext_comparison.eps}
\caption{Comparison of the true attenuation curves (dashed lines) and attenuation curves
inferred by \textsc{magphys}\xspace (dotted lines) for the $t - t({\rm SFR_{max}}) = -1.47$ Gyr snapshot of the {\sf SMC-dust}\xspace run. The different
colours correspond to different viewing angles. The black solid line denotes the intrinsic SMC-type
opacity curve (with arbitrary normalisation) that is used in the radiative transfer calculations.
Generically, the true attenuation curves are significantly steeper than those inferred by \textsc{magphys}\xspace.
Consequently, for the {\sf SMC-dust}\xspace case, \textsc{magphys}\xspace is unable to effectively correct for dust, and the
unattenuated SED inferred by \textsc{magphys}\xspace differs considerably from the true intrinsic SED, as
shown in Fig. \ref{fig:smc_sed}. Thus, the recovered values of parameters such as the SFR can
differ significantly from the true values.}
\label{fig:smc_attenuation_curve}
\end{figure}
Fig. \ref{fig:smc_attenuation_curve} shows a comparison of the true attenuation
curve and the attenuation curve inferred by \textsc{magphys}\xspace for each viewing angle
for the $t - t({\rm SFR_{max}}) = -1.47$ Gyr snapshot of the {\sf SMC-dust}\xspace run (for which the SEDs are shown in Fig. \ref{fig:smc_sed});
the intrinsic SMC dust opacity curve (which has been arbitrarily normalised) is also plotted for comparison.
For all viewing angles, the true attenuation curve is significantly steeper than that inferred by \textsc{magphys}\xspace.
Consequently, even if the $A_{\mathrm{V}}$ value recovered by \textsc{magphys}\xspace is accurate, \textsc{magphys}\xspace will under-correct
(over-correct) for dust attenuation in the UV (optical through NIR). This effect explains why the
intrinsic SED tends to be underestimated (overestimated) in the UV (optical through NIR), as
shown in Fig. \ref{fig:smc_sed} and described above.
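The sense of this error follows directly from how an attenuation correction is applied: the intrinsic flux is recovered by multiplying the observed flux by $10^{0.4 A_\lambda}$, so if the assumed curve is shallower than the true one, the UV is under-corrected even when $A_{\mathrm{V}}$ itself is accurate. A minimal sketch (the band attenuations below are illustrative placeholders):

```python
import numpy as np

def deredden(f_obs, a_lambda_mag):
    """Recover intrinsic flux from observed flux and attenuation in magnitudes."""
    return np.asarray(f_obs) * 10.0 ** (0.4 * np.asarray(a_lambda_mag))

# Illustrative UV-band example: same observed flux, but the true (steeper)
# curve implies more UV attenuation than the assumed (shallower) one.
f_obs_uv = 1.0
a_uv_true, a_uv_assumed = 2.0, 1.5   # placeholder values in magnitudes
f_int_true = deredden(f_obs_uv, a_uv_true)
f_int_assumed = deredden(f_obs_uv, a_uv_assumed)
# f_int_assumed < f_int_true: the intrinsic UV luminosity is under-corrected.
```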
Recall that the shape of the attenuation curve in \textsc{magphys}\xspace depends on the assumptions
of the CF00 model, in which the optical depth scales as $\lambda^{-1.3}$ in the `birth clouds'
and $\lambda^{-0.7}$ in the `diffuse ISM'. In the simulations, the attenuation curve that results for
a given snapshot and viewing angle depends not only on the intrinsic opacity curve of the dust
but also on the spatial distribution of stars and dust, which results in differential attenuation, and on spatial
variations in age and metallicity, which cause the intrinsic emission to spatially vary. Thus,
it is perhaps unsurprising that the attenuation curves of the simulated galaxies are sometimes not
described effectively by the standard CF00 model.\footnote{However, it may be possible to better
correct for attenuation using the CF00 model by allowing the power-law indices of
$\hat{\tau}_{\lambda}^{\rm{BC}}$ and $\hat{\tau}_{\lambda}^{\rm{ISM}}$ to vary and marginalising over this additional uncertainty.}
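For reference, the two-component CF00 parameterisation assigns separate power laws to the two media (we quote the commonly used form here; normalisation conventions can differ slightly between implementations):

```latex
\begin{align}
\hat{\tau}_{\lambda}^{\rm BC}  &= (1-\mu)\,\hat{\tau}_{\mathrm{V}}\,
                                  \left(\lambda/5500\,\mbox{\AA}\right)^{-1.3},\\
\hat{\tau}_{\lambda}^{\rm ISM} &= \mu\,\hat{\tau}_{\mathrm{V}}\,
                                  \left(\lambda/5500\,\mbox{\AA}\right)^{-0.7},
\end{align}
```

where $\mu$ is the fraction of the total effective $V$-band optical depth contributed by the diffuse ISM and $\hat{\tau}_{\mathrm{V}}$ is the total effective $V$-band optical depth seen by young stars in their birth clouds.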
The results for the {\sf SMC-dust}\xspace and, to a lesser extent, {\sf LMC-dust}\xspace cases highlight the difficulty of accurately
correcting for dust attenuation. Unfortunately, our understanding of the dust grain composition
is still limited, even for relatively nearby galaxies \citep{Amanullah2014,Patat2014}, and there is
evidence that galaxy attenuation curves can systematically vary with galaxy properties
\citep[e.g.][]{Buat:2012,Kriek:2013}. Thus, dust is likely to remain a significant uncertainty in SED modelling for
some time, and one should interpret results that depend sensitively on the assumed attenuation
curve with caution.
\subsubsection{The presence of very cold dust} \label{S:mp_off}
\begin{figure}
\centering
\includegraphics[trim=0 188 531 189,clip,width={0.75\columnwidth}]{./plots/fitresults_mp_off.eps}\\
\includegraphics[trim=0 0 531 378,clip,width={0.75\columnwidth}]{./plots/fitresults_mp_off.eps}
\caption{Selected results for the {\sf alternate-ISM}\xspace test, in which all dust in the simulated ISM (rather than just that in the diffuse ISM)
was used. The dust mass (top) is recovered significantly more accurately than for the {\sf fiducial}\xspace case, although it is still underestimated by
$\la 0.2$ (as much as $\sim 0.6$) dex during the phase between first pericentric passage and final coalescence (post-starburst phase).
Furthermore, the total optical depth (bottom) and $A_{\mathrm{V}}$ values (not shown) are greater than in the {\sf fiducial}\xspace case, which reflects
the fact that the attenuation along any line of sight is guaranteed to be greater in this case than when the default ISM model is used.}
\label{fig:mp_off}
\end{figure}
The \citet{Springel:2003} sub-resolution model implicitly splits the gas contained in a given SPH particle into cold, dense clouds (which contain the bulk of the mass
but have a relatively low volume filling fraction) and a diffuse phase. In the default ISM treatment in \textsc{sunrise}\xspace, it is assumed that the cold phase has negligible volume
filling fraction. Thus, the dust associated with the cold phase does not absorb photons and consequently does not emit radiation. This sub-resolution model
is used as the default model in \textsc{sunrise}\xspace because the real ISM of galaxies is certainly not smooth on $\sim 100$-pc scales. Unfortunately, the resolution of our simulations and
sub-resolution ISM model used prevent us from resolving the full phase structure of the ISM. Simply ignoring the dust in the cold phase of the ISM is a crude
approach for treating this unresolved clumpiness. However, for the purpose of testing how well \textsc{magphys}\xspace can recover the dust mass, it is clearly undesirable
to include dust that does not absorb or emit radiation by construction.
To investigate the uncertainty in the results that is associated with unresolved clumpiness in the ISM, it is also possible to use the dust associated with
both phases of the sub-resolution ISM, which may be more appropriate in some regimes; see \citet{Jonsson:2010sunrise}, \citet{Hayward:2011smg_selection},
\citet{Snyder:2013}, and \citet{Lanz2014} for detailed discussions of this issue. We refer to this alternate treatment of sub-resolution clumpiness of the ISM as
the `alternate ISM' (or `multiphase-off' in the parlance of \citealt{Hayward:2011smg_selection}) model.
When the alternate ISM model is used, each cell in the \textsc{sunrise}\xspace grid contains the same or a greater mass of dust as in the {\sf fiducial}\xspace case; for regions of high
gas density (e.g. the central starburst), the difference can be an order of magnitude. Consequently, when the alternate ISM model is used, the attenuation along
any line of sight is greater, and the dust temperatures tend to be colder because of dust self-absorption.
Fig. \ref{fig:mp_off} shows the results for the merger simulation when we use the alternate ISM model in the \textsc{sunrise}\xspace calculation, which we refer to as the
{\sf alternate-ISM}\xspace case. The trends for most \textsc{magphys}\xspace parameters are qualitatively the same as for the {\sf fiducial}\xspace case. However, there are a few interesting differences.
Most importantly, the dust mass (top panel) is recovered significantly more accurately than for the {\sf fiducial}\xspace run shown in Fig. \ref{fig:merger}. The reason
for the superior agreement is that when the alternate ISM model is used in the \textsc{sunrise}\xspace calculation, all of the dust in the simulated galaxies can potentially
absorb and emit radiation. In the {\sf fiducial}\xspace run, the dust in the sub-resolution cold clouds does not absorb or reemit any radiation but is still counted as part of
the total dust mass.
However, even in the {\sf alternate-ISM}\xspace case, the dust mass inferred by \textsc{magphys}\xspace is slightly ($\la 0.2$ dex) underestimated during the phase between
first pericentric passage and final coalescence, and it is significantly underestimated (by up to $\sim 0.6$\,dex) during the post-starburst phase. The likely
reason for the remaining underestimate is that after the strong starburst and AGN activity, much of the remaining gas (and thus dust) is contained in a low-density, extended
hot halo. Consequently, the optical depth through this halo is very low, and the dust contained in it absorbs little radiation. As a result, a significant fraction of the dust
cannot be detected via emission. Furthermore, the total optical depth (bottom panel) and $A_{\mathrm{V}}$ values (not shown) are typically greater than
in the {\sf fiducial}\xspace case. This result demonstrates that \textsc{magphys}\xspace qualitatively captures the key physical difference between the two runs.
Unfortunately, the need to use a sub-resolution ISM model in the \textsc{sunrise}\xspace calculations precludes us from determining which case is more correct.
Ideally, use of the next generation of galaxy simulations, which are able to achieve parsec-scale resolution
\citep[e.g.][]{Hopkins:2013mergers,Hopkins:2013FIRE,Hopkins:2013merger_winds}, may eliminate this uncertainty.
\section{Discussion} \label{S:discussion}
\subsection{Dependence on viewing angle}
One of the strengths of our approach is its ability to test how, for a given simulated galaxy, the results of SED modelling vary with viewing angle. Ideally,
estimates of intrinsic physical parameters of a galaxy, such as the SFR and stellar mass, should be insensitive to the perspective from which the galaxy
is observed. For almost all \textsc{magphys}\xspace parameters plotted in this work, for a given simulation snapshot, the median-likelihood values vary with viewing angle
by less than the uncertainty; thus to all intents and purposes, viewing-angle effects do not cause systematic errors in the results.
This result is naturally quite reassuring because for real galaxies, only one viewing angle is available.
Some parameters (primarily $\hat{\tau}_{\mathrm{V,ISM}}$) do vary with viewing angle, but, insofar as the parameters can be interpreted physically, they should depend
on viewing angle. Thus, this variation is not a cause for concern.
\subsection{Other potential sources of error}
In this section, we will briefly discuss other potential sources of error in SED modelling. This issue will be investigated in greater detail in future work.
\subsubsection{Photometric uncertainties}
Throughout this work, no noise was added when generating the mock photometry. Thus, the tests represent the ideal situation in which there are no observational
uncertainties and the inherent physical uncertainties (i.e. those that originate from discrepancies between the model assumptions and reality) are the only source
of discrepancies between the inferred and true parameters (i.e. they are the sole contributors to the best-fit $\chi^2$). These tests are useful for understanding the
fundamental limitations of the method that cannot be
overcome through the use of more-accurate photometry, but they are clearly unrepresentative of the real-world process of modelling galaxy SEDs. Consequently,
it is worthwhile to examine the effects of including observational uncertainties when generating the mock photometry.
We performed a series of tests in which we added a simple Gaussian noise model to the mock photometry for the {\sf fiducial}\xspace run, and used \textsc{magphys}\xspace to fit the noisy photometry with the same assumed errors discussed in section \ref{S:sims} (similar tests were performed in \citealt{Smith:2012} to validate the consistency of \textsc{magphys}\xspace by feeding it photometry derived from several of the best-fitting SEDs with
simulated Gaussian measurement errors superposed). As expected, the $\chi^2$ values
were greater than for the noiseless case, and \textsc{magphys}\xspace did not yield a statistically acceptable fit for a significantly greater number of mock SEDs.
However, the recovered median-likelihood values for the physical parameters did not differ qualitatively (although the confidence intervals became wider),
and the qualitative evolution of the various physical parameters of the simulation was captured just as well as for the {\sf fiducial}\xspace case. This result suggests that
the median-likelihood parameter values yielded are robust to the inclusion of realistic random uncertainties and demonstrates the effectiveness of the
Bayesian fitting method employed by \textsc{magphys}\xspace.
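The perturbation step described above can be sketched as follows; the band fluxes, number of bands and fractional uncertainties below are placeholder values for illustration, not the actual \textit{H}-ATLAS setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder mock fluxes (arbitrary units) and assumed fractional
# uncertainties per band; the real bands and error model follow the
# survey setup described in the text.
fluxes = np.array([1.2e-4, 3.5e-4, 2.1e-3, 9.8e-3, 4.4e-2])
frac_err = np.array([0.10, 0.07, 0.05, 0.07, 0.12])

def perturb(fluxes, frac_err, rng):
    """Superpose zero-mean Gaussian noise with per-band sigma = frac_err * flux."""
    return fluxes + rng.normal(0.0, frac_err * fluxes)

noisy = perturb(fluxes, frac_err, rng)
```

The perturbed photometry is then fitted exactly like the noiseless mock photometry, with the same assumed errors.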
\subsubsection{SED coverage}
The results of SED modelling can potentially depend on the wavelength sampling of the photometry used \citep[e.g.][]{Smith:2012,Pforr:2012,Pforr:2013}.
In this work, the photometric bands used are those that were available for the initial \textit{H}-ATLAS investigations, which provide relatively
good coverage of the SEDs in the UV--NIR and FIR; MIR data are noticeably absent. Including MIR data could potentially change the results
significantly. For example, MIR data may help to better constrain the relative contributions of young and old stellar populations to the dust heating.
However, the MIR tends to be sensitive to the presence of AGN (see e.g. \citealt{Snyder:2013} for a detailed discussion). Thus, inclusion of MIR
data could make it significantly more difficult to fit the synthetic SEDs using \textsc{magphys}\xspace.
Because galaxy surveys vary considerably in
terms of the available photometry, it would be worthwhile to investigate the effects of varying the photometry used in the SED modelling.
As a first test, we investigated the effects of excluding the PACS photometry. Although the agreement was generally good in the comparatively
quiescent phases of the simulation, the most significant discrepancy was that the IR luminosity and SFR
were underestimated by $\sim 0.5$ dex during the starburst that occurs at first passage (but it is possible that this underestimate could be
corrected by modifying the priors; E. da Cunha, private communication). This
test further highlights the importance of the available photometry sampling the peak of the temperature-dependent SED for the purposes of recovering the true dust luminosity,
in agreement with the investigation by \citet{Smith:2013}, which used isothermal models to fit the dust SEDs of {\it H}-ATLAS galaxies.
\subsubsection{Emission lines}
Another potential source of uncertainty is the contribution of nebular emission lines (which are typically not accounted for by SED modelling
codes) to the broadband photometry \citep[e.g.][]{Charlot:2001,Schaerer:2009,Pacifici:2012,Schenker:2013,Stark:2013}.
At certain redshifts, especially
$z \sim 6-7$, not accounting for contamination from nebular emission can cause the stellar ages \citep{Schaerer:2009} and stellar masses
\citep{Schenker:2013,Stark:2013} to be overestimated. Our simulated SEDs include nebular emission lines; thus, they can contribute
to the broadband photometry. Indeed, the contribution of H$\alpha$ emission is the cause for the larger residuals near the $i$ band that can be
observed in Fig. \ref{fig:sed_example}, and this effect is also often seen in the SED fits of {\it H}-ATLAS\ galaxies in \citet{Smith:2012}. \textsc{magphys}\xspace
is able to consider H$\alpha$ emission as part of the input data set (although these data were unavailable at the time that \citealt{Smith:2012} was written);
we defer a detailed investigation of the influence of emission lines on the derived SED parameters to a future investigation.
\subsection{Applicability of the results to other SED modelling codes}
It is important to keep in mind that we have only employed one SED modelling code, \textsc{magphys}\xspace, which has multiple advantages,
including the following: 1. It utilizes
the full UV--mm SED, and the information contained in the dust emission can potentially break degeneracies that could not be addressed
using UV--NIR data alone. 2. The underlying SFHs are continuous SFHs with superimposed random bursts. Consequently, it is not subject
to the potential systematic errors that are associated with single-component SFHs \citep[e.g.][]{Michalowski:2012,Michalowski2014}.
3. Through its use of the CF00 dust attenuation model, differential attenuation of young stellar populations can be
(approximately) accounted for.
Because \textsc{magphys}\xspace represents a relatively sophisticated, state-of-the-art SED modelling code, its success at recovering the physical properties
of our simulated galaxies cannot be generalised to all SED modelling codes. Thus, it would be worthwhile to perform similar tests for other
commonly used SED modelling codes. As a first step,
\citet{Michalowski2014} tested the ability of multiple SED modelling codes to recover the stellar masses of simulated submm galaxies
(SMGs). They found that as long as a single-component SFH was not used, all of the codes were able to accurately recover the stellar masses, albeit with a
factor of $\sim 2$ uncertainty. However, this work was deliberately limited in scope to the stellar masses of SMGs, and a more comprehensive
comparison of SED modelling codes is warranted.
\subsection{Recommendations for applying SED modelling codes}
We have demonstrated that for the {\sf fiducial}\xspace runs, \textsc{magphys}\xspace recovered the true physical parameter values of the simulated galaxies well.
However, uncertainties in the `microphysics', especially regarding the dust attenuation law, can cause serious discrepancies between
the median-likelihood parameter values output by \textsc{magphys}\xspace and the true values \emph{even when the fits are formally acceptable}
(although as we have discussed in section \ref{S:dust}, this should only affect a small fraction of SED fits).
Consequently, for real galaxies, for which e.g. the dust attenuation law or IMF may vary with galaxy properties, there is a risk of making
significant errors for some subset of the observed galaxy population when attempting to recover the physical parameters of the galaxies
through SED modelling.
Because SED modelling is now applied to datasets that contain hundreds to hundreds of thousands of galaxies, it is infeasible to check
the individual fits one-by-one to search for irregularities. One approach for avoiding significant mis-estimates of physical parameters
would be to use a significantly more conservative $\chi^2$ threshold than what was used in this work. However, this would result in
discarding many galaxies for which the vast majority of fits are acceptable and the parameters are well recovered, which is clearly undesirable.
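A minimal sketch of such a screening step, for illustration only; the threshold value is a hypothetical placeholder, and in practice it should follow from the number of photometric bands and the accepted false-rejection rate.

```python
import numpy as np

def chi2_stat(obs, model, sigma):
    """Standard chi-squared of a model SED against observed photometry."""
    obs, model, sigma = map(np.asarray, (obs, model, sigma))
    return float(np.sum(((obs - model) / sigma) ** 2))

def flag_for_inspection(chi2_values, threshold):
    """Return indices of fits exceeding the chosen chi^2 threshold.
    Rather than discarding these galaxies outright, they are flagged
    for individual inspection or cross-checks with other methods."""
    return np.flatnonzero(np.asarray(chi2_values, dtype=float) > threshold)
```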
Perhaps the best approach is to broaden the experimental priors in an attempt to `marginalise' over our ignorance. This could be achieved, for
example, by comparing the results derived using multiple distinct SED modelling
approaches; ideally, the approaches should utilise different assumptions about e.g. the dust attenuation (see e.g. \citealt{Bolzonella:2000},
\citealt{Burgarella:2005}, and \citealt{Buat:2011,Buat:2012} for examples). Furthermore, simpler techniques, such
as using empirical laws to estimate the SFR from $L_{\rm IR}$ or radio continuum luminosity, should also be used; although these certainly have their own
caveats, they can still provide additional insight, and current `panchromatic' SED fitting codes lack the machinery to include radio continuum data
in their analyses. For objects for which the results of
different SED modelling approaches and/or simpler techniques differ, one should interpret the results with caution and investigate further.
Such disagreements are especially likely for galaxies that differ significantly from the galaxies that were used for validation of the model,
as is the case for \textsc{magphys}\xspace (with the standard priors) and submm galaxies \citep{Rowlands2014}.
Such a multi-faceted validation may seem tedious, and it would naturally require more human effort and computational time. However, we
believe that the additional investment will be rewarded with significantly more robust results, or, at the least, a determination of the
types of galaxies for which (some of) the physical properties must remain `known unknowns' for the time being.
\section{Conclusions} \label{S:conclusions}
By applying the SED modelling code \textsc{magphys}\xspace to synthetic photometry generated by performing dust radiative transfer on hydrodynamical simulations of
an isolated disc galaxy and a galaxy merger, we have investigated how well \textsc{magphys}\xspace can recover the intrinsic properties of the simulated galaxies. Our
principal conclusions are the following:
\begin{enumerate}
\item For the isolated disc galaxy simulation, \textsc{magphys}\xspace yields acceptable fits at all times. The $V$-band attenuation, stellar mass, dust luminosity, SFR, and sSFR
are recovered accurately. The dust mass is systematically underestimated, but whether this underestimation will occur for real galaxies is unclear (see conclusion vii).
\item For the galaxy merger simulation, when the assumptions regarding the IMF, SSP models, and dust composition in \textsc{magphys}\xspace and the dust radiative transfer
calculations are similar, \textsc{magphys}\xspace yields acceptable fits and recovers all parameters other than the dust mass well, except during the near-coalescence phase of
the merger, when the starburst and AGN activity are most intense. During this phase, the fits are often not formally acceptable, but most parameters are
still recovered reasonably well.
\item For most parameters, the variation in the median-likelihood values with
viewing angle is less than the uncertainty for a single viewing angle. For
parameters that should depend on viewing angle, such as $\hat{\tau}_{\mathrm{V,ISM}}$, the variation
with viewing angle can be greater than the uncertainty for a single viewing angle.
\item Although \textsc{magphys}\xspace does not include AGN emission, the galaxy properties that we infer are generally unaffected by AGN contamination.
Even when the AGN contributes as much as 25 per cent of the UV--mm luminosity, \textsc{magphys}\xspace can obtain statistically acceptable fits to the photometry
and recover the parameters accurately.
\item When either LMC- or SMC-type (rather than the default MW-type) dust is used to perform radiative transfer to calculate the mock photometry,
\textsc{magphys}\xspace recovers some parameters
less well. For the {\sf LMC-dust}\xspace case, the median-likelihood stellar mass values are $\sim 0.1$ dex greater but still consistent with the true values within the uncertainties. When
SMC-type dust is used, \textsc{magphys}\xspace yields marginally acceptable or unacceptable fits for the majority of the mock SEDs. Most
notably, for some snapshots for which the SFR is $\sim 20 ~\mathrm{M_{\odot} {\rm ~yr}^{-1}}$, \textsc{magphys}\xspace yields median-likelihood SFR values of zero even though the fits
are formally acceptable.
\item The amount by which the dust mass is underestimated depends on the sub-resolution ISM model used in the radiative transfer calculations. In the best-case
scenario, \textsc{magphys}\xspace recovers the dust mass well during the first-passage and coalescence phases of the merger but underestimates it by $\sim 0.1-0.2$ dex (as
much as $\sim 0.6$ dex) during the phase between first passage and coalescence (post-starburst phase).
\end{enumerate}
Overall, our results constitute a somewhat mixed endorsement of the SED modelling approach: when the assumptions made regarding e.g. the dust attenuation
curve are relatively consistent with the true attenuation curve, \textsc{magphys}\xspace performs very well. However, if, for example, the true dust attenuation curve
differs significantly from that assumed by \textsc{magphys}\xspace, one may be better served by using less-sophisticated but more transparent methods for inferring
physical properties of galaxies from their SEDs. Regardless, one should use caution when performing SED modelling on large samples of galaxies
and ideally cross-check the results by using multiple SED modelling codes and comparing with the results of simpler techniques.
\begin{small}\section*{Acknowledgments}\end{small}
We are very grateful to the anonymous referee, Elisabete da Cunha, Matt Jarvis and Kate Rowlands for detailed comments on the manuscript.
We thank Lauranne Lanz, Micha{\l} Micha{\l}owski and R\"udiger Pakmor for useful discussions
and Volker Springel for providing the non-public version of \textsc{gadget-3}\xspace used for this work.
CCH acknowledges the hospitality of the Aspen Center for Physics, which is supported by the National Science Foundation Grant No. PHY-1066293, and
the Centre for Astrophysics at the University of Hertfordshire, and he is grateful to the Klaus Tschira Foundation and Gordon and Betty Moore Foundation for financial support.
DJBS acknowledges the hospitality of the Heidelberg Institute for Theoretical Studies.
This research has made use of NASA's Astrophysics Data System Bibliographic Services.
\section{Introduction}
It is well known that the affine Lie algebras and the Virasoro algebra have been widely used in many areas of physics and branches of mathematics, and the Virasoro algebra, serving as an outer-derivation
subalgebra, plays a key role in the representation theory of the affine Lie algebras. Their close relationship strongly suggests
that they should be considered simultaneously, i.e., as one algebraic
structure. This has led to the definition of the so-called
affine-Virasoro algebra \cite{Ka1,Ku}, which is the semidirect
product of the Virasoro algebra and an affine Kac-Moody Lie algebra
with a common center. Affine-Virasoro algebras are closely connected to conformal field theory.
For example, the even part of the $N=3$ superconformal algebra is
just the affine-Virasoro algebra of type $A_1$. Highest weight
representations and integrable representations of the affine-Virasoro
algebras have been studied in several papers (see
\cite{Ka1,EJ,Ku,JY,LH,LQ,W,XH}, etc.). All irreducible Harish-Chandra modules (weight
modules with finite-dimensional weight spaces) with nonzero central action over the affine-Virasoro
algebras were classified in \cite{B}. However, the irreducible uniformly bounded modules over these algebras have not yet been classified.
In this paper, we classify
all irreducible weight modules with finite-dimensional weight spaces over the affine-Virasoro Lie algebra of
type $A_1$. Throughout this
paper, $\z$, $\z^*$ and $\c$ denote the sets of integers, non-zero
integers and complex numbers, respectively. $U(L)$ denotes the universal enveloping algebra of a Lie algebra $L$. All modules considered in this paper are nontrivial. For any $\z$-graded space $G$, we also use the notations $G_+, G_-, G_0$ and $G_{[p, q)}$ to denote the subspaces spanned by elements in $G$ of degree $k$ with $k>0$, $k<0$, $k=0$ and $p\le k<q$, respectively.
\section{Basics}
In this section, we shall introduce some notations of the Virasoro algebra and affine-Virasoro algebras.
\subsection{Virasoro algebra and twisted Heisenberg-Virasoro algebra}
By definition, the Virasoro algebra is $\Vir:=\c\{d_m, C\mid m\in\z\}$ with bracket:
\begin{equation}
[d_m, d_n]=(n-m)d_{m+n}+\de_{m+n,0}\frac{m^3-m}{12}C,\quad [d_m, C]=0,
\end{equation} for all $m, n\in\z$.
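As a consistency check (illustrative, not part of the original argument), the central coefficient $\frac{m^3-m}{12}$ can be verified to define a 2-cocycle on the Witt algebra, i.e.\ $\omega([d_l,d_m],d_n)+\omega([d_m,d_n],d_l)+\omega([d_n,d_l],d_m)=0$ with $\omega(d_p,d_q)=\de_{p+q,0}\frac{p^3-p}{12}$:

```python
from fractions import Fraction

def c(k):
    # Central coefficient in [d_k, d_{-k}]: (k^3 - k) / 12.
    return Fraction(k**3 - k, 12)

def cocycle_defect(l, m, n):
    """omega([d_l,d_m],d_n) + omega([d_m,d_n],d_l) + omega([d_n,d_l],d_m),
    where omega(d_p, d_q) = delta_{p+q,0} * c(p); nonzero summands require
    l + m + n = 0."""
    if l + m + n != 0:
        return Fraction(0)
    return (m - l) * c(l + m) + (n - m) * c(m + n) + (l - n) * c(n + l)
```

Note that $c(-k)=-c(k)$, which is exactly the antisymmetry required of the cocycle.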
Let $\c[t, t^{-1}]$
be the Laurent polynomial ring over $\c$; then $\mbox{Der}\,\c[t, t^{-1}]=\c\{t^{m+1}\frac d{dt}\mid m\in\z\}$
(also denoted by Vect($S^1$), the Lie algebra of all vector fields on the circle).
$$\Vir=\widehat{\mbox{Der}\,\c[t, t^{-1}]}.$$
The twisted Heisenberg-Virasoro algebra $\mathcal H$ was first studied by
Arbarello et al.\ in \cite{ADKP}, where a connection is established
between the second cohomology of certain moduli spaces of curves and
the second cohomology of the Lie algebra of differential operators
of order at most one.
By definition, $\mathcal H$ is the universal central extension of
the following Lie algebra $\mathcal D$, which is the Lie algebra of
differential operators of order at most one.
\begin{defi} As a vector space over $\c$, the Lie algebra $\mathcal D$ has a
basis $\{d_n, Y_n\mid n \in\z\}$ with the following relations
\begin{eqnarray}
&&[d_m,d_n]=(n-m)d_{m+n},\\
&&[d_m,Y_n]=nY_{m+n}, \\
&&[Y_m,Y_n]=0,
\end{eqnarray} for all $m,\; n\in\z$.
\end{defi}
Clearly, the subalgebra $H=\c\{Y_m\mid m\in\z\}$ of $\mathcal D$ is the centerless Heisenberg algebra and
$W=\c\{d_m\mid m\in\z\}$ is the Witt algebra (or centerless Virasoro algebra).
\subsection{Affine-Virasoro algebra}
\begin{defi} \label{aff-vir} Let $L$ be a finite-dimensional Lie algebra with a
non-degenerate invariant normalized symmetric bilinear form $(\, , )$; then the affine-Virasoro Lie algebra is the vector space
$$L_{av}=L\otimes \c[t, t^{-1}]\oplus\c C\oplus\bigoplus_{i\in\z}\c{d_i},$$
with Lie bracket:
\begin{eqnarray*}&&[\,x\otimes t^m, y\otimes t^n\,]=[\,x, y\,]\otimes t^{m+n}+m(x, y)\de_{m+n, 0}C,\\
&&[d_i, d_j]=(j-i)d_{i+j}+{1\over 12}(j^3-j)\de_{i+j, 0}C,\\
&&[d_i, x\ot t^m]=mx\ot t^{m+i}, \quad [C, L_{av}]=0,\end{eqnarray*}
where $x, \,y\in L$, $m,n, i, j\in\z$ (if $L$ has no such form, we set $(x, y)=0$ for all $x, y\in L$).
\end{defi}
\noindent{\bf Remark:}
If $L=\c{e}$ is one-dimensional, then $L_{av}$ is just the
twisted Heisenberg-Virasoro algebra (with one central element).
\vskip5pt
Now we specialize to the case where $L$ is the simple Lie algebra $\frak{sl}_2=\c\{e, f, h\}$. Then by Definition \ref{aff-vir}, the corresponding affine-Virasoro algebra is $\mathcal L:=L_{av}=\c\{e_i, f_i, h_i,d_i, C\mid i\in\z\}$,
with Lie bracket:
\begin{eqnarray*}&&[\,e_i, f_j\,]=h_{i+j}+i\de_{i+j, 0}C,\\
&&[\,h_i, e_j\,]=2e_{i+j}, \quad [\,h_i, f_j\,]=-2f_{i+j},\\
&&[d_i, d_j]=(j-i)d_{i+j}+{1\over 12}(j^3-j)\de_{i+j, 0}C,\\
&&[d_i, h_j]=jh_{i+j}, \quad [h_i, h_j]=2i\de_{i+j, 0}C,\\
&&[d_i, e_j]=je_{i+j}, \quad [d_i, f_j]=jf_{i+j}, \quad [C, {\mathcal L}]=0,\end{eqnarray*}
where $i, j\in\z$.
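For $L=\frak{sl}_2$, the normalized invariant form of Definition \ref{aff-vir} can be realized as the trace form of the defining two-dimensional representation, giving $(h,h)=2$ and $(e,f)=1$, consistent with the central terms above. A short numerical check of these values and of the invariance property $([x,y],z)=(x,[y,z])$:

```python
import numpy as np

# Defining 2x2 representation of sl_2 = span{e, f, h}.
e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, -1.0]])

def form(x, y):
    # Trace form in the defining representation: (x, y) = tr(xy).
    return float(np.trace(x @ y))

def comm(x, y):
    return x @ y - y @ x
```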
\noindent{\bf Remark.} In fact, $\mathcal L$
is the even part of the $N=3$ superconformal algebra (\cite{CL}).
\vskip5pt
The Lie algebra $\mathcal L=\oplus_{i\in\z}{\mathcal L}_i$
is a $\z$-graded Lie algebra, where ${\mathcal L}_i=\c\{d_i, e_i, f_i, h_i, \delta_{0, i}C\}$.
The subalgebra ${\mathcal H}_X:=\c\{X_i, d_i, C\mid i\in\z\}$ for $X=e,f,h$ of $\mathcal L$
is isomorphic to the twisted
Heisenberg-Virasoro algebra ${\mathcal H}$ (the only difference being the central element).
Clearly $\frak h:=\c h_0+\c C+\c
d_0$ is the Cartan subalgebra of ${\mathcal L}$.
\subsection{Harish-Chandra modules}
For any
$\mathcal L$-module $V$ and $\lambda, \mu\in \c$, set
$V_{\lambda, \mu}:=\bigl\{v\in V\bigm|d_0v=\lambda v, h_0v=\mu v\bigr\}$, which is
generally called the weight space of $V$ corresponding to the weight
$(\lambda, \mu)$.
An $\mathcal L$-module $V$ is called a weight module if $V$ is the
sum of all its weight spaces.
A nontrivial irreducible weight ${\mathcal L}$-module $V$ is called of intermediate
series if all its weight spaces are one-dimensional.
\par
A weight ${\mathcal L}$-module $V$ is called a {\it highest} (resp.
{\it lowest}) {\it weight module} with {\it highest weight} (resp. {\it lowest
weight}) $(\lambda, \mu)\in \c^2$, if there exists a nonzero weight vector $v
\in V_{\lambda, \mu}$ such that
1) $V$ is generated by $v$ as ${\mathcal L}$-module;
2) ${\mathcal L}^+ v=0 $ (resp. ${\mathcal L}^- v=0 $), where ${\mathcal L}^+={\mathcal L}_++\c e_0$ and ${\mathcal L}^-={\mathcal L}_-+\c f_0$ (the notations ${\mathcal L}_+=\sum_{i\ge 1}{\mathcal L}_i$, ${\mathcal L}_-=\sum_{i\le -1}{\mathcal L}_i$ are introduced in Section 1).
If, in addition, all weight spaces $V_{\ll, \mu}$ of a weight ${\mathcal L}$-module $V$ are finite-dimensional, the module $V$ is called a {\it
Harish-Chandra module}. Clearly, a highest (lowest) weight module
is a Harish-Chandra module.
For a weight module $V$, we define
\begin{equation}\hbox{Supp}(V):=\bigl\{\lambda\in \c \bigm|V_\lambda=\oplus_{\mu\in\c}V_{\lambda, \mu} \neq
0\bigr\}.\end{equation}
Obviously, if $V$ is an irreducible weight ${\mathcal L}$-module, then
there exists $\lambda\in\c$ such that ${\rm
Supp}(V)\subset\lambda+\z$. So $V=\sum_{i\in \z}V_i$ is a $\z$-graded module, where $V_i=V_{\ll+i}$.
Kaplansky-Santharoubane \cite{KS} in 1983 gave a classification of
Vir-modules of the intermediate series. There are three families
of indecomposable modules in which each weight space is one-dimensional:
(1) ${\mathcal A}_{a,\; b}=\sum_{i\in\z}\c v_i$:
$d_mv_i=(a+i+b m)v_{m+i}$;
(2) ${\mathcal A}(a)=\sum_{i\in\z}\c v_i$: $d_mv_i=(i+m)v_{m+i}$
if $i\ne 0$, $d_mv_0=m(m+a)v_{m}$;
(3) ${\mathcal B}(a)=\sum_{i\in\z}\c v_i$: $d_mv_i=iv_{m+i}$ if
$i\ne -m$, $d_mv_{-m}=-m(m+a)v_0$, for some $a, b\in\c$, where $C$ acts trivially on the above modules.
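The relations defining ${\mathcal A}_{a,\; b}$ can be checked directly: $[d_m,d_n]$ acts on $v_i$ with coefficient $(n-m)(a+i+b(m+n))$, which is exactly the action of $(n-m)d_{m+n}$. A small exact-arithmetic verification (the values of $a$ and $b$ below are arbitrary test choices):

```python
from fractions import Fraction

def act(m, vec, a, b):
    """Apply d_m to a vector {i: coeff} in the basis {v_i} of A_{a,b}:
    d_m v_i = (a + i + b*m) v_{m+i}."""
    out = {}
    for i, coeff in vec.items():
        out[i + m] = out.get(i + m, Fraction(0)) + coeff * (a + i + b * m)
    return out

def sub(u, w):
    return {k: u.get(k, Fraction(0)) - w.get(k, Fraction(0)) for k in set(u) | set(w)}

def commutator_defect(m, n, i, a, b):
    """[d_m, d_n] v_i - (n - m) d_{m+n} v_i; identically zero iff the
    relations define a module over the Witt algebra (C acts trivially)."""
    v = {i: Fraction(1)}
    lhs = sub(act(m, act(n, v, a, b), a, b), act(n, act(m, v, a, b), a, b))
    rhs = {k: (n - m) * c for k, c in act(m + n, v, a, b).items()}
    return sub(lhs, rhs)
```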
It is well-known that ${\mathcal A}_{a,\; b}\cong{\mathcal A}_{a+1,\; b}$ for all $a, b\in\c$, so we can always suppose that $a\not\in\z$ or $a=0$ in ${\mathcal A}_{a,\; b}$.
Moreover, the module
${\mathcal A}_{a,\; b}$ is simple if $a\notin\z$ or $b\ne0, 1$.
In the opposite case the module
contains two simple subquotients, namely the trivial module and
$\c[t, t^{-1}]/\c$. It is also clear that ${\mathcal A}_{0,0}$ and
${\mathcal B}(a)$ both have $\c v_0$ as a submodule, and their corresponding
quotients are isomorphic, which we denote by ${\mathcal A}_{0,0}'$. Dually,
${\mathcal A}_{0,1}$ and ${\mathcal A}(a)$ both have $\c v_0$ as a quotient module, and
their corresponding submodules are isomorphic to ${\mathcal A}_{0,0}'$. For
convenience, we simply write ${\mathcal A}_{a,b}'={\mathcal A}_{a,b}$ when ${\mathcal A}_{a,b}$ is
irreducible.
All Harish-Chandra modules over the Virasoro algebra were classified in \cite{M} in 1992.
Since then, analogous classifications have been obtained for the high rank Virasoro algebra in \cite{LvZ0} and \cite{S1}, and for the Weyl algebra in \cite{S}.
\begin{theo}\cite{M}
Let $V$ be an irreducible weight Vir-module with finite-dimensional weight spaces.
Then $V$ is a highest weight module, lowest weight module, or Harish-Chandra module of intermediate series.
\end{theo}
\begin{theo}\cite{LvZ}\label{thv}
If $V$ is an irreducible weight module with finite-dimensional weight spaces over $\mathcal D$, then $V$ is a highest or lowest weight
module, or a uniformly bounded Harish-Chandra module.
Moreover, any uniformly bounded module is a Harish-Chandra module of intermediate series.
\end{theo}
\noindent{\bf Remarks.}
(1) The Harish-Chandra module of the intermediate series over $\mathcal D$ is induced by ${\mathcal A}_{a,\; b}=\sum_{i\in\z}\c v_i$ with $Y_nv_i=cv_{n+i}$ for some $c\in\c$.
Denote this module by ${\mathcal A}_{a,\; b,\; c}$.
It is well-known that ${\mathcal A}_{a,\; b,\; c}\cong{\mathcal A}_{a+1,\; b,\; c}, \forall a, b, c\in\c$.
Moreover, the module ${\mathcal A}_{a,\; b,\;c}$ is simple if $a\notin\z$ or $b\ne0, 1$ or $c\ne 0$.
We also use ${\mathcal A}_{a,\; b,\; c}'$ to denote the simple subquotient of ${\mathcal A}_{a,\; b,\; c}$, as in the Virasoro algebra case.
(2) All indecomposable Harish-Chandra modules of
the intermediate series over $\mathcal D$ were classified in \cite{LJ}.
\begin{lemm} \label{lem25}
Let $V$ be a uniformly bounded weight ${\mathcal D}$-module. Then $V$ has an irreducible submodule $V'\cong
{\mathcal A}_{a, b, c}'$ for some $a, b, c\in\c$.
\end{lemm}
\begin{proof}
Consider $V$ as a $\Vir$-module. From
representation theory of $\Vir$ (\cite{KS}), we have $\dim
V_{\ll+n}=p$ for all $\ll+n \neq 0$. We have a $\Vir$-submodule
filtration
$$0=W^{(0)}\subset W^{(1)} \subset W^{(2)}\subset \cdots \subset W^{(p)}=V,$$
where $W^{(1)}, \cdots ,W^{(p)}$ are $\Vir$-submodules of $V$, and
the quotient modules \break $W^{(i)}/W^{(i-1)}\cong {\mathcal A}_{a_i, b_i}'$ for some $a_i, b_i\in\c$.
Now any ${\mathcal D}$-submodule filtration is also a $\Vir$-submodule filtration, and hence its length is finite. So $V$ has an irreducible
${\mathcal D}$-submodule $V'$, which is also uniformly bounded. So by Theorem \ref{thv}, $V'\cong
{\mathcal A}_{a, b, c}'$ for some $a, b, c\in\c$.
\end{proof}
\section{The case of $\dim L=2$}
Let $T$ be a $2$-dimensional nontrivial Lie algebra. Then we can suppose that $T=\c\{h, e\}$ with $[h, e]=2e$.
Clearly, there exists an invariant symmetric bilinear form $(\, , )$ on $T$ given by
$$(h, h)=2, \ \ (h, e)=(e, e)=0.$$
In this case, the Lie algebra ${\mathcal T}_2:=T_{av}$ has basis $\{d_n, e_n, h_n, C\mid n\in\z\}$ with Lie bracket:
\begin{eqnarray*}
&&[d_m,d_n]=(n-m)d_{m+n}+{1\over 12}(n^3-n)\de_{m+n, 0}C,\quad [d_m,e_n]=ne_{m+n},\\
&&[d_m,h_n]=nh_{m+n},\quad
[h_m,e_n]=2e_{m+n},\\
&&[e_m,e_n]=0,\ [h_m,h_n]=2m\de_{m+n, 0}C,
\end{eqnarray*} for all $m,n\in\z$.
Clearly, ${\mathcal T}_2$ is a subalgebra of $\mathcal L$.
\begin{prop} \label{p31}
Let $V$ be a uniformly bounded irreducible
${\mathcal T}_2$-module. Then $V=\sum\c v_i\cong {\mathcal A}_{a,b,c}$ is the Harish-Chandra module of intermediate series with $h_nv_i=cv_{n+i}$ and $e_nv_i=0$.
\end{prop}
\begin{proof}
From representation theory of $\Vir$ (\cite{KS}), we have $C=0$.
Clearly, $\c\{h_0, e_0\}$ is a $2$-dimensional solvable Lie
subalgebra of ${\mathcal T}_2$. So we can choose an irreducible $\c\{h_0,
e_0\}$-submodule $\c\{v\}$ of $V$ such that $h_0v=c_1v$ and
$e_0v=0$ for some $c_1\in\c$.
Now the Lie subalgebra ${\mathcal H}_e:=\c\{d_n, e_n\mid
n\in\z\}$ is isomorphic to the Lie algebra $\mathcal D$ defined in Section 2 (see Definition 2.1).
Set $U=U({\mathcal H}_e)v$, the ${\mathcal H}_e$-module generated by $v$.
By Lemma \ref{lem25}, we can choose an irreducible ${\mathcal H}_e$-submodule $V'=\sum u_i$ of $U$ with $e_n u_i=du_{i+n}$ for some $d\in\c$ and for all $n, i\in\z$.
Moreover, $V=\sum U(H)u_i$, where $H=\c\{h_i\mid i\in\z\}$. Clearly $e_0$ is nilpotent on some element $u_i\in V'$, so $d=0$.
It follows that $e_nV'=0$, and then $e_nV=0$ since
$e_nh_iu_j=h_ie_nu_j-2e_{n+i}u_j=0$ for all $n, i, j\in\z$. Now the irreducibility of $V$
as ${\mathcal T}_2$-module is equivalent to that of $V$ as ${\mathcal
H}_h$-module, where ${\mathcal H}_h=\c\{d_n, h_n\mid n\in\z\}$ is also isomorphic to the Lie algebra $\mathcal D$. By
Theorem \ref{thv}, $V=\sum v_i$ is the Harish-Chandra module of intermediate series
with $h_nv_i=cv_{n+i}$ for some $c\in\c$ and for all $n, i\in\z$.
\end{proof}
\section{Harish-Chandra modules over the affine-Virasoro algebra $\mathcal L$}
\begin{theo} \label{pH1}
Let $V$ be an irreducible weight ${\mathcal L}$-module with finite-dimensional weight spaces. If $V$ is neither a highest weight module nor a lowest
weight module, then $V$ is uniformly bounded.
\end{theo}
\begin{proof} From Section 2, we can suppose that $V=\oplus_{i\in \z}V_i$ is an
irreducible Harish-Chandra ${\mathcal L}$-module without highest and
lowest weights. We shall prove that for any $i\in\Z^*$, $k\in\Z$,
\begin{eqnarray}\label{s===0}
d_i|_{V_k}\oplus d_{i+1}|_{V_k}\oplus e_{i}|_{V_k}\oplus
f_{i}|_{V_k}\oplus h_i|_{V_k}: \ \ V_k\ \to\ V_{k+i}\oplus V_{k+i+1}
\end{eqnarray}
is injective. In particular, by taking $i=-k$, we obtain that
$\dim\,V_k$ is uniformly bounded.
In fact, suppose there exists some $v_0\in V_k$ such that
\begin{equation}\label{LLL-1111}d_iv_0=d_{i+1}v_0=e_{i}v_0=f_{i}v_0=h_{i}v_0=0.\end{equation}
Without loss of generality, we can suppose $i>0$. Note that when
$\ell\gg0$, we have
$$\ell=n_1i+n_2(i+1)$$
for some $n_1, n_2\in\N$, from this and the relations in the
definition, one can easily deduce that $d_\ell,e_{\ell}, f_{\ell},
h_{\ell}$ can be generated by $d_i,d_{i+1},e_{i}, f_{i}, h_{i}$.
Therefore there exists some $N>0$ such that
$$d_\ell v_0=e_{\ell}v_0=f_{\ell}v_0=h_{\ell}v_0=0\mbox{\ \ for all \ }\ell\ge
N.$$
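The representability of every sufficiently large $\ell$ as $n_1i+n_2(i+1)$ is the classical two-coin (Frobenius) fact for the coprime pair $\{i, i+1\}$: every $\ell\ge i^2-i$ is of this form. A quick computational sanity check (illustrative only, outside the formal argument):

```python
# For coprime i and i+1 the largest integer NOT expressible as
# n1*i + n2*(i+1) with n1, n2 >= 0 is i*(i+1) - i - (i+1) = i**2 - i - 1
# (the Frobenius number), so every l >= i**2 - i is representable.

def representable(l, i):
    return any((l - n1 * i) % (i + 1) == 0 for n1 in range(l // i + 1))

for i in range(2, 8):
    frob = i * i - i - 1
    assert not representable(frob, i)                       # largest gap
    assert all(representable(l, i) for l in range(frob + 1, frob + 60))
```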
This means \begin{equation}{\mathcal L}_{[N, +\infty)}v_0=0,\label{v00}\end{equation} where ${\mathcal L}_{[N, +\infty)}=\oplus_{i\ge N} {\mathcal L}_i$.
Since ${\mathcal L}={\mathcal L}_{[1, N)}+{\mathcal L}_0+{\mathcal L}_-+{\mathcal L}_{[N, \infty)}$, using the PBW theorem and the irreducibility of $V$, we have
\begin{eqnarray}
V&=&U({\mathcal L})v_0=U({\mathcal L}_{[1, N)})U({\mathcal L}_0+{\mathcal L}_-)U({\mathcal L}_{[N, \infty)})v_0\\
&=&U({\mathcal L}_{[1, N)})U({\mathcal L}_0+{\mathcal L}_-)v_0.\label{4.4}
\end{eqnarray}
Note that $V_+$ is an ${\mathcal L}_+$-module. Let $V_+'$ be the ${\mathcal L}_+$-submodule of $V_+$ generated by $V_{[0, N)}$.
We now prove, by induction on the degree, that \begin{equation}V_+=V_+'. \label{vvv}\end{equation} In fact, let $x\in V_+$ be of degree $k$. If $0\le k<N$, then by definition $x\in V_+'$. Suppose that $k\ge N$. By (\ref{4.4}), $x$ is a linear combination of elements of the form $u_ix_i$ with $u_i\in {\mathcal L}_{[1, N)}$ and $x_i\in V$, where $i$ ranges over a finite subset $I$ of $\mathbb {Z}_+$.
For any $i\in I$, the degree $\deg u_i$ of $u_i$ satisfies $1\le \deg u_i<N$, so $0<\deg x_i=k-\deg u_i<k$. By the inductive hypothesis, $x_i\in V_+'$, and thus $x\in V_+'$. So (\ref{vvv}) holds.
Eq. (\ref{vvv}) means that $V_+$ is finitely generated as an ${\mathcal L}_+$-module. Choose a basis $B$ of $V_{[0, N)}$; then for any $x\in B$, we have $x=u_xv_0$ for some $u_x\in U(\mathcal L)$. Regarding $u_x$ as a polynomial with respect to a basis of $\mathcal L$, by induction on the polynomial degree and using $[u, w_1w_2]=[u, w_1]w_2+w_1[u, w_2]$ for $u\in \mathcal L$, $w_1, w_2\in U(\mathcal L)$, we see that there exists a positive integer $k_x$ large enough such that $k_x>N$ and $[{\mathcal L}_{[k_x, +\infty)}, u_x]\subset U(\mathcal L){\mathcal L}_{[N, \infty)}$.
Then by (\ref{v00}), ${\mathcal L}_{[k_x, +\infty)}x=[{\mathcal L}_{[k_x, +\infty)}, u_x]v_0+u_x{\mathcal L}_{[k_x, +\infty)}v_0=0$. Take $k=\max\{k_x, x\in B\}$, then
$${\mathcal L}_{[k, +\infty)}V_+={\mathcal L}_{[k, +\infty)}U({\mathcal L}_+)V_{[0, N)}=U({\mathcal L}_+){\mathcal L}_{[k, +\infty)}V_{[0, N)}=0.$$
Since ${\mathcal L}_+\subset {\mathcal L}_{[k, +\infty)}+[{\mathcal L}_{[-k', 0)}, {\mathcal L}_{[k, +\infty)}]$ for some $k'>k$, we get ${\mathcal L}_+V_{[k', +\infty)}=0$. Now if $x\in V_{[k'+N, +\infty)}$, by (\ref{4.4}), it is a sum of elements of the form $u_jx_j$ such that $u_j\in {\mathcal L}_{[1, +\infty)}$ and then $x_j\in V_{[k', +\infty)}$, and thus $u_jx_j=0$. This proves that $V_m=0$ for all $m\ge k'+N$.
Now let $p$ be the maximal integer such that $V_p\ne 0$. Since the four-dimensional subalgebra $\c\{d_0, h_0, e_0, C\}$ has a two-dimensional solvable subalgebra $\c\{h_0, e_0\}$ and two central elements $\{d_0, C\}$, there exists a common eigenvector $w$ of $\frak h=\c\{d_0, h_0, C\}$ in $V_p$ with $e_0w=0$. It follows that ${\mathcal L}_+w=0$.
Then $w$ is a highest weight vector of $\mathcal L$, which contradicts the assumption of the theorem.
\end{proof}
\section{Representations of the Lie algebra $\mathcal L$}
Now we shall consider uniformly bounded irreducible
weight modules over ${\mathcal L}$.
Let $M(\ll)$ be the finite-dimensional irreducible highest weight
$\frak{sl}_2$-module with highest weight $\ll$. Then
$L(M(\ll)):=M(\ll)\ot \c[t, t^{-1}]$ becomes an
${\mathcal L}$-module via the following actions:
\begin{eqnarray*}
&&d_m(u\ot t^i)=(a+bm+i)u\ot t^{m+i},\\
&&x_m(u\ot t^i)=(x\cdot u)\ot t^{m+i},
\end{eqnarray*} for any $u\in M(\ll)$ and for some $a, b\in\c$.
\noindent{\bf Remark.} $L(M(\ll))$ is irreducible iff $M(\ll)$ is a nontrivial $\frak{sl}_2$-module or $a\not\in\z$ or $b\ne 0, 1$. We also use $L(M(\ll))'$ to denote the irreducible submodule or subquotient of $L(M(\ll))$.
\begin{prop} \label{p51}
Let $V$ be a uniformly bounded irreducible
weight ${\mathcal L}$-module. Then $V$ is isomorphic to $L(M(\ll))'$ for some finite-dimensional irreducible $\frak{sl}_2$-module $M(\ll)$.
\end{prop}
\begin{proof}
Similarly to Proposition \ref{p31}, one has $C=0$. Clearly, ${\mathcal T}_2=\c\{d_n, h_n, e_n, C\mid n\in\z\}$
is a subalgebra of ${\mathcal L}$.
Consider $V$ as a ${\mathcal
T}_2$-module, similarly to Lemma \ref{lem25}, $V$ has an irreducible uniformly bounded submodule $V'$. By Proposition \ref{p31},
$V'=\sum\c v_i$ with $h_nv_i=cv_{n+i}$ and
$e_{n}v_i=0$ for some $c\in\c$. Moreover, $V=U(F)V'$, where $F=\c\{f_i\mid i\in\z\}$ is the Lie subalgebra of $\mathcal L$ generated by $f_i$ for all $i\in\z$.
If $c=0$, then $e_if_jv_k=[e_i, f_j]v_k+f_je_iv_k=0$ for all $i, j, k\in\z$. So $EV=0$, where $E=\c\{e_i\mid i\in\z\}$ is the Lie subalgebra of $\mathcal L$ generated by $e_i$ for all $i\in\z$.
Then the irreducibility of $V$
as ${\mathcal L}$-module is equivalent to that of $V$ as ${\mathcal
T}_2'$-module, where the subalgebra ${\mathcal
T}_2':=\c\{d_n, h_n, f_n, C\mid n\in\z\}$ is also isomorphic to the Lie algebra ${\mathcal T}_2$. By
Proposition \ref{p31}, $V$ is the Harish-Chandra module of intermediate series
with $f_nv_i=0$ for all $n, i\in\z$, and the proposition is proved in this case. So we can now suppose that $c\ne 0$.
Fix $k\in\z$, for any $i\in\z$, $v_i={\frac1c}h_{i-k}v_k$, so $U(F)v_i\subset U(H, F)v_k$, where $U(H, F)$ is the universal enveloping algebra of the Lie subalgebra generated by $h_i, f_i$ for all $i\in\z$ of $\mathcal L$.
So $$V=U(H, F)v_k, \eqno(5.1)$$ for any $k\in\z$.
Then for any $v\in V$ there exists $n\in\z_+$ such that $e_i^nv=0$, since $e_iv_k=0$. That is, each $e_i$ acts locally nilpotently on $V$.
Replacing ${\mathcal T}_2=\c\{d_n, h_n, e_n, C\mid n\in\z\}$ by the subalgebra ${\mathcal T}_2':=\c\{d_n, h_n, f_n, C\mid n\in\z\}$, we see that each $f_i$ is locally nilpotent on $V$.
So $V$ is an integrable weight ${\mathcal L}$-module.
By (5.1) we know that $V$ becomes an irreducible module over the loop algebra $\bar L({\mathcal L})=\c\{e_i, f_i, h_i, d_0\mid i\in\z\}$. Moreover, $V$ is an integrable weight $\bar L({\mathcal L})$-module. So by \cite{C} (or see \cite{CP}, \cite{E0}), $V\cong M(\bar\ll, \bar a)=L(\otimes_{i=1}^kM(\ll_i))$ as $\bar L({\mathcal L})$-modules, where $\bar \ll=(\ll_1, \cdots, \ll_k)$ and $\bar a=(a_1, \cdots, a_k)$.
However, by $[d_i, h_j]=jh_{i+j}$, we have $k=1$ and $\bar a=1$. So as an irreducible $\bar L({\mathcal L})$-module, $V\cong L(M(\ll))'$ for some highest weight $\ll$ of $\frak{sl}_2$.
Now we only need to consider the actions of $d_n$ on $V$. Suppose that $\sum_{i\in\z}\c v\ot t^i$ is a $\Vir$-module of intermediate series; then
$$d_nf_0v\ot t^i=f_0d_nv\ot t^i=(a+bn+i)f_0v\ot t^{n+i}.$$
So $\sum_{i\in\z}\c f_0v\ot t^i$ is a $\Vir$-module of intermediate series.
\end{proof}
Combining with Theorem 4.1, we get the following result.
\begin{theo} \label{main2}
Let $V$ be an irreducible weight ${\mathcal L}$-module with finite-dimensional weight spaces. Then $V$ is a highest weight module or a
lowest weight module or isomorphic to $L(M(\ll))'$ for some finite-dimensional irreducible $\frak{sl}_2$-module $M(\ll)$.
\end{theo}
\noindent{\bf Remark.}
The unitary highest weight modules over the affine-Virasoro algebra $\mathcal L$ were considered in \cite{Ka1, JY}.
\newpage \centerline{\bf ACKNOWLEDGMENTS}
\vskip15pt
Project is supported by the NNSF (Grants: 11271131, 11371134) and ZJNSF (LZ14A010001).
\subsection{Acknowledgements}
The authors are grateful for the useful conversations with Jun Guo, Qing Wang, Zuowei Liu and Xin Chen. We acknowledge the support from Tsinghua University, Center of High Energy Physics of Tsinghua University and Collaborative Innovation Center of Quantum Matter of China. Kechen Wang is supported in part by the CAS Center for Excellence in Particle Physics (CCEPP) and wants to thank Cai-Dian L\"{u} for his help.
\section{Introduction}
In a seminal paper Eshelby described the strain response of isolated inclusions to applied stresses, and predicted the stiffness of solid composites containing a dilute volume fraction of inclusions \cite{Eshelby57}.
These results have since been successfully used to model a huge range of problems, from composite mechanics to fracture and dislocation theory.
As Eshelby's theory strictly only applies to composites containing dilute inclusions, it has been extended to treat non-dilute composites with a variety of approximation schemes \cite{Hashin63,Hashin62,Christensen79,Cauvin07,Fornes03}, many of which show good agreement with experimental data across an unexpectedly wide range of inclusion volume fraction $\phi$.
Although Eshelby's theory and its non-dilute extensions work well for hard composites, recent work has shown that they can fail to describe soft composites \cite{Stylesoft15, Style15}.
This is because such schemes view the constituents of the composite as bulk linear-elastic solids, while ignoring the physics of the interface between them \cite{Eshelby57,Hashin63,Mori73,Christensen05,Hashin62,Christensen79}.
However, as is generally the case in interfacial thermodynamics \cite{Cahn78}, when the inclusions become sufficiently small that the surface energy becomes appreciable relative to the bulk strain energy, one cannot ignore interfacial effects.
For example, when the interface between an inclusion and the host (with Young's modulus $E$) is governed by an isotropic, strain-independent surface tension $\gamma$, the validity of the standard framework \cite[e.g.,][]{Eshelby57,Hashin63,Mori73,Christensen05,Hashin62,Christensen79} is limited to inclusions much larger than the elastocapillary length $L\equiv \gamma/E$ \cite{Stylesoft15, Style15}.
This is typically the situation for soft materials such as gels and elastomers \cite[e.g.,][]{Hui13,styl12c,mora13,nade13}.
Here we extend Eshelby's theory to soft, non-dilute composites with an isotropic, strain-independent interfacial surface tension.
In particular, motivated by recent experiments and their analysis \cite{Stylesoft15, Style15}, we focus on the problem of a soft elastic solid containing a non-dilute distribution of identical liquid droplets.
The framework for our extension is the multiphase scheme introduced by Mori and Tanaka \cite{Mori73}, and our approach
generalizes previous theoretical results that have been compared with experiments on soft aerated composites \cite{Palierne90,Linh13,Ducloue14}.
Our work differs from previous approaches that either consider dilute inclusions or interfacial elasticity \cite[e.g.,][]{Sharma04,Duan05JMPS,LeQuang07,Stylesoft15}, or that obtain upper and lower bounds on composite elastic moduli with interfacial elasticity \cite{LeQuang08,Brisard10,Brisardbulk10}.
\section{The Mori-Tanaka, or Equivalent Inclusion-Average Stress (EIAS), method}\label{approxschemes}
The concept of an equivalent inclusion \cite{Eshelby57} and the average stress in the matrix are central to the Mori-Tanaka approximation scheme, which we refer to as the Equivalent Inclusion-Average Stress (EIAS) method \cite[e.g.,][]{Benveniste87}. Here, we envision a two-phase system of inclusions in a host matrix. The inclusion phase consists of identical incompressible droplets randomly arranged in the solid elastic host matrix, as seen in Fig. \ref{fig:schematic}. Under stress-free circumstances, the droplets are spherical.
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{schem_MT.pdf}
\caption{Schematic representation of the composite material treated. Identical liquid inclusion droplets are embedded in a solid elastic matrix. }
\label{fig:schematic}
\end{figure}
Benveniste \citep{Benveniste87} described the central assumption of the EIAS method as the requirement that the fourth-order tensor relating the average strain in a typical inclusion to the average strain in the matrix equals ``Wu's tensor'' $T_{ijkl}$ \cite{Wu66}. Wu's tensor relates the uniform strain in an inclusion embedded in an ``all-matrix'' material to the imposed uniform strain at infinity. Here, the inclusion phase is denoted with the superscript *, and the matrix phase is free of indices. Across the interface between the phases in the equivalent inclusion system, the stress and displacement are continuous (also known as perfect bonding conditions).
For a composite consisting of spherical elastic inclusions with bulk/shear moduli $K^*,\mu^*$ embedded in a matrix with moduli $K,\mu$, the EIAS method gives that the effective composite moduli $\overline{K},\overline{\mu}$ are (see Eqs. (31) and (32) of \cite{Benveniste87}):
\begin{equation}
\overline{K}=K+\phi(K^*-K)A_m, \quad \overline{\mu}=\mu+\phi(\mu^*-\mu)A_s\,,
\label{benveniste}
\end{equation}
where
\begin{multline}
A_m=\frac{K}{K+(1-\phi)(K^*-K)S_m}, \quad A_s=\frac{\mu}{\mu+(1-\phi)(\mu^*-\mu)S_s},\\
\mathrm{and}\quad S_m=\frac{1+\nu}{3(1-\nu)}\quad S_s=\frac{8-10\nu}{15(1-\nu)}\quad\quad
\label{auxiliary}
\end{multline}
and $\nu$ is Poisson's ratio of the matrix.
Here we first show the equivalence between a droplet embedded in an elastic solid with an isotropic interfacial tension $\gamma$ and a corresponding elastic inclusion with no interfacial tension.
This allows us to calculate the moduli $K^*$ and $\mu^*$, which we can then substitute into the above equations to yield the effective composite properties.
\section{Calculating the equivalent inclusion moduli}
Following Style et al. \cite{Stylesoft15, Style15}, we model the boundary condition for the elastic stress at the surface of the droplet using the Young-Laplace equation for the discontinuity of the traction vector $\sigma \cdot \textbf{n}$:
\begin{equation}
\sigma \cdot \textbf{n}=-p\textbf{n}+\gamma \mathcal{K} \textbf{n},
\end{equation}
where $\textbf{n}$ is the normal to the deformed droplet surface, $p$ the pressure in the droplet, and the ``total curvature'' $\mathcal{K}$ is the sum of the local principal curvatures. Importantly, the interfacial stress is treated as a constant, isotropic and strain-independent surface tension $\gamma$, which is an excellent approximation for a wide range of soft materials \cite[e.g.][]{Hui13}.
The bulk modulus $K^*$ of the equivalent elastic inclusion can be calculated \cite[e.g.,][]{chp8micromechanics} by considering a spherical particle embedded in an infinite host material subjected to a spherically symmetric strain at infinity, yielding
\begin{equation}
K^*=K_{incl}+\frac{2\gamma}{3 R} \,,
\label{equivalentbulk}
\end{equation}
where $R$ is the radius of the liquid inclusion, and $K_{incl}$ is the bulk modulus that the inclusion would have in the absence of surface effects. Thus $K^*\rightarrow \infty$ for our incompressible droplet inclusions, since $K_{incl}\rightarrow\infty$.
We obtain the shear modulus $\mu^*$ of the equivalent elastic inclusion by comparing Eshelby's results for the elastic moduli of a dilute composite with spherical elastic inclusions \cite{Eshelbypg390}:
\begin{subequations}
\begin{align}
&\overline{K}^{dil}=\frac{K}{1-\alpha^{-1} \phi},\quad \alpha=\frac{1+\nu}{3(1-\nu)}\,, \label{bulkdiluteapprox}\\
\overline{\mu}^{dil}=\frac{\mu}{1+B\phi}&, \quad B=\frac{\mu^*-\mu}{(\mu-\mu^*)\beta-\mu},\quad \beta=\frac{2}{15}\frac{4-5\nu}{1-\nu}\,\label{sheardiluteapprox}
\end{align}
\end{subequations}
to Style \emph{et al.}'s result for the Young's modulus of a dilute composite containing incompressible liquid droplets \cite[][Eq.\,(19)]{Stylesoft15}:
\begin{equation}
\frac{\overline{E}^{dil}}{E}=\left[ 1+\frac{3(1-\nu)\left[\frac{R}{L}(1+13\nu)-(9-2\nu+5\nu^2+16 \nu^3)\right]}{(1+\nu)\left[\frac{R}{L}(7-5\nu)+(17-2\nu-19 \nu^2)\right]} \phi \right]^{-1}.
\label{19Style}
\end{equation}
Using Eq. (\ref{bulkdiluteapprox}) and noting $\overline{\mu}^{dil}=3\overline{K}^{dil}\overline{E}^{dil}/(9\overline{K}^{dil}-\overline{E}^{dil})$, Eq. (\ref{19Style}) becomes
\begin{equation}
\frac{\overline{\mu}^{dil}}{\mu}=\frac{17-2\nu -19\nu^2 + \frac{R}{L}(7-5\nu)}{ 17 - 2 \nu - 19\nu^2 +15(\nu^2-1)\phi +\frac{R}{L}(7 -5\nu -15(\nu -1)\phi)}.
\label{shear_eqn}
\end{equation}
Thus we equate Eqs. (\ref{sheardiluteapprox}) and (\ref{shear_eqn}) to obtain
\begin{equation}
\frac{\mu^*}{\mu}=\frac{8(1+\nu)}{3(1+\nu)+5\frac{R}{L}}.
\label{equivalentshear}
\end{equation}
This agrees with \cite{Ducloue14} in the limit of an incompressible matrix. Finally, the equivalent Young's modulus is
\begin{equation} \frac{E^*}{E}=\frac{3\mu^*}{2\mu(1+\nu)}=\frac{4}{1+\nu+\frac{5}{3}\frac{R}{L}}\,.
\label{equivalentyoung}
\end{equation}
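As a numerical sanity check (ours, not part of the original derivation), one can verify that substituting the equivalent shear modulus of Eq. (\ref{equivalentshear}) into the dilute Eshelby formula (\ref{sheardiluteapprox}) reproduces Eq. (\ref{shear_eqn}) identically:

```python
# Consistency check: plugging the equivalent inclusion shear modulus
#   mu*/mu = 8(1+nu) / (3(1+nu) + 5 R/L)
# into Eshelby's dilute result  mu_bar/mu = 1/(1 + B*phi)  should
# reproduce the dilute liquid-inclusion shear modulus expression.

def mu_star_over_mu(nu, r):
    """Equivalent inclusion shear modulus; r = R/L."""
    return 8.0 * (1 + nu) / (3.0 * (1 + nu) + 5.0 * r)

def mu_rel_eshelby(nu, r, phi):
    """Dilute composite shear modulus via Eshelby's B coefficient (mu = 1)."""
    ms = mu_star_over_mu(nu, r)
    beta = (2.0 / 15.0) * (4 - 5 * nu) / (1 - nu)
    B = (ms - 1) / ((1 - ms) * beta - 1)
    return 1.0 / (1 + B * phi)

def mu_rel_style(nu, r, phi):
    """Dilute liquid-inclusion shear modulus, Style et al. form."""
    num = 17 - 2 * nu - 19 * nu**2 + r * (7 - 5 * nu)
    den = (17 - 2 * nu - 19 * nu**2 + 15 * (nu**2 - 1) * phi
           + r * (7 - 5 * nu - 15 * (nu - 1) * phi))
    return num / den

for nu in (0.2, 0.3, 0.45):
    for r in (0.1, 1.0, 10.0):
        for phi in (0.01, 0.05, 0.1):
            assert abs(mu_rel_eshelby(nu, r, phi)
                       - mu_rel_style(nu, r, phi)) < 1e-9
```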
It is important to note that although a finite value of the volume fraction $\phi\ll 1$ is assumed in the derivation of
the equivalent moduli in Eqs (\ref{equivalentbulk}), (\ref{equivalentshear}) and (\ref{equivalentyoung}), all are {\em independent of} $\phi$.
In the case of an incompressible matrix ($\nu=1/2$) we recover the expression for $E^*$ of Style et al., \cite[][Eq.\,(9)]{Style15},
\begin{equation}
\left(\frac{E^*}{E}\right)=\frac{24\frac{L}{R}}{10+9\frac{L}{R}}.
\end{equation}
Now, for arbitrary Poisson's ratio $\nu$ of the host matrix, we find that (a) when $R\gg L$ the droplets behave like inclusions with Young's modulus $E^*=12\gamma/5R$, and (b) when $R \ll L$, in the capillarity-dominated regime, the equivalent Young's modulus of each inclusion saturates at $E^*=4E/(1+\nu)$. This shows that despite the widespread ansatz that $E^*=2\gamma/R$, the effective stiffness cannot become arbitrarily large as the droplet shrinks. Therefore, the limits (a) and (b) found by Style et al., \cite[][]{Stylesoft15} for an incompressible host matrix, $E^*\rightarrow 12\gamma/5R$ ($R\gg L$) and $E^*\rightarrow 8E/3$ ($R\ll L$) respectively, are consistent with a more general theory.
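The two limits just quoted follow directly from Eq. (\ref{equivalentyoung}); the following sketch (ours) verifies them numerically:

```python
# Limiting behavior of the equivalent Young's modulus
#   E*/E = 4 / (1 + nu + (5/3) R/L):
# (a) for R >> L it decays as (12/5) L/R, i.e. E* -> 12*gamma/(5R)
#     since L = gamma/E;
# (b) for R << L it saturates at E* -> 4E/(1+nu)  (= 8E/3 at nu = 1/2).

def E_star_over_E(nu, r):
    """r = R/L."""
    return 4.0 / (1 + nu + (5.0 / 3.0) * r)

big, small = 1e8, 1e-8
# (a): (E*/E) * (R/L) tends to 12/5, independent of nu
for nu in (0.3, 0.5):
    assert abs(E_star_over_E(nu, big) * big - 12.0 / 5.0) < 1e-6
# (b): capillarity-dominated saturation
assert abs(E_star_over_E(0.5, small) - 8.0 / 3.0) < 1e-6
assert abs(E_star_over_E(0.3, small) - 4.0 / 1.3) < 1e-6
```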
\section{The effective composite moduli}
Having obtained equations for the equivalent inclusion moduli, $K^*,\mu^*$, we can substitute them into Eqs. (\ref{benveniste}) and (\ref{auxiliary}) to obtain
\begin{widetext}
\begin{equation}
\frac{\overline{K}}{K}=\lim_{K^*\rightarrow \infty} \left[ 1 + \frac{\phi(K^*-K)}{K+(1-\phi)(K^*-K) S_m}\right]=1+\frac{\phi}{(1-\phi)S_m}\,
\label{effectivebulk}
\end{equation}
and
\begin{equation}
\frac{\overline{\mu}}{\mu}=1-\frac{15 (\nu-1) (1 -\frac{R}{L} +\nu)\phi}{\frac{R}{L} \left[7 + 8 \phi - 5 \nu (1 + 2 \phi)\right] +(1 +\nu)\left[17 -8 \phi +\nu (10 \phi -19)\right]}.
\label{effectiveshear}
\end{equation}
Hence from Eq.\,(\ref{effectivebulk}) one finds
\begin{equation}
\frac{\overline{E}}{E}= \frac{\frac{3}{2(1+\nu)(1-2\nu)}\left(\frac{\overline{K}}{K}\right)\left(\frac{\overline{\mu}}{\mu}\right)}{\frac{1}{1-2\nu}\frac{\overline{K}}{K}+\frac{1}{2(1+\nu)}\frac{\overline{\mu}}{\mu}}=\frac{\nu (4\phi-1)-(2\phi +1)}{1+\nu}\,
\frac{f_1+\frac{R}{L}f_2}{f_3+\frac{R}{L}f_4},
\label{effectiveYoung}
\end{equation}
with
\begin{equation} \left\{\begin{array}{l}f_1(\nu,\phi)= -(1 +\nu) \left[\nu (19 + 5 \phi)-(17+7 \phi)\right]\\
f_2(\nu,\phi)= (5\nu-7) (\phi-1)\\
f_3(\nu,\phi)= (1 + \nu) (19 \nu-17) + \left[44\nu -14 +
2 (5 - 24 \nu) \nu^2\right] \phi + \left[13 - 15 \nu +
2 \nu (15 \nu-13) + \nu (2 \nu-1) (-13 +
15 \nu)\right] \phi^2 \\
f_4(\nu,\phi)= 5 \nu-7 +2 (7 \nu-5) \phi + (1 - 2\nu) (15 \nu-13) \phi^2 \end{array}\right..
\end{equation}
\end{widetext}
As expected, in the dilute limit $\phi\rightarrow 0$ of Eq. (\ref{effectiveYoung}) we recover Eq. (\ref{19Style}).
Next we focus on the special case of an incompressible matrix ($\nu=1/2$), where the identity $E_{rel}\equiv\left(\frac{\overline{E}}{E}\right)=\left(\frac{\overline{\mu}}{\mu}\right)\equiv\mu_{rel}$ holds. In some experimental situations it is interesting to know the effective Young's modulus for an incompressible matrix with a finite concentration of inclusions of arbitrary size, where both the bulk-elasticity ($R\gg L$) and the capillarity-dominated ($R\ll L$) limits manifest themselves. Here, Eq.\,(\ref{effectiveYoung}) takes the simpler form
\begin{equation}
\left(\frac{\overline{E}}{E}\right)=\frac{15+9\phi + \frac{R}{L} (6-6\phi)}{15-6\phi +\frac{R}{L}(6 + 4\phi)},
\label{moritanakaincompressible}
\end{equation}
whose large and small droplet limits are
\begin{equation}
\left(\frac{\overline{E}}{E}\right)= \left\{\begin{array}{ll}\frac{3-3\phi}{3+2\phi}, & R \gg L \\
\frac{5+3\phi}{5-2\phi}, & R \ll L
\end{array}\right..
\label{moritanakalargesmalldrop}
\end{equation}
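These limits follow directly from Eq. (\ref{moritanakaincompressible}); a quick numerical check (ours):

```python
# Check that the large- and small-droplet limits follow from the
# incompressible-matrix expression
#   E_bar/E = (15 + 9 phi + (R/L)(6 - 6 phi)) / (15 - 6 phi + (R/L)(6 + 4 phi)).

def E_rel_incompressible(r, phi):
    """r = R/L; phi = liquid volume fraction; nu = 1/2 assumed."""
    return ((15 + 9 * phi + r * (6 - 6 * phi))
            / (15 - 6 * phi + r * (6 + 4 * phi)))

for phi in (0.0, 0.1, 0.3, 0.5):
    # R >> L: bulk-elasticity (large-droplet) limit
    assert abs(E_rel_incompressible(1e9, phi)
               - (3 - 3 * phi) / (3 + 2 * phi)) < 1e-6
    # R << L: capillarity-dominated (small-droplet) limit
    assert abs(E_rel_incompressible(1e-9, phi)
               - (5 + 3 * phi) / (5 - 2 * phi)) < 1e-6
```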
When, as above, the elastocapillary length is based on the matrix material, $L\equiv \gamma/E$, a natural dimensionless parameter is $\gamma^\prime\equiv L/R=\gamma/(E R)$, which we use to rewrite Eq. (\ref{moritanakaincompressible}) as
\begin{equation}
E_{rel}\vert_{\nu=1/2}=\frac{2-2 \phi + \gamma^\prime (5+ 3\phi)}{2+\frac{4}{3} \phi +\gamma^\prime(5 - 2\phi)}\;.
\label{moritanakaincompressibleintermsofgamma}
\end{equation}
Fig. \ref{fig:regular} shows $E_{rel}$ of Eq. (\ref{moritanakaincompressibleintermsofgamma}) versus $\phi$, and in Fig. \ref{fig:R} it is plotted against $R/[ (3V / 4 \pi )^{1/3}]$, where $V$ is the volume of composite per inclusion. We see in Fig. \ref{fig:regular} that the
$\gamma^\prime = L/R < 2/3$ ($\gamma^\prime > 2/3$) softening (stiffening) behavior spans the experimental range seen by Style et al., \cite{Style15}.
We also find exact ``mechanical cloaking'', where $E_{rel}$ is constant at $\gamma^\prime = 2/3$ for all liquid volume fractions. Precisely the same cloaking condition is found in the dilute theory \cite{Stylesoft15}, {\em and} from a complementary generalized 3-phase self-consistent approach \cite{MSWb} (again independent of $\phi$).
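The cloaking condition can be verified directly from Eq. (\ref{moritanakaincompressibleintermsofgamma}): at $\gamma^\prime=2/3$ both numerator and denominator equal $16/3$ for every $\phi$. A numerical check (ours):

```python
# At gamma' = L/R = 2/3 the composite modulus equals the matrix
# modulus for every liquid volume fraction phi ("mechanical cloaking").

def E_rel_gamma(gp, phi):
    """Incompressible-matrix E_bar/E in terms of gamma' = L/R."""
    return ((2 - 2 * phi + gp * (5 + 3 * phi))
            / (2 + (4.0 / 3.0) * phi + gp * (5 - 2 * phi)))

for phi in (0.0, 0.05, 0.2, 0.5, 0.9):
    assert abs(E_rel_gamma(2.0 / 3.0, phi) - 1.0) < 1e-12
# away from gamma' = 2/3 the inclusions soften or stiffen the composite
assert E_rel_gamma(0.1, 0.3) < 1.0 < E_rel_gamma(10.0, 0.3)
```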
\begin{figure}[htp]
\centering
\includegraphics[width=0.95\columnwidth]{Fig1_v2_MT.pdf}
\vspace{-0.4cm}
\caption{In the incompressible matrix case $E_{rel}$ versus $\phi$ for a wide range of $\gamma^\prime$ from the softening to the stiffening regime, according to the EIAS theory.}
\label{fig:regular}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.95\columnwidth]{Fig2_v2_MT.pdf}
\vspace{-0.4cm}
\caption{In the incompressible matrix case $E_{rel}$ versus $R/[ (3V / 4 \pi )^{1/3}]$ for a wide range of the parameter $L/[ (3V / 4 \pi )^{1/3}]$, according to the EIAS theory.}
\label{fig:R}
\end{figure}
As $\gamma^\prime$ becomes arbitrarily large (or the droplets become arbitrarily small), Eq.\,(\ref{moritanakalargesmalldrop}) shows that the capillary-dominated stiffening regime asymptotes to $E_{rel}\vert_{\nu=1/2} \rightarrow (5+3\phi)/(5-2\phi)$. This is the upper limit of rigidity, scaling as $1/(1-\phi)$ at small $\phi$ (Fig. \ref{fig:regular}).
Finally, we note that in the limit $\phi\rightarrow 0$, the present theory quantitatively captures the dilute theory \cite{Stylesoft15}.
In Figs. \ref{Mori} and \ref{Morideviation}, we compare these two predictions of $E_{rel}$ for the incompressible matrix case. Clearly, the EIAS theory is softer than the dilute theory in both the softening ($\gamma^\prime < 2/3$) and the stiffening ($\gamma^\prime > 2/3$) regimes.
Importantly, for volume fractions up to $\phi \approx 0.2$ (after which the dilute theory begins to break down), there is only a few percent deviation, as shown in Fig. \ref{Morideviation}. This is well within experimental error \cite{Style15} and thus we show that the dilute theory provides an accurate and simple framework for comparison, given that it is the appropriate asymptotic limit of the non-dilute theory. It is only when $\phi$ increases that large deviations appear, and these are most pronounced for large $\gamma^\prime$. Independent of $\phi$, the two theories predict precisely the same mechanical cloaking condition of the inclusions; $\gamma^\prime=2/3$.
\begin{figure}[htp]
\centering
\includegraphics[width=1\columnwidth]{Fig3_v2_MT.pdf}
\caption{In the incompressible matrix case $E_{rel}$ versus $\phi$ for the EIAS (solid curves, Eq. \ref{moritanakaincompressibleintermsofgamma}) and dilute (dotted/dashed curves, \cite{Stylesoft15}) theories over a wide range of $\gamma^\prime$ from the softening to the stiffening regime. From the bottom to the top, the solid EIAS curves correspond to $\gamma^\prime=0.1$, $0.3$, $1$, $3$, $10$, $100$. The dilute theory curves correspond to $\gamma^\prime=0.1$ (red, dashed line), $0.3$ (black, dotted line), $1$ (blue, dash-dotted line), $3$ (red, dashed line), $10$ (black, dotted line), $100$ (green, dash-dotted line).}
\label{Mori}
\end{figure}
\begin{figure}[htp]
\vspace{-0.4cm}
\centering
\includegraphics[width=1\columnwidth]{Fig4_v2_MT.pdf}
\caption{In the incompressible matrix case, the percent deviation between the dilute and EIAS theories $\Delta E_{rel}$ versus $\phi$, shown for the same values of $\gamma^\prime$ as in Fig.\,\ref{Mori}: $0.1$ (red, solid line), $0.3$ (black, dashed line), $1$ (blue, solid line), $3$ (red, dash-dotted line), $10$ (black, solid line), $100$ (green, dash-dotted line). }
\label{Morideviation}
\end{figure}
\section{Conclusions}
In light of recent work showing unexpected stiffening behavior of the effective elastic response of soft materials with liquid inclusions \cite{Stylesoft15, Style15}, we have revisited the Mori-Tanaka, or Equivalent Inclusion-Average Stress (EIAS), method for composite materials to account for the (strain-independent) liquid/matrix interfacial tension. The motivation is that whilst Style et al., \cite{Stylesoft15, Style15} explained experimental data using a dilute theory, we sought to understand the limits of the dilute approximation by extending a known approach for non-dilute systems to account for the stiffening behavior associated with interfacial forces. In so doing, we quantitatively analyzed when the dilute theory breaks down and thus confirmed that the comparison of experiment and theory \cite{Style15} occurred in the regime where the dilute theory is valid.
In detail, we extended the EIAS theoretical framework for the effective elastic moduli of composites containing liquid droplets, by taking into account the surface tension at the droplet/host-matrix interface when the matrix is a linear-elastic material. The dilute limit of the EIAS theory is achieved by taking $\phi \rightarrow 0$, and we find that the effective Young's modulus depends on only two parameters: $\phi$ and $\gamma^\prime=L/R$. We examined this graphically only in the incompressible case of $\nu=1/2$. These models, along with a generalized self-consistent 3-phase theory \cite{MSWb}, predict the same exact cloaking condition for the far-field signatures associated with the presence of the inclusions, viz., $R=3L/2$, independent of volume fraction $\phi$.
There are a range of possible comparisons and tests that immediately come to mind. For example, in situations wherein the host matrix is a nonlinear elastic \cite[e.g.,][]{Castaneda98,Jiang04}, or viscoelastic \cite[e.g.,][]{Palierne90} material. Finally, it would be of interest to compare this framework and that in our companion paper \cite{MSWb}, in which we treat the inclusion/matrix interface using a strain-independent surface tension, with approaches using an interfacial stress model \cite[e.g.,][]{Duan05,Duan07}.
\section{Acknowledgments}\label{Acknowledgments}
FM and JSW acknowledge Swedish Research Council Grant No. 638-2013-9243 and the 2015 Geophysical Fluid Dynamics Summer Study Program at the Woods Hole Oceanographic Institution, which is supported by the National Science Foundation and the Office of Naval Research under OCE-1332750. JSW also acknowledges
a Royal Society Wolfson Research Merit Award.
\section{Introduction}
\label{introduction}
Transformer-based NMT systems perform well on multiple translation tasks \citep{vaswani2017attention}.
Multi-head attention is a very important component of the Transformer model~\citep{vaswani2017attention}.
Multiple heads improve performance compared to a single head, as they allow the model to jointly look at different subspaces, and hence capture enhanced features from sentences.
For example,
a head can capture positional information by attending to adjacent tokens, or it can capture syntactic information by attending to tokens in a particular syntactic dependency relation \citep{voita2019analyzing}.
However, the performance of the transformer-base model with 8 heads at each layer is only 1 BLEU point higher than that of a similar model with just a single head at each layer \citep{voita2019analyzing}.
This is due to the fact that the majority of the heads learn similar weights, and therefore, multiple heads attend to the same parts of the input. Hence, most of the heads are redundant, leading to increased computational complexity without improving performance.
To avoid this redundancy, one approach is to prune the redundant heads based on an importance score. In this work, we focus on designing an importance computation method to compute the importance score for each head.
Some recent work has
analyzed the importance of heads
by considering the average attention weights of each head at some specific position \citep{voita2018context}. However, the average attention weight is a static measure of head importance, as it does not account for the varying importance of each head with respect to the input.
The importance of a head is dynamic, as a head can be very important for a particular word, but can be less important for other words.
Thus, in this work, we propose a Dynamic Head Importance Computation Mechanism (DHICM) to calculate the importance score for each head, which can later be utilized to design a pruning strategy.
Our key idea is to apply a second level attention on the outputs of all heads to dynamically calculate, during training, an importance score for each head that varies with the input.
{We also propose to add a new loss term to prevent our approach from assigning equal importance to all heads. Note that we apply DHICM for both self attention heads and encoder-decoder attention heads present in the encoder and decoder of the transformer architecture.}
To evaluate the performance of our method, we considered multiple translation tasks with different language pairs, such as Hindi-English, Belarusian-English, and German-English.
Results show that DHICM achieves a much higher performance compared to the standard transformer model, particularly, in low-resource conditions where much less training data is available.
Moreover, DHICM requires only ${\sim}d^2$ additional parameters ($d$ is the word embedding dimension), that is much less than the total number of parameters in the transformer base model.
{The transformer model has a large number of hyperparameters, which makes it computationally challenging to search for their optimal values.
Thus, much of the previous work used the default values of the hyperparameters \citep{gu2018meta, aharoni2019massively}. However, these are not guaranteed to yield optimal performance on different datasets, and grid search over all hyperparameters is computationally intensive due to the exponential number of combinations across all possible values.
Therefore, in this work, we perform grid search over a subset of hyperparameters, i.e., {\textit{architecture} hyperparameters} and {\textit{regularisation} hyperparameters},
and experiments show that the hyperparameter values obtained from our method yield significantly better performance compared to the default values.}
{To summarize, our work makes the following major contributions:
\begin{itemize}
\item We propose a Dynamic Head Importance Computation Mechanism for transformer based NMT systems, to compute the importance scores for all heads dynamically with respect to an input token.
\item
We propose to add an additional loss function that helps to compute different attention for different heads, and filter the most important heads.
\item Our hyperparameter tuning method yields significantly better performance than the default values.
\end{itemize}
}
\section{Background}
\subsection{Single-Head Attention} \label{single_head_attention}
Given a sequence of $N$ $d$-dimensional vectors $X = (x_1,x_2,...,x_N)$ and a query vector $y \in \mathbb{R}^d$, a single-head attention is a weighted aggregate of the $x_i$, $i \in \{1,2,...,N\}$, followed by a linear transformation. The weights are obtained using a function $F(x_i, y)$, e.g., a multi-layer perceptron \citep{bahdanau2014neural} or scaled dot product \citep{vaswani2017attention},
\iffalse
\begin{equation}
F(x_i, y) = Softmax(\frac{y^{T}W_{q}^{T}W_{k}x_{i}}{\sqrt d_{k}}) \nonumber
\end{equation}
\fi
and the attention $A(X, y|W_v,W_o)$ is computed as $A(X, y) = W_o \: {\sum\limits}_{i=1}^{N}F(x_i, y)W_vx_i$, where $W_o$ and $W_v$ are learnable weights. {In a transformer based NMT system, there is an encoder and a decoder. The encoder encodes the input sequence of tokens and outputs a sequence of vectors $X$. The decoder uses $X$ to generate a sequence of tokens. If the query vector $y$ is generated using the encoder, the computed attention is known as self-attention, whereas if $y$ is generated from the decoder, it is known as encoder-decoder attention.}
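As a minimal NumPy sketch (function and variable names are illustrative, not the paper's implementation), the single-head computation above, with $F(x_i,y)$ as the scaled dot product, looks as follows:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def single_head_attention(X, y, W_q, W_k, W_v, W_o, d_k):
    # F(x_i, y): scaled-dot-product weight of each x_i for query y
    scores = np.array([(W_q @ y) @ (W_k @ x) / np.sqrt(d_k) for x in X])
    F = softmax(scores)
    # A(X, y) = W_o * sum_i F(x_i, y) * W_v x_i
    return W_o @ sum(f * (W_v @ x) for f, x in zip(F, X))
```

Here $W_q, W_k, W_v \in \mathbb{R}^{d_k \times d}$ and $W_o \in \mathbb{R}^{d \times d_k}$, so the output has the same dimension $d$ as the input vectors.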
\iffalse
\begin{equation}
Attn_{W_q,W_k,W_v,W_o}(X, y) = W_o \: {\sum\limits}_{i=1}^{N}F(x_i, y)W_vx_i \nonumber
\end{equation}
\fi
\iffalse
where, $W_v \in \mathbb{R}^{d_k \times d}$, $W_o \in \mathbb{R}^{d \times d_k}$ are learnable parameters and $d_k$ is a scaling factor. In case of encoder-decoder attention, $X$ represents the source sentence, and $y$ represents a target word.
\fi
\subsection{Multi-Head Attention} \label{multi_head_attention}
The multi-head attention mechanism runs multiple single-head attention mechanisms in parallel \citep{vaswani2017attention}.
Let there be a total of $H$ heads, where each head $h \in \{1,2,...,H\}$ corresponds to an independent single head attention. The output of each head $A_h(X, y|W_v^h,W_o^h)$ is calculated independently,
and the final output of multiple heads is calculated using the outputs of all heads, i.e., $\Sigma_{h=1}^{H}A_h(X, y|W_v^h,W_o^h)$,
\iffalse
\begin{equation}
MHA(X, y) = \Sigma_{h=1}^{H}Attn_{W_q^h,W_k^h,W_v^h,W_o^h}(X, y) \nonumber
\end{equation}
\fi
where, $W_v^h,W_o^h$ are learnable weights for each head $h$.
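The multi-head output above, i.e., the sum of $H$ independent single-head attentions, can be sketched as follows (illustrative names; each head carries its own weight tuple):

```python
import numpy as np

def multi_head_attention(X, y, heads):
    """heads: list of per-head weight tuples (W_q, W_k, W_v, W_o, d_k).
    Returns sum_h A_h(X, y | W_v^h, W_o^h)."""
    out = 0.0
    for W_q, W_k, W_v, W_o, d_k in heads:
        # scaled-dot-product weights for this head
        scores = np.array([(W_q @ y) @ (W_k @ x) / np.sqrt(d_k) for x in X])
        scores -= scores.max()
        F = np.exp(scores) / np.exp(scores).sum()
        # accumulate this head's contribution to the final output
        out = out + W_o @ sum(f * (W_v @ x) for f, x in zip(F, X))
    return out
```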
\iffalse
\begin{center}
$S^{h} = softmax(\frac{Q^{h} * (K^{h})^{T}}{\sqrt d_{k}})$ \\
$O^{h} = S^{h}V^{h}$ \\
$A = ([O^{1};O^{2};...;O^{H}])W_{O}$
\end{center}
\fi
\section{Approach}
\subsection{Dynamic Head Importance Computation Mechanism (DHICM)}
\label{DHIC}
In the traditional transformer model, the output of the multi-head attention is a linear transformation over the concatenation of outputs of all heads.
Therefore, the outputs of all heads have equal contribution. However,
since all heads are not equally important to the input (Sec.~\ref{introduction}),
we propose to compute the importance of each head with respect to the input dynamically.
Our idea is that an additional attention layer will allow the model to pay more attention to the head that is more important to the input.
Thus, we design a second level attention
that uses the input and output of all heads to compute attention scores, i.e., importance for all the heads
with respect to the input, described as follows.
Let $x \in \mathbb{R}^d$ be a $d$-dimensional input to the multi-head attention module, and $O^{h}$ be the output of head $h \in \{1,2,...,H\}$ (without applying the linear transformation $W_o^h$ described in Sections \ref{single_head_attention} and \ref{multi_head_attention}).
We first learn a function $G(x, O^h)$
to determine the attention, i.e., importance score for head $h$.
To approximate $G(x, O^h)$, we considered both multi layer perceptron and scaled dot product.
In our experiments, we observed that both achieve similar performance, and since the scaled dot product requires fewer parameters, we used the latter to compute $G(x, O^h)$:
\begin{equation}
G(x, O^h) = \frac{\exp\left(s(x, O^h)\right)}{\sum_{n=1}^{H}\exp\left(s(x, O^n)\right)}
\end{equation}
\noindent
where,
\begin{equation}
s(x, O^h) = \frac{{O^h}^{T}W^{T}Ux}{\sqrt{d_m}}
\label{sx}
\end{equation}
\noindent
Here, $W \in \mathbb{R}^{d_m \times d_k}, U \in \mathbb{R}^{d_m \times d}$ are learnable parameters, and $d_k, d_m$ are scaling factors for the multi-head attention and second level attention, respectively. We also add a dropout layer \citep{srivastava2014dropout} after computing $Ux$ in Equation~\ref{sx}. Next, we compute the output of the second-level attention layer ($DHICM$) using the attention scores for each head, as follows:
\begin{equation}
DHICM(x, O) = W_s \: {\sum\limits}_{h=1}^{H}G(x, O^h)VO^h
\end{equation}
\noindent
where, $O=(O^1,O^2,...,O^H)$, and $V \in \mathbb{R}^{d_m \times d_k}$ and $W_s \in \mathbb{R}^{d \times d_m}$ are learnable parameters. The output of the second level attention is then passed to the feed forward network. {Note that DHICM learns only ${\sim}d^2$ additional parameters corresponding to $W, U, W_s, V$ in the second layer added, and this is much less than the total number of parameters in the standard transformer model (typical value of $d$ is 512).}
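A minimal NumPy sketch of this second-level attention follows (illustrative names; the real model is trained end-to-end inside the transformer, and the dropout on $Ux$ is omitted here):

```python
import numpy as np

def dhicm(x, O, W, U, V, W_s, d_m):
    """x: input token vector (dim d); O: H head outputs, each of dim d_k
    (before the per-head W_o projection). Returns (G, output)."""
    # s(x, O^h) = (O^h)^T W^T U x / sqrt(d_m)
    s = np.array([(W @ Oh) @ (U @ x) / np.sqrt(d_m) for Oh in O])
    s -= s.max()                       # numerical stability
    G = np.exp(s) / np.exp(s).sum()    # importance score per head (softmax)
    # DHICM(x, O) = W_s * sum_h G(x, O^h) * V O^h
    out = W_s @ sum(g * (V @ Oh) for g, Oh in zip(G, O))
    return G, out
```

The returned vector $G$ is the per-head importance distribution, which is what a pruning strategy would later inspect.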
\paragraph{Objective} \label{Loss}
Let $L_c$ represent the cross entropy loss that is minimized to ensure that the model generates accurate tokens. However, by only considering $L_c$ as the objective, it might be possible that the model learns equal values of $G(x, O^h)$ for all $h \in \{1,2,...,H\}$. This would indicate that all heads are equally important to the input $x$, and thus, prevent us from filtering the most important heads.
To avoid this, we add an extra loss term to penalize the model if the value of $G(x, O^h)$ becomes equal for all $h \in \{1,2,...,H\}$.
More formally, let $a \in \mathbb{R}^H$ be a vector representing the importance scores of all heads according to the model, where $a_h = G(x, O^h)$
is the importance score of head $h$.
Let $b \in \mathbb{R}^H$ be a vector representing equal importance of all heads, i.e., $b_h = \frac{1}{H}$ for all $h$.
Both $a$ and $b$ denote the importance distribution of the heads, where $a$ is learned by the model using the second level attention, and $b$ is a uniform distribution with equal importance for all heads.
To prevent the model from assigning equal importance to all the heads, we maximize the Kullback-Leibler divergence (KL Divergence) between distributions $a$ and $b$. Note that both distributions sum to 1, i.e., $\sum\limits_{h}a_h = 1$ and $\sum\limits_{h}b_h = 1$, and that $a_h > 0, b_h > 0$ for all $h \in \{1,2,...,H\}$.
Specifically, we add an extra loss term $L_{KL}$ as the KL Divergence between $a$ and $b$, given as:
\begin{equation}
L_{KL} (a||b) = \sum\limits_{h \in \{1,2,...,H\}} a_h \ln \frac{a_h}{b_h}
\end{equation}
\noindent The overall loss $L$, where we minimize $L_c$ and maximize $L_{KL}$, is computed as:
\begin{equation}
L = L_c - \lambda \, L_{KL}
\end{equation}
\noindent
where $\lambda$ is a hyperparameter used to control the effect of $L_{KL}$ on the overall loss $L$. The objective is to minimize the overall loss $L$.
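The overall objective can be sketched as follows (hypothetical function names; in practice $L_c$ would come from the model's cross-entropy over the generated tokens):

```python
import numpy as np

def overall_loss(L_c, a, lam=0.1):
    """L = L_c - lambda * L_KL, with L_KL = KL(a || b) and b uniform.
    Maximizing the KL term penalizes equal importance scores."""
    H = len(a)
    b = np.full(H, 1.0 / H)                 # uniform importance distribution
    L_kl = float(np.sum(a * np.log(a / b))) # KL(a || b)
    return L_c - lam * L_kl
```

A uniform $a$ gives $L_{KL} = 0$, so the loss reduces to $L_c$; a peaked $a$ makes $L_{KL} > 0$ and lowers the overall loss, rewarding differentiated head scores.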
\section{Experiment}
\subsection{Dataset Description} \label{data}
\begin{table}[!t]
\centering
{
\begin{tabular}{ccccc}
\toprule
Dataset & Train & Validation & Test \\
\midrule
IWSLT14 & 160K & 7.3K & 6.7K \\
WMT17-CS & 5.9M & 3K & 6K \\
HindEnCorp & 256K & 7K & 7K \\
\specialcell{TED talks \\ (Be-En)} & 4.5K & 1K & 2.6K \\
\bottomrule
\end{tabular}
}
\caption{Train, Validation and Test split size for different datasets used in our experiments}
\label{datasets}
\end{table}
\begin{table*}[!t]
\centering
{
\begin{tabular}{rrrrr}
\toprule
& \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{{Default}} & \multicolumn{3}{c}{Optimal} \\
\cmidrule(l){3-5}
& & De-En & Hi-En & Be-En \\
\midrule
Feed forward dim. & 2048 & 2048 & 1024 & 128 \\
Attention heads & 8 & 4 & 4 & 2 \\
Dropout & 0.1 & 0.5 & 0.3 & 0.1\\
Attention Dropout & 0.0 & 0.1 & 0.0 & 0.0 \\
Activation Dropout & 0.0 & 0.3 & 0.0 & 0.0 \\
Dropout (Section~\ref{DHIC}) & N/A & 0.5 & 0.2 & 0.2 \\
Label Smoothing & 0.1 & 0.1 & 0.1 & 0.4 \\
\bottomrule
\end{tabular}
}
\caption{Default and Optimal Hyperparameters}
\label{hyperparameters}
\end{table*}
We used German-English (De-En) parallel corpus obtained from IWSLT14 \citep{cettolo2014report} and WMT17 \citep{bojar2017results} shared translation tasks to evaluate the performance of our proposed method.
{Table~\ref{datasets} reports the number of parallel sentences in training, validation and test splits of different datasets that are considered in our experiments}.
To compare with \citep{iida2019attention},
we used the WMT17 De-En training corpus as the training set and newstest13 as the validation set. Similar to \citep{iida2019attention}, we concatenated newstest14 and newstest17 into one test set. We refer to this WMT17 dataset with the modified test set as the WMT17-CS dataset.
To assess the performance of our method for low resource language pairs, we used Hindi-English (Hi-En) parallel corpus obtained from HindEnCorp0.5 \citep{11858/00-097C-0000-0023-625F-0}. Also, we created smaller training sets from the complete IWSLT14 training set. We randomly sampled 10K, 20K, 30K, 40K, 80K, 120K and 160K sentence pairs from the full training data. The validation and test datasets were the same across all training sets.
{We also evaluated the performance of our method on extremely low resource language pairs. We used Belarusian-English (Be-En) parallel corpus from TED talks \citep{qi2018and} that contains only 4.5K parallel sentences in the training set}.
The HindEnCorp0.5 dataset contains 270K sentence pairs, out of which
we randomly sampled 7K sentence pairs each for validation and test sets, and used the remaining sentences as the training set.
We used the Moses toolkit \citep{koehn2007moses} to tokenize German, Belarusian and English sentences, and the IndicNLP Library\footnote{\href{https://github.com/anoopkunchukuttan/indic_nlp_library}{IndicNLP Library}} to tokenize Hindi sentences. For open-vocabulary translation, we segmented words using byte-pair encoding (BPE)\footnote{\href{https://github.com/rsennrich/subword-nmt}{https://github.com/rsennrich/subword-nmt}} \citep{sennrich2015neural}.
{For Be-En parallel corpus, we learned 5K merge operations for both Be and En separately. For other datasets,} we combined the source and target sentences of the training set for learning BPE. We learned 10K merge operations for IWSLT14 dataset, and 20K merge operations for other datasets.
\subsection{Hyperparameter Optimization}
\label{hpmeters}
{The transformer model has a large number of hyperparameters, and hence the total number of combinations of possible values is exponential. Therefore, much of the previous work uses the default hyperparameters (e.g., \citep{gu2018meta, aharoni2019massively}), even though the language pairs considered are different from the original pairs used to determine those defaults. However, different languages have different characteristics, and hyperparameters tuned for one language pair might not yield optimal performance for another. Furthermore, the amount of data available for training also affects the choice of hyperparameters. Hence, for each language pair, we perform extensive hyperparameter tuning. Since grid search over all hyperparameters is computationally very intensive and random search is not guaranteed to find optimal values, we search over a subset of hyperparameters. We mainly tune two types of hyperparameters: architecture hyperparameters (e.g., number of attention heads, feed-forward dimension) and regularization hyperparameters (e.g., dropout, attention dropout, activation dropout, label smoothing). The remaining hyperparameters, such as the word embedding size and the number of encoder and decoder layers, are set to their default values (similar to \citep{vaswani2017attention}) and kept constant throughout the search. We first tune the architecture hyperparameters while keeping the regularization hyperparameters at their default values. Next, we tune the regularization hyperparameters using the optimal values for the architecture hyperparameters. Since we consider only a small subset of hyperparameters, the number of combinations remains manageable, and we are able to use grid search.
The optimal hyperparameters chosen are the ones that correspond to the minimum loss on the validation set. Also, we use early stopping (described in Section~\ref{exp_setup_baselines}) to prevent our model from overfitting.
Although our hyperparameter tuning method does not guarantee a global optimum, we observe a substantial improvement over the default hyperparameters in our experiments (Section~\ref{results}).}
The values of default and optimal hyperparameters obtained using our hyperparameter search, are reported in Table~\ref{hyperparameters}.
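The two-stage search described above can be sketched as follows; `evaluate` is a hypothetical callback that trains a model with a given configuration and returns its validation loss:

```python
from itertools import product

def two_stage_grid_search(evaluate, arch_grid, reg_grid, reg_defaults):
    # Stage 1: tune architecture hyperparameters with default regularization.
    best_arch, best_loss = None, float("inf")
    for values in product(*arch_grid.values()):
        arch = dict(zip(arch_grid, values))
        loss = evaluate({**arch, **reg_defaults})
        if loss < best_loss:
            best_arch, best_loss = arch, loss
    # Stage 2: tune regularization with the best architecture fixed.
    best_reg, best_loss = dict(reg_defaults), float("inf")
    for values in product(*reg_grid.values()):
        reg = dict(zip(reg_grid, values))
        loss = evaluate({**best_arch, **reg})
        if loss < best_loss:
            best_reg, best_loss = reg, loss
    return {**best_arch, **best_reg}, best_loss
```

Because the two stages are searched sequentially rather than jointly, the cost is the sum, not the product, of the two grid sizes, at the price of not guaranteeing a global optimum.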
\subsection{Experimental Setup and Baselines} \label{exp_setup_baselines}
We consider the Standard Transformer-base model \cite{vaswani2017attention} as a baseline, and for implementation, we used fairseq toolkit \cite{ott2019fairseq}. We also analyzed the effect of applying our proposed approach DHICM to different layers of both encoder and decoder of the transformer model, and observed that
applying the second level attention at the last layer of both encoder and decoder yields the best score.
We refer to the hyperparameters reported in the Standard Transformer-base model \cite{vaswani2017attention} as the Default Hyperparameters, and those {obtained using our hyperparameter search described in Section~\ref{hpmeters}} are referred to as the Optimal Hyperparameters. {We trained all the models on 4 Nvidia GeForce RTX 2080 Ti GPUs. The number of layers of encoder and decoder was set to 6, number of tokens per batch was set to 8000, and the word embedding dimension $d$ was set to 512. We used Adam optimizer ($\epsilon = 10^{-6}, \beta_{1} = 0.9, \beta_{2} = 0.98$)~\citep{kingma2014adam} with a learning rate of $5 \times 10^{-4}$. We used inverse square root learning rate scheduler with 4000 warmup steps, and used beam search with beam size of 5 for generating the sentences. In our proposed approach, we add two additional hyperparameters, that is, $\lambda$ (described in Section~\ref{DHIC}), and a dropout in the second level attention (described in Section~\ref{DHIC}). The optimal values for the dropout added are provided in Table~\ref{hyperparameters}, and we set $\lambda$ as 0.1, for all experiments, corresponding to the minimum loss on the validation set. We save model checkpoints after every epoch and select the best checkpoint based on the lowest validation loss. In order to minimize overfitting, we stop training if the validation loss does not decrease for 10 consecutive epochs.}
\iffalse
\begin{table}[t]
\centering
{
\begin{tabular}{cccccc}
\toprule
& \specialcell{FFNN \\dim} & \specialcell{Attn. \\heads} & \specialcell{Drop\\-out} & \specialcell{Attn. \\dropout} & \specialcell{Activ-\\ation \\dropout} \\
\midrule
default & 2048 & 8 & 0.1 & 0.0 & 0.0 \\
De-En & 2048 & 4 & 0.5 & 0.1 & 0.3 \\
Hi-En & 1024 & 4 & 0.3 & 0.0 & 0.0 \\
\bottomrule
\end{tabular}
}
\caption{Default and Optimal Hyperparameters for different language pairs.}
\label{hyperparameters}
\end{table}
\fi
\iffalse
For De-En translation, we tuned hyper-parameters using IWSLT14 dataset, and also used those
for WMT17-CS dataset.
\fi
For training the models on smaller, randomly sampled training sets from the full IWSLT14 training set (Sec.~\ref{data}), we used the optimal hyperparameters learned on the full IWSLT14 training set. We used BLEU \citep{papineni2002bleu} as the evaluation metric to compare the performance of our approach with two versions of the baseline model: (i) T-base, the Transformer-base model trained using the default hyperparameters, and (ii) T-optimal, the Transformer-base model trained using the optimal hyperparameters (Sec.~\ref{hpmeters}). {Note that, for all our experiments, the hyperparameters for T-optimal and DHICM are the same.}
\iffalse
\begin{table}[t]
\centering
\resizebox{8cm}{!}{
\begin{tabular}{crrrrrrrrrrrr}
\toprule
\specialcell{Training \\Set Size} & 10K & 20K & 30K & 40K & 50K & 60K & 70K & 80K & 100K & 120K & 140K & 160K \\
\midrule
Baseline with default hyper-parameters & 10K & & 4.39 & 14.03 \\
Baseline with our optimal hyper-parameters & 20K & & 7.62 & 22.00 \\
Our Approach & 30K & & 23.06 & 26.37 \\
\bottomrule
\end{tabular}
}
\caption{Results for baseline model with default hyper-parameters, baseline model with our optimal hyper-parameters and Attention over Head Attention model on IWSLT 2014 test data. Each experiment was ran for 3 different times with randomly sampling the training set from the full training corpus.}
\label{table1}
\end{table}
\fi
\iffalse
\begin{table}[t]
\centering
\begin{tabular}{|p{0.08\textwidth}|p{0.08\textwidth}|p{0.08\textwidth}|p{0.08\textwidth}|}
\hline
Dataset & T-base & T-optimal & Our Approach \\
\hline\hline
WMT17-CS & 21.33 & 24.56 & \textbf{25.68}\\
\hline
HindEnCorp & 17.3 & 22.96 & \textbf{26.41} \\
\hline
\end{tabular}
\caption{Results for baseline model with default hyper-parameters, baseline model with our optimal hyper-parameters, multi-hop attention proposed by \citep{iida2019attention} and Attention over Head Attention model on WMT17 test set. \mahak{Dataset size?}}
\label{t2}
\end{table}
\fi
\begin{table}[t]
\centering
{
\begin{tabular}{ccccc}
\toprule
Dataset & T-base & T-optimal & DHICM \\
\midrule
WMT17-CS & 21.33 & 24.56 & \textbf{25.68}\\
HindEnCorp & 17.3 & 22.96 & \textbf{26.41} \\
\specialcell{TED talks \\ (Be-En)} & 4.09 & 5.49 & \textbf{6.29} \\
\bottomrule
\end{tabular}
}
\caption{BLEU Score of different models on WMT17-CS, HindEnCorp, and Be-En parallel-corpora (trained using full training set). Note that Be-En is an extremely low resource language pair.}
\label{WMT17_HindEnCorp_results}
\end{table}
\section{Results} \label{results}
\iffalse
Table-\ref{WMT17_HindEnCorp_results} shows the performance of different methods. For the WMT17-CS dataset, the baseline model with optimal hyper-parameters (T-optimal) achieves a BLEU score of 24.56, and outperforms T-base model with default hyper-parameters (T-base) that yields a BLEU score of 21.33.
The multi-hop multi-head attention model \cite{iida2019attention} achieves 23.91 BLEU score. Our proposed Attention Over Head Attention method yields 25.68 BLEU score which outperforms T-base model by almost 4.3 BLEU points, and outperforms the T-optimal model by 1.1 BLEU points. Our proposed approach also outperforms the multi-hop multi-head attention model on the wmt17 dataset.
\fi
Table~\ref{WMT17_HindEnCorp_results} shows the performance of different methods.
We observe that T-optimal outperforms T-base, and this demonstrates that the optimal hyperparameters found in our extensive hyperparameter search yield higher performance compared to the default hyperparameters in \cite{vaswani2017attention}.
Also, DHICM achieves a higher BLEU score, and outperforms T-optimal on HindEnCorp and WMT17-CS datasets by 3.45 and 1.12 BLEU points, respectively.
We also performed experiments on the extremely low-resource language pair Be-En, and observed that T-base achieved a 4.09 BLEU score, while T-optimal achieved 5.49, outperforming T-base by 1.4 BLEU points. Moreover, DHICM achieved a 6.29 BLEU score, outperforming T-optimal by 0.8 BLEU points. We also compared the performance of our method with the
multi-hop multi-head attention model \cite{iida2019attention} on WMT17-CS De-En dataset. We observed that DHICM outperforms \cite{iida2019attention} by 1.77 BLEU points.
\begin{table}[t]
\centering
{
\begin{tabular}{cccc}
\toprule
\specialcell{Train Set Size} & \specialcell{T-base} & \specialcell{T-optimal} & \specialcell{DHICM} \\
\midrule
10K & 9.23 & 4.39 & \textbf{14.03} \\
20K & 13.44 & 7.62 & \textbf{22.00} \\
30K & 16.43 & 23.06 & \textbf{26.37} \\
40K & 19.31 & 27.79 & \textbf{28.32} \\
80K & 27.58 & 32.73 & \textbf{32.93} \\
120K & 30.93 & 34.60 & \textbf{ 34.7} \\
160K & 32.72 & 35.85 & \textbf{35.92} \\
\bottomrule
\end{tabular}
}
\caption{BLEU score averaged over 3 randomly sampled training sets from full IWSLT14 training set}
\label{IWSLT14_results}
\end{table}
\begin{figure*}[!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{baseline.pdf}
\caption{Traditional Transformer-Base model}
\label{baseline}
\end{subfigure}
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{aoa.pdf}
\caption{DHICM}
\label{aoa}
\end{subfigure}
\caption{Encoder-Decoder Attention distribution (from model trained using 20K sentence pairs of IWSLT14 training set)}
\label{attndist}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{baseline3.pdf}
\caption{Traditional Transformer-Base model}
\label{baseline3}
\end{subfigure}
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{aoa3.pdf}
\caption{DHICM}
\label{aoa3}
\end{subfigure}
\caption{Encoder-Decoder Attention distribution (from model trained using 160K sentence pairs of IWSLT14 training set)}
\label{attndist_high}
\end{figure*}
Table~\ref{IWSLT14_results} shows the BLEU score achieved by the models trained with smaller training sets that are randomly sampled from full IWSLT 2014 training set. We observe that the performance of all methods increases with an increase in the training set size, and DHICM achieves a much higher performance compared to T-base for all training set sizes.
\iffalse
T-optimal achieves BLEU score comparable to our approach for
larger training sets, however, in the low-resource setting (i.e., smaller training sets), the performance of T-optimal is much lower compared to
our approach.
\fi
The performance of T-optimal and DHICM is similar for larger datasets; however, for low-resource datasets, our approach outperforms T-optimal by a large margin.
Since the hyperparameters for both T-optimal and DHICM are the same, the gain in performance of our method is due to the proposed second-level attention over the multi-head attention. In addition, our proposed loss function (Section~\ref{Loss})
prevents the model from assigning the same importance to all heads. Thus, we are able to filter the most important heads.
\section{Analysis}
Our proposed approach DHICM outperforms T-base and T-optimal by a large margin in the low resource conditions.
We further analyzed the performance of the baseline model and DHICM, and observed that DHICM learns better word alignment especially, in low resource conditions.
One of the reasons for learning better alignment can be that for each word, all heads are not equally important.
The second level attention that we designed in our model allows the tokens to pay more attention to the heads that capture more relevant information for translation. Since the heads that are more relevant
receive more attention, the parts of the input to which these heads attend in turn receive more attention, and thus the alignment improves. For example, the model can pay more attention to the heads that capture syntactic or semantic information, and relatively less to the heads that capture positional information.
This justifies our hypothesis mentioned in Section~\ref{DHIC}.
We also verified this using the encoder-decoder attention distribution of the models shown in Figure~\ref{attndist} (low resource conditions) and Figure~\ref{attndist_high} (high resource conditions). The decoder of the transformer model uses the outputs of the encoder to generate the tokens in the target language. Each generated token pays some attention to each token in the source language. The attention distribution matrix shows the attention paid by the generated tokens in the target sentence (rows) to the tokens in the source sentence (columns).
In Figure~\ref{baseline} and Figure~\ref{baseline3}, we can see that most of the tokens on the source side get similar attention for the baseline approach. Moreover, the highest attention a source token receives is approximately 0.12 and 0.5 in Figure~\ref{baseline} and Figure~\ref{baseline3}, respectively. This implies that the most important source token for translation does not receive enough attention, resulting in a poor word alignment.
On the contrary, for DHICM (Figure~\ref{aoa} and Figure~\ref{aoa3}), we observe a large variance in the distribution of the attention paid by a target token to the source tokens. Thus, more appropriate source tokens receive higher attention scores $(\sim 0.8)$ in DHICM, leading to a better word alignment, as shown in both Figure~\ref{aoa} and Figure~\ref{aoa3}. Also when 160K training sentences are used for IWSLT14, although the performance of the baseline and DHICM is similar, DHICM learns better word alignments compared to the baseline (shown in Figure~\ref{attndist_high}), as DHICM helps the model to pay more attention to more relevant source tokens. Moreover, DHICM allows the model to pay higher attention ($\sim 0.8$) to the appropriate source words compared to the baseline model where highest attention received by a source token is $\sim 0.5$. This shows that for both low resource and high resource conditions, DHICM helps the model to pay higher attention to the more relevant source tokens.
We also analysed the additional attention layer introduced in DHICM. Using the second-level attention, we compute the attention paid by each token to each head and plot these values to create an attention distribution matrix. Figure~\ref{headattn} shows the attention distribution for the second-level attention added on top of the multi-head self attention in the last layer of the encoder. The matrix shows the attention paid by each source token (rows) to all 4 heads (columns). The distribution shows that each token pays a different amount of attention to each head, which justifies our hypothesis that all heads are not equally important. Also, different tokens pay different amounts of attention to a particular head, which supports our hypothesis that the importance of a head is dynamic, i.e., it varies as the input token changes. The attention distribution matrix also shows that the additional loss term indeed allows the model to compute different importance scores for different heads. In Figure~\ref{headattn}, we can see that the second head receives the least attention from all tokens. This shows that our proposed method identifies the least important heads, and thus, by incorporating DHICM, an appropriate pruning strategy can be developed to prune them.
\begin{figure}[!h]
\hspace{-10mm}
\centering
\includegraphics[width=0.4\textwidth]{Layer1.pdf}
\caption{Attention paid by each source token to all the heads (from model trained using 160K sentence pairs of the IWSLT2014 training set). Brighter colors correspond to higher attention and darker colors to lower attention.}
\label{headattn}
\end{figure}
\section{Related Work}
Some recent work has shown that most of the heads in a multi-head attention model
become redundant during test time
\citep{michel2019sixteen}.
\citep{voita2018context, voita2019analyzing}
analyzed the heads in a multi-head attention model,
based on some importance score that is calculated after the model is fully trained.
In contrast, in this work, we propose to calculate the importance scores dynamically while training.
A recent work \cite{iida2019attention} proposed
to apply attention on top of the output of multi-head attention. However, they apply the additional attention layer only on the encoder, whereas our proposed method applies the second-level attention on both the encoder and decoder, which helps the generated target words pay significant attention to the appropriate source words and, in turn, enhances the encoder-decoder attention distribution, as shown in Figure~\ref{aoa}. Moreover, their approach might learn equal attention weights for the additional attention layer, which would make all heads equally important. In such a case, their approach would perform similarly to the transformer-base model, despite adding more parameters than the standard transformer. To address this, our method adds an extra loss term that penalizes learning similar weights for the second-level attention, which helps compute different importance scores for different heads.
Furthermore, during the calculation of the final attention, they transform the output of each head using a different transformation matrix for each head, while our proposed approach DHICM uses a single transformation matrix for the outputs of all heads. Thus, DHICM learns much fewer number of parameters in addition to achieving greater performance
(the number of additional parameters learned in their approach is 550K, whereas DHICM learns 500K additional parameters).
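To make the contrast concrete, the second-level attention with a single shared transformation matrix can be sketched as follows. This is a simplified numpy mock-up, not the exact DHICM equations: the function names, the scoring query `q`, and the scaled-dot scoring are our own illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def head_importance_attention(head_outputs, W, q):
    # head_outputs: (H, L, d) outputs of the H heads for L tokens
    # W: (d, d) single transformation matrix shared by all heads
    # q: (d,) hypothetical scoring query for the second-level attention
    H, L, d = head_outputs.shape
    transformed = head_outputs @ W                    # one shared W, not one per head
    scores = np.einsum('hld,d->lh', transformed, q) / np.sqrt(d)
    alpha = softmax(scores, axis=-1)                  # per-token head importance
    combined = np.einsum('lh,hld->ld', alpha, transformed)
    return combined, alpha

rng = np.random.default_rng(0)
heads = rng.normal(size=(8, 5, 16))                   # H=8, L=5, d=16
W, q = rng.normal(size=(16, 16)), rng.normal(size=16)
combined, alpha = head_importance_attention(heads, W, q)
```

Using one `W` for all heads is what keeps the additional parameter count at a single $d \times d$ matrix, instead of one matrix per head.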
\section{Conclusion and Future Work}
In this work, we proposed an effective Dynamic Head Importance Computation Mechanism (DHICM) to dynamically calculate the importance of different heads during training. Our idea is to compute the importance with an additional attention layer on top of the standard multi-head attention. We also proposed a loss function that prevents our method from computing equal importance for all heads, which, together with the second-level attention, helps to dynamically identify the heads that are most important to the input word. Thus, the generated target words pay significantly higher attention to the more relevant source words.
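A minimal sketch of such a penalty term is given below. The specific choice here, the negative KL divergence from the uniform head distribution, is our own illustrative stand-in and not necessarily the exact loss used in DHICM; it is zero when all heads receive equal weight and negative otherwise, so adding it to the training loss discourages uniform importance.

```python
import numpy as np

def uniformity_penalty(alpha, eps=1e-12):
    # alpha: (L, H) head-importance weights, each row summing to 1.
    # KL(alpha_row || uniform) is 0 only when all heads are weighted equally,
    # so returning its negative mean penalizes near-uniform importance.
    H = alpha.shape[-1]
    kl = np.sum(alpha * np.log(alpha * H + eps), axis=-1)
    return -np.mean(kl)

uniform = np.full((4, 8), 1.0 / 8)          # all heads equally important
peaked = np.full((4, 8), 0.3 / 7)           # mass concentrated on one head per row
peaked[np.arange(4), np.arange(4)] = 0.7
```

Minimizing the total loss then favors `peaked`-like weight patterns over `uniform` ones.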
We also performed extensive hyperparameter tuning on a subset of hyperparameters,
and observed that the optimal hyper-parameters obtained from our search yield a much higher BLEU score compared to the default hyper-parameters.
Experiments on multiple translation tasks show that DHICM outperforms the standard transformer model
by a large margin, especially in low resource settings.
In the future,
we will use the importance scores of the heads computed using DHICM and implement a strategy for pruning the less important heads. We would also like to explore further in the direction of reducing redundancy in multi-head attention.
\bibliographystyle{acl_natbib}
\section{Introduction}
Analyzing human motion has been an intensely studied problem in the computer vision community. While most works focus on the challenging task of action detection and recognition, there is still limited work in the domain of human motion quality assessment from a functional point of view.
Several potential applications for this exist, including health care for patient rehabilitation and sports for athlete performance improvement \cite{pirsiavash2014assessing}.
Assessing the quality of human motion/actions is a difficult problem. Human experts such as coaches, physiotherapists, or doctors have been trained extensively to discover the rules required to assess different types of motions.
In this work, we concentrate on motion quality assessment from a correctness perspective, using machine learning methods. We want to determine whether machine learning methods can classify an action, drawn from a set of actions executed both correctly and incorrectly, as valid or not, in a binary manner.
\section{Related work}
The tasks of everyday activity recognition \cite{pirsiavash2012detecting}\cite{shahroudy2016ntu} and action recognition \cite{idrees2017thumos} have been discussed in several papers, with a large number of datasets being publicly available \cite{escalera2017challenges}.
There are a few articles in which the task of \textit{action correctness} is approached from different perspectives. Some approaches use accelerometer sensors, while others rely on depth or colour cameras.
\cite{ebert2017qualitative} tackles the task of a qualitative assessment of human motion through the use of an accelerometer and assigning a quality class label to each motion. Other authors, such as \cite{parisi2016human}, focus on computing how much a performed movement recorded using a depth camera matches the correct continuation of a learned sequence. \cite{paiement2014online} uses the recorded gait movement of six healthy subjects going up the stairs to train a model. The model's ability to detect the abnormalities is tested on 6 other patients with 3 types of simulated knee injuries.
Action correctness is also very similar to action completeness \cite{heidarivincheh2016beyond}. In this context, an action is considered completed if the action goal was achieved: i.e. \textit{the drinking action is completed when one actually consumes a beverage from a cup}. The authors used six types of actions to test completeness, all of which involve interaction with different objects. In this article, we do not aim for action completeness; instead, we focus on the specific task of how correctly an action is performed.
For the action correctness task, the only publicly available dataset that we found is UI-PRMD, proposed by \cite{vakanski2018data}. They have recorded 10 subjects performing 10 types of actions, with each action being performed in an optimal and non-optimal way. The dataset was not recorded with a particular type of injury in mind, but focuses instead on healthy subjects performing a few types of exercises.
At the moment, no public baseline benchmark has been published on the UI-PRMD dataset. Nevertheless, we are studying the feasibility of training an action correctness model on this dataset. We construct a binary classifier for every type of exercise, with the purpose of differentiating between a correctly and wrongly executed action. Different subjects might perform the non-optimal movement in several ways. For example, for the \textit{"Deep squat"} exercise, the non-optimal movement is defined by \cite{vakanski2018data} as \textit{"Subject does not maintain upright trunk posture, unable to squat past parallel, demonstrates knee values collapse or trunk flexion greater than 30$^{\circ}$"}. This definition allows a certain degree of subjectivity in assessing the correctness of the movement.
\begin{figure*}[h!]
\includegraphics[width=\textwidth]{TCN5.png}
\caption{The structure of the Res-TCN, where \textbf{BRC} stands for \textbf{B}atch Normalization, \textbf{R}eLU and \textbf{C}onvolution}
\label{fig:tcnstructure}
\end{figure*}
Instead of using hand-crafted features for every type of action, our purpose is to use a machine learning system to learn what makes a movement non-optimal. We use the Temporal Convolutional Neural Network (Res-TCN) proposed by \cite{kim2017interpretable}. This classifier is one of the top performing methods on a large scale action recognition dataset \cite{shahroudy2016ntu}, giving slightly better performance than the STA-LSTM used by \cite{song2017end}.
\section{Machine learning for action correctness}
The results presented in this section are obtained with Convolutional Neural Networks. It is worth mentioning that a few other standard machine learning methods were tested, including Support Vector Machines and Random Forests, but their classification accuracy was not significantly higher than 50\%, which is close to a random decision.
\subsection{Overview of the Res-TCN classifier}
In this section we provide a short overview of the structure of the network proposed by \cite{kim2017interpretable}.
Figure \ref{fig:tcnstructure} depicts the structure of the Res-TCN network. The input $X_0$ to the network is the concatenated skeleton features from every frame of the video sequence. This is followed by a first convolution with the convolution filter length $c$ of eight, a stride $s$ of one and number of filters $f$ of eight.
The following nine blocks are \textit{Residual Units} introduced by \cite{he2016deep} and consist of batch normalization, ReLU and convolution operations, with the number of filters of the convolution increasing from 64 to 128 and 256. After the last layer a global average pooling is used across the entire temporal sequence. A final softmax layer with the number of neurons equal to the number of classes is used for classification.
The advantage of the Res-TCN architecture over recurrent structures like LSTM alternatives is possible model interpretability as shown by \cite{kim2017interpretable}.
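The basic BRC unit described above can be sketched in a few lines of numpy. This is a simplified, inference-style mock-up under our own assumptions (per-channel normalization over the temporal axis, no residual addition, no learned batch statistics), not the full Res-TCN implementation.

```python
import numpy as np

def brc_block(x, w, stride=1, gamma=1.0, beta=0.0):
    # x: (T, C_in) temporal sequence; w: (k, C_in, C_out) convolution filters.
    # Batch normalization (inference-style, per channel over time)
    x = gamma * (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5) + beta
    x = np.maximum(x, 0.0)                       # ReLU
    k = w.shape[0]
    xp = np.pad(x, ((k // 2, k // 2), (0, 0)))   # 'same' padding for odd k
    return np.stack([(xp[t * stride:t * stride + k, :, None] * w).sum(axis=(0, 1))
                     for t in range(x.shape[0] // stride)])

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 3))                     # 10 frames, 3 input channels
w = rng.normal(size=(3, 3, 4))                   # filter length 3, 4 output channels
y = brc_block(x, w)
y2 = brc_block(x, w, stride=2)
```

In the actual residual units, the output of a stack of such BRC operations is added back to the block input when the shapes match.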
\subsection{Model parameters}
In \cite{kim2017interpretable}, the authors use the raw 3D skeleton points as input to the TCN. We have used a similar setup, but have also tested the system performance when it receives as input the angles between different joints. For the 3D skeleton point setup, we take the computed (X, Y, Z) values of each skeleton joint and concatenate all values to form a skeleton feature. A skeleton feature per frame is a 66-dimensional vector, obtained by multiplying the number of joints (22) by the data per point (3), as can be seen in Figure \ref{fig:X0}.
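The per-frame feature construction amounts to a simple flattening, sketched below (function name is ours; joint ordering is assumed to follow the dataset).

```python
import numpy as np

def skeleton_features(joints):
    # joints: (T, 22, 3) (X, Y, Z) positions of the 22 joints over T frames.
    # Concatenating per frame gives a 66-dimensional vector: 22 * 3 = 66.
    T, J, D = joints.shape
    assert (J, D) == (22, 3)
    return joints.reshape(T, J * D)

X0 = skeleton_features(np.zeros((5, 22, 3)))    # 5 frames -> (5, 66)
```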
\begin{figure*} [h!]
\includegraphics[width=\linewidth]{ExampleX0.png}
\caption{The raw feature extraction for a single sample from the UI-PRMD dataset}
\label{fig:X0}
\end{figure*}
One disadvantage of a TCN architecture over other types of architectures like LSTM is that the input size has to be consistent over all examples. We overcome this limitation by finding the maximum video length across all segmented movements and using zero padding for the 2D feature array.
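The padding step can be sketched as follows (a minimal helper under our own naming):

```python
import numpy as np

def pad_sequences(samples):
    # samples: list of (T_i, F) feature arrays with varying lengths T_i.
    # Returns a (N, T_max, F) array, zero-padded to the longest sequence.
    t_max = max(s.shape[0] for s in samples)
    out = np.zeros((len(samples), t_max, samples[0].shape[1]))
    for i, s in enumerate(samples):
        out[i, :s.shape[0]] = s
    return out

batch = pad_sequences([np.ones((3, 2)), np.ones((5, 2))])
```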
For the Res-TCN parameters, we used the same configuration as proposed by \cite{kim2017interpretable}: stochastic gradient descent with Nesterov acceleration and a momentum of 0.9; an $L1$ regularizer with a weight of $10^{-4}$ applied to all convolution layers; and, to prevent overfitting, a dropout of 0.5 after every ReLU.
The model is trained for 500 epochs, and we use a batch size of 128.
\subsection{Data and experiments}
As mentioned above, we use the UI-PRMD dataset for our machine learning investigation. The data has been recorded using both a Kinect camera and a Vicon optical tracker \cite{vicon}. The Vicon optical tracker is a system designed for capturing human motion with high accuracy and consists of eight high speed cameras that track a set of retroreflective markers. We have focused just on the data recorded with the Kinect, due to the low cost of the sensor compared with Vicon.
UI-PRMD data consists of 10 movements: deep squat, hurdle step, inline lunge, side lunge, sit to stand, standing active straight leg raise, standing shoulder abduction, standing shoulder extension, standing shoulder internal-external rotation, and standing shoulder scaption. Each movement is repeated 10 times by each of the 10 individuals recorded.
The Kinect data is presented in the form of 22 YXZ triplets of Euler angles and positions. The values for the waist joint are given in absolute coordinates, while the values of the remaining joints are given in relative coordinates with respect to the parent joint \cite{vakanski2018data}. Based on the local angle and position data, we computed the transformation matrices in order to obtain the absolute joint coordinates as 3D points.
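The parent-to-child chaining of transforms can be sketched as below. This is a generic forward-kinematics sketch under our own assumptions (joints ordered so that every parent precedes its children; building each rotation matrix from the YXZ Euler angles is omitted), not the exact preprocessing code.

```python
import numpy as np

def absolute_positions(rel_pos, rel_rot, parents):
    # rel_pos: (J, 3) joint positions relative to the parent (root is absolute).
    # rel_rot: (J, 3, 3) rotation of each joint frame relative to the parent,
    #          e.g. built from the YXZ Euler angles of the dataset (not shown).
    # parents: parents[j] is the parent index of joint j; -1 marks the root.
    J = len(parents)
    abs_rot, abs_pos = np.empty((J, 3, 3)), np.empty((J, 3))
    for j, p in enumerate(parents):
        if p < 0:
            abs_rot[j], abs_pos[j] = rel_rot[j], rel_pos[j]
        else:
            abs_rot[j] = abs_rot[p] @ rel_rot[j]
            abs_pos[j] = abs_pos[p] + abs_rot[p] @ rel_pos[j]
    return abs_pos

# Two-joint check: root rotated 90 degrees about z, child offset (1, 0, 0)
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
pos = absolute_positions(np.array([[0., 0., 0.], [1., 0., 0.]]),
                         np.stack([Rz90, np.eye(3)]), parents=[-1, 0])
```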
We followed a cross-subject evaluation splitting the data into training, validation and testing. For the training phase we have used 6 persons, 3 were used in validation and 1 for testing. We have used a 10-fold cross validation in order to validate our model, every time using a different person in the testing set.
The authors of \cite{shahroudy2016ntu} defined the testing protocol for the NTU RGB+D dataset as a training/testing split. We find that, due to the smaller size of the UI-PRMD data, a training/validation/testing protocol is more appropriate. We use the validation set to avoid model overfitting on the training set. We save the model which generalizes best, i.e. obtains the best accuracy on the validation set, and that model is used on the testing set.
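The 10-fold cross-subject split can be sketched as follows. The exact assignment of validation subjects per fold is our own illustrative choice; the paper only specifies 6 training, 3 validation, and 1 test subject per fold, with a different test subject each time.

```python
def cross_subject_folds(n_subjects=10, n_val=3):
    # One fold per test subject; the next n_val subjects (cyclically) form the
    # validation set and the remaining six the training set.
    subjects = list(range(1, n_subjects + 1))
    folds = []
    for i, test in enumerate(subjects):
        val = [subjects[(i + k) % n_subjects] for k in range(1, n_val + 1)]
        train = [s for s in subjects if s != test and s not in val]
        folds.append((train, val, test))
    return folds

folds = cross_subject_folds()
```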
\subsection{Results and discussions}
\begin{figure}[h!]
\includegraphics[width=\linewidth]{Results3.pdf}
\caption{Average accuracy over 10-folds for every action}
\label{fig:results}
\end{figure}
For every movement type, we have trained a different model with the explicit purpose of differentiating between optimal and non-optimal movements. Figure \ref{fig:results} presents the average results for every movement type, by using as input for the neural network the absolute 3D point position or the relative joint angles.
When using the 3D points, the average accuracy is 62.3\%, while when using the relative joint angles it is 71.2\%.
The types of movements for which the non-optimal movement was consistently detected across subjects are: \textit{deep squat} with 83.5\% accuracy for relative joint angles and \textit{sit to stand} with 82\% accuracy. The movements that proved to be the most difficult are \textit{standing shoulder extension} with 62\% accuracy and \textit{standing shoulder internal-external rotation} with 63\% accuracy.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{3GesturesAnalysed2.png}
\caption{Accuracy for side lunge, standing shoulder internal-external rotation and standing shoulder scaption for each test case}
\label{fig:results3Gestures}
\end{figure}
In figure \ref{fig:results3Gestures} we present a detailed view of the classifier performance for 3 actions (side lunge, standing shoulder internal-external rotation and standing shoulder scaption) and all subjects in the dataset. For example, we obtained an accuracy of $\sim$50\% for \textit{Person 1} for \textit{side lunge} by training a binary classifier where the training set consisted of \textit{Person 2} to \textit{7}, the validation set of \textit{Person 8} and \textit{9}, and the test set of \textit{Person 1}. An accuracy of 50\% is equivalent to a random decision level, therefore the classifier was not able to generalize for this particular subject. On the other hand, for \textit{Person 2} and \textit{8} the classifier achieved a perfect accuracy for certain types of actions.
One reason for this discrepancy in performance between different subjects might be the way the actions were actually performed. For example, for the three analyzed actions, side lunge, standing shoulder internal-external rotation and standing shoulder scaption, the motion is performed using the right side of the body (right leg, right shoulder) by most of the subjects. The exceptions are Person 7 and Person 10, who perform the motion using the left side of the body.
We have also trained a general model that classifies for any input movement if it is performed correctly or not and it obtained an average accuracy of 63.3\% when using joint angles as input data. This is much lower than the 71.2\% accuracy obtained when training specialized models for every type of movement.
\section{Conclusion}
The work presented here is an initial investigation of the applicability of machine learning to human motion quality assessment. In particular, we looked into the task of training a model to recognize an action or a movement as correct or not. Our results show a high variability of results for different action types. Angle features seem to be more relevant than using raw 3D joint positions. More work has to be done in identifying, adapting, and improving certain machine learning methods, such as the convolutional neural networks which prove efficient at least for some classes of actions.
The results show that actions which involve a movement symmetry of the body, like sit to stand and deep squat, were easier to train. For certain classes of actions, the variability of movement is too high. Therefore, we believe that a data augmentation process might help especially with these action types. As future work, we plan to augment the training set with simulated data, by translating the motion performed by one side of the body to the other.
\section{Introduction}
The first evidence of bound states of spin waves in ferromagnets goes back to 1931 and Bethe's solution of the spin-1/2 Heisenberg chain\cite{Bethe_1931}, only one year after Bloch's theory of spin waves\cite{Bloch}.
Since the Heisenberg ferromagnet conserves the number of spin-deviations and the states with different spin deviations are orthogonal to each other, Bloch realised that the low lying excitations are better understood in terms of spin-deviations, and that the Heisenberg model can be simply diagonalised in the subspace of single spin-deviation states in momentum basis, leading to the concept of spin-waves\cite{Bloch}. Similarly, the model can be studied in the subspace of two spin-deviation states, and the problem can be reduced to a one particle problem in the centre-of-mass momentum basis, with quite generally a continuum of two-magnon excitations and a well separated bound state emerging from two neighboring deviations\cite{Bethe_1931,Dyson_1956, Wortis_1963, DCMattis, Haldane_1982a, Haldane_1982b}. In 1D, the bound state exists as a separate excitation for all values of the wave-vector $k$.
The Heisenberg ferromagnet is realised in nature for example in the spin-1/2 compound $(\mathrm{C}_6\mathrm{D}_{11}\mathrm{N}\mathrm{D}_3)\mathrm{Cu}\mathrm{Br}_3$ or in the spin-1 compounds $\mathrm{CsNi}\mathrm{F}_3$ and $\mathrm{Ni}\mathrm{Nb}_2\mathrm{O}_6$, to cite a few \cite{Mikeska_1977, Bohn_1980, Kopinga_1986, PChauhan_2020, Tapan_Neutron_Scattering}. Inelastic neutron scattering (INS) experiments have probed spin-wave excitations by measuring the differential scattering cross-section in these ferromagnetic compounds\cite{Kopinga_1986,DeVries_1989}. However, detecting the bound state of spin-waves in a ferromagnet has remained a challenge because INS measures single spin-flip excitations. At low temperatures the thermal ensemble is predominantly populated by fully aligned states, and accordingly one observes spin-wave excitations only. However, upon increasing the temperature, the thermal ensemble is populated with single-spin-deviation states and a spin-flip in these states results in states in the two-spin deviation subspace. Thus, INS experiments are in principle able to capture the bound-state of ferromagnets if they are performed at a non-zero temperature.
There have been previous attempts to observe the bound states of spin-waves. Silberglitt and Harris have demonstrated that the bound states, in a 3D ferromagnet, have observable signatures in the thermal dynamical structure factor (DSF) in the large wavelength limit ($k=0$)\cite{Silberglit_Harris_1967,Silberglitt_Harris_1968}. The resonance of the bound state with the two-spin-wave continuum results in a broadening of the line width and an increase of the intensity of the single spin-wave peak which, a priori, can be detected in INS experiments. However, although the bound states of 3D ferromagnets exist as separate modes above a certain threshold of $k$, they are expected to gather a very small spectral weight compared to the main spin-wave excitations and to lie too close in energy to be properly resolved after thermal broadening \cite{Silberglitt_Harris_1968}. Thus, the direct detection of bound states in INS experiments is difficult. Instead far-infrared transmission techniques have been used to find indirect signatures, and ferromagnetic resonances have been observed for $\mathrm{Co}\mathrm{Cl}_{2}.2\mathrm{H}_2 \mathrm{O}$(CC2)\cite{Date_Motokawa_1966, Torrance_Tinkham_1969}. However, one cannot completely distinguish the bound state as the far infrared measurements are close to $k = 0$ where the bound state is not well-resolved from the single spin-wave excitations.
In the case of 1D ferromagnets the difference in excitation energy between the continuum and the bound state is largest at $k=\pi$, and this difference is larger than in its 3D counterpart. Therefore, a finite temperature INS experiment has a better chance to resolve and detect the bound state of spin-waves in 1D. To check this expectation, we directly simulate the DSF using the finite-temperature time-dependent Density Matrix Renormalization Group algorithm (henceforth referred to as the thermal t-DMRG algorithm). We find evidence of the bound state and compare its spectral intensity with that of the single spin-wave peak. Although bound states exist for ferromagnets with arbitrary spin, we focus our investigation on the finite temperature dynamics of spin-1/2 and spin-1 ferromagnetic chains.
The paper is organised as follows: in section \ref{method}, we briefly describe the numerical method we used to obtain the results. In section \ref{spin-1/2 FM chain}, we discuss the thermodynamics of the spin-1/2 FM chain obtained from thermal DMRG simulations and benchmark it with Wang-Landau Stochastic Series Expansion Quantum Monte Carlo (QMC) algorithm from ALPS package. We then report on the finite temperature dynamics of spin-waves for the spin-1/2 FM chain. In order to characterise the bound state, we measure the spectral weights associated with it in the isotropic case and give simple arguments to motivate the shape of the spectral peak and the nature of its temperature dependence which we support with spin-wave calculations done in the limit of vanishing magnetic field in section \ref{Thermal_DSF_mag_field}. In section \ref{spin-1 FM chain}, we extend the discussion to spin-1 chains. We discuss the two-spin deviation spectrum of the spin-1 FM chain using the dynamical quadrupolar structure factor at zero temperature, and we explain the origin of resonances in the two-magnon continuum by including biquadratic interactions. At finite temperature, we find clear signatures of the bound state in the thermal DSF of the Heisenberg model and of anti-bound states for large biquadratic interactions, and we extend the discussion to the case with easy axis anisotropy.
\section{The method}
\label{method}
INS experiments measure a differential scattering cross-section which is directly proportional to the dynamical structure factor (DSF), denoted ${S^{\alpha\tilde{\alpha}}(k,\omega)}_{\beta}$. The DSF is the Fourier transform of a time-dependent correlation function $C^{\alpha\tilde{\alpha}}(l,t;\beta)$ defined by:
\begin{eqnarray}
&&C^{\alpha\tilde{\alpha}}(l,t;\beta) = \mathrm{Tr}\lbrack \hat{\rho}_{\beta}S^{\alpha}(\Delta r_l;t)S^{\tilde{\alpha}}(0;0)\rbrack\nonumber\\
&&\\
\label{Spin_DSF_def}
&&{S^{\alpha\tilde{\alpha}}(k,\omega)}_{\beta} = \frac{1}{L^2}\int_{-\infty}^{\infty}dt\sum_{l}e^{-i(k\Delta r_l+\omega t)}C^{\alpha\tilde{\alpha}}(l,t;\beta)\nonumber
\end{eqnarray}
where, $L$ is the number of sites and $\Delta r_l$ is the relative position with respect to the centre of the chain at $l=0$ and $\beta$ is the inverse temperature. The thermal DSF ${S^{\alpha\tilde{\alpha}}(l,t)}_{\beta}$ can represent different components depending on $\alpha,\tilde\alpha\in\lbrace +, - , z\rbrace$.
The computation of the DSF at finite temperature can be performed by using a time-dependent DMRG algorithm on a thermal ensemble (denoted as $\hat{\rho}_\beta$), pioneered by Barthel et al.\ \cite{Barthel_2009, Barthel_2013} and Kestin et al.\ \cite{Noam_Kestin_2019}. The thermal ensemble is a mixed state and the matrix product density operator (MPDO) ansatz is better suited for simulating mixed states. However, it is more convenient to construct the MPDO in terms of purified matrix product states (MPS).
A purified MPS is defined on an enlarged Hilbert space, namely a physical Hilbert space and an ancillary Hilbert space \cite{Verstraete_Cirac_2004,Schollwock_2011,Paeckel_2019}. We simulate purified states up to half the inverse temperature, denoted as $\hat{\rho}_{\beta/2}$, by applying the time evolution operator in second-order Suzuki-Trotter steps \cite{Feiguin_White_2004} and then compute the ensemble average of observables by tracing over the ancillary degrees of freedom. Since the thermal ensemble is Hermitian (i.e. $\hat{\rho}_{\beta/2} = \hat{\rho}_{\beta/2}^{\dagger}$), observables can be computed as $\mathrm{Tr}\left( \hat{\rho}^{\dagger}_{\frac{\beta}{2}}O\hat{\rho}_{\frac{\beta}{2}}\right)$. Tracing over only the ancillary degrees of freedom, i.e. $\mathrm{Tr}_a\left(\hat{\rho}_{\frac{\beta}{2}}\hat{\rho}^{\dagger}_{\frac{\beta}{2}}\right)$, yields the full thermal ensemble ($\hat{\rho}_{\beta}$) as an MPDO.
The computation of the thermal DSF essentially consists of two steps: (i) simulation of the thermal ensemble by performing imaginary time evolution; (ii) simulation of real time evolution after applying the relevant spin operator to the thermal ensemble. For the simulation of the thermal ensemble, we used imaginary-time Trotter steps of $\Delta \beta = 0.01/J$, keeping the truncation weights of the order of $\mathcal{O}\left(10^{-8}\right)$. For real-time evolution we used Trotter steps of $\Delta t =0.1/J$, keeping the truncation weights of order $\mathcal{O}\left(10^{-4}\right)$. In practice, the time-dependent correlation function is computed as follows:
\begin{eqnarray}
C^{\alpha\tilde{\alpha}}(l,t;\beta) = \mathrm{Tr}\lbrack \hat{\rho}^{\dagger}_{\beta/2}S^{\alpha}(\Delta r_l;0)e^{-iHt} S^{\tilde{\alpha}}(0;0)\hat{\rho}_{\beta/2}e^{iHt}\rbrack\nonumber
\end{eqnarray}
Since the space Fourier-transformed correlations for \textit{positive} and \textit{negative} times are related by conjugation, only the positive time correlations were simulated. The simulations were run up to a final time $t_f = 20/J$ (instead of infinite time) to obtain the DSF, a time that is large compared to the characteristic timescale $1/J$ of the interaction. We made sure that, for all the system sizes, the temporal spread of correlations does not reach the boundary. Upon taking the time Fourier transform, the DSF gets convoluted with a sharp window of finite time which results in numerical artefacts. This problem is overcome by multiplying the correlations with a Gaussian filter $2(\pi t^2_f)^{-1/2}e^{-4t^2/t_f^2}$ in order to smooth out the finite time effects \cite{Feiguin_White_2004, Bouillot_2011}. The resulting numerical DSF has a spatial resolution of $\Delta k = 2\pi/L$ and a frequency resolution of $\Delta \omega = \pi/t_f$.
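The post-processing steps (extension to negative times by conjugation, Gaussian windowing, and the double Fourier transform) can be sketched as below. This is a simplified numpy version under our own conventions: the conjugation symmetry is applied directly to $C(l,t)$ as an illustrative shortcut, and the discrete FFT stands in for the continuous transform.

```python
import numpy as np

def dsf_from_correlations(C, dt=0.1):
    # C: (L, Nt) complex correlations C(l, t) at positive times t = n*dt.
    L, Nt = C.shape
    tf = (Nt - 1) * dt
    t = np.arange(Nt) * dt
    # Gaussian filter suppressing the sharp finite-time window
    Cw = C * 2.0 / np.sqrt(np.pi * tf**2) * np.exp(-4 * t**2 / tf**2)
    # extend to negative times via the conjugation relation
    Cfull = np.concatenate([np.conj(Cw[:, :0:-1]), Cw], axis=1)
    S = np.fft.fftshift(np.fft.fft2(Cfull)) / L**2
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(L))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(Cfull.shape[1], d=dt))
    return k, w, S

rng = np.random.default_rng(2)
C = rng.normal(size=(32, 201)) + 1j * rng.normal(size=(32, 201))
k, w, S = dsf_from_correlations(C)
```

The momentum grid spacing reproduces the stated resolution $\Delta k = 2\pi/L$.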
Numerically, the spin-1 chain poses a separate challenge: the physical dimension increases and the complexity of the algorithm scales as $\mathcal{O}(d^6\chi^3)$, which is expensive in computational resources. We restricted ourselves to 60 sites and $\chi = 400$ for the thermal DSF computations. For a larger number of sites, one would have to keep a larger bond dimension in order to faithfully represent the thermal ensemble. In order to avoid boundary effects from the time-evolution cone touching the ends of the chain, we limit ourselves to a final time $t_f = 14/J$. This reduces our frequency and spatial resolution compared to the spin-$1/2$ chain. Since we work with a small chain size and a short final evolution time, we applied a Gaussian filter $2(\pi t^2_f)^{-1/2}e^{-4t^2/t_f^2}e^{-4x^2/(L-1)}$ to the DSF in order to smoothen out both temporal and spatial finite size effects.
\section{Spin-1/2 Ferromagnetic chain}
\label{spin-1/2 FM chain}
\subsection{Model, scattering states, and bound state}
We discuss the spin-1/2 ferromagnetic Heisenberg chain with $L$ sites and periodic boundary conditions described by the Hamiltonian:
\begin{eqnarray}
\mathcal{H}_{FM} = -J\sum_{i=1}^{L} \mathbf{S}_{i}\cdot \mathbf{S}_{i+1}, \hspace{0.4cm}J>0
\label{spin_half_FM_Hamiltonian}
\end{eqnarray}
The low-lying excitations can be described in terms of spin-deviations.
\begin{enumerate}
\item The ground state can be chosen to be the state which is fully-aligned in a given direction. It has no spin-deviation. The ground state energy is $E_0 = -JL/4 $.
\item There are $L$ one-spin-deviation states. The Hamiltonian is trivially diagonalized in the momentum basis, leading to spin-wave states with energy:
\begin{eqnarray}
\omega_{1}(k) = E_{1}(k)-E_0 = J(1-\cos k)
\label{spin12_spinwave}
\end{eqnarray}
\item There are $L(L-1)/2$ two-spin deviation states which can be written in the basis of the centre of mass momenta. One can classify them into two kinds of states:
(i) $L(L-3)/2$ scattering states of pairs of spin-waves, and (ii) $L$ bound states of spin-waves. The energy of the scattering states is given by the sum of the energies of two spin-waves:
\begin{eqnarray}
\omega_{2}(K,p) &=& J(1-\cos k_1)+J(1-\cos k_2)\nonumber\\
&=&2J\left(1-\cos \frac{K}{2}\cos p\right)
\label{spin12_two_spinwave}
\end{eqnarray}
The last line is obtained by transforming the momenta coordinates to the centre of mass momentum $K \equiv k_1+k_2$ and the relative momentum $p \equiv (k_1-k_2)/2$. The bound-state energy can be derived by following the Green's function approach \cite{Wortis_1963} or simply solving the eigenvalue equation \cite{Fukuda_Wortis_1963, DCMattis} (see Appendix \ref{Appendix_spin_half_FM_chain} for details). The dispersion relation of the bound state is:
\begin{eqnarray}
\omega_{2,\mathrm{BS}}(K) = E_{2,\mathrm{BS}}(K)-E_0 = J \sin^{2}\frac{K}{2}
\label{spin12_boundstate}
\end{eqnarray}
\end{enumerate}
The difference between the lower boundary of the continuum (given by $p=0$ in Eq.\ref{spin12_two_spinwave}) and the bound state energy is given by:
$$
4J\sin^{2}\frac{K}{4} - J\sin^{2}\frac{K}{2} = 4J\sin^{4}\frac{K}{4}\geq 0,
$$
with the equality holding only for $K=0$. Thus, the bound state exists as a well separated excitation for all $K>0$.
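The dispersion relations above are easy to verify numerically. The sketch below (with $J=1$) evaluates the bound-state dispersion, the continuum edges obtained from $p=0$ and $p=\pi$, and the gap, confirming the closed form $4J\sin^4(K/4)\geq 0$:

```python
import numpy as np

J = 1.0
K = np.linspace(0.0, np.pi, 201)            # centre-of-mass momentum
w_bound = J * np.sin(K / 2)**2              # bound-state dispersion
w_lower = 4 * J * np.sin(K / 4)**2          # continuum lower edge (p = 0)
w_upper = 4 * J * np.cos(K / 4)**2          # continuum upper edge (p = pi)
gap = w_lower - w_bound                     # should equal 4 J sin^4(K/4) >= 0
```

At $K=\pi$ the gap reaches its maximum value $J$, which is why the bound state is best resolved near the zone boundary.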
\subsection{Thermodynamics}
In order to benchmark the first step of the computation of the thermal DSF, we simulate the thermodynamics of the spin-$1/2$ FM chain. Bloch discussed the low-temperature thermodynamics of 3D ferromagnets in terms of non-interacting spin-waves \cite{Bloch}. Since the density of spin-waves in the model is very small at low-temperatures, the spin-waves can be assumed to be non-interacting. This leads to the well known temperature dependence of the magnetisation $M(T)\sim M(0)\left(1-{(T/T_c)}^{\frac{3}{2}}\right)$ for 3D. If one tries to extend this argument to 1D, one gets a diverging correction to the magnetisation. However, from Mermin-Wagner-Hohenberg theorem, the magnetisation of the 1D ferromagnet should be zero. Therefore, Takahashi complemented the non-interacting spin-wave theory with a constraint of zero magnetisation \cite{Takahashi_1986}, a method known as modified spin-wave theory, to explain the low temperature thermodynamics of the 1D ferromagnet. This leads to the following low-temperature expansion of the free energy of the 1D ferromagnet:
\begin{eqnarray}
F &=& \frac{E_0}{L}-\frac{\zeta\left(\frac{3}{2}\right)}{\sqrt{2\pi}}T^{\frac{3}{2}}+T^2\nonumber\\
&+&\sqrt{\frac{2}{\pi}}\left(\zeta\left(\frac{1}{2}\right) - \frac{\zeta\left(\frac{5}{2}\right)}{16}\right)T^{\frac{5}{2}}+\mathcal{O}(T^{3})
\label{free_energy_density}
\end{eqnarray}
where, $\zeta(\alpha)$ is the Riemann-zeta function. Other interesting thermodynamic quantities such as the entropy, the average energy and the specific heat can be extracted by using statistical physics relations, leading to the low temperature behaviours
\begin{eqnarray}
&&S\propto T^{\frac{1}{2}}\nonumber\\
&&\langle E\rangle - E_0\propto T^{\frac{3}{2}}\nonumber\\
&&C_v \propto T^{\frac{1}{2}}\nonumber
\end{eqnarray}
Note that the free energy of the 1D ferromagnet has also been computed using thermal Bethe-Ansatz by Takahashi \cite{Takahashi_1971}. This method leads to a set of coupled integral equations on spin-deviations which is analytically solvable in the limit of very low temperature. The agreement between the modified spin-wave theory and the thermal Bethe Ansatz results is excellent at very low temperature \cite{Takahashi_1971,Takahashi_1986}.
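The low-temperature expansion of Eq. \ref{free_energy_density} is straightforward to evaluate; the sketch below does so with $J = k_B = 1$ and the Riemann-zeta values hard-coded as numerical constants (the function name and the truncation at order $T^{5/2}$ are ours):

```python
import numpy as np

# Riemann zeta values entering the expansion (hard-coded numerical constants)
ZETA_12, ZETA_32, ZETA_52 = -1.4603545088, 2.6123753487, 1.3414872573

def free_energy_lowT(T, J=1.0, e0=-0.25):
    # Low-temperature expansion of the free energy per site (k_B = 1);
    # e0 = E_0/(J L) = -1/4 for the spin-1/2 chain.
    t = np.asarray(T, dtype=float) / J
    return J * (e0
                - ZETA_32 / np.sqrt(2 * np.pi) * t**1.5
                + t**2
                + np.sqrt(2 / np.pi) * (ZETA_12 - ZETA_52 / 16) * t**2.5)

F0, F1 = free_energy_lowT(0.0), free_energy_lowT(0.1)
```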
\begin{figure}
\centering
\includegraphics[width = 9cm, height= 9cm ,keepaspectratio]{Figure1_Thermal_statics_FM_Heisenberg_chain_with_qmc_error_bars_2500.pdf}
\caption{Thermodynamics of the spin-1/2 FM Heisenberg chain. The solid lines stand for our thermal DMRG data. The non-interacting spin-wave thermodynamic quantities are shown as red dashed line while the modified spin-wave thermodynamic quantities (low temperature expansion in Eq. \ref{free_energy_density}) are shown as blue dotted lines. The black symbols are QMC data obtained with the Wang-Landau algorithm. See main text for details.}
\label{Thermal_statics_FM}
\end{figure}
Numerically one is limited to finite system sizes of the thermal ensemble and the thermodynamic limit of the thermal quantities is obtained by a finite-size scaling analysis. The thermal ensemble energy is readily computed from the thermal ensemble and the specific heat is obtained by differentiating the thermal ensemble energy with respect to temperature (see Appendix \ref{finite_size_scaling_therm_quant} for finite-size scaling data). The entropy per unit length is obtained by numerically integrating the specific heat with respect to the inverse temperature. The results for the energy and the entropy lead to the estimation of the free energy. This sequence of steps is summarized below:
\begin{eqnarray}
&\mathrm{(i)}&\hspace{0.2cm}\langle E\rangle_{\beta} = \frac{1}{L\mathcal{Z}}\mathrm{Tr}\left\lbrack \mathcal{H}e^{-\beta\mathcal{H}}\right\rbrack = \frac{1}{L}\mathrm{Tr}\lbrack\hat{\rho}_{\beta/2}\mathcal{H}\hat{\rho}_{\beta/2}\rbrack\nonumber\\
&\mathrm{(ii)}&\hspace{0.2cm}C_v(\beta) = \frac{d}{dT}\langle E\rangle_{\beta}\nonumber\\
&\mathrm{(iii)}&\hspace{0.2cm}S(\beta) = -k_B\int^{\beta}_{\infty}\frac{C_v(\beta')}{\beta'}d\beta'\nonumber\\
&\mathrm{(iv)}&\hspace{0.2cm}F(\beta) = \langle E\rangle_{\beta}- TS(\beta)\nonumber
\end{eqnarray}
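The pipeline (i)-(iv) can be illustrated on an exactly solvable toy model -- a single two-level system, our choice here, not part of the main calculation -- so that the numerically reconstructed free energy can be compared with the exact $F=-k_BT\ln\mathcal{Z}$:

```python
import numpy as np

# Sketch of steps (i)-(iv) on an exactly solvable toy model -- a single
# two-level system with gap Delta -- so the numerical pipeline can be
# checked against the exact free energy F = -T ln(1 + exp(-beta*Delta)).
# The toy model and the grids are our choices, not from the main text.
Delta = 1.0
beta = np.linspace(0.05, 60.0, 40_001)
T = 1.0 / beta

# (i) thermal energy (here exact; in the text it comes from thermal DMRG)
E = Delta / (1.0 + np.exp(beta * Delta))
# (ii) specific heat C_v = dE/dT on the non-uniform T grid
Cv = np.gradient(E, T)
# (iii) entropy: integrate C_v/beta' from beta' = beta up to beta_max ~ T = 0
seg = 0.5 * (Cv[1:] / beta[1:] + Cv[:-1] / beta[:-1]) * np.diff(beta)
cum = np.concatenate(([0.0], np.cumsum(seg)))   # integral beta[0] -> beta[i]
S = cum[-1] - cum                               # integral beta[i] -> beta_max
# (iv) free energy
F = E - T * S

i = int(np.argmin(np.abs(beta - 1.0)))
F_exact = -T[i] * np.log(1.0 + np.exp(-beta[i] * Delta))
```

With these grids the reconstructed free energy agrees with the exact value to better than $10^{-3}$ at $\beta\Delta=1$.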
The agreement between our numerics and modified spin-wave theory is very good at low temperatures, where non-interacting spin-waves dictate the thermodynamics (see Fig. \ref{Thermal_statics_FM}). At higher temperatures, interactions between spin-waves become more important, and better agreement with the numerics would probably be obtained using the thermal Bethe Ansatz, but this is beyond the scope of this article. To benchmark our results at intermediate temperatures, we have used the Wang-Landau Stochastic Series Expansion QMC code of the ALPS package \cite{Wang_Landau_2001,ALPS_Troyer_2003, Bauer_2011} with $L=140$ sites, a cut-off $\Lambda=10^4$, and a temperature step $\Delta T = 10^{-3}(1/J)$. The agreement with our determination of the thermodynamic quantities is perfect within the error bars of the QMC data.
\subsection{Finite-temperature dynamics}
From the real-time evolution of the thermal ensemble, we computed the longitudinal component of the DSF, shown in Fig. \ref{Finite_temp_without_h_DSF_spin_half}. Since the ferromagnetic chain is isotropic, the transverse components are the same up to a multiplicative factor. At non-zero temperatures, the FM chain develops a thermal population of single-spin-deviation states. Flipping a single spin can then result in either of two processes: (i) de-exciting the state to the fully-aligned state; (ii) creating an additional spin deviation, resulting in a two-spin-wave state or a bound state. The first process produces non-zero thermal spectral weight in the negative-$\omega$ regime (see Fig. \ref{Finite_temp_without_h_DSF_spin_half}b); its spectral weight is proportional to $e^{-\beta \omega_1(k)}\approx e^{-\beta Jk^2/2}$, as is evident from Eq. \ref{DSF_formula}. Because of this exponential decay with respect to the wave vector $k$ and the inverse temperature, the corresponding spectral weight is only visible at lower temperatures and close to $k = 0$. In the second process, since the single spin flip creates an additional spin deviation, the bound state can gather enough spectral weight to be detected in neutron scattering experiments above some temperature. At higher temperatures, however, the thermal broadening of the spin-wave excitations assisted by two-magnon processes obscures the bound state. Therefore, in the thermal DSF, the bound state can only be detected as a separate mode in a suitable range of temperatures near $k = \pi$.\par
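The two processes can be made explicit by evaluating the Lehmann sum by brute force on a very small ring. The sketch below uses exact diagonalization (not the thermal DMRG used for our data), and the basis encoding, the $1/\sqrt{L}$ Fourier normalisation and the $2\pi/(L\mathcal{Z})$ prefactor are our own conventions:

```python
import numpy as np

# Brute-force Lehmann sum for S^{-+}(k, omega)_beta on a tiny spin-1/2 FM
# Heisenberg ring.  Pedagogical sketch only: basis conventions and
# normalisations are our choices, and L = 8 is far from the L = 140 used
# in the thermal DMRG simulations.
L, J, beta = 8, 1.0, 8.0
dim = 2 ** L                    # bit r of the basis index: 1 = up, 0 = down

# H = -J sum_i S_i . S_{i+1} on a periodic chain
H = np.zeros((dim, dim))
for s in range(dim):
    for i in range(L):
        j = (i + 1) % L
        parallel = ((s >> i) & 1) == ((s >> j) & 1)
        H[s, s] += -J * (0.25 if parallel else -0.25)  # S^z_i S^z_j term
        if not parallel:                 # (S^+_i S^-_j + h.c.)/2 term
            H[s ^ (1 << i) ^ (1 << j), s] += -J * 0.5
E, V = np.linalg.eigh(H)
boltz = np.exp(-beta * (E - E[0]))
Z = boltz.sum()

def s_plus(q):
    """S^+_q = (1/sqrt(L)) sum_r e^{i q r} S^+_r in the computational basis."""
    op = np.zeros((dim, dim), dtype=complex)
    for s in range(dim):
        for r in range(L):
            if not (s >> r) & 1:         # site r is down: raise it
                op[s ^ (1 << r), s] += np.exp(1j * q * r) / np.sqrt(L)
    return op

# Lehmann sum at k = pi: poles at omega = E_eta - E_gamma with weights
# (2 pi / (L Z)) e^{-beta(E_gamma - E_0)} |<eta| S^+_{-k} |gamma>|^2.
k = np.pi
M = V.conj().T @ s_plus(-k) @ V
poles = E[:, None] - E[None, :]
weights = (2 * np.pi / (L * Z)) * np.abs(M) ** 2 * boltz[None, :]
# the one-magnon pole sits at omega_1(pi) = J(1 - cos pi) = 2J, while the
# thermally populated spin-wave states produce weight at negative omega.
```

On this ring the spin-wave pole appears at exactly $2J$, and the de-excitation weight at negative $\omega$ is suppressed by the Boltzmann factor $e^{-\beta\omega_1(k)}$, as described above.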
\begin{figure}
\centering
\includegraphics[width= 14 cm, height = 14 cm, keepaspectratio]{Figure2_Spin_half_FM_Heisenberg_model_Longitudinal_thermal_DSF_140sites_chi_400.pdf}
\caption{Longitudinal thermal DSF (${S^{zz}(k,\omega)}_{\beta}$) of the spin-1/2 FM chain with $L = 140$ sites, $\chi=400$ and $t_f = 20/J$. Since the system is isotropic, the spin-flip DSF component ($S^{xx}(k,\omega)_{\beta}$) is equal to the longitudinal DSF component. As the temperature is lowered (i.e., the inverse temperature $\beta$ is increased), the bound state progressively loses spectral weight. The features in Fig. \ref{Finite_temp_without_h_DSF_spin_half}b) are consistent with the spin-wave and bound-state dispersion relations of Eqs. \ref{spin12_spinwave} and \ref{spin12_boundstate}, shown as white dashed and dotted lines respectively. The de-excitation process from the spin-wave state to the fully-aligned FM state is also indicated by a white dashed line.}
\label{Finite_temp_without_h_DSF_spin_half}
\end{figure}
To be more quantitative, we considered section cuts at $k =\pi$ for various temperatures (see Fig. \ref{section_cut_area_comparison}). The area under the curve for the bound state ($\mathrm{I}_{\mathrm{BS}}$) and for the spin-wave excitations ($\mathrm{I}_{\mathrm{SW}}$) is obtained by integrating the longitudinal DSF over different frequency ranges (Fig. \ref{section_cut_area_comparison}a):
\begin{eqnarray}
\mathrm{I}_{\mathrm{BS}}(k=\pi;\beta) &=& \int_{\omega_1}^{\omega_2}S^{zz}(k=\pi,\omega)_{\beta} d\omega\nonumber \\
\\
\mathrm{I}_{\mathrm{SW}}(k=\pi;\beta) &=& \int_{\omega_2}^{\omega_3}S^{zz}(k=\pi,\omega)_{\beta} d\omega,\nonumber
\end{eqnarray}
where $\omega_1$ and $\omega_3$ are such that the section-cut curve lies below $10^{-4}$ outside this range, while $\omega_2$ is the value at which the section-cut curve reaches a local minimum between the bound-state peak and the main spin-wave peak. The fraction of the total spectral weight carried by the bound state in the section cut provides a direct criterion for its detectability in an INS experiment. The feature is considered to be visible if it gathers more than 5\% of the total spectral weight at $k = \pi$, which sets the lower limit of the temperature range. For the upper limit, we use the criterion that the thermal broadening of the bound state and of the main spin-wave mode is such that the bound state is no longer distinguishable. From these criteria, the bound-state feature can be detected in the temperature regime $J/12<k_BT<J/3$ (Appendix \ref{temp_range}).\par
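The extraction of $\omega_{1}$, $\omega_{2}$, $\omega_{3}$ and of the two areas can be sketched on synthetic data, with two Gaussian peaks of made-up weights standing in for the actual t-DMRG section cut:

```python
import numpy as np

# Sketch of the area criterion on synthetic data: a section cut modelled
# as two Gaussian peaks (bound state at omega = J, spin wave at omega = 2J,
# weights 10% / 90% -- illustrative numbers, not the t-DMRG output).
J = 1.0
omega = np.linspace(-1.0, 4.0, 50_001)
gauss = lambda w0, sig: (np.exp(-0.5 * ((omega - w0) / sig) ** 2)
                         / (sig * np.sqrt(2 * np.pi)))
cut = 0.1 * gauss(1.0 * J, 0.10) + 0.9 * gauss(2.0 * J, 0.15)

# omega_1, omega_3: outside this window the curve lies below 1e-4
inside = np.where(cut > 1e-4)[0]
i1, i3 = inside[0], inside[-1]
# omega_2: local minimum between the two peak positions
p_bs = np.argmax(cut[omega < 1.5])      # bound-state peak (prefix of grid)
p_sw = np.argmax(cut)                   # main spin-wave peak
i2 = p_bs + np.argmin(cut[p_bs:p_sw])

I_bs = np.trapz(cut[i1:i2 + 1], omega[i1:i2 + 1])
I_sw = np.trapz(cut[i2:i3 + 1], omega[i2:i3 + 1])
frac = I_bs / (I_bs + I_sw)
visible = frac > 0.05                   # the 5% detectability criterion
```

With these made-up weights the procedure recovers a bound-state fraction of about 10\%, above the visibility threshold.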
Since the DSF is simulated for finite sizes, the thermal spectral areas at $k=\pi$ have to be extrapolated in $1/L$ to obtain the spectral areas in the thermodynamic limit (Appendix \ref{finite_size_scaling_bound_state}). A log-log plot of these extrapolated areas is shown in Fig. \ref{section_cut_area_comparison}b. It is difficult to push the thermal DMRG algorithm to the $T=0$ limit since one would have to simulate up to $\beta J=\infty$. However, at $T=0$ the DSF should capture only the spin-wave excitation, so the area under the bound-state curve is $\mathrm{I}_{\mathrm{BS}}(k=\pi;\beta=\infty) = 0$. We determined the area under the main spin-wave excitation at $T=0$ by requiring that the sum of the areas of the spin-wave peak and of the bound-state peak be constant. We make the following observations at this section cut:\\
(a) The spectral peak of the bound state has a longer tail that extends to low energies (see section \ref{TDSF_extr_mag_field}, \ref{Section_cut_simple_calculation}).\\
(b) The area under the spectral peak associated with the bound state scales with temperature as $T^{\frac{3}{2}}$ (Fig. \ref{section_cut_area_comparison}b). This is because the thermal population of spin-waves dominantly scales as $T^{\frac{3}{2}}$. It also indicates that the spectral weight of the spin-waves arising from fully-aligned states should decrease in a similar way. We verified this by plotting the following quantity (inset of Fig. \ref{section_cut_area_comparison}b):
\begin{equation}\Delta I_{\mathrm{SW}}(k=\pi;\beta) = I_{\mathrm{SW}}(k=\pi; \infty) -I_{\mathrm{SW}}(k=\pi;\beta)\nonumber\end{equation}
These observations can be qualitatively supported by simple calculations in the presence of a vanishing magnetic field (see section \ref{Thermal_DSF_mag_field}).
\begin{figure}
\centering
\includegraphics[width =14cm, height = 14cm, keepaspectratio]{Figure3_FM_spectral_weights_vs_temperature_section_cut_pi.pdf}
\caption{(a) Numerical longitudinal thermal DSF section cut of the spin-1/2 FM chain at $k=\pi$ and $\beta J=8$. The spin-wave excitation is the main spectral peak at $\omega = 2J$ and the bound state is the smaller spectral peak at $\omega = J$; (b) log-log plot of the area under the bound state versus temperature (see main text for details). At low temperatures, it is consistent with an exponent 3/2 (red dotted line). Inset: log-log plot of the difference between the area under the main spin-wave excitation at zero temperature and finite temperature versus temperature. At low temperature, it is also consistent with an exponent 3/2 (red dotted line).}
\label{section_cut_area_comparison}
\end{figure}
Finally, in Fig. \ref{Comparison_section_cuts_LDSF}, we plot section cuts of the longitudinal component of the thermal DSF at various wave vectors. As the wave vector increases towards $\pi$, the bound-state mode becomes completely separated at low enough temperatures. It is interesting to note that for $k =0.6\pi$ and $k = 0.7\pi$, the bound state is not completely separated from the two-magnon continuum; its presence appears as an asymmetric spectral peak with a long tail.
\begin{figure}
\centering
\includegraphics[width = 9.25cm ,height = 9.25cm ,keepaspectratio]{Figure4_Section_cuts_140sites_comparison_with_temperature.pdf}
\caption{Comparison of section cuts of the longitudinal thermal DSF of the spin-1/2 FM chain with 140 sites for $\beta J = 8$ (blue), $\beta J =16$ (red) and $\beta J = 32$ (green). As the temperature decreases, the spectral peak associated with the bound state decreases in height and gets separated from the single spin-wave excitation, but for section-cuts at $k = 0.6\pi$ (shown in a) and $k=0.7\pi$ (shown in b), the bound state is not completely separated even at very low temperatures. }
\label{Comparison_section_cuts_LDSF}
\end{figure}
\section{ Thermal DSF in a magnetic field}
\label{Thermal_DSF_mag_field}
We study the problem in the presence of an external magnetic field defined by the Hamiltonian:
\begin{eqnarray}
H_{FM,h} = - J \sum_{i} \mathbf{S}_{i}\cdot \mathbf{S}_{i+1} +h\sum_{i}S^{z}_{i}
\end{eqnarray}
Since single-spin-deviation states gain a Zeeman energy $h$ and two-spin-deviation states a Zeeman energy $2h$, the dispersion relations of the spin wave and of the bound state become
\begin{equation}
\omega_{1,h}(k) = J(1-\cos k)+h
\label{disperson_reln_mag_field_spin_wave}
\end{equation}
\begin{equation}
\omega_{2,BS,h}(k) = J\sin^2\frac{k}{2}+2h
\label{disperson_reln_mag_field_bound_state}
\end{equation}
The model is no longer isotropic; it therefore has three different thermal DSF components: (i) the longitudinal component $S^{z,z} (k,\omega)_\beta$, (ii) the transverse component $S^{+,-}(k,\omega)_\beta$, and (iii) the transverse component $S^{-,+}(k,\omega)_\beta$.
\subsection{Longitudinal Dynamical Structure Factor in a magnetic field}
\label{LDSF_extr_mag_field}
The degeneracy of the FM ground state is lifted upon applying an external magnetic field, and the ground state of the Hamiltonian $\mathcal{H}_{FM,h}$ is the fully polarised state pointing opposite to the magnetic field. The zero-temperature longitudinal component of the DSF is obtained by computing the time-dependent correlation between the $z$-components (denoted by $C_{zz}(m,n;t)$) and then taking Fourier transforms in both space and time:
\begin{eqnarray}
C_{zz}(m,n;t) &=& \bra{\mathrm{GS}}S^{z}_{m}(t)S^z_{n}\ket{\mathrm{GS}}\nonumber\\
S^{zz}(k,\omega)&=&\frac{1}{L^2}\int_{-\infty}^{\infty} dt\sum_{m,n}C_{zz}(m,n;t)e^{-ik(r_n-r_m)}e^{-i\omega t}\nonumber
\end{eqnarray}
As the ground state is a fully polarised state in the negative direction, the longitudinal component of the DSF can be evaluated directly:
\begin{eqnarray}
S^{zz}(k,\omega) = \frac{\pi}{2}\delta(\omega)\delta(k)
\end{eqnarray}
As a result, at zero temperature, all the spectral weight is concentrated at $\omega =0$, $k = 0$. Upon increasing the temperature, the spin-wave state gathers spectral weight, with most of it still concentrated near $\omega, k\approx 0$. The same is true if the magnetic field is decreased at fixed temperature. Since the weight near $\omega=k=0$ is at least one order of magnitude larger than at any other $\omega$ or $k$, it is very unlikely that the spin waves or the bound state can be observed in finite-temperature INS experiments in the longitudinal channel. We summarise our findings on the longitudinal component of the DSF in Fig. \ref{Longitudinal_DSF_h}.
\begin{figure*}
\centering
\includegraphics[width =16cm, height =16cm, keepaspectratio]{Figure5_Longitudinal_thermal_DSF_hvsT.pdf}
\caption{Longitudinal thermal DSF ($S^{zz}(k,\omega;\beta)$) of the spin-1/2 FM chain in the presence of a magnetic field ($L =140$, $t_f = 20/J$). The spectral weight gathered by the spin-wave dispersion is one-tenth of the spectral weight at $\omega =0$, $k=0$. Signatures of the bound state are visible, but they carry one-hundredth of the spectral weight of the main feature.}
\label{Longitudinal_DSF_h}
\end{figure*}
\subsection{Transverse Dynamical Structure Factor in a magnetic field}
\label{TDSF_extr_mag_field}
We now present the numerical results for the transverse component $S^{-,+}(k,\omega)_\beta$, in which a clear signature of the bound state is obtained (see Fig. \ref{Transverse_DSF_h}). A simple calculation can be attempted by keeping only the dominant terms in the thermal DSF at low temperatures: the single excitation of the fully-aligned state (denoted $|GS\rangle$) leading to a spin-wave state (denoted $|\gamma_1\rangle$), and the excitation from a single spin-wave state to a bound state of two spin-waves (denoted $|\alpha_{\mathrm{BS}}\rangle$). The thermal DSF can be expressed in the Lehmann representation as:
\begin{eqnarray}
S^{-,+}(k,\omega)_{\beta} = \frac{2\pi}{L\mathcal{Z}}\sum_{\eta,\gamma} e^{-\beta E_{\gamma}}{\left |\bra{\eta}S^{+}_{-k}\ket{\gamma}\right |}^2 \delta{\left(\omega-\lbrack \omega_{\eta}-\omega_{\gamma}\rbrack\right)}\nonumber\\
\label{DSF_formula}
\end{eqnarray}
where $\mathcal{Z}$ is the partition function and $|\eta\rangle$ and $|\gamma\rangle$ are eigenstates of the model.
Thus, the thermal DSF becomes:
\begin{eqnarray}
&&S^{-,+}(k,\omega)_{\beta} \propto \frac{e^{-\beta E_0}}{\mathcal{Z}}\Bigg(\sum_{\gamma_1}{\left |\bra{\gamma_{1}}S^{+}_{-k}\ket{\mathrm{GS}}\right |}^2\delta(\omega - \omega_{1, h})+\nonumber\\
&&\sum_{\alpha_{\mathrm{BS}};\gamma_{1}}e^{-\beta\omega_{1,h}}{\left |\bra{\alpha_{\mathrm{BS}}}S^{+}_{-k}\ket{\gamma_{1}}\right |}^2\delta(\omega - \lbrack \omega_{2,\mathrm{BS}, h}-\omega_{1, h}\rbrack)+\dots\Bigg)\nonumber\\
\label{DSF_formula_expansion}
\end{eqnarray}
The first term inside the bracket is easily calculated:
\begin{eqnarray}
\sum_{\gamma_{1}}{\left |\bra{\gamma_1}S^{+}_{-k}\ket{\mathrm{GS}}\right |}^2\delta(\omega - \omega_{1,h}(p))=\delta(\omega - \omega_{1,h}(k))\nonumber
\end{eqnarray}
As expected, the spin-wave excitation is shifted by $h$. It is thermally broadened in the plots (Fig. \ref{Transverse_DSF_h}), but this broadening is not captured by the analytical computation.
The second term in the DSF expression indicates that the spectral weight of the bound state observed at a given wave vector $k$ is due to a finite overlap with the bound state of momentum $K=k+p$, where $p$ is the momentum of a thermally excited spin-wave. To compute the second term, we therefore replace the sums over bound states and spin-wave states by sums over $K$ and $p$, respectively. The bound state can be expressed in terms of two spin deviations (see Eqs. \ref{two_spin_wave_state} and \ref{coeff_bound_state} in Appendix \ref{Appendix_spin_half_FM_chain}). $S^{-,+}(k,\omega)_{\beta,2}$ is thus proportional to:
\begin{eqnarray}
&&\sum_{K,p}e^{-\beta\omega_{1,h}(p)}{\left |\bra{\alpha_{\mathrm{BS}}}S^{+}_{-k}\ket{\gamma_{1}}\right |}^2\delta(\omega - \lbrack \omega_{\mathrm{BS},h}(K)-\omega_{1,h}(p)\rbrack)\nonumber\\
&=&\frac{16}{L}\sum_{p}e^{-\beta\omega_{1,h}(p)}{\left |\sin \frac{p+k}{2}\right |}^2{\left | f_s(p+k;p-k)\right |}^2\nonumber\\
&&\delta(\omega - \lbrack \omega_{\mathrm{BS},h}(p+k)-\omega_{1,h}(p)\rbrack)\nonumber
\end{eqnarray}
where, in the thermodynamic limit ($L\gg1$), the factor $f_{s}(p+k;p-k)$ is given by:
$$
\frac{\cos \left(\frac{p-k}{2}\right)-\cos\left(\frac{p+k}{2}\right)}{3+\cos \left(p+k\right) -2 \cos p -2 \cos k}
$$
We select the section cut at $k =\pi$ to compare our analytical results with the numerical ones. In the thermodynamic limit, we evaluate the integral over $p$ to compute $S^{-,+}(k=\pi,\omega)_{\beta,2}$, which is proportional to:
\begin{eqnarray}
&&\frac{16e^{-\beta h}}{2\pi}\int_{p=-\pi}^{p=\pi}dpe^{-2\beta J\sin^{2}\frac{p}{2}}\frac{\sin^{2}\frac{p}{2}\cos^{2}\frac{p}{2}}{{\left(1+3\sin^{2}\frac{p}{2}\right)}^2}\times\nonumber\\
&&\delta\left(\omega -J\left(1 - 3\sin^{2}\frac{p}{2}\right)-h\right)
\label{Integral}
\end{eqnarray}
The integrand in Eq. \ref{Integral} is even about $p=0$, and after the change of variable $t = \sin^2\frac{p}{2}$ the integral becomes:
\begin{eqnarray}
\frac{16e^{-\beta h}}{3\pi J}\int_{0}^{1}dt e^{-2\beta J t}\left\lbrace\frac{\sqrt{t(1-t)}}{{\left(1+3t\right)}^2}\right\rbrace\delta\left(t-\left(\frac{-\omega + J+h}{3J}\right)\right)\nonumber
\end{eqnarray}
which, for $-2J+h\leq \omega \leq J+h$, leads to:
\begin{equation}
\frac{16e^{-\beta h}}{9\pi}e^{-\frac{2}{3}\beta \left(J+h-\omega\right)}\frac{\sqrt{\left(J+h-\omega\right)\left(2J-h+\omega\right)}}{{\left(\omega-2J-h\right)}^2}
\label{Transverse_analytical_DSF}
\end{equation}
The above expression clearly shows that the spectral weight of the bound state in the section cut extends from the spectral peak down to negative energies. Note that the magnetic field exponentially suppresses the thermal spectral weight.
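As a consistency check of this closed form, integrating it over its support $-2J+h\leq\omega\leq J+h$ must reproduce the total weight of Eq. \ref{Integral} with the delta function integrated out. A numerical sketch (parameter values arbitrary):

```python
import numpy as np

# Consistency check: integrating the closed-form line shape
# (Eq. Transverse_analytical_DSF) over omega must equal the total weight
# of the p-integral (Eq. Integral) once the delta function is removed.
J, h, beta = 1.0, 0.3, 2.0           # arbitrary illustrative parameters

# total weight of the p-integral with the delta removed
p = np.linspace(-np.pi, np.pi, 400_001)
s2 = np.sin(p / 2) ** 2
integrand = (np.exp(-2 * beta * J * s2) * s2 * np.cos(p / 2) ** 2
             / (1 + 3 * s2) ** 2)
lhs = 16 * np.exp(-beta * h) / (2 * np.pi) * np.trapz(integrand, p)

# integral of the closed-form line shape over -2J + h <= omega <= J + h
w = np.linspace(-2 * J + h, J + h, 400_001)
shape = (16 * np.exp(-beta * h) / (9 * np.pi)
         * np.exp(-(2 / 3) * beta * (J + h - w))
         * np.sqrt((J + h - w) * (2 * J - h + w)) / (w - 2 * J - h) ** 2)
rhs = np.trapz(shape, w)
```

The two numbers agree to numerical precision, confirming the change of variable and the prefactors above.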
\begin{figure*}
\centering
\includegraphics[width =16cm, height =16cm, keepaspectratio]{Figure6_Transverse_thermal_DSF_hvsT.pdf}
\caption{Transverse thermal DSF ($S^{-,+}(k,\omega;\beta)$) of the spin-1/2 FM chain in the presence of a magnetic field ($L =140$, $t_f = 20/J$). The spin-wave dispersion is shifted in energy by $h$ (the strength of the magnetic field). The bound state gathers more spectral weight at higher temperatures and lower magnetic fields. It must be noted that these spectral weights are much smaller in magnitude than in the case of zero magnetic field.}
\label{Transverse_DSF_h}
\end{figure*}
\subsection{Section cut in the limit of a vanishing magnetic field}
\label{Section_cut_simple_calculation}
\begin{figure}
\centering
\includegraphics[width = 14cm, height =14cm,keepaspectratio]{Figure7_Graph_log_func_with_respect_to_log_T.pdf}
\caption{(a) Comparison of the numerically determined t-DMRG section cut of the spin-1/2 FM chain at $k=\pi$ (blue) and a simple estimate of the bound state thermal weight (red). The simple estimate misses spectral weight because we only included excitations with a single spin-wave state in the thermal population.
(b) Effective exponent determined by taking the derivative of the log of the area under the curve with respect to the log of the temperature.}
\label{section_cut_simple_calc}
\end{figure}
In the limit of $h$ going to $0$, the system becomes isotropic and the transverse component is simply twice the longitudinal component of the thermal DSF. One can determine the prefactor (in Eq. \ref{DSF_formula_expansion}) numerically from the free-energy density (in Fig. \ref{Thermal_statics_FM}d) using the formula $F = -k_BT\ln\mathcal{Z}$.
The expression (Eq. \ref{Transverse_analytical_DSF}), after multiplication by the prefactor, qualitatively explains the long tail of the spectral peak associated with the bound state seen in the numerical thermal DSF section cut (Fig. \ref{section_cut_simple_calc}a). At very low temperatures, the free-energy density can be replaced by the modified spin-wave theory formula (Eq. \ref{free_energy_density}) and the resulting integral can be evaluated numerically. Its temperature dependence is consistent with $T^{\frac{3}{2}}$, as shown in Fig. \ref{section_cut_simple_calc}b. The effective exponent as a function of temperature is given by the derivative of the logarithm of the area under the bound state with respect to $\log T$ [\onlinecite{Timothy}]. At very low temperatures, the exponent tends towards 3/2. Note that this calculation does not include contributions from multi-spin-wave scattering states to the bound state, which is presumably why the effective exponent only reaches the expected value at very low temperatures.
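The exponent of the line shape alone is easy to reproduce numerically. The sketch below integrates the $h\to 0$ limit of Eq. \ref{Transverse_analytical_DSF} over $\omega$, leaving out the prefactor discussed above (a simplification, so only the slope is meaningful):

```python
import numpy as np

# Effective exponent of the bound-state weight from the h -> 0 line shape:
# I(T) ~ int dw exp(-(2/3)(J - w)/T) sqrt((J - w)(2J + w)) / (w - 2J)^2.
# The T-dependent prefactor of the main text is deliberately left out.
J = 1.0

def weight(T):
    w = np.linspace(-2 * J, J, 2_000_001)
    return np.trapz(np.exp(-(2 / 3) * (J - w) / T)
                    * np.sqrt((J - w) * (2 * J + w)) / (w - 2 * J) ** 2, w)

T1, T2 = 0.005, 0.01
expo = (np.log(weight(T2)) - np.log(weight(T1))) / np.log(T2 / T1)
```

At low temperature the integral is dominated by $\omega\to J$, where the integrand behaves as $\sqrt{J-\omega}\,e^{-\frac{2}{3}\beta(J-\omega)}$, giving the $T^{3/2}$ scaling quoted above.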
\section{Spin-1 Ferromagnetic chain}
\label{spin-1 FM chain}
\subsection{Zero temperature dynamics}
The analysis of the spin-1/2 FM chain can be extended to the spin-1 FM chain. A subtlety arises, however, because two spin deviations can now occupy the same site. To better describe the features in the finite-temperature DSF, we compare it to the excitations in the two-spin-deviation subspace. The dispersion relation of a spin-wave excitation in the spin-1 chain is $2J(1-\cos k)$, where $J$ is the interaction strength and $k$ the momentum. The two-spin-deviation subspace has three types of solutions: (i) two-spin-wave scattering states, (ii) a bound state and (iii) an anti-bound state.
\begin{enumerate}
\item The energies of the two-spin-wave scattering states (labelled by $k_1$ and $k_2$), expressed in terms of the centre-of-mass and relative momenta, are given by:
\begin{eqnarray}
\omega_2(K,p) = 4J\left(1-\cos\frac{K}{2}\cos p\right),\nonumber
\end{eqnarray}
where $K\equiv k_1+k_2$ and $p\equiv (k_1-k_2)/2$.
\item The bound state solution can be determined by setting up the transfer matrix equation, as in the spin-$1/2$ case (see appendix A) and by looking for the \textit{localised} solution \cite{Wortis_1963, DCMattis,Tonegawa_1970,Haldane_1982a, Haldane_1982b}. In the thermodynamic limit, the dispersion relation of the bound state in the spin-$1$ case is given by:
\begin{equation}
\omega_{2,\mathrm{BS}}(K) = \frac{11J}{3}+\frac{J}{3}\left(\frac{13+12\cos K}{\textrm{x}}+\textrm{x}\right)
\end{equation}
with
\begin{eqnarray}
\textrm{x}^3&=&-100-126 \cos K-27\cos 2K\nonumber\\
&+&12\sqrt{6}\sqrt{{\left(\cos\frac{K}{2}\right)}^6(29+27\cos K)} \nonumber
\end{eqnarray}
The difference between the lower energy boundary of the continuum and the bound-state energy is positive for $K>0$, so the bound state exists as a well-separated excitation.
\item While the first two types of excitations are also present in the spin-1/2 case, the possibility of an anti-bound state arises from two spin deviations occupying the same site \cite{Papanicolaou_1988}. In the case of the Heisenberg model, this does not give rise to an anti-bound state but to a \textit{resonance}, because the energy of that state, which is dispersionless and given by:
\begin{eqnarray}
\omega_{2,\mathrm{ABS}} (K) = 4J
\end{eqnarray}
to first order in the transverse part of the Hamiltonian, overlaps with the two-magnon continuum. If a biquadratic interaction is included however, an anti-bound state well separated from the two-magnon continuum appears (see below).
\end{enumerate}
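The bound-state dispersion above is easy to evaluate numerically (taking the real cube root, since $\mathrm{x}^3<0$ on $0<K\leq\pi$); the sketch below checks that it reaches $3J$ at $K=\pi$, vanishes as $K\to 0$, and stays below the lower edge $4J(1-\cos\frac{K}{2})$ of the two-spin-wave continuum:

```python
import numpy as np

# Spin-1 FM chain: bound-state dispersion from the cubic expression above
# (real cube root; x^3 < 0 for 0 < K <= pi), compared with the lower edge
# of the two-spin-wave continuum.  Units of J = 1.
def omega_bs(K):
    x3 = (-100.0 - 126.0 * np.cos(K) - 27.0 * np.cos(2 * K)
          + 12.0 * np.sqrt(6.0)
          * np.sqrt(np.cos(K / 2) ** 6 * (29.0 + 27.0 * np.cos(K))))
    x = np.cbrt(x3)                      # real cube root of a negative number
    return 11.0 / 3.0 + ((13.0 + 12.0 * np.cos(K)) / x + x) / 3.0

def continuum_edge(K):
    # minimum of omega_2(K, p) = 4J(1 - cos(K/2) cos p) over p
    return 4.0 * (1.0 - np.cos(K / 2))
```

At $K=\pi$ one finds $\mathrm{x}=-1$ and $\omega_{2,\mathrm{BS}}=3J$, i.e. a binding energy of $J$ below the (collapsed) continuum at $4J$.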
To numerically characterise the two-spin-deviation spectrum, it is useful to look at the zero-temperature dynamical quadrupolar structure factor (DQSF). The quadrupolar structure factor has three equivalent components for an isotropic system, corresponding to the change of on-site magnetisation ($\Delta S^z$): the longitudinal component ($\Delta S^z =0$), the transverse component ($\Delta S^z=\pm 1$) and the pairing component ($\Delta S^z=\pm 2$) \cite{Salvatore_2011}. We present the numerical results for the pairing component (whose real-space correlator is denoted $C_{Q,2}(i,j;t)$), but we have verified that the three components give the same spectral features up to a multiplicative factor. To be specific, the pairing component of the DQSF is defined by:
\begin{eqnarray}
&&C_{Q,2}(i,j;t) = \frac{1}{2}\left\lbrack\langle{\left(S^{-}(j;t)\right)}^2{\left(S^{+}(i)\right)}^2\rangle+\mathrm{h.c.}\right\rbrack\nonumber\\
&&\\
&&Q_{2}(k,\omega) = \frac{1}{L^2}\int_{-\infty}^{\infty} dt e^{i\omega t}\sum_{i,j}e^{-ik(r_{j}-r_{i})}C_{Q,2}(i,j;t)\nonumber
\end{eqnarray}
where $Q_{2}(k,\omega)$ is the Fourier transform of the pairing component. There is clear evidence of a resonance extending into the two-spin-wave continuum around $k=\pi$ (Fig. \ref{DQSF_BLBQ}a).
In order to convince ourselves that it is the anti-bound state that results in a resonance, we extend our model by including a nearest neighbour biquadratic coupling. The Hamiltonian is given as:
\begin{equation}
\mathcal{H}_{\mathrm{BLBQ}} = J\sum_{i}\left[\cos\theta\left(\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}\right)+\sin\theta{\left(\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}\right)}^2\right]
\end{equation}
The ferromagnetic phase of this model is located in the interval $\theta\in\left(\pi/2,5\pi/4\right)$, and the spin-1 Heisenberg FM chain is recovered for $\theta = \pi$. We simulate the DQSF in the FM phase of the BLBQ model for $\theta\in\left(\pi/2, 3\pi/4\right\rbrack$ and, for $\theta$ not too large, there is clear evidence of an anti-bound state above the two-spin-wave continuum. Following the Green's function approach of Wortis \cite{Wortis_1963}, we find the same dispersion relations for the bound and anti-bound states as obtained for this model by Aghahosseini et al. \cite{Aghahosseini_Parkinson_1978}. They agree with the numerical determination of the spectrum in Fig. \ref{DQSF_BLBQ}. As $\theta$ increases from $\pi/2$ towards the Heisenberg FM point, the DQSF plots show that the anti-bound state enters the continuum, explaining the presence of a resonance at the Heisenberg point. Interestingly, for a negative biquadratic interaction (Fig. \ref{DQSF_BLBQ}e), the decay of the spectral weights within the two-magnon continuum is stronger than for positive values of the biquadratic interaction. Beyond the two-magnon continuum, the intensity is smaller than the numerical errors (i.e., $10^{-5}$); due to the chosen colour scheme, the figure appears to have no weight in the two-magnon continuum.
\begin{figure}
\centering
\includegraphics[width =7.6cm ,height =15.2cm , keepaspectratio]{Figure8_DQSF_FMphase_Boundstates.pdf}
\caption{DQSF plots for (a) the spin-1 Heisenberg model, (b-e) the spin-1 BLBQ model ($L =240$, $t_f =24/J$, $T=0$). We find that the bound state (red stars) is more prominent as $k$ goes to $\pi$, but for lower values of $k$ it lies within the two-magnon continuum (bounded by the thin white dashed lines). By contrast, the anti-bound state (red crosses) is more prominent for lower values of $k$ and in some cases merges with the two-magnon continuum at larger values of $k$. The thicker white dashed lines indicate the single-magnon dispersion.}
\label{DQSF_BLBQ}
\end{figure}
\subsection{Finite temperature dynamics}
Similarly to the spin-1/2 FM chain, one can expect the bound state of the spin-1 FM chain to be experimentally detectable at finite temperature. Indeed, the numerical simulations of the thermal DSF clearly show the presence of a bound state (see Fig. \ref{Spin1_thermal_DSF}). The energy difference between the single spin-wave excitation and the bound state of two spin-waves is maximal at $k =\pi$, as in the spin-1/2 chain, and is equal to $J$, so it should be possible to resolve it.
In the infrared limit, even though the bound state only touches the spin-wave continuum at $k =0$, the bound-state feature is masked by the thermal broadening of the single spin-wave excitation and of the two-spin-wave processes. \par
\begin{figure}
\centering
\includegraphics[width= 9.25 cm , height = 9.25cm, keepaspectratio]{Figure9_Spin1_FM_Heisenberg_model_Longitudinal_DSF_T_60sites_chi_400.pdf}
\caption{Thermal DSF of the spin-1 FM chain at different finite temperatures. The thermal DSF simulation has been carried out with $L = 60$ and $t_f = 14/J$. As compared to the spin-1/2 chain, the resolution of the modes is smaller in the case of the spin-1 chain. In the thermal DSF, the white dashed lines show the spin-wave excitation, while the white dotted line shows the bound state. As in the case of the spin-1/2 chain, the de-excitation process (denoted by the white dashed line) of the spin-wave state to the fully aligned state is also captured here.}
\label{Spin1_thermal_DSF}
\end{figure}
The first difference between the spin-1/2 and spin-1 FM chains is that the main spin-wave spectral peak is twice as large for the spin-1 chain as for the spin-1/2 chain (see Fig. \ref{comparison_spin_half_spin1_chain}), while the spectral peak corresponding to the bound state has the same height in both cases. The relative intensity of the bound state with respect to the main spin-wave peak is therefore halved, and detecting the bound state can be expected to be more challenging in spin-1 FM chains.\par
\begin{figure}
\centering
\includegraphics[width =9.25cm, height =9.25cm, keepaspectratio]{Figure10_Section_cut_longitudinal_DSF_comparison_spin_half_spin1.pdf}
\caption{Comparison between the thermal DSF section cuts of the spin-1/2 and spin-1 FM Heisenberg chains at $k =\pi$ and $\beta J = 8$. In order to have an appropriate comparison, we present here data obtained with the same parameters for both chains, namely $L = 60$, $\chi = 400$ and $t_f = 14/J$. The height of the bound-state peak is the same, but the peak corresponding to the spin-wave excitation is twice as large for the spin-1 as for the spin-1/2 chain.}
\label{comparison_spin_half_spin1_chain}
\end{figure}
For completeness, we also explored the thermal spectral signatures of bound and anti-bound states through thermal DSF simulations of the spin-1 BLBQ chain, as shown in Fig. \ref{Thermal_DSF_BLBQ}. The anti-bound states lose most of their spectral weight compared to the zero-temperature DQSF simulation. The visible spectral weight lies close to the region $k\approx \pi$ for values of $\theta$ where the biquadratic coupling is large enough for the anti-bound state to be separated from the continuum by a gap (Fig. \ref{Thermal_DSF_BLBQ}a). In the zero-temperature DQSF plots, as the biquadratic interaction is decreased in the range $\theta\in\left\lbrack 3\pi/4,9\pi/10\right)$, there is either an anti-bound state or a bound state at a given $k$ vector. This points to a three-way competition between the anti-bound state, the bound state and the two-magnon scattering states once temperature is introduced. The bound states are barely visible even though they exist for $\theta \in \lbrack 3\pi/4, \pi)$ (see Fig. \ref{DQSF_BLBQ}), but upon decreasing the temperature to the appropriate level for $\theta$ greater than $\pi$, the bound state gathers enough weight to be distinguished from the main spin-wave excitation. Thus, a finite-temperature INS experiment can observe the bound state better in the presence of a significant negative biquadratic interaction. Although this is not the most generic situation in experimental realisations of spin-1 chains, negative biquadratic interactions can be generated in the presence of quasi-degenerate orbitals \cite{Mila_Zhang_2000}.\par
\begin{figure}
\centering
\includegraphics[width =9.25cm, height=9.25cm , keepaspectratio]{Figure11_FM_BLBQ_model_Longitudinal_thermal_DSF_60sites_chi_400.pdf}
\caption{Longitudinal thermal DSF ($S^{zz}(k,\omega)$) of the FM bilinear-biquadratic spin-1 chain ($L =60$, $T =J/10$, $t_f=16/J$). As the biquadratic coupling is decreased from (a) to (d), the clear signature of an anti-bound state is replaced by that of a bound state. Note that the anti-bound state gathers very little spectral weight at finite temperature.}
\label{Thermal_DSF_BLBQ}
\end{figure}
\subsection{Easy axis single-ion anisotropy}
\subsubsection{Zero temperature dynamics}
Spin-1 ferromagnetic chains are usually realised in nickel-based compounds, which exhibit an additional easy-axis single-ion anisotropy term (of strength $D$) \cite{PChauhan_2020}. The Hamiltonian is given by:
\begin{eqnarray}
H = -J\sum_{i}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}-D\sum_{i}{(S^z_i)}^2
\label{single_ion}
\end{eqnarray}
The dispersion relation of the spin-wave solution for this model is $2J(1-\cos k)+D$. To understand the modifications to the two-spin-deviation excitation spectrum, we simulate the DQSF for this model. Because of the anisotropy, the three components now differ from one another. In the longitudinal component ($\Delta S^z = 0$), spectral weight is present only at $K=0$, $\omega = 0$, while in the transverse component ($\Delta S^z = 1$), only the single spin-wave excitation carries spectral weight. The pairing component ($\Delta S^z=2$) captures the spectral weight of (i) the two-spin-wave scattering states and (ii) the bound states.
\begin{enumerate}
\item The energies of the two-spin-wave scattering states are shifted by $2D$ compared to the Heisenberg ferromagnet:
\begin{eqnarray}
\omega_2(K,p) = 4J\left(1-\cos\frac{K}{2}\cos p\right)+2D,\nonumber
\end{eqnarray}
where the centre of mass momentum is denoted by $K$ and the relative momentum is denoted by $p$.
\item There are two bound states in this model, corresponding to the two spin deviations being on the same site or on adjacent sites. They are identified in the literature as the single-ion bound state and the exchange bound state, respectively \cite{Silberglitt_Torrance_1970}. By following Wortis's method \cite{Wortis_1963,Silberglitt_Torrance_1970, Papanicolaou_1987} and setting up the transfer matrix equation, we arrive at two bound-state solutions. At $K =\pi$, the energy of the single-ion bound state is $4J$ and that of the exchange bound state is $3J+2D$, so the difference in their energies varies as $J-2D$. There are thus two regimes: for $D<J/2$ the single-ion bound state is at higher energy, while for $D>J/2$ the exchange bound state is at higher energy.
\end{enumerate}
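The energies quoted in this enumeration can be cross-checked with a short numerical sketch. The first routine applies the Hamiltonian restricted to the one-spin-deviation sector to a plane wave and compares with the dispersion $2JS(1-\cos k)+D(2S-1)$, which reduces to the quoted $2J(1-\cos k)+D$ for $S=1$; the periodic boundary condition and the general-$S$ coefficients are assumptions of this illustration, not taken from the text. The second routine simply collects the $K=\pi$ energies quoted above.

```python
import cmath
import math

def one_magnon_dispersion_ok(L=12, J=1.0, D=0.3, S=1):
    # One spin deviation on a *periodic* chain (an assumption of this sketch):
    # diagonal cost 2JS + D(2S-1), hopping amplitude -JS between neighbours.
    diag = 2 * J * S + D * (2 * S - 1)
    for n in range(1, L):
        k = 2 * math.pi * n / L
        psi = [cmath.exp(1j * k * x) for x in range(L)]  # plane wave |k>
        out = [diag * psi[x] - J * S * (psi[(x - 1) % L] + psi[(x + 1) % L])
               for x in range(L)]
        eps = 2 * J * S * (1 - math.cos(k)) + D * (2 * S - 1)  # = 2J(1-cos k)+D for S=1
        if any(abs(out[x] - eps * psi[x]) > 1e-10 for x in range(L)):
            return False
    return True

def k_pi_energies(J, D):
    # Energies at centre-of-mass momentum K = pi quoted in the text:
    # single-ion bound state 4J, exchange bound state 3J + 2D, and the
    # two-magnon continuum omega_2(pi, p) = 4J + 2D, which is flat in p.
    return 4.0 * J, 3.0 * J + 2.0 * D, 4.0 * J + 2.0 * D
```

In particular, the difference between the two bound-state energies is $J-2D$, both lie below the (flat) continuum at $K=\pi$ for $D>0$, and the two states cross at $D=J/2$, in agreement with the two regimes described above.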
The DQSF simulations in Fig. \ref{Fig_DQSF_values_D} show these excitations. It is interesting to note that the higher-energy bound state enters the two-spin-wave continuum and results in a resonance, while the lower-energy bound state exists as a completely separate mode (see Fig. \ref{Fig_DQSF_values_D}d).
\begin{figure}
\centering
\includegraphics[width = 9.25cm, height = 9.25cm ,keepaspectratio]{Figure12_FM_Spin1_with_positiveD_QLRO2_DSF_240sites_chi_200_t_40.pdf}
\caption{Pairing component of the quadrupolar structure factor for the spin-1 FM chain with different values of $D$ ($L = 240$, $t_f = 40/J$). The bound state (solid lines) of two spin-waves already touches the continuum (dashed lines) at $k =\pi$ for $D=0.4J$. The resonance observed in the continuum is due to the single-ion bound state when $D<J/2$ and to the exchange bound state when $D>J/2$.}
\label{Fig_DQSF_values_D}
\end{figure}
\subsubsection{Finite temperature dynamics}
We now explore the observability of the bound states in thermal DSF simulations. Due to the anisotropy, the longitudinal and transverse components of the DSF differ from each other.\par
(i) The longitudinal component shows most of the thermal spectral weight at $\omega=0$ and $k=0$ (Fig. \ref{Longitudinal_anisotropy_FM_spin1}). At high temperatures, the spin-wave excitation gathers spectral weight and remains unshifted by the single-ion anisotropy $D$. For a small anisotropy term, the exchange bound state gathers spectral weight as well; however, due to thermal broadening it remains indistinguishable from the main spin-wave mode.\par
(ii) The transverse component of the DSF has significant spectral weight along the spin-wave dispersion relation. Unlike the longitudinal component, the spin-wave excitation appears with energies shifted by the single-ion anisotropy $D$. At high temperatures, there is spectral weight on the negative $\omega$-axis due to de-excitation processes from the spin-wave state to the fully aligned state. For small $D$ values and at relatively higher temperatures, this component captures spectral weight for the exchange bound state. The single-ion bound state decays via the continuum channel and does not gather significant spectral weight.\par
In the zero temperature dynamics simulations, the gap between the single-ion bound state and the exchange bound state decreases with increasing anisotropy. Therefore, a more pronounced decay of the exchange bound-state spectral weight via the continuum channel is expected for larger values of the single-ion anisotropy $D$, so the bound state can be expected to be visible in a ferromagnet with anisotropy $D<0.4J$. The spectral weights are exponentially suppressed as $e^{-\beta D}$. As a result, the bound states are visible in the finite-temperature simulations at high temperatures (i.e. $\beta J=4$) and for small values of $D$, particularly below $0.2J$. Due to the thermal broadening of the bound state and of the spin-wave, the two become indistinguishable at higher temperatures. For larger anisotropy values, i.e. $D>J/2$, the exchange bound state enters the continuum and its spectral weight therefore decays via the continuum channel. Although the single-ion bound state exists as a separate mode in this regime, its spectral weight is exponentially suppressed and it is not visible even at higher temperatures. Our observations for this case are summarised in Fig. \ref{Transverse_anisotropy_FM_spin1}.
\begin{figure}
\centering
\includegraphics[width= 9.25cm, height =9.25cm, keepaspectratio]{Figure13_Longitudinal_thermal_DSF_DvsT.pdf}
\caption{Longitudinal component of the thermal DSF for the spin-1 FM chain with single-ion anisotropy ($L = 60$, $t_f = 16/J$). Much of the spectral weight is concentrated at $k =0$ and $\omega = 0$. At higher temperatures and smaller magnitudes of $D$, the bound state gathers some spectral weight, but it is indistinguishable from the thermally broadened spin-wave excitation.}
\label{Longitudinal_anisotropy_FM_spin1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width =9.25cm, height =9.25cm, keepaspectratio]{Figure14_Transverse_thermal_DSF_DvsT.pdf}
\caption{Transverse component of the thermal DSF for the spin-1 FM chain with single-ion anisotropy ($L = 60$, $t_f = 16/J$). The spin-wave dispersion is shifted by $D$ as expected. The bound state is observable for smaller $D$ and higher temperature.}
\label{Transverse_anisotropy_FM_spin1}
\end{figure}
\subsection{Easy plane single-ion anisotropy}
If an easy plane single-ion anisotropy is present instead, corresponding to $D<0$ in Eq. \ref{single_ion}, the ground state is no longer the fully aligned state but belongs to the sector $S^z_{tot}=0$, and the excitations are expected to have a very different nature. Since the present paper is devoted to the bound states of magnons, we do not discuss this case further.
\section{Conclusion}
The dynamics of multiple excitations in ferromagnets is very rich and forms the bedrock for understanding the antiferromagnetic case \cite{Bethe_1931,Toshiya_Inami1994, Elliott_1969,Headings_2010, Mourigal2013}. Upon diagonalising the two-spin-deviation subspace of the 1D Heisenberg ferromagnet, one finds a bound state. The thermal DMRG simulations reported here confirm that the bound state can be detected when the spin-wave states are thermally populated.
From the finite temperature numerics, we noted the long tail of the bound-state spectral peak extending to very low energies and found that the spectral weight of the bound state increases with temperature as $T^{\frac{3}{2}}$, a behaviour that is qualitatively captured by considering only the contributions from the prominent processes (i.e. excitations from the fully aligned state and from single-spin-deviation states).
INS experiments have already been conducted on ferromagnets, but at very low temperature (see e.g. Ref. \cite{Kopinga_1986}, where the exchange coupling $J$ was 66 K while the measurements were made at temperatures below 4 K, i.e. $\sim J/16$). This temperature regime was not ideal for observing the bound state because its spectral weight is about one-hundredth of that of the spin-wave excitation in this temperature range. With the improvements in neutron scattering technology, and by performing a scan at $k=\pi$ (where the separation between the spin-wave and the bound state is maximal) in a higher temperature regime ($J/12$ to $J/3$), we have shown that it should be possible to detect the bound state in ferromagnetic chains, the most favourable case being spin-1/2. We also showed that applying a magnetic field does not help since it tends to suppress the spectral weight of the bound state.
These arguments can in principle be extended to 2D and 3D Heisenberg ferromagnets, where there are two and three bound states at the edge of the Brillouin zone respectively, but differences are to be expected. In contrast to 1D, in 2D ferromagnets one of the bound states enters the continuum while the other exists for all $k$. In 3D, the three bound states only exist beyond certain threshold values of $k$ close to the edge of the Brillouin zone. Thus, in higher dimensions, the bound states become more difficult to detect directly; detecting the resonances where a bound state enters the continuum might be easier. Besides, the thermal broadening of the bound states and of the main spin-wave excitations might be an additional bottleneck in higher dimensions.\par
For higher values of the spin (such as the spin-1 case studied in the present paper), the ratio of the bound-state--spin-wave energy separation to the spin-wave energy at $k =\pi$ decreases. Thus, after thermal broadening, the bound-state and spin-wave peaks may not be easily distinguishable. Furthermore, the thermal DSF of spin-1 chains with easy-axis single-ion anisotropy shows that the bound state carries less spectral weight and should be more difficult to detect than in the case without anisotropy.
The best candidate in which to observe the bound state is therefore the spin-1/2 FM chain.
Finally, in real materials, finite temperature induces both acoustic and optical phonon modes which could obscure the bound-state spectral weight in the magnon energy spectrum observed in INS experiments \cite{Jeske_2018, Tapan_Neutron_Scattering}. However, by choosing a material with an appropriate strength of the magnetic exchange coupling, one should be able to separate the magnons from the phonons, especially at the edge of the Brillouin zone. Additionally, switching on an external magnetic field would decouple the spin degrees of freedom from the lattice vibrations and could lead to a clear detection of the bound state.
In conclusion, we hope that the present paper will encourage specialists of inelastic neutron scattering to perform experiments at intermediate temperature to try and detect the bound state of the ferromagnetic chain, a ninety-year-old prediction still awaiting direct confirmation. More generally, beyond the case of the bound state of ferromagnets, the present results suggest that performing INS at intermediate temperature might help reveal excitations that are not visible at zero temperature.
\section*{Acknowledgements}
We are very grateful to Noam Kestin for insightful discussions on the Thermal DMRG code, and to Henrik R\o{}nnow for discussions regarding neutron scattering experiments. MN thanks Aubry Jaquier for helping in setting up the ALPS package. MN also thanks Olivier Gauth\'e and Jeanne Colbois for helpful discussions. The numerical simulations were performed on the SCITAS clusters at EPFL. We acknowledge the funding from the Swiss National Science Foundation.
\section{Introduction}
The ferromagnetic quantum Heisenberg model is one of the most important and most widely studied models of statistical mechanics. In $d=3$ dimensions, the low temperature properties of the model are usually examined using spin-wave theory. In the spin-wave approximation one assumes that the low-energy behavior of the system can be described in terms of collective excitations of spins called spin waves. From an equivalent point of view, which dates back to Holstein and Primakoff \cite{HP}, these spin waves can be viewed as bosonic quasiparticles called magnons.
The spin-wave approximation has been very successful, predicting for example a phase transition in three and more dimensions, or the $T^{3/2}$ Bloch magnetization law \cite{B1,B2}. In his seminal 1956 paper \cite{D}, Dyson derived further properties of the quantum Heisenberg model which, among other things, included the low temperature expansion of the magnetization.
While there was little doubt about the validity of spin-wave theory in three (or more) dimensions, a rigorous proof of some of its predictions has only recently been given in \cite{CGS} (see also \cite{CorGiuSei-14}). There it was proved that the free energy of the three-dimensional ferromagnetic quantum Heisenberg model is, to leading order, indeed given by the expression derived using the spin-wave approximation, for any spin $S\geq 1/2$. (See also \cite{CS2,T} for earlier non-sharp upper bounds, or \cite{CG,Benedikter-2017} for results in the large $S$ limit).
The situation is different in lower dimensions. It has been known since the seminal work of Mermin and Wagner \cite{MW} that the $d=1$ and $d=2$ dimensional quantum Heisenberg models do not exhibit long range order at any non-zero temperature. The low temperature behavior of the system in low dimensions is thus very different from the one in three or higher dimensions, and it is less clear whether spin-wave theory should also be valid in lower dimensions.
In 1971 Takahashi \cite{Takahashi-1971} derived a free energy expansion for $d=1$ in the case $S=1/2$. In this special case the quantum Heisenberg model is exactly solvable via the Bethe ansatz \cite{Bethe-1931}. The spectrum of the (finite size) model can be obtained by solving the corresponding Bethe equations. Under certain assumptions (known as the string hypothesis) on the solutions of these equations, he derived what are now known as the thermodynamic Bethe equations, an analysis of which leads to a formula for the free energy. Later, in \cite{Takahashi-1986}, he derived an alternative free energy expansion using (a modified) spin-wave theory (for any $S$, and also in two dimensions). Interestingly, the second terms in the (low temperature) free energy expansions of \cite{Takahashi-1971,Takahashi-1986} do not agree with the predictions of conventional spin-wave theory \cite{B1,B2,D,HP}.
We mention that
the thermodynamic Bethe equations have been used not only for the Heisenberg spin chain, but also in other models including the Kondo model \cite{Andrei-80,AndDes-84,AndJer-95,Schlottmann-2001} or the Gross--Neveu model in high energy physics \cite{AndLow-79}. For more applications of the string hypothesis and its relation to numerous other models in physics we refer to the review articles \cite{vanTongeren-2016,Maslyuk-2016}.
In the present paper, using different methods, we prove that, to leading order, the formula derived by Takahashi based on the Bethe ansatz and the string hypothesis in \cite{Takahashi-1971} is indeed correct. Our analysis does not use the Bethe ansatz and our result holds for any spin $S$. It therefore also partly justifies the spin-wave approximation derived in \cite{Takahashi-1986}. We shall utilize some of the methods developed for the three-dimensional case in \cite{CGS}, but novel ingredients are needed to treat the case of lower dimensions, both for the upper and the lower bounds.
\section{Model and Main Result}\label{sec:model}
We consider the one-dimensional ferromagnetic quantum Heisenberg model with nearest neighbor interactions. For a chain of length $L$, it is defined in terms of the Hamiltonian
\begin{equation}\label{heisenberg ham 1}
H_L = \sum_{x=1}^{L-1} \left( S^2 - \vec S_x \cdot \vec S_{x+1} \right).
\end{equation}
Here $S^1,S^2,S^3$ denote the three components of the spin operator ${\vec S}=(S^1,S^2,S^3)$ corresponding to spin $S$, i.e., they are the generators of the rotations in a $2S+1$ dimensional representation of $SU(2)$. The Hamiltonian $H_L$ acts on the Hilbert space $\mathscr{H}_L = \bigotimes_{x=1}^L \mathbb{C}^{2S+1}$. We added a constant $S^2$ for every bond in order to normalize the ground state energy of $H_L$ to zero.
Our main object of study is the specific free energy
\begin{equation*}
f_L(\beta,S) = - \frac{1}{\beta L} \ln \Tr e^{-\beta H_L}
\end{equation*}
for $\beta>0$,
and its thermodynamic limit
\begin{equation} \label{eq:free_energy_1d_thermodynamic}
f(\beta,S) = \lim_{L\to \infty} f_{L}(\beta,S).
\end{equation}
We are interested in the behavior of $f(\beta,S)$ in the low temperature limit $ \beta \to \infty $ for fixed $S$. Our main result is as follows.
\begin{theorem}\label{thm:main_thm}
Consider the Hamiltonian \eqref{heisenberg ham 1} and the corresponding free energy \eqref{eq:free_energy_1d_thermodynamic}. For any $S\geq 1/2$,
\begin{equation}\label{eq:mainthmd1}
\lim_{\beta\to \infty} f(\beta,S)S^{\frac12} \beta^{\frac32}=C_1:=\frac{1}{2\pi}\int_{\mathbb{R}}\ln \big(1-e^{-p^2}\big)dp=\frac{-\zeta(\frac32)}{2\sqrt{\pi}}.
\end{equation}
\end{theorem}
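The closed-form value of $C_1$ in \eqref{eq:mainthmd1} can be confirmed numerically by comparing a quadrature of the integral with a tail-corrected partial sum for $\zeta(3/2)$; a short Python sketch (the step size, cutoffs and tolerances are illustrative choices, not part of the proof):

```python
import math

def C1_quadrature(h=1e-3, pmax=8.0):
    # Midpoint rule for (1/2π) ∫_R ln(1 - e^{-p²}) dp.  The logarithmic
    # singularity at p = 0 is integrable, and the midpoints avoid p = 0.
    n = int(round(2 * pmax / h))
    acc = 0.0
    for i in range(n):
        p = -pmax + (i + 0.5) * h
        acc += math.log(-math.expm1(-p * p))  # ln(1 - e^{-p²}), accurately
    return acc * h / (2 * math.pi)

def C1_closed_form(N=200_000):
    # -ζ(3/2)/(2√π); the tail of Σ n^{-3/2} is approximated by the
    # integral correction ∫_N^∞ x^{-3/2} dx = 2/√N.
    zeta32 = sum(n ** -1.5 for n in range(1, N + 1)) + 2.0 / math.sqrt(N)
    return -zeta32 / (2.0 * math.sqrt(math.pi))
```

Both routines give $C_1 \approx -0.7369$, consistent with expanding the logarithm as $-\sum_{n\geq 1} e^{-np^2}/n$ and integrating term by term, which yields $-\sqrt{\pi}\,\zeta(3/2)$ for the integral.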
The proof of Theorem \ref{thm:main_thm} will be given in Sections \ref{sec:up} and \ref{sec:lower}, where we derive quantitative upper and lower bounds, respectively. The trial state employed in the derivation of the upper bound can also be used in $d=2$ dimensions. We refer to Proposition \ref{thm:2D_upper} in Appendix \ref{sec:appendix} for a precise statement and its proof.
\section{Boson Representation}\label{sec:bosons}
It is well known that the Heisenberg Hamiltonian can be rewritten in terms of bosonic creation and annihilation operators \cite{HP}. For any $ x \in [1,\ldots, L] \subset \mathbb{Z} $ we set
\begin{equation}\label{ax upx}
S_{x}^+ = \sqrt{2S}\, a^\dagger_x \left[ 1 - \frac{a^\dagger_x a_x}{2S} \right]_+^{1/2} \ , \quad S_{x}^{-} = : \sqrt{2S}\left[ 1 - \frac{a^\dagger_x a_x}{2S} \right]_+^{1/2} a_x\ , \quad S_{x}^3 = : a^\dagger_x a_x - S\,,
\end{equation}
where $a^{\dagger}_{x}, a_x$ are bosonic creation and annihilation operators, $S^\pm = S^1 \pm i S^2$, and $[\, \cdot\,]_+ = \max\{0, \, \cdot\, \}$ denotes the positive part. The operators $a^\dagger$ and $a$ act on the space $\ell^2(\mathbb{N}_0)$ via $(a\, f)(n) = \sqrt{n+1} f(n+1)$ and $(a^\dagger f)(n) = \sqrt{n} f(n-1)$, and satisfy the canonical commutation relations $[a,a^\dagger] = 1$. One readily checks that (\ref{ax upx}) defines a representation of $SU(2)$ of spin $S$, and the operators ${\vec S}_x$ leave the space $\bigotimes_{x=1}^L \ell^2 ( [0,2S]) \cong \mathscr{H}_L = \bigotimes_{x=1}^L \mathbb{C}^{2S+1}$, which can naturally be identified with a subspace of the Fock space $\mathcal{F}_L:=\bigotimes_{x=1}^L \ell^2(\mathbb{N}_0)$, invariant.
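As an independent sanity check of the statement that \eqref{ax upx} defines a spin-$S$ representation (not part of the argument), one can build the truncated matrices on $\ell^2([0,2S])$ and verify the $SU(2)$ commutation relations $[S^+,S^-]=2S^3$ and $[S^3,S^+]=S^+$ numerically; a minimal Python sketch using plain nested lists:

```python
import math

def hp_matrices(two_S):
    # Truncated Holstein-Primakoff operators on l²([0, 2S]):
    # <n+1| S^+ |n> = sqrt(2S) sqrt(n+1) sqrt(1 - n/(2S)),  S^3 = n - S.
    d = two_S + 1
    S = two_S / 2.0
    Sp = [[0.0] * d for _ in range(d)]
    S3 = [[0.0] * d for _ in range(d)]
    for n in range(d):
        S3[n][n] = n - S
        if n + 1 < d:
            Sp[n + 1][n] = math.sqrt(2 * S) * math.sqrt(n + 1) * math.sqrt(1 - n / (2 * S))
    Sm = [[Sp[j][i] for j in range(d)] for i in range(d)]  # S^- = (S^+)^T (real entries)
    return Sp, Sm, S3

def mat_mul(A, B):
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)] for i in range(d)]

def commutator(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    d = len(A)
    return [[AB[i][j] - BA[i][j] for j in range(d)] for i in range(d)]
```

The factor $\sqrt{1-n/(2S)}$ vanishes on the top state $n=2S$, which is what truncates the bosonic ladder to a $(2S+1)$-dimensional representation.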
The Hamiltonian $H_L$ in \eqref{heisenberg ham 1} can be expressed in terms of the bosonic creation and annihilation operators as
\begin{align} \nonumber
H_L = S \sum_{x=1}^{L-1} \biggl( & - a^\dagger_x \sqrt{ 1 - \frac {n_x}{2S} } \sqrt{ 1-\frac{n_{x+1}}{2S}}a_{x+1} - a^\dagger_{x+1} \sqrt{ 1-\frac{n_{x+1}}{2S}} \sqrt{ 1 - \frac {n_x}{2S} } a_x \\ & + n_x + n_{x+1} - \frac 1{S} n_x n_{x+1} \biggl) \,, \label{hamb}
\end{align}
where we denote the number of particles at site $x$ by $n_x= a^\dagger_x a_x$. It describes a system of bosons hopping on the chain $[1,\ldots, L]$ with nearest neighbor {\em attractive} interactions and a hard-core condition preventing more than $2S$ particles from occupying the same site. The hopping amplitude also depends on the number of particles on neighboring sites, via the square root factors in the first line of (\ref{hamb}).
In the bosonic representation (\ref{hamb}), the vacuum is a ground state of the Hamiltonian $H_L$, and the excitations of the model can be described as bosonic particles in the same way as phonons in crystals. In fact, there exists a zero-energy ground state for any particle number less than or equal to $2SL$. While this may not be immediately apparent from the representation (\ref{hamb}), it is a consequence of the $SU(2)$ symmetry of the model. The total spin is maximal in the ground state, which is therefore $(2SL+1)$-fold degenerate, corresponding to the different values of the $3$-component of the total spin. The latter, in turn, corresponds to the total particle number (minus $SL$) in the bosonic language.
Before we present the proof of Thm.~\ref{thm:main_thm}, we shall briefly explain the additional difficulties compared to the $d=3$ case, and the reason why the proof in \cite{CGS} does not extend to $d=1$. Spin-wave theory predicts that at low temperatures the interaction between spin waves can be neglected to leading order. This means that \eqref{hamb} can effectively be replaced by the Hamiltonian of free bosons hopping on the lattice. At low temperature and long wavelengths $\ell\gg 1$, one can work in a continuum approximation where the last term $-\sum_x n_x n_{x+1}$ in \eqref{hamb} scales as $\ell^{-d}$, while the kinetic energy scales as $\ell^{-2}$. The interaction terms can thus be expected to be negligible only for $d\geq 3$, and this is indeed what was proved in \cite{CGS}.
This argument is in fact misleading, as the attractive interaction term turns out to be compensated by the correction terms in the kinetic energy coming from the square root factors. Making use of this cancellation will be crucial for our analysis (while it was not needed in \cite{CGS} to derive the free energy asymptotics for $d\geq 3$).
We note that for $d=1$ and $d=2$ the interaction is strong enough to create bound states between magnons \cite{bs2,bs1,bs3,GS}. These occur only at non-zero total momentum, however, with binding energy much smaller than the center-of-mass kinetic energy at low energies. Hence they
do not influence the thermodynamic properties of the system at low temperature to leading order.
\section{Upper Bound}\label{sec:up}
In this section we will prove the following
\begin{proposition}\label{ub: pro}
Recall $C_1$ defined in \eqref{eq:mainthmd1}.
As $ \beta S \to \infty $, we have
\begin{equation}\label{fe ub asympt1}
f(\beta,S) \leq C_1 S^{-\frac12} \beta^{-\frac32} \left(1 - \mathcal{O}( (\beta S)^{-\frac{1}{8}} (\ln \beta S)^{3/4}) \right).\,
\end{equation}
\end{proposition}
The general structure of the proof will be similar to the corresponding upper bound given in \cite{CGS}. The difference lies in the choice of the trial state, which in contrast to \cite{CGS} allows for more than one particle on a single site.
\medskip
\noindent {\it Step 1. Localization in Dirichlet boxes.} Our proof will rely on the Gibbs variational principle, which states that
\begin{equation}\label{varpr}
f_L (\beta,S) \leq \frac{1}{L} \tr H_L \Gamma + \frac{1}{\beta L} \tr \Gamma \ln \Gamma
\end{equation}
for any positive $\Gamma$ with $\tr \Gamma =1$. We shall confine the particles to smaller intervals, introducing Dirichlet boundary conditions. To be precise, let
\begin{equation*}
H_{L}^{\rm D} = H_{L} + 2 S^2 + S(S_1^3 + S_L^3)
\end{equation*}
be the Heisenberg Hamiltonian on $\Lambda_L := [1,\ldots,L] \subset \mathbb{Z}$ with $S^3_x=-S$ boundary conditions. Note that $H_{L}^{\rm D} \geq H_{L}$. We assume that $L = k(\ell+1)+1$ for some integers $k$ and $\ell$. By letting all the spins point maximally in the negative $3$-direction on the boundary of the smaller intervals of length $\ell$, we obtain the upper bound
\begin{equation*}
f_{L}(\beta,S) \leq \left( 1 + \ell^{-1} \right)^{-1} f_{\ell}^{\rm D}(\beta,S) \ , \quad f_\ell^{\rm D}(\beta,S) := - \frac 1{\beta \ell} \ln \Tr e^{-\beta H_{\ell}^{\rm D}} \,.
\end{equation*}
In particular, by letting $k\to \infty$ for fixed $\ell$, we have
\begin{equation}\label{eq:localization_upperbound}
f(\beta,S) \leq \left( 1 + \ell^{-1} \right)^{-1} f_{\ell}^{\rm D}(\beta,S)
\end{equation}
in the thermodynamic limit.
\medskip
\noindent {\it Step 2. Choice of trial state.} To obtain an upper bound on $f_{\ell}^{\rm D}$, we can use the variational principle (\ref{varpr}), with
\begin{equation}\label{def:gamma}
\Gamma = \frac{ \mathcal{P} e^{-\beta T} \mathcal{P} }{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T }\mathcal{P}} \,
\end{equation}
where we denote $\mathcal{F}\equiv \mathcal{F}_\ell$ for simplicity. Here, $\mathcal{P}$ is defined by
\begin{equation}
\mathcal{P}=\prod_{x=1}^{\ell}f(n_x) \label{def:cP}
\end{equation}
where
\begin{equation}\label{def:f}
f(n)= \begin{cases} 1 & \text{if} \quad n=0; \\
\left[ \prod_{j=1}^{n}\left(1-\frac{j-1}{2S}\right) \right]^{\frac12} & \text{if} \quad n=1,2,\ldots, 2S;\\
0 & \text{if} \quad n>2S.
\end{cases}
\end{equation}
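The cases in \eqref{def:f} imply the recursion $f(n+1)=f(n)\sqrt{1-n/(2S)}$ for $0\leq n< 2S$, which is the identity behind the commutation relations used in the proof of Lemma \ref{lem:thp} below. A quick numerical confirmation (illustrative only; \texttt{two\_S} plays the role of $2S$):

```python
import math

def f_weight(n, two_S):
    # f(n) from Eq. (def:f): the square root of prod_{j=1}^{n} (1 - (j-1)/(2S)),
    # equal to 1 for n = 0 and to 0 for n > 2S.
    if n > two_S:
        return 0.0
    prod = 1.0
    for j in range(1, n + 1):
        prod *= 1.0 - (j - 1) / two_S
    return math.sqrt(prod)
```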
The operator $T$ is the Hamiltonian on Fock space $\mathcal{F}$ describing free bosons on $\Lambda_{\ell}=[1,\ldots,\ell]$ with Dirichlet boundary conditions, i.e.,
\begin{align}\nonumber
T &= S \sum_{x,y \in \Lambda_\ell} \left( -\Delta^{\rm D}\right)(x,y) a^\dagger_x a_y \\
&= S \sum_{\langle x,y\rangle\subset \Lambda_\ell} \left( - a^\dagger_x a_y - a^\dagger_y a_x + n_x + n_y \right) + S (n_1 + n_\ell)
\label{hamd}
\end{align}
where $\Delta^{\rm D}$ denotes the Dirichlet Laplacian on $\Lambda_\ell$ and $\langle x,y\rangle$ means that $x$ and $y$ are nearest neighbors. The eigenvalues of $-\Delta^{\rm D}$ are given by
\begin{equation}\label{epsd}
\left\{ \epsilon(p) = 2 (1-\cos(p)) \, : \, p \in \Lambda_\ell^{*\rm D}:= \left( \frac \pi{\ell+1} \{ 1, 2,\dots, \ell\} \right) \right\}
\end{equation}
with corresponding eigenfunctions $\phi_p(x) = [2/(\ell+1)]^{\frac{1}{2}} \sin(x p)$.
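This spectral statement can be verified directly: applying the tridiagonal operator $(-\Delta^{\rm D}\phi)(x) = 2\phi(x)-\phi(x-1)-\phi(x+1)$, with $\phi(0)=\phi(\ell+1)=0$, to $\phi_p$ reproduces $\epsilon(p)\phi_p$, and the $\phi_p$ are normalised. A short Python sketch (the specific chain lengths tested are arbitrary):

```python
import math

def dirichlet_spectrum_ok(l):
    # Check that phi_p(x) = sqrt(2/(l+1)) sin(x p), p = pi n/(l+1), satisfies
    # 2 phi(x) - phi(x-1) - phi(x+1) = 2(1 - cos p) phi(x)  with
    # Dirichlet values phi(0) = phi(l+1) = 0, and that phi_p is normalised.
    for n in range(1, l + 1):
        p = math.pi * n / (l + 1)
        eps = 2.0 * (1.0 - math.cos(p))
        phi = [math.sqrt(2.0 / (l + 1)) * math.sin(x * p) for x in range(1, l + 1)]
        for i in range(l):
            left = phi[i - 1] if i > 0 else 0.0
            right = phi[i + 1] if i < l - 1 else 0.0
            if abs(2.0 * phi[i] - left - right - eps * phi[i]) > 1e-12:
                return False
        if abs(sum(v * v for v in phi) - 1.0) > 1e-12:
            return False
    return True
```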
\medskip
\noindent {\it Step 3. Energy estimate.} We shall now give a bound on the energy of the trial state.
\begin{lemma} \label{lem:thp}
On the Fock space $\mathcal{F}=\bigotimes_{x\in\Lambda_\ell} \ell^2(\mathbb{N}_0)$,
\begin{equation}\label{THP}
\mathcal{P} H_{\ell}^{\rm D} \mathcal{P} \leq T\,.
\end{equation}
\end{lemma}
\begin{proof}
Definition \eqref{def:cP} implies that
\begin{equation} \label{cPcommutations}
\mathcal{P} a^\dagger_x = \prod_{z\in \Lambda_\ell}f(n_z)\, a^\dagger_x = a^\dagger_x f(n_x+1)\prod_{\substack{z\in \Lambda_\ell \\ z\neq x}}f(n_z) = a_x^\dagger\, \mathcal{P} \sqrt{1-\frac{n_x}{2S}}\,.
\end{equation}
It follows that
\begin{equation} \label{eq:propcP}
\mathcal{P} a^\dagger_x \sqrt{ 1 - \frac {n_x}{2S} } \sqrt{ 1-\frac{n_y}{2S}}a_y \mathcal{P} = a^\dagger_x \mathcal{P}^2 \big(1-\frac{n_x}{2S}\big)\big(1-\frac{n_y}{2S}\big) a_y.
\end{equation}
With the aid of \eqref{cPcommutations} and \eqref{eq:propcP} one checks that
\begin{eqnarray*}
\begin{aligned}
\mathcal{P} H_{\ell}^{\rm D} \mathcal{P} &= S\sum_{\langle x,y\rangle\subset \Lambda_\ell} (a^\dagger_x - a^\dagger_y) \mathcal{P}^2 \big(1-\frac{n_x}{2S}\big)\big(1-\frac{n_y}{2S}\big)(a_x-a_y) \\ & \quad + S \sum_{x\in\{1,\ell\}} a^\dagger_x \mathcal{P}^2 \big(1-\frac{n_x}{2S}\big) a_x.
\end{aligned}
\end{eqnarray*}
The desired bound \eqref{THP} then follows directly from $\mathcal{P}^2 \big(1-\frac{n_x}{2S}\big)\big(1-\frac{n_y}{2S}\big)\leq 1$ and $\mathcal{P}^2 \big(1-\frac{n_x}{2S}\big)\leq 1$.
\end{proof}
We conclude that
\begin{equation} \label{energybound}
\tr H_{\ell}^{\rm D} \Gamma \leq \frac{\tr_\mathcal{F} T e^{-\beta T}}{\tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P}}\, .
\end{equation}
As a next step, we will show that $\tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P}$ is close to $\tr_\mathcal{F} e^{-\beta T}$ for $\ell \ll (\beta S)^\frac23 $. The following lemma is an adaptation of the corresponding result in \cite[Lemma~4.3]{CGS}.
\begin{lemma}\label{lem:ppex}
We have
\begin{equation}\label{ppex}
\frac{ \Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P}}{\Tr_\mathcal{F} e^{-\beta T} } \geq 1- \left( \frac {\pi^2}{12}\right)^2 \frac{\ell (\ell+1)^2}{(\beta S)^2}.
\end{equation}
\end{lemma}
\begin{proof}
Using that $f(n_x)\leq 1$ and that $f(n_x)=1$ if $n\in \{0,1\}$, we have
\begin{equation}\label{omp}
1-\mathcal{P}^2 \leq \sum_{x=1}^\ell (1-f^2(n_x)) \leq \frac 12\sum_{x=1}^\ell n_x (n_x-1) = \frac 12 \sum_{x=1}^\ell a^\dagger_x a^\dagger_x a_x a_x \,.
\end{equation}
Wick's rule for Gaussian states therefore implies that
\begin{equation}\label{eq:gaussian_bound_energy}
\frac{ \Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P}}{\Tr_\mathcal{F} e^{-\beta T} } \geq 1- \frac 12 \sum_{x=1}^\ell \frac{ \Tr_\mathcal{F} a^\dagger_x a^\dagger_x a_x a_x e^{-\beta T}}{\Tr_\mathcal{F} e^{-\beta T} } = 1 -\sum_{x=1}^\ell \left( \frac{ \Tr_\mathcal{F} n_x e^{-\beta T}}{\Tr_\mathcal{F} e^{-\beta T} }\right)^2 \,.
\end{equation}
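The Wick-rule evaluation in \eqref{eq:gaussian_bound_energy} reduces, mode by mode, to the identity $\langle n(n-1)\rangle = 2\langle n\rangle^2$ for a single bosonic mode in a thermal (Gaussian) state, which can be checked by direct summation of the geometric distribution $p_n\propto e^{-\beta\epsilon n}$. A small Python sketch (the cutoff \texttt{nmax} is an assumption of the illustration):

```python
import math

def thermal_moments(beta_eps, nmax=500):
    # Single bosonic mode in a thermal state: probabilities p_n ∝ e^{-beta_eps n}.
    x = math.exp(-beta_eps)
    Z = sum(x ** n for n in range(nmax))
    mean_n = sum(n * x ** n for n in range(nmax)) / Z      # <n>
    pair = sum(n * (n - 1) * x ** n for n in range(nmax)) / Z  # <n(n-1)>
    return mean_n, pair
```

One finds $\langle n\rangle = (e^{\beta\epsilon}-1)^{-1}$, the Bose occupation appearing in the next display, and $\langle n(n-1)\rangle = 2\langle n\rangle^2$ as used above.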
Moreover,
\begin{equation*}
\frac{ \Tr_\mathcal{F} n_x e^{-\beta T}}{\Tr_\mathcal{F} e^{-\beta T}} = \frac 1{ e^{\beta S (-\Delta^{\rm D})} -1 }(x,x) =\sum_{p \in \Lambda_\ell^{*\rm D}}\frac{|\phi_p(x)|^2}{e^{\beta S \epsilon(p)}-1} \leq \frac{2}{\ell +1}\sum_{p \in \Lambda_\ell^{*\rm D}}\frac{1}{e^{\beta S \epsilon(p)}-1}.
\end{equation*}
By using $(e^{x}-1)^{-1}\leq x^{-1}$ for $x\geq 0$ in the last sum, as well as $1-\cos x\geq \frac{2x^2}{\pi^2}$ for $x\in(0,\pi)$, this gives
\begin{equation}\label{eq416}
\frac{ \Tr_\mathcal{F} n_x e^{-\beta T}}{\Tr_\mathcal{F} e^{-\beta T}}
\leq \frac{\ell +1}{2 \beta S } \sum_{n=1}^\ell \frac 1{n^2}
\leq \frac{\pi^2}{12} \frac{\ell +1}{\beta S } \,.
\end{equation}
\iffalse
By treating separately the smallest $p$ term we can bound the remaining terms in the sum by an integral. In fact, using monotonicity of $\epsilon(p)$ we have
\begin{equation*}
\begin{aligned}
\frac{2}{\ell +1}\sum_{p \in \Lambda_\ell^{*\rm D}}\frac{1}{e^{\beta S \epsilon(p)}-1}
&\leq \frac{2}{\ell +1}\frac{1}{e^{2\beta S (1-\cos\frac{\pi}{\ell+1})}-1}+\frac{2}{\pi}\int_{\frac{\pi}{\ell+1}}^\pi \frac{dt}{e^{2\beta S (1-\cos t)}-1} \\
&\leq \frac{2}{\ell +1}\frac{1}{e^{\beta S\frac{4}{(\ell+1)^2}}-1}+\frac{2}{\pi}\int_{\frac{\pi}{\ell+1}}^\pi \frac{dt}{e^{4\beta S t^2/\pi^2}-1}
\end{aligned}
\end{equation*}
where we used $1-\cos x\geq \frac{2x^2}{\pi^2}$ for $x\in(0,\pi)$. Finally, since $(e^{x}-1)^{-1}\leq x^{-1}$ for $x\geq 0$, we obtain
\begin{equation}
\frac{ \Tr_\mathcal{F} n_x e^{-\beta T}}{\Tr_\mathcal{F} e^{-\beta T}}\leq \frac{2\ell+1}{2 \beta S}.
\end{equation}
\fi
Inserting this bound into \eqref{eq:gaussian_bound_energy} yields the desired result.
\end{proof}
\smallskip
\noindent {\it Step 4. Entropy estimate.} It remains to give a bound on the entropy of $\Gamma$. We proceed in the same way as in \cite[Lemma~4.4]{CGS}.
\begin{lemma}\label{lem:ent}
We have
\begin{align*} \label{entb}
\frac 1 \beta \Tr \Gamma \ln \Gamma & \leq - \frac 1 \beta \ln \tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} - \frac{\Tr_\mathcal{F} T e^{-\beta T} }{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} } \\ & \quad + S \left( \frac{\pi^2}{12} \right)^2 \frac{ \ell(\ell+1)^3}{(\beta S)^{7/2}} \left[ \frac {\sqrt\pi \zeta(3/2)}{8} + \frac { (\beta S)^{1/2}}{\ell} \right] \frac{\Tr_\mathcal{F} e^{-\beta T} }{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} } \,.
\end{align*}
\end{lemma}
\begin{proof}
We have
\begin{equation*}
\Tr \Gamma \ln \Gamma = - \ln \tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} + \frac 1{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T} \mathcal{P}} \Tr_\mathcal{F} \mathcal{P} e^{-\beta T} \mathcal{P} \ln \mathcal{P} e^{-\beta T} \mathcal{P}\,.
\end{equation*}
Using the operator monotonicity of the logarithm, as well as the fact that the spectra of $\mathcal{P} e^{-\beta T} \mathcal{P}$ and $e^{-\beta T/2} \mathcal{P}^2 e^{-\beta T/2}$ agree, we can bound
\begin{align*}\nonumber
\Tr_\mathcal{F} & \mathcal{P} e^{-\beta T} \mathcal{P} \ln \mathcal{P} e^{-\beta T} \mathcal{P} = \Tr_\mathcal{F} e^{-\beta T/2} \mathcal{P}^2 e^{-\beta T/2} \ln e^{-\beta T/2} \mathcal{P}^2 e^{-\beta T/2} \\
& \leq \Tr_\mathcal{F} e^{-\beta T/2} \mathcal{P}^2 e^{-\beta T/2} \ln e^{-\beta T} = - \beta \Tr_\mathcal{F} T \mathcal{P}^2 e^{-\beta T} \,.
\end{align*}
Hence
\begin{equation}
\Tr \Gamma \ln \Gamma \leq - \ln \tr_\mathcal{F} \mathcal{P} e^{-\beta T} \mathcal{P} - \beta \frac{\Tr_\mathcal{F} T e^{-\beta T} }{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T} \mathcal{P} } + \beta \frac{\Tr_\mathcal{F} T (1-\mathcal{P}^2) e^{-\beta T} }{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} } \,.
\end{equation}
In the last term, we can bound $1-\mathcal{P}^2$ as in (\ref{omp}), and evaluate the resulting expression using Wick's rule. With $\phi_p$ the eigenfunctions of the Dirichlet Laplacian, displayed below Eq.~(\ref{epsd}), we obtain
\begin{equation}
\begin{aligned}\label{eq:entestimate1D}
\frac{ \Tr_\mathcal{F} T n_x(n_x-1) e^{-\beta T} }{\Tr_\mathcal{F} e^{-\beta T} } & = \left(\frac{ \Tr_\mathcal{F} n_x e^{-\beta T} }{\Tr_\mathcal{F} e^{-\beta T} } \right)^2 \sum_{p \in \Lambda_\ell^{*\rm D}} \frac {2S \epsilon(p) }{e^{\beta S \epsilon(p)} -1} \\ & \quad + \frac{ \Tr_\mathcal{F} n_x e^{-\beta T} }{\Tr_\mathcal{F} e^{-\beta T} } \sum_{p \in \Lambda_\ell^{*\rm D}} \frac { S\epsilon(p) |\phi_p(x)|^2 }{ \left( \sinh \tfrac 12 \beta S \epsilon(p) \right)^2 } \,.
\end{aligned}
\end{equation}
To estimate the sums over $p$ we proceed similarly as in the proof of Lemma~\ref{lem:ppex} to obtain
\begin{align*}
\sum_{p \in \Lambda_\ell^{*\rm D}} \frac {2S \epsilon(p) }{e^{\beta S \epsilon(p)} -1} & \leq \frac{\ell+1}{\pi} \int_{0}^\pi \frac {2S \epsilon(p) }{e^{\beta S \epsilon(p)} -1} dp \leq \frac{\ell+1}{\pi^3} \int_{0}^\pi \frac {8 S p^2}{e^{4\beta S p^2/\pi^2 } -1} dp \\
& \leq S \frac{\ell+1}{(\beta S)^{3/2}} \int_0^\infty \frac{p^2}{e^{p^2} - 1} dp = S \frac{\ell+1}{(\beta S)^{3/2}} \frac{\sqrt\pi}4 \zeta(3/2)
\end{align*}
and
\begin{align*}
\sum_{p \in \Lambda_\ell^{*\rm D}} \frac { S\epsilon(p) }{ \left( \sinh \tfrac 12 \beta S \epsilon(p) \right)^2 } \leq \frac 4{S\beta^2 } \sum_{p \in \Lambda_\ell^{*\rm D}} \frac { 1 }{ \epsilon(p) } \leq \frac {(\ell+1)^2}{S\beta^2 } \sum_{n=1}^\ell \frac { 1 }{ n^2 } \leq \frac {\pi^2}6 \frac {(\ell+1)^2}{S\beta^2} \,.
\end{align*}
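As an illustrative sanity check (not part of the proof), both sums can be compared with the bounds numerically; the parameter values below and the truncation of $\zeta(3/2)$ are ad hoc, and the Dirichlet momenta are taken as $p_n = \pi n/(\ell+1)$, $n=1,\dots,\ell$, consistent with the sums above.

```python
import math

# Illustrative numerical check of the two bounds above for ad hoc
# parameter values (S = 1, beta = 10, ell = 50); not part of the proof.
def eps(p):
    return 2.0 * (1.0 - math.cos(p))  # dispersion of the discrete Laplacian

S, beta, ell = 1.0, 10.0, 50
ps = [math.pi * n / (ell + 1) for n in range(1, ell + 1)]  # Dirichlet momenta

# truncated zeta(3/2): a slight underestimate, which only tightens the check
zeta_32 = sum(n ** -1.5 for n in range(1, 200000))

lhs1 = sum(2 * S * eps(p) / (math.exp(beta * S * eps(p)) - 1) for p in ps)
rhs1 = S * (ell + 1) / (beta * S) ** 1.5 * math.sqrt(math.pi) / 4 * zeta_32

lhs2 = sum(S * eps(p) / math.sinh(0.5 * beta * S * eps(p)) ** 2 for p in ps)
rhs2 = math.pi ** 2 / 6 * (ell + 1) ** 2 / (S * beta ** 2)

assert lhs1 <= rhs1 and lhs2 <= rhs2
```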
The expectation value of $n_x$ can be bounded independently of $x$ as in \eqref{eq416}. When summing over $x$, we can use the normalization $\sum_x |\phi_p(x)|^2 =1$. In combination this yields the desired bound.
\end{proof}
\smallskip
\noindent {\it Step 5. Final estimate.} The Gibbs variational principle \eqref{varpr} together with \eqref{energybound}, Lemma \ref{lem:ppex} and Lemma \ref{lem:ent} implies that for $(\beta S)^{1/2} \lesssim \ell \ll (\beta S)^{2/3}$
\begin{equation*}
\begin{aligned}
f_\ell^{\rm D}(\beta,S)& \leq - \frac{1}{\beta\ell} \ln \Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} + C S\frac{\ell^3}{(\beta S)^{7/2}}\frac{\Tr_\mathcal{F} e^{-\beta T} }{\Tr_\mathcal{F} \mathcal{P} e^{-\beta T}\mathcal{P} }\\
&\leq -\frac{1}{\beta \ell}\ln \Tr_\mathcal{F} e^{-\beta T}-\frac{1}{\beta \ell}\ln \left(1-\frac{C \ell^3}{(\beta S)^2}\right) + C S\frac{\ell^3}{(\beta S)^{7/2}}
\end{aligned}
\end{equation*}
for a suitable constant $C>0$.
The first term on the right side in the second line of the expression above equals
\begin{equation}
- \frac 1 {\beta \ell} \ln \Tr_\mathcal{F} e^{-\beta T} = \frac 1 {\beta\ell} \sum_{p\in \Lambda_\ell^{*\rm D} } \ln ( 1- e^{-\beta S \epsilon(p)} )\,.
\end{equation}
By monotonicity, we can bound the sum by the corresponding integral,
\begin{equation}\label{up:rie}
\frac 1 {\beta\ell} \sum_{p\in \Lambda_\ell^{*\rm D} } \ln ( 1- e^{-\beta S \epsilon(p)} ) \leq \frac{1}{\pi \beta} \left( 1 +\ell^{-1} \right) \int_{\frac \pi {\ell +1}}^{\pi} \ln ( 1- e^{-\beta S \epsilon(p)} ) dp\,,
\end{equation}
which is of the desired form, except for the missing part
$$
- \frac{1}{\pi \beta} \int_0^{\frac \pi {\ell +1}} \ln ( 1- e^{-\beta S \epsilon(p)} ) dp \leq - \frac 1 {\beta(\ell+1)} \int_0^1 \ln \left( 1 - e^{- \frac{4 \beta S}{(\ell+1)^2} p^2} \right) dp \sim \frac{ \ln (\ell^2/(\beta S)) }{ \beta \ell}
$$
for $\ell \gg (\beta S)^{1/2}$.
Since $\epsilon(p)\leq p^2$ we further have
\begin{align}\nonumber
\frac{1}{\beta \pi} \int_{0}^\pi \ln ( 1- e^{-\beta S \epsilon(p)} ) \, dp & \leq \frac{1}{\pi \beta} \int_{0}^\infty \ln ( 1- e^{-\beta S p^2} ) \, dp + \frac C {\beta (\beta S)^\alpha } \\
& =C_1 S^{-1/2} \beta^{-3/2} + \frac C {\beta (\beta S)^\alpha } \nonumber
\end{align}
for arbitrary $\alpha>0$, some $C>0$ (depending on $\alpha$), and $C_1$ defined in \eqref{eq:mainthmd1}. For $(\beta S)^{2/3} \gg \ell \gg (\beta S)^{1/2}$ all the error terms are small compared to the main term. The desired upper bound stated in Proposition~\ref{ub: pro} is obtained by combining the estimate above with \eqref{eq:localization_upperbound} and choosing $\ell \sim (\beta S)^{5/8} (\ln \beta S)^{1/4}$. \hfill\qed
\section{Lower bound}\label{sec:lower}
In this section we shall prove the following
\begin{proposition}\label{prop:lower}
Recall $C_1$ defined in \eqref{eq:mainthmd1}. As $ \beta S \to \infty $, we have
\begin{equation*}
f(\beta,S) \geq C_1 S^{-\frac12} \beta^{-\frac32} \left(1 + \mathcal{O}( (\beta S)^{-\frac{1}{12}}(\ln\beta S)^{1/2} (\ln \beta S^3)^{\frac13}) \right).
\end{equation*}
\end{proposition}
Note that in contrast to the upper bound in Prop.~\ref{ub: pro}, the lower bound above is not entirely uniform in $S$. Indeed, one has $\ln(\beta S^3)= \ln(\beta S)+\ln S^2$ and hence $S$ is not allowed to grow arbitrarily fast compared to $\beta S$. To obtain a uniform bound, one can combine our results with the method in \cite{CG} where the case $S\to \infty$ for fixed $\beta S$ was analyzed.
The remainder of this section is devoted to the proof of Prop.~\ref{prop:lower}. For clarity, the presentation will be divided into several steps. Some of them will use results from \cite{CGS}.
\medskip
\noindent {\it Step 1. Localization.} Recall the definition \eqref{heisenberg ham 1} of the Hamiltonian $H_L$. For a lower bound, we can drop a term $( S^2 - \vec S_\ell \cdot \vec S_{\ell+1})$ from the Hamiltonian, which leads to the subadditivity
\begin{equation} \label{eq:subbadit}
L f_L(\beta, S) \geq \ell f_{\ell}(\beta, S) + (L-\ell) f_{L-\ell}(\beta,S)
\end{equation}
for $1\leq \ell \leq L-1$. By applying this repeatedly, one readily finds that
$$
f(\beta,S) \geq f_\ell(\beta,S)
$$
for any $\ell \geq 1$. We shall choose $\ell$ large compared with the thermal wavelength, i.e., $\ell \gg (\beta S)^{1/2}$.
\medskip
\noindent {\it Step 2. Lower bound on the Hamiltonian.} Recall that the total spin operator is defined as $\vec S_{\rm tot} = \sum_{x=1}^\ell \vec S_x$. It follows from the theory of addition of angular momenta that
\begin{equation}\label{eq:totalspinsquare}
\vec S_{\rm tot}^2 = T(T+1) \ \text{with\ } \sigma(T)=\{0,1,\ldots, S\ell\}\,,
\end{equation}
where $\sigma$ denotes the spectrum.
We will use the following bound on the Hamiltonian.
\begin{lemma}
With $T$ defined in \eqref{eq:totalspinsquare}, we have
\begin{equation} \label{eq:lowerboundHam}
H_\ell \geq \frac 2{\ell^3} ( S\ell (S \ell +1) - \vec S_{\rm tot}^2 ) \geq \frac {2S}{\ell^2}\left( S\ell - T\right).
\end{equation}
\end{lemma}
\begin{proof}
It was shown in \cite[Eq. (5.6)]{CGS} that
$$
(S^2-\vec S_x\cdot \vec S_y) + (S^2-\vec S_y\cdot \vec S_z)\geq \frac 12 (S^2-\vec S_x\cdot \vec S_z)
$$
for three distinct sites $x,y,z$, and consequently that
$$
(y-x) \sum_{w=x}^{y-1} \left( S^2 - \vec S_w \cdot \vec S_{w+1} \right) \geq \frac 1{2} (S^2-\vec S_x\cdot \vec S_{y})
$$
for any $x<y$. After summing the above bound over all $1\leq x < y \leq \ell$, we obtain
\begin{align*}
\sum_{1\leq x<y\leq \ell} (S^2-\vec S_x\cdot \vec S_{y}) & \leq 2 \sum_{1\leq x<y\leq \ell} (y-x) \sum_{w=x}^{y-1} \left( S^2 - \vec S_w \cdot \vec S_{w+1} \right)
\\ & = 2 \sum_{w=1}^{\ell-1} \left( S^2 - \vec S_w \cdot \vec S_{w+1} \right) \sum_{x=1}^w \sum_{y=w+1}^\ell (y-x).
\end{align*}
We have
$$
\sum_{x=1}^w \sum_{y=w+1}^\ell (y-x) = \frac \ell 2 w (\ell - w) \leq \frac{\ell^3}{8}
$$
for $1\leq w \leq \ell-1$,
and hence
$$
H_\ell \geq \frac 4{\ell^3} \sum_{1\leq x<y\leq \ell} (S^2-\vec S_x\cdot \vec S_{y}) =\frac 2{\ell^3} ( S\ell (S \ell +1) - \vec S_{\rm tot}^2 ).
$$
As $\vec S_{\rm tot}^2 = T(T+1)$ we thus have
$$
H_\ell \geq
\frac {2S}{\ell^2}\left( S\ell+1 - \frac{T(T+1)}{S\ell}\right).
$$
The final bound \eqref{eq:lowerboundHam} then follows from the fact that $T\leq S\ell$.
\end{proof}
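The operator inequality \eqref{eq:lowerboundHam} can be verified by exact diagonalization in a small case. The sketch below (illustrative only; the choice $S=1/2$, $\ell=4$ is ad hoc) checks that $H_\ell - \frac{2}{\ell^3}(S\ell(S\ell+1) - \vec S_{\rm tot}^2)$ is positive semi-definite.

```python
import numpy as np

# Numerical check (illustration only, ad hoc parameters) of the operator
# inequality H_ell >= (2/ell^3)(S ell (S ell + 1) - S_tot^2)
# for the spin-1/2 chain with ell = 4 sites.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

ell, S = 4, 0.5
dim = 2 ** ell

def site_op(op, x):
    # embed a single-site operator at site x (0-based) into the chain
    out = np.array([[1.0 + 0j]])
    for k in range(ell):
        out = np.kron(out, op if k == x else I2)
    return out

Sops = [[site_op(s, x) for s in (sx, sy, sz)] for x in range(ell)]

# H_ell = sum_x ( S^2 - S_x . S_{x+1} ), nearest neighbors, open chain
H = np.zeros((dim, dim), dtype=complex)
for x in range(ell - 1):
    H += S ** 2 * np.eye(dim)
    for a in range(3):
        H -= Sops[x][a] @ Sops[x + 1][a]

Stot2 = np.zeros((dim, dim), dtype=complex)
for a in range(3):
    T = sum(Sops[x][a] for x in range(ell))
    Stot2 += T @ T

lower = (2 / ell ** 3) * (S * ell * (S * ell + 1) * np.eye(dim) - Stot2)
gap = np.linalg.eigvalsh(H - lower).min()
assert gap > -1e-10
```

The same script also confirms that the ferromagnetic ground state energy of $H_\ell$ is zero, as it must be since $H_\ell$ is written as a sum of non-negative terms.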
Note that the bound \eqref{eq:lowerboundHam} implies, in particular, a lower bound of $2S \ell^{-2}$ on the spectral gap of $H_\ell$ above its ground state energy. The exact spectral gap is known to equal $2 S ( 1- \cos(\pi/\ell)) \approx \pi^2 S \ell^{-2}$, see \cite{CLR}.
\smallskip
\noindent {\it Step 3. Preliminary lower bound on free energy.} With the aid of \eqref{eq:lowerboundHam} we shall now prove the following preliminary lower bound on the free energy.
\begin{lemma}\label{lem53}
Let
\begin{equation}
\ell_0 := \sqrt{\frac{4 \beta S}{\ln \beta S }} \label{def:ell0}
\end{equation}
and assume that $\ell \geq \ell_0/2$. Then, for $\beta S$ sufficiently large, we have
\begin{equation}\label{eq:preelimfreeenergylower}
f_\ell(\beta,S) \geq - C\frac { \left( \ln \beta S\right)^{1/2}}{\beta^{3/2} S^{1/2}} \ln \beta S^3
\end{equation}
for some constant $C>0$.
\end{lemma}
\begin{proof}
With the aid of \eqref{eq:lowerboundHam} and the $SU(2)$ symmetry we have
\begin{align*}
\Tr e^{-\beta H_\ell} & \leq \sum_{n=0}^{\lfloor S\ell\rfloor} e^{ {-2\beta S}{\ell^{-2}} n } \Tr \id_{T=S\ell -n} \\ & = \sum_{n=0}^{\lfloor S\ell\rfloor} e^{ {-2\beta S}{\ell^{-2}} n } \left( 2(S\ell -n)+1\right) \Tr \id_{T=S\ell -n}\id_{S_{\rm tot}^3 = n-S\ell}
\\ & \leq (2 S\ell +1) \sum_{n=0}^{\lfloor S\ell\rfloor} e^{ {-2\beta S}{\ell^{-2}} n } \Tr \id_{S_{\rm tot}^3 = n-S\ell}.
\end{align*}
The last trace equals the number of ways $n$ indistinguishable particles can be distributed over $\ell$ sites, with at most $2S$ particles per site. Dropping this latter constraint for an upper bound, we obtain
$$
\Tr e^{-\beta H_\ell} \leq (2S \ell+1) \left ( 1 - e^{ {-2\beta S}{\ell^{-2}} }\right)^{-\ell} \,.
$$
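The counting step can be illustrated directly in a small case: enumerating occupation numbers with at most $2S$ particles per site and comparing with the unconstrained stars-and-bars count (the values of $\ell$, $S$, $n$ below are arbitrary).

```python
from itertools import product
from math import comb

# Illustration of the counting step above: the number of ways to place n
# indistinguishable particles on ell sites with at most 2S per site is
# bounded by the count without the occupancy constraint. Ad hoc parameters.
ell, S, n = 4, 1, 5
constrained = sum(1 for occ in product(range(2 * S + 1), repeat=ell)
                  if sum(occ) == n)
unconstrained = comb(n + ell - 1, ell - 1)  # no per-site constraint
assert constrained <= unconstrained
```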
In particular,
\begin{equation}\label{fbo}
f_\ell(\beta,S) \geq -\frac 1{\beta \ell} \ln (1+ 2 S\ell ) + \frac {1}{\beta } \ln \left ( 1 - e^{ {-2\beta S}{\ell^{-2}} }\right)\,.
\end{equation}
For large $\beta S$, the right side of \eqref{fbo} is optimized when $\ell \approx \ell_0$ with $\ell_0$ given in \eqref{def:ell0}. If $\ell_0/2 \leq \ell \leq \ell_0$, we can use the lower bound on $\ell$ in the first term in \eqref{fbo}, and the upper bound on the second, to obtain
\begin{equation}\label{fbo2}
f_\ell(\beta,S) \geq -\frac { (\ln \beta S)^{1/2}}{\beta ( \beta S)^{1/2}} \ln \left(1+ 2 S (\beta S)^{1/2} (\ln \beta S)^{-1/2} \right) + \frac {1}{\beta } \ln \left ( 1 - (\beta S)^{-1/2} \right)\,,
\end{equation}
which is of the desired form.
If $\ell > \ell_0$, we can divide the interval $[1,\ell]$ into smaller ones of size between $\ell_0/2$ and $\ell_0$. Using the subadditivity \eqref{eq:subbadit} we conclude \eqref{fbo2} also in that case.
\end{proof}
\smallskip
\noindent {\it Step 4. Restriction to low energies.}
For any $E>0$, we have
\begin{align*}
\Tr e^{-\beta H_\ell} & \leq \Tr e^{-\beta H_\ell} \id_{H_\ell < E} + e^{-\beta E/2} \Tr e^{-\beta H_\ell/2 } \id_{H_\ell \geq E}
\\ & \leq \Tr e^{-\beta H_\ell} \id_{H_\ell < E} + e^{-\beta (E + \ell f_\ell(\beta/2,S))/2}.
\end{align*}
In particular, with the choice
$$
E = E_0(\ell,\beta,S) := - \ell f_\ell(\beta/2,S)
$$
this gives
\begin{equation}\label{tg}
\Tr e^{-\beta H_\ell} \leq 1 + \Tr e^{-\beta H_\ell} \id_{H_\ell < E_0}.
\end{equation}
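The splitting inequality used above holds for any self-adjoint operator and any $E$: for eigenvalues $\lambda \geq E$ one has $e^{-\beta\lambda} \leq e^{-\beta E/2} e^{-\beta\lambda/2}$. A toy numerical check on a random stand-in spectrum (illustration only):

```python
import math
import random

# Toy check (not part of the proof): for any self-adjoint H and any E,
#   Tr e^{-b H} 1_{H >= E}  <=  e^{-b E / 2} Tr e^{-b H / 2},
# since e^{-b lam} <= e^{-b E/2} e^{-b lam/2} whenever lam >= E.
random.seed(0)
evals = [random.uniform(0.0, 10.0) for _ in range(200)]  # stand-in spectrum
b, E = 2.0, 3.0
lhs = sum(math.exp(-b * lam) for lam in evals if lam >= E)
rhs = math.exp(-b * E / 2) * sum(math.exp(-b * lam / 2) for lam in evals)
assert lhs <= rhs
```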
Using the $SU(2)$ invariance, we can further write
\begin{align}\nonumber
\Tr e^{-\beta H_\ell} \id_{H_\ell < E_0} & = \sum_{n=0}^{\lfloor S\ell\rfloor} (2(S\ell -n) +1) \Tr e^{-\beta H_\ell} \id_{H_\ell < E_0} \id_{T=S\ell -n} \id_{S_{\rm tot}^3=n-S\ell}
\\ & \leq (2 S\ell +1) \sum_{n=0}^{\lfloor S\ell\rfloor} \Tr e^{-\beta H_\ell} P_{E_0,n} \label{eq56}
\end{align}
where
\begin{equation} \label{def:PE0}
P_{E_0,n} = \id_{H_\ell < E_0} \id_{T=S\ell -n} \id_{S_{\rm tot}^3=n-S\ell}.
\end{equation}
In other words, we can restrict the trace to states with $S_{\rm tot}^3$ being as small as possible (given $\vec S_{\rm tot}^2$). In the particle picture discussed above, this amounts to particle number $\mathbb{N} = S\ell - T = n$. Because of \eqref{eq:lowerboundHam}, we have $E_0 > H_\ell \geq 2 S n/\ell^2$ on the range of $P_{E_0,n}$, hence the sum in \eqref{eq56} is restricted to
\begin{equation} \label{def:N_0}
n < N_0 := \frac {E_0 \ell^2}{2S}.
\end{equation}
\smallskip
\noindent {\it Step 5. A Laplacian lower bound.} With the aid of the Holstein--Primakoff representation \eqref{ax upx}, we can equivalently write the Hamiltonian $H_\ell$ in terms of bosonic creation and annihilation operators as
\begin{equation}\label{newham}
H_\ell = S\sum_{x=1}^{\ell-1} \left( a^\dagger_{x+1} \sqrt{ 1 - \frac{n_x}{2S} } - a^\dagger_x \sqrt{ 1 - \frac{n_{x+1}}{2S}} \right) \left( a_{x+1} \sqrt{ 1 - \frac{n_x}{2S} } - a_x \sqrt{ 1 - \frac{n_{x+1}}{2S}} \right)
\end{equation}
where $n_x =a^\dagger_x a_x \leq 2S$. Note that written in this form, the Hamiltonian $H_\ell$ is manifestly positive, contrary to \eqref{hamb}.
Let $\mathbb{N} = \sum_{x} n_x = \ell S + S_{\rm tot}^3$ denote the total number of bosons.
States $\Psi$ with $n$ particles, i.e., $\mathbb{N} \Psi = n\Psi$, are naturally identified with $n$-boson wave functions\footnote{Here $\ell^2_{\rm sym}(A)$ denotes the Hilbert space of square-summable sequences on $A$ invariant under permutations} in $\ell^2_{\rm sym}([1,\ell]^n)$ via
$$
\Psi = \frac 1{\sqrt{n!}} \sum_{x_1,\dots,x_n} \Psi(x_1,\dots,x_n) { a^\dagger_{x_1} \cdots a^\dagger_{x_n} |\Omega\rangle } \,,
$$
where $|\Omega\rangle$ denotes the vacuum (which corresponds to the state with all spins pointing maximally down). Using \eqref{newham}, we have in this representation
\begin{align*}\nonumber
\langle \Psi | H_\ell \Psi \rangle & = S n \sum_{x=1}^{\ell-1} \sum_{x_1,\dots,x_{n-1}} \left | \Psi(x+1,x_1,\dots,x_{n-1}) \sqrt{ 1 - \frac{\sum_{k=1}^{n-1} \delta_{x,x_k}}{2S} } \right. \\ & \left. \qquad\qquad\qquad\qquad\quad - \Psi(x,x_1,\dots,x_{n-1}) \sqrt{ 1 - \frac{\sum_{k=1}^{n-1} \delta_{x+1,x_k}}{2S} }\right|^2.
\end{align*}
Because of permutation-symmetry, we can also write this as
\begin{align*}\nonumber
\langle \Psi | H_\ell \Psi \rangle & = S \sum_{j=1}^n \sum_{\substack{x_1,\dots,x_{n} \\ x_j \leq \ell-1}} \left | \Psi(x_1,\dots, x_j+1, \dots x_{n}) \sqrt{ 1 - \frac{\sum_{k, k\neq j} \delta_{x_j,x_k}}{2S} } \right. \\ & \left. \qquad\qquad\qquad\quad - \Psi(x_1,\dots,x_j, \dots x_{n}) \sqrt{ 1 - \frac{\sum_{k, k\neq j} \delta_{x_j+1,x_k}}{2S} }\right|^2\,.
\end{align*}
For a lower bound, we can restrict the sum over $x_1,\dots,x_{n}$ to values such that $x_k\neq x_l$ for all $k\neq l$. For a given $j$, we can further restrict to $x_k\neq x_j+1$ for all $k\neq j$. In this case, the square root factors above are equal to $1$. In other words, we have the lower bound
$$
\langle \Psi | H_\ell \Psi \rangle \geq \frac S 2 \sum_{\substack{X,Y \in \mathcal{X}_{\ell,n} \\ |X-Y| = 1}} \left| \Psi(X) - \Psi(Y)\right|^2
$$
where the sum is over the set $\mathcal{X}_{\ell,n} := \{ X \in [1,\ell]^n : x_i \neq x_j \ \forall\, i\neq j\} $, and $|X-Y| = \sum_{i=1}^n |x_i-y_i|$. Note that we have to assume that $\ell \geq n$ for the set $\mathcal{X}_{\ell,n}$ to be non-empty. The factor $1/2$ arises from the fact that particles are allowed to hop both left and right, i.e., each pair $(X,Y)$ appears twice in the sum. Note also that the above inequality is actually an equality for $S=1/2$, since in this case no two particles can occupy the same site.
On the set $\{ 1\leq x_1 < x_2 < \dots <x_n\leq \ell\} \subset \mathcal{X}_{\ell,n}$ define the map
$$
V (x_1, \dots, x_n) = (x_1, x_2-1, x_3-2 , \dots, x_n - n+ 1)
$$
and extend it to the set $\mathcal{X}_{\ell,n}$ via permutations. In other words, $V$ maps $x_i$ to $x_i - k_i$, where $k_i$ denotes the number of $x_j$ with $x_j < x_i$. As a map from $\mathcal{X}_{\ell,n}$ to $[1,\ell-n+1]^n$, $V$ is clearly surjective, but it is not injective. Points in $[1,\ell-n+1]^n$ with at least two coordinates equal have more than one pre-image under $V$. The pre-images are unique up to permutations, however, hence we can define a bosonic wave function $\Phi$ on $[1,\ell-n+1]^n$ by
\begin{equation} \label{def:mapV}
\Phi(V(X)) = \Psi(X) \quad \text{for $X\in \mathcal{X}_{\ell,n}$}.
\end{equation}
We then have
$$
\sum_{\substack{X,Y \in \mathcal{X}_{\ell,n} \\ |X-Y| = 1}} \left| \Psi(X) - \Psi(Y)\right|^2 = \sum_{A,B \in [1,\ell-n+1]^n} \left| \Phi(A)-\Phi(B) \right|^2 \sum_{X\in V^{-1}(A), Y\in V^{-1}(B)} \chi_{|X-Y|=1}\,.
$$
For every pair $(A,B)\in [1,\ell-n+1]^n$ with $|A-B|=1$, there exists at least one pair $(X,Y)\in \mathcal{X}_{\ell,n} $ with $|X-Y|=1$ in the pre-image of $V$. In other words, the last sum above is greater than or equal to $1$ if $|A-B|=1$. All this leads to the following statement.
\begin{proposition}\label{prop:hamiltonianLaplacianBound}
Let $\mathbb{V}$ denote the map from $\ell^2_{\rm sym}([1,\ell]^n)$ to $\ell^2_{\rm sym}([1,\ell-n+1]^n)$ induced by the map $V$ in \eqref{def:mapV}, i.e., $(\mathbb{V}\Psi)(V(X))= \Psi(X)$ for $X\in \mathcal{X}_{\ell,n}$. Then
$$
\id_{\mathbb{N}=n} H_\ell \geq S \mathbb{V}^\dagger (-\Delta_n^{\ell-n+1}) \mathbb{V},
$$
where $\Delta_n^\ell$ denotes the Laplacian\footnote{This is the graph Laplacian, with free (or Neumann) boundary conditions.} on $[1,\ell]^n$.
\end{proposition}
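The combinatorics behind $V$ can be checked directly in a small case: restricted to ordered tuples, $V$ sends the strictly increasing tuples in $[1,\ell]^n$ bijectively onto the nondecreasing tuples in $[1,\ell-n+1]^n$ (the values $\ell=6$, $n=3$ below are arbitrary).

```python
from itertools import combinations, combinations_with_replacement

# Small-case check of the map V on ordered tuples: strictly increasing
# tuples in [1, ell]^n are mapped bijectively onto nondecreasing tuples
# in [1, ell - n + 1]^n. Parameters are ad hoc.
ell, n = 6, 3

def V(xs):
    # xs is sorted increasingly; subtract the number of smaller entries
    return tuple(x - k for k, x in enumerate(xs))

domain = list(combinations(range(1, ell + 1), n))  # x1 < x2 < ... < xn
images = {V(xs) for xs in domain}
target = set(combinations_with_replacement(range(1, ell - n + 2), n))
assert images == target and len(images) == len(domain)
```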
\smallskip
\noindent {\it Step 6. Bounds on the two-particle density.}
We will use Prop.~\ref{prop:hamiltonianLaplacianBound} and the min-max principle to obtain a lower bound on the eigenvalues of $H_\ell$. For this purpose we need an estimate on the norm of $\mathbb{V}\Psi$.
\begin{lemma}\label{lem:phinormlower}
Let $\Psi\in \ell^2_{\rm sym}([1,\ell]^n)$ with $\|\Psi\|=1$, and let $\rho(x,y) = \langle \Psi | a^\dagger_x a^\dagger_y a_y a_x \Psi\rangle$ be its two-particle density. Then
\begin{equation}\label{eq:normphi_lowerbound}
\|\mathbb{V}\Psi\|^2 \geq 1 - \frac 12 \sum_{x=1}^\ell \rho(x,x) - \sum_{x=1}^{\ell-1} \rho(x,x+1)\,.
\end{equation}
\end{lemma}
\begin{proof}
From the definition of $\Phi:=\mathbb{V}\Psi$ we have
$$
\|\Phi\|^2 = \sum_{A\in[1,\ell-n+1]^n} | \Phi(A)|^2 = \sum_{X\in \mathcal{X}_{\ell,n}} |\Psi(X)|^2 | V^{-1}(V(X))|^{-1} \,,
$$
where $|V^{-1}(V(X))|$ denotes the number of points in the pre-image of $V(X)$. This number equals one if $X$ is such that $|x_j-x_k|\geq 2$ for all $j\neq k$. Hence
$$
\|\Phi\|^2 \geq \sum_{\substack{ X\in\mathcal{X}_{\ell,n} \\ |x_j - x_k|\geq 2 \, \forall j\neq k}} |\Psi(X)|^2 \geq \|\Psi\|^2 - \frac 12 \sum_{x=1}^\ell \langle \Psi | n_x(n_x-1) \Psi\rangle - \sum_{x=1}^{\ell-1} \langle \Psi | n_x n_{x+1} \Psi \rangle.
$$
Indeed, the norm of $\Psi$ involves a sum over all possible configurations so we need to remove the terms which correspond to $x_i=x_j$ or $x_i=x_j+1$ for some $i \neq j$. The $x_i=x_j$ terms are removed through the term $ \frac 12 \sum_{x=1}^\ell n_x(n_x-1) $, which is zero if and only if on each site there is at most one particle. Similarly, the terms corresponding to $x_i=x_j+1$
are removed through $\sum_{x=1}^{\ell-1} n_x n_{x+1} $, which is zero if and only if there are no two neighboring sites that are occupied.
With $\|\Psi\|=1$ and the definition of $\rho(x,y)$ this becomes \eqref{eq:normphi_lowerbound}.
\end{proof}
We shall give a lower bound on the right side of \eqref{eq:normphi_lowerbound} in terms of the energy of $\Psi$.
\begin{proposition}\label{prop56}
Let $\Psi\in \ell^2_{\rm sym}([1,\ell]^n)$ with $\|\Psi\|=1$, and let $\rho(x,y) = \langle \Psi | a^\dagger_x a^\dagger_y a_y a_x \Psi\rangle$ be its two-particle density. Then
\begin{equation}
\sum_{x=1}^{\ell-1} \rho(x+1,x) \leq \frac 4 \ell n(n-1) + 4 (n-1) \sqrt {\frac n S} \langle \Psi | H_\ell \Psi\rangle^{1/2}. \label{eq:twobodydensitybound}
\end{equation}
\end{proposition}
\begin{proof}
For $x\neq z$, we have
\begin{align*}
& \rho(x,y)\left( 1- \frac{\delta_{z,y}}{2S}\right) - \rho(z,y) \left( 1 -\frac{\delta_{x,y}}{2S}\right)
\\ & = \Re \left\langle \Psi \left| \left( a^\dagger_x\sqrt{1-\frac{n_z}{2S}} - a^\dagger_z \sqrt{1-\frac{n_x}{2S}} \right) n_y \left( a_x\sqrt{1-\frac{n_z}{2S}} + a_z \sqrt{1-\frac{n_x}{2S}} \right) \right. \Psi \right\rangle \,.
\end{align*}
The Cauchy--Schwarz inequality therefore implies that
\begin{align*}
& \left| \rho(x,y)\left( 1- \frac{\delta_{z,y}}{2S}\right) - \rho(z,y) \left( 1 -\frac{\delta_{x,y}}{2S}\right) \right|^2
\\ & \leq \left\langle \Psi \left| \left( a^\dagger_x\sqrt{1-\frac{n_z}{2S}} - a^\dagger_z \sqrt{1-\frac{n_x}{2S}} \right) n_y \left( a_x\sqrt{1-\frac{n_z}{2S}} - a_z \sqrt{1-\frac{n_x}{2S}} \right) \right.\Psi\right\rangle \\ &\quad \times \left\langle \Psi \left| \left( a^\dagger_x\sqrt{1-\frac{n_z}{2S}} + a^\dagger_z \sqrt{1-\frac{n_x}{2S}} \right) n_y \left( a_x\sqrt{1-\frac{n_z}{2S}} + a_z \sqrt{1-\frac{n_x}{2S}} \right) \right.\Psi \right\rangle \,.
\end{align*}
Moreover,
\begin{align*}
&\left\langle \Psi \left| \left( a^\dagger_x\sqrt{1-\frac{n_z}{2S}} + a^\dagger_z \sqrt{1-\frac{n_x}{2S}} \right) n_y \left( a_x\sqrt{1-\frac{n_z}{2S}} + a_z \sqrt{1-\frac{n_x}{2S}} \right) \right.\Psi \right\rangle \\ & \leq 2 \left\langle \Psi \left| a^\dagger_x \left(1-\frac{n_z}{2S}\right) n_y a_x \right.\Psi \right\rangle + 2 \left\langle \Psi \left| a^\dagger_z \left(1-\frac{n_x}{2S}\right) n_y a_z \right.\Psi \right\rangle \\ & \leq 2 \rho(x,y) \left(1-\frac{\delta_{z,y}}{2S}\right) + 2 \rho(z,y) \left(1-\frac{\delta_{x,y}}{2S}\right)\,.
\end{align*}
With
$$
h_x^y := \left( a^\dagger_{x+1}\sqrt{1-\frac{n_x}{2S}} - a^\dagger_x \sqrt{1-\frac{n_{x+1}}{2S}} \right) n_y \left( a_{x+1}\sqrt{1-\frac{n_x}{2S}} - a_x \sqrt{1-\frac{n_{x+1}}{2S}} \right)
$$
we thus have
\begin{align}\nonumber
& \left| \rho(x+1,y)\left( 1- \frac{\delta_{x,y}}{2S}\right) - \rho(x,y) \left( 1 -\frac{\delta_{x+1,y}}{2S}\right) \right|^2
\\ & \leq 2 \left\langle \Psi \left| h_x^y \right.\Psi\right\rangle
\left( \rho(x+1,y) \left(1-\frac{\delta_{x,y}}{2S}\right) + \rho(x,y) \left(1-\frac{\delta_{x+1,y}}{2S}\right) \right) \,. \label{eq512}
\end{align}
We note that
$$
S\sum_{x=1}^{\ell-1} \sum_{y=1}^\ell h_x^y = H_\ell \left( \mathbb{N} -1\right)\,.
$$
For given $y \leq \ell/2$, choose $x_y > y$ such that
$$
\rho(x,y) \geq \rho(x_y, y) \quad \text {for all $x > y$}\,.
$$
We have
$$
\rho(y+1,y) = \rho(x_y,y) + \sum_{w = y+1}^{x_y-1} \left( \rho(w,y) - \rho(w+1,y) \right)
$$
(where the sum is understood to be zero if $x_y=y+1$). The first term on the right side can be bounded as
$$
\rho(x_y,y) \leq \frac 1{\ell-y} \sum_{x=y+1}^\ell \rho(x,y) \leq \frac 2 \ell \sum_{x=1}^\ell \rho(x,y)
$$
using that $y\leq \ell/2$ by assumption. For the second we use the bound \eqref{eq512} above, which implies that
$$
\left| \rho(w,y) - \rho(w+1,y) \right| \leq \sqrt 2 \langle \Psi | h_w^y \Psi\rangle^{1/2} \left( \rho(w+1,y) + \rho(w,y) \right)^{1/2}
$$
for $w\geq y+1$. After summing over $y$ and $w$, using the Cauchy--Schwarz inequality and the fact that $\sum_{x,y}\rho(x,y) = n(n-1)$, we thus have the upper bound
$$
\sum_{y\leq \ell/2} \rho(y+1,y) \leq \frac {2n(n-1)} \ell + 2 \sqrt{\frac n S}(n-1) \langle \Psi | H_\ell \Psi\rangle^{1/2}.
$$
If $y>\ell/2$, we use the symmetry of $\rho$ and write
$$
\rho(y+1,y) = \rho(y,y+1) = \rho(x_y,y+1) + \sum_{w=x_y}^{y-1} \left( \rho(w+1 ,y+1) - \rho(w, y+1) \right)
$$
instead, where $x_y$ is now defined by minimizing $\rho(x,y+1)$ for $x\leq y$. Proceeding as above,
we finally conclude the desired estimate.
\end{proof}
A similar bound holds for $\sum_x \rho(x,x)$.
\begin{proposition}\label{prop57}
Let $\Psi\in \ell^2_{\rm sym}([1,\ell]^n)$ with $\|\Psi\|=1$, and let $\rho(x,y) = \langle \Psi | a^\dagger_x a^\dagger_y a_y a_x \Psi\rangle$ be its two-particle density. Then
\begin{equation}\label{cpo}
\sum_{x=1}^\ell \rho(x,x) \leq \frac 4 \ell n(n-1) + (4 + \sqrt{3}) (n-1) \sqrt {\frac n S} \langle \Psi | H_\ell \Psi\rangle^{1/2} \,.
\end{equation}
\end{proposition}
\begin{proof}
Since $ \rho(x,x)$ vanishes for $S=1/2$, we can assume $S\geq 1$ henceforth. By \eqref{eq512},
\begin{align*}
& \left| \rho(x\pm 1,x)\left( 1- \frac{1}{2S}\right) - \rho(x,x) \right|^2
\\ & \leq 2 \sum_{y=1}^{\ell -1} \left\langle \Psi \left| h_y^x \right.\Psi\right\rangle
\left( \rho(x\pm 1,x) \left(1-\frac{1}{2S}\right) + \rho(x,x) \right) \,.
\end{align*}
It thus follows from the Cauchy--Schwarz inequality that
\begin{align*}
&\sum_{x=1}^\ell \rho(x,x) \leq 2 \left( 1 -\frac 1{2S}\right) \sum_{x=1}^{\ell -1} \rho(x+1,x) \\ & \quad + \sqrt{2(n-1) /S } \left\langle \Psi | H_\ell \Psi\right\rangle^{1/2}
\left( 2 \sum_{x=1}^{\ell -1} \rho(x+ 1,x) \left(1-\frac{1}{2S}\right) + \sum_{x=1}^\ell \rho(x,x) \right)^{1/2}\,.
\end{align*}
In the last line, we can make the rough bounds $2 \sum_{x=1}^{\ell -1} \rho(x+ 1,x) \leq n(n-1)$ and $\sum_{x=1}^\ell \rho(x,x)\leq n(n-1)$, and for the term in the first line we use \eqref{eq:twobodydensitybound}. Using also $S\geq 1$, this completes the proof of \eqref{cpo}.
\end{proof}
\smallskip
\noindent {\it Step 7. Final estimate.} Recall the definition \eqref{def:PE0} of $P_{E_0,n}$. It follows from Prop.~\ref{prop:hamiltonianLaplacianBound} that
$$
P_{E_0,n} H_\ell \geq S P_{E_0,n} \mathbb{V}^\dagger (-\Delta_n^{\ell-n+1}) \mathbb{V}P_{E_0,n}
$$
and from Lemma~\ref{lem:phinormlower}, Prop.~\ref{prop56} and Prop.~\ref{prop57} that
$$
P_{E_0,n} \mathbb{V}^\dagger \mathbb{V} P_{E_0,n} \geq P_{E_0,n} (1 - \delta)
$$
where
$$
\delta = \frac {8N_0^2} \ell + 9 N_0 \sqrt {\frac {N_0 E_0} S}
= \left( 2 + \frac 9{\sqrt 8} \right) \frac {E_0^2 \ell^3}{S^2}.
$$
Here we used \eqref{def:N_0}. We shall choose the parameters such that $\delta \ll 1$ for large $\beta$. The min-max principle readily implies that the eigenvalues of $H_\ell$ in the range $P_{E_0,n}$ are bounded from below by the corresponding ones of $S(1-\delta)(-\Delta_n^{\ell-n+1})$. In particular,
for any $\beta>0$
$$
\Tr P_{E_0,n} e^{-\beta H_\ell} \leq \Tr e^{\beta S(1-\delta) \Delta_n^{\ell-n+1}}\,.
$$
Note that the Laplacian $\Delta_n^{\ell-n+1}$ depends on $n$, besides the particle number, also via the size of the interval $[1,\ell-n+1]$. For a lower bound, we can increase the interval size back to $\ell$; all eigenvalues clearly decrease under this transformation. In particular,
\begin{align}\nonumber
\Tr e^{-\beta H_\ell} \id_{H_\ell < E_0} & \leq (2 S\ell +1) \sum_{n=0}^{\lfloor N_0\rfloor} \Tr e^{\beta S(1-\delta) \Delta_n^{\ell}} \\
& \leq (2 S\ell +1) (N_0+1 ) \prod_{m=1}^{\ell-1} \left( 1 - e^{-\beta S (1-\delta) \epsilon(\pi m/\ell) } \right)^{-1} \label{tg2}
\end{align}
where $\epsilon(p) = 2 (1-\cos p)$ is the dispersion relation of the discrete Laplacian on $[1,\ell]$.
Combining \eqref{tg} and \eqref{tg2}, we have thus shown that
\begin{align*}
f_\ell(\beta,S) & \geq -\frac1{\beta \ell} \ln \left( 1+ (2S\ell+1) (N_0+1 ) \prod_{m=1}^{\ell-1} \left( 1 - e^{-\beta S (1-\delta) \epsilon(\pi m/\ell) } \right)^{-1}\right)
\\ & \geq \frac1{\beta \ell} \sum_{m=1}^{\ell-1} \ln \left( 1 - e^{-\beta S (1-\delta) \epsilon(\pi m/\ell) } \right) -\frac1{\beta \ell} \ln \left( 1+ (2S\ell+1) (N_0+1 ) \right)\,,
\end{align*}
with $\delta \sim E_0^2 \ell^3 S^{-2}$, $N_0 = E_0 \ell^2/(2S)$ and $E_0 \sim \ell \beta^{-3/2} S^{-1/2} ( \ln(\beta S))^{1/2} \ln (\beta S^3)$.
Since $\epsilon(p)$ is increasing in $p$, we further have
$$
\frac 1{\beta \ell} \sum_{m=1}^{\ell-1} \ln \left( 1 - e^{-\beta S (1-\delta) \epsilon(\pi m/\ell) } \right) \geq \frac 1{\pi \beta} \int_0^\pi \ln(1-e^{-\beta S (1-\delta) \epsilon(p)}) dp.
$$
The error terms compared to the desired expression
$$
\frac 1{\pi \beta} \int_0^\pi \ln(1-e^{-\beta S \epsilon(p)}) dp \sim \beta^{-3/2} S^{-1/2}
$$
are thus
$$
\delta \sim\ell^5 \frac {\ln(\beta S)}{ (\beta S)^3} \left( \ln (\beta S^3) \right)^2 \quad \text{and} \quad (\beta S)^{1/2}\ell^{-1} \ln \left( S \ell N_0\right)
$$
which leads to a choice of $\ell \sim (\beta S)^{1/2 + 1/12} (\ln (\beta S^3))^{-1/3}$ and a relative error of the order $ (\beta S)^{-1/12} \ln(\beta S) (\ln (\beta S^3))^{1/3}$. Note that for this choice the condition $\ell \geq \ell_0/2$ of Lemma~\ref{lem53} is fulfilled exactly when this error is small.
Finally, we note that (compare with \cite[Eqs. (5.42) and (5.43)]{CGS})
$$
\int_0^\pi \ln(1-e^{-\beta S \epsilon(p)}) dp \geq \frac 1{ (\beta S)^{1/2}} \int_0^\infty \ln(1-e^{-p^2}) dp - \mathcal{O}( (\beta S)^{-3/2})
$$
for large $\beta S$.
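For reference, the limiting integral has the closed form $\int_0^\infty \ln(1-e^{-p^2})\,dp = -\tfrac{\sqrt\pi}{2}\zeta(3/2)$, obtained by expanding the logarithm in a geometric series and integrating term by term. The following ad hoc quadrature (illustration only; step size and truncations are arbitrary) confirms it numerically.

```python
import math

# Numerical confirmation of
#   int_0^infty ln(1 - e^{-p^2}) dp = -(sqrt(pi)/2) * zeta(3/2)
# via a simple midpoint rule on [0, 10]; parameters are ad hoc.
h, N = 1e-4, 100_000
integral = sum(math.log(1 - math.exp(-((k + 0.5) * h) ** 2)) * h
               for k in range(N))
# zeta(3/2) by truncated sum plus an integral estimate of the tail
zeta_32 = sum(m ** -1.5 for m in range(1, 500_000)) + 2 / math.sqrt(499_999.5)
assert abs(integral + math.sqrt(math.pi) / 2 * zeta_32) < 1e-3
```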
This completes the proof of the lower bound. \hfill\qed
\section{Introduction}
\label{sec:intro}
Recent hardware developments have generalized the use of new affordable wearable devices that
people can carry during any kind of activity. These devices (including smartphones, fitness trackers or personal cameras) generate a vast amount of information, along with new necessities and opportunities to use it. What we do, what we like, where we go, our health status and much more is unnoticed information that could help us to gain insights into our daily life. It also has applications in many domains such as healthcare, marketing, personalized training or home automation.
The most exploited task in prior work using this kind of data has been activity analysis \cite{schuldt2004recognizing, Efros:2003:RAD:946247.946720}. These works, for example, make use of multiple sensors \cite{Maurer, Ellis:2014:MPA:2638728.2641673} or analyze the user's hands and manipulated objects \cite{990977} to identify the user activity.
Our work focuses on the task of object detection, since it provides relevant information in addition to user activity estimation, for example in augmented reality or marketing applications. Moving beyond the single object of focus usually requires more general and robust object detection in wearable videos.
Working with this type of video presents additional challenges compared to object detection in regular images, such as frame blurring, camera defocus, fast object movements or strong clutter and occlusion of the objects to be recognized.
Egocentric videos record scenes from a first-person perspective, using chest- or head-mounted cameras. This also causes occlusion of objects by the human body itself. Human-object interaction also makes objects change their appearance when being interacted with. For instance, a fridge or a microwave can be opened or closed, full or empty, etc. See Fig.~\ref{fig:intro}.
\begin{figure}[!tb]
\centering
\begin{tabular}{cc}
\includegraphics[width=.47\linewidth]{figures/intro_figs/lumus-dk40-top.jpg} &
\includegraphics[width=.47\linewidth]{figures/intro_figs/sony_evoflow.jpg}\\
(a) & (b)\\
\includegraphics[width=.47\linewidth]{figures/intro_figs/occlusion_P_03_00017520.png} &
\includegraphics[width=.47\linewidth]{figures/intro_figs/occlusion_P_03_00017250.png}\\
(c) & (d)\\
\includegraphics[width=.47\linewidth]{figures/intro_figs/look_change_P_01_00000228.png} &
\includegraphics[width=.47\linewidth]{figures/intro_figs/look_change_P_01_00000288.png}\\
(e) & (f)\\
\end{tabular}
\caption{Wearable video. (a), (b) wearable cameras. (c), (d) examples of typical strong object occlusions by the human body. (e), (f) human interaction produces strong appearance changes in the scene.}
\label{fig:intro}
\end{figure}
Despite the impressive results in the field of object recognition on conventional images and videos, works on object detection in egocentric videos recorded with wearable cameras are scarce. One of the reasons is the lack of properly labeled datasets.
This work studies the behaviour of YOLO~\cite{DBLP:journals/corr/RedmonDGF15} for object recognition in first-person perspective videos. YOLO is one of the most used object-detection architectures in real-time domains. We selected it for our study because it is one of the top performing methods for video object detection and presents a good trade-off between speed and accuracy.
We explore different models and data configurations and variations to measure YOLO's accuracy in different situations. We discuss and propose additional ideas for further studies including available data and how to build sub-sets that enable fine grained analysis of the results.
\section{Related Work}
\label{sec:related}
\paragraph*{Object detection in images}
State-of-the-art object detectors on static images are commonly classified into two categories depending on the number of stages needed to predict the final bounding boxes. According to \cite{mod_conv_det}, single-stage solutions are faster than those that involve multiple steps, but they lack the higher accuracy of the latter.
Multi-stage object detectors follow a pipeline that first generates a set of Regions Of Interest (ROI), which are then classified and post-processed in later steps. Models from this family are mostly based on R-CNN~\cite{DBLP:journals/corr/GirshickDDM13} and its successive refinements Fast R-CNN~\cite{DBLP:journals/corr/Girshick15}, Faster R-CNN~\cite{DBLP:journals/corr/RenHG015} and R-FCN~\cite{DBLP:journals/corr/DaiLHS16}.
In contrast to multi-stage detectors, single-pass detectors use a single Convolutional Neural Network trained end-to-end to predict object bounding boxes along with their labels in a single step. YOLO~\cite{DBLP:journals/corr/RedmonDGF15}, SSD~\cite{DBLP:journals/corr/LiuAESR15} and RetinaNet~\cite{DBLP:journals/corr/abs-1708-02002} are the main single-stage detectors. Despite their lower accuracy, they offer faster predictions; the exact trade-off between speed and accuracy depends on several factors, such as the feature extractor used or the input image size.
\paragraph*{Object detection in video}
Working with videos involves dealing with several issues beyond those of detection in conventional images, such as motion blur or camera defocus. These cause certain objects to go undetected or be misclassified when we run a per-frame object-detection strategy. Since the release of the ImageNet \cite{ILSVRC15} object detection from video (VID) challenge, many approaches have been developed to handle these issues.
Some of them are post-processing methods, like Seq-NMS~\cite{DBLP:journals/corr/HanKPRBSLYH16}, which uses high-scoring object detections from nearby frames to boost the scores of weaker detections within the same clip, or Seq-Bbox~\cite{DBLP:journals/corr/HanKPRBSLYH16}, which uses frame-level bounding box re-scoring to correct wrong detections and tubelet-level bounding box linking to infer the boxes of missed detections.
Other approaches train end-to-end neural networks that perform feature aggregation, such as FGFA~\cite{DBLP:journals/corr/ZhuWDYW17}, which aggregates per-frame features along motion paths, or D\&T~\cite{Feichtenhofer17DetectTrack}, which simultaneously performs detection and tracking.
Of particular relevance for our work are previous results on egocentric videos. The work in \cite{4408872} performs event classification by categorizing objects and classifying the environment of a frame. In \cite{Lee2015}, important regions are recognized and used to produce an egocentric video summarization. Finally, \cite{Ramanan:2012:DAD:2354409.2355089} trains part-based models to detect and classify objects and uses temporal pyramid models to recognize actions.
\section{Object Recognition in Wearables}
In this work, we use YOLO v3~\cite{DBLP:journals/corr/abs-1804-02767} to perform object detection in each frame of egocentric videos, since it presents a good trade-off between speed and accuracy.
Per-frame object detection is the first step in evaluating object recognition in this type of video; the post-processing of bounding boxes is left for future steps.
This section first summarizes the original architecture and then details all the training and execution variations we consider to evaluate the best performing options.
\subsection{YOLO}
YOLO v3 is the model that improves YOLO9000~\cite{DBLP:journals/corr/RedmonF16} to predict, in one shot, bounding box dimensions along with their object class. Thanks to its fully convolutional structure (without fully-connected layers), the network can take as input an image of any size, as long as it is a multiple of 32 (due to its downsample/upsample pipeline).
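Because the downsample/upsample pipeline works at a stride of 32, any requested input resolution has to be rounded to a multiple of 32 before being fed to the network. A minimal sketch of this constraint (the helper name is ours, not part of YOLO's code):

```python
def valid_yolo_size(size, stride=32):
    """Round a requested input size up to the nearest multiple of the
    network stride (32 for YOLO v3's downsampling pipeline)."""
    return ((size + stride - 1) // stride) * stride

# e.g. a 400-pixel request becomes 416; already-valid sizes are unchanged
```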
YOLO's architecture uses Darknet-53 as the backbone. This network is made of residual blocks and shortcut connections and, trained on the ImageNet dataset~\cite{ILSVRC15}, achieves a performance similar to architectures like ResNet-101 and ResNet-152~\cite{DBLP:journals/corr/HeZRS15} with less computation.
Inspired by Feature Pyramid Networks~\cite{DBLP:journals/corr/LinDGHHB16}, YOLO makes predictions at three scales (see Fig. \ref{fig:yolo_arch}). To do so, it adds a set of convolutional layers after the backbone that outputs the first scale prediction. Next, it takes the feature maps from the 2 previous layers, upsamples them and merges them with feature maps from the backbone, feeding another set of convolutional layers that outputs the second scale prediction. This process is repeated in a similar way to output the third scale prediction.
\begin{figure*}[!tb]
\centering
\includegraphics[width=.9\linewidth]{figures/yolo_arch.png}
\caption{YOLO v3 architecture. 3 sets of bounding boxes are predicted for each input image. Deeper layers are focused on detecting smaller objects.}
\label{fig:yolo_arch}
\end{figure*}
The final output predictions are 3D tensors with a shape of $N \times N \times [3 * (4 + 1 + C)]$. Each of these tensors divides the input image into an $N \times N$ grid, where each cell predicts $3$ boxes along with the probability of finding an object in that box and the probability of that object belonging to each of the $C$ classes contained in the given dataset.
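As a quick check of these dimensions, the following sketch (our own illustration, not YOLO code) computes the shape of one output tensor from the input size, the scale's stride and the number of classes:

```python
def yolo_head_shape(input_size, stride, num_classes, boxes_per_cell=3):
    """Shape of one YOLO v3 output tensor: the image is divided into an
    N x N grid (N = input_size / stride) and each cell predicts
    boxes_per_cell boxes with 4 coordinates, 1 objectness score and
    num_classes class probabilities."""
    n = input_size // stride
    return (n, n, boxes_per_cell * (4 + 1 + num_classes))

# For a 416x416 input and the 47 ADL classes, the three scales
# (strides 32, 16, 8) give grids of 13, 26 and 52 cells per side.
```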
Finally, the network predicts bounding boxes ($b_x$, $b_y$, $b_w$, $b_h$), using dimension clusters as anchor boxes, as follows:
\begin{eqnarray}
\label{eq:prediction}
\nonumber
&b_x = \sigma(t_x) + c_x \\
\nonumber
&b_y = \sigma(t_y) + c_y \\
\nonumber
&b_w = p_w e^{t_w} \\
&b_h = p_h e^{t_h},
\end{eqnarray}
\noindent where $t_x, t_y, t_w, t_h$ represent the raw predicted box, $c_x, c_y$ represent the offset of the grid cell from the top-left corner of the image, and the bounding box prior has width and height $p_w, p_h$.
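The decoding in Eq. (\ref{eq:prediction}) can be sketched directly, with the center offsets squashed by a sigmoid and the sizes scaling the anchor exponentially:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, p_w, p_h):
    """Decode one raw prediction (t_*) into a box (b_*), given the grid
    cell offset (c_x, c_y) and the anchor prior size (p_w, p_h)."""
    b_x = sigmoid(t_x) + c_x
    b_y = sigmoid(t_y) + c_y
    b_w = p_w * math.exp(t_w)
    b_h = p_h * math.exp(t_h)
    return b_x, b_y, b_w, b_h
```

With all raw predictions at zero, the box center sits half a cell past the cell corner and the size equals the anchor prior.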
Once the bounding boxes of an input image have been predicted, those that are under a certain class probability threshold are filtered out. Then, the remaining ones are processed by non-max suppression to filter out overlapping bounding boxes.
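This filtering step can be sketched as follows; the thresholds are illustrative defaults, not the values used in the official implementation:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(detections, score_thr=0.5, iou_thr=0.45):
    """detections: list of (score, box). Drop low-score boxes, then
    greedily keep the highest-scoring box and suppress overlapping ones."""
    dets = sorted((d for d in detections if d[0] >= score_thr), reverse=True)
    kept = []
    for score, box in dets:
        if all(iou(box, k[1]) < iou_thr for k in kept):
            kept.append((score, box))
    return kept
```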
To train this model, YOLO uses a loss function that can be split into three main components:
\begin{itemize}
\item{Localization loss}: this score combines the errors in both the localization and the size of the bounding boxes. YOLO v3 measures it with a sum-of-squared-error loss, although this varies between implementations. The localization loss is set to 0 when no object has been predicted.
\item{Confidence loss}: this is also split into two logistic functions that evaluate object and background probabilities separately. This score should be 1 when a bounding box overlaps a ground-truth object by more than any other bounding box prior.
\item{Classification loss}: it uses independent logistic classifiers, without a softmax, to perform multilabel classification. It can therefore fit datasets where one object can belong to several classes or where the labels are not consistent. This error is only measured when an object is detected.
\end{itemize}
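The three terms above can be illustrated, for a single predicted box, with the following toy sketch; it mirrors the structure of the loss (squared error for localization, logistic losses for confidence and per-class classification) but is not the exact YOLO v3 implementation:

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and target y."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def yolo_box_loss(pred_box, true_box, pred_obj, pred_classes, true_classes,
                  has_object):
    """Toy single-box version of the three loss terms:
    - localization: sum of squared errors, only when the box owns an object
    - confidence: BCE on the objectness score against object/background
    - classification: independent per-class BCE (multilabel, no softmax)"""
    loc = sum((p - t) ** 2 for p, t in zip(pred_box, true_box)) if has_object else 0.0
    conf = bce(pred_obj, 1.0 if has_object else 0.0)
    cls = sum(bce(p, t) for p, t in zip(pred_classes, true_classes)) if has_object else 0.0
    return loc + conf + cls
```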
\subsection{Model variations}
From this base implementation, we apply different modifications to evaluate its performance in different scenarios. Section \ref{sec:exp-results} provides more detailed information about the variations performed.
\paragraph*{Pretraining}
This is a common technique to take advantage of the patterns learned from other trainings and datasets. Using pretrained weights helps the model generalize better to the target domain and improves convergence speed. In this project we evaluate different pretrained weights and strategies to find the one that best suits our problem.
\paragraph*{Input size}
Due to the fully convolutional architecture of the network, the model's output size depends directly on the input image size. The larger the input, the larger the output and the more accurate the information we can extract. However, larger input sizes involve higher prediction times, so it is important to find a good trade-off between these two factors.
\paragraph*{Alternative architectures}
Along with the main structure described before, the authors also provide the tiny-YOLO model~\cite{DBLP:journals/corr/RedmonDGF15}, a very small network aimed at constrained environments, which makes predictions at only two scales and uses a smaller backbone.
Additionally, we have also used the base model extended with Spatial Pyramid Pooling (SPP)~\cite{DBLP:journals/pami/HeZR015}. SPP is a block (see Fig. \ref{fig:spp_block}) that exploits local multi-scale features to improve the final accuracy. It consists of a set of max-pooling layers that take as input the feature maps generated by the backbone. Each of these layers pools its input, with stride 1, at a different scale by using a different window size. The three pooled feature maps are concatenated with the original one to feed the next detection layers. This SPP block helps the model find objects at different scales.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{figures/spp_block.png}
\caption{Architecture of a SPP block.}
\label{fig:spp_block}
\end{figure}
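The SPP block of Fig. \ref{fig:spp_block} can be sketched with plain NumPy; the window sizes (5, 9, 13) are a common choice in SPP variants of YOLO but are an assumption here, not taken from our configuration:

```python
import numpy as np

def max_pool_same(x, k):
    """Stride-1 max pooling with 'same' padding on an (H, W, C) map."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), constant_values=-np.inf)
    h, w, _ = x.shape
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].max(axis=(0, 1))
    return out

def spp_block(x, windows=(5, 9, 13)):
    """Concatenate the input feature map with its stride-1 max-poolings
    at several window sizes, preserving the spatial resolution."""
    return np.concatenate([x] + [max_pool_same(x, k) for k in windows], axis=-1)
```

Since every pooling keeps the spatial size, only the channel dimension grows (here by a factor of 4).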
\section{Experiments}
\label{sec:experiments}
\subsection {Datasets}
We have used the following three sets of data in our experiments because they offer the largest amounts of well-annotated frames extracted from egocentric videos.
\paragraph*{ADL Dataset \cite{Ramanan:2012:DAD:2354409.2355089}}
The main public dataset used in the experiments is the ADL Dataset. It consists of 27000 frames extracted from 10 hours of video recorded with a chest-mounted GoPro by 20 people performing everyday activities in 20 different homes. These frames are densely annotated with activity and object labels. Given the scope of this project, only the object annotations are used. These annotations consist of bounding boxes for 47 different object classes.
Since the frames have been annotated by different people, there are some inconsistencies among certain videos; Fig. \ref{fig:incons} shows some examples. Each relevant object is not annotated in every frame where it occurs, and sometimes different class labels are used for the same object. For instance, the classes \textit{cell or cell\_phone}, \textit{shoe or shoes} and \textit{trash\_can or basket or container or large\_container} are used indistinctly.
\begin{figure}[!tb]
\centering
\begin{tabular}{cc}
\includegraphics[width=.47\linewidth]{figures/inconsistencies/trash_can_P_01_00009798.png} &
\includegraphics[width=.47\linewidth]{figures/inconsistencies/basket_P_02_00046110.png}\\
(a) & (b)\\
\includegraphics[width=.47\linewidth]{figures/inconsistencies/basket_P_13_00044190.png} &
\includegraphics[width=.47\linewidth]{figures/inconsistencies/basket_P_02_00060750.png}\\
(c) & (d)\\
\includegraphics[width=.47\linewidth]{figures/inconsistencies/bottle_P_02_00045120.png} &
\includegraphics[width=.47\linewidth]{figures/inconsistencies/container_P_07_00032700.png}\\
(e) & (f)\\
\includegraphics[width=.47\linewidth]{figures/inconsistencies/container_P_03_00001260.png} &
\includegraphics[width=.47\linewidth]{figures/inconsistencies/basket_P_13_00003480.png}\\
(g) & (h)\\
\end{tabular}
\caption{Samples of labeling inconsistencies in different ADL Dataset videos. In images (a) and (b) the same object is labeled either as \textit{trash\_can} or \textit{container}. In (c) and (d) the same object is labeled either as \textit{basket} or \textit{container}. In (e), (f), (g) and (h) a bottle can have a generic category, \textit{bottle} or \textit{container}, or a specific one like \textit{soap\_liquid} or \textit{detergent}.}
\label{fig:incons}
\end{figure}
To avoid issues with this labeling inconsistency, we have built two sub-sets of the data, with 27 (v2) and 8 (v3) classes respectively. Some classes have been merged or removed to obtain the new sub-sets, in an attempt to keep only consistent and unique object labels. Table \ref{tab:dataset_classes} shows the full set of class labels in each sub-set.
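Building such a sub-set amounts to remapping the original labels and dropping the discarded classes. The sketch below uses a hypothetical fragment of the merging map (the left-hand labels come from the original ADL annotations and Table \ref{tab:dataset_classes}; the full map is larger):

```python
# Hypothetical fragment of the class-merging map used to build sub-set v2.
MERGE_MAP = {
    "cell": "cell_phone",
    "shoe": "shoes",
    "trash_can": "generic_container",
    "basket": "generic_container",
    "container": "generic_container",
    "large_container": "generic_container",
    "monitor": "monitor/tv",
    "tv": "monitor/tv",
}

def remap_annotations(annotations, merge_map, kept_classes):
    """Rename merged classes and drop boxes whose class is not kept.
    annotations: list of (box, label) pairs."""
    out = []
    for box, label in annotations:
        label = merge_map.get(label, label)
        if label in kept_classes:
            out.append((box, label))
    return out
```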
\begin{table}[!tb]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{{\bf Classes (27) from sub-set v2}} \\ \hline
\multicolumn{4}{|l|}{generic\_container (trash\_can, basket, container, large\_container)} \\ \hline
\multicolumn{2}{|l|}{bottle (perfume, bottle, milk/juice)} & book & cloth \\ \hline
\multicolumn{2}{|l|}{cell\_phone (cell, cell\_phone)} & dish & door \\ \hline
food/snack & fridge & kettle & laptop \\ \hline
knife/spoon/fork & microwave & mug/cup & oven/stove \\ \hline
\multicolumn{2}{|l|}{monitor/tv (monitor, tv)} & \multicolumn{2}{l|}{shoes (shoe, shoes)} \\ \hline
pan & person & pitcher & soap\_liquid \\ \hline
tap & tooth\_brush & tooth\_paste & towel \\ \hline
tv\_remote & washer/dryer & & \\ \hline
\multicolumn{4}{c}{}\\
\end{tabular}
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{{\bf Classes (8) from sub-set v3}} \\ \hline
knife/spoon/fork & laptop & microwave \\ \hline
\multicolumn{2}{|l|}{monitor/tv (monitor, tv)} & mug/cup \\ \hline
pan & tap & washer/dryer \\ \hline
\multicolumn{3}{c}{}\\
\end{tabular}
\caption{ Object classes in the built sub-sets of ADL dataset. Merged classes between brackets}
\label{tab:dataset_classes}
\end{table}
\paragraph*{EPIC-KITCHENS Dataset \cite{DBLP:journals/corr/abs-1804-02748}}
Additionally, the EPIC-KITCHENS Dataset has been used as extra data in some of our tests. This dataset is composed of frames extracted from videos recorded with a head-mounted GoPro in 32 kitchens by 32 subjects. The frames are provided along with activity annotations and object bounding boxes. Even though this dataset is larger than the ADL one and has better annotation quality, only the \textit{active} objects (those the subject is interacting with) are annotated, which makes it less useful for our evaluation purposes.
EPIC-KITCHENS contains 289 different classes, but many of them barely appear or are not included in the ADL dataset. We have therefore also created sub-sets with the relevant classes, i.e., merged equivalent ones and removed classes not included in ADL. In this way, two sub-sets have been built: EpicK$_{v1}$ contains 17 classes (shown in Table \ref{tab:kitchen_classes}), and EpicK$_{v2}$ additionally includes a \textit{food} class.
This food class has been used in only one of the sub-sets because it merges 109 classes from the original dataset and accounts for the major part of its bounding boxes, so it is of interest to check the model performance in both scenarios.
\begin{table}[!tb]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{{\bf Classes (17) from sub-set EpicK$_{v1}$}} \\ \hline
cutlery & tap & plate/bowl \\ \hline
fridge/freezer & salt/oil/vinegar & pan/pot \\ \hline
bag/container & cup/glass & oven \\ \hline
bottle/soup & bin/basket & cloth \\ \hline
kettle & coffee & microwave \\ \hline
colander & washer & \\ \hline
\multicolumn{3}{c}{}\\
\end{tabular}
\caption{ Object classes in the built EpicK$_{v1}$ subset.}
\label{tab:kitchen_classes}
\end{table}
\paragraph*{Own unlabeled videos} We have also acquired additional GoPro recordings, independent of the previously mentioned datasets, to run additional validation experiments. In these videos the user looks at and interacts with objects included in the ADL Dataset. One video was recorded with a chest-mounted camera (like the ADL data) and a second one with a head-mounted camera (like the EPIC-KITCHENS data).
\subsection{Experiments setup}
To evaluate the performance of our models we use the well-known mean Average Precision (mAP) metric from PASCAL VOC \cite{Everingham:2010:PVO:1747084.1747104}, which considers a predicted bounding box correct when its Intersection over Union (IoU) with the ground truth is above 0.5 ($mAP_{50}$).
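For reference, the per-class Average Precision underlying mAP can be computed from the precision-recall points as follows. This sketch uses the all-point interpolation adopted by PASCAL VOC from 2010 on; an evaluation script may instead use the older 11-point variant, which differs slightly:

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: build the monotonically decreasing
    precision envelope, then sum the area under the stepwise
    precision-recall curve. Inputs are sorted by increasing recall."""
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # precision envelope, right to left
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # area under the step curve
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```

A detector that reaches full recall at full precision scores 1.0; one that finds half the objects with perfect precision scores 0.5.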
In contrast to the authors of the ADL dataset \cite{Ramanan:2012:DAD:2354409.2355089}, who train part-based models~\cite{Felzenszwalb:2010:ODD:1850486.1850574} using a leave-one-out cross-validation strategy, we train a neural network on the first 17 videos of the dataset, leaving the remaining three videos as a validation set.
Their result is therefore a relevant baseline, but it does not report exactly the same metric. Since their work already studied the variation across the dataset, we do not include that within our goals and opt for a single fixed split to study the influence of the model variations.
Our \textit{base} solution is a YOLO v3 neural network built in TensorFlow (available at the authors' website\footnote{https://pjreddie.com/darknet/yolo/}). We add different modifications to this base model to check its performance in the scenarios discussed in Section \ref{sec:exp-results}.
We start from the base model pretrained with the weights learned on the COCO dataset~\cite{DBLP:journals/corr/LinMBHPRDZ14} and fine-tune it with the ADL data.
The pretrained layers are frozen during the first 15 epochs and both the input training and test images have a fixed size of 416x416. All our experiments have been performed with a single NVIDIA Geforce RTX 2080 Ti GPU.\\
\subsection{Results}\label{sec:exp-results}
After an exhaustive study of YOLO's performance on our different data sub-sets, we have found that the main improvement over our base solution is the use of multi-scale image training (discussed later). For better generalization, we pretrain our network with COCO data and, depending on the data sub-set, we have seen that using Spatial Pyramid Pooling is also beneficial.
Table \ref{tab:full} summarizes all the experiments performed over the base solution described before. We vary the training data sub-set, the model architecture, its pretraining and the training input image size. The first row shows the performance of the base solution on the original ADL dataset (with no classes merged or removed).
The relatively low mAP score (note the original set has 47 classes) is partially due to the high intra-class variability: since each video is recorded in a different home, the particular instances of the object classes can be very different.
We have also observed that the detections evaluated as incorrect are often produced by the label consistency issues discussed previously. This is confirmed by the much higher scores obtained by the models trained with the non-conflicting classes (v2 and v3).
\begin{table}[!tb]
\centering
\begin{tabular}{|l|l|l|l||l|}
\hline
{\bf sub-set} & {\bf architecture} & {\bf pretraining} & {\bf input size} & {\bf mAP\textsubscript{50}} \\ \hline\hline
original & base & coco & 416 & 25.400 \\ \hline
original & spp & coco & 416 & 25.658 \\ \hline
original & base & coco & multi-scale & \textbf{26.298} \\ \hline
original & spp & coco & multi-scale & 26.219 \\ \hline\hline
v2 & base & coco & 416 & 38.794 \\ \hline
v2 & base & -- & 416 & 25.948 \\ \hline
v2 & base & backbone & 416 & 31.441 \\ \hline
v2 & base & epicK$_{v1}$ & 416 & 31.449 \\ \hline
v2 & base & epicK$_{v2}$ & 416 & 30.934 \\ \hline
v2 & base & coco & 320 & 36.771 \\ \hline
v2 & base & coco & 608 & 39.188 \\ \hline
v2 & spp & coco & 320 & 36.417 \\ \hline
v2 & spp & coco & 416 & 39.496 \\ \hline
v2 & spp & coco & 608 & 38.850 \\ \hline
v2 & base & coco & multi-scale & 38.946 \\ \hline
v2 & spp & coco & multi-scale & \textbf{39.694} \\ \hline
v2 & tiny & coco & 416 & 26.989 \\ \hline\hline
v3 & base & coco & 416 & 51.145 \\ \hline
v3 & base & -- & 416 & 39.803 \\ \hline
v3 & base & backbone & 416 & 46.773 \\ \hline
v3 & base & epicK$_{v1}$ & 416 & 46.362 \\ \hline
v3 & base & epicK$_{v2}$ & 416 & 46.437 \\ \hline
v3 & base & coco & 320 & 48.326 \\ \hline
v3 & base & coco & 608 & 51.692 \\ \hline
v3 & spp & coco & 320 & 48.304 \\ \hline
v3 & spp & coco & 416 & 51.514 \\ \hline
v3 & spp & coco & 608 & 50.705 \\ \hline
v3 & base & coco & multi-scale & \textbf{51.839} \\ \hline
v3 & spp & coco & multi-scale & 51.570 \\ \hline
v3 & tiny & coco & 416 & 42.798 \\ \hline
\multicolumn{5}{c}{}\\
\end{tabular}
\caption{Configuration of all YOLO variations evaluated and the corresponding $mAP_{50}$ metric.
}
\label{tab:full}
\end{table}
Analyzing the results in more detail, we observe a huge difference in object-detection accuracy between object classes.
We summarize this in Table \ref{tab:raw}, which compares the most relevant results from our experiments with the baseline results presented by the ADL dataset authors.
As we can see, our base choice with the updates discussed outperforms the dataset authors' solution based on part-based models.
\begin{table}[!tb]
\centering
\begin{tabular}{@{}|l||l||p{0.9cm}|p{0.9cm}@{}|p{1.3cm}@{}|p{1.3cm}@{}|}
\hline
{\bf Object} & \multicolumn{5}{c|}{\bf Architecture used for the detection}\\
\cline{2-6}
{\bf class} & *ADL~\cite{Ramanan:2012:DAD:2354409.2355089} & Base YOLO & SPP YOLO & Base + multi-scale & SPP + multi-scale \\ \hline
tap & 40.4 $\pm$ 24.3 & 80.85 & 80.46 & 78.67 & \textbf{81.18} \\ \hline
soap\_liquid & 32.5 $\pm$ 28.8 & 53.20 & 43.36 & 54.16 & \textbf{60.08} \\ \hline
fridge & 19.9 $\pm$ 12.6 & 71.75 & 69.23 & \textbf{73.40} & 68.96 \\ \hline
microwave & 43.1 $\pm$ 14.1 & 74.52 & 73.84 & \textbf{78.07} & 70.24 \\ \hline
oven/stove & 38.7 $\pm$ 22.3 & 61.87 & 58.55 & \textbf{63.07} & 61.19 \\ \hline
bottle & 21.0 $\pm$ 27.0 & 17.38 & \textbf{25.82} & 25.68 & 19.61 \\ \hline
kettle & 21.6 $\pm$ 24.2 & 24.17 & \textbf{35.31} & 30.28 & 29.53 \\ \hline
mug/cup & 23.5 $\pm$ 14.8 & 22.50 & 16.49 & \textbf{24.89} & 20.88 \\ \hline
washer/dryer & 47.6 $\pm$ 15.7 & 63.11 & 69.11 & \textbf{69.67} & 67.70 \\ \hline
tv & 69.0 $\pm$ 21.7 & 81.39 & 80.40 & \textbf{84.20} & 82.61 \\ \hline
\multicolumn{6}{p{8.5cm}}{*The results presented in that work are an average of several train/test splits, while our results use a unique train/test split. They also perform object-recognition only for 24 classes while our results come from a training with all categories.}\\
\multicolumn{6}{c}{}\\
\end{tabular}
\caption{
Comparison of object-detection results between the ADL author's model, our base model ($Base$) and different studied variations (Spatial Pyramid Pooling and multi-scale training). All trained with the original ADL Dataset.
}
\label{tab:raw}
\end{table}
In the remainder of this section, we discuss in more detail our exhaustive experimentation and variations of the base model, where we use only the proposed ADL sub-sets to mitigate the effect of inconsistencies in the labeling.
\paragraph*{Influence of different pretraining strategies}
To improve the model generalization we have also tested different fine-tuning strategies. This set of tests involves training the network from random weight initialization, pretraining just the backbone with weights obtained from an ImageNet training, or using the weights learned from our EPIC-KITCHENS sub-sets. Table~\ref{tab:full} shows (first five rows of v2 and v3) that these modifications decrease the mAP score by 7 to 13 and 5 to 11 points when training with the sub-sets v2 and v3, respectively.
It is therefore clear that pretraining the full network with COCO weights brings significant benefits. \\
\paragraph*{Analysis of the training input image size}
We have trained YOLO with different fixed input sizes, making the predictions at the same size.
Fig. \ref{fig:img_size} shows an illustrative example of the difference in accuracy among them. As could be expected, higher resolution images provide more information for object localization and classification. Larger inputs also make it easier to detect objects that are too close to each other or too small, but they require more computational resources. The results in Table \ref{tab:full} (rows 1, 6 and 7 for the v2 and v3 data sub-sets) show that our base input image size (416x416) achieves a score much higher than the smaller input and only slightly worse than the larger one. Therefore, the base input image size provides a good trade-off between accuracy and speed.\\
\begin{figure}[!tb]
\centering
\begin{tabular}{@{}c@{\hspace{2mm}}c}
\includegraphics[width=.49\linewidth]{figures/img_size_com/320_ADL_frames_P_19_00096906.png} &
\includegraphics[width=.49\linewidth]{figures/img_size_com/416_ADL_frames_P_19_00096906.png}\\
(a) & (b)\\
\multicolumn{2}{c}{\includegraphics[width=.49\linewidth]{figures/img_size_com/608_ADL_frames_P_19_00096906.png}}\\
\multicolumn{2}{c}{(c)}\\
\end{tabular}
\caption{Detection results on the same frame when training and testing with different image sizes: a) 320x320, b) 416x416, c) 608x608.}
\label{fig:img_size}
\end{figure}
\paragraph*{Analysis of different YOLO architectures}\label{sec:exp_spp}
Here, we discuss the results for the different network architectures. Besides the base model with 75 convolutional layers, we have also tested the \textit{tiny} architecture (tiny-YOLO) proposed by the authors, with 13 convolutional layers, and a model that includes a Spatial Pyramid Pooling block \cite{DBLP:journals/pami/HeZR015} after the backbone (SPP-YOLO), composed of 76 convolutional layers. Tiny-YOLO obtains much faster results than standard YOLO, but at the cost of lower scores. SPP-YOLO improves the base architecture results without a significant increase in prediction time. See Table \ref{tab:speed-acc}.\\
\paragraph*{SPP-YOLO performance with different input sizes} To get deeper insight into the SPP performance, we have compared this architecture with the base model at different input image sizes. SPP-YOLO can detect objects at different scales thanks to feature maps processed by several pooling layers with different configurations. Table \ref{tab:full} shows how this update improves the base results, achieving with the base input size a mAP score competitive with or better than the one achieved with a larger input, i.e., better performance at a lower computing cost. However, the SPP results obtained with other input shapes do not show the same improvement. This could be addressed by tuning the max-pooling window sizes in the SPP hyperparameters: larger pooling windows could fit larger images better, and smaller windows could fit smaller images, but that is left for future research.\\
\paragraph*{Analysis of multi-scale training}
Thanks to YOLO's fully convolutional architecture, it can be trained with batches of images of different sizes. By feeding the model with different image sizes, it learns features at different scales, making generalization to different object sizes easier. We have thus performed a set of tests that involve training with randomly varying image sizes, while the evaluation is performed with a fixed size of 416x416.
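A multi-scale schedule of this kind boils down to sampling, every few batches, a new square size that is a valid multiple of the 32-pixel stride. The sampling range around the base resolution below is an illustrative assumption, not the exact range used in our experiments:

```python
import random

def sample_training_size(base=416, stride=32, delta=3, seed=None):
    """Pick a random square input size for the next batch: a multiple
    of the network stride within +/- delta steps of the base size."""
    rng = random.Random(seed)
    steps = rng.randint(-delta, delta)
    return base + stride * steps
```

With the defaults, the sampled sizes range from 320 to 512 in steps of 32.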
As a result, in Table \ref{tab:full} we observe that multi-scale training significantly improves the performance of the base model on each of the sub-sets, and even outperforms the SPP results. We also observe that multi-scale training combined with the SPP architecture achieves comparable or better results than the base architecture.\\
\paragraph*{Analysis of the accuracy/speed trade-off}
Table \ref{tab:speed-acc} shows the effect of model complexity and input image size on model precision and cost.
The input image size is a key factor for fast predictions: an input size of 608x608 obtains the best results in general terms, but it requires more than three times the FLOPS of the smallest input size. In addition, the accuracy obtained with an input size of 416x416 is competitive with it, making it a good compromise in this case.
Tiny-YOLO is a very small model able to perform fast predictions, but its accuracy is far from the base model results. This architecture could fit systems with low specifications, where efficiency is more important than accuracy.
Finally, the SPP architecture outperforms the base model while increasing the floating-point operations by only 0.5\%.\\
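Since the network is fully convolutional, the FLOPS in Table \ref{tab:speed-acc} grow roughly with the input area (the same kernels slide over a larger grid). A quick sanity check against the measured numbers for the base architecture:

```python
# FLOPS (in billions) for the base architecture, from Table 4 (sub-set v2)
flops = {320: 39.06, 416: 65.80, 608: 140.21}

def predicted_flops(base_size, base_flops, size):
    """Fully convolutional cost scales ~quadratically with the input side."""
    return base_flops * (size / base_size) ** 2

# The measured numbers match the quadratic prediction within ~1%.
for size, measured in flops.items():
    pred = predicted_flops(320, flops[320], size)
    assert abs(pred - measured) / measured < 0.02
```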
\begin{table*}[!tb]
\centering
\begin{tabular}{|l|l|r|r|l|l|}
\hline
Architecture & Input size & \multicolumn{1}{l|}{FLOPS*} & \multicolumn{1}{l|}{Params} & mAP\textsubscript{50} v2 & mAP\textsubscript{50} v3 \\ \hline
base & 320 x 320 & 39.06 Bn & 61.72 M & 36.771 & 48.326 \\ \hline
base & 416 x 416 & 65.80 Bn & 61.72 M & 38.794 & 51.145 \\ \hline
base & 608 x 608 & 140.21 Bn & 61.72 M & 39.188 & 51.692 \\ \hline
SPP & 320 x 320 & 39.29 Bn & 62.77 M & 36.417 & 48.304 \\ \hline
SPP & 416 x 416 & 66.19 Bn & 62.77 M & \textbf{39.496} & \textbf{51.514} \\ \hline
SPP & 608 x 608 & 141.02 Bn & 62.77 M & 38.85 & 50.705 \\ \hline
tiny & 416 x 416 & 5.53 Bn & 8.74 M & 26.989 & 42.798 \\ \hline
\multicolumn{6}{c}{*FLOPS have been calculated with sub-set v2 but the numbers barely change in the sub-set v3.}\\
\multicolumn{6}{c}{}\\
\end{tabular}
\caption{Trade-off analysis between speed and accuracy for object detection with models trained on the v2 and v3 sub-sets.}
\label{tab:speed-acc}
\end{table*}
\paragraph*{Qualitative analysis in additional scenarios}
We have validated our best models trained with each of our custom data sub-sets, as well as the one trained with the original ADL Dataset, on our unlabeled videos. Fig.~\ref{fig:home_test} shows some sample predictions with the objects found along those videos.
Additional results on these videos, along with the code to replicate the experiments of this work, are available online~\footnote{https://sites.google.com/a/unizar.es/filovi}.
\begin{figure}[!tb]
\centering
\begin{tabular}{cc}
\includegraphics[width=.45\linewidth]{figures/chest/104.jpeg} &
\includegraphics[width=.45\linewidth]{figures/chest/3508.jpeg}\\
(a) & (b)\\
\includegraphics[width=.45\linewidth]{figures/head/2702.jpeg} &
\includegraphics[width=.45\linewidth]{figures/head/1682.jpeg}\\
(c) & (d)\\
\end{tabular}
\caption{Sample frames from additional unlabeled videos.
(a), (b) correspond to chest-mounted camera video. (c), (d) correspond to the head-mounted camera video. First, second and third rows correspond to predictions of the original ADL dataset classes, and the v2 and v3 subsets respectively. }
\label{fig:home_test}
\end{figure}
Even though the style of this new environment differs slightly from those in the ADL Dataset, we observe fairly good object generalization. However, since our models have been trained only with frames from videos recorded with chest-mounted cameras, they find it hard to generalize to the perspective of the head-mounted camera. This increases the chances of object misdetection or misclassification.
In general terms, we observe that the model trained with the original dataset generates many more bounding boxes than the ones trained with the other data sub-sets, but it does not perform as well at the actual object classification. This issue appears when detecting specific unusual classes like \textit{vacuum} or \textit{bed}, and it produces more false positives for the classes with label inconsistencies.
The model trained with the sub-set v2 shows less accurate object localization but better object classification than training with the original dataset. Models trained with the other data sub-sets perform worse at generating uniform results over the video sequence (i.e., contiguous frames switch classes more often).
Finally, the model trained with the sub-set v3 generates more robust and uniform bounding boxes and class labels over time. However, it performs worse at detecting objects, since it does not predict as many boxes as expected. This could happen because fewer bounding boxes exist during training, so the network focuses more on the background than on the objects. We believe this could be fixed by tuning the loss hyper-parameters.
\section{Conclusion}
This work presents a detailed evaluation of the YOLO architecture and its performance for per-frame object detection in wearable videos. We have discussed the main issues in this kind of video and how we deal with label inconsistencies in existing relevant datasets.
We have performed several modifications to the model architecture and training strategy of our YOLO base experiment.
After our exhaustive validation, we have found that the pretraining strategy, with COCO weights in this case, is a key element for good model performance and fast convergence. We have also shown how to improve the object-scale generalization by using Spatial Pyramid Pooling and multi-scale training, without increasing training or prediction time.
In addition, we have evaluated the performance of the YOLO models in a new scenario and discussed the benefits and drawbacks of each of them and how they handle different viewpoints.
As work in progress, there are additional ideas that would help YOLO take advantage of temporal patterns in the frames, in particular performing bounding-box post-processing and feature aggregation. Another open research line is to modify the YOLO loss function to better fit the nature of wearable data. For instance, the weights of the loss components could be adjusted to balance the coordinate, background, or object relevance during training.
\bibliographystyle{IEEEtran}
\section{Introduction}
Recall that the Bernoulli polynomials are defined as
$$B_n(x)=\sum_{k=0}^n\binom nkB_kx^{n-k}\ \ (n\in\mathbb{N}),$$
where $B_0,B_1,B_2,\ldots$ are the Bernoulli numbers.
The $n$th generalised harmonic number of order $k$ is given by
$$H_n^{(k)}=\sum_{j=1}^n\frac1{j^k},$$
and the $n$th classical harmonic number is $H_n=H_n^{(1)}$.
The Ap\'{e}ry numbers are defined by
$$A_n=\sum_{k=0}^n\binom{n}{k}^2\binom{n+k}k^2\ \ (n\in\mathbb{N}).$$
In the past decades, congruences involving Ap\'{e}ry numbers have attracted the attention of many researchers (see, for instance, \cite{beu-jnt-1987,ccc-jnt-1980,gessel-jnt-1982,gz-jnt-2012}).
In 2012, Guo and Zeng proved the following congruence conjectured by Sun \cite{sun-jnt-2012}: If $p>3$ is a prime, then
\begin{align*}
&\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k\equiv \left(\frac{p}{3}\right)\pmod{p^2},
\end{align*}
where $\left(\frac{\cdot}{\cdot}\right)$ denotes the Legendre symbol.
The main objective of this paper is to prove the following result, which generalises the above congruence.
\begin{thm}\label{ThAp} Let $p>3$ be a prime. Then we have
\begin{align}
&\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k\equiv \left(\frac{p}{3}\right)+\frac{p^2}6B_{p-2}\left(\frac13\right)\pmod{p^3}.\label{1pak}
\end{align}
\end{thm}
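Before turning to the proof, we note that Theorem \ref{ThAp} is easy to test numerically. The following Python sketch (not part of the proof) computes the Bernoulli numbers from the recurrence $\sum_{k=0}^{m}\binom{m+1}{k}B_k=0$ ($m\ge1$, with $B_1=-\frac12$) and verifies \eqref{1pak} for small primes:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0, ..., B_n with the convention B_1 = -1/2,
    # from the recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0 (m >= 1)
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def bernoulli_poly(n, x):
    # B_n(x) = sum_{k=0}^{n} C(n, k) B_k x^{n-k}
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

def apery(n):
    # A_n = sum_{k=0}^{n} C(n, k)^2 C(n+k, k)^2
    return sum(comb(n, k) ** 2 * comb(n + k, k) ** 2 for k in range(n + 1))

def frac_mod(q, m):
    # reduce a Fraction q modulo m (the denominator must be coprime to m)
    return q.numerator * pow(q.denominator, -1, m) % m

def check_apery_congruence(p):
    # (1/p) sum (-1)^k (2k+1) A_k == (p/3) + (p^2/6) B_{p-2}(1/3)  (mod p^3)
    m = p ** 3
    lhs = sum((-1) ** k * (2 * k + 1) * apery(k) for k in range(p)) // p % m
    legendre = 1 if p % 3 == 1 else -1          # (p/3) for p > 3
    rhs = (legendre
           + frac_mod(Fraction(p * p, 6) * bernoulli_poly(p - 2, Fraction(1, 3)), m)) % m
    return lhs == rhs
```

For example, \texttt{check\_apery\_congruence(5)} returns \texttt{True}: here $\frac15\sum_{k=0}^{4}(-1)^k(2k+1)A_k=57449$ and both sides reduce to $74$ modulo $125$.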
The well known Franel numbers are defined as
$$f_n=\sum_{k=0}^n\binom{n}k^3\ \ (n\in\mathbb{N}),$$
which satisfy the recurrence
$$(n+1)^2f_{n+1}=(7n^2+7n+2)f_n+8n^2f_{n-1}.$$
Strehl \cite{str-mtn-1993} gave the following identity:
\begin{align}
f_n=\sum_{k=0}^n\binom{n}k\binom{k}{n-k}\binom{2k}k=\sum_{k=0}^n\binom{n}k^2\binom{2k}n.\label{fnidentity}
\end{align}
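Strehl's identity \eqref{fnidentity} is easy to confirm numerically for small $n$ (note that Python's \texttt{math.comb} returns $0$ when the lower index exceeds the upper one, matching the convention for $\binom{k}{n-k}$):

```python
from math import comb

def franel(n):
    return sum(comb(n, k) ** 3 for k in range(n + 1))

def strehl1(n):
    return sum(comb(n, k) * comb(k, n - k) * comb(2 * k, k) for k in range(n + 1))

def strehl2(n):
    return sum(comb(n, k) ** 2 * comb(2 * k, n) for k in range(n + 1))

# all three expressions agree for 0 <= n < 30
assert all(franel(n) == strehl1(n) == strehl2(n) for n in range(30))
print([franel(n) for n in range(6)])  # [1, 2, 10, 56, 346, 2252]
```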
Many people (see \cite{g-itsf-2013,jv-rama-2010,sun-aam-2013,sun-jnt-2013}) have studied congruences for Franel numbers. For instance, Sun \cite[Theorem 1.1]{sun-aam-2013} proved that for any prime $p>3$,
$$\sum_{k=0}^{p-1}(-1)^kf_k\equiv\left(\frac{p}{3}\right)\pmod{p^2}.$$
The second aim of this paper is to prove the following congruence, which was conjectured by Sun \cite{sun-njdx-2019}.
\begin{thm}\label{Thfp} For any prime $p>3$, we have
\begin{align}
&\sum_{k=0}^{p-1}(-1)^kf_k\equiv \left(\frac{p}{3}\right)+\frac{2p^2}3B_{p-2}\left(\frac13\right)\pmod{p^3}.\label{1pfk}
\end{align}
\end{thm}
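As with Theorem \ref{ThAp}, the congruence \eqref{1pfk} can be tested numerically for small primes directly from the definitions:

```python
from fractions import Fraction
from math import comb

def bernoulli_poly(n, x):
    # B_n(x) via the Bernoulli-number recurrence (convention B_1 = -1/2)
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

def franel(n):
    return sum(comb(n, k) ** 3 for k in range(n + 1))

def check_franel_congruence(p):
    # sum (-1)^k f_k == (p/3) + (2 p^2 / 3) B_{p-2}(1/3)  (mod p^3)
    m = p ** 3
    lhs = sum((-1) ** k * franel(k) for k in range(p)) % m
    legendre = 1 if p % 3 == 1 else -1          # (p/3) for p > 3
    tail = Fraction(2 * p * p, 3) * bernoulli_poly(p - 2, Fraction(1, 3))
    rhs = (legendre + tail.numerator * pow(tail.denominator, -1, m)) % m
    return lhs == rhs
```

For instance, for $p=5$ both sides reduce to $49$ modulo $125$.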
We end the introduction by outlining the organization of this paper.
We give some preliminary results in Section 2 and then prove Theorem \ref{ThAp} in Section 3; Theorem \ref{Thfp} will be proved in Section 4 in two ways.
\section{Preliminary lemmas}
\begin{lem}\label{3-3} Let $p>3$ be a prime. Then
$$\sum_{k=1}^{(p-1)/2}\frac{(-3)^k}k\sum_{j=1}^k\frac1{(2j-1)(-3)^j}\equiv\frac16B_{p-2}\left(\frac13\right)\pmod{p}.$$
\end{lem}
\begin{proof}
First we have
\begin{align*}
&\sum_{k=1}^{(p-1)/2}\frac{(-3)^k}k\sum_{j=1}^k\frac1{(2j-1)(-3)^j}=\sum_{k=1}^{(p-1)/2}\frac1k\sum_{j=1}^k\frac{(-3)^{k-j}}{2j-1}=\sum_{k=1}^{(p-1)/2}\frac1k\sum_{j=0}^{k-1}\frac{(-3)^j}{2k-2j-1}\\
&=\sum_{j=0}^{(p-3)/2}(-3)^j\sum_{k=j+1}^{(p-1)/2}\frac1{k(2k-2j-1)}=\sum_{j=0}^{(p-3)/2}\frac{(-3)^j}{2j+1}\sum_{k=j+1}^{(p-1)/2}\left(\frac2{2k-2j-1}-\frac1k\right).
\end{align*}
It is easy to check that
$$\sum_{k=j+1}^{(p-1)/2}\frac2{2k-2j-1}=\sum_{k=1}^{(p-1)/2-j}\frac2{2k-1}=2\left(H_{p-1-2j}-\frac12H_{\frac{p-1}2-j}\right)=2H_{p-1-2j}-H_{\frac{p-1}2-j}.$$
So
$$\sum_{k=1}^{(p-1)/2}\frac{(-3)^k}k\sum_{j=1}^k\frac1{(2j-1)(-3)^j}=\sum_{j=0}^{(p-3)/2}\frac{(-3)^j}{2j+1}\left(2H_{p-1-2j}-H_{\frac{p-1}2-j}-H_{\frac{p-1}2}+H_j\right).$$
It is easy to see that $H_{p-1-2j}\equiv H_{2j}\pmod p$ and $H_{\frac{p-1}2-j}-H_{\frac{p-1}2}\equiv 2H_{2j}-H_j\pmod p$.
Hence
$$\sum_{k=1}^{(p-1)/2}\frac{(-3)^k}k\sum_{j=1}^k\frac1{(2j-1)(-3)^j}\equiv\sum_{j=0}^{(p-3)/2}\frac{(-3)^j}{2j+1}\left(2H_j-2H_{\frac{p-1}2}\right)\pmod p.$$
It is obvious that
\begin{align*}
\sum_{j=0}^{(p-3)/2}\frac{(-3)^j}{2j+1}\left(H_j-H_{\frac{p-1}2}\right)&=\sum_{j=1}^{(p-1)/2}\frac{(-3)^{(p-1)/2-j}}{p-2j}\left(H_{\frac{p-1}2-j}-H_{\frac{p-1}2}\right)\\
&\equiv\frac{(-3)^{(p-1)/2}}2\sum_{j=1}^{(p-1)/2}\frac{H_j-2H_{2j}}{j(-3)^j}\pmod p.
\end{align*}
Therefore in view of \cite[(1.3)]{liu-arxiv-2020}, and by the fact that $(-3)^{(p-1)/2}\equiv\left(\frac{-3}p\right)=\left(\frac{p}3\right)\pmod p$, we immediately get the desired result.
\end{proof}
\begin{lem}\label{2j2k} For any prime $p>3$, we have
$$\sum_{k=1}^{(p-1)/2}\frac{1}{k\binom{2k}k}\sum_{j=1}^k\frac{\binom{2j}j}j\equiv\frac13B_{p-2}\left(\frac13\right)\pmod{p}.$$
\end{lem}
\begin{proof}
In view of \cite{sun-scm-2011}, we have
$$\sum_{j=1}^{(p-1)/2}\frac{\binom{2j}j}j\equiv0\pmod p.$$
This, together with $H_{(p-1)/2}^{(2)}\equiv0\pmod p$, yields that
\begin{align*}
\sum_{k=1}^{(p-1)/2}\frac{1}{k\binom{2k}k}\sum_{j=1}^k\frac{\binom{2j}j}j&\equiv-\sum_{k=1}^{(p-1)/2}\frac{1}{k\binom{2k}k}\sum_{j=k+1}^{(p-1)/2}\frac{\binom{2j}j}j=-\sum_{j=2}^{(p-1)/2}\frac{\binom{2j}j}j\sum_{k=1}^{j-1}\frac1{k\binom{2k}k}\\
&=-\sum_{j=1}^{(p-1)/2}\frac{\binom{2j}j}j\sum_{k=1}^{j}\frac1{k\binom{2k}k}+H_{(p-1)/2}^{(2)}\\
&\equiv-\sum_{j=1}^{(p-1)/2}\frac{\binom{2j}j}j\sum_{k=1}^{j}\frac1{k\binom{2k}k}\pmod p.
\end{align*}
It is well known that $\binom{2j}j\equiv\binom{(p-1)/2}j(-4)^j\pmod p$ for each $j=0,1,\ldots,(p-1)/2$, so
$$\sum_{k=1}^{(p-1)/2}\frac{1}{k\binom{2k}k}\sum_{j=1}^k\frac{\binom{2j}j}j\equiv-\sum_{j=1}^{(p-1)/2}\frac{\binom{(p-1)/2}j(-4)^j}j\sum_{k=1}^{j}\frac1{k\binom{2k}k}\pmod p.$$
By \textit{Sigma} we have the following identity
$$\sum_{j=1}^{n}\frac{\binom{n}j(-4)^j}j\sum_{k=1}^{j}\frac1{k\binom{2k}k}=-2\sum_{k=1}^{n}\frac{(-3)^k}k\sum_{j=1}^k\frac1{(2j-1)(-3)^j}.$$
Setting $n=(p-1)/2$ in the above identity, we have
$$\sum_{k=1}^{(p-1)/2}\frac{1}{k\binom{2k}k}\sum_{j=1}^k\frac{\binom{2j}j}j\equiv2\sum_{k=1}^{(p-1)/2}\frac{(-3)^k}k\sum_{j=1}^k\frac1{(2j-1)(-3)^j}\pmod p,$$
then we immediately get the desired result by Lemma \ref{3-3}.
\end{proof}
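The \textit{Sigma}-derived identity used in the proof above can be checked independently with exact rational arithmetic, for instance as follows:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    # sum_{j=1}^{n} C(n,j) (-4)^j / j * sum_{k=1}^{j} 1 / (k C(2k,k))
    return sum(Fraction(comb(n, j) * (-4) ** j, j)
               * sum(Fraction(1, k * comb(2 * k, k)) for k in range(1, j + 1))
               for j in range(1, n + 1))

def rhs(n):
    # -2 sum_{k=1}^{n} (-3)^k / k * sum_{j=1}^{k} 1 / ((2j-1) (-3)^j)
    return -2 * sum(Fraction((-3) ** k, k)
                    * sum(Fraction(1, (2 * j - 1) * (-3) ** j)
                          for j in range(1, k + 1))
                    for k in range(1, n + 1))

# the identity holds exactly for all small n
assert all(lhs(n) == rhs(n) for n in range(1, 20))
```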
\begin{lem}\label{p-2k} Let $p>3$ be a prime. Then for each $k=1,2,\ldots,(p-1)/2$ we have
$$\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}{k}}{k+j}\equiv(-1)^k\left(\frac32\sum_{j=1}^k\frac{\binom{2j}j}j-\frac{\binom{2k}k}k\right)-\frac1{k+1}\pmod p.$$
\end{lem}
\begin{proof}
First we have
$$\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}{k}}{k+j}=\sum_{j=k+2}^{p-k}\frac{\binom{j-1}k}j=\sum_{j=k+1}^{p-k}\frac{\binom{j-1}k}j-\frac1{k+1}.$$
In view of \cite[(4.37)]{g-online}, we have
$$\sum_{j=a}^n(-1)^{j-a}\binom{n}j\frac1j=\sum_{j=a}^n\binom{j-1}{a-1}\frac1j\ \ (1\leq a\leq n).$$
Hence
\begin{align*}
\sum_{j=k+1}^{p-k}\frac{\binom{j-1}k}j&=\sum_{j=k+1}^{p-k}(-1)^{j-k-1}\binom{p-k}j\frac1j\\
&=\sum_{j=1}^{p-k}(-1)^{j-k-1}\binom{p-k}j\frac1j-\sum_{j=1}^{k}(-1)^{j-k-1}\binom{p-k}j\frac1j.
\end{align*}
By \textit{Sigma} we have $\sum_{j=1}^n(-1)^j\binom{n}j\frac1j=-H_n$, so
$$\sum_{j=k+1}^{p-k}\frac{\binom{j-1}k}j=(-1)^kH_{p-k}-\sum_{j=1}^{k}(-1)^{j-k-1}\binom{p-k}j\frac1j.$$
It is known that $\binom{-n}k=(-1)^k\binom{n+k-1}k$, hence
$$\sum_{j=1}^{k}(-1)^{j-k-1}\binom{p-k}j\frac1j\equiv\sum_{j=1}^{k}(-1)^{j-k-1}\binom{-k}j\frac1j=(-1)^{k+1}\sum_{j=1}^k\frac1j\binom{k+j-1}j\pmod p.$$
By \textit{Sigma} we have
$$\sum_{j=1}^n\frac1j\binom{n+j-1}j=\frac{1+3n}n-\frac{\binom{2n}n}n-H_n+\frac32\sum_{j=2}^n\frac{\binom{2j}j}j.$$
Thus
\begin{align*}
\sum_{j=1}^{k}(-1)^{j-k-1}\binom{p-k}j\frac1j&\equiv(-1)^{k+1}\left(\frac{1+3k}k-\frac{\binom{2k}k}k-H_k+\frac32\sum_{j=2}^k\frac{\binom{2j}j}j\right)\\
&=(-1)^{k+1}\left(\frac32\sum_{j=1}^k\frac{\binom{2j}j}j-\frac{\binom{2k}k}k-H_{k-1}\right)\pmod p.
\end{align*}
Since $H_{p-k}\equiv H_{k-1}\pmod p$, we conclude that
$$\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}{k}}{k+j}\equiv(-1)^{k}\left(\frac32\sum_{j=1}^k\frac{\binom{2j}j}j-\frac{\binom{2k}k}k\right)-\frac1{k+1}\pmod p.$$
Now the proof of Lemma \ref{p-2k} is complete.
\end{proof}
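The two auxiliary identities quoted in the proof above (the one from \cite[(4.37)]{g-online} and the \textit{Sigma}-derived one) can likewise be verified with exact rational arithmetic; the following Python sketch checks both for small $n$:

```python
from fractions import Fraction
from math import comb

def ident_a(n, a):
    # sum_{j=a}^{n} (-1)^{j-a} C(n,j)/j == sum_{j=a}^{n} C(j-1,a-1)/j  (1 <= a <= n)
    left = sum(Fraction((-1) ** (j - a) * comb(n, j), j) for j in range(a, n + 1))
    right = sum(Fraction(comb(j - 1, a - 1), j) for j in range(a, n + 1))
    return left == right

def ident_b(n):
    # sum_{j=1}^{n} C(n+j-1,j)/j
    #   == (1+3n)/n - C(2n,n)/n - H_n + (3/2) sum_{j=2}^{n} C(2j,j)/j
    H = sum(Fraction(1, j) for j in range(1, n + 1))
    left = sum(Fraction(comb(n + j - 1, j), j) for j in range(1, n + 1))
    right = (Fraction(1 + 3 * n, n) - Fraction(comb(2 * n, n), n) - H
             + Fraction(3, 2) * sum(Fraction(comb(2 * j, j), j)
                                    for j in range(2, n + 1)))
    return left == right

assert all(ident_a(n, a) for n in range(1, 12) for a in range(1, n + 1))
assert all(ident_b(n) for n in range(1, 12))
```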
\begin{lem}\label{p-2k-k} Let $p>3$ be a prime. Then for each $k=1,2,\ldots,(p-1)/2$ we have
$$\sum_{j=0}^{p-2k}\binom{-k}{k+j}\frac{(-1)^{k+j}}{\binom{k+j}k}\equiv\frac{3k}2\sum_{j=1}^k\frac{\binom{2j}j}j-\frac32\binom{2k}k\pmod p.$$
\end{lem}
\begin{proof}
We know that
\begin{align*}
\sum_{j=0}^{p-2k}\binom{-k}{k+j}\frac{(-1)^{j}}{\binom{k+j}k}&\equiv\sum_{j=0}^{p-2k}\binom{p-k}{k+j}\frac{1}{\binom{p-k-1}j}=\sum_{j=0}^{p-2k}\frac{\binom{p-k}{p-2k-j}}{\binom{p-k-1}j}\\
&=\sum_{j=0}^{p-2k}\binom{p-k}{j}\frac{1}{\binom{p-k-1}{p-2k-j}}\pmod p.
\end{align*}
By \textit{Sigma}, we have the following identity,
$$\sum_{j=1}^n\frac{\binom{n+k}j}{\binom{n+k-1}{n-j}}=-\frac{n+k}{k+1}-\frac{n}{k\binom{n+k-1}{n-1}}+\frac{(n+k)^2\binom{n+k-1}{n-1}}{nk}-(n+k)\sum_{j=2}^n\frac{\binom{k+j-1}k}{k+j}.$$
Setting $n=p-2k$, we have
$$\sum_{j=1}^{p-2k}\frac{\binom{p-k}j}{\binom{p-k-1}{p-2k-j}}\equiv\frac{k}{k+1}+\frac{2(-1)^k}{\binom{2k}{k}}-\frac{(-1)^k}{2}\binom{2k}k+k\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}k}{k+j}\pmod p.$$
Hence
\begin{align*}
\sum_{j=0}^{p-2k}\binom{-k}{k+j}\frac{(-1)^{j}}{\binom{k+j}k}&\equiv\frac{k}{k+1}+\frac{2(-1)^k}{\binom{2k}{k}}-\frac{(-1)^k}{2}\binom{2k}k+k\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}k}{k+j}+\frac1{\binom{-k-1}{k-1}}\\
&=\frac{k}{k+1}-\frac{(-1)^k}{2}\binom{2k}k+k\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}k}{k+j}\pmod p.
\end{align*}
By Lemma \ref{p-2k}, we have
\begin{align*}
\sum_{j=0}^{p-2k}\binom{-k}{k+j}\frac{(-1)^{k+j}}{\binom{k+j}k}&\equiv(-1)^k\frac{k}{k+1}-\frac{1}{2}\binom{2k}k+(-1)^kk\sum_{j=2}^{p-2k}\frac{\binom{k+j-1}k}{k+j}\\
&\equiv(-1)^k\frac{k}{k+1}-\frac{1}{2}\binom{2k}k+k\left(\frac32\sum_{j=1}^k\frac{\binom{2j}j}j-\frac{\binom{2k}k}k\right)-\frac{(-1)^k}{k+1}\\
&=\frac{3k}2\sum_{j=1}^k\frac{\binom{2j}j}j-\frac32\binom{2k}k\pmod p.
\end{align*}
Now we finish the proof of Lemma \ref{p-2k-k}.
\end{proof}
\begin{lem}\label{2kkp+1} For any prime $p>3$, we have
$$\sum_{k=(p+1)/2}^{p-1}\binom{2k}k\sum_{j=p-k}^k\binom{k}j\binom{k+j}j(-1)^{k+j}\equiv-p^2B_{p-2}\left(\frac13\right)\pmod{p^3}.$$
\end{lem}
\begin{proof}
It is known that $\binom{k+j}j\equiv0\pmod p$ for each $p-k\leq j\leq p-1$ with $k\in\{(p+1)/2,\ldots,p-1\}$, and in view of \cite{sun-scm-2011}
$$k\binom{2k}k\binom{2(p-k)}{p-k}\equiv-2p\pmod{p^2}\ \mbox{for}\ \mbox{all}\ k=1,2,\ldots,(p-1)/2.$$
So
\begin{align*}
&\sum_{k=(p+1)/2}^{p-1}\binom{2k}k\sum_{j=p-k}^k\binom{k}j\binom{k+j}j(-1)^{k+j}\\
&=\sum_{k=1}^{(p-1)/2}\binom{2p-2k}{p-k}\sum_{j=k}^{p-k}\binom{p-k}j\binom{p-k+j}j(-1)^{p-k+j}\\
&\equiv-2p\sum_{k=1}^{(p-1)/2}\frac1{k\binom{2k}k}\sum_{j=k}^{p-k}\binom{p-k}j\binom{p-k+j}j(-1)^{k+j+1}\pmod{p^3}.
\end{align*}
It is obvious that
$$\binom{p-k+j}j=\frac{(p-k+j)\cdots(p-k+1)}{j!}\equiv\frac{p(j-k)!(-1)^{k-1}(k-1)!}{j!}=\frac{p(-1)^{k-1}}{k\binom{j}k}\pmod{p^2}.$$
Hence
\begin{align*}
&\sum_{k=(p+1)/2}^{p-1}\binom{2k}k\sum_{j=p-k}^k\binom{k}j\binom{k+j}j(-1)^{k+j}\\
&\equiv-2p^2\sum_{k=1}^{(p-1)/2}\frac1{k^2\binom{2k}k}\sum_{j=k}^{p-k}\binom{-k}j\frac{(-1)^j}{\binom{j}k}\\
&=-2p^2\sum_{k=1}^{(p-1)/2}\frac1{k^2\binom{2k}k}\sum_{j=0}^{p-2k}\binom{-k}{k+j}\frac{(-1)^{k+j}}{\binom{k+j}k}\pmod{p^3}.
\end{align*}
By Lemma \ref{p-2k-k}, Lemma \ref{2j2k} and $H_{(p-1)/2}^{(2)}\equiv0\pmod p$, we have
\begin{align*}
\sum_{k=(p+1)/2}^{p-1}\binom{2k}k\sum_{j=p-k}^k\binom{k}j\binom{k+j}j(-1)^{k+j}&\equiv-2p^2\sum_{k=1}^{(p-1)/2}\frac1{k^2\binom{2k}k}\left(\frac{3k}2\sum_{j=1}^k\frac{\binom{2j}j}j-\frac32\binom{2k}k\right)\\
&=-3p^2\sum_{k=1}^{(p-1)/2}\frac1{k^2\binom{2k}k}\sum_{j=1}^k\frac{\binom{2j}j}j+3p^2H_{(p-1)/2}^{(2)}\\
&\equiv-p^2B_{p-2}\left(\frac13\right)\pmod{p^3}.
\end{align*}
Therefore the proof of Lemma \ref{2kkp+1} is finished.
\end{proof}
\section{Proof of Theorem \ref{ThAp}}
{\it Proof of Theorem \ref{ThAp}.} By \cite[Thm 2.1]{gz-jnt-2012}, we have
\begin{align*}
\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k=\sum_{k=0}^{p-1}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j\binom{p-1}{k+j}\binom{p+k+j}{k+j}.
\end{align*}
It is easy to see that
$$\binom{p-1}{k+j}\binom{p+k+j}{k+j}=\prod_{i=1}^{k+j}\frac{p^2-i^2}{i^2}\equiv(-1)^{k+j}\left(1-p^2H_{k+j}^{(2)}\right)\pmod{p^3}.$$
So
\begin{align*}
\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k\equiv \theta_1+\theta_2\pmod{p^3},
\end{align*}
where
$$\theta_1=\sum_{k=0}^{(p-1)/2}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j(-1)^{k+j}\left(1-p^2H_{k+j}^{(2)}\right),$$
$$\theta_2=\sum_{k=(p+1)/2}^{p-1}\binom{2k}k\sum_{j=0}^{p-1-k}\binom{k}j\binom{k+j}j(-1)^{k+j}.$$
It is obvious that
\begin{align*}
\theta_2+\sum_{k=0}^{(p-1)/2}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j(-1)^{k+j}&=\sum_{k=0}^{p-1}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j(-1)^{k+j}\\
&-\sum_{k=(p+1)/2}^{p-1}\binom{2k}k\sum_{j=p-k}^{k}\binom{k}j\binom{k+j}j(-1)^{k+j}.
\end{align*}
By the Chu--Vandermonde identity, we have
$$\sum_{k=0}^{p-1}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j(-1)^{k+j}=\sum_{k=0}^{p-1}\binom{2k}k.$$
In view of Lemma \ref{2kkp+1}, we have
$$\theta_2+\sum_{k=0}^{(p-1)/2}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j(-1)^{k+j}\equiv\sum_{k=0}^{p-1}\binom{2k}k+p^2B_{p-2}\left(\frac13\right)\pmod{p^3}.$$
Hence
\begin{align*}
&\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k-\sum_{k=0}^{p-1}\binom{2k}k-p^2B_{p-2}\left(\frac13\right)\\
&\equiv-p^2\sum_{k=0}^{(p-1)/2}\binom{2k}k\sum_{j=0}^k\binom{k}j\binom{k+j}j(-1)^{k+j}H_{k+j}^{(2)}\\
&=-p^2\sum_{k=0}^{(p-1)/2}\binom{2k}k\sum_{j=k}^{2k}\binom{k}{j-k}\binom{j}{k}(-1)^{j}H_{j}^{(2)}\pmod{p^3}.
\end{align*}
By \cite[(4.1)]{liu-arxiv-2020}, we have
$$\sum_{j=k}^{2k}\binom{k}{j-k}\binom{j}{k}(-1)^{j}H_{j}^{(2)}=3\sum_{j=1}^k\frac1{j^2\binom{2j}j}.$$
So
$$\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k-\sum_{k=0}^{p-1}\binom{2k}k-p^2B_{p-2}\left(\frac13\right)\equiv-3p^2\sum_{k=1}^{(p-1)/2}\binom{2k}k\sum_{j=1}^k\frac1{j^2\binom{2j}j}\pmod{p^3}.$$
Therefore, by \cite{st-jnt-2013} and \cite[(4.4)]{liu-arxiv-2020}, we immediately get the desired result
$$\frac1p\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k\equiv\left(\frac{p}3\right)+\frac{p^2}6B_{p-2}\left(\frac13\right)\pmod{p^3}.$$
Now we finish the proof of Theorem \ref{ThAp}.\qed
\section{Proof of Theorem \ref{Thfp}}
{\it Proof of Theorem \ref{Thfp}.} First we have the following identity
$$\sum_{k=0}^n\binom{n}k\binom{n+1+k}kf_k=\frac{(-1)^n}{n+1}\sum_{k=0}^n(-1)^k(2k+1)A_k.$$
Setting $n=p-1$ in the above identity, we have
$$\sum_{k=0}^{p-1}\binom{p-1}k\binom{p+k}kf_k=\frac{1}{p}\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k.$$
It is easy to check that
$$\binom{p-1}{k}\binom{p+k}{k}=\prod_{j=1}^{k}\frac{p^2-j^2}{j^2}\equiv(-1)^{k}\left(1-p^2H_{k}^{(2)}\right)\pmod{p^3}.$$
So we have
$$\sum_{k=0}^{p-1}(-1)^k\left(1-p^2H_k^{(2)}\right)f_k\equiv\frac{1}{p}\sum_{k=0}^{p-1}(-1)^k(2k+1)A_k\pmod{p^3}.$$
This, with Theorem \ref{ThAp} and \cite[(1.5)]{liu-arxiv-2020} yields that
$$\sum_{k=0}^{p-1}(-1)^kf_k\equiv\left(\frac{p}3\right)+\frac{2p^2}3B_{p-2}\left(\frac13\right)\pmod{p^3}.$$
Now the proof of Theorem \ref{Thfp} is complete.\qed
\noindent{\it Another proof of Theorem \ref{Thfp}.} In view of (\ref{fnidentity}), we have
\begin{align*}
\sum_{k=0}^{p-1}(-1)^kf_k&=\sum_{k=0}^{p-1}(-1)^k\sum_{j=0}^k\binom{k}j\binom{j}{k-j}\binom{2j}j=\sum_{j=0}^{p-1}\binom{2j}j\sum_{k=j}^{p-1}\binom{k}j\binom{j}{k-j}(-1)^k\\
&=\sum_{j=0}^{\frac{p-1}2}\binom{2j}j\sum_{k=j}^{2j}\binom{k}j\binom{j}{k-j}(-1)^k+\sum_{j=\frac{p+1}2}^{p-1}\binom{2j}j\sum_{k=j}^{p-1}\binom{k}j\binom{j}{k-j}(-1)^k\\
&=\sum_{j=0}^{\frac{p-1}2}\binom{2j}j\sum_{k=0}^{j}\binom{k+j}j\binom{j}{k}(-1)^{k+j}+\sum_{j=\frac{p+1}2}^{p-1}\binom{2j}j\sum_{k=0}^{p-1-j}\binom{k+j}j\binom{j}{k}(-1)^{k+j}.
\end{align*}
By the Chu--Vandermonde identity, we have
$$\sum_{k=0}^{j}\binom{k+j}j\binom{j}{k}(-1)^{k+j}=1.$$
Hence
\begin{align*}
\sum_{k=0}^{p-1}(-1)^kf_k&=\sum_{j=0}^{\frac{p-1}2}\binom{2j}j+\sum_{j=\frac{p+1}2}^{p-1}\binom{2j}j(-1)^j\left(\sum_{k=0}^{j}\binom{k+j}j\binom{j}{k}(-1)^k-\sum_{k=p-j}^{j}\binom{k+j}j\binom{j}{k}(-1)^k\right)\\
&=\sum_{j=0}^{p-1}\binom{2j}j-\sum_{j=\frac{p+1}2}^{p-1}\binom{2j}j\sum_{k=p-j}^{j}\binom{k+j}j\binom{j}{k}(-1)^{k+j}.
\end{align*}
By Lemma \ref{2kkp+1} and \cite{st-jnt-2013}, we have
$$\sum_{k=0}^{p-1}(-1)^kf_k\equiv\left(\frac{p}3\right)+\frac{2p^2}3B_{p-2}\left(\frac13\right)\pmod{p^3}.$$
Therefore the proof of Theorem \ref{Thfp} is complete.\qed
\vskip 3mm \noindent{\bf Acknowledgments.}
The author is funded by the Startup Foundation for Introducing Talent of Nanjing University of Information Science and Technology (2019r062).
\section{Introduction}
\label{sec:intro}
It is now widely believed that the Universe previously experienced a period of exponentially fast expansion, called cosmic inflation. Although the inflationary Universe has been established as a paradigm, its
detailed mechanism or the underlying model responsible for inflation is not yet fully understood, and a lot of theoretical and observational effort has been made during the past decades to elucidate it. From the observational side, recent precise measurements of the cosmic microwave background (CMB) by the Planck satellite have provided tight limits on some inflationary
observables such as the amplitude and spectral index of the primordial curvature power spectrum and the tensor-to-scalar ratio \cite{Akrami:2018odb},
which have lent support to some models but also excluded many (single-field) models of inflation \cite{Martin:2013tda}.
However, once one extends the framework to scenarios with multiple fields, non-minimal couplings to gravity, and so on, the predictions of some simple models for the spectral index and the tensor-to-scalar ratio can be modified. In multi-field scenarios such as in the curvaton model \cite{Enqvist:2001zp,Lyth:2001nq,Moroi:2001ct} or modulated reheating scenario \cite{Dvali:2003em,Kofman:2003nx}, a so-called spectator field can also contribute to the generation of primordial fluctuations. In such a case, the predictions for the spectral index and the tensor-to-scalar ratio are modified from the usual case and some inflation models become viable again even if the original single-field model is excluded \cite{Langlois:2004nn,Moroi:2005kz,Moroi:2005np,Ichikawa:2008iq,Enqvist:2013paa,Vennin:2015vfa,Haba:2017fbi}.
Something similar can happen in models with a non-minimal coupling to gravity.
For example,
the quartic chaotic inflation with a non-minimal coupling, or the Higgs inflation model,
which has been excluded by the Planck data as a single-field model, becomes viable again since the predictions for the spectral index and the tensor-to-scalar ratio get modified due to the existence of a non-minimal coupling between the Higgs field and gravity
\cite{Bezrukov:2007ep} (see also Refs.~\cite{Spokoiny:1984bd,Futamase:1987ua,Salopek:1988qh,Fakir:1990eg,Amendola:1990nn,Kaiser:1994vs,CervantesCota:1995tz,Komatsu:1999mt} for earlier work on the topic, Refs.~\cite{Rubio:2018ogq,Tenkanen:2020dge} for recent reviews, and Refs.~\cite{Germani:2010gm, Granda:2011zk, Kamada:2012se} for extended work employing a non-minimal derivative coupling).
See also Refs. \cite{Lerner:2011ge,Bezrukov:2011gp,Kaiser:2013sna,Kallosh:2013tua,Gong:2015qha,Takahashi:2018brt} for predictions of inflationary observables in variants of non-minimally coupled models.
Another issue in constructing models of inflation is the so-called graceful exit problem. In some inflation models, such as the power-law inflation \cite{Abbott:1984fp} and the inverse monomial inflation \cite{Ratra:1987rm} models,
the end of inflation cannot be invoked in the usual way -- by violation of slow-roll -- but one needs some destabilizing mechanism, such as tachyonic instability in the inflaton potential, to end inflation. Although such a mechanism does not necessarily affect the models' predictions for observables, one needs to take care of it for a successful and self-consistent inflationary scenario.
In this paper, we show that even a small non-minimal coupling to gravity can help to end inflation in models such as the power-law inflation and the inverse monomial inflation models, in which the end of inflation cannot be realized by slow-roll violation when the inflaton is minimally coupled to gravity. While this conclusion is not particularly surprising
and has been suggested in the literature (see, {\it e.g.}, Ref.~\cite{Tashiro:2003qp}),
at the same time -- and more importantly -- the {\it same} non-minimal coupling can also make the spectral index $n_s$ and the tensor-to-scalar ratio $r$ consistent with cosmological data such as those obtained by Planck, in spite of the fact that the original models are excluded by the current data as minimally coupled single-field models.
As we will show, this is a non-trivial requirement, and it
facilitates model building of the inflationary Universe, especially in the case of models with an extended gravity sector.
Another important issue in inflationary cosmology is reheating. Even if a graceful exit from inflation is realized, the Universe still has to be reheated to become radiation dominated by the time of big bang nucleosynthesis (BBN).
We argue that while the usual mechanism for reheating where the inflaton field oscillates about the minimum of its potential and decays into particles cannot be realized in the models we consider, in most of our scenarios reheating can be realized via gravitational particle production~\cite{Ford,Starobinsky:1994bd}.
This is made possible due to a kination epoch which generically follows the inflationary period in the models we consider. During a kination epoch, the energy density of the inflaton field $\phi$ scales as $\rho_\phi \propto a^{-6}$, with $a$ being the scale factor, and therefore, it decays faster than that of radiation. Hence, the energy density of radiation, produced by gravitational particle production, will eventually dominate the Universe and thus reheat it. As we will show, this is the case in most models we consider in this paper. Interestingly, however, whether kination is realized or not within the non-minimally coupled models we consider depends on the theory of gravity: the so-called metric or Palatini theory. We will make this distinction clear in the following sections.
To summarize, the most important new results obtained in this paper are as follows: (i) identification of a non-trivial range of values for the non-minimal coupling function which {\it both} realizes a graceful exit from inflation {\it and} makes the models discussed above consistent with the Planck data; (ii) identification of a suitable reheating mechanism for the above models in scenarios where the usual reheating mechanisms do not work; and (iii) characterization of how the above aspects depend on the theory of gravity (metric or Palatini). As we will show, most of our scenarios naturally include all ingredients of a successful inflationary scenario: a spectral index and tensor-to-scalar ratio consistent with observations, a graceful exit from inflation, and reheating.
The paper is structured as follows. In the next section, we introduce a model of inflation with a non-minimal coupling to gravity and review some basic formulas. In Sec.~\ref{sec:SR_vio}, we discuss how inflation can end by slow-roll violation due to a non-minimal coupling even when the original (minimally coupled) model cannot realize the end of inflation in this way.
We also show that we can not only invoke slow-roll violation but also make the models we consider viable, i.e., that the predictions for the spectral index and tensor-to-scalar ratio become consistent with the current data, although only for a non-trivial range of the non-minimal coupling value. Then in Sec.~\ref{sec:reheating}, we argue that reheating can be realized via gravitational particle production in most models we consider in this paper. We also briefly discuss the dynamics after inflation.
The final section is devoted to the summary and conclusions of the paper.
\section{The model}
\label{sec:setup}
\subsection{Action}
Here we describe our setup to investigate the violation of slow-roll in inflationary models with a non-minimal coupling to gravity.
The Jordan frame action is taken to be
\begin{equation}
\label{eq:S_jordan}
S_{\rm J}
= \int d^4x \sqrt{-g}\left(\frac{1}{2} M_{\rm pl}^2 F(\phi) g^{\mu\nu}R_{\mu\nu}(\Gamma)
- \frac{1}{2} g^{\mu\nu}\nabla_{\mu}\phi \nabla_{\nu}\phi - V_J(\phi) \right) \,,
\end{equation}
where $\phi$ is the inflaton and $V_J(\phi)$ is its potential in the Jordan frame, $M_{\rm pl}$ is the reduced Planck mass, $g_{\mu\nu}$ is the metric and $g$ its determinant, $R_{\mu\nu}$ is the Ricci tensor constructed from the space-time connection $\Gamma$, which may depend not only on the metric and its first derivatives but also on the inflaton field (see below), and $F(\phi)$ is a function which represents a non-minimal coupling of the inflaton to gravity. In this paper, we assume the following form for this function:
\begin{equation}
\label{eq:F_nonminimal}
F(\phi) = 1+ \xi \left( \frac{\phi}{M_{\rm pl}} \right)^n \,,
\end{equation}
where $\xi$ is a dimensionless coupling parameter and $n$ is assumed to be a positive integer.
In this paper, we consider the case with $\xi \ge 0$.
For the potential $V_J(\phi)$ we will discuss
two examples,
which will be presented
in Secs. \ref{sec:PLI} and \ref{sec:IMI}.
We will also consider two theories of gravity: the so-called {\it metric} and {\it Palatini} theories. In the former case the connection $\Gamma$ depends on the metric only, whereas in the latter case it depends, {\it a priori}, on both the metric and the inflaton field (see Ref. \cite{Bauer:2008zj} for a seminal work and Ref. \cite{Tenkanen:2020dge} for a recent review and introduction to the topic). For simplicity, we will assume that the connection is torsion-free (see, e.g., Refs. \cite{Rasanen:2018ihz,Aoki:2020zqm} for scenarios where this condition was relaxed).
After a Weyl transformation,
\begin{equation}
\label{eq:weyl_trans}
g_{\mu\nu} \rightarrow \Omega^2 (\phi) g_{\mu\nu},
\qquad
\Omega(\phi)^2 \equiv F(\phi) = 1+\xi \left( \frac{\phi}{M_{\rm pl}}\right)^n \,,
\end{equation}
the Einstein frame action can be written in both cases as
\begin{equation}
\label{eq:S_einstein}
S_{\rm E}
= \int d^4x \sqrt{-\hat{g}}\left(\frac{1}{2} M_{\rm pl}^2 \hat{g}^{\mu\nu} \hat{R}_{\mu\nu}(\hat{\Gamma})
- \frac{1}{2} \hat{g}^{\mu\nu} \hat{\nabla}_{\mu}\chi \hat{\nabla}_{\nu}\chi -
V_E(\chi) \right) \,,
\end{equation}
where the hat means that the quantity is defined in the Einstein frame and
where the potential is given by
\begin{equation}
V_E (\chi) = \frac{V_J (\phi(\chi))}{\Omega^4 (\phi(\chi))} \,.
\end{equation}
We denote the Einstein frame field by $\chi$, which is related to the Jordan frame counterpart $\phi$ via
\begin{equation}
\frac{d\phi}{d\chi}
= \frac{\left(
1+\xi\displaystyle\left(\frac{\phi}{M_{\rm pl}}\right)^n
\right)}
{\sqrt{1+\xi \displaystyle\left(\frac{\phi}{M_{\rm pl}}\right)^n
+\frac32 \kappa n^2 \xi^2 \left( \displaystyle\frac{\phi}{M_{\rm pl}} \right)^{2n-2}}} \,,
\end{equation}
where $\kappa =1\,,0$ correspond to the metric and Palatini cases, respectively.
We can solve the above equation numerically for an arbitrary $n$ both in the metric and Palatini cases to obtain
the relation between $\phi$ and $\chi$. We note that an analytic solution, especially for the $n=2$ case, is well known \cite{GarciaBellido:2008ab,Bauer:2008zj,Rasanen:2017ivk} and, in the Palatini
case, the solution even for a general $n$ can be expressed in terms of hypergeometric functions \cite{Jarv:2017azx}.
In the following, we consider the metric and Palatini cases with $n=4$ for illustrative purposes.
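For readers who wish to reproduce the field redefinition, the quadrature can be done with a few lines of pure Python. The sketch below (our own illustrative code; the value $\xi=0.1$ and the grid size are arbitrary choices, not taken from the text) integrates $d\chi/d\phi$, the reciprocal of the expression above, for $n=4$ in both the metric ($\kappa=1$) and Palatini ($\kappa=0$) cases:

```python
def dchi_dphi(phi, xi, n, kappa):
    # reciprocal of d(phi)/d(chi) from the text; phi measured in units of M_pl
    F = 1.0 + xi * phi ** n
    return (F + 1.5 * kappa * n ** 2 * xi ** 2 * phi ** (2 * n - 2)) ** 0.5 / F

def chi_of_phi(phi_max, xi, n, kappa, steps=20000):
    # trapezoidal quadrature of chi(phi) on [0, phi_max]
    h = phi_max / steps
    total = 0.0
    f_prev = dchi_dphi(0.0, xi, n, kappa)
    for i in range(1, steps + 1):
        f_cur = dchi_dphi(i * h, xi, n, kappa)
        total += 0.5 * h * (f_prev + f_cur)
        f_prev = f_cur
    return total

# n = 4, xi = 0.1: metric (kappa = 1) vs Palatini (kappa = 0)
chi_metric = chi_of_phi(5.0, 0.1, 4, 1)
chi_palatini = chi_of_phi(5.0, 0.1, 4, 0)
```

Since the metric case carries the extra $\frac32 n^2\xi^2(\phi/M_{\rm pl})^{2n-2}$ term under the square root, $d\chi/d\phi$ is larger there, and the canonical field $\chi$ grows faster with $\phi$ than in the Palatini case (here \texttt{chi\_metric > chi\_palatini}).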
Once we specify the potential in the Jordan frame, we can calculate the slow-roll parameters and the number of $e$-folds
by using the Einstein frame potential in the standard fashion. The slow-roll parameters are defined as
\begin{eqnarray}
\label{eq:SR_param}
\epsilon &\equiv& \frac{1}{2}M_{\rm pl}^2 \left(\frac{V_E'(\chi)}{V_E(\chi)}\right)^2 \,, \quad
\eta \equiv M_{\rm pl}^2 \frac{V_E''(\chi)}{V_E(\chi)} \,,
\end{eqnarray}
where the prime denotes a derivative with respect to $\chi$. Unless some kind of destabilizing mechanism is assumed, inflation ends when slow-roll is violated, $\epsilon(\chi) = 1$. From Eq. \eqref{eq:SR_param} we can also calculate the spectral index $n_s$ and the tensor-to-scalar ratio $r$ as
\begin{eqnarray}
\label{eq:ns_r}
n_s &=& 1 - 6 \epsilon + 2\eta, \qquad
r = 16 \epsilon \,.
\end{eqnarray}
The spectral index has the measured value $n_s\simeq 0.965$ at the pivot scale $k_*=0.05\, {\rm Mpc}^{-1}$ \cite{Akrami:2018odb}, whereas the tensor-to-scalar ratio is constrained to $r<0.06$ \cite{Ade:2018gkx}.
In this paper, we consider two types of inflation models and show that while their minimally coupled versions predict values of $n_s$ and $r$ that are excluded by the data, their non-minimally coupled extensions can be easily resurrected. We will describe these models in more detail in the following subsections.
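The slow-roll parameters and observables in Eqs.~\eqref{eq:SR_param} and \eqref{eq:ns_r} can be evaluated numerically for any Einstein frame potential. The following Python sketch (an illustration in units $M_{\rm pl}=1$; the finite-difference step and the test potential are our own choices) does this and reproduces the constant slow-roll parameters of a minimally coupled exponential potential:

```python
import math

def slow_roll(V, chi, h=1e-4):
    # epsilon and eta of Eq. (eq:SR_param) by central finite differences (M_pl = 1)
    v = V(chi)
    v1 = (V(chi + h) - V(chi - h)) / (2 * h)
    v2 = (V(chi + h) - 2 * v + V(chi - h)) / h ** 2
    return 0.5 * (v1 / v) ** 2, v2 / v

def ns_r(V, chi):
    # spectral index and tensor-to-scalar ratio of Eq. (eq:ns_r)
    eps, eta = slow_roll(V, chi)
    return 1 - 6 * eps + 2 * eta, 16 * eps

# sanity check: a minimally coupled exponential potential V ~ exp(-alpha chi)
# has constant eps = alpha^2/2 and eta = alpha^2
alpha = 0.2
eps, eta = slow_roll(lambda x: math.exp(-alpha * x), 1.0)   # ~ (0.02, 0.04)
```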
\subsection{Inflation models}
\subsubsection{Power-law inflation}
\label{sec:PLI}
To facilitate comparison with the minimally coupled case, we give the inflaton potentials in the Jordan frame. For the power-law inflation \cite{Abbott:1984fp}, the potential is given by
\begin{equation}
V_J(\phi) = V_0 e^{-\alpha \phi/M_{\rm pl}}\,,
\end{equation}
where $\alpha$ is a dimensionless parameter and $V_0$ is a parameter representing a scale which is roughly the same as the energy scale of inflation.
A potential like this can arise in supergravity and string theories, and in some models
a successful inflationary scenario with $\alpha \ll 1$ can be constructed,
for example, in the framework of M theory \cite{Becker:2005sg}.
In the minimally coupled case, the slow-roll parameters are given by
\begin{equation}
\label{eq:SR_PLI}
\epsilon = \frac12 \alpha^2,
\qquad
\eta = \alpha^2 \,.
\end{equation}
Since $\alpha$ is assumed to be a constant, the slow-roll parameters in this model are also constants.
Therefore, inflation cannot end by violation of slow-roll caused by the dynamics of the inflaton, and one needs a non-standard mechanism to end inflation, such as tachyonic instability (see, e.g., Ref. \cite{Martin:2013tda} for details).
The need for an extra mechanism to end inflation is not the only problem of this model. From Eqs.~\eqref{eq:ns_r} and~\eqref{eq:SR_PLI} one can derive a relation between $n_s$ and $r$:
\begin{equation}
r = -8 (n_s -1).
\end{equation}
Since recent observations imply $n_s \simeq 0.965$, the above relation indicates that $r \simeq 0.28$, which is excluded by observations \cite{Akrami:2018odb,Ade:2018gkx}. However, as we will see in the next section, by introducing a non-minimal coupling, the slow-roll parameters can evolve in time and, consequently, violation of slow-roll can be invoked. Furthermore, the predictions for the spectral index and the tensor-to-scalar ratio will also get modified, and the tension with the data can be alleviated for a suitable choice of the non-minimal coupling function, which depends on the $\alpha$ parameter in the potential.
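As a quick numerical cross-check of this relation, the sketch below (the helper function is ours) combines the slow-roll parameters of Eq.~\eqref{eq:SR_PLI} with the standard first-order slow-roll formulas $n_s - 1 = -6\epsilon + 2\eta$ and $r = 16\epsilon$, which are assumptions of this sketch:

```python
import math

# Slow-roll parameters of minimally coupled power-law inflation,
# eps = alpha^2/2 and eta = alpha^2, combined with the standard
# first-order expressions n_s - 1 = -6 eps + 2 eta and r = 16 eps.
def pli_predictions(alpha):
    eps = 0.5 * alpha**2
    eta = alpha**2
    ns = 1.0 - 6.0 * eps + 2.0 * eta  # = 1 - alpha^2
    r = 16.0 * eps                    # = 8 alpha^2 = -8 (ns - 1)
    return ns, r

# n_s = 0.965 corresponds to alpha^2 = 0.035 and hence r = 0.28
ns, r = pli_predictions(math.sqrt(0.035))
```

which reproduces the excluded value $r \simeq 0.28$ quoted above.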
\subsubsection{Inverse monomial inflation}
\label{sec:IMI}
The Jordan frame potential of the inverse monomial inflation model is given by \cite{Peebles:1987ek,Ratra:1987rm}
\begin{equation}
V_J(\phi) = V_0 \left(\frac{\phi}{M_{\rm pl}}\right)^{-p} ,
\end{equation}
where $p$ is a positive number and $V_0$ represents an energy scale. Models with an inverse monomial potential have been discussed in the context of, e.g., quintessential inflation \cite{Peebles:1987ek,Ratra:1987rm}, intermediate inflation \cite{Barrow:1993zq},
tachyon inflation \cite{Feinstein:2002aj,Sami:2002zy}, and
dynamical supersymmetric inflation \cite{Kinney:1997hm,Kinney:1998dv}.
In the minimally coupled case, the slow-roll parameters are
\begin{equation}
\epsilon = \frac12 p^2 \left( \frac{\phi}{M_{\rm pl}} \right)^{-2},
\qquad
\eta = p(p+1) \left( \frac{\phi}{M_{\rm pl}}\right)^{-2} \,,
\label{eq:slow_roll_IMI}
\end{equation}
from which one obtains
\begin{equation}
n_s - 1 = p (2-p) \left( \frac{\phi}{M_{\rm pl}}\right)^{-2}~.
\end{equation}
From this expression, one can see that the spectral index is blue-tilted when $p <2$, which is excluded by observations. On the other hand,
Eq.~\eqref{eq:slow_roll_IMI} indicates the relation
\begin{equation}
r = \frac{8p}{2-p} (n_s -1 ) \,,
\end{equation}
from which one can see that even when $p > 2$
and $n_s = 0.965$, the tensor-to-scalar ratio is bounded from below as $r > 0.28$, whereas observations indicate $r<0.06$ \cite{Ade:2018gkx}. Therefore, the minimally coupled version of this model is completely excluded by observations.
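The lower bound can be verified numerically; the snippet below is an illustrative sketch (the function name is ours) evaluating $r = 8p/(2-p)\,(n_s-1)$ at fixed $n_s = 0.965$:

```python
# r as a function of p at fixed spectral index, from the relation
# derived above; for p > 2 the prefactor 8p/(2-p) is negative and
# tends to -8 as p -> infinity, so r decreases monotonically toward
# the limiting value 8*(1 - n_s) = 0.28 but never reaches it.
def imi_r(p, ns=0.965):
    return 8.0 * p / (2.0 - p) * (ns - 1.0)

samples = [imi_r(p) for p in (2.5, 5.0, 50.0, 5000.0)]
```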
Furthermore, since in the minimally coupled case inflation starts at small values of $\phi$ and the field value grows during inflation, the slow-roll parameters monotonically decrease as inflation proceeds. Therefore, also in this model inflation cannot end by violation of slow-roll driven by the inflationary dynamics without an additional mechanism.
We will see that by introducing a non-minimal coupling, we can realize a graceful exit from inflation in this model and obtain values of the spectral index and tensor-to-scalar ratio consistent with the current data at the same time.
\section{Violation of slow-roll and observables in the non-minimally coupled case}
\label{sec:SR_vio}
In this section, we consider the inflation models mentioned in the previous section but this time in the non-minimally coupled case and show how slow-roll can be violated by the existence of a non-minimal coupling to gravity, which ends inflation.\footnote{For some early, pre-Planck works on the topic in a somewhat different context, see Refs.~\cite{Frewin:1993aj,Kaganovich:2000fc}.}
We will also study predictions for the observables $n_s$ and $r$ in this context.\footnote{In addition to providing results for these observables, the Planck data also constrain the running and running of the running of the spectral index \cite{Akrami:2018odb}. We have checked that in all cases we present, the running parameters are small and well within the limits given by the observations.} We present our results for each model in order.
\subsection{Non-minimal power-law inflation}
\label{sec:nm_PLI}
First we discuss the case of the power-law inflation (PLI) model. For illustrative purposes, as mentioned in Sec.~\ref{sec:setup}, we assume $n=4$ for the non-minimal coupling function, i.e.,
\begin{equation}
\Omega^2 (\phi) = 1 +\xi \left( \frac{\phi}{M_{\rm pl}}\right)^4 \,.
\end{equation}
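To see why this coupling steepens the potential, note that in both the metric and Palatini formulations the Einstein frame potential, written in terms of the Jordan frame field, is $V_E = V_J/\Omega^4$; the two cases differ only through the canonical normalization $\chi(\phi)$. A minimal sketch, with $V_0 = M_{\rm pl} = 1$ for illustration:

```python
import math

# Einstein frame potential of non-minimal PLI as a function of the
# Jordan frame field phi, V_E = V_J / Omega^4 with
# V_J = exp(-alpha*phi) and Omega^2 = 1 + xi*phi^4 (units V0 = M_pl = 1).
def V_E(phi, alpha=0.02, xi=1e-4):
    omega2 = 1.0 + xi * phi**4
    return math.exp(-alpha * phi) / omega2**2

# For xi > 0 the extra Omega^-4 suppression makes the potential fall
# off faster at large phi than the pure exponential, i.e. it is steeper.
```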
The Einstein frame potential is depicted in Fig.~\ref{fig:PLI_potential} for the cases of $\xi = 0$
(minimally coupled case), $10^{-3}, 10^{-4}$, and $10^{-5}$. In Fig.~\ref{fig:PLI_potential}, we take $\alpha = 0.02$ as a representative example. As can be seen from the figure, as $\xi$ increases, the potential becomes steeper, which causes the slow-roll parameters to increase as the field evolves. This is not surprising, as the non-minimal coupling changes the Einstein frame potential. However, when $\alpha \sim 0.02$, for $\xi \gtrsim 10^{-3}$ the potential becomes too steep to support more than roughly $50-60$ $e$-folds, and therefore for too large a non-minimal coupling it becomes questionable whether the non-minimal PLI model can solve the classic horizon and flatness problems
(see, e.g., Ref. \cite{Liddle:2003as}). For this reason, we only show results in the PLI case for $0 \leq \xi \leq 10^{-3}$ in this paper.
It should be emphasized that whether a model of inflation can be made viable needs to be investigated carefully since the introduction of a non-minimal coupling does not necessarily guarantee the success of the model.
One needs to assume a suitable value for $\xi$ depending on the model and its parameters.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{PLI_pot.eps}
\caption{Potential for the power-law inflation model with $\alpha=0.02$ for the cases $\xi=0, 10^{-3}, 10^{-4}$ and $10^{-5}$.
The metric and Palatini cases are shown.
}
\label{fig:PLI_potential}
\end{center}
\end{figure}
In the left panel of Fig.~\ref{fig:PLI_epsilon}, the evolution of one of the slow-roll parameters, $\epsilon$, is shown for the case with $\alpha=0.02$. Inflation starts from small values of $\chi$ and, as inflation proceeds, the field evolves down the potential towards a larger value.
For $\alpha = 0.02$, the minimally coupled case gives $\epsilon = 2 \times 10^{-4}$, and hence even in the non-minimally coupled case, $\epsilon$ starts from
the value corresponding to the minimally coupled case.
As $\chi$ increases
and the non-minimal coupling term becomes larger,
$\epsilon$ gets larger, as can be read off from the figure.
One can also notice that as $\xi$ becomes larger, $\epsilon$ increases to reach unity at smaller values of $\chi$; i.e., the violation of slow-roll occurs faster for larger values of $\xi$.
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{PLI_eps_chi.eps}
\includegraphics[width=7.5cm]{PLI_eps_Ne.eps}
\caption{Evolution of $\epsilon$ as a function of $\chi$ (left) and $N_e$ (right) in the power-law inflation model with $\alpha=0.02$ for the cases $\xi=10^{-3}$ and $\xi=10^{-5}$.
The metric and Palatini cases are shown.
}
\label{fig:PLI_epsilon}
\end{center}
\end{figure}
In the right panel of Fig.~\ref{fig:PLI_epsilon}, the evolution of $\epsilon$ is plotted as a function of $N_e$, the number of $e$-folds counted backwards from the end of inflation. The figure again illustrates that
when $\xi$ is large, $\epsilon$ goes up relatively quickly. However,
as one can see in Fig.~\ref{fig:PLI_ns_r}, if $\xi$ is too large, the predictions for $n_s$ and $r$ get close to the minimally coupled case even though slow-roll violation is quickly realized.
In Appendix \ref{app:SRP_behavior}, we discuss the evolution of the slow-roll parameter in more detail, focusing on the dependence on $\xi$, which helps us understand the dynamics
and the non-trivial $\xi$-dependence of $n_s$ and $r$.
The above aspects indicate that for a successful inflation model, we need (mild) modifications around $N_e \sim 50 - 60$ to obtain predictions for the spectral index and tensor-to-scalar ratio which are consistent with observations.
These predictions are shown in Fig.~\ref{fig:PLI_ns_r} in the slow-roll approximation. In the figure, we take $\alpha=0.01, 0.02$, and $0.03$ for illustrative purposes. For larger~$\alpha$, the tensor-to-scalar ratio~$r$ gets larger.
As can be seen from the right panel of Fig.~\ref{fig:PLI_epsilon}, the value of $\epsilon$ around $N_e \sim 50 -60$ for the case with a relatively large $\xi$ is close to that in the minimally coupled model,
which means that the prediction for $r$ approaches $r =16\epsilon \sim 8 \alpha^2$. Therefore, when $\alpha \gtrsim 0.1$, the tensor-to-scalar ratio becomes $r \gtrsim 0.08$ regardless of $\xi$, which is not consistent with observations even with a non-minimal coupling to gravity. Figure~\ref{fig:PLI_ns_r} also shows that the differences between the metric and Palatini cases are rather modest, which is due to $\xi$ taking a value much smaller than unity. This is reminiscent of the behavior found in, e.g., Refs. \cite{Tenkanen:2017jih,Racioppi:2017spw,Jarv:2017azx}, which also studied inflationary models with a small non-minimal coupling to gravity in both the metric and Palatini frameworks. Finally, we note that the
parameter space of the non-minimal PLI model presented in Fig.~\ref{fig:PLI_ns_r}
is testable with forthcoming CMB missions. For example, future CMB B-mode polarization experiments
such as BICEP3 \cite{Wu:2016hul}, LiteBIRD~\cite{Matsumura:2013aja}, and the Simons Observatory \cite{Simons_Observatory} will be soon pushing the limit on tensor-to-scalar ratio down to $r\simeq 0.001$,
or aiming to detect $r$ above this limit. These measurements will either provide further support for the model or rule out a large part of its parameter space. For a discussion on prospects for distinguishing between different non-minimal models in the case of a detection, see Ref. \cite{Takahashi:2018brt}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{ns_r_PLI.eps}
\caption{Predictions for the spectral index $n_s$ and tensor-to-scalar ratio $r$ in the non-minimally coupled power-law inflation model with $\alpha=0.01, 0.02$ and $0.03$. We vary the non-minimal coupling parameter as $ 10^{-5} \leq \xi \leq 10^{-3}$. The yellow curve indicates the predictions in the minimally coupled case, and the underlying blue regions indicate the Planck+BICEP2/Keck Array $1\,\sigma$ and $2\,\sigma$ bounds \cite{Array:2015xqh,Akrami:2018odb}. The number of $e$-folds is assumed as $50 \leq N_e \leq 60$ in this figure. The metric and Palatini cases are shown.
}
\label{fig:PLI_ns_r}
\end{center}
\end{figure}
\subsection{Non-minimal inverse monomial inflation}
Let us now discuss the inverse monomial inflation (IMI) model in the framework of non-minimal coupling to gravity.
Here we again assume $n=4$ for the non-minimal coupling function. In Fig.~\ref{fig:IMI_potential}, we show some example Einstein frame potentials for the non-minimal version of the model with $p=0.05$
and $\xi=10^{-4}, 10^{-5}$ and $10^{-6}$, as well as with $\xi = 0$.
As in the PLI case, the inflaton moves from a small value to a large one during inflation.
As can be seen from Fig.~\ref{fig:IMI_potential}, a non-minimal coupling again makes the potential steeper, which drives the $\epsilon$ parameter to larger values and eventually ends inflation, in contrast to the minimally coupled counterpart of this model. Similar to the non-minimal PLI model discussed in Sec.~\ref{sec:nm_PLI}, also in this case the potential becomes too steep to support more than roughly $60$ $e$-folds\footnote{While this value is compatible with what is shown in Fig.~\ref{fig:IMI_ns_r}, it is not enough to solve the classic horizon and flatness problems in a scenario where the scale of inflation is high and inflation is followed by a
``kination" phase \cite{Liddle:2003as}. We will return to the dynamics after inflation in Sec. \ref{sec:reheating}.} if the non-minimal coupling is larger than $\xi \gtrsim 10^{-4}$
for $p\sim 0.05$. For smaller $p$, however, the potential can also support more than $60$ $e$-folds for larger $\xi$.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{IMI_pot.eps}
\caption{Potential for the inverse monomial inflation
model with $p=0.05$ in the cases $\xi=0, 10^{-4}, 10^{-5}$, and $10^{-6}$.
The metric and Palatini cases are shown.
}
\label{fig:IMI_potential}
\end{center}
\end{figure}
In Fig.~\ref{fig:IMI_epsilon}, the evolution of $\epsilon$ as a function of $\chi$ (left panel) and $N_e$ (right panel) is shown for the example cases $\xi =10^{-4}$ and $\xi = 10^{-6}$. We take $p=0.05$ in this figure too. Although the tendency is the same as in the PLI model, in the IMI model $\epsilon$ first decreases during the early stages of inflation, a behavior inherited from the minimally coupled model. However, as $\chi$ grows, the non-minimal coupling term becomes dominant and $\epsilon$ increases, eventually ending inflation; this again is the effect of the non-minimal coupling.
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{IMI_eps_chi.eps}
\includegraphics[width=7.5cm]{IMI_eps_Ne.eps}
\caption{Evolution of $\epsilon$ as a function of $\chi$ (left) and $N_e$ (right) in the inverse monomial inflation
model with $p=0.05$ in the cases $\xi=10^{-4}$ and $\xi=10^{-6}$. The metric and Palatini cases are shown.
}
\label{fig:IMI_epsilon}
\end{center}
\end{figure}
In Fig.~\ref{fig:IMI_ns_r}, the predictions of the IMI model for $n_s$ and $r$ in the non-minimally coupled case are shown in the slow-roll approximation for $p=0.01, 0.05$, and $0.1$.
We take the number of $e$-folds as $50 \leq N_e \leq 60$.
Regarding the non-minimal coupling parameter, we vary it as $10^{-6} \leq \xi \leq 10^{-3}$ for $p=0.01$, but for $p=0.05$ and $0.1$, we take a narrower range $10^{-6} \leq \xi \leq 10^{-4}$
since the potential cannot support more than $60$ $e$-folds for $\xi \ge 10^{-4}$ in these cases.
These are interesting values of $p$, as they give a blue-tilted $n_s$ in the original minimally coupled case and hence are totally excluded by observations. However, due to the existence of the non-minimal coupling, the (Einstein frame) potential gets modified and the spectral index can become red-tilted so that the model is viable for some range of $\xi$ again, just as in the case of the PLI model.
If we take $p$ larger than $p=0.5$, the spectral tilt $n_s - 1$ can still become negative due to the non-minimal coupling but, on the other hand, the tensor-to-scalar ratio grows as large as $r\sim 0.1$. Therefore, models with $p \gtrsim 0.5$ cannot be made viable even with a non-minimal coupling of the type studied in this paper. Finally, similar to the non-minimal PLI model, Fig.~\ref{fig:IMI_ns_r} shows that the differences between the metric and Palatini cases are small, since the non-minimal coupling takes only small values. Likewise, forthcoming CMB B-mode polarization experiments
will soon test the model, in particular, the parameter space presented in Fig.~\ref{fig:IMI_ns_r}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{ns_r_IMI.eps}
\caption{Predictions for the spectral index $n_s$ and tensor-to-scalar ratio $r$ in the non-minimally coupled inverse monomial inflation
model with $p=0.01,\, 0.05$, and $0.1$. For $p=0.05,\, 0.1$, we vary the non-minimal coupling parameter as $ 10^{-6} \leq \xi \leq 10^{-4}$, whereas for $p=0.01$ we vary it between $ 10^{-6} \leq \xi \leq 10^{-3}$ because in this case the potential can support more than $60$ $e$-folds of inflation also for $\xi> 10^{-4}$.
The predictions of the minimally coupled case are also shown and the underlying blue regions indicate the Planck+BICEP2/Keck Array $1\, \sigma$ and $2\,\sigma$ bounds \cite{Array:2015xqh,Akrami:2018odb}. The number of $e$-folds is assumed as $50 \leq N_e \leq 60$ in this figure. The metric and Palatini cases are shown.
}
\label{fig:IMI_ns_r}
\end{center}
\end{figure}
\section{Reheating and dynamics after inflation}
\label{sec:reheating}
As shown in the previous section, due to the existence of the non-minimal coupling,
we can dynamically realize a graceful exit in both the PLI and IMI models; that is, we can obtain $\epsilon = 1$ without any additional mechanism.
In addition to having a successful mechanism for ending inflation, in a successful inflationary model the Universe must also be reheated so that by the time of big bang nucleosynthesis, the Universe becomes radiation dominated.
In the standard reheating scenario (see, e.g., Ref.~\cite{Kofman:1997yn}), the inflaton field starts to oscillate around its potential minimum after the end of inflation and the energy density of the oscillating scalar field evolves roughly in the same way as that of non-relativistic matter. During such an effectively matter-dominated epoch, the inflaton field can decay into radiation through some interaction and, as a result, the Universe can
be reheated.
On the other hand, the models discussed here, as seen in Figs.~\ref{fig:PLI_potential} and \ref{fig:IMI_potential}, do not have any potential minimum even with a non-minimal coupling, and hence we cannot expect the usual reheating mechanism described above to work in our setup. However, the so-called gravitational reheating can still be realized in most of our scenarios, as we will explain below.
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{PLI_EoS.eps}
\includegraphics[width=7.5cm]{IMI_EoS.eps}
\caption{Evolution of the equation-of-state parameter $w$ as a function of the number of $e$-folds measured from the end of inflation. Left panel: The power-law inflation model with $\alpha =0.02$ in the cases of $\xi=10^{-3}$ and $\xi=10^{-5}$.
Right panel: The inverse monomial inflation model with $p=0.05$ in the cases of $\xi=10^{-4}$ and $\xi=10^{-6}$. In both panels, the metric and Palatini cases are shown.
}
\label{fig:PLI_eos}
\end{center}
\end{figure}
The effective equation-of-state parameter of the inflaton field in the Einstein frame, $w_\chi$, is defined by
\begin{equation}
w_\chi \equiv \frac{ \dot{\chi}^2 /2 - V_E(\chi)}{ \dot{\chi}^2 /2 + V_E(\chi)}~.
\end{equation}
In Fig.~\ref{fig:PLI_eos}, we plot the evolution of this quantity in the PLI and IMI models as a function of the number of $e$-folds measured from the end of inflation; that is, $N_e=0$ in the figure corresponds to the end of inflation and $-N_e>0$ to the post-inflationary stage. As can be seen in the left panel of the figure, in the PLI model considered here the equation-of-state parameter reaches unity soon after the end of inflation, both in the Palatini and in the metric case. This $w_\chi=1$ phase is nothing but the so-called kination, where the kinetic energy of the field dominates the total energy density of the Universe. However, as can be seen in the right panel of Fig. \ref{fig:PLI_eos}, only in the Palatini counterpart of the IMI model is it possible to reach $w_\chi=1$ and reheat the Universe via gravitational production of particles.
In the metric case with $p\sim 0.05$, the potential is not steep enough to give $w_\chi=1$, but instead we obtain $w_\chi \sim -0.1$. The reason for this unusual behavior is explained in Appendix \ref{app:IMI_eos}.
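The approach to $w_\chi = 1$ can be illustrated with a toy integration (our own sketch, not the full non-minimal dynamics): a canonical field rolling down a pure exponential potential $V = e^{-\lambda\chi}$ in units $M_{\rm pl} = 1$, which for $\lambda^2 > 6$ has a kinetic-dominated late-time attractor:

```python
import math

# Semi-implicit Euler integration of chi'' + 3 H chi' + V'(chi) = 0
# with H^2 = (chi'^2/2 + V)/3 and V = exp(-lam*chi); lam = 10 makes
# the potential steep enough that the kinetic energy quickly dominates
# and w = (K - V)/(K + V) approaches 1 (kination).
lam, dt = 10.0, 1e-4
chi, chidot = 0.0, 0.0
for _ in range(200_000):                           # integrate to t = 20
    V = math.exp(-lam * chi)
    H = math.sqrt((0.5 * chidot**2 + V) / 3.0)
    chidot += (-3.0 * H * chidot + lam * V) * dt   # -V'(chi) = +lam*V
    chi += chidot * dt
K = 0.5 * chidot**2
w = (K - math.exp(-lam * chi)) / (K + math.exp(-lam * chi))
```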
Reheating mechanisms in scenarios where the Universe undergoes a kination epoch have been studied in a large number of works. One possibility is gravitational reheating, which is based on gravitational particle production that
{\it necessarily} occurs due to excitation of all light fields during or at the end of inflation \cite{Ford,Starobinsky:1994bd}.
It has been shown that in this way, and under some suitable conditions regarding the non-minimal coupling between the Standard Model (SM) Higgs field and gravity, even the SM Higgs can be excited either during or after inflation; thus, it becomes responsible for reheating after inflation regardless of the exact shape of the potential or couplings of the inflaton field \cite{Figueroa:2016dsc,Nakama:2018gll,Opferkuch:2019zbd}. For other recent studies on this mechanism, see Refs. \cite{Dimopoulos:2018wfg,Haro:2018jtb,Hashiba:2018iff,Bettoni:2019dcw}. For earlier studies on gravitational reheating in the context of non-minimal inflation, see Refs. \cite{Tashiro:2003qp,Watanabe:2006ku}.
Let us now analyze how long it takes for the Universe to get reheated. While the energy density of radiation produced gravitationally is at first only quite modest, $\rho_{\rm rad} \sim H_*^4 \ll H_*^2M_{\rm pl}^2 \sim \rho_{\rm tot}$, where $H_*$ is the Hubble scale and $\rho_{\rm tot}\sim \rho_{\chi}$ is the total energy density at the end of inflation,\footnote{More precisely, depending on how the transition from inflation to the kination epoch proceeds, the efficiency of the gravitational particle production is reduced and the energy density is given by
$\rho_{\rm rad} = {\cal A} H_*^4$ where ${\cal A} = {\cal O}(0.01) - {\cal O}(0.1) $ \cite{Chun:2009yu}. However, even when ${\cal A} = 0.01$,
the
qualitative picture does not change.}
the radiation component's energy density
scales down more slowly than that of the inflaton field as the Universe expands, which eventually reheats the Universe. This is indeed the case when the inflaton enters into a kination phase where its kinetic energy dominates the energy density, $\rho_\chi \sim \dot{\chi}^2/2 \gg V(\chi)$ and consequently $\rho_\chi \propto a^{-6}$, where $a$ is the scale factor. For radiation, $\rho_{\rm rad} \propto a^{-4}$ as usual, and therefore $\rho_{\rm rad}/\rho_\chi \propto a^2$. Thus, the Universe becomes radiation dominated in
\begin{equation}
\label{reheating_efolds}
N_{\rm reh} \simeq \ln\left(\frac{M_{\rm P}}{H_*}\right) \simeq 10 - 12\,,
\end{equation}
$e$-folds after the end of inflation. Here we used the definition of the tensor-to-scalar ratio
\begin{equation}
r \equiv \frac{\mathcal{P}_T}{\mathcal{P}_\zeta} = \frac{8}{M_{\rm P}^2 \mathcal{P}_\zeta}\left(\frac{H_*}{2\pi}\right)^2\,,
\end{equation}
which allows us to estimate
\begin{equation}
\label{Hk}
H_* \simeq 7.7\times 10^{13}\sqrt{\frac{r}{0.1}}\, {\rm GeV} \simeq 7.7\times \left(10^{12} - 10^{13}\right) {\rm GeV} \,,
\end{equation}
as in our scenarios $r\sim 0.001 - 0.1$ and the curvature power spectrum amplitude $\mathcal{P}_\zeta = 2.1\times 10^{-9}$ as given by the Planck observations \cite{Akrami:2018odb}. The time of reheating \eqref{reheating_efolds} corresponds to the reheat temperature\footnote{A more detailed analysis of the reheating temperature in the gravitational reheating scenario has been done in Ref.~\cite{Hashiba:2018iff}.}
\begin{equation}
T_{\rm reh} \simeq \rho_{\rm rad}^{1/4}(N_{\rm reh}) \simeq H_* e^{-N_{\rm reh}} \simeq \left(10^7 - 10^9\right)\, {\rm GeV}\,,
\end{equation}
which is well above the temperature required for successful BBN, $T_{\rm BBN} = \mathcal{O}(1)$ MeV (see, e.g., Ref. \cite{Hasegawa:2019jsa}). We therefore conclude that gravitational particle production is sufficient to reheat the Universe in a successful way.
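Putting the numbers together, a hedged sketch (we assume the reduced Planck mass $M_{\rm P} = 2.4\times 10^{18}$ GeV and use the estimates $N_{\rm reh} \simeq \ln(M_{\rm P}/H_*)$ and $T_{\rm reh} \simeq H_* e^{-N_{\rm reh}}$ from the text):

```python
import math

M_P = 2.4e18      # reduced Planck mass in GeV (assumed value)
P_ZETA = 2.1e-9   # curvature power spectrum amplitude (Planck)

def H_star(r):
    """Inflationary Hubble scale from r = 8 (H_*/2 pi)^2 / (M_P^2 P_zeta)."""
    return 2.0 * math.pi * M_P * math.sqrt(r * P_ZETA / 8.0)

def N_reh(r):
    """e-folds of kination until radiation domination, ln(M_P / H_*)."""
    return math.log(M_P / H_star(r))

def T_reh(r):
    """Reheat temperature estimate, H_* exp(-N_reh) = H_*^2 / M_P."""
    return H_star(r) * math.exp(-N_reh(r))
```

For $r = 0.1$ this gives $H_* \simeq 7.7\times10^{13}$ GeV, $N_{\rm reh} \simeq 10$, and $T_{\rm reh} \simeq 2\times10^{9}$ GeV, consistent with the ranges quoted above.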
We stress that in most of our scenarios, the conditions for a successful inflationary model can be satisfied due to the existence of the non-minimal coupling even though the minimally coupled versions cannot accommodate a graceful exit or reheating without some extra mechanisms. However, there is still a serious issue in this simple setup: large running of the gravitational coupling in what we call the Jordan frame, provided that standard matter is minimally coupled in that frame \cite{Carroll:1998zi,Dvali:2001dd,Chiba:2001er}. In order to solve this problem, one may need to further assume, for example, that the standard matter is minimally coupled in the Einstein frame (instead of the Jordan frame) or that the inflaton field has a coupling which changes its potential at large field values and/or eventually stops rolling after inflation. This issue, however, is beyond the scope of the present paper and we leave it for future work.
\section{Conclusions}
\label{sec:summary}
In this paper, we have shown that a non-minimal coupling to gravity can not only make some inflationary models viable, similar to the case of the Higgs inflation model, but can also invoke slow-roll violation to realize a graceful exit from inflation. In particular, this is the case for models where some destabilizing mechanism, such as tachyonic instability, should be assumed to end inflation when the
model is minimally coupled to gravity.
As explicit examples, we have considered the power-law and inverse monomial inflation models with a non-minimal coupling to gravity.
When coupled only minimally to gravity, these models are completely excluded since their predictions for the spectral index $n_s$ and tensor-to-scalar ratio $r$ are inconsistent with the current cosmological data. However, we have shown that these models can become viable again with a specific range of values of the non-minimal coupling to gravity.
Typical values are summarized in Table~\ref{table:summary}.
\begin{table}[]
\begin{tabular}{cccc} \hline \hline
PLI ($\alpha =$) & 0.01 & 0.02 & 0.03 \\ \hline
$ \xi $ & $ 1 \times 10^{-3}$ & $ 4 \times 10^{-4}$ & $ 1 \times 10^{-4}$ \\ \hline \hline
IMI ($p =$) & 0.01 & 0.05 & 0.1 \\ \hline
$ \xi $ & $ 7 \times 10^{-4}$ & $ 1 \times 10^{-4}$ & $ 7 \times 10^{-5}$ \\ \hline
\end{tabular}
\caption{A summary of typical values of the non-minimal coupling to gravity,
which can make the power-law and inverse monomial inflation models viable again.
Note that here we assume $n=4$ for the non-minimal coupling function.}
\label{table:summary}
\end{table}
The characterization of this range is one of our most important results.
Furthermore, the same non-minimal coupling can also invoke the required slow-roll violation to end inflation without the need to implement any other mechanism for a graceful exit; this is non-trivial since introducing a non-minimal coupling does not necessarily guarantee that either the modifications to $n_s$ and $r$ are consistent with the Planck data or that the slow-roll violation can be realized. We also showed that in both the PLI and IMI models considered in this paper, the forthcoming CMB B-mode polarization experiments will soon either provide further support for the models or rule out a large part of their parameter space.
Our findings facilitate model building of the inflationary Universe, especially in the framework with an extended gravity sector. In this paper we have studied both metric and Palatini theories of gravity. We found that when it comes to inflationary observables, the differences between the two theories are generically small in the models discussed in this paper. This is due to the non-trivial fact that in our scenarios compatibility with data requires the non-minimal coupling to gravity to be very small, $\xi \ll 1$, in contrast to many other models such as Higgs inflation, which require very large non-minimal couplings in order to be compatible with the data. However, as we also showed, the post-inflationary dynamics of the inflaton field can -- surprisingly -- be drastically different in the two counterparts of the same model, depending on the underlying theory of gravity.
The above notion has important consequences for reheating. Since there is no potential minimum in our setup, the usual reheating mechanism where the oscillating inflaton field decays into radiation
cannot work. However, in most of our scenarios a kination phase is typically realized just after the end of inflation, which allows gravitational particle production to complete reheating at temperatures well above those required for successful BBN.
Among the models we considered in this paper, the only exception is the non-minimally coupled inverse monomial inflation model within the metric theory of gravity. This highlights the fact that even when the differences between two theories of gravity are small as far as inflationary observables are concerned, their suitability for building a successful model of inflation can be dramatically different. As shown in the paper, most of our models can accommodate all three major ingredients of a successful inflationary model: predictions for $n_s$ and $r$ consistent with data, a graceful exit, and reheating. However, in all scenarios there still remains an issue regarding the running of the gravitational constant at late times, as discussed briefly at the end of the previous section. A detailed study of this issue is left for future work.
\acknowledgments
S.~Yokoyama would like to thank Soichiro Hashiba for useful discussions.
The work of T.~Takahashi was supported by JSPS KAKENHI Grant Numbers 17H01131 and 19K03874 and by MEXT KAKENHI Grant Numbers 15H05888 and 19H05110.
T.~Tenkanen was supported by the Simons Foundation.
S.~Yokoyama was supported by MEXT KAKENHI Grant Numbers 15H05888 and 18H04356.
T.~Takahashi and S.~Yokoyama would like to thank JSPS and NRF under the Japan - Korea
Basic Scientific Cooperation Program for providing an opportunity for discussions.
\section{Introduction}
Biharmonic functions are important in physics. Aside from continuum mechanics and elasticity theory, the biharmonic equation also makes an appearance in two-dimensional hydrodynamics problems involving Stokes flows of incompressible Newtonian fluids. A comprehensive review of this fascinating history of biharmonic functions can be found in the article \cite{Mel}.
On this subject the literature is vast. With only very few exceptions, the domains are either surfaces or open subsets of flat Euclidean space; see for example \cite{Bai-Far-Oua}. The development of the last few years has changed this and can be traced in, e.g., the following publications: \cite{Gud-13}, \cite{Gud-14}, \cite{Gud-15}, \cite{Gud-Mon-Rat-1}, \cite{Gud-Sif-1}, \cite{Gud-Sif-2}, \cite{Gud-Sob-1}. There the authors develop methods for constructing explicit $r$-harmonic functions on the classical Lie groups and even some symmetric spaces.
\smallskip
In this paper we introduce the notion of complex isoparametric functions on a Riemannian manifold $(M,g)$, see Definition \ref{def-isoparametric}. It turns out that together with the so called eigenfunctions they provide us with a method for manufacturing complex-valued proper $r$-harmonic functions on $(M,g)$, see Section \ref{section-method}.
We then apply our new method to construct proper $r$-harmonic functions on the solvable Lie group semidirect products ${\mathbb R}^m \ltimes {\mathbb R}^n$ and ${\mathbb R}^m \ltimes \mathrm{H}^{2n+1}$, where $\mathrm{H}^{2n+1}$ denotes the classical $(2n+1)$-dimensional Heisenberg group.
The study of these particular Lie groups is motivated by the fact that all four-dimensional irreducible Lie groups are, up to isomorphism, semidirect products of one of these two types.
\section{Preliminaries}\label{section-preliminaries}
Let $(M,g)$ be a smooth manifold equipped with a Riemannian metric $g$. We complexify the tangent bundle $TM$ of $M$ to $T^{{\mathbb C}}M$ and extend
the metric $g$ to a complex bilinear form on $T^{{\mathbb C}}M$. Then the
gradient $\nabla \phi$ of a complex-valued function $\phi:(M,g)\to{\mathbb C}$ is a
section of $T^{{\mathbb C}}M$. In this situation, the well-known linear
{\it Laplace-Beltrami} operator (alt. {\it tension} field) $\tau$ on $(M,g)$
acts locally on $\phi$ as follows
$$
\tau(\phi)=\Div (\nabla \phi)= \frac{1}{\sqrt{\det g}} \sum_{ij}
\frac{\partial}{\partial x_j}\left(g^{ij}\, \sqrt{\det g}\,
\frac{\partial \phi}{\partial x_i}\right).
$$
For two complex-valued functions $\phi,\psi:(M,g)\to{\mathbb C}$ we have the
following well-known product rule
\begin{equation}\label{eq-product-rule}
\tau(\phi\cdot\psi)=\tau(\phi)\cdot\psi+2\,\kappa(\phi,\psi)+\phi\cdot\tau(\psi),
\end{equation}
where the {\it conformality} operator $\kappa$ is given by
$$
\kappa(\phi,\psi)=g(\nabla \phi,\nabla \psi).
$$
Moreover, if $f : U \subset {\mathbb C} \to {\mathbb C}$ is a holomorphic function defined on an open set $U$ containing $\phi(M)$, then we have the chain rule
\begin{equation}\label{eq-chain-rule}
\tau(f \circ \phi) = \kappa(\phi,\phi) \, f''(\phi) + \tau(\phi) \, f'(\phi).
\end{equation}
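Both identities can be checked numerically in the simplest setting: on flat $\mathbb{R}^2$ the tension field reduces to the ordinary Laplacian and $\kappa(\phi,\psi)=\nabla\phi\cdot\nabla\psi$. The sketch below (central finite differences, our own test functions) verifies the product rule \eqref{eq-product-rule}:

```python
import math

# Central-difference Laplacian and gradient on flat R^2, used to check
# tau(phi*psi) = tau(phi)*psi + 2*kappa(phi,psi) + phi*tau(psi)
# for two arbitrary smooth test functions.
h = 1e-3

def lap(f, x, y):
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

def grad(f, x, y):
    return ((f(x + h, y) - f(x - h, y)) / (2.0 * h),
            (f(x, y + h) - f(x, y - h)) / (2.0 * h))

phi = lambda x, y: math.sin(x) * math.exp(y)
psi = lambda x, y: x**2 + x * y

x0, y0 = 0.7, -0.3
(px, py), (qx, qy) = grad(phi, x0, y0), grad(psi, x0, y0)
kappa = px * qx + py * qy
lhs = lap(lambda x, y: phi(x, y) * psi(x, y), x0, y0)
rhs = (lap(phi, x0, y0) * psi(x0, y0) + 2.0 * kappa
       + phi(x0, y0) * lap(psi, x0, y0))
```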
For a positive integer $r$ the iterated Laplace-Beltrami operator
$\tau^r$ is defined inductively by
$$
\tau^{0} (\phi)=\phi,\quad \tau^r (\phi)=\tau(\tau^{(r-1)}(\phi)).
$$
\begin{definition}\label{definition-proper-p-harmonic}
Let $r$ be a positive integer. Then a smooth complex-valued function $\phi:(M,g)\to{\mathbb C}$ is said to be
\begin{enumerate}
\item[(a)] {\it $r$-harmonic} if $\tau^r (\phi)=0$,
\item[(b)] {\it proper $r$-harmonic} if $\tau^r (\phi)=0$ and
$\tau^{(r-1)}(\phi)$ does not vanish identically.
\end{enumerate}
\end{definition}
It should be noted that the {\it harmonic} functions are exactly the
$1$-harmonic and the {\it biharmonic} functions are the $2$-harmonic
ones. In some texts the $r$-harmonic functions are also called {\it polyharmonic} of order $r$.
We also note that if a function is $r$-harmonic then it is also $s$-harmonic for any $s \geq r$.
Hence, one is usually interested in studying functions which are {\it proper} $r$-harmonic.
\section{Complex Isoparametric Functions}\label{section-method}
A method for constructing proper $r$-harmonic functions on Riemannian manifolds has recently been developed in \cite{Gud-Sob-1}.
The $r$-harmonic functions that the authors consider are of the form $f \circ \phi$, where $f : U \subset {\mathbb C} \to {\mathbb C}$ is a holomorphic function
defined on an open set $U$ containing $\phi(M)$, and
$\phi: (M,g) \to {\mathbb C}$ is an {\it eigenfunction}
i.e.\ a smooth complex-valued function such that
\begin{equation}\label{eq-eigenfunction-def}
\tau(\phi) = \lambda \cdot \phi \quad\text{and}\quad \kappa(\phi,\phi) = \mu \cdot \phi^2,
\end{equation}
for some constants $\lambda, \mu \in {\mathbb C}$.
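A simple example is provided by the standard Euclidean space ${\mathbb R}^n$: for any $a \in {\mathbb C}^n$ the function $\phi(x) = e^{\langle a, x\rangle}$, where $\langle\cdot,\cdot\rangle$ denotes the complex bilinear extension of the Euclidean scalar product, satisfies
$$
\tau(\phi) = \langle a,a\rangle\cdot\phi \quad\text{and}\quad \kappa(\phi,\phi) = \langle a,a\rangle\cdot\phi^2,
$$
and is thus an eigenfunction with $\lambda = \mu = \langle a,a\rangle$.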
The construction from \cite{Gud-Sob-1} can be adapted to the more general setting when $\phi$ is a complex isoparametric function, as we will now demonstrate.
Classically, isoparametric functions on Riemannian manifolds are defined as real-valued functions $\phi : (M,g) \to {\mathbb R}$ such that the tension field $\tau$ and the conformality operator $\kappa$ satisfy
\begin{equation}\label{eq-real-isoparametric-def}
\tau(\phi) = \Phi \circ \phi \quad\text{and}\quad \kappa(\phi, \phi) = \Psi \circ \phi,
\end{equation}
for some smooth functions $\Phi,\Psi$.
These have been extensively studied due to their beautiful geometric properties, see e.g.\ \cite{Tho}.
As we are mainly interested in complex-valued functions, the following complex-valued analogue of the classical real-valued iso\-parametric functions will turn out to be useful.
\begin{definition}\label{def-isoparametric}
Let $(M,g)$ be a Riemannian manifold.
Then a smooth complex-valued function $\phi : M \to {\mathbb C}$ is said to be \textit{complex isoparametric} on $M$ if
there exist holomorphic functions $\Phi,\Psi : U \to{\mathbb C}$ defined on some open set $U\subset {\mathbb C}$ containing $\phi(M)$, such that
the tension field $\tau$ and the conformality operator $\kappa$ satisfy
\begin{equation}\label{eq-complex-isoparametric-def}
\tau(\phi) = \Phi \circ \phi \quad\text{and}\quad \kappa(\phi, \phi) = \Psi \circ \phi.
\end{equation}
\end{definition}
\begin{remark}
Note that the term \textit{complex isoparametric} is used here mainly for aesthetic reasons, as the defining conditions (\ref{eq-real-isoparametric-def}) for real isoparametric functions and (\ref{eq-complex-isoparametric-def}) for complex ones are identical.
Despite this similarity, there does not seem to be a direct relation between these two concepts.
For example, for a complex-valued function $\phi: M \to {\mathbb C}$ to be complex isoparametric, it is neither necessary nor sufficient that its real and imaginary parts are real isoparametric.
This is caused not only by the restrictive condition of $\Phi$ and $\Psi$ being holomorphic, but also by the fact that the relation for the conformality operator may fail to be satisfied, as demonstrated by the functions
$${\mathbb R} \ni t \mapsto e^{(a+\mathrm ib)t} \quad\text{and}\quad {\mathbb R} \ni t \mapsto e^{at} + \mathrm ie^{bt},\qquad a,b\in {\mathbb R}.$$
Indeed, the former is complex isoparametric but its real and imaginary parts are not real isoparametric, while the latter is not complex isoparametric when $a\not= b$ but its real and imaginary parts are real isoparametric.
\end{remark}
\begin{remark}
We also note that other different definitions of complex-valued isoparametric functions have been proposed.
For example, in \cite{Bai} the author defines a function $\phi:M \to {\mathbb C}$ to be complex isoparametric if
\begin{equation*}
\tau(\phi) = \Phi \circ \phi \quad\text{and}\quad |d\phi|^2 = \kappa(\phi,\bar\phi) = \Psi \circ \phi
\end{equation*}
for a smooth complex-valued function $\Phi$ and a smooth real-valued function $\Psi$.
This definition is less restrictive than ours due to the weaker regularity assumptions on $\Phi$ and $\Psi$, but it is also essentially different because of the assumed condition for the conformality operator $\kappa$. Note that the value of $\kappa(\phi,\phi)$ contains no information about the value of $\kappa(\phi,\bar\phi)$, and vice versa.
\end{remark}
A plethora of examples of eigenfunctions (which are a fortiori complex isoparametric) on classical semisimple Lie groups can be found in Table 1 of \cite{Gud-Sob-1}.
Examples of complex isoparametric functions which are not eigenfunctions shall be studied later in this work, see Proposition \ref{lemma-phi}.
\medskip
As in \cite{Gud-Sob-1}, the gist of constructing $r$-harmonic functions using
isoparametric functions is to study compositions of the form $f \circ \phi$,
where $\phi : (M,g) \to {\mathbb C}$ is complex isoparametric and $f: U \to {\mathbb C}$ is a holomorphic function defined on some open set $U \subset {\mathbb C}$ containing $\phi(M)$.
By the chain rule (\ref{eq-chain-rule}), the tension field of such compositions satisfies
\begin{equation*}
\tau(f \circ \phi) = \kappa(\phi,\phi)\, f''(\phi) + \tau(\phi)\, f'(\phi) = (\Psi \, f'' + \Phi \, f') \circ \phi.
\end{equation*}
With this in hand, one can first obtain a harmonic function $f_1 \circ \phi$ by solving
\begin{equation*}
\tau(f_1 \circ \phi) = 0,
\end{equation*}
which, by the chain rule, reduces to the complex ordinary differential equation
\begin{equation*}
\Psi\,f_1'' + \Phi\,f_1' = 0.
\end{equation*}
Continuing inductively, we assume that a proper $(r-1)$-harmonic function of the form $f_{r-1} \circ \phi$ has already been constructed.
We can then obtain a proper $r$-harmonic function $f_r \circ \phi$ by solving the problem
\begin{equation*}
\tau(f_r \circ \phi) = f_{r-1} \circ \phi,
\end{equation*}
which again reduces to a complex ordinary differential equation, namely
\begin{equation*}
\Psi\, f_r'' + \Phi\, f_r' = f_{r-1}.
\end{equation*}
More explicitly, we get the following result.
\begin{theorem}\label{thm-p-harmonic-main}
Let $(M,g)$ be a Riemannian manifold and $\phi : M \to {\mathbb C}$ be a complex isoparametric function on $M$ with
\begin{equation*}
\tau(\phi) = \Phi \circ \phi \quad\text{and}\quad \kappa(\phi,\phi) = \Psi \circ \phi,
\end{equation*}
for some holomorphic functions $\Phi,\Psi : U \to{\mathbb C}$ defined on an open set $U$ of ${\mathbb C}$ containing $\phi(M)$.
Suppose that one of the following situations holds.
\begin{enumerate}
\item If $\Psi$ vanishes identically, let $\hat{U}$ be an open simply connected subset of ${U \setminus \Phi^{-1}(\{0\})}$
and define the holomorphic functions ${f_r : \hat{U} \to {\mathbb C}}$ for $r \geq 1$ by
\begin{equation*}
f_r(z) = c \lrpar{\int^z \frac{\mathrm d\zeta}{\Phi(\zeta)}}^{r-1},
\end{equation*}
where $c \in {\mathbb C}$ is non-zero.
\item If $\Psi$ does not vanish identically, let $\hat{U}$ be an open simply connected subset of $U \setminus \Psi^{-1}(\{0\})$, put
\begin{equation*}
\Lambda(z) = \exp\lrpar{-\int^z \frac{\Phi(\zeta)}{\Psi(\zeta)} \,\mathrm d\zeta}, \quad z \in \hat{U}
\end{equation*}
and define the holomorphic functions ${f_r : \hat{U} \to {\mathbb C}}$ for $r \geq 1$ by
\begin{align*}
f_1(z) &= c_1 \int^z \Lambda(\zeta) \, \mathrm d\zeta + c_2,\\[0.1cm]
f_r(z) &= \int^z \Lambda(\eta) \int^\eta \frac{f_{r-1}(\zeta)}{\Lambda(\zeta)\,\Psi(\zeta)} \, \mathrm d\zeta \,\mathrm d\eta, \quad r > 1,
\end{align*}
where $c = (c_1, c_2) \in {\mathbb C}^2$ is non-zero.
\end{enumerate}
Then in both cases, the composition
\begin{equation*}
f_r \circ \phi : \phi^{-1} (\hat{U}) \to {\mathbb C}
\end{equation*}
is proper $r$-harmonic on its open domain $\phi^{-1} (\hat{U})$ in $M$ for all $r \geq 1$.
\end{theorem}
\begin{remark}
If the isoparametric function $\phi$ is real-valued, then one can weaken the assumptions in Theorem \ref{thm-p-harmonic-main}
by only requiring that $\Phi$ and $\Psi$ are smooth functions of a real variable and $\hat{U}$ can be taken to be an interval. In the complex-valued case, the requirement that $\hat{U}$ is simply connected is needed to ensure that the holomorphic antiderivatives are well-defined.
\end{remark}
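To illustrate the construction, consider the real line ${\mathbb R}$ with its standard metric and the isoparametric function $\phi(t) = t$, for which $\Phi \equiv 0$ and $\Psi \equiv 1$. Then $\Lambda \equiv 1$ and the second case of Theorem \ref{thm-p-harmonic-main} yields
$$
f_1(z) = c_1\,z + c_2, \quad f_2(z) = \frac{c_1}{6}\,z^3 + \frac{c_2}{2}\,z^2,
$$
so that $t \mapsto c_1 t^3 + c_2 t^2$ is proper biharmonic on ${\mathbb R}$, as one verifies directly from $\tau = d^2/dt^2$.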
Upon applying Theorem \ref{thm-p-harmonic-main} to the particular case when the isoparametric function $\phi$ is an eigenfunction on $(M,g)$
satisfying (\ref{eq-eigenfunction-def}), one sees that the composition $f_r \circ \phi$ is $r$-harmonic if
\begin{equation}\label{eq-p-harmonic-eigenfunction}
f_r(z)=
\begin{cases}
c\log(z)^{r-1} & \text{if }\; \mu = 0, \; \lambda \not= 0\\[0.2cm]
c_1\log(z)^{2r-1}+ c_{2}\log(z)^{2r-2} & \text{if }\; \mu \not= 0, \; \lambda = \mu\\[0.2cm]
c_1 z^{1-\frac\lambda{\mu}}\log(z)^{r-1} + c_2 \log(z)^{r-1} & \text{if }\; \mu \not= 0, \; \lambda \not= \mu
\end{cases}
\end{equation}
a result which was already obtained in the aforementioned paper \cite{Gud-Sob-1}.
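For instance, for the eigenfunction $\phi(x) = e^{\langle a, x\rangle}$ on the standard Euclidean space ${\mathbb R}^n$, with $a \in {\mathbb C}^n$ such that $\lambda = \mu = \langle a,a\rangle \neq 0$, the second case of (\ref{eq-p-harmonic-eigenfunction}) produces the proper $r$-harmonic functions
$$
x \mapsto c_1\,\langle a,x\rangle^{2r-1} + c_2\,\langle a,x\rangle^{2r-2},
$$
since here $\log\phi(x) = \langle a,x\rangle$.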
\section{The Semidirect Products ${\mathbb R}^m \ltimes {\mathbb R}^n$ and ${\mathbb R}^m \ltimes \mathrm{H}^{2n+1}$}
Low-dimensional Lie groups, particularly of dimension three and four, are of great importance in physics.
Probably most notably such Lie groups are used as models of spacetime in the theory of general relativity.
By Lie's third theorem, there is a bijection between real finite-dimensional Lie algebras and connected simply connected Lie groups of the same dimension.
Conveniently, real Lie algebras of dimension less than or equal to six have been classified,
see e.g.\ the recent book \cite{Sno-Win}.
As a consequence, one also obtains classifications of low-dimensional connected simply connected Lie groups.
The study conducted in this paper was initially intended as a study of $r$-harmonic functions on the four-dimensional connected simply connected Lie groups.
For our purposes, only the Lie groups whose Lie algebras are indecomposable, i.e.\ not direct products of lower dimensional Lie algebras, are of interest.
The reason for this is that the Lie groups whose Lie algebras are decomposable are themselves direct products of lower dimensional Lie groups,
and the theory of $p$-harmonic functions on product manifolds is well-known, see e.g.\ \cite{Gud-14}.
According to the classification given for example in \cite{Pop-Boy-etal} and \cite{Big-Rem}, all four-dimensional indecomposable real Lie algebras are semidirect products of one of the following types:
\begin{equation*}
\mathfrak r \ltimes \mathfrak r^3, \quad \mathfrak r^2 \ltimes \mathfrak r^2, \quad \mathfrak r \ltimes \mathfrak h^{3},
\end{equation*}
where $\mathfrak r^n$ denotes the $n$-dimensional abelian algebra and $\mathfrak h^{2n+1}$ denotes the $(2n+1)$-dimensional Heisenberg algebra.
For the reader's convenience we list these Lie algebras in Table \ref{table:4dimalgebras}.
We thus see that the corresponding four-dimensional connected simply connected Lie groups are semidirect products of the form
\begin{equation*}
{\mathbb R} \ltimes {\mathbb R}^3, \quad {\mathbb R}^2 \ltimes {\mathbb R}^2, \quad {\mathbb R} \ltimes \mathrm{H}^{3},
\end{equation*}
where $\mathrm{H}^{2n+1}$ is the $(2n+1)$-dimensional Heisenberg group.
Linear representations of these four-dimensional Lie groups can be found in Table \ref{table-groups-4dim}.
This motivates our interest in studying the more general semidirect products $${\mathbb R}^m \ltimes {\mathbb R}^n \quad\text{and}\quad {\mathbb R}^m \ltimes \mathrm{H}^{2n+1}.$$
In particular, we note that such semidirect products are automatically solvable and hence diffeomorphic to the vector space of the corresponding dimension.
\subsection{The Semidirect Products ${\mathbb R}^m \ltimes {\mathbb R}^n$}
Let $\mu : {\mathbb R}^m \to \Aut({\mathbb R}^n) = \GLR n$ be the smooth homomorphism
\begin{equation*}
\mu(t) = \Exp \lrpar{ \sum_{k=1}^m A_kt_k },
\end{equation*}
for some family $\mathcal A = (A_k)_{k=1}^m$ of commuting matrices in $\glr n$.
Then the semidirect product ${\mathbb R}^m \ltimes_\mu {\mathbb R}^n$ is by definition
the smooth manifold ${\mathbb R}^m \times {\mathbb R}^n$ equipped with the Lie group operation
\begin{equation*}
(t,x)(s,y) = (t+s, \, x+\mu(t)y), \quad (t,x),(s,y) \in {\mathbb R}^m \times {\mathbb R}^n.
\end{equation*}
Note that in this case, the family $\mathcal A$ completely determines the semidirect product ${\mathbb R}^m \ltimes_\mu {\mathbb R}^n$,
so that it is natural to use the more suggestive notation ${\mathbb R}^m \ltimes_\mathcal A {\mathbb R}^n$.
The Lie group ${\mathbb R}^m \ltimes_\mathcal A {\mathbb R}^n$ has a faithful linear representation as the matrix group
\begin{equation*}
\big\{ \begin{bmatrix}
\mu(t) & x & 0\\
0 & 1 & 0 \\
0 & 0 & \Exp(\sum_{k} D_k t_k)
\end{bmatrix} \,\mid\, (t,x) \in {\mathbb R}^m \times {\mathbb R}^n \big\},
\end{equation*}
where $(D_k)_{ij} = \delta_{ik}\delta_{jk}$.
The $m$-dimensional block
\begin{equation*}
\Exp\lrpar{\sum_{k=1}^m D_k t_k}
\end{equation*}
of the representation is needed in general since $\mu$ may not be injective, but (parts of) this block may often be omitted.
This linear representation induces a natural basis for the corresponding Lie algebra $\mathfrak r^m \ltimes_\mathcal A \mathfrak r^n$, namely that consisting of the elements
\begin{equation*}
\frac{\partial}{\partial t_k}\bigg|_0
=
\begin{bmatrix}
A_k & 0 & 0 \\
0 & 0 & 0\\
0 & 0 & D_k
\end{bmatrix},
\quad
\frac{\partial}{\partial x_i}\bigg|_0
=
\begin{bmatrix}
0 & e_i & 0\\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},
\end{equation*}
where $1 \leq k \leq m, \; 1 \leq i \leq n$, and $e_i$ are the canonical unit vectors in ${\mathbb R}^n$.
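In this basis, the only non-zero Lie brackets of $\mathfrak r^m \ltimes_\mathcal A \mathfrak r^n$ are obtained from the matrix commutators
$$
\Big[\frac{\partial}{\partial t_k}\Big|_0, \frac{\partial}{\partial x_i}\Big|_0\Big] = \sum_{j=1}^n (A_k)_{ji}\, \frac{\partial}{\partial x_j}\Big|_0,
$$
reflecting the fact that $\mathfrak r^m$ acts on the abelian ideal $\mathfrak r^n$ via the matrices $A_k$.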
We now define an inner product on the Lie algebra $\mathfrak r^m \ltimes_\mathcal A \mathfrak r^n$ by requiring that this basis becomes orthonormal and we extend this inner product to a left-invariant Riemannian metric on the Lie group ${\mathbb R}^m \ltimes_\mathcal A {\mathbb R}^n$.
A simple calculation shows that, in the coordinates $(t,x) \in {\mathbb R}^m \times {\mathbb R}^n$, this metric is given by
\begin{equation*}
g_{(t,x)} = \begin{bmatrix}
I_m & 0 \\
0 & \mu(-t)^\mathrm{T} \mu(-t)
\end{bmatrix}.
\end{equation*}
It is easy to see that if $\phi, \psi : ({\mathbb R}^m \ltimes_\mathcal A {\mathbb R}^n, g) \to {\mathbb C}$ are two complex-valued functions then the Laplace-Beltrami operator satisfies
\begin{equation}\label{eq-rm*rn-lap}
\tau(\phi)
= \sum_{k=1}^m\lrpar{\frac{\partial^2\phi}{\partial t_k^2} - \trace(A_k) \,\frac{\partial\phi}{\partial t_k}}
+ \sum_{i,j=1}^n (\mu(t)\,\mu(t)^\mathrm{T})_{ij} \, \frac{\partial^2 \phi}{\partial x_i \partial x_j},
\end{equation}
while the conformality operator is given by
\begin{equation}\label{eq-rm*rn-conf}
\kappa(\phi, \psi) = \sum_{k=1}^m \frac{\partial\phi}{\partial t_k}\frac{\partial\psi}{\partial t_k}
+ \sum_{i,j=1}^n (\mu(t)\,\mu(t)^\mathrm{T})_{ij} \frac{\partial\phi}{\partial x_i}\frac{\partial\psi}{\partial x_j}.
\end{equation}
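As a simple illustration, for $m = n = 1$ and $\mathcal A = (a)$ with $a \in {\mathbb R}$, we have $\mu(t) = e^{at}$ and the two operators reduce to
$$
\tau(\phi) = \frac{\partial^2\phi}{\partial t^2} - a\,\frac{\partial\phi}{\partial t} + e^{2at}\,\frac{\partial^2\phi}{\partial x^2},
\qquad
\kappa(\phi,\psi) = \frac{\partial\phi}{\partial t}\frac{\partial\psi}{\partial t} + e^{2at}\,\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial x}.
$$
For $a = 1$ the substitution $y = e^{t}$ turns $\tau$ into the Laplace-Beltrami operator $y^2(\partial_x^2 + \partial_y^2)$ of the hyperbolic upper half-plane.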
\medskip
\subsection{The Semidirect Products ${\mathbb R}^m \ltimes \mathrm{H}^{2n+1}$}
Throughout this section we let $J_{2n}$ denote the block diagonal $2n \times 2n$ matrix defined by
\begin{equation*}
J_{2n} x = (-x_2, x_1, \ldots, -x_{2n}, x_{2n-1}).
\end{equation*}
Note that this is the standard complex structure on ${\mathbb R}^{2n} \cong {\mathbb C}^n$.
We recall that the Heisenberg group $\mathrm{H}^{2n+1}$ can be seen as the space ${\mathbb R} \times {\mathbb R}^{2n}$ equipped with the Lie group operation
\begin{equation*}
(\xi, x) \boxplus (\eta, y) = (\xi + \eta + \tfrac{1}{2}\langle J_{2n} x, y \rangle, \, x+y).
\end{equation*}
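For $n = 1$ this is the classical three-dimensional Heisenberg group with multiplication
$$
(\xi, x_1, x_2) \boxplus (\eta, y_1, y_2) = \big(\xi + \eta + \tfrac{1}{2}(x_1 y_2 - x_2 y_1),\; x_1 + y_1,\; x_2 + y_2\big),
$$
since $\langle J_2 x, y\rangle = x_1 y_2 - x_2 y_1$.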
Its Lie algebra $\mathfrak h^{2n+1}$ is nilpotent and has one-dimensional center.
More explicitly for a basis $\{\Xi, X_1, \ldots, X_{2n}\}$ of $\mathfrak h^{2n+1}$ the non-zero Lie brackets are given by
\begin{equation*}
[X_{2i-1},X_{2i}] = \Xi, \quad 1\leq i \leq n.
\end{equation*}
In what follows, we consider semidirect products ${\mathbb R}^m \ltimes_{\hat\mu} \mathrm{H}^{2n+1}$ with respect to a specific class of homomorphisms
$\hat\mu : {\mathbb R}^m \to \Aut(\mathrm{H}^{2n+1})$.
The reader interested in more details about this can find them in \cite{Sob-MSc}.
Let $\hat\mu : {\mathbb R}^m \to \Aut(\mathrm{H}^{2n+1})$ be the smooth homomorphism
\begin{equation*}
\hat\mu(t) = \begin{bmatrix}
a(t) & 0 \\
0 & \mu(t)
\end{bmatrix},
\end{equation*}
where $a : {\mathbb R}^m \to {\mathbb R}$ and $\mu : {\mathbb R}^m \to \GLR{2n}$ are given by
\begin{equation*}
a(t) = \exp\lrpar{\frac{1}{n}\sum_{k=1}^m \trace(A_k)t_k} \quad\text{and}\quad \mu(t) = \Exp\lrpar{\sum_{k=1}^m A_kt_k},
\end{equation*}
for some family $\mathcal A = (A_k)_{k=1}^m$ of commuting matrices in $\glr{2n}$ of the form
\begin{equation}\label{eq-heisenberg-der-A}
{\small
\begin{bmatrix}
A_{(1,1)} & -\adj(A_{(2,1)}) & - \adj(A_{(3,1)}) & \ldots & -\adj(A_{(n,1)})\\
A_{(2,1)} & A_{(2,2)} & -\adj(A_{(3,2)}) & \ldots & -\adj(A_{(n,2)})\\
A_{(3,1)} & A_{(3,2)} & A_{(3,3)} & \ldots & -\adj(A_{(n,3)}) \\
\vdots & & & \ddots & \vdots\\
A_{(n,1)} & A_{(n,2)} & A_{(n,3)} & \ldots & A_{(n,n)}
\end{bmatrix}
},
\end{equation}
where $A_{(i,j)} \in {\mathbb R}^{2\times 2}$ are such that the diagonal blocks share a common trace, i.e.\ $\trace A_{(i,i)} = a$ for some $a \in {\mathbb R}$ and all $1 \leq i \leq n$, and $\adj(A_{(i,j)})$ denotes
the adjugate, i.e.\ the transpose of the cofactor matrix, of $A_{(i,j)}$.
In this situation, the Lie group semidirect product ${\mathbb R}^m \ltimes_{\hat\mu} \mathrm{H}^{2n+1}$ is the manifold ${\mathbb R}^m \times {\mathbb R} \times {\mathbb R}^{2n}$
equipped with the Lie group operation
\begin{align*}
(t,\xi,x) \, (s,\eta,y) &= \big(t+s, \, (\xi,x)\boxplus(a(t) \eta, \mu(t)y)\big)\\[0.1cm]
&= \big(t+s, \; \xi + a(t)\eta + \tfrac{1}{2}\langle J_{2n} x, \mu(t)y \rangle, \; x + \mu(t)y \big),
\end{align*}
where $(t,\xi,x) , (s,\eta,y) \in {\mathbb R}^m \times {\mathbb R} \times {\mathbb R}^{2n}$.
As in the previous section, such semidirect products are completely determined by the family $\mathcal A$,
so that it is natural to use the more suggestive notation ${\mathbb R}^m \ltimes_\mathcal A \mathrm{H}^{2n+1}$.
A faithful linear representation of the Lie group ${\mathbb R}^m \ltimes_\mathcal A \mathrm{H}^{2n+1}$ is given by the matrix group
\begin{equation*}
\big\{\begin{bmatrix}
a(t) & \frac{1}{2}(J_{2n} x)^\mathrm{T} \mu(t) & \xi & 0\\
0 & \mu(t) & x & 0\\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & \Exp(\sum_k D_kt_k)
\end{bmatrix}
\,\mid\,
(t,\xi,x) \in {\mathbb R}^m \times {\mathbb R} \times {\mathbb R}^{2n}
\big\},
\end{equation*}
where as before $(D_k)_{ij} = \delta_{ik}\delta_{jk}$.
Just as in the previous section, the $m$-dimensional block
\begin{equation*}
\Exp\lrpar{\sum_{k=1}^m D_k t_k}
\end{equation*}
of the representation is needed in general, but (parts of) it can often be removed.
We equip the Lie group ${\mathbb R}^m \ltimes_\mathcal A \mathrm{H}^{2n+1}$ with the left-invariant Riemannian metric induced by the inner
product on its Lie algebra $\mathfrak r^m \ltimes_\mathcal A \mathfrak h^{2n+1}$ defined by requiring that the basis
\begin{equation*}
\small
\frac{\partial}{\partial t_k}\bigg|_0 = \begin{bmatrix}
\frac{1}{n}\trace(A_k) & 0 & 0 & 0\\
0 & A_k & 0 & 0\\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & D_k
\end{bmatrix},
\end{equation*}
\begin{equation*}
\frac{\partial}{\partial \xi}\bigg|_0 = \begin{bmatrix}
0 & 0 & 1 & 0\\
0 & 0_{2n} & 0 & 0\\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0_{m}
\end{bmatrix},
\quad
\frac{\partial}{\partial x_i}\bigg|_0 = \begin{bmatrix}
0 & 0 & 0 & 0\\
0 & 0_{2n} & e_i & 0\\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0_{m}
\end{bmatrix}
\end{equation*}
on $\mathfrak r^m \ltimes_\mathcal A \mathfrak h^{2n+1}$ becomes orthonormal.
An elementary but long calculation shows that this metric can be expressed in the coordinates $(t,\xi,x) \in {\mathbb R}^m \times {\mathbb R} \times {\mathbb R}^{2n}$ as
\begin{align*}
g_{(t,\xi,x)}
&=
\begin{bmatrix}
I_m & 0 & 0 \\
0 & 1 & 0\\
0 & -\frac{1}{2}J_{2n} x & I_{2n}
\end{bmatrix}
\begin{bmatrix}
I_m & 0 & 0\\
0 & a(-t)^2 & 0\\
0 & 0 & \mu(-t)^\mathrm{T} \mu(-t)
\end{bmatrix}
\begin{bmatrix}
I_m & 0 & 0 \\
0 & 1 & -\frac{1}{2}(J_{2n} x)^\mathrm{T}\\
0 & 0 & I_{2n}
\end{bmatrix}\\[0.1cm]
&=
\begin{bmatrix}
I_m & 0 & 0\\
0 & 0 & 0\\
0 & 0 & \mu(-t)^\mathrm{T} \mu(-t)
\end{bmatrix}
+
a(-t)^2
\begin{bmatrix}
0_m & 0 & 0\\[0.1cm]
0 & 1 & -\frac{1}{2}(J_{2n} x)^\mathrm{T}\\[0.1cm]
0 & -\frac{1}{2} J_{2n}x & \frac{1}{4}J_{2n} x(J_{2n} x)^\mathrm{T}
\end{bmatrix}.
\end{align*}
With this in hand, it is easy to see that if $\phi,\psi : ({\mathbb R}^m \ltimes_\mathcal A \mathrm{H}^{2n+1},g) \to {\mathbb C}$ are two complex-valued functions, then
the Laplace-Beltrami operator satisfies
\begin{align}\label{eq-rm*heisenberg-lap}
\tau(\phi)
&= \sum_{k=1}^m \lrpar{ \frac{\partial^2 \phi}{\partial t_k^2} - \frac{n+1}{n}\,\trace(A_k) \, \frac{\partial \phi}{\partial t_k} }
+ \lrpar{a(t)^2 + \frac{1}{4} \langle \mu(t)^\mathrm{T} J_{2n} x,\mu(t)^\mathrm{T} J_{2n} x \rangle} \frac{\partial^2 \phi}{\partial \xi^2}\nonumber\\
& \quad + \sum_{i=1}^{2n} (\mu(t)\,\mu(t)^\mathrm{T} J_{2n} x)_i \, \frac{\partial^2 \phi}{\partial \xi \partial x_i}
+ \sum_{i,j=1}^{2n} (\mu(t) \, \mu(t)^\mathrm{T})_{ij} \, \frac{\partial^2 \phi}{\partial x_i \partial x_j},
\end{align}
while the conformality operator satisfies
\begin{align}\label{eq-rm*heisenberg-conf}
\kappa(\phi,\psi) &= \sum_{k=1}^m \frac{\partial \phi}{\partial t_k} \frac{\partial \psi}{\partial t_k}
+ \lrpar{a(t)^2 + \frac{1}{4} \langle \mu(t)^\mathrm{T} J_{2n} x,\mu(t)^\mathrm{T} J_{2n} x \rangle } \frac{\partial \phi}{\partial \xi}\frac{\partial \psi}{\partial \xi} \nonumber\\
&+ \frac{1}{2}\sum_{i=1}^{2n} (\mu(t)\,\mu(t)^\mathrm{T} J_{2n} x)_i
\lrpar{ \frac{\partial \phi}{\partial \xi}\frac{\partial \psi}{\partial x_i} + \frac{\partial \phi}{\partial x_i}\frac{\partial \psi}{\partial \xi} } \\
&+ \sum_{i,j=1}^{2n} (\mu(t) \, \mu(t)^\mathrm{T})_{ij} \,\frac{\partial \phi}{\partial x_i} \frac{\partial \psi}{\partial x_j}. \nonumber
\end{align}
\medskip
\section{$r$-Harmonic Functions on ${\mathbb R}^m \ltimes {\mathbb R}^n$ and ${\mathbb R}^m \ltimes \mathrm{H}^{2n+1}$}
As we have seen in the preceding sections,
the Laplace-Beltrami operators and the conformality operators on the Lie groups
${\mathbb R}^m \ltimes {\mathbb R}^n$ and ${\mathbb R}^m \ltimes \mathrm{H}^{2n+1}$ are closely analogous, so we will study the two cases in parallel.
To simplify the statements of our results we fix the following notation.
\begin{notation}
Let $G$ be a Lie group, either ${\mathbb R}^n$ or $\mathrm{H}^{2n+1}$.
If $G = {\mathbb R}^n$ the indices $i,j$ are assumed to satisfy $1\le i,j\le n$ and in the case $G = \mathrm{H}^{2n+1}$ the condition $1\le i,j\le 2n$. In both cases we have $1 \leq k \leq m$, unless otherwise specified.
Throughout this section $\mathcal A = (A_k)_k$ will denote a commuting family of invertible real matrices which are of dimensions $n\times n$ if $G = {\mathbb R}^n$
and of dimensions $2n\times 2n$ if $G = \mathrm{H}^{2n+1}$.
In the latter case, we also assume that each member of the family $\mathcal A$ is of the form (\ref{eq-heisenberg-der-A}).
Finally, we will use the following notation
\begin{equation}\label{eq-omega-def}
{\mathbb R}^m \ni \omega = \begin{cases}
(\trace(A_1),\ldots,\trace(A_m)) & \text{if } G={\mathbb R}^n,\\[0.2cm]
\frac{n+1}{n} (\trace(A_1),\ldots,\trace(A_m)) & \text{if } G =\mathrm{H}^{2n+1}.
\end{cases}
\end{equation}
Now if $\phi,\psi : {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ depend only on the coordinates $t$ and $x$, i.e.\ if they are independent of $\xi$ in the case when $G = \mathrm{H}^{2n+1}$, then the Laplace-Beltrami operator satisfies
\begin{equation*}
\tau(\phi) = \sum_k \lrpar{ \frac{\partial^2 \phi}{\partial t_k^2} - \omega_k\, \frac{\partial \phi}{\partial t_k} }
+ \sum_{ij} (\mu(t)\,\mu(t)^\mathrm{T})_{ij} \, \frac{\partial^2 \phi}{\partial x_i \partial x_j}
\end{equation*}
and the conformality operator is given by
\begin{equation*}
\kappa(\phi,\psi)
= \sum_k \frac{\partial\phi}{\partial t_k}\frac{\partial\psi}{\partial t_k} + \sum_{ij} (\mu(t)\,\mu(t)^\mathrm{T})_{ij} \,\frac{\partial \phi}{\partial x_i}\frac{\partial \psi}{\partial x_j},
\end{equation*}
regardless of the choice of the Lie group $G$.
\end{notation}
Our first result is the following general statement on separation of variables.
\begin{proposition}\label{prop-separation-1}
Let $\phi,\psi : {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ be two complex-valued functions such that $\phi$
depends only on $t \in {\mathbb R}^m$ while $\tau^\alpha(\psi)$ is independent of $t$ for all $\alpha \geq 0$.
Then the tension field of their product $\phi\cdot\psi$ satisfies the identity
\begin{equation*}
\tau^r(\phi \cdot \psi) = \sum_{\alpha=0}^r \binom{r}{\alpha} \, \tau^\alpha(\phi) \, \tau^{r-\alpha}(\psi).
\end{equation*}
In particular, if $\phi$ and $\psi$ are proper $p$-harmonic and proper $r$-harmonic on ${\mathbb R}^m \ltimes_\mathcal A G$, respectively,
then their product $\phi\cdot\psi$ is proper $(p+r-1)$-harmonic
on ${\mathbb R}^m \ltimes_\mathcal A G$.
\end{proposition}
\begin{remark}
This result is reminiscent of the variable separation statement on product manifolds, which can be found in Lemma 6.1 of \cite{Gud-14}. For our result here we need the additional assumption that $\tau^\alpha(\psi)$ is independent of $t$ for all $\alpha \geq 0$.
This assumption is superfluous for direct products of manifolds, but it is essential in our cases.
\end{remark}
\begin{proof}
Since $\phi$ is a function of $t$, we see by the formulae (\ref{eq-rm*rn-lap}) and (\ref{eq-rm*heisenberg-lap}) for the
Laplace-Beltrami operator that $\tau^\beta(\phi)$ remains a function of $t$ for all $\beta \geq 0$.
Since by assumption, the tension field $\tau^\alpha(\psi)$ is independent of $t$ for all $\alpha\geq 0$, it follows from
the relations (\ref{eq-rm*rn-conf}) and (\ref{eq-rm*heisenberg-conf}) for the conformality operator, that
\begin{equation*}
\kappa(\tau^\beta(\phi), \tau^\alpha(\psi)) = 0,
\end{equation*}
for all $\alpha,\beta \geq 0$. The first part of our statement now follows easily by induction combined with the product rule (\ref{eq-product-rule}) for the Laplace-Beltrami operator.
For the final claim of the result notice that the identity we have just proven implies that
\begin{equation*}
\tau^{p+r-2}(\phi\cdot\psi) = \binom{p+r-2}{p-1}\, \tau^{p-1}(\phi) \, \tau^{r-1}(\psi) \not= 0
\quad\text{and}\quad
\tau^{p+r-1}(\phi\cdot\psi) = 0,
\end{equation*}
so that the product $\phi\cdot\psi$ is indeed proper $(p+r-1)$-harmonic.
\end{proof}
Constructing proper $r$-harmonic functions $\psi : {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ such that $\tau^\alpha(\psi)$ is independent of $t$ for $\alpha \geq 0$
seems difficult in general, if not impossible.
However, we note that any harmonic function on ${\mathbb R}^m \ltimes_\mathcal A G$ which is independent of $t$ trivially satisfies this condition.
On the other hand, we can use our main Theorem \ref{thm-p-harmonic-main} to construct proper $r$-harmonic functions depending only on $t$.
We summarize these thoughts in the following statement.
\begin{proposition}\label{prop-pharmonic-x-t}
\leavevmode
\begin{enumerate}[label=(\roman*), font=\upshape, itemsep=0.3cm]
\item Define the complex-valued function $\psi : {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ by
\begin{equation*}
\begin{cases}
\psi(x) = a + \textstyle\sum_i v_i x_i + \textstyle\sum_{ij} B_{ij}x_ix_j & \text{if } G= {\mathbb R}^n,\\[0.2cm]
\psi(\xi,x) = a + b\xi + \textstyle\sum_i v_i x_i + \textstyle\sum_{ij} B_{ij}x_ix_j & \text{if } G= \mathrm{H}^{2n+1},
\end{cases}
\end{equation*}
where $a,b,v_i,B_{ij}\in {\mathbb C}$ are not all zero and the coefficients $B_{ij}$ form a symmetric matrix such that
\begin{equation*}
\trace (\mu(t)\,\mu(t)^\mathrm{T} B) = 0, \quad t \in {\mathbb R}^m.
\end{equation*}
Then $\psi$ is proper harmonic on ${\mathbb R}^m \ltimes_\mathcal A G$.
\item Let $(r_k)_{k=1}^m$ be a collection of positive integers
and define the complex-valued functions $\phi_{k} : {\mathbb R}^m \ltimes_\mathcal A G\to {\mathbb C}$ by
\begin{equation*}
\phi_{k}(t_k) = \begin{cases}
c_1 \, t_k^{r_k-1} e^{\omega_k t_k} + c_{2} \, t_k^{r_k-1} & \text{if }\;\omega_k \not= 0,\\[0.2cm]
c_1 \, t_k^{2r_k-1} + c_{2} \, t_k^{2r_k-2} & \text{if }\;\omega_k = 0,
\end{cases}
\end{equation*}
where $c = (c_1, c_2) \in {\mathbb C}^2$ is non-zero. Then for each $k = 1,\ldots, m$, the function $\phi_{k}$ is proper $r_k$-harmonic on ${\mathbb R}^m \ltimes_\mathcal A G$.
Furthermore, their product
\begin{equation*}
\prod_{k=1}^m \phi_k(t_k)
\end{equation*}
is proper $(r_1 + \ldots + r_m -m+1)$-harmonic on ${\mathbb R}^m \ltimes_\mathcal A G$.
\end{enumerate}
\end{proposition}
\begin{proof}
To prove part (i) we note that since $B$ is symmetric we have
\begin{equation*}
\frac{\partial^2 \psi}{\partial x_i \partial x_j} = 2\,B_{ij}
\end{equation*}
so that
\begin{equation*}
\tau(\psi) = 2\textstyle\sum_{ij} (\mu(t) \, \mu(t)^\mathrm{T})_{ij} \,B_{ij} = 2\, \trace (\mu(t)\,\mu(t)^\mathrm{T} B),
\end{equation*}
confirming the result for both choices of the Lie group $G$.
For part (ii) we note that
\begin{equation*}
\tau(t_k) = -\omega_k \quad\text{and}\quad \kappa(t_k,t_k) = 1,
\end{equation*}
so that for each $k=1,\ldots,m$ the coordinate function $t_k$ is isoparametric.
The first claim then follows from Theorem \ref{thm-p-harmonic-main} after an easy calculation.
The second claim follows by noting that
\begin{equation*}
\tau^r\lrpar{\prod_{k=1}^m \phi_k} = \sum_{j_1+\ldots+j_m = r} \binom{r}{j_1,\ldots, j_m} \, \prod_{k=1}^m \tau^{j_k}(\phi_k),
\end{equation*}
which can be proven analogously to Proposition \ref{prop-separation-1}.
\end{proof}
The matrix $B$ in part (i) of our Proposition \ref{prop-pharmonic-x-t} can of course be taken to be zero.
The following examples show, however, that in important cases non-trivial choices of $B$ are available.
\begin{example}\label{ex-g44}
Consider the four-dimensional Lie group
\begin{equation*}
\mathrm{G}_{4.4} = \big\{ \begin{bmatrix}
e^{t} & te^{t} & \frac{1}{2}t^2e^{t} & x_1\\
0 & e^{t} & te^{t} & x_2\\
0 & 0 & e^{t} & x_3\\
0 & 0 & 0 & 1
\end{bmatrix} \,\mid\, (t,x) \in {\mathbb R} \times {\mathbb R}^3 \big\}.
\end{equation*}
This is the semidirect product
${\mathbb R} \ltimes_\mathcal A {\mathbb R}^3$ with respect to the family $\mathcal A$ consisting of the single matrix
\begin{equation*}
\begin{bmatrix}
1 & 1 & 0\\
0 & 1 & 1\\
0 & 0 & 1
\end{bmatrix}.
\end{equation*}
Then Proposition \ref{prop-separation-1}, combined with Proposition \ref{prop-pharmonic-x-t}, shows that the function
\begin{equation*}
(t,x) \mapsto (c_1 t^{r-1}e^{3 t} + c_{2}t^{r-1}) (a_1 + a_2 x_1 + a_3 x_2 + a_4 (x_2^2 - x_3^2 - 2x_1x_3))
\end{equation*}
is proper $r$-harmonic on $\mathrm{G}_{4.4}$ for any non-zero elements $c \in {\mathbb C}^2$ and $a \in {\mathbb C}^4$.
Here $\omega = \trace(A_1) = 3$, and the quadratic part corresponds to the symmetric matrix $B$ with non-zero entries $B_{22} = a_4$, $B_{33} = -a_4$ and $B_{13} = B_{31} = -a_4$, which satisfies
\begin{equation*}
\trace(\mu(t)\,\mu(t)^\mathrm{T} B) = e^{2t}\,a_4 \left(-2\cdot\tfrac{t^2}{2} + (1+t^2) - 1\right) = 0, \quad t \in {\mathbb R}.
\end{equation*}
\end{example}
\begin{example}\label{ex-g48}
For $\alpha \in [-1,1]$ consider the following four-dimensional Lie group
\begin{equation*}
\mathrm{G}_{4.8}^\alpha
=
\big\{
\begin{bmatrix}
e^{(1+\alpha) t} & -\frac{x_2}{2}e^{t} & \frac{x_1}{2}e^{\alpha t} & \xi\\[0.1cm]
0 & e^{t} & 0 & x_1\\
0 & 0 & e^{\alpha t} & x_2\\
0 & 0 & 0 & 1
\end{bmatrix}
\,\mid\, (t,\xi,x) \in {\mathbb R} \times {\mathbb R} \times {\mathbb R}^2 \big\}.
\end{equation*}
This is the semidirect product
${\mathbb R} \ltimes_\mathcal A \mathrm{H}^3$ with respect to the family $\mathcal A$ consisting of the single matrix
\begin{equation*}
\begin{bmatrix}
1 & 0\\
0 & \alpha
\end{bmatrix}.
\end{equation*}
Then Proposition \ref{prop-separation-1}, in conjunction with Proposition \ref{prop-pharmonic-x-t}, implies that the function defined by
\begin{equation*}
(t,\xi,x) \mapsto \begin{cases}
(c_1 t^{2r-1} + c_{2} t^{2r-2}) (a_1 + a_2\xi + a_3 x_1 + a_4 x_2 + a_5 x_1 x_2) & \text{if }\;\alpha = -1,\\[0.2cm]
(c_1 t^{r-1}e^{2(1+\alpha) t} + c_{2}t^{r-1}) (a_1 + a_2\xi + a_3 x_1 + a_4 x_2 + a_5 x_1 x_2) & \text{if }\;\alpha \not= -1
\end{cases}
\end{equation*}
is proper $r$-harmonic on $\mathrm{G}_{4.8}^\alpha$ for any non-zero elements $c \in {\mathbb C}^2$ and $a \in {\mathbb C}^5$.
\end{example}
We now proceed by constructing a non-trivial class of complex isoparametric functions on ${\mathbb R}^m \ltimes_\mathcal A G$.
These functions will depend only on the variables $t$ and $x$, i.e.\ they will be independent of the variable $\xi$ if $G = \mathrm{H}^{2n+1}$.
Here we will use the well-known fact that the elements of a commuting family of matrices always possess a common eigenvector.
\begin{proposition}\label{lemma-phi}
Let $v$ be a common eigenvector of the commuting family $\mathcal A^\mathrm{T} = (A_k^\mathrm{T})_{k}$
and $\lambda = (\lambda_1, \ldots, \lambda_m)$ be the vector consisting of the corresponding eigenvalues.
Then the complex-valued function $\phi : {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ defined by
\begin{equation*}
\phi(t,x) = e^{-\langle \lambda, t \rangle} \langle v,x \rangle
\end{equation*}
is complex isoparametric on ${\mathbb R}^m \ltimes_\mathcal A G$ with
\begin{equation*}
\tau(\phi) = \langle \lambda, \lambda + \omega \rangle \, \phi \quad \text{and} \quad
\kappa(\phi,\phi) = \langle \lambda, \lambda \rangle \, \phi^2 + \langle v, v \rangle.
\end{equation*}
\end{proposition}
\begin{remark}
Here, $\langle \cdot, \cdot \rangle$ denotes the complex bilinear product of vectors given by
\begin{equation*}
\langle v,w \rangle = \sum_{i} v_i w_i, \quad v,w\in {\mathbb C}^n.
\end{equation*}
\end{remark}
\begin{proof}
We have
\begin{equation*}
\frac{\partial \phi}{\partial t_k} = -\lambda_k \, \phi, \quad \frac{\partial^2 \phi}{\partial t_k^2} = \lambda_k^2 \, \phi, \quad
\frac{\partial \phi}{\partial x_i} = e^{-\langle\lambda, t\rangle}v_i\ \ \text{and}\ \ \frac{\partial^2 \phi}{\partial x_i \partial x_j} = 0.
\end{equation*}
Thus,
\begin{equation*}
\tau(\phi) = \sum_k ( \lambda_k^2 + \omega_k \lambda_k ) \, \phi = \langle\lambda, \lambda + \omega \rangle \, \phi,
\end{equation*}
as well as
\begin{align*}
\kappa(\phi,\phi)
&= \sum_k \lambda_k^2 \,\phi^2 + e^{-2\langle \lambda, t\rangle} \sum_{ij}(\mu(t) \, \mu(t)^\mathrm{T})_{ij} v_iv_j\\[0.1cm]
&= \langle \lambda, \lambda\rangle\,\phi^2 + e^{-2\langle\lambda, t\rangle} \langle\mu(t)^\mathrm{T} v, \mu(t)^\mathrm{T} v\rangle\\[0.1cm]
&= \langle \lambda, \lambda\rangle\,\phi^2 + \langle v,v\rangle,
\end{align*}
where the final equality follows from the fact that $v$ is an eigenvector of
\begin{equation*}
\mu(t)^\mathrm{T} = \Exp\lrpar{\textstyle\sum_k A_k^\mathrm{T} t_k}
\end{equation*}
with eigenvalue $e^{\langle\lambda,t\rangle}$ by assumption.
\end{proof}
Having constructed complex isoparametric functions, we can now apply Theorem \ref{thm-p-harmonic-main} to generate examples of $r$-harmonic functions.
As one might suspect, the antiderivatives from Theorem \ref{thm-p-harmonic-main} cannot be computed explicitly in general.
In what follows, we consider several specific cases where this is possible.
\begin{example}\label{ex:g_41}
The four-dimensional Lie group
\begin{equation*}
\mathrm{G}_{4.1} = \big\{ \begin{bmatrix}
1 & t & \frac{1}{2}t^2 & x_1\\
0 & 1 & t & x_2\\
0 & 0 & 1 & x_3\\
0 & 0 & 0 & 1
\end{bmatrix} \,\mid\, (t,x) \in {\mathbb R} \times {\mathbb R}^3 \big\}
\end{equation*}
is the semidirect product ${\mathbb R} \ltimes_\mathcal A {\mathbb R}^3$, where $\mathcal A$ consists of the single matrix
\begin{equation*}
A = \begin{bmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{bmatrix}.
\end{equation*}
All the eigenvalues of $A^\mathrm{T}$ are $0$ and the eigenspace is the one-dimensional space spanned by the unit vector $e_3$.
Thus, the function $\phi$ from Proposition \ref{lemma-phi} is simply the coordinate function
$(t,x) \mapsto x_3$ satisfying
\begin{equation*}
\tau(\phi) = 0 \quad\text{and}\quad \kappa(\phi,\phi) = 1.
\end{equation*}
A simple application of Theorem \ref{thm-p-harmonic-main} then shows that the function
\begin{equation*}
(t,x) \mapsto a_1 x_3^{2r-1} + a_2 x_3^{2r-2}
\end{equation*}
is proper $r$-harmonic on $\mathrm{G}_{4.1}$.
By construction this function also satisfies the condition from Proposition \ref{prop-separation-1} in that its Laplacian of any order is independent of $t$.
Hence, if $p,r,q$ are positive integers such that $r+q-1=p$,
Proposition \ref{prop-separation-1} combined with part (ii) of Proposition \ref{prop-pharmonic-x-t} implies that the function
defined by
\begin{equation*}
(t,x) \mapsto (c_1 t^{2r-1} + c_2 t^{2r-2})(a_1 x_3^{2q-1} + a_2 x_3^{2q-2})
\end{equation*}
is proper $p$-harmonic on $\mathrm{G}_{4.1}$ for any non-zero $a,c \in {\mathbb C}^2$.
\end{example}
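The proper $2$-harmonicity claimed above (the case $r=2$, i.e.\ the function $x_3^{3}$) can be checked symbolically. The sketch below is purely illustrative and rests on two assumptions read off from the proof of Proposition \ref{lemma-phi} rather than stated here: that on ${\mathbb R} \ltimes_\mathcal A {\mathbb R}^3$ the Laplacian takes the coordinate form $\tau(f) = f_{tt} - \omega f_t + \sum_{ij}(\mu(t)\mu(t)^\mathrm{T})_{ij}\, f_{x_i x_j}$ with $\mu(t) = \Exp(At)$, and that $\omega = \operatorname{tr}(A) = 0$ in this example.

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
xs = [x1, x2, x3]

# Assumed coordinate form of the Laplacian on R x_A R^3 (read off from the
# proof of Proposition lemma-phi, with omega = tr(A)):
#   tau(f) = f_tt - tr(A) f_t + sum_ij (mu mu^T)_ij f_{x_i x_j},
# where mu(t) = exp(A t).
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
mu = (A * t).exp()                      # = [[1, t, t^2/2], [0, 1, t], [0, 0, 1]]
M = (mu * mu.T).applyfunc(sp.expand)

def tau(f):
    out = sp.diff(f, t, 2) - A.trace() * sp.diff(f, t)
    out += sum(M[i, j] * sp.diff(f, xs[i], xs[j])
               for i in range(3) for j in range(3))
    return sp.expand(out)

# x3^3 is proper 2-harmonic: its Laplacian is nonzero, its bi-Laplacian vanishes.
assert tau(x3**3) == 6 * x3
assert tau(tau(x3**3)) == 0
```

The same check with $f = x_3^{2r-1}$ reduces the exponent by two per application of $\tau$, which is exactly why the exponent $2r-1$ yields a proper $r$-harmonic function.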
\begin{example}\label{ex-sol3}
The celebrated Thurston geometry $\mathbf{Sol}^3$ is the solvable Lie group
\begin{equation*}
\mathbf{Sol}^3 = \big\{ \begin{bmatrix}
e^t & 0 & x_1\\
0 & e^{-t} & x_2\\
0 & 0 & 1
\end{bmatrix} \,\mid\, (t,x) \in {\mathbb R} \times {\mathbb R}^2 \big\}.
\end{equation*}
Examples of proper $r$-harmonic functions on this Lie group have already been constructed in \cite{Gud-14} and \cite{Gud-Sif-2}.
We can view this Lie group as the semidirect product ${\mathbb R} \ltimes_\mathcal A {\mathbb R}^2$, where $\mathcal A$ consists of the matrix
\begin{equation*}
A = \begin{bmatrix}
1 & 0\\
0 & -1
\end{bmatrix}.
\end{equation*}
The eigenvalues of $A^\mathrm{T}$ are $1$ and $-1$, and the corresponding eigenvectors are $e_1$ and $e_2$, respectively.
In view of Proposition \ref{lemma-phi} we get an isoparametric function for each eigenvector, namely
\begin{equation*}
\phi_1(t,x) = e^{-t}x_1 \quad\text{and}\quad \phi_2(t,x) = e^t x_2,
\end{equation*}
both of which satisfy
\begin{equation*}
\tau(\phi_i) = \phi_i \quad\text{and}\quad \kappa(\phi_i,\phi_i) = \phi_i^2+1.
\end{equation*}
Applying Theorem \ref{thm-p-harmonic-main} to these isoparametric functions, we find that the functions
\begin{align*}
(t,x) &\mapsto c_1 \arsinh(e^{-t} x_1)^{2r-1} + c_2 \arsinh(e^{-t} x_1)^{2r-2},\\[0.1cm]
(t,x) &\mapsto c_1 \arsinh(e^{t} x_2)^{2r-1} + c_2 \arsinh(e^{t} x_2)^{2r-2}
\end{align*}
are proper $r$-harmonic on $\mathbf{Sol}^3$
for any non-zero $c \in {\mathbb C}^2$.
\end{example}
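The stated values of $\tau(\phi_i)$ and $\kappa(\phi_i,\phi_i)$ on $\mathbf{Sol}^3$ can also be verified symbolically. As in the previous sketch, the coordinate forms of $\tau$ and $\kappa$ below are assumptions inferred from the proof of Proposition \ref{lemma-phi}, with $\omega = \operatorname{tr}(A) = 0$ and $\mu(t) = \Exp(At) = \mathrm{diag}(e^t, e^{-t})$; the check is illustrative only.

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2', real=True)
xs = [x1, x2]

# Assumed coordinate forms on Sol^3 = R x_A R^2 (cf. the proof of
# Proposition lemma-phi):
#   tau(f)      = f_tt - tr(A) f_t + sum_ij (mu mu^T)_ij f_{x_i x_j},
#   kappa(f, g) = f_t g_t + sum_ij (mu mu^T)_ij f_{x_i} g_{x_j}.
A = sp.diag(1, -1)
mu = sp.diag(sp.exp(t), sp.exp(-t))     # = exp(A t)
M = mu * mu.T                           # = diag(e^{2t}, e^{-2t})

def tau(f):
    out = sp.diff(f, t, 2) - A.trace() * sp.diff(f, t)
    out += sum(M[i, j] * sp.diff(f, xs[i], xs[j])
               for i in range(2) for j in range(2))
    return sp.simplify(out)

def kappa(f, g):
    out = sp.diff(f, t) * sp.diff(g, t)
    out += sum(M[i, j] * sp.diff(f, xs[i]) * sp.diff(g, xs[j])
               for i in range(2) for j in range(2))
    return sp.simplify(out)

# Both isoparametric functions satisfy tau(phi) = phi and kappa = phi^2 + 1.
for phi in (sp.exp(-t) * x1, sp.exp(t) * x2):
    assert sp.simplify(tau(phi) - phi) == 0
    assert sp.simplify(kappa(phi, phi) - (phi**2 + 1)) == 0
```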
The reader should note that in both Examples \ref{ex:g_41} and \ref{ex-sol3} the eigenvector $v$ is non-isotropic, i.e.\ $\langle v, v \rangle \not= 0$.
This is of course expected since the eigenvector is real in both cases.
However, if the eigenvector $v$ turns out to be complex, then it could happen that it is isotropic.
In this case we see directly that the function $\phi$ from Proposition \ref{lemma-phi} becomes an eigenfunction, cf.\ (\ref{eq-eigenfunction-def}).
In fact, a quick inspection of the proof of Proposition \ref{lemma-phi} shows that in this case the vector $\lambda$ can be taken arbitrarily,
rather than requiring it to consist of eigenvalues.
\begin{corollary}
\label{prop-phi-pharmonic-v-isotropic-1}
Let $v$ be a common eigenvector of the commuting family $\mathcal A^\mathrm{T} = (A_k^\mathrm{T})_{k}$
and suppose that $v$ is isotropic.
Then for any complex vector $\nu \in {\mathbb C}^m$ the function $\phi: {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ defined by
\begin{equation*}
\phi(t,x) = e^{-\langle \nu, t \rangle } \langle v, x \rangle
\end{equation*}
is an eigenfunction on ${\mathbb R}^m \ltimes_\mathcal A G$ satisfying
\begin{equation*}
\tau(\phi) = \langle \nu, \nu + \omega \rangle \, \phi \quad \text{and} \quad
\kappa(\phi,\phi) = \langle \nu, \nu \rangle \, \phi^2.
\end{equation*}
\end{corollary}
For this situation the antiderivatives from Theorem \ref{thm-p-harmonic-main} have already been computed in \cite{Gud-Sob-1}, cf.\ also (\ref{eq-p-harmonic-eigenfunction}).
\begin{example}
Consider the following interesting four-dimensional Lie group
\begin{equation*}
\mathrm{G}_{4.10}
=
\big\{ \begin{bmatrix}
e^{-t_1}\cos(t_2) & e^{-t_1}\sin(t_2) & x_1 & 0\\
-e^{-t_1}\sin(t_2) & e^{-t_1}\cos(t_2) & x_2 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & e^{t_2}\\
\end{bmatrix} \,\mid\, (t,x) \in {\mathbb R}^2 \times {\mathbb R}^2 \big\}.
\end{equation*}
This is the semidirect product ${\mathbb R}^2 \ltimes_\mathcal A {\mathbb R}^2$, where the family $\mathcal A$ consists of the two commuting matrices
\begin{equation*}
\begin{bmatrix}
-1 & 0\\
0 & -1
\end{bmatrix}, \quad
\begin{bmatrix}
0 & -1\\
1 & 0
\end{bmatrix}.
\end{equation*}
Their two common eigenvectors are $v_\pm = (1, \pm \mathrm i)$, both of which are isotropic.
Hence, we see from Corollary \ref{prop-phi-pharmonic-v-isotropic-1} that for any $\nu = (\nu_1, \nu_2) \in {\mathbb C}^2$, the functions
\begin{equation*}
\phi_\pm(t,x) = e^{-(\nu_1 t_1 + \nu_2 t_2)} (x_1 \pm \mathrm i x_2)
\end{equation*}
are eigenfunctions on $\mathrm{G}_{4.10}$ with
\begin{equation*}
\tau(\phi_\pm) = (\nu_1^2 - 2\nu_1 + \nu_2^2) \, \phi_\pm \quad\text{and}\quad \kappa(\phi_\pm, \phi_\pm) = (\nu_1^2 + \nu_2^2) \, \phi_\pm^2. \qedhere
\end{equation*}
\end{example}
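The eigenfunction identities above can likewise be verified symbolically. The sketch below assumes $\omega = (\operatorname{tr} A_1, \operatorname{tr} A_2) = (-2, 0)$ and the closed form $\mu(t) = e^{-t_1} R(t_2)$ with $R(t_2)$ a rotation; both are inferred from the proof of Proposition \ref{lemma-phi} and the matrices of $\mathcal A$, so this is a sanity check rather than part of the argument.

```python
import sympy as sp

t1, t2, x1, x2 = sp.symbols('t1 t2 x1 x2', real=True)
nu1, nu2 = sp.symbols('nu1 nu2')
ts, xs = [t1, t2], [x1, x2]

# Assumed coordinate forms on R^2 x_A R^2 (cf. the proof of Proposition
# lemma-phi), with omega = (tr A_1, tr A_2) = (-2, 0) and
# mu(t) = exp(A_1 t_1 + A_2 t_2) = e^{-t_1} * (rotation by t_2).
omega = [-2, 0]
mu = sp.exp(-t1) * sp.Matrix([[sp.cos(t2), -sp.sin(t2)],
                              [sp.sin(t2),  sp.cos(t2)]])
M = (mu * mu.T).applyfunc(sp.simplify)  # = e^{-2 t1} * identity

def tau(f):
    out = sum(sp.diff(f, tk, 2) - om * sp.diff(f, tk)
              for tk, om in zip(ts, omega))
    out += sum(M[i, j] * sp.diff(f, xs[i], xs[j])
               for i in range(2) for j in range(2))
    return sp.simplify(out)

def kappa(f, g):
    out = sum(sp.diff(f, tk) * sp.diff(g, tk) for tk in ts)
    out += sum(M[i, j] * sp.diff(f, xs[i]) * sp.diff(g, xs[j])
               for i in range(2) for j in range(2))
    return sp.simplify(out)

# phi_+ and phi_- are eigenfunctions with the eigenvalues stated above;
# isotropy of v_+- kills the inhomogeneous term of kappa.
for sign in (1, -1):
    phi = sp.exp(-(nu1 * t1 + nu2 * t2)) * (x1 + sign * sp.I * x2)
    assert sp.simplify(tau(phi) - (nu1**2 - 2 * nu1 + nu2**2) * phi) == 0
    assert sp.simplify(kappa(phi, phi) - (nu1**2 + nu2**2) * phi**2) == 0
```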
\smallskip
In the case when $v$ is isotropic, we can take the real and the imaginary part of the function $\phi$ from Proposition \ref{lemma-phi}
to obtain even more isoparametric functions.
\begin{proposition}\label{lemma-phi-ReIm}
Let $v$ be a common eigenvector of the commuting family $\mathcal A^\mathrm{T} = (A_k^\mathrm{T})_{k}$
and let $\lambda = (\lambda_1, \ldots, \lambda_m)$ be the vector consisting of the corresponding eigenvalues.
Further suppose that the eigenvector $v$ is isotropic and define the function $\phi: {\mathbb R}^m \ltimes_\mathcal A G \to {\mathbb C}$ by
\begin{equation*}
\phi(t,x) = e^{-\langle \mathfrak R\mathfrak e\,\lambda,t \rangle} \langle v, x \rangle.
\end{equation*}
Then the real part $\phi_1 = \mathfrak R\mathfrak e\,\phi$ and the imaginary part $\phi_2 = \mathfrak I\mathfrak m\,\phi$ of $\phi$ are isoparametric functions on ${\mathbb R}^m \ltimes_\mathcal A G$ with
\begin{align*}
\tau(\phi_i) &= \langle \mathfrak R\mathfrak e\,\lambda,\mathfrak R\mathfrak e\,\lambda + \omega\rangle \,\phi_i,\\[0.1cm]
\kappa(\phi_i, \phi_i) &= \langle \mathfrak R\mathfrak e\,\lambda, \mathfrak R\mathfrak e\,\lambda \rangle \, \phi_i^2 + \frac{1}{2} \langle v,\overline{v} \rangle.
\end{align*}
\end{proposition}
\begin{proof}
The result follows by direct calculations and Proposition \ref{lemma-phi}.
\end{proof}
\begin{example}
For $\alpha \geq 0$, consider the four-dimensional Lie group $\mathrm{G}_{4.9}^\alpha$ given by
\begin{equation*}
\begin{bmatrix}
e^{2\alpha t} & -\frac{x_2\cos(t)+x_1\sin(t)}{2}e^{\alpha t} & \frac{x_1\cos(t)-x_2\sin(t)}{2}e^{\alpha t} & \xi & 0\\[0.135cm]
0 & e^{\alpha t}\cos(t) & e^{\alpha t}\sin(t) & x_1 & 0\\
0 & -e^{\alpha t}\sin(t) & e^{\alpha t}\cos(t) & x_2 & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & e^t
\end{bmatrix}
\end{equation*}
where $(t,\xi,x) \in {\mathbb R} \times {\mathbb R} \times {\mathbb R}^2$.
We can view this group as the semidirect product ${\mathbb R} \ltimes_\mathcal A \mathrm{H}^3$ with respect to the family $\mathcal A$ consisting of the single matrix
\begin{equation*}
A = \begin{bmatrix}
\alpha & 1\\
-1 & \alpha
\end{bmatrix}.
\end{equation*}
The two eigenvectors of $A^\mathrm{T}$ are $v_\pm = (1,\pm \mathrm i)$ and the corresponding eigenvalues are ${\lambda_\pm = \alpha \pm \mathrm i}$.
Note that the eigenvectors $v_\pm$ are isotropic.
In what follows, it suffices to only consider the eigenvector $v = (1,\mathrm i)$ and its eigenvalue $\lambda = \alpha + \mathrm i$.
The functions $\phi_1$ and $\phi_2$ from Proposition \ref{lemma-phi-ReIm} are then given by
\begin{equation*}
\phi_i(t,\xi,x) = e^{-\alpha t} x_i,
\end{equation*}
satisfying
\begin{equation*}
\tau(\phi_i) = 3\alpha^2\phi_i \quad\text{and}\quad \kappa(\phi_i,\phi_i) = \alpha^2\phi_i^2 + 1.
\end{equation*}
We now apply Theorem \ref{thm-p-harmonic-main} to these isoparametric functions.
\smallskip
In the case when $\alpha = 0$ an easy calculation of the antiderivatives from Theorem \ref{thm-p-harmonic-main} shows that
for any positive integer $r$ and any non-zero $c\in {\mathbb C}^2$, the functions
\begin{equation*}
(t,\xi,x) \mapsto c_1 x_i^{2r-1} + c_2 x_i^{2r-2}, \quad i = 1, 2
\end{equation*}
are proper $r$-harmonic on $\mathrm{G}_{4.9}^0$.
By construction, these functions also satisfy the condition from Proposition \ref{prop-separation-1}
so we may also multiply them by a function depending only on $t$ to obtain even more examples of proper $r$-harmonic functions,
cf.\ Example \ref{ex:g_41}.
In the case when $\alpha \not= 0$ the functions $f_p$ from Theorem \ref{thm-p-harmonic-main} do not seem to possess a nice closed formula.
However, let us at least note that for any non-zero $c \in {\mathbb C}^2$ the functions given by
\begin{equation*}
(t,\xi,x) \mapsto c_1 \, \frac{2(\alpha e^{-\alpha t} x_i)^3+3\alpha e^{-\alpha t} x_i}{\lrcurl{(\alpha e^{-\alpha t} x_i)^2+1}^{3/2}} + c_2, \quad i = 1, 2
\end{equation*}
are proper harmonic on $\mathrm{G}_{4.9}^\alpha$
and the functions given by
\begin{align*}
&(t,\xi,x) \mapsto c_1 \lrpar{\arsinh(\alpha e^{-\alpha t} x_i) + \frac{(\alpha e^{-\alpha t} x_i)^3}{3\lrcurl{(\alpha e^{-\alpha t} x_i)^2+1}^{3/2}}}\\[0.1cm]
&\quad +c_2 \lrpar{ \frac{ 2(\alpha e^{-\alpha t} x_i)^3 + 3\alpha e^{-\alpha t} x_i }{\lrcurl{(\alpha e^{-\alpha t} x_i)^2+1}^{3/2}} \, \arsinh(\alpha e^{-\alpha t} x_i)
- \frac{1}{(\alpha e^{-\alpha t} x_i)^2+1}}, \quad i = 1, 2
\end{align*}
are proper biharmonic on $\mathrm{G}_{4.9}^\alpha$.
\end{example}
\vspace*{\fill}
\section{Introduction}
Large neural networks are surprisingly easy to optimize~\cite{sun2019},
despite the substantial non-convexity of
the loss as a function of the parameters~\cite{goodfellow2015}.
In particular,
it is usually found that changing the random initialization
has no effect on performance,
even though it can change the model learned
by gradient-based optimization methods%
~\cite{garipov2018}.
Understanding the cause of trainability from random initial conditions
is critical for the development of new architectures and optimization methods,
which must otherwise just hope to retain this favorable property
based on heuristics.
One possible explanation for this phenomenon
is based on the stationary points of gradient-based
optimization methods.
These methods are stationary
when the gradient of the loss function is 0,
at the \emph{critical points} of the loss.
Critical points are classified by their Morse index,
or the degree of local negative curvature
(i.e., the relative number of dimensions in parameter space
along which the curvature is negative).
Since, among all critical points,
gradient descent methods only converge to those points with index 0%
~\cite{lee2016},
which include local minima, it has been argued that
large neural networks are easy to train because
their loss functions for many problems
only have local minima at values of the loss
close to or at the global optimum.
This is known as
the \enquote{no-bad-local-minima} property.
Previous work~\cite{dauphin2014,pennington2017} has
reported numerical evidence for a convex relationship between index and loss
that supports the hypothesis that neural network loss functions
have the no-bad-local-minima property:
for low values of the loss, only low values of the index were observed,
whereas for high values of the loss, only high values of the index were observed.
However, more recent theoretical work
has indicated that there are in fact
bad local minima on neural network losses
in almost all cases%
~\cite{ding2019}.
The validity of the numerical results
depends on the validity of
the critical point-finding algorithms, and
the second-order critical point-finding algorithms
used in~\cite{dauphin2014} and~\cite{pennington2017}
are not in fact guaranteed to find critical points
in the case where the Hessian is singular.
In this case, the second-order information used
by these critical point-finding methods becomes unreliable.
Neural network loss Hessians are typically highly singular~\cite{sagun2017},
and poor behavior of Newton-type critical point-finding methods has been reported
in the neural network case~\cite{coetzee1997},
casting doubt on the completeness and accuracy of the results
in~\cite{dauphin2014} and~\cite{pennington2017}.
\cite{frye2019} verified that second-order methods
can in fact find high-quality approximate critical points
for linear neural networks,
for which the analytical form of the critical points is known%
~\cite{baldi1989},
providing ground truth.
In particular, the two-phase convergence pattern
predicted by the classical analysis of Newton methods~\cite{nocedal2006}
is evident:
a linear phase followed by a short,
local supralinear phase
(\figref{fig:nmr_comparison}A).
The supralinear convergence is visible in the
\enquote{cliffs} in the blue{} traces in
\figref{fig:nmr_comparison}A,
where the convergence rate suddenly improves.
With a sufficiently strict cut-off on the gradient norms,
the correct loss-index relationship obtained analytically
(\figref{fig:nmr_comparison}B, gray points)
is shared by the points obtained numerically
(\figref{fig:nmr_comparison}B,
light blue{} points).
With an insufficiently strict cutoff,
the loss-index relationship implied by the observed points
is far from the truth
(\figref{fig:nmr_comparison}B,
dark red points).
Unfortunately, good performance on linear networks
does not guarantee good performance on non-linear networks.
When applied to a non-linear network,
even with the same data,
the behavior of these Newton methods changes dramatically
for the worse
(\figref{fig:nmr_comparison}C).
No runs exhibit supralinear convergence
and the gradient norms at termination are many orders of magnitude larger.
These are not the signatures of a method converging
to a critical point,
even though gradient norms are sometimes still
under the thresholds reported
in~\cite{pennington2017} and~\cite{frye2019}
(no threshold reported in~\cite{dauphin2014}).
This makes it difficult to determine whether the
putative loss-index relationship measured from these critical points
(\figref{fig:nmr_comparison}D)
accurately reflects the loss-index relationship
at the true critical points of the loss function.
In this paper,
we identify a major cause of this failure
for second-order critical point-finding methods:
\emph{gradient-flat regions},
where the gradient is approximately in the kernel of the Hessian.
In these regions, the loss function
is locally approximately linear along the direction of the gradient,
whether or not the gradient is itself small,
as would be the case near a true critical point.
We first define gradient-flatness (\secref{sec:theory})
and explain, with a low-dimensional example (\secref{sec:toy}),
why it is problematic for second-order methods:
gradient-flat points can be \enquote{bad local minima}
for the problem of finding critical points.
We then provide evidence that gradient-flat regions
are encountered when applying
the Newton-MR algorithm to a deep neural network loss
(\secref{sec:results}, \appref{app:mlp}).
Furthermore, we show that, though gradient-flat regions need not
contain actual critical points,
the loss-index relationship looks strikingly similar to that reported
in~\cite{dauphin2014} and~\cite{pennington2017},
suggesting that these previous studies may have found gradient-flat regions,
not critical points.
Finally, we note the implications of gradient-flatness for the
design of second-order methods for use in optimizing neural networks:
in the presence of gradient-flatness,
approximate second-order methods,
like K-FAC~\cite{martens2015} and Adam~\cite{kingma2014},
may be preferable to exact second-order methods, even without taking computational cost into account.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\linewidth]{img/fig_nmr_comparison.pdf}
\caption{\textbf{Newton Methods that Find Critical Points on a Linear Network
Fail on a Non-Linear Network}.
\textbf{A-B}: Newton-MR on a linear autoencoder
applied to multivariate Gaussian data,
as in~\cite{frye2019}.
\emph{A}:
Squared gradient norms of the loss $L$,
as a function of the parameters $\theta$,
across iterations of Newton-MR,
colored by whether, after early termination or 1000 epochs (whichever comes first),
squared gradient norms are below 1e-10 (blue{})
or not (orange{}).
\emph{B}:
The loss and Morse index of putative
and actual critical points, with ground truth.
The Morse index is defined as the fraction of negative eigenvalues.
Analytically-derived critical points in gray,
points from the end of runs that terminate
below a squared gradient norm of 1e-10 in light blue{},
and points from trajectories stopped early,
once they pass a squared gradient norm of 1e-2, in dark red.
\textbf{C-D}:
As in \emph{A}-\emph{B},
on the same network architecture and data,
but with Swish~\cite{ramachandran2017} non-linear activations
instead of identity activations.
\emph{D}:
Loss and Morse index of putative critical points.
Points with squared gradient norm above 1e-10 in orange{},
those below 1e-10 in blue{}.
Analytical expressions for critical points are not available
for this non-linear network.}
\label{fig:nmr_comparison}
\end{figure}
\section{Gradient-Flat Points are Stationary Points for Second-Order Methods}\label{sec:gfp}
In this section,
we introduce and define gradient-flat points and
explain why they are problematic for second-order
critical point-finding methods,
with the help of a low-dimensional example
to build intuition.
In numerical settings and in high dimensions,
approximately gradient-flat points are also important,
and so we define a quantitative index of gradient-flatness
based on the residual norm of the Newton update.
Connected sets of these numerically gradient-flat points
are gradient-flat regions,
which cause trouble for
second-order critical point-finding methods.
\subsection{At Gradient-Flat Points, the Gradient Lies in the Hessian's Kernel}\label{sec:theory}
Critical points are of interest because
they are points where the first-order approximation
of a function $f$ at a point\footnote{Note that,
for a neural network loss function,
the variable we take the gradient with respect to,
here $x$, is the vector of parameters, $\theta$,
not the data, which is often denoted with an $x$.}
$x+\delta$ based on the local information at $x$
\begin{equation}
f(x + \delta) \approx f(x) + \grad{f}{x}^\top \delta
\end{equation}
is constant, indicating that they are the stationary points
of first-order optimization algorithms
like gradient descent and its accelerated variants.
By stationary point, we mean a point at which the proposed
updates of an optimization algorithm are zero.
Stationary points need not be
points to which the algorithm converges.
For example,
gradient descent almost surely only converges
to critical points with index 0~\cite{lee2016,jin2018a},
even though its stationary points include saddle points and maxima.
However, the curvature properties of non-minimal stationary points show up
in proofs of the convergence rates of first order optimizers,
e.g.~\cite{jin2018a,jin2018b,jin2018c},
and so knowledge of their properties can guide algorithm design.
In searching for critical points,
it is common to use a linear approximation to the behavior of the gradient
at a point $x + p$ given the local information at a point $x$
\begin{equation}
\grad{f}{x + p} \approx \grad{f}{x} + \hess{f}{x}p
\end{equation}
Because these methods rely on a quadratic approximation of the original function $f$,
represented by the Hessian matrix of second partial derivatives,
we call them second-order critical point-finding methods.
The approximation on the right-hand side is constant whenever
$p$ is an element of $\ker{\hess{f}{x}}$,
where $\ker{M}$ is notation for the kernel of a matrix $M$
--- the subspace that $M$ maps to 0.
When $\hess{f}{x}$ is non-singular,
this is only satisfied when $p$ is $0$,
so if we can define an update rule such that
$p=0$ iff $\grad{f}{x}=0$,
then, for non-singular Hessians,
we can be sure that our method is stationary only at critical points.
In a Newton-type method,
we achieve this by selecting our step by
solving for the zeroes of this linear approximation,
i.e.~the Newton system,
$$
0 = \grad{f}{x} + \hess{f}{x}p
$$
which has solution
$$
p = -\hess{f}{x}^{+}\grad{f}{x}
$$
where the matrix $M^+$ is the Moore-Penrose
pseudoinverse of the matrix $M$,
obtained by performing the singular value decomposition,
inverting the non-zero singular values,
and recomposing the SVD matrices in reverse order.
The Newton update $p$ is zero iff $\grad{f}{x}$ is 0
for a non-singular Hessian,
for which the pseudo-inverse is simply the inverse.
For a singular Hessian,
the update $p$ is zero iff $\grad{f}{x}$ is
in the kernel of the pseudoinverse.
Note that if the Hessian is constant as a function of $x$,
the linear model of the gradient is exact and
this algorithm converges in a single step.
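The kernel condition is easy to see in code. The following minimal sketch (ours, not drawn from the cited works) applies `numpy.linalg.pinv` to a toy singular Hessian:

```python
import numpy as np

# A singular "Hessian" whose kernel is spanned by the second coordinate axis.
H = np.diag([2.0, 0.0])

# If the gradient lies entirely in ker(H), the pseudoinverse Newton update
# p = -H^+ g vanishes even though the gradient itself does not: the method
# is stationary at a non-critical point.
g_flat = np.array([0.0, 3.0])
assert np.allclose(-np.linalg.pinv(H) @ g_flat, 0.0)

# For a generic gradient, only the component outside the kernel is corrected.
g = np.array([4.0, 3.0])
p = -np.linalg.pinv(H) @ g
assert np.allclose(p, [-2.0, 0.0])
```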
Within the vicinity of a critical point,
this algorithm converges extremely quickly~\cite{nocedal2006},
but the guarantee of convergence is strictly local.
Practical Newton methods in both
convex optimization~\cite{boyd2004} and
non-linear equation solving~\cite{nocedal2006,izmailov2014}
often compare multiple possible choices of $p$
and select the best one according to a
\enquote{merit function} applied to the gradients
which has a global minimum
for each critical point.
Such algorithms have broader guarantees of global convergence.
A common choice for merit function
is the squared norm,
$$
g(x) = \frac{1}{2}\sumsq{\grad{f}{x}}
$$
In gradient norm minimization~\cite{mciver1972},
we optimize this merit function directly.
The gradients of this method are
$$
\grad{g}{x} = \hess{f}{x}\grad{f}{x}
$$
As with Newton methods, in the invertible case
the updates are zero iff $\grad{f}{x}$ is 0.
In the singular case,
the updates are zero iff the gradient is in the Hessian's kernel.
Because this method is framed as the minimization of a scalar function,
it is compatible with first-order optimization methods,
which are more commonly implemented and better supported
in neural network libraries.
However, neural network Hessians are generally singular,
especially in the overparameterized case~\cite{sagun2017,ghorbani2019},
meaning the kernel is non-trivial,
and so neither class of methods can guarantee
convergence to critical points.
Newton's method can diverge,
oscillate, or behave chaotically~\cite{griewank1983}.
The addition of merit function-based upgrades
can remove these behaviors,
but it cannot guarantee convergence to critical points~\cite{powell1970,griewank1983}.
The gradient norm minimization method described
in~\cite{pennington2017}
was previously proposed and this flaw pointed out
twice in the field of chemical physics ---
once in the 1970s
(proposed~\cite{mciver1972}, critiqued~\cite{cerjan1981})
and again in the 2000s
(proposed simultaneously~\cite{angelani2000,broderix2000},
critiqued~\cite{doye2002}).
What are the stationary points, besides critical points,
for these two method classes in the case of singular Hessians?
It would seem at first that they are different:
for gradient norm minimization,
when the gradient is in the Hessian's kernel;
for Newton-type methods,
when the gradient is in the Hessian's pseudoinverse's kernel.
In fact, however,
these conditions are identical, due to the Hessian's symmetry\footnote{
Indeed, the kernel of the pseudo-inverse is equal
to the kernel of the transpose,
as can be seen from the singular value decomposition,
and the Hessian is equal to its transpose because it is symmetric.
See~\cite{strang1993}.},
and so both algorithms share a broad class of stationary points.
These stationary points have been identified previously,
but nomenclature is not standard:
Doye and Wales, studying gradient norm minimization,
call them \emph{non-stationary points}~\cite{doye2002},
since they are non-stationary with respect to the function $f$,
while Byrd et al., studying Newton methods,
call them \emph{stationary points}~\cite{byrd2004},
since they are stationary with respect to the merit function $g$.
To avoid confusion between these incommensurate conventions
or with the stationary points of the function $f$,
we call a point where the gradient lies in the kernel
of the Hessian a \emph{gradient-flat} point.
This name was chosen because a function is \emph{flat}
when its Hessian is 0, meaning every direction is in the kernel,
and so it is locally flat around a point in a given direction
whenever that direction is in the kernel of the Hessian at that point.
Note that, because $0 \in \ker{M}$ for every matrix $M$,
every critical point is also a gradient-flat point,
but the reverse is not true.
When we wish to explicitly refer to gradient-flat points
which are not critical points,
we will call them \emph{strict} gradient-flat points.
At a strict gradient-flat point, the function is,
along the direction of the gradient,
locally linear up to second order.
There is an alternative view of gradient-flat points
based on the squared gradient norm merit function.
All gradient-flat points are stationary points
of the gradient norm,
which may in principle be local minima, maxima, or saddles,
while the global minima of the gradient norm are critical points.
When they are local minima of the gradient norm,
they can be targets of convergence
for methods that use
first-order approximations of the gradient map,
as in gradient norm minimization and in Newton-type methods.
Strict gradient-flat points, then,
can be \enquote{bad local minima} of the gradient norm,
and therefore prevent the convergence of
second-order root-finding methods
to critical points,
just as bad local minima of the loss function
can prevent convergence of first-order optimization methods
to global optima.
Note that Newton methods cannot be demonstrated to converge
only to gradient-flat points~\cite{powell1970}.
Furthermore, Newton convergence can be substantially slowed
when even a small fraction of the gradient
is in the kernel~\cite{griewank1983}.
Below we will see that,
while a Newton method applied to a neural network loss
sometimes converges to and almost always encounters strict gradient-flat points,
the final iterate is not always either
a strict gradient-flat point or a critical point.
\subsection{Convergence to Gradient-Flat Points Occurs in a Low-Dimensional Quartic Example}\label{sec:toy}
The difficulties that gradient-flat points pose for Newton methods
can be demonstrated with a polynomial example in two dimensions,
plotted in \figref{fig:toy}A.
Below,
we will characterize
the strict gradient-flat (orange{})
and critical (blue{}) points of this function
(\figref{fig:toy}A).
Then, we will observe the behavior of a practical Newton
method applied to it (\figref{fig:toy}B-C)
and note similarities to the results in \figref{fig:nmr_comparison}.
We will use this simple, low-dimensional example
to demonstrate principles useful
for understanding the results of applying
second-order critical point-finding methods to more complex,
higher-dimensional neural network losses.
As our model function, we choose
\begin{equation}\label{eqn:toy}
f(x, y) = \tfrac{1}{4} x^4 - 3x^2 + 9x + 0.9y^4 + 5y^2 + 40
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/fig_toy_problem.pdf}
\caption{\textbf{Stationarity of and Convergence to
a Strict Gradient-Flat Point on a Quartic Function}.
\textbf{A}:
Critical and strict gradient-flat points
of quartic $f(x,y)$ (defined in \eqnref{eqn:toy}).
Middle panel:
$f(x,y)$
plotted in color
(black, low values; white, high values),
along with the direction of the Newton update $p$
as a (notably non-smooth) vector field (red).
Stationary points of
the squared gradient norm merit function $g$ are indicated:
strict gradient-flat points in orange{},
the critical point in blue{}.
Top and bottom panels:
The value (top) and squared gradient norm (bottom)
of $f$ as a function of $x$ value
with $y$ fixed at 0.
The $x$ axis is shared between panels.
\textbf{B}:
Performance and trajectories of Newton-MR~\cite{roosta2018}
on~\eqnref{eqn:toy}.
Runs that terminate near a strict gradient-flat point
are in orange{},
while those that terminate near
a critical point are in blue{}.
Middle panel:
Trajectories of Newton-MR laid over
$f(x, y)$.
$x$ and $y$ axes are shared with the middle panel of
\emph{A}.
Initial values indicated with scatter points.
Top and bottom panels:
Function values (top) and squared gradient norms (bottom)
of Newton-MR trajectories as a function of iteration.
The $x$ axis is shared between panels.}
\label{fig:toy}
\end{figure}
It is plotted in~\figref{fig:toy}A, middle panel.
This quartic function has two affine subspaces
of points with non-trivial Hessian kernel,
defined by $[\pm\sqrt{2}, y]$.
The kernel points along the $x$ direction and so
is orthogonal to this affine subspace at every point.
As a function of $y$, $f$ is convex,
with one-dimensional minimizers at $y=0$.
The strict gradient-flat points occur at the intersections
of these two sets:
one strict gradient-flat point at $[\sqrt{2}, 0]$,
which is a local minimum of the gradient norm,
and one at $[-\sqrt{2}, 0]$,
which is a saddle of the same
(\figref{fig:toy}A, orange{} points, all panels).
In the vicinity of these points, the gradient is,
to first order, constant along the $x$-axis,
and so the function is locally linear or flat.
These points are gradient-flat but
neither is a critical point of $f$.
The only critical point is located at the minimum of the polynomial,
at $[-3, 0]$
(\figref{fig:toy}A, blue{} point, all panels),
which is also a global minimum of the gradient norm.
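These statements are easy to verify numerically. The following minimal sketch (assuming NumPy) evaluates the gradient and Hessian of \eqnref{eqn:toy} at the critical point and at a strict gradient-flat point:

```python
import numpy as np

# f(x, y) = (1/4) x^4 - 3 x^2 + 9 x + 0.9 y^4 + 5 y^2 + 40
def grad(z):
    x, y = z
    return np.array([x**3 - 6*x + 9, 3.6*y**3 + 10*y])

def hess(z):
    x, y = z
    return np.array([[3*x**2 - 6, 0.0],
                     [0.0, 10.8*y**2 + 10]])

# critical point [-3, 0]: zero gradient, nonsingular Hessian
g_c, H_c = grad([-3.0, 0.0]), hess([-3.0, 0.0])

# strict gradient-flat point [sqrt(2), 0]: nonzero gradient
# lying entirely in the kernel of a singular Hessian
g_f, H_f = grad([np.sqrt(2), 0.0]), hess([np.sqrt(2), 0.0])
```

Here $g_f = [9 - 4\sqrt{2},\, 0]$, which the singular Hessian annihilates, so the point is gradient-flat but not critical.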
The affine subspace that passes through
$[-\sqrt{2}, 0]$ divides the space into two
basins of attraction, loosely defined,
for second-order methods:
one, with initial $x$-coordinate $x_0<-\sqrt{2}$,
for the critical point of $f$
and the other for the strict gradient-flat point.
Note that the vector field in the middle panel shows update directions
for the pure Newton method,
which can behave extremely poorly in the vicinity
of singularities~\cite{powell1970,griewank1983},
often oscillating and converging very slowly
or diverging.
Practical Newton methods use techniques like
damping and line search to improve behavior~\cite{izmailov2014}.
To determine how a practical Newton method
behaves on this function,
we focus on the case of Newton-MR~\cite{roosta2018},
which uses the MR-QLP~\cite{choi2011} solver%
\footnote{MR-QLP, short for MINRES-QLP,
is a Krylov subspace method akin to conjugate gradient
but specialized to the symmetric, indefinite and ill-conditioned case,
which makes it well-suited to this problem and to neural network losses.}
to compute the Newton update
and back-tracking line search
with the squared gradient norm merit function
to select the step size.
Pseudocode for this algorithm is provided in
~\appref{app:nmr}.
This method was found to perform better than
a damped Newton method
and gradient norm minimization
on finding the critical points of a linear autoencoder in~\cite{frye2019}.
Results are qualitatively similar for damped Newton methods
with a squared gradient norm merit function.
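To make this concrete, here is a minimal sketch of a Newton method in this spirit for the toy quartic: an inexact Newton direction from a least-squares solve (with `np.linalg.lstsq` standing in for MR-QLP) combined with back-tracking line search on the squared gradient norm. It is an illustration under these substitutions, not a reference Newton-MR implementation:

```python
import numpy as np

def grad(z):
    x, y = z
    return np.array([x**3 - 6*x + 9, 3.6*y**3 + 10*y])

def hess(z):
    x, y = z
    return np.array([[3*x**2 - 6, 0.0],
                     [0.0, 10.8*y**2 + 10]])

def newton_gradnorm(z0, iters=100):
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        g, H = grad(z), hess(z)
        # inexact Newton direction; lstsq also handles a singular Hessian
        p, *_ = np.linalg.lstsq(H, -g, rcond=None)
        # back-tracking line search on the merit function ||grad f||^2
        alpha, merit = 1.0, g @ g
        while alpha > 1e-12:
            g_new = grad(z + alpha * p)
            if g_new @ g_new < merit:
                break
            alpha *= 0.5
        z = z + alpha * p
    return z
```

Started at $[-4, 1]$, the iterates converge to the critical point $[-3, 0]$; initializations with $x_0 > -\sqrt{2}$ typically tend toward the gradient-flat point at $[\sqrt{2}, 0]$ instead.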
The results of applying Newton-MR
to~\eqnref{eqn:toy} are shown in%
~\figref{fig:toy}B.
The gradient-flat point is attracting
for some trajectories
(orange{}),
while the critical point is attracting for others
(blue{}).
For trajectories that approach the strict gradient-flat point,
the gradient norm does not converge to 0,
but converges to a non-zero value near 10
(orange{} trajectories; \figref{fig:toy}B, bottom panel).
This value is typically several orders of magnitude lower
than the initial point, and so would appear to be close to 0
on a linear scale that includes the gradient norm of the initial point.
Since log-scaling of loss functions is uncommon in machine learning,
as losses do not always have minima at 0,
second-order methods approaching gradient-flat points
can appear to converge to critical points
if typical methods for visually assessing convergence are used.
There are two interesting and atypical behaviors worth noting.
First, the trajectories tend to oscillate
in the vicinity of the gradient-flat point
and converge more slowly
(\figref{fig:toy}B, middle panel, orange{} lines).
Updates from points close to the affine subspace where the Hessian has a kernel,
and whose Hessians therefore have an approximate kernel,
sometimes jump to points where the Hessian has no approximate kernel.
This suggests that, when converging towards a gradient-flat point,
the degree of flatness will change iteration by iteration.
Second, some trajectories begin in the nominal basin
of attraction of the gradient-flat point
but converge to the critical point
(\figref{fig:toy}B, middle panel, blue{} points
with $x$-coordinate $>-\sqrt{2}$).
This is because the combination of back-tracking line search
and large proposed step sizes means that occasionally,
very large steps can be taken, based on non-local features of the function.
Indeed,
back-tracking line search is a limited form of global optimization
and the ability of line searches
to change convergence behaviors predicted from local properties
on nonconvex problems
is known~\cite{nocedal2006}.
Since the back-tracking line search is based on the gradient norm,
the basin of attraction for the true critical point,
which has a lower gradient norm than the gradient-flat point,
is much enlarged relative to that for the gradient-flat point.
This suggests that Newton methods
using the gradient norm merit function will be biased towards finding gradient-flat points
that also have low gradient norm.
\subsection{Approximate Gradient-Flat Points and Gradient-Flat Regions}\label{sec:approxgfp}
Analytical arguments focus on exactly gradient-flat points,
where the Hessian has an exact kernel
and the gradient is entirely within it.
In numerical settings,
it is almost certain no matrix will have an exact kernel,
due to rounding error.
For the same reason, the computed gradient vector will generically not lie entirely
within the exact or approximate kernel.
However, numerical implementations of second-order methods
will struggle even when there is no exact kernel
or when the gradient is only partly in it,
and so a numerical index of flatness is required.
This is analogous to the requirement to specify a tolerance
for the norm of the gradient when deciding whether to consider a point
an approximate critical point or not.
We quantify the degree of gradient-flatness of a point
by means of
the \emph{relative residual norm} ($r$)
and the \emph{relative co-kernel residual norm} ($r_H$)
for the Newton update direction $p$.
The vector $p$ is an inexact solution to the Newton system $Hp + g = 0$,
where $H$ and $g$ are the current iterate's Hessian and gradient.
The residual is equal to $Hp + g$,
and the smaller its norm, the better $p$ is as a solution.
The co-kernel residual is equal to the Hessian times the residual,
and so ignores any component in the kernel of the Hessian.
Its norm quantifies the quality of an inexact Newton solution
in the case that the gradient lies partly in the Hessian kernel,
the unsatisfiable case, where $Hp \neq -g$ for any $p$.
When the residual is large but the co-kernel residual is small
(norms near 1 and 0, respectively, following suitable normalization),
then we are at a point where
the gradient is almost entirely in the kernel of the Hessian:
an approximate gradient-flat point.
In the results below, we consider a point approximately gradient-flat
when the value of $r_H$ is below 5e-4
while the value of $r$ is above $0.9$.
See \appref{app:residuals} for definitions and details.
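As an illustration at the strict gradient-flat point of the toy quartic from \secref{sec:toy}, the following sketch computes both indices; the normalizations used here (dividing by $\|g\|$ and by $\|H\|_2\,\|g\|$) are our assumption, and the precise definitions are those of \appref{app:residuals}:

```python
import numpy as np

# Hessian and gradient of the toy quartic at [sqrt(2), 0]
H = np.array([[0.0, 0.0],
              [0.0, 10.0]])
g = np.array([9 - 4*np.sqrt(2), 0.0])

# inexact Newton direction: least-squares solution of H p = -g
p, *_ = np.linalg.lstsq(H, -g, rcond=None)

residual = H @ p + g
# relative residual norm and relative co-kernel residual norm
r = np.linalg.norm(residual) / np.linalg.norm(g)
r_H = np.linalg.norm(H @ residual) / (np.linalg.norm(H, 2) * np.linalg.norm(g))
```

Here $r \approx 1$ and $r_H \approx 0$: the gradient is numerically entirely in the Hessian kernel, flagging an approximate gradient-flat point.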
We emphasize that numerical issues for second-order methods
can arise even when the degree of gradient-flatness
is small.
Under this relaxed definition of gradient-flatness,
there will be a neighborhood of approximate gradient-flat points
around a strict, exact gradient-flat point
for functions with Lipschitz-smooth gradients and Hessians.
Furthermore, there might be connected sets of non-null Lebesgue measure
which all satisfy the approximate gradient-flatness condition
but none of which satisfy the exact gradient-flatness condition.
We call both of these \emph{gradient-flat regions}.
There are multiple reasonable numerical indices of flatness besides
the definition above.
For example,
the Hessian-gradient regularity condition in~\cite{roosta2018},
which is used to prove convergence of Newton-MR,
would suggest creating a basis for the approximate kernel of the Hessian
and projecting the gradient onto it.
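A hedged sketch of that kernel-projection index follows; the eigenvalue cutoff \texttt{eig\_tol} below is exactly the kind of arbitrary choice this approach requires:

```python
import numpy as np

def kernel_fraction(H, g, eig_tol=1e-6):
    # fraction of the gradient norm lying in the approximate kernel of H,
    # spanned by eigenvectors whose |eigenvalue| falls below the cutoff
    evals, evecs = np.linalg.eigh(H)
    K = evecs[:, np.abs(evals) < eig_tol]
    return np.linalg.norm(K.T @ g) / np.linalg.norm(g)
```

For the toy quartic, this returns 1 at the gradient-flat point $[\sqrt{2}, 0]$ and 0 at a generic point such as the origin.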
Alternatively, one could compute the Rayleigh quotient of the gradient with respect to the Hessian.
Our method has the advantage of being computed as part of the Newton-MR algorithm.
It furthermore avoids diagonalizing the Hessian or the specification
of an arbitrary eigenvalue cutoff.
The Rayleigh quotient can be computed with only one Hessian-vector product,
plus several vector-vector products,
so it might be a superior choice for larger problems where
computing a high-quality inexact Newton step is computationally infeasible.
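The Rayleigh-quotient alternative can be sketched as follows, assuming the index $R(g) = g^\top H g / \|g\|^2$, which vanishes when the gradient lies in the Hessian kernel:

```python
import numpy as np

def rayleigh_flatness(H, g):
    # one Hessian-vector product (H @ g) plus vector-vector products;
    # in practice H @ g would come from autodiff, not an explicit Hessian
    return g @ (H @ g) / (g @ g)

# toy quartic: gradient-flat point [sqrt(2), 0] vs. the generic point [0, 0]
R_flat = rayleigh_flatness(np.diag([0.0, 10.0]), np.array([9 - 4*np.sqrt(2), 0.0]))
R_generic = rayleigh_flatness(np.diag([-6.0, 10.0]), np.array([9.0, 0.0]))
```

$R$ is exactly zero at the gradient-flat point, while it is of the order of the Hessian eigenvalues at a generic point.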
\section{Gradient-Flat Regions are Common on Deep Network Losses}\label{sec:results}
To determine whether gradient-flat regions are
responsible for the poor behavior of Newton methods
on deep neural network (DNN) losses
demonstrated in~\figref{fig:nmr_comparison},
we applied Newton-MR to the loss of a
small, two hidden layer fully-connected autoencoder
trained on 10k MNIST images downsized to 4x4,
similar to the downsized datasets in~\cite{dauphin2014,pennington2017}.
We found similar results on a fully-connected classifier
trained on the same MNIST images via the cross-entropy loss
(see~\appref{app:mlp})
and another classifier trained
on a very small subset of 50 randomly-labeled MNIST images,
as in~\cite{zhang2016}
(see~\appref{app:mem}).
We focused on Newton-MR because we
found that a damped Newton method like that in~\cite{dauphin2014}
performed poorly, as reported for the XOR problem in~\cite{coetzee1997},
and furthermore that there was insufficient detail
to replicate~\cite{dauphin2014} exactly.
We denote the network losses by $L$
and the parameters by $\theta$.
See \appref{app:networks} for details on
the networks and datasets and
\appref{app:cpf} for details on
the critical point-finding experiments.
Gradient norms for the first 100 iterations
appear in \figref{fig:gfp_dnn}A.
As in the non-linear autoencoder applied to the multivariate Gaussian data
(\figref{fig:nmr_comparison}C),
we found that, after 500 iterations,
all of the runs had squared gradient norms
over 10 orders of magnitude greater than the typical
values observed after convergence in the linear case
($<$1e-30, \figref{fig:nmr_comparison}A).
14\% of runs terminated with squared gradient norm
below the cutoff in~\cite{frye2019}
and so found likely critical points
(blue{}).
Twice as many runs terminated above that cutoff
but terminated in a gradient-flat region
(28\%, orange{}),
while the remainder were above
the cutoff but were not in a gradient-flat region
at the final iteration
(black).
\begin{figure}[htpb]
\centering
\includegraphics[width=0.75\linewidth]{img/fig_gfp_dnn.pdf}
\caption{\textbf{Critical Point-Finding Methods
More Often Find Gradient-Flat Regions
on a Neural Network Loss}.
\textbf{A}:
Squared gradient norms across the first 100 iterations of Newton-MR
for 100 separate runs on an auto-encoder loss.
Gradient norms were flat after 100 iterations.
See~\appref{app:networks} for details.
Runs that terminate with squared gradient norm below 1e-10,
i.e.~at a critical point, in blue{}.
Runs that terminate above that cutoff and with $r$ above $0.9$,
i.e.~in a gradient-flat region, in orange{}.
All other runs in black.
Asterisks indicate trajectories in \emph{B}.
\textbf{B}:
The relative residual norm $r$,
an index of gradient-flatness,
for the approximate Newton update
computed by MR-QLP at each iteration
(solid lines)
for three representative traces.
Values are local averages with a window size of 10 iterations.
Raw values are plotted transparently underneath.
Top: non-flat, non-critical point (black).
Middle: flat, non-critical point (orange{}).
Bottom: flat, critical point (blue{}).
\textbf{C}:
Empirical cumulative distribution functions for
the final (top) and maximal (bottom) relative residual norm $r$ observed
during each run of Newton-MR.
Values above the cutoff for approximate gradient-flatness, $r>0.9$,
in orange{}.
Observations from runs that terminated below the cutoff for critical points,
$\sumsq{\grad{L}{\theta}} <$ 1e-10,
indicated with blue{} ticks.
\textbf{D}:
Loss and index for the maximally gradient-flat points
obtained during application of Newton-MR.
Points with squared gradient norm below 1e-10 in blue{}.
Other points colored by their gradient-flatness:
points above $0.9$ in orange{}, points below in black.
Only points with squared gradient norm below 1e-4 shown.
}
\label{fig:gfp_dnn}
\end{figure}
\afterpage{\clearpage}
The relative residual norm for the Newton solution, $r$,
is an index of gradient-flatness;
see \secref{sec:approxgfp} and \appref{app:residuals} for details.
The values of $r$ for every iteration
of Newton-MR are shown for three representative traces
in \figref{fig:gfp_dnn}B.
In the top trace,
$r$ is close to 0,
indicating that the iterates are not in a gradient-flat region
($r\ll0.9$, black).
Newton methods can be substantially slowed when even a small fraction
of the gradient is in the kernel~\cite{griewank1983}
and can converge to points that are not gradient-flat~\cite{byrd2004}.
By contrast, in the middle trace (orange{}),
the value of $r$ approaches $1$,
indicating that almost the entirety of the gradient is in the kernel.
This run terminated in a gradient-flat region,
at effectively an exactly gradient-flat point.
Further, the squared gradient norm at 500 iterations, 2e-5,
is five orders of magnitude higher than the cutoff,
1e-10.
This is smaller than the minimum observed during optimization
of this loss (squared gradient norms between 1e-4 and 5e1),
indicating the presence of non-critical gradient-flat regions
with very low gradient norm.
Critical point-finding methods that disqualify points
on the basis of their norm will both converge to
and accept these points,
even though they need not be near true critical points.
In the bottom trace (blue{}),
the behavior of $r$ is the same,
while the gradient norm drops much lower, to 3e-13,
suggesting convergence to a gradient-flat region around a critical point
that has an approximately singular Hessian.
Not all traces exhibit such simple behavior for the value of $r$.
In many traces, the value of $r$ oscillates from values close to 1
to middling values,
indicating that the algorithm is bouncing in and out
of one or more gradient-flat regions
(see~\appref{app:mlp} for examples, on a classifier).
This can occur when the final target of convergence
given infinite iterations
is a gradient-flat point,
as in the example in~\secref{sec:toy}.
We found that 99 of 100 traces included a point
where at least half of the gradient was in the kernel,
according to our residual measure,
while 89\% of traces included a point that had
a residual greater than $0.9$,
and 50\% included a point with $r > 0.99$
(\figref{fig:gfp_dnn}C, bottom).
This demonstrates that
there are many regions of substantive gradient-flatness,
in which second-order critical point-finding methods could be substantively slowed.
The original purpose of applying these critical point-finding methods
was to determine whether the no-bad-local-minima property held
for this loss function, and more broadly to characterize
the relationship at the critical points
between the loss and the local curvature,
summarized via the Morse index.
If we look at either the points found after 500 iterations
(results not shown;
see~\appref{app:mlp} for an example on a classifier)
or the iterates with the highest gradient-flatness
(\figref{fig:gfp_dnn}D),
we find that the qualitative features of the loss-index relationship reported in
\cite{dauphin2014} and \cite{pennington2017} are recreated:
convex shape, small spread at low index that increases for higher index,
no minima or near-minima at high values of the loss.
However, our analysis suggests that the majority of these points are not critical points
but either strict gradient-flat points (orange{})
or simply points of spurious or incomplete
Newton convergence (black).
The approximately critical points we do see (blue{})
have a very different loss-index relationship:
their loss is equal to the loss of a network with all parameters set to 0
and their index is low, but not 0.
\section{Discussion}
The results above and in the appendices demonstrate that gradient-flat regions,
where the gradient is nearly in the approximate kernel of the Hessian,
are a prevalent feature of some prototypical neural network loss surfaces
and that many critical point-finding methods are attracted to them.
The networks used in this paper are very small,
relative to practical networks for image recognition
and natural language processing,
which have several orders of magnitude more parameters.
However, increasing parameter count tends to
increase the singularity of loss Hessians%
~\cite{sagun2017},
and so we expect there to be even greater gradient-flatness
for larger networks.
The strategy of using gradient norm cutoffs to determine
whether a point is near enough to a critical point
for the loss and index to match the true value is natural,
but in the absence of guarantees on the smoothness of the behavior of
the Hessian (and its spectrum) around the critical point,
the numerical value sufficient to guarantee correctness is unclear.
Our observations of gradient-flat regions at extremely low
gradient norm and the separation of these values,
in terms of loss-index relationship,
from the bulk of the observations
suggest that there may be spurious targets
of convergence for critical point-finding methods
even at such low gradient norm.
Alternatively, they may in fact be near real critical points,
and so indicate that the simple, convex picture of loss-index relationship
painted by the numerical results in~\cite{dauphin2014} and~\cite{pennington2017}
is incomplete.
Indeed, recent analytical results
have demonstrated that bad local minima do exist for
almost all neural network architectures and datasets
(see~\cite{ding2019} for a helpful table of positive
and negative theoretical results regarding local minima).
Furthermore, our observation of singular Hessians
at low gradient norm
suggests that some approximate saddle points of neural network losses
may be degenerate (as defined in~\cite{jin2018a})
and non-strict (as defined in~\cite{lee2016}),
which indicates that gradient descent may be attracted to these points,
according to the analyses in~\cite{jin2018a} and~\cite{lee2016}.
These points need not be local minima.
However, in two cases we observe the lowest-index saddles at low values of the loss
(see \figref{fig:gfp_dnn}, \figref{fig:gfp_mlp})
and so these analyses still predict that gradient descent will
successfully reduce the loss,
even if it doesn't find a local minimum.
In the third case,
an over-parameterized network (\figref{fig:gfp_mem}),
we do observe a bad local minimum,
as predicted in~\cite{ding2019}
for networks capable of achieving 0 training error.
Our results motivate a revisiting of the numerical results
in~\cite{dauphin2014} and~\cite{pennington2017}.
Looking back at Figure 4 of~\cite{dauphin2014},
we see that their non-convex Newton method,
a second-order optimization algorithm designed to avoid saddle points
by reversing the Newton update along directions of negative curvature,
appears to terminate at a gradient norm of order 1.
This is only a single order of magnitude lower than what was observed during training.
It is likely that this point was either in a gradient-flat region
or otherwise had sufficient gradient norm in the Hessian kernel to
slow the progress of their algorithm.
This observation demonstrates that second-order methods
designed for optimization,
which use the loss as a merit function,
rather than norms of the gradient,
can terminate in gradient-flat regions.
In this case, the merit function encourages
convergence to points where the loss,
rather than the gradient norm, is small,
but it still cannot guarantee convergence to a critical point.
The authors of~\cite{dauphin2014} do not report a gradient norm cutoff,
among other details needed to recreate their critical point-finding experiments,
so it is unclear to which kind of points they converged.
If, however, the norms are as large as those of the targets of
their non-convex Newton method,
in accordance with our experience with damped Newton methods
and that of~\cite{coetzee1997},
then the loss-index relationships reported in their Figure 1
are likely to be for gradient-flat points,
rather than critical points.
The authors of~\cite{pennington2017}
do report a squared gradient norm cutoff of 1e-6.
This cutoff is right in the middle of the bulk of values
we observed,
and which we labeled gradient-flat regions and
points of spurious convergence,
based on the cutoff in~\cite{frye2019},
which separates a small fraction of runs from this bulk.
This suggests that some of their putative critical points were gradient-flat points.
Their Figure 6 shows a disagreement between their predictions for the index,
based on a loss-weighted mixture of a Wishart and Wigner random matrix,
and their observations.
We speculate that some of this gap is due to their method
recovering approximate gradient-flat points rather than critical points.
Even in the face of results indicating the existence of bad local minima%
~\cite{ding2019},
it remains possible that bad local minima of the loss
are avoided by initialization and optimization strategies.
For example, ReLU networks suffer from bad local minima
when one layer's activations are all $0$,
or when the biases are initialized at too small a value%
~\cite{holzmller2020},
but careful initialization and training can avoid the issue.
Our results do not directly invalidate
this hypothesis,
but they do call the supporting numerical evidence into question.
Our observation of gradient-flat regions on almost every single run
suggests that, while critical points are hard to find
and may even be rare,
regions where gradient norm is extremely small are neither.
For non-smooth losses,
e.g.~those of ReLU networks or networks with max-pooling,
whose loss gradients can have discontinuities,
critical points need not exist,
but gradient-flat regions may.
Indeed, in some cases, the only differentiable minima
in ReLU networks are also flat%
~\cite{laurent2017}.
Other types of critical point-finding methods are not necessarily
attracted to gradient-flat regions, in particular Newton homotopy methods
(first used on neural networks in the 90s~\cite{coetzee1997},
then revived in the 2010s~\cite{ballard2017,mehta2018b}),
which are popular in algebraic geometry~\cite{bates2013}.
However, singular Hessians still cause issues:
for a singular Hessian $H$, the curve to be continued by the homotopy
becomes a manifold of dimension $1 + \mathrm{rank}\left(H\right)$,
and orientation becomes more difficult.
This can be avoided by removing the singularity of the Hessian,
e.g.~by the randomly-weighted regularization method in~\cite{mehta2018a}.
However, while these techniques may make it possible to find
critical points,
they fundamentally alter the loss surface,
limiting their utility in drawing conclusions about other features
of the loss.
In particular, in the time since
the initial resurgence of interest in the curvature properties
of neural network losses sparked by~\cite{dauphin2014},
the importance of overparameterization for optimization of
and generalization by neural networks has been identified
\cite{li2018,poggio2020}.
Large overparameterized networks have more singular Hessians~\cite{sagun2017},
and so the difference between the original loss and an altered version
with an invertible Hessian is greater.
Furthermore the prevalence of gradient-flat regions should be greater,
since the Hessian kernel covers an increasingly large subspace.
The authors of~\cite{sagun2017} emphasize that
when the Hessian is singular everywhere,
the notion of a basin of attraction is misleading,
since targets of convergence form
connected manifolds
and some assumptions in theorems guaranteeing first-order convergence
become invalid~\cite{jin2018a},
though with sufficient, if unrealistic, over-parameterization
convergence can be proven~\cite{du2018}.
They speculate that a better approach
to understanding the behavior of optimizers
focuses on their exploration of the sub-level sets of the loss.
Our results corroborate that speculation and
further indicate that this flatness means using second-order methods
to try to accelerate exploration of these regions
in search of minimizers
is likely to fail:
the alignment of the gradient with the Hessian's approximate kernel
will tend to produce extremely large steps, for some methods,
or no acceleration and even convergence to non-minimizers,
for others.
Our observation of ubiquitous gradient-flatness further
provides an alternative explanation
for the success and popularity
of approximate second-order optimizers for neural networks,
like K-FAC~\cite{martens2015},
which uses a layerwise approximation to the Hessian.
These methods are typically motivated by appeals to the computational cost
of even Hessian-free exact second-order methods
and their brittleness in the stochastic (non-batch) setting.
However, exact second-order methods are only justified
when the second-order model is good,
and at an exact gradient-flat point,
the second-order model can be infinitely bad,
in a sense, along the direction of the gradient.
Approximations need not share this property.
Even more extreme approximations,
like the diagonal approximations in the adaptive gradient family
(e.g.~AdaGrad~\cite{duchi2011}, Adam~\cite{kingma2014}),
behave very reasonably in gradient-flat regions:
they smoothly scale up the gradient in the directions in which
it is small and changing slowly,
without making a quadratic model that is optimal
in a local sense but poor in a global sense.
Overall, our results underscore the difficulty of searching
for critical points of singular non-convex functions,
including deep network loss functions,
and shed new light on other numerical results in this field.
In this setting, second-order methods for finding critical points can fail badly,
by converging to gradient-flat points.
This failure can be hard to detect unless it is specifically measured.
Furthermore, gradient-flat points are generally places where
quadratic approximations become untrustworthy,
and so our observations are of relevance for the
design of exact and approximate second-order optimization methods as well.
\section*{Acknowledgements}
The authors would like to thank
Yasaman Bahri, Jesse Livezey, Dhagash Mehta, Dylan Paiton, and Ryan Zarcone for useful discussions.
Authors CF \& AL were supported by the National Science Foundation Graduate Research Fellowship Program
under Grant No. DGE 1752814.
NW was supported by the Google PhD Fellowship.
Author AL was supported by a National Institutes of Health training grant,
5T32NS095939.
MRD was supported in part by the U. S. Army Research Laboratory and the U. S. Army Research Office
under contract W911NF-13-1-0390.
KB was funded by a DOE/LBNL LDRD, ‘Deep Learning for Science’, (PI, Prabhat).
\section{Introduction}
Calculations of radiative energy loss via medium-induced gluon radiation (see e.g. \cite{CS} and refs. therein) off massive quarks were performed in \cite{ASW, DGLV,WW}. Two effects compete against each other. The gluon formation time, $\left[t_g^{\rm form}(m\ne 0)\right]^{-1}\sim \left[t_g^{\rm form}(m= 0)\right]^{-1}+m_q^2/E_q$, of a massive quark (with mass $m_q$ and energy $E_q$) is shorter than that of a massless quark, so gluon emission in the former is less suppressed by the LPM effect than that in the latter. On the other hand, gluon radiation off a massive quark is suppressed at angles smaller than the so-called dead cone angle $\theta_0$ $=$ $m_q / E_q$, similarly to the case in vacuum \cite{dc,DeadCone}. The generic result of this competition is
that a massive quark loses less energy in a medium than a massless one.
But the mentioned calculations do not consider the effects of the interference between emitters and therefore the extension to multi-gluon radiation relies on ad hoc conjectures.
In order to investigate these interference effects in vacuum, a quark-antiquark ($q {\bar q}$) antenna was considered, see \cite{book} and references therein. The soft gluon radiation spectrum off a massless $q {\bar q}$ antenna in vacuum exhibits angular ordering (AO): radiation is suppressed at $\theta$ $>$ $\theta_{q {\bar q}}$ (after averaging over the gluon azimuthal angle), where $\theta$ is the angle between the emitted gluon and the parent quark (antiquark) and $\theta_{q {\bar q}}$ is the antenna opening angle. The massless antenna spectrum diverges when $\theta$ $\rightarrow$ $0$ and $\omega$ $\rightarrow$ $0$.
When the quarks have a non-zero mass, AO is modified and the collinear divergence disappears due to the dead cone effect, but the soft divergence remains. The result was shown to hold for arbitrary color representations of the $q\bar q$ pair.
Medium-induced soft gluon radiation off a massless $q {\bar q}$ antenna has been studied recently \cite{MSTprl, MSTdecoh, MT, CI}. The spectrum exhibits antiangular ordering (AAO) - there is no collinear divergence ($\theta$ $>$ $\theta_{q {\bar q}}$ $>$ $0$) - but the soft divergence persists, at variance with the results in \cite{ASW, DGLV,WW}.
Due to space limitations, here we show some selected results for a massive quark-antiquark pair at first order in opacity.
\section{Results}
\subsection{Considered diagrams}
In Fig. \ref{aspec} we show the three types of diagrams that we consider. They correspond to independent emission off the quark (antiquark), denoted as independent, and to the interference between the quark and antiquark, denoted as interference I and interference II, respectively. The total number of computed diagrams is 96.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\textwidth]{aspec.pdf}
\caption{\label{aspec} Examples of Feynman diagrams representing the three contributions to the antenna spectrum. The cross denotes the position of scattering, the l.h.s. of the dashed line is the amplitude and the r.h.s. of the dashed line is the complex conjugate amplitude, with the dashed line being the cut.}
\end{center}
\end{figure}
\subsection{Average energy loss}
The average radiative energy loss for gluons with energies $\omega_{\rm min} < \omega <\omega_{\rm max}$ is
\begin{center}
\begin{equation}
\Delta E = \int_{\omega_{\rm min}}^{\omega_{\rm max}} {\rm d} \omega \int_0^{\pi / 2} {\rm d} \theta \, \omega \frac{{\rm d} N}{{\rm d} \omega \, {\rm d} \theta}\ .
\label{deoe}
\end{equation}
\end{center}
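To illustrate how Eq. (\ref{deoe}) is evaluated numerically, the sketch below integrates a purely illustrative vacuum-like dead-cone spectrum, ${\rm d}N/{\rm d}\omega\,{\rm d}\theta \propto \omega^{-1}\,\theta^3/(\theta^2+\theta_0^2)^2$ with $\theta_0 = m_q/E_q$; this toy form and its normalization are our assumptions, not the medium-induced antenna spectrum behind Fig. \ref{HOA}:

```python
import numpy as np

E_q = 100.0  # quark energy in GeV

def toy_spectrum(w, theta, m_q):
    # illustrative vacuum-like dead-cone form (arbitrary normalization);
    # radiation is suppressed below theta_0 = m_q / E_q
    theta0 = m_q / E_q
    return (1.0 / w) * theta**3 / (theta**2 + theta0**2)**2

def delta_E(w_min, w_max, m_q, n=400):
    # rectangle-rule evaluation of the energy-loss integral; the grid
    # edges cut off the soft and collinear divergences of the toy form
    w = np.linspace(max(w_min, 0.1), w_max, n)
    theta = np.linspace(1e-3, np.pi / 2, n)
    dw, dth = w[1] - w[0], theta[1] - theta[0]
    W, T = np.meshgrid(w, theta, indexing="ij")
    return np.sum(W * toy_spectrum(W, T, m_q)) * dw * dth

dE_massless = delta_E(2.0, 6.0, m_q=0.0)
dE_bottom = delta_E(2.0, 6.0, m_q=4.75)  # illustrative bottom-quark mass
```

For this toy spectrum the dead cone gives $\Delta E(m_b) < \Delta E(0)$, in line with the generic expectation quoted in the Introduction.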
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{HOA.pdf}
\caption{\label{HOA}Dependence of the medium-induced radiative relative energy loss on the antenna opening angle. The parameters are: Debye mass $m_D=0.5(2)$ GeV and medium length $L=4(10)$ fm for the plots on the left (right). The solid curves correspond to the massless antenna, the dotted curves to the bottom antenna, the dashed curves to the massless independent spectra and the dash-dotted curves to the bottom independent spectra. From top to bottom, the values used in Eq. (\ref{deoe}) for $\omega_{\rm min}$ are 0, 2 and 6 GeV, while those for $\omega_{\rm max}$ are 2, 6 and $E_q$.}
\end{center}
\end{figure}
The ratio $\Delta E / E$ as a function of $\theta_{q {\bar q}}$ is shown in Fig. \ref{HOA}, where $E=E_q=100$ GeV for the independent-emitter case; for the antenna, the spectrum is divided by 2, as it receives contributions from both the quark and the antiquark.
In the soft and moderate gluon energy regions ($0\leq\omega\leq 2$ GeV and $2$ GeV $\leq \omega\leq 6$ GeV), both the massless and the massive antenna spectra grow monotonically with increasing opening angle $\theta_{q {\bar q}}$, the former being larger than the latter. In the hard gluon radiation sector ($6$ GeV $\leq\omega\leq E_q$), the situation is similar for large medium parameters, while for small medium parameters (lower left plot) there is a crossing between the massless and the bottom antennas due to the LPM effect, which results in a larger energy loss for larger masses, as discussed in the Introduction and found previously e.g. in \cite{ASW}. Therefore, for large medium parameters the dead cone effect dominates over the LPM effect in all cases.
Both the massless and the massive antenna results approach those from independent emitters when $\theta_{q {\bar q}}$ is large, showing that the interference between the quark and the antiquark of the antenna decreases with increasing opening angle.
Moreover, in the soft gluon emission region and for large medium parameters, there is visibly more energy loss in the antenna than for independent emitters, in both the massless and the massive cases. This reflects the fact that the antenna spectrum exhibits a soft divergence while the independent spectrum is infrared finite. In the moderate and hard gluon emission sectors, the antenna average energy loss increases with increasing opening angle $\theta_{q {\bar q}}$ and gradually approaches the independent average energy loss, which indicates that more collimated projectiles lose less energy. There is no radiation from the antenna when $\theta_{q {\bar q}} \rightarrow 0$. The size of the mass effect in the antenna is similar to that for independent emitters.
\section{Conclusions}
In this contribution, we show selected results for the medium-induced gluon radiation spectrum off a $q {\bar q}$ antenna at first order in opacity for the massive case. This computation, performed in the high-energy limit, includes both the non-abelian LPM effect and the dead cone effect for massive quarks (both contained in the medium-induced gluon spectrum off individual emitting partons, i.e. the BDMPS-Z-W/GLV formalism), as well as the interference between emissions off the quark and the antiquark.
The antenna radiation is found to be dominated by that off independent emitters for large opening angles of the antenna and for large energies of the emitted gluon. More collimated antennas lose less energy. The phase space restriction for gluon emission implied by the dead cone effect is similar in the antenna and in the case of independent emitters. The effect of the interference between different emitters is to generate predominantly soft radiation at large angles.
\vskip 0.3 cm
\small{The work of NA, HM, YM-T, and CAS is supported by Ministerio de Ciencia e Innovaci\'on
of Spain (grants FPA2008-01177 and FPA2009-06867-E), Xunta de Galicia (Conseller\'{\i}a
de Educaci\'on and grant PGIDIT10PXIB 206017PR), project Consolider-Ingenio
CPAN CSD2007-00042, and FEDER. The work of KT is supported by the Swedish Research Council (contract number 621-2010-3326). CAS is a Ram\'on y Cajal researcher.}
\section*{References}
\section{Introduction}
The damped nonlinear Schr\"odinger equation driven by a time-periodic external force,
\begin{subequations}
\begin{equation}
iu_t+ u_{xx} + 2 |u|^2u + \delta u = a e^{i \Omega t} - i \beta u,
\label{u12}
\end{equation}
and its parametrically driven counterpart
model two fundamental energy supply mechanisms
in a nearly-conservative spatially distributed system.
While the unperturbed Schr\"odinger equation is an archetypal model
for the slowly varying envelope of a group of dispersive waves,
the damped-driven equations arise whenever the
resonant forcing of small amplitude is used to compensate
weak dissipative losses.
The simplest (and perhaps the most visually appealing) realisation of Eq.\eqref{u12}
is that of the amplitude equation for
a strongly coupled pendulum array
with horizontal
sinusoidal driving \cite{FK}, taken in its continuum limit.
Here $a$ and $\Omega$ are the driving strength and driving frequency,
respectively; $\delta$ is the detuning of the driving frequency from the
continuum of linear waves in the array, and $\beta$ is the damping coefficient.
The array of torsionally coupled pendula can serve as a prototype model for a whole
variety of systems in condensed matter physics. Accordingly,
Eq.\eqref{u12} was employed to study systems as diverse as
the ac-driven long
Josephson junctions \cite{Jj} and
charge-density-wave
conductors with external electric field \cite{CDW};
double-layer quantum Hall (pseudo)ferromagnets \cite{Hall}
and easy-axis ferromagnets in a
rotating magnetic field \cite{magnetism}.
Eq.\eqref{u12} arises
in the theory of rf-driven waves in
plasma \cite{plasma,NB_plasma} and shear
flows in nematic liquid crystals \cite{LC};
the same equation governs the amplitude of the slowly varying $\pi$-mode in the
forced Fermi-Pasta-Ulam lattice \cite{FPU}.
A closely related equation is the one with the {\it spatially\/} periodic forcing,
\begin{equation}
iu_t+ u_{xx} + 2 |u|^2u + \delta u = a e^{iKx} - i \beta u,
\label{u1}
\end{equation}
and, more generally,
the one driven by the harmonic wave \cite{Cohen,trivial,driven_by_plane_wave}:
\begin{equation}
iu_t+ u_{xx} + 2 |u|^2u + \delta u = a e^{i (Kx +\Omega t)} - i \beta u.
\label{u120}
\end{equation}
\end{subequations}
A discrete version of Eq.\eqref{u1}
describes an array of coupled-waveguide resonators excited by a driving field
\cite{Egorov_Flach}
whereas Eq.\eqref{u120} models pulse propagation in
an asymmetric twin-core optical fiber \cite{Cohen}.
Equation \eqref{u120} includes \eqref{u12} and \eqref{u1}
as particular cases.
The transformation
\[
u(x,t)=\Psi(X, t) e^{i(Kx+\Omega t) }, \quad X=x-2Kt
\]
takes \eqref{u120} to
\begin{equation}
i \Psi_t + \Psi_{X X} + 2 |\Psi|^2 \Psi - \kappa^2 \Psi= a- i \beta \Psi,
\label{auto}
\end{equation}
with $\kappa^2= K^2+\Omega -\delta$.
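The reduction of \eqref{u120} to the autonomous form \eqref{auto} can be checked symbolically. Since the transformation acts linearly and $|e^{i(Kx+\Omega t)}|=1$, it suffices to verify the linear part on a plane-wave envelope; the sketch below (sympy, with the wavenumbers $p$, $q$ introduced only for this check) confirms $\kappa^2=K^2+\Omega-\delta$.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
K, Omega, delta = sp.symbols('K Omega delta', real=True)
A, p, q = sp.symbols('A p q', real=True)   # plane-wave data, used only for this check

X = x - 2 * K * t
Psi = A * sp.exp(sp.I * (p * X + q * t))        # plane-wave envelope
u = Psi * sp.exp(sp.I * (K * x + Omega * t))    # the transformation of eq. (u120)

kappa2 = K**2 + Omega - delta
# linear part of the left-hand side of eq. (u120); the cubic, driving and
# damping terms transform trivially because |e^{i(Kx+Omega t)}| = 1
lhs = sp.I * u.diff(t) + u.diff(x, 2) + delta * u
# linear part of eq. (auto) acting on the plane wave:
# i Psi_t (at fixed X) = -q Psi and Psi_XX = -p**2 Psi
rhs = (-q - p**2 - kappa2) * u
assert sp.simplify(lhs - rhs) == 0
```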
The equation in this form has a history of applications of its own --- in particular, in the physics of
optical cavities. Originally, it was
introduced as the Lugiato-Lefever model \cite{Lugiato_Lefever}
of the diffractive cavity driven by a plane-wave stationary beam. Later it was employed to
describe a synchronously pumped ring laser with a nonlinear dispersive fiber \cite{fiber,Wabnitz}.
More recently, the same equation was shown to govern the envelopes of short baroclinic Rossby waves
in the two-layer model of the atmosphere, or the ocean \cite{Rossby}.
Equation \eqref{auto} has undergone extensive mathematical analysis. Topics covered include
existence \cite{KN,BS1,BZ_PhysicaD},
stability \cite{BZB,BS1} and bifurcation \cite{NB_plasma,bifurcations} of nonpropagating solitons
and their bound states \cite{Malomed,Wabnitz,BSA,Kollmann};
statistical mechanics of soliton creation and annihilation \cite{stats};
soliton
autoresonance phenomena \cite{autoresonance,driven_by_plane_wave};
regular \cite{Terrones} and
chaotic \cite{chaos} attractors on finite spatial intervals.
Here and below we use the word ``soliton" simply as a synonym
for ``localised travelling wave".
The recent paper \cite{MQB} studied solitons of the undamped ($\beta=0$)
equation \eqref{auto} travelling
with constant or oscillating velocities.
Summarising results of their direct numerical simulations
of Eq.\eqref{auto}, the authors
formulated an empirical stability criterion of the soliton
against small and large perturbations.
So far, this criterion has not been given any mathematical proof or physical justification.
Despite being tested on
a variety of initial conditions, it still has the status of a conjecture.
In order to verify the validity of the empirical stability criterion
at least for infinitesimal perturbations, one needs to have
the travelling soliton existence and linearised stability domains accurately demarcated.
The classification of bifurcations
occurring when stability is lost would also be a useful step towards
the justification of the criterion.
This is what we shall concern ourselves with in this paper.
Here, we study travelling solitons of Eq.\eqref{auto} by path-following them
in the parameter space. One advantage of this approach over simulations is that it furnishes
{\it all\/} soliton solutions moving with a given velocity --- all stable and all unstable.
This, in turn, allows one
to
understand
the actual mechanisms
and details of the soliton transformations.
The outline of this paper is as follows.
In the next section, we give a brief classification of space- and time-independent
solutions of Eq.\eqref{auto} which may serve as the backgrounds for the solitons.
In particular, we show that there is only one stable background and
determine the value of the limit speed of the soliton propagating
over it.
In section \ref{Ins_lin} we describe insights one can draw from the analysis of the eigenvalues
of the symplectic linearised operator and its hermitian counterpart. These pertain to
the stability and bifurcation of the solitons.
In section \ref{Nonpropagating} we present four {\it nonpropagating\/} directly driven solitons.
Two of these are already available in the literature,
while the other two have not been reported before.
In sections \ref{Numerical_Simple} and \ref{Numerical_Twist}, we report on the continuation of
these stationary solitons to nonzero velocities.
Our results on the existence and stability of the travelling
solitons and their complexes, are summarised in
section \ref{Conclusions}. In particular, Fig.\ref{chart} gives a chart of ``stable" velocities
for each value of the driving strength.
\section{Flat solutions}
\label{Flat}
Assuming that $\kappa^2>0$ and defining
$t'= \kappa^2 t$,
$x'= \kappa X$, and
$\Psi= \kappa \psi$,
equation \eqref{auto} becomes
\[
i \psi_{t'} + \psi_{x' x'} + 2 |\psi|^2 \psi - \psi= -h -i \gamma \psi,
\]
where
$h = -a/\kappa^3$, $\gamma = \beta/\kappa^2$.
(In what follows, we omit the primes on $x$ and $t$ for notational convenience.)
In this paper we study the above equation with zero damping: $\gamma=0$.
Without loss of generality we can assume that $h>0$.
Since we shall be concerned with solitons travelling at nonzero
velocities, it is convenient to transform the equation to a co-moving frame:
\begin{equation}
i \psi_{t} -iV \psi_\xi+ \psi_{\xi \xi} + 2 |\psi|^2 \psi - \psi= -h,
\label{our4}
\end{equation}
where $\xi =x-Vt$.
Flat solutions are roots of the cubic equation
\begin{equation}
2|\psi|^2 \psi- \psi=-h; \label{algebraic}
\end{equation}
these have been classified in \cite{BS1}.
If $0<h<(2/27)^{1/2}$, there are
three roots, of which two ($\psi_1$ and $\psi_2$) are positive and one ($\psi_3$)
is negative. Here
$\psi_1^2 < \frac16 <\psi_2^2< \frac12<\psi_3^2 <\frac23$.
If $h> (2/27)^{1/2}$, there is only one (negative) solution $\psi_3$, with
$\psi_3^2>\frac23$.
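This root count is easy to confirm numerically; the sketch below (numpy, with values of $h$ chosen arbitrarily inside and outside the interval $0<h<(2/27)^{1/2}$) reproduces the inequalities quoted above.

```python
import numpy as np

h = 0.1                                   # inside 0 < h < (2/27)**0.5 ~ 0.2722
r = np.roots([2.0, 0.0, -1.0, h])         # eq. (algebraic): 2 psi^3 - psi + h = 0
r = np.sort(r[np.abs(r.imag) < 1e-9].real)
psi3, psi1, psi2 = r                      # one negative and two positive roots
assert psi3 < 0 < psi1 < psi2
assert psi1**2 < 1/6 < psi2**2 < 1/2 < psi3**2 < 2/3

r = np.roots([2.0, 0.0, -1.0, 0.3])       # h > (2/27)**0.5: a single real root
real = r[np.abs(r.imag) < 1e-9].real
assert len(real) == 1 and real[0] < 0 and real[0]**2 > 2/3
```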
Let $\psi_0$ denote a root of equation \eqref{algebraic} --- one of the three roots $\psi_1$, $\psi_2$
and $\psi_3$.
The value $\psi_0$ does not depend on $V$: the flat solution
has the same form in any frame of reference. However
the spectrum of small perturbations of the flat solution does include a dependence
on $V$.
Letting $\psi=\psi_0+ [u(\xi)+iv(\xi))]e^{\lambda t}$ in \eqref{our4},
linearising in $u$ and $v$, and, finally, taking $u, v \propto e^{ik \xi}$, we obtain
\begin{equation}
(\lambda - ikV)^2=-(k^2+a^2)(k^2+b^2),
\label{dispersion}
\end{equation}
where we have introduced
\begin{equation}
a= \sqrt{1-6\psi_0^2}, \quad b= \sqrt{1-2 \psi_0^2}.
\label{ab}
\end{equation}
To determine whether $\psi_0$ can serve as a background to
a stationary localised solution of \eqref{our4},
consider a time-independent perturbation --- that is, set $\lambda=0$:
\begin{equation}
k^2V^2=(k^2+a^2)(k^2+b^2).
\label{decay}
\end{equation}
The only flat solution that is
{\it a priori\/} unsuitable as a background for
localised solutions is a $\psi_0$ whose
associated equation \eqref{decay}, quadratic in $k^2$,
has two nonnegative real roots, $(k^2)_1 \geq 0$ and $(k^2)_2 \geq 0$.
It is not difficult to check that the negative solution $\psi_3$
has two nonnegative roots for any choice of $h$ and $V$.
This disqualifies $\psi_3$ as a possible soliton background.
We also conclude that travelling solitons may not exist for
$h$ greater than $(2/27)^{1/2}$.
Next, if $V \leq c$, where
\begin{equation}
c=a+b,
\label{c}
\end{equation}
the smaller positive solution $\psi_1$ will have either two
complex or two negative roots $(k^2)_{1,2}$,
whereas for velocities greater than $c$, both roots are nonnegative.
Hence the $\psi_1$ solution can serve as a background only
for $V \leq c$. When $V< b-a$, the decay to the
background is monotonic (both roots are negative),
while when $V>b-a$, the decay
is oscillatory (the roots are complex).
This flat solution admits a simple explicit expression:
\[
\psi_1 = \sqrt{\frac23} \cos \left( \frac{\alpha}{3} - \frac{2 \pi}{3} \right),
\]
where
\[
\alpha = \arccos \left(- \sqrt{\frac{27}{2}} h \right),
\quad \frac{\pi}{2} \leq \alpha \leq \pi .
\]
Finally, the larger positive solution $\psi_2$ has two real roots
of opposite signs (for all $V$ and $0 \leq h \leq (2/27)^{1/2}$).
This flat solution may also serve as a
soliton background.
Next, one can readily check that a flat solution $\psi_0$ is stable
if $\psi_0^2 < \frac16$. Therefore, even if there are
solitons asymptotic to the flat solution $\psi_2$ as $x \to \infty$ or
$x \to -\infty$, these will be of little physical interest as
the background $\psi_2$ is always unstable.
In summary, only the small positive flat solution (the one with $\psi_0^2< \frac16$)
is stable. It
may serve as a background for solitons only if $V <c$; that is, the soliton propagation speed is limited by $c$.
The inequality $V \leq c$ limiting the soliton propagation speed has a simple
physical interpretation. Indeed, one can easily check that $c$ gives the lower
bound for the phase velocity of radiation waves [in the original $(x,t)$ reference frame].
Therefore, a soliton travelling faster than $c$ would be exciting resonant radiation.
This is inconsistent with the asymptotic behaviour $\psi_x \to 0$ as $|x| \to \infty$;
neither could it be reconciled with the energy conservation.
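The limiting speed is straightforward to evaluate; the sketch below (numpy, with an arbitrary $h$) computes $\psi_1$ from the explicit formula above, forms $c=a+b$, and checks the character of the roots of \eqref{decay} on either side of $V=c$.

```python
import numpy as np

h = 0.1
alpha = np.arccos(-np.sqrt(27.0 / 2.0) * h)
psi1 = np.sqrt(2.0 / 3.0) * np.cos(alpha / 3.0 - 2.0 * np.pi / 3.0)
assert abs(2 * psi1**3 - psi1 + h) < 1e-12        # psi1 solves eq. (algebraic)

a = np.sqrt(1.0 - 6.0 * psi1**2)
b = np.sqrt(1.0 - 2.0 * psi1**2)
c = a + b                                          # the limiting speed, eq. (c)

def K_roots(V):
    # eq. (decay) as a quadratic in K = k^2: K^2 + (a^2+b^2-V^2) K + a^2 b^2 = 0
    return np.roots([1.0, a**2 + b**2 - V**2, (a * b)**2])

slow = K_roots(0.9 * c)      # V < c: complex roots, decay by oscillation
fast = K_roots(1.1 * c)      # V > c: two nonnegative roots, no localisation
assert np.all(np.abs(slow.imag) > 1e-6)
assert np.all(np.abs(fast.imag) < 1e-9) and np.all(fast.real > 0)
```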
\section{Insights from linearisation}
\label{Ins_lin}
Travelling wave solutions depend on $x$ and $t$ only in the combination $\xi=x-Vt$.
For these, the partial differential equation \eqref{our4} reduces to
an ordinary differential equation
\begin{equation}
-iV \psi_\xi+ \psi_{\xi \xi} + 2 |\psi|^2 \psi - \psi= -h.
\label{stationary}
\end{equation}
It is this equation that we will be solving numerically in the following sections.
Let $\psi_s(\xi)$ be a localised solution of \eqref{stationary}.
In order to represent results of continuation graphically, we will need to characterise
the function $\psi_s(\xi)$ by a single value. A convenient choice for such a
bifurcation measure is the momentum integral
\begin{equation}
P= \frac{i}{2} \int (\psi^*_\xi \psi - \psi_\xi \psi^*) d \xi.
\label{P}
\end{equation}
One advantage of this choice is that
the momentum is an integral of motion for equation \eqref{our4}; hence $P$ is a physically
meaningful characteristic of solutions.
Another useful property of the momentum is that in some cases its extrema mark the
change of the soliton stability properties (see below).
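As a sanity check of the sign and normalisation conventions in \eqref{P}: for a sech pulse with a linear phase, $\psi=e^{iv\xi/2}\,\mathrm{sech}\,\xi$, one finds $P=v$. A numerical sketch (numpy; this profile is chosen for the check only and is not a solution of \eqref{stationary}):

```python
import numpy as np

v = 0.7
xi = np.linspace(-20.0, 20.0, 4001)
psi = np.exp(0.5j * v * xi) / np.cosh(xi)         # test profile, P should equal v
dpsi = np.gradient(psi, xi)
integrand = 0.5j * (np.conj(dpsi) * psi - dpsi * np.conj(psi))
P = np.sum(integrand) * (xi[1] - xi[0])           # rectangle rule for eq. (P)
assert abs(P.imag) < 1e-12                        # the momentum integral is real
assert abs(P.real - v) < 1e-3
```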
\subsection{The hermitian and symplectic operators}
Many aspects of the soliton's bifurcation diagram can be explained
simply by the behaviour of the eigenvalues of the operator of linearisation about the
travelling-wave solution in question. Therefore,
before proceeding to the numerical continuation of travelling waves, we
introduce the linearised operator and discuss some of its properties.
Consider a
perturbation of the solution of Eq.\eqref{stationary}
of the form $\psi= \psi_s +[u(\xi)+ iv(\xi)] e^{\lambda t}$, with small $u$ and $v$.
Substituting $\psi$ in Eq. \eqref{our4} and linearising in $u$ and $v$, we get a symplectic eigenvalue problem
\begin{equation}
\mathcal{H} {\vec y} = \lambda J {\vec y}.
\label{EV}
\end{equation}
Here ${\vec y}$ is a two-component vector-function
\[
{\vec y}(\xi) = \left( \begin{array}{c} u \\ v \end{array} \right),
\]
and $\mathcal{H}$ is a hermitian differential operator acting on such functions:
\[
\mathcal{H}= \left(
\begin{array}{cc}
-\partial_\xi^2 +1 -2(3 \mathcal{R}^2+ \mathcal{I}^2) & -V \partial_\xi -4 \mathcal{R} \mathcal{I} \\
V \partial_\xi - 4 \mathcal{R} \mathcal{I} &
-\partial_\xi^2 + 1 -2(3 \mathcal{I}^2 + \mathcal{R}^2)
\end{array}
\right),
\]
with
$\mathcal{R}$ and $\mathcal{I}$ denoting the real and imaginary
part of the solution $\psi_s(\xi)$: $\psi_s= \mathcal{R}+i \mathcal{I}$.
Finally, $J$ is a constant skew-symmetric matrix
\[
J = \left(
\begin{array} {cc} 0 & -1 \\ 1 & 0 \end{array} \right).
\]
Assume that $\psi_s(\xi)$ is a localised solution decaying to $\psi_0$ as $x \to \pm \infty$,
where $\psi_0^2< \frac16$.
The continuous spectrum of the hermitian
operator $\mathcal{H}$ occupies the positive real axis
with a gap separating it from the origin:
$E \geq E_0 >0$. Discrete eigenvalues $E_n$ satisfy $E_n < E_0$.
On the other hand, the continuous spectrum of the {\it symplectic\/} eigenvalues
(that is, the continuous spectrum of the operator $J^{-1} \mathcal{H}$)
occupies the imaginary axis of $\lambda$ outside the gap $(-i \omega_0, i \omega_0)$.
The gap width here is given by
\begin{equation}
\omega_0= \sqrt{(k_0^2+ a^2)(k_0^2+b^2)}-Vk_0 >0,
\label{omega0}
\end{equation}
where $k_0$ is the positive root of the bicubic equation
\[
V^2(k^2+a^2)(k^2+b^2)= k^2(2k^2+ a^2+b^2)^2.
\]
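The gap edge is readily computed; the sketch below (numpy, with illustrative values of $a$, $b$ and $V$) solves the bicubic for $k_0$ and confirms both that $\omega_0$ in \eqref{omega0} is the minimum of $\omega(k)=\sqrt{(k^2+a^2)(k^2+b^2)}-Vk$ and that the gap stays open for $V<a+b$.

```python
import numpy as np

a, b, V = 0.97, 0.99, 1.0          # sample values with V < c = a + b
S, Pq = a**2 + b**2, (a * b)**2

# the bicubic, written as a cubic in K = k^2:
# 4 K^3 + (4S - V^2) K^2 + S (S - V^2) K - V^2 a^2 b^2 = 0
K = np.roots([4.0, 4.0 * S - V**2, S * (S - V**2), -V**2 * Pq])
K0 = K[(np.abs(K.imag) < 1e-9) & (K.real > 0)].real[0]  # the unique positive root
k0 = np.sqrt(K0)

omega = lambda k: np.sqrt((k**2 + a**2) * (k**2 + b**2)) - V * k
omega0 = omega(k0)

ks = np.linspace(0.0, 5.0, 5001)
assert omega0 > 0                          # the gap is open for V < a + b
assert np.min(omega(ks)) >= omega0 - 1e-9  # omega0 is the minimum of omega(k)
```

The bicubic is precisely the condition $d\omega/dk=0$, so $k_0$ marks the bottom of the continuous band.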
Discrete eigenvalues of the operator $J^{-1} \mathcal{H}$ may include pairs of opposite
real values $ \lambda= \pm \rho$;
pure imaginary pairs
$\lambda= \pm i \omega$, with $0 \leq \omega \leq \omega_0$; and, finally, complex
quadruplets $\lambda= \pm \rho \pm i \omega$.
We routinely evaluate the spectrum of symplectic eigenvalues as we
continue localised solutions in $V$.
If there is at least one eigenvalue $\lambda$
with $\textrm{Re} \lambda>0$, the solution $\psi_s$ is considered linearly unstable.
Otherwise (that is, if all eigenvalues have $\textrm{Re} \lambda \leq 0$), the
solution is deemed linearly stable.
\subsection{Zero eigenvalues}
While the eigenvalues of the operator $J^{-1} \mathcal{H}$
(that is, the eigenvalues of the symplectic eigenvalue problem \eqref{EV})
determine stability or instability of the solution $\psi_s$, the eigenvalues of the operator $\mathcal{H}$
are significant for the continuability of this solution. Of particular importance are its zero eigenvalues.
At a generic point $V$, the operator $\mathcal{H}$ has only one zero eigenvalue,
with the translational eigenvector ${\vec \Psi}_\xi \equiv ( \mathcal{R}_\xi, \mathcal{I}_\xi)$.
This is due to the fact that the stationary equation \eqref{stationary}
has only one continuous symmetry.
For a given $V$, the solution $\psi_s(\xi)$ is a member of a {\it one}-parameter family
of solutions $\psi_s(\xi-\theta)$, where $\theta$ is an arbitrary translation.
On the other hand, the nonhermitian operator
$J^{-1} \mathcal{H}$ has two zero eigenvalues
at a generic point. The reason is that the
equation \eqref{our4} as well as its linearisation, are hamiltonian systems.
Real and imaginary eigenvalues of operators which generate
hamiltonian flows always come in pairs: If $\mu$ is an eigenvalue, so is $-\mu$
\cite{Arnold}.
The two zero eigenvalues of the operator $J^{-1} \mathcal{H}$ reflect the fact that
the function $\psi_s(\xi)$, considered as a solution of the partial differential
equation \eqref{our4}, is a member of a two-parameter family. One parameter
is the translation; the other one is the velocity $V$.
For generic $V$, the repeated zero eigenvalue of $J^{-1} \mathcal{H}$ is defective:
there is only one eigenvector ${\vec \Psi}_\xi$ associated with it.
There is also a generalised eigenvector ${\vec \Psi}_V$,
where
\[
{\vec \Psi}_V \equiv \left(\frac{\partial \mathcal{R}}{ \partial V}, \frac{\partial \mathcal{I}}{\partial V} \right).
\]
This vector-function is {\it not} an eigenvector of $J^{-1} \mathcal{H}$;
instead, differentiating \eqref{stationary} in $V$ one checks that ${\vec z}={\vec \Psi}_V$ satisfies the nonhomogeneous equation
\begin{equation}
\mathcal{H} {\vec z} = -J {\vec \Psi}_\xi.
\label{HVJ}
\end{equation}
[That is, ${\vec \Psi}_V$
is an eigenvector of the {\it square\/} of the symplectic operator:
$(J^{-1} \mathcal{H} )^2{\vec \Psi}_V=0 $.]
As we continue in $V$, a pair of opposite pure-imaginary symplectic eigenvalues may
collide at the origin on the $\lambda$-plane and cross to the positive
and negative real axis, respectively. The algebraic multiplicity of the
eigenvalue $\lambda=0$ increases from 2 to 4 at the point $V=V_c$;
however if the hermitian operator $\mathcal{H}$ does not acquire the second eigenvalue $E=0$
at this point, the geometric multiplicity remains equal to 1.
The change of stability of the soliton solution
does not affect its continuability, i.e. the soliton exists on either side of $V=V_c$.
In this case we have $dP/dV=0$ at the point where the stability changes \cite{Baer}.
The continuation may be obstructed only when another (the second) eigenvalue of the operator $\mathcal{H}$
crosses through zero at $V=V_c$:
$\mathcal{H} {\vec \Phi}=0$.
If the corresponding eigenvector ${\vec \Phi}$ is not
orthogonal to the vector-function $J {\vec \Psi}_\xi$ in the right-hand side of equation \eqref{HVJ},
its solution ${\vec z}= {\vec \Psi}_V$ will not be bounded.
This implies a saddle-node bifurcation; the soliton solution $\psi_s$ cannot be continued beyond $V=V_c$.
Note that although ${\vec \Phi}$ is
an eigenvector of the symplectic operator $J^{-1} \mathcal{H}$, the algebraic multiplicity
of the symplectic eigenvalue remains equal to 2
in this case.
Assume now that the eigenvector ${\vec \Phi}$ {\it is\/}
orthogonal to $J {\vec \Psi}_\xi$.
This may happen if the soliton solution $\psi_s$ of
equation \eqref{stationary} with $V=V_c$
is a member of a {\it two}-parameter family of
solutions $\psi_s=\psi_s(\xi-\theta; \chi)$,
with $\chi$ equal to some $\chi_0$.
Here we assume that {\it each\/} member of the family
$\psi_s(\xi-\theta; \chi)$ is a solution of Eq.\eqref{stationary} --- with the same $V=V_c$.
Then ${\vec \Phi}$ is given by
${\vec \Psi}_\chi \equiv \left. \partial {\vec \Psi}/ \partial \chi \right|_{\chi=\chi_0}$.
If $\chi_0$ is a root of the equation
\begin{subequations} \label{F}
\begin{equation}
F(\chi)=0,
\label{F1}
\end{equation}
where
\begin{equation}
F(\chi) \equiv
\int ({\vec \Psi}_\chi, J {\vec \Psi}_\xi) \, d \xi,
\label{F2}
\end{equation}
\end{subequations}
the vectors ${\vec \Psi}_\chi$ and $J {\vec \Psi}_\xi$ will be orthogonal
which, in turn, will imply that a bounded solution ${\vec \Psi}_V$ of the equation \eqref{HVJ} exists.
[In Eq.\eqref{F2} $(\phantom{a}, \phantom{b})$ stands for the
$\mathbb{R}^2$ scalar product: $({\vec a}, {\vec b}) \equiv a_1 b_1+ a_2 b_2$.]
In this case the value $V_c$ is {\it not\/} a turning point; the soliton solution $\psi_s$
exists on both sides of $V=V_c$.
The algebraic multiplicity of the zero symplectic eigenvalue increases
at the point $V=V_c$. In fact from the hamiltonian property it follows that it increases up to 4
(rather than 3).
Recalling the definition of the momentum integral \eqref{P} and
writing it in terms of the real and imaginary part of $\psi_s$,
equation \eqref{F} becomes simply
\[
\left. \frac{\partial P}{ \partial \chi} \right|_{\chi=\chi_0} =0.
\]
This condition ensures that a two-parameter family of solutions $\psi_s(x-\theta; \chi)$,
existing at the velocity $V=V_c$, has a one-parameter subfamily $\psi_s(x-\theta; \chi_0)$ continuable
to $V \neq V_c$ \cite{Baer}.
\section{Non-propagating solitons}
\label{Nonpropagating}
\subsection{Simple solitons}
\begin{figure}
\includegraphics[width =\linewidth]{fig1.ps}
\caption{\label{stat}
Stationary $\psi_+$ and $\psi_-$ solitons
}
\end{figure}
The ordinary differential equation \eqref{stationary} with $V=0$,
\begin{equation}
\psi_{xx} + 2 |\psi|^2 \psi - \psi= -h,
\label{our5}
\end{equation}
has two real-valued localised solutions, $\psi_+$ and $\psi_-$. These are given by explicit formulas
\cite{BZB}:
\begin{equation}
\psi_\pm (x) = \psi_0
\left[
1+ \frac{2 \sinh^2 \beta}{1 \pm \cosh \beta \cosh (Ax)}
\right],\label{pm}
\end{equation}
where the parameter $\beta$ ($0\leq \beta <\infty$) is in one-to-one correspondence
with the driving strength $h$:
\[
h = \frac{\sqrt{2} \cosh^2 \beta}{(1+ 2 \cosh^2 \beta)^{3/2}}.
\]
As
$h$ increases from 0 to $\sqrt{2/27} \approx 0.2722$,
$\beta$ decreases from infinity to zero.
(Hence $0 \leq h \leq 0.2722$ is the domain of existence of the two solitons.)
The asymptotic value $\psi_0$ and inverse width $A$ are also expressible through $\beta$:
\[
\psi_0= \frac{1}{\sqrt{2}} \frac{1}{\sqrt{1+ 2 \cosh^2 \beta}},
\quad
A= \frac{\sqrt{2} \sinh \beta}{\sqrt{1+ 2 \cosh^2 \beta}}.
\]
(Note that the asymptotic value $\psi_0$ corresponds to the stable background,
denoted $\psi_1$ in Sec.\ref{Flat}.)
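The explicit formulas \eqref{pm} can be verified directly; the sketch below (numpy, with an arbitrarily chosen $\beta$) checks that the background $\psi_0$ solves \eqref{algebraic} and that both $\psi_\pm$ satisfy Eq.\eqref{our5} to finite-difference accuracy.

```python
import numpy as np

beta = 1.5
ch, sh = np.cosh(beta), np.sinh(beta)
h = np.sqrt(2.0) * ch**2 / (1.0 + 2.0 * ch**2) ** 1.5
psi0 = 1.0 / np.sqrt(2.0 * (1.0 + 2.0 * ch**2))
A = np.sqrt(2.0) * sh / np.sqrt(1.0 + 2.0 * ch**2)
assert abs(2 * psi0**3 - psi0 + h) < 1e-12    # background solves eq. (algebraic)

x = np.linspace(-15.0, 15.0, 30001)
dx = x[1] - x[0]
for sign in (+1.0, -1.0):                      # psi_plus and psi_minus of eq. (pm)
    psi = psi0 * (1.0 + 2.0 * sh**2 / (1.0 + sign * ch * np.cosh(A * x)))
    psi_xx = np.gradient(np.gradient(psi, dx), dx)
    resid = psi_xx + 2.0 * psi**3 - psi + h    # residual of eq. (our5)
    assert np.max(np.abs(resid[5:-5])) < 1e-4
```

(One can also check that $A$ coincides with $a=\sqrt{1-6\psi_0^2}$, the monotonic decay rate found in Sec.\ref{Flat} for $V=0$.)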
The stationary soliton $\psi_+$ has a positive eigenvalue in the spectrum
of the linearised operator \eqref{EV}; hence $\psi_+$ is unstable for all $h$ for which it exists
\cite{BZB}.
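For $V=0$ and real $\psi_s$, the off-diagonal entries of $\mathcal{H}$ vanish and the eigenvalue problem \eqref{EV} decouples into $H_1u=-\lambda v$, $H_2v=\lambda u$, with $H_1=-\partial_\xi^2+1-6\psi_s^2$ and $H_2=-\partial_\xi^2+1-2\psi_s^2$, so that $H_2H_1u=-\lambda^2u$. A finite-difference sketch (numpy; the grid, domain and $\beta$ are illustrative) then exhibits the real unstable pair of $\psi_+$ as a negative eigenvalue of $H_2H_1$:

```python
import numpy as np

beta = 2.0
ch, sh = np.cosh(beta), np.sinh(beta)
psi0 = 1.0 / np.sqrt(2.0 * (1.0 + 2.0 * ch**2))
A = np.sqrt(2.0) * sh / np.sqrt(1.0 + 2.0 * ch**2)

N = 1000
x = np.linspace(-25.0, 25.0, N)
dx = x[1] - x[0]
psi = psi0 * (1.0 + 2.0 * sh**2 / (1.0 + ch * np.cosh(A * x)))  # psi_plus

# second derivative with Dirichlet truncation at the domain ends
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), 1)) / dx**2
H1 = -D2 + np.diag(1.0 - 6.0 * psi**2)
H2 = -D2 + np.diag(1.0 - 2.0 * psi**2)

ev = np.linalg.eigvals(H2 @ H1)
lambda_sq = -np.min(ev.real)      # lambda^2 of the most unstable mode
assert lambda_sq > 1e-3           # psi_plus carries a real unstable pair
```

The same construction with the minus sign in \eqref{pm} and the full coupled problem recovers the oscillatory instability of $\psi_-$ discussed next.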
The spectrum of the stationary soliton $\psi_-$ with small $h$ includes two
discrete eigenvalues $\lambda_{1,2}=i \omega_{1,2}$, $\omega_{1,2}>0$ --- and their negative-imaginary counterparts.
As $h$ grows to 0.07749, $\lambda_1$ and $\lambda_2$ approach each other, collide
and acquire real parts of the opposite sign.
This is a hamiltonian Hopf bifurcation. For $h>0.07749$, the soliton $\psi_-$ is prone to the oscillatory instability
\cite{BZB}.
When a damping term is added to the equation,
the two stationary solitons $\psi_+$ and $\psi_-$
persist and can form a variety of multisoliton bound states, or complexes \cite{Malomed,Wabnitz,BSA,Kollmann}.
In the next subsection, we show that {\it undamped\/} directly driven solitons
can also form stationary complexes. Some of these complexes are bound so tightly that
the solution represents a single entity.
To distinguish these objects from the solitons
$\psi_+$ and $\psi_-$, we will be referring to the $\psi_+$ and $\psi_-$
as the {\it simple\/} solitons.
\subsection{The twist solitons}
In addition to the two simple solitons expressible in elementary functions,
the stationary equation
\eqref{our5} has two localised solutions that cannot be constructed analytically.
Unless $h$ is extremely small, each of these two solutions has the form of a single entity
[Fig.\ref{twi}(a,b)] ---
a soliton whose phase does not stay constant but grows, monotonically, as $x$
changes from large negative to large positive values.
When visualised in the three-dimensional $(x, \mathrm{Re} \psi, \mathrm{Im} \psi)$-space,
it looks like a twisted ribbon (twisted by $360^\circ$); hence we will be calling these two
solutions simply ``twists". For the reason that will become obvious
in the paragraph following the next one, we denote the two solutions
$\psi_{T_2}$ and $\psi_{T_3}$, respectively.
The twist solitons were previously encountered in the parametrically driven (undamped)
nonlinear Schr\"odinger equation \cite{Baer}. For each $h$, the parametrically driven twist is a member of a
two-parameter family of stationary {\it two}-soliton solutions.
The first parameter is the overall translation of the complex;
the second one is the separation distance between the two bound solitons. The twist corresponds to
a very small separation, where the two simple solitons bind to form a
single entity. (The resulting object does not bear even the slightest resemblance to a
two-soliton state; without knowing the whole family, the relation would be hard to guess.)
The two simple solitons, $\psi_+$ and $\psi_-$, detach from the
$U(1)$-symmetric family of solitons
of the unperturbed nonlinear Schr\"odinger at $h=0$ \cite{BZ_PhysicaD}.
The two twist solutions of \eqref{our5} also hail from the solitons of the unperturbed
equation; however this time the relation is more complicated.
Reducing $h$, the two solutions transform into complexes
of well-separated solitons
[Fig.\ref{twi}(c,d)]. Namely, one of the two twist solutions becomes
a complex of two solitons:
\[
\psi_{T2} \to e^{3i \pi/4} \mathrm{sech}(x+x_0)
+ e^{-3i \pi/4} \mathrm{sech}(x-x_0),
\]
where $x_0 \to \infty$ as $h \to 0$. The other twist
continues to a complex of {\it three\/} unperturbed solitons:
\[
\psi_{T3} \to \mathrm{i} \, \mathrm{sech}(x+x_0) -\mathrm{sech} x - \mathrm{i } \, \mathrm{sech}(x-x_0),
\]
and again, the separation $x_0$ grows without bound as $h \to 0$.
The ``full names" of the two twists, $\psi_{T2}$ and $\psi_{T3}$, were coined to reflect
this multisoliton ancestry.
Despite being quiescent, nonpropagating objects, the twists carry
nonzero momentum. Since equation \eqref{our5} is invariant
under the space inversion, the twist soliton with momentum $P$ has a partner
with momentum $-P$ which is obtained by changing $x \to -x$.
This transformation leaves the absolute value of $\psi(x)$ intact
but changes the sign of the phase derivative, $(d/dx) \mathrm{arg} \, \psi(x)$. By analogy
with the right-hand rule of circular motion, the twist whose phase decreases as $x$ grows from $-\infty$
to $+\infty$ [that is, the trajectory on the $(\mathrm{Re} \, \psi, \mathrm{Im} \, \psi)$ phase plane
is traced clockwise], will be called right-handed.
The twist with the increasing phase (i.e. with a trajectory traced counter-clockwise)
will be called left-handed.
One can readily verify that the left-handed twist has a positive
momentum, whereas the right-handedness implies $P<0$.
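These statements are easy to check numerically on the small-$h$ asymptotic form of $\psi_{T2}$ given above (numpy sketch; the separation $x_0$ is illustrative):

```python
import numpy as np

x0 = 2.0
x = np.linspace(-25.0, 25.0, 5001)
psi = (np.exp(3j * np.pi / 4) / np.cosh(x + x0)
       + np.exp(-3j * np.pi / 4) / np.cosh(x - x0))   # small-h form of psi_T2

def momentum(f):
    # rectangle-rule evaluation of the momentum integral, eq. (P)
    df = np.gradient(f, x)
    integrand = 0.5j * (np.conj(df) * f - df * np.conj(f))
    return (np.sum(integrand) * (x[1] - x[0])).real

P = momentum(psi)
P_mirror = momentum(psi[::-1])       # the x -> -x partner
assert abs(P) > 1e-3                 # quiescent, yet carries nonzero momentum
assert abs(P + P_mirror) < 1e-8      # the mirror twist has opposite momentum
```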
\begin{widetext}
\begin{figure}
\includegraphics[width =0.49\linewidth]{fig2a.ps}
\includegraphics[width =0.49\linewidth]{fig2b.ps}
\includegraphics[width =0.49 \linewidth]{fig2c.ps}
\includegraphics[width =0.49\linewidth]{fig2d.ps}
\caption{\label{twi}
(a,b): The two nonpropagating twist
solutions for moderately large $h$. (Here $h=0.2$).
(c,d): The corresponding quiescent solutions when continued to an exponentially small $h$.
(Here $h=2.15 \times 10^{-5}$).
All twist solutions shown in these figures are left-handed.
}
\end{figure}
\end{widetext}
Consider some particular value of the driving strength, $h=h_0$.
Unlike the twist solution in the {\it parametrically\/} driven NLS, the {\it directly\/}
driven twist with $h=h_0$ is a member of a one-parameter
(rather than two-parameter) family of solutions.
(The only free parameter is the translation, $-\infty<\theta< \infty$, whereas
the intersoliton separation $\chi$ is fixed by $h$.)
This can be concluded from the fact that
the corresponding operator $\mathcal{H}$ has only one, translational,
zero eigenvalue. Had the twist been a member of a family of solutions
parametrised by two continuous parameters, say $\theta$ and $\chi$, the operator $\mathcal{H}$
would have had an additional zero eigenvalue with the eigenvector ${\vec \Psi}_\chi$.
Letting $\psi= x_1+ix_2$, the stationary equation \eqref{our5}
can be written as a classical mechanical system on the plane, with the Lagrangian
\[
L= \frac12 ({\dot x}_1^2+ {\dot x}_2^2) - \frac12 (x_1^2+ x_2^2)^2
+ \frac12 (x_1^2+ x_2^2) -h x_1.
\]
The existence of a one-parameter family of homoclinic orbits ${\vec x}={\vec x}_\chi(t)$,
where ${\vec x} \equiv(x_1,x_2)$,
would imply that
the above system has the second integral of motion,
in addition to the energy.
However, equation \eqref{our5} is known not to have
any additional conserved quantities \cite{Hietarinta}.
Finally, we need to comment on the stability of the two twist solutions.
When $h$ equals 0 and the two solutions represent a doublet and a triplet of
infinitely separated solitons of the unperturbed nonlinear Schr\"odinger equation,
the symplectic spectrum includes 8 and 12 zero eigenvalues, respectively.
When $h$ is small but nonzero, only two eigenvalues remain at the origin in each case.
In addition, the spectrum of the $\psi_{T2}$ twist includes a complex quadruplet
$\pm \lambda, \pm \lambda^*$ and a pair of opposite pure imaginary eigenvalues. As
$h$ is increased, the imaginary pair collides with another imaginary pair
emerging from the continuum, producing the second complex quadruplet.
The spectrum of the $\psi_{T3}$ twist includes two complex quadruplets
and a pair of pure imaginary eigenvalues; this arrangement remains in place for
all $h$, from very small to $h=\sqrt{2/27}$.
The bottom line is that both twist solutions are unstable for all $h$; the instability
is always of the oscillatory type.
\section{Numerical continuation of simple
solitons}
\label{Numerical_Simple}
\subsection{The travelling $\psi_+$ soliton}
Travelling solitons are sought as
solutions of the ordinary differential equation \eqref{stationary}
under the boundary conditions $\psi_\xi \to 0$ as $|\xi| \to \infty$.
We begin with the continuation of the quiescent soliton $\psi_+$.
For a sequence of $h$ values sampling the interval $(0, \sqrt{2/27})$, the branch
starting at $\psi_+$ was path-followed all the
way to $V=c$, where $c$ is given by Eq.\eqref{c}. As $V$ increases,
the amplitude of the solution decreases while
the width grows. A typical solution with $V$ close to $c$ is shown in Fig.\ref{Vc}.
As $V \to c$,
the momentum $P$ tends to zero.
\begin{figure}
\includegraphics[width =\linewidth]{fig3.ps}
\caption{\label{Vc}
As $V \to c$, the $\psi_+$ solitons (for all $h$) and $\psi_-$ solitons
(for small $h$) approach
linear waves with slowly decaying envelopes.
Shown is the $\psi_+$ solution with $V$ close to $c$.
(In this plot, $h=0.01$; the corresponding $c=1.9996$.)
The $\psi_-$ solutions with $V$ close to $c$ have a similar shape.}
\end{figure}
The resulting $P(V)$ diagram is shown in Fig.\ref{PV}(a).
For each $h$, the unstable stationary $\psi_+$ soliton remains unstable
when travelling sufficiently slowly. The instability is due to a real eigenvalue $\lambda>0$ of
the linearised operator \eqref{EV}.
As $V$ grows, the unstable eigenvalue moves towards the
origin along the real axis.
Eventually, as the momentum $P$ reaches its maximum, the positive eigenvalue $\lambda$
collides with its opposite partner $\lambda'=-\lambda$, after which
both real eigenvalues move onto the imaginary axis and the soliton acquires stability.
The soliton remains stable all the way from the point
$V_c$, where
the momentum is maximum, to the value $V=c$ where $P=0$ and the soliton ceases to exist.
The resulting $P(V)$ dependence shows a remarkable similarity to the $P(V)$ diagram \cite{Baer}
for the {\it parametrically\/} driven nonlinear Schr\"odinger,
\begin{equation}
i \psi_t -iV \psi_\xi + \psi_{\xi \xi} + 2 |\psi|^2 \psi - \psi= h \psi^*.
\label{PDNLS}
\end{equation}
The ``parametrically driven'' diagram is reproduced in Fig.\ref{PV}(b) for the sake of comparison. One should keep in mind here
that the notation used for the parametrically driven solitons is opposite to the notation employed
in the externally driven situation.
Thus, the parametrically driven stationary ($V=0$) soliton with a positive
symplectic eigenvalue in its spectrum
is denoted $\psi_-$ (and not $\psi_+$ as its externally driven counterpart).
On the other hand,
the parametrically driven stationary soliton denoted $\psi_+$
is stable for sufficiently small $h$ (like the externally driven soliton $\psi_-$).
For this reason, the objects
featuring $P(V)$ diagrams similar to those of our externally driven solitons $\psi_+$ are the parametrically driven solitons $\psi_-$.
\begin{widetext}
\begin{figure}
\includegraphics[width =0.49\linewidth]{fig4a.ps}
\hspace{0.5mm}
\includegraphics[width =0.49\linewidth]{fig4b.ps}
\caption{\label{PV} (a) The momentum of the $\psi_+$, $\psi_-$
solitons continued to positive velocities. Decimal fractions attached to branches label the corresponding values of $h$, with the superscripts
$+$ and $-$ indicating the $\psi_+$ and $\psi_-$ solitons. (For example, $0.1^+$
marks the branch emanating from the stationary $\psi_+$ soliton with $h=0.1$.)
The only two branches that have not been labelled
are the $\psi_+$ and $\psi_-$ branches
with $h=0.01$; the curve just above $h=0$ is $0.01^-$ and the curve just below
the $h=0$ branch is $0.01^+$.
Solid curves mark stable and dashed ones unstable branches.
(b) The corresponding $P(V)$ diagram for the travelling parametrically driven solitons
from \cite{Baer}.}
\end{figure}
\end{widetext}
\subsection{The travelling $\psi_-$ soliton; $h <0.06$}
In the case of the $\psi_-$ solitons, there are two characteristic scenarios.
When $h$ lies between $0$ and $0.06$, the soliton $\psi_-$ exists for all $V$ between $0$ and $c$.
As $V$ is increased from zero, the momentum $P$ grows from $P=0$ and
reaches its maximum at some point $V_c$, $0<V_c<c$.
As $V$ is increased from $V_c$ to $c$, the momentum
decays to zero [see Fig.\ref{PV}(a)]. On the other hand,
when $h$ equals $0.06$ or lies above this value, the curve $P(V)$ does not
exhibit a point of maximum.
Consider, first, the case $h<0.06$.
The transformation scenario here is similar to the case of the soliton $\psi_+$;
see Fig.\ref{PV}. What makes
the bifurcation curves for the $\psi_+$ and $\psi_-$ solitons
different, is the stability properties of the two solutions.
Unlike the $\psi_+$ solution,
the {\it stationary\/} $\psi_-$ soliton with $h \leq 0.07749$ is stable and its stability persists when it is continued to
small nonzero velocities. As $V$ grows to the value $V_c$ where the momentum reaches its maximum,
two opposite pure imaginary eigenvalues collide at the origin on the $(\mathrm{Re} \lambda, \mathrm{Im} \lambda )$
plane and cross to the positive and negative real axis, respectively.
For the driving strengths
$h \leq 0.055$, this implies the loss of stability.
As for the interval $0.0551 \leq h \leq 0.06$, here the instability sets in earlier, as $V$ reaches some $V=V_0$
(where $V_0<V_c$). At the point $V=V_0$,
two pairs of pure imaginary eigenvalues collide and produce a quadruplet of complex eigenvalues $\pm \lambda, \pm \lambda^*$. (Here $\lambda$ has a small real part and a finite imaginary part.)
This is a point of the Hamiltonian Hopf bifurcation, associated with the oscillatory
instability \cite{ABP,Baer}.
As $V$ is increased to $V_1$
(where $V_0<V_1<V_c$), two pairs of complex-conjugate $\lambda$
converge on the real axis, becoming two positive
($\lambda_1=\lambda_2>0$) and two negative ($-\lambda_1=-\lambda_2$)
eigenvalues. Finally, when $V$ crosses through $V_c$, the eigenvalues $\lambda_1$ and $-\lambda_1$
move on to the imaginary axis. The soliton does not restabilise at this point though;
the real pair $\pm \lambda_2$ persists for all $V \geq V_c$.
The bifurcation values $V_0$ and $V_1$ are, naturally, functions of $h$.
The value $V_0$ decreases
(and $V_1$ increases) as $h$ is increased from $0.0551$. Eventually,
when $h$ reaches $0.07749$, $V_0$ reaches zero. It is interesting to
note that there is a gap between $V_1$ and $V_c$ for all $h$.
Therefore the oscillatory and monotonic instabilities never
coexist: for smaller $V$ ($V_0 < V <V_1$) the instability is
oscillatory, whereas for larger $V$ ($V>V_1$) the instability grows
monotonically.
Finally, it is appropriate to
mention here that the bifurcation curve for the $\psi_-$ solitons with small $h<0.06$
has the same form as the $P(V)$ dependence
for the small-$h$ {\it parametrically\/}
driven solitons (more specifically, parametrically
driven $\psi$-{\it plus\/} solitons) --- see Fig.\ref{PV}(b).
\subsection{The travelling $\psi_-$ soliton; $h \geq 0.06$}
The $P(V)$ graphs for $h \geq 0.06$ are qualitatively different
from the small-$h$ bifurcation curves. For these larger $h$, the
bifurcation curve emanating from the origin on the $(V,P)$-plane
turns back at some $V=V_\textrm{max}$, with the derivative $\partial P/ \partial V$ remaining strictly positive
for all $V \leq V_\textrm{max}$.
For $h$ in the interval $0.06 \leq h <0.25$, the $P(V)$ curve crosses the
$P$-axis [Fig.\ref{PV}(a), Fig.\ref{PV2}].
The solution arising at the point $V=0$ is nothing but the $\psi_{T2}$ twist soliton,
shown in Fig.\ref{twi}(a).
As we continue this branch to the $V<0$-region, the twist transforms into a complex of
two well-separated $\psi_-$ solitons.
The $P(V)$ curve makes one more turn
and eventually returns to the origin on the $(V,P)$-plane (Fig.\ref{PV2}).
As $V$ and $P$ approach zero, the distance between
the solitons in the complex tends to infinity.
\begin{figure}
\includegraphics[width =\linewidth]{fig5.ps}
\caption{\label{PV2}
The full $P(V)$ bifurcation diagram for the $\psi_-$ soliton with
$h=0.2$.
Also shown is the continuation of the $T3$ twist and
the $\psi_{(++)}$ branch.
More solution branches can be
obtained by the reflection $V \to -V$, $P \to -P$.
All branches shown in this figure correspond to unstable solutions.
}
\end{figure}
An interesting scenario arises when $h$ is greater than or equal to $0.25$.
Here, as $V$ grows from zero, the soliton $\psi_-$ gradually transforms into a
three-soliton
complex $\psi_{(+-+)}$. The branch
turns back towards $V=0$ but does not cross the $P$-axis. Instead of continuing to negative $V$,
the branch reapproaches the origin in the $(V,P)$ plane, remaining
in the positive $(V,P)$ quadrant
at all times. The ingoing path almost
coincides with the outgoing trajectory; as a result, the branch forms a lasso-shaped loop [Fig.\ref{PV}(a)].
Turning to the stability properties of
solutions along the branch continued from $\psi_-$, we start with a
short interval $0.06 \leq h \leq 0.07749$.
The movements of the stability eigenvalues along the section of the curve emanating from
the origin on the $(V,P)$ plane,
are similar to those in the interval $0.055 < h <0.06$
that we discussed in the previous paragraph. The stationary $\psi_-$ soliton is stable
and stability persists for small $V$. As $V$ reaches a certain $V_0>0$, a quadruplet of complex eigenvalues is
born and oscillatory instability sets in.
Subsequently two pairs of the complex eigenvalues converge on the real axis,
dissociate, then recombine
and diverge to the complex plane again; a pair of opposite pure imaginary eigenvalues
moves to the real axis and back --- however, despite all this activity on the complex plane,
the soliton solution never regains its stability.
For larger $h$, $h>0.07749$, the stationary $\psi_-$ soliton is unstable,
with a complex quadruplet in its spectrum. As we continue in $V$, two pairs of opposite
pure imaginary eigenvalues move on to the real axis, one after another.
For $h \geq 0.25$, the resulting arrangement
(two pairs of opposite real eigenvalues and a complex quadruplet)
persists until the branch reaches the origin on the $(V,P)$ plane.
On the other hand, when $h$ lies in the interval $0.07749 <h<0.25$,
the four real eigenvalues collide, pairwise, producing the second complex quadruplet
at some point on the curve before it crosses the $P$ axis in Figs.\ref{PV}(a) and \ref{PV2}.
Two complex quadruplets persist in the spectrum as we continue the curve
further. Thus the unstable stationary
soliton $\psi_-$ with $h>0.07749$, remains unstable for all $V$.
\section{Numerical continuation of the twist soliton}
\label{Numerical_Twist}
When $0.06 \leq h <0.25$, the branch resulting from the continuation of
the stationary $\psi_-$ soliton turns back and crosses the $P$-axis; the point of crossing
corresponds to the $T2$ twist solution. On the other hand, when $h$ lies outside the
$(0.06, 0.25)$ interval, the stationary $T2$ twist is disconnected from the
stationary $\psi_-$ soliton and can be used as a starting point for a new,
independent, branch. Another new branch is seeded by the $T3$ solution.
These additional branches of travelling solitons are traced in this section.
\subsection{Travelling twist $T2$ ($h < 0.06$)}
We start with the situation of small $h$: $h < 0.06$, and consider the
$T2$ solution first.
When the stationary $\psi_{T2}$ twist is path-followed to positive $V$, it transforms into a $\psi_{(++)}$ complex.
At some point, the $P(V)$ curve makes a U-turn [Fig. \ref{h05_complex}(a)] and
connects to the origin on the $(V,P)$ plane.
The entire positive-$V$ branch is unstable. The stationary twist has a complex quadruplet in its spectrum;
as the curve is continued beyond the turning point, the complex eigenvalues converge, pairwise,
on the positive and negative real axis. In addition,
a pair of opposite pure imaginary eigenvalues moves onto the real axis
as $V$ passes through the point of maximum of the
momentum in Fig.\ref{h05_complex}(a).
As the curve approaches the origin, the distance
between the two solitons in the complex
increases and becomes infinite when $V=P=0$.
The spectrum becomes the spectrum of two infinitely separated $\psi_+$ solitons,
i.e. it includes two positive eigenvalues $\lambda_1 \approx \lambda_2$;
their negative counterparts $-\lambda_1 \approx -\lambda_2$;
and four eigenvalues near the origin.
Continuing the $T2$ twist in the negative-$V$ direction, it transforms into a complex
of two $\psi_-$ solitons. At some point along the curve, a quadruplet of complex eigenvalues
converges on the imaginary axis
and the complex
stabilises.
(For the value $h=0.05$ which was used to produce Fig.\ref{h05_complex}(a), the stabilisation occurs at the point $V=-0.45$.)
Continuing to larger negative $V$, the branch turns back;
shortly after that (at $V=-0.503$ for $h=0.05$) the momentum reaches its minimum.
Two opposite imaginary eigenvalues collide at this point and move onto the real axis;
the solution loses its stability.
When continued beyond the turning point and the point of minimum
of momentum,
the curve connects to the origin on the $(V,P)$ plane (Fig. \ref{h05_complex}(a)).
As $V,P \to 0$,
the distance between the two $\psi_-$ solitons grows without bound.
The two opposite real eigenvalues decay in absolute value but remain in the spectrum
all the way to $V=0$.
It is interesting to note a similarity between the bifurcation diagram
resulting from the continuation of the small-$h$ $T2$ twist in the externally driven NLS
[Fig.\ref{h05_complex} (a)] and the corresponding diagram in the parametrically driven case.
The latter is reproduced, for convenience of comparison, in Fig.\ref{h05_complex} (b).
In both cases the continuation of the twist solution to negative velocities
gives rise to a stable complex of two stable solitons.
\begin{widetext}
\begin{figure}
\includegraphics[width =0.49 \linewidth]{fig6a.ps}
\includegraphics[width =0.49 \linewidth]{fig6b.ps}
\caption{\label{h05_complex} (a)
Continuation of the $T2$ and $T3$ twist solutions in the case of small $h$.
Also shown is the two-soliton branch which connects the origin to itself without
intersecting the vertical axis.
(b) The continuation of the twist soliton in the case of the
parametrically driven NLS equation (adapted from \cite{Baer}).
More solution branches can be
obtained by the reflection $V \to -V$, $P \to -P$ both in (a) and (b).
}
\end{figure}
\end{widetext}
\subsection{Travelling twist $T3$, $h<0.25$}
Figs.\ref{h05_complex}(a)
and \ref{PV2}
also show the continuation of the $T3$ twist soliton.
The bifurcation diagrams obtained for
$h<0.06$ and $0.06 \leq h <0.25$ are qualitatively similar.
Continuing the stationary $T3$
to positive velocities, the solution transforms into a $\psi_{(+-+)}$ complex.
If we, instead, continue to negative velocities, the twist transforms into a triplet of
$\psi_-$ solitons. Both $V>0$ and $V<0$ parts of the curve turn
and connect to
the origin on the $(V,P)$ plane. As $V$ and $P$ approach the origin
on either side, the distance between the three solitons bound in the complex grows without limit.
The stationary $T3$ has two complex quadruplets in its spectrum;
depending on $h$, both or one of these converge on the real axis
as we continue it to $V>0$ and $V<0$.
Two opposite eigenvalues cross through $\lambda=0$ at the
extrema of $P(V)$. Finally,
as $V$ and $P$ approach the origin, the spectrum transforms into the union of spectra of three separate solitons.
\subsection{Travelling twists $T2$ and $T3$, $h \geq 0.25$}
Another parameter region where the continuation of the
$\psi_-$ does not cross the $P$-axis, is $h \geq 0.25$.
The result of the continuation of the two
twist solutions is shown in Fig.\ref{twist_25}(a).
The continuation of $T2$ to negative velocities proceeds according to a scenario
similar to the $h=0.2$ and $h=0.05$ cases: the twist transforms into a complex of two
solitons $\psi_-$. At some negative $V$ the curve turns back and connects to the origin on the $(V,P)$ plane,
with the distance between the two solitons bound in the complex increasing without bound.
The eigenvalues evolve accordingly: two complex quadruplets in the spectrum of the stationary $T2$
persist for all $V<0$, supplemented by a pair of real eigenvalues which arrive from the
imaginary axis at the point of minimum of $P(V)$. As $V,P \to 0$, the discrete spectrum
becomes the union of the eigenvalues of two simple solitons.
The continuation of $T2$ to positive $V$ produces a less expected outcome.
Instead of turning clockwise and connecting to the origin
as in Fig.\ref{h05_complex}(a), the curve turns counterclockwise
and crosses through the $P$-axis once again. The solution arising at
the point $V=0$
is nothing but the twist $T3$.
Two complex quadruplets in the spectrum of $T2$ persist as
it is continued to $T3$.
The subsequent continuation produces a hook-shaped curve similar to the
curve described in the previous paragraph and leading to the origin on the
$(V,P)$-plane. The corresponding solution is a complex of three $\psi_-$ solitons,
shown in Fig.\ref{twist_25}(b). The third complex quadruplet
emerges at some $V$ before the turning point, and a pair of opposite
real eigenvalues arrives from the imaginary axis at the point of minimum of the momentum.
As $V,P \to 0$, the distance between the solitons grows to infinity
and the spectrum approaches the union of the eigenvalues of three
separate solitons $\psi_-$.
\begin{widetext}
\begin{figure}
\includegraphics[width =0.49\linewidth]{fig7a.ps}
\includegraphics[width =0.49\linewidth]{fig7b.ps}
\caption{\label{twist_25} (a) The $P(V)$ curve resulting from the
continuation of the twist for $h=0.25$. The starting point of the continuation is
marked by an open circle. All branches shown in this figure are unstable.
(b) A $\psi_{(---)}$ solution on the lower branch in (a). Here $V=-0.85$, $P=-3.2$.
In (b), the solid lines show the real and dashed imaginary part.
}
\end{figure}
\end{widetext}
\subsection{Other branches}
It is appropriate to note that there are branches which do not originate on any
of the four stationary solutions listed above ($\psi_{\pm}$, $\psi_{T2}$ or $\psi_{T3}$).
The simplest of these emerge from the origin
on the $(V,P)$ plane as bound states of simple solitons with large separation.
One branch of this sort arises for $h \geq 0.06$ (Fig. \ref{PV2}).
It emerges from the origin as the $\psi_{(++)}$ and returns as the $\psi_{(++++)}$ complex.
The entire branch is unstable.
Next,
unlike in the parametrically driven NLS, the same pair of {\it externally\/} driven travelling solitons
may bind at various distances. In particular, when $h$ is smaller than 0.06,
there is more than one bound state of
two $\psi_+$ solitons and more than one complex of two $\psi_-$ solitons.
Fig.\ref{h05_complex}(a) shows a branch $\psi_{(--)}$ that emerges from the
origin in the first quadrant of the $(V,P)$ plane, describes a loop
and re-enters the origin --- this time as
a $\psi_{(++)}$ branch. Note that
for small $V$ and $P$, the re-entering $\psi_{(++)}$ branch is indistinguishable from
the other $\psi_{(++)}$ branch --- the one that continues from the twist solution.
(In a similar way, the $V \to -V$, $P \to -P$ reflection of the $\psi_{(--)}$ branch
overlaps with the small-$V,P$ section of the $\psi_{(--)}$ branch arriving from the twist.)
All solutions constituting this branch are unstable.
\section{Concluding remarks}
\label{Conclusions}
In this paper, we studied stationary and moving solitons of
the externally driven nonlinear Schr\"odinger equation,
\begin{equation}
i \psi_{t} + \psi_{xx} + 2 |\psi|^2 \psi - \psi= -h.
\label{undamped}
\end{equation}
Our continuation results are summarised in Fig.\ref{chart}(a)
which shows ranges of stable velocities for each value of the driving strength $h$.
The notation $\psi_+$ and $\psi_-$ in this figure is used for the travelling
waves obtained by the continuation of the stationary $\psi_+$ and $\psi_-$
solitons, respectively. The travelling soliton preserves some similarity with
its stationary ancestor; this justifies the use of the same notation.
The uppermost curve in this figure is given by $V=c(h)$ where $c$ is the
maximum velocity of the soliton propagation, Eq.\eqref{c}.
This curve serves as the upper bound of the travelling $\psi_+$ soliton existence domain.
The dotted curve demarcates the existence domain of the travelling
$\psi_-$ soliton. For $h$ between 0 and 0.06 it coincides with the curve $V=c$;
for $0.06 \leq h \leq 0.2722$ it is given by $V=V_{\rm max}(h)$ where $V_{\rm max}$
is the position of the turning point in Fig.\ref{PV}(a).
The area shaded in blue (light grey) gives the stability region of the soliton $\psi_+$
and the area shaded by purple (dark grey) is the $\psi_-$ stability domain.
Note that the blue and purple regions partially overlap: for small $h$,
there is a range of ``stable'' velocities accessible to solitons of both families.
The light (yellow) strip inside the purple (dark grey) region
represents the stability domain of the bound state of two $\psi_-$ solitons.
As we cross the right-hand ``vertical" boundary of the purple (dark grey) region,
the $\psi_-$ soliton loses its stability to an oscillatory mode. If we had damping in the
system, the onset of instability would correspond to the Hopf bifurcation
giving rise to a time-periodic solution.
In the absence of damping,
the oscillatory instability produces an oscillatory structure with long but finite lifetime \cite{ABP}.
These solitons with oscillating amplitude and width, travelling with oscillatory velocities,
were observed in \cite{MQB}. These
are expected to exist to the right of the purple (dark grey) region.
Where possible, we tried to emphasise the similarity of the arising
bifurcation diagrams with the corresponding diagrams for the
{\it parametrically\/}
driven nonlinear Schr\"odinger equation:
\begin{equation}
i \psi_{t} + \psi_{xx} + 2 |\psi|^2 \psi - \psi= h \psi^*.
\label{paramaribo}
\end{equation}
Fig.\ref{chart}(b) reproduces the soliton attractor chart for Eq.\eqref{paramaribo} \cite{Baer}.
The structure of the stability regions in the two figures is remarkably similar. The slowly moving
solitons in the purple- (dark grey-) tinted region inherit their stability from the stationary solitons
of the family which is stable for small $h$ (the $\psi_-$ family in the externally-driven
and the $\psi_+$ family in the parametrically-driven case). On the other hand, the
solitons in the blue- (light grey-) shaded area are transonic (i.e. they move close to $c$, the velocity
of the sound waves). Their stability is due to the proximity of the nonlinear Schr\"odinger
equation to the KdV in the transonic limit \cite{BM}.
\begin{widetext}
\begin{figure}
\includegraphics[width =0.49\linewidth]{fig8a.ps}
\includegraphics[width =0.49\linewidth]{fig8b.ps}
\caption{\label{chart}
(a) The chart of the stable one-soliton solutions of the externally driven
nonlinear Schr\"odinger equation \eqref{undamped}.
Here $h$ varies from 0 to $\sqrt{2/27} \approx 0.2722$.
(b) The corresponding attractor chart for the parametrically driven soliton
(adapted from \cite{Baer}).
}
\end{figure}
\end{widetext}
\section{Introduction}
\noindent
The data processing inequality (DPI) has an intuitive interpretation: the information content in a quantum system cannot increase by performing local data processing on that system. It is an extremely useful property that is used extensively in quantum information \cite{nielsen00}. The DPI is known to hold for different entropy measures, and is stated generally as
\begin{equation}\label{eq:DPI}
\bar{H}(A|BC)_{\rho} \leq \bar{H}(A|B)_{\rho},
\end{equation}
where $\bar{H}(A|B)_{\rho}$ is a conditional entropic information measure of the state $\rho_{AB}$. Conditional entropy measures characterize the uncertainty about a system $A$ given a system $B$. The DPI is typically stated for the case where the local operation is a partial trace (i.e. a joint system $(B,C)$ is reduced to the system $B$), but this can be generalized to any physical operation.\footnote{The Stinespring dilation allows for any completely positive trace preserving (CPTP) map to be decomposed into a unitary followed by a partial trace. Since entropy measures are generally invariant under unitaries, the DPI applies to any CPTP map applied to the system $BC$.}
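To make the footnote's argument explicit (a standard reasoning step, spelled out here for completeness): if $\mathcal{E}$ is a CPTP map acting on $BC$, with Stinespring isometry $V:\mathcal{H}_{BC}\to\mathcal{H}_{B'}\otimes\mathcal{H}_{E}$, then the invariance of entropies under isometries together with Eq.~\ref{eq:DPI} gives
\[
\bar{H}(A|BC)_{\rho} = \bar{H}(A|B'E)_{(\mathbbm{1}_{A}\otimes V)\rho(\mathbbm{1}_{A}\otimes V)^{\dag}} \leq \bar{H}(A|B')_{(\mathcal{I}_{A}\otimes\mathcal{E})(\rho)},
\]
where $\mathcal{I}_{A}$ denotes the identity channel on $A$. Hence the DPI for the partial trace implies the DPI for an arbitrary physical operation on $BC$.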
In particular, the DPI holds for one of the most widely used entropy measures: the conditional von Neumann entropy, $H(A|B)_{\rho}$ \cite{vonneumann55}. It is defined for normalized density operators acting on a bipartite Hilbert space $\mathcal{H}_{AB}$, $\rho \in S_{=}(\mathcal{H}_{AB})$ (where $S_{=}(\mathcal{H}):=\{\rho\in \mathcal{P}(\mathcal{H})\,|\,\mathrm{Tr}(\rho)=1\}$ and $\mathcal{P}(\mathcal{H})$ is the set of positive semi-definite operators on $\mathcal{H}$), as $H(A|B)_{\rho} := H(AB)_{\rho} - H(B)_{\rho}$, where $H(A)_{\rho} := - \mathrm{Tr}(\rho_{A} \log \rho_{A})$ (all logarithms are taken to the base 2). For simplicity, we will not place labels on density operators to denote which space they act on when this is clear from the context. Also, Eq.~\ref{eq:DPI} for the von Neumann entropy is equivalent to its strong subadditivity: $H(ABC)_{\rho}+H(B)_{\rho}\leq H(AB)_{\rho}+H(BC)_{\rho}$.
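Unpacking the definitions makes this equivalence explicit:
\[
H(A|BC)_{\rho} \leq H(A|B)_{\rho}
\;\Longleftrightarrow\;
H(ABC)_{\rho} - H(BC)_{\rho} \leq H(AB)_{\rho} - H(B)_{\rho},
\]
and rearranging the terms yields precisely the strong subadditivity inequality stated above.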
The first proofs of the DPI for the von Neumann entropy relied on abstract operator properties \cite{lieb73,lieb73b,simon79}. Recently these proofs have been simplified \cite{nielsen05,petz86,ruskai07}. Other approaches have used the operational meaning of the von Neumann entropy \cite{horodecki06,horodecki05}, Minkowski inequalities \cite{carlen99,carlen08}, or holographic gravity theory \cite{headrick07,hirata07}. There has also been recent interest in the structure of states where there is equality in the DPI \cite{hayden04,herbut04,jencova10}. Our approach provides a new perspective by decomposing the proof of the DPI into a simple proof of a more fundamental property, followed by a specialization. It also provides a new approach to teaching the DPI.
Most precisely, we first prove the DPI for a different entropy: the smooth min-entropy (Theorem~\ref{thm:subaddsme}). This proof is almost trivial and only involves the partial trace applied to the definition of the smooth min-entropy \cite{koenig08}. Then we can specialize the smooth min-entropy to the von Neumann entropy by the quantum asymptotic equipartition property (QAEP) (Theorem~\ref{thm:QAEP}) \cite{tomamichel08}. Here we provide a short proof that, as opposed to \cite{tomamichel08}, omits the analysis of the rate of convergence of this specialization. We therefore obtain a self-contained proof of the DPI for the von Neumann entropy (Theorem~\ref{thm:subaddvN}).
We begin by introducing the smooth min-entropy (Section~2). This is followed by a high-level proof of the data processing inequality for the von Neumann entropy (Section~3). Section~4 provides a proof of the QAEP. Finally, Section~5 contains lemmas needed for the proofs in the previous sections.
\section{Smooth Min-Entropy}
\noindent
It has become apparent in recent works \cite{koenig08,tomamichel08,datta08,renner05} that the smooth min-entropy is a relevant quantity for measuring quantum information. It characterizes operational tasks in the general one-shot setting, both in information processing (such as data compression) and in physics (such as statistical mechanics). Note that the one-shot setting does not make assumptions about the structure of the relevant states, for example that they have product form. Since the von Neumann entropy also has an operational significance under certain additional assumptions, it could be expected that the von Neumann entropy can be obtained from smooth entropies as a special case. This is indeed true: the von Neumann entropy can be seen as an ``averaged'' smooth entropy via the QAEP. We introduce a particular entropy, the min-entropy\footnote{It is sufficient to take the maximum over $\lambda$ if a finite dimensional system is considered. However, in infinite dimensions it is necessary to take a supremum \cite{furrer10}.}
\begin{equation}
H_{\min}(A|B)_{\rho}:=\max\{\lambda\in\mathbb{R} \mid \exists \; \sigma_{B}\in S_{=}(\mathcal{H}_{B}) \text{ s.t. } \rho_{AB} \leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \sigma_{B}\},
\end{equation}
which leads to the smooth min-entropy, defined as
\begin{equation}
H_{\min}^{\epsilon}(A|B)_{\rho} := \max_{\rho_{AB}' \in \mathcal{B}^{\epsilon}(\rho_{AB})} H_{\min}(A|B)_{\rho'}. \label{eq:Hmin1}
\end{equation}
The state $\sigma_{B}$ is chosen from the set of normalized states $S_{=}(\mathcal{H}_{B})$ in the Hilbert space $\mathcal{H}_{B}$. The state $\rho'_{AB}$ is chosen from the set of subnormalized states in the Hilbert space $\mathcal{H}_{AB}$ that are also close to the state $\rho_{AB}$: $\mathcal{B}^{\epsilon}(\rho_{AB}) := \left\{ \rho'_{AB} | \rho'_{AB} \in S_{\leq}(\mathcal{H}_{AB}) , P(\rho_{AB},\rho'_{AB}) \leq \epsilon \right\}$. To specify this $\epsilon$-ball around a state $\rho$, we use the purified distance \cite{tomamichel09} $P(\rho,\sigma) := \sqrt{1-F(\rho,\sigma)^2}$ (where $F(\rho,\sigma):=\norm{\sqrt{\rho}\sqrt{\sigma}}_{1}$ and $\norm{\rho}_{1}:=\mathrm{Tr} \sqrt{\rho \rho^{\dag}}$).\footnote{If $\rho$ and $\sigma$ are not normalized, then the generalized fidelity is used: $\bar{F}(\rho,\sigma) := \norm{\sqrt{\rho \oplus (1-\mathrm{Tr} \rho)}\sqrt{\sigma \oplus (1-\mathrm{Tr} \sigma)}}_{1}$. If either $\rho$ or $\sigma$ is normalized, then the generalized fidelity reduces to the standard fidelity.}
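As a simple illustration (this example is ours and not part of the original development), consider a product state $\rho_{AB} = \rho_{A} \otimes \rho_{B}$. Choosing $\sigma_{B} = \rho_{B}$, the operator inequality in the definition holds precisely when $2^{-\lambda}\mathbbm{1}_{A} \geq \rho_{A}$, and restricting both sides to an eigenvector of $\rho_{A}$ with maximal eigenvalue shows that no other choice of $\sigma_{B}$ can do better. Hence
\[
H_{\min}(A|B)_{\rho_{A}\otimes\rho_{B}} = -\log \norm{\rho_{A}}_{\infty},
\]
where $\norm{\rho_{A}}_{\infty}$ is the largest eigenvalue of $\rho_{A}$; for the maximally mixed state $\rho_{A} = \mathbbm{1}_{A}/d_{A}$ this equals $\log d_{A}$.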
\section{Data processing inequality}
\noindent
We are now ready to state our main result and provide a high-level proof. If the entropies of interest are interpreted operationally then Theorem~\ref{thm:subaddsme} below deals with data processing in the one-shot scenario: a local physical operation is performed on a tri-partite quantum system \emph{once}, and a statement is made about the information content of such a system. Theorem~\ref{thm:subaddvN} can be interpreted as an average scenario: a statement is made about the information content \emph{on average} after applying a local physical operation to a tri-partite quantum state.
It is important to note that our proof of the DPI for the smooth min-entropy (Theorem \ref{thm:subaddsme}, below) applies to infinite- and finite-dimensional systems (see \cite{furrer10}), while our proof of the DPI for the von Neumann entropy (Theorem \ref{thm:subaddvN}, below) only applies to finite dimensions.
\subsection{General Data Processing Inequality}
\begin{theorem} [\cite{renner05,tomamichel09,koenig08} Smooth min-entropy DPI] \label{thm:subaddsme} Let $\rho \in S_{=}(\mathcal{H}_{ABC})$. Then
\begin{equation} \label{eq:dpi}
H_{\min}^{\epsilon}(A|BC)_{\rho} \leq H_{\min}^{\epsilon}(A|B)_{\rho}.
\end{equation}
\end{theorem}
\noindent
\proof{First we let $\lambda := H_{\min}^{\epsilon}(A|BC)_{\rho}$ and we choose the particular $\tilde{\rho}_{ABC} \in \ball{\rho_{ABC}}$ and $\sigma_{BC}$ in the definition of $H_{\min}^{\epsilon}(A|BC)_{\rho}$ such that $\lambda$ is maximized. From Eq.~\ref{eq:Hmin1} we have $\tilde{\rho}_{ABC} \leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \sigma_{BC}$, and by tracing out system $C$, which is a positive map, we get $\tilde{\rho}_{AB} \leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \sigma_{B}$. We know that $\tilde{\rho}_{ABC} \in \ball{\rho_{ABC}}$, and therefore $P(\rho_{ABC},\tilde{\rho}_{ABC})\leq \epsilon$. Since the purified distance does not increase under the partial trace (see Lemma~\ref{lemma:ballcpm}), it follows that $P(\rho_{AB},\tilde{\rho}_{AB})\leq \epsilon$. Therefore $\tilde{\rho}_{AB} \in \ball{\rho_{AB}}$ and $\sigma_{B}\in S_{=}(\mathcal{H}_{B})$ are candidates in the maximization defining $H_{\min}^{\epsilon}(A|B)_{\rho}$, and hence $H_{\min}^{\epsilon}(A|B)_{\rho} \geq \lambda = H_{\min}^{\epsilon}(A|BC)_{\rho}$.}
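The step from $\tilde{\rho}_{ABC} \leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \sigma_{BC}$ to the reduced inequality relies only on the partial trace being a positive map. A minimal numerical sketch of that positivity (illustrative only; helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_psd(d, rng):
    """Random positive semidefinite matrix G G^dagger."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return g @ g.conj().T

def trace_out_last(m, d_keep, d_last):
    """Partial trace over the last tensor factor of a bipartite operator."""
    t = m.reshape(d_keep, d_last, d_keep, d_last)
    return np.trace(t, axis1=1, axis2=3)

m = random_psd(6, rng)              # PSD operator on C^3 (x) C^2
mt = trace_out_last(m, 3, 2)
# Positivity is preserved, so operator inequalities survive the partial trace.
assert np.linalg.eigvalsh(mt).min() > -1e-9
```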
\subsection{Specialized Data Processing Inequality}
\noindent
Now that we have completed the proof of the DPI in the most general case, the only remaining difficulty is to specialize Theorem~\ref{thm:subaddsme} to the DPI for the von Neumann entropy. This specialization is achieved by using the limit of many i.i.d.\ copies of a state, via the quantum asymptotic equipartition property (QAEP).
\begin{theorem}[\cite{tomamichel08} QAEP]\label{thm:QAEP} Let $\rho\in S_{=}(\mathcal{H}_{AB})$. Then
\begin{equation}
\lim_{\epsilon\to 0} \lim_{n\to\infty}\frac{1}{n}H^{\epsilon}_{\min}(A^n|B^n)_{\rho^{\otimes n}} = H(A|B)_{\rho}.
\end{equation}
\end{theorem}
This directly reduces Theorem~\ref{thm:subaddsme} to the DPI for the von Neumann entropy.
\begin{theorem}[\cite{lieb73,lieb73b,simon79,nielsen05,petz86,ruskai07,horodecki06,horodecki05,carlen99,carlen08,headrick07,hirata07} von Neumann entropy DPI]\label{thm:subaddvN} Let $\rho \in S_{=}(\mathcal{H}_{ABC})$. Then
\begin{equation}
H(A|BC)_{\rho} \leq H(A|B)_{\rho}.
\end{equation}
\end{theorem}
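Theorem~\ref{thm:subaddvN} is the conditional form of strong subadditivity. As a sanity check (a NumPy sketch with our own helper names, not the proof), one can verify $H(A|BC)\leq H(A|B)$ for a random three-qubit state:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d, rng):
    """Random mixed state from a Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def partial_trace(rho, keep, dims):
    """Reduce a multipartite state to the subsystems listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    k = n
    for i in sorted((j for j in range(n) if j not in keep), reverse=True):
        t = np.trace(t, axis1=i, axis2=i + k)
        k -= 1
    d = int(np.prod([dims[j] for j in keep]))
    return t.reshape(d, d)

dims = [2, 2, 2]                                   # qubits A, B, C
rho = random_state(8, rng)
h_a_bc = entropy(rho) - entropy(partial_trace(rho, [1, 2], dims))
h_a_b = (entropy(partial_trace(rho, [0, 1], dims))
         - entropy(partial_trace(rho, [1], dims)))
assert h_a_bc <= h_a_b + 1e-9                      # H(A|BC) <= H(A|B)
```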
However, in order to have a self-contained proof of the data processing inequality for the von Neumann entropy, we provide an alternative, shorter proof of the QAEP than that of \cite{tomamichel08}.
\section{Quantum Asymptotic Equipartition Property}
\noindent
In order to prove Theorem~\ref{thm:QAEP}, we upper and lower bound $\lim_{\epsilon\to 0}\lim_{n\to\infty}H^{\epsilon}_{\min}(A^n|B^n)_{\rho^{\otimes n}}$ by $H(A|B)_{\rho}$. These bounds rely on basic properties of smooth entropies, which will be proved in Section~5. The lower bound (Lemma~\ref{lemma:lower}) is obtained by applying a chain rule to the conditional smooth min-entropy such that it is bounded by a difference of non-conditional smooth entropies (Lemma~\ref{lemma:chain}). The i.i.d.\ limit of non-conditional smooth entropies can then be taken (Lemmas~\ref{lemma:lowerboundHminlimit} and~\ref{lemma:upperboundH0limit}). The upper bound (Lemma~\ref{lemma:upper}) can be obtained by bounding the smooth min-entropy by the von Neumann entropy of a nearby state (Lemma~\ref{lemma:upperbound}), and then using the continuity of the von Neumann entropy when the i.i.d.\ limit is taken (Lemma~\ref{lemma:limitH}).
For these proofs we will need the smooth $0^{\text{th}}$ order R\'enyi entropy, which is defined as $H_{0}^{\epsilon}(A)_{\rho} := \min_{\rho' \in \mathcal{B}^{\epsilon}(\rho)} H_{0}(A)_{\rho'}$, where $H_{0}(A)_{\rho} := \log \mathrm{rank} \rho_{A}$. In addition, we will need the non-conditional smooth min-entropy defined as $H^{\epsilon}_{\min}(A)_{\rho}:=\max_{\rho'\in\ball{\rho}} H_{\min}(A)_{\rho'}$, where $H_{\min}(A)_{\rho} := -\log \norm{\rho_{A}}_{\infty}$. The infinity norm is defined as $\norm{\rho}_{\infty} := \max_{i} \{ | \lambda_{i} | \}$, where $\lambda_{i}$ are the eigenvalues of $\rho$. In addition, note that $H^{\epsilon}_{\min}(A|B)_{\rho}$ reduces to $H_{\min}(A)$ in the case that $B$ is trivial and $\epsilon=0$.
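For intuition, the non-smooth quantities are simple spectral functions of the state; a small numeric sketch (ours, base-2 logarithms throughout):

```python
import numpy as np

def h_min(rho):
    """H_min(A) = -log2 ||rho_A||_inf, i.e. minus the log of the largest eigenvalue."""
    return float(-np.log2(np.linalg.eigvalsh(rho)[-1]))

def h_0(rho, tol=1e-12):
    """H_0(A) = log2 rank(rho_A)."""
    return float(np.log2(int((np.linalg.eigvalsh(rho) > tol).sum())))

rho = np.diag([0.5, 0.25, 0.25, 0.0])
# H_min <= H <= H_0 for any state: here 1 <= 1.5 <= log2(3)
assert abs(h_min(rho) - 1.0) < 1e-12
assert abs(h_0(rho) - np.log2(3)) < 1e-12
```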
\begin{lemma}[Lower bound on the conditional smooth min-entropy]\label{lemma:lower}
Let $\rho\in S_{=}(\mathcal{H}_{AB})$. Then
\begin{equation}
H(A|B)_{\rho} \leq \lim_{\epsilon\to 0}\lim_{n\to\infty} \frac{1}{n} H^{\epsilon}_{\min}(A^{n}|B^{n})_{\rho^{\otimes n}}.
\end{equation}
\end{lemma}
\proof{We use the chain rule Lemma~\ref{lemma:chain} applied to the state $\rho\in S_{=}(\mathcal{H}_{AB})$:
\begin{equation}\label{eq:chainrule}
H_{\min}^{\frac{\epsilon}{3}}(AB)_{\rho} - H_{0}^{\frac{\epsilon}{3}}(B)_{\rho} \leq H_{\min}^{\epsilon}(A|B)_{\rho}.
\end{equation}
Next we use the non-conditional QAEP of Lemmas~\ref{lemma:lowerboundHminlimit} and \ref{lemma:upperboundH0limit} given by
\begin{equation}\label{eq:qaep2}
H(A)_{\rho} \leq \lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n}H_{\min}^{\epsilon}(A^n)_{\rho^{\otimes n}}, \quad H(A)_{\rho} \geq \lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n}H_{0}^{\epsilon}(A^n)_{\rho^{\otimes n}}.
\end{equation}
We apply Eq.~\ref{eq:chainrule} to the state $\rho^{\otimes n}$, divide by $n$, take the limits $n\to\infty$ and $\epsilon\to 0$, and then use Eq.~\ref{eq:qaep2} to show that the left-hand side satisfies
\begin{equation}
H(A|B)_{\rho} \leq\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{1}{n}(H_{\min}^{\frac{\epsilon}{3}}( A^n B^n )_{\rho^{\otimes n}} - H_{0}^{\frac{\epsilon}{3}}(B^n )_{\rho^{\otimes n}}),
\end{equation}
where we use the definition of the conditional von Neumann entropy.}
\begin{lemma}[Upper bound on the conditional smooth min-entropy]\label{lemma:upper}
Let $\rho\in S_{=}(\mathcal{H}_{AB})$. Then
\begin{equation}
\lim_{\epsilon\to 0}\lim_{n\to\infty} \frac{1}{n} H^{\epsilon}_{\min}(A^{n}|B^{n})_{\rho^{\otimes n}} \leq H(A|B)_{\rho}.
\end{equation}
\end{lemma}
\proof{We apply the relation of conditional von Neumann entropy and conditional smooth min-entropy, Lemma~\ref{lemma:upperbound}, to the state $\rho^{\otimes n}_{A^{n}B^{n}}$:
\begin{equation}
H_{\min}^{\epsilon}(A^n|B^n)_{\rho^{\otimes n}} \leq H(A^n|B^n)_{\tilde{\rho}},
\end{equation}
where $\tilde{\rho}\in\ball{\rho^{\otimes n}_{AB}}$. Dividing by $n$, then taking the limit as $\epsilon\to 0$ and $n\to\infty$, and using the limit of the conditional von Neumann entropy of an almost i.i.d. state, Lemma~\ref{lemma:limitH}, we have:
\begin{equation}
\lim_{\epsilon\to 0} \lim_{n\to\infty}\frac{1}{n}H(A^n|B^n)_{\tilde{\rho}} = H(A|B)_{\rho}.
\end{equation}}
\section{General Properties of Smooth Entropies}
\noindent
The following are properties of smooth entropies used to prove Lemmas~\ref{lemma:lower}, and \ref{lemma:upper}. In particular, we bound the smooth min-entropy and smooth $0^{th}$-order R\'enyi entropy in order to perform the i.i.d.\ limit of $\epsilon\to 0$, $n\to \infty$. The proofs rely on certain basic properties of the von Neumann entropy and distance measures, which are provided in the appendices.
\begin{lemma}[Chain rule]\label{lemma:chain}
Let $\rho \in S_{=}(\mathcal{H}_{AB})$. Then
\begin{equation}
H_{\min}^{\epsilon}(AB)_{\rho} - H_{0}^{\epsilon}(B)_{\rho} \leq H_{\min}^{3 \epsilon}(A|B)_{\rho}.
\end{equation}
\end{lemma}
\noindent
\proof{We pick the particular $\rho'_{AB} \in \ball{\rho_{AB}}$ in the definition of the non-conditional smooth min-entropy $H_{\min}^{\epsilon}(AB)_{\rho}=\lambda$ such that it is maximized. We also pick the particular $\tilde{\rho}_{B}\in\ball{\rho_{B}}$ from the definition of the $0^{\text{th}}$ order R\'enyi entropy such that it is minimized, and write the projector onto its support as $\Pi:=\Pi_{\mathrm{supp}(\tilde{\rho}_{B})}$. Now given that $\rho'_{AB} \leq 2^{-\lambda} \mathbbm{1}_{AB}$, then $\Pi\rho'_{AB}\Pi \leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \mathbbm{1}_{\mathrm{supp}(\tilde{\rho}_{B})}$, so we have
\begin{equation}
H^{\epsilon}_{\min}(AB)_{\rho}= \lambda, \quad \Pi \rho'_{AB} \Pi\leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \mathbbm{1}_{\mathrm{supp}(\tilde{\rho}_{B})}. \label{eq:minentrop}
\end{equation}
\noindent
Now we will need to ensure that $\hat{\rho}_{AB}:=\Pi\rho'_{AB} \Pi$ is close to $\rho_{AB}$. To do this, we use the triangle inequality for the purified distance (see Lemma 5 of \cite{tomamichel09}) in the first and third lines, as well as the fact that the purified distance decreases under the CP trace non-increasing map \hbox{$\rho\to\Pi \rho \Pi$} (Lemma~\ref{lemma:ballcpm}) in the second line:
\begin{align}
P(\hat{\rho}_{AB},\rho_{AB}) &\leq P(\hat{\rho}_{AB},\tilde{\rho}_{AB}) + P(\tilde{\rho}_{AB},\rho_{AB}) \\
&\leq P(\rho'_{AB},\tilde{\rho}_{AB}) + P(\tilde{\rho}_{AB},\rho_{AB}) \\
&\leq P(\rho'_{AB},\rho_{AB}) + 2 P(\tilde{\rho}_{AB},\rho_{AB}) \\
&= \epsilon + 2 P(\tilde{\rho}_{AB},\rho_{AB}), \label{eq:halfwaythere}
\end{align}
where we purify $\tilde{\rho}_{B}$ to the state $\ket{\phi}_{ABC}$ and define $\tilde{\rho}_{AB} := \mathrm{Tr}_{C}(\ket{\phi}\bra{\phi})$ (see Lemma 8 of \cite{tomamichel09}). Now all that is left to find is $P(\tilde{\rho}_{AB},\rho_{AB})$. From Theorem~\ref{thm:uhlmann} we can define a purification $\ket{\psi}_{ABC}$ of $\rho_{B}$ such that $\mathrm{Tr}_{C} \ket{\psi}\bra{\psi}=\rho_{AB}$ and the following holds:
\begin{equation}
P(\ket{\phi}_{ABC},\ket{\psi}_{ABC}) = P(\tilde{\rho}_{B},\rho_{B}). \label{eq:purifyequiv}
\end{equation}
Now since the purified distance doesn't increase under the partial trace (see Lemma~\ref{lemma:ballcpm}):
\begin{equation}
P(\ket{\phi}_{ABC},\ket{\psi}_{ABC}) \geq P(\tilde{\rho}_{AB},\rho_{AB}) \geq P(\tilde{\rho}_{B}, \rho_{B}). \label{eq:ptraceineq}
\end{equation}
Combining Eqs.~\ref{eq:purifyequiv} and~\ref{eq:ptraceineq} we get
\begin{equation} \label{eq:purifnochange}
P(\ket{\phi}_{ABC},\ket{\psi}_{ABC}) = P(\tilde{\rho}_{AB},\rho_{AB}) = P(\tilde{\rho}_{B}, \rho_{B}).
\end{equation}
We know that $P(\tilde{\rho}_{B},\rho_{B})\leq \epsilon$, and therefore $P(\tilde{\rho}_{AB},\rho_{AB}) \leq \epsilon$. Eq.~\ref{eq:halfwaythere} then gives $P(\hat{\rho}_{AB},\rho_{AB}) \leq 3 \epsilon$. Now returning to the smooth min-entropy in Eq.~\ref{eq:minentrop}, we define $\tau_{\tilde{\rho}_{B}} := \mathbbm{1}_{\mathrm{supp} (\tilde{\rho}_{B})} / \mathrm{rank}(\tilde{\rho}_{B})$ so that we have
\begin{align}
H^{\epsilon}_{\min}(AB)_{\rho}&= \left\{ \lambda+\log(\mathrm{rank}(\tilde{\rho}_{B})) \mid \Pi\rho'_{AB} \Pi\leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \tau_{\tilde{\rho}_{B}} \right\} \\
&\leq \max_{\hat{\rho} \in \mathcal{B}^{3\epsilon}(\rho)} \max_{\sigma_{B}} \left\{ \lambda \mid \hat{\rho}_{AB} \leq 2^{-\lambda} \mathbbm{1}_{A} \otimes \sigma_{B} \right\} + \log(\mathrm{rank}(\tilde{\rho}_{B})) \\
&= H_{\min}^{3\epsilon}(A|B)_{\rho_{AB}} + H^{\epsilon}_{0}(B)_{\rho_{B}}.
\end{align}}
Now we provide some bounds on non-conditional smooth R\'enyi entropies in terms of non-conditional R\'enyi entropies (Lemmas \ref{lemma:lowerboundHmin} and \ref{lemma:upperboundH0}). We then use these bounds to show one direction of the non-conditional QAEP (Lemmas \ref{lemma:lowerboundHminlimit} and \ref{lemma:upperboundH0limit}). Note that the non-conditional QAEP is known, and is sometimes referred to as Schumacher compression \cite{schumacher95}. It can be proved by using projectors onto a typical set, or essentially reduced to a classical problem via the law of large numbers \cite{cover91}. We include our proofs below since they give an alternative derivation based on bounds on smooth entropies in terms of R\'enyi entropies, and these bounds may be of general interest in quantum information theory.
\begin{lemma}[Lower bound on the smooth min-entropy]\label{lemma:lowerboundHmin}
Let $\rho\in S_{=}(\mathcal{H}_{A})$, $\alpha>1$, and $\epsilon\in (0,1]$. Then
\begin{equation}
H_{\alpha}(A)_{\rho} + \frac{\log (1-\sqrt{1-\epsilon^2})}{\alpha-1} \leq H_{\min}^{\epsilon}(A)_{\rho}.
\label{eq:limitpart2}
\end{equation}
\end{lemma}
\proof{First, we let $\rho=\sum_{x} \lambda_{x} \ket{x}\bra{x}$. We construct a quantum state $\sigma$ whose eigenvectors are the same as those of $\rho$, and whose eigenvalues, $\nu_{x}$, are $\nu_{x} = \lambda_{x}$ if $x\in\mathcal{X}$ and $\nu_{x} = 0$ otherwise, where $\mathcal{X}:=\{x \in \{ 1,2,\ldots,\dim \mathcal{H} \} : \lambda_{x} \leq \lambda^{*}\}$, and $\lambda^{*} \in \left[ 0 , 1 \right]$. Note that we will fix $\lambda^{*}$ to a specific value later in the proof. Hence $\sigma \in S_{\leq}(\mathcal{H})$. Now we may write the fidelity between $\rho$ and $\sigma$ as
\begin{equation}
\norm{\sqrt{\rho}\sqrt{\sigma}}_{1} = \sum_{x} \lambda_{x}^{1/2} \nu_{x}^{1/2} = \sum_{x \in \mathcal{X}} \lambda_{x}.
\end{equation}
We can write (for $\alpha>1$):
\begin{equation}
\sum_{x} \lambda_{x}^{\alpha} \geq \sum_{x\notin\mathcal{X}} \lambda_{x}^{\alpha-1}\lambda_{x} \geq \norm{\sigma}_{\infty}^{(\alpha-1)}\sum_{x\notin\mathcal{X}} \lambda_{x} = \norm{\sigma}_{\infty}^{(\alpha-1)}\left(1-F(\rho,\sigma)\right).
\end{equation}
By taking the $\log$ of this equation and since $\nu_{x}\leq\norm{\sigma}_{\infty}$ $\forall x$ we get
\begin{equation}
H_{\alpha}(A)_{\rho} \leq \frac{1}{1-\alpha} \log (1-F(\rho,\sigma)) + H_{\min}(A)_{\sigma}.
\end{equation}
Now we choose a particular $\lambda^{*}$ so that the fidelity is fixed to be $F(\rho,\sigma)=\sqrt{1-\epsilon^2}$ ($1\geq\epsilon>0$). This means that $P(\rho,\sigma) \leq \epsilon$, and hence $\sigma \in \mathcal{B}^{\epsilon}(\rho)$, so $H_{\min}(A)_{\sigma} \leq H^{\epsilon}_{\min} (A)_{\rho}$.}
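The eigenvalue-cutoff construction in this proof is easy to check numerically. The sketch below (a diagonal example of our own choosing) builds $\sigma$ from the eigenvalues of $\rho$ at or below a cutoff $\lambda^{*}$ and verifies the intermediate bound $H_{\alpha}(A)_{\rho} \leq \frac{1}{1-\alpha} \log (1-F(\rho,\sigma)) + H_{\min}(A)_{\sigma}$:

```python
import numpy as np

lam = np.array([0.4, 0.3, 0.2, 0.1])   # eigenvalues of a diagonal state rho
alpha = 2.0
lam_star = 0.25                         # cutoff: sigma keeps eigenvalues <= lam_star
keep = lam <= lam_star

h_alpha = np.log2((lam ** alpha).sum()) / (1 - alpha)
fid = lam[keep].sum()                   # F(rho, sigma) = sum of kept eigenvalues
h_min_sigma = -np.log2(lam[keep].max())

bound = np.log2(1 - fid) / (1 - alpha) + h_min_sigma
assert h_alpha <= bound + 1e-12
```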
\begin{lemma}[Upper bound on the $0^{th}$ order R\'enyi entropy]\label{lemma:upperboundH0}
Let $\rho\in S_{=}(\mathcal{H}_{A})$, $1/2<\alpha<1$, and $\epsilon\in [0,1)$. Then
\begin{equation}
H_{0}^{\epsilon}(A)_{\rho} \leq H_{\alpha}(A)_{\rho}+\frac{1}{\alpha-1}\log \sqrt{1-\epsilon}.
\end{equation}
\end{lemma}
\proof{This proof follows similarly to the proof of Lemma~\ref{lemma:lowerboundHmin}. We construct a quantum state $\sigma$ in the same manner as in Lemma~\ref{lemma:lowerboundHmin}. Now $1/2<\alpha<1$, so we have
$
\sum_{x} \lambda_{x}^{\alpha} \geq \sum_{x\in\mathcal{X}}\lambda_{x}^{\alpha} \geq (1/\mathrm{rank}\sigma)^{(\alpha-1)}\sum_{x\in\mathcal{X}} \lambda_{x}.
$
Taking the $\log$ gives
$
H_{\alpha}(A)_{\rho} \geq \frac{1}{1-\alpha} \log F(\rho,\sigma) + H_{0}(A)_{\sigma}.
$
Now we choose a particular $\lambda^{*}$ so that we can write the fidelity as $F(\rho,\sigma)=\sqrt{1-\epsilon}$, ($1>\epsilon\geq0$), and so $\sigma\in\ball{\rho}$. Then $H_{0}(A)_{\sigma} \geq H^{\epsilon}_{0} (A)_{\rho}$, which gives the result.}
\begin{lemma}[Non-conditional QAEP for smooth min-entropy]\label{lemma:lowerboundHminlimit}
Let $\rho\in S_{=}(\mathcal{H}_{A})$. Then
\begin{equation}
H(A)_{\rho} \leq \lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n}H_{\min}^{\epsilon}(A^n)_{\rho^{\otimes n}}.
\end{equation}
\end{lemma}
\proof{First, we calculate the quantum R\'enyi entropy of order $\alpha$, defined as $H_{\alpha}(A)_{\rho} := 1/(1-\alpha) \log \mathrm{Tr} \rho^{\alpha}$ for the state $\rho^{\otimes n}$:
\begin{equation}
H_{\alpha}(A^n)_{\rho^{\otimes n}} = n H_{\alpha}(A)_{\rho}. \label{eq:tensoralpha}
\end{equation}
Now we may write Eq.~\ref{eq:limitpart2} from Lemma~\ref{lemma:lowerboundHmin} as
\begin{equation}
\lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n}H_{\min}^{\epsilon}(A^n)_{\rho^{\otimes n}} \geq H_{\alpha}(A)_{\rho}.
\end{equation}
This is true for all $\alpha >1$ and so in particular, it's true if we take the limit as $\alpha\to 1^{+}$, where we know from Lemma~\ref{lemma:alphato1} that $\lim_{\alpha \to 1} H_{\alpha}(A)_{\rho} = H(A)_{\rho}$.}
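Both ingredients of this proof, additivity of $H_{\alpha}$ on product states (Eq.~\ref{eq:tensoralpha}) and the limit $\alpha\to 1$, can be checked numerically (a sketch with base-2 logarithms, our example values):

```python
import numpy as np

def h_alpha(lam, alpha):
    """Quantum Renyi entropy of order alpha for eigenvalues lam (log base 2)."""
    return np.log2((lam ** alpha).sum()) / (1 - alpha)

lam = np.array([0.5, 0.3, 0.2])
# Additivity on product states: H_alpha(rho (x) rho) = 2 H_alpha(rho)
lam2 = np.outer(lam, lam).ravel()
assert abs(h_alpha(lam2, 1.5) - 2 * h_alpha(lam, 1.5)) < 1e-9
# alpha -> 1 recovers the von Neumann (Shannon) entropy
h = -(lam * np.log2(lam)).sum()
assert abs(h_alpha(lam, 1 + 1e-6) - h) < 1e-4
```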
\begin{lemma}[Non-conditional QAEP for $0^{th}$-order R\'enyi entropy]\label{lemma:upperboundH0limit}
Let $\rho\in S_{=}(\mathcal{H}_{A})$. Then
\begin{equation}
H(A)_{\rho} \geq \lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n}H_{0}^{\epsilon}(A^n)_{\rho^{\otimes n}}.
\end{equation}
\end{lemma}
\proof{This follows similarly to the proof of Lemma~\ref{lemma:lowerboundHminlimit}, but now Lemma~\ref{lemma:upperboundH0} is used.}
\begin{lemma}[Relation of conditional von Neumann and conditional smooth min-entropy]\label{lemma:upperbound}
Let $\rho \in S_{=}(\mathcal{H}_{AB})$. Then $\exists \; \tilde{\rho} \in \ball{\rho}$ such that
\begin{equation}
H^{\epsilon}_{\min}(A|B)_{\rho} \leq H(A|B)_{\tilde{\rho}}.
\end{equation}
\end{lemma}
\proof{We start with the definition of the conditional von Neumann entropy for subnormalized states $\tilde{\rho}_{AB}\in S_{\leq}(\mathcal{H}_{AB})$, so we have
\begin{align}
H(A|B)_{\tilde{\rho}}&:=\frac{1}{\mathrm{Tr} \tilde{\rho}_{AB}} \max_{\sigma_{B}} \mathrm{Tr} (\tilde{\rho}_{AB}(\log(\mathbbm{1}_{A} \otimes \sigma_{B})-\log(\tilde{\rho}_{AB}))) \\
&\geq \frac{1}{\mathrm{Tr} \tilde{\rho}_{AB}} \mathrm{Tr} ( \tilde{\rho}_{AB}(\log(\lambda \mathbbm{1}_{A} \otimes \sigma'_{B}) -\log(\tilde{\rho}_{AB}))) - \log \lambda,
\end{align}
where we drop the maximization, picking a specific $\sigma_{B}'$: the state that allows $\lambda$ to be maximized in $H_{\min}(A|B)_{\rho}$. We have also added and subtracted $\log \lambda$, defined as $-\log \lambda=H_{\min}^{\epsilon}(A|B)_{\rho}$, and we choose $\tilde{\rho}$ to be the state that allows $\lambda$ to be maximized in the definition of $H_{\min}^{\epsilon}(A|B)_{\rho}$.
Also, to simplify our expression, we use the quantum relative entropy, defined as $H(\rho||\sigma) := \mathrm{Tr} (\rho \log \rho) - \mathrm{Tr} (\rho \log \sigma)$. Now we may write
\begin{equation}
-\frac{1}{\mathrm{Tr} \tilde{\rho}_{AB}} H(\tilde{\rho}_{AB} || \lambda \mathbbm{1}_{A} \otimes \sigma'_{B}) + H_{\min}^{\epsilon}(A|B)_{\rho}\geq H_{\min}^{\epsilon}(A|B)_{\rho},
\end{equation}
where in the last line, we use the monotonicity of the $\log$ to show that $\tilde{\rho}_{AB}\log \tilde{\rho}_{AB} \leq \tilde{\rho}_{AB} \log (\lambda \mathbbm{1}_{A} \otimes \sigma'_{B})$. This then implies $-H(\tilde{\rho}_{AB} || \lambda \mathbbm{1}_{A} \otimes \sigma'_{B}) \geq 0$.}
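The final step uses that $\tilde{\rho}_{AB} \leq \lambda \mathbbm{1}_{A} \otimes \sigma'_{B}$ forces the relative entropy to be non-positive. In the commuting (diagonal) case this reduces to an elementary fact about logarithms, sketched below with our own example values:

```python
import numpy as np

def rel_entropy_diag(p, x):
    """Relative entropy H(p||x) = sum_i p_i log2(p_i / x_i) for commuting (diagonal) operators."""
    p, x = np.asarray(p, float), np.asarray(x, float)
    m = p > 0
    return float((p[m] * np.log2(p[m] / x[m])).sum())

p = np.array([0.5, 0.3, 0.2])     # eigenvalues of a state rho
x = np.array([0.6, 0.5, 0.4])     # eigenvalues of an operator X with rho <= X
assert all(p <= x)
assert rel_entropy_diag(p, x) <= 0.0     # hence -H(rho||X) >= 0
```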
\nonumsection{Acknowledgements}
\noindent
The authors acknowledge support from the European Research Council (grant no. 258932), and the Swiss National Science Foundation (grant no. 200020-135048).
\nonumsection{References}
\section{Introduction}
The construction of theories of gravity that are also true gauge theories has led to the study of Chern-Simons gravities, first in 2+1 dimensions \cite{achucarro,chern-simons-2+1-2}, where these theories are equivalent, at least modulo the question of invertibility of the metric, to General Relativity, and then in higher dimensions \cite{chamseddine-1,chamseddine-2}. Chern-Simons gravities have been extended to the supersymmetric case in refs.\cite{troncoso-zanelli-1,troncoso-zanelli-2}. These theories have a wealth of interesting dynamical properties and solutions including black holes, that have been studied in many papers\footnote{for a recent review and an extensive and comprehensive list of references see ref.\cite{zanelli-lectures}}. Chern-Simons theories are defined in odd-dimensional manifolds and therefore some kind of dimensional reduction or compactification would be required to describe the observed four dimensional universe.
Chern-Simons gauge and gravity theories have been extended by using transgression forms instead of Chern-Simons forms as actions \cite{potsdam,borowiec-1,borowiec-2,IRS-1,IRS-2,sarda-1,sarda-2,tesis,motz1,motz3}, which makes the action gauge invariant instead of just quasi-invariant and has the advantage, in the case of gravity, that the boundary terms required to regularize the conserved charges and black hole thermodynamics are built in \cite{motz1,motz3}.
Wess-Zumino-Witten (WZW) and gauged Wess-Zumino-Witten (gWZW) theories \cite{wess-zumino,witten-gWZW} were first introduced as effective theories in nuclear and particle physics. WZW and gWZW models are closely related to Chern-Simons and transgression theories, as they can be regarded as induced at the boundary of a manifold in which CS or transgression theories, respectively, are defined if a given gauge field is pure gauge. WZW and gWZW theories are defined in even-dimensional manifolds.
Recently, a particular type of gWZW models for space-time groups have been considered as gravitational theories by Anabalon et al.\cite{anabalon-1,anabalon-2}, who furthermore showed that in 3+1 dimensions, for the gauge group SO(4,2) and with some additional assumptions, the model yields the field equations of General Relativity with cosmological constant. The particular model used is the G-G model (in which the full diagonal subgroup is gauged), which is explicitly topological (without the usual kinetic term).
As Chern-Simons gravities, gWZW models for space-time groups provide gauge theories of gravity, which could be relevant for instance for the construction of a consistent quantum gravity, and as modified gravity theories that could solve in a dynamical way problems of modern cosmology such as the nature of dark energy or the cause of inflation in the early universe (if an inflationary phase did indeed occur). Some possible problems with this approach are: the emergence of the geometrical interpretation of certain components of the fields, which are a priori simply fields living in a topological space; and the interpretation of the other field components (maybe as matter or dark energy, etc.).
In this paper we study these gravitational gWZW models along the lines of refs.\cite{anabalon-1,anabalon-2}, focusing on a simple two-dimensional toy model, as a first step towards a future, more thorough investigation of higher dimensional models of possible phenomenological interest.
In section 2 we review some background material and then derive the field equations of generic (not just gravitational) topological G-G gWZW models in any dimension.
In section 3 we give the explicit field equations of the gWZW model in two dimensions and for gauge group $SU(2)$, as a warm up exercise for the model with gauge group SO(2,1), in which we are interested. In section 4 we derive black hole solutions and discuss the Noether mass and thermodynamics of those solutions. Surprisingly, at least at first glance, the mass and entropy of the black hole turn out to be zero.
\section{Transgressions and gauged WZW actions}
\subsection{Transgressions}
Chern-Simons forms $\mathcal{Q}_{2n+1}(A)$ are differential forms defined for
a connection $A$, which under gauge transformations of that connection transform by a closed form, so they are said to be \textit{quasi invariant}. Transgression forms $\mathcal{T}_{2n+1}$ are a generalization of Chern-Simons forms which are strictly invariant. Transgressions depend on two connections, $A$ and $\overline{A}$, and can be written as the difference of two Chern-Simons forms plus an exact form
$$
\mathcal{T}_{2n+1}=\mathcal{Q}_{2n+1}(A)-\mathcal{Q}_{2n+1}(\overline
{A})-dB_{2n}\left( A,\overline{A}\right),
$$
or also as\footnote{In what follows wedge product between forms is implicitly assumed.} (for the mathematical definitions and properties of Chern-Simons and transgression forms see \cite{nakahara,zumino-les-houches,alvarez} and references therein),
\begin{equation}
\mathcal{T}_{2n+1}\left( A,\overline{A}\right) =(n+1)\int_{0}^{1}%
dt\ <\Delta{A}F_{t}^{n}>,
\end{equation}
where
$A_{t} = tA+(1-t)\overline{A}=\overline{A}+t\Delta A $
is a connection that interpolates between the two independent gauge potentials
$A$ and $\overline{A}$ and $\Delta A=A-\overline{A}$. The Lie algebra-valued one-forms $A=A_{\mu}^{A}G_{A}\ dx^{\mu}$ and $\overline{A}=\overline{A}_{\mu}^{A}G_{A}\ dx^{\mu}$ are
connections under gauge transformations, $G_{A}$ are the generators of the Lie algebra of the gauge group $G$, and
$<\cdots>$ stands for a symmetrized invariant trace in the Lie algebra\footnote{Greek indices are used as space-time indices with values from $0$ to $d-1$ where $d$ is the dimensionality of spacetime; lower case Latin indices from the beginning of the alphabet $a,~b,~c,...$ are tangent space (or Lorentz) indices with values from $0$ to $d-1=2n$; lower case Latin indices from the middle of the alphabet $i,~j,~k,...$ will be used as ordinary vector indices with values from $0$ to $3$. Upper case Latin indices label the generators $G_{A}$ of the Lie group considered and take values from 1 to the dimension of the group.}.
The curvatures (or field strengths) are $F=dA+A^2$, $\overline{F}=d\overline{A}+\overline{A}^2$, while for the interpolating connection the curvature is
$F_{t}=dA_{t}+A_{t}^{2}=t F +(1-t)\overline{F}+t(t-1)(\Delta A)^{2}$. Setting $\overline{A} =0$
in the transgression form yields the Chern-Simons form $\mathcal{Q}_{2n+1}(A)$ for $A$, that is $\mathcal{Q}_{2n+1}(A)\equiv \mathcal{T}_{2n+1}(A,\overline{A}=0)$.
The exterior derivative of the transgression form gives globally the difference between the invariant polynomials associated to each gauge connection
$$<F^{n+1}>-<\overline{F}^{n+1}>=d\mathcal{T}_{2n+1}(A,\overline{A}).$$
Transgressions have been used to define gauge invariant actions for field theories that generalize Chern-Simons actions in refs.\cite{potsdam,borowiec-1,borowiec-2,IRS-1,IRS-2,sarda-1,sarda-2,tesis,motz1,motz3}. For those actions $A$ and $\overline{A}$ may be taken as defined in distinct manifolds $\mathcal{M}$ and $\overline{\mathcal{M}}$ respectively, with a common boundary $\partial\mathcal{M}\equiv \partial\overline{\mathcal{M}}$, and it is possible either to consider both fields as independent dynamical fields or to consider $\overline{A}$ as a fixed (non dynamical) background. The action for those theories is
$$
I_{Trans}(A,\overline{A})=\int _{\mathcal{M}}\mathcal{Q}_{2n+1}(A)-\int _{\overline{\mathcal{M}}}\mathcal{Q}_{2n+1}(\overline
{A})-\int _{\partial\mathcal{M}}B_{2n}\left( A,\overline{A}\right).
$$
Those actions generalize Chern-simons actions, providing in the case of Chern-Simons gravity the boundary terms necessary to have a well defined action principle and regularize the conserved charges and black hole thermodynamics \cite{motz1,motz3}.
\subsection{Gauged Wess-Zumino-Witten actions}
A class of related but different theories is obtained if $A$ and $\overline{A}$ are taken to be not independent, but related by a gauge transformation generated by an element $h(x)$ of $G$
$$\overline{A}=h^{-1}Ah+h^{-1}dh\equiv A^h$$
and the manifolds $\mathcal{M}$ and $\overline{\mathcal{M}}$ are the same.
Then the degrees of freedom of the theory correspond to $A$ and $h$. The difference between the Chern-Simons form for $A^h$ and the Chern-Simons form for $A$ is given by the sum of an exact and a closed form \cite{zumino-les-houches}
$$\mathcal{Q}_{2n+1}(A^h)-\mathcal{Q}_{2n+1}(A)=d\alpha _{2n}+\mathcal{Q}_{2n+1}(h^{-1}dh)$$
where $\mathcal{Q}_{2n+1}(h^{-1}dh)$ is the Wess-Zumino-Witten (WZW) form, which satisfies $d\mathcal{Q}_{2n+1}(h^{-1}dh)=0$.
The gauged Wess-Zumino-Witten (gWZW) action is defined as
\begin{equation}
I_{gWZW}(A,h)=\int _{\mathcal{M}}\mathcal{T}_{2n+1}(A,A^h)
\end{equation}
which can be written as
$$
I_{gWZW}(A,h)=\int _{\mathcal{M}}[\mathcal{Q}_{2n+1}(A)-\mathcal{Q}_{2n+1}(
A^h)]-\int _{\partial\mathcal{M}}B_{2n}\left( A,A^h\right)
$$
or equivalently
$$
I_{gWZW}(A,h)=-\int _{\mathcal{M}}\mathcal{Q}_{2n+1}(h^{-1}dh) -\int _{\partial\mathcal{M}}[\alpha _{2n}+B_{2n}\left( A,A^h\right)]
$$
The action $I_{gWZW}$ is effectively $2n$-dimensional and lives at the boundary $\partial\mathcal{M}$: even though the WZW part lives in the $2n+1$-dimensional bulk $\mathcal{M}$, the way in which it is extended into the bulk is immaterial at the quantum level provided that the constant in front of the action (included in the definition of the invariant symmetrized trace) is quantized \cite{witten-gWZW}. We will also see below that the bulk is irrelevant for the classical field equations derived from the action.
The action $I_{gWZW}(A,h)$ is invariant under local gauge transformations generated by a point dependent element $g$ of the group $G$ given by $A\rightarrow g^{-1}[A+d]g$ and $h\rightarrow g^{-1}hg$, as it follows from its definition as a transgression.
For instance the gWZW action in 1+1 dimensions is \cite{anabalon-1}
\begin{equation}\label{action}
I_{gWZW}=\kappa\int_{\Sigma}{\frac{1}{3}\langle(h^{-1}dh)^{3}\rangle}-\kappa\int_{M^{2}}{\langle(A-h^{-1}dh)A^{h}\rangle}
\end{equation}
where $M^2\equiv\partial\Sigma$ is the space-time of the theory.
\subsection{Field equations for the gWZW action in any dimension}
We derived the field equations in any dimension from the variation of the action, which was computed starting from the formula for the variation of the transgression \cite{motz3}
$$
\delta\mathcal{T}_{2n+1}=(n+1)\left[<F^n\delta A>-<\overline{F}^n\delta \overline{A}>\right]
-n(n+1)d\{\int _0^1dt<\Delta AF_t^{n-1}\delta A_t>\}
$$
For the detailed derivation see Appendix A.
The resulting variation of the gWZW action is (though see the next subsection for the discussion of a subtle point)
\begin{eqnarray}\label{variation_action}
\delta I_{gWZW}=-(n+1)\int _{\Sigma ^{2n}}\Bigg\{
n \int _0^1dt~t<[(A-A^h)F_t^{n-1}+(A^{h^{-1}}-A)\tilde{F}_t^{n-1}]\delta A>+\Bigg.\\ \nonumber
\Bigg. +\int _0^1dt<F_t^{n}h^{-1}\delta h>+n\int _0^1dt~t(t-1)<\Delta A F_t^{n-1}[\Delta A,h^{-1}\delta h]>\Bigg\}+\\ \nonumber
+n(n+1)\int _{\partial\Sigma ^{2n}}\Big\{\int _0^1dt~t<\Delta A F_t^{n-1}h^{-1}\delta h>\Big\}
\end{eqnarray}
where the space-time manifold is $\Sigma ^{2n}\equiv\partial\mathcal{M}$ and $\partial\Sigma ^{2n}$ is its $(2n-1)$-dimensional boundary, and $A^h =h^{-1}(A+d)h$, $A^{h^{-1}}=h(A+d)h^{-1}$, $\Delta A =A-A^h$, $A_t=tA+(1-t)A^h$, $\tilde{A}_t=t~A+(1-t)A^{h^{-1}}$, $F_t=dA_t+A_t^2$ and $\tilde{F}_t=d\tilde{A}_t+\tilde{A}_t^2$.
From this expression we can read the field equations derived from the action principle $\delta I_{gWZW}=0$, with the ones corresponding to $\delta A$ being
\begin{eqnarray}
\int _0^1dt~t<[(A-A^h)F_t^{n-1}+(A^{h^{-1}}-A)\tilde{F}_t^{n-1}]G^A>=0
\end{eqnarray}
and the ones corresponding to $\delta h$ being
\begin{eqnarray}
\int _0^1dt<F_t^{n}G^A>+n\int _0^1dt~t(t-1)<\Delta A F_t^{n-1}[\Delta A,G^A]>=0
\end{eqnarray}
In the particular case of two dimensions ($n=1$), and assuming that the matrix $M^{AB}=<G^AG^B>$ is invertible, we obtain
\begin{eqnarray}
A^h-A^{h^{-1}}=0\\
F+F^h-\Delta A ^2=F+F^h-\frac{1}{2}[A^h-A,A^h-A]=0
\end{eqnarray}
These equations can be rewritten for later use as
\begin{eqnarray}
D(h^2)=0\label{1stequnrewritten}\\
F+F^h-(h^{-1}Dh)^2=0\label{2ndequnrewritten}
\end{eqnarray}
because $A^h-A^{h^{-1}}=h^{-1}D(h^2)h^{-1}$ and $A^h-A=h^{-1}Dh$.
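The identity $A^h-A^{h^{-1}}=h^{-1}D(h^2)h^{-1}$ can be verified numerically at a point by representing the one-forms by their $dt$-components and taking $D(h^2) = d(h^2)+[A,h^2]$ (the adjoint covariant derivative, the convention consistent with $A^h-A=h^{-1}Dh$). All matrices below are our own illustrative choices:

```python
import numpy as np

t = 0.3
h  = np.array([[1 + t, t], [t * t, 1.0]])     # sample group element h(t)
dh = np.array([[1.0, 1.0], [2 * t, 0.0]])     # dh/dt, computed by hand
A  = np.array([[0.2, -0.1], [0.4, 0.5]])      # dt-component of the connection A
hi = np.linalg.inv(h)

Ah    = hi @ A @ h + hi @ dh                  # A^h = h^{-1}(A + d)h
Ahinv = h @ A @ hi - dh @ hi                  # A^{h^{-1}} = h(A + d)h^{-1}
h2    = h @ h
Dh2   = (dh @ h + h @ dh) + A @ h2 - h2 @ A   # D(h^2) = d(h^2) + [A, h^2]

assert np.allclose(Ah - Ahinv, hi @ Dh2 @ hi)
assert np.allclose(Ah - A, hi @ (dh + A @ h - h @ A))   # A^h - A = h^{-1} Dh
```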
Finally, note that (\ref{2ndequnrewritten}) can also be rewritten as:
\begin{eqnarray}
2F + D(h^{-1}Dh) =0\, .
\end{eqnarray}
\subsection{Boundary terms and action principle}
In principle, $I_{gWZW}$ is defined as the integral of the transgression in a manifold with boundary, with that boundary being the space-time in which we are interested. However, in describing physical spacetime, we are interested in the case that $M^2$ is noncompact, for example with the topology of $\mathbb{R}^2$. In this case we may use (\ref{action}) as our action, provided we regard $\Sigma$ to be a manifold whose boundary is the topological sphere $M^2 \cup \{\infty\}$.
This makes sense under appropriate asymptotic conditions on $h$ as one approaches infinity, i.e.\ $h$ is asymptotically constant. The black hole solution of vanishing mass considered below satisfies this condition. We emphasise that the one-point compactification is only for the purpose of defining the Wess-Zumino term for $h$; the gauge field $A$ may have nontrivial behaviour at infinity.
The second term in (\ref{action}) may be regarded as an integral over a spacetime with a boundary at infinity. Therefore the variation of the action will produce a boundary contribution, which is precisely the last term in equation (\ref{variation_action}), specialised to $n=1$. This term is proportional to $\delta h$ and so the extremal action principle is consistent with the boundary conditions on $h$.
\section{Field equations in two dimensions}
We shall mainly be interested in the group SO(2,1), with its interpretation as the (anti)-de Sitter spacetime group in 1+1 dimensions. We shall also briefly consider the group SU(2) as a warmup.
But first let us make some general observations, which are valid for any subgroup of GL(2,$\mathbb{C}$) (this is of relevance since the component of SO(2,1) which is connected to the identity is isomorphic to SL(2,$\mathbb{R}$)).
Field equations (\ref{1stequnrewritten}) and (\ref{2ndequnrewritten}) simplify greatly if we assume that the connection is trivial. Setting $A=0$
the equations reduce to $
d(h^2) =0$, $dh \wedge dh =0$.
Let $h = \left(\begin{smallmatrix} A &
B\\C&D\end{smallmatrix}\right)$ be a complex matrix of nonvanishing
determinant. Then $d(h^2)=0$ implies:
\begin{align*}
2AdA + B dC + CdB =0\, ,
\\
B d(A+D) + (A+D) dB =0 \,
\\
C d(A+D) + (A+D) dC =0 \,
\\
2DdD + B dC + CdB =0\, .
\end{align*}
First we consider the case $A+D \neq 0$. Then, using the above
equations, we can always obtain $\det (h_0) \, dA =0$ or $\det
(h_0)\, dD =0$, where $h_0$ denotes the solution in this
flat-connection case, which then implies that all the components are
constants. Next we consider the traceless case $A+D =0$. This means
$h_0^2 = -\det(h_0)\, I$. The above equations are solved if
$\det(h_0)$ is constant, with no further restriction on $A,B,C,D$.
So, to summarise, we have two types of matrices:
i) $h_0$ is constant;
ii) $h_0$ is a traceless matrix of constant determinant which may have nontrivial degrees of freedom, subject to $dh\wedge dh =0$.
Therefore, there is a special class of matrices, which for the groups SU(2) or SL(2,$\mathbb{R}$) satisfy $h^2 = -I$, and which have an especially rich structure.
The above applies in the case of a globally flat connection. A less trivial special case is $F +F^h =0$, which also simplifies the field equations. In this case, although the calculations are more complicated, it seems that traceless matrices are again special, in that they allow a rich structure of solutions for the field $h$. It is this class of solutions on which we shall focus in what follows.
\subsection{$SU(2)$ Group}\label{su_2}
We begin by deriving the field equations for a simple case: the $SU(2)$ group. In order to derive the equations explicitly we must start with some algebraic preliminaries.
The Pauli matrices are
$
\sigma ^1=\left(\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}\right)
$,
$
\sigma ^2=\left(\begin{array}{cc}
0 & -i \\
i & 0
\end{array}\right)
$,
$
\sigma ^3=\left(\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right)
$ while the identity is $
I=\left(\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}\right)
$,
satisfying $\sigma ^i\sigma ^j=\delta ^{ij}I+i\epsilon ^{ijk}\sigma ^k$ and hence
$\{\sigma ^i,\sigma ^j\}=2\delta ^{ij}I$ and $[\sigma ^i,\sigma ^j]=2i\epsilon ^{ijk}\sigma ^k$.
It also follows that $\sigma ^i\sigma ^j\sigma ^k=i\epsilon ^{ijk}I+\delta ^{ij}\sigma ^k+\delta ^{jk}\sigma ^i-\delta ^{ik}\sigma ^j$.
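These product identities are easy to misquote, so it is worth checking them numerically; the following is an illustrative sketch (not part of the derivation), with the Pauli matrices stored 0-indexed:

```python
import numpy as np

# sigma^1, sigma^2, sigma^3 (stored 0-indexed in the list)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def eps(i, j, k):                      # Levi-Civita symbol, eps(0,1,2) = +1
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        # sigma^i sigma^j = delta^ij I + i eps^ijk sigma^k
        rhs = (i == j) * I2 + 1j * sum(eps(i, j, k) * s[k] for k in range(3))
        assert np.allclose(s[i] @ s[j], rhs)
        for k in range(3):
            # sigma^i sigma^j sigma^k
            #  = i eps^ijk I + delta^ij sigma^k + delta^jk sigma^i - delta^ik sigma^j
            rhs3 = (1j * eps(i, j, k) * I2 + (i == j) * s[k]
                    + (j == k) * s[i] - (i == k) * s[j])
            assert np.allclose(s[i] @ s[j] @ s[k], rhs3)
print("Pauli identities verified")
```

The anticommutator and commutator relations quoted above follow by symmetrizing and antisymmetrizing the first loop's identity.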
A generic element of SU(2) is of the form
$h=h^0I+h^i\sigma ^i=h^0I+{\bf h.\sigma }$
with the constraint $(h^0)^2+h^ih^i=1$, or equivalently $(h^0)^2+|{\bf h}|^2=1$. If $h$ is written in exponential form it is
$h=e^{i\theta{\bf n.\sigma}}=\cos\theta +i\sin\theta {\bf n.\sigma}$
with ${\bf n.n}=1$.
The gauge potential (or connection) matrix valued one-form is $A=-i A^i\sigma ^i=-i{\bf A.\sigma }$
while the field strength (or curvature) matrix valued two-form is $F=dA+A^2=-iF^i\sigma ^i=-i{\bf F.\sigma }$.
It follows $F^i=dA^i+\epsilon ^{ijk} A^jA^k $ or
$ {\bf F} = d{\bf A}+ {\bf A\times A}$.
If we have an algebra valued zero-form field $\alpha =-i\alpha ^i\sigma ^i=-i{\bf \alpha .\sigma}$ its covariant derivative is $D\alpha =-iD{\bf \alpha}.{\bf \sigma}=d\alpha +[A,\alpha ]$ or $D{\bf \alpha}=d{\bf \alpha}+2{\bf A\times \alpha}$.
The equation $A^h-A^{h^{-1}}=0$ yields
\begin{equation}
h^0[d{\bf h}-2 {\bf h\times A}]-{\bf h}dh^0=0
\end{equation}
or equivalently
\begin{equation}
h^0D{\bf h}={\bf h}dh^0
\end{equation}
The equation $F+F^h-\frac{1}{2}[A^h-A,A^h-A]=0$ yields
\begin{equation}
{\bf F}+{\bf h}\times ({\bf h\times F})+h^0({\bf h\times F})-\frac{1}{2}({\bf h}\times D{\bf h})\times ({\bf h}\times D{\bf h})=0
\end{equation}
\subsection{$SO(2,1)$ Group}\label{so_21}
If we want this theory to behave as a simple toy gravity model, we can use the $SO(2,1)$ group. We make use of the conventions of Section \ref{su_2}.
The generators of SO(2,1) are $J_{ab}$ with $a ,b =0,1,2$. One can relabel these generators as $J^{a}=\epsilon ^{abc}J_{bc}$ and $J_{a}=\eta _{ab}J^{b}$, with $\eta_{ab}={\rm diag}(-,+,+)$. Then the SO(2,1) algebra is written
$$[J_{a},J_{b}]=-2\epsilon _{abc}J^{c}$$
A convenient representation of this algebra is $J^0=-i\sigma_3$, $J^1=-\sigma _1$ and $J^2=-\sigma _2$. For this representation the following useful relations hold
$$J_{a}J_{b}=\eta _{ab}I-\epsilon _{abc}J^{c}$$
and hence
\begin{eqnarray}
\{J_{a},J_{b} \}=2\eta _{ab}I\nonumber\\
J_{a}J_{b}J_{c}=-\epsilon _{abc}I+\eta _{ab}J_{c}-\eta _{ac}J_{b}+\eta _{cb}J_{a}
\end{eqnarray}
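Since sign conventions are easy to get wrong here, the representation and the product formulas can be checked numerically; the following sketch assumes $\epsilon_{012}=+1$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Jup = [-1j * s3, -s1, -s2]                     # J^0, J^1, J^2
eta = np.diag([-1.0, 1.0, 1.0])
Jdn = [eta[a, a] * Jup[a] for a in range(3)]   # J_a = eta_ab J^b (eta diagonal)

def eps(a, b, c):                              # Levi-Civita, eps(0,1,2) = +1
    return (a - b) * (b - c) * (c - a) / 2

for a in range(3):
    for b in range(3):
        anti = sum(eps(a, b, c) * Jup[c] for c in range(3))
        # J_a J_b = eta_ab I - eps_abc J^c
        assert np.allclose(Jdn[a] @ Jdn[b], eta[a, b] * I2 - anti)
        # [J_a, J_b] = -2 eps_abc J^c
        assert np.allclose(Jdn[a] @ Jdn[b] - Jdn[b] @ Jdn[a], -2 * anti)
        for c in range(3):
            # triple product identity
            rhs = (-eps(a, b, c) * I2 + eta[a, b] * Jdn[c]
                   - eta[a, c] * Jdn[b] + eta[c, b] * Jdn[a])
            assert np.allclose(Jdn[a] @ Jdn[b] @ Jdn[c], rhs)
print("SO(2,1) relations verified")
```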
The gauge potential (connection) is $A=A^{a}J_{a}$, while an element of the group is of the form $h=\lambda I+\alpha ^{a}J_{a}$, with $\lambda ^2-\alpha ^2=1$, where $\alpha ^2=\alpha _{a}\alpha ^{a}$. In exponential form $h=e^{\beta ^{a}J_{a}}=\cosh \beta I+\frac{\beta ^{a}}{\beta}\sinh \beta J_{a}$ where $\beta ^2 =\beta _a\beta ^a$.
The field strength (or curvature) matrix valued two-form is $F=dA+A^2=F^{a}J_{a}={\bf F.J}$.
It follows $F^{a}=dA^{a}-\epsilon ^{abc} A_{b}A_{c} $ or
$ {\bf F} = d{\bf A}- {\bf A\times A}$, where we define the ``cross product'' for differential forms of any order with indices in the adjoint representation of SO(2,1) as $(V\times W)^{a}=\epsilon ^{abc} V_{b}W_{c}$, while the ``dot product'' is $V^{a}W_{a}$.
If we have an algebra valued zero-form field $\alpha =\alpha ^{a}J_{a}={\bf \alpha .J}$ its covariant derivative is $D\alpha =D{\bf \alpha}.{\bf J}=d\alpha +[A,\alpha ]$ or $D{\bf \alpha}=d{\bf \alpha}+2{\bf \alpha\times A}$.
For the equation $A^h-A^{h^{-1}}=0$ we need $A^h=h^{-1}Ah+h^{-1}dh$. Considering $h^{-1}=\lambda I-\alpha ^{a}J_{a}$ and
using the properties of the generators, as in the SU(2) case, we get
$$h^{-1}Ah=[(\lambda ^2+\alpha ^2)A^{a}-2\lambda(A\times \alpha)^{a}-2({\bf A.\alpha)\alpha ^{a}}]J_{a}$$
and
$$h^{-1}dh=[\lambda d\alpha ^{a}-d\lambda \alpha ^{a}+(\alpha\times d\alpha)^{a}]J_{a}$$
We obtain then from the first equation
\begin{equation}
\lambda d\alpha ^{a}-d\lambda \alpha ^{a}=2\lambda (A\times \alpha)^{a}
\end{equation}
or
\begin{equation}
\lambda D\alpha ^{a}-d\lambda \alpha ^{a}=0
\end{equation}
The equation $F+F^h-\frac{1}{2}[A^h-A,A^h-A]=0$ yields
\begin{equation}
\lambda ^2F^{a}-\lambda (F\times\alpha)^{a}-({\bf F.\alpha})\alpha ^{a}+\frac{1}{2}[(\alpha\times D\alpha)\times(\alpha\times D\alpha)]^{a}=0
\end{equation}
\section{Solutions}
Here we will show that the gWZW model for the SO(2,1) group, despite its simplicity, has non-trivial black hole solutions. We found that the latter have interesting properties, as we discuss below. We compute the Noether mass of the black hole and study its thermodynamics, finding that these black holes are massless and have zero entropy.
In solving the field equations
\begin{eqnarray}
D(h^2)=0\\
F+F^h-(h^{-1}Dh)^2=0
\end{eqnarray}
a remarkable simplification is achieved by setting $\lambda =0$ in $h=\lambda I+\alpha ^aJ_a$. In that case $h^2=-I$ and therefore the first of the previous field equations is fulfilled. Furthermore the second equation in the case $\lambda =0$ is
$$-(F\cdot\alpha)\alpha ^a+\frac{1}{2}[(\alpha\times D\alpha)\times(\alpha\times D\alpha)]^a=0$$
This is further simplified if we assume $F\cdot\alpha =0$, which is equivalent to $hF+Fh=0$, or $F=-h^{-1}Fh=-F^h$, which is a sort of anti-self-duality condition. We will use below an ansatz for which $F^0=F^1=0$, which follows from assuming that the torsion is zero; the anti-self-duality condition $F\cdot\alpha =0$ would then imply $\alpha ^2=0$ (here meaning the component of $\alpha$ along $J_2$).
Below we follow an ad hoc approach to the search for black hole solutions, but it turns out that the solutions of interest do indeed satisfy the conditions
\begin{eqnarray}
h^2=-I\\
F=-F^h
\end{eqnarray}
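As a quick numerical cross-check of the first condition, a sketch in the $2\times 2$ representation of Section \ref{so_21}: setting $\lambda=0$ in the group constraint $\lambda^2-\alpha_a\alpha^a=1$ forces $\alpha_a\alpha^a=-1$, and then $h=\alpha^aJ_a$ squares to $-I$ for any such $\alpha^a$ (here chosen at random):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Jup = [-1j * s3, -s1, -s2]                  # J^0, J^1, J^2
eta = np.diag([-1.0, 1.0, 1.0])

# lambda = 0 with lambda^2 - alpha_a alpha^a = 1 forces alpha_a alpha^a = -1
rng = np.random.default_rng(1)
a1, a2 = rng.normal(size=2)
a0 = np.sqrt(1 + a1**2 + a2**2)
alpha_up = np.array([a0, a1, a2])
assert np.isclose(alpha_up @ eta @ alpha_up, -1.0)

# h = alpha^a J_a = alpha_a J^a
h = sum((eta @ alpha_up)[a] * Jup[a] for a in range(3))
assert np.allclose(h @ h, -np.eye(2))
print("h^2 = -I verified")
```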
\subsection{Black holes}
Searching for generic black hole-like solutions leads to equations that we could not solve. Therefore we followed instead the strategy of assuming the metric to be the 1+1 black hole metric of Refs.\cite{CGHS-bh,witten-1+1-bh}, and looked for configurations of the $\alpha$'s that would yield a solution of the full field equations of the theory.
The line element of the CGHS black hole \cite{CGHS-bh,witten-1+1-bh} is
\begin{equation}\label{metric}
ds^{2}=-\tanh^{2}(\gamma r)\,dt^{2}+dr^{2}
\end{equation}
where $\gamma$ is a constant. The vielbein is
\begin{eqnarray}\label{vielbein}
e^{0}=\tanh (\gamma r) dt~~~,~~~e^{1}=dr
\end{eqnarray}
and the torsion free spin connection $\omega^{01}$ is
\begin{equation}\label{omega}
\omega^{01}=\frac{\gamma}{\cosh^{2}(\gamma r)}dt.
\end{equation}
The Riemann curvature two form is then
\begin{equation}\label{R}
R^{01}=\frac{2\gamma^{2}\tanh(\gamma r)}{\cosh^{2}(\gamma r)}dt\,dr,
\end{equation}
and the curvature scalar is
$R=\frac{4\gamma^{2}}{\cosh^{2}(\gamma r)}$.
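The spin connection and curvatures quoted above can be verified with a short symbolic computation. This is an illustrative sketch; it uses the standard 2D result that a metric $ds^2=-a(r)^2dt^2+dr^2$ has scalar curvature $R=-2a''/a$:

```python
import sympy as sp

r, g = sp.symbols('r gamma', positive=True)
a = sp.tanh(g * r)                       # ds^2 = -a(r)^2 dt^2 + dr^2

def is_zero(expr):
    # robust symbolic check: rewrite hyperbolics in exponential form
    return sp.simplify(expr.rewrite(sp.exp)) == 0

# torsion-free spin connection: omega^{01} = a'(r) dt
f = sp.diff(a, r)
assert is_zero(f - g / sp.cosh(g * r)**2)

# curvature two-form R^{01} = d omega^{01}; its component along dt dr is -f'(r)
assert is_zero(-sp.diff(f, r) - 2 * g**2 * sp.tanh(g * r) / sp.cosh(g * r)**2)

# scalar curvature R = -2 a''/a
R = -2 * sp.diff(a, r, 2) / a
assert is_zero(R - 4 * g**2 / sp.cosh(g * r)**2)
print("CGHS curvature checks passed")
```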
The idea now is to insert this ansatz into the field equations and see what equations the $\alpha ^a$ must satisfy.
As said before, the $(A,F)$ are given in terms of the $(e,\omega)$, which in the torsion-free case reads
$A^{0}=\frac{1}{2}e^{1}~, ~A^{1}=\frac{1}{2}e^{0}~,~ A^{2}=-\frac{1}{2}\omega^{01}$
and $F^{0}=\frac{1}{2}T^{1}=0~,~ F^{1}=\frac{1}{2}T^{0}=0~,~F^{2}=-\frac{1}{2}(d\omega^{01}-e^{0}e^{1})$.
We could not solve the equations that result from this ansatz for generic $\alpha ^a$, but assuming time independence, as befits static solutions, and assuming that one of the components of $\alpha ^a$ vanishes, it is possible to find solutions (for a detailed derivation of the black hole solutions see Appendix \ref{derivation_black_hole}).
There are no solutions with $\lambda \neq 0$.
If $\alpha^{2}=0$ and $\alpha^{0,1}(r,t)=\alpha^{0,1}(r)$ we get the solutions
\begin{equation}
\alpha^{0}=\pm\sqrt{(\alpha^{1})^{2}+1}~ ,~\alpha^{1}=\frac{C}{\tanh(\gamma r)}
\end{equation}
where $C$ is a constant.\\
In the case $\alpha^{1}=0$ and $\alpha^{0,2}(r,t)=\alpha^{0,2}(r)$ we get the solution:\\
\begin{equation}
\alpha^{0}=\pm\sqrt{(\alpha^{2})^{2}+1}~,~\alpha^{2}=C \cosh(\gamma r)e^{\frac{1}{8\gamma^{2}}\cosh(2\gamma r)}
\end{equation}
where $C$ is a constant. For this second kind of black hole solution we found, following methods similar to the ones applied in the next subsections to the first kind, that the mass is infinite, which would arguably rule these solutions out as sensible ones.
\subsection{Black hole mass from Noether's theorem and a more general zero mass result}\label{noether_section}
Given a generally covariant action $I[\phi ]$ for a physical theory, depending on a set of fields $\phi$, with a bulk lagrangian density $L$ and a boundary contribution $B$, in a space-time $\Omega$, of the generic form
$$I=\int _{\Omega}L+\int _{\partial\Omega}B,$$
Noether's theorem states that the invariance under diffeomorphisms of this action implies that the following current is conserved
\begin{equation}
\star j=-\Theta - I_{\xi}L + d(I_{\xi}B)
\end{equation}
where the point dependent vector field $\xi$ generates the infinitesimal
diffeomorphisms $\delta x^{\mu}=\xi^{\mu}(x)$\footnote{The contraction operator $I_{\xi }$ is defined by
acting on a p-form $\alpha _p$ as
$$
I_{\xi }\alpha _{p}=\frac{1}{(p-1)!}\xi ^{\nu }\alpha _{\nu \mu
_{1}...\mu _{p-1}}dx^{\mu _{1}}...dx^{\mu _{p-1}}
$$
and being an anti-derivative in the sense that acting on the wedge
product of differential forms $\alpha _{p}$ and $\beta _{q}$ of
order p and q respectively gives $I_{\xi }(\alpha _{p}\beta
_{q})=I_{\xi }\alpha _{p}\beta _{q}+(-1)^{p}\alpha _{p}I_{\xi }\beta
_{q}$.}
and we can read $\Theta$ from
\[
\delta L=(Field~~~equations)\delta\phi+d\Theta
\]
where the variations $\delta\phi$ are infinitesimal but arbitrary in form. The Noether conserved charge $Q$ is defined as $Q=\int _{\mathcal{V}}\star j$, where $\mathcal{V}$ is a manifold that corresponds to a fixed time slice of $\Omega$. The Noether mass is the Noether charge in the case $\xi=\frac{\partial ~}{\partial t}$.
We are interested in the Noether mass of the black hole solution of the previous subsection. We have the 1+1 dimensional gWZW action (\ref{action})
$$
I_{gWZW}=\kappa\int_{\Sigma}{\frac{1}{3}\langle(h^{-1}dh)^{3}\rangle}-\kappa\int_{M^{2}}{\langle(A-h^{-1}dh)A^{h}\rangle},
$$
and we consider the $\alpha^{a}$'s and $A^{a}$'s corresponding to the black hole solution of eqs.~(\ref{vielbein}), (\ref{omega}), (\ref{R}).
We need to compute the different pieces of $\star j$. For our action there is no $B$ term, while from eq.~(\ref{variation_action}) we can read $\Theta$ in arbitrary dimension
$$\Theta =-n(n+1)\{\int _0^1dt~t<\Delta A F_t^{n-1}h^{-1}\delta h>\}$$
which in our case ($n=1$) is
$$\Theta =-\frac{1}{2} <\Delta A~h^{-1}\delta h>$$
which vanishes because $\delta h =\mathcal{L}_{\xi}h=0$, where $\mathcal{L}_{\xi}h$ is the Lie derivative of $h$ along $\xi$, as $h$ is time independent.
We also have
$$I_{\xi}\langle(h^{-1}dh)^{3}\rangle =0$$
as there is no component of $\langle(h^{-1}dh)^{3}\rangle $ along $dt$ because $h$ is time independent.
Finally
\begin{eqnarray}\label{derivadasdelie}
I_{\xi}A&=&I_{\xi}A^{a}J_{a}=\frac{1}{2}\tanh(\gamma r)J_{1}-\frac{1}{2}\frac{\gamma}{\cosh^{2}(\gamma r)}J_{2} \nonumber\\
I_{\xi}A^{h}&=&I_{\xi}h^{-1}Ah=(-I_{\xi}A^{a}-2I_{\xi}A^{1}\alpha^{1}\alpha^{a}+2I_{\xi}A^{0}\alpha^{0}\alpha^{a})J_{a}\nonumber\\
&=&-\tanh(\gamma r)\alpha^{1}\alpha^{0}J_{0}+(-\frac{1}{2}\tanh(\gamma r)-\tanh(\gamma r)\alpha^{1}\alpha^{1})J_{1}+\frac{1}{2}\frac{\gamma}{\cosh^{2}(\gamma r)}J_{2}\nonumber
\end{eqnarray}
We can now compute $I_{\xi}L$:
\begin{equation}
I_{\xi}L=I_{\xi}\langle(A-h^{-1}dh)A^{h}\rangle=\langle(I_{\xi}A)A^{h}\rangle-\langle(A-h^{-1}dh)I_{\xi}A^{h}\rangle,
\end{equation}
where we used that $h$ depends only on $r$ for this solution. On the one hand
\begin{eqnarray}
\langle(I_{\xi}A)A^{h}\rangle=2\left(\tanh(\gamma r)A^{0}\alpha^{0}\alpha^{1}-\frac{1}{2}(\alpha\times d\alpha)^{2}\frac{\gamma}{\cosh^{2}(\gamma r)}\right)=\nonumber \\
=\tanh(\gamma r)\alpha^{0}\alpha^{1}dr-(\alpha\times d\alpha)^{2}\frac{\gamma}{\cosh^{2}(\gamma r)}.
\end{eqnarray}
while on the other hand
\begin{equation}
\langle(A-h^{-1}dh)I_{\xi}A^{h}\rangle=\tanh(\gamma r)\alpha^{0}\alpha^{1}dr-(\alpha\times d\alpha)^{2}\frac{\gamma}{\cosh^{2}(\gamma r)}.
\end{equation}
It follows that $I_{\xi}\langle(A-h^{-1}dh)A^{h}\rangle$ vanishes; therefore, putting
everything together, the Noether mass of the black hole is zero.
It is in fact possible to prove that the Noether mass of any time-independent configuration satisfying the ``anti-self-duality'' condition $F=-F^h$ is zero. The $\Theta $ contribution vanishes as before because $h$ is time independent. Furthermore, in computing $I_{\xi}L$ we can use the original expression for the transgression
$$I_{\xi}L=I_{\xi}2\int_0^1dt~<\Delta AF_t>=2~I_{\xi}<h^{-1}Dh[\frac{F+F^h}{2}-\frac{1}{6}(h^{-1}Dh)^2]>$$
If $F=-F^h$ the field equation reduces to $(h^{-1}Dh)^2=0$ and the previous expression vanishes,
implying that the Noether current is zero and so is the mass of that configuration.
\subsection{Black Hole thermodynamics}
\subsubsection{Temperature}
The CGHS line element is (see eq. (\ref{metric}))
$$ds^2=-\tanh ^2 (\gamma r)dt^2+dr^2$$
which for euclidean time $t_E=-it$ becomes the euclidean line element
$$ds_E^2=\tanh ^2 (\gamma r)dt_E^2+dr^2$$
The horizon of this black hole is at $r=0$. The standard procedure to determine the Hawking temperature of a black hole is to study the near horizon geometry and choose the period $\beta$ of the euclidean time\footnote{The euclidean time period is related to the temperature $T$ by $\beta =\frac{1}{k_BT}$, where $k_B$ is the Boltzmann constant. In what follows we use natural units with $k_B=1$.} so that there is no conical singularity at the horizon. The reason is that the curvature diverges at a conical singularity, which implies that such a geometry is not an extremum of the euclidean action and will not be the one that contributes in a saddle point evaluation of the partition function.
Applying that procedure to the CGHS black hole geometry we have that the near horizon line element is
$$ds_E^2\cong (\gamma r)^2dt_E^2+dr^2=r^2 d\theta ^2+dr^2$$
where $\theta =\gamma t_E$. Avoiding a conical singularity would imply
$$\beta =\frac{2\pi }{\gamma}$$
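The near-horizon expansion behind this value can be checked symbolically; a minimal sketch:

```python
import sympy as sp

r, g = sp.symbols('r gamma', positive=True)

# near r = 0: tanh(gamma r) = gamma r + O(r^3), so the euclidean metric
# becomes (gamma r)^2 dt_E^2 + dr^2 = r^2 dtheta^2 + dr^2 with theta = gamma t_E
assert sp.series(sp.tanh(g * r), r, 0, 3).removeO() == g * r

# flat polar coordinates are smooth at r = 0 iff theta has period 2*pi,
# i.e. t_E has period beta = 2*pi/gamma
beta = 2 * sp.pi / g
assert sp.simplify(g * beta) == 2 * sp.pi
print("beta =", beta)
```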
The previous argument would apply for an action that explicitly includes the curvature $R^{ab}$, e.g. the Einstein-Hilbert action. However, in the action (\ref{action}) only the spin connection appears, and not its derivatives; therefore we argue that a conical singularity at the horizon does not imply that the action is singular for that configuration, and hence that the period $\beta$ of euclidean time is not restricted to have a particular value. This situation is analogous to the case of extremal black holes.
\subsubsection{Euclidean Action}
In the semiclassical approximation of euclidean gravity, the exponential of minus the euclidean action evaluated on configurations that extremize the action is proportional to the partition function
$$Z\approx N e^{-I_E}$$ and therefore the free energy $F$ is related to the euclidean action (\ref{action}) by $$\beta F= I_E+{\rm constant}$$
We need therefore to evaluate the action
$$
I_{gWZW}=\kappa\int_{\Sigma}{\frac{1}{3}\langle(h^{-1}dh)^{3}\rangle}-\kappa\int_{M^{2}}{\langle(A-h^{-1}dh)A^{h}\rangle}
$$
on the black hole configuration.
The evaluation of the first integral of the second member would in principle require the extension of the two dimensional manifold $ M^2$ into the three dimensional manifold $\Sigma$. For instance, we could picture the 1+1 black hole geometry as a ``cigar-like'' surface closing at $r=0$, with $r$ the coordinate along the cigar and $t_E$ the angular coordinate around it, and extend it to the solid interior of the ``cigar'' to carry out the 3D integration. However, that is not necessary: the 1-form $h^{-1}dh$ has no component along $dt$, because $h$ is time independent, and therefore the first term of the second member is zero
$$\int_{\Sigma}{\frac{1}{3}\langle(h^{-1}dh)^{3}\rangle}=0$$
In order to evaluate the second term, we have
\begin{eqnarray*}
{\langle(A-h^{-1}dh)A^{h}\rangle}&=&2\eta_{ab}(A^{a}-(\alpha\times d\alpha)^{a})(-A^{b}-2(\alpha\cdot A)+(\alpha\times d\alpha)^{b})\\
&=&2\eta_{ab}(A^{a}-(\alpha\times d\alpha)^{a})(-2(\alpha\cdot A)\alpha^{b}-(A^{b}-(\alpha\times d\alpha)^{b}))\\
&=&-4\eta_{ab}(A^{a}-(\alpha\times d\alpha)^{a})(\alpha\cdot A)\alpha^{b}\\
&=&-4(\alpha\cdot A)(\alpha\cdot A)+4((\alpha\times d\alpha)\cdot\alpha)(\alpha\cdot A)
\end{eqnarray*}
The first term vanishes because $\alpha\cdot A$ is a real-valued 1-form, and the wedge product of a 1-form with itself is zero, while the second vanishes because $(\alpha\times d\alpha)\cdot\alpha=\epsilon^{abc}\alpha_{a}\alpha_{b}\,d\alpha_{c}=0$ identically, by the antisymmetry of $\epsilon^{abc}$.
So the euclidean action is zero for the black hole solution.
Using the thermodynamic formulas $I_E=\beta F=\beta E-S$, where $S$ is the entropy and $E$ is the energy (mass), or equivalently
$M=E=\frac{\partial I_E}{\partial \beta}$ and $S=\beta E-I_E$, we get $M=0$ and $S=0$.
It is reassuring that the thermodynamical mass agrees with the Noether mass computed in the previous section, both vanishing.
The vanishing of the entropy, as well as the arbitrariness of the temperature, indicates that the black hole solutions considered here are extremal.
It can be shown that the euclidean action is zero for all configurations satisfying the ``anti-self-duality" condition $F=-F^h$. We have for the lagrangian density
$$L=2\int_0^1dt~<\Delta AF_t>=2<h^{-1}Dh[\frac{F+F^h}{2}-\frac{1}{6}(h^{-1}Dh)^2]>$$
If $F=-F^h$ the field equation is $(h^{-1}Dh)^2=0$ and the lagrangian density is zero, and so is the action.
\subsection{Discussion}
That the mass of the 1+1 black holes studied here should be zero struck us as odd at first, but further thought convinced us that this result could even be considered natural, in light of the heuristic arguments given below.
Massless gravitationally bound solutions have a rather long history in field theory. More than fifty years ago Pascual Jordan suggested that a star could be created from nothing as long as its negative gravitational energy was exactly equal to its positive rest mass\footnote{Einstein himself was so stunned when George Gamow told him about that idea that it almost cost both of them a serious accident. As Gamow tells the story \cite{gamow}: ``I remember that once, walking with him to the Institute, I mentioned Pascual Jordan's idea of how a star can be created from nothing, since at the point zero its negative gravitational mass defect is numerically equal to its positive rest mass. Einstein stopped in his tracks, and, since we were crossing a street, several cars had to stop to avoid running us down.''}. This idea is in essence what is behind the creation {\it ex nihilo} of the universe scenarios discussed in modern cosmology. There are several examples of zero mass black hole solutions in diverse dimensions and for different theories in the recent literature, for instance refs.\cite{lu,lu-pope}. It has also been claimed by Boulware et al.\cite{boulware} that for a specific gravitational theory, namely Conformal Gravity in a four dimensional space-time, every spatially bounded state must be massless.
As for the arguments that make the zero mass result plausible: firstly, we can translate to the present context a heuristic argument made by Boulware et al.\cite{boulware} (besides a formal argument) concerning the masslessness of bound states in conformal gravity. The idea is that the action of conformal gravity in 4D has fourth order derivatives, which implies a propagator that goes as
$1/k^4$ in the four-momentum $k_{\mu}$; therefore the interaction or potential energy goes as $\int d^3k\, 1/k^4\sim 1/k\sim r$. Thus the potential of a localized source is linear, and the work required to separate charges an infinite distance would be infinite, as happens with the interaction of quarks in QCD, which leads in that case to instability against pair creation and to colour confinement. Boulware et al. argue that conformal gravity must then be `gravitationally confining', and just as colour charge must be zero for bound states in QCD, mass must be zero for bound states in conformal gravity.
For a gravitational theory with second order derivatives in two space-time dimensions the propagator would go as $1/k^2$ and the potential as $\int dk\,
1/k^2\sim 1/k\sim r$, which would again imply that a generic gravitational theory with second order derivatives is gravitationally confining at the full quantum level, and that the mass of spatially localized states must be zero.
A similar argument was applied to the Schwinger model of electrodynamics with fermions in 1+1 dimensions \cite{schwinger} to argue that electrodynamics should be confining in that model, which was indeed confirmed by exact results.
These considerations seem to be in disagreement with the fact that CGHS black holes \cite{CGHS-bh,witten-1+1-bh} have non-zero masses.
However, we observe that in the original article on CGHS black holes \cite{CGHS-bh} those black holes are proved to be unstable when quantum effects are taken into account, and are in fact dubbed ``evanescent black holes''; it is said there that the final state after their evaporation in the full quantum theory must be a certain zero mass state. Similar considerations are made in ref.\cite{witten-1+1-bh}, where it is stated that the final state should be two copies of flat space.
The classical action considered in both refs.\cite{CGHS-bh,witten-1+1-bh} is different from the one considered here; however, we may argue that, because of the standard arguments for WZW actions \cite{witten-gWZW}, the classical action considered here should receive no quantum corrections, being already the quantum effective action. It could even be the quantum effective action for a whole class of classical gravitational actions in 1+1 dimensions. In that case a classical solution of the model studied here has all quantum corrections included from the start, and therefore a static classical black hole solution must be, if the previous considerations hold, massless.
It is clear that the previous heuristic considerations must be taken with a grain of salt: for instance, the same argument applied to gravity in 3D would imply a logarithmic potential, which could also be confining, leading to only massless states, while the BTZ black holes of 3D gravity are certainly not massless in general, and one could also argue that the Chern-Simons gravity action in 3D does not receive quantum corrections. On the other hand, one could argue that pure gravity in 3D has no propagating local degrees of freedom, so it makes no sense to speak of a propagator, rendering the Boulware et al. argument invalid in this case.
\section{Conclusions}
In this paper we derive the general field equations for topological gWZW models (gravitational or not) in any dimension and discuss the boundary contributions that could arise. In discussing gravitational gWZW models we consider the gauge group element $h$ to be generic, unlike refs.\cite{anabalon-1,anabalon-2} where it is chosen to have a specific form.
Another difference with refs.\cite{anabalon-1,anabalon-2} is that the gauge group considered in the 2D toy model we study is a de Sitter group in the dimension of the physical spacetime (2D), instead of a de Sitter group in the dimension that is reduced (which would be 3D for the toy model considered above), unlike what is done in the work of Anabalon et al. That means that we have only one candidate vielbein, instead of two from which to arbitrarily choose, as they did. This choice is related to the fact that we had to choose as the required symmetrized trace the standard one, leading to Pontryagin-like invariants in 4D, instead of the Levi-Civita tensor, leading to Euler invariants in 4D, as Anabalon et al. did. As Pontryagin invariants exist in $d= 4 \bmod 4$ dimensions, higher dimensional gWZW theories analogous to the 2D model considered here would exist in $d= 2 \bmod 4$. That means that to obtain phenomenologically interesting 4D theories one must, in the simplest case, compactify from a 6D theory of this kind. In a way that is a drawback, as one would like to have a theory that is four dimensional by construction, but obtaining a 4D theory from a 6D theory by a sort of flux compactification could introduce a dynamically generated scale in the resulting theory, which would be desirable, as gWZW (like CS) theories have no dimensional constants in the action. At first glance a theory based on Pontryagin invariants seems to have the wrong parity, yet the presence of two intertwined sets of fields makes it not so clear.
We regard the work presented here as a modest step towards the goal of investigating gravitational gWZW in higher dimensions, based either in the Pontryagin or Euler invariants, as a possible source for phenomenologically interesting models in 4D that go beyond General Relativity, and may in particular shed light on cosmological problems such as the origin of Dark Energy, or provide a gauge theoretic foundation for more ad hoc models, as for instance TeVeS.
In the framework of the 2D toy model we found black hole solutions with a metric of the CGHS type, which turned out to be massless (both from Noether's and from the thermodynamical analysis) and to have zero entropy. It would be interesting to know how unique these solutions are, and to study their stability against small perturbations of the fields.\\
\begin{acknowledgements} We are grateful to Jorge Zanelli for conversations on the subject of this paper. P. Pais and P. Mora are thankful for hospitality at Centro de Estudios Cient\'{\i}ficos (CECs) at different times while working in this paper. Pablo Pais has been partially supported by a ANII Master's Degree Scholarship during this work. The Centro de Estudios Cient\'{\i}ficos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of Conicyt.
\end{acknowledgements}
\begin{appendix}
\section{Derivation of gWZW action field equations}\label{derivation_gwzw}
We compute the variation of the action and obtain the field equations in any dimension, starting from the formula for the variation of the transgression \cite{motz3}
$$
\delta\mathcal{T}_{2n+1}=(n+1)\left[ <F^n\delta A>-<\overline{F}^n\delta \overline{A}>\right]
-n(n+1)d\{\int _0^1dt<\Delta AF_t^{n-1}\delta A_t>\}.
$$
If $\overline{A}=A^h=h^{-1}Ah+h^{-1}dh$ then $\overline{F}=h^{-1}Fh$ and $\delta \overline{A}=h^{-1}[\delta A +D(\delta h h^{-1})]h$, with the covariant derivative $D=d+[A,~\cdot~]$. Then, using the Bianchi identity $DF=0$, the identity $d<something>=<D(something)>$, the cyclicity of the symmetrized trace and some algebra, one obtains
\begin{equation}
\delta\mathcal{T}_{2n+1}=-(n+1)d\{ <F^n\delta h h^{-1}>
+n \int _0^1dt<\Delta AF_t^{n-1}\delta A_t>\}.
\end{equation}
Using $\Delta A =A-A^h$, $h\Delta A h^{-1}=A^{h^{-1}}-A\equiv -\tilde{\Delta A}$, $hF_th^{-1}=\tilde{F}_u$, where
the parameter $u$ is $u=1-t$ and $\tilde{F}_u=uF+(1-u)F^{h^{-1}}+u(u-1)\tilde{\Delta A}^2=d\tilde{A}_u+\tilde{A}_u^2$, with $\tilde{A}_u=uA+(1-u)A^{h^{-1}}$ and again with some algebra we obtain
\begin{eqnarray}
\delta\mathcal{T}_{2n+1}=-(n+1)d\{
n \int _0^1dt~t<[(A-A^h)F_t^{n-1}+(A^{h^{-1}}-A)\tilde{F}_t^{n-1}]\delta A>+\\ \nonumber
+\int _0^1dt<\tilde{F}_t^{n}\delta hh^{-1}>+n\int _0^1dt~t(t-1)<\tilde{\Delta A}\tilde{F}_t^{n-1}[\tilde{\Delta A},\delta hh^{-1}]>+\\ \nonumber
-d\{n\int _0^1dt~(1-t)<\tilde{\Delta A}\tilde{F}_t^{n-1}\delta hh^{-1}>\}\},
\end{eqnarray}
which can be rewritten as
\begin{eqnarray}
\delta\mathcal{T}_{2n+1}=-(n+1)d\{
n \int _0^1dt~t<[(A-A^h)F_t^{n-1}+(A^{h^{-1}}-A)\tilde{F}_t^{n-1}]\delta A>+\nonumber\\
+\int _0^1dt<F_t^{n}h^{-1}\delta h>+n\int _0^1dt~t(t-1)<\Delta A F_t^{n-1}[\Delta A,h^{-1}\delta h]>+\nonumber\\
-d\{n\int _0^1dt~(1-t)<\Delta A F_t^{n-1}h^{-1}\delta h>\}\}\nonumber.
\end{eqnarray}
The variation of the gWZW action is therefore
\begin{eqnarray}
\delta I_{gWZW}=-(n+1)\int _{\Sigma ^{2n}}\{
n \int _0^1dt~t<[(A-A^h)F_t^{n-1}+(A^{h^{-1}}-A)\tilde{F}_t^{n-1}]\delta A>+\nonumber\\
+\int _0^1dt<F_t^{n}h^{-1}\delta h>+n\int _0^1dt~t(t-1)<\Delta A F_t^{n-1}[\Delta A,h^{-1}\delta h]>\}+\\ \nonumber
+n(n+1)\int _{\partial\Sigma ^{2n}}\{\int _0^1dt~(1-t)<\Delta A F_t^{n-1}h^{-1}\delta h>\}\nonumber,
\end{eqnarray}
where the space-time manifold is $\Sigma ^{2n}\equiv\partial\mathcal{M}$ and $\partial\Sigma ^{2n}$ is its $2n-1$-dimensional boundary.
The field equations are derived from the action principle $\delta I_{gWZW}=0$, with the ones corresponding to $\delta A$ being
\begin{eqnarray}
\int _0^1dt~t<[(A-A^h)F_t^{n-1}+(A^{h^{-1}}-A)\tilde{F}_t^{n-1}]G^A>=0,
\end{eqnarray}
and the ones corresponding to $\delta h$ being
\begin{eqnarray}
\int _0^1dt<F_t^{n}G^A>+n\int _0^1dt~t(t-1)<\Delta A F_t^{n-1}[\Delta A,G^A]>=0
\end{eqnarray}
\section{Derivation of black hole solutions}\label{derivation_black_hole}
The line element of the CGHS black hole \cite{CGHS-bh,witten-1+1-bh} is (\ref{metric})
\begin{equation}
ds^{2}=-\tanh^{2}(\gamma r)\,dt^{2}+dr^{2},\nonumber
\end{equation}
where $\gamma$ is a constant.
The idea now is to insert this ansatz into the field equations and see what equations the $\alpha ^a$ must satisfy. As said before, the $(A,F)$ are given in terms of the $(e,\omega)$, which in the torsion-free case reads
$A^{0}=\frac{1}{2}e^{1}~, ~A^{1}=\frac{1}{2}e^{0}~,~ A^{2}=-\frac{1}{2}\omega^{01}$
and $F^{0}=\frac{1}{2}T^{1}=0~,~ F^{1}=\frac{1}{2}T^{0}=0~,~F^{2}=-\frac{1}{2}(d\omega^{01}-e^{0}e^{1})$, with the vielbein and spin connection of Section 4.3.1.
\subsection{Case $\lambda \neq 0$}
We can write the field equations for each value of $a=0,1,2$ and separate components along $dr$ and $dt$.
For the first field equation and for each component we have:
\begin{eqnarray}
(\alpha_{a}\alpha^{a}+1)\left(\dot{\alpha^{0}}+\frac{\gamma\alpha^{1}}{\cosh^{2}(\gamma r)}+\tanh (\gamma r)\alpha^{2}\right) - \alpha^{0}(\alpha^{1}\dot{\alpha^{1}}+\alpha^{2}\dot{\alpha^{2}}-\alpha^{0}\dot{\alpha^{0}})=0\nonumber\\
(\alpha_{a}\alpha^{a}+1)\alpha^{0'}-\alpha^{0}(\alpha^{1}\alpha^{1'}+\alpha^{2}\alpha^{2'}-\alpha^{0}\alpha^{0'})=0\nonumber\\
(\alpha_{a}\alpha^{a}+1)\left(\dot{\alpha^{1}}+\frac{\gamma\alpha^{0}}{\cosh^{2}(\gamma r)}\right) - \alpha^{1}(\alpha^{1}\dot{\alpha^{1}}+\alpha^{2}\dot{\alpha^{2}}-\alpha^{0}\dot{\alpha^{0}})=0\nonumber\\
(\alpha_{a}\alpha^{a}+1)(\alpha^{1'}+\alpha^{2})-\alpha^{1}(\alpha^{1}\alpha^{1'}+\alpha^{2}\alpha^{2'}-\alpha^{0}\alpha^{0'})=0\nonumber\\
(\alpha_{a}\alpha^{a}+1)\left(\dot{\alpha^{2}}+\tanh(\gamma r)\alpha^{0}\right) - \alpha^{2}(\alpha^{1}\dot{\alpha^{1}}+\alpha^{2}\dot{\alpha^{2}}-\alpha^{0}\dot{\alpha^{0}})=0\nonumber\\
(\alpha_{a}\alpha^{a}+1)(\alpha^{2'}-\alpha^{1})-\alpha^{2}(\alpha^{1}\alpha^{1'}+\alpha^{2}\alpha^{2'}-\alpha^{0}\alpha^{0'})=0\nonumber,
\end{eqnarray}
where $\dot{\alpha^{a}}=\frac{\partial\alpha^{a}}{\partial t}$ and $\alpha^{a'}=\frac{\partial\alpha^{a}}{\partial r}$.
Using that if $\lambda\neq 0$ the first field equation implies $D\alpha^{a}=\frac{d\lambda}{\lambda}\alpha^{a}$, we get
\begin{equation}
(\alpha\times D\alpha)^{a}=\epsilon_{bcd}\eta^{da}\alpha^{b}D\alpha^{c}=\epsilon_{bcd}\eta^{da}\alpha^{b}\frac{d\lambda}{\lambda}\alpha^{c}=0.
\end{equation}
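This vanishing is just the antisymmetry of $\epsilon_{bcd}$ once $D\alpha^a$ is proportional to $\alpha^a$. A minimal numeric sketch (assuming the metric $\eta=\mathrm{diag}(-1,1,1)$ implied by the constraint $\lambda^{2}=-(\alpha^{0})^{2}+(\alpha^{1})^{2}+(\alpha^{2})^{2}+1$ used later in this appendix):

```python
import itertools

def levi_civita(b, c, d):
    """Totally antisymmetric symbol eps_{bcd} on indices 0, 1, 2."""
    if len({b, c, d}) < 3:
        return 0
    sign, p = 1, [b, c, d]
    for i, j in itertools.combinations(range(3), 2):
        if p[i] > p[j]:
            sign = -sign
    return sign

ETA = (-1.0, 1.0, 1.0)  # eta^{ab} = diag(-1, 1, 1), inferred from the lambda^2 constraint

def cross(u, v):
    """(u x v)^a = eps_{bcd} eta^{da} u^b v^c (eta diagonal)."""
    return [ETA[a] * sum(levi_civita(b, c, a) * u[b] * v[c]
                         for b in range(3) for c in range(3))
            for a in range(3)]

# D alpha^a = (dlambda/lambda) alpha^a, so alpha x D alpha ~ alpha x alpha = 0
alpha = [0.3, -1.2, 0.7]
assert all(abs(x) < 1e-12 for x in cross(alpha, alpha))
```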
The second field equation reduces then to
\begin{equation}
\lambda^{2}F^{a} - \lambda(F\times\alpha)^{a} - (F.\alpha)\alpha^{a} = 0.
\end{equation}
Using our ansatz for $A$ we get, for each component:
\begin{eqnarray}
\alpha^{2}\alpha^{0}+\sqrt{\alpha_{a}\alpha^{a}+1}\alpha^{1}=0\\
\alpha^{2}\alpha^{1}+\sqrt{\alpha_{a}\alpha^{a}+1}\alpha^{0}=0\\
\lambda^{2}-\alpha^{2}\alpha^{2}=0.
\end{eqnarray}
Using Maple12, the following complex solutions result:\\
A. $\alpha^{0}=\alpha^{0}~,~\alpha^{1}=-i~,~\alpha^{2}=0$\\
B. $\alpha^{0}=\alpha^{0}~,~\alpha^{1}=i~,~\alpha^{2}=0$,\\
which are not allowed because the $\alpha^{a}$ must be real, and which furthermore give $\lambda^{2}=(\alpha^{1})^{2}+1=0$, against our assumption.
Other solutions are:\\
C. $\alpha^{0}=1~,~\alpha^{1}=0~,~\alpha^{2}=0$\\
D. $\alpha^{0}=-1~,~\alpha^{1}=0~,~\alpha^{2}=0$,\\
which again have $\lambda^{2}=-(\alpha^{0})^{2}+1=0$.
Finally, a solution is:\\
E. $(\alpha^{1})^{2}=(\alpha^{0})^{2}-1~,~\dot{\alpha^{1}}=-\frac{\alpha^{1}\alpha^{0}\dot{\alpha^{0}}}{1-(\alpha^{0})^{2}}~,~\alpha^{1'}=\frac{\alpha^{1}\alpha^{0}\alpha^{0'}}{-1+(\alpha^{0})^{2}}~,~
\alpha^{2}=0$,\\
which also has $\lambda^{2}=-(\alpha^{0})^{2}+(\alpha^{1})^{2}+1=0$.\\
In conclusion, all the solutions obtained using Maple12 have $\lambda=0$,
against the initial assumption of $\lambda\neq 0$, and must be discarded.
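As a quick numeric sanity check, the real candidate solutions C and D do satisfy the first two algebraic equations, but only because $\sqrt{\alpha_{a}\alpha^{a}+1}$, and hence $\lambda^{2}$, vanishes; a minimal sketch (metric $\mathrm{diag}(-1,1,1)$ assumed):

```python
import math

ETA = (-1.0, 1.0, 1.0)  # metric inferred from lambda^2 = -(a0)^2 + (a1)^2 + (a2)^2 + 1

def residuals(a0, a1, a2):
    """Residuals of the first two equations and the lambda^2 implied by the third."""
    norm = ETA[0]*a0*a0 + ETA[1]*a1*a1 + ETA[2]*a2*a2
    s = math.sqrt(norm + 1.0)   # sqrt(alpha_a alpha^a + 1)
    eq1 = a2*a0 + s*a1
    eq2 = a2*a1 + s*a0
    lam2 = a2*a2                # third equation: lambda^2 = (alpha^2)^2
    return eq1, eq2, lam2

# Solutions C and D: alpha^0 = +/-1, alpha^1 = alpha^2 = 0
for a0 in (1.0, -1.0):
    eq1, eq2, lam2 = residuals(a0, 0.0, 0.0)
    assert eq1 == 0.0 and eq2 == 0.0   # the equations are satisfied...
    assert lam2 == 0.0                 # ...but lambda^2 = 0, contradicting lambda != 0
```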
\subsection{Case $\lambda=0$}
In this case, Field Equation I,
$\lambda d\alpha^{a} - d\lambda\alpha^{a} - 2\lambda(A\times\alpha)^{a} =0$,
is trivially satisfied for $\lambda=0$.
Field Equation II is
\begin{equation}
- 2(F.\alpha)\alpha^{a} - ((\alpha\times D\alpha)\times(\alpha\times D\alpha))^{a} = 0.
\end{equation}
Using $(\alpha\times D\alpha)^{a}=\epsilon_{bcd}\eta^{da}\alpha^{b}D\alpha^{c}$ and
$((\alpha\times D\alpha)\times(\alpha\times D\alpha))^{a}=\epsilon_{bcd}\eta^{da}(\alpha\times D\alpha)^{b}(\alpha\times D\alpha)^{c}$
we obtain
\begin{eqnarray}
(\alpha^{0}D\alpha^{1}-\alpha^{1}D\alpha^{0})(\alpha^{2}D\alpha^{0}-\alpha^{0}D\alpha^{2})-F^{2}\alpha^{2}\alpha^{0}=0\\
(\alpha^{0}D\alpha^{1}-\alpha^{1}D\alpha^{0})(\alpha^{2}D\alpha^{1}-\alpha^{1}D\alpha^{2})-F^{2}\alpha^{2}\alpha^{1}=0\\
(\alpha^{2}D\alpha^{1}-\alpha^{1}D\alpha^{2})(\alpha^{2}D\alpha^{0}-\alpha^{0}D\alpha^{2})-F^{2}\alpha^{2}\alpha^{2}=0.
\end{eqnarray}
From the field equations and the ansatz for $A$ we obtain the system of equations
\begin{eqnarray}
\frac{\alpha^{2}\alpha^{0}\tanh(\gamma r)}{2}\left(-\frac{2\gamma^{2}}{\cosh(\gamma r)^{2}}+1\right)=\nonumber\\
=\left(\alpha^{0}\dot{\alpha^{1}}+\frac{\gamma\alpha^{0}\alpha^{0}}{\cosh(\gamma r)^{2}}-\alpha^{1}\dot{\alpha^{0}}-\alpha^{1}\alpha^{2}\tanh(\gamma r)-\frac{\gamma\alpha^{1}\alpha^{1}}{\cosh(\gamma r)^{2}}\right)\times\nonumber\\ \times (\alpha^{2}\alpha^{0'}-\alpha^{0}\alpha^{2'}+\alpha^{0}\alpha^{1})-\nonumber\\
-(\alpha^{0}\alpha^{1'}+\alpha^{0}\alpha^{2}-\alpha^{1}\alpha^{0'})\times\nonumber\\ \times\left(\alpha^{2}\dot{\alpha^{0}}+\alpha^{2}\alpha^{2}\tanh(\gamma r)+\frac{\gamma\alpha^{2}\alpha^{1}}{\cosh(\gamma r)^{2}}-\alpha^{0}\dot{\alpha^{2}}-\alpha^{0}\alpha^{0}\tanh(\gamma r)\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
\frac{\alpha^{2}\alpha^{1}\tanh(\gamma r)}{2}\left(-\frac{2\gamma^{2}}{\cosh(\gamma r)^{2}}+1\right)=\nonumber\\
=\left(\alpha^{0}\dot{\alpha^{1}}+\frac{\gamma\alpha^{0}\alpha^{0}}{\cosh(\gamma r)^{2}}-\alpha^{1}\dot{\alpha^{0}}-\alpha^{1}\alpha^{2}\tanh(\gamma r)-\frac{\gamma\alpha^{1}\alpha^{1}}{\cosh(\gamma r)^{2}}\right)\times\nonumber\\
\times(\alpha^{2}\alpha^{1'}+\alpha^{2}\alpha^{2}-\alpha^{1}\alpha^{2'}+\alpha^{1}\alpha^{1})-\nonumber\\-(\alpha^{0}\alpha^{1'}+\alpha^{0}\alpha^{2}-\alpha^{1}\alpha^{0'})\left(\alpha^{2}\dot{\alpha^{1}}+\frac{\gamma\alpha^{2}\alpha^{0}}{\cosh(\gamma r)^{2}}-\alpha^{1}\dot{\alpha^{2}}-\alpha^{1}\alpha^{0}\tanh(\gamma r)\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
\frac{\alpha^{2}\alpha^{2}\tanh(\gamma r)}{2}\left(-\frac{2\gamma^{2}}{\cosh(\gamma r)^{2}}+1\right)=\nonumber\\
=\left(\alpha^{2}\dot{\alpha^{1}}+\frac{\gamma\alpha^{2}\alpha^{0}}{\cosh(\gamma r)^{2}}-\alpha^{1}\dot{\alpha^{2}}-\alpha^{1}\alpha^{0}\tanh(\gamma r)\right)\times\nonumber\\ \times(\alpha^{2}\alpha^{0'}-\alpha^{0}\alpha^{2'}+\alpha^{0}\alpha^{1})-\nonumber\\
-(\alpha^{2}\alpha^{1'}+\alpha^{2}\alpha^{2}-\alpha^{1}\alpha^{2'}+\alpha^{1}\alpha^{1})\times\nonumber\\ \times\left(\alpha^{2}\dot{\alpha^{0}}+\alpha^{2}\alpha^{2}\tanh(\gamma r)+\frac{\gamma\alpha^{2}\alpha^{1}}{\cosh(\gamma r)^{2}}-\alpha^{0}\dot{\alpha^{2}}-\alpha^{0}\alpha^{0}\tanh(\gamma r)\right)\nonumber\\
(\alpha^{1})^{2}+(\alpha^{2})^{2}-(\alpha^{0})^{2}+1=0 \nonumber,
\end{eqnarray}
We could not solve these equations for generic $\alpha ^a$, but assuming time independence, as befits a static solution, and assuming that one of the components of $\alpha ^a$ vanishes, it is possible to find the following solutions.
If $\alpha^{2}=0$ and $\alpha^{0,1}(r,t)=\alpha^{0,1}(r)$ we get
$$\alpha^{0}=\pm\sqrt{(\alpha^{1})^{2}+1}~ ,~\alpha^{1}=\frac{C}{\tanh(\gamma r)}$$
where $C$ is a constant.\\
If $\alpha^{1}=0$ and $\alpha^{0,2}(r,t)=\alpha^{0,2}(r)$ we get
$$\alpha^{0}=\pm\sqrt{(\alpha^{2})^{2}+1}~,~\alpha^{2}=C \cosh(\gamma r)e^{\frac{1}{8\gamma^{2}}\cosh(2\gamma r)},$$
where $C$ is a constant.
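Both branches satisfy the $\lambda=0$ constraint $(\alpha^{1})^{2}+(\alpha^{2})^{2}-(\alpha^{0})^{2}+1=0$ by construction of $\alpha^{0}$. A quick numeric sketch (arbitrary test values for $C$ and $\gamma$, reading the nonzero profile of the second branch as $\alpha^{2}$ since that branch sets $\alpha^{1}=0$):

```python
import math

def constraint(a0, a1, a2):
    """lambda^2 = (alpha^1)^2 + (alpha^2)^2 - (alpha^0)^2 + 1, which must vanish."""
    return a1*a1 + a2*a2 - a0*a0 + 1.0

C, g = 0.7, 0.3   # arbitrary test values for the constant C and gamma
for r in (0.5, 1.0, 2.0):
    # first branch: alpha^2 = 0, alpha^1 = C / tanh(g r)
    a1 = C / math.tanh(g * r)
    assert abs(constraint(math.sqrt(a1*a1 + 1.0), a1, 0.0)) < 1e-9
    # second branch: alpha^1 = 0, alpha^2 = C cosh(g r) exp(cosh(2 g r) / (8 g^2))
    a2 = C * math.cosh(g * r) * math.exp(math.cosh(2*g*r) / (8*g*g))
    assert abs(constraint(math.sqrt(a2*a2 + 1.0), 0.0, a2)) < 1e-9
```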
\end{appendix}
\section{Introduction}
The electronic properties of ultrathin films are significantly different from those of their bulk states due to their reduced dimensionality
and the influence of substrates. In the case of oxides on metallic substrates, it has been predicted that characteristic
features such as the on-site Coulomb interaction energy $U$ and the charge transfer energy $\Delta$ from a ligand
to a neighboring cation would be substantially altered for
atomically thin films.\cite{Duffy,Altieri}
Altieri {\it et al.}\cite{Altieri} observed that for an ultrathin MgO film on Ag(001),
both $U$ and $\Delta$ decreased monotonically as the film became thinner.
They attributed the decrements from the bulk values of $U$ and $\Delta$ in the film, $\delta U$ and $\delta \Delta$, to the extra-atomic relaxation energies $E_{rlx}$
that developed in response to the altered charge states of the ions. As the major sources of $E_{rlx}$, they considered both the image charge potential
energy $E_{image}$ between an extra charge and its image induced in the metal substrate and the polarization
energy $E_{pol}$ of the oxide caused by the extra charge, also known as the Madelung potential energy. The magnitude of the image potential energy should be larger for
thinner films due to the smaller mean distance between an extra charge in the film and its image in the substrate. The polarization energy
of the film should be different from that of the bulk oxide because the volume of the oxide in the film is reduced, whereas the polarizability
is enhanced at the surface of the oxide. The resulting variation between $E_{rlx}$ in the film and in the bulk state, $\delta E_{rlx}$, even quantitatively reproduced
the experimental $\delta U$ for a 1-monolayer (ML) MgO film on Ag(001).\cite{Altieri}
Nevertheless, the suspicion was raised that the successful reproduction of the experimental $\delta U$ by $\delta E_{rlx}$
could have been fortuitous, as there were many other effects (such as dipole-dipole interactions) that were not taken into account
as well as unjustified assumptions (such as $1/r$ dependence of the image potential on the atomic distance).
Moreover, Chambers and Droubay\cite{Chambers} reported that both Fe$_{2}$O$_{3}$ and Cr$_{2}$O$_{3}$ films on Pt(111) exhibited
negligible $\delta U$ and $\delta \Delta$.
This contrasting observation was attributed to effective intrinsic screening of charge transfer, which reduced
the extra-atomic relaxation to an undetectable level.
Thus, no comprehensive elucidation of the electronic properties of ultrathin oxide films on highly polarizable substrates
seems to exist,
and the number of experimental studies is too limited to properly assess the existing hypotheses.
The objective of the present work is to assess the existing hypotheses by comparing their predictions with experimental results for a different
system: ultrathin NiO films on Ag(001). Bulk NiO is prototypical as a charge-transfer insulator, \cite{Zaanen} and its
charge fluctuation energies have already been studied.\cite{Guenseop, Taguchi} Furthermore, the lattice mismatch between
NiO(001) and Ag(001) is only $\sim$ 2 $\%$, and the pseudomorphic growth of an NiO film is well established.\cite{Neddermeyer,Wollschlager,Caffio}
In other words, NiO films grown on Ag(001) are well suited for studying the thickness dependency of charge fluctuation energies such as $U$ and $\Delta$.
However, it is difficult to obtain the Coulomb interaction energy $U$(Ni $3d$) between Ni $3d$ electrons
via the method of Altieri {\it et al.},\cite{Altieri}
because the Ni $3d$ spectrum is difficult to isolate due to its overlap with the Ag $4d$ band of the substrate.
Instead, we study the interaction energy between Ni $3p$ holes $U$(Ni $3p$).
As the film becomes atomically thin, $U$(Ni $3p$) exhibits a substantial reduction from its bulk value.
Moreover, the extra-atomic relaxation energies represented by both $E_{image}$ and $E_{pol}$
well reproduce the change in $U$(Ni $3p$) from bulk to thin film, $\delta U$(Ni $3p)$, for a 1 ML NiO film on Ag(001).
Using the observed values of $\delta U$ and $\delta \Delta$, we estimate the N\'{e}el temperature $T_{N}$
in the mean field approximation that is found to be compatible with the experimental value of $T_{N}$ for a 3-ML NiO film.\cite{Tjeng}
These results reinforce the idea that the extra-atomic relaxation represented mainly
by $E_{image}$ and $E_{pol}$ determines $\delta U$ and $\delta \Delta$
for ultrathin oxide films of NiO, as well as MgO, on highly polarizable substrates.
\section{Experiment}
We performed {\it in situ} scanning tunneling microscopy (STM), photoelectron spectroscopy (PES), and Auger electron spectroscopy (AES) on ultrathin NiO films grown on Ag(001).
The STM work was carried out with a variable-temperature STM (Omicron).
The NiO films were grown in an attached preparation chamber, where the preliminary characterization of both Ag substrate
and NiO film was accomplished by x-ray photoelectron spectroscopy (XPS) and low-energy electron diffraction (LEED).
The PES and AES work were carried out with a soft x-ray beamline (7B1) at Pohang Light Source in Korea.
The end station of the beamline is composed of both an analysis chamber and a preparation chamber.
The analysis chamber is equipped with a hemispherical electron energy analyzer with a multichannel detector.
For the PES, the photoelectrons are collected at a take-off angle of 45$^{\circ}$ with respect to the surface normal of the sample.
The PES resolving power is $\sim$ 4000.\cite{HN}
The zero point of the binding energy is determined in reference to the binding energy of the Ag $3d$ (368.3 eV) of the clean Ag substrate.
All spectra presented in this work were recorded with the sample maintained at room temperature.
For both STM and PES, no charging effects were observed.
The NiO films were grown in preparation chambers for both STM and PES. The base pressures were
$<$ $5\times 10^{-10}$ Torr for both chambers. Wedge-shaped NiO films were grown by e-beam evaporation of high purity (5N) Ni
rod onto clean Ag(001) at room temperature at an ambient O$_{2}$ pressure ($P_{{\rm O}_{2}}$) of $1 \sim 3 \times 10^{-6}$ Torr.
The films were then thermally annealed at $430 \sim 450$ K at $P_{{\rm O}_{2}}$
$\sim 5 \times 10^{-7}$ Torr.
In the present work, we are especially interested in films within the monolayer limit,
as this enables a definite comparison of experimental $\delta U$ with theoretical values obtained by considering extra-atomic relaxation energies.
However, for films less than 2 ML, the growth mode is somewhat complicated due to the
($2 \times 1$) reconstruction and the bilayer growth of the NiO film.\cite{Neddermeyer, Wollschlager, Caffio}
Under the aforementioned growth conditions, we were able to grow 1 ML ($1 \times 1$) nickel oxide films, as assessed by a combination of techniques,
including STM, LEED, and XPS. (Further details are given in the following section.)
According to our previous extensive PES of NiO films, such growth conditions also minimize the chemical defects.\cite{chemical}
The thickness of each film was mainly determined by the ratio of the peak intensity of the Ag $3d$ in the NiO-covered region
to that of the clean Ag substrate, assuming layer-by-layer (LBL) growth of the film.
Because the growth of a NiO film does not follow LBL in an ideal fashion, the thicknesses described in the present work are nominal.
For coverage $\sim$ 0.5 ML, the film is mainly composed of monolayer-high islands and can be taken as a model for a 1 ML film (Fig. 1(a)).
(Further discussion is presented below.)
Moreover, up to 0.5 ML, the coverage recorded by a quartz microbalance is in reasonable agreement
with the nominal coverage estimated by the reduction of Ag $3d$ intensity, assuming layer-by-layer growth of the NiO film.
Based on these estimated film thicknesses, the growth rate is adjusted to $\sim$ 0.25 ML/min throughout the experiments.
\section{Result}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig1}
\caption[] {NiO films grown on Ag(001) at room temperature via e-beam evaporation of Ni at a $P_{O_{2}}$ of $3 \times 10^{-6}$ Torr.
The NiO film coverage is (a) 0.5 ML, (b) 0.67 ML, and (c) 1 ML. The scanning voltage and tunneling current were
-2.7 V (sample) and 0.1 nA, respectively. Figs. (b) and (c) show LEED images of ($1 \times 1$) patterns at an electron energy of 137 eV.}
\label{fig.1}
\end{figure}
Figure 1 (a)-1 shows a typical image of a nickel oxide film with $\sim$ 0.5 ML coverage, consisting of nickel oxide patches.
The line profile (Fig. 1 (a)-3) across a typical patch in Fig. 1 (a)-2 displays a plateau of apparent height $\sim 0.15$ nm, which corresponded to 1 ML in our previous STM study of NiO film on Ag(001) under similar tunneling conditions.\cite{STM}
As the film (with its nominal coverage of $\sim$ 0.5 ML) is mostly composed of islands of thickness 1 ML,
we regard its Ni and O spectra as representative of the electronic properties of a 1 ML NiO film.
After further deposition, as the nominal coverage approaches 1 ML, the second layer is preferentially occupied (Fig. 1 (b)), and the film becomes almost bilayered (Fig. 1 (c)). All films exhibit the ($1 \times 1$) LEED pattern (Fig. 1 (b) and (c)), whereas the well-known ($2 \times 1$) reconstruction is observed only sporadically in the STM images (figure not shown). However, the ($2 \times 1$) reconstruction becomes abundant if $P_{{\rm O}_{2}}$ is lowered below 10$^{-6}$ Torr.
\begin{figure}
\includegraphics[width=1\textwidth]{fig2}
\caption[] {As the ultrathin nickel oxide film on the Ag(001) substrate becomes thinner, the centroids of the (a) Ni $2p_{3/2}$ and (b) Ni $3p$ spectra shift to the lower binding energy side, while (c) the centroid of the Ni$_{LMM}$ Auger transition moves to the higher kinetic energy side. (d) The binding energy of the main peaks of the O $1s$ spectra becomes smaller as the film becomes thinner. The film thickness ranges from 0.5 to 15.00 ML. All the spectra are normalized by the incident photon intensity.}
\label{fig.2}
\end{figure}
$U$(Ni $3p$) can be obtained by comparing the energy of a two-hole state, Ni $3p^{4}$, to that of two one-hole states, Ni $3p^{5}$,
in accordance with the relationship
\begin{equation}
U(Ni\;\;3p) = E(3p^{4}) + E(3p^{6}) - 2E(3p^{5}).
\label{eqn:1}
\end{equation}
The variation of $U$(Ni $3p$) from that of the bulk (in practice, a bulk-like thick film), $\delta U$(Ni $3p$), can then be determined from the following relation,
\begin{equation}
\delta U(Ni\;\;3p) = \delta E^{bind}(Ni\;\;2p) - 2 \delta E^{bind}(Ni\;\;3p) - \delta E^{kin}(Ni_{LMM}),
\label{eqn:2}
\end{equation}
which is obtained via the approach of Altieri {\it et al.}\cite{Altieri}
To estimate $\delta U$(Ni $3p$), we measured the XPS spectra of Ni $2p$, Ni $3p$, and the Ni $LMM$ Auger transition as functions of the thickness of NiO film. We also utilized the O 1s spectra to estimate $\delta \Delta$(O $2p$ $\rightarrow$ Ni $3d$), as described below.
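Equation (2) is a simple linear combination of measured shifts; a sketch of the bookkeeping, using purely hypothetical shift values for illustration (the actual shifts are those plotted in Fig. 3):

```python
def delta_U_3p(d_bind_2p, d_bind_3p, d_kin_LMM):
    """Eq. (2): dU(Ni 3p) = dE_bind(Ni 2p) - 2 dE_bind(Ni 3p) - dE_kin(Ni LMM).
    All arguments are shifts in eV relative to the bulk-like thick film."""
    return d_bind_2p - 2.0 * d_bind_3p - d_kin_LMM

# Hypothetical illustrative shifts (eV), not measured values: binding energies
# decrease (negative shifts) and the Auger kinetic energy increases for thin films.
print(round(delta_U_3p(-0.8, -0.5, 0.4), 3))  # -> -0.2
```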
Figure 2 (a), (b), (c), and (d), respectively, show the Ni $2p$, Ni $3p$, Ni$_{LMM}$ Auger transition, and O $1s$ spectra of the NiO films.
The thicknesses of the films range from submonolayer (0.5 ML) to 15.00 ML. Even visual inspection reveals monotonic shifts of the major peaks of those spectra with variation of the film thickness. However, the core-level spectra of Ni comprise many peaks of various origins, such as final state effects and non-local screening,\cite{chemical} making identification of the main peak uncertain. This complication is also transferred to the Ni$_{LMM}$ Auger transition.
Thus, we estimate shifts of the peak positions of Ni $2p$, Ni $3p$, and the Ni$_{LMM}$ Auger transition with variation in the film thickness in terms of shifts of the centroids of their spectra,
anticipating that if shifts of the peak positions are mainly caused by extra-atomic relaxation, then all of the component peaks should shift by the same amount.
Each spectrum is fitted with a minimal number (three or four) of Gaussian-convoluted Lorentzian peaks with Shirley backgrounds, which are used to obtain the centroid position.
The red dotted lines overlapping the experimental spectra in Fig. 2 are best-fit curves.
To determine the peak positions of the O $1s$ spectra, we fit each spectrum with Gaussian-convoluted Lorentzian peaks.
The full widths at half-maxima of major peaks are $\sim$ 2.0 eV.
The tick marks in the spectra indicate the resulting peak positions of the O $1s$ spectra.
The curve-fitting results suggest the existence of some chemical defects, possibly Ni$_{2}$O$_{3}$, Ni(OH)$_{2}$ and/or NiO(OH),
which appear as small shoulders in the spectra.\cite{chemical}
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{fig3}
\caption[] {The centroids of (a) Ni $2p_{3/2}$, (b) Ni $3p$, (c) Ni$_{LMM}$, and (d) the main peak position for O $1s$ relative to those of the 15 ML film are shown
as functions of the thickness of the NiO film on Ag(001). Error limits were set by the scatter of the centroid positions, depending on the fitting parameters.}
\label{fig.3}
\end{figure}
Figure 3 shows the centroid energies of Ni $2p$, $3p$, the Auger Ni$_{LMM}$ spectra, and the energy of the O $1s$ main peak as functions of the film thickness relative to the corresponding energies of the 15 ML film that is considered as a bulk-like film.
Even though the data points exhibit some scatter, we may readily observe that as the film becomes thinner, the peak positions of all the photoelectron spectra tend to shift toward the lower binding energy side, whereas the Auger transition energy of Ni$_{LMM}$ increases monotonically. For an ultrathin MgO film on Ag(001), a similar reduction of the binding energies of the photoelectrons and increase of the Auger electron energy for relevant transitions are also observed as the thickness of the film decreases.\cite{Altieri}
In Fig. 4 (a), the values of $\delta U$(Ni $3p$) obtained from Eq. (2) are plotted relative to the value for the 15 ML film.
$\delta U$(Ni $3p$) decreases monotonically as the film becomes thin, as is the case for $\delta U$(Mg $2p$)
of ultrathin MgO films on Ag(001). However, $\delta U$(Ni $3p$) changes very rapidly with increasing film thickness and is already negligible for films thicker than 5 ML.
This behavior is in contrast to that of MgO films on Ag(001)\cite{Altieri}, which exhibit substantial $\delta U$ even for 10 ML
(although both MgO and NiO films have similar $\delta U$ values for 1 ML coverage, as shown in Fig. 4 (a)).
This is attributed to the larger polarizabilities $\alpha$(O$^{2-}$) and $\alpha$(Ni$^{2+}$) of NiO compared with MgO.
In the bulk state, $\alpha$(O$^{2-}$) of NiO (1.98 {\AA}$^{3}$) is larger than that of MgO (1.65 {\AA}$^{3}$).
Furthermore, $\alpha$(Ni$^{2+}$) $\sim$ 0.68 {\AA}$^{3}$, which is much larger than $\alpha$(Mg$^{2+}$) $\sim$ 0.09 {\AA}$^{3}$ for MgO,
possibly because Mg$^{2+}$ has a closed-shell configuration while Ni$^{2+}$ does not.
The larger polarizabilities of NiO should make the screening of extra charges in the cation more effective,
so extra-atomic relaxation should be more localized in NiO films than in MgO films.
Hence, in response to charge fluctuation, NiO films exhibit bulk-like behavior at smaller thicknesses than MgO films.
Note that for Fe$_2$O$_3$, $\alpha$(O$^{2-}$)$_{bulk}$ is
2 $\sim$ 2.91 {\AA}$^{3}$ (Ref. \cite{Raymond}), which is even larger than that of NiO.
Thus, for the oxide film, the coverage at which nonzero $\delta U$ is observed would be further limited according to the above argument, possibly below the experimental limit, as Chambers and Droubay\cite{Chambers} did not observe any $\delta U$ for ultrathin Fe$_2$O$_3$ films on Pt(111). These authors also attributed the absence of $\delta U$ to the large polarizabilities of the oxide.
\begin{figure}
\includegraphics[width=0.50\textwidth]{fig4}
\caption[] {Dependence of (a) $\delta U$(Ni 3p) and (b) $\delta \Delta^{\ast}$ (defined in Eq. 4) on the nominal thickness of the NiO film on Ag(001).
The error limits are set by the fitting uncertainty. The red line in each figure is the best-fit line for the data. The blue lines show the theoretical values of $\delta U$ and $\delta \Delta^{*}$ using the respective polarizabilities of bulk and surface.}
\label{fig.4}
\end{figure}
The shifts of peak positions summarized in Fig. 3 can be suspected to originate from band bending due to charge transfer at the interface between the NiO film and Ag substrate.
However, the amount of peak shift varies for different transitions in the same film, as Fig. 3 indicates.
Hence, the peak shifts cannot be attributed to band-bending effects.
Furthermore, hybridization between the NiO film and the Ag substrate at the interface is shown to be very weak by photoelectron spectroscopy of the valence bands of the films\cite{electronic}, as is also predicted by first-principles calculations.\cite{Casassa}
\section{Discussion}
\begin{small}
\begin{table}
\begin{center}
\caption{Both experimental and theoretical values of $\delta U$(Ni $3p$), $\delta \Delta^{\ast}$, and E$_{image}$ of a 1 ML NiO film are summarized.
E$_{pol}$ for bulk (1 ML film) is calculated using the polarizability of bulk (surface) NiO.
The definition of $\delta \Delta^{\ast}$(Ni $3p$) is given in the text.}
\begin{tabular}{cc|cc}
\hline
\multicolumn{2}{c||}{$\alpha(O^{2-})$ ({\AA}$^{3}$)} & \multicolumn{1}{c|}{Bulk: 1.98 (Ref. \cite{Iguchi, Nakatsugawa,Moriceau})} & \multicolumn{1}{c}{Surface: 2.43 (Ref. \cite{Iguchi, Nakatsugawa,Welton})} \\
\hline
\hline
\multicolumn{2}{c||}{$\delta U_{exp}$(Ni $3p$) (eV)} & \multicolumn{2}{c}{-2.2 } \\
\hline
\multicolumn{2}{c||}{$\delta \Delta^{\ast}_{exp}$(Ni $3p$) (eV)} & \multicolumn{2}{c}{-1.3 } \\
\hline
\multicolumn{2}{c||}{$E_{image}$ (eV)} & \multicolumn{2}{c}{6.50} \\
\hline
\multicolumn{1}{c|}{$E_{pol}$ (eV)} & \multicolumn{1}{c||}{Bulk} & \multicolumn{1}{c|}{-12.20} & \multicolumn{1}{c}{$-$} \\
\hline
\multicolumn{1}{c|}{} & \multicolumn{1}{c||}{1 ML} & \multicolumn{1}{c|}{$-$} & \multicolumn{1}{c}{-8.18} \\
\hline
\multicolumn{2}{c||}{$\delta U_{theo}$(Ni $3p$) (eV)} & \multicolumn{1}{c|}{$-$} & \multicolumn{1}{c}{-2.48} \\
\hline
\multicolumn{2}{c||}{$\delta \Delta^{\ast}_{theo}$(Ni $3p$) (eV)} & \multicolumn{1}{c|}{$-$}& \multicolumn{1}{c}{-1.55} \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{small}
We investigate whether the extra-atomic relaxation represented by both $E_{image}$ and $E_{pol}$ can account
for the reduction of $U$ for the NiO films, as well as for the 1 ML MgO film on Ag(001).\cite{Altieri}
Because it is not easy to acquire layer-resolved $\delta U$ values experimentally for films thicker than 2 ML,
we calculated $E_{image}$, $E_{pol}$, and thus $\delta U$ only for a 1 ML film.
$\delta U$ is obtained from the following relation\cite{Altieri},
\begin{equation}
\delta U = -(E_{image} - \delta E_{pol}),
\label{eqn:3}
\end{equation}
where $E_{image}$ and $\delta E_{pol}$ are both chosen to be positive,
following the convention of Altieri {\it et al.}\cite{Altieri}
The contribution of $E_{image}$ to $U$(Ni $3p$) is obtained by comparing a two-hole state, Ni $3p^{4}$, with two one-hole states, Ni $3p^{5}$.
Hence, $E_{image}$ is the difference between $(2e)^{2}/(4\pi \epsilon_{0} \times 2D)$ for the two-hole state $3p^{4}$
and $2\times e^{2}/(4\pi \epsilon_{0} \times 2D)$ for the two one-hole states $3p^{5}$.
Here, $D$ is the distance between a real charge and its image in the Ag substrate.
By the analysis of image potential surface states on clean Ag(001), the image plane is located 1.26 {\AA} above the Ag atoms
in the surface layer.\cite{Smith} As a result, the Ni atoms are separated from the image plane by 1.11 {\AA}.\cite{Groppo}
Thus, according to Eq. (\ref{eqn:3}), $E_{image}$ contributes $-6.50$ eV to $\delta U$ for a 1 ML film, assuming that $E_{image}$ is null for bulk NiO.
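The quoted value follows from the difference $(2e)^{2}-2e^{2}=2e^{2}$ over $4\pi\epsilon_{0}\times 2D$, with $D$ twice the 1.11 {\AA} separation from the image plane. A one-line numeric check using $e^{2}/4\pi\epsilon_{0}\approx 14.4$ eV {\AA}:

```python
E2 = 14.4     # e^2 / (4 pi eps0) in eV * Angstrom
d = 1.11      # Ni-to-image-plane separation in Angstrom
D = 2 * d     # distance between a real charge and its image

# (2e)^2 / (4 pi eps0 * 2D)  minus  2 * e^2 / (4 pi eps0 * 2D)
E_image = (4 * E2 - 2 * E2) / (2 * D)
assert abs(E_image - 6.50) < 0.02   # matches the quoted ~6.50 eV
```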
$E_{pol}$ is determined by the difference between the polarization energies of the oxide for a two-hole state and for two singly charged holes:
$\Sigma_{i}\alpha_{i} (2e)^{2}/(2(4\pi\epsilon_{0})R_{i}^{4}) - 2\Sigma_{i}\alpha_{i} e^{2}/(2(4\pi\epsilon_{0})R_{i}^{4})$.
For the calculation of bulk $E_{pol}$, we employ the polarizabilities $\alpha$(O$^{2-}$, Ni$^{2+}$) of bulk NiO, $\alpha$(O, Ni)$_{bulk}$,
while the polarizabilities of both O and Ni at the surface of bulk NiO, $\alpha$(O, Ni)$_{surface}$, are used to calculate $E_{pol}$ of the 1 ML film.
For $\alpha$(O$^{2-})_{bulk}$, three values have been reported: 1.49 (Ref. \cite{Kress}), 1.98 (Ref. \cite{Iguchi,Nakatsugawa,Moriceau}), and 2.64 (Ref. \cite{Janssen}) {\AA}$^{3}$. Among these, 1.98 {\AA}$^{3}$ is widely accepted. A value of 2.43 {\AA}$^{3}$ has been reported for $\alpha$(O$^{2-})_{surface}$ (Ref. \cite{Iguchi,Nakatsugawa}), which best fits the LEED I/V (spot intensity versus electron energy) data of a bulk-terminated NiO(001) surface.\cite{Welton}
$\alpha$(Ni$^{2+})$ is obtained from the empirical relationship $\alpha$(O$^{2-}$) + $\alpha$(Ni$^{2+})$
= 2.66 {\AA}$^{3}$ (Ref. \cite{Iguchi,Nakatsugawa}). This relationship was obtained for bulk NiO, but we tentatively assume that it holds down to a 1 ML film.
The resulting $E_{pol}$'s for both $\alpha$(O$^{2-})_{bulk}$ and $\alpha$(O$^{2-})_{surface}$ are summarized in Table I, along with $E_{image}$ for 1 ML NiO film.
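As a rough cross-check of the scale of $E_{pol}$, a naive monopole/induced-dipole lattice sum can be sketched, assuming a rock-salt lattice with $a \approx 4.17$ {\AA} and the bulk polarizabilities above. Dipole-dipole interactions and convergence acceleration are ignored, so only the order of magnitude of the quoted $-12.20$ eV should be expected:

```python
a = 4.17                          # rock-salt NiO lattice constant in Angstrom (assumed)
alpha_O, alpha_Ni = 1.98, 0.68    # bulk polarizabilities in Angstrom^3
EVA = 14.4                        # e^2 / (4 pi eps0) in eV * Angstrom

s = 0.0
n = 6                             # sum over a (2n+1)^3 block of sites around the Ni hole
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        for k in range(-n, n + 1):
            if (i, j, k) == (0, 0, 0):
                continue
            R4 = ((a / 2) ** 2 * (i*i + j*j + k*k)) ** 2
            # odd coordinate sum (in units of a/2) -> O site, even -> Ni site
            alpha = alpha_O if (i + j + k) % 2 else alpha_Ni
            s += alpha / R4

# (2e)^2 - 2 e^2 = 2 e^2, with U_pol = alpha q^2 / (2 (4 pi eps0) R^4) per site
E_pol = -(4 - 2) * EVA / 2 * s    # negative: polarization stabilizes the holes
assert -20.0 < E_pol < -10.0      # same scale as the quoted -12.20 eV
```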
Using the $E_{pol}$ and $E_{image}$ values in Table I, we obtain the theoretical value of $\delta U$ for a 1 ML NiO film from Eq. (3).
The experimental value of $\delta U$ (-2.2 eV) for the nominal 0.5 ML film, which is a model system for a 1 ML film, is well reproduced by the theoretical value, -2.48 eV (See Table I and Fig. 4).
This observation suggests that $E_{pol}$ and $E_{image}$ are the major origins of $\delta U$ for NiO films, as well as for MgO films\cite{Altieri}, and reinforces the model of Duffy {\it et al.}\cite{Duffy}
and Altieri {\it et al.}\cite{Altieri}
Altieri {\it et al.} suggested that manipulation of charge fluctuation energies of ultrathin oxide films can be used to
control their physical properties, such as N\'eel temperature $T_{N}$.\cite{Altieri}
Reduced charge fluctuation energies affect the superexchange interaction in NiO films.
According to Anderson's expression for the superexchange, the coupling constant $J$ depends on both $U$(Ni $3d$) and $\Delta$(Ni $3d$) as follows:
$J = -2t^{4}/ \Delta^{2} \times (1/\Delta + 1/U)$.
Therefore, one can expect that the reduced values of $U$(Ni $3d$) and $\Delta$(Ni $3d$) for an NiO film would lead to an increase
in the superexchange interaction for the film.
In line with this conjecture, Altieri {\it et al.} found that for a 3 ML NiO film
on Ag(001), $T_{N}$
did not decay as much as for an NiO film on an MgO substrate (in which no image charge screening is expected, and which therefore exhibits less
reduction of $U$ and $\Delta$).\cite{Tjeng}
To evaluate $J_{3ML}$ from the above expression for $J$, we use $\delta U$ (Ni 3p) in place of $\delta U$ (Ni 3d) (which is not available).
The variation in $U$ has an extra-atomic origin, and thus we can expect that $\delta U$(Ni $3d$) will differ little
from $\delta U$(Ni $3p$). We may then estimate $\delta \Delta$ along the lines of Altieri {\it et al.}\cite {Altieri}:
\begin{equation}
\begin{split}
\delta \Delta (O\;\;2p \rightarrow Ni\;\;3d) & = \delta E^{bind}(O\;\;1s) - \delta E^{bind}(Ni\;\;3d) + \delta U(Ni\;\;3d)\\
& \approx \delta E^{bind}(O\;\;1s) - \delta E^{bind}(Ni\;\;3p) + \delta U(Ni\;\;3p)\\
& = \delta \Delta^{\ast}.
\end{split}
\label{eqn.5}
\end{equation}
Here, we use $\delta U$(Ni $3p$) in place of $\delta U$(Ni $3d$) and denote the resulting $\delta \Delta$ by $\delta \Delta^{\ast}$.
From the interpolation of $\delta U$(Ni $3p$) and $\delta \Delta^{\ast}$ in Fig. 4, the values of $\delta U$(Ni $3p$)
and $\delta \Delta^{\ast}$ for a 3 ML film are estimated to be $-0.62$ and $-0.47$ eV, respectively.
For $U$ and $\Delta$ of bulk NiO, we use 6.5 and 4.0 eV, respectively, referring to the report of Taguchi {\it et al.}\cite{Taguchi}
Combining the above input, $J$ is found to be $-2t^{4}$ $\times$ 0.0364 for a 3 ML NiO film on Ag, while $J$ for bulk NiO is
$-2t^{4}$ $\times$ 0.0252. Here, we assume that the transfer integral $t$ between the anion O $2p$ and the cation Ni $3d$ is the same for both the bulk and the 3 ML film, as $t$ is a very local property and is assumed to be little influenced by extra-atomic effects.
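These two numerical factors follow directly from the superexchange expression with the bulk values $U = 6.5$ eV, $\Delta = 4.0$ eV and the interpolated film reductions:

```python
def J_factor(U, Delta):
    """Superexchange J = -2 t^4 / Delta^2 * (1/Delta + 1/U);
    returns the positive factor multiplying -2 t^4 (units of eV^-3)."""
    return (1.0 / Delta**2) * (1.0 / Delta + 1.0 / U)

U_bulk, Delta_bulk = 6.5, 4.0          # bulk NiO (Taguchi et al.)
U_film = U_bulk - 0.62                 # 3 ML film: U reduced by 0.62 eV
Delta_film = Delta_bulk - 0.47         # Delta reduced by 0.47 eV

assert abs(J_factor(U_bulk, Delta_bulk) - 0.0252) < 5e-4
assert abs(J_factor(U_film, Delta_film) - 0.0364) < 5e-4
```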
We can now estimate $T_{N}$ for a 3 ML film in the mean field approximation.
In the mean field approximation, $T_{N} \sim S(S + 1) \times N \times J$, where $S$ is the spin moment of an Ni ion and $N$ is the mean number of nearest-neighbor Ni ions. Then,
\begin{equation}
T_{N,film} = T_{N,bulk} \times S_{film}(S_{film} + 1)/S_{bulk}(S_{bulk} + 1) \times (N_{film}/N_{bulk}) \times (J_{3ML}/J_{bulk})
\label{eqn:6}
\end{equation}
For a bulk-like thick NiO film on Ag(001), $T_{N}$ was experimentally determined to be 535 K.\cite{Tjeng} Values of 1.90 $\mu_B$ (Ref. \cite{Cheetham}) and 2.2 $\mu_B$ (Ref. \cite{Fernandez, Neubeck}) have been reported for the total magnetic
moment $M_{bulk}$ of bulk NiO. First-principles calculations
predict that $M_{3ML}$ of a 3 ML NiO film on Ag(001) is reduced to $\sim$ 1.67 $\mu_{B}$ (the average of the moments of the 1st, 2nd, and
3rd layers).\cite{Cinquini} If the ratio $L/S$ of the orbital moment to the spin moment is assumed to be the same (0.34)
(Ref. \cite{Fernandez, Neubeck}) for both bulk
and film, $S_{bulk}$ is 0.81 (Ref. \cite{Cheetham}) or 0.94 (Ref. \cite{Fernandez, Neubeck}), and $S_{3ML}$ is 0.71. The mean number of nearest neighbors of a 3 ML film is $\sim$ 9.33. If all this input is taken into account, then according to Eq. (6),
$T_{N}$ is between 400 and 498 K for a 3 ML NiO film on Ag(001).
The wide variation in $T_{N}$ originates mainly from the large uncertainty in the spin moment of bulk NiO. The
experimentally determined $T_{N}$ of a 3 ML NiO film is 390 K\cite{Tjeng}, which is close to the range of the present estimate. Despite the many
simplifications and assumptions, $\delta U$(Ni $3p$) and $\delta \Delta^{*}$(Ni $3p$) seem to provide a reasonable estimate of the range of $T_{N}$ for
an NiO film under the mean field approximation.
Most importantly, the lower limit of the present estimate (400 K) is still much higher than the $T_{N} \sim$ 40 K observed for a 3 ML NiO film on an MgO substrate.\cite{Tjeng}
At the very least, this supports the argument that the reduction of charge fluctuation energies on a polarizable substrate gives rise to high values of $T_{N}$ (as observed for the 3 ML NiO film on Ag(001) in comparison with the results for MgO(001)).
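The arithmetic of Eq. (6) can be verified with a short script. This is our own check, not part of the original analysis; the inputs are the values quoted above, and $N_{bulk}=12$ (the cation coordination of the rock-salt lattice, consistent with the quoted film average of 9.33) is our assumption:

```python
# Mean-field estimate of T_N for a 3 ML NiO film on Ag(001), per Eq. (6).
# All inputs are the values quoted in the text; N_bulk = 12 is assumed
# (rock-salt cation coordination).
T_N_bulk = 535.0               # K, bulk-like NiO film on Ag(001)
S_film = 0.71                  # Ni spin moment in the 3 ML film
N_film, N_bulk = 9.33, 12.0    # mean numbers of nearest-neighbor Ni ions
J_ratio = 0.0364 / 0.0252      # J_3ML / J_bulk (the -2t^2 prefactor cancels)

for S_bulk in (0.81, 0.94):    # the two reported bulk spin moments
    T_N_film = (T_N_bulk
                * S_film * (S_film + 1) / (S_bulk * (S_bulk + 1))
                * (N_film / N_bulk) * J_ratio)
    print(f"S_bulk = {S_bulk}: T_N(3 ML) ~ {T_N_film:.0f} K")
```

The two bulk spin moments reproduce the quoted 400--498 K range, confirming that the spread comes almost entirely from the uncertainty in $S_{bulk}$.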
\section{Summary and Conclusion}
Using both photoelectron spectroscopy and Auger electron spectroscopy, we found that the on-site Coulomb interaction energy $U$ of ultrathin NiO films on Ag(001) decreases monotonically in a manner analogous to the case of ultrathin MgO films on Ag(001). The observed value of $\delta U$ (Ni $3p$) for a 1 ML film was well reproduced by considering extra-atomic relaxations represented by image charge screening by the Ag substrate and modified polarization energy of the film, thus affirming the pictures of Duffy {\it et al.}\cite{Duffy} and Altieri {\it et al.}\cite{Altieri} Furthermore, using $\delta U$ (Ni $3p$), we estimated the value of $T_{N}$ for a 3 ML NiO film on Ag(001), and the estimate was comparable to the experimental observation. Hence, the model proposed by the aforementioned authors seems to offer a unified picture of the variation in charge fluctuation energies of ultrathin MgO and NiO films, even though further refinement is necessary in view of the many assumptions/estimations employed without precise quantitative justification.
\section{Acknowledgement}
This work was supported by the KOSEF grant No. R01-2007-000-20249-0 and the NRF grant No. 20110004239.
\section{Introduction}
The logic of induction, the process by which we obtain predictive
laws, theories, or models of the world, has been a long standing concern
of philosophy, science, statistics and artificial intelligence. Theories
typically have two aspects: structural or qualitative (corresponding
to concepts or variables and their relationships, or, in philosophical
parlance, \textit{ontology}) and numeric or quantitative (corresponding
to parameters e.g., probabilities). Once the qualitative aspect of
a certain law is fixed, the quantitative aspect becomes the subject
of experimental science and statistics. Induction is the process of
inferring predictive laws, theories, or models of the world from a
stream of observations. In general, the observations may be passive,
or may be the outcomes of interventions by the learning agent. Here,
we limit ourselves to induction from passive observation alone.
Under the \textit{computationalistic assumption} (i.e., the Church-Turing
thesis, which asserts that any theory can be described by a Turing
Machine \cite{Turing 1936}), one way to solve the induction problem
is to enumerate all the Turing machines (dovetailing in order to cope
with the countably infinite number of them) and pick one that strikes
a good balance between the predictability (of the finite experience
stream) and size (complexity) \cite{Solomonoff 1964a}, \cite{Solomonoff 1964b},
\cite{Schmidhuber et al. 1997} or within a Bayesian setting, using
a weighted vote among the predictions of the various models \cite{Hutter 2005}
(See \cite{Burgin 2005} and references therein). In the general setting,
a priori the number of types of possible structural laws that can
be postulated is infinite. This makes it difficult to design a general-purpose
induction strategy. We ask whether a finite and minimalistic
set of fundamental structural operations suffices to construct \textit{any}
set of laws. If so, such a set would render induction more tractable, because
at any step the learner has to pick from a small \textit{finite}
set of possible operations as opposed to an infinite one.
Because Turing machines are rather opaque from a structural standpoint,
we use the alternative, yet equivalent, mechanism of generative grammar
\footnote{See \cite{Oates et al. 2004} for a similarly motivated attempt using
\textit{Lambda calculus}.}. This allows us to work with theories that can be built recursively
by applying structural operations drawn from a finite set. The intuition
behind this approach is that induction involves incrementally constructing
complex structures using simpler structures (e.g., using super-structuring,
also called \textit{chunking}), and simplifying complex structures
when possible (e.g., using abstraction). Such a compositional approach
to induction offers the advantage of increased transparency over the
enumerate-and-select approach pioneered by Solomonoff \cite{Solomonoff 1964a},
\cite{Solomonoff 1964b}. It also offers the possibility of reusing
intermediate structures as opposed to starting afresh with a new Turing
machine at each iteration, thereby replacing enumeration by a process
akin to dynamic programming or its heuristic variants such as the
A{*} algorithm.
We seek laws or patterns that explain a stream of observations through
successive applications of operations drawn from a small finite set.
The induced patterns are not necessarily described solely in terms
of the input observations, but may also use (a finite number of) additional
internal or hidden (i.e., not directly observable) entities. The role
of these internal variables is to simplify explanation. The introduction
of internal variables to aid the explanation process is not without
perils \cite{Popper 1934}\footnote{Consider, for example, a hidden variable
which stands for the truth value of the sentence: {}``In heaven, if it rains,
do the angels get wet or not?''}. One way to preclude the introduction of internal variables is to
apply the following\textit{ demarcation criterion}: If the agent cannot
distinguish possible streams of observations based on the values of
an internal variable, then the variable is non-sensical (i.e., independent
of the data or {}``senses'')
\footnote{This is a radical interpretation of an idea that shows up in the history
of Philosophy from Positivism through the empiricists and scholastics
down to Aristotle's \emph{{}``Nihil est in intellectu quod non prius
fuerit in sensu''}
}. The direct connection requirement restricts the no-nonsense theories
to those formed out of empirical laws \cite{Ayer 1936} (i.e., laws that
relate only measurable quantities). However, several scientists, including
Albert Einstein, while being sympathetic to the positivists' ideas,
have successfully used in their theories hidden variables that have
at best an indirect connection to observables. This has led to a series
of revisions of the positivist doctrine, culminating in Carnap's attempt
to accommodate hidden variables in scientific explanations \cite{Carnap 1966}.
The observables and the internal variables in terms of which the explanation
is offered can be seen as the ontology\footnote{The ontology in this case is
not universal, as it often is in philosophy; it is just a set of concepts and
interrelations among them that afford the expression of theories.} - i.e.,
the set of concepts and their interrelationships found useful
by the agent in theorizing about its experience. In this setting,
structural induction is tantamount to ontology construction.
The rest of the paper is organized as follows: Section 2 introduces
Abstraction Super-structuring Normal Forms that correspond to a general
class of Turing-equivalent generative grammars that can be used to
express theories about the world, and shows that \textit{abstraction}
(grouping \textit{similar} entities) and \textit{super-structuring} (combining
topologically, e.g., spatio-temporally, close entities) are the essential
structural operations in the induction process. Only two more structural
operations, namely \textit{reverse abstraction} and \textit{reverse
super-structuring} (the duals of abstraction and super-structuring,
respectively), suffice in order to exploit the full power of Turing-equivalent
generative grammars in induction. Section 3 interprets the theoretical
results in a larger context: the nature of hidden variables, radical
positivism and the two-century-old claim of David Hume about the principles
of \textit{connexion} among ideas. Section 4 concludes with a summary.
\vspace*{-0.1in}
\section{Abstraction Super-Structuring Normal Forms}
We start by recapitulating the definitions and notations for generative
grammars and the theorem that claims the equivalence between Generative
Grammars and Turing Machines. We then draw the connections between
the process of induction and the formalism of generative grammars
and motivate the quest for a minimalistic set of fundamental structural
operations. We then get to the main results of the paper: a series
of characterization theorems of two important classes of Generative
Grammars: Context-Free and General Grammars, in terms of a small set
of fundamental structural operations.
\subsection{Generative Grammars and Turing Machines}
\textbf{Definitions (Grammar)} A (generative) grammar is a quadruple
$(N,T,S,R)$ where $N$ and $T$ are disjoint finite sets called NonTerminals
and Terminals, respectively, $S$ is a distinguished element from
$N$ called the start symbol and $R$ is a set of rewrite rules (a.k.a.
production rules) of the form $(l\to r)$ where $l\in(N\cup T)^{*}N(N\cup T)^{*}$
and $r\in(N\cup T)^{*}$. Additionally, we call $l$ the left hand
side (lhs) and $r$ the right hand side (rhs) of the rule $(l\to r)$.
The language generated by a grammar is defined by $L(G)=\{w\in T^{*}|S\overset{*}{\to}w\}$
where $\overset{*}{\to}$ stands for the reflexive transitive closure
of the rules from $R$. Furthermore $\overset{+}{\to}$ stands for
the transitive (but not reflexive) closure of the rules from $R$.
We say that two grammars $G$,$G'$ are equivalent if $L(G)=L(G')$.
The sequence of steps in a set of transitions $\alpha\overset{*}{\to}\beta$
is called a derivation. If we want to distinguish between derivations
in different grammars we will write $\alpha\overset{*}{\to_{G}}\beta$
or mention it explicitly. We denote by $\epsilon$ the empty string
in the language. We will sometimes use the shorthand notation $l\to r_{1}|r_{2}|...|r_{n}$
to stand for the set of rules $\{l\to r_{i}\}_{i=1,n}$. See e.g.,
\cite{Salomaa 1985} for more details and examples.
\textbf{Definition (Grammar Types)} Let $G=(N,T,S,R)$ be a grammar.
Then
\begin{enumerate}
\item G is a \textbf{regular grammar (REG)} if all the rules $(l\to r)\in R$
have the property that $l\in N$ and $r\in(T^{*}\cup T^{*}N)$.
\item G is \textbf{context-free grammar (CFG)} if all the rules $(l\to r)\in R$
have the property that $l\in N$.
\item G is \textbf{context-sensitive grammar (CSG)} if all the rules $(l\to r)\in R$
have the property that they are of the form $\alpha A\beta\to\alpha\gamma\beta$
where $A\in N$ and $\alpha,\beta,\gamma\in(N\cup T)^{*}$ and $\gamma\neq\epsilon$.
Furthermore if $\epsilon$ is an element of the language one rule
of the form $S\to\epsilon$ is allowed and furthermore the restriction
that $S$ does not appear in the right hand side of any rule is imposed.
We will call such a sentence an $\epsilon-Amendment$.
\item G is \textbf{general grammar (GG)} if all the rules $(l\to r)\in R$
have no additional restrictions.
\end{enumerate}
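The rule-form constraints above are purely syntactic, so a grammar's class can be checked mechanically. The following sketch (our own encoding, with rules as pairs of symbol tuples; the context-sensitive check is omitted for brevity) returns the most restrictive of the classes above that admits the grammar:

```python
def grammar_class(N, T, rules):
    """Classify a grammar as 'REG', 'CFG', or 'GG'.

    rules is a list of (lhs, rhs) pairs of symbol tuples; N and T are the
    disjoint sets of NonTerminals and Terminals."""
    def is_regular(l, r):
        # l in N and r in T* or T*N
        return (len(l) == 1 and l[0] in N
                and all(s in T for s in r[:-1])
                and (not r or r[-1] in T or r[-1] in N))
    def is_context_free(l, r):
        return len(l) == 1 and l[0] in N
    if all(is_regular(l, r) for l, r in rules):
        return "REG"
    if all(is_context_free(l, r) for l, r in rules):
        return "CFG"
    return "GG"   # (the CSG conditions are not checked in this sketch)

N, T = {"S"}, {"a", "b"}
print(grammar_class(N, T, [(("S",), ("a", "S")), (("S",), ())]))       # REG
print(grammar_class(N, T, [(("S",), ("a", "S", "b")), (("S",), ())]))  # CFG
```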
\textbf{Theorem 1.} \emph{The set of General Grammars are equivalent
in power with the set of Turing Machines. That is, for every Turing
Machine $T$ there exists a General Grammar $G$ such that $L(G)=L(T)$
and vice versa.}
\emph{Proof.} This theorem is a well known result. See for example
\cite{Salomaa 1985} for a proof
\footnote{Similar results of equivalence exist for transductive versions of
Turing machines and grammars as opposed to the recognition versions
given here (See e.g., \cite{Burgin 2005} and references therein).
Without loss of generality, we will assume the recognition as opposed
to the transductive setting
}. $\Box$
\vspace*{-0.1in}
\subsection{Structural Induction, Generative Grammars and Motivation}
Before proceeding with the main results of the paper we examine the
connections between the setting of generative grammars and the problem
of structural induction. The terminals in the grammar formalism denote
the set of observables in our induction problem. The NonTerminals
stand for internal variables in terms of which the observations (terminals)
are explained. The {}``explanation'' is given by a derivation of
the stream of observations from the initial symbol $S\overset{*}{\to}w$.
The NonTerminals that appear in the derivation are the internal variables
in terms of which the surface structure given by the stream of observations
$w$ is explained. Given this correspondence, structural induction
aims to find an appropriate set of NonTerminals $N$ and a set of
rewrite rules $R$ that will allow us to derive (explain) the input
stream of observations $w$ from the initial symbol $S$. The process
of Structural Induction may invent a new rewrite rule $l\to r$ under
certain conditions and this new rule may contain in turn new NonTerminals
(internal variables) which are added to the already existing ones.
The common intuition is that $l$ is a simpler version of $r$, as
the final goal is to reduce $w$ to $S$. The terminals constitute
the input symbols (standing for observables), the NonTerminals constitute
whatever additional {}``internal'' variables that are needed, the
rewrite rules describe their interrelationship and altogether they
constitute the ontology. The correspondence between the terms used
in structural induction and generative grammars is summarized in Table
\ref{tab:Correspondence-between-Structural}.
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Structural Induction & Generative Grammar\tabularnewline
\hline
\hline
Observables & Terminals $T$\tabularnewline
\hline
Internal Variables & NonTerminals $N$\tabularnewline
\hline
Law / Theory & production rule(s) $l\to r$\tabularnewline
\hline
Ontology & Grammar $G$\tabularnewline
\hline
Observations Stream & word $w$\tabularnewline
\hline
Explanation & Derivation $S\overset{*}{\to}w$\tabularnewline
\hline
Partial Explanation & Derivation $\alpha\overset{*}{\to}w$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{tab:Correspondence-between-Structural}Correspondence between
Structural Induction and Generative Grammars }
\end{table}
Thus, in general, structural induction may invent any rewrite rule
of the form $l\to r$, potentially introducing new NonTerminals. The
problem is that there are infinitely many such rules that we could
invent at any point in time. In order to make the process better
defined, we ask whether it is possible to find a set of fundamental
structural operations which is finite and minimalistic, such that
all the rules (or more precisely sets of rules) can be expressed in
terms of these operations. This would establish a normal form in terms
of a finite set of operations and then the problem of generating laws
will be reduced to making appropriate choices from this set without
sacrificing completeness. In the next subsection we will attempt to
decompose the rules $l\to r$ into a small finite set of fundamental
structural elements which will allow us to design better structure
search mechanisms.
\vspace*{-0.1in}
\subsection{ASNF Theorems}
\textbf{Issue} ($\epsilon-Construction$). In the rest of the paper
we will prove some theorems that impose various sets of conditions
on a grammar $G$ in order for the grammar to be considered in a certain
Normal Form. If \emph{$\epsilon\in L(G)$ }however,\emph{ }we will
allow two specific rules of the grammar $G$ to be exempted from these
constraints and still consider the grammar in the Normal Form. More
exactly, if \emph{$\epsilon\in L(G)$} and given a grammar $G'$ such
that $L(G')=L(G)\backslash\{\epsilon\}$ and $G'=(N',T,S',R')$ is
in a certain Normal Form, then the grammar $G''=(N'\cup\{S\},T,S,R=R'\cup\{S\to\epsilon,S\to S'\})$
where $S\notin N'$ will also be considered in that certain Normal
Form, despite the fact that the two productions $\{S\to\epsilon,S\to S'\}$
may violate the conditions of the Normal Form. These are the only
productions that will be allowed to violate the Normal Form conditions.
Note that $S$ is a brand new NonTerminal and does not appear in any
other productions aside from these two. Without loss of generality
we will assume in the rest of the paper that $\epsilon\notin L(G)$.
This is because if $\epsilon\in L(G)$ we can always produce, using
the above-mentioned construction, a grammar $G''$ that is in a certain
Normal Form with $L(G'')=L(G)$ from a grammar $G'$ that is in that
Normal Form and satisfies $L(G')=L(G)\backslash\{\epsilon\}$. We
will call the procedure just outlined the $\epsilon-Construction$.
We will call the following statement the $\epsilon-Amendment$: Let
\emph{$G=(N,T,S,R)$ }be a grammar, if $\epsilon$ is an element of
the language $L(G)$ one rule of the form $S\to\epsilon$ is allowed
and furthermore the restriction that $S$ does not appear in the right
hand side of any rule is imposed.
First we state a weak form of the Abstraction SuperStructuring Normal
Form for Context Free Grammars.
\textbf{Theorem 2 (Weak-CFG-ASNF).} \emph{Let $G=(N,T,S,R)$, }$\epsilon\notin L(G)$\emph{
be a Context Free Grammar. Then there exists a Context Free Grammar
$G'$ such that $L(G)=L(G')$ and $G'$ contains only rules of the
following type:}
\begin{enumerate}
\item \emph{$A\to B$}
\item \emph{$A\to BC$ }
\item \emph{$A\to a$ }
\end{enumerate}
\emph{Proof.} Since $G$ is a CFG it can be written in the Chomsky
Normal Form \cite{Salomaa 1985}. That is, such that it contains only
productions of the forms 2 and 3. If $\epsilon\in L(G)$ a rule of
the form $S\to\epsilon$ is allowed and $S$ does not appear in the
rhs of any other rule\emph{ }($\epsilon-Amendment$). Since we have
assumed that $\epsilon\notin L(G)$, we do not need to deal with the $\epsilon-Amendment$,
which completes the proof.
$\Box$
\textbf{Remarks. }
\begin{enumerate}
\item We will call the rules of type 1 Renamings (REN).
\item We will call the rules of type 2 SuperStructures (SS) or compositions.
\item The rules of type 3 are just convenience renamings of observables
into internal variables in order to uniformize the notation, and we
will call them Terminal (TERMINAL).
\end{enumerate}
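A grammar in this normal form can be used directly for recognition. The sketch below is our own illustration (names and encoding are ours): CKY parsing extended with a closure over the REN rules, where `ren`, `ss`, and `term` hold the rules of types 1, 2, and 3 respectively:

```python
def recognizes(start, ren, ss, term, w):
    """CKY over ASNF rules: ren=[(A,B)], ss=[(A,B,C)], term=[(A,a)]."""
    def close(cell):                 # add A whenever A -> B (REN) and B in cell
        changed = True
        while changed:
            changed = False
            for a, b in ren:
                if b in cell and a not in cell:
                    cell.add(a)
                    changed = True
        return cell
    tab = {}
    n = len(w)
    for i, ch in enumerate(w):       # TERMINAL rules seed the length-1 spans
        tab[i, i + 1] = close({a for a, t in term if t == ch})
    for span in range(2, n + 1):     # SS rules combine adjacent spans
        for i in range(n - span + 1):
            j = i + span
            cell = set()
            for k in range(i + 1, j):
                cell |= {a for a, b, c in ss
                         if b in tab[i, k] and c in tab[k, j]}
            tab[i, j] = close(cell)
    return n > 0 and start in tab[0, n]

# {a^n b^n : n >= 1}: S -> AB | AX, X -> SB, A -> a, B -> b
term = [("A", "a"), ("B", "b")]
ss = [("S", "A", "B"), ("S", "A", "X"), ("X", "S", "B")]
print(recognizes("S", [], ss, term, "aabb"))   # True
print(recognizes("S", [], ss, term, "abab"))   # False
```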
We are now ready to state the Weak ASNF theorem for the general
case.
\textbf{Theorem 3 (Weak-GEN-ASNF).} \emph{Let $G=(N,T,S,R)$, }$\epsilon\notin L(G)$\emph{
be a General (unrestricted) Grammar. Then there exists a grammar $G'$
such that $L(G)=L(G')$ and $G'$ contains only rules of the following
type:}
\begin{enumerate}
\item \emph{$A\to B$}
\item \emph{$A\to BC$ }
\item \emph{$A\to a$ }
\item \emph{$AB\to C$ }
\end{enumerate}
\emph{Proof .} See Appendix.
\textbf{Remark. }We will call the rules of type 4 Reverse Super-Structuring
(RSS).
In the next theorem we will strengthen our results by allowing only
the renamings (REN) to be non unique. First we define what we mean
by uniqueness and then we proceed to state and prove a lemma that
will allow us to strengthen the Weak-GEN-ASNF by imposing uniqueness
on all the productions save the renamings.
\textbf{Definition (}\textbf{\emph{strong-uniqueness}}\textbf{).}
We will say that a production $\alpha\to\beta$ respects \emph{strong-uniqueness}
if this is the only production that has the property that it has $\alpha$
in the lhs and also this is the only production that has $\beta$
on the rhs.
\textbf{Lemma 2. }\emph{Let $G=(N,T,S,R)$, $\epsilon\notin L(G)$, be a
grammar such that all its productions are of the form:}
\begin{enumerate}
\item $A\to B$
\item $A\to\zeta$ , $\zeta\notin N$
\item $\zeta\to B$ , $\zeta\notin N$
\end{enumerate}
\emph{Modify the grammar $G$ to obtain $G'=(N',T,S',R')$ as follows:}
\begin{enumerate}
\item \emph{Introduce a new start symbol $S'$ and the production $S'\to S$.}
\item \emph{For each $\zeta\notin N$ that appears in the rhs of some production
in $G$, let $\{A_{i}\to\zeta\}_{i=1,n}$ be all the productions that
contain $\zeta$ in the rhs. Introduce a new NonTerminal
$X_{\zeta}$ and the productions $X_{\zeta}\to\zeta$ and $\{A_{i}\to X_{\zeta}\}_{i=1,n}$,
and eliminate the old productions $\{A_{i}\to\zeta\}_{i=1,n}$.}
\item \emph{For each $\zeta\notin N$ that appears in the lhs of some production
in $G$, let $\{\zeta\to B_{j}\}_{j=1,m}$ be all the productions
that contain $\zeta$ in the lhs. Introduce a new NonTerminal
$Y_{\zeta}$ and the productions $\zeta\to Y_{\zeta}$ and $\{Y_{\zeta}\to B_{j}\}_{j=1,m}$,
and eliminate the old productions $\{\zeta\to B_{j}\}_{j=1,m}$.}
\end{enumerate}
\emph{Then the new grammar $G'$ generates the same language as the
initial grammar $G$ and all the productions of the form $A\to\zeta$
and $\zeta\to B$ , $\zeta\notin N$ respect strong-uniqueness. Furthermore,
if the initial grammar has some restrictions on the composition of
the $\zeta\notin N$ that appears in the productions of type 2 and
3, they are respected since $\zeta$ is left unchanged in the productions
of the new grammar and the only other types of productions introduced
are renamings that are of neither type 2 nor type 3. }
\emph{Proof}. See Appendix \cite{Silvescu 2011}.
By applying Lemma 2 to the previous two Weak-ASNF theorems we obtain
strong versions of these theorems which enforce \emph{strong-uniqueness}
in all the productions save the renamings.
\textbf{Theorem 4 (Strong-CFG-ASNF).} \emph{Let $G=(N,T,S,R)$, }$\epsilon\notin L(G)$\emph{
be a Context Free Grammar. Then there exists a Context Free Grammar
$G'$ such that $L(G)=L(G')$ and $G'$ contains only rules of the
following type:}
\begin{enumerate}
\item \emph{$A\to B$}
\item \emph{$A\to BC$ - and this is the only rule that has $BC$ in the
rhs and this is the only rule that has $A$ in the lhs (strong-uniqueness).}
\item \emph{$A\to a$ - and this is the only rule that has $a$ in the rhs
and this is the only rule that has $A$ in the lhs (strong-uniqueness).}
\end{enumerate}
\emph{Proof}. Apply Lemma 2 to the grammar converted into Weak-CFG-ASNF. $\Box$
\textbf{Theorem 5 (Strong-GEN-ASNF).} \emph{Let $G=(N,T,S,R)$, }$\epsilon\notin L(G)$\emph{
be a general (unrestricted) grammar. Then there exists a grammar $G'$
such that $L(G)=L(G')$ and $G'$ contains only rules of the following
type:}
\begin{enumerate}
\item \emph{$A\to B$}
\item \emph{$A\to BC$ - and this is the only rule that has $BC$ in the
rhs and this is the only rule that has $A$ in the lhs (strong-uniqueness).}
\item \emph{$A\to a$ - and this is the only rule that has $a$ in the rhs
and this is the only rule that has $A$ in the lhs (strong-uniqueness).}
\item \emph{$AB\to C$ - and this is the only rule that has $C$ in the
rhs and this is the only rule that has $AB$ in the lhs (strong-uniqueness).}
\end{enumerate}
\emph{Proof}. Apply Lemma 2 to the grammar converted into Weak-GEN-ASNF. $\Box$
\textbf{Remark. }After enforcing strong uniqueness the only productions
that contain choice are those of type 1 - renamings (REN).
In the light of this theorem we proceed to introduce the concept of
abstraction and prove some additional results.
\vspace*{-0.1in}
\subsection{Abstractions And Reverse Abstractions}
\textbf{Definitions (Abstractions Graph).} Given a grammar $G=(N,T,S,R)$
which is in an ASNF from any of Theorems 2--5, we call an \emph{Abstractions
Graph of the grammar $G$}, and denote by $AG(G)$, the directed graph
$(N,E)$ whose nodes are the NonTerminals of the grammar $G$ and
whose edges are constructed as follows: we put a directed edge starting
from $A$ and ending in $B$ iff $A\to B$ is a production that occurs
in the grammar. Without loss of generality, we can assume that the
graph has no self loops, i.e., edges of the form $A\to A$; If such
self-loops exist, the corresponding productions can be eliminated
from the grammar without altering the language. In such a directed
graph a node $A$ has a set of outgoing edges and a set of incoming
edges which we refer to as out-edges and in-edges respectively. We
will call a node $A$ along with its out-edges the \emph{Abstraction
at A} and denote it $ABS(A)=\{A,OE_{A}=\{(A,B)|(A,B)\in E\}\}$. Similarly,
we will call a node $A$ along with its in-edges the \emph{Reverse
Abstraction at A} and denote it $RABS(A)=\{A,IE_{A}=\{(B,A)|(B,A)\in E\}\}$.
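As a concrete illustration (our own encoding), $AG(G)$ and the $ABS$/$RABS$ neighborhoods can be computed directly from the REN productions:

```python
def abstractions_graph(ren):
    """Build the edge set of AG(G) from renaming productions A -> B;
    self-loops A -> A are dropped, as they do not alter the language."""
    return {(a, b) for a, b in ren if a != b}

def ABS(edges, a):    # the Abstraction at A: the node plus its out-edges
    return a, {e for e in edges if e[0] == a}

def RABS(edges, a):   # the Reverse Abstraction at A: the node plus its in-edges
    return a, {e for e in edges if e[1] == a}

E = abstractions_graph([("A", "B"), ("A", "C"), ("D", "C"), ("C", "C")])
print(ABS(E, "A"))    # out-edges of A: (A,B) and (A,C)
print(RABS(E, "C"))   # in-edges of C: (A,C) and (D,C)
```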
\vspace*{-0.1in}
\subsection{Grow Shrink Theorem}
\textbf{Theorem 6. }\emph{Let $G=(N,T,S,R)$, }$\epsilon\notin L(G)$\emph{
be a General Grammar. Then we can convert such a grammar into the
Strong-GEN-ASNF i.e., such that all the productions are of the following
form: }
\begin{enumerate}
\item \emph{$A\to B$}
\item \emph{$A\to BC$ - and this is the only rule that has $BC$ in the
rhs and this is the only rule that has $A$ in the lhs. (strong-uniqueness)}
\item \emph{$A\to a$ - and this is the only rule that has $A$ on the lhs
and there is no other rule that has $a$ on the rhs. (strong uniqueness) }
\item \emph{$AB\to C$ - and this is the only rule that has $C$ in the
rhs and this is the only rule that has $AB$ in the lhs. (strong-uniqueness)}
\end{enumerate}
\emph{And furthermore, for any $w$ such that $\gamma\overset{*}{\to}w$
in $G$, $\gamma\in N^{+}$, there exists a derivation $\gamma\overset{*}{\to}\mu\overset{*}{\to}\nu\overset{*}{\to}w$
such that $\mu\in N^{+}$, $\nu\in N^{*}$ and $\gamma\overset{*}{\to}\mu$
contains only rules of type 1 and 2 (REN, SS), $\mu\overset{*}{\to}\nu$
contains only rules of type 1, more particularly only Reverse
Abstractions, and of type 4 (REN(RABS), RSS), and $\nu\overset{*}{\to}w$
contains only rules of type 3 (TERMINAL).}
\emph{Proof}. See Appendix.
We have therefore proved that for each General Grammar $G$ we can
transform it in a Strong-GEN-ASNF such that the derivation (explanation
in structural induction terminology) of any terminal string $w$ can
be organized in three phases such that: Phase 1 uses only productions
that grow (or leave unchanged) the size of the intermediate string;
Phase 2 uses only productions that shrink (or leave unchanged) the
size of the intermediate string; and Phase 3 uses only TERMINAL production
\footnote{At first sight, it may seem that this construction offers a way to
solve the halting problem. However, this is not the case, since we
do not answer the question of deciding when to stop expanding the
current string and start shrinking which is key to solving the halting
problem.
}. In the case of grammars that are not in the normal form as defined
above, the situation is a little more complicated because of successive
applications of grow and shrink phases. However, we have shown that
we can always transform an arbitrary grammar into one that is in the
normal form. Note further that the grow phase in both theorems uses
only context-free productions.
We now proceed to examine the implications of the preceding results
in the larger context, including the nature of hidden variables, radical
positivism and David Hume's principles of \textit{connexion} among
ideas.
\vspace*{-0.1in}
\section{The Fundamental Operations of Structural Induction }
Recall that our notion of structural induction entails: Given a sequence
of observations $w$ we attempt to find a theory (grammar) that explains
$w$ and simultaneously also the explanation (derivation) $S\overset{*}{\to}w$.
In a local way we may think that whenever we have a production rule
$l\to r$ that $l$ explains $r$. In a bottom up - data driven way
we may proceed as follows: First introduce for every observable $a$
a production $A\to a$. The role of these productions is simply to
bring the observables into the realm of internal variables. The resulting
association is between the observables and the corresponding internal
variables unique (one to one and onto) and hence, once this association
is established, we can forget about the existence of bbservables (Terminals).
Since establishing these associations is the only role of the TERMINAL
productions, they are not true structural operations. With this in
mind, if we are to construct a theory in the GEN-ASNF we can postulate
laws of the following form:
\begin{enumerate}
\item $A\to BC$ - Super-structuring (SS), which takes two internal variables
$B$ and $C$ that occur within proximity of each other (adjacent)
and labels the compound. Henceforth, the shorter name $A$ can be
used instead of $BC$. This is the sole role of super-structuring:
to give a name to a composite structure in order to facilitate shorter
explanations at later stages.
\item $A\to B|C$ - Abstraction (ABS), which introduces a name for the occurrence
of either of the variables $B$ or $C$. This allows two productions
that are identical except that one uses $B$ and the other uses $C$
to be represented compactly by a single production using $A$. The role
of Abstraction is to give a name to a group of entities (we have chosen
two only for simplicity) in order to facilitate more general explanations
at later stages, which in turn will produce more compact theories.
\item $AB\to C$ - Reverse Super-structuring (RSS) which introduces up to
two existing or new internal variables that are close to each other
(with respect to a specified topology) that together {}``explain''
the internal variable $C$.
\item $A\to C$, $B\to C$ - Reverse Abstraction (RABS) which uses existing
or new internal variables \textit{$A$} and $B$ as alternative explanations
of the internal variable $C$ (we have chosen two variables only for
simplicity).
\end{enumerate}
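As a toy illustration of super-structuring in a bottom-up, data-driven regime (this is our own sketch, not an algorithm from the paper), one can repeatedly name the most frequent adjacent pair in the observation stream, each step introducing one SS production $A\to BC$:

```python
from collections import Counter

def superstructure(stream, steps, names=None):
    """Greedy chunking: each step adds one SS rule A -> BC for the most
    frequent adjacent pair and rewrites the stream using the new name."""
    names = names or iter("XYZUVW")
    seq, rules = list(stream), {}
    for _ in range(steps):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (b, c), count = pairs.most_common(1)[0]
        if count < 2:              # no repeated structure left to name
            break
        a = next(names)
        rules[a] = (b, c)          # the SS production A -> BC
        out, i = [], 0
        while i < len(seq):        # rewrite occurrences of BC as A
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (b, c):
                out.append(a)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

print(superstructure("abcabc", 2))
```

On the stream `abcabc`, two steps yield the productions $X \to ab$ and $Y \to Xc$ and compress the stream to $YY$, illustrating how SS names recurring composite structure.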
\vspace*{-0.1in}
\subsection{Reasons for Postulating Hidden Variables}
Recall that there are at least two types of reasons for creating Hidden
Variables:
\begin{enumerate}
\item (\textbf{OR} type) - {[}multiple alternative hidden causes{]} The
OR type corresponds to the case when some visible effect can have
multiple hidden causes $H1\to Effect$, $H2\to Effect$ . In our setting,
this case corresponds to Reverse Abstraction. One typical example
of this is: The grass is wet, and hence either it rained last night
or the sprinkler was on. In the statistical and machine learning literature
the models that use this type of hidden variables are called mixture
models \cite{Lauritzen 1996 5}.
\item (\textbf{T-AND} type) - {[}multiple concurrent hidden causes{]} The
T-AND type, i.e., topological AND type, of which the AND is a special
case. This corresponds to the case when one visible effect has two
hidden causes, both of which have to occur within proximity of each
other (with respect to a specified topology) in order to produce the
visible effect: $H1H2\to Effect$. In our setting, this corresponds to
Reverse Super-structuring. In the Statistical / Graphical Models literature
the particular case of AND hidden explanations is the one that introduces
edges between hidden variables in the dependence graph \cite{Elidan and Friedman 2005},
\cite{Lauritzen 1996 5}, \cite{Pearl 1988 5}.
\end{enumerate}
The preceding discussion shows that we can associate the two possible
reasons for creating hidden variables with the structural operations of
Reverse Abstraction and Reverse Super-Structuring, respectively. Because
these are the only two types of productions that introduce hidden
variables in the GEN-ASNF, this provides a characterization of the
rationales for introducing hidden variables.
\vspace*{-0.1in}
\subsection{Radical Positivism}
If we rule out the use of RSS and RABS, the only operations that involve
the postulation of hidden variables, we are left with only SS and
ABS, which correspond to the radical positivist \cite{Ayer 1936}
stance under the computationalist assumption. An explanation of a
stream of observations $w$ in the Radical Positivist theory of the
world is mainly a theory of how the observables in the world are grouped
into classes (Abstractions) and how smaller chunks of observations
are tied together into bigger ones (Super-Structures). The laws of
the radical positivist theory are truly empirical laws as they only
address relations among observations.
However, if structural induction is constrained to using only
ABS and SS, the class of theories that can be induced is necessarily
a subset of the set of theories that can be described by Turing Machines.
More precisely, the resulting grammars will be a strict subset of
the Context Free Grammars (since CFGs contain SS and REN(ABS+RABS)).
Next we will examine what any theory of the world may look like from
the most general perspective when we do allow Hidden Variables.
\vspace*{-0.1in}
\subsection{General theories of the world}
If structural induction is allowed to take advantage of RSS and RABS
in addition to SS and ABS, the resulting theories can make use of
hidden variables. Observations are a derivative byproduct obtained
from a richer hidden-variable state description by a reduction: either
of size, performed by Reverse Super-Structuring, or of information,
performed by Reverse Abstraction. Note that, while in general structural
induction can alternate several times between REN+SS and RABS+RSS,
we have shown that three phases suffice: a growth phase (REN+SS);
a shrink phase (RABS+RSS); and a Terminal phase. Whether we can push
all the RABS from the first phase into the second phase and make the
first phase look like the one in the radical positivist stance (only
ABS+SS) remains an open question (See Appendix for a Conjecture to
this effect).
\vspace*{-0.1in}
\subsection{Hume's principles of connexion among ideas}
We now examine, against the backdrop of GEN-ASNF theorem, a statement
made by the philosopher David Hume more than two centuries ago: \emph{{}``I
do not find that any philosopher has attempted to enumerate or class
all the principles of association {[}of ideas{]}. ... To me, there
appear to be only three principles of connexion among ideas, namely,
Resemblance, Contiguity in time or place, and Cause and Effect''
\cite{Hume 1993}}. If we substitute Resemblance with Abstraction
(since abstraction is triggered by resemblance or similarity), Contiguity
in time or place with Super-Structuring (since proximity, e.g., spatio-temporal
proximity drives Super-Structuring) and Cause and Effect with the
two types of explanations that utilize hidden variables, it is easy
to see that the GEN-ASNF theorem is simply a precise restatement of
Hume's claim under the computationalist assumption.
\vspace*{-0.1in}
\section{Summary}
We have identified \textit{abstraction} (grouping \textit{similar}
entities) and \textit{super-structuring} (combining topologically, e.g., spatio-temporally,
close entities) as the essential structural operations in the induction
process. A structural induction process that relies only on abstraction
and super-structuring corresponds to the radical positivist stance.
We have shown that only two more structural operations, namely, \textit{reverse
abstraction} and \textit{reverse super-structuring} (the duals of
abstraction and super-structuring respectively) (a) suffice in order
to exploit the full power of Turing-equivalent generative grammars
in induction; and (b) operationalize two rationales for the introduction
of hidden variables into theories of the world. The GEN-ASNF theorem
can be seen as simply a restatement, under the computationalist assumption,
of Hume's two-century-old claim regarding the principles of connexion
among ideas.
\section{Introduction}
Quantum magnets, based on $S=1/2$ local moments, have been of intense recent interest due to the nature of the exotic ground states they display and their relation to high temperature superconductivity\cite{1}. The ground states they can display are varied. One limiting case is that of a non-magnetic singlet state typical of quasi-one-dimensional spin-Peierls systems, MEM-(TCNQ)$_2$\cite{2}, CuGeO$_3$\cite{3,4}, TiOCl\cite{5,6,7} and TiOBr\cite{8,9}, as well as certain quasi-two-dimensional Shastry-Sutherland systems, such as SrCu$_2$(BO$_3$)$_2$\cite{10}. However, antiferromagnetic N\'{e}el ground states also exist, as occurs in the parent compounds of the high temperature superconductors, such as La$_2$CuO$_4$\cite{11,12,13}.
CuMoO$_4$ is a triclinic magnetic insulator made up of networks of quantum $S=1/2$ magnetic moments residing at the Cu$^{2+}$ site\cite{14,15}. Several different polymorphs of CuMoO$_4$ have been reported \cite{16,17,18,19,20}. At high pressure, CuMoO$_4$ crystallizes in two distorted wolframite-like structures, CuMoO$_4$-II and CuMoO$_4$-III, that both display antiferromagnetic order at low temperatures \cite{21,22}. Another polymorph, $\epsilon$-CuMoO$_4$, has a monoclinic crystal structure under ambient conditions and orders magnetically with a ferromagnetic component below $\sim$ 10 K \cite{23}.
The CuMoO$_4$ polymorphs which are the subject of the present article exhibit two triclinic phases at high and low temperatures and ambient pressure, the $\alpha$ and $\gamma$ phases, respectively. There is a strongly hysteretic first-order structural phase transition between these two structures at $T_{\mathrm {C}}$ $\sim$ 190 - 250 K\cite{15,16,24}. While both the $\alpha$ (high temperature) and $\gamma$ (low temperature) phases are triclinic, they differ in unit cell volume by a remarkable 13\% on either side of $T_{\mathrm {C}}$, with the low temperature $\gamma$ phase displaying the smaller cell volume. This phase change is accompanied by a change in color of the material from green ($\alpha$) to reddish brown ($\gamma$). For this reason, this material is referred to as displaying piezo- or thermochromism, and is of considerable current interest for these properties alone\cite{25,26,27}.
The lattice parameters for CuMoO$_4$ in both its high temperature ($\alpha$) phase and its low temperature ($\gamma$) phase are listed in Table 1 as taken from \cite{15}. Along with the unit cell volume reduction of 13\% on cooling through $T_{\mathrm {C}}$, the lattice constants shrink by up to $\sim$ 7\%, with the largest change being along the {\bf b}-axis\cite{15,16,24}.
\begin{table}[h]
\begin{tabular}{|c|c|}
\hline
$\alpha$-CuMoO$_4$&$\gamma$-CuMoO$_4$ \\
high temperature phase&low temperature phase\\\hline
space group: P$\bar{1}$ (No. 2)&space group: P$\bar{1}$ (No. 2)\\
a = 9.901(3) \AA&a = 9.699(9) \AA\\
b = 6.786(2) \AA&b = 6.299(6) \AA\\
c = 8.369(3) \AA&c = 7.966(7) \AA\\
$\alpha$ = 101.13(1)$^\circ$&$\alpha$ = 94.62(4)$^\circ$\\
$\beta$ = 96.88(1)$^\circ$&$\beta$ = 103.36(4)$^\circ$\\
$\gamma$ = 107.01(1)$^\circ$&$\gamma$ = 103.17(4)$^\circ$\\\hline
\end{tabular}
\caption{Lattice parameters for CuMoO$_4$ in its high temperature ($\alpha$) and its low temperature ($\gamma$) phases\cite{15}.}
\end{table}
The structure within the $\alpha$ phase can be described in terms of relatively isolated clusters of six Cu-O polyhedra, while within the $\gamma$ phase the Cu-O polyhedra take on a one-dimensional connectivity within the {\bf a-b} plane\cite{14}. The connectivity of the CuO$_6$ octahedra in its $\gamma$ phase gives rise to chains of ``molecules'' formed by 6 corner- and edge-sharing CuO$_6$ octahedra. Cu$^{2+}$ $S=1/2$ magnetic moments connected by corner-sharing octahedra are expected to interact via relatively strong antiferromagnetic exchange, due to the $\sim$ 180$^{\circ}$ Cu-O-Cu bond angles, while those which are connected by edge-sharing octahedra would experience relatively weaker magnetic coupling. The expectation arising from the connectivity within the six CuO$_6$ octahedra is that the six $S=1/2$ moments would form two loosely coupled singlet dimers and two relatively free $S=1/2$ moments per unit cell at low temperature\cite{14}. This scenario is also supported by magnetization measurements which show a magnetization plateau typical for singlet ground state systems \cite{28}. This paper reports on new heat capacity and neutron scattering measurements on polycrystalline CuMoO$_4$ which characterize its magnetic properties above and below $T_{\mathrm {N}}$ $\sim$ 1.75 K, within its $\gamma$ phase structure. As we will discuss, the scenario of moments arranged into two loosely coupled singlet dimers and two relatively free $S=1/2$ moments per unit cell is fully consistent with the measurements we present.
\section{Experimental Details}
Polycrystalline samples were prepared by mixing the following powders in stoichiometric proportions:
\begin{center}
MoO$_3$ + CuO $\rightarrow$ CuMoO$_4$
\end{center}
\noindent
The mixed powders were pressed hydrostatically at 65~MPa, and annealed in air at 700$^\circ$C for 72 hours. Powder X-ray diffraction measurements of the final samples revealed high quality polycrystalline material with very little CuO residue.
In order to investigate the magnetic structure and magnetic excitations associated with the low temperature ground state of CuMoO$_4$, we carried out both elastic and inelastic neutron scattering measurements on polycrystalline samples as a function of temperature and magnetic field on the C2 powder diffractometer and the C5 triple axis spectrometer at the Canadian Neutron Beam Centre (CNBC), Chalk River, as well as inelastic time-of-flight neutron scattering measurements using the Cold Neutron Chopper Spectrometer (CNCS)\cite{21} at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL).
The polycrystalline samples used for the measurements at CNBC were loaded in a sealed Al can in a $^4$He exchange gas and mounted in different cryostats: either a pumped $^3$He or $^4$He cryostat, in order to achieve temperatures as low as 0.3 K and 1.5 K, respectively, and with magnetic field capabilities up to 7.5 T. For the triple axis measurements at CNBC, we employed a vertically focusing pyrolytic graphite (PG) monochromator and a flat analyzer with a fixed final energy of E$_f$ = 14.7 meV. Two PG filters were used in the scattered neutron beam in order to eliminate higher order wavelength contamination. A liquid N$_2$ cooled sapphire filter was used in the main beam to minimize the fast neutron background. Soller slits in the four beam paths (from source to detector) produced a collimation of [none, 0.48$^\circ$, 0.56$^\circ$, 1.2$^\circ$] resulting in an energy resolution of $\sim$ 1 meV for these triple axis measurements.
Time-of-flight neutron scattering measurements were performed using the CNCS at the SNS at ORNL. CNCS is a direct geometry, multi-chopper inelastic spectrometer optimized for high-{\bf Q} and good-E resolution measurements using low incident neutron energies (E$_i$ $<$ 30 meV). Measurements were carried out on a 17 g polycrystalline CuMoO$_4$ sample that was loaded in a standard aluminum sample can. The sample environment consisted of a 5 T vertical field magnet cryostat with a base temperature of 1.5 K. A cadmium mask was attached to the sample can to reduce multiple scattering and scattering from the cryostat. The sample was measured using two different settings for the incident neutron energy to cover a broad range of (Q, E) space.
E$_i$ = 6.6 meV and E$_i$ = 1.55 meV were employed for moderate and high energy resolution measurements, respectively. For all measurements, the spectrometer was run in the high-flux configuration and the double-disk chopper was phased at 180 Hz, providing elastic energy resolutions of $\sim$ 0.18 meV, and $\sim$~0.025~meV for the two E$_i$ settings, respectively.
\section{Heat Capacity Measurements}
\begin{figure}[b]
\includegraphics{fig1.pdf}
\caption{\label{1}(Color online) The temperature dependence of the heat capacity of CuMoO$_4$ as observed in zero magnetic field as well as in magnetic fields up to 4 T.}
\end{figure}
We carried out heat capacity measurements at low temperatures to characterize the magnetic phase behavior in CuMoO$_4$. Figure 1 shows heat capacity measurements as a function of temperature and magnetic field, up to 4 T. The measurements were performed using the quasi-adiabatic heat pulse method and a $^3$He refrigerator. The heat capacity measurements at H=0 T reveal a $\lambda$-like anomaly signifying a magnetic phase transition at $T_{\mathrm {N}}$ $\sim$ 1.75 K, as well as a weaker peak near $\sim$ 2.2 K, likely indicating a buildup of short-range correlations at that temperature. The sharp $\lambda$-like anomaly moves to lower temperatures and weakens in amplitude on application of a magnetic field. The transition appears to have been fully suppressed for fields $>$ 2 T, with only a broad anomaly remaining in the heat capacity at $\sim$ 2.2 K for higher fields.
\section{Neutron Scattering Results and Discussion}
We performed two sets of neutron powder diffraction measurements on CuMoO$_4$. The first of these, shown in Fig. 2 (a), was performed on the C2 powder diffractometer at CNBC, Chalk River, using an incident neutron wavelength of $\lambda$ = 2.37 \AA. Figure 2 (a) shows the low angle portion of the neutron diffraction pattern at T = 5 K and T = 0.4 K, and a clear temperature dependent, resolution-limited Bragg peak is observed at a scattering angle of $\sim$ 7.4$^{\circ}$ for which $Q$ = 0.342 \AA$^{-1}$. This was the only additional Bragg peak to appear on cooling through $T_{\mathrm {N}}$ $\sim$ 1.75 K.
\begin{figure}[h]
\includegraphics{fig2.pdf}
\caption{\label{3}(Color online) (a) Elastic neutron scattering measured at the C2 powder diffractometer at CNBC showing the magnetic Bragg peak at $|$Q$|$ $\sim$ 0.34 \AA$^{-1}$ at $\sim$ 0.4 K. A complete powder diffraction pattern using the position-sensitive linear detector took $\sim$ 240 minutes. (b) The proposed spin configuration in the H = 0 T ground state of CuMoO$_4$, based on an antiferromagnetic arrangement of spins in neighboring cells consistent with the (1/2, 0, 0) ordering wavevector. Inelastic scattering indicates the presence of triplet excitations out of paired singlets, shown as ellipses surrounding the $S=1/2$ moments which pair to form the singlets and which take part in the magnetic structure.}
\end{figure}
Powder diffraction measurements were also performed on the C5 triple axis spectrometer, which allowed a parametric study of the temperature and magnetic field dependence of the low temperature Bragg peak at Q = 0.342 \AA$^{-1}$ shown in Fig. 2 (a). These order parameter measurements are shown in Fig. 3 (a) and (b), for the temperature dependence at zero field, and the field dependence at T = 0.4 K, respectively. This field and temperature dependence of the order parameter identifies the new low temperature Bragg peak as magnetic in origin and as corresponding to the sharp anomaly observed in the zero field heat capacity, as shown in Fig. 1. Interestingly, the phase transition appears to be continuous as a function of temperature at zero field (Fig. 3 (a)) but rather discontinuous as a function of field at low temperatures (Fig. 3 (b)).
While we have observed only a single magnetic Bragg peak for CuMoO$_4$ below $T_{\mathrm {N}}$, we can model its magnetic structure, based on its periodicity as shown in Fig. 2 (b), wherein we pair off 4 of the 6 $S=1/2$ moments per unit cell into 2 singlets, and allow the remaining 2 spins to order ferromagnetically within a unit cell and antiferromagnetically from cell to cell along the triclinic {\bf a} direction. The rationale for pairing 4 of the 6 $S=1/2$ spins per unit cell off into non-magnetic singlets comes both from recent high-field magnetization studies \cite{28} and from inelastic neutron scattering results which we report below. The magnitude of the ordering wavevector Q = 0.34 \AA$^{-1}$ is correctly accounted for by the magnitude of the resulting (1/2, 0, 0) antiferromagnetic ordering wavevector within this triclinic structure.
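This identification can be checked numerically: building the triclinic $\gamma$-phase cell from the Table 1 lattice parameters gives $|{\bf G}(1/2,0,0)| \simeq 0.344$ \AA$^{-1}$, in good agreement with the measured 0.342 \AA$^{-1}$. The script below is our own consistency check, not part of the reported analysis:

```python
import numpy as np

# Consistency check (ours, illustrative): |G(1/2, 0, 0)| for the triclinic
# gamma-phase cell of Table 1 should match the measured Q ~ 0.342 1/Angstrom.
a, b, c = 9.699, 6.299, 7.966                       # Angstrom
alpha, beta, gamma = np.radians([94.62, 103.36, 103.17])

# Real-space lattice vectors in a standard Cartesian setting.
av = np.array([a, 0.0, 0.0])
bv = np.array([b * np.cos(gamma), b * np.sin(gamma), 0.0])
cx = c * np.cos(beta)
cy = c * (np.cos(alpha) - np.cos(beta) * np.cos(gamma)) / np.sin(gamma)
cv = np.array([cx, cy, np.sqrt(c**2 - cx**2 - cy**2)])

V = np.dot(av, np.cross(bv, cv))                    # cell volume (~456 A^3)
astar = 2.0 * np.pi * np.cross(bv, cv) / V          # reciprocal vector a*

Q = 0.5 * np.linalg.norm(astar)                     # |G| at (1/2, 0, 0)
print(f"|Q(1/2,0,0)| = {Q:.3f} 1/Angstrom")         # -> 0.344, close to 0.342
```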
\begin{figure}
\includegraphics{fig3.pdf}
\caption{\label{4} (a) The antiferromagnetic order parameter as measured by the integrated intensity of the Q = 0.34 \AA$^{-1}$ Bragg peak is shown as a function of temperature. A high temperature (T = 50 K) background has been subtracted from this data set. (b) The same antiferromagnetic Bragg intensity at Q = 0.34 \AA$^{-1}$ is shown as a function of magnetic field at T $\sim$ 0.4 K, within the magnetically ordered state. While the thermal evolution of the transition appears continuous in zero field, the antiferromagnetic order drops discontinuously to zero with field at low temperatures. The lines in both (a) and (b) are guides to the eye, with error bars of $\sigma$ from counting statistics. Both data sets shown are obtained on the triple axis spectrometer C5 at the CNBC.}
\end{figure}
We also carried out two sets of inelastic neutron scattering measurements on this polycrystalline sample. We will first describe time-of-flight inelastic measurements taken with the CNCS chopper spectrometer at SNS, and then triple axis measurements taken with the C5 spectrometer at CNBC, Chalk River.
The inelastic excitation spectrum for CuMoO$_4$ is shown in Fig. 4 as a color contour map of the inelastic neutron scattering intensity, S(Q, E), for an incident neutron energy of E$_i$ = 6.6 meV. Figure 4 (a), (b), and (c) show this spectrum in the ordered phase below $T_{\mathrm {N}}$ at T = 1.5 K and at zero applied magnetic field; in the disordered phase at base temperature T = 1.5 K and H = 4 T; and in the paramagnetic phase at H = 0 T and T = 6 K, respectively. Figure 4 (d) shows a cut in energy of these same three data sets, integrated in Q for $|Q| = [0.6, 1.0]$ \AA$^{-1}$. These data sets have had a high temperature, 50 K, H = 0 T background data set subtracted from them.
These inelastic data sets are consistent with low energy spin wave excitations in the ordered state below $T_{\mathrm {N}}$ (Fig. 4 (a)) coexisting with a gapped excitation spectrum characteristic of triplet excitations out of a singlet ground state. The triplet excitations have a band width of $\sim$ 2.5 meV, with a gap of $\sim$ 2.3 meV. The band width likely originates from weak dispersion within the triplet of excited states due to inter-dimer exchange coupling, as observed in other singlet ground state systems such as SrCu$_2$(BO$_3$)$_2$\cite{10}.
On applying an H = 4 T magnetic field at low temperatures (Fig. 4 (b)), the bottom of the triplet band moves down to below $\sim$ 1.9 meV, consistent with an expected downward shift of the energy of the lowest of the three triplet states by $\Delta E = g\mu_B H = 0.46$ meV for $g=2$ and $H=4$ T. The low energy spin wave scattering is also raised in energy, and the spin wave spectrum appears to be gapped. This can be seen more clearly in the energy cuts of this same data shown in Fig. 4 (d), wherein the low energy spectrum below $\sim$ 0.5 meV at T = 1.5 K and H = 0 T appears to be depleted, and a pronounced inelastic peak appears at $\sim$ 0.8 meV for T = 1.5 K and H = 4 T.
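The quoted value is simply the Zeeman energy of the field-lowered triplet branch; the snippet below (an illustration of the arithmetic, not the authors' analysis code) reproduces it:

```python
# Zeeman shift of the lowest triplet state, Delta E = g * mu_B * H.
MU_B_MEV_PER_T = 5.7884e-2   # Bohr magneton in meV/T (CODATA value, rounded)

def zeeman_shift_mev(g: float, field_tesla: float) -> float:
    """Energy shift (meV) of the field-lowered triplet branch."""
    return g * MU_B_MEV_PER_T * field_tesla

print(f"{zeeman_shift_mev(2.0, 4.0):.2f} meV")   # -> 0.46 meV, as quoted
```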
\begin{figure}[h]
\includegraphics{fig4.pdf}
\caption{(Color online) Color contour maps of S(Q, E) observed in CuMoO$_4$ are shown for E$_i$ = 6.6 meV at T = 1.5 K in zero applied magnetic field (a) and in an H = 4 T magnetic field (b). Panel (c) shows S(Q, E) in the H = 0 T paramagnetic phase at T = 6 K. Panel (d) shows an energy cut through S(Q, E) for an interval of $|Q|$ = 0.6 to 1 \AA$^{-1}$. A high temperature background (at T = 50 K, within the paramagnetic state) has been subtracted from all panels to isolate the magnetic scattering.}
\end{figure}
Higher energy resolution measurements were also taken with CNCS using an incident neutron energy of E$_i$ = 1.55~meV, and these are shown for T = 1.5 K in Fig. 5. Again a high temperature data set at T = 50 K and H = 0 T has been subtracted from this data set to isolate the magnetic scattering from the sample. This higher energy resolution data set clearly shows a Goldstone spin wave mode emanating out of the magnetic Bragg peak position at Q = 0.34 \AA$^{-1}$. Beyond $\sim$ 0.6 \AA$^{-1}$ the spin wave density of states is strongly peaked at $\sim$ 0.55 meV; however, a second distribution of magnetic scattering, with a bandwidth of $\sim$ 0.25 meV, is evident between 0.7 and 1.0 meV. These two bands of spin wave scattering account for the quasi-elastic magnetic scattering observed with lower energy resolution (Fig. 4) wherein quasi-elastic scattering is observed out to $\sim$ 1 meV. Given that the magnetically ordered ground state below $T_{\mathrm {N}}$ is described by an antiferromagnetic alternation of ferromagnetically-coupled spins along the triclinic {\bf a} direction, it is not surprising that two bands of spin wave excitations, an acoustic and an optic band, would be present at low temperatures.
\begin{figure}[h]
\includegraphics{fig5.pdf}
\caption{(Color online) A high energy resolution measurement of S(Q, E) using CNCS and an incident neutron energy of E$_i$ = 1.55 meV is shown for CuMoO$_4$ at T = 1.5 K and H = 0 T. A high temperature (T = 50 K) background data set has been subtracted from the data set. The low energy spin dynamics are seen to consist of a Goldstone mode emanating from the ordering wavevector, Q = 0.34 \AA$^{-1}$, and a relatively dispersionless band of excitations near 0.5 meV. Magnetic spectral weight is observed out to $\sim$ 1 meV.}
\end{figure}
\begin{figure}[h]
\includegraphics{fig6.pdf}
\caption{\label{6}(Color online) (a) Relatively low energy resolution inelastic scattering at Q = 1.1 \AA$^{-1}$ reveals a gapped spin excitation spectrum in the ordered state at T = 1.5 K and in zero magnetic field. The gap is $\sim$ 2.3 meV with a bandwidth of $\sim$ 2.5 meV. (b) At the largest applied magnetic field, we observe the lower bound of the zero field triplet band to extend well within the $\sim$ 2.3 meV gap. Note the logarithmic intensity scale.}
\end{figure}
\begin{figure}[h]
\includegraphics{fig7.pdf}
\caption{\label{7}(Color online) (a) The effect of application of a magnetic field in the range of 0 T to 7.5 T on the magnetic excitation spectrum is shown in CuMoO$_4$. (b) The high temperature background-subtracted inelastic intensity at zero field and at an applied field of 7.5 T is shown. Clearly the bottom of the triplet scattering near $\sim$ 2.5 meV is depleted and transferred down to $\sim$ 1.75 meV, within the gap, by application of the magnetic field. This is consistent with expectations for Zeeman splitting of a weakly dispersive triplet band. Simultaneously, the spectral weight of the low energy spin wave excitations are moved to higher energies within the gap. Note that the intensity scale is logarithmic in (a) and linear in (b).}
\end{figure}
Constant-Q = 1.1 \AA$^{-1}$ inelastic neutron scattering measurements carried out with the C5 triple axis spectrometer at CNBC, Chalk River, are shown in Figs. 6 and 7. These largely corroborate the time-of-flight measurements shown in Figs. 4 and 5, although they are carried out to larger applied magnetic fields, up to H = 7.5~T. These measurements were performed with an energy resolution of $\sim$ 1 meV, and thus detail within the quasi-elastic spin wave band of scattering is obscured by nuclear elastic incoherent scattering. Nonetheless, the triplet excitations and singlet-triplet gap can be clearly observed, as well as the filling of the gap with increasing magnetic field.
Figure 6 (a) shows S(Q = 1.1 \AA$^{-1}$, E) for T = 1.5 K and T = 50 K and zero magnetic field, while Fig. 6 (b) shows the same spectra but for H = 7.5 T. Figure 7 (a) shows the field dependence of S(Q = 1.1 \AA$^{-1}$, E) at T = 1.5 K, while Fig. 7 (b) focuses on the low energy part of this spectrum, for H = 0 T and H = 7.5 T only, after the high temperature background at T = 50 K has been subtracted from both data sets. Note that the intensity scale for Fig. 6 (a) and (b) and Fig. 7 (a) is logarithmic, while that of Fig. 7 (b) is linear. This comparison clearly shows spectral weight from the low energy side of the triplet band being depleted and displaced downwards in energy by $g\mu_B H$ $\sim$ 0.7 meV to $\sim$ 1.6 meV. The low energy spin waves are also raised in energy by application of a field, and this also pushes magnetic intensity into the gap. The higher end of the triplet bandwidth, above 3.5 meV, shows less field dependence, but the scattering is weaker here, and the averaging effects of the triplet dispersion may be greater.
\section{Conclusions}
We have carried out heat capacity and both elastic and inelastic neutron scattering measurements on powder samples of the triclinic quantum magnet CuMoO$_4$ with the purpose of understanding the nature of its low temperature ground state. All results are consistent with an antiferromagnetic long range ordered ground state appearing below $T_{\mathrm {N}}$ = 1.75 K in H = 0 T. The ordering wavevector associated with this antiferromagnetic order is identified as (1/2, 0, 0), and this is consistent with a low temperature state in which the molecule of six Cu$^{2+}$ ions, which makes up the triclinic structure, organizes into 4 $S=1/2$ moments which form two singlets, as well as 2 ferromagnetically coupled $S=1/2$ moments which then order antiferromagnetically along the triclinic {\bf a} direction. We explicitly show that at low temperatures this ordered state is destroyed by an applied magnetic field of $\sim$ 1.5 T.
Our inelastic neutron scattering measurements on CuMoO$_4$ probe the magnetic excitation spectrum and show it to be well described by dispersive triplet excitations with a gap of $\sim$ 2.3 meV and a bandwidth of $\sim$ 2.5 meV. Low lying spin wave excitations are also observed, and are shown to display a Goldstone mode for T$<$ $T_{\mathrm {N}}$ which is soft at the ordering wavevector of Q $\sim$ 0.35 \AA$^{-1}$, as well as a second branch of spin wave excitations forming a band in the approximate range 0.6 - 1.0 meV.
The picture arising from the measurements we present, of an antiferromagnetic long range ordered structure made up of co-existing non-magnetic singlets and ferromagnetically-coupled spins, is clearly exotic, comprising as it does both of the ground states normally associated with antiferromagnetism in materials. CuMoO$_4$ is thus an interesting example in which both ordered spins and singlets are the building blocks from which the antiferromagnetic ground state is constructed. We hope this study will guide and inform further theoretical studies of this and related quantum magnets.
\section{Acknowledgements}
We wish to acknowledge the contributions of K.A. Ross and J.P.C. Ruff to the neutron scattering measurements reported here. This work was supported by NSERC of Canada and by the Research Exchange Program between JSPS and NSERC and Grants-in-Aid for Scientific Research, MEXT (No.21560723).
\section{Introduction}
Observations of the Milky Way (MW) and the galaxy M31 shape to a great extent our
understanding of galaxy formation and evolution.
In particular, three landmarks have been pivotal in the development of
theoretical studies of structure formation: a) the abundance of MW
galaxy satellites that motivated one of the strongest points of tension
with the now-standard $\Lambda$ Cold Dark Matter ($\Lambda$CDM) paradigm
of structure formation \citep{1999ApJ...522...82K,1999ApJ...524L..19M}, b) the spatial
distribution of the same satellites which triggered discussions on how
unique the host dark matter halo of the MW is
\citep{2009MNRAS.394.2223M} and c) the measurements of
the tidal debris of disrupted merging galaxies around the MW
and M31 galaxy, confirming the hierarchical nature of
galaxy evolution, one of the fundamental characteristics of
$\Lambda$CDM \citep{2009Natur.461...66M}. However, inferring general
conclusions on galaxy evolution based on observations of these two
galaxies requires an assessment of how biased the properties of the MW and
M31 are with respect to a given control population.
In the framework of $\Lambda$CDM, the study of the MW and M31 starts by
modeling their individual host dark matter halos, \emph{assuming} that their simulated
formation histories are "typical", or at least compatible with the assembly of the
real Local Group \citep{2009MNRAS.395..210D,2010MNRAS.406..896B}. The basic definition of an
LG (in terms of the dark matter distribution) has two basic elements based on
the state of the system \emph{today}: (i) the estimated masses of the dark matter
halos corresponding to the MW and M31 (see for instance
\cite{2010MNRAS.406..264W} and references therein) and (ii) the isolation of these
two halos from other massive structures
\citep{2004AJ....127.2031K}. Two additional constraints could be the
separation and the relative velocity of the two halos \citep{2005ApJ...635L..37R}. However, the condition
on the LG isolation admits a strict formulation, by requiring that the
environment, in terms of the mass and position of the dominant galaxy
clusters in the Local Universe, be as close as possible to the one
inferred from observations. Such an additional condition imposes
restrictions on the possible outcomes of structure formation on scales
of the order of $\sim5$ Mpc. This is considered here as the meso-scale
as opposed to the large ($\gtrsim 5$ Mpc) or the small ($\lesssim 1$
Mpc) scales.
The new feature in the analysis presented in this paper is the
inclusion of such observational constraints around the LG environment
in the initial conditions of the simulation. In a series of three
simulations from such initial conditions, in a WMAP5 cosmology with a
normalization $\sigma_{8}=0.817$ \citep{2009ApJS..180..330K}, we are able to define a sample of
three LG dark matter halo pairs that form and evolve under specific
conditions reflecting structure of the Local Universe. In addition we
will take advantage of one of the largest cosmological simulations
carried out to date, the Bolshoi Simulation
\citep{2010arXiv1002.3660K}, to explore a larger sample of halos within the
mass range of the LG, and calibrate possible cosmic variance effects.
We analyze the constrained simulations with the primary goal of
quantifying the assembly histories of the LG halos. This is driven by
two different motivations. One is to find out whether the simulated
LGs, that are selected by dynamical considerations pertaining to their
redshift zero structure, have mass aggregation histories (MAHs) that
lead to the formation of disk galaxies like the MW and M31. The other
is to find out whether such a MAH is dictated by the meso-scale
environment of the LG, or whether a random selection of objects
similar to the LG is likely to have a similar MAH.
In Section \ref{sec:clues}, we describe
our simulations and the method to re-construct the mass aggregation
histories. In Section \ref{sec:sample} we describe how we build the different
control samples for our statistical analysis. In Section \ref{sec:results} we
study the MAHs in the different samples and argue that the selection
by different isolation criteria does not induce a strong bias in the
statistics describing the MAHs. In Section \ref{sec:discussion} we discuss the
possible origin of these findings and comment on the connection with
observations of the MW and M31. In Section \ref{sec:conclusions} we summarize
our conclusions.
\section{The simulations and Mass Aggregation Histories}
\label{sec:clues}
In this paper we make use of four cosmological N-body dark matter simulations. Three
of them are part of the Constrained Local Universe Simulations
(CLUES) project \footnote{\tt http://www.clues-project.org/},
whose aim is to perform N-body cosmological simulations that reproduce the
local large scale structure in the Universe as accurately as current
observations allow. The fourth simulation is the Bolshoi Simulation,
which was performed from unconstrained initial conditions and spans a
volume $\sim 60$ times larger than each one of the CLUES
simulations. In this section we will describe these simulations and
the procedure we have used to construct the mass aggregation histories
for the dark matter halos.
\subsection{The CLUES simulations}
First we describe the procedure employed to generate the constrained initial conditions. The observational constraints are the peculiar velocities drawn from the MARK III catalog \citep{1997ApJS..109..333W}, surface brightness fluctuations \citep{2001ApJ...546..681T} and the positions and virial properties of nearby X-ray selected clusters of galaxies \citep{2002ApJ...567..716R}.
The
\cite{1991ApJ...380L...5H} algorithm is used to generate
the initial conditions as constrained realizations of Gaussian random
fields. These observational data sets impose constraints on the
outcome of structure formation on scales larger than a few
megaparsec.
These constraints affect only the large and meso-scales of the initial
conditions of the simulations, leaving the small scales essentially
random. In particular, the presence of a local group with two dark matter
halos roughly matching the masses, separation and relative velocities
of the MW and M31 cannot be constrained. The strategy employed here is to
construct an ensemble of 200 different realizations of the constrained
initial conditions and simulate these with $256^3$ particles on a box with side length
$64$\hMpc using the Tree-PM MPI N-body code Gadget2 \citep{2005MNRAS.364.1105S}, and then scan these for
appropriate LG-like objects within a search box centered on the actual
position of the LG. Only three realizations are found to have such a LG object
following the criteria detailed at the end of Sect. \ref{sec:sample}. It follows that the
simulations analyzed here obey two kinds of selection rules. By
construction these are constrained simulations whose large and
meso-scales are designed to mimic the local Universe. Then, post
factum, the simulations that have the appropriate LGs are selected for
further analysis.
The selected simulations are then re-simulated at high resolution of $1024^3$
particles. The high resolution extension of the low-resolution simulation is
obtained by creating an unconstrained realization at the desired resolution,
fast Fourier transforming it to $k$-space and substituting the unconstrained
low $k$ modes with the constrained ones. The resulting realization is made of
unconstrained high-$k$ modes and constrained low-$k$ ones. The transition occurs around the length scale corresponding to the Nyquist frequency of the $256^3$ mesh, $\lambda_{\mathrm{Ny}}=2\times 64/256$ \hMpc $=0.5$ \hMpc. This corresponds to a mass scale of
$M_{\mathrm{Ny}}\approx 1.2\times 10^{9}$\hMsun, below which the
structure formation can be considered as emerging primarily from the
unconstrained $k$ modes.
The cosmological parameters in these high resolution simulations are consistent with a WMAP5
cosmology with a density $\Omega_{m}=0.28$, a cosmological constant
$\Omega_{\Lambda} = 0.72$, a dimensionless Hubble parameter $h=0.73$, a
spectral index of primordial density perturbations $n=0.96$ and a
normalization $\sigma_{8}=0.817$ \citep{2009ApJS..180..330K}. With these characteristics each particle has a
mass $m_{p}=1.89\times 10^{7}$ \hMsun.
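As a numerical cross-check of the quoted resolution figures, the particle mass follows from the mean matter density and the grid spacing; a minimal sketch, assuming the standard critical density $\rho_c\simeq 2.775\times 10^{11}\,h^2 M_{\odot}\,\mathrm{Mpc}^{-3}$ (all names below are our own, for illustration):

```python
# Particle mass of an N-body run: mean matter mass per grid cell,
# m_p = Omega_m * rho_crit * (L / N)^3, in h^-1 Msun for L in h^-1 Mpc.
RHO_CRIT = 2.775e11  # critical density in h^2 Msun Mpc^-3 (assumed value)

def particle_mass(omega_m, box_hmpc, n_per_side):
    """Particle mass in h^-1 Msun."""
    return omega_m * RHO_CRIT * (box_hmpc / n_per_side) ** 3

# CLUES high-resolution runs: 64 h^-1 Mpc box, 1024^3 particles, Omega_m = 0.28
print(f"{particle_mass(0.28, 64.0, 1024):.2e}")  # ~1.9e7, matching the quoted m_p

# The low-resolution 256^3 parent runs have particles 64 times heavier,
# ~1.2e9 h^-1 Msun, which coincides with the quoted M_Ny.
print(f"{particle_mass(0.28, 64.0, 256):.2e}")
```

The same arithmetic reproduces the minimum halo mass quoted later (20 particles give $\approx 3.8\times 10^{8}$\hMsun).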
\subsection{The Bolshoi Simulation}
We have used as well the Bolshoi simulation \citep{2010arXiv1002.3660K} to
verify that the constrained simulation did not bias the halo samples and their
MAHs\footnote{Halo catalogs for these simulations are available at {\tt http://www.multidark.org/MultiDark/}}. The simulation was done in a cubic volume of $250$ \hMpc on a side
using $2048^3$ particles, leading to a particle mass of $m_p=1.35\times
10^{8}$ \hMsun, roughly 10 times lower than the resolution in the CLUES
simulations.
We take from the Bolshoi simulation eight non-overlapping sub-volumes. Each sub-volume has a
cubic size of $100$ \hMpc on a side, corresponding to a comoving
volume comparable to the three CLUES simulations
combined. The halo samples in the sub-volumes will be used to
calibrate the impact of cosmic variance on the different statistics we
use to characterize the halo populations.
\subsection{Halo identification and merger tree construction}
In order to identify halos we use a FOF algorithm. We do not include
any information of the substructure in each halo. All the
analysis related to the mass aggregation history is done in terms of
the host halos. In particular, the mergers do not correspond to the fusion of an accreted sub-halo with a central dominant host halo, but instead correspond to the moment when two halos overlap for the first time.
The FOF algorithm has a linking length of $b=0.17$ times the mean inter
particle separation. The mean overdensity of objects found with
this linking length at redshift $z=0$ is 680
\citep{2011arXiv1103.0005M}. We identify the halos in $80$ snapshots, approximately equally spaced over the 13 Gyr spanned by the redshift range $0<z<7$. All the objects with 20 or more particles are kept in the
halo catalogue and considered in the merger tree construction. This
corresponds to a minimum halo mass of $M_{min}=3.78\times
10^{8}$\hMsun. Within the CLUES simulations a Milky Way like dark
matter halo of mass $\sim 1.0\times 10^{12}$\hMsun\ is resolved
with $\sim 5 \times 10^{4}$ particles, in the Bolshoi simulation it
is resolved with $\sim 7 \times 10^{3}$ particles. For the Bolshoi
simulation
we have used snapshots spaced by roughly $400$Myr and
followed the exact same procedure to build the halo catalogues and the
merger trees.
Within the FOF analysis all FOF groups with 20 or more
particles are identified. The merger tree construction is based on
the comparison of the particles in FOF groups in
two consecutive snapshots. Starting at $z=0$, for every FOF group in the catalog, $G_{0}$, we find all the FOF groups in the previous snapshot that share at least thirteen particles with $G_{0}$ and label them as tentative progenitors. Then, for each tentative progenitor, we find all the descendants sharing at least thirteen particles. Since the smallest FOF groups contain 20 particles, at least 2/3 of the particles must be identified in tentative progenitors or descendants. Only the
tentative progenitors that have as a
main descendant the group $G_{0}$ are labeled as confirmed
progenitors at that
level. We iterate this procedure for each confirmed progenitor,
until the last
available snapshot at high redshift. By construction, each halo in the tree
can have only one descendant, but many progenitors.
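The particle-matching step above can be sketched as follows. This is an illustrative re-implementation, not the actual CLUES pipeline code; the data layout (each FOF group as a set of particle IDs) and all names are our own:

```python
# Illustrative progenitor linking between two consecutive snapshots.
# A group in the earlier snapshot is a tentative progenitor of G0 if the
# two share at least MIN_SHARED particles; it is confirmed only if G0 is
# its main (largest-overlap) descendant.

MIN_SHARED = 13  # roughly 2/3 of the 20-particle minimum group size

def confirmed_progenitors(g0_id, groups_now, groups_prev):
    """groups_now / groups_prev: dict group_id -> set of particle IDs."""
    g0 = groups_now[g0_id]
    tentative = [pid for pid, parts in groups_prev.items()
                 if len(parts & g0) >= MIN_SHARED]
    confirmed = []
    for pid in tentative:
        parts = groups_prev[pid]
        # main descendant: the later-snapshot group sharing the most particles
        main = max(groups_now, key=lambda d: len(groups_now[d] & parts))
        if main == g0_id:
            confirmed.append(pid)
    return confirmed

# Toy example: groups A and B merge into G; H is unrelated.
prev = {"A": set(range(1, 21)), "B": set(range(21, 41))}
now = {"G": set(range(1, 41)), "H": set(range(100, 120))}
print(sorted(confirmed_progenitors("G", now, prev)))  # ['A', 'B']
```

Iterating this over all confirmed progenitors, snapshot by snapshot, yields a tree in which each halo has one descendant but possibly many progenitors, as described above.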
The mergers of FOF groups correspond to the time where the FOF
radii of two halos overlap for the first time. The infall of the
less massive halo into the host and the subsequent inspiral,
disruption and fusion will be delayed with respect to the time of
the FOF merger. Different theoretical approximations and
methodologies can predict the infall-fusion time-scale only as an
order-of-magnitude estimate \citep{2010ApJ...724..915H}. The most commonly used time-scale for this process is based on the Chandrasekhar dynamical friction formula,
but improved estimates based on
numerical simulations
\citep{2008MNRAS.383...93B,2010ApJ...724..915H} yield
\begin{equation}
t_{\mathrm{infall}} = 0.56\left( \frac{R_{\rm vir}}{V_{\rm vir}}\right)
\frac{(M_{\rm vir}/M_{\rm sat})^{1.3}}{\ln(1+M_{\rm vir}/M_{\rm sat})},
\end{equation}
where $R_{\rm vir}$, $V_{\rm vir}$ and $M_{\rm vir}$ are the virial radius, velocity and mass of the host halo, and $M_{\rm sat}$ is the mass of the future satellite at the moment of infall at $R_{\rm vir}$. A median initial circularity of $0.5$ has been assumed for the satellite orbit. For mass ratios of
$M_{\rm vir}/M_{\rm sat}=10$
\begin{equation}
\label{eq:infall}
t_{\mathrm{infall}} = 4.85 \left(\frac{R_{\rm vir}}{V_{\rm vir}}\right).
\end{equation}
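For reference, the mass-ratio-dependent prefactor of the first formula can be evaluated directly. A minimal sketch (our own naming); the small offset between the value it gives at a 1:10 ratio and the $4.85$ of Eq. (\ref{eq:infall}) presumably reflects the median-circularity correction, which the first formula as written does not include:

```python
import math

def infall_factor(mass_ratio):
    """Prefactor of t_infall in units of R_vir / V_vir, from
    t = 0.56 * (M_vir/M_sat)^1.3 / ln(1 + M_vir/M_sat) * (R_vir/V_vir)."""
    return 0.56 * mass_ratio ** 1.3 / math.log(1.0 + mass_ratio)

print(f"{infall_factor(10.0):.2f}")  # ~4.66 for M_vir/M_sat = 10
```

The factor grows quickly with mass ratio, so minor mergers take many dynamical times $R_{\rm vir}/V_{\rm vir}$ to complete.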
\begin{figure*}
\begin{center}
\includegraphics[scale=0.50]{figures/conf_gas_index_1_xy_env_10909.jpg}\hspace{0.5cm}
\includegraphics[scale=0.50]{figures/conf_gas_index_1_xy_env_16953.jpg}\vspace{0.8cm}
\includegraphics[scale=0.50]{figures/conf_gas_index_1_xy_env_2710.jpg}\hspace{0.5cm}
\includegraphics[scale=0.50]{figures/conf_gas_index_1_xy_env_BOLSHOI.jpg}
\end{center}
\caption{\label{fig:lss}Halo distribution in the three CLUES and
the Bolshoi (lower right)
simulations. Only halos with $M_h> 2\times 10^{10}$\hMsun have been included. The radius of each circle is the radius defined by the FOF algorithm, calculated as the radius of a sphere with the same volume as the FOF group. The dashed circle marks a $5$\hMpc environment centered on the most massive halo of the
LG. The solid thick circle
shows the projected position of the halo identified with the Virgo
cluster. The cut is $25$\hMpc thick and is centered at the LG
position.}
\end{figure*}
\begin{table*}
\caption{\label{table:assembly} Properties of the MW-M31 pairs. Column 1:
parent simulation; column 2: halo name (either MW or M31); column 3: FOF mass; column 4: last major merger time; column 5: formation time; column 6: assembly time; column 7: matter overdensity calculated within a sphere of radius $5$\hMpc. All times are look-back times.}
\begin{tabular}{ccccccc}\hline
Simulation & Halo Name & FOF Mass & $\tau_{M}$& $\tau_{F}$ & $\tau_{A}$ & $\delta_5 + 1$\\
& & [$10^{12}\hMsun$] & [Gyr] & [Gyr] & [Gyr] & \\\hline
CLUES-1 & M31 & 1.39 & 11.0 & 11.0 & 11.5 & 0.72\\
CLUES-1 & MW & 0.99 & 10.0& 9.3& 9.7& 0.69\\
CLUES-2 & M31 & 0.98 & 12.0& 10.0& 10.4& 0.78\\
CLUES-2 & MW & 0.77 & 11.3& 11.0& 11.0& 0.87\\
CLUES-3 & M31 & 1.45 & 11.0& 10.6& 11.0& 0.75\\
CLUES-3 & MW & 1.11 & 9.8& 9.8& 11.0& 0.80\\\hline
Average & & 1.15 &10.9 & 10.3 & 10.8 & 0.76\\
Standard Deviation & & 0.23 &0.8 & 0.6& 0.6& 0.05\\\hline
\end{tabular}
\end{table*}
\subsection{Local Group selection}
A LG in a constrained simulation consists of two
main halos within a certain mass range, within a distance range
and obeying some isolation conditions\footnote{A quantitative description of
these conditions is presented at the end of Section \ref{sec:sample}.}. In addition it should reside
close to the relative position of the LG with respect to the Virgo cluster. Given the periodic boundary conditions of the
simulations and the lack of treatment of the Zeldovich linear displacement in
the reconstruction of the initial conditions, the large scale structure of the
simulations is displaced by a few Megaparsecs among different
realizations of the simulation. The most robust features
of the constrained simulations are the Virgo cluster and the Local
Supercluster. Their positions in the initial conditions are known; at $z=0$ their environment is searched for halos in the corresponding mass range to determine their present positions. These are used to fix the `position' of the simulation in
relation to the actual universe. In Table
\ref{table:assembly} we summarize the masses of the MW and M31 halos
identified by the FOF halo finder in these three simulations.
Figure \ref{fig:lss} shows the large scale structure of the three
constrained realizations centered on the position of the LG in
each box in a slice 25\hMpc thick. In the three
CLUES simulations shown in Fig. \ref{fig:lss} the projected position
of the Virgo cluster is shown by a thick circle. The fourth panel in
the same figure shows a cut of the same geometrical characteristics
from the Bolshoi simulation, centered on one LG-like object.
\subsection{Merger trees description}
For each merger tree we define three different times to characterize
the MAHs. Each time has direct connection with the expected properties
of the baryonic component in the halo. The times, measured as
look-back time in Gyr, are:
\begin{itemize}
\item{{\bf Last major merger time} ($\tau_M$): defined as the time
when the last FOF halo interaction with mass ratio of at least 1:10 starts. This limit is considered to be the mass ratio below which the merger contribution to the bulges can be estimated to be $< 5\%$-$10\%$ \citep{2010ApJ...715..202H}. Strictly speaking, as we do not follow sub-structure in the simulation, this event corresponds to the time when the less massive halo fell into the larger one and became a sub-halo for the first time. One can use
Eq. (\ref{eq:infall}) to estimate the infall time-scale of the
satellite to the center of the host.}
\item{{\bf Formation time} ($\tau_F$): marks the time when the main branch in the tree
reached half of the halo mass at $z=0$. This marks the epoch when
approximately half of the
total baryonic content in the halo could be already in place in a virialized
object.}
\item{{\bf Assembly time} ($\tau_A$): defined as the time when the
mass in progenitors more massive than M$_f=10^{10}$\hMsun is half
of the halo mass at $z=0$. This time is related to the epoch of
stellar component assembly, as the total stellar mass depends
on the integrated history of all progenitors
\citep{2006MNRAS.372..933N,2008MNRAS.389.1419L}. The exact
value of $\tau_{A}$ is dependent on $M_f$, the specific value selected in this
work was chosen to allow the comparison of assembly times against
the results of the Bolshoi simulation which has a lower mass
resolution than the CLUES volumes.}
\end{itemize}
In Table \ref{table:assembly} we summarize the values of these three
different times for the three pairs of MW-M31 halos. In Figure
\ref{fig:mah} we show the median mass aggregation history in the
main branch as a function of redshift for halos in the mass range
$5.0\times 10^{11}\hMsun< M_{h}<5.0\times 10^{12}\hMsun$. Following
\cite{2010MNRAS.406..896B} we fit the MAH by a function of the kind
\begin{equation}
M(z) = M_{0}(1+z)^{\beta}\exp(-\alpha(\sqrt{1+z} -1 )),
\end{equation}
with $\alpha=4.5$ and $\beta=2.24$. These values provide a good fit within
$2.3\%$ for $z<7$. In the same Figure we overplot the main branch
growth for the six halos in the three simulated LGs. The MAHs of
these halos are systematically located above the mean, an indicator of
early matter assembly with respect to the halos within the same mass
range.
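For reference, the fit can be evaluated directly; a minimal sketch (the normalization $M_0$ below is an arbitrary illustrative value, not taken from the simulations):

```python
import math

def mah_fit(z, m0, alpha=4.5, beta=2.24):
    """Median main-branch mass at redshift z for the fit
    M(z) = M0 (1+z)^beta exp(-alpha (sqrt(1+z) - 1))."""
    return m0 * (1.0 + z) ** beta * math.exp(-alpha * (math.sqrt(1.0 + z) - 1.0))

m0 = 1.0e12  # h^-1 Msun, a representative halo in the sampled mass range
for z in (0.0, 1.0, 2.0, 4.0):
    print(z, mah_fit(z, m0) / m0)
# M(0) = M0 by construction; with alpha = 4.5 and beta = 2.24 the main
# branch drops below M0/2 near z ~ 1.7-1.8.
```

With these parameters the fitted main branch declines monotonically with redshift over the fitted range $z<7$.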
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{figures/mass_growth.pdf}
\end{center}
\caption{\label{fig:mah} Mass assembly histories of LG halos in the CLUES
simulation as a function of redshift. The solid black line shows the median
MAH for all halos in the CLUES simulations within the mass range $5.0\times
10^{11}\hMsun< M_{h}<5.0\times 10^{12}\hMsun$, the dashed lines show the
first and third quartiles. Also plotted as colour lines are the MAHs for the
MW (dotted) and M31 halos (continuous) in the three constrained
simulations. The assembly histories of the LG halos lie systematically above the median values, a sign of early assembly with respect to all halos in the same mass range.}
\end{figure}
\section{Selection of Local Groups and Control Samples}
\label{sec:sample}
\begin{table*}
\caption{\label{table:samples} Names and description of the four samples used
to quantify the formation history of the LG halos. The first three samples are constructed from both the CLUES and Bolshoi simulations. By definition
the {\it LG} sample can only be constructed from the CLUES simulations.
The size refers to the total number of objects in the corresponding volume (individual
halos or pairs).}
\begin{tabular}{llcc}\hline
Name & Description & Size (CLUES)& Size (Bolshoi)\\\hline
{\it Individuals} & All the distinct halos in the mass range $5.0\times
10^{11}$\hMsun - $5.0\times 10^{12}$\hMsun & 4278 & 88756\\
{\it Pairs} & All the pairs of halos constructed from the {\it
Individuals} sample. & 1101 & 21877 \\
{\it Isolated Pairs}& Subset from the {\it Pairs} sample
following some isolation criteria (see \S\ref{sec:sample})& 85 & 1785\\
{\it LG}& The three pairs of LG halos from the constrained simulations. & 3&---\\\hline
\end{tabular}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.33]{figures/last_merger_A_B.pdf}\hspace{0.5cm}
\includegraphics[scale=0.33]{figures/formation_A_B.pdf}\hspace{0.5cm}
\includegraphics[scale=0.33]{figures/assembly_A_B.pdf}
\end{center}
\caption{\label{fig:integrated_A_B} Fraction of halos with merger histories described by MAHs with $\tau_M$, $\tau_F$ and $\tau_A$ larger than a given value. The lines represent different samples: {\it Individuals} (dashed) and {\it Pairs} (thick continuous lines) from the CLUES simulations, and {\it Pairs} extracted from eight sub-volumes of the Bolshoi simulation (thin continuous lines).}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.45]{figures/scatter_tmerg.pdf}\hspace{0.5cm}
\includegraphics[scale=0.45]{figures/last_merger_B_C.pdf}
\includegraphics[scale=0.45]{figures/scatter_tform.pdf}\hspace{0.5cm}
\includegraphics[scale=0.45]{figures/formation_B_C.pdf}
\includegraphics[scale=0.45]{figures/scatter_tass.pdf} \hspace{0.5cm}
\includegraphics[scale=0.45]{figures/assembly_B_C.pdf}
\end{center}
\caption{\emph{Left column}. Joint distributions of three different times
(last major merger, formation and assembly) describing the mass aggregation histories.
Each point in the plane represents a pair MW-M31 with histories described by the time values at that
point. The grey-scale shading indicates the number of halo pairs in the Bolshoi simulation in that parameter range. Dark regions represent a high
number of pairs. An absolute scaling for this shading can be obtained from
the plots presented in the Right Column. The stars mark the location of the three
LG pairs, each one coming from one of the constrained simulations.
\emph{Right column}. Integrated probability of these three different MAH
times. The continuous black lines represent the results for the {\it
Pairs} sample in the CLUES simulations. The {\it Isolated Pairs} sample from CLUES
is represented by the thick dashed lines. The results from the {\it
Isolated Pairs} samples in eight
sub-volumes of the Bolshoi simulation are represented by the thin continuous
grey lines. The thick continuous lines represent the results for the
{\it LG} sample.
The distributions from the {\it Pairs} and {\it Isolated Pairs}
control samples are basically indistinguishable. In other words,
detailed selection criteria for halo pairs, based on isolation only,
do not narrow down significantly the range of dark matter halo
assembly properties.
\label{fig:surface}}
\end{figure*}
Four different samples of halos are constructed here, in a nested hierarchy in
which the first sample contains the second, which in turn contains the third. The
fourth sample is the one that includes the 3 LGs. These are to be used to
study how the various criteria employed in constructing the samples affect the
MAH of its members. The first three samples are constructed also from the
Bolshoi simulation, and are used to look for possible biases in the constrained
simulations.
The first sample we define consists of all halos in the mass range
$5\times 10^{11}$\hMsun $< M_{h} <$ $5\times 10^{12}$ \hMsun
\citep{2010MNRAS.406..264W}. We refer to
this set as the {\it Individuals} halo sample.
The second is a sample of halo pairs. Two halos, $H_{A}$ and $H_{B}$,
from the {\it Individuals} sample are considered a pair if and only if
halo $H_{B}$ is the closest halo to $H_{A}$ and
vice-versa. Furthermore, with respect to each halo in the pair there
cannot be any halo more massive than $5.0\times10^{12}$\hMsun closer than
its companion \citep{2004AJ....127.2031K}. We do not apply any further dynamical restrictions. For
instance an element in this sample may be a pair of halos that are
infalling into a cluster and are coincidentally close to each
other. We refer to this set as the {\it Pairs} sample.
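The mutual-nearest-neighbour condition can be sketched as follows; this is an illustrative brute-force implementation with our own naming, not the actual pipeline, and it omits periodic boundaries and the companion-mass veto for brevity:

```python
def nearest(i, pos):
    """Index of the nearest other halo to halo i (brute force)."""
    return min((j for j in range(len(pos)) if j != i),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(pos[i], pos[j])))

def mutual_pairs(pos):
    """All (i, j), i < j, such that i and j are each other's nearest halo."""
    nn = [nearest(i, pos) for i in range(len(pos))]
    return [(i, j) for i, j in enumerate(nn) if j > i and nn[j] == i]

# Toy configuration: halos 0 and 1 are close together, 2 and 3 likewise;
# halo 4's nearest neighbour (halo 3) does not reciprocate.
halos = [(0.0, 0.0, 0.0), (0.7, 0.0, 0.0),
         (10.0, 0.0, 0.0), (10.5, 0.0, 0.0), (12.0, 0.0, 0.0)]
print(mutual_pairs(halos))  # [(0, 1), (2, 3)]
```

The mutual condition is what excludes halo 4 in the toy example: its nearest neighbour already has a closer companion.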
The third is a sample of isolated pairs. We construct it by imposing
additional conditions on each member of the previous sample. These conditions
are defined to obtain a LG-like halo pair according to a series of
requirements that follow the lines of \cite{1997NewA....2...91G}, \cite{2005MNRAS.359..941M} and \cite{2007MNRAS.378.1601M}. We will refer
to this sample as the {\it Isolated Pairs} sample. The conditions are
the following:
\begin{enumerate}
\item[a)]The distance between the center of the halos is smaller than $0.7$
\hMpc \citep{2005ApJ...635L..37R}.
\item[b)]The relative radial velocity of the two halos is negative.
\item[c)]There must not be objects more massive than either of the LG halos within
a radius of $2$\hMpc from each object \citep{2009MNRAS.395.1915T}.
\item[d)]There must not be a halo of mass $> 5.0\times 10^{13}$\hMsun within a
radius of $5$\hMpc with respect to each halo center \citep{2004AJ....127.2031K}.
\end{enumerate}
The fourth and final sample contains the three objects that fulfil the criteria of the third sample and are located about $10$\hMpc\ `south' of the Virgo cluster in the Supergalactic Plane. This sample is referred to as {\it LG}.
We build the first three samples from both the CLUES and Bolshoi simulations.
A short summary description of each sample is contained in Table \ref{table:samples}.
\section{Results}
\label{sec:results}
The backbone of our analysis is the study of the MAH of halos in the
mass range $\big(5.0\times 10^{11}\ - \ 5.0\times
10^{12}\big)$\hMsun. Our results must be described in the six-dimensional parameter space spanned by the three characteristic times of the two halos, dubbed MW and M31. The distributions of $\tau_M$, $\tau_F$ and $\tau_A$ in the three different samples are
studied in \S\ref{sec:assembly} and the possible dependence of these
distributions on the ambient density around the LGs and the mass ratio
of the MW and M31 members of the LGs in \S\ref{sec:density}.
\subsection{Mass accretion history of the different samples}
\label{sec:assembly}
Figure \ref{fig:integrated_A_B} presents the distributions of $\tau_M$, $\tau_F$ and $\tau_A$ for the {\it Individuals} and {\it Pairs} samples of both the CLUES and Bolshoi simulations. The distributions for the MW and M31 halos are virtually indistinguishable, so each curve includes both halos. We calibrate the effect of cosmic
variance with the $(100\hMpc)^3$ volumes extracted from the Bolshoi
simulation. The results are overplotted as thin
magenta lines. The distributions of $\tau_M$ and $\tau_F$ are well within the scatter of the sub-volumes, while that of $\tau_A$ lies somewhat outside this range.
We conclude that with respect to the MAH, the constrained simulations are
essentially unbiased with respect to the unconstrained one. The interesting fact that emerges here is that the halos in the {\it Individuals} and {\it Pairs} samples share the same MAH, as expressed by the three times described here.
Figure \ref{fig:surface} presents the main results of the paper. It
shows the distribution of the three times for the different samples of halo pairs. The left column consists of three grey-scale maps describing the number of objects in the {\it Pairs} sample in the subspace $\big(\tau_{X}^{M31}, \tau_{X}^{MW}\big)$,
where $X=M, F, A$. The shades represent the number of pairs
around a given region of parameter space calculated from the {\it
Pairs} samples in the Bolshoi simulation. The three different {\it
LG} pairs are overplotted as stars.
The right hand side column of Figure \ref{fig:surface} shows
the integrated relative distribution of the halos in the three
different times of the {\it Pairs}, {\it Isolated Pairs} and {\it LGs}
samples. For the {\it LGs} this is further separated for the MW and
M31 halos. The distribution of the {\it Isolated Pairs} of the Bolshoi
sub-volume is presented as well.
Two important comments can be made based on Figure \ref{fig:surface}. First,
we see that the times in the {\it LGs} sample are confined to a narrow range
compared to the broad {\it Pairs} sample. The merger, formation and assembly times
in this sample are confined within the range $9.5-12$ Gyr. Second, from the integrated
distribution, we infer that the {\it Pairs} and {\it Isolated Pairs}
samples are virtually indistinguishable. This implies that the commonly used isolation criteria
\citep{1997NewA....2...91G,2005MNRAS.359..941M,2007MNRAS.378.1601M} do
not automatically produce the narrow parameter space occupied by the
{\it LG} pairs.
\subsection{The influence of the Local Matter Density and the Mass Ratio}
\label{sec:density}
The {\it Pairs} and {\it Isolated Pairs} samples are selected based on
isolation and dynamics. The similarity of the distributions of the different MAH times across the different samples motivates us to look for a possible dependence of these distributions on some other characteristics of the three
LGs. In particular, the three LGs are found to share the two following
properties: the mass ratio between the two halos and the matter over-density
in a sphere of $5$\hMpc radius \footnote{$\delta_{5}$ has been calculated from the total mass in halos more massive than
$1\times 10^{10}\hMsun$ contained within a sphere of radius $5$\hMpc
centered at the position of each halo.}, denoted $\delta_{5}$. The values of the halo masses in the pairs and the local over-densities are listed in Table
\ref{table:assembly} together with the assembly, formation and last major merger
times.
A series of sub-samples of the halo samples is constructed by requiring that the masses, the mass ratios of the pairs, or the values of $\delta_5$ lie within the bounds set by the LG. These sub-samplings do not bias the LG-like objects towards the region of parameter space defined by the {\it LG} sample.
\section{Discussion}
\label{sec:discussion}
Three basic facts emerge from the results presented in the
previous section: a) the three LGs share a common formation history, b) this formation history is quiet out to $\approx (10-12)$ Gyr and c) none of the selection rules applied here to the
pairs of halos have defined a sample of objects with MAH similar to
that of the three LGs. In what follows, we discuss the possible origin and the
predictable consequences of these facts.
\begin{table*}
\caption{\label{table:probability}Fraction of halos/pairs in the different samples with times $\tau_{M}$, $\tau_{F}$ and $\tau_{A}$ located in the parameter space defined by the minimum characteristic times of the LG halos in the constrained simulations. These minima are defined for each $\tau_{X}$ in two different ways: 1) as the mean value minus
two times the standard deviation (see Table \ref{table:assembly})
and 2) as the minimum value of all realisations. These minima times
are denoted $\tau_{X}^{\prime}$ and $\tau_{X}^{\prime\prime}$
respectively and are presented in the first rows. In the following
rows, the first column describes the name and origin of
the sample. The three following columns indicate the fraction of the
total population with a $\tau_{X}$ larger than the calculated
$\tau_{X}^{\prime}$ or $\tau_{X}^{\prime\prime}$ (in parenthesis).
In the case of pairs
samples, we require the times for both halos to be above the threshold. The last column refers to
the three different $\tau_{X}$ being \emph{simultaneously} larger than the
corresponding $\tau_{X}^{\prime}$ ($\tau_{X}^{\prime\prime}$).}
\begin{tabular}{lcccc}\hline
"Two sigma" bound & $\tau_{M}^{\prime}$ [Gyr]& $\tau_{F}^{\prime}$ [Gyr]& $\tau_{A}^{\prime}$ [Gyr]& \\
& 9.3 & 9.0 &9.6 & \\\hline
"Minima" bound & $\tau_{M}^{\prime\prime}$ [Gyr]& $\tau_{F}^{\prime\prime}$ [Gyr]& $\tau_{A}^{\prime\prime}$ [Gyr]& \\
& 9.8 & 9.3 &9.7 & \\\hline
Sample & $\tau_{M}\geq\tau_{M}^{\prime}$ ($\tau_{M}^{\prime\prime}$)& $\tau_{F}\geq\tau_{F}^{\prime}$ ($\tau_{F}^{\prime\prime}$)& $\tau_{A}\geq\tau_{A}^{\prime}$ ($\tau_{A}^{\prime\prime}$)& $\tau_{M,F,A}\geq\tau_{M,F,A}^{\prime}$ ($\tau_{M,F,A}^{\prime\prime}$)\\\hline
CLUES {\it Individuals} & 0.24 (0.18)& 0.29 (0.24)& 0.85 (0.85)& 0.17
(0.12)\\
CLUES {\it Pairs} & 0.06 (0.03)& 0.09 (0.06)& 0.74 (0.74)& 0.03 (0.01)\\
CLUES {\it Isolated Pairs} & 0.06 (0.03)& 0.08 (0.05)& 0.70 (0.70)&
0.05 (0.03)\\
Bolshoi {\it Individuals} & 0.23 (0.19)& 0.23 (0.23)& 0.87 (0.87)&
0.17 (0.12)\\
Bolshoi {\it Pairs} & 0.05 (0.04)& 0.10 (0.05)& 0.76 (0.76)& 0.03 (0.02)\\
Bolshoi {\it Isolated Pairs} & 0.05 (0.03)& 0.10 (0.06)& 0.73 (0.73)& 0.03 (0.01)\\\hline
\end{tabular}
\end{table*}
\subsection{On the Common Formation History}
Naively, one might hypothesise that the fact that all three CLUES LGs have a
common MAH, as defined here, is consistent with being drawn at random from the
sample of pairs, i.e. the range of properties spanned by three random halo pairs can be
naturally expected to be narrow. This is the null hypothesis we test now.
What is the probability that 3 randomly selected pairs have MAHs
within the range of properties found for the LG? We compute this
probability based on the fraction of halos in the pair samples that
share the {\it LG} formation properties.
We define first the minimal subspace that contains the 3 simulated LGs
by providing lower bounds on the different times describing the MAHs.
Table \ref{table:probability} lists the minimal last major merger,
formation and assembly look-back times, where two options are taken to
estimate the minima. The first defines the "two sigma" bound, namely
the average value minus twice the standard deviation of each time of
the 6 halos of the 3 LGs, the second takes the minimum value for each
time.
The table provides the fraction of halos in the {\it Individuals}
sample satisfying each one of the conditions $\tau_X \geq
\tau_{X}^{\rm bound}$ independently and all of them simultaneously, where $X=M$, $F$ and $A$ and the superscript ${\rm bound}$ denotes the minimal bound on the corresponding time. We find that the fraction of {\it Individuals} in the quiet MAH subspace
is $f_{i}=0.17 (0.12)$ both in CLUES and Bolshoi for the first
(second) minima option. If we consider now the halos either in the
{\it Pairs} or {\it Isolated Pairs} samples, only a fraction of $f_{p}=
0.03 (0.01)$ pairs are composed of halos that are both within the {\it
LG} parameter space. To a good approximation, the pair fraction can
be calculated as the individual fraction squared, $f_{p}\approx
f_{i}\times f_{i}$. This is the expected result under the assumption
that the assembly of the MW and M31 are independent.
The probability of randomly selecting three halo pairs and having them all within the range of parameters defined by the {\it LG} can be calculated as $p_{LG}=f_{p}^3\approx 2.7\times 10^{-5}\ (1.0\times 10^{-6})$. This small probability is a consequence of having found 3 halo pairs within a set of properties shared by a fraction $0.17\ (0.12)$ of the total population of halos. If we instead consider pairs with a range of desired properties shared by, say, $0.68$ of the halos in the total population (the fraction within one standard deviation around the mean), the probability of finding three pairs inside that range would be $p_{1-\sigma}=(0.68 \times 0.68)^{3}\approx 0.1$.
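The arithmetic of this estimate can be reproduced directly; a short sketch (our own naming) using the measured fractions quoted above:

```python
def p_three_pairs(f_pair):
    """Probability that three independently selected pairs all fall in a
    subspace occupied by a fraction f_pair of all pairs."""
    return f_pair ** 3

# Measured pair fractions for the two-sigma and minima bounds:
print(f"{p_three_pairs(0.03):.1e}")  # 2.7e-05
print(f"{p_three_pairs(0.01):.0e}")  # 1e-06
# Independence check: pair fraction ~ (individual fraction)^2
print(round(0.17 ** 2, 3))           # 0.029, consistent with the measured 0.03
# Comparison case: a property range covering 68% of individuals
print(round(p_three_pairs(0.68 ** 2), 2))  # 0.1
```

The cube appears because the three LGs come from independent realizations, and the square because the MW and M31 assemblies are assumed independent within a pair.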
Comparing the results of the probabilities $p_{LG}$ and
$p_{1-\sigma}$, the null hypothesis can be safely rejected. It is highly unlikely that three randomly selected pairs would show as narrow a range of properties as the {\it LG} sample.
Both the {\it ab-initio} and {\it post-factum} constraints imposed
on the LG yield a {\it LG} sample with very similar MAHs. In the CLUES
simulations only the large and mid-scales are effectively constrained
by the data leaving the galactic and smaller scales effectively
random. It follows that the MAH of objects similar to the LG is
strongly affected by their environment. To what extent this is valid
for DM halos in general remains an open question.
\subsection{On the Quietness of the Formation History}
We established in the previous sections that the MAHs are quiet out to $\approx
(10-12)$Gyr, and that none of the selection rules applied here to the pairs
of halos have defined a sample of objects with MAH similar to that of
the three LGs.
The last point is consistent with the results of previous studies that have
approached the same question of estimating a possible bias of the LG with
respect to a general halo population
\citep{2009MNRAS.395..210D,2010MNRAS.406..896B}. These studies apply
isolation criteria on scales of $1$\hMpc over halos in the mass range
we study here, and find as well that no significant bias is introduced
in the isolated halo population with respect to the parent halo
population.
The parameter subspace defined by the three {\it LG}
cannot be explained either in terms of the isolation criteria listed
at the end of \S\ref{sec:sample} or by adding constraints on
the values of the local over-density on $5$\hMpc scales. The
properties of the dynamical environment that are common to all the CLUES
simulations and that provide the quiet formation history of a LG remain
to be found. Ideally, that result should be expressed in a suitable
form to search for LG pairs in an unconstrained simulation.
Is the observed Local Group biased in the same manner? We cannot
provide the answer to that question with the simulations we
present in this paper. Nonetheless, the theoretical predictions
we show here for the dark matter assembly in the LG seem to be in
agreement with the disk dominated morphology of MW and M31.
\subsection{The Connection with the Observed Local Group}
\label{sec:observations}
The most distinct feature of the MW and M31 is that both galaxies have a
disk dominated morphology. It is often mentioned that abundant mergers, which are
presumed to destroy the disk and to be a source of morphological change, are expected on all
mass scales in the hierarchical picture of galaxy formation of $\Lambda$CDM\;
generating a possible contradiction with the abundance of disk galaxies
in the local Universe and, in particular, with the fact that the MW
and M31 are disk galaxies
\citep{1992ApJ...389....5T,1993ApJ...403...74Q,2008ApJ...688..254K}.
Our results provide new theoretical evidence that the MW and M31
could be expected to be disk dominated galaxies in $\Lambda$CDM. From the
results presented here, we have found that the last merger started on
average $11$ Gyr ago. At these redshifts the mass of the MW host halo is
$1-4\times 10^{11}$\hMsun, its virial velocity is $\approx 200$ km/s
and its virial radius $\approx 0.1$ \hMpc. Using these quantities and
Eq. \ref{eq:infall} we estimate the final infall time for the
satellite to be $\approx 3.5$ Gyr, reaching the center $\approx 7.5$
Gyr ago. This quiet history should favour the survival of a disk
formed in the halo \citep{2011arXiv1103.6030G}. However, detailed estimates of these effects
may have to include the inflow of gas into the disk
\citep{2009MNRAS.396..696S}.
A distinct and well characterised feature of the MW is the thick
disk. This disk component of the MW has been known for more than 25
years \citep{1983MNRAS.202.1025G}. The thick disk contains a
population of stars with different kinematics, spatial distribution,
ages and chemical enrichment compared to the thin galactic disk. Although M31
seems to have a similar component \citep{2011MNRAS.tmp..248C}, the
observational and theoretical work on the MW's thick disk has a long history, and
its origin can therefore be discussed in greater detail.
One of the possible formation scenarios for the MW thick disk is an in-situ formation
during/after a gas rich merger \citep{2009MNRAS.400L..61S}. The analysis of
the orbital eccentricity of stars based on RAVE and SDSS data supports
the gas-rich merger mechanism
\citep{2010ApJ...725L.186D,2011MNRAS.tmp..260W}. In our results the last
merger reaches the center $\approx 7.5$ Gyr ago, close to the look-back time of
$8$Gyr as required by the in-situ formation scenario.
\section{Conclusions}
\label{sec:conclusions}
We use constrained simulations of the local Universe to study the
dark matter mass aggregation history (MAH) of the Local Group (LG). Two basic
questions motivate this study: 1. To what extent can the simulated LGs
account for the observed structure of the MW and M31 galaxies? Namely,
if the disk dominated morphology implies that the MW and M31 halos had
a quiet MAH over the last $\approx 11$Gyrs, can simulations
recover this recent quiet history? 2. Does this quiet MAH arise from
the intrinsic properties of the DM halos, or is it induced by
the environment within which the LG is embedded? Is the implied MAH of
the LG triggered by the large and meso-scales, or is it induced by the
small, i.e. galactic and sub-galactic, scales?
The methodology adopted here is to use constrained simulations of the
local Universe, designed to reproduce the large and meso-scales of
the LG environment, and search for halos that resemble the actual
LG. The identification of a pair of halos as a LG-like object is based
on a set of isolation and dynamical criteria, all formulated by their
redshift zero structure, in complete ignorance of their formation history. A
LG-like object that is found close to the actual position of the
observed LG with respect to the large scale structure environment is defined here as a LG. By construction a constrained
simulation can have only one LG or none at all. Indeed, out of a suite
of 200 constrained simulations only 3 harbour a LG. Controlled samples
of individual halos and pairs have been constructed as reference
samples. The analysis has been extended to the unconstrained Bolshoi
simulation that is used here for an unbiased reference
\citep{2010arXiv1002.3660K}.
The identification of the 3 LGs is done
independently of the MAH of the halos. Yet, the MW's and M31's halos
of the 3 LGs all have a common quiet MAH, defined as having the last
major merger, formation and assembly look-back time extending over
$\approx (10\ -12)$ Gyr. This quiet formation history of the simulated
LGs can help to explain the disk dominated morphology of the MW and
M31, adding evidence to the internal instability origin of the
spheroidal component of the MW \citep{2010ApJ...720L..72S}. Based on
measurements of the eccentricity of orbits in the MW, it has been
recently claimed
\citep{2009MNRAS.400L..61S,2010ApJ...725L.186D,2011MNRAS.tmp..260W},
that a gas-rich merger taking place 10.5 to 8 Gyr ago is a favoured
mechanism to explain the thick disk in the MW \citep{2004ApJ...612..894B}. Our
finding of a quiet MAH of the LG provides a suitable platform for such
a process to take place.
The LG halos are selected here from FOF halos in the
mass range $5\times 10^{11}\hMsun <M_{h} < 5\times 10^{12}\hMsun$ at
$z=0$. Between $12\%$ and $17\%$ of these halos are found
to have a quiet MAH, depending on the detailed definition of the quiet
parameter space. From this point of view the MW
and M31 halos are not rare. However, how likely is a pair of halos to
have such a quiet history, shared by both halos? Under the naive
null assumption that the MAH of a halo is an intrinsic property of the
halo, independent of its environment, the fraction of pairs should be the
product of the fractions for a single halo. Indeed, the {\it Pairs}
sample drawn out of the Bolshoi simulation confirms this assertion,
finding that between $1\%$ and $3\%$ of the pairs have as quiet an MAH as
the LG systems do. The probability of randomly selecting 3 pairs
and finding them all with a quiet MAH is on the order of $\sim 10^{-5}$.
Next, we look for what dynamical or environmental property determines
the MAH of a LG-like object. We find here that the mere pairing of
the MW-like halos does not affect the MAH fiducial times. Imposing
the isolation and dynamical constraints that define the {\it
Isolated Pairs} sample does not affect it either. This leaves us
with an open question as to what determines the MAH of halo pairs
similar to the LG. The one hint that we have is that all of the three LGs
reside in the same large and meso-scale environment. We speculate that
the cosmic web plays a major role in shaping the MAH of LG-like
objects, although it is not yet clear what mechanism is
responsible. A larger sample of constrained LGs is needed to confirm and
further explore the reasons behind this result.
\section*{Acknowledgements}
We acknowledge stimulating discussions with Cecilia Scannapieco and Noam
I. Libeskind. Y.H. has been partially supported by the ISF (13/08) and the
Johann Wempe Award by the AIP. We acknowledge the use of the CLUES data storage
system EREBOS at AIP. GY would like to thank the MICINN (Spain) for financial
support under project numbers FPA 2009-08958, AYA 2009-13875-C03 and the SyeC
Consolider project CSD 2007-0050. The simulations were performed at the Leibniz
Rechenzentrum Munich (LRZ) and at Barcelona Supercomputing Center (BSC). We
thank DEISA for giving us access to computing resources through the
DECI projects SIMU-LU and SIMUGAL-LU.
\bibliographystyle{mn2e}
\section{Introduction}
Given lyrics and musical scores, a singing voice synthesis (SVS) system is expected to generate singing voices with accurate pitch, natural rhythm, and high-fidelity waveforms. Inspired by improvements~\cite{wang2017tacotron,ren2019fastspeech} in the text-to-speech (TTS) field, many researchers subsequently employed deep neural networks (DNNs) for acoustic modeling in SVS systems~\cite{kim2018korean,yi2019singing,lu2020xiaoicesing}, which clearly outperform traditional SVS methods~\cite{bonada2016expressive,nakamura2014hmm} in terms of naturalness. Recently, some neural SVS systems~\cite{gu2021bytesing,wang2022singing,zhang2022wesinger} were built by adopting modern acoustic models and auto-regressive neural vocoders~\cite{kalchbrenner2018efficient,valin2019lpcnet}, providing good paths to synthesized singing voices with both higher naturalness and better quality. However, such approaches suffer from slow iterative synthesis and an over-smoothing problem.
The generative adversarial network (GAN), originally introduced in~\cite{goodfellow2014generative} for unsupervised image generation, can be defined as training a generative model to capture the distribution of the observed data under the guidance of an adversarial discriminator.
A large body of literature has adopted GANs with efficient model structures for SVS tasks. In particular, some works~\cite{hono2019singing,chandna2019wgansing,choi2020korean,wu2020adversarially} achieved realistic spectrogram prediction by leveraging different GANs with additional conditions such as lyrics and musical scores. Besides, the potential of GANs for high-fidelity waveform generation has also been revealed for SVS tasks in~\cite{liu21e_interspeech,chen2021singgan,huang2021multi}. These methods either employed autoregressive modules or did not apply GANs to both spectrogram prediction and waveform generation simultaneously, and there are still some obvious flaws in the generated singing voices. Recently, several SVS systems have applied adversarial training to optimize both acoustic models and neural vocoders, such as HiFiSinger~\cite{chen2020hifisinger} and N-Singer~\cite{lee2021n}. Both of them share the same structure of a feed-forward Transformer (FFT) acoustic model and a Parallel WaveGAN~\cite{yamamoto2020parallel} vocoder; however, they did not investigate multi-singer training under the GAN framework.
Inspired by the above studies, in this paper, we propose a fully parallel multi-singer SVS system and employ adversarial training to prompt the more realistic spectrogram and waveform generation sequentially. The highlights of our work can be summarized as follows:
\begin{itemize}
\item We propose an efficient and robust multi-singer SVS system composed of fully parallel architectures without any recurrent unit, and it is feasible to do singer adaptation with limited singing data.
\item A generic, simple but effective set of multi-random area discriminators, conditioned on each singer's identity, is leveraged to train the acoustic model jointly with the traditional L1 loss, which can significantly ease the over-smoothing problem and improve the quality of the Mel-spectrogram. In addition, we adopt an effective multi-receptive dilated convolutional Postnet following the FFT-based acoustic model.
\item With the linearly-interpolated F0 sequence and Mel-spectrogram produced by the acoustic model, a carefully designed GAN-based vocoder is introduced, which works well for both SVS and TTS tasks. To enhance the performance and training stability, we leverage the speaker identity information to the intermediate layer of each discriminator.
\end{itemize}
Our proposed system extends our previous work WeSinger~\cite{zhang2022wesinger}, a data-augmented SVS system with a pitch-weighted progressive loss that takes the 24 kHz LPCNet as its neural vocoder. The work in this paper shares many hyper-parameter settings, data augmentation techniques, and auxiliary loss functions with WeSinger~\cite{zhang2022wesinger}; therefore, the proposed system is named WeSinger 2. Experimental results indicate that WeSinger 2 can generate natural and high-quality singing voices more efficiently, and several ablation studies demonstrate the effectiveness of our designs.
\section{Methodology}
\label{section2}
\subsection{Architecture Overview}
The general architecture of WeSinger 2 is illustrated in Figure~\ref{fig:wesinger2}(a). The input representation and the FFT-based encoder follow the designs of WeSinger~\cite{zhang2022wesinger}. Different from WeSinger which adopts a BLSTM-based duration predictor, here we replace the LSTM layers with several 1-D convolutional (Conv1D) layers inspired by~\cite{ren2020fastspeech} for faster inference speed and similar performance. The Mel-spectrogram with linearly-interpolated F0 (denoted as ``LI-F0'' in Figure~\ref{fig:wesinger2}(a)) sequence is produced from FFT blocks and improved by an effective Post-net based on multi-receptive field (MRF) instead of the CBHG Post-net. In addition, the LPCNet vocoder adopted by WeSinger~\cite{zhang2022wesinger} is also replaced with a parallel GAN-based vocoder to efficiently produce the high-fidelity waveforms. We will introduce the key improvements of WeSinger 2 in the following subsections.
\begin{figure}[htp]
\centering
\includegraphics[width=8cm]{wesinger2.pdf}
\caption{\textbf{(a)} The overall architecture of our proposed multi-singer SVS system WeSinger 2 at the inference stage. \textbf{(b)} The conditional multi random area discriminators (MRADs) designed for training the acoustic model. \textbf{(c)} The reconstruction and adversarial functions adopted for training the GAN-based neural vocoder. The gradient reversal layer for singer classification during multi-singer training and the progressive decoder loss are omitted here, which have been described in~\cite{zhang2022wesinger}.}
\label{fig:wesinger2}
\end{figure}
\subsection{Acoustic Model}
Based on WeSinger~\cite{zhang2022wesinger}, we made two major improvements to the FFT-based acoustic model: the MRF-based post-net and conditional adversarial training.
\begin{figure*}[htp]
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.8\linewidth]{mel_gt.pdf}
\caption{ground-truth}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.8\linewidth]{mel_L1.pdf}
\caption{L1 loss}
\label{fig:sub-second}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.8\linewidth]{mel_L1_GAN.pdf}
\caption{L1 + adv. loss}
\label{fig:sub-third}
\end{subfigure}
\caption{The quality improvement in the predicted Mel-spectrogram by WeSinger 2 with Multi Random Area Discriminators (MRADs).}
\label{fig:mel_spectrogram}
\end{figure*}
\textit{(1) \textbf{MRF-based Post-net}} Appending a Post-net to the acoustic model is an important and necessary design for improving the quality of the Mel-spectrogram prediction~\cite{shen2018natural}. Inspired by~\cite{kong2020hifi}, we design a deep residual Conv1D-based post-net with multiple receptive fields to enhance contextual modeling. Specifically, the post-net in WeSinger 2 is composed of three blocks with different convolutional kernel sizes, and each block is composed of three Conv1D layers with different dilation rates. Each Conv1D layer is followed by a leaky rectified linear unit (ReLU) activation. To ease the optimization, we add residual connections between all adjacent Conv1D layers and blocks. Compared with the deep-CNN-based post-net adopted in~\cite{lee2021n,shen2018natural}, our MRF-based post-net is more effective at reproducing the Mel-spectrogram at around the same computational cost.
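As an illustration, a minimal numpy sketch of such a multi-receptive-field residual stack is given below; the kernel sizes, dilation rates, and random weights are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def leaky_relu(x, slope=0.1):
    return np.where(x > 0, x, slope * x)

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded dilated 1-D convolution over a single channel."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    return np.array([sum(kernel[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

def mrf_postnet(x, kernel_sizes=(3, 5, 7), dilations=(1, 3, 5), rng=None):
    """Three blocks (one per kernel size), each with three dilated layers,
    leaky-ReLU activations, and residual connections between layers/blocks."""
    rng = rng or np.random.default_rng(0)
    out = np.zeros_like(x)
    for k in kernel_sizes:                                # one block per kernel size
        h = x
        for d in dilations:                               # dilated Conv1D layers
            w = rng.normal(scale=0.1, size=k)             # illustrative weights
            h = h + leaky_relu(dilated_conv1d(h, w, d))   # layer-level residual
        out = out + h                                     # block-level residual
    return x + out / len(kernel_sizes)                    # overall residual output

frame = np.sin(np.linspace(0, 4, 64))                     # stand-in for one mel band
refined = mrf_postnet(frame)
```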
\textit{(2) \textbf{Conditional Adversarial Training}} We found that optimizing the duration-allocated acoustic model with only the L1 reconstruction loss could lead to fuzzy and over-smoothed Mel-spectrogram predictions, which will either harm the fidelity of waveforms generated by the neural vocoder or make the synthesized singing voices lack expressiveness, such as trills and glissandos. To alleviate these defects, we combine adversarial training with the L1 reconstruction loss to train the acoustic model. Different from the sub-frequency discriminator~\cite{chen2020hifisinger} and the voice-aware conditional discriminator~\cite{lee2021n}, we design conditional
multi-random area discriminators (MRADs) to discriminate different structures in the Mel-spectrogram of each singer, which is more generic and easier to implement. As shown in Figure~\ref{fig:wesinger2}(b), each discriminator is composed of six two-dimensional convolutional (Conv2D) layers, which take as input a random rectangle region from the Mel-spectrogram and predict a scalar value to indicate whether the selected region is real or fake. The height of each randomly selected region represents the frequency band and the width represents the number of frames. We fix the values of height to be $[20, 30, 40, 50]$ and the values of width to be $[190, 160, 70, 30]$, which can effectively capture different resolutions in both the time domain and frequency domain. The location of each rectangle region is randomly sampled at every training step to achieve the effect of data augmentation. Besides, we found that adding the singer embedding vectors to the output of the first Conv2D layer can both avoid the generator falling into the overfitting to a specified singer's data and improve the quality of the predicted Mel-spectrogram. For adversarial training, we use the popular objective functions proposed in LSGAN~\cite{mao2017least} and combine them with the L1 reconstruction loss as follows:
\begin{equation}
\begin{split}
\mathcal{L}_{block}(D;c) &=\mathbb{E}_{y\sim p_{data}(y)}[\sum_{i=1}^{4}(D_i(y, c)-1)^2] \\
&+\mathbb{E}_{x\sim p_{g}(x)}[\sum_{i=1}^{4}(D_i(x, c))^2] \\
\mathcal{L}_{block}(G;c) &=\lambda_{adv}\mathbb{E}_{x\sim p_{g}(x)}[\sum_{i=1}^{4}(D_i(x, c)-1)^2]+\lambda_{l}|x - y| \\
\end{split}
\end{equation}
where $D_i$ denotes the $i$-th discriminator, $G$ represents the FFT-based acoustic model, $c$ denotes the identity of each singer, $x$ is the Mel-spectrogram generated from each decoder block, $y$ is the ground-truth Mel-spectrogram, $p_{data}$ is the distribution of the ground-truth Mel-spectrogram, $p_{g}$ is the distribution of the generated Mel-spectrogram from each decoder block, $|x-y|$ is the L1 reconstruction loss between the predicted and the ground-truth Mel-spectrogram, and $\mathcal{L}_{block}$ indicates the objective function of each decoder block. $\lambda_{adv}$ and $\lambda_{l}$ are the coefficients for the adversarial loss and the L1 reconstruction loss, respectively. We set $\lambda_{l}=20$ in this paper. $\lambda_{adv}$ is set to $1$ when pre-training and to $0.7$ when fine-tuning. We also adopt the progressive decoder loss here, the effectiveness of which has been demonstrated in~\cite{zhang2022wesinger}. Therefore, the overall objective functions for the acoustic model's generator $\mathcal{V}_{am}(G)$ and discriminators $\mathcal{V}_{am}(D)$ can be formulated as:
\begin{equation}
\begin{split}
\mathcal{V}_{am}(D) &=\mathop{min}\limits_{D} \sum_{block=1}^{B} \mathcal{L}_{block}(D;c) \\
\mathcal{V}_{am}(G) &=\mathop{min}\limits_{G} \sum_{block=1}^{B} \mathcal{L}_{block}(G;c) \\
\end{split}
\end{equation}
where $B$ denotes the number of decoder blocks plus the post-net. Similar to~\cite{zhang2022wesinger}, the duration loss and the adversarial singer classifier loss are also adopted for training; we omit both terms here.
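The random-area selection feeding the MRADs can be sketched as follows; the crop heights and widths are the fixed values quoted above, while the Mel-spectrogram shape and the discriminator internals are illustrative:

```python
import numpy as np

# Each of the four discriminators crops a rectangle of fixed (height, width)
# from the (n_mels x n_frames) Mel-spectrogram at a location resampled every
# training step, acting as a form of data augmentation.

HEIGHTS = [20, 30, 40, 50]    # frequency bands (mel bins)
WIDTHS = [190, 160, 70, 30]   # time spans (frames)

def random_areas(mel, rng):
    """Return one random crop per discriminator from mel (n_mels, n_frames)."""
    n_mels, n_frames = mel.shape
    crops = []
    for h, w in zip(HEIGHTS, WIDTHS):
        top = rng.integers(0, n_mels - h + 1)     # random vertical position
        left = rng.integers(0, n_frames - w + 1)  # random horizontal position
        crops.append(mel[top:top + h, left:left + w])
    return crops

rng = np.random.default_rng(0)
mel = rng.normal(size=(80, 200))   # 80 mel bands, 200 frames (illustrative)
crops = random_areas(mel, rng)
# crop shapes: (20, 190), (30, 160), (40, 70), (50, 30)
```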
\subsection{Neural Vocoder}
\textit{(1) \textbf{Waveform Generator}} Inspired by HiFiGAN~\cite{kong2020hifi}, we design a new GAN-based neural vocoder as shown in Figure~\ref{fig:wesinger2}(a), which is effective for both SVS and TTS tasks. Considering that singing voices usually have a wide range of pitch and longer vowel duration~\cite{lee2021n} than speech, we made several improvements in the design of the generator. First, in addition to the commonly-used Mel-spectrogram, our vocoder also considers as input the 88 discrete piano keys which are quantized from the linearly-interpolated F0 sequence according to the MIDI standard~\cite{midi1996complete}. The discrete keys are then embedded by looking up a trainable matrix $F\in \mathbb{R}^{88\times d}$, in which $d$ denotes the size of the embedding vector. We set $d=16$ empirically in this paper and observed that a larger value of $d$
could harm the performance. Second, to avoid instability across different singers' corpora during training, we apply a fully-connected layer with a \textit{softplus} activation function to the key embedding sequence, transform the Mel-spectrogram with a linear fully-connected layer, and take their summation as input to the following convolution layers. We found that such designs can both stabilize training and improve the quality of the generated singing waveforms compared with direct concatenation or summation.
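A hedged sketch of the piano-key quantization of the linearly-interpolated F0: the paper states only that 88 discrete keys are obtained following the MIDI standard, so the sketch below assumes the conventional A4 = 440 Hz mapping:

```python
import math

def f0_to_piano_key(f0_hz):
    """Map F0 in Hz to a piano-key index in [0, 87] (A0..C8).

    Assumes the standard MIDI convention: A4 = 440 Hz = MIDI note 69,
    with the 88 piano keys spanning MIDI notes 21..108.
    """
    midi = round(69 + 12 * math.log2(f0_hz / 440.0))  # nearest MIDI note
    midi = min(max(midi, 21), 108)                    # clamp to piano range
    return midi - 21                                  # 0-based key index

f0_to_piano_key(440.0)   # A4 -> key 48
f0_to_piano_key(27.5)    # A0 -> key 0
f0_to_piano_key(4186.0)  # C8 -> key 87
```

The resulting key index would then look up a row of the trainable embedding matrix $F\in \mathbb{R}^{88\times d}$ described above.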
\textit{(2) \textbf{Discriminative Training}} To optimize the generator (shown in Figure~\ref{fig:wesinger2}(c)), we adopt the multi-scale discriminator (MSD)~\cite{kumar2019melgan} to capture long-term dependencies and the multi-period discriminator (MPD)~\cite{kong2020hifi} to handle the diverse periodic patterns in the audio signal. Besides, we also introduce the multi-length discriminator (MLD) proposed in~\cite{chen2020hifisinger}, which takes the audio waveform as input to several Conv1D layers with linearly increasing dilation rates. Different from previous works, we inject the singer identity information into a specified intermediate layer of each discriminator, which utilizes the multi-singer pre-training effectively and allows stable training of the generator with larger kernel sizes and dilation rates. In addition to the conditional discriminators, we apply both the multi-resolution STFT~\cite{yang2021multi} and multi-resolution Mel-spectrogram~\cite{kong2020hifi} reconstruction loss functions to speed up generator training. Overall, the final training objectives for the generator $\mathcal{V}_{voc}(G)$ and the discriminators $\mathcal{V}_{voc}(D)$ can be formulated as:
\begin{equation}
\begin{split}
\mathcal{L}_{adv}(D;c) &= \mathbb{E}_{y\sim p_{data}(y)}[(D(y, c)-1)^2] \\
&+ \mathbb{E}_{x\sim p_{g}(x)}[(D(x, c))^2],\forall D\in\mathcal{S} \\
\mathcal{L}_{adv}(G;c) &= \mathbb{E}_{x\sim p_{g}(x)}[\sum_{D\in\mathcal{S}}(D(x, c)-1)^2]\\
\mathcal{L}_{recons}(G) &= \lambda_{s}\mathcal{L}_{stft} + \lambda_{m}\mathcal{L}_{mel} \\
\mathcal{V}_{voc}(D) &= \mathop{min}\limits_{D} \mathcal{L}_{adv}(D;c) \\
\mathcal{V}_{voc}(G) &= \mathop{min}\limits_{G}[ \mathcal{L}_{adv}(G;c) + \mathcal{L}_{recons}(G)] \\
\end{split}
\end{equation}
in which $\mathcal{S}$=\{MSD, MPD, MLD\}, $p_{data}$ is the distribution of the ground-truth waveform, $p_{g}$ is the distribution of the generated waveform. $\lambda_{s}$ and $\lambda_{m}$ are the scale factors for both the multi-resolution STFT reconstruction loss and the multi-resolution Mel-spectrogram reconstruction loss, respectively. We obtained the optimal performance of the proposed vocoder by setting $\lambda_{s}=\lambda_{m}=15$ in this paper.
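For concreteness, a numpy sketch of a multi-resolution STFT magnitude loss in the spirit of $\mathcal{L}_{stft}$ is shown below; the (FFT size, hop) resolutions and the combination of spectral-convergence and log-magnitude terms are assumptions, as the paper defers the exact definition to~\cite{yang2021multi}:

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    """Hann-windowed STFT magnitude with a small floor for the log terms."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=-1)) + 1e-7

def mr_stft_loss(y_hat, y, resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Average of spectral-convergence and log-magnitude L1 over resolutions."""
    total = 0.0
    for n_fft, hop in resolutions:
        S_hat, S = stft_mag(y_hat, n_fft, hop), stft_mag(y, n_fft, hop)
        sc = np.linalg.norm(S - S_hat) / np.linalg.norm(S)  # spectral convergence
        mag = np.mean(np.abs(np.log(S) - np.log(S_hat)))    # log-magnitude L1
        total += sc + mag
    return total / len(resolutions)

t = np.linspace(0, 1, 24000)                  # 1 s at 24 kHz
clean = np.sin(2 * np.pi * 220 * t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
loss_identical = mr_stft_loss(clean, clean)   # 0 for identical signals
loss_noisy = mr_stft_loss(noisy, clean)
```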
\section{Experiments}
\subsection{Datasets}
\label{datasets}
We follow our previous work WeSinger~\cite{zhang2022wesinger} to use a 30-hour multi-singer singing corpus to do pre-training for WeSinger 2. With the pre-trained model, we try to fine-tune it on the following processed datasets (24 kHz with 16-bit quantization):
\begin{itemize}
\item \textit{\textbf{Opencpop}}~\cite{wang2022opencpop}, a public female singing dataset\footnote{https://wenet.org.cn/opencpop/} including 3550 segments for training and 206 segments for testing.
\item \textit{\textbf{Data-L}}, a large-scale high-quality singing dataset including 4700 segments for training and 298 segments for testing.
\item \textit{\textbf{Data-S}}, a small-scale low-quality amateur singing dataset including 348 segments for training and 10 segments for testing.
\end{itemize}
To improve the diversity, each recording (except the recordings from \textit{Opencpop}) is split into audio segments with the \textit{VS}-augmented~\cite{zhang2022wesinger} method. Mel-spectrograms with 80 bands are extracted as the intermediate feature representation and re-scaled by mean-variance normalization based on the statistics of each singer. In addition, the F0 contours were extracted with the YIN~\cite{de2002yin} algorithm, and we replaced the unvoiced parts with linearly-interpolated F0 values.
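The linear interpolation of F0 over unvoiced frames can be sketched as follows, assuming the pitch tracker marks unvoiced frames with zeros:

```python
import numpy as np

def interpolate_f0(f0):
    """Replace zero (unvoiced) frames with values linearly interpolated
    between neighbouring voiced frames; edges hold the nearest voiced value."""
    f0 = np.asarray(f0, dtype=float)
    voiced = f0 > 0
    if not voiced.any():
        return f0
    idx = np.arange(len(f0))
    return np.interp(idx, idx[voiced], f0[voiced])

interpolate_f0([0, 100, 0, 0, 130, 0])  # -> [100, 100, 110, 120, 130, 130]
```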
\subsection{Singer Adaptation Strategy}
We compared WeSinger 2 with our previous work WeSinger~\cite{zhang2022wesinger}. All acoustic models and neural vocoders were first pre-trained for 1200k steps with a batch size of 24 on the 30-hour multi-singer corpus. The initial learning rate was set to 8e-4 with a warm-up strategy for the first 150k steps and then decayed to a final learning rate of 1e-4 over the following 1050k steps. The pre-trained models were then fine-tuned with a batch size of 24 and a fixed learning rate of 1e-4 on the three datasets described in Section~\ref{datasets}.
To make a trade-off between generalization and high fidelity, we simplified the sampling strategy proposed in~\cite{Zheng2022Zero} by selecting a sample of the target singer with a probability of $0.7$ and a sample of other singers with a probability of $0.3$ when preparing the training dataset for fine-tuning. With such a simple sampling strategy, it is feasible to build a robust SVS system without obvious harm to the timbre of the target singer, especially when the training data of the target singer is very limited. We leave a detailed investigation of efficient few-shot SVS for future work.
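The simplified fine-tuning sampler can be sketched as below; the data lists and helper name are illustrative, not from the paper:

```python
import random

# Each fine-tuning example is drawn from the target singer with probability
# 0.7 and from the multi-singer pool otherwise.

def sample_batch(target_data, other_data, batch_size, p_target=0.7, rng=None):
    """Draw a fine-tuning batch mixing target-singer and other-singer samples."""
    rng = rng or random.Random(0)
    return [rng.choice(target_data) if rng.random() < p_target
            else rng.choice(other_data)
            for _ in range(batch_size)]

batch = sample_batch(["tgt_seg1", "tgt_seg2"], ["oth_seg1", "oth_seg2"], 8)
```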
\subsection{Evaluations}
All short sentences were generated from the fine-tuned systems with the ground-truth duration and then concatenated into singing segments. The evaluation of both WeSinger~\cite{zhang2022wesinger} and WeSinger 2 is twofold. Subjectively, we conducted a Mean Opinion Score (MOS) test, in which twenty listeners were asked to rate each audio segment on a scale from 1 to 5 in terms of overall quality. Objectively, we calculate the F0 RMSE (F0 root mean square error) and MSD (Mel-spectrogram distortion) between the
generated voices and recordings to distinguish whether the generated voice is out-of-tune or has obviously poor quality.
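Hedged sketches of the two objective metrics are given below; the paper does not spell out the exact formulas, so standard definitions are assumed (F0 RMSE over commonly voiced frames, MSD as a mean per-frame spectral distance):

```python
import numpy as np

def f0_rmse(f0_gen, f0_ref):
    """RMSE between two F0 tracks over frames voiced in both (zeros = unvoiced)."""
    f0_gen, f0_ref = np.asarray(f0_gen, float), np.asarray(f0_ref, float)
    voiced = (f0_gen > 0) & (f0_ref > 0)
    return np.sqrt(np.mean((f0_gen[voiced] - f0_ref[voiced]) ** 2))

def mel_distortion(mel_gen, mel_ref):
    """Mean per-frame Euclidean distance between (n_frames, n_mels) arrays;
    the paper's exact normalisation may differ."""
    return np.mean(np.linalg.norm(np.asarray(mel_gen) - np.asarray(mel_ref), axis=-1))

f0_rmse([100, 0, 210], [110, 95, 200])  # -> 10.0 (middle frame skipped as unvoiced)
```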
\begin{table}[]
\centering
\arrayrulewidth1.0pt
\caption{Quantitative and qualitative evaluation results of both WeSinger and WeSinger 2 on three different datasets.}
\begin{tabular}{ccccc}
corpus & \multicolumn{1}{c}{system} & MOS & \begin{tabular}[c]{@{}c@{}}F0 \\ RMSE\end{tabular} & MSD \\ \hline
& GT & $3.91\pm0.10$ & $-$ & $-$ \\
& WeSinger & $3.21\pm0.15$ & $17.51$ & $1.40$ \\
\multirow{-3}{*}{ \textit{Opencpop}} & \cellcolor[HTML]{EFEFFF}\textbf{WeSinger 2} & \cellcolor[HTML]{EFEFFF} $\textbf{3.40}\pm\textbf{0.06}$ & \cellcolor[HTML]{EFEFFF}$\textbf{16.61}$ & \cellcolor[HTML]{EFEFFF}$\textbf{1.31}$ \\ \hline
& GT & $4.10\pm0.06$ & $-$ & $-$ \\
& WeSinger & $3.60\pm0.14$ & $15.16$ & $1.53$ \\
\multirow{-3}{*}{\textit{Data-L}} & \cellcolor[HTML]{EFEFFF}\textbf{WeSinger 2} & \cellcolor[HTML]{EFEFFF}$\textbf{3.71}\pm\textbf{0.07}$ & \cellcolor[HTML]{EFEFFF}$\textbf{14.68}$ & \cellcolor[HTML]{EFEFFF}$\textbf{1.41}$ \\ \hline
& GT & $3.75\pm0.15$ & $-$ & $-$ \\
& WeSinger & $3.21\pm0.15$ & $13.23$ & $2.64$ \\
\multirow{-3}{*}{\textit{Data-S}} & \cellcolor[HTML]{EFEFFF}\textbf{WeSinger 2} & \cellcolor[HTML]{EFEFFF}$3.19\pm0.04$ & \cellcolor[HTML]{EFEFFF}$\textbf{12.32}$ & \cellcolor[HTML]{EFEFFF}$\textbf{1.32}$ \\
\end{tabular}
\label{MOS_for_each_singer}
\begin{flushleft}
\footnotesize{\textbf{NOTE}: The duration accuracy is not shown here as the duration model behaves on par with that in~\cite{zhang2022wesinger}. The singing voices generated with the predicted duration will also be available on our demo page.}\\
\end{flushleft}
\end{table}
\begin{table}[]
\centering
\arrayrulewidth1.0pt
\caption{Parameters and efficiency of WeSinger and WeSinger 2 at the synthesis stage.}
\begin{tabular}{cccc}
system & \begin{tabular}[c]{@{}c@{}}\#param \\ \textit{A.M.}\textsuperscript{\textdagger} \end{tabular} & \begin{tabular}[c]{@{}c@{}}\#param \\ \textit{Voc.}\textsuperscript{\textdaggerdbl} \end{tabular} & \begin{tabular}[c]{@{}c@{}}Speed on CPU\textsuperscript{\textsection}\\(kHz/s) \end{tabular} \\ \hline
WeSinger &37 M&2.49 M& 4.01\\
\cellcolor[HTML]{EFEFFF}\textbf{WeSinger 2} & \cellcolor[HTML]{EFEFFF}\textbf{49 M} & \cellcolor[HTML]{EFEFFF}\textbf{5.89 M} & \cellcolor[HTML]{EFEFFF}\textbf{65.24} \\
\end{tabular}
\label{systems}
\begin{flushleft}
\footnotesize{\textsuperscript{\textdagger} Acoustic model. \textsuperscript{\textdaggerdbl} Neural vocoder. \textsuperscript{\textsection} Tested on a single core of AMD EPYC 7K62 CPU @ 2.60 GHz in Python Environment. Notably, all discriminators are not considered here.}\\
\end{flushleft}
\end{table}
As shown in Table~\ref{MOS_for_each_singer}, compared with WeSinger~\cite{zhang2022wesinger}, WeSinger 2 achieves a higher MOS with significantly tighter confidence intervals on both the \textit{Opencpop} and \textit{Data-L} datasets, and around the same MOS on the \textit{Data-S} dataset. Most listeners commented that the key improvement of WeSinger 2 is that it maintains a very stable performance on musical notes with long vowels, which is also indicated by the lower MSD values. In addition, for some rare and difficult musical notes, WeSinger 2 also demonstrates much better performance than WeSinger~\cite{zhang2022wesinger}, as reflected in the lower F0 RMSE values. Besides, we found that most of the singing audios generated by WeSinger 2 neither contain the metallic or inconsistent artifacts reported in~\cite{wang2022opencpop} nor sound unnatural. Apart from the performance gains, WeSinger 2 synthesizes around 3 times faster than real time on a single core of a moderate CPU @ 2.60 GHz in a Python environment. The number of parameters and the synthesis speed of both WeSinger and WeSinger 2 are listed in Table~\ref{systems}.
\begin{table}[!tb]
\arrayrulewidth1.0pt
\begin{minipage}[b]{0.45\linewidth}\centering
\caption{Ablation study for the acoustic model of WeSinger 2.}
\begin{tabular}{lc}
\multicolumn{1}{c}{case} & CMOS\textdownarrow \\ \hline
\rowcolor[HTML]{EFEFFF}proposed & 0 \\
w/o MRF\textsuperscript{\textdagger} & $-0.10$ \\
w/o condition & $-0.15$ \\
w/o adv. loss & $-0.27$
\end{tabular}
\label{abalation_1}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.45\linewidth}\centering
\caption{Ablation study for the neural vocoder of WeSinger 2.}
\begin{tabular}{lc}
\multicolumn{1}{c}{case} & CMOS\textdownarrow \\ \hline
\rowcolor[HTML]{EFEFFF}proposed & 0 \\
w/o condition & $-0.10$ \\
w/o WLD & $-0.14$ \\
w/o pitch & $-0.16$
\end{tabular}
\label{abalation_2}
\end{minipage}
\begin{flushleft}
\footnotesize{\textsuperscript{\textdagger} It means adopting deep CNNs as Post-net for the acoustic model.}\\
\end{flushleft}
\end{table}
\subsection{Ablation Studies}
\label{ablation}
To verify the effectiveness of our designs in WeSinger 2, we further conduct several ablation studies for the acoustic model and the neural vocoder, respectively. We train on the \textit{Data-L} dataset and choose the Comparative Mean Opinion Score (CMOS) as the evaluation metric. As shown in Table~\ref{abalation_1}, conditional adversarial training with MRADs plays an important role in the improved performance: it alleviates the over-smoothing problem and produces more realistic Mel-spectrograms, as illustrated in Figure~\ref{fig:mel_spectrogram}. Meanwhile, replacing the deep CNNs with the MRF Post-net also
results in a gain of $0.1$ CMOS without additional computational cost. As for the GAN-based neural vocoder (shown in Table~\ref{abalation_2}), we found that removing the singer's identity from the discriminators degrades the fidelity of the generated audio, and removing the WLD and the key embedding part also leads to a decrease of around $0.16$ CMOS.
\section{Conclusion}
In this paper, we introduced a new SVS system named WeSinger 2 to produce high-quality singing voices efficiently. We proposed simple but generic random-area conditional discriminators to achieve realistic Mel-spectrogram generation and described how to convert the Mel-spectrogram, together with a frame-level linearly interpolated F0 sequence, into singing waveforms with the proposed GAN-based neural vocoder. Evaluation results indicate that WeSinger 2 can synthesize natural singing voices. The robustness and efficiency of WeSinger 2 make it easy to deploy in a production environment.
\vfill\pagebreak
\clearpage
\bibliographystyle{IEEEtran}
\section{Introduction}
$L_\infty$-algebras (also called strong homotopy Lie algebras or SH Lie
algebras) are a generalization of differential graded Lie algebras
(DGLAs), where the Jacobi identity only holds up to compatible higher
homotopies. They were introduced in
\cite{schlessinger.stasheff:1985a,stasheff:1992a,lada.stasheff:1993a},
and they appeared at first in a supporting role in deformation theory.
The idea is the following: In a usual DGLA
$(\liealg{g},\D,[\argument{,}\argument])$ one has a cochain
differential $\D$ on a $\mathbb{Z}$-graded vector space $\liealg{g}$
and a compatible graded
Lie bracket $[\argument{,}\argument]$, i.e. $\D$ satisfies the graded
Leibniz rule. In an $L_\infty$-algebra $(L,\{l_n\}_{n\in \mathbb{N}})$
one has instead a whole collection of antisymmetric maps
$l_n \colon \Anti^n L \to L$ of degrees $2-n$, where $n\geq 1$, satisfying
certain compatibilities that generalize those of DGLAs. These
maps are called Taylor coefficients or structure maps. In particular,
the binary map $l_2$ that corresponds to the Lie bracket
$[\argument{,}\argument]$ satisfies the Jacobi identity only up to terms depending on the homotopy $l_3$, and so on.
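For orientation, unraveling these compatibilities for small $n$ gives the following identities (a standard formulation; the precise signs depend on the chosen conventions):
\begin{align*}
l_1(l_1(x))
&=
0, \\
l_1(l_2(x,y))
&=
l_2(l_1(x),y) + (-1)^{\abs{x}} l_2(x,l_1(y)), \\
l_2(l_2(x,y),z) &\pm l_2(l_2(y,z),x) \pm l_2(l_2(z,x),y) \\
&=
l_1(l_3(x,y,z)) + l_3(l_1(x),y,z) \pm l_3(x,l_1(y),z) \pm l_3(x,y,l_1(z)),
\end{align*}
i.e. $l_1$ is a differential, $l_1$ is a derivation of $l_2$, and $l_2$ satisfies the Jacobi identity up to the homotopy $l_3$.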
Using the Koszul duality between DGLAs and graded cocommutative coalgebras,
it is shown in \cite{lada.markl:1994a} that $L_\infty$-algebra
structures $\{l_n\}_{n\in\mathbb{N}}$ on a graded vector space $L$ are in
one-to-one correspondence with codifferentials $Q$ of degree $1$ on the cofree
cocommutative conilpotent coalgebra $\Sym(L[1])$ generated by
$L[1]$, i.e. by $L$ with the degree shifted by one. This dual coalgebraic
formulation actually goes back to the BV-BRST formalism \cite{batalin.vilkovisky:1983a,batalin.fradkin:1983a}, see also \cite{stasheff:1997b}.
As mentioned above, $L_\infty$-algebras play an important role in deformation
theory. Here the basic philosophy is that, over a field of characteristic
zero, every deformation problem is governed by a DGLA or more generally
an $L_\infty$-algebra via solutions of the Maurer-Cartan equation modulo
equivalences, see e.g. \cite{kontsevich.soibelman:book,manetti:2005a}.
Staying for the moment for simplicity in the context of DGLAs, a
Maurer-Cartan element $\pi$ in $(\liealg{g},\D,[\argument{,}\argument])$ is
an element of degree one satisfying the Maurer-Cartan equation
\begin{equation*}
0
=
\D \pi
+
\frac{1}{2}[\pi,\pi].
\end{equation*}
This can be generalized to $L_\infty$-algebras, and there are suitable
notions of gauge and homotopy equivalences of Maurer-Cartan elements,
and in deformation theory one is interested in the
transformation groupoid of the gauge action, also called the
\emph{Goldman-Millson groupoid} or \emph{Deligne groupoid}
\cite{goldman.millson:1988a}. Moreover, one can use
Maurer-Cartan elements to twist the $L_\infty$-structure, i.e. change
the $L_\infty$-structure in a certain way. For example, for a DGLA
$(\liealg{g},\D,[\argument{,}\argument])$ with Maurer-Cartan element
$\pi$, the twisted DGLA takes the following form
$(\liealg{g}, \D + [\pi,\argument],[\argument{,}\argument])$, where
the Maurer-Cartan equation implies that $\D + [\pi,\argument]$
squares to zero.
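The last claim can be verified by a short computation (a standard check): for $x \in \liealg{g}$,
\begin{align*}
(\D + [\pi,\argument])^2 x
&=
\D^2 x + \D[\pi,x] + [\pi,\D x] + [\pi,[\pi,x]] \\
&=
[\D\pi,x] + \frac{1}{2}\bigl[[\pi,\pi],x\bigr]
=
\Bigl[\D\pi + \frac{1}{2}[\pi,\pi],\, x\Bigr]
=
0,
\end{align*}
where we used $\D^2 = 0$, the graded Leibniz rule $\D[\pi,x] = [\D\pi,x] - [\pi,\D x]$ for the degree-one element $\pi$, the graded Jacobi identity in the form $[\pi,[\pi,x]] = \frac{1}{2}[[\pi,\pi],x]$, and the Maurer-Cartan equation in the last step.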
One nice feature of $L_\infty$-algebras is that they allow a
more general notion of morphisms, so-called $L_\infty$-morphisms:
an $L_\infty$-morphism from an $L_\infty$-algebra $(L,Q)$ to $(L',Q')$ is
just a coalgebra morphism between the corresponding coalgebras
$\Sym(L[1])$ and $\Sym(L'[1])$ that commutes with the codifferentials.
In particular, $L_\infty$-morphisms are generalizations of Lie algebra morphisms
and they are still compatible with Maurer-Cartan elements. Moreover,
there is a notion of $L_\infty$-quasi-isomorphism generalizing the notion
of quasi-isomorphisms between DGLAs. One can prove that
such $L_\infty$-quasi-isomorphisms admit quasi-inverses, and
that $L_\infty$-quasi-isomorphisms induce bijections on the
equivalence classes of Maurer-Cartan elements, which is important for
deformation theory. One
can also twist $L_\infty$-morphisms by Maurer-Cartan elements, which gives
$L_\infty$-morphisms between the twisted $L_\infty$-algebras.
Note that the notion of $L_\infty$-algebras and
$L_\infty$-morphisms can be generalized to algebras over
general Koszul operads, see e.g. \cite{loday.vallette:2012a}.
In addition, as one would expect, the generalization from DGLAs to $L_\infty$-algebras
also leads to a generalization of the representation theory, i.e. from
DG Lie modules to $L_\infty$-modules.
One famous deformation problem solved by $L_\infty$-algebraic
techniques is the deformation quantization problem of Poisson manifolds,
which was solved by Kontsevich's celebrated formality theorem
\cite{kontsevich:2003a}, see also \cite{dolgushev:2005a,dolgushev:2005b} for the globalization of this result and the
invariant setting of Lie group actions. More explicitly, the formality
theorem provides an $L_\infty$-quasi-isomorphism between
the differential graded Lie algebra of polyvector fields
$\Tpoly(M)$ and the polydifferential operators $\Dpoly(M)$ on a smooth
manifold $M$. As such, it induces a one-to-one correspondence
between equivalence classes of Maurer-Cartan elements, i.e.
between equivalence classes of (formal) Poisson structures
and equivalence classes of star products. Using
the language of $L_\infty$-modules, a formality theorem for
Hochschild chains has also been proven
\cite{dolgushev:2006a,shoikhet:2003a}.
Moreover, in recent years many additional developments have taken place,
see e.g. \cite{calaque:2005a,calaque:2007a, liao:2019a}.
Apart from that, $L_\infty$-algebras can be used to
describe many more geometric deformation problems, e.g. deformations of
complex manifolds \cite{manetti:2004a}, deformations of foliations
\cite{vitagliano:2014a}, deformations of Dirac structures
\cite{gualtieri.matviichuk.scott:2020a}, and many more. In addition, $L_\infty$-algebras are
also an important tool in homological reduction theory, following the
BV-BRST spirit, see e.g. \cite{schaetz:2008a,cattaneo.felder:2007a,esposito.kraft.schnitzer:2020a:pre,esposito.kraft.schnitzer:2022a:pre},
and in physics, where they occur for example in string theory and
in quantum field theory.
Another important observation from \cite{dolgushev:2007a} is that
$L_\infty$-morphisms themselves correspond to
Maurer-Cartan elements in a certain convolution-like $L_\infty$-algebra,
which gives a way to speak of homotopic $L_\infty$-morphisms in
the case of equivalent Maurer-Cartan elements.
Homotopic $L_\infty$-morphisms share many features: for example,
an $L_\infty$-morphism that is homotopic to an $L_\infty$-quasi-isomorphism
is automatically an $L_\infty$-quasi-isomorphism itself, and
homotopic $L_\infty$-morphisms induce the same maps on the equivalence
classes of Maurer-Cartan elements. The second observation was used in
\cite{bursztyn.dolgushev.waldmann:2012a} to prove that the globalization
of the Kontsevich formality by Dolgushev \cite{dolgushev:2005a,dolgushev:2005b}
is, at the level of equivalence classes, independent of the chosen connection.
Finally, note that there is a generalization of $L_\infty$-algebras
to curved $L_\infty$-algebras $(L,\{l_n\}_{n\in \mathbb{N}_0})$,
where one allows an additional zero-th structure map
$l_0 \colon\mathbb{K} \to L[2]$, where $\mathbb{K}$ denotes the ground field.
This corresponds to the curvature $l_0(1)\in L^2$. In this way,
one can generalize the notion of curved Lie algebras $(\liealg{g},R,\D,[\argument{,}\argument])$, where the curvature $R\in \liealg{g}^2$ is closed, and
where $\D^2=[R,\argument]$. In this curved setting, one can speak
of curved Maurer-Cartan elements,
and all the notions from the above (flat) $L_\infty$-algebras generalize
to this setting. For example, the curved Maurer-Cartan equation in
a curved Lie algebra reads
\begin{equation*}
0
=
R +\D \pi + \frac{1}{2}[\pi,\pi].
\end{equation*}
In particular, one can now twist with general
elements of degree one, not just with Maurer-Cartan elements, without
leaving the setting of curved $L_\infty$-algebras. Moreover,
one also obtains a more general notion of curved $L_\infty$-morphisms
\cite{getzler:2018a}, which are no longer coalgebra morphisms as in the
above (flat) setting, but only coalgebra morphisms up to a twist.
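In the curved Lie algebra case, twisting by a \emph{general} element $\pi \in \liealg{g}^1$ can be made explicit (a standard computation, stated here in the conventions above): one sets
\begin{equation*}
R^\pi
=
R + \D\pi + \frac{1}{2}[\pi,\pi],
\qquad
\D^\pi
=
\D + [\pi,\argument],
\qquad
[\argument{,}\argument]^\pi
=
[\argument{,}\argument],
\end{equation*}
and checks $(\D^\pi)^2 = [R^\pi,\argument]$, so that $(\liealg{g},R^\pi,\D^\pi,[\argument{,}\argument])$ is again a curved Lie algebra. In particular, $\pi$ is a curved Maurer-Cartan element if and only if the twisted curvature $R^\pi$ vanishes.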
Curved $L_\infty$-algebras play for example an important role
in Tsygan's conjecture of an equivariant formality in deformation
quantization \cite{tsygan:note}, where the fundamental vector
fields of the Lie group action play the role of the curvature.
The aim of this paper is to give a review of the theory of
curved $L_\infty$-algebras, Maurer-Cartan elements, $L_\infty$-modules,
and their homotopy theories. We mainly collect known results from
the literature, but also include some results that are, to our
knowledge, new or at least folklore knowledge that has not yet been
properly written down. For example, following \cite{kraft:2021a} we show that
$L_\infty$-algebras that are twisted with equivalent Maurer-Cartan
elements are $L_\infty$-isomorphic, which we could only find in the
literature for the case of DGLAs. Moreover, we study the homotopy
theory of curved $L_\infty$-morphisms, where we show in particular
that curved $L_\infty$-morphisms that
are twisted with equivalent Maurer-Cartan elements are homotopic.
This result for DGLAs allowed us in \cite{kraft.schnitzer:2021a:pre}
to prove that Dolgushev's globalization procedure \cite{dolgushev:2005a,
dolgushev:2005b} of the Kontsevich formality \cite{kontsevich:2003a}
with respect to different covariant derivatives yields homotopic
$L_\infty$-quasi-isomorphisms.
The paper is organized as follows: We start in
Section~\ref{sec:Coalgebras} with the construction of cocommutative cofree
conilpotent coalgebras and coderivations on them. This allows a
compact definition of (curved) $L_\infty$-algebras and
$L_\infty$-morphisms in
Section~\ref{sec:DefLinfty} in terms of codifferentials $Q$ on
$\Sym(L[1])$ and coalgebra morphisms commuting with the codifferentials.
In Section~\ref{sec:homotopytransfertheorem} we give explicit formulas for the homotopy transfer theorem, which provides a way to
transfer $L_\infty$-structures along deformation retracts. There
are many formulations and proofs for it, and in our special cases we
use the symmetric tensor trick.
In Section~\ref{sec:MCandEquiv} we introduce the notion of
Maurer-Cartan elements and compare the gauge equivalence
of Maurer-Cartan elements in the setting of DGLAs and the more general
notion of homotopy equivalence in $L_\infty$-algebras. Moreover,
we recall the twisting procedure of curved $L_\infty$-algebras and
of $L_\infty$-morphisms. In Section~\ref{sec:HomotopyTheoryLinftyAlgandMorph}
we introduce the interpretation of $L_\infty$-morphisms as Maurer-Cartan
elements and the notion of homotopic $L_\infty$-morphisms.
We recall the homotopy classification of flat $L_\infty$-algebras
and show that $L_\infty$-morphisms that are twisted with equivalent
Maurer-Cartan elements are homotopic. In particular, we
study curved $L_\infty$-morphisms, which are no longer coalgebra
morphisms, but in some sense coalgebra morphisms up to twist.
They are still compatible with curved Maurer-Cartan elements and
admit a homotopy theory analogous to that of strict $L_\infty$-morphisms in
the flat case. Finally, we recall in
Section~\ref{sec:LinftyModules}
the notion of $L_\infty$-modules over $L_\infty$-algebras,
$L_\infty$-module morphisms between them, and the
corresponding notion of homotopy equivalence.
\textbf{Acknowledgements:}
The authors are grateful to Severin Barmeier, Chiara Esposito and Luca Vitagliano
for many helpful comments and feedback which helped to improve the first version
of this review.
\section{Introduction to Coalgebras}
\label{sec:Coalgebras}
In this first section we want to recall the basic properties related to
graded coalgebras. In view of $L_\infty$-algebras we are particularly interested
in cocommutative conilpotent graded coalgebras.
We mainly follow \cite{esposito:2015a,loday.vallette:2012a,waldmann:2019:note}.
\subsection{Reminder on (Free) Graded Algebras}
Before starting with coalgebras we recall some notions
connected to graded algebras over some commutative unital ring
$\mathbb{K}$, where we always assume $\mathbb{Q} \subseteq \mathbb{K}$.
By grading we mean $\mathbb{Z}$-grading
and we always apply the Koszul sign rule,
e.g. for tensor products of homogeneous morphisms $\phi\colon V^\bullet
\rightarrow \widetilde{V}^{\bullet + \abs{\phi}}$, $\psi \colon W^\bullet
\rightarrow \widetilde{W}^{\bullet + \abs{\psi}}$
between graded $\mathbb{K}$-modules $V^\bullet,\widetilde{V}^\bullet,W^\bullet,
\widetilde{W}^\bullet$:
\begin{equation*}
(\phi \otimes \psi)(v\otimes w)
=
(-1)^{\abs{\psi}\abs{v}} \phi(v)\otimes \psi(w),
\end{equation*}
where $v\in V^{\abs{v}}, w\in W$. Note that graded $\mathbb{K}$-modules with
degree zero morphisms become a symmetric monoidal category with the
graded tensor product and the graded switching map $\tau$. Recall that on homogeneous
elements $v\in V, w\in W$ one has $\tau(v\otimes w) = (-1)^{\abs{v}\abs{w}} w \otimes v$.
However, this monoidal structure is not compatible with the degree shift functor,
where we write $V[i]^k = V^{i+k}$ and $V[i]^\bullet = (\mathbb{K}[i]\otimes V)^\bullet$.
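As a small illustration of the Koszul sign rule (an elementary check), note that the switching map is an involution:
\begin{equation*}
\tau^2(v\otimes w)
=
(-1)^{\abs{v}\abs{w}}\tau(w\otimes v)
=
(-1)^{2\abs{v}\abs{w}}\, v \otimes w
=
v \otimes w.
\end{equation*}
For instance, for $\abs{v}=\abs{w}=1$ one has $\tau(v\otimes w) = -\,w\otimes v$, and similarly $(\phi\otimes\psi)(v\otimes w)$ acquires the sign $(-1)^{\abs{\psi}\abs{v}}$ from moving $\psi$ past $v$.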
By $(A^\bullet,\mu)$ we usually denote a graded algebra with
associative product $\mu$ and by $1_A= 1 \colon \mathbb{K} \rightarrow A$ we
denote a unit. Let $\phi \colon A^\bullet \rightarrow B^\bullet$ be a morphism
of graded algebras, which is necessarily of degree zero. Then a graded derivation
$D \colon A^\bullet \rightarrow B^{\bullet +k}$ of degree $k\in \mathbb{Z}$
along $\phi$ is a $\mathbb{K}$-linear homogeneous map $D$ such that
\begin{equation*}
D \circ \mu_A
=
\mu_B \circ (\phi \otimes D + D \otimes \phi).
\end{equation*}
The tensor product $A\otimes B$ becomes a graded algebra with product
\begin{equation*}
\mu_{A\otimes B}
=
(\mu_A \otimes \mu_B) \circ (\id_A \otimes \tau \otimes \id_B),
\end{equation*}
where $\tau$ is again the graded switching map.
The notion of free objects will be used frequently in the following, whence
we recall the free associative algebra generated by $\mathbb{K}$-modules,
as well as the commutative analogues. We write $\mathrm{T}^k(V) = V^{\otimes k}$ for
$k > 0 $ and $\mathrm{T}^0(V)=\mathbb{K}$ and obtain:
\begin{proposition}
Let $V^\bullet$ be a graded $\mathbb{K}$-module. Then the tensor algebra
$\mathrm{T}^\bullet(V)= \bigoplus_{k=0}^\infty \mathrm{T}^k(V)$ with induced grading from
$V^\bullet$ and canonical inclusion
$\iota \colon V = \mathrm{T}^1(V) \rightarrow \mathrm{T}(V)$ is the free graded associative
unital algebra generated by $V^\bullet$. More precisely, for every
homogeneous $\mathbb{K}$-linear map $\phi \colon V \rightarrow A$ from $V$
to a unital graded associative algebra $A$, there exists a unique algebra
morphism $\Phi \colon \mathrm{T}(V)\rightarrow A$ such that the following diagram
\begin{equation*}
\begin{tikzcd}
\mathrm{T}(V) \arrow[r,"\exists ! \Phi"] & A \\
V \arrow[ru,swap,"\phi"] \arrow[u,"\iota"]&
\end{tikzcd}
\end{equation*}
commutes.
\end{proposition}
Explicitly, the map $\Phi$ is given by
\begin{equation*}
\Phi(1)
=
1_A
\quad \text{ and } \quad
\Phi(v_1\otimes\cdots \otimes v_n)
=
\phi(v_1)\cdots \phi(v_n).
\end{equation*}
In order to construct the free commutative algebra generated by $V^\bullet$, i.e. the
analogue of the above proposition in the commutative setting, we
can consider the (graded) symmetric algebra
\begin{equation}
\Sym(V)
=
\mathrm{T}(V) / \Span\{ x \otimes y - (-1)^{\abs{x}\abs{y}}y \otimes x\}
\end{equation}
with product denoted by $\vee$.
For later use we also recall the definition of the (graded) exterior algebra
\begin{equation}
\Anti(V)
=
\mathrm{T}(V)/\Span\{x \otimes y + (-1)^{\abs{x}\abs{y}} y\otimes x\}
\end{equation}
with product denoted by $\wedge$.
The following sign conventions are also needed:
\begin{definition}[Graded signature, Koszul sign]
\label{def:GradedSigns}
Let $\sigma \in S_n$ be a permutation, $V^\bullet$ a graded $\mathbb{K}$-module,
and $x_1,\dots,x_n$ homogeneous elements of degree $\abs{x_i}$ for $i=1,\dots,n$.
The \emph{Koszul sign} $\epsilon(\sigma)$ is defined by the relation
\begin{equation}
\epsilon(\sigma) x_{\sigma(1)}\vee \cdots \vee x_{\sigma(n)}
=
x_1 \vee \cdots \vee x_n.
\end{equation}
By $\chi(\sigma)= \sign(\sigma)\epsilon(\sigma)$ we denote the \emph{antisymmetric Koszul sign}, i.e.
\begin{equation}
\chi(\sigma) x_{\sigma(1)}\wedge \cdots \wedge x_{\sigma(n)}
=
x_1 \wedge \cdots \wedge x_n.
\end{equation}
\end{definition}
Note that both signs depend on the degrees of the homogeneous elements, i.e.
we should actually write $\epsilon(\sigma)=\epsilon(\sigma,x_1,\dots,x_n)$,
analogously for $\chi(\sigma)$.
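For example (an elementary computation), let $\sigma = (1\,2) \in S_2$ and let $x_1, x_2$ both be of odd degree. Then
\begin{equation*}
x_2 \vee x_1
=
(-1)^{\abs{x_1}\abs{x_2}} x_1 \vee x_2
=
- x_1 \vee x_2
\quad\Longrightarrow\quad
\epsilon(\sigma) = -1,
\qquad
\chi(\sigma) = \sign(\sigma)\,\epsilon(\sigma) = (-1)(-1) = +1,
\end{equation*}
consistent with $x_2 \wedge x_1 = x_1 \wedge x_2$ for odd elements in the graded exterior algebra.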
With these signs one can consider a right action of $S_n$ on
$V^{\otimes n}$ given by the symmetrization
\begin{equation*}
\Symmetrizer_n(v)
=
\frac{1}{n!} \sum_{\sigma \in S_n} v \racts \sigma,
\end{equation*}
where
\begin{equation*}
(v_1\otimes \cdots \otimes v_n) \racts \sigma
=
\epsilon(\sigma) v_{\sigma(1)}\otimes \cdots \otimes v_{\sigma(n)}.
\end{equation*}
It turns out that $\bigoplus_n \image \Symmetrizer_n \cong \Sym(V)$ is indeed the
free associative graded commutative unital algebra generated by $V$.
These free algebras also satisfy a universal property with respect to derivations:
\begin{proposition}
Let $V^\bullet$ be a graded $\mathbb{K}$-module.
\begin{propositionlist}
\item Let $\phi \colon V^\bullet \rightarrow A^\bullet$ be a homogeneous map
of degree zero into a graded associative unital algebra $A$ and let
$\D \colon V^\bullet \rightarrow A^{\bullet +k}$ be a linear map of
degree $k\in \mathbb{Z}$. Then there exists a unique graded derivation
\begin{equation*}
D \colon
\mathrm{T}(V)^\bullet
\rightarrow
A^{\bullet + k}
\end{equation*}
along $\Phi \colon \mathrm{T}(V) \rightarrow A$ such that $D\at{V} = \D$.
\item If $A$ is in addition graded commutative, then there exists a unique graded
derivation
\begin{equation*}
D \colon
\Sym(V)^\bullet
\rightarrow
A^{\bullet + k}
\end{equation*}
along $\Phi \colon \Sym(V) \rightarrow A$ such that $D\at{V} = \D$.
\end{propositionlist}
\end{proposition}
The proof is straightforward: one defines $D$ on homogeneous factors
by
\begin{equation*}
D(v_1\otimes \cdots \otimes v_n)
=
\sum_{r=1}^n (-1)^{k(\abs{v_1}+\cdots+\abs{v_{r-1}})}
\phi(v_1)\cdots \D(v_r)\cdots \phi(v_n)
\end{equation*}
and $D(1)=0$ and notes that in the commutative case
\begin{equation}
\label{eq:SymmIdeal}
I(V)
=
\Span\{ x \otimes y - (-1)^{\abs{x}\abs{y}}y \otimes x\}
\end{equation}
is in the kernel, whence
$D$ passes to the quotient $\Sym(V)$.
\subsection{Definition and First Properties of Graded Coalgebras}
\label{sec:gradedcoalgebras}
Now we want to recall the analogous constructions for graded coalgebras.
We start with the definition of graded coalgebras and some basic properties.
\begin{definition}[Graded Coalgebra]
A \emph{graded coassociative coalgebra} over $\mathbb{K}$ is a
graded $\mathbb{K}$-module $C^\bullet$
equipped with a binary co-operation, i.e. a linear map
\begin{equation}
\Delta \colon
C^\bullet
\longrightarrow
C^\bullet \otimes C^\bullet
\end{equation}
of degree zero that is \emph{coassociative}, i.e.
\begin{equation}
(\Delta \otimes \id) \circ \Delta
=
(\id \otimes \Delta) \circ \Delta.
\end{equation}
\end{definition}
The map $\Delta$ is called \emph{coproduct} and we have
\begin{equation}
\Delta(C^i)
\subset
\bigoplus_{j+k=i} C^j \otimes C^k.
\end{equation}
To simplify the notation we use \emph{Sweedler's notation}
\begin{equation*}
\Delta^n(x)
=
\sum x_{(1)} \otimes \cdots \otimes x_{(n+1)}
=
x_{(1)} \otimes \cdots \otimes x_{(n+1)}
\in C^{\otimes n+1},
\end{equation*}
where $\Delta^n = (\Delta \otimes \id \otimes \cdots \otimes \id)\circ \Delta^{n-1}$
denotes the iterated coproduct $\Delta^n \colon C \rightarrow C^{\otimes n+1}$ with
$\Delta^1 = \Delta$ and $\Delta^0 = \id$. Note that by the
coassociativity we have
\begin{equation}
\Delta^n
=
(\id\otimes \cdots \otimes \id \otimes \Delta \otimes \id \otimes \cdots
\otimes \id) \circ \Delta^{n-1},
\end{equation}
whence the notation makes sense.
The coassociative coalgebra $C$ is called \emph{counital} if it is
equipped with a \emph{counit}, i.e. a linear map of degree zero with
\begin{equation}
\epsilon \colon
C
\longrightarrow
\mathbb{K},
\quad \text{ satisfying }\quad
(\epsilon \otimes \id) \circ \Delta
=
\id
=
(\id \otimes \epsilon)\circ \Delta.
\end{equation}
For example, $\mathbb{K}$ is itself a coassociative coalgebra with
$\Delta(1)=1 \otimes 1$.
A \emph{morphism} of graded coalgebras $f \colon C \rightarrow C'$ is a linear
map of degree zero that commutes with the coproducts and in the counital case also
with the counits, i.e.
\begin{equation}
(f\otimes f) \circ \Delta_C
=
\Delta_{C'} \circ f,
\quad \quad
\epsilon_{C'} \circ f
=
\epsilon_C.
\end{equation}
The graded coalgebra $(C,\Delta)$ is called cocommutative if
$\tau \circ \Delta=\Delta$, where $\tau$ denotes again the graded switching map.
Denoting the graded dual by $(C^*)^\bullet = \Hom_\mathbb{K}^\bullet
(C,\mathbb{K}) $, one can
check that the inclusion $C^* \otimes C^* \rightarrow (C\otimes C)^*$ always induces
an algebra structure on $(C^*)^\bullet$. The converse is generally not true;
e.g., if $\mathbb{K}$ is a field, then the converse holds only for
finite-dimensional vector spaces over $\mathbb{K}$.
Another useful interplay of coalgebras and algebras is the convolution algebra:
\begin{definition}[Convolution product]
\label{def:ConvProduct}
Let $(A^\bullet,\mu,1)$ be a graded associative unital algebra over
$\mathbb{K}$ and
let $(C^\bullet,\Delta,\epsilon)$ be a graded coassociative counital coalgebra over
$\mathbb{K}$. Then one defines for $\phi,\psi \in \Hom^\bullet_\mathbb{K}(C,A)$
their \emph{convolution} by
\begin{equation}
\label{eq:convolution}
\phi \star \psi
=
\mu \circ (\phi \otimes\psi) \circ \Delta.
\end{equation}
\end{definition}
One can directly check that $(\Hom^\bullet_\mathbb{K}(C,A), \star,1\epsilon)$
is a graded associative unital algebra, see e.g.
\cite[Proposition~1.6.1]{loday.vallette:2012a}.
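For instance, the unit axiom can be verified in one line using Sweedler's notation (an elementary check):
\begin{equation*}
(\phi \star 1\epsilon)(c)
=
\mu \circ (\phi \otimes 1\epsilon) \circ \Delta(c)
=
\phi(c_{(1)})\,\epsilon(c_{(2)})
=
\phi\bigl((\id \otimes \epsilon)\Delta(c)\bigr)
=
\phi(c),
\end{equation*}
and analogously $1\epsilon \star \phi = \phi$; associativity of $\star$ follows from the coassociativity of $\Delta$ together with the associativity of $\mu$.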
Dually to ideals and subalgebras of algebras one can consider
coideals and subcoalgebras.
\begin{definition}[Coideal and Subcoalgebra]
Let $(C^\bullet,\Delta)$ be a graded coalgebra over $\mathbb{K}$.
\begin{definitionlist}
\item A graded subspace $I^\bullet \subseteq C^\bullet$ is called \emph{coideal}
if
\begin{equation}
\label{eq:coideal}
\Delta(I)
\subseteq
I \otimes C + C \otimes I
\quad \text{ and } \quad
I \subseteq \ker \epsilon.
\end{equation}
\item A graded subspace $U^\bullet \subseteq C^\bullet$ is called
\emph{subcoalgebra} if
\begin{equation}
\label{eq:subcoalgebra}
\Delta(U) \subseteq U \otimes U.
\end{equation}
\end{definitionlist}
\end{definition}
Note that if $\mathbb{K}$ is just a ring and not a field, then $I\otimes C$ might
not be mapped injectively into $C\otimes C$ due to some torsion effects. We
always assume that our modules have enough flatness properties and ignore these
subtleties.
As expected, the image of a coalgebra morphism is a subcoalgebra and the
kernel is a coideal. We can quotient by coideals and every subcoalgebra $U$ with
$U \subseteq \ker \epsilon$ is automatically a coideal.
There are special elements in coalgebras:
\begin{definition}[Group-like elements]
Let $(C^\bullet,\Delta,\epsilon)$ be a graded coalgebra. An element $g\in C$ is
called \emph{group-like} if
\begin{equation}
\label{eq:grouplikeElement}
\Delta(g)
=
g\otimes g
\quad \quad \text{ and } \quad \quad
\epsilon(g) = 1.
\end{equation}
\end{definition}
The second condition just excludes the trivial case $g=0$: indeed,
$\Delta(g)=g\otimes g$ implies $g = \epsilon(g)g$, since we assume torsion-freeness. Note that a morphism
of coalgebras maps group-like elements to group-like elements.
In view of $L_\infty$-algebras, coderivations will play an important role.
\begin{definition}[Coderivation]
Let $\Phi \colon C^\bullet \rightarrow E^\bullet$ be a morphism of graded
coalgebras. A \emph{graded coderivation} along $\Phi$ is
a linear map $D\colon C^\bullet \rightarrow E^{\bullet + k}$ of degree $k$ such that
\begin{equation}
\Delta\circ D
=
(D \otimes \Phi +\Phi \otimes D) \circ \Delta.
\end{equation}
\end{definition}
One can check that the set of coderivations of $C$ along the identity is a graded
Lie subalgebra of $\End^\bullet_\mathbb{K}(C)$.
\begin{proposition}
\label{prop:epsilonDzero}
A coderivation $D \colon C^\bullet \rightarrow E^{\bullet +k}$ along
$\Phi$ satisfies $\epsilon \circ D = 0$.
\end{proposition}
\begin{proof}
We compute
\begin{align*}
D(x)
& =
(\id\otimes \epsilon)\circ \Delta \circ D(x)
=
D(x) \pm \Phi(x_{(1)}) \epsilon(D(x_{(2)}))
\end{align*}
and thus applying $\epsilon$ gives the result.
\end{proof}
There are examples of coalgebras with many group-like elements, e.g.
group coalgebras. However, we are mainly interested in so-called coaugmented
coalgebras, which carry one specific group-like element $1$.
\begin{definition}
A counital graded coalgebra $(C,\Delta,\epsilon)$ is called \emph{coaugmented}
if there exists a coalgebra morphism $u \colon \mathbb{K} \rightarrow C$.
\end{definition}
The element $1=u(1)$ is indeed group-like since we know $\Delta \circ u =
(u \otimes u)\circ \Delta$ and we obtain a non-full
subcategory of coaugmented coalgebras, where the morphisms satisfy $\Phi(1)=1$.
Moreover, the definition implies $\id_\mathbb{K} = \epsilon \circ u$ and we get
\begin{equation}
C
=
\cc{C} \oplus \mathbb{K}1
=
\ker \epsilon \oplus \mathbb{K}1
\end{equation}
via $c \mapsto (c - \epsilon(c)1) + \epsilon(c)1$.
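Indeed, the first summand lies in $\ker\epsilon$ by a one-line check:
\begin{equation*}
\epsilon\bigl(c - \epsilon(c)1\bigr)
=
\epsilon(c) - \epsilon(c)\,\epsilon(1)
=
0,
\end{equation*}
since $\epsilon(1) = \epsilon(u(1)) = 1$, so the above decomposition is well defined.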
In this case one can define a \emph{reduced coproduct}
$\cc{\Delta}\colon \cc{C} \rightarrow \cc{C}\otimes \cc{C}$ by
\begin{equation}
\cc{\Delta}(x)
=
\Delta(x) - 1 \otimes x - x \otimes 1
\end{equation}
and we directly check for $x \in \cc{C}$
\begin{equation*}
(\epsilon \otimes \id) (\cc{\Delta}(x))
=
x - \epsilon(1)x - \epsilon(x)1
=
0
=
(\id \otimes \epsilon )(\cc{\Delta}(x)).
\end{equation*}
An element $x\in C$ is called \emph{primitive} if
\begin{equation}
\Delta(x)
=
x \otimes 1 + 1 \otimes x,
\end{equation}
or equivalently $x\in \cc{C}$ with $\cc{\Delta}(x)=0$. The set of primitive elements is
denoted by $\mathrm{Prim}(C)$.
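A standard example: since $\mathbb{Q} \subseteq \mathbb{K}$, the polynomial coalgebra $\mathbb{K}[x]$ with
\begin{equation*}
\Delta(x^n)
=
\sum_{k=0}^{n}\binom{n}{k}\, x^k \otimes x^{n-k},
\qquad
\epsilon(x^n) = \delta_{n,0},
\qquad
1 = x^0
\end{equation*}
is coaugmented with group-like element $1$, and
$\cc{\Delta}(x^n) = \sum_{k=1}^{n-1}\binom{n}{k}\, x^k \otimes x^{n-k}$
vanishes if and only if $n=1$, so that $\mathrm{Prim}(\mathbb{K}[x]) = \mathbb{K}x$.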
\begin{proposition}
The map $\cc{\Delta} \colon C \rightarrow C \otimes C$ is coassociative
and restricts to $\cc{\Delta} \colon \cc{C}\to \cc{C}\otimes\cc{C}$.
Thus $(\cc{C},\cc{\Delta})$ becomes a coalgebra without counit, the so-called
\emph{reduced coalgebra}.
\end{proposition}
In particular, a map $\Phi\colon (C,\Delta,\epsilon,1) \rightarrow (E,\Delta,\epsilon,1)$
is a morphism of coaugmented coalgebras, i.e. a coalgebra morphism with $\Phi(1)=1$,
if and only if
\begin{equation*}
( \Phi \otimes \Phi )\circ \cc{\Delta}
=
\cc{\Delta} \circ\Phi,
\end{equation*}
i.e. if and only if $\Phi$ induces a coalgebra morphism $\Phi \colon (\cc{C},\cc{\Delta})
\rightarrow (\cc{E},\cc{\Delta})$. The analogue holds for coderivations: a linear map
$D \colon C^\bullet \rightarrow E^{\bullet +k}$ with $D(1)=0$ is a coderivation
along $\Phi$ if and only if
\begin{equation*}
(D\otimes \Phi + \Phi \otimes D) \circ \cc{\Delta}
=
\cc{\Delta} \circ D.
\end{equation*}
A coaugmented graded coalgebra $(C,\Delta,\epsilon,1)$ is said to be
\emph{conilpotent} if for each $x \in \cc{C}$ there exists
$n \in \mathbb{N}$ such that $\cc{\Delta}^m(x)=0$ for all $m\geq n$. This is
equivalent to the \emph{coradical filtration} being exhaustive,
i.e.
\begin{equation*}
C^\bullet
=
\bigcup_{k\in\mathbb{Z}} F_k C^\bullet.
\end{equation*}
Here $F_0C = \mathbb{K}1$ and
\begin{equation*}
F_kC
=
\mathbb{K}1 \oplus \{c \in \cc{C} \mid \cc{\Delta}^{k}c=0\}.
\end{equation*}
In particular, $F_1C = \mathbb{K}1 \oplus \mathrm{Prim}(C)$. Note that
this filtration is canonical and that morphisms of coaugmented graded coalgebras
are automatically compatible with this filtration. Moreover, one can check that
in the case of conilpotent coalgebras the group-like element is unique.
\begin{proposition}
Let $(C^\bullet,\Delta,\epsilon,1)$ be a conilpotent coaugmented graded coalgebra
over a field $\mathbb{K}$. Then $1$ is the only group-like element.
\end{proposition}
\begin{proof}
Let $g\in C$ be a group-like element and set $c = g - 1 \in \cc{C}$ since
$\epsilon(g)=1$. Then
\begin{equation*}
\Delta(g)
=
\Delta(1+c)
=
1\otimes 1 + \Delta(c)
\quad \text{ and }\quad
\Delta(g)
=
(1+c)\otimes (1+c)
\end{equation*}
imply
\begin{equation*}
\cc{\Delta}^k(c)
=
c\otimes \cdots \otimes c
\end{equation*}
for any $k$. By the conilpotency $\cc{\Delta}^k(c)=0$ for some $k$, and thus $c=0$.
\end{proof}
We want to list some other immediate features of coaugmented coalgebras.
\begin{lemma}
Let $(C^\bullet,\Delta,\epsilon,1)$ be a coaugmented coalgebra and let
$V^\bullet$ be a graded $\mathbb{K}$-module. Then
\begin{equation*}
(\phi_1\otimes \cdots \otimes \phi_n)\circ \Delta^{n-1}
=
(\phi_1\otimes \cdots \otimes \phi_n)\circ \cc{\Delta}^{n-1}
\end{equation*}
for all $\mathbb{K}$-linear maps $\phi_i \colon C \rightarrow V$ with
$\phi_i(1)=0$.
\end{lemma}
This implies for the convolution the following simplification:
\begin{lemma}
Let $(C^\bullet,\Delta,\epsilon,1)$ be a coaugmented coalgebra and let
$(A^\bullet,\mu,1)$ be an algebra. Then one has
\begin{equation}
\label{eq:CoaugmentedConvolution}
\phi_1\star \cdots\star \phi_n
=
\mu^{n-1} \circ(\phi_1\otimes\cdots\otimes \phi_n)\circ \cc{\Delta}^{n-1}
\end{equation}
for all $\mathbb{K}$-linear maps $\phi_i \colon C \rightarrow A$ with
$\phi_i(1)=0$.
\end{lemma}
\begin{remark}
This lemma motivates the following convention. Whenever we have a
$\mathbb{K}$-linear map $\phi\colon \cc{C}^\bullet \rightarrow V^\bullet$
we understand it to be extended to $C$ by zero.
\end{remark}
These observations allow us to define power series in $\Hom_\mathbb{K}(\cc{C},A)$:
\begin{lemma}
Let $(C^\bullet,\Delta,\epsilon,1)$ be a conilpotent coaugmented coalgebra and let
$(A^\bullet,\mu,1)$ be a unital algebra. For $\phi\in \Hom_\mathbb{K}(\cc{C},A)$ the
map
\begin{equation}
\label{eq:FormalSeriesinConvolution}
\mathbb{K}[[x]] \ni a
=
\sum_{n=0}^\infty a_n x^n
\longmapsto
a_\star(\phi)
=
\sum_{n=0}^\infty a_n \phi^{\star n} \in
\Hom_\mathbb{K}(C,A)
\end{equation}
is, by conilpotency, a well-defined unital algebra morphism, where
$\phi^{\star 0} = 1\epsilon$.
\end{lemma}
\begin{example}
This allows us to define for $\phi \in \Hom_\mathbb{K}(\cc{C},A)$
\begin{equation}
\label{eq:ConvolutionExpandLog}
\log_\star(1\epsilon + \phi)
=
\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\phi^{\star n}
\quad \quad \text{ and } \quad \quad
\exp_\star(\phi)
=
\sum_{n=0}^\infty \frac{1}{n!} \phi^{\star n}
\end{equation}
and they are inverse to each other by the usual combinatorial formulas.
\end{example}
A nice consequence is that in the setting of conilpotent coaugmented
coalgebras we can easily construct the cofree objects.
\subsection{Cofree Coalgebras}
We state at first the universal property that the cofree conilpotent
coalgebra should satisfy.
\begin{definition}[Cofree conilpotent coalgebra]
Let $V$ be a graded $\mathbb{K}$-module and let $C$ be a conilpotent
graded coassociative coalgebra.
The \emph{cofree conilpotent coassociative coalgebra} over $V$ is a
conilpotent coassociative coalgebra $\mathcal{F}^c(V)$ equipped with
a linear map $p \colon \mathcal{F}^c(V) \rightarrow V$ with
$p(1) = 0$ such that the following universal condition holds
for all $C$: Any linear map $\phi \colon C \rightarrow V$ with $\phi(1)=0$
extends uniquely to a coaugmented coalgebra morphism $\Phi\colon C \rightarrow
\mathcal{F}^c(V)$ such that the following diagram
\begin{equation*}
\begin{tikzcd}
C \arrow[d,swap,"\exists ! \Phi"] \arrow[rd,"\phi"] & \\
\mathcal{F}^c(V) \arrow[r,"p"] & V
\end{tikzcd}
\end{equation*}
commutes.
\end{definition}
In other words, $\mathcal{F}^c$ is right adjoint to the forgetful functor.
Moreover, recall that conilpotent coalgebras are by definition
coaugmented. Now we want to construct the cofree coalgebra, where the
uniqueness up to isomorphism follows as usual for universal properties.
Let $V^\bullet$ be a graded $\mathbb{K}$-module, then the
\emph{tensor coalgebra} $(\mathrm{T}^c(V),\Delta)$
is given by the tensor module $\mathrm{T}^c(V) = \mathrm{T}(V)
= \mathbb{K}1 \oplus V \oplus V^{\otimes 2} \oplus \cdots$ with
\emph{deconcatenation coproduct}
\begin{equation}
\Delta(v_1\cdots v_n)
=
\sum_{i=0}^n( v_1 \cdots v_i )\otimes (v_{i+1}\cdots v_n) \in
\mathrm{T}(V) \otimes \mathrm{T}(V)
\quad \text{and} \quad
\Delta(1) = 1 \otimes 1,
\end{equation}
where $v_1,\dots,v_n \in V$. The counit $\epsilon\colon \mathrm{T}^c(V)\rightarrow \mathbb{K}$
is the identity on $\mathbb{K}$ and zero otherwise, and the coalgebra
turns out to be coassociative and counital. Moreover, it is coaugmented
via the inclusion $\iota \colon \mathbb{K} \rightarrow \mathrm{T}^c(V)$ and thus
\begin{equation}
\mathrm{T}^c(V)
\cong \cc{T}^c(V) \oplus \mathbb{K}1
\end{equation}
with reduced tensor module $\cc{T}^c(V) = \bigoplus_{i \geq 1} V^{\otimes i}$.
The reduced coproduct is given by
\begin{equation}
\cc{\Delta}(v_1\cdots v_n)
=
\sum_{i=1}^{n-1} v_1 \cdots v_i \otimes v_{i+1}\cdots v_n,
\quad \text{in particular}\quad
\cc{\Delta}(v) = 0.
\end{equation}
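For illustration, a short computation on a word of length three, $v_1,v_2,v_3 \in V$:

```latex
\begin{align*}
\cc{\Delta}(v_1v_2v_3)
&=
v_1 \otimes v_2v_3 + v_1v_2 \otimes v_3, \\
\cc{\Delta}^2(v_1v_2v_3)
&=
v_1 \otimes v_2 \otimes v_3,
\qquad
\cc{\Delta}^3(v_1v_2v_3) = 0.
\end{align*}
```

In general, $\cc{\Delta}^n$ vanishes on all words of length at most $n$, which makes the conilpotency explicit.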
Thus $(\mathrm{T}^c(V),\Delta,\epsilon,1)$ is conilpotent and
the construction is functorial in the following sense:
\begin{lemma}
Let $\phi \colon V^\bullet \rightarrow W^\bullet$ be a homogeneous
$\mathbb{K}$-linear map of degree zero. Then the induced map
$\Phi = \bigoplus_{n \geq 0} \phi^{\otimes n} \colon \mathrm{T}(V) \rightarrow
\mathrm{T}(W)$ is a morphism of coaugmented coalgebras.
\end{lemma}
We also have the projection $\pr_V \colon \mathrm{T}^c(V) \rightarrow V$ that is the
identity on $V$ and $0$ elsewhere, giving the following useful property:
\begin{lemma}
\label{lemma:projectiononTn}
Let $V^\bullet$ be a graded $\mathbb{K}$-module. Then one has for all
$n\in \mathbb{N}$
\begin{equation}
\label{eq:identitycofreecoalg}
\pr_{\mathrm{T}^n(V)}
=
\mu^{n-1}\circ(\pr_V \otimes \cdots \otimes \pr_V)\circ \Delta^{n-1}
=
\mu^{n-1}\circ(\pr_V \otimes \cdots \otimes \pr_V)\circ \cc{\Delta}^{n-1},
\end{equation}
where $\mu=\otimes$ denotes the tensor product.
\end{lemma}
Thus we can write the projection $\pr_{\mathrm{T}^n(V)} = \pr_V^{\star n}$ as
convolution, which also holds for $n=0$ since $\pr_{\mathrm{T}^0(V)} = 1 \epsilon$, and
we write $x_n = \pr_{\mathrm{T}^n(V)}x$ for $x \in \mathrm{T}(V)$.
We can show that we constructed indeed the cofree conilpotent coalgebra,
compare \cite[Proposition~1.2.1]{loday.vallette:2012a}:
\begin{theorem}[Cofree conilpotent coalgebra]
\label{thm:CofreeConilpotentCoalg}
Let $V^\bullet$ be a graded module over $\mathbb{K}$ and let
$(C^\bullet,\Delta,\epsilon,1)$ be a conilpotent graded coassociative
coalgebra over $\mathbb{K}$.
\begin{theoremlist}
\item The tensor coalgebra $(\mathrm{T}^c(V),\Delta,\epsilon,1)$ is cofree and cogenerated
by $V$ in the category of conilpotent coalgebras. Explicitly, for every
homogeneous $\mathbb{K}$-linear map $\phi \colon \cc{C} \rightarrow V$ of
degree zero extended to $C$ by $\phi(1)=0$ there exists a unique
counital coalgebra morphism $\Phi \colon C \rightarrow \mathrm{T}^c(V)$ such that
$\pr_V \circ \Phi = \phi$.
\item If in addition $\D \colon \cc{C}^\bullet \rightarrow V^{\bullet +k}$ is
a homogeneous $\mathbb{K}$-linear map of degree $k$ extended to $C$ by
$\D(1)=0$, then there exists a unique coderivation $D \colon
C^\bullet \rightarrow \mathrm{T}^c(V)^{\bullet+k}$ along $\Phi$ vanishing on $1$
such that $\pr_V \circ D = \D$.
\end{theoremlist}
\end{theorem}
\begin{proof}
The morphism $\Phi \colon C \rightarrow \mathrm{T}^c(V)$ has to satisfy for $x\in \cc{C}$
\begin{itemize}
\item $\Phi(1) = 1$ since $\Phi$ maps the group-like element to the group-like
element,
\item $\Phi(x)_0 = 0$ by counitality,
\item $\Phi(x)_1 = \phi(x)$ by the universal property,
\item $\Phi(x)_n = \sum \phi(x_{(1)}) \otimes \cdots \otimes \phi(x_{(n)})$
by the coalgebra morphism property, since
\begin{equation*}
\pr_{\mathrm{T}^n(V)}\Phi(x)
=
\mu^{n-1} \circ(\phi \otimes \cdots \otimes \phi) \circ \cc{\Delta}^{n-1}(x).
\end{equation*}
\end{itemize}
As $C$ is conilpotent, there is only a finite number of nontrivial
$\Phi(x)_n$ and $\Phi(x)=\sum_n \Phi(x)_n$ gives a well-defined linear map
of degree zero. This shows the uniqueness and a direct computation
shows that $\Phi$ is indeed a well-defined coalgebra morphism. For the
second part we show the uniqueness by the same arguments: Suppose $D$ is
such a coderivation, then since $\epsilon \circ D = 0$ by
Proposition~\ref{prop:epsilonDzero} we know
\begin{equation*}
D(c)
=
\sum_{n=1}^\infty D(c)_n
\end{equation*}
with $D(c)_1 = \D(c)$ for $c\in C$. For $n>1$ we get with the Leibniz rule
\begin{align*}
D(c)_n
& =
\mu^{n-1}\circ(\pr_V \otimes \cdots \otimes \pr_V) \circ
\left(\sum_{r=0}^{n-1} \Phi^{\otimes r} \otimes D \otimes \Phi^{\otimes (n-1-r)}\right)
\circ \Delta^{n-1} (c) \\
& =
\mu^{n-1}\circ
\left(\sum_{r=0}^{n-1} \phi^{\otimes r} \otimes \D \otimes \phi^{\otimes (n-1-r)}\right)
\circ \cc{\Delta}^{n-1} (c).
\end{align*}
This shows that necessarily $D(1)=0$ and a straightforward computation shows that
this is indeed a well-defined coderivation.
\end{proof}
\begin{corollary}
For the coalgebra morphism $\Phi$ and the coderivation $D$ along $\Phi$
from the above theorem one has
\begin{equation*}
\Phi
=
1\epsilon + \phi + \phi \star \phi + \dots
=
\left(\frac{1}{1-x}\right)_{\!\star}(\phi)
\end{equation*}
and
\begin{equation*}
D
=
\D + \phi \star \D + \D \star \phi + \phi \star \D \star\phi + \dots
=
\Phi \star \D \star \Phi.
\end{equation*}
\end{corollary}
For $C = \mathrm{T}^c(V)$ itself and $\phi = \pr_V$, i.e. $\Phi = \id$,
we get the following nice property.
\begin{corollary}
\label{cor:coderivationcogenerators}
The coderivations of $\mathrm{T}^c(V)$ vanishing on $1$ form a Lie subalgebra of the
endomorphisms. This Lie algebra is in bijection with $\Hom^\bullet_\mathbb{K}(
\cc{\mathrm{T}^c(V)},V)$ via
\begin{equation*}
\Hom^\bullet_\mathbb{K}(\cc{\mathrm{T}^c(V)},V) \ni
\D
\longmapsto
D = \id \star \D \star \id \in
\mathrm{CoDer}^\bullet_0(\mathrm{T}^c(V)).
\end{equation*}
\end{corollary}
Let now $D \in \mathrm{CoDer}^\bullet_0(\mathrm{T}^c(V))$, then the above corollary implies
that it is completely determined by its projection $\pr_V \circ D = \D$.
Here one has
\begin{equation*}
\D
=
\sum_{n=1}^\infty \D_n
\quad \quad \text{ with } \quad \quad
\D_n = \pr_V \circ D \circ \pr_{\mathrm{T}^n(V)}.
\end{equation*}
These maps $\D_n \colon \mathrm{T}^n(V) \rightarrow V$ are called \emph{Taylor coefficients}
of $D$.
\begin{remark}[$A_\infty$-structures]
\label{rem:Ainftystructures}
By the above corollary we have the following isomorphism
\begin{equation*}
\mathrm{CoDer}^\bullet_0(\mathrm{T}^c(V))
\cong
\Hom_\mathbb{K}^\bullet(\cc{\mathrm{T}^c(V)},V).
\end{equation*}
We can use it to induce on $\Hom_\mathbb{K}^\bullet(\cc{\mathrm{T}^c(V)},V)$
a Lie algebra structure. The induced bracket is just the Gerstenhaber bracket
$[\argument{,}\argument]_G$.
In particular, for $V = A[1]$, where $(A,\mu)$ is an associative algebra,
one gets on the right hand side the space of Hochschild cochains, and the
product $\mu$ of $A$ corresponds to a coderivation
on $\mathrm{T}^c(A[1])$. The associativity of $\mu$ is equivalent to $[\mu,\mu]_G=0$, i.e.
the induced coderivation is even a codifferential.
More generally, for a graded module $A$,
a codifferential $D$ on $\mathrm{T}^c(A[1])$ of degree $+1$ is equivalent to a
Maurer-Cartan element in $(\Hom^\bullet_\mathbb{K}(\cc{\mathrm{T}^c(A[1])},A[1]),0,
[\argument{,}\argument]_G)$. This is called \emph{$A_\infty$-structure} on $A$,
compare e.g. \cite[Chapter~9]{loday.vallette:2012a},
and it is determined by a sequence of maps $\mu_n \colon
\mathrm{T}^n(A[1]) \rightarrow A[2]$, where $n\geq 1$, with
\begin{equation*}
\sum_{q=1}^n \sum_{j=0}^{n-q}\pm
\mu_{n-q+1}(a_1,\dots, a_j,\mu_q(a_{j+1},\dots,a_{j+q}),a_{j+q+1},\dots,a_n)
=
0
\end{equation*}
because of $D^2=0$. In the case where we also allow $D(1)\neq 0$ for a
coderivation on $\mathrm{T}^c(A[1])$
we get curved $A_\infty$-algebras with $\mu_0 \neq 0$.
In operadic terms, this definition of $A_\infty$-algebras is motivated by the
fact that the Koszul dual of the operad encoding associative
algebras is the cooperad encoding coassociative coalgebras.
\end{remark}
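Spelled out in the lowest orders (with the Koszul signs suppressed as $\pm$, since conventions vary), the relation $D^2=0$ yields:

```latex
\begin{align*}
&\mu_1(\mu_1(a_1)) = 0,\\
&\mu_1(\mu_2(a_1,a_2)) \pm \mu_2(\mu_1(a_1),a_2) \pm \mu_2(a_1,\mu_1(a_2)) = 0,\\
&\mu_2(\mu_2(a_1,a_2),a_3) \pm \mu_2(a_1,\mu_2(a_2,a_3))\\
&\qquad
\pm \mu_1(\mu_3(a_1,a_2,a_3))
\pm \mu_3(\mu_1(a_1),a_2,a_3)
\pm \mu_3(a_1,\mu_1(a_2),a_3)
\pm \mu_3(a_1,a_2,\mu_1(a_3)) = 0.
\end{align*}
```

Thus $\mu_1$ is a differential, $\mu_2$ satisfies a Leibniz rule with respect to $\mu_1$, and $\mu_2$ is associative up to the homotopy $\mu_3$.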
Now we want to conclude this introductory section with the construction
of the cofree cocommutative conilpotent coalgebras. From the point of
$L_\infty$-algebras they are of great interest since they are the structures
which one uses to define $L_\infty$-structures. The abstract reason for this
is that the Koszul dual of the Lie operad is the cooperad encoding
cocommutative coalgebras, see e.g. \cite[Chapter~10]{loday.vallette:2012a} for
the general setting.
Consider again a graded $\mathbb{K}$-module $V^\bullet$. We can use
the tensor algebra $\mathrm{T}(V)$ of $V$ to define a new coproduct, different from the
deconcatenation coproduct $\Delta$ from above. Explicitly, we equip
$\mathrm{T}(V)$ with the structure of a bialgebra, i.e. we construct the new
coproduct as an algebra morphism $\Delta_\sh \colon \mathrm{T}(V) \rightarrow \mathrm{T}(V) \otimes \mathrm{T}(V)$.
Since $\mathrm{T}(V)$ is the free algebra, we specify $\Delta_\sh $ on the generators by
\begin{equation*}
\Delta_\sh (v)
=
v\otimes 1 + 1 \otimes v
\end{equation*}
for $v\in V$ and $\Delta_\sh (1) = 1 \otimes 1$. One can check with the signs
from Definition~\ref{def:GradedSigns} that we get
\begin{equation}
\label{eq:cocommutativegradedcoproduct}
\Delta_\sh (v_1 \cdots v_n)
=
\sum_{k=0}^{n} \sum_{\sigma \in Sh(k,n-k)} \epsilon(\sigma)
(v_{\sigma(1)} \cdots v_{\sigma(k)}) \otimes
(v_{\sigma(k+1)} \cdots v_{\sigma(n)}).
\end{equation}
Here $Sh(k,n-k)\subset S_n$ denotes the set of $(k,n-k)$-\emph{shuffles},
i.e. $\sigma(1) < \cdots < \sigma(k)$ and
$\sigma(k+1) < \cdots < \sigma(n)$.
We set $Sh(0,n)=Sh(n,0)=\{\id\}$,
and the above coproduct is well-defined because
of the decomposition $S_n = Sh(k,n-k) \cdot (S_k \times S_{n-k})$. Moreover,
$\Delta_\sh $ is coassociative, counital and graded cocommutative with respect
to the usual counit $\epsilon \colon \mathrm{T}(V) \rightarrow \mathbb{K}$ and we call
it \emph{shuffle coproduct}.
Since $\Delta_\sh $ is an algebra morphism, it is sufficient to show all these claims
on generators.
In particular, we can again consider the reduced coproduct
\begin{equation}
\label{eq:redcocommutativegradedcoproduct}
\cc{\Delta_\sh }(v_1 \cdots v_n)
=
\sum_{k=1}^{n-1} \sum_{\sigma \in Sh(k,n-k)} \epsilon(\sigma)
(v_{\sigma(1)} \cdots v_{\sigma(k)}) \otimes
(v_{\sigma(k+1)} \cdots v_{\sigma(n)})
\end{equation}
and as it splits the tensor factors without using the unit $1$ we get again
\begin{equation*}
\cc{\Delta_\sh }^n(v_1\cdots v_n)
=
0.
\end{equation*}
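As a quick check of \eqref{eq:cocommutativegradedcoproduct} for homogeneous $v,w\in V$:

```latex
\begin{equation*}
\Delta_\sh(vw)
=
vw \otimes 1
+ v \otimes w
+ (-1)^{\abs{v}\abs{w}}\, w \otimes v
+ 1 \otimes vw,
\end{equation*}
```

so $\cc{\Delta_\sh}(vw) = v\otimes w + (-1)^{\abs{v}\abs{w}}\, w\otimes v$, which is manifestly graded cocommutative, and indeed $\cc{\Delta_\sh}^2(vw) = 0$.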
\begin{proposition}
The tensor algebra $(\mathrm{T}(V),\Delta_\sh ,\epsilon,1)$ is a conilpotent
cocommutative coalgebra. In fact, it is even a bialgebra with respect to
the usual tensor product $\mu = \otimes$ and the usual unit $1$.
\end{proposition}
In analogy to Lemma~\ref{lemma:projectiononTn} we get the following result:
\begin{lemma}
\label{lemma:projectiononTnCocom}
Let $V^\bullet$ be a graded $\mathbb{K}$-module. For $n\in \mathbb{N}$ one
has
\begin{equation}
\label{eq:projectiononTnCocom1}
(\pr_V\otimes \cdots \otimes \pr_V)
\circ \Delta_\sh ^{n-1} (v_1\cdots v_n)
=
\sum_{\sigma\in S_n}
\epsilon(\sigma) v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(n)}.
\end{equation}
\end{lemma}
This is not exactly the property we would like to have since the right
hand side is not the tensor we started with. Recall that this property was exactly
the statement of Lemma~\ref{lemma:projectiononTn} that
we used extensively in Theorem~\ref{thm:CofreeConilpotentCoalg} to show
that $(\mathrm{T}(V),\Delta)$ is the cofree conilpotent coalgebra. However,
we see that the right hand side of \eqref{eq:projectiononTnCocom1} is
$n! \Symmetrizer_n(v_1\otimes \cdots \otimes v_n)$, whence we would get
the desired equation if we pass from $\mathrm{T}(V)$ to $\Sym(V)$. This is indeed possible.
\begin{lemma}
Let $V^\bullet$ be a graded $\mathbb{K}$-module. Then the ideal $I(V) \subseteq \mathrm{T}(V)$
from \eqref{eq:SymmIdeal} is a coideal with respect to $\Delta_\sh$ and $\epsilon$.
\end{lemma}
\begin{proof}
The property $I(V) \subseteq \ker\epsilon$ is obvious, and for $v,w\in V^\bullet$
we have
\begin{equation*}
\Delta_\sh( vw - (-1)^{\abs{v}\abs{w}} wv)
=
1 \otimes (vw - (-1)^{\abs{v}\abs{w}}wv) +
(vw - (-1)^{\abs{v}\abs{w}} wv) \otimes 1 \in
I(V) \otimes \mathrm{T}(V) + \mathrm{T}(V)\otimes I(V)
\end{equation*}
and the algebra morphism property shows the result.
\end{proof}
This gives us a bialgebra structure on the quotient.
\begin{proposition}
\label{prop:Symwithshuffle}
Let $V^\bullet$ be a graded $\mathbb{K}$-module.
\begin{propositionlist}
\item The shuffle coproduct and the counit pass to the quotient by $I(V)$
and yield a
bialgebra structure $(\Sym(V),\mu_S,1,\Delta_\sh,\epsilon)$, where
$\mu_S = \vee$.
\item The bialgebra $(\Sym(V),\mu_S,1,\Delta_\sh,\epsilon)$ is graded commutative
and graded cocommutative.
\item The coalgebra $(\Sym(V),\Delta_\sh,\epsilon,1)$ is coaugmented and
conilpotent with coradical filtration
\begin{equation}
F_n(\Sym(V)) = \bigoplus_{k=0}^n \Sym^k(V).
\end{equation}
\end{propositionlist}
\end{proposition}
In particular, Lemma~\ref{lemma:projectiononTnCocom} implies now immediately
the desired identity:
\begin{lemma}
\label{lemma:projectiononSnCocom}
Let $V^\bullet$ be a graded $\mathbb{K}$-module. For $n\in \mathbb{N}$ one
has
\begin{equation}
\label{eq:projectiononTnCocom}
\pr_{\Sym^n(V)}
=
\frac{1}{n!} \mu_S^{n-1} \circ (\pr_V\otimes \cdots \otimes \pr_V)
\circ \Delta_\sh ^{n-1}
=
\frac{1}{n!} \mu_S^{n-1} \circ (\pr_V\otimes \cdots \otimes \pr_V)
\circ \cc{\Delta_\sh}^{n-1}.
\end{equation}
\end{lemma}
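A minimal check of the prefactor $\frac{1}{n!}$ for $n=2$ and homogeneous $v,w\in V$: applying the reduced shuffle coproduct on $\Sym(V)$ gives

```latex
\begin{equation*}
\mu_S \circ (\pr_V \otimes \pr_V) \circ \cc{\Delta_\sh}(v \vee w)
=
v \vee w + (-1)^{\abs{v}\abs{w}}\, w \vee v
=
2\, v \vee w,
\end{equation*}
```

since $w \vee v = (-1)^{\abs{v}\abs{w}}\, v \vee w$ in $\Sym(V)$; dividing by $2!$ indeed recovers $\pr_{\Sym^2(V)}(v\vee w) = v \vee w$.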
This allows us to show that $\Sym(V)$ is the cofree cocommutative conilpotent
coalgebra:
\begin{theorem}[Cofree cocommutative conilpotent coalgebra]
\label{thm:CofreeCocomConilpotentCoalg}
Let $V^\bullet$ be a graded module over $\mathbb{K}$ and let
$(C^\bullet,\Delta,\epsilon,1)$ be a conilpotent graded cocommutative
coalgebra over $\mathbb{K}$.
\begin{theoremlist}
\item The cocommutative conilpotent coalgebra
$(\Sym(V),\Delta_\sh,\epsilon,1)$ is the cofree cocommutative conilpotent
coalgebra cogenerated by $V$, i.e. the cofree object in the category
of conilpotent cocommutative coalgebras. Explicitly, for every
homogeneous $\mathbb{K}$-linear map $\phi \colon C \rightarrow V$ of
degree zero with $\phi(1)=0$ there exists a unique
coalgebra morphism $\Phi \colon C \rightarrow \Sym(V)$ such that
\begin{equation*}
\begin{tikzcd}
C \arrow[d,swap," \Phi"] \arrow[rd,"\phi"] & \\
\Sym(V) \arrow[r,"\pr_V"] & V,
\end{tikzcd}
\end{equation*}
commutes, i.e. $\pr_V \circ \Phi = \phi$. Explicitly, one has
$\Phi = \exp_\star(\phi)$ for the convolution product of
$\Hom^\bullet(C,\Sym(V))$ with respect to $\mu_S$.
\item Let $\D \colon C^\bullet \rightarrow V^{\bullet +k}$ be
a homogeneous $\mathbb{K}$-linear map of degree $k$, then there exists
a unique coderivation $D \colon
C^\bullet \rightarrow \Sym(V)^{\bullet+k}$ along $\Phi$
such that
\begin{equation*}
\begin{tikzcd}
C^\bullet \arrow[d,swap,"D"] \arrow[rd,"\D"] & \\
\Sym(V)^{\bullet + k} \arrow[r,"\pr_V"] & V^{\bullet +k},
\end{tikzcd}
\end{equation*}
commutes. Explicitly, one has
$ D = \Phi \star \D = \D \star \Phi$ and
$D(1) = 0$ if and only if $\D(1)=0$.
\end{theoremlist}
\end{theorem}
\begin{proof}
The proof is completely analogous to the proof of Theorem~\ref{thm:CofreeConilpotentCoalg}.
\end{proof}
For $C = \Sym(V)$ itself and $\phi = \pr_V$, i.e. $\Phi = \id$,
we get the following analogue of
Corollary~\ref{cor:coderivationcogenerators}.
\begin{corollary}
\label{cor:coderivationcogeneratorscocom}
Coderivations of $\Sym(V)$ form a Lie subalgebra of the
endomorphisms. This Lie algebra is in bijection with $\Hom^\bullet_\mathbb{K}(
\Sym(V),V)$ via
\begin{equation*}
\Hom^\bullet_\mathbb{K}(\Sym(V),V) \ni
\D
\longmapsto
D = \D \star \id \in
\mathrm{CoDer}^\bullet(\Sym(V)).
\end{equation*}
\end{corollary}
Let now $D \in \mathrm{CoDer}^\bullet(\Sym(V))$, then we know again that it is completely
determined by its projection $\pr_V \circ D = \D$.
Here
\begin{equation*}
\D
=
\sum_{n=0}^\infty \D_n
\quad \quad \text{ with } \quad \quad
\D_n = \pr_V \circ D \circ \pr_{\Sym^n(V)}
\end{equation*}
and the maps $\D_n \colon \Sym^n(V) \rightarrow V$ are again called \emph{Taylor coefficients}
of $D$.
By the above corollary we have the isomorphism
\begin{equation*}
\mathrm{CoDer}^\bullet(\Sym(V))
\cong
\Hom_\mathbb{K}^\bullet(\Sym(V),V)
\end{equation*}
that we can use to induce on $\Hom_\mathbb{K}^\bullet(\Sym(V),V)$
a Lie algebra structure. The induced bracket is called
\emph{Nijenhuis-Richardson bracket}. Similarly to the case of
associativity in Remark~\ref{rem:Ainftystructures} this
bracket encodes now the Jacobi identity. For example, for a
graded Lie algebra $(L,[\argument{,}\argument])$ the
rescaled bracket $\widetilde{\mu}(x,y) = -(-1)^{\abs{x}}[x,y]$,
where $\abs{x}$ denotes the degree of $x$ in $L[1]$, is
a Maurer-Cartan element in $(\Hom_\mathbb{K}^\bullet(\Sym(L[1]),L[1]),0,
[\argument{,}\argument]_{NR})$, inducing a codifferential of degree $+1$ on
$\Sym(L[1])$. Consequently, in the general case a (curved) $L_\infty$-structure
on a graded $\mathbb{K}$-module $L$ is a Maurer-Cartan element in the DGLA
$(\Hom_\mathbb{K}^\bullet(\Sym(L[1]),L[1]),0,[\argument{,}\argument]_{NR})$,
which in turn is equivalent to
a codifferential of degree $+1$ on $\Sym(L[1])$. As mentioned above, this is
due to the fact that the Koszul dual of the operad encoding Lie algebras is the
cooperad encoding cocommutative coalgebras, and we want to study these
structures now in more detail.
\section{Introduction to $L_\infty$-algebras}
\label{sec:DefLinfty}
\subsection{Definition and First Properties}
After this introduction to coalgebras we are now ready to study $L_\infty$-algebras.
We start with (flat) $L_\infty$-structures, i.e. those corresponding to coderivations
$Q$ with $Q(1)=0$. For simplicity, we assume from now on that $\mathbb{K}$ is a field
of characteristic zero and we follow mainly \cite{canonaco:1999a,dolgushev:2005b}.
\begin{definition}[$L_\infty$-algebra]
A (flat) $L_\infty$-\emph{algebra} is a graded vector space $L$ over $\mathbb{K}$
endowed with a degree one codifferential $Q$ on the reduced
symmetric coalgebra $(\cc{\Sym}(L[1]),\cc{\Delta_\sh})$.
An $L_\infty$-morphism between two $L_\infty$-algebras $F\colon (L,Q)
\rightarrow (L',Q')$ is a morphism of graded coalgebras
\begin{equation}
F \colon
\cc{\Sym}(L[1])
\longrightarrow
\cc{\Sym}(L'[1])
\end{equation}
such that $F \circ Q = Q' \circ F$.
\end{definition}
By Corollary~\ref{cor:coderivationcogeneratorscocom} we can characterize
the coderivation by its Taylor coefficients, also called structure maps.
\begin{proposition}
\label{prop:coderivationsymmetricsequence}
An $L_\infty$-algebra $(L,Q)$ is a graded vector space $L$ endowed
with a sequence of maps
\begin{equation}
Q_n^1 \colon
\Sym^n (L[1])
\longrightarrow
L[2]
\end{equation}
for $n> 0$. The coderivation $Q$ is given by
\begin{align}
\label{eq:QinSymmetricviaTaylor}
\begin{split}
Q(x_1 \vee \cdots \vee x_n)
& =
\sum_{k=1}^{n} \sum_{\sigma\in Sh(k,n-k)}
\epsilon(\sigma) Q_{k}^1(x_{\sigma(1)}\vee\cdots\vee x_{\sigma(k)})
\vee x_{\sigma(k+1)} \vee \cdots \vee x_{\sigma(n)}
\end{split}
\end{align}
for homogeneous vectors $x_1,\dots,x_n \in L$, and $Q^2=0$ is equivalent to
\begin{equation}
\label{eq:QsquaredZero}
\sum_{k=1}^n \sum_{\sigma\in Sh(k,n-k)}
\epsilon(\sigma)
Q_{n-k+1}^1(Q_k^1(x_{\sigma(1)}\vee\cdots\vee x_{\sigma(k)})\vee
x_{\sigma(k+1)}\vee \dots \vee x_{\sigma(n)})
=
0.
\end{equation}
\end{proposition}
In particular, \eqref{eq:QsquaredZero} implies $(Q_1^1)^2=0$, i.e. that $Q_1^1$
defines a differential on $L$ of degree one. The next order shows that
$Q_2^1$ satisfies a Leibniz rule with respect to $Q_1^1$, and the third one
can be interpreted as $Q_2^1$ satisfying a Jacobi identity up to terms depending on
$Q_3^1$.
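Explicitly, for $n=1$ and $n=2$ the relation \eqref{eq:QsquaredZero} reads (all degrees taken in $L[1]$):

```latex
\begin{align*}
Q_1^1\big(Q_1^1(x_1)\big) &= 0, \\
Q_1^1\big(Q_2^1(x_1\vee x_2)\big)
+ Q_2^1\big(Q_1^1(x_1)\vee x_2\big)
+ (-1)^{\abs{x_1}\abs{x_2}}\, Q_2^1\big(Q_1^1(x_2)\vee x_1\big)
&= 0.
\end{align*}
```

The first line says that $Q_1^1$ is a differential, the second is precisely the Leibniz rule for $Q_2^1$ with respect to $Q_1^1$.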
We also write $Q_k = Q_k^1$ and following
\cite{canonaco:1999a} we denote by $Q_n^i$ the component
\begin{equation*}
Q_n^i
=
\pr_{\Sym^i(L[1])} \circ \, Q\at{\Sym^n(L[1])}
\colon \Sym^n( L[1] )\longrightarrow \Sym^i (L[2])
\end{equation*}
of $Q$. It is given by
\begin{equation}
\label{eq:Qniformula}
Q_n^i(x_1\vee \cdots \vee x_n)
=
\sum_{\sigma \in \mathrm{Sh}(n+1-i,i-1)}
\epsilon(\sigma) Q_{n+1-i}^1(x_{\sigma(1)}\vee \cdots\vee x_{\sigma(n+1-i)})\vee
x_{\sigma(n+2-i)} \vee \cdots \vee x_{\sigma(n)},
\end{equation}
where $Q_{n+1-i}^1$ are the usual structure maps.
$L_\infty$-algebras have also been called strong homotopy Lie algebras
as they are DGLAs up to higher homotopies.
\begin{example}[DGLA]
\label{ex:DGLAasLinfty}
A differential graded Lie algebra (DGLA)
$(\liealg{g},\D,[\argument{,}\argument])$ is an $L_\infty$-algebra
with $Q_1 = -\D$ and $Q_2(\gamma \vee \mu) =
-(-1)^{\abs{\gamma}}[\gamma,\mu]$ and $Q_n=0$ for all
$n > 2$, where $\abs{\gamma}$ denotes the degree
in $\liealg{g}[1]$.
\end{example}
This means that every DGLA is an $L_\infty$-algebra and for
later use we consider the following example:
\begin{example}\label{ex:EndDGLA}
Let $(M^\bullet,\D)$ be a cochain complex over $\mathbb{K}$. Then we define
\begin{align*}
\End^k(M)=\{\phi\colon M^\bullet\to M^{\bullet+k}\ | \ \phi \text{ linear }\}
\end{align*}
and $\End^\bullet(M)=\bigoplus_{i\in\mathbb{Z}} \End^i(M)$. For elements
$A_i\in \End^{|A_i|}(M)$, we define
\begin{align*}
[A_1,A_2]:=A_1\circ A_2 -(-1)^{|A_1||A_2|}A_2\circ A_1,
\end{align*}
which is a graded Lie bracket. Finally, setting $D=[\D,\argument]$,
one sees that $(\End^\bullet(M),D,[\argument{,}\argument])$ is a DGLA and hence an $L_\infty$-algebra.
\end{example}
\begin{remark}
\label{rem:CohomologyLieAlg}
Note that $(Q_1^1)^2=0$ allows us to study the cohomology
$\mathrm{H}(L)$ of the cochain complex $(L,Q^1_1)$. In particular,
\eqref{eq:QsquaredZero} implies that $\mathrm{H}(L)$ inherits
a Lie algebra structure since the bracket induced by $Q^1_2$ satisfies
the usual Jacobi identity.
\end{remark}
\begin{remark}[Antisymmetric formulation]
Using the d\'ecalage-isomorphism
\begin{align*}
dec^n \colon
\Sym^n (L)
& \longrightarrow
\Anti^n(L[-1])[n]
\end{align*}
one can show that an $L_\infty$-algebra structure on $L$ is equivalently given
by a sequence of maps
\begin{equation}
Q^a_n \colon
\Anti^n L
\longrightarrow
L[2-n]
\end{equation}
for $n> 0$ with
\begin{equation}
\sum_{k=1}^n (-1)^{n-k} \sum_{\sigma\in Sh(k,n-k)}
\chi(\sigma)
Q^a_{n-k+1}(Q^a_k(x_{\sigma(1)}\wedge\dots\wedge x_{\sigma(k)})\wedge
x_{\sigma(k+1)}\wedge \dots \wedge x_{\sigma(n)})
=
0
\end{equation}
for $n \geq 1$, compare
e.g. \cite[Theorem~1.3.2]{manetti:note}.
Because of the grading shift from $\Sym(L[1])$ to $\Anti L$
one sometimes also speaks of $L_\infty[1]$-algebra structures on $L[1]$
in the symmetric setting, and of $L_\infty$-algebra structures on $L$
in the antisymmetric setting.
But since they are equivalent we refer to them just as $L_\infty$-algebras.
Moreover, note that the symmetric interpretation of the structure maps of
$L_\infty$-morphisms is more natural in the sense of the definition via
the cocommutative cofree coalgebra. However, if one sees $L_\infty$-algebras
as a generalization of DGLAs the antisymmetric interpretation is the most natural one.
\end{remark}
Similarly, we know from Theorem~\ref{thm:CofreeCocomConilpotentCoalg} that every
$L_\infty$-morphism is characterized by its
Taylor coefficients, i.e. by a sequence of maps.
\begin{proposition}
\label{prop:linftymorphismsequence}
An $L_\infty$-morphism $F\colon (L,Q) \rightarrow (L',Q')$ is
uniquely determined by the collection of
multilinear graded maps
\begin{equation}
F_n^1
=
\pr_{L[1]} \circ F \at{\Sym^n(L[1])}
\colon
\Sym^n (L[1])
\longrightarrow
L'[1]
\end{equation}
for $n \geq 1$. Setting $F_0^1=0$, it follows that $F$ is given by
\begin{align}
\begin{split}
F(x_1 \vee \cdots \vee x_n)
&
=
\exp_\star(F^1)(x_1 \vee \cdots \vee x_n)
=
\sum_{p \geq 1} \sum_{k_1 + \cdots + k_p=n, k_i\geq 1}
\sum_{\sigma\in Sh(k_1,\dots,k_p)}
\frac{\epsilon(\sigma)}{p!} \\
& \quad \quad F_{k_1}^1(x_{\sigma(1)}\vee \dots\vee x_{\sigma(k_1)})
\vee \cdots \vee F_{k_p}^1(x_{\sigma(n-k_p+1)}\vee \cdots \vee
x_{\sigma(n)})
\end{split}
\end{align}
and the compatibility with the coderivations leads to further constraints.
In particular, one has in lowest order $F_1^1 \circ Q_1^1 = (Q')_1^1\circ F_1^1$.
\end{proposition}
We also write $F_k = F_k^1$ and we get coefficients $F_n^j =\pr_{\Sym^j(L[1])} \circ F
\at{\Sym^n(L[1])}
\colon \Sym^n (L[1] )\rightarrow
\Sym^j (L'[1])$ of $F$. Note that $F_n^j$ depends only on $F_k^1 = F_k$ for $k\leq n-j+1$.
\begin{remark}
Analogously to the case of coderivations, one can interpret
an $L_\infty$-morphism as a sequence of multilinear maps
\begin{equation}
F_n \colon
\Anti^n L
\longrightarrow
L'[1-n],
\end{equation}
satisfying certain compatibility relations. In the case of DGLAs
$(\liealg{g}_1,\D_1,[\argument{,}\argument]_1)$ and
$(\liealg{g}_2,\D_2,[\argument{,}\argument]_2)$ one can show
that the compatibility with the differentials takes the following form
\begin{align}
\begin{split}
\label{eq:linftymorphismdiff}
\D_2 F_n(x_1,\dots,x_n)
& =
\sum_{i=1}^n (-1)^{k_1+ \cdots + k_{i-1}+1-n}
F_n(x_1,\dots,\D_1 x_i,\dots,x_n) \\
& \quad
+ \frac{1}{2} \sum_{k=1}^{n-1} \sum_{\sigma\in Sh(k,n-k)}
\pm [F_k(x_{\sigma(1)},\dots,x_{\sigma(k)}),F_{n-k}(x_{\sigma(k+1)},\dots,
x_{\sigma(n)})]_2 \\
& \quad
- \sum_{i \neq j} \pm F_{n-1}([x_i,x_j]_1,x_1,\dots,\widehat{x_i},\dots,
\widehat{x_j},\dots,x_n)
\end{split}
\end{align}
for $x_i \in \liealg{g}_1^{k_i}$, compare \cite{dolgushev:2005a}.
\end{remark}
In order to gain a better understanding, we consider at first
$L_\infty$-isomorphisms and their inverses. We
follow \cite[Section~2]{canonaco:1999a}.
\begin{proposition}
\label{prop:Linftyiso}
An $L_\infty$-morphism $F$ between $L_\infty$-algebras $(L,Q)$ and
$(L',Q')$ is an isomorphism if and only if $F_1^1$ is an isomorphism.
\end{proposition}
\begin{proof}
We recall the proof from \cite[Proposition~2.2]{canonaco:1999a}.
We only have to show that $F$ is invertible as a coalgebra morphism if
and only if $F_1^1$ is invertible. That
$F^{-1}$ is then an $L_\infty$-morphism follows directly:
\begin{equation*}
F^{-1}Q'
=
F^{-1}Q' FF^{-1}
=
F^{-1}FQF^{-1}
=
QF^{-1}.
\end{equation*}
Let therefore $F$ be an isomorphism with inverse $F^{-1}$. Then
\begin{equation*}
\id_{L[1]}
=
(\id_{\cc{\Sym} (L[1])})^1_1
=
(F^{-1}F)^1_1
=
(F^{-1})^1_1 F_1^1,
\end{equation*}
analogously $\id_{L'[1]}=F^1_1(F^{-1})_1^1$, whence $F_1^1$ is an isomorphism.
Suppose now that $F_1^1$ is an isomorphism. We construct a left inverse
$G$ of $F$ by defining recursively its coefficient functions $G_n$.
Starting with $G_1 = (F_1^1)^{-1}$ we want for $n>1$
\begin{equation*}
(GF)_n
=
\sum_{i=1}^n G^1_i F^i_n
\stackrel{!}{=}
(\id_{\cc{\Sym}(L[1])})^1_n
=
0,
\end{equation*}
which is fulfilled for
\begin{equation*}
G_n
=
G^1_n
=
- \left( \sum_{i=1}^{n-1} G^1_i F^i_n\right) (F^n_n)^{-1}.
\end{equation*}
Note that $F_n^n$ is invertible since $F_1^1$ is invertible. By the same
argument there exists a coalgebra morphism $F'$ with $F'G = \id$ and
thus $F' = F' GF = F$.
\end{proof}
We saw in Proposition~\ref{prop:linftymorphismsequence} that
the first Taylor coefficient, i.e. the structure map $F_1$, of an $L_\infty$-morphism
is a morphism of the cochain complexes $(L,Q_1)$ and $(L',Q'_1)$, which leads us to the
following definition.
\begin{definition}[$L_\infty$-quasi-isomorphism]
\label{def:Linftyquis}
An \emph{$L_\infty$-quasi-isomorphism} between two $L_\infty$-algebras $(L,Q)$ and
$(L',Q')$ is a morphism $F$ of $L_\infty$-algebras such that the first
structure map $F_1$ is a quasi-isomorphism of cochain complexes.
\end{definition}
\begin{example}[DGLA II]
A DGLA quasi-isomorphism $\phi \colon \liealg{g} \rightarrow \liealg{g}'$
is an $L_\infty$-quasi-isomorphism $F$ with only non-vanishing structure
map $F_1 = \phi$.
\end{example}
The notion of $L_\infty$-quasi-isomorphisms allows us to introduce
the notion of formal $L_\infty$-algebras.
\begin{definition}
An $L_\infty$-algebra $(L,Q)$ is called \emph{formal} if there exists an
$L_\infty$-quasi-isomorphism $(L,Q)\to (\mathrm{H}(L),Q_\mathrm{H})$,
where $Q_\mathrm{H}$ denotes the Lie algebra structure on the cohomology
$\mathrm{H}(L)$ from Remark~\ref{rem:CohomologyLieAlg}.
\end{definition}
\begin{example}[Kontsevich's Formality Theorem]
In \cite{kontsevich:2003a} Kontsevich proved that there exists an
$L_\infty$-quasi-isomorphism
from the DGLA of polyvector fields $T_\poly(\mathbb{R}^d)$ to the DGLA
of polydifferential operators $D_\poly(\mathbb{R}^d)$,
compare Example~\ref{ex:tpoly}
and Example~\ref{ex:dpoly}. This implies that the DGLA of
polydifferential operators is formal, explaining the name
"formality theorem".
\end{example}
One reason why $L_\infty$-algebras are particularly useful is that all
$L_\infty$-quasi-isomorphisms admit \emph{$L_\infty$-quasi-inverses},
i.e. $L_\infty$-quasi-isomorphisms in the other direction that
induce the inverse isomorphism in cohomology, see
Theorem~\ref{thm:QuisInverse} below. Moreover, $L_\infty$-quasi-isomorphisms
are important for the homotopy classification of $L_\infty$-algebras, compare
Section~\ref{sec:HomClassLinftyAlgs}, for which also another construction
is needed: the direct sum resp.
direct product of $L_\infty$-algebras.
\begin{lemma}
\label{lem:DirectSumLinftyAlg}
Let $(L,Q)$ and $(L',Q')$ be two $L_\infty$-algebras. Then
\begin{equation}
\label{eq:directsumCoder}
\widehat{Q}
=
i \circ Q \circ p + i' \circ Q' \circ p'
\end{equation}
defines a codifferential on $\Sym(\widehat{L}[1])$, where
$\widehat{L} = L \oplus L'$, $i \colon \cc{\Sym}(L[1]) \rightarrow
\cc{\Sym}(\widehat{L}[1])$ is
the inclusion and $p \colon \cc{\Sym}( \widehat{L}[1]) \rightarrow
\cc{\Sym}(L[1])$ is the projection, analogously for $i',p'$. Moreover,
$(\widehat{L},\widehat{Q})$ is the direct product in the category
of conilpotent cocommutative differential graded coalgebras without counit.
\end{lemma}
\begin{proof}
Explicitly, one has for $x_1,\dots,x_m \in L$ and $x_{m+1},\dots,x_n
\in L'$
\begin{equation*}
\widehat{Q}^1(x_1\vee\cdots\vee x_n)
=
\begin{cases}
Q^1 (x_1\vee \cdots \vee x_n) \quad & \text{ if } m=n \\
Q'^1 (x_1\vee \cdots\vee x_n) \quad & \text{ if } m=0 \\
0 \quad \quad & \text{ if } 0 <m <n,
\end{cases}
\end{equation*}
and one can directly check $\widehat{Q}\widehat{Q}=0$. In order to show that
$(\widehat{L},\widehat{Q})$ is the direct product of $(L,Q)$ and $(L',Q')$,
consider two morphisms $F \colon C \rightarrow
\cc{\Sym}(L[1])$ and $F' \colon C \rightarrow \cc{\Sym}(L'[1])$,
where $(C,D,\nabla)$ is a conilpotent cocommutative DG coalgebra without
counit. Then $\widehat{F} \colon C \rightarrow \cc{\Sym}(\widehat{L}[1])$
defined by $\widehat{F}^1 = F^1 \oplus F'^1$ is the only coalgebra morphism
with $p\widehat{F}=F$ and $p'\widehat{F}=F'$. We only have to check
$\widehat{Q}\widehat{F} = \widehat{F}D$, where we get with
Lemma~\ref{lemma:projectiononSnCocom}
\begin{align*}
\widehat{Q}^1\widehat{F}
& =
\sum_{n> 0} \frac{1}{n!}\widehat{Q}^1_n
\mu_S^{n-1}(F^1 \oplus F'^1)^{\otimes n}\Delta^{(n-1)}_\sh \\
& =
\sum_{n>0} \frac{1}{n!} \left(
Q^1_n \mu_S^{n-1}(F^1)^{\otimes n} \Delta^{(n-1)}_\sh \oplus
Q'^1_n \mu_S^{n-1}(F'^1)^{\otimes n} \Delta^{(n-1)}_\sh \right) \\
& =
Q^1F \oplus Q'^1F'
=
F^1 D \oplus F'^1 D
=
\widehat{F}^1D.
\end{align*}
This shows the desired equality.
\end{proof}
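For illustration, consider the simplest case of two DGLAs, where the direct product structure is the expected componentwise one.
\begin{example}[Direct product of DGLAs]
For two DGLAs $(\liealg{g},\D,[\argument{,}\argument])$ and
$(\liealg{g}',\D',[\argument{,}\argument]')$ the codifferential
\eqref{eq:directsumCoder} encodes the componentwise DGLA structure on
$\liealg{g}\oplus\liealg{g}'$, i.e. the differential $\D \oplus \D'$
and the bracket
\begin{equation*}
[(x,x'),(y,y')]
=
([x,y],[x',y']'),
\end{equation*}
since by the explicit formula for $\widehat{Q}^1$ from the proof all
mixed terms vanish.
\end{example}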
\subsection{Curved $L_\infty$-algebras}
\label{subsec:CurvedLinftyAlgs}
As mentioned above, we can use the whole power of
Theorem~\ref{thm:CofreeCocomConilpotentCoalg} by considering
coderivations of $\Sym(L[1])$ that do not vanish on the unit,
yielding the notion of curved $L_\infty$-algebras.
\begin{definition}[Curved $L_\infty$-algebra]
A \emph{curved} $L_\infty$-\emph{algebra} is a graded vector space
$L$ over $\mathbb{K}$ endowed with a degree one codifferential $Q$
on the cofree conilpotent coalgebra $(\Sym(L[1]),\Delta_\sh)$
cogenerated by $L[1]$.
\end{definition}
This codifferential $Q$ is equivalent to a sequence of maps $Q_n$ with
$n= 0,1,\dots$, where the sum \eqref{eq:QinSymmetricviaTaylor}
starts now at $k=0$. In particular, $Q^2=0$ implies
\begin{equation}
Q_1(Q_0(1))
=
0
\quad \text{ and } \quad
Q_1(Q_1(x))
=
- Q_2(Q_0(1)\vee x),
\end{equation}
i.e. $Q_0(1)$ is always closed with respect to $Q_1$, but
$Q_1$ is in general no longer a coboundary operator. However,
if $Q_0(1)$ is \emph{central}, i.e.
\begin{equation*}
Q_{n+1}(Q_0(1)\vee x_1 \vee \cdots\vee x_n)
=
0
\end{equation*}
for all $n \geq 1$, then we have again $(Q_1)^2=0$. Morphisms of
curved $L_\infty$-algebras are degree $0$ counital coalgebra morphisms $F$
such that $F \circ Q = Q' \circ F$. As in the flat setting they are
characterized by a sequence of maps $F_n$ with $n\geq 1$ satisfying the
properties of Proposition~\ref{prop:linftymorphismsequence} and the fact that
$F(1)=1$. Note that this last property is clear since we consider a
morphism between conilpotent counital coalgebras. These have unique
grouplike elements $1$ and $F$ has to map one to the other.
Finally, note that a curved $L_\infty$-algebra with $Q_0=0$ is just a
flat $L_\infty$-algebra as expected.
\begin{example}[Curved Lie algebra]
The basic example is a curved Lie algebra
$(\liealg{g},R,\D,[\,\cdot\,{,}\,\cdot\,])$, i.e. a graded Lie algebra
with derivation $\D$ of degree $+1$ and $\D^2 = \ad(R)= [R,\argument]$ as well as $\D R=0$.
The element $R\in \liealg{g}^2$ is
also called \emph{curvature}.
By setting $Q_0(1)=- R$ and with higher orders as in Example~\ref{ex:DGLAasLinfty}
we obtain a curved
$L_\infty$-algebra and $\D$ is a differential, i.e. $\D^2=0$, if and only if
$R$ is central.
Morphisms of curved Lie algebras are Lie algebra morphisms $f\colon \liealg{g}\rightarrow
\liealg{g}'$ such that $f \circ \D = \D' \circ f$ and $f(R)=R'$.
\end{example}
\begin{remark}[Curved morphisms of curved Lie algebras]
\label{rem:curvedMorphisms}
Note that there exists a more general notion of \emph{curved morphisms} $(f,\alpha)
\colon (\liealg{g},R,\D)\rightarrow (\liealg{g}',R',\D')$ of curved
Lie algebras where $\alpha \in \liealg{g}'^{1}$ and where for all $x\in\liealg{g}$
\begin{align*}
\D' f(x)
=
f(\D x) + [\alpha,f(x)]
\quad \text{ and } \quad
R'
=
f(R) + \D'\alpha - \frac{1}{2}[\alpha,\alpha],
\end{align*}
see \cite[Definition~4.3]{maunder:2017a}. The usual case with $\alpha=0$ is called
\emph{strict} and for a curved Maurer-Cartan element $x\in \liealg{g}^1$, see
Section~\ref{sec:MCinDGLA} for the definition, one gets a curved
Maurer-Cartan element $(f,\alpha)(x)=f(x)-\alpha \in \liealg{g}'^1$. In particular,
$(\id,\alpha)$ corresponds to twisting with $\alpha$ since
\begin{equation*}
R'
=
R + \D \alpha + [\alpha,\alpha]- \frac{1}{2}[\alpha,\alpha]
=
R^\alpha,
\end{equation*}
and $(f,\alpha)$ can be seen as strict morphism into the twisted
curved Lie algebra $(\liealg{g}',R'^{-\alpha},\D'^{-\alpha}) =
(\liealg{g}', R'- \D'\alpha + \frac{1}{2}[\alpha,\alpha],
\D' - [\alpha,\argument])$,
see again Section~\ref{sec:MCinDGLA} for more details on the twisting
procedure.
\end{remark}
\begin{remark}[Curved morphisms of curved $L_\infty$-algebras]
The above curved morphisms of curved Lie algebras can be generalized to
curved $L_\infty$-algebras by allowing zero-th Taylor coefficients
$F_0^1 \colon \mathbb{K} \rightarrow L'[1]$ with $F_0^1(1)=\alpha\in L'[1]^0$.
These curved morphisms of $L_\infty$-algebras are no longer coalgebra morphisms,
but they still have some nice properties. In order to fully understand them we need
to introduce Maurer-Cartan elements and the concept of twisting, which is done in the
next sections. Afterwards, we will investigate the curved
morphisms of curved $L_\infty$-algebras in Remark~\ref{rem:CurvedMorphLinfty},
see also \cite{positselski:2018a}
for the case of curved $A_\infty$-algebras.
\end{remark}
Concerning the compatibility of flat $L_\infty$-morphisms with the curvature
one has the following relation:
\begin{proposition}
\label{prop:CompatibilityMorphismCurvatur}
Let $F$ be an $L_\infty$-morphism of flat $L_\infty$-algebras $(L,Q)$
and $(L',Q')$. In addition, let $Q_0
\in L[1]^1, Q'_0 \in L'[1]^1$ be closed and
central with respect to the $L_\infty$-structures $Q$ and $Q'$,
with induced curved $L_\infty$-structures $\widetilde{Q}$ on $L$ and
$\widetilde{Q'}$ on $L'$. Then the structure maps of $F$
induce a morphism of curved $L_\infty$-algebras if and only if
\begin{equation}
F_1^1(Q_0)
=
Q_0'
\quad \quad \text{ and } \quad \quad
F_k^1(Q_0 \vee \argument)
=
0
\quad \quad \forall \; k>1.
\end{equation}
\end{proposition}
\begin{proof}
One only has to check if $F$ is compatible with the codifferentials
$\widetilde{Q}$ and $\widetilde{Q'}$, where one gets for
$v_1\vee \cdots \vee v_n$ with $ v_i \in L[1]^{k_i}$ and $n>0$
\begin{align*}
F^1 \circ \widetilde{Q} (v_1 \vee \cdots \vee v_n)
& =
F^1 ( Q_0 \vee v_1 \vee \cdots \vee v_n
+ Q(v_1\vee \cdots \vee v_n)) \\
& =
F_{n+1}^1( Q_0 \vee v_1 \vee \cdots \vee v_n)
+ (Q')^1\circ F (v_1\vee \cdots \vee v_n) \\
& \stackrel{!}{=}
\widetilde{Q'}^1 \circ F(v_1\vee \cdots \vee v_n).
\end{align*}
In addition, one has
\begin{equation*}
Q'_0
=
\widetilde{Q'} \circ F(1)
\stackrel{!}{=}
F \circ \widetilde{Q}(1)
=
F_1^1(Q_0),
\end{equation*}
which directly yields the above identities.
\end{proof}
In the following, if we speak of $L_\infty$-algebras we allow curved
$L_\infty$-algebras, in cases where flatness is required we speak of
flat $L_\infty$-algebras.
\section{The Homotopy Transfer Theorem and the Minimal Model of an $L_\infty$-Algebra}
\label{sec:homotopytransfertheorem}
It is well-known that given a
homotopy retract one can transfer $L_\infty$-structures, see
e.g. \cite[Section~10.3]{loday.vallette:2012a}. Explicitly, a homotopy retract (also
called homotopy equivalence data) consists of two cochain complexes $(A,\D_A)$ and
$(B,\D_B)$ with chain maps $i,p$ and
homotopy $h$ such that
\begin{equation}
\label{eq:homotopyretract}
\begin{tikzcd}
(A,\D_A)
\arrow[rr, shift left, "i"]
&&
\left(B, \D_B\right)
\arrow[ll, shift left, "p"]
\arrow[loop right, "h"]
\end{tikzcd}
\end{equation}
with $h \circ \D_B + \D_B \circ h = \id - i \circ p$, and such that $i$ and $p$ are
quasi-isomorphisms. Then the homotopy transfer theorem
states that if there exists
a flat $L_\infty$-structure on $B$, then one can transfer it to $A$ in
such a way that $i$ extends to an $L_\infty$-quasi-isomorphism. By
the invertibility of $L_\infty$-quasi-isomorphisms, a statement that we
prove in Theorem~\ref{thm:QuisInverse} below, there also exists an
$L_\infty$-quasi-isomorphism into $A$ denoted by $P$, see
e.g. \cite[Proposition~10.3.9]{loday.vallette:2012a}.
\subsection{Homotopy Transfer Theorem via Symmetric Tensor Trick}
\label{sec:HTTSymmetricTensorTrick}
We want to state several versions of this result.
For simplicity, we assume that we have a deformation retract, i.e. we
are in the situation \eqref{eq:homotopyretract} with additionally
\begin{equation*}
p\circ i
=
\id_A.
\end{equation*}
By \cite[Remark~2.1]{huebschmann:2011a} we can assume that we have even a
special deformation retract, also called \emph{contraction}, where
\begin{equation*}
h^2
=
0,
\quad \quad
h \circ i
=
0
\quad \text{ and } \quad
p\circ h
=
0.
\end{equation*}
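A minimal example of such a contraction is the following.
\begin{example}[A minimal contraction]
Let $B$ be the two-term complex $0 \rightarrow \mathbb{K}
\xrightarrow{\id} \mathbb{K} \rightarrow 0$ concentrated in degrees zero
and one, and let $A=0$ with $i=0$ and $p=0$. Taking for $h$ the identity
map from degree one to degree zero and zero elsewhere, one directly checks
\begin{equation*}
h \circ \D_B + \D_B \circ h
=
\id
=
\id - i \circ p,
\quad \quad
h^2 = 0,
\quad
h \circ i = 0
\quad \text{ and } \quad
p \circ h = 0,
\end{equation*}
i.e. this is a contraction of $B$ onto its vanishing cohomology.
\end{example}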
Assume now that $(B,Q_B)$ is an $L_\infty$-algebra with $(Q_{B})_1^1=-\D_B$.
In the following we
give a more explicit description of the transferred $L_\infty$-structure $Q_A$ on $A$ and
of the $L_\infty$-projection $P\colon (B,Q_B)\rightarrow (A,Q_A)$ inspired by the
symmetric tensor
trick \cite{berglund:2014a,huebschmann:2010a,huebschmann:2011a,manetti:2010a},
see also \cite{merkulov:1999a} for the case of $A_\infty$-algebras.
The map $h$ extends to a homotopy $H_n
\colon \Sym^n (B[1]) \rightarrow \Sym^n(B[1])[-1]$ with respect to
$Q_{B,n}^n \colon \Sym^n (B[1]) \rightarrow \Sym^n(B[1])[1]$, see
e.g. \cite[p.~383]{loday.vallette:2012a} for the construction on the
tensor algebra, which adapted to our setting works as follows:
we define the operator
\begin{align*}
K_n
\colon
\Sym^n(B[1])
\longrightarrow
\Sym^n(B[1])
\end{align*}
by
\begin{align*}
K_n(x_1\vee\cdots\vee x_n)
=
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\frac{\epsilon(\sigma)}{n-i}ipx_{\sigma(1)}\vee\cdots\vee ipx_{\sigma(i)}
\vee x_{\sigma(i+1)}\vee\cdots\vee x_{\sigma(n)}.
\end{align*}
Note that here we sum over the whole symmetric group and
not the shuffles, since in this case the formulas are easier. We extend $-h$ to a
coderivation on $\Sym(B[1])$, i.e.
\begin{align*}
\tilde{H}_n(x_1\vee\cdots\vee x_n):=
-\sum_{\sigma\in \mathrm{Sh}(1,n-1)}
\epsilon(\sigma) \;
hx_{\sigma(1)}\vee x_{\sigma(2)}\vee\cdots\vee x_{\sigma(n)}.
\end{align*}
\begin{lemma}
With the definition above, we have
\begin{align}
\label{eq:extendedHomotopy}
K_n\circ Q_{B,n}^n
=
Q_{B,n}^n\circ K_n
\quad
\text{ and }
\quad
K_n\circ \tilde{H}_n
=
\tilde{H}_n\circ K_n
,
\end{align}
where $Q_{B,n}^n$ is the extension of the differential $Q_{B,1}^1 = - \D_B$ to
$\Sym^n(B[1])$ as a coderivation.
\end{lemma}
\begin{proof}
Since $i$ and $p$ are chain maps, it is clear that
$K_n\circ Q_{B,n}^n=Q_{B,n}^n\circ K_n$. The second equation follows from the fact that
we have $h\circ i=0$ and $p\circ h=0$.
\end{proof}
With the definition
\begin{align*}
H_n
:=
K_n\circ \tilde{H}_n
=
\tilde{H}_n\circ K_n
\end{align*}
we have, together with equations \eqref{eq:extendedHomotopy}, that
\begin{align*}
Q_{B,n}^n H_n + H_n Q_{B,n}^n
=
(n\cdot\id-ip)\circ K_n,
\end{align*}
where $ip$ is extended as a coderivation to $\Sym(B[1])$.
\begin{proposition}\label{Prop: CoAlgHom}
In the above setting one has
\begin{equation}
\label{Eq: ExtHomotopy}
Q_{B,n}^n H_n + H_n Q_{B,n}^n
=
\id - (ip)^{\vee n}.
\end{equation}
\end{proposition}
\begin{proof}
Because of the previous results, it is enough to show that
$\id - (ip)^{\vee n}=(n\cdot\id-ip)\circ K_n$.
Let $X_1\vee\dots \vee X_n\in \Sym^n(B[1])$, then we have
\begin{align*}
(n\cdot\id-&ip)\circ K_n (X_1\vee\dots \vee X_n)\\&
=
\frac{1}{(n-1)!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\frac{\epsilon(\sigma)}{n-i}ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee X_{\sigma(i+1)}\vee\cdots\vee X_{\sigma(n)}\\&
\quad-
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\frac{i\,\epsilon(\sigma)}{n-i}ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee X_{\sigma(i+1)}\vee\cdots\vee X_{\sigma(n)}\\&
\quad-
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\frac{\epsilon(\sigma)}{n-i}ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee ip(X_{\sigma(i+1)}\vee\cdots\vee X_{\sigma(n)})\\&
=
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\epsilon(\sigma)ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee X_{\sigma(i+1)}\vee\cdots\vee X_{\sigma(n)}\\&
\quad -
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\substack{\sigma\in S_n,\\\tau \in \mathrm{Sh}(1,n-i-1)}}\hspace*{-5pt}
\frac{\epsilon(\sigma\circ (\id^i\times\tau))}{n-i}
ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee ipX_{\sigma(\tau(i+1))}\vee X_{\sigma(\tau(i+2))}\vee\cdots\vee X_{\sigma(\tau(n))}\\&
=
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\epsilon(\sigma)ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee X_{\sigma(i+1)}\vee\cdots\vee X_{\sigma(n)}\\&
\quad -
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\substack{\sigma\in S_n,\\\tau \in \mathrm{Sh}(1,n-i-1)}}
\frac{\epsilon(\sigma)}{n-i}ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee ipX_{\sigma(i+1)}\vee X_{\sigma(i+2)}\vee\cdots\vee X_{\sigma(n)}\\&
=
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\epsilon(\sigma)ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee X_{\sigma(i+1)}\vee\cdots\vee X_{\sigma(n)}\\&
\quad -
\frac{1}{n!}
\sum_{i=0}^{n-1}
\sum_{\sigma\in S_n}
\epsilon(\sigma)ipX_{\sigma(1)}\vee\cdots\vee ipX_{\sigma(i)}
\vee ipX_{\sigma(i+1)}\vee X_{\sigma(i+2)}\vee\cdots\vee X_{\sigma(n)}.
\end{align*}
The proof is finished by comparing the summands: the term for $i+1$ in the
first sum cancels the term for $i$ in the second sum, so only the $i=0$ term
of the first sum and the $i=n-1$ term of the second sum survive, which are
$\id$ and $-(ip)^{\vee n}$, respectively.
\end{proof}
Suppose now that we have already constructed a codifferential $Q_A$ and a
morphism of coalgebras $P$ with structure maps $P_\ell^1 \colon \Sym^\ell(B[1]) \rightarrow A[1]$
such that $P$ is an $L_\infty$-morphism up to order $k$, i.e.
\begin{equation*}
\sum_{\ell=1}^m P^1_\ell \circ Q_{B,m}^\ell
=
\sum_{\ell=1}^m Q_{A,\ell}^1\circ P^\ell_{m}
\end{equation*}
for all $m \leq k$. Then we have the following statement.
\begin{lemma}
\label{lemma:Linftyuptok}
Let $P \colon \Sym(B[1]) \rightarrow \Sym (A[1])$ be an
$L_\infty$-morphism up to order $k\geq 1$. Then
\begin{equation}
\label{eq:Linftykplusone}
L_{\infty,k+1}
=
\sum_{\ell = 2}^{k+1} Q_{A,\ell}^1 \circ P^\ell_{k+1} - \sum_{\ell =1}^{k}
P_\ell^1 \circ Q^\ell_{B,k+1}
\end{equation}
satisfies
\begin{equation}
\label{eq:linftycommuteswithq}
L_{\infty,k+1} \circ Q_{B,k+1}^{k+1}
=
-Q_{A,1}^1 \circ L_{\infty,k+1}.
\end{equation}
\end{lemma}
\begin{proof}
The statement follows from a straightforward computation. For
convenience we omit the index of the differential:
\begin{align*}
L_{\infty,k+1} Q_{k+1}^{k+1}
& =
\sum_{\ell = 2}^{k+1} Q_{\ell}^1 (P\circ Q)^\ell_{k+1}
- \sum_{\ell = 2}^{k+1} \sum_{i=1}^k Q_{\ell}^1 P^\ell_i Q^i_{k+1}
+ \sum_{\ell =1}^{k}\sum_{i=1}^k P_\ell^1 Q^\ell_i Q_{k+1}^i \\
& =
\sum_{\ell = 2}^{k+1} Q_{\ell}^1 (Q\circ P)^\ell_{k+1}
- \sum_{\ell = 2}^{k+1} \sum_{i=1}^k Q_{\ell}^1 P^\ell_i Q^i_{k+1}
+ \sum_{\ell =1}^{k}\sum_{i=1}^k Q_\ell^1 P^\ell_i Q_{k+1}^i \\
& =
-Q_1^1 (Q \circ P)^1_{k+1} + Q_1^1 \sum_{i=1}^k P^1_{i}Q^i_{k+1}
=
- Q_1^1 L_{\infty,k+1},
\end{align*}
where the last equality follows from $Q_1^1Q_1^1 = 0$.
\end{proof}
This allows us to prove one version of the homotopy transfer theorem
as formulated in \cite[Theorem~B.2]{esposito.kraft.schnitzer:2022a:pre}.
\begin{theorem}[Homotopy transfer theorem]
\label{thm:HTTJonas}
Let $(B,Q_B)$ be a flat $L_\infty$-algebra with $(Q_B)^1_1=-\D_B$ and contraction
\begin{equation}
\begin{tikzcd}
(A,\D_A)
\arrow[rr, shift left, "i"]
&&
\left(B, \D_B\right)
\arrow[ll, shift left, "p"]
\arrow[loop right, "h"]
\end{tikzcd}
\end{equation}
Then
\begin{align}
\begin{split}
\label{eq:HTTTransferredStructures}
(Q_A)_1^1
& =
- \D_A,
\quad \quad\quad\quad\;
(Q_A)^1_{k+1}
=
\sum_{i=1}^{k} P^1_i \circ (Q_B)^i_{k+1} \circ i^{\vee (k+1)},\\
P_1^1
& =
p,
\quad \quad\quad\quad\quad\quad\;\;
P^1_{k+1}
=
L_{\infty,k+1}\circ H_{k+1}
\quad \text{ for } k\geq 1
\end{split}
\end{align}
turns $(A,Q_A)$ into an $L_\infty$-algebra with
$L_\infty$-quasi-isomorphism $P\colon (B,Q_B)\rightarrow (A,Q_A)$.
Moreover, one has $P^1_k \circ i^{\vee k} =0$ for $k\neq 1$.
\end{theorem}
\begin{proof}
We observe $P_{k+1}^1(ix_1 \vee \cdots \vee i x_{k+1}) = 0$ for all
$k\geq 1$ and $x_i \in A$, which directly follows from $h\circ i =
0$ and thus $H_{k+1} \circ i^{\vee (k+1)} = 0$. Suppose that
$Q_A$ is a codifferential up to order $k\geq 1$, i.e.
$\sum_{\ell=1}^m (Q_A)^1_\ell(Q_A)^\ell_m=0$ for all $m \leq k$,
and that $P$ is an $L_\infty$-morphism
up to order $k\geq 1$. We know that these conditions are satisfied for $k=1$ and we show that they hold for $k+1$. Starting with $Q_A$ we compute
\begin{align*}
(Q_AQ_A)^1_{k+1}
& =
(Q_AQ_A)^1_{k+1} \circ P^{k+1}_{k+1}\circ i^{\vee (k+1)}
=
\sum_{\ell=1}^{k+1} (Q_AQ_A)^1_\ell P^\ell_{k+1} i^{\vee (k+1)}
=
(Q_AQ_AP)^1_{k+1}i^{\vee (k+1)} \\
& =
\sum_{\ell=2}^{k+1} (Q_A)^1_\ell (Q_AP)^\ell_{k+1} i^{\vee (k+1)}
+(Q_A)^1_1(Q_AP)^1_{k+1} i^{\vee (k+1)} \\
& =
\sum_{\ell=2}^{k+1} (Q_A)^1_\ell (PQ_B)^\ell_{k+1} i^{\vee (k+1)}
+(Q_A)^1_1(Q_A)^1_{k+1} \\
& =
(Q_APQ_B)^1_{k+1}i^{\vee (k+1)} -(Q_A)^1_1(Q_A)^1_{k+1} + (Q_A)^1_1(Q_A)^1_{k+1} \\
& =
\sum_{\ell=1}^{k} (Q_AP)^1_\ell (Q_B)^\ell_{k+1}i^{\vee (k+1)}
+ (Q_AP)^1_{k+1} (Q_B)^{k+1}_{k+1}i^{\vee (k+1)} \\
& =
\sum_{\ell=1}^{k} (PQ_B)^1_\ell (Q_B)^\ell_{k+1}i^{\vee (k+1)}
+ (Q_AP)^1_{k+1}i^{\vee (k+1)} (Q_A)^{k+1}_{k+1}\\
& =
- (PQ_B)^1_{k+1}i^{\vee (k+1)} (Q_A)^{k+1}_{k+1} +
(Q_AP)^1_{k+1}i^{\vee (k+1)} (Q_A)^{k+1}_{k+1} \\
& =
- (Q_A)^1_{k+1}(Q_A)^{k+1}_{k+1} +
(Q_A)^1_{k+1}(Q_A)^{k+1}_{k+1}
=
0.
\end{align*}
By the same computation as in Lemma~\ref{lemma:Linftyuptok}, where one in fact only needs
that $Q_A$ is a codifferential up to order $k+1$, it follows that
\begin{equation*}
L_{\infty,k+1} \circ Q_{B,k+1}^{k+1}
=
-Q_{A,1}^1 \circ L_{\infty,k+1}.
\end{equation*}
It remains to show that $P$ is an $L_\infty$-morphism
up to order $k + 1$. We have
\begin{align*}
P_{k+1}^1 \circ (Q_B)^{k+1}_{k+1}
& =
L_{\infty,k+1} \circ H_{k+1} \circ (Q_B)_{k+1}^{k+1} \\
&=
L_{\infty,k+1} - L_{\infty,k+1} \circ (Q_B)_{k+1}^{k+1}\circ H_{k+1}
-L_{\infty,k+1} \circ (i\circ p)^{\vee (k+1)} \\
& =
L_{\infty,k+1} + (Q_A)_1^1 \circ P_{k+1}^1
\end{align*}
since
\begin{align*}
L_{\infty,k+1} \circ (i\circ p)^{\vee (k+1)}
& =
\left(\sum_{\ell = 2}^{k+1} Q_{A,\ell}^1 \circ P^\ell_{k+1} - \sum_{\ell =1}^{k}
P_\ell^1 \circ Q^\ell_{B,k+1}\right) \circ (i\circ p)^{\vee (k+1)} \\
& =
(Q_A)^1_{k+1} \circ p^{\vee (k+1)}
- (Q_A)^1_{k+1}\circ p^{\vee(k+1)}
=
0.
\end{align*}
Therefore
\begin{equation*}
P_{k+1}^1 \circ (Q_B)^{k+1}_{k+1} - (Q_A)_1^1 \circ P_{k+1}^1
=
L_{\infty,k+1},
\end{equation*}
i.e. $P$ is an $L_\infty$-morphism up to order $k+1$, and the
statement follows inductively.
\end{proof}
\begin{remark}[HTT vs. HPL]
In view of Proposition~\ref{Prop: CoAlgHom}, we can define $H\colon \Sym(B[1])\to \Sym(B[1])$ by
$H\at{\Sym^n(B[1])}=H_n$ to obtain
\begin{equation*}
\begin{tikzcd}
(\Sym(A[1]),\hat{Q}_A)
\arrow[rr, shift left, "I"]
&&
\left(\Sym(B[1]), \hat{Q}_B\right)
\arrow[ll, shift left, "P"]
\arrow[loop right, "H"]
\end{tikzcd}
\end{equation*}
where $I$ (resp. $P$) is the map $i$ (resp. $p$) extended as a coalgebra morphism
and $\hat{Q}_A$ (resp. $\hat{Q}_B$) is the differential $-\D_A$ (resp. $-\D_B$) extended as a coderivation.
Note that if $h^2=0$ we also have $H^2=0$. Now we can view the higher brackets $Q_B-\hat{Q}_B$ as a perturbation of the coderivation $\hat{Q}_B$.
Since $(Q_B-\hat{Q}_B)\circ H$ decreases the symmetric degree,
it is locally nilpotent, and we can apply the homological perturbation lemma, see e.g.
\cite{crainic:2004} and references therein.
Note, however, that it is not clear why the resulting deformation retract
yields maps that are compatible with the coalgebra structure.
\end{remark}
With the homotopy transfer theorem we can show that every contraction
induces a splitting of the $L_\infty$-algebra in the following sense,
compare Lemma~\ref{lem:DirectSumLinftyAlg} for the notion of a
direct sum of $L_\infty$-algebras.
\begin{theorem}
\label{thm:ContractionSplitting}
In the above setting one has an $L_\infty$-isomorphism
\begin{equation}
L \colon
B
\longrightarrow
A \oplus \image [\D_B,h],
\end{equation}
where the $L_\infty$-structure on $\image [\D_B,h]$ is given by just the differential
$Q_1^1= -\D_B$ and the $L_\infty$-structure on $A \oplus \image [\D_B,h]$ is the product
$L_\infty$-structure of the transferred one on $A$ and the differential on
$ \image [\D_B,h]$.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:HTTJonas} we already have an $L_\infty$-morphism $P\colon B
\rightarrow A$ with first structure map $p$. Now we construct an $L_\infty$-morphism
$F \colon B \rightarrow \image [\D_B,h] = C$ by setting
\begin{equation*}
F_1^1
=
[\D_B,h]
\quad\quad \quad \text{ and }\quad
\quad \quad
F^1_n
=
- h \circ \sum_{i=1}^{n-1} F_i^1 (Q_B)^i_n
\quad \text{ for } n>1.
\end{equation*}
It is an $L_\infty$-morphism up to order one. Suppose it is one up to order
$n\geq 1$, then we get
\begin{align*}
(Q_C)_1^1 F^1_{n+1}
& =
\D_B\at{C} \circ h \circ \sum_{i=1}^{n} F_i^1 (Q_B)^i_{n+1}
=
(\id - ip - h \circ\D_B\at{C})\sum_{i=1}^{n} F_i^1 (Q_B)^i_{n+1}\\
& =
\sum_{i=1}^{n} F_i^1 (Q_B)^i_{n+1} + h \circ \sum_{i=1}^{n} (FQ_B)^1_i (Q_B)^i_{n+1}
=
\sum_{i=1}^{n+1} F_i^1 (Q_B)^i_{n+1},
\end{align*}
thus by induction $F$ is an $L_\infty$-morphism.
The universal property of the product gives the desired
$L_\infty$-morphism $L= P\oplus F$ which is even an $L_\infty$-isomorphism since its first
structure map $p \oplus (\D_B h + h\D_B)$ is an isomorphism
with inverse $i \oplus \id$, see Proposition~\ref{prop:Linftyiso}.
\end{proof}
We also want to give an explicit formula for an $L_\infty$-quasi-inverse of
$P$, where we follow \cite[Proposition~B.3]{esposito.kraft.schnitzer:2022a:pre}.
\begin{proposition}
\label{prop:Infinityinclusion}
The coalgebra map $I\colon \Sym^\bullet (A[1])\to \Sym^\bullet
(B[1])$ recursively defined by the maps $I_1^1=i$ and $I_{k+1}^1=h\circ
L_{\infty,k+1}$ for $k\geq 1$ is an $L_\infty$-quasi-inverse of $P$.
Since $h^2=0= h\circ i$, one even has $I_{k+1}^1 = h \circ \sum_{\ell = 2}^{k+1} Q_{B,\ell}^1 \circ I^\ell_{k+1}$ and $P \circ I = \id_A$.
\end{proposition}
\begin{proof}
We proceed by induction: assume that $I$ is an $L_\infty$-morphism up to
order $k$, then we have
\begin{align*}
I^1_{k+1}Q_{A,k+1}^{k+1} - Q_{B,1}^1I^1_{k+1}&=
-Q_{B,1}^1\circ h\circ L_{\infty,{k+1}}
+h\circ L_{\infty,{k+1}}\circ Q_{A,k+1}^{k+1}\\&
=-Q_{B,1}^1\circ h\circ L_{\infty,{k+1}}-h\circ Q_{B,1}^{1}\circ L_{\infty,{k+1}}\\&
=(\id-i\circ p)L_{\infty,{k+1}}.
\end{align*}
We used that $Q_{B,1}^1=-\D_B$ and the homotopy equation of $h$.
Moreover, we get with $p\circ h=0$
\begin{align*}
p \circ L_{\infty,{k+1}}
& =
p\circ \left( \sum_{\ell = 2}^{k+1} Q_{B,\ell}^1 \circ I^\ell_{k+1}
- \sum_{\ell =1}^{k} I_\ell^1 \circ Q^\ell_{A,k+1} \right) \\
& =
\sum_{\ell = 2}^{k+1} (P\circ Q_B)^1_\ell \circ I^\ell_{k+1}
- \sum_{\ell =2}^{k+1}\sum_{i=2}^{\ell} P^1_i\circ Q^i_{B,\ell} \circ
I^\ell_{k+1} -
Q^1_{A,k+1} \\
& =
\sum_{\ell = 2}^{k+1} (Q_A \circ P)^1_\ell \circ I^\ell_{k+1}
- \sum_{i =2}^{k+1}\sum_{\ell=i}^{k+1} P^1_i\circ Q^i_{B,\ell} \circ
I^\ell_{k+1} - Q^1_{A,k+1} \\
& =
Q^1_{A,k+1} - \sum_{i =2}^{k+1}\sum_{\ell=i}^{k+1}
P^1_i\circ I^i_{\ell} \circ Q^\ell_{A,k+1} - Q^1_{A,k+1}
=
0,
\end{align*}
and therefore $I$ is an $L_\infty$-morphism.
\end{proof}
\begin{remark}
Note that in the homotopy transfer theorem the property $h^2=0$ is not
needed, and that one can also adapt the above construction of $I$
to this more general case.
\end{remark}
Following \cite{esposito.kraft.schnitzer:2020a:pre},
let us now consider the special case of contractions of
DGLAs. More explicitly, let now $A,B$ be two DGLAs and assume in addition
that $i$ is a DGLA morphism. Then the homotopy transfer theorem immediately yields:
\begin{proposition}
\label{prop:Infinityprojection}
Defining $P_1^1 = p$ and $P_{k+1}^1 = L_{\infty,k+1} \circ H_{k+1}$
for $k \geq 1$ yields an $L_\infty$-quasi-isomorphism $P \colon
(B,Q_B) \rightarrow (A,Q_A)$ that is quasi-inverse to $i$. Here the codifferentials are
induced by the respective DGLA structures.
\end{proposition}
\begin{proof}
The transferred $L_\infty$-structure on $A$ from \eqref{eq:HTTTransferredStructures}
is indeed just the DGLA structure since $i$ is a DGLA morphism.
\end{proof}
Let us now assume that $p\colon B\to A$ in the contraction
\eqref{eq:homotopyretract} is a DGLA morphism and that $i$ is just a chain
map. Then we can analogously give a formula for the extension $I$ of
$i$ to an $L_\infty$-quasi-isomorphism.
\begin{proposition}
\label{prop:InfinityinclusionDGLA}
The coalgebra map $I\colon \Sym^\bullet (A[1])\to \Sym^\bullet
(B[1])$ recursively defined by the maps $I_1^1=i$ and $I_k^1=h\circ
L_{\infty,k}$ for $k\geq 2$ is an $L_\infty$-quasi-inverse of $p$.
Since $h^2=0= h\circ i$, one even has $I_k^1 = h \circ Q^1_{2} \circ
I^2_{k}$.
\end{proposition}
\begin{proof}
This is just a special case of Proposition~\ref{prop:Infinityinclusion}.
\end{proof}
\subsection{The Minimal Model and the Existence of $L_\infty$-Quasi-Inverses}
As a first application of the homotopy transfer theorem we want to show
that $L_\infty$-algebras split into the direct product
of two special ones, for which we first recall some definitions.
\begin{definition}
An $L_\infty$-algebra $(L,Q)$ is called \emph{minimal} if $Q_1^1 = 0$
and \emph{linear contractible} if $Q_n^1=0$ for $n>1$ with acyclic $Q_1^1$.
\end{definition}
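As simple examples illustrating these two notions we mention the following.
\begin{example}
Every graded Lie algebra, viewed as a DGLA with zero differential, is a
minimal $L_\infty$-algebra, since $Q_1^1=0$ and only $Q_2^1$ is
non-trivial. On the other hand, the two-term complex $0 \rightarrow
\mathbb{K} \xrightarrow{\id} \mathbb{K} \rightarrow 0$ with all higher
brackets set to zero is linear contractible, since its differential
is acyclic.
\end{example}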
An $L_\infty$-algebra is called contractible if it is isomorphic to
a linear contractible one, and we can show that every $L_\infty$-algebra
is isomorphic to the direct sum of these two types
\cite[Proposition~2.8]{canonaco:1999a}:
\begin{proposition}
\label{prop:StandardFormLinfty}
Any $L_\infty$-algebra is isomorphic to the direct sum of a
minimal and of a linear contractible one.
\end{proposition}
\begin{proof} Let $(L,Q)$ be an $L_\infty$-algebra and let
$(L,\D = - Q^1_1)$ be the underlying cochain complex. Since we are
working over a field $\mathbb{K}$ of characteristic zero, we can find a contraction
\begin{equation}
\begin{tikzcd}
(\mathrm{H}(L),0)
\arrow[rr, shift left, "i"]
&&
\left(L, \D\right)
\arrow[ll, shift left, "p"]
\arrow[loop right, "h"]
\end{tikzcd}
\end{equation}
where $\mathrm{H}(L)$ denotes the cohomology of $(L,\D)$.
Then we apply Theorem \ref{thm:ContractionSplitting} and get the result.
\end{proof}
Denoting the transferred minimal $L_\infty$-structure on
$\mathrm{H}(L)$ by $Q_\mathrm{H}$, the above proposition gives in particular
an $L_\infty$-quasi-isomorphism
$(L,Q) \to (\mathrm{H}(L),Q_\mathrm{H})$. For this reason,
$(\mathrm{H}(L),Q_\mathrm{H})$ is also called the \emph{minimal model} of
$(L,Q)$. This result allows us to explicitly invert
$L_\infty$-quasi-isomorphisms.
\begin{theorem}
\label{thm:QuisInverse}
If $F$ is an $L_\infty$-quasi-isomorphism from $(L,Q)$ to $(L',Q')$,
then there exists an $L_\infty$-morphism $G$ in the other direction, inducing
the inverse isomorphism in cohomology.
\end{theorem}
\begin{proof}
We know that both $L$ and $L'$ are isomorphic to direct sums of minimal
$L_\infty$-algebras $L_{min}$ and $L'_{min}$ and linear
contractible ones. In particular, the inclusions and projections
\begin{equation*}
i \colon
(L_{min},Q_{min}) \longrightarrow (L,Q),
\quad \quad \quad
p \colon
(L,Q) \longrightarrow (L_{min},Q_{min})
\end{equation*}
are $L_\infty$-quasi-isomorphisms, analogously for $i'$ and $p'$.
Consequently,
\begin{equation*}
F_{min} = p'Fi \colon
(L_{min},Q_{min}) \longrightarrow (L'_{min},Q'_{min})
\end{equation*}
is an $L_\infty$-quasi-isomorphism. But since $(Q_{min})_1^1 = 0
=(Q'_{min})_1^1$ we know that $(F_{min})_1^1$ is an isomorphism, thus $F_{min}$ is
an $L_\infty$-isomorphism by Proposition~\ref{prop:Linftyiso}, and we can
set $G= i(F_{min})^{-1} p'$ for the $L_\infty$-quasi-isomorphism in the
other direction.
\end{proof}
\section{Maurer-Cartan Elements, their Equivalence Classes and Twisting}
\label{sec:MCandEquiv}
In this section we recall the notion of Maurer-Cartan elements and different
notions of equivalences between them. One important application is the
twisting of $L_\infty$-algebras and $L_\infty$-morphisms.
\subsection{Maurer-Cartan Elements in DGLAs}
\label{sec:MCinDGLA}
We start with the case of Maurer-Cartan elements in DGLAs.
\begin{definition}[Maurer-Cartan elements]
\label{def:MCelementDGLA}
Let $(\liealg{g},\D ,[\argument{,}\argument])$ be a DGLA. Then
$\pi \in \liealg{g}^1$ is called \emph{Maurer-Cartan element} if it
satisfies the Maurer-Cartan equation
\begin{equation}
\D \pi + \frac{1}{2}[\pi,\pi]
=
0.
\end{equation}
\end{definition}
The set of Maurer-Cartan elements is denoted by $\Mc(\liealg{g})$ and
we directly see that for a Maurer-Cartan element $\pi$ the map
$\D + [\pi,\,\cdot\,]$ is again a differential, the so-called
\emph{twisted} differential. Taking a general element $x \in \liealg{g}^1$
the derivation $\D + [x,\,\cdot\,]$ yields a curved Lie algebra with curvature
\begin{equation*}
R^x
=
\D x + \frac{1}{2}[x,x].
\end{equation*}
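This curvature can be computed directly: using $\D^2=0$, the graded Leibniz
rule $\D[x,y]=[\D x,y]-[x,\D y]$ for $x$ of degree one and the graded
Jacobi identity in the form $[x,[x,y]]=\frac{1}{2}[[x,x],y]$, one finds
\begin{equation*}
(\D + [x,\argument])^2
=
\D \circ [x,\argument] + [x,\argument] \circ \D + [x,[x,\argument]]
=
[\D x,\argument] + \frac{1}{2}[[x,x],\argument]
=
[R^x,\argument],
\end{equation*}
so the failure of $\D + [x,\argument]$ to square to zero is precisely
$\ad(R^x)$.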
Starting with a curved Lie algebra $(\liealg{g},R,\D,[\argument{,}\argument])$,
the twisting yields the new curvature $R^x = R +\D x + \frac{1}{2}[x,x]$ and one calls $\pi \in \liealg{g}^1$
\emph{curved Maurer-Cartan element} if one has
\begin{equation*}
R^\pi
=
R + \D \pi + \frac{1}{2} [\pi,\pi]
=
0.
\end{equation*}
In this case the twisted Lie algebra $(\liealg{g}, R^\pi=0, \D + [\pi,\argument],
[\argument{,}\argument])$ is a flat DGLA. One example of
curved Maurer-Cartan elements is given by principal connections on principal bundles,
see e.g. \cite[Section~11]{kolar.michor.slovak:1993a} for more details.
\begin{example}[Principal Connection]
Let $\group{G}$ be a Lie group and $\pi \colon P \to M$ be a
smooth principal $\group{G}$-bundle. Then one way to define a
\emph{principal $\group{G}$-connection} on $P$ is as an
equivariant $\liealg{g}$-valued differential $1$-form
$\omega \in \Omega^1(P,\liealg{g})^\group{G}$, where the
equivariance is taken with respect to the product of the action on
$P$ and the adjoint action on the Lie algebra $\liealg{g}$ of
$\group{G}$, satisfying $\omega(\xi_P) = \xi$ for all $\xi \in \liealg{g}$,
where $\xi_P$ denotes the fundamental vector field. The \emph{curvature form}
$\Omega \in \Omega^2(P,\liealg{g})^\group{G}$ is given by
\begin{equation}
\Omega
=
\D \omega + \frac{1}{2} [\omega,\omega],
\end{equation}
where $\D$ denotes the de Rham differential, and where the Lie bracket
$[\argument{,}\argument]$ is induced by the $\wedge$ product of ordinary
differential forms and the Lie bracket on $\liealg{g}$.
In other words, the principal connection $\omega$ satisfies the Maurer-Cartan equation
in the curved Lie algebra $(\Omega(P,\liealg{g})^\group{G}, -\Omega,
\D,[\argument{,}\argument])$.
\end{example}
Two other examples of DGLAs and Maurer-Cartan elements that
we want to mention are the DGLAs of polyvector fields
and the DGLA of polydifferential operators. They play an important role in
(formal) deformation quantization \cite{bayen.et.al:1978a,kontsevich:2003a,waldmann:2007a}.
\begin{example}[Polyvector fields]
\label{ex:tpoly}
Let $M$ be a smooth manifold. The \emph{polyvector fields} are
the sections $\Tpoly^\bullet(M)=\Secinfty(\Anti^{\bullet +1} TM)$.
Together with the Schouten bracket $[\argument{,}\argument]_S$ they form
a graded Lie algebra, and together with the zero differential a DGLA
\begin{equation}
\Tpoly^\bullet(M)
=
(\Secinfty(\Anti^{\bullet +1} TM),0,[\argument{,}\argument]_S).
\end{equation}
The Maurer-Cartan elements are given by bivectors $\pi \in
\Secinfty(\Anti^2 TM)$ satisfying the Maurer-Cartan equation $[\pi,\pi]_S=0$,
i.e. by Poisson structures. We denote
by $\{\argument,\argument\}$ the corresponding Poisson brackets on $\Cinfty(M)$.
In deformation quantization one is interested in formal deformations,
whence one considers formal
polyvector fields $\Tpoly^\bullet(M)[[\hbar]]
= \Secinfty(\Anti^{\bullet +1} TM)[[\hbar]]$ in the real formal parameter $\hbar$.
Formal Poisson structures are then formal power series
\begin{equation}
\pi_\hbar
=
\pi_0 + \hbar \pi_1 + \cdots \in
\Secinfty(\Anti^2 TM)[[\hbar]]
\end{equation}
with $[\pi_\hbar,\pi_\hbar]_S = 0$. In lowest order this implies in particular
$[\pi_0,\pi_0]_S=0$. Two such formal Poisson structures $\pi_\hbar$ and
$\widetilde{\pi}_\hbar$ are called \emph{equivalent} if there exists a
formal diffeomorphism such that
\begin{equation}
\pi_\hbar
=
\exp(\hbar [X,\,\cdot\,]_S) \widetilde{\pi}_\hbar,
\end{equation}
where $X\in \Secinfty(TM)[[\hbar]]$. In particular, $\pi_\hbar$ and
$\widetilde{\pi}_\hbar$ deform the same $\pi_0$. In view of later applications,
it turns out to be useful to
consider formal Poisson structures $\pi_\hbar = \hbar\pi_1 + \cdots$ that
start in the first order of $\hbar$, i.e. that deform the zero Poisson structure.
Consequently, one considers formal Maurer-Cartan elements $\Mc^\hbar
= \Mc \cap \hbar \Tpoly^1(M)[[\hbar]]$. We denote the equivalence classes
by
\begin{equation}
\Def(\Tpoly(M)[[\hbar]])
=
\frac{\Mc^\hbar(\Tpoly(M)[[\hbar]])}{\group{G}^0(\Tpoly(M)[[\hbar]])},
\end{equation}
where $\group{G}^0(\Tpoly(M)[[\hbar]])=
\{ \exp([X,\,\cdot\,]_S) \mid X \in \hbar \Tpoly^0(M)[[\hbar]]\}$ is called
\emph{gauge group}.
\end{example}
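As a simple illustration, consider $M = \mathbb{R}^2$ with canonical coordinates $(q,p)$. The constant bivector
\begin{equation*}
\pi
=
\frac{\partial}{\partial q} \wedge \frac{\partial}{\partial p}
\in
\Secinfty(\Anti^2 T\mathbb{R}^2)
\end{equation*}
satisfies $[\pi,\pi]_S = 0$ since its coefficients are constant, and the induced Poisson bracket is the canonical one,
\begin{equation*}
\{f,g\}
=
\frac{\partial f}{\partial q}\frac{\partial g}{\partial p}
-
\frac{\partial f}{\partial p}\frac{\partial g}{\partial q}.
\end{equation*}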
In deformation quantization, the polyvector fields with their formal Poisson structures as Maurer-Cartan elements
correspond to the classical side. From the quantum point of view one looks
for star products on $\Cinfty(M)[[\hbar]]$ that can be interpreted as
formal power series of bidifferential operators. Before we can introduce
the corresponding DGLA of polydifferential operators we have to
recall the Hochschild cochains:
\begin{example}[Hochschild cochains]
\label{ex:Hochschild}
For a unital associative algebra $(A,\mu_0,1)$ we recall the Hochschild cochains
with a shifted grading
\begin{equation}
C^n(A)
=
\Hom(A^{\otimes(n+1)},A)
\end{equation}
for $n \geq 0$ and $C^{-1}(A)=A$. There are different
operations for cochains $D \in C^d(A)$ and $E \in C^e(A)$, the \emph{cup product}
\begin{equation}
D \cup E (a_0,\dots,a_{d+e+1})
=
D(a_0,\dots, a_d)E(a_{d+1},\dots,a_{d+e+1}),
\end{equation}
where $a_0,\dots,a_{d+e+1}\in A$, the \emph{concatenation}
\begin{equation}
D \circ E (a_0,\dots,a_{d+e})
=
\sum_{i=0}^{\abs{ D}} (-1)^{i \abs{ E}}
D(a_0,\dots, a_{i-1}, E(a_i,\dots,a_{i+e}),a_{i+e+1},\dots,a_{d+e})
\end{equation}
and the \emph{Gerstenhaber bracket}
\begin{equation}
\label{eq:GerstenhaberBracketClassical}
[D,E]_G
=
(-1)^{\abs{E}\abs{D}} \left(D \circ E - (-1)^{\abs{D} \abs{E}} E \circ D\right).
\end{equation}
Note that we use the sign convention from \cite{bursztyn.dolgushev.waldmann:2012a},
not the original one from \cite{gerstenhaber:1963a}.
This yields a graded Lie algebra $(C^{\bullet}(A),[\argument{,}\argument]_G)$
and a graded associative algebra $(C^{\bullet-1}(A),\cup)$.
The product $\mu_0$ is an element of $C^1(A)$
and one notices that the associativity of $\mu_0$ is equivalent to
$[\mu_0,\mu_0]_G=0$, compare Remark~\ref{rem:Ainftystructures}.
In particular, we get an induced differential
$\del \colon C^\bullet(A) \rightarrow C^{\bullet +1}(A)$ via
$\del = [\mu_0,\argument]_G$, the so-called \emph{Hochschild differential},
and thus a DGLA structure. One can check that $\pi \in C^1(A)$ is a Maurer-Cartan
element if and only if $\mu_0 + \pi$ is an associative product.
The cohomology of $(C^\bullet(A),\del)$
even inherits the structure of a Gerstenhaber algebra:
More explicitly, in cohomology the $\cup$-product is graded commutative
and one has the following Leibniz rule:
\begin{equation*}
[F,D \cup E]_G
=
[F,D]_G \cup E + (-1)^{\abs{F} (\abs{D} -1)} D \cup [F,E]_G,
\end{equation*}
compare \cite{gerstenhaber:1963a} and \cite[Satz~6.2.18]{waldmann:2007a}.
\end{example}
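As a quick consistency check, note that $\del^2 = 0$ follows in one line from the graded Jacobi identity and the associativity of $\mu_0$: since $\mu_0$ has odd degree $1$, one has for every $D \in C^\bullet(A)$
\begin{equation*}
\del^2 D
=
[\mu_0,[\mu_0,D]_G]_G
=
\frac{1}{2} [[\mu_0,\mu_0]_G,D]_G
=
0.
\end{equation*}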
In the context of deformation quantization \cite{bayen.et.al:1978a} we
are interested in star products. They are Maurer-Cartan elements in the
DGLA of polydifferential operators, i.e. in the differential Hochschild cochain complex:
\begin{example}[Polydifferential operators]
\label{ex:dpoly}
Recall that a star product $\star$ on a Poisson manifold $(M,\pi)$ is an associative
product on $\Cinfty(M)[[\hbar]]$ of the form
\begin{equation}
f \star g
=
\mu_0(f, g) + \sum_{r=1}^\infty \hbar^r C_r(f,g) \in
\Cinfty(M)[[\hbar]],
\end{equation}
where $C_1(f,g)-C_1(g,f) = \{f,g\}$, and where all $C_r$ are bidifferential
operators vanishing on constants. Two star products $\star$ and $\star'$
are equivalent if there
exists an $\hbar$-linear isomorphism $S = \id + \sum_{r=1}^\infty \hbar^r S_r$
with differential operators $S_r$ and
\begin{equation}
S(f \star g)
=
Sf \star' Sg.
\end{equation}
To describe the notion of star products in terms of a DGLA we consider
the associated Hochschild DGLA $(C^\bullet(\Cinfty(M)),\del,
[\argument{,}\argument]_G)$. To incorporate the bidifferentiability we restrict
ourselves to the \emph{polydifferential operators}
\begin{equation}
\Dpoly^\bullet(M)[[\hbar]]
=
\bigoplus_{n=-1}^\infty \Dpoly^{n}(M)[[\hbar]],
\end{equation}
where $\Dpoly^{n}(M)[[\hbar]]= \Hom_{\mathrm{diff}}(\Cinfty(M)^{\otimes n+1},
\Cinfty(M))[[\hbar]]$ are polydifferential operators vanishing on constants.
We know that star products can be interpreted as $\star = \mu_0 + \sum_{r=1}^\infty
\hbar^r C_r= \mu_0 +\hbar m_\star \in \Dpoly^{1}(M)[[\hbar]]$ and the associativity leads to
\begin{equation}
\label{eq:starproductmc}
0
=
[\star,\star]_G
=
2[\mu_0,\hbar m_\star]_G + [\hbar m_\star,\hbar m_\star]_G.
\end{equation}
Thus star products correspond to Maurer-Cartan elements
$\hbar m_\star \in \hbar \Dpoly^{1}(M)[[\hbar]]$, and the equivalence of $\star$
and $\star'$ is equivalent to
\begin{equation}
\exp(\hbar [T,\cdot]_G) \star
=
\star',
\end{equation}
where $\hbar T = \log S \in \hbar
\Dpoly^{0}(M)[[\hbar]]$, see \cite[Proposition~6.2.20]{waldmann:2007a}.
Consequently, the equivalence classes
of star products are given by
\begin{equation}
\Def(\Dpoly(M)[[\hbar]])
=
\frac{\Mc^\hbar(\Dpoly(M)[[\hbar]])}{\group{G}^0(\Dpoly(M)[[\hbar]])},
\end{equation}
where the gauge group action of $\group{G}^0(\Dpoly(M)[[\hbar]])=
\{ \exp([\hbar T,\,\cdot\,]_G) \mid \hbar T\in \hbar \Dpoly^{0}(M)[[\hbar]] \}$
is given by
\begin{equation*}
\hbar m_{\star'}
=
\exp([\hbar T,\argument]_G) \acts \hbar m_\star
=
\exp([\hbar T,\argument]_G)(\mu_0 +\hbar m_\star) - \mu_0,
\end{equation*}
and where we consider again
only formal Maurer-Cartan elements $\hbar m_\star \in \Mc^\hbar = \Mc \cap
\hbar \Dpoly^{1}(M)[[\hbar]]$, i.e. starting in order one of $\hbar$.
\end{example}
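Expanding the Maurer-Cartan equation \eqref{eq:starproductmc} in powers of $\hbar$ makes the recursive construction of star products explicit: at order $\hbar^r$ one obtains
\begin{equation*}
\del C_r
=
- \frac{1}{2} \sum_{s=1}^{r-1} [C_s,C_{r-s}]_G,
\end{equation*}
so $C_1$ has to be a Hochschild cocycle, and at every order $r \geq 2$ the right-hand side, which is determined by the lower orders, has to be exact in the Hochschild complex in order to find $C_r$.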
In the above examples we have seen that it is useful to identify
``equivalent'' Maurer-Cartan elements by actions of elements of degree $0$.
In both settings the exponential maps were well-defined because of
the complete filtration induced by the formal power series.
Therefore, we restrict ourselves to (curved)
Lie algebras with complete descending filtrations $\mathcal{F}^\bullet\liealg{g}$ satisfying
\begin{equation}
\cdots
\supseteq
\mathcal{F}^{-2}\liealg{g}
\supseteq
\mathcal{F}^{-1}\liealg{g}
\supseteq
\mathcal{F}^{0}\liealg{g}
\supseteq
\mathcal{F}^{1}\liealg{g}
\supseteq
\cdots,
\quad \quad
\liealg{g}
\cong
\varprojlim \liealg{g}/\mathcal{F}^n\liealg{g},
\end{equation}
and
\begin{equation}
\D(\mathcal{F}^k\liealg{g})
\subseteq
\mathcal{F}^k\liealg{g}
\quad \quad \text{ and } \quad \quad
[\mathcal{F}^k\liealg{g},\mathcal{F}^\ell\liealg{g}]
\subseteq
\mathcal{F}^{k+\ell}\liealg{g}.
\end{equation}
In most cases the filtration will be bounded below, i.e.
bounded from the left with $\liealg{g}=\mathcal{F}^k\liealg{g}$ for some
$k\in \mathbb{Z}$, preferably $k=0$ or $k=1$. If the filtration is unbounded,
then we assume in addition that it is exhaustive, i.e. that
\begin{equation}
\liealg{g}
=
\bigcup_n \mathcal{F}^n\liealg{g},
\end{equation}
even if we do not mention it explicitly. Note that instead of considering
filtered DGLAs one can tensorize the DGLAs with nilpotent algebras:
\begin{remark}[Nilpotent DGLA]
\label{rem:nilpotentdgla}
Alternatively to the filtration, one can tensorize the DGLA $(\liealg{g},\D,
[\argument{,}\argument])$ by a graded commutative associative
$\mathbb{K}$-algebra $\liealg{m}$, compare \cite[Section~2.4]{esposito:2015a}:
\begin{align*}
(\liealg{g} \otimes \liealg{m})^n
& =
\bigoplus_i (\liealg{g}^i \otimes \liealg{m}^{n-i}) \\
\D(x \otimes m)
& =
\D x \otimes m \\
[x \otimes m, y \otimes n]
& =
(-1)^{\abs{m}\abs{y}} [x,y] \otimes mn.
\end{align*}
In particular, if $\liealg{m}$ is nilpotent, the DGLA $\liealg{g}\otimes
\liealg{m}$ is nilpotent, too. Thus, under the assumption that
$\liealg{g}^0 \otimes \liealg{m}$ is nilpotent, the above exponential maps
are well-defined also in the non-filtered setting. For further details on this approach and on the deformation functor
see \cite{canonaco:1999a,manetti:2005a}.
\end{remark}
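For illustration, a standard choice is the nilpotent algebra $\liealg{m} = \hbar \mathbb{K}[\hbar]/(\hbar^{N+1})$, concentrated in degree $0$, which is the maximal ideal of the local Artinian ring $\mathbb{K}[\hbar]/(\hbar^{N+1})$. In this case
\begin{equation*}
\liealg{g} \otimes \liealg{m}
\cong
\hbar\, \liealg{g}[\hbar] / (\hbar^{N+1} \liealg{g}[\hbar])
\end{equation*}
is nilpotent and encodes deformations up to order $N$ in the formal parameter.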
\begin{example}[Formal power series]
If we consider formal power series $\liealg{G}=\liealg{g}[[\hbar]]$ of a
DGLA $\liealg{g}$, where all maps are $\hbar$-linearly extended,
then we can choose the filtration
$\mathcal{F}^k \liealg{G} = \hbar^k \liealg{G}$ and the completeness
is trivially fulfilled.
\end{example}
As in the above examples we set for the curved Maurer-Cartan elements in
$(\liealg{g},R,\D,[\argument{,}\argument])$
\begin{equation}
\Mc^1(\liealg{g})
=
\{\pi \in \mathcal{F}^1\liealg{g}^1 \mid R +\D \pi + \frac{1}{2}[\pi,\pi]=0\}
\end{equation}
and we want to define a group action by the gauge group
\begin{equation}
\group{G}^0(\liealg{g})
=
\{ \Phi = e^{[a,\,\cdot\,]} \colon \liealg{g}
\longrightarrow \liealg{g} \mid a \in \mathcal{F}^1\liealg{g}^0\},
\end{equation}
where we consider again only elements of filtration order $1$.
In order to define the gauge action, we consider as in \cite{manetti:2005a}
the DGLA $(\liealg{g}_{\D},[\argument{,}\argument]_{\D},0)$ with
$\liealg{g}_{\D} = \liealg{g} \oplus \mathbb{K}\D$ and
\begin{equation}
\label{eq:gplusd}
[x+ r\D, y+s\D]_{\D}
=
[x,y] + r \D(y) -(-1)^{\abs{x}}s\D(x) + 2rs R.
\end{equation}
Then $\pi$ is a Maurer-Cartan element in $\liealg{g}$ if and only if
$\phi(\pi)= \pi + \D$ is a Maurer-Cartan element in $\liealg{g}_{\D}$ with zero
differential. But in $(\liealg{g}_{\D},[\argument{,}\argument]_{\D},0)$
we already have an action of $\group{G}^0(\liealg{g})$ on the
Maurer-Cartan elements by the adjoint representation.
Pulling this action back on $\liealg{g}$ yields the following:
\begin{proposition}[Gauge action]
\label{prop:GaugeactionDGLA}
Let $(\liealg{g},R,\D,[\argument{,}\argument])$ be a curved Lie algebra with complete
descending filtration. The \emph{gauge group} $ \group{G}^0(\liealg{g})$
acts on $\Mc^1(\liealg{g})$ via
\begin{equation}
\label{eq:gaugeactiong0}
\exp([g,\,\cdot\,]) \acts \pi
=
\sum_{n=0}^\infty \frac{( [g,\,\cdot\,])^n}{n!}(\pi)
-
\sum_{n=0}^\infty \frac{([g,\,\cdot\,])^n}{(n+1)!} (\D g)
=
\pi + \sum_{n=0}^\infty \frac{([g,\,\cdot\,])^n}{(n+1)!} ([g,\pi]-\D g).
\end{equation}
The equivalence classes of Maurer-Cartan elements are denoted by
\begin{equation}
\Def(\liealg{g})
=
\frac{\Mc^1(\liealg{g})} {\group{G}^0(\liealg{g})}.
\end{equation}
\end{proposition}
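To get a feeling for \eqref{eq:gaugeactiong0}, note that to lowest order in $g$ one has
\begin{equation*}
\exp([g,\,\cdot\,]) \acts \pi
=
\pi + [g,\pi] - \D g + \cdots,
\end{equation*}
i.e. the action integrates the infinitesimal gauge transformations $\pi \mapsto \pi + [g,\pi] - \D g$. For an abelian Lie algebra with $R = 0$ only the zeroth order terms survive, the action reduces to $\pi \mapsto \pi - \D g$, and $\Def(\liealg{g})$ becomes the degree one cohomology of the complex $(\mathcal{F}^1\liealg{g},\D)$.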
The set $\Def(\liealg{g})$ is the orbit space of the
transformation groupoid $\group{G}^0(\liealg{g})\ltimes\Mc^1(\liealg{g})$ of the gauge action.
This groupoid is also called the \emph{Goldman-Millson groupoid} or
\emph{Deligne groupoid} \cite{goldman.millson:1988a}
and plays an important role in deformation theory \cite{manetti:2005a}.
An additional motivation for this gauge action comes from
the following general consideration:
\begin{lemma}
\label{lemma:TwistedDGLAsIsomorphic}
Let $(\liealg{g},R,\D,[\argument{,}\argument])$ be a curved Lie algebra with complete
descending filtration. For all $g\in \mathcal{F}^1\liealg{g}^0$
and all derivations $D$ of degree $+1$ one has
\begin{equation}
\label{eq:ConjunctionofDer}
\exp([g,\argument]) \circ D \circ \exp([-g,\argument])
=
D - \left[ \frac{\exp([g,\argument])-\id}{[g,\argument]} (Dg),
\argument\right].
\end{equation}
\end{lemma}
\begin{proof}
The proof is the same as \cite[Lemma~1.3.20]{neumaier:2001a}. It follows directly
from $\Ad(\exp([g,\argument])) D = e^{\ad([g,\argument])} D$ and
$\ad([g,\argument])D = - [Dg,\argument]$.
\end{proof}
This immediately implies that twisting with gauge equivalent Maurer-Cartan
elements leads to isomorphic DGLAs.
\begin{corollary}
\label{cor:GaugeEquivMCTwistsQuis}
Let $(\liealg{g},R,\D,[\argument{,}\argument])$ be a curved
Lie algebra with complete
descending filtration and let $\pi, \widetilde{\pi}$ be two
gauge equivalent Maurer-Cartan elements
via $ \widetilde{\pi} = \exp([g,\,\cdot\,]) \acts \pi $. Then one has
\begin{equation}
\label{eq:MCEquivalenceIso}
\D + [\widetilde{\pi},\argument]
=
\exp([g,\,\cdot\,]) \circ (\D + [\pi,\argument]) \circ \exp([-g,\,\cdot\,]),
\end{equation}
i.e. $\exp([g,\argument]) \colon (\liealg{g},\D+[\pi,\argument],
[\argument{,}\argument]) \rightarrow (\liealg{g},\D +[\widetilde{\pi},\argument],
[\argument{,}\argument])$ is an isomorphism of DGLAs.
Moreover, one also has in the coalgebra setting
\begin{equation}
\label{eq:MCEquivalenceCoalgIso}
Q^{\widetilde{\pi}}
=
\exp([g,\,\cdot\,]) \circ Q^\pi \circ \exp([-g,\,\cdot\,]),
\end{equation}
where $Q^\pi,Q^{\widetilde{\pi}}$ are the codifferentials with
$Q^\pi_1 = - \D - [\pi,\argument]$ and $Q^{\widetilde{\pi}}_1 = -\D
- [\widetilde{\pi},\argument]$ and second structure map given by
the bracket.
\end{corollary}
\begin{proof}
The first equation is clear by \eqref{eq:ConjunctionofDer}. The second one is
clear since $\exp([g,\,\cdot\,])$ is a Lie algebra automorphism of degree zero
intertwining the differentials.
\end{proof}
Finally, note that we recover indeed the equivalence notions for the polyvector
fields and polydifferential
operators.
\begin{example}[$\Tpoly(M)$ and $\Dpoly(M)$]
It is easy to see that the action of $\group{G}^0$ on $\Mc^\hbar$ as defined in
\eqref{eq:gaugeactiong0} coincides with the actions in the above
examples $\Tpoly(M)[[\hbar]]$ and $\Dpoly(M)[[\hbar]]$.
For $\Tpoly(M)$ in Example~\ref{ex:tpoly} the differential of the DGLA is
zero, so the gauge action $\exp(\hbar[X,\,\cdot\,]) \acts \cdot$
coincides with the usual action by formal diffeomorphisms. In the case of $\Dpoly(M)$ in
Example~\ref{ex:dpoly} the differential is given by $\del=[\mu_0,\,\cdot\,]$,
where $\mu_0$ denotes the pointwise product on functions. Therefore, two formal
star products $\star=\mu_0 + \hbar m_\star$ and $\star'=\mu_0 + \hbar m_{\star'}$
are equivalent via
$ \exp(\hbar [T,\,\cdot\,]) \star= \star'$ with $\hbar T \in
\hbar \Dpoly^{0}(M)[[\hbar]]$ if and only if
\begin{align*}
\star'
=
\mu_0 + \hbar m_{\star'}
=
\exp(\hbar [T,\,\cdot\,]) (\mu_0 + \hbar m_\star)
=
\mu_0 + \exp(\hbar[T,\,\cdot\,]) \acts \hbar m_\star.
\end{align*}
\end{example}
\subsection{Maurer-Cartan Elements in $L_\infty$-algebras}
\label{sec:MCinLinftyAlgs}
The notion of Maurer-Cartan elements can be transferred to general
$L_\infty$-algebras. There are again different possibilities for conditions
on the $L_\infty$-algebra.
One possibility is to require the $L_\infty$-algebra $(L,Q)$ to be
\emph{nilpotent} \cite{getzler:2009a,manetti:note}: for example, one requires
$L^{[n]}=0$ for $n \gg 0$, where
\begin{equation}
L^{[n]}
=
\Span\{ Q_k(x_1\vee \cdots \vee x_k) \mid
k \geq 2, \; x_i \in L^{[n_i]}, \; 0 < n_i <n ,\; \sum_i n_i > n\}
\end{equation}
and $L^{[1]}=L$. In particular, this implies $Q_n =0$ for $n \gg 0$.
However, we consider again $L_\infty$-algebras with complete descending and
exhaustive filtrations, where we implicitly include exhaustive when we say
``complete filtration''. Moreover, we
require from now on that the codifferentials and the $L_\infty$-morphisms
are compatible with the filtrations.
\begin{remark}
There are again different conventions about the filtrations. Sometimes one
considers \emph{complete} $L_\infty$-algebras $L$, i.e.
$L_\infty$-algebras with complete descending filtrations where
$L = \mathcal{F}^1 L$, see e.g. \cite{dotsenko.poncin:2016a}.
\end{remark}
\begin{definition}[Maurer-Cartan elements II]
Let $(L,Q)$ be a (curved) $L_\infty$-algebra
with complete descending filtration. Then $\pi\in \mathcal{F}^1 L[1]^0
= \mathcal{F}^1 L^1$ is called
\emph{(curved) Maurer-Cartan element} if it satisfies the Maurer-Cartan equation
\begin{equation}
\label{eq:mclinfty}
Q^1(\exp(\pi))
=
\sum_{n\geq 0} \frac{1}{n!} Q_n^1(\pi\vee\cdots\vee \pi)
=
0.
\end{equation}
The set of (curved) Maurer-Cartan elements is denoted by $\Mc^1(L)$.
\end{definition}
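To see how this generalizes the DGLA case, consider a flat DGLA viewed as an $L_\infty$-algebra with $Q_1 = -\D$, second structure map given by the bracket as in Corollary~\ref{cor:GaugeEquivMCTwistsQuis}, and vanishing higher structure maps. Then, up to the conventional signs for $Q_2$, the Maurer-Cartan equation \eqref{eq:mclinfty} reduces to the classical one:
\begin{equation*}
0
=
Q^1(\exp(\pi))
=
Q_1^1(\pi) + \frac{1}{2} Q_2^1(\pi \vee \pi)
=
- \left( \D \pi + \frac{1}{2} [\pi,\pi] \right).
\end{equation*}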
Note that the sum in \eqref{eq:mclinfty} is well-defined for $\pi\in
\mathcal{F}^1L^1$ because of the completeness. From now on we
assume that we are in this setting and collect some useful properties:
\begin{lemma}
\label{lemma:twistinglinftymorphisms}
Let $F\colon (L,Q)\rightarrow (L',Q')$ be an
$L_\infty$-morphism of (curved) $L_\infty$-algebras and
$\pi \in \mathcal{F}^1L^1$.
\begin{lemmalist}
\item $\pi$ is a (curved) Maurer-Cartan element if and only if
$Q(\exp(\pi))=0$.
\item $F(\exp(\pi)) = \exp(S)$ with
$S = F_\MC(\pi)= F^1(\cc{\exp}(\pi))$, where
$\cc{\exp}(\pi)=\sum_{k=1}^\infty \frac{1}{k!} \pi^{\vee k}$.
\item If $\pi$ is a (curved) Maurer-Cartan element, then so is
$F_\MC(\pi)$.
\end{lemmalist}
\end{lemma}
\begin{proof}
The proof for the case of flat DGLAs can be found in \cite[Proposition~1]{dolgushev:2005b}.
Note that in the flat case it suffices to consider
$\cc{\exp}(\pi)=\sum_{k=1}^\infty \frac{1}{k!} \pi^{\vee k}$ instead of $\exp(\pi)$.
In our general setting we have
\begin{equation*}
\Delta_\sh(\exp(\pi))
=
\exp(\pi) \otimes \exp(\pi),
\end{equation*}
where we have to consider the completed symmetric tensor algebra with respect
to the symmetric degrees. For $\pi \in \mathcal{F}^1 L[1]^0$ we compute with \eqref{eq:QinSymmetricviaTaylor}
\begin{align*}
Q(\exp(\pi))
& =
\sum_{n=0}^\infty \frac{1}{n!} \sum_{k=0}^n
\binom{n}{k} Q_k^1(\pi\vee \cdots \vee \pi)\vee \pi \vee \cdots \vee \pi \\
& =
Q^1(\exp(\pi))\vee \exp (\pi)
\end{align*}
and thus the first point is clear.
The second point follows from the explicit form of $F$ from
Proposition~\ref{prop:linftymorphismsequence}.
Concerning the last part we have
\begin{equation*}
Q'(\exp(S))
=
Q'F(\exp(\pi))
=
F Q(\exp(\pi))
=
0
\end{equation*}
whence the statement follows with the first and second point.
\end{proof}
We have shown that $L_\infty$-morphisms map Maurer-Cartan elements to Maurer-Cartan
elements, but we would like to have even more: they should map equivalence
classes of Maurer-Cartan elements to equivalence classes and thus yield maps
on the deformations $\Def(\liealg{g})$ of DGLAs and, more generally, of $L_\infty$-algebras.
Therefore, we have to generalize at first the gauge action to an equivalence
relation on the set of Maurer-Cartan elements of (curved) $L_\infty$-algebras.
This is achieved by introducing the homotopy equivalence relation, see e.g.
\cite[Lemma~5.1]{manetti:2005a} for the case of DGLAs and
\cite[Definition~6.5]{maunder:2017a} for the case of curved Lie algebras.
For the $L_\infty$-setting we follow \cite[Section~4]{canonaco:1999a} but
adapt the definitions to the case of $L_\infty$-algebras with complete
descending filtrations as in \cite{dotsenko.poncin:2016a}.
Let therefore $(L,Q)$ be an $L_\infty$-algebra with complete descending filtration and
consider $L[t]=L \otimes \mathbb{K}[t]$ which has again a descending filtration
\begin{equation*}
\mathcal{F}^k L[t]
=
\mathcal{F}^kL \otimes \mathbb{K}[t].
\end{equation*}
We denote its completion by $\widehat{L[t]}$ and note that since $Q$ is compatible
with the filtration it extends to $\widehat{L[t]}$. Similarly,
$L_\infty$-morphisms extend to these completed spaces.
\begin{remark}
\label{rm:completion}
Note that one can define the completion as space of equivalence classes of
Cauchy sequences with respect to the filtration topology.
Alternatively, the completion can be identified with
\begin{equation*}
\varprojlim L[t] / \mathcal{F}^n L[t]
\subset \prod_n L[t]/\mathcal{F}^nL[t]
\cong
\prod_n L/\mathcal{F}^nL \otimes\mathbb{K}[t]
\end{equation*}
consisting of all coherent tuples $X=(x_n)_n \in
\prod_n L[t]/\mathcal{F}^nL[t] $, where
\begin{equation*}
L[t]/\mathcal{F}^{n+1}L[t] \ni x_{n+1}
\longmapsto
x_n \in L[t]/\mathcal{F}^nL[t]
\end{equation*}
under the obvious surjections.
Moreover, $\mathcal{F}^n\widehat{L[t]}$ corresponds to the kernel
of $\varprojlim L[t]/\mathcal{F}^nL[t] \rightarrow L[t]/\mathcal{F}^nL[t]$
and thus
\begin{equation*}
\widehat{L[t]} / \mathcal{F}^n\widehat{L[t]}
\cong
L[t] / \mathcal{F}^n L[t].
\end{equation*}
Since $L$ is complete, we can also interpret $\widehat{L[t]}$ as the
subspace of $L[[t]]$ such that
$X \;\,\mathrm{mod}\;\, \mathcal{F}^nL[[t]]$ is polynomial in $t$. In particular,
$\mathcal{F}^n\widehat{L[t]}$ is the subspace of elements in
$\mathcal{F}^nL[[t]]$ that are polynomial in $t$ modulo
$\mathcal{F}^mL[[t]]$ for all $m>n$.
\end{remark}
By the above construction of $\widehat{L[t]}$ it is clear that
differentiation $\frac{\D}{\D t}$ and integration with respect to
$t$ extend to it since they do not change the filtration. Moreover,
\begin{equation*}
\delta_s
\colon
\widehat{L[t]} \ni X(t)
\longmapsto
X(s) \in L
\end{equation*}
is well-defined for all $s\in \mathbb{K}$ since $L$ is complete.
\begin{example}
In the case that the filtration of $L$ comes from a grading $L^\bullet$, the
completion is given by $\widehat{L[t]}\cong \prod_i L^i[t]$, i.e. by polynomials
in each degree. A special case is that of formal power series
$L= V[[\hbar]]$ with $\widehat{L[t]} \cong (V[t])[[\hbar]]$ as in
\cite[Appendix~A]{bursztyn.dolgushev.waldmann:2012a}.
\end{example}
Now we can introduce a general equivalence relation between Maurer-Cartan
elements of $L_\infty$-algebras. We write $\pi_0 \sim \pi_1$ if
there exist $\pi(t) \in \mathcal{F}^1\widehat{L^1[t]}$ and
$\lambda(t) \in \mathcal{F}^1\widehat{L^0[t]}$ such that
\begin{align}
\label{eq:EquivMCElements}
\begin{split}
\frac{\D}{\D t} \pi(t)
& =
Q^1 (\lambda(t) \vee \exp(\pi(t)))
=
\sum_{n=0}^\infty \frac{1}{n!} Q^1_{n+1} (\lambda(t) \vee \pi(t) \vee
\cdots \vee \pi(t)), \\
\pi(0)
& =
\pi_0
\quad \quad \text{ and }\quad \quad
\pi(1)
=
\pi_1.
\end{split}
\end{align}
We directly see that $\sim$ is reflexive and symmetric and one can check that
it is also transitive. We write $[\pi_0]$ for the homotopy class of
$\pi_0$ and define:
\begin{definition}[Homotopy equivalence]
\label{def:HomEquivalenceofMC}
Let $(L,Q)$ be a (curved) $L_\infty$-algebra with complete descending filtration.
The \emph{homotopy equivalence relation} on the set $\Mc^1(L)$ is given by
the relation $\sim$ from \eqref{eq:EquivMCElements}. The set of equivalence classes of Maurer-Cartan elements is denoted
by $\Def(L) = \Mc^1(L) / \sim$.
\end{definition}
\begin{remark}
This definition can be reformulated: two Maurer-Cartan elements $\pi_0$ and $\pi_1$ in $L$
are homotopy equivalent if and only if there exists a Maurer-Cartan element
$\pi(t)-\lambda(t)\D t$ in $\widehat{L[t,\D t]} $ with $\pi(0)=\pi_0$
and $\pi(1)=\pi_1$, see e.g.
\cite{dotsenko.poncin:2016a} for $L_\infty$-algebras and \cite{manetti:2005a} for DGLAs.
\end{remark}
Note that in the case of
nilpotent $L_\infty$-algebras it suffices to consider polynomials in $t$
as there is no need to complete $L[t]$, compare \cite{getzler:2009a}.
We now check that this is well-defined and in fact yields a curve $\pi(t)$
of Maurer-Cartan elements, see \cite[Proposition~4.8]{canonaco:1999a}.
\begin{proposition}
\label{prop:PioftUnique}
For every $\pi_0 \in \mathcal{F}^1L^1$ and $\lambda(t) \in
\mathcal{F}^1\widehat{L^0[t]}$
there exists a unique $\pi(t) \in \mathcal{F}^1\widehat{L^1[t]}$
such that
$\frac{\D}{\D t} \pi(t) = Q^1 (\lambda(t) \vee \exp(\pi(t)))$ and
$\pi(0) = \pi_0$. If $\pi_0 \in \Mc^1(L)$, then $\pi(t) \in \Mc^1(L)$ for
all $t\in \mathbb{K}$.
\end{proposition}
\begin{proof}
At first we show that there exists a unique solution $\pi(t) = \sum_{k=0}^\infty
\pi_k t^k$ in the formal power series $\mathcal{F}^1 L^1 \otimes \mathbb{K}[[t]]$.
On one hand one has
\begin{equation*}
\frac{\D}{\D t} \pi(t)
=
\sum_{k =0}^\infty (k+1) \pi_{k+1} t^k,
\end{equation*}
on the other hand there exist $\phi_k \in \mathcal{F}^1L^1$ such that
\begin{equation*}
Q^1 (\lambda(t) \vee \exp(\pi(t)))
=
\sum_{k=0}^\infty \phi_k t^k.
\end{equation*}
Here the $\phi_k$ depend only on the $\lambda_j$ of $\lambda(t)= \sum_{j=0}^\infty
\lambda_j t^j$ and the $\pi_i$ for $i\leq k$, hence they can be defined
inductively. It remains to check that in fact
$\pi(t)\in \mathcal{F}^1\widehat{L^1[t]}$, i.e. by
Remark~\ref{rm:completion} that
$\pi(t)\;\,\mathrm{mod}\;\, \mathcal{F}^nL^1[[t]] \in L^1[t]$ for all $n$.
Indeed, we have inductively
\begin{equation*}
\frac{\D}{\D t}\pi(t) \;\,\mathrm{mod}\;\, \mathcal{F}^2L^1[[t]]
=
Q^1(\lambda(t)) \;\,\mathrm{mod}\;\,\mathcal{F}^2L^1[[t]]
\in
L^1[t].
\end{equation*}
For the higher orders we get
\begin{equation*}
\frac{\D}{\D t}\pi(t)
\equiv
\sum_{k=0}^{n-2}\frac{1}{k!}
Q^1_{k+1}(\lambda(t)\vee (\pi(t)\;\,\mathrm{mod}\;\,\mathcal{F}^{n-1})\vee \cdots
\vee (\pi(t)\;\,\mathrm{mod}\;\,\mathcal{F}^{n-1}))
\;\,\mathrm{mod}\;\, \mathcal{F}^nL^1[[t]]
\end{equation*}
and thus $\pi(t) \,\mathrm{mod}\, \mathcal{F}^nL^1[[t]] \in L^1[t]$.
Let now $\pi_0$ be a curved Maurer-Cartan element; the flat case follows directly
from the curved one. We have to show $g(t) = Q^1(\exp(\pi(t))) = 0$ for
all $t$, so it suffices to show $g^{(n)}(0)= \frac{\D^n}{\D t^n} g (0)=0$ for all $n\geq 0$.
The case $n=0$ is clear, for $n=1$ we get
\begin{align*}
g^{(1)}(t)
& =
Q^1 \left(\exp(\pi(t)) \vee Q^1(\lambda(t)\vee \exp(\pi(t)))\right) \\
& =
Q^1\left(Q\left(\lambda(t)\vee \exp(\pi(t))\right) + \lambda(t)\vee \exp(\pi(t)) \vee
Q^1(\exp(\pi(t)))\right) \\
& =
Q^1\left(\lambda(t)\vee \exp(\pi(t)) \vee g^{(0)}(t)\right).
\end{align*}
The statement follows by induction.
\end{proof}
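Spelled out, the recursion in the above proof starts as follows: comparing coefficients of $t^k$ yields $(k+1)\pi_{k+1} = \phi_k$, so
\begin{equation*}
\pi_1
=
Q^1(\lambda_0 \vee \exp(\pi_0)),
\qquad
2 \pi_2
=
Q^1(\lambda_1 \vee \exp(\pi_0)) + Q^1(\lambda_0 \vee \pi_1 \vee \exp(\pi_0)),
\end{equation*}
and so on, each $\pi_{k+1}$ being determined by $\lambda_0,\dots,\lambda_k$ and $\pi_0,\dots,\pi_k$.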
In the case of a curved Lie algebra $(\liealg{g},R,\D,[\argument{,}\argument])$ this
recovers the gauge action from Proposition~\ref{prop:GaugeactionDGLA}.
Going from gauge equivalence to homotopy equivalence is easy:
Explicitly, let $\pi_1 = \exp([g,\argument])\acts \pi_0$, then setting
$\lambda(t) = g$ and $\pi(t) = \exp([tg,\argument])\acts \pi_0$ satisfies
\begin{align*}
\frac{\D}{\D t} \pi(t)
& =
\exp([tg,\argument])[g,\pi_0] - \exp([tg,\argument])(\D g) \\
& =
-\D g + \exp([tg,\argument])[g,\pi_0] -
\sum_{n=0}^\infty \frac{([tg,\,\cdot\,])^{n+1}}{(n+1)!} (\D g) \\
& =
Q_1^1(\lambda(t)) + [\lambda(t), \exp([tg,\argument])\acts \pi_0].
\end{align*}
For the flat setting, the other direction from the homotopy equivalence to
the gauge equivalence is contained in the following theorem, see e.g.
\cite[Theorem~5.5]{manetti:2005a}.
\begin{theorem}
\label{thm:DGALHomvsGaugeEquiv}
Two Maurer-Cartan elements in $(\liealg{g},\D,[\argument{,}\argument])$
are homotopy equivalent if and only if they are gauge equivalent.
\end{theorem}
This theorem can be rephrased in a more explicit manner in the following proposition,
see \cite[Proposition~2.13]{kraft.schnitzer:2021a:pre}.
\begin{proposition}
\label{prop:HomEquvsGaugeEqu}
Let $(\liealg{g},R,\D,[\argument{,}\argument])$ be a curved Lie algebra
equipped with a complete descending filtration.
Consider $\pi_0 \sim \pi_1$ with homotopy equivalence given by
$\pi(t) \in \mathcal{F}^1\widehat{\liealg{g}^1[t]}$ and
$\lambda(t) \in \mathcal{F}^1\widehat{ \liealg{g}^0[t]}$.
The formal solution of
\begin{equation}
\label{eq:ODEforA}
\lambda(t)
=
\frac{\exp([A(t),\argument])-\id}{[A(t),\argument]} \left(\frac{\D}{\D t}A(t)\right),
\quad\quad
A(0)
=
0
\end{equation}
is an element $A(t) \in \mathcal{F}^1\widehat{ \liealg{g}^0[t]}$
and satisfies
\begin{equation}
\pi(t)
=
e^{[A(t),\argument]}\pi_0
- \frac{\exp([A(t),\argument])-\id}{[A(t),\argument]} \D A(t).
\end{equation}
In particular, one has for $g =A(1) \in \mathcal{F}^1 \liealg{g}^0$
\begin{equation}
\pi_1
=
\exp([g,\argument])\acts \pi_0.
\end{equation}
\end{proposition}
\begin{proof}
As formal power series in $t$ Equation~\eqref{eq:ODEforA} has a unique solution
$A(t) \in \mathcal{F}^1 \liealg{g}^0 \otimes \mathbb{K}[[t]]$.
In fact, one even has $A(t)\in \mathcal{F}^1\widehat{\liealg{g}^0[t]}$
since
\begin{align*}
\frac{\D A(t)}{\D t} &
\equiv
\lambda(t) - \sum_{k=1}^{n-2} \frac{1}{(k+1)!} [A(t),\argument]^k
\frac{\D A(t)}{\D t}
\;\,\mathrm{mod}\;\, \mathcal{F}^n\liealg{g}[[t]] \\
& \equiv
\lambda(t) - \sum_{k=1}^{n-2} \frac{1}{(k+1)!}
[A(t)\;\,\mathrm{mod}\;\, \mathcal{F}^{n-1}\liealg{g}[[t]],\argument]^k
\left(\frac{\D A(t)}{\D t} \;\,\mathrm{mod}\;\, \mathcal{F}^{n-1}\liealg{g}[[t]]\right)
\;\,\mathrm{mod}\;\, \mathcal{F}^n\liealg{g}[[t]]
\end{align*}
is polynomial in $t$ by induction. Writing $\dot{A}(t)=\frac{\D}{\D t}A(t)$
one has
\begin{equation}
\tag{$*$}
\label{eq:DiffofExp}
\frac{\D}{\D t} e^{[A(t),\argument]}
=
\left[\frac{\exp([A(t),\argument])-\id}{[A(t),\argument]}\dot{A}(t) ,
\argument \right]
\circ
\exp([A(t),\argument]).
\end{equation}
Our aim is now to show that
$\pi'(t) = e^{[A(t),\argument]}\pi_0
- \frac{\exp([A(t),\argument])-\id}{[A(t),\argument]} \D A(t)$
satisfies
\begin{equation*}
\frac{\D \pi'(t)}{\D t}
=
-\D \lambda(t) + \left[\lambda(t), e^{[A(t),\argument]}\pi_0
- \frac{\exp([A(t),\argument])-\id}{[A(t),\argument]} \D A(t)\right].
\end{equation*}
Then we know $\pi'(t) = \pi(t)\in \mathcal{F}^1\widehat{\liealg{g}^1[t]}$
since the solution $\pi(t)$ is unique by
Proposition~\ref{prop:PioftUnique}, which immediately gives
$\pi'(1)=\pi_1$. At first compute
\begin{align*}
\D \lambda(t)
& =
\frac{\exp([A(t),\argument])-\id}{[A(t),\argument]}\D \dot{A}(t)
+
\sum_{k=0}^\infty \sum_{j=0}^{k-1} \frac{1}{(k+1)!} \binom{k}{j+1}
[ \ad_A^j\D A(t), \ad_A^{k-1-j} \dot{A}(t)]
\end{align*}
and with \eqref{eq:DiffofExp} we get
\begin{align*}
\frac{\D \pi'(t)}{\D t}
& =
\left[\frac{\exp([A(t),\argument])-\id}{[A(t),\argument]} \dot{A}(t),
\exp([A(t),\argument]) \pi_0\right] -
\frac{\exp([A(t),\argument])-\id}{[A(t),\argument]}\D \dot{A}(t) \\
& \;
- \sum_{k=0}^\infty \sum_{j=0}^{k-1} \frac{1}{(k+1)!}\binom{k}{j+1}
[\ad_A^j \dot{A}(t), \ad_A^{k-1-j}\D A(t)] \\
& =
-\D \lambda(t) + \left[\lambda(t), e^{[A(t),\argument]}\pi_0
- \frac{\exp([A(t),\argument])-\id}{[A(t),\argument]} \D A(t)\right]
\end{align*}
and therefore $\pi'(t)=\pi(t)$ and the proposition is proven.
\end{proof}
\begin{remark}
\label{rem:QuillenandGaugeEquivalence}
More generally, there are also different notions of homotopy and gauge
equivalence for Maurer-Cartan elements in $L_\infty$-algebras:
for example the above definition, sometimes also called \emph{Quillen homotopy},
the \emph{gauge homotopy} where one requires $\lambda(t) = \lambda$ to be
constant, compare \cite{dolgushev:2007a}, and the \emph{cylinder homotopy}.
In \cite{dotsenko.poncin:2016a}
it is shown that these notions are also equivalent for flat
$L_\infty$-algebras with complete
descending filtration and compatible higher brackets, extending the
result for DGLAs from \cite{manetti:2005a}.
\end{remark}
Now we can finally show that $L_\infty$-morphisms map equivalence classes of
Maurer-Cartan elements to equivalence classes.
\begin{proposition}
\label{prop:FmapsEquivMCtoEquiv}
Let $F \colon (L,Q) \rightarrow (L',Q')$ be an $L_\infty$-morphism between
(curved) $L_\infty$-algebras,
and $\pi_0,\pi_1 \in \Mc^1(L)$ with $[\pi_0]=[\pi_1]$. Then $F$ is compatible
with the homotopy equivalence relation, i.e. one has
$[F^1(\exp(\pi_0))] = [F^1(\exp(\pi_1))]$. In particular, one has an induced
map $F_\MC \colon \Def(L) \rightarrow \Def(L')$.
\end{proposition}
\begin{proof}
Let $\pi(t)$ and $\lambda(t)$ encode the equivalence between $\pi_0$ and
$\pi_1$. We set $\widetilde{\pi}(t) = F^1(\exp(\pi(t)))$ and
$\widetilde{\lambda}(t) = F^1(\lambda(t)\vee \exp(\pi(t)))$. We compute
\begin{align*}
\frac{\D}{\D t} \widetilde{\pi}(t)
& =
F^1\left( \exp(\pi(t))\vee Q^1\left(\lambda(t) \vee \exp(\pi(t))\right)\right) \\
& =
F^1\left(Q(\lambda(t)\vee \exp(\pi(t))) + \lambda(t)\vee \exp(\pi(t)) \vee
Q^1(\exp(\pi(t)))\right) \\
& =
Q'^1 \circ F \left(\lambda(t)\vee \exp(\pi(t))\right) \\
& =
Q'^1\left(F^1\left(\lambda(t)\vee \exp(\pi(t))\right) \vee \exp (F^1(\exp(\pi(t))))\right) \\
& =
Q'^1\left(\widetilde{\lambda}(t) \vee \exp(\widetilde{\pi}(t))\right),
\end{align*}
and thus the desired $[F^1(\exp\pi_0)] = [F^1(\exp\pi_1)]$.
\end{proof}
If one does not want to restrict to $L_\infty$-algebras with complete filtrations, one can tensor general $L_\infty$-algebras with nilpotent algebras, compare
Remark~\ref{rem:nilpotentdgla} for the case of DGLAs. In this setting the
deformations do not form a set but a functor:
For a (curved) $L_\infty$-algebra $L$ the deformation
functor $\Def_L \colon \mathrm{Art}_\mathbb{K} \rightarrow \mathrm{Set}$
maps a local Artinian ring $A$ to the set
\begin{equation*}
\Def_L(A)
=
\Def(L \otimes m_A).
\end{equation*}
Here $m_A$ is the maximal ideal of $A$ and thus $L\otimes m_A$ is a
nilpotent $L_\infty$-algebra and the above is well-defined.
In this case it can be shown that one
even has the following statement, see \cite[Theorem~4.6]{kontsevich:2003a} and
\cite[Theorem~4.12]{canonaco:1999a} and also \cite[Proposition~4]{dolgushev:2005a} for
the filtered setting.
\begin{theorem}
\label{thm:lquisbijondef}
Let $F \colon (L,Q)\rightarrow (L',Q')$ be an $L_\infty$-quasi-isomorphism
of flat $L_\infty$-algebras. Then the map
\begin{equation}
\label{eq:mclinftymorphism}
F_\MC \colon
\pi
\longmapsto
F_\MC(\pi) = \sum_{n>0}\frac{1}{n!} F_n(\pi\vee \cdots \vee\pi)
\end{equation}
induces an isomorphism between the deformation functors
$\Def_L$ and $\Def_{L'}$.
\end{theorem}
\begin{proof}[Sketch]
In Proposition~\ref{prop:StandardFormLinfty} we have shown that
every $L_\infty$-algebra $L$ is isomorphic to the direct product of a minimal one
$L_{min}$, i.e. one with
$Q_1=0$, and a linear contractible one $L_{lc}$, see also
\cite[Proposition~2.8]{canonaco:1999a}.
On linear contractible $L_\infty$-algebras
the deformation functor is trivial, i.e. for all $A$ the set $\Def_{L_{lc}}(A)$
contains just one element: a Maurer-Cartan element $\pi$ is just a closed
element. But since the cohomology is trivial, $\pi$ is exact, so there exists
$\lambda \in L^0_{lc} \otimes m_A$ with $Q_1^1(\lambda) = \pi$, and $\pi(t)
= t \pi$ shows that $\pi$ is homotopy equivalent to zero.
In addition, one easily sees that the deformation functor is compatible
with direct products, i.e.
\begin{equation*}
\Def_{L\oplus L'}
\cong
\Def_L \times \Def_{L'}.
\end{equation*}
Summarizing, this yields
\begin{equation*}
\Def_L
\cong
\Def_{L_{min}}
\cong
\Def_{L'_{min}}
\cong
\Def_{L'}
\end{equation*}
since $L_{min}$ and $L_{min}'$ are $L_\infty$-isomorphic, see also
\cite[Section~4]{canonaco:1999a}.
\end{proof}
\begin{remark}
The above result can be further generalized: in fact, a morphism
$\phi \colon \liealg{g} \rightarrow \liealg{g}'$ of DGLAs induces an isomorphism
of the deformation functors if the induced map in cohomology is bijective in
degree one, injective in degree two and surjective in degree zero, compare
\cite[Theorem~3.1]{manetti:2005a}.
\end{remark}
We are mainly interested in the setting of formal power series, where we directly get
the following statement.
\begin{corollary}
\label{cor:BijFormalMC}
Let $F$ be an $L_\infty$-quasi-isomorphism between two flat DGLAs $\liealg{g}$ and
$\liealg{g}'$. Then it induces a bijection $F_\MC$ between
$\Def(\liealg{g}[[\hbar]])$ and $\Def(\liealg{g}'[[\hbar]])$.
\end{corollary}
\begin{proof}
The above statement follows from Theorem~\ref{thm:lquisbijondef}
since $\mathbb{K}[[\hbar]] =
\lim \mathbb{K}[\hbar] / \hbar^k \mathbb{K}[\hbar]$ is pro-Artinian with the
pronilpotent $\hbar\mathbb{K}[[\hbar]]$ as maximal ideal.
\end{proof}
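In this formal setting the role of the filtration becomes very explicit:
writing a formal Maurer-Cartan element as
$\pi = \sum_{k\geq 1} \hbar^k \pi_k \in \hbar\liealg{g}^1[[\hbar]]$, the
Maurer-Cartan equation $\D\pi + \frac{1}{2}[\pi,\pi] = 0$ decomposes order by
order in $\hbar$ into
\begin{equation*}
\D \pi_1
=
0,
\quad \quad
\D \pi_2 + \frac{1}{2}[\pi_1,\pi_1]
=
0,
\quad \quad
\D \pi_3 + [\pi_1,\pi_2]
=
0,
\quad \dots
\end{equation*}
Thus the obstructions to extending $\pi$ order by order are cohomological,
which indicates why an $L_\infty$-quasi-isomorphism controls the deformation
problem.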
In the curved setting the situation is more complicated
since the proof of Theorem~\ref{thm:lquisbijondef} does not generalize:
if the curvature is not central one does
not even have a differential and there is no obvious notion of
$L_\infty$-quasi-isomorphisms between curved $L_\infty$-algebras. Therefore, we postpone these considerations
until we understand the twisting procedure, which is a way to obtain a flat
$L_\infty$-algebra out of a curved one, see Lemma~\ref{lemma:CorrespCurvedandFlatMC}
below.
\subsection{Twisting of (Curved) $L_\infty$-Algebras}
\label{subsec:Twisting}
Recall that for a Maurer-Cartan
element $\pi$ of a DGLA $(\liealg{g},\D,[\argument{,}\argument])$ the map
$\D + [\pi,\,\cdot\,]$ is a differential on $\liealg{g}$,
the \emph{twisted} differential by $\pi$. This can be generalized
to $L_\infty$-algebras, see e.g. \cite{dolgushev:2005a,
dolgushev:2005b,dotsenko.shadrin.vallette:2018,esposito.dekleijn:2021a}.
\begin{lemma}
\label{lemma:twistCodiff}
Let $(L,Q)$ be a (curved) $L_\infty$-algebra and $\pi \in \mathcal{F}^1 L[1]^0$.
Then the map $Q^\pi$ given by
\begin{equation}
Q^\pi(X)
=
\exp((-\pi)\vee) Q (\exp(\pi \vee)X),
\quad \quad
X \in \Sym(L[1]),
\end{equation}
defines a codifferential on $\Sym (L[1])$. If $\pi$ is in addition a (curved)
Maurer-Cartan element, then $(L,Q^\pi)$ is a flat $L_\infty$-algebra.
\end{lemma}
\begin{proof}
At first we have to show that $Q^\pi$ is a well-defined map into $\Sym(L[1])$
and not only into its completion with respect to the symmetric degree. This
holds since the structure maps $(Q^\pi)^1_n$ are well-defined maps into $L[1]$
by the completeness of the filtration. Moreover, $Q^\pi$ defines a coderivation on
$\Sym (L[1])$ by
\begin{align*}
\Delta_\sh Q^\pi (X)
& =
\Delta_\sh \exp(-\pi \vee)Q (\exp(\pi\vee)X) \\
& =
\exp(-\pi \vee) \otimes \exp(-\pi \vee)
(Q \otimes \id + \id \otimes Q)
(\exp(\pi\vee) \otimes \exp(\pi\vee))(\Delta_\sh X) \\
& =
(Q^\pi \otimes \id + \id \otimes Q^\pi)(\Delta_\sh X).
\end{align*}
The property $(Q^\pi)^2=0$ is clear.
If $\pi$ is in addition a (curved) Maurer-Cartan element, then one obtains
$Q^\pi(1) =\exp(-\pi \vee) Q(\exp(\pi)) = 0$, so the twisted
$L_\infty$-algebra is flat.
\end{proof}
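Unwinding the definition, the structure maps of the twisted codifferential are
given by
\begin{equation*}
(Q^\pi)^1_n(x_1,\dots,x_n)
=
\sum_{k=0}^\infty \frac{1}{k!} Q^1_{n+k}(\pi,\dots,\pi,x_1,\dots,x_n),
\end{equation*}
where the sum is well-defined by the completeness of the filtration. For a
DGLA with Maurer-Cartan element $\pi$ this reproduces, up to the signs coming
from the shift, the twisted differential $\D + [\pi,\argument]$ from the
beginning of this subsection, while the binary bracket remains unchanged.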
The $L_\infty$-algebra $(L,Q^\pi)$ is again called \emph{twisted}
and it turns out that one can also twist the $L_\infty$-morphism,
see \cite[Proposition~1]{dolgushev:2005b} for the flat setting and
\cite[Lemma~2.7]{esposito.dekleijn:2021a} for the curved setting.
\begin{proposition}
\label{prop:twistinglinftymorphisms}
Let $F\colon (L,Q)\rightarrow (L',Q')$ be an
$L_\infty$-morphism between (curved) $L_\infty$-algebras,
$\pi \in \mathcal{F}^1L^1$ and $S = F^1(\cc{\exp}\pi)
\in \mathcal{F}^1 (L')^1$.
\begin{propositionlist}
\item The map
\begin{equation*}
F^\pi
=
\exp(-S\vee) F \exp(\pi\vee) \colon
\Sym(L[1])
\longrightarrow
\Sym(L'[1])
\end{equation*}
defines an $L_\infty$-morphism between the (curved)
$L_\infty$-algebras $(L,Q^\pi)$ and $(L',(Q')^S)$.
\item The structure maps of $F^\pi$ are given by
\begin{equation}
\label{eq:twisteslinftymorphism}
F_n^\pi(x_1,\dots, x_n)
=
\sum_{k=0}^\infty \frac{1}{k!}
F_{n+k}(\pi, \dots, \pi,x_1 , \dots, x_n)
\end{equation}
and $F^\pi$ is called \emph{twisted by $\pi$}.
\item If $\pi$ is a (curved) Maurer-Cartan element, then $F^\pi$ is an
$L_\infty$-morphism between flat $L_\infty$-algebras.
\item Let $F$ be an $L_\infty$-quasi-isomorphism between flat $L_\infty$-algebras
such that $F_1^1$ is not only a quasi-isomorphism of filtered complexes
$L\rightarrow L'$ but even induces a quasi-isomorphism
\begin{equation*}
F_1^1 \colon
\mathcal{F}^k L
\longrightarrow
\mathcal{F}^kL'
\end{equation*}
for each $k$. If $\pi$ is a flat Maurer-Cartan element, then $F^\pi$ is
also an $L_\infty$-quasi-isomorphism.
\end{propositionlist}
\end{proposition}
\begin{proof}
The map $F^\pi$ is well-defined since its structure maps are well-defined maps
into $L'[1]$ by the completeness of the filtration. Moreover,
$F^\pi$ is a coalgebra morphism since
\begin{align*}
\Delta_\sh \exp(-S\vee)F\exp(\pi \vee)X
& =
\exp(-S\vee)\otimes \exp(-S\vee)(F\otimes F)\Delta_\sh\exp(\pi \vee)X \\
& =
(F^\pi \otimes F^\pi) \Delta_\sh (X).
\end{align*}
The compatibility of $F^\pi$ with the coderivations
$Q^\pi$ and $(Q')^S$ follows directly with the definitions.
The third point follows directly from Lemma~\ref{lemma:twistCodiff}.
The last claim follows by a standard spectral sequence argument:
since $F_1^1$ is a quasi-isomorphism with respect to
$Q^1_1$ and $(Q')^1_1$ that is compatible with the filtrations, the map
$F_1^\pi$ induces a quasi-isomorphism on the zeroth level of
the corresponding spectral sequence, and therefore also on the
terminal $E_\infty$-level, compare \cite[Proposition~1]{dolgushev:2005b}.
\end{proof}
It follows directly that the twisting
procedure is functorial in the sense that
\begin{equation}
(G \circ F)^\pi
=
G^S \circ F^\pi
\end{equation}
for an $L_\infty$-morphism $G\colon L' \rightarrow L''$,
see \cite[Proposition~4]{dolgushev:2006a}, as well as
\begin{equation*}
(Q^\pi)^B
=
Q^{\pi + B},
\quad \quad
(F^\pi)^B
=
F^{\pi + B}.
\end{equation*}
Now we can come back to the correspondences of curved Maurer-Cartan elements
under $L_\infty$-morphisms. Let $(L,Q)$ be a curved $L_\infty$-algebra with
curved Maurer-Cartan element $m \in \mathcal{F}^1L^1$. Then we know
from Lemma~\ref{lemma:twistCodiff} that the twisted codifferential
$Q^{m}$ is flat and we get the following, compare \cite[Proposition~4.6]{maunder:2017a}
for the case of curved Lie algebras.
\begin{lemma}
\label{lemma:CorrespCurvedandFlatMC}
Let $(L,Q)$ be a curved $L_\infty$-algebra with
curved Maurer-Cartan element $m \in \mathcal{F}^1L^1$.
Then the curved Maurer-Cartan elements in $(L,Q)$ are in one-to-one
correspondence with flat Maurer-Cartan elements in $(L,Q^m)$ via
$\pi \mapsto \pi-m$. The
correspondence is compatible with equivalences.
\end{lemma}
\begin{proof}
We have for $\pi \in \mathcal{F}^1L[1]^0$
\begin{equation*}
Q^1(\exp \pi)
=
Q^1(\exp(\pi) \vee \exp (m )\vee \exp(-m))
=
(Q^m)^1(\exp(\pi-m)),
\end{equation*}
so the first part follows.
Suppose that $\pi_0$ and $\pi_1$ are equivalent, i.e. there
exists $\pi(t)$ with $\pi(0)=\pi_0$, $\pi(1)=\pi_1$ and
\begin{equation*}
\frac{\D}{\D t} \pi(t)
=
Q^{1} (\lambda(t) \vee \exp(\pi(t))).
\end{equation*}
Then $\pi'(t) = \pi(t) - m$ induces the
equivalence between $\pi_0 - m$ and $\pi_1 - m$ in the
flat setting since
\begin{align*}
\frac{\D}{\D t} \pi'(t)
=
\frac{\D}{\D t} \pi(t)
=
Q^1 (\exp(m)\vee\lambda(t) \vee \exp(\pi(t)-m))
=
(Q^m)^1 (\lambda(t) \vee \exp(\pi'(t))),
\end{align*}
and the statement is shown.
\end{proof}
This directly implies for the equivalence classes of curved
Maurer-Cartan elements:
\begin{corollary}
\label{cor:CurvdMCEquivBijection}
Let $F \colon (L,Q) \rightarrow (L',Q')$ be an
$L_\infty$-morphism between two curved $L_\infty$-algebras
with complete filtrations.
If $L$ has a curved Maurer-Cartan
element $m \in \mathcal{F}^1L^1$ such that $F^{m}$ induces a
bijection on the equivalence classes of flat Maurer-Cartan elements, then
$F$ induces a bijection $F_\MC$ on the equivalence classes of curved
Maurer-Cartan elements.
\end{corollary}
\begin{proof}
We know that $F$ maps $m$ to a curved Maurer-Cartan element $m' =
F_\MC(m)= F^1(\cc{\exp}m) \in \mathcal{F}^1 L'^1$ and by Lemma~\ref{lemma:CorrespCurvedandFlatMC} we know
that
\begin{equation*}
\Def(L,Q)
\cong
\Def(L,Q^m),
\quad \quad
\Def(L',Q')
\cong
\Def(L',(Q')^{m'}).
\end{equation*}
But by assumption we have
\begin{equation*}
\Def(L,Q^m)
\cong
\Def(L',(Q')^{m'})
\end{equation*}
and the statement is clear.
\end{proof}
\section{Homotopic $L_\infty$-morphisms}
\label{sec:HomotopyTheoryLinftyAlgandMorph}
One of our aims is to investigate the relation between twisted $L_\infty$-morphisms.
More explicitly, let
$F \colon (\liealg{g},Q) \rightarrow (\liealg{g}',Q')$ be an $L_\infty$-morphism
between DGLAs and let $\pi \in \mathcal{F}^1\liealg{g}^1$ be a
Maurer-Cartan element equivalent to zero. Then we want to take a look at
the relation between $F^{\pi}$ and $F$.
To this end we need to recall the definition
of homotopic $L_\infty$-morphisms, which also has consequences
for the homotopy classification of $L_\infty$-algebras.
At first, let us recall from
Corollary~\ref{cor:coderivationcogeneratorscocom} that an $L_\infty$-structure
$Q$ on the vector space $L$ is equivalent to a Maurer-Cartan element in the DGLA
$(\Hom_\mathbb{K}^\bullet(\Sym(L[1]),L[1]),0,[\argument{,}\argument]_{NR})$.
Analogously, we show now that we
can also interpret $L_\infty$-morphisms as Maurer-Cartan elements in
a convolution $L_\infty$-algebra.
\subsection{$L_\infty$-morphisms as Maurer-Cartan Elements}
\label{sec:LinftyMorphasMC}
Let $(L,Q),(L',Q')$ be two flat
$L_\infty$-algebras and consider the space $\Hom(\cc{\Sym}(L[1]),L')$
of graded linear maps. If $L$ and $L'$ are equipped with complete descending
filtrations, then we require the maps to be compatible with the filtrations.
We can interpret elements $F^1,G^1\in \Hom(\cc{\Sym}(L[1]),L')[1]$ as maps in
$\Hom(\cc{\Sym}(L[1]),\cc{\Sym}(L'[1]))$, where we have a convolution product $\star$,
compare Definition~\ref{def:ConvProduct}. For example, one has
\begin{equation*}
F^1 \star G^1
=
\vee \circ (F^1 \otimes G^1) \circ \cc{\Delta_\sh}
\colon
\cc{\Sym}(L[1])
\longrightarrow
\Sym^2(L'[1]),
\end{equation*}
and since $\cc{\Delta_\sh}$ and $\vee$ are (co-)commutative, one directly sees that
$\star$ is graded commutative. We can use this and the
$L_\infty$-structures on $L$ and $L'$ to define an $L_\infty$-structure on this
vector space of maps, see \cite[Proposition~1 and Proposition~2]{dolgushev:2007a}
and also \cite{bursztyn.dolgushev.waldmann:2012a} for the case of DGLAs.
\begin{proposition}
\label{prop:ConvLinftyStructure}
The coalgebra $\cc{\Sym}(\Hom(\cc{\Sym}(L[1]),L')[1])$ can be equipped with
a codifferential $\widehat{Q}$ with structure maps
\begin{equation}
\label{eq:DiffonConvLinfty}
\widehat{Q}^1_1 F
=
Q'^1_1 \circ F - (-1)^{\abs{F}} F \circ Q
\end{equation}
and
\begin{equation}
\label{eq:BracketonConvLinfty}
\widehat{Q}^1_n(F_1\vee \cdots \vee F_n)
=
(Q')^1_n \circ
(F_1\star F_2\star \cdots \star F_n).
\end{equation}
It is called \emph{convolution $L_\infty$-algebra} and its
Maurer-Cartan elements can be identified with $L_\infty$-morphisms.
Here $\abs{F}$ denotes the degree in
$\Hom(\cc{\Sym}(L[1]),L')[1]$.
\end{proposition}
\begin{proof}
The fact that this yields a well-defined $L_\infty$-structure follows
directly from the fact that $L$ and $L'$ are $L_\infty$-algebras, and
in particular from the cocommutativity and coassociativity of $\cc{\Delta_\sh}$.
Now we want to show that the Maurer-Cartan elements are indeed in one-to-one
correspondence with the $L_\infty$-morphisms.
At first we recall that a coalgebra morphism $F$ from
$\cc{\Sym}(L[1])$ into $\cc{\Sym}(L'[1])$ is
uniquely determined by its projection $F^1$ to $L'$ via $F=\exp_\star(F^1)$, compare
Theorem~\ref{thm:CofreeCocomConilpotentCoalg}. Consequently, we can
identify it with a degree one element in
$\Hom(\cc{\Sym}(L[1]),L')$. It remains to show that the
Maurer-Cartan equation is equivalent to the fact that $F$ commutes with the
codifferentials. But again by Theorem~\ref{thm:CofreeCocomConilpotentCoalg} one
sees that $Q' F = F Q$ is equivalent to
$\pr_{L'[1]} (Q'F - FQ) = 0$ which is just the Maurer-Cartan equation for $F^1$.
\end{proof}
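Spelling this out, a degree zero element
$F^1 \in (\Hom(\cc{\Sym}(L[1]),L')[1])^0$ is a Maurer-Cartan element if and
only if
\begin{equation*}
0
=
Q'^1_1 \circ F^1 - F^1 \circ Q
+ \sum_{n=2}^\infty \frac{1}{n!} (Q')^1_n \circ (F^1)^{\star n},
\end{equation*}
which is precisely the condition that the corresponding coalgebra morphism
$F = \exp_\star(F^1)$ commutes with the codifferentials.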
\begin{example}[Convolution DGLA]
Let $\liealg{g},\liealg{g}'$ be two DGLAs. Then
$\Hom(\cc{\Sym}(\liealg{g}[1]), \liealg{g}')$ is in fact a DGLA, the
so-called \emph{convolution DGLA} with differential
\begin{equation}
\label{eq:DiffonConvDGLA}
\del F
=
\D' \circ F + (-1)^{\abs{F}} F \circ Q
\end{equation}
and bracket
\begin{equation}
\label{eq:BracketonConvDGLA}
[F,G]
=
- (-1)^{\abs{F}} (Q')^1_2 \circ (F\star G) .
\end{equation}
Here $\abs{F}$ denotes again the degree in
$\Hom(\cc{\Sym}(\liealg{g}[1]),\liealg{g}')[1]$ and the induced codifferential
is also denoted by $\widehat{Q}$.
\end{example}
In order to obtain a notion of equivalent Maurer-Cartan elements we need a
complete filtration. Note that the convolution $L_\infty$-algebra
$\mathcal{H} = \Hom(\cc{\Sym}(L[1]),L')$ is indeed
equipped with the following complete descending filtration:
\begin{align}
\label{eq:FiltrationConvLieAlg}
\begin{split}
\mathcal{H}
& =
\mathcal{F}^1\mathcal{H}
\supset
\mathcal{F}^2\mathcal{H}
\supset \cdots \supset
\mathcal{F}^k\mathcal{H}
\supset \cdots \\
\mathcal{F}^k\mathcal{H}
& =
\left\{ f \in \Hom(\cc{\Sym}(L[1]),L') \mid
f \at{\Sym^{<k}(L[1])}=0\right\}.
\end{split}
\end{align}
Thus all twisting procedures are well-defined and
one can define a notion of homotopic $L_\infty$-morphisms.
\begin{definition}
\label{def:homotopicMorph}
Two $L_\infty$-morphisms $F,F'$ between flat $L_\infty$-algebras
$(L,Q)$ and $(L',Q')$
are called \emph{homotopic} if they are homotopy equivalent Maurer-Cartan
elements in the convolution $L_\infty$-algebra $(\Hom(\cc{\Sym}(L[1]), L'),\widehat{Q})$.
\end{definition}
However, we are mainly interested in $L_\infty$-morphisms between
$L_\infty$-algebras resp. DGLAs with complete filtrations, whence we
introduce a new filtration on the convolution $L_\infty$-algebra
$\mathcal{H} = \Hom(\cc{\Sym}(L[1]),L')$ that takes into
account the filtrations on $\cc{\Sym}(L[1])$ and $L'$:
\begin{align}
\label{eq:FiltrationConvLieAlg2}
\begin{split}
\mathcal{H}
& =
\mathfrak{F}^1\mathcal{H}
\supset
\mathfrak{F}^2\mathcal{H}
\supset \cdots \supset
\mathfrak{F}^k\mathcal{H}
\supset \cdots \\
\mathfrak{F}^k\mathcal{H}
& =
\sum_{n+m=k}
\left\{ f \in \Hom(\cc{\Sym}(L[1]),L') \mid
f \at{\Sym^{<n}(L[1])}=0\quad \text{ and }\quad
f\colon\mathcal{F}^\bullet \rightarrow
\mathcal{F}^{\bullet +m}\right\}.
\end{split}
\end{align}
Here the filtration on $\cc{\Sym}(L[1])$ is the product filtration
induced by
\begin{equation*}
\mathcal{F}^k(L[1] \otimes L[1])
=
\sum_{n+m=k} \image\left( \mathcal{F}^nL[1] \otimes
\mathcal{F}^mL[1] \rightarrow L[1]\otimes L[1]\right),
\end{equation*}
see e.g. \cite[Section~1]{dotsenko.shadrin.vallette:2018}.
\begin{proposition}
\label{prop:CompleteFiltrationConvLinfty}
The above filtration \eqref{eq:FiltrationConvLieAlg2} is a complete descending
filtration on the convolution $L_\infty$-algebra
$\Hom(\cc{\Sym}(L[1]),L')$.
\end{proposition}
\begin{proof}
The filtration is obviously descending, and
$\mathcal{H}=\mathfrak{F}^1\mathcal{H}$ holds since in the convolution
$L_\infty$-algebra we only consider maps that are compatible with the
filtrations. The filtration is compatible with the convolution
$L_\infty$-algebra structure, and it is complete since $L'$ is complete.
\end{proof}
Recall that we introduced in Remark~\ref{rem:curvedMorphisms} the definition of
curved morphisms between curved Lie algebras from \cite[Definition~4.3]{maunder:2017a}.
There exists
a similar generalization of curved morphisms to curved $A_\infty$-algebras
\cite{positselski:2018a}. Considering now the convolution $L_\infty$-algebra of two
curved $L_\infty$-algebras, we directly obtain the analogous generalization of
curved morphisms in the $L_\infty$-setting:
\begin{remark}[Curved convolution $L_\infty$-algebra]
\label{rem:CurvedMorphLinfty}
Let us consider now two curved $L_\infty$-algebras $(L,Q)$ and $(L',Q')$.
Here we use the counital coaugmented coalgebra
$\Sym(\Hom(\Sym(L[1]),L')[1])$ with coproduct $\Delta_\sh$ and codifferential
$\widehat{Q}$ with Taylor components \eqref{eq:DiffonConvLinfty} and
\eqref{eq:BracketonConvLinfty} as in the flat case plus curvature component
\begin{equation*}
\widehat{Q}_0^1
=
(1
\longmapsto
Q'_0(1) )
\in
(\Hom(\mathbb{K},L')[1])^1.
\end{equation*}
This gives indeed a \emph{curved convolution $L_\infty$-algebra} and we want
to interpret its Maurer-Cartan elements. We restrict ourselves to the filtered setting
where we set $\mathcal{F}^0\mathbb{K}=\mathbb{K}$, and where we
assume $Q'_0\in \mathcal{F}^1L'$. Then Maurer-Cartan elements
$F \in \mathfrak{F}^1(\Hom(\Sym(L[1]),L')[1])^0$ with the
filtration from \eqref{eq:FiltrationConvLieAlg2} are given by
Taylor components $F_n^1 \colon \Sym^n (L[1])
\rightarrow L'[1]$ for $n\geq 0$. The only difference from the flat setting is
the zero component $F_0^1(1) = \alpha \in \mathcal{F}^1L'^1$.
Then the Maurer-Cartan equation implies
\begin{align}
\label{eq:curvedmorphLinfty}
\tag{$*$}
0
=
\widehat{Q}_0^1 + Q'^1_1 \circ F^1 - F^1\circ Q +
\sum_{n=2}^\infty \frac{1}{n!} Q'^1_n \circ (F^1)^{\star n}.
\end{align}
If $F^1_0(1)=0$, the evaluation at $1$ yields $Q'_0(1) = F^1_1(Q_0(1))$ and thus
$F$ is an $L_\infty$-morphism of curved $L_\infty$-algebras
in the usual sense, i.e.\ a coalgebra morphism commuting with the codifferentials. But for $F_0^1(1)=\alpha\neq 0$ we no longer get an induced coalgebra morphism since we no longer have $F(1)=1$.
In analogy to \cite[Definition~4.3]{maunder:2017a} we call such a general $F$
\emph{curved morphism of $L_\infty$-algebras} and for $\alpha=0$
the morphism $F$ is called \emph{strict}. Note that in
\cite{getzler:2018a} those curved $L_\infty$-morphisms are just called
$L_\infty$-morphism.
We will study the curved setting in more detail in
Section~\ref{sec:HomTheoryofCurvedLinfty}, where we show that
curved $L_\infty$-morphisms still satisfy some nice properties; in
particular they are compatible with Maurer-Cartan elements,
see e.g. Proposition~\ref{prop:ProvertiesofCurvedLinftyMorph}.
\end{remark}
For now we restrict ourselves to the simpler
flat case. Therefore, from now on, unless stated otherwise, all DGLAs and
$L_\infty$-algebras are assumed to be flat. We collect a few immediate consequences about
homotopic $L_\infty$-morphisms, see e.g. \cite[Proposition~1.4.6]{kraft:2021a}:
\begin{proposition}
\label{prop:PropertiesofHomotopicMorphisms}
Let $F,F'$ be two homotopic $L_\infty$-morphisms
between the flat $L_\infty$-algebras $(L,Q)$ and $(L',Q')$.
\begin{propositionlist}
\item $F_1^1$ and $(F')_1^1$ are chain homotopic.
\item If $F$ is an $L_\infty$-quasi-isomorphism, then so is $F'$.
\item $F$ and $F'$ induce the same maps from $\Def(L)$ to
$\Def(L')$, i.e. $F_\MC = F'_\MC $.
\item In the case of DGLAs $\liealg{g},\liealg{g}'$, compositions
of homotopic $L_\infty$-morphisms with a
DGLA morphism of degree zero are again homotopic.
\end{propositionlist}
\end{proposition}
\begin{proof}
Concerning the first two points, let $F^1(t)$ and $\lambda^1(t)$ be the
paths encoding the homotopy equivalence, i.e.
\begin{equation}
\label{eq:HomEquFFprime}
\tag{$*$}
\frac{\D}{\D t} F^1(t)
=
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t)))
\end{equation}
with $F^1(0)=F^1$ and $F^1(1)=(F')^1$.
In particular, this implies $\frac{\D}{\D t} F_1^1(t)
= (Q')^1_1 \circ \lambda_1^1(t) + \lambda_1^1(t) \circ Q^1_1$ which gives
the statement with $F_1^1(0)=F_1^1$.
For DGLAs the third point is proven in \cite[Lemma~B.5]{bursztyn.dolgushev.waldmann:2012a}.
In our general setting we consider a
Maurer-Cartan element $\pi \in \mathcal{F}^1L^1$ and recall that
$\cc{\exp}(\pi)=\sum_{k=1}^\infty \frac{1}{k!} \pi^{\vee k}$
satisfies $\cc{\Delta_\sh}\cc{\exp}(\pi)
= \cc{\exp}(\pi) \otimes \cc{\exp}(\pi)$ and
$ Q \cc{\exp}(\pi) = 0$ by Lemma~\ref{lemma:twistCodiff}.
Applying now \eqref{eq:HomEquFFprime} on $\cc{\exp}\pi$ gives
\begin{align*}
\frac{\D}{\D t} F^1(t)(\cc{\exp}\pi)
& =
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t)))(\cc{\exp}\pi) \\
& =
(Q')^1\left(\lambda^1(t)(\cc{\exp}\pi) \vee \exp(F^1(t)(\cc{\exp}\pi))\right),
\end{align*}
i.e. $\pi(t) = F^1(t)(\cc{\exp}\pi) $ and $\lambda(t) = \lambda^1(t)(\cc{\exp}\pi) $
encode the homotopy equivalence between $F_\MC(\pi)= F^1(\cc{\exp}\pi) $
and $F'_\MC(\pi)=(F')^1(\cc{\exp}\pi) $.
The last point follows directly
since DGLA morphisms commute with brackets and differentials.
\end{proof}
We want to generalize the last point to compositions with
$L_\infty$-morphisms. Since we could not find a reference we prove the
statements in detail. We start with the post-composition as in
\cite[Proposition~3.5]{kraft.schnitzer:2021a:pre}.
\begin{proposition}
\label{prop:CompofHomotopicHomotopic}
Let $F_0,F_1$ be two homotopic $L_\infty$-morphisms
from $(L,Q)$ to $(L',Q')$. Let
$H$ be an $L_\infty$-morphism from $(L',Q')$ to
$(L'',Q'')$, then $HF_0 \sim HF_1$.
\end{proposition}
\begin{proof}
For $F^1\in \Hom(\cc{\Sym}(L[1]),L')$ we write $ H^1F = H^1 \circ \exp_\star(F^1)$,
where $\star$ denotes again the convolution product with respect to $\vee$ and
$\Delta_\sh$ resp. $\cc{\Delta_\sh}$.
Let us denote by $F^1(t) \in \widehat{(\Hom(\cc{\Sym}
(L[1]),L')[1])^0[t]}$ and $\lambda^1(t) \in
\widehat{(\Hom(\cc{\Sym}(L[1]),L')[1])^{-1}[t]}$
the paths encoding the homotopy equivalence between $F_0$ and $F_1$. Then
$H^1F(t) \in
\widehat{(\Hom(\cc{\Sym}(L[1]),L'')[1])^{0}[t]}$
satisfies
\begin{align*}
\frac{\D}{\D t} H^1 F(t)
& =
H^1 \circ
\left( \widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t))) \star \exp_\star(F^1(t))\right) .
\end{align*}
Since $F^1(t)$ is a path of Maurer-Cartan elements, we get
\begin{align*}
\frac{\D}{\D t} H^1F(t)
& =
H^1 \circ Q' \circ
(\lambda^1(t) \star \exp_\star(F^1(t)))
+
H^1\circ(\lambda^1(t) \star \exp_\star(F^1(t))) \circ Q \\
& =
(Q'')^1 \circ H \circ
(\lambda^1(t) \star \exp_\star(F^1(t)))
+
H^1\circ(\lambda^1(t) \star \exp_\star(F^1(t))) \circ Q \\
& =
(\widehat{Q}')^1_1 \left( H^1 \circ
(\lambda^1(t) \star \exp_\star(F^1(t)))\right)
+ \sum_{\ell=2}^\infty (Q'')^1_\ell \circ H^\ell \circ
(\lambda^1(t) \star \exp_\star(F^1(t))).
\end{align*}
Finally, we know from Theorem~\ref{thm:CofreeCocomConilpotentCoalg} that
$\lambda^1(t) \star \exp_\star(F^1(t))$ is a coderivation along $F(t)$ and we get
for the second term
\begin{align*}
H^\ell \circ (\lambda^1(t) \star \exp_\star(F^1(t)))
& =
\left((H^1\circ(\lambda^1(t)\star \exp_\star F^1(t))) \star
\frac{1}{(\ell-1)!}(H^1F(t))^{\star (\ell-1)}\right).
\end{align*}
Summarizing, we have
\begin{align*}
\frac{\D}{\D t} H^1F(t)
=
(\widehat{Q}')^1
\left((H^1\circ(\lambda^1(t)\star \exp_\star F^1(t))) \vee
\exp(H^1F(t))\right)
\end{align*}
and the statement is shown.
\end{proof}
Analogously, we have for the pre-composition \cite[Proposition~3.6]{kraft.schnitzer:2021a:pre}:
\begin{proposition}
\label{prop:preCompofHomotopicHomotopic}
Let $F_0,F_1$ be two homotopic $L_\infty$-morphisms
from $(L,Q)$ to $(L',Q')$. Let
$H$ be an $L_\infty$-morphism from $(L'',Q'')$ to
$(L,Q)$, then $F_0 H\sim F_1H$.
\end{proposition}
\begin{proof}
Let $F^1(t) \in \widehat{(\Hom(\cc{\Sym}
(L[1]),L')[1])^0[t]}$ and $\lambda^1(t) \in
\widehat{(\Hom(\cc{\Sym}(L[1]),L')[1])^{-1}[t]}$
describe the homotopy equivalence between $F_0$ and $F_1$. Then
we consider
\begin{equation*}
F^1(t)H
=
F^1(t) \circ \exp_\star(H^1)
\in
\widehat{(\Hom(\cc{\Sym}
(L''[1]),L')[1])^0[t]}
\end{equation*}
in the notation of the above proposition. We compute
\begin{align*}
\frac{\D}{\D t} (F^1(t)H)
& =
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t))) \circ H \\
& =
(Q')^1_1 \circ \lambda^1 \circ H
+ \lambda^1 \circ Q \circ H
+ \sum_{\ell=2}^\infty\frac{1}{(\ell-1)!}
(Q')^1_\ell \circ (\lambda^1\star F^1\star \cdots \star F^1)
\circ H \\
& =
(Q')^1_1 \circ \lambda^1 \circ H
+ \lambda^1 \circ H\circ Q''
+ \sum_{\ell=2}^\infty\frac{1}{(\ell-1)!}
(Q')^1_\ell \circ (\lambda^1 H\star F^1 H\star \cdots \star F^1 H) \\
& =
\widehat{Q}^1 (\lambda^1(t)H\vee \exp(F^1(t)H))
\end{align*}
since $H$ is a coalgebra morphism intertwining $Q''$ and $Q$ and of
degree zero. Finally, since $\lambda^1(t)H \in
\widehat{(\Hom(\cc{\Sym}(L''[1]),L')[1])^{-1}[t]}$ the
statement follows.
\end{proof}
\begin{corollary}
Let $F_0,F_1$ be two homotopic $L_\infty$-morphisms
from $(L,Q)$ to $(L',Q')$, and let
$H_0,H_1$ be two homotopic $L_\infty$-morphisms from $(L',Q')$ to
$(L'',Q'')$, then $H_0 F_0\sim H_1F_1$.
\end{corollary}
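\begin{proof}
By Proposition~\ref{prop:CompofHomotopicHomotopic} we have
$H_1F_0 \sim H_1F_1$, and by
Proposition~\ref{prop:preCompofHomotopicHomotopic} we have
$H_0F_0 \sim H_1F_0$. Since homotopy equivalence of $L_\infty$-morphisms is
an equivalence relation, this yields $H_0F_0 \sim H_1F_1$.
\end{proof}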
We want to end this section with an example of homotopic $L_\infty$-morphisms:
Let $(\liealg{g},\D,[\argument{,}\argument])$ be a DGLA with complete descending
filtration, and assume that $g\in \mathcal{F}^1\liealg{g}^0$.
\begin{proposition}
If $g$ is closed, then
\begin{equation}
\label{eq:EtoAdh}
e^{[g,\argument]} \colon
(\liealg{g},\D,[\argument{,}\argument])
\longrightarrow
(\liealg{g},\D,[\argument{,}\argument])
\end{equation}
is a DGLA automorphism that maps equivalent Maurer-Cartan elements
to equivalent ones. If $g$ is exact, then it is even homotopic to
the identity $\id_\liealg{g}$.
\end{proposition}
\begin{proof}
If $g$ is closed, then $\D \circ e^{[g,\argument]} =
e^{[g,\argument]}\circ \D$, see e.g. Lemma~\ref{lemma:TwistedDGLAsIsomorphic}.
Let now $\pi \in \mathcal{F}^1\liealg{g}^1$ be a Maurer-Cartan element.
Then we know from the formula for the gauge action from
Proposition~\ref{prop:GaugeactionDGLA} that we have
\begin{equation*}
\exp([g,\argument])\acts \pi
=
e^{[g,\argument]}\pi,
\end{equation*}
i.e. $e^{[g,\argument]}$ maps a Maurer-Cartan element to an equivalent one.
Suppose now that $g = - \D \alpha$ with $\alpha \in
\mathcal{F}^1\liealg{g}^{-1}$. Then we set
\begin{equation*}
\pi(t)
=
e^{[tg,\argument]},
\quad \quad
\lambda(t)
=
e^{[tg,\argument]} \circ [\alpha,\argument].
\end{equation*}
We have $\pi(0)=\id_\liealg{g}$ and $\pi(1)= e^{[g,\argument]}$,
and we want to show that $\pi(t)$ and $\lambda(t)$ encode the homotopy equivalence relation as in Definition~\ref{def:HomEquivalenceofMC}.
Using the formulas for the convolution $L_\infty$-structure, we have to show
\begin{align*}
e^{[tg,\argument]} \circ [g,\argument]
=
\frac{\D}{\D t} \pi(t)
\stackrel{!}{=}
\widehat{Q}^1(\lambda(t)\vee \exp(\pi(t))).
\end{align*}
For the right hand side we compute
\begin{align*}
\widehat{Q}^1(\lambda(t)\vee \exp(\pi(t)))
& =
- \D \circ \lambda(t) - \lambda(t) \circ \D
+ \lambda(t) \circ Q^1_2
+ Q^1_2 \circ (\lambda(t) \star \pi(t)),
\end{align*}
where $Q^1_2(x \vee y) = -(-1)^{\abs{x}} [x,y]$ for $x,y \in \liealg{g}$
with $x \in (\liealg{g}[1])^{\abs{x}}$. For the first two terms we get
\begin{align*}
- \D \circ \lambda(t) - \lambda(t) \circ \D
=
- \D \circ e^{[tg,\argument]} \circ [\alpha,\argument]
- e^{[tg,\argument]} \circ [\alpha,\argument] \circ \D
=
- e^{[tg,\argument]} \circ [\D \alpha,\argument]
=
e^{[tg,\argument]} \circ [g,\argument].
\end{align*}
Thus we only have to show
\begin{equation*}
\lambda(t) \circ Q^1_2
+ Q^1_2 \circ (\lambda(t) \star \pi(t))
=
0.
\end{equation*}
For homogeneous $x \in (\liealg{g}[1])^{\abs{x}},
y\in (\liealg{g}[1])^{\abs{y}}$ we compute
\begin{align*}
\lambda(t) \circ Q^1_2(x\vee y)
& + Q^1_2 \circ (\lambda(t) \star \pi(t))(x \vee y)
=
-(-1)^{\abs{x}} e^{[tg,\argument]} \circ [\alpha,\argument] ([x,y]) \\
& -(-1)^{\abs{x} -1} [ e^{[tg,\argument]} [\alpha,x],
e^{[tg,\argument]}y]
- (-1)^{\abs{x}\abs{y}} (-1)^{\abs{y} -1}
[ e^{[tg,\argument]} [\alpha,y],
e^{[tg,\argument]}x]
=
0,
\end{align*}
and the proposition is shown.
\end{proof}
\subsection{Homotopy Classification of Flat $L_\infty$-algebras}
\label{sec:HomClassLinftyAlgs}
The above considerations allow us to understand the homotopy classification
of flat $L_\infty$-algebras \cite{canonaco:1999a,kontsevich:2003a} in a better way.
\begin{definition}
\label{def:HomEquLinftyAlgs}
Two flat $L_\infty$-algebras $(L,Q)$ and $(L',Q')$ are
said to be
\emph{homotopy equivalent} if there are $L_\infty$-morphisms
$F\colon (L,Q)\to (L',Q')$ and
$G\colon (L',Q')\to (L,Q)$
such that $F\circ G\sim \id_{L'}$ and
$G\circ F\sim \id_{L}$. In that case $F$ and $G$ are said
to be quasi-inverse to each other.
\end{definition}
As in \cite[Lemma~3.8]{kraft.schnitzer:2021a:pre} we can immediately show that this definition indeed coincides with the definition of homotopy equivalence
via $L_\infty$-quasi-isomorphisms from \cite{canonaco:1999a}.
\begin{lemma}
\label{lemma:HomEquvsQuasiIso}
Two flat $L_\infty$-algebras $(L,Q)$ and $(L',Q')$ are
homotopy equivalent if and only if there exists an $L_\infty$-quasi-isomorphism
between them.
\end{lemma}
\begin{proof}
Due to Proposition~\ref{prop:StandardFormLinfty} every $L_\infty$-algebra $L$ is
isomorphic to the product of a linear contractible one and a minimal
one.
This means $L[1]\cong V\oplus W$ as vector spaces, such that
$V$ is an acyclic cochain complex with differential $\D_V$ and $W$
is an $L_\infty$-algebra with codifferential $Q_W$ with
$Q_{W,1}^1=0$. The codifferential $Q$ on $\cc{\Sym}(V\oplus W)$ is given on
$v_1\vee \dots \vee v_m$ with $v_1,\dots, v_k\in V$ and
$v_{k+1},\dots, v_m\in W$ by
\begin{align*}
Q^1(v_1\vee\dots \vee v_m)=
\begin{cases}
-\D_V(v_1), & \text{ for } k=m=1\\
Q_W^1(v_1\vee\dots\vee v_m), & \text{ for } k=0\\
0, & \text{ else. }
\end{cases}
\end{align*}
This implies in particular that the canonical maps
\begin{align*}
i\colon W\longrightarrow V \oplus W \ \text{ and } \
p\colon V\oplus W\longrightarrow W
\end{align*}
are $L_\infty$-quasi-isomorphisms. We want to show now that $i\circ p\sim \id$
and therefore choose a contracting homotopy
$h_V\colon V\to V$ with $h_V\D_V+\D_V h_V=\id_V$ and define the maps
\begin{align*}
P(t)\colon
V\oplus W\ni (v,w)
\longmapsto (tv,w)
\in V\oplus W
\end{align*}
and
\begin{align*}
H(t)
=
H
\colon V\oplus W\ni (v,w)\longmapsto (-h_V(v),0)\in V\oplus W.
\end{align*}
Note that $P(t)$ is a path of $L_\infty$ morphisms because of the
explicit form of the codifferential. We clearly have
\begin{align*}
\frac{\D}{\D t} P^1_1(t)
=
\pr_V
=
Q^1_1 \circ H(t) + H(t)\circ Q^1_1
=
\widehat{Q}^1_1(H(t))
\end{align*}
since $h_V$ is a contracting homotopy. This implies
\begin{equation*}
\frac{\D}{\D t} P^1(t)
=
\widehat{Q}^1(H(t)\vee \exp(P(t)))
\end{equation*}
as $\image (H(t))\subseteq V$ and as the higher brackets $Q$ vanish
on $V$. From
$P(0)=i\circ p$ and $P(1)=\id$ we conclude that
$i\circ p\sim \id$.
We choose a similar splitting for $L'[1]=V'\oplus W'$
with the same
properties and consider an $L_\infty$-quasi-isomorphism
$F\colon L\to L'$. In Theorem~\ref{thm:QuisInverse} we constructed
an $L_\infty$-quasi-inverse $G= i\circ(F_{min})^{-1} \circ p'$.
Since by Proposition~\ref{prop:CompofHomotopicHomotopic} and
Proposition~\ref{prop:preCompofHomotopicHomotopic} compositions of homotopic
$L_\infty$-morphisms with an $L_\infty$-morphism are again homotopic, we get
\begin{align*}
F\circ G&
=
F\circ i\circ (F_{min})^{-1}\circ p'
\sim
i'\circ p'\circ F\circ i\circ (F_{min})^{-1}\circ p'\\&
=i'\circ F_{min}\circ (F_{min})^{-1}\circ p'=i'\circ p'\sim \id
\end{align*}
and similarly $G\circ F \sim \id$.
The other direction follows from Proposition~\ref{prop:PropertiesofHomotopicMorphisms}. Suppose $F\circ G \sim \id$ and
$G\circ F \sim \id$, then we know that $F_1^1\circ G_1^1$ and
$G_1^1\circ F_1^1$ are both chain homotopic to the identity. Therefore,
$F$ and $G$ are $L_\infty$-quasi-isomorphisms.
\end{proof}
\begin{corollary}
Let $F\colon (L,Q)\to(L',Q')$ be an
$L_\infty$-quasi-isomorphism with two given quasi-inverses
$G,G'\colon (L',Q')\to (L,Q)$ in the sense of
Definition~\ref{def:HomEquLinftyAlgs}. Then one has $G\sim G'$.
\end{corollary}
\begin{proof}
One has
\begin{align*}
G\sim G\circ (F\circ G')=(G\circ F)\circ G'\sim G'
\end{align*}
and the statement is shown.
\end{proof}
As a first application, we want to show that the morphisms constructed in
the homotopy transfer theorem \ref{thm:HTTJonas} are natural
with respect to homotopy equivalences:
\begin{corollary}
\label{cor:IPsimId}
In the setting of Theorem \ref{thm:HTTJonas} one has $ P \circ I = \id_A$ and
$I \circ P \sim \id_B$.
\end{corollary}
\begin{proof}
By Lemma~\ref{lemma:HomEquvsQuasiIso} $P$ admits a quasi-inverse $I'$ such that $P \circ I' \sim \id_A$ and $I'\circ P\sim\id_B$,
which implies
\begin{equation*}
I \circ P
=
\id_B \circ I \circ P
\sim
I' \circ P \circ I \circ P
=
I' \circ P
\sim
\id_B,
\end{equation*}
and the statement is shown.
\end{proof}
\subsection{Homotopy Equivalence between Twisted Morphisms}
\label{sec:HomEquivTwistedMorphisms}
Let now $F \colon (\liealg{g},\D,[\argument{,}\argument]) \rightarrow (\liealg{g}',
\D',[\argument{,}\argument])$ be an
$L_\infty$-morphism between (flat) DGLAs with complete descending and
exhaustive filtrations.
Instead of comparing the twisted morphisms $F^\pi$ and
$F^{\pi'}$ with respect to two equivalent Maurer-Cartan elements
$\pi$ and $\pi'$, we consider
for simplicity just a Maurer-Cartan element $\pi \in \mathcal{F}^1\liealg{g}^1$
equivalent to zero via
$\pi = \exp([g,\argument])\acts 0$, i.e. $\lambda(t)=g = \dot{A}(t)\in
\mathcal{F}^1\widehat{\liealg{g}^0[t]}$.
Then we know that $0$ and $S =F_\MC(\pi)= F^1(\cc{\exp}(\pi))\in \mathcal{F}^1(\liealg{g}')^1$
are equivalent
Maurer-Cartan elements in $(\liealg{g}',\D')$. Let the equivalence
be implemented by an $A'(t)\in\mathcal{F}^1\widehat{(\liealg{g}')^0[t]}$ as in Proposition~\ref{prop:HomEquvsGaugeEqu}.
Then we have the diagram of $L_\infty$-morphisms between (flat) DGLAs
\begin{equation}
\label{eq:TwistingofMorph}
\begin{tikzcd}
& (\liealg{g}',\D') \arrow[rd, "e^{[A'(1),\argument]}", bend left=12] \arrow[dd, Rightarrow, shorten >=12pt, shorten <=12pt] & \\
(\liealg{g},\D) \arrow[ur,"F",bend left=12] \arrow[dr,swap, "e^{[A(1),\argument]}",bend right=12] & &
(\liealg{g}',\D' + [S,\argument])\\
&(\liealg{g},\D + [\pi,\argument]) \arrow[ur, swap,"F^\pi", bend right=12] &
\end{tikzcd}
\end{equation}
where $e^{[A(1),\argument]}$ and $e^{[A'(1),\argument]}$ are well-defined by
the completeness of the filtrations. Following \cite[Proposition~3.10]{kraft.schnitzer:2021a:pre},
we show that it commutes
up to homotopy, which is indicated by the vertical arrow.
\begin{proposition}
\label{prop:TwistMorphHomEqu}
The $L_\infty$-morphisms $F$ and $e^{[-A'(1),\argument]}\circ F^{\pi} \circ
e^{[A(1),\argument]}$ are homotopic, i.e.
homotopy equivalent Maurer-Cartan elements
in $(\Hom(\cc{\Sym}(\liealg{g}[1]),\liealg{g}'),\widehat{Q})$.
\end{proposition}
The candidate for the path between $F$ and
$e^{[-A'(1),\argument]}\circ F^{\pi} \circ
e^{[A(1),\argument]}$ is
\begin{equation*}
F(t)
=
e^{[-A'(t),\argument]}\circ F^{\pi(t)} \circ
e^{[A(t),\argument]}.
\end{equation*}
However, $F(t)$ is not necessarily in the completion
$\widehat{\Hom(\cc{\Sym}(\liealg{g}[1]),\liealg{g}')^1[t]}$
with respect to the
filtration from \eqref{eq:FiltrationConvLieAlg} since for example
\begin{align*}
F(t) \;\,\mathrm{mod}\;\, \mathcal{F}^2\Hom(\cc{\Sym}(\liealg{g}[1]),\liealg{g}')[[t]]
=
e^{[-A'(t),\argument]}\circ F^{\pi(t)}_1 \circ
e^{[A(t),\argument]}
\end{align*}
is in general not polynomial in $t$. But using the filtration from
\eqref{eq:FiltrationConvLieAlg2} we can prove
Proposition~\ref{prop:TwistMorphHomEqu}.
\begin{proof}[Proof of Proposition~\ref{prop:TwistMorphHomEqu}]
The path $F(t) = e^{[-A'(t),\argument]}\circ F^{\pi(t)} \circ
e^{[A(t),\argument]}$ is an element in the completion
$\widehat{(\Hom(\cc{\Sym}(\liealg{g}[1]),\liealg{g}')[1])^{0}[t]}$
with respect to the filtration from \eqref{eq:FiltrationConvLieAlg2}.
This is clear since $A(t)\in \mathcal{F}^1 \widehat{\liealg{g}^0[t]}$,
$A'(t)\in \mathcal{F}^1 \widehat{(\liealg{g}')^0[t]}$ and
$\pi(t)\in \mathcal{F}^1 \widehat{\liealg{g}^1[t]}$ imply that
\begin{align*}
\sum_{i=1}^{n-1}
e^{[-A'(t),\argument]}\circ F^{\pi(t)}_i \circ
e^{[A(t),\argument]}
\;\,\mathrm{mod}\;\, \mathfrak{F}^n(\Hom(\cc{\Sym}(\liealg{g}[1]),\liealg{g}')[1])[[t]]
\end{align*}
is polynomial in $t$. Moreover, $F(t)$ satisfies by \eqref{eq:ODEforA}
\begin{align*}
\frac{\D F(t)}{\D t}
& =
-e^{[-A'(t),\argument]} \circ
\left[\lambda'(t) ,
\argument \right]
\circ F^{\pi(t)} \circ
e^{[A(t),\argument]}
+
e^{[-A'(t),\argument]}\circ F^{\pi(t)} \circ
\left[\lambda(t),
\argument \right] \circ
e^{[A(t),\argument]} \\
& \quad
+ e^{[-A'(t),\argument]}\circ \frac{\D F^{\pi(t)}}{\D t} \circ
e^{[A(t),\argument]} .
\end{align*}
But we have
\begin{align*}
\frac{\D F^{\pi(t)}_k}{\D t}&(X_1 \vee \cdots \vee X_k)
=
F_{k+1}^{\pi(t)}(Q^{\pi(t),1}_1(\lambda(t)) \vee X_1 \vee \cdots
\vee X_k) \\
& =
F_{k+1}^{\pi(t)}(Q^{\pi(t),k+1}_{k+1}(\lambda(t)\vee X_1\vee \cdots\vee X_k))
+
F_{k+1}^{\pi(t)}(\lambda(t) \vee Q^{\pi(t),k}_k(X_1 \vee \cdots \vee X_k))\\
& =
Q^{S(t),1}_1 F_{k+1}^{\pi(t),1}(\lambda(t)\vee X_1 \vee \cdots \vee X_k) +
Q^{S(t),1}_2 F_{k+1}^{\pi(t),2}(\lambda(t)\vee X_1 \vee \cdots \vee X_k) \\
& \quad
- F_{k}^{\pi(t),1}\circ Q^{\pi(t),k}_{k+1}
(\lambda(t)\vee X_1 \vee \cdots \vee X_k)
+
F_{k+1}^{\pi(t)}(\lambda(t) \vee Q^{\pi(t),k}_k(X_1 \vee \cdots \vee X_k)).
\end{align*}
Setting now $\lambda_k^F(t)(\cdots)
= F_{k+1}^{\pi(t)}( \lambda(t)\vee \cdots)$ we get
\begin{align*}
\frac{\D F^{\pi(t)}_k}{\D t}
=
\widehat{Q}^{t,1}_1 (\lambda^F(t)) +
\widehat{Q}^{t,1}_2(\lambda^F(t)\vee F^{\pi(t)}) -
F^{\pi(t)}_k\circ[\lambda(t),\argument] +[\lambda'(t),\argument] \circ
F^{\pi(t)}_k.
\end{align*}
Thus we get
\begin{align*}
\frac{\D F(t)}{\D t}
& =
e^{[-A'(t),\argument]}\circ \left(\widehat{Q}^{t,1}_1 (\lambda^F(t)) +
\widehat{Q}^{t,1}_2(\lambda^F(t)\vee F^{\pi(t)})\right) \circ
e^{[A(t),\argument]} \\
& =
\widehat{Q}^{1}_1 (e^{[-A'(t),\argument]}\lambda^F(t)e^{[A(t),\argument]}) +
\widehat{Q}^{1}_2(e^{[-A'(t),\argument]}\lambda^F(t)e^{[A(t),\argument]}\vee
F(t))
\end{align*}
since the $\exp([A(t),\argument])$ and
$\exp([A'(t),\argument])$ commute with the brackets and intertwine the
differentials. Thus $F(0) = F$ and $F(1)$ are homotopy equivalent.
\end{proof}
\begin{remark}[Application to Deformation Quantization]
This result allowed us in \cite{kraft.schnitzer:2021a:pre} to prove that
Dolgushev's globalizations \cite{dolgushev:2005a,
dolgushev:2005b} of the Kontsevich formality \cite{kontsevich:2003a} with
respect to different covariant derivatives are homotopic.
\end{remark}
Now we want to generalize the results from the above section to twisted morphisms
between general $L_\infty$-algebras. As a first step, we have to generalize
Lemma~\ref{lemma:TwistedDGLAsIsomorphic} and Corollary~\ref{cor:GaugeEquivMCTwistsQuis},
i.e. we have to show that $L_\infty$-algebras that are twisted with equivalent
Maurer-Cartan elements are $L_\infty$-isomorphic.
\begin{lemma}
\label{lem:MorphPhitTwistedLinftyAlgs}
Let $(L,Q)$ be a flat $L_\infty$-algebra with complete descending filtration, and let
$\pi(t)$ and $\lambda(t)$ encode a homotopy equivalence between two Maurer-Cartan
elements as in Definition~\ref{def:HomEquivalenceofMC}. For $a \in \Sym^i(L[1])$ with
$i \geq 0$ the recursively defined system of differential equations
\begin{align}
\label{eq:ODEforPhit}
\begin{split}
\frac{\D}{\D t} (\Phi_t)^1_i(a)
& =
\sum_{k=1}^i\left(
[Q^{\pi(t)}, \lambda(t)\vee \argument] - Q^{\pi(t)}(\lambda(t))\vee \argument
\right)^1_k (\Phi_t)^k_i(a) \\
& =
\sum_{k=1}^i (Q^{\pi(t)})^1_{k+1}(\lambda(t)\vee (\Phi_t)^k_i(a)) ,
\quad \quad
(\Phi_0)^1_i(a)
=
\pr_{L[1]}(a)
\end{split}
\end{align}
has unique solutions $(\Phi_t)^1_i \colon \Sym^i(L[1]) \rightarrow \widehat{L[1][t]}$,
where $(\Phi_t)^k_i(a)$ indeed depends only on
$(\Phi_t)^1_j$ for $j\leq i-k+1$, as for $L_\infty$-morphisms. In fact, one has
$\Phi_t^1 \in \widehat{\Hom (\cc{\Sym}(L[1]),L)^1[t]}$.
\end{lemma}
\begin{proof}
The right hand side of the differential equation \eqref{eq:ODEforPhit} depends only on
$(\Phi_t)^1_j$ with $j\leq i$. Thus it has a unique solution
$(\Phi_t)^1_i(a) \in \widehat{L[t]}$ since
$[Q^{\pi(t)}, \lambda(t)\vee \argument]- Q^{\pi(t)}(\lambda(t))\vee \argument$
increases the filtration and since $\pi(t),\lambda(t)\in \mathcal{F}^1\widehat{L[t]}$.
Similarly, we see that $\Phi_t^1 \in \widehat{\Hom (\cc{\Sym}(L[1]),L)^1[t]}$:
With the filtration $\mathfrak{F}^\bullet$ of the convolution algebra
from \eqref{eq:FiltrationConvLieAlg2} we have
\begin{align*}
\frac{\D}{\D t} (\Phi_t)^1_i
&\hspace{-0.1cm} \equiv
\sum_{\ell=0}^\infty \frac{1}{\ell !}\sum_{k=1}^i Q^1_{k+\ell +1}(
\pi(t)^{\vee \ell}\vee\lambda(t) \vee (\Phi_t)^k_i(\argument))
\;\,\mathrm{mod}\;\, \mathfrak{F}^n \\
& \equiv
\sum_{\ell=0}^{n-1} \frac{1}{\ell !}\sum_{k=1}^i Q^1_{k+\ell +1}(
(\pi(t)\;\,\mathrm{mod}\;\, \mathcal{F}^{n-1})^{\vee \ell}\vee
( \lambda(t)\;\,\mathrm{mod}\;\, \mathcal{F}^{n}) \vee (\Phi_t)^k_i \;\,\mathrm{mod}\;\,
\mathfrak{F}^{n-1} ) \;\,\mathrm{mod}\;\, \mathfrak{F}^n
\end{align*}
and thus $(\Phi_t)^1_i \;\,\mathrm{mod}\;\, \mathfrak{F}^n \in L[t]$ by induction on $i$ and $n$.
\end{proof}
Thus $\Phi_t^1$ induces for all evaluations of $t$ a coalgebra morphism $\Phi_t$ and
we have
\begin{equation}
\frac{\D}{\D t} \Phi_t(a)
=
\left([Q^{\pi(t)}, \lambda(t)\vee \argument] - Q^{\pi(t)}(\lambda(t))\vee \argument
\right) (\Phi_t)(a),
\quad \quad
\Phi_0(a)
=
a
\end{equation}
since $\frac{\D}{\D t}$ and $([Q^{\pi(t)}, \lambda(t)\vee \argument] -
Q^{\pi(t)}(\lambda(t))\vee \argument)$ are coderivations with respect to
$\Delta_\sh$ vanishing on $1$, i.e. also with respect to $\cc{\Delta_\sh}$.
But one can even show that $\Phi_t$ is an $L_\infty$-morphism, i.e. compatible with the
codifferentials:
\begin{lemma}
\label{lemma:PhitLinftyMorph}
One has
\begin{equation}
\Phi_t \circ Q^{\pi_0}
=
Q^{\pi(t)} \circ \Phi_t.
\end{equation}
In particular, $\Phi_1$ induces an $L_\infty$-isomorphism from $(L,Q^{\pi_0})$
to $(L,Q^{\pi_1})$.
\end{lemma}
\begin{proof}
We compute for $a \in \Sym(L[1])$
\begin{align*}
\frac{\D}{\D t} ( Q^{\pi(t)} \circ \Phi_t(a))
& =
\left[Q^{\pi(t)}, \frac{\D}{\D t}\pi(t) \vee \argument\right] \Phi_t(a)
+
Q^{\pi(t)} \circ \left([Q^{\pi(t)}, \lambda(t)\vee \argument] -
Q^{\pi(t)}(\lambda(t))\vee \argument\right) \circ \Phi_t(a) \\
& =
\left([Q^{\pi(t)}, \lambda(t)\vee \argument] -
Q^{\pi(t)}(\lambda(t))\vee \argument\right)\circ Q^{\pi(t)} \circ \Phi_t(a).
\end{align*}
Thus $Q^{\pi(t)} \circ \Phi_t(a)$ equals $Q^{\pi_0}(a)$ at $t=0$ and satisfies
the same differential equation \eqref{eq:ODEforPhit} as $\Phi_t \circ Q^{\pi_0}(a)$.
Since the solution is unique,
it follows that $\Phi_t \circ Q^{\pi_0}(a) = Q^{\pi(t)} \circ \Phi_t(a)$.
In order to show that $\Phi_1$ is an $L_\infty$-isomorphism it suffices by
Proposition~\ref{prop:Linftyiso} to show that $(\Phi_1)^1_1$ is an isomorphism.
But this is clear since $(\Phi_t)^1_1 - \id \equiv 0 \;\,\mathrm{mod}\;\, \mathfrak{F}^2$ and
the filtration is complete. Moreover, by the construction of the
$L_\infty$-inverse we even have $((\Phi_t)^{-1})^1 \in
\widehat{\Hom(\cc{\Sym}(L[1]),L)^1[t]}$.
\end{proof}
\begin{example}
If $(L,Q)$ is just a DGLA $(\liealg{g},\D,[\argument{,}\argument])$,
then \eqref{eq:ODEforPhit} simplifies to
\begin{equation*}
\frac{\D}{\D t} (\Phi_t)^1_1(x)
=
[\lambda(t),(\Phi_t)^1_1(x)],
\quad \quad
(\Phi_0)^1_1(x)
=
x
\quad \quad \forall \, x \in \liealg{g}.
\end{equation*}
For $\lambda(t)=g$ this has the solution $(\Phi_t)^1_1(x) = e^{[tg,\argument]}x$ and
$(\Phi_t)^1_n=0$ for $n\neq 1$, i.e. we recover the setting of
Lemma~\ref{lemma:TwistedDGLAsIsomorphic} and
Corollary~\ref{cor:GaugeEquivMCTwistsQuis}.
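Indeed, a quick check using only the DGLA structure confirms that this solves the differential equation:
\begin{align*}
\frac{\D}{\D t} (\Phi_t)^1_1(x)
=
\frac{\D}{\D t}\, e^{[tg,\argument]}x
=
[g, e^{[tg,\argument]}x]
=
[\lambda(t),(\Phi_t)^1_1(x)],
\end{align*}
with $(\Phi_0)^1_1(x)=x$, so the claim follows from the uniqueness of the solution.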
\end{example}
Finally, we can use this $\Phi_t$ in order to generalize
Proposition~\ref{prop:TwistMorphHomEqu} to $L_\infty$-algebras.
\begin{proposition}
\label{prop:TwistedLinftyIsom}
Let $(L,Q)$ be a flat $L_\infty$-algebra with complete descending filtration and let
$\pi\in \mathcal{F}^1L^1$ be a Maurer-Cartan element that is homotopy equivalent
to $0$ via
$\pi(t),\lambda(t)$. Moreover, let $F\colon (L,Q)\rightarrow (L',Q')$ be an
$L_\infty$-morphism. Then the $L_\infty$-morphisms $F$ and $(\Phi_1')^{-1}\circ F^{\pi}
\circ \Phi_1$ are homotopic.
\end{proposition}
\begin{proof}
The candidate for the homotopy is $F(t)=(\Phi'_t)^{-1} \circ F^{\pi(t)} \circ \Phi_t$.
In the proof of Lemma~\ref{lemma:PhitLinftyMorph} we saw that
$((\Phi_t')^{-1})^1 \in \widehat{\Hom(\cc{\Sym}(L'[1]),L')^1[t]}$.
Thus it directly follows that $F^1(t)$ is indeed in the completion
$\widehat{\Hom(\cc{\Sym}(L[1]),L')^1[t]}$. We compute
\begin{align*}
\frac{\D}{\D t} F(t)
& =
- (\Phi'_t)^{-1} \circ \left([Q^{\pi'(t)}, \lambda'(t)\vee \argument]-
Q^{\pi'(t)}(\lambda'(t))\vee \argument \right) \circ F^{\pi(t)} \circ \Phi_t \\
& \quad +
(\Phi'_t)^{-1} \circ F^{\pi(t)} \circ \left([Q^{\pi(t)}, \lambda(t)\vee \argument]-
Q^{\pi(t)}(\lambda(t))\vee \argument \right)\circ \Phi_t
+
(\Phi'_t)^{-1} \circ \frac{\D}{\D t} F^{\pi(t)} \circ \Phi_t.
\end{align*}
With $\frac{\D}{\D t} F^{\pi(t)} = F^{\pi(t)} \circ (\frac{\D}{\D t}\pi(t) \vee \argument)
- (\frac{\D}{\D t}\pi'(t) \vee \argument) \circ F^{\pi(t)}$ we get
\begin{align*}
\frac{\D}{\D t} F(t)
& =
Q' \circ \left((\Phi'_t)^{-1} \circ (- \lambda'(t)\vee\argument) \circ F^{\pi(t)}
\circ \Phi_t + (\Phi'_t)^{-1} \circ F^{\pi(t)}\circ ( \lambda(t)\vee\argument)
\circ \Phi_t \right) \\
& \quad +
\left((\Phi'_t)^{-1} \circ (- \lambda'(t)\vee\argument) \circ F^{\pi(t)}
\circ \Phi_t + (\Phi'_t)^{-1} \circ F^{\pi(t)}\circ ( \lambda(t)\vee\argument)
\circ \Phi_t \right) \circ Q.
\end{align*}
Projecting to $L'[1]$ indeed yields
\begin{equation*}
\frac{\D}{\D t} F^1(t)
=
\widehat{Q}^1(\lambda_F(t) \vee\exp F^1(t))
\end{equation*}
for $\lambda_F(t) = \pr_{L'[1]}\circ ((\Phi'_t)^{-1} \circ (- \lambda'(t)\vee\argument)
\circ F^{\pi(t)}\circ \Phi_t + (\Phi'_t)^{-1} \circ F^{\pi(t)}\circ
( \lambda(t)\vee\argument) \circ \Phi_t )$.
Note that this is true since the term in the bracket is a coderivation with respect to
$\Delta_\sh$ along $F(t)$ that vanishes on $1$, therefore also a coderivation along
$\cc{\Delta_\sh}$. Moreover, $\lambda_F$ is indeed in the completion
$\widehat{\Hom(\cc{\Sym}(L[1]),L')^0[t]}$ since $F^1(t)$ is.
\end{proof}
\subsection{Homotopy Theory of Curved $L_\infty$-Algebras}
\label{sec:HomTheoryofCurvedLinfty}
As mentioned above, we now want to generalize
the above homotopy theory of flat $L_\infty$-algebras to the curved setting.
Therefore, we want to interpret $L_\infty$-morphisms again as Maurer-Cartan
elements, as we did in the flat case in Proposition~\ref{prop:ConvLinftyStructure}.
From Remark~\ref{rem:CurvedMorphLinfty} we recall the
following result:
\begin{proposition}
\label{prop:CurvedConvLinftyStructure}
Let $(L,Q)$ and $(L',Q')$ be (curved) $L_\infty$-algebras with
complete descending filtrations such that one has $Q_0^1 \in
\mathcal{F}^1 L$ and $Q_0'\in \mathcal{F}^1 L'$ for the curvatures.
Then the coalgebra $\Sym(\Hom(\Sym(L[1]),L')[1])$ can be equipped with
a codifferential $\widehat{Q}$ with structure maps
\begin{equation}
\label{eq:CurvonConvLinftyCurved}
\widehat{Q}_0^1
=
(1
\longmapsto
Q'_0(1) )
\in
(\Hom(\mathbb{K},L')[1])^1,
\end{equation}
\begin{equation}
\label{eq:DiffonConvLinftyCurved}
\widehat{Q}^1_1 F
=
Q'^1_1 \circ F - (-1)^{\abs{F}} F \circ Q
\end{equation}
and
\begin{equation}
\label{eq:BracketonConvLinftyCurved}
\widehat{Q}^1_n(F_1\vee \cdots \vee F_n)
=
(Q')^1_n \circ
(F_1\star F_2\star \cdots \star F_n),
\end{equation}
where $\abs{F}$ denotes the degree in
$\Hom(\Sym(L[1]),L')[1]$. Moreover, \eqref{eq:FiltrationConvLieAlg2}
generalizes to a complete descending filtration
$\mathfrak{F} \Hom(\Sym(L[1]),L')[1]$.
The curved $L_\infty$-algebra $(\Hom(\Sym(L[1]),L'), \widehat{Q})$
is called \emph{convolution $L_\infty$-algebra}.
\end{proposition}
\begin{proof}
The fact that this defines an $L_\infty$-structure follows as in
Proposition~\ref{prop:ConvLinftyStructure}, the fact that we
get a complete descending filtration follows as in
Proposition~\ref{prop:CompleteFiltrationConvLinfty}.
\end{proof}
From now on we always assume that our curved $L_\infty$-algebras $(L,Q)$
have a complete descending filtration and that $Q_0^1 \in \mathcal{F}^1 L$.
In this case, the above proposition immediately leads us to the following definition of curved
$L_\infty$-morphisms and their homotopy equivalence relation,
generalizing the observations for curved Lie algebras from
Remark~\ref{rem:curvedMorphisms}:
\begin{definition}[Curved $L_\infty$-morphism]
Let $(L,Q)$ and $(L',Q')$ be (curved) $L_\infty$-algebras with
complete descending filtrations such that one has $Q_0^1 \in
\mathcal{F}^1 L$ and $Q_0'\in \mathcal{F}^1 L'$ for the curvatures.
The Maurer-Cartan elements in $\mathfrak{F}^1\Hom(\Sym(L[1]),L')$ are
called \emph{curved morphisms between $(L,Q)$ and $(L',Q')$} or
\emph{curved $L_\infty$-morphisms}. Two curved $L_\infty$-morphisms
are called \emph{homotopic} if they are homotopy equivalent Maurer-Cartan
elements.
\end{definition}
We write $F^1\colon (L,Q) \rightsquigarrow (L',Q')$ for a curved
$L_\infty$-morphism, and $F^1\sim F'^1$ if $F^1$ and
$F'^1$ are homotopic.
\begin{remark}[Difficulties]
\label{rem:Difficulties}
Note that this generalization to the curved setting yields two main difficulties compared to the flat case:
\begin{itemize}
\item As mentioned above, in the case of curved $L_\infty$-algebras
$(L,Q)$ the first structure map $Q_1^1$ does not square
to zero in general. Thus we do not have the notion of an
$L_\infty$-quasi-isomorphism as in Definition~\ref{def:Linftyquis}
and we cannot use Proposition~\ref{prop:StandardFormLinfty},
i.e. that every flat $L_\infty$-algebra is isomorphic to the
direct sum of a minimal one and a linear contractible one.
Using the homotopy classification of flat $L_\infty$-algebras
from Lemma~\ref{lemma:HomEquvsQuasiIso} we propose below a notion
of curved quasi-isomorphisms.
\item Curved morphisms of $L_\infty$-algebras
$F^1 \in \mathfrak{F}^1(\Hom(\Sym(L[1]),L')[1])^0$ are given by
Taylor components $F_n^1 \colon \Sym^n (L[1])
\rightarrow L'[1]$ for $n\geq 0$, where one has in particular a
zero component $F_0^1= F_0^1(1) = \alpha \in \mathcal{F}^1L'^1$.
By definition, they satisfy the Maurer-Cartan equation
\begin{align}
\label{eq:curvedmorphLinftyII}
0
=
\widehat{Q}_0^1 + Q'^1_1 \circ F^1 - F^1\circ Q +
\sum_{n=2}^\infty \frac{1}{n!} Q'^1_n \circ (F^1)^{\star n}.
\end{align}
If $F^1_0(1)=0$ the evaluation at $1$ yields
$Q'_0(1) = F^1_1(Q_0(1))$ and thus $F$ is an $L_\infty$-morphism
of curved $L_\infty$-algebras in the usual sense, from now on
called \emph{strict} $L_\infty$-morphism.
However, for $F_0^1(1)=\alpha\neq 0$ we no longer get an
induced coalgebra morphism on the symmetric coalgebras, compare
Theorem~\ref{thm:CofreeCocomConilpotentCoalg}.
\end{itemize}
\end{remark}
The second point is in fact not a serious problem, as we now explain:
first, note that we can still extend $F^1$ to all symmetric orders
via $F^i = \frac{1}{i!} (F^1)^{\star i}$ and $F^0 = \pr_\mathbb{K}$.
Writing $F = \exp_\star F^1$ we get a coalgebra morphism
\begin{equation}
\label{eq:relationFFtilde}
F \colon
\Sym (L[1])
\longrightarrow
\widehat{\Sym}(L'[1]),
\quad
X
\longmapsto
F(X)
=
\exp \alpha \vee \widetilde{F}(X)
\end{equation}
into the completed symmetric coalgebra, where we complete
with respect to the induced filtration. Here
$\widetilde{F}\colon \Sym(L[1]) \rightarrow \Sym(L'[1])$ is
the extension of $F^1_n$ with $n\geq 1$ to a coalgebra morphism,
i.e. in particular $\widetilde{F}(1)=1$. We sometimes identify $F^1$
with its extension $F$. Note that since $Q'$ is compatible with
the filtration, it extends to the completion $\widehat{\Sym}(L'[1])$
and \eqref{eq:curvedmorphLinftyII} implies
$F^1 \circ Q = Q'^1 \circ F$ as expected.
Thus we see that curved $L_\infty$-morphisms $F^1$ are still
well-behaved and we collect some useful properties:
\begin{proposition}
\label{prop:ProvertiesofCurvedLinftyMorph}
Let $(L,Q)$ and $(L',Q')$ be (curved) $L_\infty$-algebras with
complete descending filtrations such that one has $Q_0^1 \in
\mathcal{F}^1 L$ and $Q_0'\in \mathcal{F}^1 L'$ for the curvatures.
Moreover, let $F^1 \in \mathfrak{F}^1(\Hom(\Sym(L[1]),L')[1])^0$ be a
curved $L_\infty$-morphism.
\begin{propositionlist}
\item Extending $F^1_n$ with $n\geq 1$ to a coalgebra morphism
$\widetilde{F}$, one gets for all $X\in \Sym(L[1])$
\begin{equation}
\widetilde{F}^1 \circ Q (X)
=
Q'^1 \circ \left(\exp (\alpha)\vee \widetilde{F}(X)\right),
\end{equation}
i.e. $\widetilde{F}$ is a strict morphism into the
twisted $L_\infty$-algebra $(L',Q'^{\alpha})$.
\item Conversely, every strict morphism $\widetilde{F} \colon
(L,Q)\rightarrow (L',Q'^\alpha)$ corresponds to a curved
$L_\infty$-morphism $F^1$ with $F^1_0(1) = \alpha$ and
$F^1_i = \widetilde{F}^1_i$ for $i>0$.
\item $F^1$ induces a map at the level of Maurer-Cartan elements.
If $\pi \in \mathcal{F}^1L^1$ is a curved Maurer-Cartan element
in $(L,Q)$, then
\begin{align}
\label{eq:CurvdMorphMC}
F_\MC(\pi)
=
F^1(\exp \pi)
=
\alpha +
\sum_{k=1}^\infty \frac{1}{k!} F^1_k(\pi \vee \cdots \vee \pi)
\end{align}
is a curved Maurer-Cartan element in $(L',Q')$.
\item The map from \eqref{eq:CurvdMorphMC} is compatible with homotopy
equivalences, i.e. $F^1$ induces a map $F_\MC \colon
\Def(L)\rightarrow \Def(L')$.
\item The case $F^1_1=\id$, $F_0^1=\alpha$ and $F_n^1=0$ for
all $n \geq 2$ corresponds again to twisting by $-\alpha$. We denote
it by
\begin{equation}
\label{eq:TwCurvedMorph}
\tw_{\alpha}\colon
(L,Q)
\rightsquigarrow
(L,Q^{-\alpha}).
\end{equation}
\item The curved $L_\infty$-morphisms from the curved $L_\infty$-algebra
$0$ to $L$ are just the set of curved Maurer-Cartan elements of $L$.
\end{propositionlist}
\end{proposition}
\begin{proof}
The first and second point follow directly from the Maurer-Cartan equation
\eqref{eq:curvedmorphLinftyII} and \eqref{eq:relationFFtilde}.
Then the third and fourth point follow from Lemma~\ref{lemma:CorrespCurvedandFlatMC} since $\widetilde{F}$
is compatible with equivalences by Proposition~\ref{prop:FmapsEquivMCtoEquiv}.
The other statements are clear.
\end{proof}
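To illustrate the last point: for $L = 0$ one has $\Sym(L[1]) = \mathbb{K}$, so a
curved $L_\infty$-morphism $F^1$ is determined by $F^1_0(1) = \alpha \in
\mathcal{F}^1 L'^1$, and evaluating the Maurer-Cartan equation
\eqref{eq:curvedmorphLinftyII} at $1$ gives
\begin{equation*}
0
=
Q'_0(1) + Q'^1_1(\alpha)
+ \sum_{n=2}^\infty \frac{1}{n!} Q'^1_n(\alpha^{\vee n}),
\end{equation*}
which is precisely the curved Maurer-Cartan equation for $\alpha$.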
\begin{corollary}
Curved $L_\infty$-morphisms $F^1$ from $(L,Q)$ to $(L',Q')$ are in
one-to-one correspondence with strict $L_\infty$-morphisms $\widetilde{F}$
from $(L,Q)$ into $(L',Q'^\alpha)$ with $F^1_0(1)=\alpha \in \mathcal{F}^1
L'^1$ and $F^1_i =\widetilde{F}^1_i$ for $i>0$.
\end{corollary}
As expected, we can compose curved $L_\infty$-morphisms between curved
$L_\infty$-algebras:
\begin{proposition}
\label{prop:CompofCurved}
Let $F^1 \colon (L,Q) \rightsquigarrow (L',Q')$ and $G^1\colon
(L',Q') \rightsquigarrow (L'',Q'')$ be curved $L_\infty$-morphisms
with $F_0^1 = \alpha \in \mathcal{F}^1L'^1$ and
$G_0^1 = \beta \in \mathcal{F}^1L''^1$. Then there exists a
curved $L_\infty$-morphism
\begin{equation}
\label{eq:CompCurvedLinftyMorph}
(G\circ F)^1
\coloneqq
G^1 \circ F
\colon
(L,Q)
\longrightsquigarrow{1}
(L'',Q'')
\end{equation}
with $(G\circ F)_0^1 = G^1(\exp \alpha) =
\beta + \widetilde{G}^1(\cc{\exp}\alpha)$
and $\widetilde{G\circ F} = \widetilde{G}^\alpha \circ \widetilde{F}$,
and the composition is associative. Moreover, one has
\begin{equation}
\label{eq:ComponMC}
(G\circ F)_\MC
=
G_\MC \circ F_\MC
\colon
\Def(L)
\longrightarrow
\Def(L'')
\end{equation}
at the level of equivalence classes of Maurer-Cartan elements.
\end{proposition}
\begin{proof}
We saw in Proposition~\ref{prop:ProvertiesofCurvedLinftyMorph} that
$F$ and $G$ correspond to strict $L_\infty$-morphisms
\begin{equation*}
\widetilde{F}
\colon
(L,Q)
\longrightarrow
(L',Q'^\alpha)
\quad \text{ and } \quad
\widetilde{G}
\colon
(L',Q')
\longrightarrow
(L'',(Q'')^\beta).
\end{equation*}
In particular, we can twist $\widetilde{G}$ with
$\alpha$ and obtain
\begin{equation*}
(L,Q)
\stackrel{\widetilde{F}}{\longrightarrow}
(L',Q'^\alpha)
\stackrel{\widetilde{G}^\alpha}{\longrightarrow}
(L'',(Q'')^{\beta + \widetilde{G}^1(\cc{\exp}\alpha)}).
\end{equation*}
Thus $(G\circ F)^1_i = (\widetilde{G}^\alpha \circ \widetilde{F})^1_i$
for $i>0$ and $(G\circ F)^1_0 = \beta + \widetilde{G}^1(\cc{\exp}\alpha)$
defines indeed a curved $L_\infty$-morphism. Moreover, we have
$G^1 \circ F = G^1 \circ \exp_\star F^1 = G^1 \circ (\exp(\alpha) \vee \widetilde{F}) = (G\circ F)^1$. It is easy to check that the composition
is associative. Therefore, let us now look at the induced map at the level
of Maurer-Cartan elements: by \eqref{eq:CurvdMorphMC} $(G\circ F)_\MC$ maps
$\pi \in \mathcal{F}^1L^1$ to
\begin{align*}
(G\circ F)_\MC (\pi)
& =
\beta + \widetilde{G}^1(\cc{\exp}\alpha)
+ \widetilde{G\circ F}^1(\cc{\exp}\pi)
=
\beta + \widetilde{G}^1(\cc{\exp}\alpha) +
\widetilde{G}^1 ( \exp \alpha \vee \widetilde{F}(\cc{\exp}\pi)) \\
& =
G^1 (\exp \alpha \vee \exp \widetilde{F}^1(\cc{\exp}\pi))
=
G_\MC \circ F_\MC(\pi)
\end{align*}
as desired.
\end{proof}
We see that if both $F$ and $G$ are strict morphisms, then $F=\widetilde{F}$ and $G=\widetilde{G}$, and the above composition is just the usual one.
Moreover, we have the following observation:
\begin{corollary}
\label{cor:CompwithTwist}
Let $F^1 \colon (L,Q) \rightsquigarrow (L',Q')$ be a curved
$L_\infty$-morphism with $F_0^1=\alpha \in \mathcal{F}^1L'^1$.
Then one has for $\beta \in \mathcal{F}^1L'^1$
\begin{equation}
\label{eq:curvLinftyFlatTwisted}
\tw_\alpha \circ \tw_\beta
=
\tw_{\alpha+\beta},
\quad \quad
F^1
=
\tw_\alpha \circ \widetilde{F},
\quad \text{ and }\quad
\tw_{-\alpha} \circ F
=
\widetilde{F}^1.
\end{equation}
\end{corollary}
\begin{proof}
By the fifth point of Proposition~\ref{prop:ProvertiesofCurvedLinftyMorph}
we know that $\tw_\alpha$ corresponds to twisting with $-\alpha$, i.e.
\begin{equation*}
\tw_\alpha \colon
(L',Q'^\alpha)
\longrightsquigarrow{0.8}
( L',Q'^{\alpha-\alpha})
=
(L',Q'),
\quad \quad
(\tw_\alpha)^1_0 = \alpha,
\quad
(\tw_\alpha)^1_1 = \id.
\end{equation*}
We compute $(\tw_\alpha \circ \tw_\beta)^1_0 = \alpha + \beta$
and $(\tw_\alpha \circ \tw_\beta)^1_i = \delta^1_i \id$ for $i\geq 1$,
thus $\tw_\alpha \circ \tw_\beta= \tw_{\alpha + \beta}$ is shown.
Then Proposition~\ref{prop:CompofCurved} gives the desired
$F^1 = \tw_\alpha \circ \widetilde{F}$, and the last identity follows
from the first two via
$\tw_{-\alpha} \circ F = \tw_{-\alpha} \circ \tw_\alpha \circ \widetilde{F}
= \tw_0 \circ \widetilde{F} = \widetilde{F}^1$.
\end{proof}
As a next step, we want to investigate whether the (curved) homotopy
equivalence is compatible with the interpretation of curved morphisms
as strict $L_\infty$-morphisms into a twisted codomain. First, let
$F^1$ and $F'^1$ be homotopic via $F^1(t)$ and $\lambda^1(t)$, i.e.
\begin{align}
\label{eq:HomEquCurvMorph}
\frac{\D}{\D t} F^1(t)
& =
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t)))
=
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1_0(t)(1))\vee
\exp(\widetilde{F}^1(t)))
\end{align}
with $F^1(0) = F^1$ and $F^1(1)= F'^1$.
This implies for the component $F^1_0(t)=F^1_0(t)(1)$:
\begin{equation*}
\frac{\D}{\D t} F^1_0(t)
=
\lambda^1 (t) \circ Q_0^1
+
Q'^1 \circ (\lambda^1_0(t) \vee \exp F^1_0(t)).
\end{equation*}
Thus we see that $\alpha_0 \neq \alpha_1$ is possible, and
in general we cannot directly compare $\widetilde{F}_0$ and
$\widetilde{F}_1$ since they can have different codomains. However, we
directly see:
\begin{proposition}
\label{prop:RelationHomEquiv}
Let $F^1,F'^1 \colon (L,Q)\rightsquigarrow (L',Q')$ be two curved
$L_\infty$-morphisms with $\alpha = F_0^1=F'^1_0$.
Then $F^1\sim F'^1$ if and only if $\widetilde{F}^1\sim \widetilde{F'}^1$.
\end{proposition}
\begin{proof}
By Corollary~\ref{cor:CompwithTwist} we only have to show that
$F^1 \sim F'^1$ implies $(\tw_\beta \circ F) \sim (\tw_\beta \circ F')$
for all $\beta \in \mathcal{F}^1L'^1$ and all curved $L_\infty$-morphisms
with $F_0^1=F'^1_0$, where
\begin{equation*}
(\tw_\beta \circ F), (\tw_\beta \circ F') \colon
(L,Q)
\longrightsquigarrow{1}
(L',Q'^{-\beta}).
\end{equation*}
Thus assume that $F^1(t),\lambda^1(t)$
encode the equivalence between $F^1$ and $F'^1$, then we have by
\eqref{eq:HomEquCurvMorph}
\begin{align*}
\frac{\D}{\D t} F^1(t)
& =
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t)))
\quad \text{ with }
F^1(0) = F^1
\text{ and }
F^1(1)= F'^1.
\end{align*}
With $G^1(t) = \tw_\beta \circ F^1(t)$ we get
\begin{align*}
\frac{\D}{\D t} G^1(t)
& =
\frac{\D}{\D t} F^1(t)
=
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t))) \\
& =
\widehat{Q}^1(\lambda^1(t)\vee \exp(-\beta)\vee \exp(\beta)
\vee \exp(F^1(t))) \\
& =
\widehat{Q^{-\beta}}^1(\lambda^1(t) \vee\exp(G^1(t))) ,
\end{align*}
where $\widehat{Q^{-\beta}}$ is the codifferential on the curved
convolution $L_\infty$-algebra induced by $(L,Q)$ and $(L',Q'^{-\beta})$.
Thus we get the homotopy equivalence between $\tw_\beta \circ F$
and $\tw_\beta \circ F'$.
\end{proof}
Moreover, \eqref{eq:HomEquCurvMorph} implies:
\begin{corollary}
Let $F^1$ and $F'^1$ be homotopic curved $L_\infty$-morphisms from
$(L,Q)$ to $(L',Q')$ with $\alpha = F^1_0$ and
$\alpha' =F'^1_0$. If $Q_0^1=0$ and if $\alpha$ is a
Maurer-Cartan element, then so is $\alpha'$, and the two are equivalent.
\end{corollary}
Analogously to the flat case in Proposition~\ref{prop:PropertiesofHomotopicMorphisms} we can show that homotopic
curved $L_\infty$-morphisms induce the same maps on the equivalence classes
of Maurer-Cartan elements.
\begin{proposition}
\label{prop:HomCurvedMorphonDef}
Let $F^1,F'^1 \colon (L,Q)\rightsquigarrow (L',Q')$ be two curved
$L_\infty$-morphisms. If $F^1$ and $F'^1$ are homotopic, then they
induce the same maps from $\Def(L)$ to $\Def(L')$, i.e.
$F_\MC = F'_\MC$.
\end{proposition}
\begin{proof}
Let us assume that $F^1(t),\lambda^1(t)$
encode the equivalence between $F^1$ and $F'^1$, then we have by
\eqref{eq:HomEquCurvMorph}
\begin{align*}
\frac{\D}{\D t} F^1(t)
& =
\widehat{Q}^1(\lambda^1(t)\vee \exp(F^1(t)))
\quad \text{ with }
F^1(0) = F^1
\text{ and }
F^1(1)= F'^1.
\end{align*}
Applying this to $\exp(\pi)$ for $\pi \in \Mc^1(L)$ gives
\begin{align*}
\frac{\D}{\D t} F^1(t)(\exp \pi)
& =
\widehat{Q}^1(\lambda^1(t)(\exp \pi)\vee \exp(F^1(t)(\exp \pi))),
\end{align*}
i.e. $\pi(t) = F^1(t)(\exp\pi)$ and $\lambda(t) = \lambda^1(t)(\exp\pi)$
encode the homotopy equivalence between
$F_\MC(\pi)=F^1(\exp\pi)$ and $F'_\MC(\pi)=(F')^1(\exp\pi)$. Thus $F^1$ and $F'^1$ indeed map
Maurer-Cartan elements to equivalent ones.
\end{proof}
In the proof of Proposition~\ref{prop:RelationHomEquiv} we saw that
$\tw_\alpha \circ \widetilde{F}$ is homotopic to
$\tw_\alpha \circ \widetilde{F'}$ if $\widetilde{F}$ and
$\widetilde{F'}$ are homotopic. We want to generalize this in
the spirit of Proposition~\ref{prop:CompofHomotopicHomotopic} and
Proposition~\ref{prop:preCompofHomotopicHomotopic} and show that general
pre- and post-compositions of homotopic curved $L_\infty$-morphisms with a
curved $L_\infty$-morphism are again homotopic.
\begin{proposition}
\label{prop:CompofCurvedHomotopicHomotopic}
Let $F^1,F'^1 \colon (L,Q)\rightsquigarrow (L',Q')$ be two curved
$L_\infty$-morphisms and assume $F^1\sim F'^1$.
\begin{propositionlist}
\item If $H^1 \colon (L',Q') \rightsquigarrow (L'',Q'')$
is a curved $L_\infty$-morphism, then $(H^1\circ F) \sim (H^1\circ F')$.
\item If $H^1 \colon (L'',Q'') \rightsquigarrow (L,Q)$
is a curved $L_\infty$-morphism, then $ (F^1 \circ H)\sim (F'^1\circ H)$.
\end{propositionlist}
\end{proposition}
\begin{proof}
The statements follow as in the flat case in Proposition~\ref{prop:CompofHomotopicHomotopic} and
Proposition~\ref{prop:preCompofHomotopicHomotopic}, since there we only used
bialgebraic properties that, by the completeness of the filtration, still hold in our setting.
\end{proof}
\begin{corollary}
Let $F^1,F'^1$ be two homotopic curved $L_\infty$-morphisms
from $(L,Q)$ to $(L',Q')$, and let
$H^1,H'^1$ be two homotopic curved $L_\infty$-morphisms from $(L',Q')$ to
$(L'',Q'')$, then $(H^1 \circ F)\sim (H'^1 \circ F')$.
\end{corollary}
Let us now address the first difficulty from Remark~\ref{rem:Difficulties},
namely the fact that we do not have an obvious notion of
curved $L_\infty$-quasi-isomorphism since curved $L_\infty$-algebras
$(L,Q)$ do not induce a cochain complex $(L,Q^1_1)$.
Recall that we showed for the flat case in
Lemma~\ref{lemma:HomEquvsQuasiIso} that there exists an
$L_\infty$-quasi-isomorphism between two flat $L_\infty$-algebras
$(L,Q)$ and $(L',Q')$ if and only if there are $L_\infty$-morphisms
$F\colon (L,Q)\to (L',Q')$ and $G\colon (L',Q')\to (L,Q)$
such that $F\circ G\sim \id_{L'}$ and $G\circ F\sim \id_{L}$.
Thus we define:
\begin{definition}[Curved $L_\infty$-quasi-isomorphism]
\label{def:CurvedQuis}
Let $F^1 \colon (L,Q) \rightsquigarrow (L',Q')$ be a curved
$L_\infty$-morphism between curved $L_\infty$-algebras. One
calls $F^1$ a \emph{curved $L_\infty$-quasi-isomorphism} if and only if
there exists a curved $L_\infty$-morphism $G^1\colon (L',Q')
\rightsquigarrow (L,Q)$
such that $F^1\circ G\sim \id_{L'}$ and $G^1\circ F\sim \id_{L}$. In
this case, $F^1$ and $G^1$ are said to be quasi-inverse to each other.
\end{definition}
Concerning the behaviour on Maurer-Cartan elements, we directly get an
analogue of Theorem~\ref{thm:lquisbijondef}.
\begin{corollary}
Let $F^1 \colon (L,Q) \rightsquigarrow (L',Q')$ be a
curved $L_\infty$-quasi-isomorphism. Then the induced map
$F_\MC \colon\Def(L)\rightarrow \Def(L')$ is a bijection.
\end{corollary}
\begin{proof}
Let $G^1$ be the quasi-inverse of $F^1$, then we know by
Proposition~\ref{prop:HomCurvedMorphonDef} that
$(F\circ G)_\MC$ and $(G \circ F)_\MC$ are the identity maps
on $\Def(L')$ resp. $\Def(L)$. But
Proposition~\ref{prop:CompofCurved} implies that
$(F\circ G)_\MC = F_\MC \circ G_\MC$ and
$(G\circ F)_\MC = G_\MC \circ F_\MC$, which implies
that $F_\MC$ and $G_\MC$ are bijections.
\end{proof}
We can also generalize the twisting procedure for
$L_\infty$-morphisms from the strict case in
Proposition~\ref{prop:twistinglinftymorphisms} to our curved
$L_\infty$-morphisms, which is straightforward:
\begin{proposition}
\label{prop:TwistingofCurvedMorph}
Let $F^1 \colon (L,Q)\rightsquigarrow (L',Q')$ be a curved
$L_\infty$-morphism with $F_0^1=\alpha$ and let
$\pi \in \mathcal{F}^1L^1$. Then the curved $L_\infty$-morphism
\begin{equation}
(F^\pi)^1
=
\tw_{- \widetilde{F}^1(\exp \pi)} \circ
F \circ \tw_{\pi} \colon
(L,Q^\pi)
\longrightsquigarrow{1}
(L',Q'^{\widetilde{F}^1(\exp \pi)})
\end{equation}
has structure maps $(F^\pi)^1_i = \sum_{k=0}^\infty \frac{1}{k!} F^1_{k+i}(
\pi^{\vee k}\vee \argument )$ for $i>0$ and $(F^\pi)^1_0=\alpha$.
\end{proposition}
\begin{proof}
At first, it is clear that the composition $(F^\pi)^1
= \tw_{- \widetilde{F}^1(\exp \pi)} \circ F \circ \tw_{\pi}$ is a curved
$L_\infty$-morphism
\begin{equation*}
(L,Q^\pi)
\stackrel{\tw_\pi}{\longrightsquigarrow{1}}
(L,Q)
\stackrel{F}{\longrightsquigarrow{1}}
(L',Q')
\stackrel{\tw_{- \widetilde{F}^1(\exp \pi)}}{\longrightsquigarrow{1}}
(L',Q'^{\widetilde{F}^1(\exp\pi)}).
\end{equation*}
For the structure maps, we see that $(F^\pi)^1_0 = -\widetilde{F}^1(\exp \pi)
+ \alpha + \widetilde{F}^1(\exp \pi) = \alpha$ and for $i>0$ we get
\begin{equation*}
(F^\pi)^1_i
=
(F \circ \tw_\pi)^1_i
=
(\widetilde{F}^\pi)^1_i,
\end{equation*}
which implies in particular
$\widetilde{F^\pi}= \widetilde{F}^\pi$.
\end{proof}
\begin{remark}
We collect a few immediate observations:
\begin{remarklist}
\item We directly see that $(F^0)^1 = F^1$, i.e. twisting by zero does
not change the morphism.
\item The above definition for the twisted curved $L_\infty$-morphism
$(F^\pi)^1$ recovers the results for the strict case from
Proposition~\ref{prop:twistinglinftymorphisms}: If $F^1$ is strict,
i.e. $F^1_0=0$ and $F = \widetilde{F}$, then we get indeed $F^\pi
= \widetilde{F}^\pi= \widetilde{F^\pi}$.
\item If $F^1_0 \neq 0$ and if we twist with a Maurer-Cartan element,
then the image no longer has to be a flat $L_\infty$-algebra,
since $\widetilde{F}^1(\exp\pi)$ does not have
to be a Maurer-Cartan element.
\item The twist of a curved $L_\infty$-morphism is always a curved
$L_\infty$-morphism, i.e. we cannot obtain a strict one by twisting.
\end{remarklist}
\end{remark}
Finally, note that the only results we have not yet generalized are
those from Section~\ref{sec:HomEquivTwistedMorphisms}, where
we showed that strict $L_\infty$-morphisms between flat
$L_\infty$-algebras that are twisted with equivalent Maurer-Cartan
elements are homotopic. Note that in Proposition~\ref{prop:TwistMorphHomEqu}
and Proposition~\ref{prop:TwistedLinftyIsom} we considered only the case of
a Maurer-Cartan element $\pi$ equivalent to zero, which was sufficient in
the flat case since it implies the statement for arbitrary pairs of
equivalent Maurer-Cartan elements $\pi$ and $\pi'$.
\begin{itemize}
\item Let us first consider the context of strict $L_\infty$-morphisms
of curved $L_\infty$-algebras. Let $\pi \sim \pi'$ be two equivalent
curved Maurer-Cartan elements in $(L,Q)$. Then we know from
Lemma~\ref{lemma:twistCodiff} that
$(L,Q^\pi)$ and $(L,Q^{\pi'})$ are flat $L_\infty$-algebras,
and from Lemma~\ref{lemma:CorrespCurvedandFlatMC} that
$\pi'-\pi \sim 0$ in $(L,Q^\pi)$. Thus we can directly apply
Proposition~\ref{prop:TwistedLinftyIsom}.
\item In the context of curved $L_\infty$-morphisms it is more
difficult to generalize these results since we saw in
Proposition~\ref{prop:TwistingofCurvedMorph} that if we twist
a curved $L_\infty$-morphism with a Maurer-Cartan element, then
the codomain does not need to be a flat $L_\infty$-algebra.
More explicitly, let $F^1 \colon (L,Q) \rightsquigarrow (L',Q')$
be a curved $L_\infty$-morphism and let $\pi,\pi'\in \mathcal{F}^1L^1$
be two equivalent Maurer-Cartan elements. Then we end up with
the following diagram:
\begin{equation}
\label{eq:CurvedTwistDiagram}
\begin{tikzpicture}
\node (LQpi) at (0,0) {{$(L,Q^\pi)$ }};
\node (LprimeFpi) at (6,0)
{{$(L',Q'^{\widetilde{F}^1(\exp\pi)})$ }};
\node (LQpiprime) at (0,-2.5)
{{$(L,Q^{\pi'})$}};
\node (LprimeFpiprime) at (6,-2.5)
{{$(L',Q'^{\widetilde{F}^1(\exp\pi')})$}};
\draw [->] (LQpi) -- (LQpiprime) node[midway,left]
{$\Phi_1$};
\draw [->,line join=round,
decorate, decoration={
zigzag,
segment length=4,
amplitude=.9,post=lineto,
post length=2pt
}] (LprimeFpi) -- (LprimeFpiprime) node[midway,right]
{$\tw_\alpha \circ \Phi'_1 \circ \tw_{-\alpha}$};
\draw [->,line join=round,
decorate, decoration={
zigzag,
segment length=4,
amplitude=.9,post=lineto,
post length=2pt
}] (LQpiprime) -- (LprimeFpiprime) node[midway,above]
{$(F^{\pi'})^1$};
\draw [->,line join=round,
decorate, decoration={
zigzag,
segment length=4,
amplitude=.9,post=lineto,
post length=2pt
}] (LQpi) -- (LprimeFpi) node[midway,above] {$(F^\pi)^1$};
\end{tikzpicture}
\end{equation}
where $\Phi_1$ and $\Phi'_1$ are strict $L_\infty$-isomorphisms
by Lemma~\ref{lemma:PhitLinftyMorph}. Note that in particular
\begin{equation*}
\Phi_1' \colon
(L',Q'^{\alpha +\widetilde{F}^1(\exp\pi)})
\longrightarrow
(L',Q'^{\alpha +\widetilde{F}^1(\exp\pi')})
\end{equation*}
is a well-defined strict $L_\infty$-isomorphism
since $\alpha +\widetilde{F}^1(\exp\pi) = F_\MC(\pi)$ and
$\alpha +\widetilde{F}^1(\exp\pi') = F_\MC(\pi')$ are equivalent
Maurer-Cartan elements.
\end{itemize}
We can show that Diagram~\eqref{eq:CurvedTwistDiagram} commutes up to
homotopy:
\begin{proposition}
The curved $L_\infty$-morphisms $(F^\pi)^1$ and $\tw_\alpha \circ (
\Phi'_1)^{-1} \circ \tw_{-\alpha} \circ F^{\pi'} \circ \Phi_1$
are homotopic.
\end{proposition}
\begin{proof}
We know from Corollary~\ref{cor:CompwithTwist} that we have
\begin{align*}
(F^\pi)^1
=
\tw_\alpha \circ \widetilde{F^\pi},
\quad \text{ and } \quad
(F^{\pi'})^1
=
\tw_\alpha \circ \widetilde{F^{\pi'}}
\end{align*}
which implies
\begin{align*}
\tw_\alpha \circ
( \Phi'_1)^{-1} \circ \tw_{-\alpha} \circ F^{\pi'} \circ \Phi_1
& =
\tw_\alpha \circ
( \Phi'_1)^{-1} \circ \widetilde{F^{\pi'}} \circ \Phi_1.
\end{align*}
Moreover, Proposition~\ref{prop:RelationHomEquiv} implies
\begin{align*}
\left((F^\pi)^1
\sim
\tw_\alpha \circ
( \Phi'_1)^{-1} \circ \tw_{-\alpha} \circ F^{\pi'} \circ \Phi_1 \right)
\quad \quad \Longleftrightarrow \quad \quad
\left( \widetilde{F^\pi}
\sim
( \Phi'_1)^{-1} \circ \widetilde{F^{\pi'}} \circ \Phi_1 \right).
\end{align*}
But the right hand side is exactly
Proposition~\ref{prop:TwistedLinftyIsom} since
we know $\widetilde{F^\pi} = \widetilde{F}^\pi$ and
$\widetilde{F^{\pi'}} = \widetilde{F}^{\pi'}$.
\end{proof}
\section{$L_\infty$-modules}
\label{sec:LinftyModules}
After having introduced the notion of $L_\infty$-algebras, we want
to understand the basics of their representation theory, i.e.
the notion of $L_\infty$-modules,
see e.g. \cite{dolgushev:2005b,dolgushev:2006a,esposito.dekleijn:2021a}.
For example, they play an important role in the formality theorem for
Hochschild chains~\cite{dolgushev:2006a,shoikhet:2003a}.
\subsection{Definition and First Properties}
\begin{definition}[$L_\infty$-module]
Let $(L,Q)$ be a (curved) $L_\infty$-algebra. An \emph{$L_\infty$-module} over $(L,Q)$
is a graded vector space $M$ over $\mathbb{K}$ equipped with a codifferential
$\phi$ of degree $1$ on the
cofreely cogenerated comodule $\Sym(L[1]) \otimes M$ over $(\Sym(L[1]),Q)$.
\end{definition}
On the total space of the comodule $\Sym(L[1]) \otimes M$ one has the coaction
\begin{equation}
a \colon
\Sym(L[1]) \otimes M
\longrightarrow
\Sym(L[1]) \otimes (\Sym(L[1]) \otimes M)
\end{equation}
that is defined by
\begin{align*}
a(\gamma_1 \vee \cdots \vee \gamma_n \otimes m)
& =
(\Delta_\sh \otimes \id) (\gamma_1 \vee \cdots \vee \gamma_n \otimes m) \\
& =
\sum_{k=0}^{n} \sum_{\sigma \in Sh(k,n-k)}
\epsilon(\sigma) \gamma_{\sigma(1)}\vee \cdots \vee \gamma_{\sigma(k)}
\otimes \left( \gamma_{\sigma(k+1)}\vee \cdots \vee \gamma_{\sigma(n)}
\otimes m\right),
\end{align*}
where $\gamma_i \in L$ and $m\in M$ are homogeneous. For example, we have
$a(1 \otimes m)=1 \otimes 1 \otimes m$ and $a(\gamma \otimes m) =
\gamma \otimes 1\otimes m + 1 \otimes\gamma \otimes m$.
By the coassociativity of $\Delta_\sh$ one directly obtains
$(\id \otimes a)a(X)= (\Delta_\sh \otimes \id)a(X)$ for all
$X\in \Sym(L[1]) \otimes M$, which shows that $a$ is indeed a coaction.
\begin{remark}
Note that in the flat setting one can restrict to modules over
$\cc{\Sym}(L[1])$ instead of $\Sym(L[1])$. In this case
the sum in the definition of
$a$ starts at $k=1$. For example, one has then in the flat case
$\ker a = M$, which is not true in the curved setting, see \cite[Section~2.2]{dolgushev:2006a}.
\end{remark}
By definition, an $L_\infty$-module structure is a codifferential
$\phi$ on $\Sym(L[1]) \otimes M$, which means
\begin{equation}
a \circ \phi(X)
=
(\id \otimes \phi)(a X) + (Q \otimes \id\otimes \id) (aX).
\end{equation}
In terms of homogeneous elements this takes the form
\begin{align*}
\begin{split}
\phi(\gamma_1 &\vee \cdots \vee \gamma_n \otimes m)
=
Q(\gamma_1 \vee \cdots \vee \gamma_n ) \otimes m \\
& +
\sum_{k=0}^n\sum_{\sigma\in Sh(k,n-k)}
(-1)^{\sum_{i=1}^k \abs{\gamma_{\sigma(i)}}} \epsilon(\sigma)
\gamma_{\sigma(1)} \vee \cdots \vee \gamma_{\sigma(k)} \vee
\phi_{n-k}^1(\gamma_{\sigma(k+1)} \vee \cdots \vee \gamma_{\sigma(n)}\otimes m)
\end{split}
\end{align*}
with
\begin{equation}
\phi_n^1 = \phi_n \colon
\Sym^n (L[1]) \otimes M
\longrightarrow
M[1],
\quad n \geq 0.
\end{equation}
In other words,
\begin{equation}
\label{eq:ComodStructure}
\phi
=
Q \otimes \id + (\id \otimes \phi^1) \circ (\Delta_\sh \otimes \id).
\end{equation}
For example, this implies
\begin{equation*}
\phi(\gamma \otimes m)
=
Q(\gamma)\otimes m + 1 \otimes \phi_1^1(\gamma \otimes m)
+ (-1)^{\abs{\gamma}} \gamma \otimes \phi_0^1(m).
\end{equation*}
The condition $\phi^2=0$ translates to quadratic relations, see e.g.
\cite[Formula~(2.25)]{dolgushev:2006a}. We compute:
\begin{align}
\label{eq:phisquared}
\begin{split}
\phi \circ \phi
& =
(Q\otimes \phi^1)\circ (\Delta_\sh \otimes \id)
+
(\id \otimes \phi^1)\circ (Q \otimes \id \otimes \id +
\id \otimes Q \otimes \id)\circ (\Delta_\sh\otimes\id) \\
& \quad
+ Q\circ Q \otimes \id+(\id \otimes \phi^1)\circ (\Delta_\sh\otimes \id )\circ
(\id \otimes \phi^1)\circ(\Delta_\sh\otimes\id)\\
& =
(\id \otimes \phi^1\circ (Q\otimes \id+
(\id \otimes\phi^1)\circ(\Delta_\sh\otimes\id))) \circ (\Delta_\sh \otimes \id).
\end{split}
\end{align}
In particular, this gives
\begin{equation*}
\phi_0(1 \otimes \phi_0(1\otimes m))
+ \phi_1(Q_0(1)\otimes m)
=
0
\end{equation*}
and
\begin{align*}
\phi_0(1 \otimes \phi_1(\gamma \otimes m))
+ \phi_1(Q_1(\gamma)\otimes m)
+ \phi_2(Q_0(1)\vee \gamma \otimes m)
+(-1)^{\abs{\gamma}} \phi_1(\gamma \otimes \phi_0(1 \otimes m))
=
0
\end{align*}
for $\gamma \in L[1]^{\abs{\gamma}}$. In the flat setting, i.e. if $Q_0(1)=0$, the map
$\phi_0$ is indeed a differential on $M$ and $\phi_1$ is closed with
respect to the induced differential on $\Hom(L[1]\otimes M ,M)$.
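Spelled out, if $Q_0(1)=0$ the two relations above reduce to
\begin{equation*}
\phi_0(1 \otimes \phi_0(1\otimes m))
=
0
\end{equation*}
and
\begin{equation*}
\phi_0(1 \otimes \phi_1(\gamma \otimes m))
+ \phi_1(Q_1(\gamma)\otimes m)
+(-1)^{\abs{\gamma}} \phi_1(\gamma \otimes \phi_0(1 \otimes m))
=
0,
\end{equation*}
which are exactly the statements that $\phi_0$ squares to zero and that
$\phi_1$ is closed with respect to the differential induced by $Q_1$
and $\phi_0$.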
\begin{example}[DGLA module]
\label{ex:dglamodule}
As expected, the simplest example is a DG module $(M,b,\rho)$ over a DGLA
$(\liealg{g},\D,[\argument{,}\argument])$. The only non-vanishing
structure maps of $\phi$ are $\phi_0 = -b$ and $\phi_1(\gamma\otimes m)
= -(-1)^{\abs{\gamma}}\rho(\gamma)m$, where $\rho$ is the action of
$\liealg{g}$ on $M$.
\end{example}
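As a consistency check (a sketch only, since the precise signs depend on
the shift conventions for $L[1]$), inserting $\phi_0(1\otimes m) = -b m$ and
$\phi_1(\gamma\otimes m) = -(-1)^{\abs{\gamma}}\rho(\gamma)m$ into the flat
quadratic relations recovers, up to these signs, the DG module axioms
\begin{equation*}
b^2 = 0
\qquad \text{and} \qquad
b(\rho(\gamma)m)
=
\rho(\D \gamma)m + (-1)^{\abs{\gamma}}\rho(\gamma)(b m),
\end{equation*}
i.e. $b$ is a differential on $M$ and $\rho$ is compatible with $b$ and
$\D$ in the Leibniz sense.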
\begin{example}
Another basic example comes from $L_\infty$-morphisms: an
$L_\infty$-morphism $F\colon (L,Q) \rightarrow (L',Q')$ induces on $L'$
the structure of an $L_\infty$-module over $L$ via
\begin{equation*}
\phi_k( \gamma_1 \vee \cdots \vee \gamma_k \otimes m)
=
Q'^1(F(\gamma_1 \vee \cdots \vee \gamma_k)\vee m)
\end{equation*}
for $\gamma_i \in L, m\in L'$.
\end{example}
As in the case of $L_\infty$-algebras, $L_\infty$-module structures can
be interpreted as Maurer-Cartan elements in a convolution-like algebra:
\begin{proposition}
Let $M$ be a graded vector space over $\mathbb{K}$ and $(L,Q)$ an
$L_\infty$-algebra. Then the vector space $
\liealg{h}_{M,L}= \Hom(\Sym(L[1])\otimes M, M)$
of graded linear maps can be equipped with the structure of a
DGLA with differential
\begin{equation}
\del \phi
=
-(-1)^{\abs{\phi}} \phi \circ (Q\otimes \id),
\end{equation}
where $\abs{\argument}$ denotes the degree in $\Hom(\Sym(L[1])\otimes M, M)$,
and bracket induced by the product
\begin{equation}
\label{eq:ProductLinftyModDGLA}
\phi \bullet \psi
=
\phi \circ (\id \otimes \psi)\circ (\Delta_\sh \otimes \id).
\end{equation}
The Maurer-Cartan elements $\phi$ of this DGLA can be identified with
$L_\infty$-module structures $\widehat{\phi}$ on $M$.
\end{proposition}
\begin{proof}
The product \eqref{eq:ProductLinftyModDGLA} is associative and thus induces a
Lie bracket:
\begin{align*}
\phi \bullet (\psi \bullet \eta)
& =
\phi \circ (\id \otimes (\psi \circ (\id \otimes \eta)\circ(\Delta_\sh\otimes\id)))
\circ (\Delta_\sh \otimes\id) \\
& =
\phi \circ (\id \otimes \psi)\circ (\id \otimes \id \otimes \eta)
\circ (\Delta_\sh^2 \otimes \id)
=
(\phi \bullet \psi)\bullet \eta.
\end{align*}
The identity $\del^2=0$ is clear. The compatibility of $\del$ with the product
follows from
\begin{align*}
\del (\phi \bullet \psi)
& =
-(-1)^{\abs{\phi}+\abs{\psi}}
\phi \circ (\id \otimes \psi)\circ (\Delta_\sh \otimes \id)(Q \otimes \id) \\
& =
-(-1)^{\abs{\phi}+\abs{\psi}}
\phi \circ (\id \otimes \psi)\circ (Q\otimes \id \otimes \id +
\id \otimes Q \otimes \id)(\Delta_\sh \otimes \id) \\
& =
\del \phi \bullet \psi +
(-1)^{\abs{\phi}} \phi \bullet \del \psi.
\end{align*}
Let now $\phi$ be a Maurer-Cartan element, i.e.
\begin{align*}
0
=
\del \phi + \phi \bullet \phi
=
\phi \circ (Q\otimes \id) +
\phi \circ (\id \otimes \phi)\circ (\Delta_\sh \otimes \id) .
\end{align*}
But by \eqref{eq:phisquared} this is equivalent to $\phi$ inducing
a codifferential $\widehat{\phi}=
Q \otimes \id + (\id \otimes \phi) \circ (\Delta_\sh \otimes \id)$.
\end{proof}
As usual, if $L$ and $M$ are equipped with filtrations, we require the maps
in $\liealg{h}_{M,L}$ to be compatible with the filtrations.
\begin{remark}
\label{rem:FiltrationModConvDGLA}
For a flat $L_\infty$-algebra $(L,Q)$ the DGLA
$\liealg{h}_{M,L}$ itself has again complete descending filtration
\begin{align}
\begin{split}
\liealg{h}_{M,L}
=
& \mathcal{F}^0\liealg{h}_{M,L}
\supset
\mathcal{F}^1\liealg{h}_{M,L}
\supset \cdots \supset
\mathcal{F}^k\liealg{h}_{M,L}
\supset \cdots \\
\mathcal{F}^k\liealg{h}_{M,L}
& =
\left\{ f \in \Hom(\Sym(L[1])\otimes M,M) \mid
f \at{\Sym^{<k}(L[1])\otimes M}=0\right\},
\end{split}
\end{align}
analogously to \eqref{eq:FiltrationConvLieAlg}. In the curved setting
we have to assume that $L$ and $M$ have complete descending filtrations
and that $Q_0(1)\in \mathcal{F}^1L$. This induces a filtration on
$\liealg{h}_{M,L}$ analogously to \eqref{eq:FiltrationConvLieAlg2}, taking
the filtrations on $L$ and $M$ into account.
\end{remark}
This allows us to give another equivalent definition of $L_\infty$-modules, now
in terms of $L_\infty$-morphisms:
\begin{lemma}
\label{lemma:LinfModLinftMorph}
Let $\phi$ be a coderivation of degree one on the
comodule $\Sym(L[1])\otimes M$ over $(\Sym(L[1]),Q)$ with
Taylor coefficients $\phi_n$, $n \geq 0$. Then $\phi$
is a codifferential, i.e. defines an $L_\infty$-module structure on $M$,
if and only if the maps $\Phi_k\colon \Sym^k(L[1])\to \End(M)$ defined by
\begin{align}\label{eq: Ten-Hom-adj}
\Phi_k(X_1\vee\dots\vee X_k)(m):=\phi_k(X_1\vee\dots\vee X_k\tensor m)
\end{align}
for $X_i \in L[1], m\in M$
are the Taylor coefficients of an $L_\infty$-morphism, where
the $L_\infty$-structure on $\End(M)$ is the one induced by the Lie bracket
from Example~\ref{ex:EndDGLA} and zero differential.
\end{lemma}
\begin{proof}
First we notice that if we consider the \emph{graded} tensor-hom adjunction, we get
\begin{align*}
\Hom(\Sym(L[1])\otimes M, M)\simeq \Hom (\Sym(L[1]), \End(M))
\end{align*}
and the isomorphism is given by equation \eqref{eq: Ten-Hom-adj}.
One can check that the induced $L_\infty$-algebra structure on $\Hom (\Sym(L[1]), \End(M))$ coincides up to a sign with the convolution algebra
structure of the two $L_\infty$-algebras.
\end{proof}
Next we want to recall the definition of morphisms between $L_\infty$-modules.
\begin{definition}[Morphism of $L_\infty$-modules]
Let $(L,Q)$ be an $L_\infty$-algebra with two $L_\infty$-modules
$(M,\phi^M)$ and $(N,\phi^N)$ over $L$. Then a morphism between $L_\infty$-modules
is a morphism $\kappa$ between the comodules $\Sym(L[1]) \otimes M$
and $\Sym(L[1]) \otimes N$, i.e.
\begin{equation}
(\id \otimes \kappa) \circ a^M
=
a^N \circ \kappa,
\end{equation}
such that $\kappa \circ \phi^M = \phi^N \circ \kappa$.
\end{definition}
One can again show that $\kappa$ is uniquely determined by its structure
maps
\begin{equation}
\kappa_n \colon
\Sym^n (L[1]) \otimes M
\longrightarrow N
\end{equation}
via
\begin{equation}
\kappa(\gamma_1 \vee \cdots \vee \gamma_n \otimes m)
=
\sum_{k=0}^n \sum_{\sigma\in Sh(k,n-k)} \epsilon(\sigma)
\gamma_{\sigma(1)} \vee \cdots \vee \gamma_{\sigma(k)} \otimes
\kappa_{n-k}(\gamma_{\sigma(k+1)} \vee \cdots \vee \gamma_{\sigma(n)}\otimes m).
\end{equation}
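For example, in lowest orders this gives
\begin{equation*}
\kappa(1 \otimes m)
=
1 \otimes \kappa_0(m)
\qquad \text{and} \qquad
\kappa(\gamma \otimes m)
=
1 \otimes \kappa_1(\gamma \otimes m)
+ \gamma \otimes \kappa_0(m),
\end{equation*}
in complete analogy to the formula for the codifferential $\phi$ above.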
The compatibility with the coderivations in the flat case implies in particular
$\kappa_0 \circ \phi^M_0 = \phi^N_0 \circ \kappa_0$, see
\cite[Formula~(2.29)]{dolgushev:2006a} for the general relation.
\begin{definition}
A \emph{quasi-isomorphism} $\kappa$ of $L_\infty$-modules over flat
$L_\infty$-algebras is a morphism
with the zeroth structure map $\kappa_0$ being a quasi-isomorphism of
complexes $(M,\phi^M_0)$ and $(N,\phi^N_0)$.
\end{definition}
Moreover, as for $L_\infty$-morphisms between $L_\infty$-algebras
in Proposition~\ref{prop:Linftyiso} we see that an
$L_\infty$-module morphism is an isomorphism if and only if
the zeroth structure map $\kappa_0 \colon M\to N$ is an isomorphism:
\begin{proposition}
\label{prop:LinftyModIso}
Let $\kappa \colon (M,\phi^M)\to(N,\phi^N)$ be a morphism of
$L_\infty$-modules over $(L,Q)$. Then $\kappa$ is an
$L_\infty$-module isomorphism if and only if $\kappa_0 \colon
M\to N$ is an isomorphism.
\end{proposition}
\begin{proof}
The proof is completely analogous to the $L_\infty$-algebra case in
Proposition~\ref{prop:Linftyiso}.
\end{proof}
As expected, morphisms of $L_\infty$-modules can be again interpreted
as Maurer-Cartan elements of a convolution DGLA, now even a commutative one,
i.e. morphisms of $L_\infty$-modules are just closed elements of degree one
in a cochain complex:
\begin{proposition}
\label{prop:LinftymodMorphasMC}
Let $(M,\phi^M)$ and $(N,\phi^N)$ be two $L_\infty$-modules over $(L,Q)$.
The vector space $\Hom(\Sym(L[1])\otimes M, N)[-1]$ can be equipped with
the structure of an abelian DGLA $(\Hom(\Sym(L[1])\otimes M, N)[-1], \del,0)$
with differential
\begin{equation}
\del X
=
\phi^{N,1} \circ(\id\otimes X)\circ (\Delta_\sh \otimes \id)
+(-1)^{\abs{X}} X \circ \phi^M
\end{equation}
and zero bracket. Maurer-Cartan elements, i.e. closed elements
$\kappa^1$ of degree one,
can be identified with morphisms of $L_\infty$-modules via
\begin{equation}
\label{eq:MorphofLinftyMod}
\kappa
=
(\id \otimes \kappa^1)\circ(\Delta_\sh\otimes \id) \colon
\Sym(L[1])\otimes M
\longrightarrow
\Sym(L[1])\otimes N.
\end{equation}
\end{proposition}
\begin{proof}
The fact that $\del^2=0$ follows since $\phi^M$ and $\phi^N$ are
codifferentials. Explicitly, we have
\begin{align*}
\del^2 X
& =
\phi^{N,1}\circ (\id \otimes \phi^{N,1} \circ(\id\otimes X)\circ (\Delta_\sh \otimes \id))\circ (\Delta_\sh \otimes \id )
+(-1)^{\abs{X}+\abs{X}+1}X \circ \phi^M\circ\phi^M \\
& \quad
+(-1)^{\abs{X}+1} \phi^{N,1} \circ (\id\otimes X)\circ(\Delta_\sh\otimes\id)\circ
\phi^M
\\
& \quad
+ (-1)^{\abs{X}} \phi^{N,1} \circ (\id \otimes X\circ\phi^M)\circ(\Delta_\sh\otimes
\id) \\
& =
\phi^{N,1}\circ (\id \otimes \phi^{N,1} \circ(\id\otimes X)\circ (\Delta_\sh \otimes \id))\circ (\Delta_\sh \otimes \id ) \\
& \quad
+(-1)^{\abs{X}+1} \phi^{N,1} \circ (\id\otimes X)\circ(\id \otimes \phi^M + Q \otimes\id\otimes\id)\circ(\Delta_\sh\otimes\id)
\\
& \quad
+ (-1)^{\abs{X}} \phi^{N,1} \circ (\id \otimes X\circ\phi^M)\circ(\Delta_\sh\otimes
\id) \\
& =
\phi^{N,1}\circ (\id \otimes \phi^{N,1} )\circ (\id \otimes \id
\otimes X) \circ (\Delta_\sh \otimes\id\otimes \id)\circ (\Delta_\sh\otimes\id)
+\phi^{N,1}\circ (Q\otimes X)\circ(\Delta_\sh\otimes\id) \\
& =
\phi^{N,1}\circ (\id \otimes \phi^{N,1} ) \circ (\Delta_\sh \otimes\id)\circ
(\id \otimes X)\circ(\Delta_\sh\otimes\id)
+\phi^{N,1}\circ (Q\otimes X)\circ(\Delta_\sh\otimes\id)
=
0.
\end{align*}
Moreover, \eqref{eq:MorphofLinftyMod} yields a comodule morphism since
\begin{align*}
(\id \otimes \kappa) \circ a
& =
(\id \otimes( (\id \otimes \kappa^1)\circ(\Delta_\sh \otimes \id)))\circ (\Delta_\sh \otimes\id)
=
(\id \otimes \id \otimes \kappa^1)\circ(\id \otimes \Delta_\sh \otimes \id)
\circ (\Delta_\sh \otimes \id) \\
& =
(\Delta_\sh \otimes \id) \circ (\id \otimes \kappa^1)\circ(\Delta_\sh \otimes \id)
=
a \circ \kappa.
\end{align*}
If $\kappa^1$ is of degree one in the shifted complex, then the induced comodule
morphism is of degree zero and $\del \kappa^1=0$ is equivalent to
$\phi^N\circ \kappa=\kappa\circ \phi^M$.
\end{proof}
\subsection{Homotopy Equivalence of Morphisms of $L_\infty$-Modules}
This allows us to define a homotopy equivalence relation between morphisms of
$L_\infty$-modules. Analogously to the case of $L_\infty$-algebras, it is
defined via the gauge equivalence in the convolution DGLA.
Since the convolution DGLA has only the
differential as non-vanishing structure map, there is no filtration needed for the
well-definedness of the gauge action.
\begin{definition}
Let $(M,\phi^M)$ and $(N,\phi^N)$ be two $L_\infty$-modules over $(L,Q)$.
Two morphisms $\kappa_1,\kappa_2$ of $L_\infty$-modules are called
\emph{homotopic} if they are gauge equivalent Maurer-Cartan elements in
$(\Hom(\Sym(L[1])\otimes M, N)[-1], \del,0)$, i.e. if
\begin{equation}
\kappa_2
=
\kappa_1- \del h
\end{equation}
for some $h$ of degree zero. In other words, $[\kappa_1]=[\kappa_2]$ in
$\mathrm{H}^1(\Hom(\Sym(L[1])\otimes M,N)[-1],\del)$.
\end{definition}
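In the flat case, the lowest component of this relation recovers the usual
notion of chain homotopy: when one evaluates $\del h$ on $1 \otimes m$,
only the terms $\phi^N_0(h_0(m))$ and $h_0(\phi^M_0(m))$ survive, so
homotopic morphisms satisfy
\begin{equation*}
(\kappa_2)_0
=
(\kappa_1)_0 - \left( \phi^N_0 \circ h_0 + h_0 \circ \phi^M_0 \right),
\end{equation*}
i.e. their zeroth structure maps are chain homotopic. In particular,
homotopic morphisms of $L_\infty$-modules induce the same map in
$\phi_0$-cohomology.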
The twisting procedures from Section~\ref{subsec:Twisting} can be
transferred to $L_\infty$-modules, see \cite[Proposition~3]{dolgushev:2006a} for the
flat case and \cite{esposito.dekleijn:2021a} for the curved setting.
\begin{proposition}
\label{prop:twistlinftymodules}
Let $(L,Q)$ be an $L_\infty$-algebra with $L_\infty$-module $(M,\phi)$
and let $\pi \in \Mc^1(L)$.
\begin{propositionlist}
\item For any $X \in \Sym(L[1]) \otimes M$
\begin{equation}
a(\exp(\pi \vee)X)
=
(\exp(\pi\vee) \otimes \exp(\pi\vee))(aX).
\end{equation}
\item The map
\begin{equation}
\phi^\pi = \exp(-\pi \vee)\phi \exp(\pi \vee)
\end{equation}
is a coderivation of $\Sym(L[1])\otimes M$ along $Q^\pi$ squaring to zero.
\item If $\kappa\colon M \rightarrow N$ is a morphism of
$L_\infty$-modules over $L$, then
\begin{equation}
\kappa ^\pi
=
\exp(-\pi \vee) \kappa \exp(\pi \vee)
\end{equation}
is a morphism of the twisted modules over the twisted
$L_\infty$-algebra.
\end{propositionlist}
\end{proposition}
\begin{proof}
The first claim is clear. The second and third claims follow directly.
\end{proof}
The twisted structure maps are as expected given by
\begin{equation}
\label{eq:twistenlmoduecoderivation}
\phi^\pi_n(\gamma_1\vee \cdots \vee \gamma_n\otimes m)
=
\sum_{k=0}^\infty\frac{1}{k!}
\phi_{n+k}(\pi\vee \cdots\vee \pi \vee \gamma_1 \vee
\cdots \vee \gamma_n \otimes m)
\end{equation}
and
\begin{equation}
\label{eq:twistedlinftymodule}
\kappa^\pi_n(\gamma_1\vee \cdots \vee \gamma_n\otimes m)
=
\sum_{k=0}^\infty \frac{1}{k!}
\kappa_{n+k}(\pi\vee \cdots\vee \pi \vee \gamma_1 \vee
\cdots \vee \gamma_n \otimes m).
\end{equation}
We want to investigate the relation between equivalent Maurer-Cartan elements,
i.e. equivalent $L_\infty$-module structures. To this end, assume that we
have a complete descending filtration $\mathcal{F}^\bullet$ on the convolution DGLA $\liealg{h}_{M,L}=\Hom(\Sym(L[1])\otimes M, M)$, compare
Remark~\ref{rem:FiltrationModConvDGLA}.
\begin{proposition}
\label{prop:EquModulStructuresIsomorphic}
Let $\phi_0,\phi_1 \in \mathcal{F}^1\liealg{h}_{M,L}^1$ be two equivalent
Maurer-Cartan elements with equivalence described by
\begin{equation}
\phi_1
=
e^{[h,\argument]}\phi_0 - \frac{e^{[h,\argument]}-\id}{[h,\argument]}\del h,
\quad \quad
h \in \mathcal{F}^1\liealg{h}_{M,L}^0.
\end{equation}
Then $A_h = (\id \otimes e^h)\circ(\Delta_\sh \otimes \id)$ is an $L_\infty$-isomorphism
between the $L_\infty$-modules $(\Sym(L[1])\otimes M, \widehat{\phi_0})$
and $(\Sym(L[1])\otimes M, \widehat{\phi_1})$.
\end{proposition}
\begin{proof}
$A_h$ is a comodule morphism by
Proposition~\ref{prop:LinftymodMorphasMC}.
Concerning the compatibility with the codifferentials we compute
\begin{align*}
A_h \widehat{\phi_0}
& =
(\id \otimes e^h)\circ(\Delta_\sh \otimes \id)\circ (Q\otimes \id)
+ (\id \otimes e^h)\circ(\Delta_\sh \otimes \id)\circ(\id \otimes\phi_0)\circ
(\Delta_\sh \otimes \id) \\
& =
(Q\otimes \id) \circ A_h +
(\id \otimes (-\del e^h + e^h\bullet \phi_0))\circ (\Delta_\sh \otimes \id) \\
& =
(Q\otimes \id) \circ A_h +
\left(\id \otimes \left(- \frac{e^{[h,\argument]}-\id}{[h,\argument]}
\del h \bullet e^h
+ e^{[h,\argument]}\phi_0 \bullet e^h \right)\right) \circ(\Delta_\sh \otimes \id) \\
& =
(Q\otimes \id) \circ A_h + \left(\id \otimes (\phi_1 \bullet e^h)\right)\circ
(\Delta_\sh \otimes \id)
=
\widehat{\phi_1} A_h
\end{align*}
and the statement is shown.
\end{proof}
Note that an $L_\infty$-morphism $F\colon (L,Q)\rightarrow (L',Q')$ induces
a map
\begin{equation}
\label{eq:PullBackLinftyMod}
F^* \colon
\liealg{h}_{M,L'} \ni
\phi
\longmapsto
F^*\phi
=
\phi \circ (F \otimes \id) \in
\liealg{h}_{M,L}
\end{equation}
via the pull-back. It is compatible with the DGLA structure:
\begin{lemma}
\label{lem:PullbackLinftyMod}
The map $F^* \colon \liealg{h}_{M,L'} \rightarrow \liealg{h}_{M,L}$ is a
DGLA morphism.
\end{lemma}
\begin{proof}
Concerning the differential we get
\begin{align*}
F^*\del' \phi
=
-(-1)^{\abs{\phi}} \phi \circ (Q'\otimes \id)\circ (F\otimes \id)
=
-(-1)^{\abs{\phi}} \phi \circ (F\otimes \id)\circ (Q\otimes \id)
=
\del F^*\phi.
\end{align*}
Similarly, we get for the product $\bullet$
\begin{align*}
F^*(\phi \bullet \psi)
& =
F^*(\phi \circ (\id \otimes \psi)\circ (\Delta_\sh \otimes \id))
=
\phi \circ (\id \otimes \psi)\circ (F\otimes F \otimes \id) \circ
(\Delta_\sh \otimes \id)\\
& = F^*\phi \bullet F^*\psi
\end{align*}
and the statement is shown.
\end{proof}
\begin{remark}
Using Lemma~\ref{lemma:LinfModLinftMorph} we can identify
the DGLA $\liealg{h}_{M,L}= \Hom(\Sym(L[1])\otimes M, M)$ with
$\Hom (\Sym(L[1]), \End(M)) $ equipped with the convolution
$L_\infty$-structure. Under this identification the above
pull-back is indeed just the usual pull-back. All the constructions below
work in both points of view; however, we formulate them in terms of
$\liealg{h}_{M,L}$.
\end{remark}
We want to investigate the relation between pull-backs with homotopic
$L_\infty$-morphisms, where
we assume for simplicity that our $L_\infty$-algebras are flat.
Let therefore $F(t)$ and $\lambda^1(t)$ encode the homotopy equivalence between
two $L_\infty$-morphisms from $L$ to $L'$ in the convolution $L_\infty$-algebra
$(\Hom(\cc{\Sym}(L[1]), L'),\widehat{Q})$ with
\begin{equation*}
\frac{\D}{\D t} F^1(t)
=
\widehat{Q}(\lambda^1(t) \vee \exp(F^1(t))),
\end{equation*}
compare Definition~\ref{def:homotopicMorph}.
This can be rewritten in the following way:
\begin{proposition}
\label{prop:CoalgHomotopyGamma}
The map $\Gamma = \lambda^1(t) \star F(t)=
\vee \circ (\lambda^1(t) \otimes F(t)) \circ
\Delta_\sh$ satisfies
\begin{equation}
\frac{\D}{\D t} F(t)
=
Q' \circ \Gamma + \Gamma \circ Q
\end{equation}
and
\begin{equation}
\label{eq:GammaCoderivation}
\Delta_\sh \circ \Gamma
=
(\Gamma \otimes F + F \otimes \Gamma) \circ \Delta_\sh.
\end{equation}
\end{proposition}
\begin{proof}
By Theorem~\ref{thm:CofreeCocomConilpotentCoalg} we know that $\Gamma$ is a coderivation
along $F$ and \eqref{eq:GammaCoderivation} is clear.
Moreover, both $\frac{\D}{\D t}F$ and
$(Q' \circ \Gamma + \Gamma \circ Q)$ are coderivations along $F$ and thus it
suffices to show
\begin{equation*}
\frac{\D }{\D t} F^1(t)
=
Q'^1 \circ \Gamma + \lambda^1 \circ Q.
\end{equation*}
But we know
\begin{align*}
\frac{\D}{\D t} F^1(t)
& =
\widehat{Q}^1(\lambda^1(t) \vee \exp(F^1(t)))
=
\sum_{i=1}^\infty \frac{1}{(i-1)!} Q'^1_i \circ (\lambda^1(t) \star
F^1(t)\star\cdots \star F^1(t))
+ \lambda^1(t) \circ Q \\
& =
Q'^1 \circ \Gamma + \lambda^1 \circ Q
\end{align*}
and the proposition is shown.
\end{proof}
This allows us to show that homotopic $L_\infty$-morphisms give homotopic pull-back morphisms.
\begin{theorem}
Let $F_0$ and $F_1$ be two homotopic $L_\infty$-morphisms
from the flat $L_\infty$-algebra $(L,Q)$ to $(L',Q')$ and assume that
$\liealg{h}_{M,L'}$ and $\liealg{h}_{M,L}$ are equipped with complete filtrations
as described in Remark~\ref{rem:FiltrationModConvDGLA}.
Then the induced DGLA morphisms
\begin{equation}
F_0^*, F_1^* \colon
\liealg{h}_{M,L'}
\longrightarrow
\liealg{h}_{M,L}
\end{equation}
are homotopic.
\end{theorem}
\begin{proof}
Let $F(t)$ and $\lambda^1(t)$ encode the homotopy equivalence between
$F_0$ and $F_1$. We show that
\begin{equation*}
\frac{\D}{\D t} F_t^*
=
\widehat{Q}^1(\Gamma_t^* \vee \exp F_t^*).
\end{equation*}
Here $\Gamma_t^* \phi = (-1)^{\abs{\phi}} \phi \circ (\Gamma(t) \otimes \id)$,
$F_t^*\phi = \phi \circ (F(t)\otimes\id)$, where $\abs{\phi}$ denotes now
the degree in $\liealg{h}_{M,L'}[1]$. Moreover,
$\widehat{Q}$ is the codifferential on the convolution DGLA
$\Sym(\Hom(\cc{\Sym}(\liealg{h}_{M,L'}[1]),\liealg{h}_{M,L})[1])$. It is given
by
\begin{equation*}
\widehat{Q}_1^1 X
=
- \del X - (-1)^{\abs{X}} X \circ Q_{\liealg{h}_{M,L'}}
\end{equation*}
and the bracket is induced by the bracket on $\liealg{h}_{M,L}$.
Using Proposition~\ref{prop:CoalgHomotopyGamma}, we have
for $\phi \in \liealg{h}_{M,L'}$
\begin{align*}
\frac{\D}{\D t} F_t^*\phi
=
\frac{\D}{\D t} \phi \circ (F(t) \otimes \id)
=
\phi \circ
((Q' \circ \Gamma + \Gamma \circ Q) \otimes\id).
\end{align*}
Moreover, we get
\begin{align*}
(\widehat{Q}^1_1\Gamma_t^*)^1_1 \phi
=
- \del \Gamma_t^* \phi + \Gamma_t^*\circ (-\del')\phi
=
\phi \circ ((\Gamma \circ Q + Q' \circ\Gamma )\otimes \id) .
\end{align*}
The higher orders of $\widehat{Q}^1(\Gamma_t^* \vee \exp F_t^*)$ are given by
\begin{align*}
\Gamma_t^* \circ Q_{\liealg{h}_{M,L'},2}^1
+ Q_{\liealg{h}_{M,L},2}^1 \circ (\Gamma_t^* \vee F_t^*) \circ \Delta_\sh.
\end{align*}
We can compute
\begin{align*}
\Gamma_t^*(\phi \bullet \psi)
& =
(-1)^{\abs{\phi} +\abs{\psi}+1} \phi \circ (\id \otimes \psi)\circ
(\Delta_\sh \otimes \id)\circ (\Gamma \otimes \id) \\
& =
(-1)^{\abs{\phi} +\abs{\psi}+1} \phi \circ (\id \otimes \psi)\circ
((\Gamma \otimes F + F \otimes \Gamma) \otimes \id)\circ(\Delta_\sh \otimes \id) \\
& =
\Gamma_t^*\phi \bullet F_t^*\psi
- (-1)^{\abs{\phi}} F_t^*\phi \bullet \Gamma_t^* \psi
\end{align*}
which yields
\begin{align*}
\Gamma_t^* \circ Q_{\liealg{h}_{M,L'},2}^1 (\phi \vee \psi)
& =
-(-1)^{\abs{\phi}}
\Gamma_t^*(\phi \bullet \psi - (-1)^{(\abs{\phi}-1)(\abs{\psi}-1)}
\psi \bullet \phi) \\
& =
-(-1)^{\abs{\phi}} \left(\Gamma_t^*\phi \bullet F_t^*\psi
- (-1)^{\abs{\phi}} F_t^*\phi \bullet \Gamma_t^* \psi\right) \\
& \quad
- (-1)^{(\abs{\phi} -1)\abs{\psi}}\left(
\Gamma_t^*\psi \bullet F_t^*\phi
- (-1)^{\abs{\psi}} F_t^*\psi \bullet \Gamma_t^* \phi
\right) \\
& =
-Q_{\liealg{h}_{M,L},2}^1 (\Gamma_t^*\phi \vee F_t^*\psi)
-(-1)^{\abs{\phi}\abs{\psi}}Q_{\liealg{h}_{M,L},2}^1
(\Gamma_t^*\psi \vee F_t^*\phi) \\
& =
- Q_{\liealg{h}_{M,L},2}^1 \circ (\Gamma_t^* \vee F_t^*) \circ \Delta_\sh
(\phi \vee \psi).
\end{align*}
Thus the higher terms cancel each other. We only have to check that
$F_t^*$ and $\Gamma_t^*$ are in the completion of the polynomials in $t$
with respect to the filtration. But this is clear since $F(t)$ and
$\Gamma(t)$ have these properties.
\end{proof}
With Proposition~\ref{prop:EquModulStructuresIsomorphic} this immediately
yields:
\begin{corollary}
The pull-backs of an $L_\infty$-module structure
$\phi \in \mathcal{F}^1\mathfrak{h}^1_{M,L'}$ on $M$ over $L'$
via two homotopic $L_\infty$-morphisms $F_0,F_1\colon (L,Q)
\rightarrow (L',Q')$ yield two isomorphic $L_\infty$-module
structures on $M$ over $L$.
\end{corollary}
\begin{remark}
This statement is similar to the following theorem from algebraic topology,
see e.g. \cite[Theorem~2.1]{cohen:script}: Let $p \colon E \rightarrow B$ be a fiber
bundle and let $f_0, f_1 \colon X \rightarrow B$ be two homotopic maps. Then the pull-back bundles
are isomorphic. It would be interesting to see if there are other statements that can be transferred
to our algebraic setting.
\end{remark}
Analogously to our computations in Section~\ref{sec:HomEquivTwistedMorphisms}
for the case of flat $L_\infty$-algebras and
Section~\ref{sec:HomTheoryofCurvedLinfty} for curved
$L_\infty$-algebras, we want to show now that $L_\infty$-module
morphisms that are twisted with equivalent Maurer-Cartan elements are
homotopic.
As a first step, one can check that one has a version of
Proposition~\ref{prop:TwistMorphHomEqu} for DGLAs $(\liealg{g},\D,[\argument{,}\argument])$
and DGLA-modules $(M,b,\rho)$ and $(N,b',\rho')$ with complete filtrations. Let $\pi \in \mathcal{F}^1\liealg{g}^1$ be equivalent to zero via $\pi = \exp([A(1),\argument]) \acts 0$ with
$A(t)\in \liealg{g}^0$. Then
\begin{equation}
\label{eq:DGLAModTwistEquMC}
\kappa(t)
=
e^{-\rho'(A(t))} \circ
\kappa^{\pi(t),1}\circ\left(e^{[A(t),\argument]}\otimes e^{\rho(A(t))} \right)
\end{equation}
is a path between the $L_\infty$-module morphism $\kappa$ and the twisted
morphism $e^{-\rho'(A(1))} \circ \kappa^{\pi(1),1}\circ\left(e^{[A(1),\argument]}\otimes
e^{\rho(A(1))} \right)$. Moreover, it satisfies
\begin{equation*}
\frac{\D}{\D t} \kappa(t)
=
\del \left(
e^{-\rho'(A(t))} \circ
\kappa^{\pi(t),1}(\lambda(t) \vee \argument)
\circ\left(e^{[A(t),\argument]}\otimes e^{\rho(A(t))} \right)\right).
\end{equation*}
This can be generalized to arbitrary $L_\infty$-modules over
$L_\infty$-algebras. Let $(L,Q)$ be a (flat) $L_\infty$-algebra
and $\pi \in \mathcal{F}^1L^1$ be a Maurer-Cartan element equivalent
to zero via $\pi(t),\lambda(t)$. Let us assume for simplicity that
$\lambda(t)=\lambda$ is constant, which is possible by
Remark~\ref{rem:QuillenandGaugeEquivalence}.
Let moreover $\kappa \colon (M,\phi)\to (N,\phi')$ be an $L_\infty$-module
morphism of $L_\infty$-modules over $(L,Q)$. Then we know from
Proposition~\ref{prop:twistlinftymodules} that $\kappa^\pi \colon
(M,\phi^\pi)\to (N,\phi'^\pi)$ is an $L_\infty$-module morphism of
$L_\infty$-modules over $(L,Q^\pi)$.
By Proposition~\ref{lemma:PhitLinftyMorph} we have an $L_\infty$-isomorphism
$\Phi_t \colon (L,Q) \to (L,Q^{\pi(t)})$ satisfying
\begin{equation*}
\frac{\D}{\D t} \Phi_t
=
\left(
[Q^{\pi(t)}, \lambda\vee \argument] -
Q^{\pi(t)}(\lambda)\vee \argument
\right) \circ \Phi_t.
\end{equation*}
By Lemma~\ref{lem:PullbackLinftyMod} we can use $\Phi_t$ to pull-back
the $L_\infty$-module structures $\phi^{\pi(t)}$ and $\phi'^{\pi(t)}$
on $M$ resp. $N$. Analogously, one can pull-back $\kappa^{\pi(t)}$ and
one obtains an $L_\infty$-module morphism
\begin{equation*}
\Phi_t^* \kappa^{\pi(t)}
\colon
(M,\Phi_t^* \phi^{\pi(t)})
\longrightarrow
(N, \Phi_t^*\phi'^{\pi(t)})
\end{equation*}
over $(L,Q)$, where $(\Phi_t^*\kappa^{\pi(t)})^1 =
(\kappa^{\pi(t)})^1 \circ (\Phi_t \otimes \id)$, analogously to
the case \eqref{eq:PullBackLinftyMod} for the codifferential.
\begin{proposition}
In the above setting there exists an $L_\infty$-module isomorphism
$\Psi_{M,t} \colon (M,\phi) \to (M,\Phi_t^* \phi^{\pi(t)})$ over
$(L,Q)$ with structure maps
$(\Psi_t)_i^1 \colon \Sym^i(L[1])\otimes M \to M$ for $i\geq 0$
defined by
\begin{equation}
\label{eq:PsitTwistedMods}
\frac{\D}{\D t} (\Psi_t)^1_i
=
\left((\Phi_t^*\phi^{\pi(t)}) \circ (\lambda \vee \argument)
\right)^1
\circ (\Psi_t)_i,
\quad
\quad
(\Psi_0)^1_i = \pr_M.
\end{equation}
\end{proposition}
\begin{proof}
The well-definedness of $\Psi_t$ follows analogously to the
well-definedness of $\Phi_t$ in Lemma~\ref{lem:MorphPhitTwistedLinftyAlgs}.
It remains to show that it is indeed an $L_\infty$-module morphism.
To this end, we compute for $X \in \Sym^i(L[1])$, $m\in M$
\begin{align*}
\frac{\D}{\D t} & (\Phi_t^*\phi^{\pi(t)})^1 \circ
\Psi_t (X\otimes m)
=
\frac{\D}{\D t} \phi^1 \circ (\exp(\pi(t))\vee \Phi_t \otimes \id_M)
\circ (X^{(1)} \otimes \Psi_t^1 (X^{(2)} \otimes m)) \\
& =
(\phi^{\pi(t)})^1 \circ ( Q^{\pi(t)}(\lambda)\vee
\Phi_t \otimes \id_M)
\circ (X^{(1)} \otimes \Psi_t^1 (X^{(2)} \otimes m)) \\
& \quad
+(\phi^{\pi(t)})^1 \circ ( \left(
[Q^{\pi(t)}, \lambda\vee \argument] -
Q^{\pi(t)}(\lambda)\vee \argument
\right) \circ \Phi_t \otimes \id_M) \circ (X^{(1)}
\otimes \Psi_t^1 (X^{(2)} \otimes m)) \\
& \quad
+
(\phi^{\pi(t)})^1 \circ ( \Phi_t \otimes \id_M)
\circ \left(X^{(1)} \otimes\left((\Phi_t^*\phi^{\pi(t)}) \circ
(\lambda\vee \argument) \right)^1
\circ ( X^{(2)} \otimes \Psi_t^1 (X^{(3)} \otimes m))\right) \\
& =
(\phi^{\pi(t)})^1 \circ \left( \left(
[Q^{\pi(t)}, \lambda\vee \argument] \Phi_tX^{(1)}\right)
\otimes \Psi_t^1 (X^{(2)} \otimes m)\right) \\
& \quad
+
(\phi^{\pi(t)})^1 \circ
\left(\Phi_t X^{(1)} \otimes\left((\phi^{\pi(t)})^1 \circ
\Phi_t(\lambda \vee X^{(2)} )\otimes \Psi_t^1 (X^{(3)} \otimes m)
\right) \right).
\end{align*}
Using $(\Phi_t^*\phi^{\pi(t)})^2 (\lambda \vee X^{(1)} \otimes
\Psi_t^1(X^{(2)}\otimes m))=0$ and the fact that
$\Phi_t \circ(\lambda \vee\argument) = (\lambda \vee \argument)\circ
\Phi_t$, we see that this coincides with
\begin{align*}
\frac{\D}{\D t} (\Phi_t^*\phi^{\pi(t)})^1 \circ
\Psi_t (X\otimes m)
& =
(\phi^{\pi(t)})^1 \circ ( \Phi_t \otimes \id )
\circ (\lambda \vee \argument) \circ
\big( Q(X^{(1)}) \otimes \Psi_t^1(X^{(2)}\otimes m) \\
& \quad
+ X^{(1)} \otimes (\phi^{\pi(t)})^1 (\Phi_t X^{(2)} \otimes
\Psi_t^1(X^{(3)}\otimes m))\big) \\
& =
\left((\Phi_t^*\phi^{\pi(t)}) \circ (\lambda \vee \argument)
\right)^1
\circ (\Phi_t^*\phi^{\pi(t)}) \circ
\Psi_t (X\otimes m) .
\end{align*}
Therefore, $(\Phi_t^*\phi^{\pi(t)})^1 \circ
\Psi_t (X\otimes m)$ coincides with $\Psi^1_t \circ \phi (X \otimes m)$
since
\begin{align*}
\frac{\D}{\D t} \Psi^1_t \circ \phi (X \otimes m)
=
\left((\Phi_t^*\phi^{\pi(t)}) \circ (\lambda \vee \argument)
\right)^1 \circ (\Psi_t \circ \phi (X\otimes m)),
\end{align*}
i.e. both expressions satisfy the same differential equation, and
both coincide at $t=0$.
The fact that $\Psi_t$ is even an $L_\infty$-module isomorphism follows
since $(\Psi_t)_0\colon M \to M$ is an isomorphism, compare
Proposition~\ref{prop:LinftyModIso}.
\end{proof}
This allows us to generalize the case of DGLA modules from
\eqref{eq:DGLAModTwistEquMC} and we can show:
\begin{proposition}
The $L_\infty$-module morphisms $\kappa$ and
$\Psi_{N,1}^{-1} \circ (\Phi^*_1\kappa^{\pi}) \circ \Psi_{M,1}$
from $(M,\phi)$ to $(N,\phi')$ over $(L,Q)$ are homotopic.
\end{proposition}
\begin{proof}
We set $\kappa(t)= \Psi_{N,t}^{-1}\circ
(\Phi_t^*\kappa^{\pi(t)})\circ \Psi_{M,t}$ and compute
\begin{align*}
\frac{\D}{\D t}& \Psi_{M,t}
=
\left(\id \otimes (\Phi^*_t\phi^{\pi(t)})^1 \right)\circ
( \id \otimes (\lambda \vee \argument)\otimes\id) \circ
(\id \otimes \Psi_{M,t}) \circ
(\Delta_\sh\otimes \id) \\
& =
\left(\id \otimes (\Phi^*_t\phi^{\pi(t)})^1 \right)\circ
(\Delta_\sh\otimes \id) \circ \lambda \vee \Psi_{M,t}
+ \left((\lambda\vee \argument) \otimes (\Phi^*_t\phi^{\pi(t)})^1 \right)
\circ (\Delta_\sh\otimes \id)\circ \Psi_{M,t} \\
& =
(\Phi^*_t\phi^{\pi(t)}) \circ \lambda \vee \Psi_{M,t}
- Q \circ \lambda \vee \Psi_{M,t}
+ (\lambda\vee \argument) \circ (\Phi^*_t\phi^{\pi(t)})
\circ \Psi_{M,t}
- \lambda \vee Q \circ \Psi_{M,t} \\
& =
(\Phi^*_t\phi^{\pi(t)}) \circ \lambda \vee \Psi_{M,t}
+ (\lambda\vee \argument) \circ (\Phi^*_t\phi^{\pi(t)})
\circ \Psi_{M,t}
- [Q,\lambda \vee \argument] \circ \Psi_{M,t}
\end{align*}
and similarly
\begin{align*}
\frac{\D}{\D t} \Psi_{N,t}^{-1}
& =
- \Psi_{N,t}^{-1} \circ( \Phi^*_t\phi'^{\pi(t)})
\circ (\lambda \vee \argument)
- \Psi_{N,t}^{-1} \circ (\lambda\vee \argument) \circ
(\Phi^*_t\phi'^{\pi(t)})
+ \Psi_{N,t}^{-1} \circ [Q,\lambda \vee \argument] .
\end{align*}
In addition, we have
\begin{align*}
\frac{\D}{\D t}& \Phi_t^*\kappa^{\pi(t)}
=
\frac{\D}{\D t} (\id \otimes \kappa^1)\circ
(\id \otimes \exp(\pi(t))\vee \Phi_t \otimes \id)\circ
(\Delta_\sh\otimes\id) \\
& =
(\id \otimes (\kappa^{\pi(t)})^1)\circ
(\id \otimes Q^{\pi(t)}(\lambda) \vee \Phi_t \otimes \id)\circ
(\Delta_\sh\otimes\id) \\
& \quad
+
(\id \otimes (\kappa^{\pi(t)})^1)\circ
\left(\id \otimes \left(
[Q^{\pi(t)}, \lambda\vee \argument] -
Q^{\pi(t)}(\lambda)\vee \argument
\right) \circ \Phi_t \otimes \id\right)\circ
(\Delta_\sh\otimes\id)\\
& =
(\id \otimes (\kappa^{\pi(t)})^1)\circ
\left(\id \otimes [Q^{\pi(t)}, \lambda\vee \argument] \circ
\Phi_t \otimes \id\right)\circ (\Delta_\sh\otimes\id).
\end{align*}
With $\Phi_t \circ (\lambda \vee \argument) = (\lambda\vee \argument)\circ
\Phi_t$ and $Q^{\pi(t)}\circ \Phi_t = \Phi_t \circ Q$, this gives
\begin{align*}
\frac{\D}{\D t}& \Phi_t^*\kappa^{\pi(t)}
=
(\id \otimes (\kappa^{\pi(t)})^1)\circ
\left(\id \otimes \Phi_t \circ [Q, \lambda\vee \argument]
\otimes \id\right)\circ (\Delta_\sh\otimes\id) \\
& =
(\Phi_t^*\kappa^{\pi(t)}) \circ [Q,\lambda\vee \argument]
-
[Q,\lambda\vee \argument] \circ (\Phi_t^*\kappa^{\pi(t)}).
\end{align*}
Summarizing, we have shown
\begin{align*}
\frac{\D}{\D t} \kappa(t)
& =
\phi'\circ \left( - \Psi_{N,t}^{-1}\circ (\lambda\vee\argument)\circ
(\Phi_t^*\kappa^{\pi(t)})\circ \Psi_{M,t}
+
\Psi_{N,t}^{-1}\circ
(\Phi_t^*\kappa^{\pi(t)})\circ(\lambda\vee\argument) \circ \Psi_{M,t}
\right) \\
& \quad
+
\left( - \Psi_{N,t}^{-1}\circ (\lambda\vee\argument)\circ
(\Phi_t^*\kappa^{\pi(t)})\circ \Psi_{M,t}
+
\Psi_{N,t}^{-1}\circ
(\Phi_t^*\kappa^{\pi(t)})\circ(\lambda\vee\argument) \circ \Psi_{M,t}
\right) \circ \phi
\end{align*}
which implies
\begin{align*}
\frac{\D}{\D t} \kappa^1(t)
& =
\del \left( - (\Psi_{N,t}^{-1})^1\circ (\lambda\vee\argument)\circ
(\Phi_t^*\kappa^{\pi(t)})\circ \Psi_{M,t}
+
(\Psi_{N,t}^{-1})^1\circ
(\Phi_t^*\kappa^{\pi(t)})\circ(\lambda\vee\argument) \circ \Psi_{M,t}
\right)
\end{align*}
as desired.
\end{proof}
\begin{remark}[Application to Deformation Quantization II]
These results imply that Dolgushev's globalizations
\cite{dolgushev:2006a} of Shoikhet's formality for Hochschild
chains \cite{shoikhet:2003a} with respect to different covariant
derivatives are homotopic, completely analogously to the
cochain case in \cite{kraft.schnitzer:2021a:pre}.
\end{remark}
\bibliographystyle{nchairx}
\section{Introduction}
The hypernucleus is a self-bound quantum many-body system with one or more hyperons immersed in an ordinary nucleus composed of neutrons and protons. In a single-$\Lambda$ hypernucleus, the $\Lambda$ hyperon can occupy any energy-allowed state as an impurity and thus provides a unique probe of the properties of the atomic nucleus. Meanwhile, due to the short lifetime ($\sim10^{-10}$ s) of the $\Lambda$ hyperon, it is difficult to perform $\Lambda$-nucleon ($\Lambda N$) scattering experiments~\cite{Hashimoto:2006,Gal:2016}. Thus, the $\Lambda N$ interaction is still poorly constrained, even though there is a proposal to collect more data at $e^+ e^-$ colliders~\cite{Dai:2022}, and remarkable progress has been achieved in studies with lattice QCD~\cite{Beane:2007} and chiral effective field theory~\cite{Beane:2005,Haidenbauer:2013,Li:2016,Li:2018CPC,Song:2018,Song:2022}. Nevertheless, since the lifetime of the $\Lambda$ hyperon is much longer than both the typical timescale of the strong interaction ($<10^{-22}$ s) and the typical half-lives ($\sim10^{-12}$ s) of nuclear excited states decaying via the electromagnetic interaction, one can produce $\Lambda$-hypernuclei in different energy states and measure their properties experimentally. Therefore, the structure of $\Lambda$-hypernuclei provides an important way to extract information on the $\Lambda N$ interaction in hypernuclear matter, knowledge of which is relevant to solving the so-called {\em hyperon puzzle} in neutron stars~\cite{Lonardoni:2015PRL,Gandolfi:2015,Hagino:2016,Tolos:2020,Rong:2021,Tu:2022,Sun:2022}, namely, the difficulty of reproducing the measured maximal masses of neutron stars in the presence of hyperons.
Hypernuclei have been and will be studied more extensively with induced reactions of meson and electron beams at the Japan Proton Accelerator Research Complex (J-PARC)~\cite{Koike:2019}, the Thomas Jefferson National Accelerator Facility (JLab), the Mainz Microtron (MAMI), the Research Center for Electron Photon Science (ELPH), the Facility for Antiproton and Ion Research (FAIR), and the High Intensity Heavy-ion Accelerator Facility (HIAF)~\cite{Saito:2021,Zhou:2020AAPPS}. With the development of high-resolution germanium detector arrays for hypernuclear $\gamma$-ray spectroscopy, low-lying states in several $p$-shell~\cite{Tamura:2000,Hashimoto:2006} and $sd$-shell~\cite{Yang:2018} hypernuclei have already been measured precisely. The next-generation facility J-PARC is now in operation, opening up a new opportunity to perform high-precision hypernuclear $\gamma$-ray spectroscopy studies, especially of electric and magnetic dipole transition strengths~\cite{Aoki:2021}. The hypernuclear spectroscopic data that have been accumulated, or are to be measured, contain rich information on hyperon-nucleon interactions in the nuclear medium and on the impurity effect of the hyperon on the structure of atomic nuclei. To extract this information from data, various hypernuclear models have been developed, including the valence-space shell model~\cite{Gal:1971,Gal:1972,Gal:1978,Millener:1985}, cluster~\cite{Motoba:1983} and few-body models~\cite{Hiyama:2009PPNP}, the ab initio no-core shell model~\cite{Wirth:2016,Wirth:2017,Le:2020}, and the Monte Carlo technique of impurity lattice effective field theory~\cite{Frame:2020}.
In addition to the hypernuclear models mentioned above, self-consistent mean-field approaches~\cite{Bender:2003RMP} and nuclear energy density functional (EDF) theories~\cite{Meng:2006PPNP} have been extensively applied to study the bulk properties of hypernuclei~\cite{Rayet:1976,Mares:1993,LuHF:2002,Zhou:2007,Win:2008,Lu:2011,Tanimura:2012PRC,Schulze:2014,Xue:2015,Sun:2017,Liu:2018,Chen:2021SC,Rong:2021,Ding:2022PRC,Xue:2022}. In these studies, $\Lambda$ hyperon binding energies at different orbits have been well reproduced in hypernuclei from light to heavy mass regions, even though most of these hypernuclei are near closed shells and thus preserve spherical symmetry. This implies that the $\Lambda$ hyperon in these hypernuclei has a well-defined shell structure. In recent years, beyond-mean-field approaches have been developed for hypernuclear spectroscopy based on different nuclear energy density functionals, including anti-symmetrized molecular dynamics for hypernuclei (HyperAMD)~\cite{Isaka:2011,Isaka:2012}, the microscopic particle-rotor model~\cite{Mei:2014,Mei:2015}, and the generator coordinate method for hypernuclei (HyperGCM)~\cite{Mei:2016PRC,Cui:2017} based on a Skyrme energy density functional or covariant density functional theory~\cite{Meng:2016Book,Meng:2020Hs,Wang:2022CTP}. In the latter, the configuration mixing of only axially deformed states was considered for the low-lying states of $^{21}_\Lambda$Ne, which has also been studied with a microscopic cluster model~\cite{Yamada:1984} and HyperAMD~\cite{Isaka:2011}. It has been found that the $\Lambda$ hyperon in the lowest energy state (labeled as $\Lambda_s$) weakly couples with the ground-state rotational band of the core nucleus $^{20}$Ne, forming a similar rotational band with almost degenerate doublet states. The $E2$ transition strength for the $2^+ \to 0^+$ transition in $^{20}$Ne is reduced by the $\Lambda_s$ hyperon by a factor ranging from 5\%~\cite{Cui:2017} to 13\%~\cite{Mei:2016PRC}.
Recently, we have extended the above HyperGCM to hypernuclear low-lying states with the mixing of different quadrupole-octupole deformed configurations whose parity is allowed to be violated~\cite{Xia:2019}. The broken symmetries are recovered with projections of parity, particle number, and angular momentum. It has been found that the $\Lambda$ hyperon disfavors the formation of the reflection-asymmetric molecular-like $\alpha$+$^{16}$O structure in $^{20}$Ne. One may expect the electric octupole transition strengths to be quenched accordingly, even though they have not been investigated yet. In the meantime, we have found that the negative-parity states with the configuration $\ket{^{20}{\rm Ne}(K^\pi=0^-)}\otimes \ket{\Lambda_s}$ are close in energy to those with the configuration $\ket{^{20}{\rm Ne}(K^\pi=0^+)}\otimes \ket{\Lambda_p}$. In this work, we extend this framework further by mixing the above two types of configurations, examining the interplay of hyperon single-particle excitation and hypernuclear collective motion. The effect on nuclear energy spectra and electric multipole transition strengths will be studied in detail. \tbd{We note that the effect of mixing the configurations of $\ket{I^-}\otimes \ket{\Lambda_p}$ and $\ket{I^+}\otimes \ket{\Lambda_s}$ was studied before with an extended shell model for $^{12}_\Lambda$C and $^{16}_\Lambda$O~\cite{Motoba:1983NPA}.}
The paper is organized as follows. In Sect.~\ref{Sec.II}, we present the formalism of the extended HyperGCM for the low-lying states of single-$\Lambda$ hypernuclei with quadrupole and octupole correlations. In Sect.~\ref{Sec.results}, we present the energy spectra and the electric quadrupole and octupole transition strengths obtained from the mixing of the two types of configurations, with the $\Lambda$ placed in each of the two lowest states. The results are discussed in comparison to those of calculations with only one of the two types of configurations. Finally, a summary is given in Sect.~\ref{Sec.summary}.
\section{The framework}
\label{Sec.II}
\subsection{The covariant density functional theory for quadrupole-octupole deformed hypernuclei}
In the covariant density functional theory for $\Lambda$ hypernuclei, the effective nucleon-nucleon ($NN$) and $\Lambda N$ interactions are described by the Lagrangian density of a relativistic point-coupling model, where the $\Lambda N$ interaction part is given by~\cite{Tanimura:2012PRC},
\begin{eqnarray}
{\cal L}^{N \Lambda}&=& -\alpha_S^{(N\!\Lambda)}(\bar{\psi}^{N}\psi^N)(\bar{\psi}^{\Lambda}\psi^{\Lambda})\nonumber\\
&&
-\alpha_V^{(N\!\Lambda)}(\bar{\psi}^{N}\gamma_{\mu}\psi^N)
(\bar{\psi}^{\Lambda}\gamma^{\mu}\psi^{\Lambda})\nonumber\\
&&-\delta_S^{(N\!\Lambda)}(\partial_{\mu}\bar{\psi}^{N}\psi^N)
(\partial^{\mu}\bar{\psi}^{\Lambda}\psi^{\Lambda}) \nonumber\\
&& -\delta_V^{(N\!\Lambda)}(\partial_{\mu}\bar{\psi}^{N}\gamma_{\nu}\psi^N)
(\partial^{\mu}\bar{\psi}^{\Lambda}\gamma^{\nu}\psi^{\Lambda})\nonumber\\
&&+
\alpha^{(N\!\Lambda)}_T(\bar{\psi}^{\Lambda}\sigma^{\mu\nu}\psi^{\Lambda})
(\partial_{\nu}\bar{\psi}^{N}\gamma_{\mu}\psi^N).
\end{eqnarray}
Here $\psi^{N}$ and $\psi^{\Lambda}$ represent the nucleon and $\Lambda$-hyperon fields, respectively. Nucleons and the $\Lambda$ hyperon are further approximated as independent particles trapped in potentials determined self-consistently by the densities and currents. All fields are replaced by their expectation values in the mean-field state, whose wave function $\ket{\Phi^{(N\Lambda)}_{n}(\mathbf{q})}$ can be factorized as a product of a nuclear part and a hyperon part,
\begin{equation}
\label{eq:hypernuclear_config}
\ket{\Phi^{(N\Lambda)}_{n}(\mathbf{q})}
= \ket{\Phi^{N}(\mathbf{q})} \otimes\ket{\varphi^{(\Lambda)}_n(\mathbf{q})}
\end{equation}
and is determined by the Ritz variational principle. To generate wave functions of a $\Lambda$-hypernucleus with the correct average particle numbers and different multipole deformation parameters $\mathbf{q}$, we add a linear constraint term on the particle-number operators $\hat N_\tau$ and quadratic constraint terms on the quadrupole and octupole moments,
\begin{eqnarray}
\label{Dirac}
\delta \bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat H
&& - \sum_{\tau=n, p} \lambda_\tau \hat N_\tau \nonumber\\
&& - \sum_{\lambda=1, 2,3} C_\lambda (\hat Q_{\lambda0} - q_\lambda)^2 \ket{\Phi^{(N\Lambda)}_{n}(\mathbf{q})}=0.
\end{eqnarray}
The Lagrange multiplier $\lambda_\tau$ is determined by the constraint $\langle q \vert \hat N_\tau\vert q\rangle=N(Z)$, and $C_\lambda$ is the stiffness parameter of the quadratic constraints. The center-of-mass coordinate is fixed at the origin by the constraint $\langle \Phi^{(N\Lambda)}_{n} \vert \hat Q_{10} \vert \Phi^{(N\Lambda)}_{n}\rangle=0$. The $\hat N_\tau$ and $\hat Q_{\lambda 0}\equiv r^\lambda Y_{\lambda 0}$ are the particle-number and mass multipole moment operators, respectively. The deformation parameters $\beta_{\lambda\mu}$ are defined as
\begin{equation}\label{deformation}
\beta_{\lambda\mu}
=\dfrac{4\pi}{3A R^\lambda} \bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat Q_{\lambda\mu} \ket{\Phi^{(N\Lambda)}_{n}(\mathbf{q})},
\end{equation}
where $A$ is the mass number of the nucleus and $R=1.2A^{1/3}$ fm. For simplicity, only the deformation parameters $\mathbf{q}=(\beta_{20}, \beta_{30})$ are considered. In this case, the hyperon wave function $\ket{\varphi^{(\Lambda)}_n(\mathbf{q})}$ can be characterized by the quantum number $\Omega_\Lambda$, the component of the angular momentum of the hyperon along the $z$-axis. Besides, a hypernuclear system composed of an even-even nuclear core plus a single $\Lambda$ hyperon is considered. The symbol $n$ distinguishes different hyperon states. The $\Lambda$ is placed in one of the two lowest energy states, labeled $\Lambda_s$ and $\Lambda_p$, respectively, even though the $\Lambda$ wave function is generally an admixture of states with different orbital angular momenta due to the nonzero quadrupole-octupole deformation.
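As a purely numerical illustration of the definition of $\beta_{\lambda 0}$ above (not part of the model code; the point-mass distribution and function name are hypothetical), one can evaluate the mass multipole moment of a set of unit point masses and normalize it as in Eq.~(\ref{deformation}):

```python
import numpy as np

def beta_lambda(positions, lam):
    """Schematic evaluation of the deformation parameter beta_{lambda 0}
    for a set of unit point masses (illustrative sketch only).

    positions : (A, 3) array of Cartesian coordinates in fm.
    lam       : multipole order (2 for quadrupole, 3 for octupole).
    """
    A = len(positions)
    r = np.linalg.norm(positions, axis=1)
    # guard against a particle sitting exactly at the origin
    cos_t = np.divide(positions[:, 2], r, out=np.zeros_like(r), where=r > 0)
    if lam == 2:
        y_l0 = np.sqrt(5 / (16 * np.pi)) * (3 * cos_t**2 - 1)
    elif lam == 3:
        y_l0 = np.sqrt(7 / (16 * np.pi)) * (5 * cos_t**3 - 3 * cos_t)
    else:
        raise ValueError("only lam = 2, 3 implemented here")
    q_l0 = np.sum(r**lam * y_l0)          # <Q_{lambda 0}>
    R = 1.2 * A**(1.0 / 3.0)
    return 4 * np.pi / (3 * A * R**lam) * q_l0
```

A prolate arrangement of points along the $z$-axis gives $\beta_{20}>0$, while an oblate arrangement in the $xy$-plane gives $\beta_{20}<0$, matching the usual sign convention.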
In this work, we employ the relativistic point coupling energy functional PC-F1~\cite{Burvenich:2002PRC} parameterization for the $NN$ effective interaction and the PCY-S2~\cite{Tanimura:2012PRC} for the $\Lambda N$ effective interaction. We note that the calculation using PC-PK1~\cite{Zhao:2010PRC} for the $NN$ effective interaction does not change the topology of the energy surface for the nuclear core, even though the results could be somewhat different quantitatively. See the study in Ref.~\cite{Zhou:2016PLB}.
\subsection{The generator coordinate method for hypernuclei}
In the extended version of HyperGCM for single-$\Lambda$ hypernucleus, the wave function of a hypernuclear state with spin-parity $J^\pi$ is generally constructed as a superposition of quantum-number projected hypernuclear mean-field states,
\begin{eqnarray}
\label{GCM:wf}
\ket{\Psi^{J^\pi}_{\alpha} }
= \sum _{n,\mathbf{q},K} f^{JK\pi}_{\alpha}(\mathbf{q},n)\ket{NZ JK \pi; \mathbf{q},n},
\end{eqnarray}
where the symbol $\alpha$ labels the quantum numbers of the state other than the spin parity $J^\pi$. The basis function $\ket{NZ JK\pi; \mathbf{q},n}$ for the many-body state $\ket{\Psi^{J^\pi}_{\alpha} }$ is determined as follows.
\begin{eqnarray}
\label{basis_function}
\ket{NZ JK\pi; \mathbf{q},n} = \hat{P}^J_{MK}\hat{P}^N \hat{P}^Z\hat P^\pi \ket{\Phi^{(N\Lambda)}_{n}(\mathbf{q})},
\end{eqnarray}
where $\hat P^{J}_{MK}$, $\hat{P}^{N, Z}$, and $\hat P^\pi$ are the projection operators that extract the component with the right angular momentum $J$, neutron number $N$, proton number $Z$, and parity $\pi$,
\begin{subequations}
\begin{align}
\hat P^{J}_{MK}&=\dfrac{2J+1}{8\pi^2}\int d\Omega D^{J\ast}_{MK}(\Omega) \hat R(\Omega),\\
\hat P^{N_\tau} &= \dfrac{1}{2\pi}\int^{2\pi}_0 d\varphi_{\tau} e^{i\phi_{\tau}(\hat N_\tau-N_\tau)},\\
\hat P^\pi &= \dfrac{1}{2}(1+\pi\hat P),
\end{align}
\end{subequations}
where the operator $\hat P^J_{MK}$ extracts from the intrinsic state $|\Phi^{(N\Lambda)}_{n}(\beta_{20},\beta_{30})\rangle$ the component whose angular momentum along the intrinsic $z$-axis is given by $K$. The $\hat P^{N_\tau}=\hat P^{N,Z}$ and $\hat P^\pi$ are the projection operators of particle numbers ($N, Z$) and parity $\pi=\pm1$, respectively. For a single-$\Lambda$ hypernucleus with an even-even nuclear core, the total angular momentum $J$ is a half-integer. Besides, since rotational invariance with respect to the $z$-axis is imposed on the intrinsic configurations $\ket{\Phi^{(N\Lambda)}_{n}(\mathbf{q})}$, as discussed in Eq.(\ref{Dirac}), there is no $K$ mixing. Therefore, the $K$ quantum number in Eq.(\ref{basis_function}) is identical to $\Omega_\Lambda$. For mean-field states with the hyperon in a given $\Omega_\Lambda$ configuration, the angular momentum $J$ of the projected states takes the values $\vert \Omega_\Lambda\vert, \vert \Omega_\Lambda\vert+1, \cdots$.
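As a small numerical illustration of the gauge-angle integral in $\hat P^{N_\tau}$ (purely schematic, not part of the actual computation; the function name and discretization are illustrative), the projector can be discretized over equally spaced gauge angles. Acting on a state expanded in particle-number eigenstates, the discrete sum filters out exactly the component with the desired particle number:

```python
import numpy as np

def project_particle_number(amplitudes, n0, n_angles=32):
    """Discretized particle-number projection (illustrative sketch only).

    amplitudes[n] is the component of a symmetry-broken state on the
    n-particle eigenspace.  The projector
        P^N = (1/2pi) * Integral_0^{2pi} dphi  e^{i phi (N_op - N0)}
    is replaced by a uniform sum over n_angles gauge angles, which is
    exact as soon as n_angles exceeds the number of components.
    """
    n = np.arange(len(amplitudes))
    projected = np.zeros(len(amplitudes), dtype=complex)
    for k in range(n_angles):
        phi = 2.0 * np.pi * k / n_angles
        # e^{i phi N_op} is diagonal on particle-number eigenstates
        projected += np.exp(1j * phi * (n - n0)) * amplitudes / n_angles
    return projected
```

Only the $n=N_0$ component survives the sum, since the discrete gauge angles average all other phases to zero.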
The weight function $f^{JK\pi}_{\alpha}(\mathbf{q},n)$ in Eq.(\ref{GCM:wf}) is determined by the variational principle, which in the absence of $K$-mixing leads to the following generalized eigenvalue equation,
\begin{eqnarray}
\label{HWE}
\sum_{n',\mathbf{q}'}
\left[{\cal H}^{JK\pi}_{nn'}(\mathbf{q},\mathbf{q}') -E^{JK\pi}_{\alpha}
{\cal N}^{JK\pi}_{nn'}(\mathbf{q},\mathbf{q}')\right]
f^{JK\pi}_{\alpha}(\mathbf{q}',n')=0.
\end{eqnarray}
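Numerically, this generalized eigenvalue problem is commonly solved by first diagonalizing the norm kernel and expanding in the resulting natural states. The following sketch of that standard procedure is only illustrative (the function name, cutoff choice, and matrix representation are assumptions, not taken from the actual code):

```python
import numpy as np

def solve_hill_wheeler(H, N, cutoff=1e-6):
    """Solve the generalized eigenvalue problem H f = E N f by first
    diagonalizing the norm kernel N and discarding nearly-singular
    directions (natural-states method).  Illustrative sketch.

    H, N   : Hermitian kernel matrices over the generator coordinates.
    cutoff : norm eigenvalues below cutoff * max(eigenvalues) are dropped.
    """
    n_eval, n_evec = np.linalg.eigh(N)
    keep = n_eval > cutoff * n_eval.max()
    # natural states |k> = sum_q n_evec[q, k] / sqrt(n_k) |q>
    basis = n_evec[:, keep] / np.sqrt(n_eval[keep])
    # collective Hamiltonian in the orthonormal natural basis
    H_coll = basis.conj().T @ H @ basis
    energies, g = np.linalg.eigh(H_coll)
    # weight functions f in the original (non-orthogonal) basis
    f = basis @ g
    return energies, f
```

The columns of `f` play the role of the weight functions $f^{JK\pi}_{\alpha}$ in the original non-orthogonal basis; dropping small norm eigenvalues removes redundant, numerically linearly dependent configurations.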
The norm kernel ${\cal N}$, the Hamiltonian kernel ${\cal H}$, and the kernels of the electric multipole operators can generally be written in the following form,
\begin{eqnarray}
\label{eq:kernels}
&&{\cal O}^{JK\pi}_{nn'}(\mathbf{q},\mathbf{q}') \nonumber\\
&=&
\bra{NZ JK\pi; \mathbf{q},n}\hat O \ket{NZ JK\pi; \mathbf{q}',n'} \nonumber\\
&=& \frac{2J+1}{8\pi^2}\int D^{J\ast}_{KK} (\Omega)d\Omega
\int^{2\pi}_0 \frac{e^{-i\phi_N N}}{2\pi}d\phi_N
\int^{2\pi}_0 \frac{e^{-i\phi_ZZ}}{2\pi} d\phi_Z
\nonumber\\
&&
\times\bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat O\hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z}\left(\frac{1+\pi\hat P}{2}\right) \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}
\end{eqnarray}
with $\hat{O}=1, \hat{H}$, and $\hat{Q}^{(e)}_{\lambda \mu}\equiv er^\lambda Y_{\lambda\mu}$, respectively. The overlap functions in the integrand can be classified into the following types.
The overlap in the norm kernel is given by
\begin{eqnarray}
\label{eq:overlap}
&& \bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z}\left(\frac{1+\pi\hat P}{2}\right) \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}\nonumber\\
&=& \frac{1}{2}\left[\bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}\right.\nonumber\\
&&\left.+ \pi\bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \hat P \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}\right].
\end{eqnarray}
Substituting Eq.(\ref{eq:hypernuclear_config}) into the above expression, one finds
\begin{eqnarray}
&&\bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}\nonumber\\
&=&\bra{\Phi^{(N)}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \ket{\Phi^{(N)}(\mathbf{q}')} \nonumber\\
&&\times \bra{\varphi^{(\Lambda)}_n (\mathbf{q})}\hat R(\Omega)\ket{\varphi^{(\Lambda)}_{n'}(\mathbf{q}')}
\end{eqnarray}
and
\begin{eqnarray}
&&\bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z}\hat P \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}\nonumber\\
&=&\bra{\Phi^{(N)}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \hat P\ket{\Phi^{(N)}(\mathbf{q}')} \nonumber\\
&&\times \bra{\varphi^{(\Lambda)}_n (\mathbf{q})}\hat R(\Omega)\hat P\ket{\varphi^{(\Lambda)}_{n'}(\mathbf{q}')}.
\end{eqnarray}
The overlaps of the hyperon part are determined by
\begin{subequations}
\begin{eqnarray}
\label{eq:D4Lambda}
\bar D_{\Lambda_k\Lambda_{k'}}(\mathbf{q},\mathbf{q}';\Omega)
&\equiv& \bra{\varphi^{(\Lambda)}_k (\mathbf{q})}\hat R(\Omega)\ket{\varphi^{(\Lambda)}_{k'}(\mathbf{q}')} \nonumber\\
&=&\sum_{\mu\mu^\prime}F^\ast_{k\mu}(\mathbf{q}) F_{k^\prime \mu^\prime}(\mathbf{q}') \delta_{n n'}\delta_{l l'}\delta_{j j'} D^j_{m_jm'_j}(\Omega)\nonumber \\
&& +\sum_{\mu\mu^\prime}G^\ast_{k\mu}(\mathbf{q}) G_{k^\prime \mu^\prime}(\mathbf{q}') \delta_{n n'}\delta_{l l'}\delta_{j j'} D^j_{m_jm'_j}(\Omega),\nonumber\\
\end{eqnarray}
and
\begin{eqnarray}
&& \bra{\varphi^{(\Lambda)}_k (\mathbf{q})}\hat R(\Omega)\hat P\ket{\varphi^{(\Lambda)}_{k'}(\mathbf{q}')} \nonumber\\
&=&\sum_{\mu\mu^\prime}(-1)^{\ell}F^\ast_{k\mu}(\mathbf{q}) F_{k^\prime \mu^\prime}(\mathbf{q}') \delta_{n n'}\delta_{l l'}\delta_{j j'} D^j_{m_jm'_j}(\Omega)\nonumber \\
&& +\sum_{\mu\mu^\prime}(-1)^{\ell}G^\ast_{k\mu}(\mathbf{q}) G_{k^\prime \mu^\prime}(\mathbf{q}') \delta_{n n'}\delta_{l l'}\delta_{j j'} D^j_{m_jm'_j}(\Omega),
\end{eqnarray}
\end{subequations}
where $F_{k\mu}$ and $G_{k\mu}$ are the expansion coefficients of the large and small components of the Dirac spinor of the $k$-th hyperon single-particle state in a basis of spherical harmonic oscillator functions $\ket{\mu}=\ket{nljm_j}$ truncated to 10 major shells, and $D^j_{m_jm'_j}(\Omega)=\bra{jm_j}\hat R(\Omega)\ket{jm'_j}$ is the Wigner $D$-function. See, for instance, Ref.~\cite{Xue:2015} for details.
For a general one-body operator $\hat O^{(1B)}$, the corresponding overlap can be obtained by rewriting the operator in the second quantization form,
\begin{eqnarray}
\hat O^{(1B)}= \sum_{NN'} O_{NN'}c^\dagger_N c_{N'} + \sum_{\Lambda\Lambda'}O_{\Lambda\Lambda'}c^\dagger_\Lambda c_{\Lambda'},
\end{eqnarray}
where the first term acts only on the wave function of the nucleons, while the second term acts on the wave function of the hyperon part. The corresponding overlap function becomes
\begin{eqnarray}
&&\bra{\Phi^{(N\Lambda)}_{n}(\mathbf{q})} \hat O^{(1B)} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \ket{\Phi^{(N\Lambda)}_{n'}(\mathbf{q}')}\nonumber\\
&=&\sum_{NN'} O_{NN'}\bra{\Phi^{(N)}(\mathbf{q})}c^\dagger_N c_{N'} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \ket{\Phi^{(N)}(\mathbf{q}')} \nonumber\\
&&\times \bra{\varphi^{(\Lambda)}_n (\mathbf{q})}\hat R(\Omega)\ket{\varphi^{(\Lambda)}_{n'}(\mathbf{q}')}\nonumber\\
&&+ \sum_{\Lambda\Lambda'}O_{\Lambda\Lambda'}\bra{\Phi^{(N)}(\mathbf{q})} \hat R(\Omega)e^{i\phi_N\hat N}e^{i\phi_Z\hat Z} \ket{\Phi^{(N)}(\mathbf{q}')} \nonumber\\
&&\times \bra{\varphi^{(\Lambda)}_n (\mathbf{q})}c^\dagger_\Lambda c_{\Lambda'}\hat R(\Omega)\ket{\varphi^{(\Lambda)}_{n'}(\mathbf{q}')}.
\end{eqnarray}
\tbd{In the spherical harmonic oscillator basis $\ket{\mu}$, the expressions for the matrix elements of the mixed densities of neutrons and protons can be found in Ref.~\cite{Yao:2009}. For the single hyperon, the matrix element of density is given by
\begin{eqnarray}
&& \rho_{\Lambda_{\mu_2}, \Lambda_{\mu_1}}(\Omega; \mathbf{q}k; \mathbf{q}'k') \nonumber\\
&=& \frac{\bra{\varphi^{(\Lambda)}_{k} (\mathbf{q})}c^\dagger_{\mu_1} c_{\mu_2}\hat R(\Omega)\ket{\varphi^{(\Lambda)}_{k'}(\mathbf{q}')}}
{\bra{\varphi^{(\Lambda)}_{k} (\mathbf{q})} \hat R(\Omega)\ket{\varphi^{(\Lambda)}_{k'}(\mathbf{q}')}}\nonumber\\
&=& \bra{\mu_2} \hat R(\Omega) \ket{\varphi^{(\Lambda)}_{k'}(\mathbf{q}')}
\bar D^{-1}_{\Lambda_k\Lambda_{k'}}(\mathbf{q},\mathbf{q}';\Omega)
\bra{\varphi^{(\Lambda)}_{k} (\mathbf{q})} \mu_1\rangle\nonumber\\
&=& \bar D^{-1}_{\Lambda_k\Lambda_{k'}}(\mathbf{q},\mathbf{q}';\Omega)\Bigg(F^\ast_{k\mu_1} F_{k' \mu_2} \delta_{n_1 n_2}\delta_{l_1 l_2}\delta_{j_1 j_2} D^{j_1}_{m_{j_1}m_{j_2}}(\Omega)\nonumber\\
&&+G^\ast_{k\mu_1} G_{k' \mu_2} \delta_{n_1 n_2}\delta_{l_1 l_2}\delta_{j_1 j_2} D^{j_1}_{m_{j_1}m_{j_2}}(\Omega)\Bigg),
\end{eqnarray}
where $\bar D_{\Lambda_k\Lambda_{k'}}(\mathbf{q},\mathbf{q}';\Omega)$ has been defined in Eq.(\ref{eq:D4Lambda}). The mixed density of the $\Lambda$ hyperon in coordinate space is thus given by
\begin{eqnarray}
&&\rho_\Lambda(\mathbf{r},\Omega;\mathbf{q}k;\mathbf{q}'k')\nonumber\\
&=&\sum_{\Lambda_{\mu_2}\Lambda_{\mu_1}}
\phi^\ast_{\mu_2}(\mathbf{r}) \rho_{\Lambda_{\mu_2}, \Lambda_{\mu_1}}(\Omega; \mathbf{q}k; \mathbf{q}'k')\phi_{\mu_1}(\mathbf{r}),
\end{eqnarray}
with $\phi_{\mu}(\mathbf{r})=\bra{\mathbf{r}}\mu\rangle$ being the wave function of the spherical harmonic oscillator.
}
The overlap of a two-body operator can be evaluated similarly with the help of the generalized Wick theorem~\cite{Balian:1969}.
More details on the calculation of the overlap functions can be found, for instance, in Refs.~\cite{Yao:2009,Yao:2022PPNP}.
The solution of the HWG equation (\ref{HWE}) provides the energy $E^{JK\pi}_{\alpha}$
and the weight function $f^{JK\pi}_{\alpha}(\mathbf{q}',n')$
for each hypernuclear state.
With the hypernuclear wave function (\ref{GCM:wf}), one can calculate the electric multipole ($\lambda=2, 3$) transition strength as follows,
\begin{eqnarray}
B(E\lambda; J^{\pi_i}_{\alpha_i} \rightarrow J^{\pi_f}_{\alpha_f})
\equiv\frac{1}{2J_i+1} \Bigg|M(E\lambda; J^{\pi_i}_{\alpha_i} \rightarrow J^{\pi_f}_{\alpha_f})\Bigg|^2.
\end{eqnarray}
The reduced transition matrix element $M(E\lambda)$ is given by
\begin{eqnarray}
&&M(E\lambda; J^{\pi_i}_{\alpha_i} \rightarrow J^{\pi_f}_{\alpha_f})\nonumber\\
&=& \sum_{\mathbf{q}_i\mathbf{q}_f;n_in_f} f^{J_fK_f\pi_f\ast}_{\alpha_f}(\mathbf{q}_f,n_f)f^{J_iK_i\pi_i}_{\alpha_i}(\mathbf{q}_i,n_i) \nonumber\\
&&\times \bra{NZ J_fK_f \pi_f; \mathbf{q}_f,n_f}\vert \hat Q^{(e)}_{\lambda}\vert\ket{NZ J_iK_i \pi_i; \mathbf{q}_i,n_i},
\end{eqnarray}
where the configuration-dependent reduced matrix element reads
\begin{eqnarray}
\label{eq:RME}
&& \bra{NZ J_fK_f \pi_f; \mathbf{q}_f,n_f}\vert \hat Q_{\lambda}\vert\ket{NZ J_iK_i \pi_i; \mathbf{q}_i,n_i}\nonumber\\
&=&\delta_{\pi_{f} \pi_{i},(-1)^{\lambda}} \hat{J}^2_f (-1)^{J_{f}-K_{f}} \sum_{M^\prime M^{\prime \prime}}
\left(\begin{array}{c c c}
J_{f} & \lambda & J_{i} \\
-K_{f} & M^{\prime} & M^{\prime \prime}
\end{array}\right) \nonumber\\
&&\times\bra{\Phi^{(N\Lambda)}_{n_f}(\mathbf{q}_f)}\hat{Q}^{(e)}_{\lambda M^{\prime}} \hat{P}_{M^{\prime\prime} K_{i}}^{J_{i}}\hat{P}^{Z}\hat{P}^{N} \hat{P}^{\pi}\ket{\Phi^{(N\Lambda)}_{n_i}(\mathbf{q}_i)}.
\end{eqnarray}
In this work, only the hypernuclear states with $K_i=K_f=K=\Omega_\Lambda=1/2$ are considered. The configuration-dependent reduced matrix element in this case is simplified as follows.
\begin{eqnarray}
&& \bra{NZ J_f K \pi_f; \mathbf{q}_f,n_f}\vert \hat Q_{\lambda}\vert\ket{NZ J_iK \pi_i; \mathbf{q}_i,n_i}\nonumber\\
&=& (-1)^{J_{f}-K} \delta_{\pi_{f} \pi_{i},(-1)^{\lambda}}
\frac{\hat{J}^2_i \hat{J}^2_f}
{2} \sum_{\mu M}
\left(\begin{array}{c c c}
J_{f} & \lambda & J_{i} \\
-K & \mu & M
\end{array}\right)\nonumber\\
&&\times\int^{\pi}_0 \sin\theta\, d\theta\,
d^{J_i}_{MK}(\theta)\bra{\Phi^{(N\Lambda)}_{n_f}(\mathbf{q}_f)}\hat{Q}^{(e)}_{\lambda \mu} e^{i \hat{J}_y\theta}
\hat{P}^{Z}\hat{P}^{N}\hat{P}^{\pi_i}\ket{\Phi^{(N\Lambda)}_{n_i}(\mathbf{q}_i)},\nonumber\\
\end{eqnarray}
where $\hat J=\sqrt{2J+1}$. In the electric multipole operator $\hat{Q}^{(e)}_{\lambda \mu}$, the bare value of the proton charge is used.
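The Wigner $3j$ symbols in Eq.(\ref{eq:RME}) enforce the angular momentum selection rules: the projections must sum to zero, and $(J_f, \lambda, J_i)$ must satisfy the triangle condition. These properties can be checked numerically with, e.g., SymPy; the quantum numbers below are chosen only for illustration and are not taken from the present calculation.

```python
from sympy import Rational
from sympy.physics.wigner import wigner_3j

# 3j symbol (J_f lambda J_i; m1 m2 m3) with J_f=3/2, lambda=2, J_i=1/2.
Jf, lam, Ji, K = Rational(3, 2), 2, Rational(1, 2), Rational(1, 2)

# The symbol vanishes unless the projections sum to zero:
forbidden = wigner_3j(Jf, lam, Ji, -K, 1, K)   # -1/2 + 1 + 1/2 = 1 != 0
assert forbidden == 0

# Orthogonality: (2*J_i+1) * sum_{m1} |3j(Jf lam Ji; m1, -K-m1, K)|^2 = 1,
# summed over the allowed projections m1 of J_f (triangle rule holds here).
total = sum((2 * Ji + 1) * wigner_3j(Jf, lam, Ji, m1, -K - m1, K) ** 2
            for m1 in (Rational(-3, 2), Rational(-1, 2),
                       Rational(1, 2), Rational(3, 2)))
assert total == 1
```

Since SymPy evaluates the symbols exactly, both identities hold without numerical tolerance.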
\section{Results and discussion}%
\label{Sec.results}
\begin{figure}[]
\centering
\includegraphics[width=\columnwidth]{figs/fig1a.eps}
\includegraphics[width=\columnwidth]{figs/fig1b.eps}
\caption{(Color online) The potential energy surfaces of $^{21}_{\Lambda_s}$Ne and $^{21}_{\Lambda_p}$Ne in the plane of quadrupole-octupole deformation parameters ($\beta_{20}, \beta_{30}$), where the $\Lambda$ hyperon occupies the first (a) and second (b) lowest energy states, respectively. The black triangles indicate the configurations that are employed in the HyperGCM calculations. See text for details.}
\label{figs:PES}
\end{figure}
\begin{figure}[]
\includegraphics[width=\columnwidth]{figs/fig2.eps}
\caption{The low-lying energy spectra of $^{20}$Ne (a,b) and $^{21}_{\Lambda}$Ne (c,d,e) from the HyperGCM calculation. In panels (d) and (e), only the configurations $\ket{\Phi^{N}}\otimes \ket{\Lambda_{s/p}}$ with the $\Lambda$ occupying either the first ($\Lambda_s$) or the second ($\Lambda_p$) lowest-energy state ($\Omega_\Lambda=1/2$) are included, respectively. In panel (c), the configurations of both (d) and (e) are allowed to mix.}
\label{figs:spectra}
\end{figure}
\begin{figure}[]
\includegraphics[width=\linewidth]{figs/fig3a.eps}
\includegraphics[width=\linewidth]{figs/fig3b.eps}
\includegraphics[width=\linewidth]{figs/fig3c.eps}
\includegraphics[width=\linewidth]{figs/fig3d.eps}
\caption{(Color online) The distribution of collective wave functions $|g^{JK\pi}_{\alpha}(\mathbf{q},n)|^2$ for the low-lying states in Fig.~\ref{figs:spectra} from HyperGCM calculations with the mixing of different configurations. The results for the states in Fig.~\ref{figs:spectra}(d) and (e) are indicated with open symbols. The weight of each configuration in the hypernuclear state of interest is also provided.}
\label{figs:WFS}
\end{figure}
\begin{figure}[]
\includegraphics[height=3.8cm]{figs/fig4a.eps}
\includegraphics[height=3.8cm]{figs/fig4b.eps}
\caption{(Color online) The contour plots of the nucleon density and the density profile of the $\Lambda$ in the $(x, z)$ plane at $y = 0$ fm (a) for the configuration $\ket{\Phi^{N}}\otimes \ket{\Lambda_{s}}$ with $\beta_{20}=0.55, \beta_{30}=0.10$ [which dominates the positive-parity states in Fig.~\ref{figs:spectra}(c)], and (b) for the configuration $\ket{\Phi^{N}}\otimes \ket{\Lambda_{p}}$ with $\beta_{20}=0.85, \beta_{30}=0.70$. The difference between two neighboring lines is $0.015$ fm$^{-3}$. }
\label{figs:density_positive}
\end{figure}
\begin{figure}[]
\includegraphics[height=3.8cm]{figs/fig5a.eps}
\includegraphics[height=3.8cm]{figs/fig5b.eps}
\caption{(Color online) Same as Fig.~\ref{figs:density_positive}, but (a) for the configuration $\ket{\Phi^{N}}\otimes \ket{\Lambda_{s}}$ with $\beta_{20}=0.85, \beta_{30}=0.70$ and (b) for the configuration $\ket{\Phi^{N}}\otimes \ket{\Lambda_{p}}$ with $\beta_{20}=0.55, \beta_{30}=0.10$. These two configurations are strongly mixed in the negative-parity states in Fig.~\ref{figs:spectra}(c).}
\label{figs:density_negative}
\end{figure}
Figure~\ref{figs:PES} displays the potential energy surfaces (PESs) of $^{21}_{\Lambda}$Ne, where the $\Lambda$ hyperon occupies the first and second lowest-energy states with $K_\Lambda=1/2$, respectively. These PESs are essentially the same as those in Ref.~\cite{Xia:2019}, except that in the present work the quadrupole deformation $\beta_{20}$ and the octupole deformation $\beta_{30}$ are extended up to $\beta_{20}=1.5$ and $\beta_{30}=2.0$, respectively. The contour lines of the PESs are approximately triangular, centered around $\beta_{20}=0.55, \beta_{30}=0.0$, which corresponds to the global energy minimum.
The low-lying states of $^{21}_{ \Lambda}$Ne are obtained with the HyperGCM, where the quadrupole-octupole deformation parameters $(\beta_{20}, \beta_{30})$ are chosen as generator coordinates. It was observed in Ref.~\cite{Zhou:2016PLB} that the low-lying energy spectrum predicted by the GCM with the mixing of all the configurations on the entire $(\beta_{20}, \beta_{30})$ plane can be reasonably reproduced by mixing only the configurations located along the ``valley''. For the sake of simplicity, therefore, only the configurations indicated with black triangles in Fig.~\ref{figs:PES} are included in the present HyperGCM calculation. \tbd{Since the configurations with $\beta_{30}<0$ are automatically included through the parity-projection operator, only the configurations with $\beta_{30}\ge0$ are needed in the practical calculation.}
The predicted low-lying states of $^{21}_{ \Lambda}$Ne are shown in Fig.~\ref{figs:spectra}(c). For comparison, the measured and GCM-predicted energy spectra of $^{20}$Ne are shown in Fig.~\ref{figs:spectra}(a) and (b), respectively. The spectra of $^{21}_{ \Lambda}$Ne from HyperGCM calculations that mix only the configurations of $^{21}_{ \Lambda_s}$Ne or $^{21}_{ \Lambda_p}$Ne are shown in Fig.~\ref{figs:spectra}(d) and (e), respectively. It is worth pointing out that in our previous work~\cite{Mei:2016PRC}, which mixed only axially deformed configurations, the predicted hypernuclear low-lying states with $K=1/2$ should be compared to the positive-parity states in Fig.~\ref{figs:spectra}(d) and the negative-parity states in Fig.~\ref{figs:spectra}(e). Here, octupole-deformed configurations are included additionally. As a result, we also obtain negative-parity states from the mixing of configurations $\ket{\Phi^{N}}\otimes \ket{\Lambda_s}$, and positive-parity states from the mixing of configurations $\ket{\Phi^{N}}\otimes \ket{\Lambda_p}$. Since parity is violated in the configurations of both $^{21}_{ \Lambda_s}$Ne and $^{21}_{ \Lambda_p}$Ne, these two types of configurations can also mix, leading to the spectrum in Fig.~\ref{figs:spectra}(c). Because the mixing introduces additional correlations, each state in Fig.~\ref{figs:spectra}(c) is lower than its counterpart in both Fig.~\ref{figs:spectra}(d) and Fig.~\ref{figs:spectra}(e). This is particularly true for the negative-parity states: comparing (c) and (d), the excitation energies of the positive-parity doublets $(3/2^+, 5/2^+)$ are shifted down by about 0.4 MeV, while those of the negative-parity doublets $(1/2^-, 3/2^-)$ are reduced by about 3.2 MeV.
We note that the energy splitting within the $(1/2^-, 3/2^-)$ doublet is 88 keV; it cannot be interpreted as the spin-orbit splitting of the hyperon $p$ state, since the wave functions are far more complicated than the picture of a spherical nuclear core coupled to a $p$-orbital $\Lambda$~\cite{Xia:2017}. This splitting is much smaller than that found in the HyperGCM calculation~\cite{Mei:2016PRC} with the mixing of only axially deformed configurations $\ket{\Phi^N}\otimes\ket{\Lambda_p}$. Besides, the energy splittings within the positive-parity doublets $(3/2^+, 5/2^+)$ and $(7/2^+, 9/2^+)$ are 18 keV and 38 keV, respectively.
\begin{table*}[tb]
\centering
\tabcolsep=5pt
\caption{The electric quadrupole transition strengths $B(E2)$ for the low-lying states in $^{20}$Ne and $^{21}_\Lambda$Ne from the (Hyper)GCM calculation starting from the relativistic point-coupling PC-F1 plus PCY-S2 interactions, in comparison with those from the HyperAMD model~\cite{Isaka:2011}. The results of the HyperGCM calculations with the mixing of only one of the two types of configurations, $\ket{\Phi^{N}}\otimes \ket{\Lambda_{s}}$ or $\ket{\Phi^{N}}\otimes \ket{\Lambda_{p}}$, are also given for comparison; they are labeled HyperGCM($\Lambda_s$) and HyperGCM($\Lambda_p$), respectively. The data for $^{20}$Ne are taken from Ref.~\cite{NNDC}. }
\begin{tabular}{cccccccccccc}
\hline \hline
\multicolumn{4}{c}{$^{20}$Ne} & \multicolumn{5}{c}{$^{21}_\Lambda$Ne} \\
\cline{1-4} \cline{6-10} \\
& \multicolumn{4}{c}{$B(E2;I^\pi_i\to I^\pi_f)$ ($e^2$fm$^4$)} & \multicolumn{6}{c}{$B(E2;J^\pi_i\to J^\pi_f)$ ($e^2$fm$^4$)} & \\
\cline{2-4} \cline{6-10} \\
$I^\pi_i\to I^\pi_f$ & Exp. & GCM & AMD~\cite{Isaka:2011} & & $J^\pi_i\to J^\pi_f$ & HyperGCM & HyperGCM($\Lambda_s$) & HyperGCM ($\Lambda_p$) & HyperAMD~\cite{Isaka:2011} \\
\hline
$2^+_1\to 0^+_1$ &63(5)& 72.3 & 72.2 & & $3/2^+_1\to 1/2^+_1$ &58.7&59.0 &108.7 & 63.7 \\
%
&& & & & $5/2^+_1\to 1/2^+_1$ &58.7 &59.0&107.1 & 63.9 \\
$4^+_1\to 2^+_1$ &71(6) & 94.1 & 86.9& & $7/2^+_1\to 3/2^+_1$ & 73.0&73.3&139.9 & 64.3 \\
& && & & $9/2^+_1\to 5/2^+_1$ & 75.8 &76.4&148.9 & 75.7 \\
$6^+_1\to 4^+_1$ & 64(10) & 79.7 & 55.1& & $11/2^+_1\to 7/2^+_1$ &71.4 &71.9 &159.1 & 40.3 \\
& && & & $13/2^+_1\to 9/2^+_1$ &70.4 &71.1 &159.6 & 48.0 \\
\hline
%
$3^-_1\to 1^-_1$ & 164(26) &150.0 & 221.2 & & $5/2^-_1\to 1/2^-_1$ &102.3 &140.2 & 65.9 &139.2 & \\
%
&& & & & $7/2^-_1\to 3/2^-_1$ & 114.1 & 163.8 &67.6 & 178.5\\
$5^-_1\to 3^-_1$ && 172.3 & 249.3 & & $9/2^-_1\to 5/2^-_1$ &120.3 &187.3 &68.6 & 184.2 \\
&& & && $11/2^-_1\to 7/2^-_1$ &122.6 & 193.7 & 71.0& 189.3 \\
$7^-_1\to 5^-_1$ &&178.1 & 240.3& & $13/2^-_1\to 9/2^-_1$ &112.6 & 211.7 & 64.5& 166.7 \\
\hline \hline
\end{tabular}
\label{tab:BE2}
\end{table*}
\begin{table*}[tb]
\centering
\tabcolsep=12pt
\caption{Same as Tab.~\ref{tab:BE2}, but for the electric octupole transition strengths $B(E3)$ ($e^2$fm$^6$). The absolute value of the reduced matrix element $M(E3)=|\bra{J_f}|Q_3|\ket{J_i}|$ ($e$fm$^3$) defined in (\ref{eq:RME}) is given in parentheses for comparison. Data for $^{20}$Ne are taken from~\cite{Kibedi:2002BE3}. }
\begin{tabular}{cccccrrl}
\hline \hline
\multicolumn{3}{c}{$^{20}$Ne} & & \multicolumn{3}{c}{$^{21}_\Lambda$Ne} \\
\cline{1-3} \cline{5-7} \\
& \multicolumn{2}{c}{$B(E3;I^\pi_i\to I^\pi_f$) ($M(E3)$)} & & & \multicolumn{2}{c}{$B(E3;J^\pi_i\to J^\pi_f$) ($M(E3)$)} \\
\cline{2-3} \cline{6-7} \\
$I^\pi_i\to I^\pi_f$ & Exp. & GCM & & $J^\pi_i\to J^\pi_f$ & HyperGCM & HyperGCM($\Lambda_s$) \\
\hline
$3^-_1\to 0^+_1$ & 321 & 257.5 (42.5) & &
$5/2^-_1\to 1/2^+_1$ & 69.7 (20.5) & 205.3 (35.1) & \\
& & & &
$7/2^-_1\to 1/2^+_1$ & 48.9 (19.8) & 160.3 (35.8) & \\
$5^-_1\to 2^+_1$ & & 330.9 (60.3) & &
$9/2^-_1\to 3/2^+_1$ & 91.5 (30.2) & 327.6 (57.2) & \\
& & & &
$11/2^-_1\to 5/2^+_1$ & 47.4 (23.9) & 213.2 (50.6) & \\
$1^-_1\to 4^+_1$ & & 677.8 (45.1) & &
$1/2^-_1\to 7/2^+_1$ & 497.0 (31.5) & 1379.8 (52.5) & \\
& & & &
$3/2^-_1\to 9/2^+_1$ & 240.1 (31.0) & 660.5 (51.4) & \\
\hline \hline
\end{tabular}
\label{tab:BE3}
\end{table*}
Decomposing the nuclear wave functions helps one to understand how each configuration contributes to the states of interest. Since the weight functions $f^{JK\pi}_{\alpha}(\mathbf{q},n)$ in Eq.(\ref{GCM:wf}) are not orthogonal to each other and their modulus squares cannot be interpreted as probabilities, a collective wave function $g^{JK\pi}_{\alpha}(\mathbf{q},n)$ is usually introduced as follows,
\begin{equation}
g^{JK\pi}_{\alpha}(\mathbf{q},n)
=\sum_{n',\mathbf{q}'} \left[{\cal N}^{JK\pi}_{nn'}(\mathbf{q},\mathbf{q}')\right]^{1/2}f^{JK\pi}_{\alpha}(\mathbf{q}',n'),
\end{equation}
which fulfills the normalization condition.
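In a discretized basis, this transformation amounts to applying the matrix square root of the norm kernel to the weight functions. The sketch below illustrates the idea on invented toy kernels (not the kernels of the present calculation), checking that the resulting $|g|^2$ sum to one.

```python
import numpy as np
from scipy.linalg import sqrtm, eigh

# Toy norm and Hamiltonian kernels on a 3-point mesh of generator
# coordinates (values made up for illustration).
N = np.array([[1.0, 0.8, 0.5],
              [0.8, 1.0, 0.8],
              [0.5, 0.8, 1.0]])
H = np.array([[-5.0, -4.1, -2.6],
              [-4.1, -5.2, -4.3],
              [-2.6, -4.3, -5.5]])

# Weight functions f solve the generalized problem H f = E N f;
# scipy normalizes them such that f^T N f = 1.
E, f = eigh(H, N)

# Collective wave function g = N^{1/2} f; |g|^2 is then a probability.
g = sqrtm(N) @ f
weights = np.abs(g) ** 2
# Each column of g is normalized: sum_q |g_alpha(q)|^2 = 1.
```

Because `eigh(H, N)` already imposes $f^T {\cal N} f = 1$, the columns of $g$ come out normalized automatically.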
The moduli squared of the collective wave functions, $|g^{JK\pi}_{\alpha}(\mathbf{q},n)|^2$, from the different types of calculations for the low-lying hypernuclear states in Fig.~\ref{figs:spectra}(c), (d), and (e) are shown in Fig.~\ref{figs:WFS}. One can see that the low-lying positive-parity states are dominated by the configurations
$\ket{\Phi^{N}}\otimes \ket{\Lambda_s}$, which exhaust more than 95\% of the total wave function. In contrast, the negative-parity states are strong admixtures of the configurations $\ket{\Phi^{N}}\otimes \ket{\Lambda_s}$ and $\ket{\Phi^{N}}\otimes \ket{\Lambda_p}$. In the lowest negative-parity states $(1/2^-, 3/2^-)$, the ratio of the $\ket{\Phi^{N}}\otimes \ket{\Lambda_s}$ component to the $\ket{\Phi^{N}}\otimes \ket{\Lambda_p}$ component is about two. With increasing angular momentum, the mixing becomes stronger and the ratio approaches one. This strong admixture causes the evident energy shift of the negative-parity states, in contrast to what has been found in previous studies~\cite{Isaka:2011,Mei:2016PRC,Xia:2019}. Similar to the finding in $^{20}$Ne~\cite{Zhou:2016PLB}, the peak position of $|g|^2$ for the positive-parity states of $^{21}_\Lambda$Ne decreases slightly with increasing angular momentum $J$, indicating that the quadrupole collectivity weakens in the higher-spin states.
\tbd{The clustering structure of $^{20}$Ne has been comprehensively studied with nuclear EDFs~\cite{Kimura:2004PRC,Ebran:2012Nature,Ebran:2014PRC} and beyond-mean-field approaches~\cite{Zhou:2013PRL,Zhou:2016PLB}. It has been found in Refs.~\cite{Kimura:2004PRC,Zhou:2016PLB} that the positive-parity states of $^{20}$Ne are dominated by the $\alpha+^{12}$C+$\alpha$ structure, while the negative-parity states are dominated by the pear-shaped $\alpha+^{16}$O structure. The occurrence of clustering in the nuclear core may change the binding energy of hyperons in hypernuclei appreciably~\cite{Lu:2014PRC,Isaka:2015PTEP,Wu:2017PRC,Cui:2022CPC}.} Figures~\ref{figs:density_positive} and \ref{figs:density_negative} show the density profiles of the nucleons and the $\Lambda$ hyperon for the predominant configurations of the positive- and negative-parity states, respectively. With the inclusion of one $\Lambda$ hyperon, the positive-parity states of $^{21}_\Lambda$Ne are anticipated to be dominated by the $\alpha+^{12}$C+$\alpha+\Lambda_s$ structure, as shown in Fig.~\ref{figs:density_positive}. In contrast, the negative-parity states are strong admixtures of the $\alpha+^{16}$O+$\Lambda_s$ and $\alpha+^{12}$C+$\alpha+\Lambda_p$ structures, as discussed for Fig.~\ref{figs:WFS}.
Tables~\ref{tab:BE2} and \ref{tab:BE3} list the electric quadrupole ($E2$) and octupole ($E3$) transition strengths for the low-lying states in $^{21}_\Lambda$Ne. For comparison, the transition strengths in $^{20}$Ne from the GCM calculation based on the PC-F1 force for the $NN$ interaction, and those in $^{21}_\Lambda$Ne from the HyperAMD calculation~\cite{Isaka:2011}, are also provided. It is seen from Tab.~\ref{tab:BE2} that the $B(E2; 2^+_1\to 0^+_1)$ in $^{20}$Ne and the $B(E2)$ values for the transitions $(3/2,5/2)^+_1\to 1/2^+_1$ in $^{21}_\Lambda$Ne from the HyperGCM and HyperAMD calculations are consistent with each other. Both results demonstrate the impurity effect of the $\Lambda$, which reduces the nuclear quadrupole collectivity. In particular, the feature that the $B(E2)$ value first increases and then decreases with angular momentum in $^{20}$Ne ($^{21}_\Lambda$Ne) is reproduced (predicted) by both methods, even though the values differ quantitatively. Moreover, one can see that the $B(E2)$ values from the HyperGCM with the mixing of $\Lambda_s$ and $\Lambda_p$ orbits are systematically, if slightly, smaller than those from the HyperGCM($\Lambda_s$), indicating that the admixture of the $\ket{\Phi^{N}}\otimes \ket{\Lambda_p}$ component further reduces the nuclear quadrupole collectivity. This reduction is more obvious in the negative-parity states than in the positive-parity states. Tab.~\ref{tab:BE3} shows that the reduction is even stronger in the $B(E3)$ values: quantitatively, the reduced $E3$ matrix element for the transition $5/2^-\to 1/2^+$ is quenched from 35.1 $e$fm$^3$ to 20.5 $e$fm$^3$, a quenching of about $42\%$.
\section{Summary}
\label{Sec.summary}
We have extended the HyperGCM framework for low-lying hypernuclear states to allow the mixing of configurations associated with both hyperon excitations and quadrupole-octupole collective excitations, based on a covariant density functional theory. The method has been applied to the low-lying states of $^{21}_\Lambda$Ne, which are dominated by clustering structure. We have found that the inclusion of additional octupole-deformed configurations leads to negative-parity states with a strong mixing of the configurations in which the $\Lambda$ hyperon occupies the first and second lowest-energy states, respectively. In other words, the low-lying negative-parity states are dominated by admixtures of the $\alpha+^{16}$O+$\Lambda_s$ and $\alpha+^{12}$C+$\alpha+\Lambda_p$ structures. As a result, the excitation energies of the low-lying negative-parity states become much smaller than what is expected from previous studies~\cite{Isaka:2011,Cui:2017,Mei:2016PRC}. Besides, the admixture of these two types of configurations leads to a reduction of the $B(E2)$ values. This reduction is even stronger in the $E3$ transition strengths, which remains to be confirmed in future hypernuclear experiments. In contrast, the positive-parity states are still dominated by the $\alpha+^{12}$C+$\alpha+\Lambda_s$ structure. The newly developed HyperGCM framework provides a theoretical tool of choice to study the impact of baryon-baryon interactions on hypernuclear low-lying states, especially the to-be-measured electric and magnetic dipole transitions, which are expected to be sensitive to the coupling strengths in the $\Lambda N$ interaction~\cite{Yao:2008,Sang:2013}. Work in this direction is in progress.
\section*{Acknowledgments}
HJX is supported by Science and Technology Project of Hebei Education Department (No. ZC2021011), Scientific Research and Development Planning Project of Handan City (No. 21422901160). XYW is supported by the National Natural Science Foundation of China
under Grant No. 12005082, the Jiangxi Provincial Natural Science Foundation 20202BAB211008, Jiangxi Normal University (JXNU) Initial Research Foundation Grant to Doctor (12019504), and the Young Talents Program under JXNU (12019870). JMY is partially supported by Natural Science Foundation under Grant No. 12141501 and the Fundamental Research Funds for Central Universities, Sun Yat-sen University.
\section{Abstract}
This work revolves around the two following questions: Given a convex body $C\subset\mathbb{R}^d$, a positive integer $k$, and a finite set $S\subset\mathbb{R}^d$ (or a finite Borel measure $\mu$ in $\mathbb{R}^d$), how many homothets of $C$ are required to cover $S$ if no homothet is allowed to cover more than $k$ points of $S$ (or have measure more than $k$)? How many homothets of $C$ can be packed if each of them must cover at least $k$ points of $S$ (or have measure at least $k$)? We prove that, so long as $S$ is not too degenerate, the answer to both questions is $\Theta_d(\frac{|S|}{k})$, where the hidden constant depends only on $d$; this is clearly best possible up to a multiplicative constant. Analogous results hold in the case of measures. Then we introduce a generalization of the standard covering and packing densities of a convex body $C$ to Borel measure spaces in $\mathbb{R}^d$ and, using the aforementioned bounds, we show that they are bounded from above and below, respectively, by functions of $d$. As an intermediate result, we give a simple proof of the existence of weak $\epsilon$-nets of size $O(\frac{1}{\epsilon})$ for the range space induced by all homothets of $C$. Following some recent work in discrete geometry, we investigate the case $d=k=2$ in greater detail. We also provide polynomial-time algorithms for constructing a packing/covering exhibiting the $\Theta_d(\frac{|S|}{k})$ bound mentioned above in the case that $C$ is a Euclidean ball. Finally, it is shown that if $C$ is a square then it is NP-hard to decide whether $S$ can be covered by $\frac{|S|}{4}$ squares containing $4$ points each.
\newpage
\tableofcontents
\newpage
\section{Introduction}\label{chap:intro}
This work revolves around the two following natural questions: Given a convex body $C$, a finite set of points $S\subset\mathbb{R}^d$, and a positive integer $k$, how many homothets of $C$ are required in order to cover $S$ if each homothet is allowed to cover at most $k$ points? (covering question). How many homothets can be packed if each of them must cover at least $k$ points? (packing question). We shall denote these two quantities by $f(C,k,S)$ and $g(C,k,S)$, respectively. Analogous functions can be defined if, instead of $S$, we consider a finite Borel measure $\mu$ in $\mathbb{R}^d$. As far as we know, these questions have not been studied before in such generality.
Clearly, $f(C, k, S)\geq \frac{|S|}k$ and $g(C, k, S)\leq \frac{|S|}k$, and it is easy to construct, for any $C$ and $k$, arbitrarily large sets $S$ for which equality holds (take, for example, any set formed by clusters which lie far away from each other and contain $k$ points each). Perhaps surprisingly, under some mild assumptions on $S$ (or $\mu$), $f$ and $g$ will also be bounded from above and below, respectively, by linear functions of $\frac{|S|}{k}$ (or $\frac{\mu(\mathbb{R}^d)}{k}$); that is, $f(C,k,S)=O_d(\frac{|S|}k)$ and $g(C,k,S)=\Omega_d(\frac{|S|}k)$, where the hidden constants depend only on $d$. For Euclidean balls, both of these bounds follow from the Besicovitch covering theorem, first shown by Besicovitch~\cite{besicovitch_1945} in the planar case and later extended to higher dimensions and more general objects by Morse~\cite{Morse_1947} and by Bliedtner and Loeb~\cite{BliedtnerLoeb}; this is discussed in further detail in the following section. We give a proof of the desired bounds for $f$ and $g$ that does not rely on the Besicovitch covering theorem.
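The cluster construction achieving equality can be made concrete. In the sketch below, $C$ is taken to be an axis-aligned unit square (our choice, purely for simplicity), and a greedy left-to-right sweep covers each cluster of $k$ points with a single $k^-/S$-homothet, attaining $f(C,k,S)=\frac{|S|}{k}$ on such inputs; the cluster geometry and spacing are invented for illustration.

```python
def clusters(num, k, spacing=100.0):
    """Build num clusters of k points each; clusters lie far apart,
    while the points of each cluster fit inside a unit square."""
    pts = []
    for c in range(num):
        for j in range(k):
            pts.append((c * spacing + 0.1 * j, 0.0))
    return pts

def greedy_square_cover(points, k, side=1.0):
    """Cover points with axis-aligned squares (homothets of the unit
    square), each containing at most k points; greedy left-to-right."""
    remaining = sorted(points)
    squares = []
    while remaining:
        x0, y0 = remaining[0]  # anchor a square at the leftmost point
        inside = [p for p in remaining
                  if x0 <= p[0] <= x0 + side and y0 <= p[1] <= y0 + side]
        squares.append(((x0, y0), inside[:k]))
        covered = set(inside[:k])
        remaining = [p for p in remaining if p not in covered]
    return squares

S = clusters(num=5, k=4)
cover = greedy_square_cover(S, k=4)
# Each cluster fits in one square, so exactly |S|/k = 5 squares are used.
```

On clustered inputs like this one the greedy sweep is optimal; for general point sets it only witnesses the trivial bounds discussed above.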
The standard packing and covering densities depend implicitly on the Lebesgue measure. We introduce a generalization of covering and packing densities to Borel measure spaces in $\mathbb{R}^d$. Then, using the aforementioned bounds on $f$ and $g$, we show that for every $C$ and every nice enough measure, these covering and packing densities are bounded from above and below, respectively, by two constants that depend only on $d$. When restricted to the Lebesgue measure, this is equivalent to the relatively simple fact, mentioned earlier, that the standard covering and packing densities are accordingly bounded by a function of $d$.
For squares, disks and triangles in the plane, the case $k=2$ has received some attention in discrete geometry (\cite{delaunaytoughness,matchingsquares,matchingnp,trianglematchingsfirst,trianglematchings,strongmatchings}). Continuing this trend, we separately study the case $d=k=2$ for more general convex bodies.
We discuss algorithms for efficiently packing and covering with homothets that contain at least $k$ and at most $k$ points, respectively. Bereg et al.~\cite{matchingnp} showed that, even for $k=2$, finding an optimal packing with such homothets of a square is NP-hard; we complement this result by showing that the covering problem is also NP-hard in the case of squares.
At some point in this work, we require some basic tools from the study of Delaunay triangulations and $\epsilon$-nets.
\section{Preliminaries}\label{chap:prelim}
\subsection{Basic notation and definitions}\label{sec:notation}
A set $C\subset\mathbb{R}^d$ is a \textit{convex body} if it is convex, compact and its interior is nonempty. Furthermore, if the boundary of a convex body contains no segment of positive length, then we say that it is a \textit{strictly convex body}. Given any set $C\subset\mathbb{R}^d$, an \textit{homothetic copy} of $C$ (or, briefly, an \textit{homothet} of $C$) is any set of the form $\lambda C+x=\lbrace \lambda c+x : c\in C\rbrace$ for some $x\in\mathbb{R}^d$ and $\lambda>0$\footnote{Some texts ask only that $\lambda\neq 0$. We consider only positive homothets.}; the number $\lambda$ is said to be the \textit{coefficient of the homothety}\footnote{An homothety maps every point $p\subset \mathbb{R}^d$ to $\lambda p+x$, for some $x\in\mathbb{R}^d$, $\lambda\neq 0$.}. From here on, $C$ will stand for a convex body in $\mathbb{R}^d$.
We say that a set of points $S\subset\mathbb{R}^d$ is \textit{non-$t/C$-degenerate} if the boundary of any homothet of $C$ contains at most $t$ elements of $S$. We say that $S$ is in $C$-\textit{general position} if it is non-$(d+2)/C$-degenerate.
All measures we consider in this work are Borel measures in $\mathbb{R}^d$ which take finite values on all compact sets. A measure $\mu$ is \textit{finite} if $\mu(\mathbb{R}^d)<\infty$. We say that a measure is \textit{non-$C$-degenerate} if it vanishes on the boundary of every homothet of $C$. Notice that, in particular, any absolutely continuous measure (with respect to the Lebesgue measure) is non-$C$-degenerate. Finally, a measure $\mu$ is said to be \textit{$C$-nice} if it is finite, non-$C$-degenerate, and there is a ball $K\subset\mathbb{R}^d$ such that $\mu(K)=\mu(\mathbb{R}^d)$.
Given a set of points $S\subset\mathbb{R}^d$ (resp. a measure $\mu$) and a positive number $k$, an homothet $C'$ will be called a $k^+/S$-\textit{homothet} (resp. a $k^+/\mu$-\textit{homothet}) if it contains at least $k$ elements of $S$ (resp. if $\mu(C')\geq k $). Similarly, $k^-/S$-\textit{homothets} and $k^-/\mu$-\textit{homothets} are homothets that contain at most $k$ points of $S$ and have measure at most $k$, respectively.
For any finite set $S$ and any positive integer $k$, define $f(C,k,S)$ as the least number of $k^-/S$-homothets of $C$ that can be used to cover $S$, and $g(C,k,S)$ as the maximum number of interior disjoint $k^+/S$-homothets of $C$ that can be arranged in $\mathbb{R}^d$. Similarly, for any $C$-nice measure $\mu$ and any real number $k>0$, define $f(C,k,\mu)$ as the minimum number of $k^-/\mu$-homothets that cover $K$, where $K$ denotes the ball such that $\mu(K)=\mu(\mathbb{R}^d)$\footnote{Strictly speaking, $f$ is a function of $C,k,\mu$ and $K$. This will not cause any trouble, however, since all the properties that we derive for $f$ will hold independently of the choice of $K$.}, and define $g(C,k,\mu)$ as the maximum number of interior disjoint $k^+/\mu$-homothets that can be arranged in $\mathbb{R}^d$. It is not hard to see that, since $S$ is finite and $\mu$ is $C$-nice, $f$ and $g$ are well defined and take only non-negative integer values.
Next, we introduce $\alpha$-\textit{fat} convex objects. For any point $x\in\mathbb{R}^d$ and any positive $r$, let $B(x,r)$ denote the open ball with center $x$ and radius $r$ (with the Euclidean metric). We write $B^d$ for $B(O,1)$, where $O$ denotes the origin (this way, $rB^d$ denotes the ball of radius $r$ centered at the origin). Given $\alpha\in (0,1]$, a convex body $C$ will be said to be $\alpha$-\textit{fat} if $B(x,\alpha r)\subseteq C\subseteq B(x,r)$ for some $x$ and $r$. The following well-known fact (e.g.~\cite{fatlemma,fattening}) will play a key role in ensuring that the hidden constants in the bounds of $f$ and $g$ are independent of $C$.
\begin{lemma}\label{teo:fat}
Given a convex body $C\subset\mathbb{R}^d$, there exists a non-singular affine transformation $T$ such that $T(C)$ is $1/d$-fat. More precisely, $B^d\subseteq T(C)\subseteq dB^d$.
\end{lemma}
By a \textit{planar embedded graph} we mean a planar graph drawn in the plane so that the vertices correspond to points, the edges are represented by line segments, no edge contains a vertex other than its endpoints, and no two edges intersect, except possibly at a common endpoint.
As usual, $\mathbb{S}^{d-1}$ stands for the unit sphere in $\mathbb{R}^d$ centered at the origin. We denote the Euclidean norm of a point $x\in\mathbb{R}^d$ by $|x|$. Throughout this text we use the standard $O$ and $\Omega$ notations for asymptotic upper and lower bounds, respectively. The precise definitions can be found, for example, in any introductory textbook on algorithm design and analysis.
\subsection{Packing and covering densities}\label{sec:packcover}
A family of sets in $\mathbb{R}^d$ forms a \textit{packing} if their interiors are disjoint, and it forms a \textit{covering} if their union is the entire space. The \textit{volume} of a measurable set $A\subset\mathbb{R}^d$ is simply its Lebesgue measure, which we denote by $\text{Vol}(A)$. The precise definitions of packing and covering densities vary slightly from text to text; for reasons that will become apparent later, we follow~\cite{packandcover}.
Let $\mathcal{A}$ be a family of sets, each having finite volume, and $D$ a set with finite volume, all of them in $\mathbb{R}^d$. The \textit{inner density} $d_{\text{inn}}(\mathcal{A}|D)$ and \textit{outer density} $d_{\text{out}}(\mathcal{A}|D)$ are given by $$d_{\text{inn}}(\mathcal{A}|D)=\frac{1}{\text{Vol}(D)}\sum_{A\in\mathcal{A},A\subset D}\text{Vol}(A),$$ $$d_{\text{out}}(\mathcal{A}|D)=\frac{1}{\text{Vol}(D)}\sum_{A\in\mathcal{A},A\cap D\neq\emptyset}\text{Vol}(A).$$
We remark that these densities may be infinite.
The \textit{lower density} and \textit{upper density} of $\mathcal{A}$ are defined as $$d_{\text{low}}(\mathcal{A})=\liminf_{r\rightarrow\infty}d_{\text{inn}}(\mathcal{A}|rB^d),$$ $$d_{\text{upp}}(\mathcal{A})=\limsup_{r\rightarrow\infty}d_{\text{out}}(\mathcal{A}|rB^d).$$
It is not hard to see that these values are independent of the choice of $O$.
The \textit{packing density} and \textit{covering density} of a convex body $C$ are given by $$\delta(C)=\sup\{d_\text{upp}(\mathcal{P}):\mathcal{P}\text{ is a packing of }\mathbb{R}^d\text{ with congruent copies of }C\},$$ $$\Theta(C)=\inf\{d_\text{low}(\mathcal{C}):\mathcal{C}\text{ is a covering of }\mathbb{R}^d\text{ with congruent copies of }C\}.$$
The \textit{translational packing density} $\delta_H(C)$ and the \textit{translational covering density} $\Theta_H(C)$ are defined by taking the supremum and infimum over all packings and coverings with translates of $C$, instead of congruent copies. See \cite{packandcover} for a summary of the known bounds for the packing and covering densities.
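For concreteness, it may help to recall the classical values for the disk in the plane, where congruent copies and translates yield the same densities (the packing value is due to Thue and the covering value to Kershner; see~\cite{packandcover}):

```latex
% Classical planar values for the disk:
\delta(B^2)=\delta_H(B^2)=\frac{\pi}{\sqrt{12}}\approx 0.9069,
\qquad
\Theta(B^2)=\Theta_H(B^2)=\frac{2\pi}{\sqrt{27}}\approx 1.2092.
```

Both optima are attained by arrangements based on the hexagonal lattice.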
Notice that the definitions of upper and lower density of $\mathcal{A}$ with respect to $D$ are directly tied to the Lebesgue measure, but could be readily extended to other measures. Similarly, the translates of $C$ can be interpreted as homothets of $C$ that have the same Lebesgue measure as $C$. These observations motivate the following generalization of the previous definitions.
Let $\mu$ be a measure on $\mathbb{R}^d$. For a family $\mathcal{A}$ of sets of finite measure and a set $D$, also of finite measure, we define the \textit{inner density with respect to $\mu$} $d_{\text{inn}}(\mu,\mathcal{A}|D)$ and the \textit{outer density with respect to $\mu$} $d_{\text{out}}(\mu,\mathcal{A}|D)$ as $$d_{\text{inn}}(\mu,\mathcal{A}|D)=\frac{1}{\mu(D)}\sum_{A\in\mathcal{A},A\subset D}\mu(A),$$ $$d_{\text{out}}(\mu,\mathcal{A}|D)=\frac{1}{\mu(D)}\sum_{A\in\mathcal{A},A\cap D\neq\emptyset}\mu(A).$$
The \textit{lower density with respect to $\mu$} and \textit{upper density with respect to $\mu$} of $\mathcal{A}$ are now given by $$d_{\text{low}}(\mu,\mathcal{A})=\liminf_{r\rightarrow\infty}d_{\text{inn}}(\mu,\mathcal{A}|rB^d),$$ $$d_{\text{upp}}(\mu,\mathcal{A})=\limsup_{r\rightarrow\infty}d_{\text{out}}(\mu,\mathcal{A}|rB^d).$$
If $\mu$ is non-$C$-degenerate and $\mu(C)>0$, then we define the \textit{homothety packing density with respect to $\mu$} and the \textit{homothety covering density with respect to $\mu$} as $$\delta_H(\mu,C)=\sup\{d_\text{upp}(\mu,\mathcal{P}):\mathcal{P}\text{ is a packing of } \mathbb{R}^d \text{ with homothets of }C\text{ of measure }\mu(C)\},$$ $$\Theta_H(\mu,C)=\inf\{d_\text{low}(\mu,\mathcal{C}):\mathcal{C}\text{ is a covering of } \mathbb{R}^d \text{ with homothets of }C\text{ of measure }\mu(C)\}.$$
Given the properties of $\mu$, it is not hard to see that the sets over which we take the infimum and the supremum are nonempty.
The packing and covering densities can also be generalized in a natural way by considering packings and coverings with sets that are similar\footnote{Two sets $A$ and $B$ in $\mathbb{R}^d$ are similar if there exists a $\lambda>0$ such that $\lambda A$ and $B$ are congruent.} to $C$ and have fixed measure $\mu(C)$. However, all lower bounds on $\delta_H(\mu,C)$ and all upper bounds on $\Theta_H(\mu,C)$, which are among the main focuses of this work, obviously hold for the (non-translational) packing and covering densities as well. Just as in the Lebesgue measure case, the packing and covering densities with respect to $\mu$ measure, in a sense, the efficiency of the best possible packing/covering of the measure space induced by $\mu$.
See~\cite{packandcover} for a review of the existing literature on packings and coverings and~\cite{researchproblems} for further open problems and interesting questions.
\subsection{The Besicovitch covering theorem}\label{sec:besicovitch}
The Besicovitch covering theorem extends an older result by Vitali~\cite{Vitali}. The result was first shown by Besicovitch in the planar case, and then generalized to higher dimensions by Morse~\cite{Morse_1947}; it can be stated as follows.
\begin{theorem}\label{teo:besicovitch}
There is a constant $c_d$ (which depends only on $d$) with the following property: Given a bounded subset $A$ of $\mathbb{R}^d$ and a collection $\mathcal{F}$ of Euclidean balls such that each point of $A$ is the center of at least one of these balls, it is possible to find subcollections $\mathcal{F}_1,\mathcal{F}_2,\dots,\mathcal{F}_{c_d}$ of $\mathcal{F}$ such that each $\mathcal{F}_i$ consists of disjoint balls and $$A\subset\bigcup_{i=1}^{c_d}\bigcup_{B\in\mathcal{F}_i}B.$$
\end{theorem}
In fact, Morse~\cite{Morse_1947} and Bliedtner and Loeb~\cite{BliedtnerLoeb} extended the result to more general objects and normed vector spaces. Later, F{\"u}redi and Loeb \cite{besicovitchconstant} studied the least value of $c_d$ for which the result holds.
Suppose that a finite set $S\subset\mathbb{R}^d$ is such that for each point $p\in S$ there is a ball with center $p$ that covers exactly $k$ elements of $S$. The collection of all these $|S|$ balls covers $S$, so by the Besicovitch covering theorem we can find $c_d$ subcollections, each composed of disjoint balls, whose union covers $S$. Since the balls in a subcollection are disjoint and each covers exactly $k$ points, every subcollection contains at most $\frac{|S|}{k}$ balls, and their union forms a covering of $S$ by at most $c_d\frac{|S|}{k}$ $k^-/S$-homothets of $B^d$. On the other hand, since the union of the subcollections covers $S$, it contains at least $\frac{|S|}{k}$ balls, so some subcollection has at least $\frac{1}{c_d}\frac{|S|}{k}$ balls; this subcollection is a packing formed by $k^+/S$-homothets of $B^d$. This shows that $f(B^d,k,S)=O_d(\frac{|S|}{k})$ and $g(B^d,k,S)=\Omega_d(\frac{|S|}{k})$. A careful analysis of the proof by Bliedtner and Loeb~\cite{BliedtnerLoeb} (combined with some other geometric results) reveals that this can be extended to general convex bodies.
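The argument above is easy to experiment with for $C=B^2$. The following sketch (illustrative only; the simple first-fit splitting below does not reproduce the Besicovitch constant $c_d$, and the function names are ours) builds, around each point, the smallest disk centered there that covers exactly $k$ points, and then splits these disks into subfamilies of pairwise disjoint disks:

```python
import math
import random

def knn_radius(pts, p, k):
    """Distance from p to the k-th nearest point of pts (p itself included),
    so the closed disk B(p, knn_radius) covers exactly k points generically."""
    return sorted(math.dist(p, q) for q in pts)[k - 1]

def disjoint_subfamilies(balls):
    """First-fit split of a list of (center, radius) disks into subfamilies
    of pairwise non-overlapping disks."""
    families = []
    for c, r in balls:
        for fam in families:
            if all(math.dist(c, c2) >= r + r2 for c2, r2 in fam):
                fam.append((c, r))
                break
        else:
            families.append([(c, r)])
    return families

random.seed(1)
n, k = 200, 10
pts = [(random.random(), random.random()) for _ in range(n)]
balls = [(p, knn_radius(pts, p, k)) for p in pts]
families = disjoint_subfamilies(balls)
largest = max(families, key=len)
```

The largest resulting subfamily is a packing of $k^+/S$-homothets of the disk, in the spirit of the bound $g(B^d,k,S)=\Omega_d(\frac{|S|}{k})$.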
\subsection{VC-dimension and \texorpdfstring{$\epsilon$}{e}-nets}\label{sec:VCnets}
A \textit{set system} is a pair $\Sigma =(X,\mathcal{R})$, where $X$ is a set of base elements and $\mathcal{R}$ is a collection of subsets of $X$. Given a set system $\Sigma =(X,\mathcal{R})$ and a subset $Y\subset X$, let $\mathcal{R}|_Y=\{Y\cap R : R\in\mathcal{R}\}$. The VC-dimension of the set system is the maximum integer $d$ for which there is a subset $Y\subset X$ with $|Y|=d$ such that $\mathcal{R}|_Y$ consists of all $2^d$ subsets of $Y$; the VC-dimension may be infinite. In a way, the VC-dimension measures the complexity of a set system, and it plays a very important role in multiple areas, such as computational geometry, statistical learning theory, and discrete geometry.
Let $\Sigma =(X,\mathcal{R})$ be a set system with $X$ finite. An $\epsilon$-\textit{net} for $\Sigma$ is a set $N\subseteq X$ such that $N\cap R\neq \emptyset$ for all $R\in\mathcal{R}$ with $|R|\geq \epsilon |X|$. A landmark result of Haussler and Welzl~\cite{VCdimension} tells us that range spaces with VC-dimension at most $d$ admit $\epsilon$-nets whose size depends only on $d$ and $\frac{1}{\epsilon}$; in fact, any random subset of $X$ of adequate size will be such an $\epsilon$-net with high probability. The precise bounds were later improved by Pach and Tardos \cite{boundsfornets}.
Given a point set $X$ and a family $\mathcal{R}$ of sets (which are not necessarily subsets of $X$), the \textit{primal set system} $(X,\mathcal{R}|_X)$ induced by $X$ and $\mathcal{R}$ is the set system with base set $X$ and $\mathcal{R}|_X=\{R\cap X\text{ }|\text{ } R\in\mathcal{R}\}$. If $X$ is finite, a \textit{weak} $\epsilon$-\textit{net} for the set system $(X,\mathcal{R}|_X)$ is a set of elements $W\subset\bigcup_{R\in\mathcal{R}}R$ such that $W\cap R\neq \emptyset$ for all $R\in\mathcal{R}$ with $|R\cap X|\geq \epsilon |X|$. Weak $\epsilon$-nets have been studied particularly in geometric settings, where $X$ is a set of points and the elements of $\mathcal{R}$ are geometric objects; this is also the setting that we care about here. The most famous result in the subject asserts the existence of a weak $\epsilon$-net whose size depends only on $d$ and $\epsilon$ for any primal set system induced by a finite set of points and the convex subsets of $\mathbb{R}^d$; the best known upper bounds on the size of such a net are due to Rubin \cite{RubinHigDim,RubinWeakNets}. Weak $\epsilon$-nets can also be defined for finite measures: if $\mu$ is finite and $\mathcal{R}$ is a family of sets in $\mathbb{R}^d$, a weak $\epsilon$-net for the pair $(\mu,\mathcal{R})$ consists of a collection $W$ of points in $\mathbb{R}^d$ such that $W\cap R\neq \emptyset$ for all $R\in\mathcal{R}$ with $\mu(R)\geq \epsilon\mu(\mathbb{R}^d)$.
We refer the reader to~\cite{surveynets} for a survey on $\epsilon$-nets and other similar concepts.
\subsection{Delaunay triangulations }\label{sec:delaunay}
Given a finite point set $S\subset\mathbb{R}^2$, the \textit{Delaunay graph} $D(S)$ is the embedded planar graph with vertex set $S$ in which two vertices are adjacent if and only if there is a Euclidean ball that contains those two points but no other point of $S$. It is not hard to check that $D(S)$ is indeed planar and that, as long as no four points lie on a circle and no three belong to the same line, $D(S)$ will actually be a triangulation\footnote{An embedded planar graph with vertex set $S$ is a \textit{triangulation} if all its bounded faces are triangles and their union is the convex hull of $S$.}.
Delaunay graphs have a natural generalization which arises from considering general convex bodies instead of balls. The \textit{Delaunay graph of $S$ with respect to $C$}, which we denote by $D_C(S)$, is the embedded planar graph with vertex set $S$ and an edge between two vertices if and only if there is an homothet of $C$ that covers those two points but no other point of $S$. If $C$ is strictly convex and has smooth boundary, and $S$ is in $C$-general position and does not contain three points on the same line, then $D_C(S)$ will again be a triangulation. The edges of $D_C(S)$ encode the pairs of points of $S$ that can be covered using a $2^-/S$-homothet of $C$ and, thus, finding an optimal cover with $2^-/S$-homothets is equivalent to finding the largest possible matching in $D_C(S)$.
It is worth keeping in mind that Delaunay graphs can be defined analogously in higher dimensions, though we will only need them in the planar case.
Many properties of generalized Delaunay triangulations can be found in Cano's PhD dissertation \cite{generalizeddelaunay}.
\subsection{Previous related work}\label{sec:previouswork}
The functions $f$ and $g$ have been indirectly studied in some particular cases. The first instance of this that we know of appeared in a paper by Szemerédi and Trotter~\cite{combdistprojective}, who obtained a lemma that implies a bound of $g(C,k,S)=\Omega(\frac{|S|}{k})$ in the case that $C$ is a square in the plane; they applied this result to a point-line incidence problem.
Dillencourt~\cite{delaunaytoughness} studied the largest matching that can be obtained in a point set using disks; in our setting, this is equivalent to the $k=2$ case of the covering problem. Dillencourt showed that all planar Delaunay triangulations (with respect to disks) are $1$-tough\footnote{Given a positive real number $t$, a graph $G$ is $t$-\textit{tough} if in order to split it into any number $k \geq 1$ of connected components, we need to remove at least $tk$ vertices.} and thus, by Tutte's matching theorem, contain a matching of size $\lfloor\frac{|S|}{2}\rfloor$. Ábrego et al. \cite{matchingsquares} obtained a similar result for squares: they essentially proved that, as long as no two points lie on the same vertical or horizontal line, the Delaunay triangulation with respect to an axis-aligned square contains a Hamiltonian path and, as a consequence, a matching of size $\lfloor\frac{|S|}{2}\rfloor$. These results immediately translate to $f(C,2,S)\leq\lceil\frac{|S|}{2}\rceil$ whenever $C$ is a disk or a square (and $S$ has the required properties); this bound is obviously optimal. Panahi et al. \cite{trianglematchingsfirst} and Babu et al. \cite{trianglematchings} studied the problem for equilateral triangles (their results actually hold for any triangle, as can be seen by applying an adequate affine transformation); the second of these papers shows that, as long as $S$ is in general position, the corresponding Delaunay graph admits a matching of size at least $\lceil\frac{|S|-1}{3}\rceil$, and that this is tight. Ábrego et al. \cite{matchingsquares} also studied \textit{strong matchings} for disks and squares, that is, interior disjoint collections of homothets, each of which covers exactly two points of the set. Their results imply that $g(C,2,S)\geq\lceil\frac{|S|-1}{8}\rceil$ if $C$ is a disk and $g(C,2,S)\geq\lceil\frac{|S|}{5}\rceil$ if $C$ is a square, again under some mild assumptions on $S$.
The bound for squares was improved to $g(C,2,S)\geq\lceil\frac{|S|-1}{4}\rceil$ by Biniaz et al. in \cite{strongmatchings}, where they also showed that $g(C,2,S)\geq\lceil\frac{|S|-1}{9}\rceil$ in the case that $C$ is an equilateral triangle and presented various algorithms for computing large strong matchings of various types. In a similar vein, large matchings in Gabriel graphs\footnote{The \textit{Gabriel graph} of a planar point set $S$ is the graph in which two points $p,q\in S$ are joined by an edge if and only if the disk whose diameter is the segment from $p$ to $q$ contains no other point of $S$.} and strong matchings with upward and downward equilateral triangles are treated in \cite{gabrielmatching,strongmatchings}.
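The Gabriel graph from the footnote is straightforward to compute by brute force, which gives some hands-on intuition for the $k=2$ objects discussed above. A minimal sketch (cubic running time; the function name is ours, and the closed-disk emptiness test matches the footnote's definition):

```python
import math
import random

def gabriel_graph(pts):
    """Edge {i, j} iff the closed disk whose diameter is the segment
    pts[i]pts[j] contains no other point of the set."""
    edges = set()
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            p, q = pts[i], pts[j]
            mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
            rad = math.dist(p, q) / 2
            if all(math.dist(pts[t], mid) > rad
                   for t in range(len(pts)) if t not in (i, j)):
                edges.add((i, j))
    return edges

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(40)]
edges = gabriel_graph(pts)
```

Since the Gabriel graph contains the Euclidean minimum spanning tree, it is always connected; it is also a subgraph of the Delaunay graph and hence planar.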
Bereg et al.~\cite{matchingnp} considered matchings and strong matchings of points using axis aligned rectangles and squares. They provided various algorithms for finding large such matchings and showed that deciding if a point set has a strong perfect matching using squares (i.e. deciding if $g(C,2,S)=\frac{|S|}{2}$ in the case that $C$ is a square) is $NP$-hard.
\section{Results}\label{chap:results}
\subsection{Overview of Section \ref{chap:cover}}\label{sec:overview4}
In Section~\ref{sec:weaknets} we use a simple technique by Kulkarni and Govindarajan~\cite{weaknets} to construct a weak $\epsilon$-net of size $O_d(\frac{1}{\epsilon})$ for any primal set system (on a finite base set of points $S$) induced by the family $\mathcal{H}_C$ of all homothets of a convex body $C$. This result also follows from the known bounds on the Hadwiger-Debrunner $(p,q)$-problem for homothets (see~\cite{pq-problem}), but our proof is short and elementary, and it also yields an analogous result for finite measures. We remark that Naszódi and Taschuk~\cite{infiniteVC} showed that $(\mathbb{R}^d, \mathcal{H}_{C})$ may have infinite VC-dimension for $d\geq 3$, so there might be no small (strong) $\epsilon$-net for $(S, \mathcal{H}_{C}|_S)$. For $d=2$, however, any range space induced by pseudo-disks (and, in particular, $(S, \mathcal{H}_{C}|_S)$) admits an $\epsilon$-net of size $O(\frac{1}{\epsilon})$~\cite{newexistenceproofs,shallowcell}.
In Section \ref{sec:covering}, we use the result on weak $\epsilon$-nets to show that, under some mild assumptions, $f(C,k,S)=O_d(\frac{|S|}{k})$ and $f(C,k,\mu)=O_d(\frac{\mu(\mathbb{R}^d)}{k})$. The proof does not make use of the Besicovitch covering theorem (see Section \ref{sec:besicovitch}), and it will provide us with a scheme for designing one of the algorithms discussed in Section \ref{chap:computational}.
The bound for measures is then applied in Section \ref{sec:coverdensity} to prove that if $\mu$ is non-$C$-degenerate, $\mu(C)>0$ and $\mu(\mathbb{R}^d)=\infty$, then the translational covering density $\Theta_H(\mu,C)$ is bounded from above by a function of $d$. It is easy to see that $\Theta_H(\mu,C)$ is infinite for finite measures, so the $\mu(\mathbb{R}^d)=\infty$ condition is essential.
\subsection{Overview of Section \ref{chap:pack}}\label{sec:overview5}
In Section \ref{sec:pack} we prove that, under the same conditions that allowed us to obtain an upper bound for $f$, we have $g(C,k,S)=\Omega_d(\frac{|S|}{k})$ and $g(C,k,\mu)=\Omega_d(\frac{\mu(\mathbb{R}^d)}{k})$. The proof relies on some properties of collections of homothets which intersect a common homothet; this is very similar to the study of $\tau$\textit{-satellite configurations} in the proofs of \cite{BliedtnerLoeb,Morse_1947}.
As in the covering case, the bound on $g$ is then utilized in Section \ref{sec:packdensity} to prove that if $\mu$ is non-$C$-degenerate and $\mu(\mathbb{R}^d)>\mu(C)>0$, then the translational packing density $\delta_H(\mu,C)$ is bounded from below by a function of $d$. The $\mu(\mathbb{R}^d)>\mu(C)$ condition is clearly necessary.
\subsection{Overview of Section \ref{chap:computational}}\label{sec:overview6}
Given $C\subset\mathbb{R}^d$ and a positive integer $k$, let $C$-$k$-COVER denote the optimization problem that consists of determining, given an instance point set $S\subset\mathbb{R}^d$, the least integer $m$ such that $S$ can be covered by $m$ $k^-/S$-homothets of $C$. Similarly, the problem $C$-$k$-PACK consists of finding the largest $m$ such that there is a packing composed of $m$ $k^+/S$-homothets of $C$.
Section \ref{sec:algorithms} is devoted to the description of polynomial time algorithms for approximating $C$-$k$-COVER and $C$-$k$-PACK up to a multiplicative constant in the case that $C$ is a disk. The proofs are based on the ideas developed in Sections \ref{sec:weaknets}, \ref{sec:covering} and \ref{sec:pack}.
There has been extensive research regarding the complexity of geometric set cover problems, and a variety of these have been shown to be NP-complete; see~\cite{optpackcovernp} for one of the first works in this direction. As mentioned in Section \ref{sec:previouswork}, Bereg et al.~\cite{matchingnp} proved that when $C$ is a square it is NP-hard to decide if $g(C,2,S)=\frac{|S|}{2}$; this implies, in particular, that $C$-$2$-PACK is NP-hard for squares. As long as we are capable of computing $D_C(S)$ in polynomial time (which is the case for hypercubes, balls and any other convex body which can be described by a bounded number of algebraic inequalities), $f(C,2,S)$ can be computed, also in polynomial time, by applying any of the known algorithms for finding the largest possible matching in a given graph. However, in Section~\ref{sec:complexity} we show that if $C$ is a square and $k$ is a multiple of $4$, then deciding if $f(C,k,S)=\frac{|S|}{k}$ is NP-hard. Unfortunately, our proof is not very robust, in the sense that it depends heavily on the fact that $C$ is a square and that $S$ is not required to be in general position.
\subsection{Overview of Section \ref{chap:matching}}\label{sec:overview7}
As mentioned in Section~\ref{sec:previouswork}, Dillencourt~\cite{delaunaytoughness} showed that the Delaunay triangulation (with respect to disks) of a point set $S\subset\mathbb{R}^2$ with no three points on the same line and no four points on the same circle is $1$-tough. Biniaz~\cite{simpletough} later gave a simpler proof of this result.
In Section \ref{sec:tough} we extend the technique of Biniaz to show that, under some assumptions on $C$ and $S$, $D_C(S)$ is almost $t$-tough, where $t$ depends on how fat $C$ is (or, rather, how fat it can be made by means of an affine transformation). This result is then applied, again in a similar fashion to \cite{simpletough}, in Section \ref{sec:matchings} to bound $f(C,2,S)$. Using a well-known result by Nishizeki and Baybars \cite{planarmatchings} on the size of the largest matchings in planar graphs, we also obtain a weaker bound that holds in greater generality.
\section{Covering}\label{chap:cover}
\subsection{Small weak \texorpdfstring{$\epsilon$}{e}-nets for homothets\label{sec:weaknets}}
The purpose of this section is to prove the following result about weak $\epsilon$-nets.
\begin{theorem} \label{teo:nets}
Let $C\subset\mathbb{R}^d$ be a convex body and denote the family of all homothets of $C$ by $\mathcal{H}_{C}$. Then, for any finite set $S\subset\mathbb{R}^d$ and any $\epsilon>0$, $(S, \mathcal{H}_{C}|_S)$ admits a weak $\epsilon$-net of size $O_d(\frac{1}{\epsilon})$, where the hidden constant depends only on $d$. Similarly, for any $C$-nice measure $\mu$, $(\mu,\mathcal{H}_{C})$ admits a weak $\epsilon$-net of size $O_d(\frac{1}{\epsilon})$.
\end{theorem}
The simple lemma below will provide us with the basic building blocks for constructing the weak $\epsilon$-net.
\begin{lemma}\label{teo:hit}
There is a constant $c_1=c_1(d)$ with the following property: Given a convex body $C\subset\mathbb{R}^d$, there is a finite set $P_C\subset\mathbb{R}^d$ of size at most $c_1$ that hits every homothet $C'$ of $C$ with $C'\cap C\neq\emptyset$ and homothety coefficient at least $1$.
\end{lemma}
\begin{proof}
Let $T$ be an affine transformation as in Lemma~\ref{teo:fat}. We begin by showing the result for $C_T=T(C)$. Every homothet $C_T'$ with $C_T'\cap C_T\neq\emptyset$ and coefficient at least $1$ contains a translate $C_T''$ of $C_T$ with $C_T''\cap C_T\neq\emptyset$; this translate satisfies $C_T''\subseteq dB^d+2dB^d\subset [-3d,3d]^d$. On the other hand, $B^d\subset C_T$, so $C_T''$ must contain a translate of an axis parallel $d$-hypercube of side $\frac{2}{\sqrt{d}}$. Now it is clear that we may take $P_{C_T}$ to be the set of points from a $\frac{2}{\sqrt{d}}$ grid\footnote{By a $\frac{2}{\sqrt{d}}$ grid we mean an axis parallel $d$-dimensional grid with separation $\frac{2}{\sqrt{d}}$ between adjacent points.} that lie in the interior of $[-3d,3d]^d$, and this grid may be chosen so that $|P_{C_T}|\leq (3d^{3/2})^d$. Setting $c_1(d)=(3d^{3/2})^d$ and $P_C=T^{-1}(P_{C_T})$ yields the result.
\end{proof}
Notice that the value $1$ plays no special role in the proof; the result still holds (with a possibly larger $c_1$) if we wish for $P_C$ to hit every homothet whose coefficient is bounded from below by a positive constant. The construction used in the proof has the added benefit that it allows us to compute $P_{C}$ in constant time (for fixed $d$), so long as we know $T$.
Using some known results, it is possible to obtain better bounds for $c_1$. In fact, a probabilistic approach by Erd\H{o}s and Rogers \cite{erdosrogers} (see also \cite{hittingtranslates}) shows that we can take \[c_1(d)\leq3^{d+1}2^d\frac{d}{d+1}d(\log d+\log\log d+4)\] for all large enough $d$. See \cite{Hellyandrelatives} for some earlier bounds on $c_1(d)$.
Next, we prove Theorem~\ref{teo:nets}.
\begin{proof}
We show that $(S,\mathcal{H}_C|_S)$ admits a small weak $\epsilon$-net; the proof for $(\mu,\mathcal{H}_C)$ is analogous. The weak $\epsilon$-net $W$ is constructed in steps. Consider the smallest homothet $C'$ of $C$ which contains at least $\epsilon |S|$ points of $S$ and add the elements of the set $P_{C'}$, given by Lemma~\ref{teo:hit}, to $W$. Now, we forget about the points covered by $C'$ and repeat this procedure with the ones that remain until there are fewer than $\epsilon |S|$ points left. Since we pick at most $c_1$ points at each step, $|W|\leqslant c_1\frac{1}{\epsilon}$, so all that is left to do is show that $W$ is a weak $\epsilon$-net for $(S,\mathcal{H}_{C}|_S)$.
Let $C_{1}$ be an homothet with $|C_{1}\cap S|\geqslant\epsilon |S|$ and consider, along the process of constructing $W$, the first step at which the taken homothet contains at least one element of $S\cap C_{1}$; this homothet will be called $C_{2}$. Clearly, $C_{1}$ and $C_{2}$ have nonempty intersection and, since none of the points in $C_1$ had yet been erased when $P_{C_2}$ was added to $W$, $C_{1}$ is not smaller than $C_{2}$. It follows that $C_{1}$ contains at least one point of $P_{C_2}\subset W$, as desired.
\end{proof}
As mentioned in the introduction, the technique used in this proof was first employed by Kulkarni and Govindarajan~\cite{weaknets} to show that primal set systems induced by hypercubes and disks admit weak $\epsilon$-nets of size $O(\frac{1}{\epsilon})$.
We remark that if $c$ is a constant, then it suffices to take, at each step, an homothet $C'$ that contains at least $\epsilon |S|$ points and whose coefficient is at most $c$ times larger than the coefficient of the smallest homothet with that property, and then add to $W$ the set given by Lemma \ref{teo:hit} when $1$ is substituted by $1/c$. This observation will be important in Section \ref{chap:computational}.
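In one dimension, where the homothets of $C=[0,1]$ are exactly the closed intervals, the whole construction collapses to a few lines. In the sketch below (the helper name is ours), the hitting set of Lemma~\ref{teo:hit} degenerates to the two endpoints of the chosen interval, since every interval of length at least $b-a$ that meets $[a,b]$ must contain $a$ or $b$:

```python
import random

def weak_net_1d(points, eps):
    """Greedy weak eps-net for intervals on the line: repeatedly take the
    shortest interval holding m = max(1, floor(eps*n)) of the remaining
    points, add its endpoints to the net, and discard the covered points."""
    m = max(1, int(eps * len(points)))
    rest, net = sorted(points), []
    while len(rest) >= m:
        i = min(range(len(rest) - m + 1),
                key=lambda j: rest[j + m - 1] - rest[j])
        a, b = rest[i], rest[i + m - 1]
        net.extend([a, b])                       # 1-d hitting set of [a, b]
        rest = [p for p in rest if not (a <= p <= b)]
    return net

random.seed(3)
pts = [random.random() for _ in range(400)]
net = weak_net_1d(pts, 0.05)
```

Each round discards at least $m$ points, so the net has at most $\frac{2}{\epsilon}$ elements, matching the $O_d(\frac{1}{\epsilon})$ bound of Theorem~\ref{teo:nets}.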
\subsection{Covering finite sets and measures}\label{sec:covering}
We now state the main result about the asymptotic behavior of the function $f$ defined in Section \ref{sec:packcover}.
\begin{theorem}\label{teo:fbound}
Let $C\subset\mathbb{R}^d$ be a convex body. Then, for any positive integer $k$ and any non-$\frac{k}{2}/C$-degenerate finite set of points $S\subset\mathbb{R}^d$, we have that $f(C,k,S)=O(\frac{|S|}{k})$, where the hidden constant depends only on $d$. Similarly, for any positive real number $k$ and any $C$-nice measure $\mu$, $f(C,k,\mu)=O(\frac{\mu(\mathbb{R}^d)}{k})$.
\end{theorem}
Again, we start by proving the result for point sets and then discuss the minor adaptations that must be made when working with measures.
As was essentially done in the proof of Lemma~\ref{teo:hit}, we may and will assume that $B^d\subseteq C\subseteq dB^d$.
\begin{observation}\label{teo:tammes}
For any $d$ and any positive real $r$, there is a constant $c(d,r)$ with the following property: every set of points on $\mathbb{S}^{d-1}$ which contains no two distinct points at distance less than $r$ has at most $c(d,r)$ elements.
\end{observation}
\begin{proof}
A straightforward $(d-1)$-volume counting argument (the open balls of radius $r/2$ centered at the points are disjoint, and each of them cuts a cap of $(d-1)$-volume $\Omega_d(r^{d-1})$ out of $\mathbb{S}^{d-1}$) yields $c(d,r)=O_d\left(r^{-(d-1)}\right)$.
\end{proof}
Determination of the optimal values of $c(d,r)$ is often referred to as the Tammes problem. Exact solutions are only known in some particular cases, see~\cite{tammes} for some recent progress and further references.
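For $d=2$ the observation is easy to check numerically: two points of the unit circle at chordal distance $r$ are separated by an arc of $2\arcsin(r/2)$, so an $r$-separated subset of $\mathbb{S}^1$ has at most $\frac{2\pi}{2\arcsin(r/2)}$ elements, and a greedy construction comes close to this. A small sketch (the sampling resolution and function name are our own choices):

```python
import math

def greedy_separated(r, samples=10000):
    """Greedily pick points on the unit circle, keeping every pair at
    Euclidean (chordal) distance at least r."""
    chosen = []
    for i in range(samples):
        t = 2 * math.pi * i / samples
        p = (math.cos(t), math.sin(t))
        if all(math.dist(p, q) >= r for q in chosen):
            chosen.append(p)
    return chosen

r = 0.1
pts = greedy_separated(r)
# exact upper bound on S^1 via the arc-length argument:
bound = 2 * math.pi / (2 * math.asin(r / 2))
```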
The simple geometric lemma below will allow us to construct the desired covering.
\begin{lemma}\label{teo:neighborhoodcover}
Let $P\subset\mathbb{R}^d$ be a (possibly infinite) bounded set and consider a collection of homothets $\lbrace C_p\rbrace_{p\in P}$ such that $C_p$ is of the form $p+\lambda C$ and $\bigcap_{p\in P} C_p\neq\emptyset$. Then there is a subset $P'$ of $P$ of size at most $c_2=c_2(d)$ such that the collection of homothets $\lbrace C_p\rbrace_{p\in P'}$ covers $P$.
\end{lemma}
\begin{proof}
Take $c_2(d)=c(d,t)$ (as in Observation~\ref{teo:tammes}) for some sufficiently small $t=t(d)$ to be chosen later. After translating, we may assume that $O\in\bigcap_{p\in P} C_p$. We construct $P'$ in steps, starting from the empty set. At each step, denote by $N$ the supremum of the Euclidean norms of the elements of $P$ that are yet to be covered by $\lbrace C_p\rbrace_{p\in P'}$, and add to $P'$ an uncovered point with norm at least $(1-\frac{1}{10d})N$. The process ends as soon as $P\subset\bigcup_{p\in P'} C_p$; we show that this takes no more than $c_2$ steps. Suppose, for the sake of contradiction, that after some number of steps we have $|P'|>c_2$, and let $P'_{unit}=\lbrace\frac{p}{|p|}\ |\ p\in P'\rbrace$. By Observation~\ref{teo:tammes}, there are two distinct points $\frac{p_1}{|p_1|},\frac{p_2}{|p_2|}\in P'_{unit}$ (with $p_1,p_2\in P'$) at distance less than $t$ from each other. Say, w.l.o.g., that $p_1$ was added to $P'$ prior to $p_2$; it follows from the construction that $|p_1|>(1-\frac{1}{10d})|p_2|$. Since $C_{p_1}$ is $1/d$-fat and contains $O$, the ball with center $p_1$ and radius $\frac{|p_1|}{d}$ lies completely within said homothet. Now, by convexity, $C_{p_1}$ must contain a bounded cone with vertex $O$, base passing through $p_1/(1-\frac{1}{10d})$, and whose angular width depends only on $d$. It follows that if $t$ is small enough, then $p_2$ lies within this cone and is thus contained in $C_{p_1}$ (see Figure~\ref{fig:1}). This contradicts the fact that $p_2$ was still uncovered when it was added to $P'$, and the result follows.
\end{proof}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.4]{Figure_01.pdf}
\caption{The point $p_2$ is contained in a cone which lies completely inside $C_{p_1}$.}
\label{fig:1}
\end{figure}
We remark that the above result can easily be derived from the work of Nasz{\'o}di et al. \cite{homothetarrangements} (see also \cite{besicovitchconstant,exponentiallowerbound}).
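To make the greedy process in the proof concrete, here is a minimal Python sketch for $d=2$, specialized to disk homothets: $C_p$ is taken to be the disk centered at $p$ with radius $|p|$, so every $C_p$ contains the common point $O$ (the origin). The max-norm tie rule and the example point set are our own simplifications for illustration, not part of the lemma.

```python
import math

def greedy_neighborhood_cover(points):
    # Greedy process from the lemma, specialized to d = 2 and to disks:
    # C_p is the disk centered at p with radius |p|, so each C_p contains
    # the origin O.  We always pick an uncovered point of maximal norm,
    # which is the simplest choice satisfying the (1 - 1/(10d))-rule.
    uncovered = list(points)
    chosen = []
    while uncovered:
        p = max(uncovered, key=lambda q: math.dist(q, (0.0, 0.0)))
        chosen.append(p)
        r = math.dist(p, (0.0, 0.0))  # radius of C_p
        uncovered = [q for q in uncovered if math.dist(q, p) > r + 1e-9]
    return chosen

# 12 points spaced 30 degrees apart on the unit circle: each disk C_p
# covers exactly the points within 60 degrees of p, so the greedy
# process terminates after 3 or 4 picks, covering everything.
pts = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
       for a in range(0, 360, 30)]
centers = greedy_neighborhood_cover(pts)
```

The selected disks always cover all of $P$; the point of the lemma is that, for fat homothets with a common point, the number of picks is bounded by a constant depending only on $d$.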
Now we present the proof of Theorem~\ref{teo:fbound}.
\begin{proof}
We assume that $\frac{|S|}{k}\geq 1$. For every $p\in S$, let $C_{p}$ be the smallest homothet of the form $\lambda C+p$ which covers more than $\frac{k}{2}$ points of $S$ (it exists, since $C$ is closed and for any sufficiently large $\lambda$ the homothet $\lambda C+p$ covers $|S|>\frac{k}{2}$ points). Since the boundary of $C_p$ contains at most $\frac{k}{2}$ points, a slightly smaller homothet, also of the form $\lambda C+p$, will cover at least $|C_p\cap S|-\frac{k}{2}$ but at most $\frac{k}{2}$ points. It follows that $|C_p\cap S|\leqslant k$, that is, $C_p$ is a $k^-/S$-homothet. Let $C_S=\lbrace C_p\text{ }|\text{ } p\in S\rbrace$ and consider a weak $\frac{k}{2|S|}$-net $W$ for $(S,\mathcal{H}_C|_S)$ of size $O(\frac{|S|}{k/2})=O(\frac{|S|}{k})$, as given by Theorem~\ref{teo:nets}. $W$ hits every homothet of $C$ which covers at least $\frac{k}{2}$ elements of $S$ so, in particular, it hits all homothets in $C_S$. We will use Lemma~\ref{teo:neighborhoodcover} to construct the desired cover using elements of $C_S$. For each $w\in W$ let $S_w=\lbrace s\in S\ |\ w\in C_s\rbrace$. The point set $S_w$ and the homothets $\lbrace C_p\rbrace_{p\in S_w}$ satisfy the hypotheses of Lemma~\ref{teo:neighborhoodcover}, so there is a subset $S_w'\subset S_w$ of size at most $c_2$ such that the collection of homothets $\lbrace C_p\rbrace_{p\in S_w'}$ covers $S_w$. Let $S_C=\bigcup_{w\in W} S_w'$; we claim that the collection of $k^-/S$-homothets $\lbrace C_p\rbrace_{p\in S_C}$ covers $S$. Indeed, for every $p\in S$, $C_p$ is hit by some element $w\in W$, whence $p\in S_w$ and $p\in\bigcup_{q\in S_w'} C_q\subset\bigcup_{q\in S_C} C_q$. Furthermore, $|S_C|\leqslant c_2|W|=O_d(\frac{|S|}{k})$, as desired.
Now, suppose that $\mu$ is $C$-nice and $K$ is a ball with $\mu(K)=\mu(\mathbb{R}^d)\geq k$. For every $p\in K$, let $C_p$ be an homothet of the form $\lambda C+p$ which has measure $k$ (again, it exists, since $\mu$ is non-$C$-degenerate and $\mu(K)\geq k$). Let $C_\mu=\lbrace C_p\text{ }|\text{ } p\in K\rbrace$ and consider a weak $\frac{k}{\mu(\mathbb{R}^d)}$-net for $(\mu,\mathcal{H}_C)$ of size $O(\frac{\mu(\mathbb{R}^d)}{k})$. From here, we can follow the argument in the previous paragraph to find a collection of $O_d(\frac{\mu(\mathbb{R}^d)}{k})$ $k^-/\mu$-homothets (in fact, of homothets of measure exactly $k$) which cover $K$. This concludes the proof.
\end{proof}
We remark that the result still holds if, instead of being non-$\frac{k}{2}/C$-degenerate, $S$ is non-$tk/C$-degenerate for some fixed $t\in(0,1)$. In fact, this condition can be dropped altogether in the case that $C$ is strictly convex. The implicit requirement that $\mu$ be non-$C$-degenerate could also be weakened: all that is needed is for no boundary of an homothet to have measure larger than $tk$ (again, for fixed $t\in(0,1)$).
The proof of Theorem~\ref{teo:fbound} (as well as Theorem \ref{teo:nets}) extends almost verbatim to weighted point sets. In the weighted case, the homothets are allowed to cover a collection of points with total weight at most $k$, and the result tells us that, as long as no boundary of an homothet contains points with total weight larger than $\frac{k}{2}$, $S$ can be covered using $O_d(\frac{w(S)}{k})$ such homothets, where $w(S)$ denotes the total weight of the points in $S$.
\subsection{Generalized covering density}\label{sec:coverdensity}
\begin{theorem}\label{teo:coverdensity}
Let $C\subset\mathbb{R}^d$ be a convex body and $\mu$ a non-$C$-degenerate measure such that $\mu(C)>0$ and $\mu(\mathbb{R}^d)=\infty$. Then $\delta_H(\mu,C)$ is bounded from above by a function of $d$.
\end{theorem}
\begin{proof}
For any Borel set $K\subset\mathbb{R}^d$ the \textit{restriction of $\mu$ to $K$}, $\mu|_K$, is defined by $\mu|_K(X)=\mu(X\cap K)$. Notice that if $K$ is bounded then $\mu|_K$ is $C$-nice.
At a high level, our strategy consists of choosing an infinite sequence of positive reals, $\lambda_0<\lambda_1<\lambda_2<\dots$, and constructing covers with homothets of measure $\mu(C)$ of each of the bounded regions $\lambda_0B^d$, $\lambda_1B^d\backslash\lambda_0B^d$, $\lambda_2B^d\backslash\lambda_1B^d,\dots$ using Theorem \ref{teo:fbound} so that the union of these covers has bounded lower density with respect to $\mu$. To be entirely precise, $\lambda_{i+1}$ will not be chosen until after the cover of $\lambda_{i}B^d\backslash\lambda_{i-1}B^d$ has been constructed. The main difficulty that arises is that, after applying Theorem \ref{teo:fbound} to the restriction of $\mu$ to a bounded set, some of the homothets in the resulting cover may have measure (with respect to $\mu$) larger than $\mu(C)$. Below, we describe a process that allows us to circumvent this issue. Here, the importance of defining $d_\text{low}$ as we did (back in Section \ref{sec:packcover}) will become clear.
Choose $\lambda_0>0$ such that $\mu(\lambda_0B^d)\geq\mu(C)$. Theorem \ref{teo:fbound} tells us that $f(C,\frac{\mu(C)}{2},\mu|_{\lambda_0B^d})\leq c_{f,d}\frac{\mu(\lambda_0B^d)}{\mu(C)}$, so $\lambda_0B^d$ can be covered using no more than $c_{f,d}\frac{\mu(\lambda_0B^d)}{\mu(C)}$ homothets of $C$ which have measure at most $\frac{\mu(C)}{2}$ with respect to $\mu|_{\lambda_0B^d}$. In fact, if all of them were $\mu(C)^-/\mu$-homothets we could apply a dilation to each so that every one had measure $\mu(C)$ with respect to $\mu$. The following lemma shows that any homothet of the cover whose measure is too large with respect to $\mu$ can be substituted by a finite number of $\mu(C)^-/\mu$-homothets which are not completely contained in $\lambda_0B^d$.
\begin{lemma}\label{teo:fixbadhom}
Let $B\subset\mathbb{R}^d$ be a ball with $\mu(B)\geq\mu(C)$ and $C'$ be an homothet of $C$ such that $\mu|_{B}(C')<\mu(C)$ but $C'\not\subset B$. Then $C'\cap B$ can be covered by a finite collection of $\mu(C)^-/\mu$-homothets of $C$, none of which is fully contained in $B$.
\end{lemma}
\begin{proof}
Of course, we may assume that $C'\cap B\neq\emptyset$ and $\mu(C')>\mu(C)$. Let $C''$ be an homothet with $\mu|_B(C')<\mu|_B(C'')<\mu(C)$ that results from applying a dilation to $C'$ with center in its interior; clearly, $C'\subsetneq C''$ and $\mu(C'')>\mu(C)$. Now, let $\overline{B}$ denote the closure of $B$ and, for each $p\in C'\cap\overline{B}$, consider an homothet $C_p$ with $\mu(C_p)=\mu(C)$ that is obtained by applying a dilation to $C''$ with center $p$. Since $\mu(C'')>\mu(C)$ and $p$ lies in the interior of $C''$, $C_p\subsetneq C''$ and $p$ belongs to the interior of $C_p$ (see figure \ref{fig:2}). We claim that $C_p$ is not fully contained in $B$. Indeed, if it were, we would have $C_p\subset B\cap C''$, but $\mu(B\cap C'')=\mu|_B(C'')<\mu(C)$, which contradicts the choice of $C_p$. Thus, for each point $p\in C'\cap\overline{B}$, $C_p$ has measure $\mu(C)$ with respect to $\mu$, it is not completely contained in $B$, and it covers an open neighborhood of $p$. The result now follows from the fact that $C'\cap\overline{B}$ is compact.
\end{proof}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.42]{Figure_02.pdf}
\caption{Configuration in the proof of Lemma \ref{teo:fixbadhom}.}
\label{fig:2}
\end{figure}
Apply Lemma \ref{teo:fixbadhom} (with $B=\lambda_0B^d$) to each of the aforementioned homothets and then enlarge each homothet in the cover until its measure with respect to $\mu$ is $\mu(C)$. This way, we obtain a finite cover $\mathcal{F}_0$ of $\lambda_0B^d$ by homothets of measure $\mu(C)$ with respect to $\mu$, of which at most $c_{f,d}\frac{\mu(\lambda_0B^d)}{\mu(C)}$ are fully contained in $\lambda_0B^d$.
Now, suppose that $\lambda_0<\lambda_1<\dots<\lambda_t$ have already been chosen so that there is a finite family $\mathcal{F}_t$ of homothets of measure $\mu(C)$ with respect to $\mu$ that covers $\lambda_tB^d$ and has the following property: at most $2c_{f,d}\frac{\mu(\lambda_iB^d)}{\mu(C)}$ of the homothets are fully contained in $\lambda_iB^d$ for every $i\in\{0,1,\dots,t\}$.
Choose $\lambda_{t+1}$ so that $2\lambda_t<\lambda_{t+1}$ and $\mu(\lambda_{t+1}B^d)\geq\frac{\mu(C)|\mathcal{F}_t|}{c_{f,d}}$ (the condition $\mu(\mathbb{R}^d)=\infty$ is crucial here). By Theorem \ref{teo:fbound}, $f(C,\frac{\mu(C)}{2},\mu|_{\lambda_{t+1}B^d})\leq c_{f,d}\frac{\mu(\lambda_{t+1}B^d)}{\mu(C)}$; consider a cover that achieves this bound. Again by Lemma \ref{teo:fixbadhom}, each homothet in the cover with measure larger than $\mu(C)$ with respect to $\mu$ can be substituted by a finite collection of homothets of measure at most $\mu(C)$ which are not fully contained in $\lambda_{t+1}B^d$, so that the homothets still cover $\lambda_{t+1}B^d$. After having carried out these substitutions, we enlarge each homothet in the cover so that it has measure $\mu(C)$ with respect to $\mu$ and then remove all homothets which are fully contained in $\lambda_tB^d$. The resulting family of homothets, which we denote by $\mathcal{F}_{t+1,\text{outer}}$, covers $\lambda_{t+1}B^d\backslash\lambda_tB^d$ and contains at most $c_{f,d}\frac{\mu(\lambda_{t+1}B^d)}{\mu(C)}$ homothets that lie completely inside $\lambda_{t+1}B^d$. Let $\mathcal{F}_{t+1}=\mathcal{F}_t\cup\mathcal{F}_{t+1,\text{outer}}$. $\mathcal{F}_{t+1}$ is a cover of $\lambda_tB^d\cup(\lambda_{t+1}B^d\backslash\lambda_tB^d)=\lambda_{t+1}B^d$ that consists of homothets of measure $\mu(C)$ with respect to $\mu$. Since no element of $\mathcal{F}_{t+1,\text{outer}}$ is a subset of $\lambda_tB^d$, there are no more than $2c_{f,d}\frac{\mu(\lambda_iB^d)}{\mu(C)}$ homothets fully contained in $\lambda_iB^d$ for every $i\in\{0,1,\dots,t\}$, and there are also no more than $|\mathcal{F}_t|+c_{f,d}\frac{\mu(\lambda_{t+1}B^d)}{\mu(C)}\leq2c_{f,d}\frac{\mu(\lambda_{t+1}B^d)}{\mu(C)}$ homothets contained in $\lambda_{t+1}B^d$.
Repeating this process, we obtain a sequence $\lambda_0<\lambda_1<\dots$ that goes to infinity and a sequence $\mathcal{F}_0\subset\mathcal{F}_1\subset\dots$ of collections of homothets of measure $\mu(C)$ with respect to $\mu$. Set $\mathcal{F}=\cup_{i=0}^\infty\mathcal{F}_i$, then $\mathcal{F}$ is a cover of $\mathbb{R}^d$ with homothets of measure $\mu(C)$ and, for $i=0,1,\dots$, we have that \[d_{\text{inn}}(\mu,\mathcal{F}|\lambda_iB^d)=\frac{1}{\mu(\lambda_iB^d)}\sum_{C'\in\mathcal{F},C'\subset \lambda_iB^d}\mu(C')\leq\frac{1}{\mu(\lambda_iB^d)}\frac{2c_{f,d}\mu(\lambda_{i}B^d)}{\mu(C)}\mu(C)=2c_{f,d},\] hence
\[d_\text{low}(\mu,\mathcal{F})=\liminf_{r\rightarrow\infty}d_{\text{inn}}(\mu,\mathcal{F}|rB^d)\leq2c_{f,d},\] and the result follows.
\end{proof}
Just as in the previous section, the result still holds as long as no boundary of an homothet has measure larger than $tk$ for some fixed $t\in(0,1)$. Our argument can also be slightly modified to yield a cover with lower density at most $(1+\epsilon)c_{f,d}$ for any $\epsilon>0$; this implies that $\Theta_H(\mu,C)\leq c_{f,d}$ (recall that $c_{f,d}$ is the hidden constant in Theorem \ref{teo:fbound}).
\section{Packing}\label{chap:pack}
\subsection{Packing in finite sets and measures}\label{sec:pack}
\begin{theorem}\label{teo:gbound}
Let $C\subset\mathbb{R}^d$ be a convex body. Then, for any positive integer $k$ and
any non-$\frac{k}{2}/C$-degenerate finite set of points $S\subset\mathbb{R}^d$, we have that $g(C,k,S)=\Omega_d(\frac{|S|}{k})$, where the hidden constant depends only on $d$. Similarly, for any positive real number $k$ and any $C$-nice measure $\mu$, $g(C,k,\mu)=\Omega_d(\frac{\mu(\mathbb{R}^d)}{k})$.
\end{theorem}
Again, we assume that $B^d\subseteq C\subseteq dB^d$ and $|S|\geq k$, and we begin by proving the result for point sets.
For each $p\in S$, denote by $C_{p}$ the smallest homothet of the form $\lambda C+p$ which contains at least $k$ points of $S$. All of the $C_p$'s are $k^+/S$-homothets of $C$ and, by the assumption that $S$ is non-$\frac{k}{2}/C$-degenerate, each of them contains less than $\frac{3k}{2}$ elements of $S$. For any subset $S'\subseteq S$, let $C_{S'}=\lbrace C_{p}\text{ }|\text{ }p\in S'\rbrace$. We require the following preliminary result.
\begin{claim}\label{teo:satelliteconfig}
There is a constant $c_3=c_3(d)$ with the following property: If $S'\subset S$ and $p_0\in S'$ is such that $C_{p_0}$ is of minimal size amongst the elements of $C_{S'}$, then $C_{p_0}$ has nonempty intersection with at most $c_3k$ other elements of $C_{S'}$.
\end{claim}
\begin{proof}
After translating, we may assume that $B^d\subseteq C_{p_0}\subseteq dB^d$. For any $r\in\mathbb{R}$, the number of translates of $\frac{1}{2}B^d$ required to cover $rdB^d$ depends only on $d$ and $r$ and, by the choice of $p_0$, every one of these balls of radius $\frac{1}{2}$ contains less than $k$ points of $S'$. Hence, $|rdB^d\cap S'|\leq c_{d,r}k$.
Assume, w.l.o.g., that $p_0=O$ and let $c(d,t')$ be as in Observation \ref{teo:tammes} for some small $t'=t'(d)$ to be specified later. For each $r$, denote by $S_r'\subset S'$ the set that consists of those points $p\in S'$ such that $p\notin rdB^d$ and $C_p$ intersects $C_{p_0}$. Since $C$ is $1/d$-fat, it is not hard to see that for some large enough $r_d$ (which depends only on $d$) the following holds: if $p_1,p_2\in S_{r_d}'$ are such that $|p_1|\geq|p_2|$ and $\frac{p_1}{|p_1|},\frac{p_2}{|p_2|}$ are at distance less than $t'$, then $p_2\in C_{p_1}$. We can then proceed along the lines of the proof of Lemma \ref{teo:neighborhoodcover} to show that $S_{r_d}'$ can be covered by no more than $c(d,t')$ elements of $C_{S'}$, which yields $|S_{r_d}'|\leq\frac{3}{2}c(d,t')k$. Hence, there are at most $(c_{d,r_d}+\frac{3}{2}c(d,t'))k$ elements of $C_{S'}$ which have nonempty intersection with $C_{p_0}$, and the result follows by setting $c_3=c_{d,r_d}+\frac{3}{2}c(d,t')$.
\end{proof}
We can now prove Theorem \ref{teo:gbound}.
\begin{proof}
Let $S'\subseteq S$. We show by induction on $|S'|$ that there is a packing formed by at least $\lfloor\frac{|S'|}{c_3k}\rfloor$ elements from $C_{S'}$ (if $|S'|<k$, set $C_{S'}=\emptyset$); since $C_S$ consists only of $k^+/S$-homothets of $C$, the result will follow immediately.
Our claim is trivially true if $|S'|<c_3k$. Let $S'\subseteq S$ with $|S'|\geq c_3k$ and assume that the result holds for all subsets with fewer than $|S'|$ elements. Choose $p_0\in S'$ so that $C_{p_0}$ is of minimal size amongst the elements of $C_{S'}$. Let $S_{p_0}=\{p\in S'\text{ }|\text{ }C_p\cap C_{p_0}\neq\emptyset\}$ and set $S''=S'-S_{p_0}$. Since $|S''|<|S'|$, the inductive hypothesis tells us that it is possible to choose $t\geq\lfloor\frac{|S''|}{c_3k}\rfloor$ points $p_1,p_2,\dots,p_t\in S''$ so that the homothets $C_{p_1},C_{p_2},\dots,C_{p_t}$ are pairwise disjoint. By the definition of $S''$, these homothets do not intersect $C_{p_0}$; this shows that we can choose $t+1$ disjoint homothets from $C_{S'}$. By Claim \ref{teo:satelliteconfig}, $|S''|\geq |S'|-c_3k$ and hence $t\geq\lfloor\frac{|S'|}{c_3k}\rfloor-1$, which yields the result.
Now, suppose that $\mu$ is $C$-nice and $K$ is a ball with $\mu(K)=\mu(\mathbb{R}^d)>k$. For each $p\in K$, define $C_{p}$ as the smallest homothet of the form $\lambda C+p$ which has measure $k$ and, for $K'\subseteq K$, let $C_{K'}=\lbrace C_{p}\text{ }|\text{ }p\in K'\rbrace$. Claim \ref{teo:satelliteconfig} can be easily adapted to work with measures, which then allows us to proceed as in the previous paragraph (except that we now induct on $\mu(K')$) to prove the measure-theoretic version of Theorem \ref{teo:gbound}.
\end{proof}
Similarly to Theorem \ref{teo:fbound}, the non-$\frac{k}{2}/C$-degeneracy condition on $S$ can be relaxed to non-$tk/C$-degeneracy for some fixed $t>0$, and the non-$C$-degeneracy of $\mu$ can be replaced by the weaker requirement that no boundary of an homothet has measure larger than $tk$. Again, the proof extends to suitable weighted point sets.
In similar fashion to the proof of the Besicovitch covering theorem, it is also possible to derive Theorem \ref{teo:fbound} by adapting the technique above. Indeed, we could have defined $C_{p}$ to be the smallest homothet of the form $\lambda C+p$ that contains at least $\frac{k}{2}$ points of $S$. The proof of Claim \ref{teo:satelliteconfig} would then yield a collection of $c_3$ $k^-/S$-homothets of $C$ that covers the set $S_{p_0}=\{p\in S\text{ }|\text{ }C_p\cap C_{p_0}\neq\emptyset\}$. We add these $O_d(1)$ homothets to the cover and add all the elements of $C_{p_0}\cap S$ to an initially empty set $P$. Now, consider $p_1\in S-S_{p_0}$ such that the size of $C_{p_1}$ is minimal and go through the same steps as before. This process is then repeated as long as $S$ is not yet fully covered. At least $\frac{k}{2}$ new elements are added to $P$ with each iteration, so the number of homothets in the final cover is no more than $\frac{2n}{k}\cdot O_d(1)=O_d(\frac{n}{k})$, as desired. The proof presented in Section \ref{chap:cover}, however, will lead to a randomized algorithm for approximating $C$-$k$-COVER in Section \ref{sec:algorithms}.
\subsection{Generalized packing density}\label{sec:packdensity}
\begin{theorem}\label{teo:packdensity}
Let $C\subset\mathbb{R}^d$ be a convex body and $\mu$ a non-$C$-degenerate measure with $\mu(C)>0$ and $\mu(\mathbb{R}^d)>\mu(C)$. Then $\Theta_H(\mu,C)$ is bounded from below by a function of $d$.
\end{theorem}
\begin{proof}
If $\mu(\mathbb{R}^d)<\infty$, the result follows readily by applying Theorem \ref{teo:gbound} to the restriction of $\mu$ to sufficiently large balls and then shrinking some homothets if necessary, so we assume that $\mu(\mathbb{R}^d)=\infty$. The strategy that we follow is similar to the one used for Theorem \ref{teo:coverdensity}.
Choose $\lambda_0>0$ so that $\mu(\lambda_0B^d)\geq\mu(C)$. By Theorem \ref{teo:gbound}, $g(C,\mu(C),\mu|_{\lambda_0B^d})\geq c_{g,d}\frac{\mu(\lambda_0B^d)}{\mu(C)}$, so there is a collection of at least $c_{g,d}\frac{\mu(\lambda_0B^d)}{\mu(C)}$ interior disjoint $\mu(C)^+/\mu|_{\lambda_0B^d}$-homothets of $C$. Each homothet in this collection contains another homothet that has nonempty intersection with $\lambda_0B^d$ and whose measure with respect to $\mu$ is exactly $\mu(C)$. These smaller homothets form a finite packing, which we denote by $\mathcal{F}_0$.
Assume that we have already chosen $\lambda_0<\lambda_1<\dots<\lambda_t$ so that there is a finite packing $\mathcal{F}_t$ composed by homothets of measure $\mu(C)$ and at least $c_{g,d}\frac{\mu(\lambda_iB^d)}{2\mu(C)}$ of them have nonempty intersection with $\lambda_iB^d$ for every $i\in\{0,1,\dots,t\}$.
Let $\lambda_{\mathcal{F}_t}>\lambda_t$ be such that all homothets of $\mathcal{F}_t$ are fully contained in $\lambda_{\mathcal{F}_t}B^d$. Denote the region $(\lambda_{\mathcal{F}_t}+1)B^d\backslash\lambda_{\mathcal{F}_t}B^d$ by $R$ and, for each $l>0$, let $\mu_l$ be the measure defined by \[\mu_l(X)=\mu(X\backslash(\lambda_{\mathcal{F}_t}+1)B^d)+l\ \text{vol}(X\cap R).\]
\begin{claim}\label{teo:barrier}
If $l$ is large enough, then any homothet that intersects both $\lambda_{\mathcal{F}_t}\mathbb{S}^{d-1}$ and $(\lambda_{\mathcal{F}_t}+1)\mathbb{S}^{d-1}$ has measure larger than $\frac{3}{2}\mu(C)$ with respect to $\mu_l$.
\end{claim}
\begin{proof}
The claim follows from the fact that the volume of any homothet as in the statement is bounded away from $0$. This last observation can be proven by a simple compactness argument.
\end{proof}
Let $l$ be such that the property in Claim \ref{teo:barrier} holds and choose $\lambda_{t+1}$ so that $2\lambda_t<\lambda_{t+1}$, $\lambda_{\mathcal{F}_t}<\lambda_{t+1}$ and \[\mu_l(\lambda_{t+1}B^d)\geq\frac{3\mu(\lambda_{t+1}B^d)}{4} +\frac{3l\,\text{vol}(R)}{c_{g,d}}\] (this is possible, since we assumed that $\mu(\mathbb{R}^d)=\infty$). Theorem \ref{teo:gbound} tells us that $g(C,\frac{3}{2}\mu(C),\mu_l|_{\lambda_{t+1}B^d})\geq c_{g,d}\frac{2\mu_l(\lambda_{t+1}B^d)}{3\mu(C)}$; consider a packing by $\frac{3}{2}\mu(C)^+/\mu_l$-homothets which has at least this many elements. Since the homothets are interior disjoint, this packing contains at most $\frac{2l\,\text{vol}(R)}{\mu(C)}$ homothets $C'$ with $l\,\text{vol}(C'\cap R)\geq\frac{1}{2}\mu(C)$, which we remove from the collection. By the choice of $l$, none of the remaining homothets intersects $\lambda_tB^d$ and each of them has measure at least $\mu(C)$ with respect to $\mu$. Shrinking each homothet we obtain a packing $\mathcal{F}_{t+1,\text{outer}}$ formed by homothets of measure $\mu(C)$ with respect to $\mu$, and it has at least \[\frac{2c_{g,d}}{3\mu(C)}\left(\frac{3\mu(\lambda_{t+1}B^d)}{4} +\frac{3l\,\text{vol}(R)}{c_{g,d}}\right)-\frac{2l\,\text{vol}(R)}{\mu(C)}=\frac{c_{g,d}}{2}\frac{\mu(\lambda_{t+1}B^d)}{\mu(C)}\] elements. Let $\mathcal{F}_{t+1}=\mathcal{F}_t\cup\mathcal{F}_{t+1,\text{outer}}$; this is a packing with homothets of measure $\mu(C)$ with respect to $\mu$, and it contains at least $\frac{c_{g,d}}{2}\frac{\mu(\lambda_{i}B^d)}{\mu(C)}$ elements which have nonempty intersection with $\lambda_iB^d$ for each $i\in\{0,1,\dots,{t+1}\}$.
Repeating this process, we obtain a sequence $\lambda_0<\lambda_1<\dots$ that goes to infinity and a sequence $\mathcal{F}_0\subset\mathcal{F}_1\subset\dots$ of packings with homothets of measure $\mu(C)$ with respect to $\mu$. Set $\mathcal{F}=\cup_{i=0}^\infty\mathcal{F}_i$, then $\mathcal{F}$ is a packing with homothets of measure $\mu(C)$ and, for $i=0,1,\dots$, we have that \[d_{\text{out}}(\mu,\mathcal{F}|\lambda_iB^d)=\frac{1}{\mu(\lambda_iB^d)}\sum_{C'\in\mathcal{F},C'\cap \lambda_iB^d\neq\emptyset}\mu(C')\geq\frac{1}{\mu(\lambda_iB^d)}\frac{c_{g,d}\mu(\lambda_{i}B^d)}{2\mu(C)}\mu(C)=\frac{c_{g,d}}{2},\] thus
\[d_\text{upp}(\mu,\mathcal{F})=\limsup_{r\rightarrow\infty}d_{\text{out}}(\mu,\mathcal{F}|rB^d)\geq\frac{c_{g,d}}{2},\] as desired.
\end{proof}
Again, the result holds as long as no boundary of an homothet has measure larger than $tk$ for some fixed $t\in(0,1)$. As in the proof of Theorem \ref{teo:coverdensity}, our argument can be slightly modified to show that $\delta_H(\mu,C)\geq c_{g,d}$ (where $c_{g,d}$ is the hidden constant in Theorem \ref{teo:gbound}).
\section{Algorithms and complexity}\label{chap:computational}
\subsection{Algorithms}\label{sec:algorithms}
In this section we describe algorithms for approximating $B^d$-$k$-COVER and $B^d$-$k$-PACK (defined in Section \ref{sec:overview6}) up to a multiplicative constant that depends on $d$. The algorithms also provide either a cover with $k^-/S$ balls or a packing with $k^+/S$ balls of the corresponding size. The algorithms essentially recreate the constructive proofs of Theorems \ref{teo:gbound} and \ref{teo:fbound}.
We first present a randomized algorithm for approximating $B^d$-$k$-COVER. Given a finite point set $P\subset\mathbb{R}^d$, denote by $r_\text{opt}(P,k)$ the radius of the smallest ball that contains at least $k$ points of $P$. The following result of Har-Peled and Mazumdar \cite{k-enclosing} (see also Section 1 in \cite{geometricapproximation}) will be key.
\begin{theorem}\label{teo:approx-k-ball}
Given a set $P\subset\mathbb{R}^d$ of $n$ points and an integer parameter $k$, we can find, in expected $O_d(n)$ time, a ($d$-dimensional) ball of radius at most $2r_\text{opt}(P,k)$ which contains at least $k$ points of $P$.
\end{theorem}
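The routine behind Theorem~\ref{teo:approx-k-ball} is involved; for intuition, the following simple $O(n^2\log n)$ Python sketch already achieves the same factor-$2$ guarantee by restricting the center to the input points: if the optimal ball of radius $r_\text{opt}$ contains $k$ points, then any one of those points is within $2r_\text{opt}$ of the others, so the best point-centered ball has radius at most $2r_\text{opt}$. This is a hedged stand-in for exposition, not the Har-Peled--Mazumdar algorithm.

```python
import math

def point_centered_k_ball(points, k):
    # For each candidate center p in P, the smallest ball centered at p
    # containing k points has radius equal to the k-th smallest distance
    # from p (counting p itself at distance 0).  Taking the best p gives
    # radius <= 2 * r_opt(P, k): any point of an optimal ball sees the
    # other k - 1 points of that ball within twice its radius.
    best_center, best_radius = None, float("inf")
    for p in points:
        r = sorted(math.dist(p, q) for q in points)[k - 1]
        if r < best_radius:
            best_center, best_radius = p, r
    return best_center, best_radius

# Toy input: a loose triangle plus a tight cluster of three points.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
center, radius = point_centered_k_ball(pts, 3)  # picks the tight cluster
```

The expected-linear-time bound of the theorem is what makes the overall algorithm fast; this quadratic surrogate only illustrates the approximation guarantee.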
\begin{theorem}\label{teo:coveralg}
Let $S\subset\mathbb{R}^d$ be a set of $n$ points. There is an algorithm that finds a covering of $S$ formed by $O_d(\frac{n}{k})$ $k^-/S$-homothets of $B^d$ in expected $O_d(\frac{n^2}{k})$ time.
\end{theorem}
\begin{proof}
By repeated applications of Theorem \ref{teo:approx-k-ball} we can find, in expected $O(\frac{n}{k}\cdot n)$ time, a sequence $B_1,B_2,\dots,B_t$ of balls and a sequence $S=S_1\supset S_2\supset\dots\supset S_{t+1}=\emptyset$ (with $t\leq\lceil\frac{2n}{k}\rceil$) such that each $B_i$ has radius at most $2r_\text{opt}(S_i,k/2)$, contains at least $k/2$ points of $S_i$ and satisfies $S_i\cap B_i=S_i-S_{i+1}$.
For each $B_i$, we can construct a set $P_{B_i}$ as in Lemma \ref{teo:hit} in $O_d(1)$ time. The union $W$ of these $t$ sets forms a weak $\frac{k}{2n}$-net for $(S,\mathcal{H}_B|_S)$ (see Theorem \ref{teo:nets}). As in the proof of Theorem \ref{teo:fbound}, for each $p\in S$ let $B_p$ be the smallest ball of the form $\lambda B^d+p$ which covers at least $\frac{k}{2}$ points of $S$ (if $S$ is not in $\frac{k}{2}/S$-general position, we might have to perturb $B_p$ slightly so that it contains no more than $k$ points); we do not compute any of these balls at this point in time. Each $B_p$ contains at least one element of $W$, and we can find one such $w_p\in W$ in $O_d(|W|)=O_d(\frac{n}{k})$ time by simply choosing from $W$ a point that minimizes the distance to $p$. This is repeated for every $p\in S$.
For every $w\in W$, let $S_w=\{p\in S\text{ }|\text{ }w_p=w\}$. Select from $S_w$ the point $p$ that is the furthest away from $w$ and compute the ball $B_p$. This can be done in $O_d(n)$ time, even in the case that a small perturbation is required, by looking at the distances from $p$ to each other element of $S$. Add $B_p$ to the final cover, remove the points in $B_p$ from $S_w$, and repeat until $S_w$ is empty. As can be seen from the proof of Lemma \ref{teo:neighborhoodcover}, the process ends after $O_d(1)$ iterations.
Repeat the scheme above for every $w\in W$ to obtain a cover with the desired properties. This takes $O_d(\frac{n}{k}\cdot n)$ time and, thus, the expected running time of the whole algorithm is $O_d(\frac{n^2}{k})$. See Section \ref{sec:covering} for some omitted details.
\end{proof}
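The first, peeling stage of the algorithm above can be sketched in a few lines of Python. The sketch below uses the naive point-centered ball chooser as a stand-in for Theorem~\ref{teo:approx-k-ball}; it already produces at most $\lceil 2n/k\rceil$ balls covering $S$, though without the weak-net bookkeeping it does not certify the $k^-/S$ property of the final cover.

```python
import math, random

def peel_cover(points, k):
    # Stage one of the covering algorithm: while points remain, pick a
    # good ball containing at least k/2 of the remaining points (here via
    # the naive point-centered chooser) and peel off everything it covers.
    # Each round removes >= min(k/2, #remaining) points, so at most
    # ceil(2n/k) balls are produced.
    half = max(1, k // 2)
    remaining = list(points)
    balls = []
    while remaining:
        m = min(len(remaining), half)
        c, r = min(((p, sorted(math.dist(p, q) for q in remaining)[m - 1])
                    for p in remaining), key=lambda cr: cr[1])
        balls.append((c, r))
        remaining = [q for q in remaining if math.dist(c, q) > r + 1e-9]
    return balls

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
balls = peel_cover(pts, 10)  # at most ceil(2 * 100 / 10) = 20 balls
```

In the actual algorithm, these peeled balls only seed the weak net $W$; the cover itself is then assembled from the balls $B_p$ via Lemma~\ref{teo:neighborhoodcover}.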
\begin{theorem}
Let $S\subset\mathbb{R}^d$ be a set of $n$ points. There is an algorithm that computes a packing formed by $\Omega_d(\frac{n}{k})$ $k^+/S$-homothets of $B^d$ in $O_d(n^2)$ time.
\end{theorem}
\begin{proof}
Following the proof of Theorem \ref{teo:gbound}, for each $p\in S$ let $B_{p}$ be the smallest homothet of the form $\lambda B^d+p$ which contains at least $k$ points of $S$ (as in the previous algorithm, we might have to perturb it slightly so that it contains no more than $\frac{3k}{2}$ points) and, for $S'\subseteq S$, set $B_{S'}=\lbrace B_{p}\text{ }|\text{ }p\in S'\rbrace$. Compute all the elements of $B_{S}$ in total $O_d(n^2)$ time and find a point $p_0\in S$ such that $B_{p_0}$ is of minimal radius. Add $B_{p_0}$ to the packing. By Claim \ref{teo:satelliteconfig}, there are at most $c_3k$ points $p\in S$ such that $B_{p}$ intersects $B_{p_0}$ and, given the radius of each $B_{p}$, we can compute in linear time the set $S_{p_0}\subset S$ formed by all of these points. Now, we find a point $p_1\in S-S_{p_0}$ such that $B_{p_1}$ is of minimal radius, add it to the packing, and repeat the process above for as long as possible. At the end, we get a packing composed of $\Omega_d(\frac{n}{k})$ balls which contain at least $k$ points of $S$. Each of the (at most) $\frac{n}{k}$ iterations takes $O_d(n)$ time, so the running time of the algorithm is dominated by the $O_d(n^2)$ time that it takes to compute the elements of $B_{S}$.
\end{proof}
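A minimal Python sketch of this greedy packing for $d=2$ follows. For balls, the smallest homothet $B_p$ centered at $p$ containing at least $k$ points has radius equal to $p$'s $k$-th nearest-neighbor distance (counting $p$ itself), and scanning the disks by increasing radius while keeping each one that is disjoint from all kept disks is equivalent to the repeated minimal-radius selection in the proof. The quadratic all-pairs distance computation and the example data are our own simplifications.

```python
import math, random

def greedy_packing(points, k):
    # For each p, B_p is the smallest disk centered at p containing at
    # least k points; its radius is the k-th smallest distance from p
    # (p itself counts, at distance 0).  Scanning by increasing radius
    # and keeping a disk whenever it is interior-disjoint from the kept
    # ones mimics the minimal-radius selection of the proof.
    disks = sorted(((p, sorted(math.dist(p, q) for q in points)[k - 1])
                    for p in points), key=lambda d: d[1])
    packing = []
    for c, r in disks:
        if all(math.dist(c, c2) >= r + r2 - 1e-9 for c2, r2 in packing):
            packing.append((c, r))
    return packing

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]
packing = greedy_packing(pts, 5)  # interior-disjoint disks, each with >= 5 points
```

By construction every kept disk contains at least $k$ points of $S$, and the theorem guarantees that $\Omega_d(n/k)$ disks survive the scan.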
In the same way that the proof of Theorem \ref{teo:gbound} can be adapted to obtain an upper bound for $f$ (see the last paragraph of Section \ref{sec:covering}), we can also modify the algorithm above to get the following result.
\begin{theorem}
Let $S\subset\mathbb{R}^d$ be a set of $n$ points. There is an algorithm that computes, in $O_d(n^2)$ time, a cover of $S$ formed by $O_d(\frac{n}{k})$ $k^-/S$-homothets of $B^d$.
\end{theorem}
\subsection{Complexity}\label{sec:complexity}
As mentioned in Section \ref{sec:previouswork}, Bereg et al. \cite{matchingnp} showed that if $C$ is a square then deciding whether $g(C,2,S)=\frac{|S|}{2}$ is NP-hard. We prove a similar result for $C$-$k$-COVER.
\begin{theorem}\label{teo:complexity}
Let $C$ be a square and $k$ a positive multiple of $4$. Then $C$-$k$-COVER is NP-hard. In fact, it is NP-hard to determine whether $f(C,k,S)=\frac{|S|}{k}$ or not.
\end{theorem}
\begin{proof}
Suppose that $C$ is a square. We provide a polynomial-time reduction from $3$-SAT\footnote{$3$-SAT consists of determining the satisfiability of a Boolean formula in conjunctive normal form where each clause has three literals. $3$-SAT is well known to be NP-complete.} to $C$-$4$-COVER. The construction can easily be adapted to work for any $k$ that is a multiple of $4$.
Suppose we are given an instance of $3$-SAT. To each variable we will assign a collection of points with integer coordinates which form a sort of loop; the number of points in each of these loops will be a multiple of $4$. For each clause, there will be two smaller loops, also formed by integer points; the number of points in each of these two loops will be even, but not a multiple of $4$. The total number of points will thus be a multiple of $4$, say, $4m$. We will call a square \textit{good} if it covers exactly $4$ points. The goal is to construct the loops in such a way that the Boolean formula is satisfiable if and only if the points can be covered by $m$ good squares. Such a collection of squares will be referred to as a \textit{good cover}. Note that in a good cover each point is covered by exactly one square. For an overview of the construction, see figure \ref{fig:3}.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.75]{Figure_03.pdf}
\caption{Overview of the layout of the variable loops (black) and clause loops (red).}
\label{fig:3}
\end{figure}
At each crossing between two variable loops the points are arranged as in figure \ref{fig:4}. By spacing the loops appropriately and constructing their topmost sections at slightly different heights, we ensure that any square covering points from two different variable loops covers either more than $4$ points or covers a crossing between those two loops. The configuration of the points at each crossing makes it so that every good square which contains points from two variable loops covers exactly two points from each of those loops.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.44]{Figure_04.pdf}
\caption{Top left: placement of the points around a crossing between two variable loops. The other pictures depict all the essentially different ways in which a good square can cover the crossing.}
\label{fig:4}
\end{figure}
Figure \ref{fig:5} depicts the gadget used to simulate each clause. The configuration inside each of the $6$ red circles is designed so that any good square (inside the circle) which covers points from both the clause loop and the corresponding variable loop covers precisely two points from each. This way, any good square will cover an even number of points from each variable loop. The points of each variable loop are labeled (in order) from $1$ to $4t$ (for some $t$ that depends on the loop). We say that a good cover \textit{assigns} the value \textit{true} (resp. \textit{false}) to a variable if any two points labeled $2s$ and $2s+1$ (resp. $2s+1$ and $2s+2$) in the corresponding loop are contained in the same square, where the indices are taken modulo the total number of points in the loop. Clearly, a good cover assigns exactly one Boolean value to each variable. The points inside $c_{x,1}$ can be arranged so that if a good square that is contained in $c_{x,1}$ covers points from both the clause loop and the variable loop that corresponds to variable $x$, then it contains the points labeled with $4s$ and $4s+1$ if $x$ is not negated in the clause, or it contains the points labeled with $4s+1$ and $4s+2$ if $x$ appears in negated form ($\neg x$). Similarly, the points in $c_{x,2}$ are placed so that a good square which covers points from both the variable and the clause loops covers the points labeled as $4s+2$ and $4s+3$ if $x$ is not negated, or the points $4s+3$ and $4s+4$ if $x$ is negated. The points in $c_{y,1},c_{y,2},c_{z,1}$ and $c_{z,2}$ are arranged analogously.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.7]{Figure_05.pdf}
\caption{Placement of the points around each clause loop. Up to reflection and rotation, the points inside each of the six blue circles are arranged as shown on the right. There is essentially a unique way of placing a good square that covers points from both the clause loop and the corresponding variable loop.}
\label{fig:5}
\end{figure}
Since the number of points of the clause loop is even but not a multiple of $4$, in any good cover there must be a square that contains two points from said loop and two points from one of the three corresponding variable loops. The construction described in the last paragraph ensures that this is possible only if the cover assigns to one of the three variables the value that makes the clause true. Since this holds for all clause loops simultaneously, this shows that in order for a good cover to exist the formula must be satisfiable. The converse is true as well. Suppose that the formula is satisfiable and consider an assignment of Boolean values that satisfies it. For every clause choose a variable that has been assigned the correct value (with respect to the clause). Each variable loop can be covered by good squares which assign to it the correct value and such that one of these squares covers two points from each clause loop for which the variable was chosen (again, this is possible by the construction described above). The only thing that could go wrong when covering the variable loops is for the number of squares that cover two points from the variable loop to be odd, but this will not happen, since each variable corresponds to two loops and the number of crossings between any two variable loops is even. Since exactly two points from each clause loop have been covered, the number of points that still need to be covered in each clause loop is a multiple of four, so we can easily extend this collection of good squares to a good cover. We have shown that the initial formula is satisfiable if and only if the point set admits a good cover.
It is not hard to see that the reduction can be carried out in a grid of polynomial dimensions and in polynomial time. This concludes the proof.
\end{proof}
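The parity mechanism driving the reduction can be isolated and checked by brute force: an even cycle admits exactly two partitions of its vertices into pairs of consecutive points, which mirrors the fact that a good cover assigns exactly one Boolean value to each variable loop. A minimal Python sketch (the function name is ours, and all geometry is abstracted away):

```python
from itertools import combinations

def pair_partitions_of_cycle(n):
    """Count ways to split the vertices of the cycle C_n into adjacent pairs."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    count = 0
    for chosen in combinations(edges, n // 2):
        used = [v for e in chosen for v in e]
        if len(set(used)) == n:  # the chosen pairs are disjoint and cover C_n
            count += 1
    return count

# Exactly two partitions for every even cycle: the "true" and the "false" tiling.
for n in (4, 6, 8, 10):
    assert pair_partitions_of_cycle(n) == 2
```

The two partitions correspond to the squares covering the pairs $\{2s,2s+1\}$ or the pairs $\{2s+1,2s+2\}$ along the loop.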
\section{Matching points with homothets}\label{chap:matching}
\subsection{Toughness of {D}elaunay triangulations}\label{sec:tough}
\begin{theorem}\label{teo:tough}
Let $C\subset\mathbb{R}^2$ be an $\alpha$-fat strictly convex body with smooth boundary and $S\subset\mathbb{R}^2$ be a finite point set in $C$-general position such that no three points of $S$ lie on the same line. If $U\subset S$, then $D_C(S)-U$ has fewer than \[\frac{450\degree-4\arcsin{\alpha}}{\arcsin{\alpha}}|U|+\frac{2\arcsin{\alpha}-90\degree}{\arcsin{\alpha}}\] connected components.
Of course, the result holds as long as $C$ can be made $\alpha$-fat by an affine transformation.
\end{theorem}
Note that as $\alpha$ goes to $1$ we get that $D_C(S)$ is $1$-tough, as was shown in \cite{simpletough} for Delaunay triangulations with respect to disks. We will need the following geometric lemma, which generalizes a well-known angular property of standard Delaunay triangulations.
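As a quick numerical sanity check (not part of the proof), the bound of Theorem~\ref{teo:tough} can be evaluated directly: as $\alpha$ tends to $1$, $\arcsin\alpha$ tends to $90\degree$ and the bound tends to $|U|+1$, which is exactly $1$-toughness. A sketch in Python, with the function name ours:

```python
import math

def component_bound(alpha, u):
    """Exclusive upper bound on the number of components of D_C(S) - U,
    as in Theorem [teo:tough]; angles are measured in degrees."""
    a = math.degrees(math.asin(alpha))  # arcsin(alpha) in degrees
    return (450 - 4 * a) / a * u + (2 * a - 90) / a

# As alpha -> 1 the bound tends to |U| + 1, i.e. at most |U| components.
for u in (1, 5, 100):
    assert abs(component_bound(1.0, u) - (u + 1)) < 1e-9

# Fatter bodies (alpha closer to 1) give a stronger bound.
assert component_bound(0.9, 10) < component_bound(0.5, 10)
```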
\begin{lemma}\label{teo:angles}
Let $C\subset\mathbb{R}^2$ be an $\alpha$-fat convex body and $S\subset\mathbb{R}^2$ be a finite point set. Suppose that $abc$ and $cda$ are two adjacent bounded faces of $D_C(S)$. We have that \[\measuredangle abc+\measuredangle cda\leq360\degree-2\arcsin{\alpha}.\]
\end{lemma}
\begin{proof}
The points $b$ and $d$ lie on different sides of the line that goes through $a$ and $c$. Also,
since $(a,c)$ is an edge of $D_C(S)$, there is an homothet $C'$ of $C$ that contains $a$ and $c$ but contains neither $b$ nor $d$; in fact, we can choose $C'$ so that $a$ and $c$ lie on its boundary. This is all the information that we need in order to deduce the result.
By translating and rescaling, we may assume that $\alpha B^2\subset C'\subset B^2$. The points $a$ and $c$ are not contained in the interior of $\alpha B^2$, since they lie on the boundary of $C'$. The fact that $C'$ is convex implies that the convex hull $\text{conv}(\alpha B^2\cup\{a,c\})$ contains neither $b$ nor $d$ (see figure \ref{fig:6} a). It is possible to slide $b$ and $d$ until they lie on the boundary of $\text{conv}(\alpha B^2\cup\{a,c\})$ without decreasing the values of $\measuredangle abc$ and $\measuredangle cda$, so we may and will assume that they lie on said boundary. By a similar argument, it suffices to prove the inequality under the assumption that $a$ and $c$ lie on the boundary of $B^2$ (see figure \ref{fig:6} b).
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.6]{Figure_06.pdf}
\caption{Configuration in the proof of Lemma \ref{teo:angles}}
\label{fig:6}
\end{figure}
It is not hard to see that $\measuredangle abc$ grows larger as $b$ gets closer to either $a$ or $c$. Similarly, $\measuredangle cda$ grows larger as $d$ gets closer to either $a$ or $c$. Thus, $\measuredangle abc+\measuredangle cda\leq 360\degree-\Theta$, where $\Theta$ is the measure of the angle at $a$ (or, equivalently, $c$) of $\text{conv}(\alpha B^2\cup\{a,c\})$. A simple calculation shows that $\Theta\geq 2\arcsin\alpha$, with equality if and only if the segment joining $a$ to $c$ goes through the closure of $\alpha B^2$.
\end{proof}
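The final inequality $\Theta\geq2\arcsin\alpha$ can be checked numerically by approximating $\alpha B^2$ with a fine inscribed polygon and measuring the hull angle at $a$ directly. This is only an illustration under our own discretization; all helper names are ours:

```python
import math

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def hull_angle_at(hull, v):
    """Interior angle of a convex polygon at the vertex v."""
    i = hull.index(v)
    p, q = hull[i-1], hull[(i+1) % len(hull)]
    ang = lambda w: math.atan2(w[1]-v[1], w[0]-v[0])
    d = abs(ang(p) - ang(q)) % (2*math.pi)
    return min(d, 2*math.pi - d)

alpha = 0.6
# Fine polygon inscribed in the disk alpha*B^2.
disk = [(alpha*math.cos(2*math.pi*i/2000), alpha*math.sin(2*math.pi*i/2000))
        for i in range(2000)]
a = (1.0, 0.0)                     # a on the unit circle
for t in (0.3, 1.0, 2.0, 3.0):     # several positions of c on the unit circle
    c = (math.cos(t), math.sin(t))
    theta = hull_angle_at(convex_hull(disk + [a, c]), a)
    assert theta >= 2*math.asin(alpha) - 1e-2   # Theta >= 2 arcsin(alpha)
```

The tolerance absorbs the discretization of the disk; equality occurs precisely when $c$ lies inside the tangent cone from $a$ to $\alpha B^2$.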
Instead of trying to prove Theorem~\ref{teo:tough} directly, we first bound the size of an independent set\footnote{A set of vertices of a graph forms an \textit{independent set} if no two of them are adjacent.} in $D_C(S)$; Theorem~\ref{teo:tough} will then follow easily.
\begin{theorem}\label{teo:independent}
Let $C$ and $S$ be as in the statement of Theorem~\ref{teo:tough} and $I\subset S$ an independent set of vertices of $D_C(S)$. Then \[|I|<\frac{450\degree-4\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}}|S|-\frac{90\degree-2\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}}.\]
\end{theorem}
\begin{proof}
Let $S'=S\backslash I$ and notice that at least one vertex $u$ of the outer face of $D_C(S)$ must belong to $S'$. For each edge of $D_C(S)$ consider an homothet of $C$ that contains its endpoints and no other element of $S$, and take two points $v,w\notin S$ which are not contained in any of those homothets and such that the triangle with vertices $u,v$ and $w$ contains all points of $S$. By the choice of $v$ and $w$, the Delaunay triangulation $D_C(S\cup\{v,w\})$ contains $D_C(S)$ as a subgraph (see figure \ref{fig:7}). Let $D'$ be the subgraph of $D_C(S\cup\{v,w\})$ induced by $S'\cup\{v,w\}$. Since $I$ is an independent set of $D_C(S\cup\{v,w\})$ and contains no vertex of the outer face, each point in $I$ corresponds to a bounded face of $D'$ which is bounded by a cycle and is not a face of $D_C(S\cup\{v,w\})$. The previous observation shows, in particular, that $D'$ is connected. Following the terminology in~\cite{simpletough}, we classify the bounded faces of $D'$ as \textit{good faces} if they are also faces of $D_C(S\cup\{v,w\})$, and as \textit{bad faces} if they contain a point of $I$; note that each bounded face falls in exactly one of these two categories. Let $g$ and $b=|I|$ be the number of good and bad faces, respectively.
We will assign some \textit{distinguished angles} to each edge of $D'$. If $(p,q)$ is an interior edge of $D'$ then it is incident to two bounded faces $pqr$ and $qps$ of $D_C(S\cup\{v,w\})$; we assign the edge $(p,q)$ to the angles $\angle qrp$ and $\angle psq$. Each exterior edge $(p,q)$ is incident to a single such face $pqr$; we assign $(p,q)$ to $\angle qrp$ (see figure \ref{fig:7}). On one hand, all three angles of any good face are distinguished and add up to $180\degree$. On the other hand, every bad face contains a point of $I$ and all angles of $D_C(S\cup\{v,w\})$ which are anchored at that point are distinguished and add up to $360\degree$. The total measure of the distinguished angles is thus \[T=g\cdot180\degree+b\cdot360\degree.\]
This quantity can also be bounded using Lemma~\ref{teo:angles}, as follows. Each edge of $D'$ is assigned to at most two distinguished angles, which have total measure at most $360\degree-2\arcsin{\alpha}$ (indeed, this is trivial if there is only one such angle, and it follows from the lemma if there are two). By Euler's formula, the number of edges of $D'$ is $|S'\cup\{v,w\}|+(b+g+1)-2=|S|+g+1$. Each of the three edges on the outer face is assigned to only one angle, so summing over all edges we get \[T<(360\degree-2\arcsin{\alpha})(|S|+g-2)+3\cdot 180\degree,\] whence \[g\cdot180\degree+b\cdot360\degree<(360\degree-2\arcsin{\alpha})(|S|+g-2)+540\degree.\] Since each element of $I$ is incident to at least three faces of the triangulation $D_C(S\cup\{v,w\})$ we get, again by Euler's formula, that \[3(|S|+2)-6\geq g+3b,\] so $g\leq 3(|S|-b)$. We momentarily set $\beta=2\arcsin{\alpha}$; the two inequalities then yield \[b\cdot 360\degree<(360\degree-\beta)|S|+(180\degree-\beta)(3|S|-3b)-2(360\degree-\beta)+540\degree,\] \[(900\degree-3\beta)b<(900\degree-4\beta)|S|-(180\degree-2\beta),\] \[|I|= b<\frac{900\degree-4\beta}{900\degree-3\beta}|S|-\frac{180\degree-2\beta}{900\degree-3\beta},\] and the result follows.
\end{proof}
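The chain of inequalities at the end of the proof is routine but easy to mis-transcribe, so it is worth checking that the collection of terms is an exact algebraic identity. A quick verification with exact rationals (an illustration only; it assumes nothing beyond the displayed formulas):

```python
from fractions import Fraction as F

def rhs_first(S, b, beta):
    """Right-hand side of the first displayed inequality."""
    return (360 - beta)*S + (180 - beta)*(3*S - 3*b) - 2*(360 - beta) + 540

# Subtracting 360*b from rhs_first and collecting terms must give exactly
# (900 - 4*beta)*S - (180 - 2*beta) - (900 - 3*beta)*b, i.e. the second display.
for S in range(1, 20):
    for b in range(0, S + 1):
        for beta in (F(1), F(45), F(90), F(355, 2)):
            assert (rhs_first(S, b, beta) - 360*b
                    == (900 - 4*beta)*S - (180 - 2*beta) - (900 - 3*beta)*b)
```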
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.85]{Figure_07.pdf}
\caption{An example of how the Delaunay triangulation $D_C(S\cup\{v,w\})$ might look. All distinguished angles are marked in red. \textbf{This figure, which appeared in \cite{simpletough}, was provided to us by Ahmad Biniaz.}}
\label{fig:7}
\end{figure}
The following simple lemma extends a result used in~\cite{simpletough}.
\begin{lemma}\label{teo:innerpath}
Let $C\subset\mathbb{R}^2$ be a strictly convex body and $S\subset\mathbb{R}^2$ be a finite point set in $C$-general position. Consider an homothet $C'$ of $C$ whose boundary contains exactly two points, $p$ and $q$ say, of $S$. Then $p$ and $q$ are connected by a path in $D_C(S)$ that lies in $C'$.
\end{lemma}
\begin{proof}
The proof is by induction on the number of points $t$ contained in the interior of $C'$. If $t=0$, then $p,q$ are adjacent in $D_C(S)$ and we are done. Otherwise, let $r$ be a point in the interior of $C'$ and apply a dilation with center $p$ until the image of $C'$ has $r$ on its boundary; call this homothet $C_1$. Repeat this process with center $q$ and call the resulting homothet $C_2$. This way, $p$ and $r$ lie on the boundary of $C_1$, while $q$ and $r$ lie on the boundary of $C_2$; notice also that $C_1,C_2\subset C'$. Since $C$ is strictly convex, we can ensure that the boundaries of $C_1$ and $C_2$ contain no point of $S$ other than $p,r$ and $q,r$, respectively, by taking a small perturbation of the homothets if necessary. Notice that the interior of each of $C_1$ and $C_2$ contains at most $t-1$ points of $S$. Thus, by the inductive hypothesis, we can find two paths joining $p$ to $r$ and $q$ to $r$ inside $C_1$ and $C_2$, respectively. The union of these two paths contains a path from $p$ to $q$ that lies completely in $C'$, as desired. See figure \ref{fig:8}.
\end{proof}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.44]{Figure_08.pdf}
\caption{Configuration in the proof of Lemma \ref{teo:innerpath}.}
\label{fig:8}
\end{figure}
Theorem~\ref{teo:tough} is an easy consequence of Theorem~\ref{teo:independent} and Lemma~\ref{teo:innerpath}. Indeed, consider an arbitrary set of vertices $U\subset S$ and choose a representative vertex from each component of $D_C(S)-U$. Let $V$ be the set of all representative vertices and consider the Delaunay triangulation $D_C(U\cup V)$. Suppose that there is an edge in this graph between two vertices $p$ and $q$ of $V$. Then there is an homothet $C'$ such that $C'\cap (U\cup V)=\{p,q\}$. Furthermore, by applying a slight perturbation if necessary, we may assume that $C'$ contains no other point of $S$ on its boundary. Lemma~\ref{teo:innerpath} now tells us that there is a path in $D_C(S)$ joining $p$ and $q$ which lies in $C'$. Since $p$ and $q$ lie in different components of $D_C(S)-U$, this path must contain at least one vertex from $U$, which must therefore lie in $C'$. This contradiction shows that $V$ is an independent set of $D_C(U\cup V)$. By Theorem~\ref{teo:independent}, \[|V|<\frac{450\degree-4\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}}|V\cup U|-\frac{90\degree-2\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}},\] \[|V|<\frac{450\degree-4\arcsin{\alpha}}{\arcsin{\alpha}}|U|-\frac{90\degree-2\arcsin{\alpha}}{\arcsin{\alpha}},\] but $|V|$ is just the number of components of $D_C(S)-U$, so we are done.
\subsection[Large matchings in Delaunay graphs]{Large matchings in $D_C(S)$}\label{sec:matchings}
For any graph $G$, let $o(G)$ denote the number of connected components of $G$ which have an odd number of vertices. The Tutte-Berge formula~\cite{tutte-berge} tells us that the size of the maximum matching in a graph $G$ with vertex set $V$ equals \[\frac{1}{2}\left(|V|-\max_{U\subset V}\{o(G-U)-|U|\}\right).\]
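On tiny graphs the Tutte-Berge formula can be verified by brute force. The following Python sketch (all function names ours) compares an exhaustive maximum matching with the maximum deficiency over all vertex subsets:

```python
from itertools import combinations

def max_matching(vertices, edges):
    """Brute-force maximum matching size (fine for tiny graphs)."""
    for k in range(len(vertices) // 2, 0, -1):
        for sub in combinations(edges, k):
            seen = [v for e in sub for v in e]
            if len(set(seen)) == 2 * k:   # k pairwise disjoint edges
                return k
    return 0

def odd_components(vertices, edges, removed):
    """Number of odd connected components of G - removed."""
    left = set(vertices) - set(removed)
    adj = {v: set() for v in left}
    for u, v in edges:
        if u in left and v in left:
            adj[u].add(v); adj[v].add(u)
    odd, seen = 0, set()
    for v in left:
        if v not in seen:
            comp, stack = 0, [v]
            while stack:
                w = stack.pop()
                if w in seen:
                    continue
                seen.add(w); comp += 1
                stack.extend(adj[w])
            odd += comp % 2
    return odd

# A path on 5 vertices: maximum matching has size 2.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4)]
deficiency = max(odd_components(V, E, U) - len(U)
                 for r in range(len(V) + 1) for U in combinations(V, r))
assert max_matching(V, E) == (len(V) - deficiency) // 2 == 2
```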
Combining Theorem~\ref{teo:tough} and the Tutte-Berge formula yields the main result of this section.
\begin{theorem}\label{teo:matching}
Let $C\subset\mathbb{R}^2$ be an $\alpha$-fat strictly convex body with smooth boundary and $S\subset\mathbb{R}^2$ be a finite point set in $C$-general position such that no three points of $S$ lie on the same line. Then $D_C(S)$ contains a matching of size at least \[\left(\frac{1}{2}-\frac{450\degree-5\arcsin{\alpha}}{900\degree-6\arcsin{\alpha}}\right)|S|+\frac{45\degree-\arcsin{\alpha}}{450\degree-4\arcsin{\alpha}}\left(1+\frac{450\degree-5\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}}\right).\]
Again, the result also holds if $C$ can be made $\alpha$-fat by an affine transformation.
\end{theorem}
\begin{proof}
Let $U\subset S$ and notice that $o(D_C(S)-U)$ is at most the number of connected components of $D_C(S)-U$. Hence, Theorem \ref{teo:tough} implies that \[o(D_C(S)-U)<\frac{450\degree-4\arcsin{\alpha}}{\arcsin{\alpha}}|U|-\frac{90\degree-2\arcsin{\alpha}}{\arcsin{\alpha}}.\] Together with $o(D_C(S)-U)+|U|\leq |S|$, this can be seen to imply that $|S|-(o(D_C(S)-U)-|U|)$ must be larger than \[\left(1-\frac{450\degree-5\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}}\right)|S|+\frac{90\degree-2\arcsin{\alpha}}{450\degree-4\arcsin{\alpha}}\left(\frac{450\degree-5\arcsin{\alpha}}{450\degree-3\arcsin{\alpha}}+1\right),\] and the result follows.
\end{proof}
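As a sanity check on the bookkeeping, one can verify numerically that the constants in Theorem~\ref{teo:matching} are exactly half of those appearing in the proof, and that the coefficient of $|S|$ simplifies to $\arcsin\alpha/(450\degree-3\arcsin\alpha)$:

```python
import math

for alpha in (0.1, 0.5, 0.9, 0.999):
    a = math.degrees(math.asin(alpha))  # arcsin(alpha) in degrees
    # Coefficient of |S| in the theorem is half of the one in the proof.
    lhs = 0.5 - (450 - 5*a) / (900 - 6*a)
    rhs = 0.5 * (1 - (450 - 5*a) / (450 - 3*a))
    assert math.isclose(lhs, rhs)
    # Constant term in the theorem is half of the one in the proof.
    lhs = (45 - a) / (450 - 4*a) * (1 + (450 - 5*a) / (450 - 3*a))
    rhs = 0.5 * (90 - 2*a) / (450 - 4*a) * ((450 - 5*a) / (450 - 3*a) + 1)
    assert math.isclose(lhs, rhs)
    # The coefficient of |S| simplifies to arcsin(alpha)/(450 - 3*arcsin(alpha)).
    assert math.isclose(0.5 - (450 - 5*a) / (900 - 6*a), a / (450 - 3*a))
```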
To conclude this section, we obtain a weaker bound that holds under more general conditions.
\begin{theorem}\label{teo:matching2}
Let $C\subset\mathbb{R}^2$ be a strictly convex body. Then, for every finite set $S\subset\mathbb{R}^2$ we have that $f(C,2,S)\leq|S|-\lceil\frac{1}{3}(|S|-8)\rceil$.
\end{theorem}
\begin{proof}
We will essentially show that $D_C(S)$ (which is planar, but not necessarily a triangulation) can be turned into a planar graph of minimum degree at least three by adding a constant number of vertices; the theorem then follows from a result of Nishizeki and Baybars~\cite{planarmatchings}.
For every $x$ (not necessarily in $S$) on the boundary of $C$, let $A_x$ be the smallest closed angular region which has $x$ as its vertex and contains $C$, and $\alpha_x\leq 180\degree$ be the measure of the angle that defines $A_x$. Let $a_x=(A_x-x)\cap\mathbb{S}^2$, $a_x$ is an arc of $\mathbb{S}^2$ determined by an angle of measure $\alpha_x$. See figure \ref{fig:9}.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.52]{Figure_09.pdf}
\caption{(a),(b): two examples of $A_x$ and $\alpha_x$. (c),(d): how $A_x$, $A_x-x$ and $a_x$ might look.}
\label{fig:9}
\end{figure}
\begin{lemma}\label{teo:fivepoints}
There are five points in $\mathbb{S}^2$ such that, for every $x$ on the boundary of $C$, $a_x$ contains at least one of these points in its interior.
\end{lemma}
\begin{proof}
Let $x_1,x_2,...,x_r$ be distinct points on the boundary of $C$. The intersection $\cap^{r}_{i=1}A_{x_i}$ is a closed and convex polygonal region, and a quick calculation shows that $\sum^{r}_{i=1}\alpha_{x_i}\geq (r-2)180\degree$, with equality if and only if $C$ is an $r$-gon with vertices $x_1,...,x_r$. Since the result is easily seen to be true if $C$ is either a triangle or a quadrilateral, we can assume that, for any distinct points $x,y,z$ and $w$ on the boundary of $C$, $\alpha_{x}+\alpha_{y}+\alpha_{z}>180\degree$ and $\alpha_{x}+\alpha_{y}+\alpha_{z}+\alpha_{w}>360\degree$.
Let $A_{90\degree}$ be the set that consists of all points $x$ on the boundary of $C$ such that $\alpha_x\leq 90\degree$; then $|A_{90\degree}|\leq 3$. If $|A_{90\degree}|\leq 2$, we take four points in $\mathbb{S}^2$ which form the vertices of a square, one of which is contained in the arc $a_x$ determined by one of the elements of $A_{90\degree}$. This set of four points hits the interiors of all but at most one of the arcs $a_x$, so it is possible to find five points in $\mathbb{S}^2$ which hit the interiors of all arcs. If $|A_{90\degree}|=3$, then $\sum_{x\in A_{90\degree}}\alpha_x>180\degree$. By choosing a square $Q$ with vertices in $\mathbb{S}^2$ uniformly at random, with positive probability $Q$ will be such that $v$ is contained in the interior of $a_x$ for more than two pairs $(v,x)$, where $v$ is a vertex of $Q$ and $x\in A_{90\degree}$. Since no arc $a_x$ with $x\in A_{90\degree}$ may contain more than one vertex of $Q$, every arc appears in at most one of the pairs. This implies that, with positive probability, the vertices of $Q$ hit the interior of every arc $a_x$ for $x\in A_{90\degree}$, but they clearly also hit the interior of every other $a_x$ and, thus, there is a set of four points (to which we can add any other point of $\mathbb{S}^2$ so that it has five elements) with the desired property.
\end{proof}
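The probabilistic step in the case $|A_{90\degree}|=3$ can be made concrete on an example: take three arcs of measure $70\degree$ each (total $210\degree>180\degree$). A deterministic scan over rotations of the square finds a position whose vertices hit the interior of all three arcs. The arc data below is our own illustrative choice:

```python
def in_arc(theta, center, width):
    """Is the angle theta (degrees) strictly inside the arc of given center/width?"""
    d = (theta - center) % 360
    return min(d, 360 - d) < width / 2

# Three arcs of measure 70 degrees each, as in the case |A_90| = 3.
arcs = [(0, 70), (120, 70), (240, 70)]

def square_hits_all(rot):
    """Do the four vertices of the rotated square hit the interior of every arc?"""
    verts = [(rot + 90 * i) % 360 for i in range(4)]
    return all(any(in_arc(v, c, w) for v in verts) for (c, w) in arcs)

# Some rotation must hit the interior of every arc.
hits = [r for r in range(0, 90) if square_hits_all(r)]
assert hits  # e.g. rot = 0: vertices 0, 90, 270 lie inside the three arcs
```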
Let $x_1,x_2,x_3,x_4,x_5$ be five points as in Lemma~\ref{teo:fivepoints}. Consider a very large positive real number $\gamma$ to be specified later and let $S'=S\cup\{\gamma x_1,\gamma x_2,\dots,\gamma x_5\}$.
\begin{claim}
If $\gamma$ is large enough then every point of $S$ has degree at least $3$ in $D_C(S')$.
\end{claim}
\begin{proof}
Let $s\in S$ and consider an arbitrary line $\ell$ with $\ell\cap S=\lbrace s\rbrace$ and an open halfplane $H$ determined by $\ell$; we show that if $\gamma$ is large enough then $s$ is adjacent to a point in $H$. Assume, w.l.o.g., that $\ell$ is vertical and that $H$ is the right half-plane determined by $\ell$, and let $x_H$ be the leftmost point of $C$. Observe that, by Lemma~\ref{teo:fivepoints}, for any large enough $\gamma$ the angular region $A_{x_H}-x_H+s$ contains at least one of the points $\gamma x_1,\gamma x_2,\dots,\gamma x_5$. Now, consider the smallest $\lambda>0$ such that the homothet $C_\lambda=\lambda(C-x_H)+s$ contains at least two points of $S'$ (it exists, since $C_\lambda$ will contain $s$ and at least one of $\gamma x_1,\gamma x_2,\dots,\gamma x_5$ if $\lambda$ is very large). If necessary, perturb $C_\lambda$ slightly so that it contains $s$ and exactly one other element of $S'$; then this element lies in $H$ and is adjacent to $s$, as desired. This implies that, for large enough $\gamma$, the neighbours of $s$ are not contained in a closed halfplane determined by a line through $s$, which is only possible if $s$ has degree at least $3$ in $D_C(S')$. Any large enough $\gamma$ will ensure that this holds simultaneously for every $s\in S$.
\end{proof}
The result clearly holds for $|S|\leq 8$, so we assume that $|S|>8$. Let $X\subset\{\gamma x_1,\gamma x_2,\dots,\gamma x_5\}$ be the set of $\gamma x_i$'s which are adjacent to at least one point of $S$ and delete the rest of the $\gamma x_i$'s from $D_C(S')$. It is not hard to see that $|X|\geq 2$. If $|X|=2$, join these two points by an edge (skip this step if they are already adjacent) and add a vertex $v$ in the outer face of $D_C(S')$, then connect $v$ to both elements of $X$ and to some point in $S$ while keeping the graph planar. Otherwise, if $|X|\geq 3$, we can add edges between the elements of $X$ so that there is a cycle of length $|X|$ going through all of them and the graph remains planar. In any case, the resulting graph is simple, planar, connected, and it has at least $|S|+3>10$ vertices, all of degree at least three. Nishizeki and Baybars \cite{planarmatchings} showed that any graph with these properties contains a matching of size at least $\lceil\frac{1}{3}(n+2)\rceil$, where $n$ is the total number of vertices. Let $t\leq 5$ denote the number of vertices that do not belong to $S$. Deleting all vertices not in $S$ from the graph, we get a matching in $D_C(S)$ of size at least $\lceil\frac{1}{3}(|S|+2+t)\rceil-t\geq\lceil\frac{1}{3}(|S|-8)\rceil$. This matching translates into a way of covering $S$ using no more than $|S|-\lceil\frac{1}{3}(|S|-8)\rceil$ $2^+/S$-homothets of $C$.
\end{proof}
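The final counting step is elementary but worth double-checking: $\lceil\frac{1}{3}(|S|+2+t)\rceil-t\geq\lceil\frac{1}{3}(|S|-8)\rceil$ for every $0\leq t\leq 5$, with equality in the worst case $t=5$. A quick exhaustive check:

```python
import math

for s in range(9, 200):          # |S| > 8
    for t in range(0, 6):        # number of auxiliary vertices, t <= 5
        assert math.ceil((s + 2 + t) / 3) - t >= math.ceil((s - 8) / 3)
    # Equality is attained when t = 5.
    assert math.ceil((s + 2 + 5) / 3) - 5 == math.ceil((s - 8) / 3)
```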
\section{Further research and concluding remarks}\label{chap:outro}
\subsection*{A drawback of the lower and upper densities}
Unlike the standard upper and lower densities of an arrangement, the measure theoretic versions introduced in Section \ref{sec:packcover} are in general not independent of the choice of the origin. The reason for this is that, for any two points $O_1$ and $O_2$, the measures of the balls $B(O_1,r)$ and $B(O_2,r)$ may differ by an arbitrarily large multiplicative constant for every $r$. Although this can be avoided by adding the requirement that $\mu(X)\leq c\cdot \text{vol}(X)$ for any compact $X$ and some constant $c$, this defect raises the question: Is there a better way of extending the standard definitions to arbitrary Borel measures?
\subsection*{Bounds in the other direction}
The hidden constants $c_{f,d}$ and $ c_{g,d}$ obtained in the proofs of theorems \ref{teo:fbound} and \ref{teo:gbound} increase and decrease exponentially in $d$, respectively. We showed in Sections \ref{sec:coverdensity} and \ref{sec:packdensity} that, under the right conditions, $\Theta_H(\mu,C)\leq c_{f,d}$ and $\delta_H(\mu,C)\geq c_{g,d}$ (in the case of measures). This yields, in particular, that $c_{f,d}\geq\Theta_H(C)$ and $c_{g,d}\leq\delta_H(C)$ for any $C$ (we remark that this can also be obtained by considering the restriction of the Lebesgue measure to large boxes). Both of these bounds also hold for the hidden constants in the case of point sets, as can be shown by taking a sufficiently large section of a grid.
\begin{theorem}\label{teo:grid}
Let $C\subset\mathbb{R}^d$ be a convex body and $\epsilon$ any positive real number. Then, for any sufficiently large $k$, there is an integer $N(C,\epsilon,k)$ such that for each $N$ with $N>N(C,\epsilon,k)$ the set $[N]^d=\lbrace(x_{1},x_{2}, ...,x_{d})\in\mathbb{R}^d\text{ }|\text{ }x_{i}\in [N]\rbrace$\footnote{For each positive integer $n$, $[n]$ denotes the set $\{1,2,\dots,n\}$.} of integer points inside a $d$-hypercube of side $N$ satisfies $f(C,k,[N]^d)>(\Theta_H(C)-\epsilon)\frac{N^{d}}{k}$ and $g(C,k,[N]^d)<(\delta_H(C)+\epsilon)\frac{N^{d}}{k}$.
\end{theorem}
Since the proof is quite straightforward, we give only a sketch of the bound for $f$.
\begin{proof}
Let $\delta>0$. For any sufficiently large $k$, there is an homothet $C'$ of $C$ of volume less than $(1+\delta)k$ which has the following property: every homothet $C_{1}$ of $C$ that covers at most $k$ points of the lattice $\mathbb{Z}^{d}$ is contained in a translate $C_{2}$ of $C'$ such that every point covered by $C_{1}$ has distance at least $\sqrt{d}$ from the boundary of $C_{2}$. Also, for any sufficiently large $N$, the set $[1,N]^d=\lbrace(x_{1},x_{2}, ...,x_{d})\text{ }|\text{ }1\leq x_{i}\leq N\rbrace$ cannot be covered by fewer than $(\Theta_H(C)-\delta)\frac{N^{d}}{(1+\delta)k}$ translates of $C'$.
Now, consider a cover of $[N]^d$ by $k^-/[N]^d$-homothets of $C$ and for each of these homothets take a translate of $C'$ with the described properties. This way, we get a cover of $[1,N]^d$ with translates of $C'$, and the result follows by taking a small enough $\delta$. This is not entirely correct, since the $k^-/[N]^d$-homothets which are not completely contained in $[1,N]^d$ may not fit inside a translate of $C'$ in the desired way, but these become insignificant if we choose $N(C,\epsilon,k)$ to be large enough.
\end{proof}
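For a concrete instance with $C$ an axis-parallel square (so that $\Theta_H(C)=1$), tiling the grid $[N]^2$ with $m\times m$ blocks realizes a cover by exactly $N^2/k$ homothets, each covering $k=m^2$ grid points. A small sketch with parameters of our own choosing:

```python
# Tile the N x N grid with m x m blocks: N^2/k axis-parallel squares, each
# covering exactly k = m^2 grid points (Theta_H = 1 for the square).
N, m = 6, 2
k = m * m
blocks = [(i, j) for i in range(0, N, m) for j in range(0, N, m)]
covered = {(i + a, j + b) for (i, j) in blocks for a in range(m) for b in range(m)}
assert len(blocks) == N * N // k == 9
assert covered == {(x, y) for x in range(N) for y in range(N)}
```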
While this shows that the exponential growth of $c_{f,d}$ and exponential decay of $ c_{g,d}$ are necessary, we believe that these bounds are still far from optimal. It might be an interesting problem to find point sets or measures for which $f$ is large (or $g$ is small) with respect to $\frac{|S|}{k}$ (or $\frac{\mu(\mathbb{R}^d)}{\mu(C)}$).
\begin{prob}
What are the optimal values of $c_{f,d}$ and $c_{g,d}$?
\end{prob}
Given that determining packing and covering densities tends to be a very difficult problem, one should expect an exact solution to the problem above to be out of reach (for now). Similar questions can be asked for the results in Section \ref{sec:matchings}.
\begin{prob}
Can theorems \ref{teo:matching} and \ref{teo:matching2} be improved?
\end{prob}
\subsection*{Higher order Voronoi diagrams}
In their point set versions, theorems \ref{teo:fbound} and \ref{teo:gbound} can be interpreted as a kind of structural property of the order-$k$ Voronoi diagram of $S$ with respect to the (not necessarily symmetric) distance function induced by $C$. The cells in this diagram encode the $k$-element subsets of $S$ that can be covered by an homothet of $C$ which contains exactly $k$ points of $S$. See \cite{aurenhammervoronoi} for more on Voronoi diagrams.
\subsection*{Beyond convex bodies}
While the assumptions that $C$ is bounded and has nonempty interior can both easily be seen to be essential to the results obtained in sections \ref{chap:cover} and \ref{chap:pack}, the convexity hypothesis can be somewhat relaxed:
The \textit{kernel} of a compact connected set $C\subset\mathbb{R}^d$, denoted by $\text{ker}(C)$, is the set of points $p\in C$ such that for every other $q\in C$ the segment with endpoints $p$ and $q$ is completely contained in $C$. We say that $C$ is \textit{star-shaped} if $\text{ker}(C)\neq\emptyset$. Our results in sections \ref{chap:cover} and \ref{chap:pack} remain true as long as $C$ is star-shaped and there is an affine transformation $T$ such that $B^d\subset \text{ker}(T(C))\subset T(C)\subset\alpha B^d$ for some $\alpha=\alpha(d)$ that depends only on $d$.
A sufficiently large grid (as in Theorem~\ref{teo:grid}) or the restriction of the Lebesgue measure to a large box show that we cannot hope to extend theorems~\ref{teo:fbound} and~\ref{teo:gbound} to non-convex bodies while keeping the hidden constant independent of $C$.
\subsection*{Complexity}
Even though the reduction to $3$-SAT given in Section \ref{sec:complexity} and the proof of NP-hardness in \cite{matchingnp} work only in some very particular cases, we conjecture the following.
\begin{conj}
Let $C$ be a convex body and $k\geq 3$ an integer; then $C$-$k$-COVER is NP-hard. Similarly, $C$-$k$-PACK is NP-hard for every $k\geq 2$.
\end{conj}
\subsection*{Covering with disjoint homothets}
It is natural to ask whether a result along the lines of Theorem~\ref{teo:fbound} holds if we require that the $k^-/S$-homothets in the cover have disjoint interiors. A sufficiently fine grid (in the case of point sets) and the restriction of Lebesgue measure to a bounded box (in the measure case) show that, in general, this is not the case; indeed, unless $\theta(C)=1$, the number of interior-disjoint $k^-/S$-homothets required in these cases will not be bounded from above by a function of $\frac{|S|}{k}$ ($\frac{\mu(\mathbb{R}^d)}{\mu(C)}$, respectively). Perhaps the most annoying unanswered questions are the following.
\begin{prob}
Let $S$ be a finite set of at least $k$ points in the plane and $C$ a square. Is the number of disjoint homothets required to cover $S$ bounded from above by a function of $\frac{|S|}{k}$? Is it $O(\frac{|S|}{k})$? What is the answer if we add the restriction that no two points of $S$ lie on the same horizontal or vertical line?
\end{prob}
We believe the answer to all the previous questions to be no. In fact, we suspect that a family of examples which exhibit this can be constructed along the following lines:
Set $k$ to be very large and start by taking a uniformly distributed set of about $k$ points inside the unit square. Choose $m$ points (with $m$ much smaller than $k$) inside the square such that the set of their $2m$ coordinates ($x$ and $y$) is linearly independent over $\mathbb{Q}$, and place $k$ points in a very small neighborhood of each of these $m$ points. It is not hard to see that this would work directly (even for $m=1$) if all the squares in the cover were required to lie inside the unit square. This example can be adapted to measures as well.
For $k=2$, this problem is equivalent to the study of strong matchings; see Section \ref{sec:previouswork} for details.
\subsection*{Weak nets for zonotopes}
A centrally symmetric convex polytope is a \textit{zonotope} if all its faces are centrally symmetric\footnote{A zonotope is commonly defined as the set of all points which are linear combinations with coefficients in $[0,1]$ of a finite set of vectors, but the alternative definition given here, which is widely known to be equivalent, serves our purpose much better.}. Notice that each face of a zonotope is a zonotope itself. Examples of zonotopes include hypercubes, parallelepipeds and centrally symmetric convex polygons.
For zonotopes with few vertices, the following geometric lemma can act as a substitute for Lemma~\ref{teo:hit}, allowing us to construct even smaller weak $\epsilon$-nets.
\begin{lemma}\label{teo:zonotopevertices}
Let $Z\subset\mathbb{R}^d$ be a zonotope and consider two homothets $Z_1$ and $Z_2$ of $Z$ with non-empty intersection. If $Z_1$ is at least as large as $Z_2$, then it contains at least one vertex of $Z_2$.
\end{lemma}
\begin{proof}
We proceed by induction on $d$. The result is trivial for $d=1$ (here, $Z\subset\mathbb{R}$ is simply an interval). Let $p_{1}$ and $p_{2}$ be the centers of $Z_{1}$ and $Z_{2}$, respectively, and let $Z_{2}'$ be the result of translating $Z_{2}$ along the direction of $\overrightarrow{p_{1}p_{2}}$ so that $Z_{1}$ and $Z_{2}'$ intersect only at their boundaries; $p_{2}'$ will denote the center of $Z_{2}'$ (see figure \ref{fig:10} a). Now, let $t_{1}$ and $t_{2}$ be the intersection points of the segment $p_{1}p_{2}'$ with the boundaries of $Z_{1}$ and $Z_{2}'$, respectively. Consider a facet $f_{1}$ of $Z_{1}$ which contains $t_{1}$. Since $Z$ is centrally symmetric, there is a negative homothety from $Z_{1}$ to $Z_{2}'$, and this homothety maps $f_{1}$ into a facet $f_{2}$ of $Z_{2}'$ which contains $t_{2}$ and is parallel to $f_{1}$. Let $h_{1}$ and $h_{2}$ be the parallel hyperplanes that support $f_{1}$ and $f_{2}$, respectively; then $Z_{1}$ is contained in the halfspace determined by $h_{1}$ that contains $p_{1}$, while $Z_{2}'$ is contained in the halfspace determined by $h_{2}$ that contains $p_{2}'$. Suppose that $t_{1}\neq t_{2}$; then $p_{1}, t_{1}, t_{2}, p_{2}'$ must lie on the segment $p_{1}p_{2}'$ in that order and, by our previous observation, $Z_{1}$ and $Z_{2}'$ would not intersect (see figure \ref{fig:10} b). It follows that $t_{1}=t_{2}$ and, thus, $f_{1}\cap f_{2}\neq\emptyset$. Now, since $f_{1}$ and $f_{2}$ are homothetic $(d-1)$-dimensional zonotopes and $f_{2}$ is not larger than $f_{1}$, the induction hypothesis implies the existence of a vertex $v$ of $f_{2}$ contained in $f_{1}$.
Let $w$ be the vertex of $Z_{2}$ which is mapped to $v$ by the translation from $Z_{2}$ to $Z_{2}'$; we claim that $w$ is contained in $Z_{1}$. The positive homothety from $Z_{2}$ to $Z_{1}$ maps $w$ to a vertex $w'$ of $Z_{1}$. The points $p_{1}, p_{2}, v, w$ and $w'$ all lie on the same plane and, since $Z_{2}$ is not larger than $Z_{1}$, $w'$ is contained in the closed region determined by the lines $wp_{1}$ and $wv$ which is opposite to $p_{2}$. This way, $w$ belongs to the convex hull of the points $p_{1}$, $v$ and $w'$; since these three points belong to the convex set $Z_{1}$, so does $w$ (see figure \ref{fig:10} c). This concludes the proof.
\end{proof}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.62]{Figure_10.pdf}
\caption{(a): $Z_{1}$, $Z_{2}$, and $Z_{2}'$. (b): How the configuration would look if $t_{1}\neq t_{2}$. (c): The region where $w'$ lies, highlighted in grey, and the triangle $w'vp_{1}$ in red.}
\label{fig:10}
\end{figure}
Proceeding as in the proof of Theorem \ref{teo:nets}, we get the following corollary, which generalizes a result for hypercubes by Kulkarni and Govindarajan \cite{weaknets}.
\begin{corollary}\label{teo:zonotopenets}
Let $Z\subset\mathbb{R}^d$ be a zonotope with $v$ vertices and denote by $\mathcal{H}_{Z}$ the family of all homothets of $Z$. Then, for any finite set $S\subset\mathbb{R}^d$ and any $\epsilon>0$, $(S, \mathcal{H}_{Z}|_S)$ admits a weak $\epsilon$-net of size $\frac{v}{\epsilon}$.
\end{corollary}
We also have the following variant of Lemma \ref{teo:neighborhoodcover}.
\begin{lemma}\label{teo:zonotopeneighborhood}
Let $Z\subset\mathbb{R}^d$ be a zonotope and denote by $I$ the number of pairs $(f,v)$ where $f$ is a facet of $Z$ and $v$ is a vertex of $f$. Let $P\subset\mathbb{R}^d$ be a finite set and consider a collection of homothets $\lbrace Z_p\rbrace_{p\in P}$ of $Z$ such that $Z_p$ is of the form $p+\lambda Z$ and $\bigcap_{p\in P} Z_p\neq\emptyset$. Then there is a subset $P'$ of $P$ of size at most $I$ such that $\lbrace Z_p\rbrace_{p\in P'}$ covers $P$.
\end{lemma}
\begin{proof}
Assume that $O\in\bigcap_{p\in P} Z_p$ and that $O$ is the center of $Z$. Let $(f,v)$ be a pair as in the statement of the lemma and consider the homothet $Z'$ that results from applying a dilation to $Z$ with center $v$ and ratio $\frac{1}{2}$; the intersection of $f$ with this homothet will be denoted by $f_v$. Repeating this for every pair $(f,v)$, we obtain a decomposition of the facets of $Z$ into $I$ regions with pairwise disjoint interiors.
Now, for every pair $(f,v)$, let $P_{f,v}$ consist of all the points $p\in P$ with the property that the ray $\overrightarrow{Op}$ has non-empty intersection with $f_v$. Note that each element of $P$ belongs to at least one of the aforementioned sets. From every $P_{f,v}$, choose an element which is maximal with respect to the norm with unit ball $Z$ and add it to $P'$; it is not hard to see that any homothet of $Z$ that is centered at this point and contains $O$ must cover every point in $P_{f,v}$. This way, $|P'|\leq I$ and $\lbrace Z_p\rbrace_{p\in P'}$ covers the union of all sets of the form $P_{f,v}$, which is $P$.
\end{proof}
Plugging the bounds given by Corollary \ref{teo:zonotopenets} and Lemma \ref{teo:zonotopeneighborhood} into the proof of Theorem \ref{teo:fbound}, we obtain the following: if $Z\subset\mathbb{R}^d$ is a zonotope with $V$ vertices and $I$ is as in the statement of Lemma \ref{teo:zonotopeneighborhood}, then, for any positive integer $k$ and any finite set of points $S\subset\mathbb{R}^d$ which is non-$\frac{k}{2}$-degenerate with respect to $Z$, $f(Z,k,S)=\frac{2VI|S|}{k}$.
\section*{Acknowledgements}
I am grateful to Jorge Urrutia for many helpful discussions and, particularly, for suggesting that the fatness of the convex body might play an important role in the proofs of Theorems \ref{teo:fbound} and \ref{teo:gbound}.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
Federated learning, proposed by McMahan et al.~\cite{mcmahan2017communication}, provides a new paradigm for privacy-preserving learning without moving data off the devices. Federated learning allows clients to train models with personal data on their devices and to share only intermediate gradients with the central server. The server aggregates these gradients (typically by averaging) and transmits the aggregated gradient back to the clients. Finally, each client applies the update to obtain the new base model for the next round.
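As a minimal illustration of this round structure (our own sketch, using plain gradient averaging with equal client weights rather than the sample-count weighting of the original FedAvg; function names are ours):

```python
def fedavg_aggregate(client_grads):
    """Element-wise average of the gradients uploaded by the clients."""
    n = len(client_grads)
    return [sum(g[i] for g in client_grads) / n
            for i in range(len(client_grads[0]))]

def apply_round(model, client_grads, lr):
    """One communication round: aggregate on the server, then update the base model."""
    agg = fedavg_aggregate(client_grads)
    return [w - lr * g for w, g in zip(model, agg)]
```

In practice each client would compute its gradient on local data before the aggregation step; the sketch only shows the server-side round.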
While clients communicate through the central server, the computation and communication of the server become system overheads. Reducing communication overheads via techniques such as gradient compression has been proposed to mitigate this issue. Most compression techniques are lossy \cite{xu2020compressed}; thus, compensation techniques that adopt a memory mechanism are usually required. For example, Lin et al. proposed Deep Gradient Compression (DGC) \cite{lin2018deep}, which achieves a high compression ratio via a careful compensation design. However, DGC did not consider the non-IID problem discussed below. Another challenge of Federated Learning is unbalanced data distribution. Unlike centralized distributed learning, which partitions data in a balanced manner, data on each client are not independent and identically distributed (non-IID) in a typical Federated Learning setting. Several approaches using global momentum have been proposed to mitigate this problem, such as Sparse Gradient Compression \cite{aji2017sparse} and Global Momentum Compression \cite{zhao2019global}. However, these solutions further increase communication overheads or decrease accuracy. Although these existing methods perform well under their respective considerations, our main motivation is to design a scheme that addresses both communication overheads and non-IID data.
This paper makes two main contributions. First, we analyze the two main communication overhead costs of existing methods. We theoretically show that some methods using momentum can incur additional communication overheads. Second, we propose a new compression compensation scheme called Global Momentum Fusion (GMF), which reduces communication overheads between Federated Learning clients and the server while maintaining comparable model accuracy in the presence of non-IID data.
\section{Problem formulation}
Based on our study, we illustrate two major problems of existing approaches.
\subsection{Server-Side Global Momentum Leads to Extra Communication Overheads}\label{PF:Server-Side}
Communication overheads comprise two main traffic costs: ($i$) clients upload gradients to the server for aggregation, and ($ii$) the server transmits the aggregated gradient to the clients. The gradients uploaded from clients have a fixed size. However, the size of the aggregated gradient can vary since the gradients collected from clients are inconsistent.
Figure \ref{fig:Analysis_size} illustrates the aggregation process with global momentum. Several works, like FetchSGD \cite{rothchild2020fetchsgd} and GSGM \cite{li2019gradient}, have used global momentum on the server that holds the averaged gradients. As training progresses, the momentum accumulates the aggregated gradient each round, making the aggregated gradient nearly full-sized in later rounds.
\input{figure/aggregrade_problem.tex}
In Figure \ref{fig:Analysis_size}, $T$ is the total number of communication rounds with $t \in \{1,\dots,T\}$, $k \in \{1,\dots,K\}$ indexes the clients, and $\hat{G}$ is the aggregated gradient.
\subsection{Less Efficiency with Client-Side Global Momentum Compensation}\label{PF:Client-Side}
Zhao et al. \cite{zhao2019global} took a different approach and proposed the Global Momentum Compression (GMC) mechanism. GMC uses global momentum to replace local momentum and achieves comparable performance. However, GMC does not consider the variance between the local gradient and the global momentum. A large variance can make the global momentum less efficient and lead to over-fitting the local data, especially under low compression rates and non-IID datasets. The sizes of the gradients can diverge significantly as training progresses. Wang et al. \cite{wang2020tackling} discussed a similar issue caused by the objective inconsistency of the data.
\section{Global Momentum Fusion compression scheme}
Before introducing the design of our compression process, we first examine the predominant sparse compressor. The top-K sparse compression method is commonly applied in many sparsification-based gradient compression approaches. Typically, the top-K sparse compressor creates a mask by selecting the top $k\%$ of the gradient's parameters with the largest magnitude of change, representing the most important features of the local data on the client.
\vspace{-1em}
\begin{equation}
G_{k,t} \leftarrow V_{k,t} \odot Mask_{topK(V_{k,t})}
\label{eq:topk}
\end{equation}
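In code, the top-K compressor of Equation \ref{eq:topk} amounts to the following sketch (our own illustration; $ratio$ expresses $k\%$ as a fraction of the gradient length, and the residual is kept as the local compensation memory):

```python
import math

def topk_mask(v, ratio):
    """0/1 mask keeping the ceil(ratio * len(v)) entries of largest magnitude."""
    k = max(1, math.ceil(ratio * len(v)))
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    mask = [0.0] * len(v)
    for i in idx:
        mask[i] = 1.0
    return mask

def topk_compress(v, ratio):
    """G = V masked by top-K of V; the residual V - G stays in local memory."""
    mask = topk_mask(v, ratio)
    g = [x * m for x, m in zip(v, mask)]
    residual = [x - y for x, y in zip(v, g)]
    return g, residual
```

Only the few non-zero entries of $G$ (values plus indices) need to be transmitted, which is the source of the compression.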
We further refine the mask selection by adding a reference vector $F$ into the top-K step. In theory, the two gradients $G^{transmit}$ (green arrow) and $G^{accumulate}$ (yellow arrow) in Figure \ref{fig:Top-K compressor} are orthogonal, because $G^{transmit}$ is selected by a mask and $G^{accumulate}$ is the remainder of $V$. This inspired us to add the term $F$ to $V$, based on this property, to re-weight the reference used for mask generation. Although we add the term $F$ to $V$ as shown in Figure \ref{fig:Top-K compressor}, the orthogonality property is still satisfied. $F$ can be any vector that takes part in the compression selection while preserving orthogonality.
\input{figure/topk.tex}
We formulate our Global Momentum Fusion compression by adding a fusion layer before top-K sparse compression, as shown in the following equation:
\vspace{-1.5em}
\begin{equation}
\begin{small}
\centering
\begin{array}{c}
Z_{k,t} \leftarrow abs((1-\tau) \cdot N(V_{k,t})+\tau \cdot N(M_{k,t}) ) \\[6pt]
G_{k,t} = V_{k,t} \odot Mask_{topK(Z_{k,t})} \\[6pt]
\end{array}
\label{eq:GMF_compressor_policy}
\end{small}
\end{equation}
\vspace{-1em}
In Equations \ref{eq:topk} and \ref{eq:GMF_compressor_policy}, $V_{k,t}$ is the locally compensated gradient, $G_{k,t}$ is the compressed gradient, $M_{k,t}$ is the accumulated global momentum, $N$ denotes normalization, $\alpha$ is the local momentum factor, $\beta$ is the global momentum factor, and $\tau$ is the fusion ratio.
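A sketch of the fusion step in Equation \ref{eq:GMF_compressor_policy} (our own illustration; vectors are plain Python lists and Euclidean normalization is assumed for $N$): the mask is selected on the normalized blend $Z$, while the transmitted values still come from $V$.

```python
import math

def _normalize(v):
    """Scale to unit Euclidean norm (identity on the zero vector)."""
    n = math.sqrt(sum(x * x for x in v))
    return list(v) if n == 0 else [x / n for x in v]

def _topk_mask(z, ratio):
    k = max(1, math.ceil(ratio * len(z)))
    idx = sorted(range(len(z)), key=lambda i: z[i], reverse=True)[:k]
    mask = [0.0] * len(z)
    for i in idx:
        mask[i] = 1.0
    return mask

def gmf_compress(v, m, ratio, tau):
    """Z = |(1 - tau) * N(V) + tau * N(M)|; transmit G = V masked by top-K of Z.
    Setting tau = 0 recovers plain top-K on V (i.e., DGC)."""
    z = [abs((1 - tau) * a + tau * b)
         for a, b in zip(_normalize(v), _normalize(m))]
    mask = _topk_mask(z, ratio)
    return [x * mk for x, mk in zip(v, mask)]
```

With $\tau > 0$, entries aligned with the global momentum are more likely to survive the mask, which is what drives the similarity of transmitted gradients across clients.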
Additionally, inspired by FedNova \cite{wang2020tackling}, we introduce a normalized weighting of the fusion ratio to avoid domination by either the local gradient or the global momentum.
Next, we propose the Deep Gradient Compression with Global Momentum Fusion (DGCwGMF) method. We adopt momentum correction from DGC as our memory compensation mechanism and implement the Global Momentum Fusion scheme in the compression policy. DGCwGMF operates as shown in Figure \ref{fig:DGCwGMF}, and Algorithm \ref{alg:GF} gives the pseudocode of DGCwGMF.
\input{figure/dgc_w_gmf.tex}
\vspace{-2em}
With Global Momentum Fusion, we refer to the long-term momentum direction while compressing the gradient, which provides a configurable trade-off between the local gradient and the global momentum via the fusion ratio $\tau$. Before weighting them, we normalize the gradients to avoid bias caused by large variances. A smaller $\tau$ allows the compressor selection to fit more closely to the local training data. A larger $\tau$ discards the parameters of the gradient that differ from the global momentum. When $\tau > 0$, the higher similarity of the transmitted gradients between clients yields a lower expected communication overhead. When we set the fusion ratio $\tau=0$, DGCwGMF degenerates into DGC.
\input{algorithm/algorithm_gf.tex}
\section{Experiment validation}
We validate our approach with two training tasks: image classification on the Cifar10 dataset and next-word prediction on the Shakespeare dataset. To compare the two tasks, we provide a reproducible empirical experiment over two datasets: an artificially non-IID dataset and a naturally non-IID dataset.
\subsection{Setup}
\vspace{-0.8em}
\input{table/summary_of_tasks.tex}
\vspace{-0.8em}
Table \ref{tab:summary_of_tasks} shows our settings for the two tasks. Following the procedure in \cite{lin2018deep}, we generate 7 different Mod-Cifar10 datasets with the following Earth Mover's Distances (EMD) \cite{zhao2018federated}: 0.0 (Cifar10-0), 0.48 (Cifar10-1), 0.76 (Cifar10-2), 0.87 (Cifar10-3), 0.99 (Cifar10-4), 1.18 (Cifar10-5), and 1.35 (Cifar10-6), distributed over 20 clients to simulate realistic scenarios. The EMD of the sampled clients of the Shakespeare dataset is 0.1157. We set the fusion ratio $\tau$ to start from 0 and increase to 0.6 in 10 steps. Higher EMD values indicate more unbalanced data; EMD $= 0$ means the data are balanced.
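For concreteness, one common reading of the EMD measure of \cite{zhao2018federated} is the (weighted) average $L_1$ distance between each client's label distribution and the global label distribution; the helper below is our own hypothetical implementation of that reading, not the authors' script:

```python
def emd(client_dists, global_dist, weights=None):
    """Weighted average L1 distance between each client's label distribution
    and the global label distribution (our reading of the EMD measure)."""
    n = len(client_dists)
    if weights is None:
        weights = [1.0 / n] * n   # equal client weights assumed
    return sum(w * sum(abs(p - q) for p, q in zip(dist, global_dist))
               for w, dist in zip(weights, client_dists))
```

Fully disjoint single-class clients over a balanced two-class population give the maximal value; identical distributions give 0.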
\vspace{-0.6em}
\input{table/techniquea.tex}
\vspace{-0.8em}
We analyze the accuracy and communication overheads of existing approaches and our work by comparing the following sparse compression methods. DGC is the method proposed by Lin et al. \cite{lin2018deep} with the momentum correction technique. We compare against the existing client-side global momentum work, GMC \cite{zhao2019global}. To compare communication consumption, we also apply server-side global momentum to the same DGC algorithm, which we name DGCwGM. Finally, DGCwGMF is the DGC method with the Global Momentum Fusion (GMF) scheme applied. Table \ref{tab:Technique} summarizes these techniques.
\subsection{Results of Image Classification Task}
We first compare the accuracy and communication overheads of DGCwGMF and DGCwGM in Table \ref{tab:result_of_task1}. Both DGCwGMF and DGCwGM show similar accuracy. However, DGCwGM consumes more communication overhead due to the accumulated server-side global momentum, which matches our problem formulation in Section \ref{PF:Server-Side}.
Across our experiments, GMC reaches comparable accuracy when $EMD < 0.99$. However, in the case of compression rate $=0.1$ with the Cifar10-6 dataset, GMC overfits the local dataset at the end of training and degrades the performance of the model, as shown in Figure \ref{fig:task1_top1_accuracy_6}. This issue matches our problem formulation in Section \ref{PF:Client-Side}.
\vspace{-0.5em}
\input{table/task1_exp_result.tex}
\vspace{-0.5em}
Next, we compare the communication overhead on Cifar10-6 for the following approaches with close accuracy results: DGC (0.7047), DGCwGMF (0.7246), and DGCwGM (0.7087). Table \ref{tab:result_of_task1} shows that DGCwGM costs $15.4\%$ more communication overhead than DGC. Moreover, our approach, DGCwGMF, saves $20.4\%$ of the communication overhead compared with DGC while reaching higher accuracy, which demonstrates that GMF saves communication overhead with less accuracy loss, or even an accuracy gain.
\input{figure/task1_acc_cr01_6.tex}
Finally, we provide experiments on accuracy and communication overhead over compression rates from 0.1 to 0.9 on Cifar10-6. In Figure \ref{fig:task1_comparison_accuracy_compression_rate}, the experimental results show that all techniques lose performance as the compression rate decreases. However, DGCwGMF suffers less performance loss under low compression rates. DGCwGMF also decreases faster than the other approaches on the communication overhead curve, making it ideal for further compression.
\input{figure/task1_baseline6.tex}
\subsection{Results of Next-word Prediction Task}
In the next-word prediction task, Table \ref{tab:result_of_task2} shows that DGCwGM consumes more communication overhead than DGC, which again supports our problem formulation in Section \ref{PF:Server-Side}: momentum at the server side requires more communication. The table also shows that GMC fails to converge, while the other methods, DGC, DGCwGMF, and DGCwGM, have very close accuracy.
\vspace{-0.8em}
\input{table/task2_exp_result.tex}
\vspace{-0.8em}
Besides GMC, the results of DGC, DGCwGMF, and DGCwGM under compression rate $=0.1$ also show that DGCwGMF consumes less communication overhead while losing only $0.002$ accuracy compared with DGCwGM. Table \ref{tab:result_of_task2} shows that DGCwGM costs $11\%$ more communication overhead than DGC, and DGCwGMF saves $12.5\%$ more communication overhead than DGC.
\input{figure/task2_baseline.tex}
Finally, we analyze the performance across different compression rates. Figure \ref{fig:task2_comparison_accuracy_compression_rate} shows that the performance is nearly the same as the other methods while requiring fewer communication overheads.
\section{Conclusion}
We proposed a gradient compression and compensation scheme called Deep Gradient Compression with Global Momentum Fusion (DGCwGMF) to reduce FL communication overhead while maintaining comparable model performance despite non-IID data. We designed and conducted comparative experiments to show that the proposed DGCwGMF provides the best performance at low compression rates on high-EMD datasets. Specifically, our experiments showed that DGCwGMF has less accuracy loss on the non-IID dataset and $20.4\%$ fewer communication overheads than DGC in the image classification task, and that DGCwGMF reduces communication overheads by $12.5\%$ while maintaining the same accuracy as vanilla DGC in the next-word prediction task.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{\label{section:introduction}Introduction\protect\\}
The study of population dynamics has gained popularity among various fields of research in recent years \cite{math1,math2,math3,math4,math5,math6,math7,math8,math9,math10,math11,waves1,physics1,
physics2,physics3,physics4,physics5,physics6,physics8,physics9, uwe4}.
Ecosystems of multiple interacting species are traditionally modelled as a dynamical system described by a
set of deterministic differential rate equations.
Yet such deterministic descriptions do not capture the stochastic nature of real-life systems and ignore
temporal and spatial correlation effects that certainly affect the system's quantitative features, and maybe
its qualitative behavior.
Therefore, various efforts have been made in trying to adequately represent such systems in terms of
coupled stochastic processes \cite{math9,physics1, physics2, physics5, physics8, doi}.
An additional difficulty in the study of stochastic population dynamics stems not just from the fact that they
are non-linear dynamical systems with a large number of degrees of freedom, but also from the fact that they
do not reside in thermal equilibrium:
Hence the stationary probability distribution is not the standard Boltzmann distribution, non-vanishing
probability currents decisively characterize the ensuing non-equilibrium steady states, and irreversibility is
crucial, as becomes manifest in absorbing states that characterize population extinction.
However, lattice simulations can be used effectively to gain insight on the interacting populations' behavior,
and may thus guide the development of new techniques for studying non-equilibrium systems.
This study focuses on the paradigmatic predator-prey model introduced independently by Lotka and
Volterra \cite{lotka,volterra}, owing to its simplicity and extensive prevalent literature.
The original formulation of the Lotka--Volterra model utilized a coupled set of deterministic differential
equations describing the temporal evolution of the predator and prey densities.
It was successful in explaining population oscillations that are present in predator-prey ecologies.
However, the Lotka--Volterra mean-field model was aptly met with criticism because it did not account for
stochastic fluctuations, and since it predicts stable density oscillations that are fully determined by the
initial population densities, whereas in nature, predator-prey systems can exhibit extinction or fixation.
The neutral limit cycles of the original Lotka--Volterra model are also not stable under straightforward
modifications to the model \cite{math2,math4, uwe4}:
Allowing for intrinsic stochastic noise or introducing a finite carrying capacity renders the limit cycles unstable,
and the system is instead driven to a stable fixed point with constant predator and prey densities.
A substantial body of experimental work has been performed on ecologies that exhibit predator-prey type
interactions \cite{math10, experiment1, experiment2, experiment3, experiment4, experiment5}.
While the Lotka--Volterra model is able to capture the periodic behavior of such systems, with good numerical
agreement for well-mixed microbial systems \cite{experiment5, experiment1}, its mean-field approximation
cannot capture the stochastic fluctuations in the population densities.
As pointed out in Ref.~\cite{experiment1}, the issue is that the deterministic Lotka--Volterra model (even with
a finite carrying capacity) only allows for decaying or constant oscillation amplitudes.
It is hence preferable to consider the Lotka--Volterra model as a stochastic reaction-diffusion system
incorporating the following reactions that involve the predator species $A$ and the prey species $B$:
\begin{subequations}
\label{eq:reactions}
\begin{eqnarray}
&A \xrightarrow{\mu} \emptyset, \quad &\text{predator death,} \label{subeq:pred_death} \\
&B \xrightarrow{\sigma} B+B, \quad &\text{prey reproduction (birth),} \ \label{subeq:prey_birth} \\
&A+B \xrightarrow{\lambda} A+A, \quad &\text{predation,} \label{subeq:predation}
\end{eqnarray}
\end{subequations}
where $\mu$, $\sigma$, and $\lambda$ denote the corresponding reaction rates that quantitatively characterize
the stochastic processes.
This simplest Lotka--Volterra model variant can be readily extended to account for finite resources for the prey
population.
On the mean-field level, one may just add a logistic growth limiting factor for the prey species
\cite{logistic1,logistic2,math4}.
For the stochastic model realized on a regular lattice, this can be achieved by implementing on-site lattice
occupation restrictions \cite{lattice1,lattice2,lattice3, lattice4, lattice5,physics1,physics2,physics5,physics8,uwe1}.
This latter modification to the stochastic Lotka--Volterra model induces a continuous non-equilibrium phase
transition between two-species coexistence and predator extinction.
If the predators are not efficient in hunting their prey, or if the food resources available to the prey are scarce,
the predator population eventually goes extinct \cite{physics1, physics2, physics8, uwe1, uwe2, uwe3}.
The critical exponents of this active-to-absorbing state phase transition were shown to be in the directed
percolation universality class by means of numerical simulations \cite{physics8,lattice4,lattice5,critical1,critical2,
critical3} as well as a field-theoretic analysis \cite{uwe2,uwe3, uwe4}.
Persistent spatio-temporal structures emerging in the coexistence phase of the stochastic lattice Lotka--Volterra
model have been thoroughly studied as well \cite{waves1,waves2,waves3,physics4,uwe2}.
Prominent travelling pursuit and evasion waves arise due to the fact that predators must move towards high
concentrations of prey in order to survive, leaving behind them areas of low prey concentration, while the prey
similarly need to evade regions of high predator densities.
Experimental in-vitro as well as in-vivo systems are often exposed to varying nutrients, which affects species
survival.
Therefore the modelling of population dynamics with temporally varying environments has gained attention in
recent years \cite{vary_env0,vary_env1,vary_env2,vary_env3,vary_env4,vary_env5,vary_env6,vary_env7,
vary_env8,vary_env9,vary_env10,vary_env11,vary_env12,vary_env13}.
Traditionally fluctuations in the environment are modelled as variable reaction rates \cite{vary_env1, vary_env2,
vary_env3, vary_env8, vary_env9, vary_env10, vary_env11} which usually enter linearly into the model.
On the other hand, to investigate the effects of varying non-linear parameters, typically time-dependent carrying
capacities are introduced \cite{vary_env4, vary_env5,vary_env6,vary_env7}, but in a non-spatial setting.
Yet spatial models with a varying carrying capacity have also not been properly explored in the literature.
Lattice models are often simulated with a fixed on-site restriction \cite{lattice1,lattice2,lattice3,lattice4,lattice5,physics1,physics2,physics5,physics8,uwe1}.
In this study, in order to gain a full understanding of how a time-varying on-site resource constraint can change
the quasi-stationary properties as well as transient kinetics of predator-prey competition dynamics, we consider
the stochastic Lotka--Volterra model on a regular two-dimensional lattice (with periodic boundary conditions)
with a finite local prey carrying capacity that varies periodically over time.
This oscillatory environmental variability resembles seasonal changes in food availability for the prey population.
We investigate how a sudden increase in food resources can prevent the predators from going extinct.
Specifically, intriguing dynamical behavior is observed when the system switches between carrying capacity
values that would result in species coexistence and predator extinction, respectively, in stationary environments.
One may regard this Lotka--Volterra system with periodically varying environment as a dynamical system
subject to an oscillating external driving force.
In periodically driven dynamical systems, there are two accessible limiting situations, namely the fast and slow
switching regimes, for which the driving force oscillation period is small or large, respectively, compared to the
intrinsic oscillation time scale of the system.
In order to quantitatively analyze our model, we measure the time evolution of the population density for each
species and their two-point correlations functions.
Our results highlight the marked effects of a periodically varying environment on species coexistence and
diversity, and we also delineate important differences between the full lattice model and the mean-field
approximation.
Moreover, we demonstrate that an analysis of the coupled mean-field rate equations allows a semi-quantitative
description of the (quasi-)stationary state of the system for rapidly varying environments.
This paper is organized as follows:
Section~\ref{section:mean-field} gives an overview of the stationary states of the Lotka--Volterra model for
predator-prey competition and their stability within the mean-field theory framework.
It next describes the features found by numerically integrating the coupled rate equations for periodically
varying carrying capacity.
We then mathematically analyze the (quasi-)stationary state of the mean-field model in both the slow- and
fast-switching regimes.
Our implementation for our corresponding stochastic lattice model and the ensuing simulation data are
presented in Section~\ref{section:lattice_model}, and compared with the mean-field results.
Finally, our summary and concluding remarks are provided in Section~\ref{section:conclusion}.
\section{Lotka--Volterra predator-prey competition: mean-field theory}
\label{section:mean-field}
\subsection{Constant carrying capacity: mean-field rate equations and stability analysis}
\label{subsection:stability_analysis}
Mean-field rate equations for stochastic dynamical reaction systems are approximate deterministic equations
that aptly describe a well-mixed setup.
Even though they neglect spatial correlations and temporal fluctuations, they are often useful to gain intuition
on the system's expected behavior.
In Sec.~\ref{section:lattice_model}, we shall compare the results obtained with the mean-field equations
with the Monte Carlo simulation data from the full stochastic model (\ref{eq:reactions}).
For the Lotka--Volterra predator-prey competition model (\ref{eq:reactions}), the classical mean-field rate
equations that describe the time evolution of the mean predator and prey densities $a(t)$ and $b(t)$ read
\begin{subequations}
\label{eq:mean-field}
\begin{eqnarray}
\frac{da(t)}{dt} &=& -\mu a(t) +\lambda a(t) b(t) \ , \label{subeq:pred_MF} \\
\frac{db(t)}{dt} &=& \sigma b(t) \left( 1-\frac{a(t)+b(t)}{K} \right) -\lambda a(t) b(t) \ ,
\label{subeq:prey_MF}
\end{eqnarray}
\end{subequations}
where $K$ denotes the (global) carrying capacity.
Except for the growth-limiting factor $1 - [a(t)+b(t)] / K$, these rate equations can be understood as
representing gain / loss terms for reactions that increase / decrease the population densities.
Their mean-field character resides in the assumed factorization for the non-linear predation reaction with
rate $\lambda$ of a two-point correlation function into a mere density product, which assumes statistical
independence and the absence of correlations.
The growth-limiting factor is used to model limited finite resources, and vanishes if $a(t)+b(t)=K$.
In that case, the prey density's temporal derivative becomes negative, indicating a strictly decreasing prey
population.
We remark that adding an explicit growth limiting term for the predator density is not required since the
predators' growth is determined by the prey density.
Hence, if the prey species has a growth limiting factor, this will indirectly constrain the predator population
abundance as well.
The stationary states of this system are given by constant solutions to (\ref{eq:mean-field}).
This results in three fixed points $(a^*,b^*)=\{(0,0), (0,K), (a_0,b_0)\}$, where
\begin{equation}
\label{eq:stationary}
a_0 = \frac{\sigma K}{\lambda K + \sigma} \left( 1 - \frac{\mu}{\lambda K} \right) , \quad
b_0 = \frac{\mu}{\lambda} \ .
\end{equation}
The solution $(0,0)$ represents total population extinction.
At the fixed point $(0,K)$, the predator species goes extinct while the prey species fills the entire system to
full capacity $K$.
Finally, the solution $(a_0,b_0)$ with non-zero densities for both species represents predator-prey
coexistence.
Note that $a_0 > 0$ requires $\mu / \lambda < K$.
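As a quick sanity check (ours, not part of the paper's analysis), the densities (\ref{eq:stationary}) can be substituted back into the rate equations, whose right-hand sides must then vanish:

```python
def coexistence_fixed_point(mu, sigma, lam, K):
    """Coexistence densities (a0, b0); a0 > 0 requires lam > mu / K."""
    a0 = sigma * K / (lam * K + sigma) * (1 - mu / (lam * K))
    b0 = mu / lam
    return a0, b0
```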
Next we consider the (linear) stability of these solutions, which is achieved by linearizing
(\ref{eq:mean-field}) around the three distinct stationary states.
Shifting the densities by their stationary solutions $a(t)=a^*+\delta a(t)$, $b(t)=b^*+\delta b(t)$,
inserting this transformation into the original rate equations, and keeping only terms linear in the small
deviations $(\delta a(t),\delta b(t))$, we obtain the matrix equation $\mathbf{\dot{x}=Jx}$, where
$\mathbf{x}=\bigl( \delta a(t) \ \delta b(t) \bigr)^T$, the dot represents the time derivative, and the
Jacobian matrix $\mathbf{J}$ is explicitly given by
\begin{eqnarray}
\mathbf{J} = \begin{pmatrix} \lambda b^* - \mu & \lambda a^* \\
- \left( \frac{\sigma}{K}+\lambda \right) b^* \
& - \lambda a^* + \frac{\sigma}{K} \left( K-a^*-2b^* \right) \end{pmatrix} .
\end{eqnarray}
The dynamical behavior of the system in the vicinity of a fixed point follows from the eigenvalues
$\epsilon_\pm$ of the Jacobian matrix at each stationary point.
First let us consider the extinction fixed point $(0,0)$ with associated eigenvalues
$(\epsilon_-,\epsilon_+) = (- \mu,\sigma)$.
Both eigenvalues are real, indicating exponential behavior near the fixed point.
Yet the extinction stationary point is linearly unstable in mean-field theory against prey growth, since
$\epsilon_+ = \sigma$ is positive.
While this result is intuitive considering the fact that any small deviation in the prey density leads to
exponential growth of the prey, we recall that in the original stochastic model, in any finite system
total extinction represents the only asymptotically stable stationary absorbing state.
Next, the eigenvalues for the predator extinction fixed point $(0,K)$ are
$(\epsilon_-,\epsilon_+) = (\lambda K - \mu, - \sigma)$, which are also both real.
This stationary state is only stable with respect to small perturbations if $\lambda < \lambda_c = \mu/K$.
Finally, the two-species coexistence stationary point $(a_0,b_0)$ has associated eigenvalues
\begin{equation}
\epsilon_\mp = -\frac{\sigma \mu}{2 \lambda K} \left[ 1 \pm
\sqrt{1 - \frac{4 \lambda K}{\sigma} \left( \frac{\lambda K}{\mu} - 1 \right)} \, \right] .
\end{equation}
For $\lambda_s = (\mu / 2K) \left(1 + \sqrt{1 + \sigma / \mu} \right) > \lambda > \lambda_c$,
both eigenvalues are real and negative, and hence the stationary point is stable and small perturbations
exponentially relax back towards it.
If $\lambda > \lambda_s$, the eigenvalues form a complex-conjugate pair with negative real part,
indicating that the stationary point is still stable, but that the system exhibits decaying
oscillations in its vicinity.
Yet for $\lambda < \lambda_c$ the eigenvalues are both real and assume opposite signs,
$\epsilon_- < 0$ and $\epsilon_+ > 0$.
Consequently, the stationary solution $(a_0,b_0)$ turns into an unstable saddle point.
This analysis demonstrates that the mean-field rate equations (\ref{eq:mean-field}) predict a continuous
active-to-absorbing state transition at $\lambda = \lambda_c$.
The absorbing state is the predator extinction phase $(0,K)$ which is stable only for $\lambda < \lambda_c$.
The active phase is the species coexistence phase $(a_0,b_0)$ which only exists and is then stable for
$\lambda > \lambda_c$.
This fixed point is a stable node for $\lambda < \lambda_s$ and becomes an attractive focus for
$\lambda > \lambda_s$.
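The fixed-point classification above is easy to verify numerically. The following Python sketch (our own helper, with the illustrative parameter values $\sigma = \mu = 0.1$ and $K = 10$) evaluates the closed-form eigenvalues $\epsilon_\mp$ at the coexistence fixed point and confirms the node-to-focus transition at $\lambda_s$.

```python
import numpy as np

def coexistence_eigenvalues(sigma, mu, lam, K):
    """Closed-form eigenvalues at the coexistence fixed point (a_0, b_0)."""
    disc = 1 - (4 * lam * K / sigma) * (lam * K / mu - 1)  # radicand
    pref = -sigma * mu / (2 * lam * K)
    sq = np.sqrt(complex(disc))
    return pref * (1 + sq), pref * (1 - sq)

sigma = mu = 0.1
K = 10.0
lam_c = mu / K                                          # extinction threshold
lam_s = (mu / (2 * K)) * (1 + np.sqrt(1 + sigma / mu))  # node-to-focus boundary

# lambda_c < lambda < lambda_s: real, negative eigenvalues -> stable node
e_m, e_p = coexistence_eigenvalues(sigma, mu, 0.011, K)
node = e_m.imag == 0 and e_m.real < 0 and e_p.real < 0

# lambda > lambda_s: complex-conjugate pair, negative real part -> stable focus
f_m, f_p = coexistence_eigenvalues(sigma, mu, 2 * lam_s, K)
focus = f_m.imag != 0 and f_m.real < 0
```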
The active-to-absorbing phase transition describing predator extinction is also observed in spatially extended
stochastic systems.
Away from criticality the system's behavior only changes quantitatively relative to the mean-field analysis.
Near the phase transition the critical exponents governing the model's dynamical scaling laws acquire
substantial corrections due to fluctuations in dimensions $d \leq d_c = 4$.
For a more thorough review of the stochastic Lotka--Volterra predator-prey model in a static environment,
we refer to Refs.~\cite{uwe1,uwe2}.
\subsection{Periodically switching carrying capacity: numerical integration of the coupled rate equations}
\label{subsection:mean-field_results}
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{K.png}
\caption{Sketch illustrating the time dependence of the periodically switching carrying capacity $K(t)$:
$T_k$ is the full period of the signal; $K_-$ and $K_+$ are its low and high values.}
\label{fig:K(t)}
\end{figure}
\begin{figure*}[t]
\subfloat[\label{subfig:mean_field_density_a}$T_k=60$]{%
\includegraphics[width=0.67\columnwidth]{MFdensity60.png} \
}
\subfloat[\label{subfig:mean_field_density_b}$T_k=80$]{%
\includegraphics[width=0.67\columnwidth]{MFdensity80.png} \
}
\subfloat[\label{subfig:mean_field_density_c}$T_k=200$]{%
\includegraphics[width=0.67\columnwidth]{MFdensity200.png}
}
\caption{Predator (full red) and prey (dashed blue) density time traces obtained by numerical integration
of the coupled mean-field rate equations with periodically switching carrying capacity $K(t)$ (the shaded
gray areas indicate the excluded densities).
The parameters used here are $\sigma=\mu=\lambda=0.1$, and $K_-=1$, $K_+=10$.}
\label{fig:mean_field_density}
\end{figure*}
\begin{figure*}[t]
\subfloat[\label{subfig:mean_field_fft_a}$T_k=60$]{%
\includegraphics[width=0.67\columnwidth]{MFfft60.png} \
}
\subfloat[\label{subfig:mean_field_fft_b}$T_k=80$]{%
\includegraphics[width=0.67\columnwidth]{MFfft80.png} \
}
\subfloat[\label{subfig:mean_field_fft_c}$T_k=200$]{%
\includegraphics[width=0.67\columnwidth]{MFfft200.png}
}
\caption{Fourier transforms of the predator (red squares) / prey (blue crosses) density time evolution from
Fig.~\ref{fig:mean_field_density}, with parameters $\sigma=\mu=\lambda=0.1$, $K_-=1$, $K_+=10$.}
\label{fig:mean_field_fft}
\end{figure*}
In this section, we describe results obtained from numerically integrating the coupled rate equations
(\ref{eq:mean-field}) subject to a periodically switching carrying capacity.
As depicted in Fig.~\ref{fig:K(t)}, the carrying capacity $K(t)$ is taken to be a rectangular time signal
ranging between the low and high values $K_-$ and $K_+$, and with full switching period $T_k$ (i.e.,
from $K_\mp$ back to $K_\mp$).
This functional variation of the carrying capacity does not, of course, constitute a realistic model for
species interacting in nature, since food resources do not change in a discontinuous manner.
However, it can be argued that seasonal changes in resource availability induce a comparatively abrupt
drop or increase of the carrying capacity between winter and summer.
The following results will later be utilized to highlight the differences between the mean-field approximation
and the stochastic lattice model.
We remark that a full quantitative comparison between the two models is uninformative because in
the lattice model one prescribes microscopic reaction probabilities, whereas in the mean-field system one
controls the effective macroscopic reaction rates.
A thorough quantitative analysis would require fitting the stochastic lattice data to the mean-field results in
order to extract the effective (and usually scale-dependent) macroscopic rates.
Here, we are not interested in the detailed quantitative differences between the lattice and mean-field models.
Rather we shall focus on the qualitative distinctions between the two models, and will specifically highlight
features predicted by the mean-field equations that are not present in the stochastic lattice system.
The mean-field equations were numerically integrated by employing a fourth-order Runge--Kutta scheme with
(dimensionless) time increment $\Delta t = 0.01$; the interval $t_0 = 100 \, \Delta t$ sets the basic unit
time scale relative to which all times and inverse rates will henceforth be measured in this section.
We set the initial conditions to $\rho_a(0) = \rho_b(0) = 0.5$ and $K(0) = K_-$, and have confirmed that our
results do not depend on these chosen initial values.
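A minimal implementation of the integration scheme just described might look as follows (Python sketch; the function names are ours, and the parameters correspond to Fig.~\ref{fig:mean_field_density} with $T_k = 60$):

```python
import numpy as np

def rhs(x, t, sigma, mu, lam, K_lo, K_hi, T_k):
    """Mean-field rate equations with square-wave carrying capacity, K(0) = K_-."""
    a, b = x
    K = K_lo if (t % T_k) < T_k / 2 else K_hi
    return np.array([lam * a * b - mu * a,
                     sigma * b * (1 - (a + b) / K) - lam * a * b])

def rk4_step(x, t, dt, *p):
    """One classical fourth-order Runge--Kutta step."""
    k1 = rhs(x, t, *p)
    k2 = rhs(x + 0.5 * dt * k1, t + 0.5 * dt, *p)
    k3 = rhs(x + 0.5 * dt * k2, t + 0.5 * dt, *p)
    k4 = rhs(x + dt * k3, t + dt, *p)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

p = (0.1, 0.1, 0.1, 1.0, 10.0, 60.0)   # sigma, mu, lambda, K_-, K_+, T_k
dt, x = 0.01, np.array([0.5, 0.5])     # rho_a(0) = rho_b(0) = 0.5
traj = np.empty((30000, 2))
for n in range(30000):                 # integrate up to t = 300 (five periods)
    traj[n] = x
    x = rk4_step(x, n * dt, dt, *p)
```

The stored trajectory can then be Fourier analyzed as in Fig.~\ref{fig:mean_field_fft}.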
Figure~\ref{fig:mean_field_density} displays the resulting predator and prey densities $\rho(t)$ as functions
of time.
For both switching periods $T_k=60$ and $T_k=80$, we clearly observe period-doubling effects in the time
traces.
This is further confirmed by the Fourier transforms of these temporal evolutions shown in
Fig.~\ref{fig:mean_field_fft}.
For $T_k=60$, the highest Fourier peak occurs at a period $t=120$ indicating period-doubling.
However, an additional smaller peak emerges at $t=240$, reflecting that the density repeats only after
four switching periods of the carrying capacity, which indicates the presence of a period-quadrupling effect.
Similar period-doubling is visible for $T_k=80$, but no period-quadrupling is discernible.
Further increase in the carrying capacity period evidently eliminates period-doubling phenomena as shown in
Fig.~\ref{subfig:mean_field_fft_c}.
We detect the highest peak in the density Fourier transforms for $T_k=200$ at $t = T_k / 3$, a harmonic of
the driving period.
This feature is in fact also observed in the lattice model, in contrast to the period-doubling at smaller
periods $T_k$, for which we shall find that the internal reaction noise in the stochastic model washes away
these intriguing non-linear effects.
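The way the dominant period is read off from the density Fourier transforms can be illustrated with a synthetic stand-in signal (a hypothetical trace, not simulation output) in which a period-doubled component at $2 T_k = 120$ dominates the direct response at the driving period $T_k = 60$:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 6000, dt)
# Toy density trace: period-doubled component (period 120) dominating
# the direct response at the driving period T_k = 60.
signal = 1.0 + 0.3 * np.sin(2 * np.pi * t / 120) + 0.1 * np.sin(2 * np.pi * t / 60)

amplitude = np.abs(np.fft.rfft(signal - signal.mean()))
freq = np.fft.rfftfreq(signal.size, d=dt)
peak_period = 1.0 / freq[np.argmax(amplitude)]   # dominant period of the trace
```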
\subsection{Quantitative analysis: slow-switching regime}
\label{subsection:quantative_analysis_slow}
The stationary mean-field population densities in the coexistence phase are given in
Eq.~(\ref{eq:stationary}).
For an environment where $K$ periodically switches between two constant values $K_-$ and $K_+$, the
long-time behavior of the system depends on these stationary densities.
If the period of the oscillating environment is sufficiently long such that the system reaches the stationary
state for either $K$ value, then the densities can effectively be described as oscillating between two constant
values with the same period $T_k$ as the carrying capacity.
In that case, the averages of the predator and prey densities over one period can simply be approximated by
the arithmetic means $(\Tilde{a},\Tilde{b})$ of the two stationary values $(a_-,b_-)$ and $(a_+,b_+)$
pertaining to $K=K_-$ and $K=K_+$, respectively.
Thus we obtain
\begin{subequations}
\begin{eqnarray}
\Tilde{a} &=& \frac{a_-+a_+}{2} = \frac{\sigma}{2 \lambda} \left( \frac{\lambda K_- - \mu}
{\lambda K_- + \sigma} + \frac{\lambda K_+ - \mu}{\lambda K_+ + \sigma} \right)
\nonumber \\
&=& \frac{\sigma}{\lambda} \ \frac{2 \lambda^2 K_- K_+ + \lambda (\sigma - \mu) (K_- + K_+)
- 2 \mu \sigma}{2 (\lambda K_- + \sigma) (\lambda K_+ + \sigma)} \, ,
\label{subeq:averagepredator} \\
\Tilde{b} &=& \frac{b_-+b_+}{2} = \frac{\mu}{\lambda} \ .
\end{eqnarray}
\end{subequations}
We rewrite the mean predator density in terms of an equivalent time-averaged effective carrying capacity
$K^*$ defined through
\begin{equation}
\Tilde{a} = \frac{\sigma}{\lambda} \ \frac{\lambda K^*-\mu}{\lambda K^*+\sigma} \ .
\label{eq:a_tilde}
\end{equation}
Comparison with the explicit result (\ref{subeq:averagepredator}) yields
\begin{equation}
K^* = \frac{2 K_- K_+ + (K_- + K_+) \, \sigma / \lambda}{K_- + K_+ + 2 \sigma / \lambda} \ ,
\label{eq:equivalent_K}
\end{equation}
which reduces to the rate-independent harmonic average ${\bar K} = 2 K_- K_+ / (K_- + K_+)$ for large
$K_-, K_+ \gg 1$.
Hence, in the slow-switching regime, the system can be described as oscillating around the average
population densities corresponding to the constant rate-dependent effective carrying capacity $K^*$.
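As a concrete check of Eq.~(\ref{eq:equivalent_K}), the short sketch below (helper functions ours) evaluates $K^*$ and ${\bar K}$ for the parameters of Fig.~\ref{fig:mean_field_K_eq}, and verifies that the stationary predator density at $K^*$ reproduces the arithmetic mean $\Tilde{a}$ of the two stationary values:

```python
def stationary_a(sigma, mu, lam, K):
    """Stationary predator density a^* in the coexistence phase."""
    return (sigma / lam) * (lam * K - mu) / (lam * K + sigma)

def effective_K(K_lo, K_hi, sigma, lam):
    """Rate-dependent effective carrying capacity K^* of the slow-switching regime."""
    r = sigma / lam
    return (2 * K_lo * K_hi + (K_lo + K_hi) * r) / (K_lo + K_hi + 2 * r)

sigma = mu = lam = 0.1
K_lo, K_hi = 2.0, 6.0
K_star = effective_K(K_lo, K_hi, sigma, lam)       # 3.2 for these parameters
K_bar = 2 * K_lo * K_hi / (K_lo + K_hi)            # harmonic average: 3.0
a_tilde = 0.5 * (stationary_a(sigma, mu, lam, K_lo) +
                 stationary_a(sigma, mu, lam, K_hi))  # arithmetic mean of a_-, a_+
```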
\begin{figure}[b]
\subfloat[\label{subfig:mean_field_K_eq_a}predator density]{%
\includegraphics[width=0.5\columnwidth]{MFfasta.png}%
}\hfill
\subfloat[\label{subfig:mean_field_K_eq_b}prey density]{%
\includegraphics[width=0.5\columnwidth]{MFfastb.png}%
}
\caption{Long-time predator and prey densities $\rho_\infty$ averaged over $10$ periods of the switching
carrying capacity, vs. $T_k$ in units of $\tau$, where $\tau$ represents the characteristic intrinsic oscillation
period for a Lotka--Volterra model with fixed carrying capacity $\bar{K}$.
The parameters used for the oscillating environment are $\sigma=\mu=\lambda=0.1$, and $K_-=2$,
$K_+=6$, which yields the harmonic average $\bar{K} = 3$ (red) and $K^*=3.2$ (dashed orange).
The corresponding stationary densities follow from Eq.~(\ref{eq:stationary}).}
\label{fig:mean_field_K_eq}
\end{figure}
Through numerical integration of the mean-field rate equations, we tested the harmonic average
hypothesis for different switching periods, and confirmed Eq.~(\ref{eq:equivalent_K}) in the slow-switching
regime.
We note that this comparison is facilitated for the mean-field model compared to the stochastic lattice system
because we have exact formulas available for the stationary density values, and $K$ is not required to be an
integer.
Figure~\ref{fig:mean_field_K_eq} shows the comparison of the numerically obtained population densities
with periodically varying $K(t)$ with the corresponding stationary values obtained with a simple harmonic
average of the carrying capacities and the rate-dependent effective carrying capacity (\ref{eq:equivalent_K}).
Interestingly, computing the stationary prey density from the straightforward harmonic carrying capacity
average ${\bar K}$ yields accurate results for a large range of switching periods, as is apparent in
Fig.~\ref{subfig:mean_field_K_eq_b}.
This is due to the fact that for a static carrying capacity, the prey density oscillates about its stationary value,
and the fluctuations about it almost precisely average out, as verified in Fig.~\ref{fig:prey_average_density}.
Since within the mean-field framework, $b^*$ and hence $\Tilde b$ do not depend on $K$, any equivalent
carrying capacity would work for the prey population.
The predator density also follows the harmonically averaged carrying capacity for small periods, see below;
and is indeed aptly captured by the rate-dependent equivalent carrying capacity (\ref{eq:equivalent_K}) for
large switching periods.
For intermediate periods $T_k$, we observe a non-monotonic crossover regime with a large resonance-like
spike, see Fig.~\ref{subfig:mean_field_K_eq_a}.
We verified that these findings do not depend on the initial conditions of the system.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{staticaverage.png}
\caption{Numerical integration for the prey density $b(t)$ for a static environment (full black) for
$\sigma = \mu = \lambda = 0.1$ and $K = 10$, compared with the stationary value $b_0 = 1$
(dashed red).
The average of the oscillating black curve over time is ${\bar b} = 1.00246$.}
\label{fig:prey_average_density}
\end{figure}
\subsection{Quantitative analysis: fast-switching regime}
\label{subsection:quantative_analysis_fast}
The coupled mean-field rate equations (\ref{eq:mean-field}) suggest that in the fast-switching regime, both
species' densities oscillate about values that are equal to the stationary population densities for an equivalent
carrying capacity ${\bar K}$ that is just the harmonic average of $K_-$ and $K_+$.
This follows from the fact that the prey density rate equation (\ref{subeq:prey_MF}) depends explicitly on
$1 / K$.
Based on these observations, we construct an ansatz for the long-time behavior of both species' densities as
follows.
We first shift time according to $t \to t - NT_k$, where $N$ is a large integer such that at $t = N T_k$ the
system has reached its quasi-stationary state.
Hence this time axis shift defines $t=0$ to be the start of an environmental cycle in the long-time regime.
If the system is thus initialized at the onset of the low carrying capacity state, i.e., at $t = 0$ it just switched
from $K_+$ to $K_-$, then at $t = T_k / 2$ it will flip back from $K_-$ to $K_+$, and that cycle repeats at
$t =T_k$.
We now derive an approximate solution that describes the densities in one cycle $t \in [0,T_k]$.
Henceforth we shall refer to the region $t \in [0, T_k / 2]$ as $\mathcal{T_-}$, and the time interval
$t \in [T_k / 2,T_k]$ as $\mathcal{T_+}$.
Since the prey density exhibits a discontinuity in its first time derivative at $t = T_k / 2$, it can be described
by a piece-wise function.
In the fast-switching regime, we may apply a short-time Taylor expansion for the population dynamics, and
retain only the linear term.
The absolute values of the prey density slope in the intervals $\mathcal{T_-}$ and $\mathcal{T_+}$ must
be the same, due to the fact that the prey density is periodic, $b(T_k) = b(0)$, and continuous at the jumps
in between these two regions.
In $\mathcal{T_-}$ the system is in the low carrying capacity state, therefore the prey density is a decreasing
function of time, and its slope should be negative.
For $t \in \mathcal{T_+}$, the prey density has a positive slope, since now the system is in the high carrying
capacity state.
These considerations motivate the following simple ansatz for the prey density,
\begin{eqnarray}
\label{eq:prey_ansatz}
b(t) = \begin{cases}
b_1-\alpha t & \quad t \in \mathcal{T_-} \ , \\
b_2+\alpha t & \quad t \in \mathcal{T_+} \ , \end{cases}
\end{eqnarray}
which is numerically verified in Fig.~\ref{fig:prey_ansatz}.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{MFpreyansatz.png}
\caption{Numerical integration of the mean-field equations (black) for the parameters
$\sigma = \mu = 0.1$, $\lambda = 0.5$, $T_k = 10$, and $K_- = 2$, $K_+ = 6$.
The dashed red graph represents a linear fit applied to the numerical data, resulting in $\alpha = 0.00125$.}
\label{fig:prey_ansatz}
\end{figure}
The prey density is continuous at the boundary $t = T_k / 2$, whence $b_2 = b_1 - \alpha T_k$.
Moreover, in the fast-switching regime, the density variations of both species over one period of the carrying
capacity remain small relative to their average values.
Consequently, $1 / K(t)$ is the only significant term when averaging Eq.~(\ref{subeq:prey_MF}).
Its average leads to an equivalent carrying capacity that is equal to the harmonic average ${\bar K}$.
Therefore, the system reaches a quasi-stationary state, where both densities oscillate around their stationary
values for an equivalent carrying capacity ${\bar K}$.
The temporal average of Eq.~(\ref{eq:prey_ansatz}) needs to be $b_0$.
Imposing this condition, we obtain
\begin{eqnarray}
\label{eq:prey_ansatz2}
b(t) = \begin{cases}
b_0 - \alpha \left( t - \frac{T_k}{4} \right) & \quad t \in \mathcal{T_-} \ , \\
b_0 + \alpha \left( t - \frac{3T_k}{4} \right) & \quad t \in \mathcal{T_+} \ ; \end{cases}
\end{eqnarray}
the slope constant $\alpha$ shall be determined later.
The rate equation for the predator density (\ref{subeq:pred_MF}) may now be cast into a more suggestive
form,
\begin{equation}
\dot{a}(t) = \lambda \, a(t) \left[ b(t) - b_0 \right] ,
\label{eq:modified_pred_MF}
\end{equation}
which indicates that the extrema of $a(t)$ occur at times when $b(t) = b_0$.
Using the ansatz~(\ref{eq:prey_ansatz2}), this happens at $t = T_k / 4$ and $t = 3T_k / 4$.
Equation~(\ref{eq:modified_pred_MF}) can then be integrated to solve for the predator density,
\begin{eqnarray}
a(t) &\sim& A \, e^{\lambda \left( \int^t b(t') \, dt' - b_0 t \right)} \nonumber \\
&=& \begin{cases}
A \, e^{- \frac{\lambda \alpha}{2} \left( t^2 - \frac{T_k}{2} t \right)}
& \quad t \in \mathcal{T_-} \ , \\
A' \, e^{\frac{\lambda \alpha}{2} \left(t^2 - \frac{3T_k}{2} t \right)}
& \quad t \in \mathcal{T_+} \ , \end{cases} \nonumber
\end{eqnarray}
where $A$ and $A'$ are integration constants.
Since the predator density is required to be continuous at $t = T_k / 2$, one arrives at the relation
$A' = A \, e^{\lambda\alpha T_k^2 / 4}$, which yields the approximate predator density solution
\begin{eqnarray}
a(t) = \begin{cases}
A \, e^{- \frac{\lambda\alpha}{2} \left( t^2 - \frac{T_k}{2} t \right)}
& \quad t \in \mathcal{T_-} \ , \\
A \, e^{\frac{\lambda\alpha}{2} \bigl( t^2 - \frac{3T_k}{2} t + \frac{T_k^2}{2} \bigr)}
& \quad t \in \mathcal{T_+} \ . \end{cases}
\label{eq:aans}
\end{eqnarray}
The average of the predator density over one cycle of environmental switching then becomes
\begin{eqnarray}
\frac{1}{T_k} \int_0^{T_k} \!\! a(t) \, dt = 2\sqrt{2 \pi} A \, e^{T_k^2 \alpha \lambda / 32} \
\frac{\erf\!\left( \frac{T_k \sqrt{\alpha\lambda}}{4 \sqrt{2}} \right)}{T_k \sqrt{\alpha\lambda}} \ . \
\label{eq:average_pred}
\end{eqnarray}
Under the assumption of fast environmental switching, $T_k$ should be the smallest time scale in the
system, and the explicit form of Eq.~(\ref{eq:average_pred}) suggests that the fast-switching regime is
quantitatively delineated by $T_k \sqrt{\alpha\lambda} \ll 1$.
The still undetermined parameter is the (initial) slope of the prey density $\alpha = |\dot b|_0$.
To zeroth order in $\alpha$, either immediately from Eq.~(\ref{eq:aans}) or by expanding
Eq.~(\ref{eq:average_pred}) in $T_k \sqrt{\alpha\lambda}$, one obtains the simple result
\begin{eqnarray}
\frac{1}{T_k}\int_0^{T_k} a(t) \, dt = A + O(T_k \sqrt{\alpha\lambda}) \ .
\end{eqnarray}
Since this average must equal the stationary value of predator density for a harmonically averaged carrying
capacity, we may fix the integration constant
\begin{eqnarray}
A \approx \frac{\sigma}{\lambda} \, \frac{\lambda {\bar K} - \mu}{\lambda {\bar K} + \sigma} =
\frac{\sigma}{\lambda} \, \frac{2 \lambda K_+K_- - \mu \left( K_++K_- \right)}
{2 \lambda K_+ K_- + \sigma \left( K_+ + K_- \right)} \, , \
\end{eqnarray}
to leading order in an expansion in powers of $T_k \sqrt{\alpha\lambda}$.
The left-hand side of Eq.~(\ref{subeq:prey_MF}) equals the constant slope of the prey density under the
fast-switching approximation.
Since $t < T_k $, we also have $t \sqrt{\alpha\lambda} \ll 1$, and with
$a(t) = A +O(T_k \sqrt{\alpha\lambda})$ one has $b(t) = b_0 + O(T_k \alpha)$.
Upon inserting these asymptotic values into (\ref{subeq:prey_MF}) for $t \in \mathcal{T_-}$, we arrive at
\begin{equation}
-\alpha\approx\sigma b_0\left(1-\frac{A+b_0}{K_-}\right)-\lambda A b_0
\end{equation}
and thus ultimately at
\begin{eqnarray}
\alpha \approx \frac{\mu\sigma}{\lambda} \,
\frac{(\mu + \sigma) (K_+ - K_-)}{2\lambda K_+ K_- + \sigma (K_++K_-)} \ .
\end{eqnarray}
These approximations fully characterize the long-time quasi-stationary state in the fast-switching regime.
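The closed-form constants can be evaluated directly from the rates; the sketch below (our notation) does so for the parameters of Fig.~\ref{fig:prey_ansatz} and checks the smallness of the expansion parameter $T_k \sqrt{\alpha\lambda}$. Note that the resulting slope $\alpha = 0.00125$ coincides with the linear fit quoted in the figure caption.

```python
import numpy as np

def fast_switching_constants(sigma, mu, lam, K_lo, K_hi):
    """Leading-order predator amplitude A and prey slope alpha."""
    denom = 2 * lam * K_hi * K_lo + sigma * (K_hi + K_lo)
    A = (sigma / lam) * (2 * lam * K_hi * K_lo - mu * (K_hi + K_lo)) / denom
    alpha = (mu * sigma / lam) * (mu + sigma) * (K_hi - K_lo) / denom
    return A, alpha

sigma, mu, lam = 0.1, 0.1, 0.5
A, alpha = fast_switching_constants(sigma, mu, lam, K_lo=2.0, K_hi=6.0)
T_k = 10.0
expansion_parameter = T_k * np.sqrt(alpha * lam)   # must be << 1
```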
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{RMSE.png}
\caption{Relative root mean-square error of the approximate solution as function of
$T_k \sqrt{\lambda\alpha}$ for the predator (full red) and prey (dashed blue) populations.}
\label{fig:error_plot}
\end{figure}
In order to test this approximate solution, we computed the root mean-square error between our ansatz and
the result of numerically integrating the mean-field equations.
This error was then divided by the actual density average as obtained from numerical integration to obtain a
dimensionless error measure $\bar{\sigma}$.
In Fig.~\ref{fig:error_plot}, this relative error $\bar{\sigma}$ is plotted against the dimensionless carrying
capacity period $T_k \sqrt{\lambda\alpha}$.
As expected, our approximation yields small relative errors for $T_k \sqrt{\lambda\alpha} \ll 1$.
Interestingly, the asymptotic expansion seems to work even up to values $T_k \sqrt{\lambda\alpha} = 2$
with relative errors less than $10 \%$.
\section{Stochastic lattice model}
\label{section:lattice_model}
\subsection{Stochastic Monte Carlo simulation algorithm}
\label{subsection:methods}
In this section, we employ a lattice model to numerically simulate the stochastic Lotka--Volterra
predator-prey system (\ref{eq:reactions}), which allows us to investigate spatial structures and
reaction-induced spatio-temporal correlations.
The stochasticity of the system is implemented through an individual-based Monte Carlo algorithm.
We implement the model on a two-dimensional square lattice with periodic boundary conditions (i.e., a
toroidal simulation domain), where each lattice site holds information about the number of individuals of
each species at that location.
The initial configuration of the system is set up as a disordered state where each individual is placed at a
randomly selected lattice site.
We employ the following notation:
$n_a(x,y;t)$: number of predator individuals at site $(x,y)$ and at time $t$; $n_b(x,y;t)$: number of prey
individuals at site $(x,y)$ and at time $t$; $N_a(t)$: total number of predator individuals across the entire
lattice at time $t$; $N_b(t)$: total number of prey individuals across the entire lattice at time $t$;
$\bar{n}_i(t)$: $N_i(t) / L^2$, where $L$ is the linear lattice size, denotes the average species $i \in (a,b)$
density.
Time is simulated via Monte Carlo steps (MCS), such that at each Monte Carlo time step
\begin{enumerate}
\item a random location on the lattice $(x,y)$ is picked;
\item a random neighboring site $(x_{\rm new},y_{\rm new})$ is selected from the von Neumann
neighborhood (four nearest neighbors);
\item if $(x,y)$ contains a predator individual, we attempt $n_b(x_{\rm new},y_{\rm new};t)$
predation reactions as follows:
\begin{itemize}
\item generate a random number $r$;
\item if $r<\lambda$, decrease the number of prey at $(x_{\rm new},y_{\rm new})$ by $1$ and
increase the number of predators at $(x_{\rm new},y_{\rm new})$ by $1$;
\end{itemize}
\item next attempt a death reaction for the predator as follows:
\begin{itemize}
\item generate a random number $r$;
\item if $r < \sigma$, decrease the number of predators at $(x,y)$ by $1$;
\end{itemize}
\item if $(x,y)$ contains a prey individual, attempt a reproduction reaction as follows:
\begin{itemize}
\item generate a random number $r$;
\item if $r < \mu$ and $n_a(x_{\rm new},y_{\rm new};t) + n_b(x_{\rm new},y_{\rm new};t) < K$,
increase the number of prey at site $(x_{\rm new},y_{\rm new})$ by $1$;
\end{itemize}
\item if $(x,y)$ is empty ($n_a(x,y;t)+n_b(x,y;t)=0$), return to step 1;
\item the above steps are repeated $N_a(t)+N_b(t)$ times.
\end{enumerate}
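A deliberately simplified Python rendition of one Monte Carlo step may help clarify the update rules; the array names and the tiny lattice are ours, and bookkeeping details of the actual simulation code may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]         # von Neumann neighborhood

def mc_step(n_a, n_b, sigma, mu, lam, K):
    """One Monte Carlo step: repeat N_a + N_b single-site update attempts."""
    L = n_a.shape[0]
    for _ in range(int(n_a.sum() + n_b.sum())):
        x, y = rng.integers(L, size=2)             # random lattice site
        dx, dy = MOVES[rng.integers(4)]            # random neighbor
        xn, yn = (x + dx) % L, (y + dy) % L        # periodic boundaries
        if n_a[x, y] > 0:
            for _ in range(n_b[xn, yn]):           # predation attempts
                if rng.random() < lam and n_b[xn, yn] > 0:
                    n_b[xn, yn] -= 1
                    n_a[xn, yn] += 1
            if rng.random() < sigma:               # predator death
                n_a[x, y] -= 1
        if n_b[x, y] > 0:                          # prey reproduction,
            if rng.random() < mu and n_a[xn, yn] + n_b[xn, yn] < K:
                n_b[xn, yn] += 1                   # restricted by K
    return n_a, n_b

# Illustrative run on a small lattice with K = 4 (all values here are examples).
n_a = rng.integers(0, 2, size=(16, 16))
n_b = rng.integers(0, 2, size=(16, 16))
n_a, n_b = mc_step(n_a, n_b, sigma=0.1, mu=0.1, lam=0.1, K=4)
```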
\begin{figure*}
\subfloat[\label{subfig:stochastic_density_movie_a}$t=0$]{%
\includegraphics[width=0.5\columnwidth]{samplemovie00.png}%
}\hfill
\subfloat[\label{subfig:stochastic_density_movie_b}$t=46$]{%
\includegraphics[width=0.5\columnwidth]{samplemovie46.png}%
}\hfill
\subfloat[\label{subfig:stochastic_density_movie_c}$t=70$]{%
\includegraphics[width=0.5\columnwidth]{samplemovie70.png}%
}\hfill
\subfloat[\label{subfig:stochastic_density_movie_d}$t=88$]{%
\includegraphics[width=0.5\columnwidth]{samplemovie88.png}%
}
\caption{Snapshots of a single run for a system with parameters $L = 256$,
$\sigma = \mu = \lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 100$; time $t$ is measured in
units of Monte Carlo steps.
The red and blue colored pixels indicate the presence of predators and prey, respectively, with the
brightness representing the local density, the pink colored pixels pertain to sites with both predator and prey
present, and the black pixels represent empty sites.
The system is initialized with $K(t=0) = K_- = 1$. The full movie can be viewed at \cite{note:movies}.}
\label{fig:stochastic_density_movie}
\end{figure*}
This implementation ensures that at each Monte Carlo time step, on average, all individuals in the lattice
attempt a reaction.
We utilize random updates (i.e., picking new lattice sites at random) rather than systematic sequential updates
(going over each lattice site in a specific sequence) in order to avoid introducing any bias in how reactions
occur in the system.
A choice now has to be made in how to precisely manage the population after switching from the high to the
low carrying capacity, because there will likely be an excess number of individuals at some lattice sites.
We have considered two implementations to deal with this issue:
In the first variant, we randomly removed any excess individuals to immediately reach the allowed low
carrying capacity value $K_-$.
While this implementation leads to interesting period-doubling behavior, we deemed it to be unrealistic.
In the second implementation, we left the excess particles on site, but restricted further prey reproduction at
lattice locations with more individuals than permitted.
Therefore, we allow the system to intrinsically relax to a configuration without excess individuals, since
eventually any superfluous predators would be forced to perish, and any excess prey would be devoured by
predators.
This intrinsic relaxation introduces a time scale set by the internal response time of the system, which is in
turn determined by the reaction rates.
The stochastic lattice system was simulated over multiple runs, thus averaging both over ensembles of
different initial conditions and distinct temporal histories; $\langle \ldots \rangle$ denotes the resulting
(double) ensemble averages.
We measured the average spatial species densities $\rho_i(t) = \langle \bar{n}_i(t) \rangle$, and computed
the (connected) auto-correlation functions at fixed positions,
$C_{ij}(t,t_0) = \langle \bar{n}_i(t)\bar{n}_j(t_0) \rangle
- \langle \bar{n}_i(t) \rangle \langle \bar{n}_j(t_0) \rangle$.
The static correlations as functions of spatial distance $|x - x_0|$ were extracted using the definition
$C_{ij}(x,x_0;y_0,t_0) = \langle n_i(x,y_0,t_0) n_j(x_0,y_0,t_0) \rangle - \langle n_i(x,y_0,t_0) \rangle
\langle n_j(x_0,y_0,t_0) \rangle$.
In the long-time regime, the system should be isotropic at length scales large compared with the lattice
constant, so that static correlations along the $x$ or $y$ directions will become identical.
We also assume that the system is homogeneous at those scales, and hence that the auto-correlations
should be independent of the reference positions $(x_0,y_0)$.
Consequently we determine the auto-correlations using the densities averaged over lattice sites, which
improves our statistics.
Both these assumptions were confirmed via explicit simulations.
Furthermore, we evaluated the static correlations at a specific, sufficiently late fixed time step $t_0$, but
again checked that all correlations are invariant under discrete time translation $t_0 \to t_0 +T_k$ with the
environmental switching period $T_k$.
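The connected correlation measurement can be sketched as follows (schematic Python with synthetic ensemble data; the array-shape convention is ours): for independent noise traces the cross-correlation scatters around zero, while the equal-time auto-correlation reduces to the variance.

```python
import numpy as np

def connected_correlation(series_i, series_j, t0):
    """C_ij(t, t0) = <n_i(t) n_j(t0)> - <n_i(t)> <n_j(t0)> over an ensemble.

    series_i, series_j: shape (runs, timesteps), one density trace per run.
    """
    mean_i = series_i.mean(axis=0)                 # ensemble average <n_i(t)>
    mean_j0 = series_j[:, t0].mean()               # ensemble average <n_j(t0)>
    return (series_i * series_j[:, [t0]]).mean(axis=0) - mean_i * mean_j0

rng = np.random.default_rng(0)
a = rng.normal(1.0, 0.1, size=(2000, 200))  # synthetic predator "densities"
b = rng.normal(1.0, 0.1, size=(2000, 200))  # independent prey "densities"
C_ab = connected_correlation(a, b, t0=50)   # scatters around zero
C_aa = connected_correlation(a, a, t0=50)   # C_aa[50] ~ variance
```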
The various parameters of the system are the three reaction rates $(\sigma,\mu,\lambda)$, the low and high
carrying capacity values $(K_-,K_+)$, and the period of the oscillating environment $T_k$.
However, we can eliminate one of these parameters by rescaling the units of time.
We chose to always fix $\sigma = \mu$, because they represent the rates for the linear prey reproduction
and predator death reactions, and we are predominantly interested in the behavior of the system as the
non-linear coupling $\lambda$ is varied.
Thus our varying control parameters consist of the set $(\lambda, K_-, K_+, T_k)$.
For fixed $\sigma$ and $\mu$, the critical predation rate $\lambda_c$ only depends on the carrying capacity.
Therefore, for the remainder of this paper we shall implicitly assume a fixed value for $\sigma = \mu$ and
indicate the critical threshold as $\lambda_c(K)$.
\subsection{Population densities}
\label{subsection:density}
Snapshots of a single simulation run of a representative system at different times are depicted in
Fig.~\ref{fig:stochastic_density_movie}.
According to our setup, the system is initially in a random configuration, so the predators consume the prey
available in their neighborhood.
At $t=46$, the predators have devoured most of their prey, and their number decays over time:
The system is in the predator extinction phase for $K=1$.
Therefore, without the external periodic environmental variation, the predators would eventually go extinct.
However, we see that at $t = 70$, after the carrying capacity has jumped at $t = T_k/2 =50$ from $K = 1$
to $K = 10$, the prey are permitted to reproduce more abundantly.
The prey population increase induces spreading waves of predators, in turn causing an enhancement of the
predator density, until by $t=88$ the predators almost fill the entire lattice.
When the carrying capacity drops back to $K = 1$ at $t = T_k = 100$, the predator density starts decaying again
over time towards the point of extinction until the carrying capacity is once more reset and the whole process
(stochastically) repeats.
\begin{figure}
\subfloat[\label{subfig:stochastic_density_a}$\lambda=0.1$]{%
\includegraphics[width=0.5\columnwidth]{sampledensity100.png}%
}\hfill
\subfloat[\label{subfig:stochastic_density_b}$\lambda=0.275$]{%
\includegraphics[width=0.5\columnwidth]{sampledensity275.png}%
}
\caption{Predator (full red) and prey (dashed blue) population densities averaged over $50$ realizations for
a system with $L = 256$, $\sigma = \mu = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 100$; the shaded
gray areas are excluded by the switching carrying capacity $K(t)$.
The critical predation rate values associated with fixed carrying capacities $K_-$, $K_+$ are
$\lambda_c(K=1) = 0.26(5)$ and $\lambda_c(K=10) = 0.01(0)$.}
\label{fig:stochastic_density}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.95\columnwidth]{samplefft.png}
\caption{Population density Fourier transforms as functions of the period $2 \pi / \omega$ (predators:
red squares; prey: blue crosses).
The first $2000$ Monte Carlo time steps were discarded before computing the Fourier transform in order to
eliminate the initial transient.
The parameters used here are $L = 256$, $\sigma = \mu = \lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and
$T_k = 100$.}
\label{fig:stochastic_fft}
\end{figure}
Figure~\ref{fig:stochastic_density} shows the long-time behavior of the density for two different values of
the predation rate $\lambda$.
For $\lambda=0.1$ (a), the system oscillates between the predator extinction phase, approached when
$K = 1$, and the two-species coexistence phase, when $K = 10$.
In contrast, for $\lambda = 0.275$ (b) the system resides in the species coexistence phase at both $K$ values.
Both population time traces show stable oscillations with the switching period $T_k$, as expected for a
dynamical system driven by a periodic external force.
This is further confirmed by the Fourier transform plots displayed in Fig.~\ref{fig:stochastic_fft}.
The prey density becomes non-smooth at points where the carrying capacity switches from $K_+$ to $K_-$ or
vice versa, while the predator density remains smooth at those points.
This is indicative of the fact that only the prey density explicitly depends on the carrying capacity, while the
predator density depends on $K$ through its coupling to the prey species.
In Fig.~\ref{subfig:stochastic_density_movie_a}, we see that even though the system is in the predator
extinction phase when $K=1$, the predator species $A$ is still able to maintain a non-zero population density
through the periodic environmental variation.
Indeed, we observe that the key difference between the runs for $\lambda = 0.1$ and $\lambda = 0.275$
resides in the amplitude of the oscillations, which drops significantly when the predation rate increases.
This is a general feature of the static Lotka--Volterra model.
However, the amplitude of the oscillation in Fig.~\ref{subfig:stochastic_density_movie_a} is even higher than
would be attained in a static system with fixed $K = 10$:
Driving the system away from reaching the absorbing state causes the densities to overshoot their stationary
state values for $K=10$.
While a static system would go extinct for low values of the predation rate, the periodic temporal variation of
the carrying capacity allows both species to coexist in this situation.
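The periodically switching carrying capacity can be sketched as a square-wave signal. A minimal sketch, under two assumptions that are ours: $T_k$ denotes the full period with equal dwell times $T_k/2$ at each value, and the phase convention $K(0) = K_-$ matches the initialization used in the simulations:

```python
def carrying_capacity(t, K_minus=1.0, K_plus=10.0, T_k=100):
    """Symmetric square-wave K(t): K_- for the first half-period, then K_+.

    Assumed conventions: full period T_k with equal dwell times T_k/2,
    and K(t=0) = K_-.
    """
    return K_minus if (t % T_k) < T_k / 2 else K_plus

print([carrying_capacity(t) for t in (0, 49, 50, 99, 100)])
# -> [1.0, 1.0, 10.0, 10.0, 1.0]
```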
\begin{figure}[t]
\subfloat[\label{subfig:better_coexistence_a}predators]{%
\includegraphics[width=0.5\columnwidth]{avslambda.png}%
}\hfill
\subfloat[\label{subfig:better_coexistence_b}prey]{%
\includegraphics[width=0.5\columnwidth]{bvslambda.png}%
}
\caption{Maximum population densities achieved in the late time interval $t \in [2000,5000]$, plotted
against the dimensionless rate ratio $\lambda / \sigma$, where $L = 256$, $\sigma = \mu = 0.1$,
$K_+ = 10$, $K_- = 1$, and $T_k = 100$ for the oscillating environment (black crosses), and for the same
parameters with fixed $K = 10$ for the static case (red squares).}
\label{fig:better_coexistence}
\end{figure}
In fact, the population oscillations become most prominent if the carrying capacity effectively switches the
system between the predators' absorbing and active phases.
To demonstrate that this is a generic feature of our model, we plot the maximum density values reached in the
simulations in the long-time limit in Fig.~\ref{fig:better_coexistence}.
For $\lambda > 6 \sigma$, we observe predator extinction; as was noted in Ref.~\cite{physics1}, the system
may, depending on the initial conditions, evolve into one of the two absorbing states for large predation rates.
We interpret this extinction transition to be caused by stochastic fluctuations in our finite simulation system:
As the predation rate becomes large, stochastic fluctuations are increasingly likely to drive the simulation
towards the absorbing predator extinction state.
For smaller predation rates, the asymptotic predator density decreases with growing $\lambda$.
In the two-species coexistence region, the simulation results for the systems with periodically varying
environment exhibit markedly larger oscillation amplitudes for both predator and prey populations.
Moreover, the extinction transition at high predation rate is moved to larger values of $\lambda / \sigma$ for
the simulation runs with periodically varying carrying capacities compared to systems with fixed environment.
The predator-prey density phase space plots are constructed in Fig.~\ref{fig:stochastic_phasespace} by
simulating the system for multiple predation rate values.
We see that for each $\lambda$ the system fluctuates around a closed orbit.
Upon increasing the predation rate $\lambda$, the radius of this closed orbit becomes smaller, while the
influence of stochastic fluctuations becomes more apparent.
For $\lambda=0.8$, the orbit approaches $\rho_A=0$ which means that the predator population is close to
extinction.
Raising the predation rate further to $\lambda = 0.9$, the system reaches the (finite system size) absorbing
state with vanishing predator density, see Fig.~\ref{fig:better_coexistence}.
\begin{figure}
\includegraphics[width=\columnwidth]{phasespace.png}
\caption{Predator-prey density phase space plots for various values of the predation rate $\lambda$ (as
indicated), with $L=256$, $\sigma = \mu = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 100$.
The initial behavior of the system was discarded for all $\lambda$ values, except for $\lambda = 0.9$, for
which the predator population becomes extinct.}
\label{fig:stochastic_phasespace}
\end{figure}
\subsection{Fast and slow switching regimes}
\label{subsection:fast_slow_swtiching}
We next carefully investigate how the system behaves in the two opposite limits of fast and slow
environmental switching, relative to the intrinsic period of the Lotka--Volterra population oscillations.
Figures~\ref{fig:stochastic_varying_period}(a) and (b) show both populations' densities for $T_k = 10$
(fast switching) and $T_k = 460$ (slow switching).
The time-averaged behavior of the density in the fast-switching regime resembles that of a system with a
constant equivalent carrying capacity $K^*$ that should be related to $K_-$ and $K_+$.
In the slow-switching regime the system is given sufficient time to approach a (quasi-)stationary state when
$K_+ = 10$.
The prey density then reaches very high values, and the system is slowly driven to predator extinction;
however, it would take many cycles of the changing environment for this absorbing state to be attained.
As the switching period $T_k$ is increased, the predator population may only survive for a few cycles;
eventually, when $T_k$ is set too large, it will go extinct before the prey food resources become
abundant again.
\begin{figure}
\subfloat[\label{subfig:stochastic_varying_period_a}$T_k=10$]{%
\includegraphics[width=0.5\columnwidth]{sampledensityperiod10.png}%
}\hfill
\subfloat[\label{subfig:stochastic_varying_period_b}$T_k=460$]{%
\includegraphics[width=0.5\columnwidth]{sampledensityperiod460.png}%
}
\caption{Predator (solid red) and prey (dashed blue) population densities averaged over $50$ realizations,
for $L = 256$, $\sigma = \mu = \lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and switching periods
(a) $T_k = 10$, (b) $T_k = 460$, with the gray areas here indicating the population densities excluded by
$K(t)$.}
\label{fig:stochastic_varying_period}
\end{figure}
We now explore the equivalent static environment hypothesis in the fast-switching regime in more detail.
The mean-field rate equations suggest that for very short periods this equivalent carrying capacity equals the
harmonic average ${\bar K}$ of $K_+$ and $K_-$, since Eqs.~(\ref{eq:mean-field}) only explicitly depend
on $1 / K$.
For longer periods, the mean-field model predicts that the dynamics becomes effectively equivalent to a
quasi-static system with a rate-dependent equivalent carrying capacity $K^*$, Eq.~(\ref{eq:equivalent_K}).
One should expect the slow-switching equivalent carrying capacity in the stochastic lattice model to display a
similar dependence on the microscopic reaction probabilities.
As mentioned earlier, their precise relationship with macroscopic reaction rates such as $\sigma$ and
$\lambda$ is however subtle and difficult to capture quantitatively, which poses a problem for stringently
testing Eq.~(\ref{eq:equivalent_K}) for the stochastic lattice model.
Yet for large $K_-$ and $K_+$, $K^*$ reduces approximately to the harmonic average ${\bar K}$,
independent of the reaction rates.
Hence we focus on testing the equivalent static environment hypothesis mainly with this effective carrying
capacity.
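Since the rate equations depend on $K$ only through $1/K$, fast switching amounts to replacing $1/K$ by its cycle average, which yields the harmonic mean. A minimal numerical check (the function name is ours):

```python
def harmonic_mean_K(K_minus, K_plus):
    """Harmonic average of two carrying capacities: 2 / (1/K_- + 1/K_+)."""
    return 2.0 / (1.0 / K_minus + 1.0 / K_plus)

# The two parameter sets used in the simulations below:
print(round(harmonic_mean_K(2, 6), 12))   # -> 3.0
print(round(harmonic_mean_K(4, 12), 12))  # -> 6.0
```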
\begin{figure}[b]
\subfloat[\label{subfig:stochastic_K_eq_K_low_a}predator density]{%
\includegraphics[width=0.5\columnwidth]{harmonictestlowKa.png}%
}\hfill
\subfloat[\label{subfig:stochastic_K_eq_K_low_b}prey density]{%
\includegraphics[width=0.5\columnwidth]{harmonictestlowKb.png}%
}
\caption{Long-time population densities $\rho_\infty$ averaged over six periods of the carrying capacity
$K(t)$, plotted versus $T_k / \tau$, where $\tau$ denotes the intrinsic period of the equivalent static system
with $K = \bar{K}$, for $L = 256$, $\sigma = \mu = \lambda = 0.1$, and $K_- = 2$, $K_+ = 6$ for the
oscillating environment (black crosses), while $K = {\bar K} = 3$ for the static environment with fixed
carrying capacity (full red).}
\label{fig:stochastic_K_eq_K_low}
\end{figure}
To this end, we first present Monte Carlo simulation data for our system with $K_-=2$ and $K_+=6$, hence
${\bar K} = 3$, obtained for a series of different switching periods $T_k$, measured relative to the intrinsic
population oscillation period $\tau$ at fixed $\bar{K}$.
For comparison, we also ran simulations with fixed carrying capacity $K = 3$; the resulting population
densities are displayed in Fig.~\ref{fig:stochastic_K_eq_K_low}.
We find that the predator density in the oscillating environment does not behave as if the environment were
static with a harmonically averaged carrying capacity ${\bar K}$, with a discrepancy in the predator density of
at least $18.4\%$.
In contrast, the time-averaged prey density $\rho_\infty$ matches with the static equivalent ${\bar K}$ value
for $T_k \approx 2.2 \tau$.
Yet for faster switching rates, we observe worse agreement with a discrepancy of up to $5.45\%$.
For periods $T_k > 2.2 \tau$, $\rho_\infty$ increases monotonically with $T_k / \tau$, deviating further from
the average prey density for the static equivalent ${\bar K}$.
For larger $T_k / \tau$, the discrepancy between the harmonically averaged and the oscillating environments
becomes more pronounced, although deviations remain below $10\%$.
Hence we conclude that our prey density data for an oscillating environment can be satisfactorily described
by an equivalent constant environment for a wide range of oscillation periods.
We note that both time-averaged population densities exhibit resonance-like extrema at $T_k \approx \tau$,
owing to the environment switching just after the predators and prey have reached their maximum and
minimum population counts, respectively, following their intrinsic Lotka--Volterra oscillations.
As the period of the environment increases, more of these population oscillations may occur before the
carrying capacity is reset, and integrating over one cycle of the environmental switching effectively averages
over multiple periods of the intrinsic oscillations.
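The long-time averages $\rho_\infty$ reported here (cf. the caption of Fig.~\ref{fig:stochastic_K_eq_K_low}) can be sketched as follows; we assume the density is sampled once per Monte Carlo step, and the function name is ours:

```python
import numpy as np

def long_time_average(rho, T_k, n_periods=6):
    """Average a density time series over its final n_periods switching periods."""
    rho = np.asarray(rho, dtype=float)
    window = n_periods * T_k
    return rho[-window:].mean()

# A signal oscillating symmetrically about 0.25 averages to 0.25 over full cycles:
t = np.arange(5000)
rho = 0.25 + 0.05 * np.sin(2 * np.pi * t / 100)
print(round(long_time_average(rho, T_k=100), 6))  # -> 0.25
```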
\begin{figure}[t]
\subfloat[\label{subfig:stochastic_K_eq_K_high_a}predator density]{%
\includegraphics[width=0.5\columnwidth]{harmonictesthighKa.png}%
}\hfill
\subfloat[\label{subfig:stochastic_K_eq_K_high_b}prey density]{%
\includegraphics[width=0.5\columnwidth]{harmonictesthighKb.png}%
}
\caption{Long-time population densities $\rho_\infty$ averaged over six periods of the carrying capacity
$K(t)$, plotted versus $T_k / \tau$, where $\tau$ denotes the intrinsic period of the equivalent static system
with $K = \bar{K}$, for $L = 256$, $\sigma = \mu = \lambda = 0.1$, and $K_- = 4$, $K_+ = 12$ for the
oscillating environment (black crosses), while $K = {\bar K} = 6$ for the static environment with fixed
carrying capacity (full red).}
\label{fig:stochastic_K_eq_K_high}
\end{figure}
\begin{figure*}[t]
\subfloat[\label{subfig:correlation_low_period_movie_a}$t=23$]{%
\includegraphics[width=0.5\columnwidth]{cormovies23.png}%
}\hfill
\subfloat[\label{subfig:correlation_low_period_movie_b}$t=28$]{%
\includegraphics[width=0.5\columnwidth]{cormovies28.png}%
}\hfill
\subfloat[\label{subfig:correlation_low_period_movie_c}$t=50$]{%
\includegraphics[width=0.5\columnwidth]{cormovies50.png}%
}\hfill
\subfloat[\label{subfig:correlation_low_period_movie_d}$t=101$]{%
\includegraphics[width=0.5\columnwidth]{cormovies101.png}%
}
\caption{Snapshots of a single run for a system with parameters $L = 256$, $\sigma = \mu = 0.5$,
$\lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 10$; time $t$ is measured in units of Monte Carlo steps.
The red and blue colored pixels indicate the presence of predators and prey, respectively, with the
brightness representing the local density, the pink colored pixels pertain to sites with both predator and prey
present, and the black pixels represent empty sites.
The system is initialized with $K(t=0) = K_- = 1$. The full movie can be viewed at \cite{note:movies}.}
\label{fig:correlation_low_period_movie}
\end{figure*}
In Fig.~\ref{fig:stochastic_K_eq_K_high} we repeat this numerical investigation for $K_- = 4$, $K_+ = 12$,
thus ${\bar K} = 6$.
The time-averaged prey density $\rho_\infty$ for the oscillating environment agrees quite well
with the corresponding value for the static equivalent environment for all switching periods.
However, the predator density for low periods does not match the harmonic mean hypothesis.
For periods $T_k > \tau$, we see that the predator density with the oscillating environment approaches
$\rho_\infty$ for the static equivalent environment.
This suggests that the harmonically averaged carrying capacity works well to describe the mean predator
population density for large $K_-$ and $K_+$ values, and for large environment oscillation periods, such that
the system reaches the stationary state before switching occurs.
In conclusion, stochastic fluctuations may change the form of the general equivalent static carrying capacity
(\ref{eq:equivalent_K}), yet it can still be approximated by the harmonic average for large carrying capacities.
Our simulation results indicate that the functional dependence of the prey density on the carrying capacity can
be well approximated as $b \sim 1 / K$ for a large range of environmental switching periods.
However, the predator density exhibits a more complicated dependence on the carrying capacity values and
$T_k$; it can only be approximated by $a \sim 1 / K$ for large $K_-$ and $K_+$ and for $T_k \gg \tau$.
In the latter limit, the system reaches its quasi-stationary state before the environment switches, which for the
parameter values used corresponds to a stable node with non-oscillatory kinetics; consequently, there is
little variation with $T_k$.
Generally we observe that the long-time behavior of both population densities depends on the carrying
capacity period in a non-monotonic manner.
\subsection{Correlation functions}
\label{subsection:correlations}
The predator-prey pursuit and evasion waves characteristic of the stochastic spatial Lotka--Volterra model
are more prominent in systems with high reaction rates.
Therefore, we study the ensuing correlations for $\sigma = \mu = 0.5$, $\lambda = 0.1$, and leave
$K_- = 1$, $K_+ = 10$.
For these parameters the system resides deep in the predator extinction absorbing phase when $K = K_-$, and
in the active two-species coexistence phase for $K = K_+$.
The behavior of the system for environmental switching period $T_k = 10$ is exemplified by the simulation
snapshots depicted in Fig.~\ref{fig:correlation_low_period_movie}.
The predators are initially almost driven to extinction, but due to the switching environment the prey population
increases until it fills most of the lattice.
We observe that at $t = 23$ there remain only a few surviving predators which become localized sources for
spreading waves.
At $t = 28$, the prey may proliferate in the interior of the fronts as well, causing the population waves to
spread both outwards and inwards, until they eventually collide and interfere with each other as seen at
$t = 50$.
Starting from $t = 101$, the lattice exhibits a global density oscillation, and it becomes difficult to discern the
original locations of the wavefront sources.
\begin{figure}[b]
\subfloat[\label{subfig:correlations_low_period_a}auto-correlations]{%
\includegraphics[width=0.5\columnwidth]{ct10.png}%
}\hfill
\subfloat[\label{subfig:correlations_low_period_b}static correlations]{%
\includegraphics[width=0.5\columnwidth]{cx10.png}%
}
\caption{Long-time correlation functions computed for a system with the following parameters:
$L = 512$, $\sigma = \mu = 0.5$, $\lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 10$.
(a) Temporal auto-correlations computed for $t_0 = 1000$, with $t$ measured starting from $t_0$.
The inset shows the Fourier transform of the auto-correlation time series.
This data was averaged over $10,000$ ensembles and for $512$ lattice sites, giving an equivalent of a total of
$5,120,000$ independent ensembles.
(b) Static correlation functions taken at $t_0 = 1000$.
Distances $x$ are measured in units of the (dimensionless) lattice spacing; data averaged over $10,000$
distinct ensembles.}
\label{fig:correlations_low_period}
\end{figure}
\begin{figure*}[t]
\subfloat[\label{subfig:correlations_high_period_movie_a}$t=39$]{%
\includegraphics[width=0.5\columnwidth]{cormovies39.png}%
}\hfill
\subfloat[\label{subfig:correlations_high_period_movie_b}$t=56$]{%
\includegraphics[width=0.5\columnwidth]{cormovies56.png}%
}\hfill
\subfloat[\label{subfig:correlations_high_period_movie_c}$t=72$]{%
\includegraphics[width=0.5\columnwidth]{cormovies72.png}%
}\hfill
\subfloat[\label{subfig:correlations_high_period_movie_d}$t=264$]{%
\includegraphics[width=0.5\columnwidth]{cormovies264.png}%
}
\caption{Snapshots of a single run for a system with parameters $L = 256$, $\sigma = \mu = 0.5$,
$\lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 30$; time $t$ is measured in units of Monte Carlo steps.
The red and blue colored pixels indicate the presence of predators and prey, respectively, with the
brightness representing the local density, the pink colored pixels pertain to sites with both predator and prey
present, and the black pixels represent empty sites.
The system is initialized with $K(t=0) = K_- = 1$. The full movie can be viewed at \cite{note:movies}.}
\label{fig:correlations_high_period_movie}
\end{figure*}
\begin{figure*}
\subfloat[\label{subfig:correlations_high_period_a}auto-correlations]{%
\includegraphics[width=0.5\columnwidth]{ct30.png}%
}\hfill
\subfloat[\label{subfig:correlations_high_period_b}$ij=aa$]{%
\includegraphics[width=0.5\columnwidth]{aa.png}%
}\hfill
\subfloat[\label{subfig:correlations_high_period_c}$ij=ab$]{%
\includegraphics[width=0.5\columnwidth]{ab.png}%
}\hfill
\subfloat[\label{subfig:correlations_high_period_d}$ij=bb$]{%
\includegraphics[width=0.5\columnwidth]{bb.png}%
}
\caption{Long-time correlation functions computed for a system with the following parameters:
$L = 512$, $\sigma = \mu = 0.5$, $\lambda = 0.1$, $K_- = 1$, $K_+ = 10$, and $T_k = 30$.
(a) Temporal auto-correlations computed for $t_0 = 990$, with $t$ measured starting from $t_0$.
The inset shows the Fourier transform of the auto-correlation time series.
(b) Static predator-predator, (c) predator-prey, and (d) prey-prey correlation functions for different values of
$t_0$, normalized by $|C_{ij}(x=0)|$; distances $x$ are measured in units of the lattice spacing.
Data averaged over $10,000$ distinct ensembles.}
\label{fig:correlations_high_period}
\end{figure*}
The associated temporal auto- and static correlation functions are displayed in
Fig.~\ref{fig:correlations_low_period}.
The auto-correlation functions exhibit damped oscillations with a peak period $2 T_k = 20$, twice the
switching period of the carrying capacity.
This is due to the fact that the two-point correlation function contains a product of particle densities, and the
square of a sinusoidal function can be decomposed into sinusoidal components with doubled period.
Note that the auto-correlations decay to zero after approximately $40$ time steps.
The on-site population restrictions induce anti-correlations between individuals of the same species; the
cross-correlation function $C_{ab}$ becomes positive after some time has elapsed, indicating that surviving
predators follow the prey with some time delay.
The static correlation functions rapidly decay to zero, demonstrating that the spatial correlation lengths are
small, on the scale of a few lattice spacings.
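For concreteness, one standard estimator of a connected equal-time correlation function along a lattice direction, with periodic boundary conditions, can be sketched as below; whether this matches the exact normalization $|C_{ij}(x=0)|$ used in the figures is an assumption, and the function name is ours:

```python
import numpy as np

def static_correlation(n_i, n_j, x):
    """Connected correlation C_ij(x) = <n_i(r) n_j(r+x)> - <n_i><n_j>,
    averaged over sites r of a periodic 1d chain."""
    n_i = np.asarray(n_i, dtype=float)
    n_j = np.asarray(n_j, dtype=float)
    return (n_i * np.roll(n_j, -x)).mean() - n_i.mean() * n_j.mean()

# Occupations that exclude same-site predator-prey pairs are anti-correlated
# at x = 0 and positively correlated one site away:
a = np.array([1, 0, 1, 0, 1, 0, 1, 0])
b = 1 - a
print(static_correlation(a, b, 0))  # -> -0.25
print(static_correlation(a, b, 1))  # -> 0.25
```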
Figure~\ref{fig:correlations_high_period_movie} shows simulation snapshots for the same system parameters,
but with a larger switching period $T_k = 30$.
In this run, only one predator patch has survived by $t = 39$.
Subsequently it serves as a source for a spreading population wave that later interferes with itself owing to
the periodic boundary conditions of the lattice.
At $t = 56$ the wave spreads in both directions until $t = 72$, when the system returns to the low carrying
capacity regime and the prey in the interior of the front can no longer reproduce.
Even after a long time period at $t = 264$, there is only a single density oscillation center that is sourced by
the sole predator patch that had survived at $t = 39$.
In Fig.~\ref{subfig:correlations_high_period_a} we plot the corresponding auto-correlation functions, which
exhibit a much slower decay compared to Fig.~\ref{fig:correlations_low_period}(a) for $T_k = 10$.
This suggests that a carrying capacity period of $T_k=30$ causes a resonance effect, which indeed becomes
apparent in the simulation movies, as in this case the switching happens approximately when the waves travel
back to the location of the source.
The Fourier transform again confirms that the auto-correlation functions oscillate with a period $2 T_k$.
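The qualitative features of these temporal auto-correlations, anti-correlation at half the oscillation period and positive correlation at the full period, can be illustrated with a minimal estimator (the function name is ours, and the test signal is a stand-in for the measured densities):

```python
import numpy as np

def auto_correlation(rho, t):
    """Connected auto-correlation C(t) = <rho(t0) rho(t0+t)> - <rho(t0)><rho(t0+t)>,
    averaged over all available reference times t0."""
    rho = np.asarray(rho, dtype=float)
    x, y = (rho, rho) if t == 0 else (rho[:-t], rho[t:])
    return (x * y).mean() - x.mean() * y.mean()

# A signal of period 20 (mimicking 2*T_k for T_k = 10) is anti-correlated at
# half its period and positively correlated at the full period:
rho = np.cos(2 * np.pi * np.arange(4000) / 20)
print(auto_correlation(rho, 10) < 0, auto_correlation(rho, 20) > 0)  # -> True True
```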
Since the carrying capacity switching period is $T_k = 30$, and it is initialized with $K(t=0) = K_-$, the
behavior of the system at different $t_0$ values can be described as follows:
For $t_0 = 990$, the system has just switched from $K(t) = K_+$ to $K_-$; at $t_0 = 1000$, it still resides
at carrying capacity $K_-$; for $t_0 =1005$, the system has just switched from $K_-$ back to $K_+$; at
$t_0 = 1015$, the carrying capacity is still $K_+$.
The static correlation functions, shown in Figs.~\ref{fig:correlations_high_period}(b,c,d), exhibit similar
behavior for $t_0 = 990$ and $t_0 = 1015$, and for $t_0 = 1000$ and $t_0 = 1005$, respectively, which
suggests a common delay time for the correlations.
At $t_0 = 990$, the system is in the state with $K(t) = K_-$, while at $t_0 = 1015$, $K(t) =K_+$, about to
switch to $K_-$; and similarly at $t_0 = 1005$ and $t_0 = 1000$.
Compared with the system with faster switching period $T_k = 10$, the static correlations decay over a larger
distance, in agreement with the movies and snapshots which show wider wavefronts.
The predator-prey cross-correlation function $C_{ab}(x)$ displays maxima at positive values for
$t_0 = 1000$ and $t_0 = 1005$, when the carrying capacity is low and few individuals are present per site.
Conversely at $t_0 = 990$ and $t_0 = 1015$, when the population densities are large, the only positive peak
occurs at $x = 0$, due to the fact that predators tend to be on the same site as prey for large $K$.
For low carrying capacities, the predators cannot reside on the same locations as the prey, so instead they are
most likely to be in the close prey neighborhood.
\subsection{Asymmetric switching intervals}
\label{subsection:asymmetric}
Finally, we further investigate the properties of our system by applying an asymmetric square signal for the
switching carrying capacity, such that $K = K_-$ for $T_-$ time steps, and then $K = K_+$ for the subsequent
time interval of length $T_+$, where $T_- \neq T_+$.
The total switching period of the carrying capacity then is $T_k = T_- + T_+$.
Simulating such a stochastic lattice system reveals a period-doubling effect for an intermediate range of
$T_+ / T_-$ ratios, as shown in Fig.~\ref{fig:asymmetric}.
For either too small or too large time interval ratios, no period-doubling effect could be observed.
The origin of this intriguing period-doubling effect appears to be that prey particles are not able to reproduce
quickly enough while the system has attained the high carrying capacity $K_+$.
Hence, it takes the system two cycles of the oscillating environment for the prey density to reach its peak value.
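The asymmetric drive signal can be sketched analogously to the symmetric square wave; the function name and the phase convention $K(0) = K_-$ are our assumptions, chosen to match the initialization used earlier:

```python
def carrying_capacity_asym(t, T_minus=100, T_plus=10, K_minus=1.0, K_plus=10.0):
    """Asymmetric square wave: K_- for T_- steps, then K_+ for T_+ steps,
    repeating with total period T_k = T_- + T_+."""
    T_k = T_minus + T_plus
    return K_minus if (t % T_k) < T_minus else K_plus

print([carrying_capacity_asym(t) for t in (0, 99, 100, 109, 110)])
# -> [1.0, 1.0, 10.0, 10.0, 1.0]
```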
\begin{figure}
\subfloat[\label{subfig:asymmetric_a}density time-series]{%
\includegraphics[width=0.5\columnwidth]{asymmetric.png}%
}\hfill
\subfloat[\label{subfig:asymmetric_b}Fourier transform]{%
\includegraphics[width=0.5\columnwidth]{asymmetricfft.png}%
}
\caption{(a) Predator and prey densities averaged over $50$ realizations for a system with asymmetric switching
intervals: $L = 256$, $\sigma = \mu = \lambda = 0.1$, $K_- = 1$, $K_+ = 10$, $T_- = 100$,
$T_+ = 10 = 0.1 \, T_-$; the shaded gray areas are excluded by the switching carrying capacity $K(t)$.
(b) Fourier transforms of the population density time evolution in (a).}
\label{fig:asymmetric}
\end{figure}
\section{Conclusion and outlook}
\label{section:conclusion}
In this paper, we have investigated the paradigmatic Lotka--Volterra predator-prey model with a periodically
varying carrying capacity $K(t)$ that represents seasonally changing food resource availability for the prey
population.
The model was studied both by a mean-field analysis based on the deterministic rate equations, and through
detailed individual-based stochastic Monte Carlo simulations on a two-dimensional lattice with periodic boundary
conditions.
Both the mean-field and the stochastic lattice model exhibit characteristic periodic behavior induced by the
changing environment.
The rate equation solutions display a region in parameter space with period-doubling and period-quadrupling
features; such effects are naturally expected in driven nonlinear dynamical systems.
However, the period-doubling region in parameter space is not observed in the stochastic lattice model:
The internal stochastic noise evidently dominates and eliminates these nonlinear effects.
Yet we were able to induce period-doubling dynamics in the lattice model by utilizing an external periodic drive
signal with asymmetric switching intervals.
The phase space analysis demonstrated that, for parameters that lead to an ecologically stable system (which
does not evolve into an absorbing population extinction state), the phase space orbits are closed loops, whose
sizes decrease with growing predation rate $\lambda$, indicating that the population oscillation amplitudes
become reduced with enhanced predation efficiency.
A periodically varying environment allows the system to remain stable even for lower values of $\lambda$, as
compared to the corresponding system with fixed carrying capacity.
We find that the periodically varying environment induces oscillations with greater amplitudes, without hitting
the predator extinction absorbing state.
Furthermore, we observe that even for the same value of $\lambda$, the periodically varying systems display
larger oscillation amplitudes than the static system.
The finite system size extinction threshold at high predation rates is furthermore shifted to higher values of
$\lambda$ as well.
Thus, a periodically changing, externally driven environment leads to a richer ecology and promotes species
diversity.
We investigated the long-time behavior of the population densities by studying their averages over multiple
cycles of the periodic environment as a function of switching period $T_k$.
For the mean-field model, the prey density average does not depend on $T_k$, and is equal to its
$K$-independent stationary value.
In contrast, the mean predator density turns out equal to the stationary value of a static equivalent $K^*$
value given by Eq.~(\ref{eq:equivalent_K}) that for small periods simply reduces to the harmonic average
${\bar K}$ of $K_-$ and $K_+$.
Interestingly, for intermediate periods $T_k$ one encounters a non-monotonic crossover regime between
these two averages, with characteristic resonant features when $T_k$ is close to the intrinsic
Lotka--Volterra population oscillation period.
The stochastic lattice model reveals more complex behavior owing to renormalization of the equivalent
stationary carrying capacity values as well as the reaction rates.
The mean stationary prey density value is no longer $K$-independent, and it shows non-monotonic behavior
as a function of $T_k$.
Nevertheless, quantitatively these effects are small, implying that the harmonically averaged equivalent
stationary value ${\bar K}$ gives a good approximation for the long-time average of the prey density.
The predator density average matches the stationary equivalent ${\bar K}$ only for high values of $K_-$ and
$K_+$, as well as large switching periods $T_k$.
We evaluated the auto-correlation and static correlation functions for the stochastic lattice model specifically
for two different periods, $T_k = 10$ and $T_k = 30$.
The Fourier transformed auto-correlations exhibit peaks at $2 T_k$.
Due to the local on-site restrictions, the cross-correlation functions are negative at short distances.
For the smaller period $T_k = 10$, the auto-correlations decay to zero already after about a single oscillation
period, and the static correlations rapidly decay to zero as well, indicating a small spatial correlation length.
When the period is increased to $T_k = 30$, we observe a resonance effect causing the auto-correlations to
decay at a much slower rate.
As the simulation movies show, this resonant behavior is caused by the spherical travelling activity waves
pulsing back to the location of their sources.
The static correlation functions for $T_k=30$ exhibit a much slower decay as well, indicating markedly
longer-ranged correlations.
Plotting the static correlation functions at different times, we detect a time-delay effect, where the stationary
correlations require some time to respond to the changing environment.
Using our observations pertaining to the long-time behavior of the population densities in the mean-field
model, we obtained a closed-form solution that approximates the quasi-stationary state of the system for a
fast switching carrying capacity; more precisely, this solution holds if $T_k \sqrt{\lambda |db / dt|} \ll 1$.
We were able to explicitly demonstrate the regime of applicability for this approximation,
c.f.~Fig.~\ref{fig:error_plot}.
It should be possible to utilize this asymptotic technique to study generalizations to other periodically
varying variables, e.g., varying reaction rates, to shed light on the response of such systems to sudden
parametric variations.
The description of reaction-diffusion systems in terms of mean-field rate equations is of course useful, and
often provides an accurate qualitative description of real systems for some region in parameter space.
However, this paper demonstrates that when an ecological system is subjected to periodic variations in the
environment, a proper stochastic model may behave differently than its mean-field representation.
Fluctuations can lead to dramatic changes in the behavior of the system as the present results indicate.
One method of steering ecological communities towards a certain desirable behavior is to alter the
environment.
Therefore, developing successful control schemes for such systems requires taking the effects of fluctuations
into proper consideration.
A full understanding of the fundamental problem of species diversity, and beyond, constructing a
quantitative theory of biological evolution, hinge on unraveling the impact of environmental dynamics on
ecological systems.
\begin{acknowledgments}
We would like to thank Matthew Asker, Jason Czak, Llu\'is Hernandez-Navarro, Hana Mir, Mauro Mobilia,
Michel Pleimling, Alastair Rutlidge, James Stidham, and Louie Hong Yao for their helpful feedback on our work.
We are grateful to Rana Genedy for reading the manuscript draft and providing suggestions for improvement.
This research was supported by the U.S. National Science Foundation, Division of Mathematical Sciences,
under Award No. NSF DMS-2128587.
\end{acknowledgments}
\bibliographystyle{apsrev4-2}
\nocite{*}
\section{Motivation}
The information carried by a space-time metric is mainly of a causal nature. Indeed, Malament's theorem states that the causal relations between the points of a 4d manifold fully determine the metric, up to a conformal factor given at each point \cite{malament1977}. In quantum models of space-time, the role of the metric is usually played by more fundamental objects, like spins and intertwiners for spin-foams \cite{rovelli2014a, Ashtekar:2021kfp}. There, it is less evident how causality enters the scene: how is it encoded? The question is important for understanding more generally whether causality is a fundamental or an emergent property of space-time. Our investigation proceeds as follows:
\begin{enumerate}
\item [\ref{sec:discrete_causal_structure}] we recall what is meant by causality over a lorentzian manifold and we show how it survives over a simplicial complex;
\item [\ref{sec:dual_skeleton}] we show how the causal structure can be represented on the dual skeleton;
\item [\ref{sec:lorentzian_regge_calculus}] we show how the causal structure is determined by the dynamics of Regge calculus \cite{Regge:1961px};
\item [\ref{sec:sum_over_histories}] we elucidate the role of causality at the level of the path-integral over geometries;
\item [\ref{sec:BF}] we investigate the case of discrete BF theory;
\item [\ref{sec:EPRL}] we propose a causal version of the EPRL spin-foam model \cite{Engle:2007wy};
\item [\ref{sec:relation_to_earlier_proposals}] we discuss how the proposed causal EPRL model relates to previous proposals in the literature \cite{livine2003,Engle:2011un,engle2016a}.
\end{enumerate}
\section{Discrete causal structure}
\label{sec:discrete_causal_structure}
The geometry of a lorentzian manifold is fully encoded in the metric $g$. The signature of $g$ is either $(-,+,+,+)$ or $(+,-,-,-)$. It is generally held that this freedom is a pure convention, with no physical consequences. However, for the rest of our work, it is useful to leave this choice open and write the signature as $(\eta,-\eta,-\eta,-\eta)$ with $\eta \in \{-1,+1\}$.
The \textit{causal structure} of $g$ can be decomposed into two sub-notions that we call \textit{bare-causality}\footnote{The denomination is ours. Surprisingly enough, the literature does not seem to have already named this specific property. It usually calls it simply "causality", but we need another word to be accurate.} and \textit{time-orientability} (see \cite{hawking1973} for a standard reference).
\passage{Bare-causality}
We call \textit{bare-causality} the property that the tangent space at each point of the manifold is partitioned into three classes of tangent vectors: time-like, space-like and null. Formally, the classes are equivalence classes for the relation:
\begin{equation}
u\sim v \Longleftrightarrow \text{sign} \, g(u,u) = \text{sign} \, g(v,v),
\end{equation}
where $u$ and $v$ are tangent vectors. Specifically, we call \textit{time-like} the class of vectors for which
\begin{equation}
\label{eq:def_time-like}
\text{sign} \, g(u,u) = \eta
\end{equation}
and \textit{space-like} the one for which
\begin{equation}
\label{eq:def_space-like}
\text{sign} \, g(u,u) = - \eta.
\end{equation}
The definition of bare-causality is local, in the sense that it makes sense at each point of the manifold, but it can also be formulated as a global property. The "bare-causal structure" of a lorentzian space-time consists in the possibility of saying, given any two points, whether they are space-like, time-like or null separated. To be clear with the definitions, two points are time-like separated if they can be joined by a smooth curve whose tangent vectors are time-like all along. It is important that the curve is smooth, because otherwise one could turn around sharply and draw a time-like curve between any two points.
\passage{Time-orientability}
On top of bare-causality, one can define a notion of \textit{time-orientability}. At a local level, time-orientability is the property of time-like vectors to be divided into two classes: past and future. Formally, the two classes are equivalence classes of time-like vectors for the relation:
\begin{equation}
\label{eq:time-orientability}
u \sim v \Longleftrightarrow \text{sign} \, g(u,v) = \text{sign} \, g(v,v).
\end{equation}
In the case of bare-causality, the information contained in the metric $g$ alone makes it possible to distinguish the three classes (time-like, null and space-like) from one another without ambiguity. This is not the case for time-orientability: the two classes defined by \eqref{eq:time-orientability} are perfectly symmetric. Thus, the denomination "past" or "future" is arbitrary as long as no external arrow of time is imposed additionally. So we can pick a reference vector $u_0$ (the arrow of time), label the two classes by $\omega \in \{-1,+1\}$ and say that the time-like vector $v$ is in the class $\omega$ if
\begin{equation}
\label{eq:def_omega}
\text{sign} \, g(u_0,v) = \omega \, \eta.
\end{equation}
To fix the language, we declare that $\omega=+1$ is the future, so that $u_0$ is future-pointing.
As done previously, the local definition of time-orientation can be turned into a global one by requiring continuity across the classes at different points. Then, space-time is said to be time-orientable if it is possible to continuously define a division of time-like vectors in past and future classes. It is then possible to say that a point lies in the future of some other. As a consequence, one can define the \textit{causal future} $I^+(p)$ and the \textit{causal past} $I^-(p)$ of a point $p$. Again, the arrow of time, i.e. the labelling of "past" or "future", is conventional, e.g. attached to a specific choice of reference vector $u_0$, but it is not a geometric property of the metric.
Time-orientability is conceptually different from bare-causality. However, any lorentzian metric locally defines light-cones with both a bare-causal and a time-orientable structure. So the conceptual difference is often overlooked and the term "causality" is used indifferently for either notion or both. Yet, it is important to have the distinction clear in mind, especially when moving to quantum models, because we expect the lorentzian metric to make way for new objects, while the underlying physical notions may survive.
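To make the two notions concrete, here is a minimal sketch (ours, not part of the paper's formalism) implementing the classifications \eqref{eq:def_time-like}, \eqref{eq:def_space-like} and \eqref{eq:def_omega} for an arbitrary choice of $\eta$:

```python
# Minimal sketch (ours): causal classification of tangent vectors in
# Minkowski space with signature (eta, -eta, -eta, -eta), plus the
# time-orientation class omega relative to a reference vector u0
# (the "arrow of time").

def g(u, v, eta=1):
    """Minkowski scalar product with signature (eta, -eta, -eta, -eta)."""
    return eta * (u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3])

def causal_class(u, eta=1):
    """'time-like' iff sign g(u,u) = eta, 'space-like' iff -eta."""
    s = g(u, u, eta)
    if s == 0:
        return "null"
    return "time-like" if (1 if s > 0 else -1) == eta else "space-like"

def omega(u0, v, eta=1):
    """Time-orientation class of a time-like v: sign g(u0,v) = omega*eta,
    so omega = +1 means v is future-pointing (same class as u0)."""
    return (1 if g(u0, v, eta) > 0 else -1) * eta

u0 = (1, 0, 0, 0)                      # future-pointing reference vector
print(causal_class((2, 1, 0, 0)))      # time-like
print(omega(u0, (-2, 1, 0, 0)))        # -1: past-pointing
```

Note that the classification is independent of the convention $\eta$, as it should be: only the relation between the sign of $g(u,u)$ and $\eta$ matters.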
\passage{Discrete bare-causality}
At a discrete level, consider a lorentzian 4-simplicial complex $\Delta$, i.e. a set of minkowskian 4-simplices nicely glued together \cite{Regge:1961px}\footnote{We leave the rigorous definition of "nicely glued together" unspecified here as it does not affect directly our investigation. See \cite{Dona:2020yao} for details.}. It is again possible to define the notions of bare-causality and time-orientability, both at a local and at a global level.
Each 4-simplex $\sigma$ comes with an embedding in Minkowski space-time. It is bounded by 5 tetrahedra, each of them having a unique normal $4$-vector $N$, of unit norm and directed outward. A priori, $N$ can be time-like, space-like or null. Two nearby 4-simplices share exactly one common tetrahedron $T$. The two 4-simplices are said to be time-like separated if $N$ (computed with respect to either of the two $4$-simplices) is time-like. A similar definition holds for space-like and null separation.
Unfortunately, this local notion of bare-causality fails to extend straightforwardly to distant 4-simplices. Indeed, one could be tempted to say that two distant $4$-simplices are time-like separated if there exists a sequence of time-like separated nearby tetrahedra in-between. However, this definition fails because, for instance, two space-like separated nearby tetrahedra could be connected by a common time-like separated nearby tetrahedron. In the previous continuous case, the smoothness of the time-like curve prevented such a pathology, but this is no longer possible in the discrete case. The difficulty can be circumvented by first introducing a local notion of time-orientability.
\passage{Discrete time-orientability}
A tetrahedron is said to be space-like if it is embedded within a space-like hyperplane. In this case, its $4$-normal $N$ is time-like. Time-orientability is the property that the space-like boundary tetrahedra of a 4-simplex can be divided into two classes, by the following relation
\begin{equation}
T_1 \sim T_2 \Longleftrightarrow \text{sign}(N_1 \cdot N_2) = \text{sign}(N_1 \cdot N_1),
\end{equation}
where the dot denotes the minkowskian scalar product. A choice of time-orientation consists in saying which class is called past or future (relatively to the 4-simplex).
At a global level, we say that $\Delta$ is time-orientable if there exists a consistent choice of time-orientation for each 4-simplex, so that each space-like tetrahedron has an opposite time-orientation relative to each of the two 4-simplices that bound it: if a tetrahedron is in the future of one 4-simplex, it should be in the past of the other.
Given two 4-simplices, $\sigma_1$ and $\sigma_2$, sharing a space-like tetrahedron $T$, we say that $\sigma_2$ is in the future of $\sigma_1$ if $T$ is in the future of $\sigma_1$ (hence in the past of $\sigma_2$). This definition allows us to define straightforwardly a notion of causal future and causal past of a 4-simplex: $\sigma_2$ is in the future of $\sigma_1$ if there exists a future-oriented chain of 4-simplices in-between. This definition encompasses the notion of time-like separation for distant 4-simplices that was initially looked for. Thus, both bare-causality and time-orientability are defined locally and globally in the discrete setting.
\section{Causality on the dual skeleton}
\label{sec:dual_skeleton}
The previously defined discrete causal structure can be easily represented on $\Delta^*_1$, the dual 1-skeleton of $\Delta$. $\Delta^*_1$ is built from $\Delta$ by replacing each $4$-simplex by a vertex, each tetrahedron by an edge and forgetting about triangles, segments and points of $\Delta$.
Bare-causality discriminates between space-like and time-like edges\footnote{We deliberately ignore the null case, which does not seem to shed much light on our investigation.}, while time-orientability provides an orientation to the time-like edges. Overall, causality is then represented by
\begin{enumerate}[itemsep=0 mm]
\item An arrow from past to future on time-like edges.
\item No arrow on space-like edges.
\end{enumerate}
In the following, we assume that all tetrahedra are space-like, which implies that all $N$ are time-like. Dually, it means that all the edges of $\Delta_1^*$ carry an arrow. This simplifying assumption is made in many of the formulations of spin-foams. It is important to note that this condition automatically implements some implicit assumptions about the fundamental causal structure of space-time. Indeed, this assumption erases any local notion of bare-causality: all nearby 4-simplices are time-like separated. Thus, at the most local level, the primacy is granted to time-orientability. The notion of bare-causality only emerges at a more global level as follows: given two distant vertices, if one is not in the past of the other, then they are said to be space-like separated.
\passage{Dual causal set}
In mathematical terms, $\Delta_1^*$ is a 5-valent \textit{simple oriented graph}\footnote{A \textit{directed graph} is given by a set of vertices and a set of ordered pairs of vertices (arrows). It is said \textit{simple} if there are no arrows from a vertex to itself. It is said \textit{oriented} if there is at most one arrow between any two vertices.}. Its \textit{transitive closure} defines a \textit{poset} (partially ordered set). The elements of the poset are the vertices of $\Delta_1^*$ and the partial order $\leq$ comes from a unique extension of the set of arrows with the following properties:
\begin{enumerate}[itemsep=0mm]
\item Reflexivity: $v \leq v$ (by convention).
\item Anti-symmetry: $v_1 \leq v_2$ and $v_2 \leq v_1$ imply $v_1 = v_2$.
\item Transitivity: $v_1 \leq v_2$ and $v_2 \leq v_3$ imply $v_1 \leq v_3$.
\end{enumerate}
In most reasonable cases, the poset of $\Delta_1^*$ is \textit{locally finite}, meaning that for any pair of vertices $(v_1,v_2)$, the so-called \textit{causal diamond} $\left\{ v \mid v_1 \leq v \leq v_2 \right\}$ is a finite set. Such a poset is a \textit{causal set}, as defined originally in \cite{bombelli1987}.
We have shown, without much surprise, that the discretisation of a lorentzian manifold naturally carries a causal set structure. Causal set theory takes the causal set structure as a starting point. Then, the question naturally arises whether it is possible to reconstruct $\Delta^*_1$ from its associated causal set only. Given a causal set, one can derive a notion of neighbourhood by declaring that two vertices $x$ and $y$, such that $x \leq y$, are next to each other if there is no $z \neq x,y$ such that $x \leq z \leq y$. In other words, the neighbourhood relations are obtained by a \textit{transitive reduction} of the causal set, i.e. a graph with the fewest possible arrows and the same "reachability relations" as the causal set. Interestingly, for a finite directed acyclic graph\footnote{A directed acyclic graph is a directed graph with no directed cycles, which means, in causal language, no closed time-like curves.}, such a transitive reduction is unique. However, \textit{the transitive reduction of the transitive closure is not the identity}. Thus, it is not possible to recover $\Delta^*_1$ from the causal set by transitive reduction. In other words, the notions of neighbourhood for causal set theory and for discrete lorentzian geometry, as described over $\Delta^*_1$, are not the same.
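The difference between the two notions of neighbourhood can be seen on a three-vertex toy graph (ours, not from the paper): if the skeleton has an arrow $1 \to 3$ alongside the path $1 \to 2 \to 3$, that arrow is redundant for reachability and is lost under transitive reduction of the transitive closure:

```python
# Toy example (ours): the transitive reduction of the transitive closure
# of a directed acyclic graph need not give back the original graph.
# The arrow 1 -> 3 below is redundant for reachability (1 -> 2 -> 3),
# so it is lost: the skeleton cannot be recovered from its causal set.

def closure(arrows, vertices):
    """Strict reachability relation (no reflexive pairs added)."""
    reach = {v: set() for v in vertices}
    for a, b in arrows:
        reach[a].add(b)
    changed = True
    while changed:                    # saturate reachability
        changed = False
        for a in vertices:
            for b in list(reach[a]):
                new = reach[b] - reach[a]
                if new:
                    reach[a] |= new
                    changed = True
    return {(a, b) for a in vertices for b in reach[a]}

def reduction(arrows, vertices):
    """Transitive reduction: drop every arrow implied by a longer path."""
    cl = closure(arrows, vertices)
    reach = {v: {b for a, b in cl if a == v} for v in vertices}
    return {(a, b) for a, b in cl
            if not any(z in reach[a] and b in reach[z]
                       for z in vertices if z not in (a, b))}

skeleton = {(1, 2), (2, 3), (1, 3)}   # 1 -> 3 coexists with 1 -> 2 -> 3
print(sorted(reduction(skeleton, [1, 2, 3])))   # [(1, 2), (2, 3)]
```

The reflexive pairs $v \leq v$ are added "by convention" in the poset and are irrelevant here, so the sketch works with strict reachability only.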
Similarly, the conformal factor, which is an important piece of information of the metric, can arise in several different ways. In causal set theory, it emerges by counting the number of vertices within a given causal diamond \cite{bombelli1987,Surya:2019ndm}. In discrete lorentzian geometry, it can be given by the lorentzian volume of the 4-simplices, which requires additional input, not deducible from the causal set alone. For instance, the additional information can be provided by coloring each vertex with a real number (the 4-simplex volume), or by coloring each edge with the volume of the corresponding tetrahedron, or by introducing faces and coloring them with the area of the corresponding triangles. The latter option is of course relevant for spin-foam models as discussed also in \cite{livine2003,Cortes:2014oka,Wieland:2014nka,wieland2015,Immirzi:2016nnz}.
To work algebraically with the causal structure, it will soon appear convenient to express the orientation of the edges as follows. Given a vertex $v$, we define the orientation of an edge\footnote{We denote indifferently $e \in v$ or $v \in e$ when the vertex $v$ is an endpoint of the edge $e$.} $e \in v$ with respect to $v$ as
\begin{equation}
\varepsilon_v(e) \overset{\text{def}}= \left\{
\begin{aligned}
- 1 \quad &\text{if $e$ is incoming} \\
1 \quad &\text{if $e$ is outgoing}.
\end{aligned}
\right.
\end{equation}
This convention is similar to the earlier choice in equation \eqref{eq:def_omega} to call $\omega=1$ the future.
We define a \textit{causal structure on the 1-skeleton} as an assignment of an orientation $\varepsilon_v(e)$ to each pair $(v,e)$ such that $e\in v$, under the constraint
\begin{equation}
\label{eq:gluing_edges}
\varepsilon_{v_1}(e) = - \varepsilon_{v_2}(e),
\end{equation}
where $v_1$ and $v_2$ are the two end-points of $e$. The latter condition expresses the fact that an incoming edge, with respect to one vertex, is outgoing with respect to the other.
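As a minimal data-structure sketch (ours, not part of the paper's formalism), one can store each oriented edge as an ordered pair of vertices; the gluing condition \eqref{eq:gluing_edges} then holds by construction:

```python
# Minimal sketch (ours): storing each oriented (time-like) edge of the
# dual 1-skeleton as an ordered pair (past vertex, future vertex). The
# orientation eps_v(e) and the gluing condition
# eps_{v1}(e) = -eps_{v2}(e) then hold automatically.

def eps(v, edge):
    past, future = edge
    if v == past:
        return +1          # edge is outgoing at v
    if v == future:
        return -1          # edge is incoming at v
    raise ValueError("vertex is not an endpoint of the edge")

e = ("v1", "v2")           # an arrow from v1 (past) to v2 (future)
assert eps("v1", e) == -eps("v2", e)   # gluing condition, automatic
```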
\passage{Causal wedges}
We have seen that the causality of $\Delta$ can be read on the edges of $\Delta^*_1$. Now we are going to show that it can also be read equivalently on the wedges of the dual 2-skeleton $\Delta^*_2$.
To proceed, let's go back to the 4-simplicial complex $\Delta$. A pair $w=(t,\sigma)$ such that $t \in \sigma$ is called a \textit{wedge}. Given a wedge $w$, there exist exactly two tetrahedra $T_1,T_2 \in \sigma$ that share $t$, to which are associated the normals $N_1$ and $N_2$. The \textit{dihedral angle} of $w$ is defined as
\begin{equation}
\label{eq:dihedral_angle}
\theta_w \overset{\text{def}}= \text{sign}(N_1 \cdot N_2 ) \cosh^{-1}(|N_1 \cdot N_2 |).
\end{equation}
This definition is a natural extension of the notion of dihedral angle from euclidean to minkowskian geometry. Its absolute value $|\theta_w|$ depends only on the absolute value of the scalar product $|N_1 \cdot N_2 |$ of the normals. On the other hand, its sign depends on the relative time-orientation of the two normals:
\begin{equation}
\text{sign}(\theta_w)= \left\{
\begin{aligned}
+\eta \quad &\text{if $N_1$ and $N_2$ are co-chronal} \\
-\eta \quad &\text{if $N_1$ and $N_2$ are anti-chronal}.
\end{aligned}
\right.
\end{equation}
When the normals are co-chronal (resp. anti-chronal), the wedge is said to be \textit{thick} (resp. \textit{thin}). At the level of the wedges, causality shows up as follows: a thick wedge encloses a time-like region, while a thin wedge encloses a space-like region (see fig. \ref{fig:wedge}).
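The definition \eqref{eq:dihedral_angle} and the thick/thin classification can be sketched numerically. The following fragment is ours; it assumes the $\eta=+1$ signature convention and unit time-like normals:

```python
# Sketch (ours, assuming the eta = +1 signature and unit time-like
# normals): the lorentzian dihedral angle
# theta_w = sign(N1.N2) * arccosh(|N1.N2|), and the thick/thin
# classification of the wedge by the relative time-orientation.

import math

def mink(u, v):
    """Minkowski scalar product, signature (+,-,-,-)."""
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

def dihedral(N1, N2):
    s = mink(N1, N2)                 # |s| >= 1 for unit time-like normals
    return math.copysign(math.acosh(abs(s)), s)

def wedge_kind(N1, N2):
    return "thick" if mink(N1, N2) > 0 else "thin"   # co- vs anti-chronal

# two co-chronal boosted normals: theta is the rapidity difference
N1 = (math.cosh(1.0), math.sinh(1.0), 0, 0)
N2 = (math.cosh(0.5), math.sinh(0.5), 0, 0)
print(wedge_kind(N1, N2), round(dihedral(N1, N2), 6))   # thick 0.5
```

Flipping the time component of $N_2$ makes the pair anti-chronal: the wedge becomes thin and the angle picks up a minus sign, as in the classification above.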
\begin{figure}[h]
\centering
\begin{overpic}[width=0.6 \columnwidth]{gfx/space-time_thick_wedge.png}
\put (55,85) {$N_2$}
\put (35,85) {$N_1$}
\end{overpic}
\\
\medskip
\begin{overpic}[width=0.6 \columnwidth]{gfx/space-time_thin_wedge.png}
\put (55,85) {$N_1$}
\put (55,10) {$N_2$}
\end{overpic}
\caption{Up: thick wedge, co-chronal normals.\\ Down: thin wedge, anti-chronal normals.}
\label{fig:wedge}
\end{figure}
The thin/thick distinction provides an orientation of the wedges. However this orientation does not extend to triangles because several wedges of the same triangle may not have the same orientation.
This notion translates easily to the dual complex. A pair of a face and a vertex $(f,v)$ with $f\in v$ defines a (dual) wedge on the 2-skeleton $\Delta^*_2$. There exist exactly two edges $e_1$ and $e_2$ such that $e_1, e_2 \in f$ and $e_1, e_2 \in v$. The wedge is \textit{thick} if $e_1$ and $e_2$ are both incoming or both outgoing. It is \textit{thin} otherwise.
Algebraically, the wedge orientation can be defined as
\begin{equation}
\varepsilon_v(f) \overset{\text{def}}= \left\{
\begin{aligned}
+\eta \quad &\text{if thick} \\
-\eta \quad &\text{if thin}.
\end{aligned}
\right.
\end{equation}
It is then easy to show that
\begin{equation}
\label{eq:epsilon_f_from_e}
\varepsilon_v(f) = \eta \, \varepsilon_v(e_1) \varepsilon_v(e_2).
\end{equation}
As we have presented it, the wedge orientation is a byproduct of the edge orientation. However, one can wonder whether it is possible to go the other way around and compute the $\varepsilon_v(e)$ as functions of the $\varepsilon_v(f)$, i.e. to invert equation \eqref{eq:epsilon_f_from_e}. The short answer is \textit{no}, but not much information is actually missing to do this inversion.
Around the same vertex $v$, equation \eqref{eq:epsilon_f_from_e} defines a system of 10 equations (one per face) with 5 unknowns (one per edge), so we may fear it to be over-constrained. However, the rank of the system \eqref{eq:epsilon_f_from_e} is only 4. Indeed, given the orientation $\varepsilon_v(f)$ for any 4 wedges \textit{that do not form a cycle}, one can deduce the orientation of the other 6. Rather than a formal definition, the notion of "forming a cycle" is best understood through a few examples on the links of the vertex graph\footnote{Given a vertex, the vertex graph associates a node to each edge and a link to each wedge in-between. In graph theory, the word "edge" is usually used instead of "link". But we stick to a wide-spread convention in loop quantum gravity (see \cite{rovelli2014a}) where "edge" is reserved to the bulk of 2-complexes and "link" is used for the boundary.} (see figure \ref{fig:pentagram_in_pentagon}).
\begin{figure}[h]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=0.4 \columnwidth]{gfx/pentagram_in_pentagon_cycle_1.png}
\includegraphics[width=0.4 \columnwidth]{gfx/pentagram_in_pentagon_cycle_2.png}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\includegraphics[width=0.4 \columnwidth]{gfx/pentagram_in_pentagon_nocycle_1.png}
\includegraphics[width=0.4 \columnwidth]{gfx/pentagram_in_pentagon_nocycle_2.png}
\end{subfigure}
\caption{Up: sets of 4 wedges which form cycles. Down: sets of 4 wedges which do \textit{not} form cycles.}
\label{fig:pentagram_in_pentagon}
\end{figure}
Then it is easy to show that the product of orientations along any cycle of wedges is
\begin{equation}\label{eq:cycle}
\prod_{f \in \text{cycle}} \varepsilon_v(f) = \eta^{\# F},
\end{equation}
where $\# F$ is the number of faces $f$ around the cycle. Since any set of 5 wedges around $v$ contains a cycle, 4 wedge orientations are indeed sufficient to fix them all. Hence the system \eqref{eq:epsilon_f_from_e} is actually under-constrained, by exactly one dimension.
Given a vertex $v$, denote the surrounding edges by $e_i$, with $i \in \{1,...,5\}$, and accordingly the surrounding faces by $f_{ij}$. To make the system invertible, let us add one independent equation, by defining the orientation of the vertex $v$ as:
\begin{equation}
\label{eq:definition_e_v}
\varepsilon_v \overset{\text{def}}= \prod_j \varepsilon_v(e_j) .
\end{equation}
Then, the set of equations \eqref{eq:epsilon_f_from_e} augmented with \eqref{eq:definition_e_v}, with unknowns $\varepsilon_v(e)$, is invertible and one can show that
\begin{equation}
\varepsilon_v(e_i) = \varepsilon_v \prod_{k \neq i} \varepsilon_v(f_{ik}).
\end{equation}
So we see that we can recover the orientation of the edges from the orientation of the wedges, up to a vertex orientation.
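The inversion can be checked by brute force. The sketch below (ours) enumerates all $2^5$ edge orientations around a vertex, builds the 10 wedge orientations from \eqref{eq:epsilon_f_from_e}, and recovers the edges from the wedges and the vertex orientation \eqref{eq:definition_e_v}, for both values of $\eta$:

```python
# Brute-force check (ours): for all 2^5 edge orientations and both
# values of eta, build the wedge orientations
# eps_v(f_ij) = eta * eps_v(e_i) * eps_v(e_j), then recover the edges
# via eps_v(e_i) = eps_v * prod_{k != i} eps_v(f_ik), with
# eps_v = prod_j eps_v(e_j). A 3-cycle constraint is checked as well.

from itertools import combinations, product
from math import prod

def wedges(edges, eta):
    return {(i, j): eta * edges[i] * edges[j]
            for i, j in combinations(range(5), 2)}

def recover(f, eps_v):
    w = lambda i, k: f[(min(i, k), max(i, k))]
    return [eps_v * prod(w(i, k) for k in range(5) if k != i)
            for i in range(5)]

for eta in (+1, -1):
    for edges in product((+1, -1), repeat=5):
        f = wedges(list(edges), eta)
        assert recover(f, prod(edges)) == list(edges)
        # cycle constraint: product over a 3-cycle of faces is eta^3
        assert f[(0, 1)] * f[(1, 2)] * f[(0, 2)] == eta**3
print("inversion verified for all orientations and both signatures")
```

The check works because $\prod_{k \neq i} \varepsilon_v(f_{ik}) = \eta^4 \, \varepsilon_v(e_i)^4 \prod_{k\neq i}\varepsilon_v(e_k) = \varepsilon_v \, \varepsilon_v(e_i)$, and signs square to one.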
Let's now consider a skeleton with many vertices. Can we deduce the orientation of the edges of the 1-skeleton from the orientation on the wedges of the 2-skeleton? From the previous analysis with a single vertex, we know that it will be possible to invert the system of equations if one considers in addition one orientation $\varepsilon_v$ per vertex. However, the set of $\varepsilon_v$ is itself constrained, because of the gluing condition \eqref{eq:gluing_edges}, which now reads
\begin{equation}
\varepsilon_{v_1} \prod_{f | e \in f} \varepsilon_{v_1}(f) = - \varepsilon_{v_2} \prod_{f | e \in f} \varepsilon_{v_2}(f)
\end{equation}
This constraint eliminates almost all the degrees of freedom introduced by $\varepsilon_v$, so that in the end only the freedom to fix the orientation of a single vertex in the whole skeleton remains. The orientation of all others can be deduced from it and the $\varepsilon_v(f)$. Indeed, assume that you have a 2-skeleton where all the $\varepsilon_v(f)$ have been fixed (satisfying the constraint along cycles). Then, if you fix only one $\varepsilon_v$, the orientations of the edges around $v$ are fixed, and this "orientation-fixing" propagates everywhere else, so the full set of $\varepsilon_v$ is ultimately fixed. Reversing one $\varepsilon_v$ reverses the entire skeleton, which corresponds to the time-reversal symmetry.
We have previously shown that the causal structure of discrete general relativity can be encoded in the dual 1-skeleton with oriented edges. Now we have just seen that the \textit{causal structure of a 2-skeleton} can be described as the assignment of $\varepsilon_v(f)$ to each wedge under the cycle constraint \eqref{eq:cycle} and a global orientation $\varepsilon$, which can be regarded as a global arrow of time.
\section{Lorentzian Regge calculus}
\label{sec:lorentzian_regge_calculus}
\passage{Lorentzian Regge action}
So far, we only focused on the kinematical aspects of causality. Now, we will see how causality shows up in the dynamics \cite{Regge:1961px}. Extending the formulae of \cite{barrett1993} to the $4$-dimensional case, the lorentzian Regge action is a sum over the triangles $t$:
\begin{equation}
S_R \overset{\text{def}}= \sum_t A_t \delta_t,
\end{equation}
with $A_t$ the area of the triangle $t$, and $\delta_t$ the deficit angle defined as a sum over the 4-simplices $\sigma$ surrounding $t$:
\begin{equation}
\delta_t \overset{\text{def}}= \sum_{\sigma | t \in \sigma} \theta_{t\sigma},
\end{equation}
with the dihedral angle defined by equation \eqref{eq:dihedral_angle}. The order of the two sums can be exchanged:
\begin{equation}
S_R = \sum_\sigma \sum_{t | t \in \sigma} A_t \theta_{t\sigma}.
\end{equation}
To derive the equations of motion by variational calculus, one must specify the independent variables of which $S_R$ is a function. In the original Regge calculus, it is shown that if the action is considered as a function of the lengths $l_s$ of the segments $s$, the resulting equations of motion reduce to the Einstein equations in the continuous limit. However, this is not the only possible choice.
\passage{First-order Regge calculus}
Barrett has proposed a formulation where the independent variables are both the lengths $l_s$ and the angles $\theta_{t\sigma}$ \cite{barrett1994}. This choice of variables mimics the Palatini formulation, which takes the metric and the torsion-less connection as primary fields of the Einstein-Hilbert action. Compared to the original Regge calculus, the introduction of $\theta_{t\sigma}$ increases the total number of variables. In order to recover the equations of motion, it is then necessary to add constraints to the action, which is done with a Lagrange multiplier $\mu_\sigma$ for each 4-simplex $\sigma$. One obtains
\begin{equation}
S[l_s,\theta_{t\sigma}, \mu_\sigma] = \sum_\sigma \sum_{t | t \in \sigma} A_t(l_s) \theta_{t\sigma} + \sum_\sigma \mu_\sigma \det \gamma_\sigma
\end{equation}
where $\gamma_\sigma$ is the $5 \times 5$ matrix whose elements are the minkowskian scalar products between the normals to the boundary tetrahedra:
\begin{equation}
[\gamma_\sigma]_{ij} = N_i \cdot N_j = \text{sign}(\theta_{ij}) \cosh \theta_{ij},
\end{equation}
where $N_i$ is the (unit outward) normal to the $i$th boundary tetrahedron of $\sigma$ and $\theta_{ij}$ is the dihedral angle between the tetrahedra $i$ and $j$. The Lagrange multiplier imposes the constraint
\begin{equation}
\det \gamma_\sigma = 0
\end{equation}
which implements the closure of the normals, i.e.
\begin{equation}
\sum_i V_i N_i = 0,
\end{equation}
with $V_i$ the volume of the $i$th tetrahedron.
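The geometric origin of the constraint can be illustrated numerically: the Gram matrix of any 5 vectors of 4d Minkowski space has rank at most 4, so its determinant vanishes automatically whenever the normals come from an actual embedding. The sketch below (ours, with randomly generated would-be normals) shows this; in first-order Regge calculus, where the angles are independent variables, the constraint enforces precisely this realisability:

```python
# Numerical illustration (ours, random would-be normals): the Gram
# matrix of any 5 vectors of 4d Minkowski space has rank at most 4, so
# det gamma_sigma = 0 holds automatically for realisable geometries.

import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, eta = +1
N = rng.normal(size=(5, 4))              # 5 would-be normals in 4d
gamma = N @ eta @ N.T                    # [gamma]_ij = N_i . N_j
print(abs(np.linalg.det(gamma)) < 1e-10) # True: rank(gamma) <= 4
```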
\passage{Causal structure from dynamics}
This choice of variables for the action makes it very clear how causality plays its role in the dynamics. Indeed, the causal structure shows up in the sign of both the lengths $l_s$ and the angles $\theta_{t\sigma}$. However, assuming all tetrahedra to be space-like, the lengths $l_s$, and so the areas $A_t$ and volumes $V_T$, are all positive. So causality appears only in the orientation of the wedges, i.e. the sign of $\theta_{t\sigma}$.
Not every assignment of signs to the $\theta_{t\sigma}$ defines an allowed causal structure: the signs must satisfy the cycle constraint \eqref{eq:cycle}. Interestingly, this condition shows up as a corollary of the equations of motion. Indeed, the constraint $\det \gamma_\sigma = 0$ implies the existence of a vector $v$ such that $\sum_i \gamma_{ij} v_i = 0$. Then the equation of motion obtained by varying $\theta_{ij}$ yields
\begin{equation}
A_t = \kappa_\sigma \mu_\sigma v_i v_j \sinh \theta_{ij}
\end{equation}
for some $\kappa_\sigma \in \mathbb{R}$ (see \cite{barrett1994}). Since $A_t > 0$,
\begin{equation}
\text{sign} \, \theta_{ij} = \text{sign}(\kappa_\sigma \mu_\sigma) \, \text{sign} \, v_i \, \text{sign} \, v_j
\end{equation}
which implies the constraints \eqref{eq:cycle}. This shows that the equations of motion impose the structure of the wedges to be causal.
\section{Causal path integral}
\label{sec:sum_over_histories}
So far, the analysis was purely classical, although discrete. When going to the quantum regime of gravity, it is reasonable to expect that the generic state of the metric field is a superposition of classical configurations. In particular, there might not be a definite causal structure. Several causal histories may interfere and thus generate, in principle, observable effects. A theory of quantum gravity should be able to predict these effects through the computation of transition amplitudes between different states of space.
Let's clarify the main ideas by proceeding heuristically, although a precise mathematical formulation may be more difficult to achieve. The standard procedure starts by foliating the space-time manifold into constant-time slices: $\mathcal{M} \cong \Sigma \times \mathbb{R}$. The classical states are 3-metrics $h$ defined over $\Sigma$. At the quantum level, the 3-metric is an operator $\hat h$, with eigenstates $\ket{h}$ whose eigenvalues are the classical 3-metrics $h$. The sum-over-histories approach to quantum gravity proposes to compute the transition amplitude between the state $\ket{h_0}$ at time $t_0$ and the state $\ket{h_1}$ at time $t_1$ as a path integral:
\begin{equation}\label{eq:standard_transition}
\braket{h_1}{h_0} = \int [\mathcal{D}g] \, e^{\frac{i}{\hbar} S[g]}.
\end{equation}
$[\mathcal{D}g]$ is a measure on the set of 4-metrics $g$ over $\Sigma \times [t_0,t_1]$, such that the restriction of $g$ to the slice $t=t_0$ (resp. $t=t_1$) is $h_0$ (resp. $h_1$). $S[g]$ is the Einstein-Hilbert action evaluated on such a metric.
\passage{General boundary formulation}
The previous and standard formulation is not ideal because it relies upon a slicing of space-time into constant-time leaves, which may already fix too much structure for a general treatment of causality. Better suited for our purpose is the general boundary formulation developed by Oeckl in \cite{oeckl2008}.
Consider a region of space-time $M$ with boundary $\Sigma$. A 4-metric $g$ on $M$ induces a 3-metric $h$ on $\Sigma$. The crux of a quantum theory of space-time is the computation of the \textit{metric propagator}:
\begin{equation}\label{eq:metric_propagator}
Z_M(h) = \int [\mathcal{D}g] \, e^{\frac{i}{\hbar} S[g] }
\end{equation}
where the integral is carried over all the 4-metrics $g$ bounded by $h$. The standard formulation (equation \eqref{eq:standard_transition}) is recovered when $\Sigma$ is made of two disconnected components (past and future).
\passage{Regge path integral}
As a way towards the actual computation of the metric propagator, one can discretise the previous formula. Consider a 4-simplicial complex $\Delta$. Its boundary is a 3-simplicial complex $\Sigma$. Working with the Regge action, the metric propagator is a function of the length of the segments of $\Sigma$, and it reads:
\begin{equation}\label{eq:regge_propagator}
\mathcal{A}_\Delta(l_\Sigma) = \int [\dd l_s] \, e^{\frac{i}{\hbar} S_R[l_s]}.
\end{equation}
The integral is done over the lengths $l_s$ of all the segments $s$ in the bulk of $\Delta$. Note that each integral could also be replaced by a sum with a cut-off, in order to ensure a finite value for the propagator, but this does not seem useful in our quest for causality.
Using the first-order Regge calculus, the propagator reads:
\begin{multline}
\mathcal{A}_\Delta(l_\Sigma,\theta_\Sigma,\mu_\Sigma) = \int [\dd l_s] [\dd \theta_{t\sigma}] [\dd \mu_\sigma] \\
\times \prod_\sigma e^{\frac{i}{\hbar} \left( \sum_{t | t \in \sigma} A_t \theta_{t\sigma} + \mu_\sigma \det \gamma_\sigma \right) }\,.
\end{multline}
The integration over $\mu_\sigma$ can be formally carried out, which gives a $\delta$-function that fixes the constraint:
\begin{equation}\label{eq:first_order_regge_propagator}
\mathcal{A}_\Delta(l_\Sigma,\theta_\Sigma) = \int [\dd l_s] [\dd \theta_{t\sigma}] \prod_\sigma \delta(\det \gamma_\sigma) \, e^{\frac{i}{\hbar} \sum_{t | t \in \sigma} A_t \theta_{t\sigma} }\,.
\end{equation}
\passage{Causal structure of the boundary}
A causal structure on $\Delta$ induces a causal structure on its boundary $\Sigma$. It consists in saying, for each tetrahedron of the boundary, whether it shall be regarded as future or past. It can be represented on the boundary of the dual 2-skeleton $\Delta^*_2$, which is a 4-valent graph. The induced causal structure consists in assigning a sign to each node, depending on whether the edge attached to it points into or out of the bulk. Conventionally, we take the sign to be positive for an outward edge (future tetrahedron) and negative for an inward edge (past tetrahedron). An example is shown in figure \ref{fig:boundary_causality}. It is important to notice that fixing a causal structure on the boundary does not in general impose a single causal history in the bulk: many different histories may share the same causal boundary.
\begin{figure}[h]
\begin{overpic}[width=0.6 \columnwidth]{gfx/boundary_causality.png}
\put (30,90) {\textbf{+}}
\put (70,90) {\textbf{+}}
\put (0,65) {\textbf{--}}
\put (95,65) {\textbf{+}}
\put (0,30) {\textbf{+}}
\put (95,30) {\textbf{+}}
\put (25,10) {\textbf{--}}
\put (70,10) {\textbf{--}}
\end{overpic}
\begin{overpic}[width=0.6 \columnwidth]{gfx/boundary_causality.png}
\put (50,90) {\textbf{+}}
\put (10,80) {\textbf{--}}
\put (0,50) {\textbf{--}}
\put (15,15) {\textbf{--}}
\put (50,10) {\textbf{+}}
\put (85,20) {\textbf{--}}
\put (95,50) {\textbf{+}}
\put (85,75) {\textbf{+}}
\put (47,63) {\textbf{--}}
\put (35,55) {\textbf{+}}
\put (35,45) {\textbf{--}}
\put (37,37) {\textbf{+}}
\put (46,32) {\textbf{+}}
\put (60,38) {\textbf{--}}
\put (65,45) {\textbf{--}}
\put (60,55) {\textbf{+}}
\end{overpic}
\caption{Example of a causal structure on a boundary graph. Top: encoded on the nodes. Bottom: encoded on the links.}
\label{fig:boundary_causality}
\end{figure}
The causal information of the boundary can also be encoded on the links. To each link, one associates the product of the signs of its endpoints (see fig. \ref{fig:boundary_causality}). For a 4-valent graph, there are twice as many links as nodes. Despite the doubled number of variables, this encoding is not injective, but 2-to-1. Physically, encoding causality on the links only provides information about bare-causality, while encoding it on the nodes also gives a time-orientation.
A random assignment of signs to the links defines a causal structure on the boundary only if it allows signs to be consistently assigned to the nodes. This happens if, and only if, the signs on the links satisfy the constraint that their product around any loop of the graph is $1$. The latter constraint is implied by the cycle condition \eqref{eq:cycle} in the bulk. In fact, when two edges crossing the boundary share a common vertex, the sign of the wedge matches the sign of the link between the two corresponding nodes. More generally, the sign of a link is equal to the product of the signs of the wedges around the corresponding face in the bulk, which can be written
\begin{equation}\label{eq:boundary_bulk_relation}
\varepsilon_l = \prod_{v \in f} \varepsilon_v(f),
\end{equation}
where $f$ is the face that intersects the boundary along the link $l$.
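The two encodings and the loop constraint can be illustrated on a small example. The sketch below uses a hypothetical complete graph on four nodes (not a specific triangulation): link signs are derived as products of endpoint signs, the product around any loop is then automatically $1$, and the encoding is 2-to-1 because a global flip of the node signs leaves the link signs unchanged.

```python
from itertools import product

# Hypothetical boundary graph: the complete graph on four nodes.
nodes = range(4)
links = [(a, b) for a in nodes for b in nodes if a < b]

def link_signs(node_signs):
    """Encode causality on the links: product of the endpoint signs."""
    return {(a, b): node_signs[a] * node_signs[b] for (a, b) in links}

all_node_signs = [dict(enumerate(s)) for s in product([1, -1], repeat=4)]

# Around the triangle (0, 1, 2), the product of link signs is always +1.
loop_ok = all(ls[(0, 1)] * ls[(1, 2)] * ls[(0, 2)] == 1
              for ls in map(link_signs, all_node_signs))

# The encoding is 2-to-1: a global flip of the node signs (time reversal)
# produces the same link signs, so only 2^4 / 2 = 8 configurations arise.
distinct = {tuple(sorted(link_signs(ns).items())) for ns in all_node_signs}
```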
\passage{Causal amplitude}
The metric propagator is a function of the boundary variables. In concrete situations, the bare-causal structure of the boundary may be fixed by an assignment of link orientations $\varepsilon_l$. In this case, the range of integration over the angles $\theta_{t\sigma}$ in equation \eqref{eq:first_order_regge_propagator} must be restricted for the wedges that belong to faces intersecting the boundary. This restriction consists in implementing the constraint \eqref{eq:boundary_bulk_relation}. For instance, consider a link $l$ bounding a face $f$ (dual to $t$) with only one vertex $v$ (dual to $\sigma$). If $\varepsilon_l = 1$ (resp. $\varepsilon_l = -1$), then the integration over $\theta_{t\sigma}$ shall be carried over $\mathbb{R}^+$ (resp. $\mathbb{R}^-$), instead of $\mathbb{R}$.
For the $\theta_{t\sigma}$ in the bulk, the integration is still done over all $\mathbb{R}$. We can rewrite the amplitude \eqref{eq:first_order_regge_propagator} as
\begin{multline}
\label{eq:causal_amplitude}
\mathcal{A}_\Delta(l_\Sigma,\eta_\Sigma,\varepsilon_\Sigma) = \\ \sum_{[\varepsilon_{t\sigma}]} \int [\dd l_s] [\dd \eta_{t\sigma}] \prod_\sigma \delta(\det \gamma_\sigma) \, e^{\frac{i}{\hbar} \sum_{t | t \in \sigma} A_t \varepsilon_{t\sigma} \eta_{t\sigma}}\,,
\end{multline}
where the sum is done over all possible orientations $\varepsilon_{t\sigma}$ of wedges in the bulk, compatible with the bare-causality of the boundary $\varepsilon_\Sigma$. Thus, the path-integral is also summing over configurations which do not satisfy the cycle condition \eqref{eq:cycle}. These configurations have no well-defined causal structure.
However, in the classical limit, when $\hbar \rightarrow 0$, the configurations that contribute the most are the stationary points of the action, which satisfy the classical equations of motion and thus, as seen previously, have a proper causal structure.
It is tempting to implement the causal structure already at the quantum level. By this we mean restricting the sum over the configurations of $\varepsilon_{t\sigma}$ which satisfy the condition \eqref{eq:cycle}. Such a strategy is reminiscent of the definition of the Feynman propagator for the relativistic particle.
By computing the transition amplitude $\braket{x_1}{x_0}$, with $x_0$ before $x_1$, using the path-integral method, one considers only those trajectories which do not go back in time. In this case, the transition amplitude gives the Feynman propagator $G_F$.
The suggestion of such a causal path-integral for gravity goes back to Teitelboim \cite{teitelboim1982}. In the context of the standard formulation (equation \eqref{eq:standard_transition}), $h_0$ is regarded as "past" and $h_1$ as "future". Teitelboim thus proposed to restrict the range of integration over the 4-metrics $g$ to those which match this time-orientation. When written in the Hamiltonian formalism, this amounts to restricting the range of integration of the lapse $N$ to positive values only.
\section{BF theory}
\label{sec:BF}
As a first step towards spin-foams, let's consider discrete BF theory. It is a topological theory, so we do not expect any causal structure to arise, but it makes use of an orientation structure which is worth looking at as a warm-up.
\passage{Discrete BF theory}
Following \cite{baez2000}, the discretisation of BF theory is done over a 2-complex $\mathcal{C}$. The variables are one group element $g_e \in G$ per edge $e\in \mathcal{C}$. Then the amplitude is defined as
\begin{equation}
\label{eq:Z_BF}
Z_\mathcal{C} \overset{\text{def}}= \int [\dd g_e] \prod_f \delta \left( U_f (g_e) \right).
\end{equation}
There is one integral per edge,
$\dd g_e$ is the Haar measure over $G$, $\delta$ is the Dirac $\delta$-function over $G$ and the product is carried over all the faces $f$ of $\mathcal{C}$. Moreover we define the circular product
\begin{equation}
U_f (g_e) \overset{\text{def}}= \prod_{e \in f}^\circlearrowleft g_e,
\end{equation}
where the product is made over the edges $e$ surrounding $f$. Although it is sometimes overlooked, we want to draw attention to the orientation structure which is required to correctly define the circular product. We need the following additional structure over $\mathcal{C}$:
\begin{enumerate}
\item a distinguished edge to each face, that serves as a starting point in the product;
\item an orientation to each face, that tells the order of the following edges.
\end{enumerate}
Although this structure is required to define $U_f$, $\delta(U_f)$ does not actually depend on it, due to the invariance of the $\delta$-function under inversion and cyclic permutation.
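This invariance is easy to verify numerically on the character side: for random $SU(2)$ elements, the trace of the circular product is unchanged under a shift of the starting edge, and it is real, hence also unchanged under inversion of the face orientation (a sketch with a hypothetical face surrounded by four edges):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix built from a normalized quaternion."""
    a, b, c, d = rng.normal(size=4)
    a, b, c, d = np.array([a, b, c, d]) / np.sqrt(a*a + b*b + c*c + d*d)
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# A hypothetical face surrounded by four edges.
g = [random_su2() for _ in range(4)]
U = g[0] @ g[1] @ g[2] @ g[3]         # circular product, one starting edge
U_cyclic = g[1] @ g[2] @ g[3] @ g[0]  # another starting edge
U_inverse = np.linalg.inv(U)          # reversed face orientation

# The SU(2) characters, hence delta(U), are blind to these choices:
tr, tr_cyclic, tr_inverse = np.trace(U), np.trace(U_cyclic), np.trace(U_inverse)
```

Since $\delta(U)$ is a sum of characters, its invariance follows from the cyclicity of the trace and from the fact that $SU(2)$ characters are real.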
It is common to rewrite $Z_\mathcal{C}$ by splitting the $\delta$-function into a sum over the irreducible representations (irreps) of $G$:
\begin{equation}
\label{eq:delta_expansion}
\delta(U) = \sum_\rho \dim \rho \, \Tr \rho(U)
\end{equation}
Then, for each edge, the integral over the group element $g$ can be rewritten as a sum over intertwiners, which formally reads
\begin{equation}
\label{eq:sum_iota}
\int \dd g \bigotimes_{f \in e}^\circlearrowleft \rho_f(g) = \sum_{\iota} \iota \iota^*.
\end{equation}
Again, the definition of the circular tensor product requires the introduction of additional structure:
\begin{enumerate} \setcounter{enumi}{2}
\item a distinguished face to each edge, that serves as a starting point in the tensor product;
\item an orientation of the faces around each edge, which can be thought of as an arrow on the edge (with the right-hand rule to turn around it, for instance).
\end{enumerate}
Of course, $Z_\mathcal{C}$ remains blind to this structure. The amplitude finally becomes:
\begin{equation}
\label{eq:amplitude_sums}
Z_\mathcal{C} = \sum_\rho \sum_\iota \prod_f \dim \rho_f \prod_v A_v.
\end{equation}
The sum over $\rho$ (resp. $\iota$) is made over all possible labellings of the faces (resp. edges) by irreps (resp. intertwiners). The \textit{vertex amplitude} $A_v$ is a function of the irreps $\rho_f$ and intertwiners $\iota_e$ attached to the faces and edges surrounding a vertex $v$.
The four orientation structures just introduced enter the computation of $A_v$. These structures lie on the edges and faces of the 2-complex. Motivated by our analysis in the previous sections, we want to investigate the idea that causality of spin-foams could arise from the breaking of the invariance of $Z_\mathcal{C}$ with respect to the orientation of $\mathcal{C}$.
\passage{Ponzano-Regge model}
To proceed concretely, let's consider the simple example of the \textit{Ponzano-Regge model} \cite{ponzano1968}, for which $\mathcal{C}$ is dual to a 3-dimensional simplicial complex $\Delta$, and $G = SU(2)$. In this case, the irreps are labelled by spins $j \in \mathbb{N}/2$ and there is no sum over the intertwiners (because the intertwiner is unique). The vertex amplitude can be nicely represented pictorially as a graph where each node stands for an adjacent edge and each link for a face in-between. The \textit{vertex graph} typically takes the following form:
\begin{equation}
\label{eq:tetrahedron}
\begin{overpic}[width = 0.6 \columnwidth]{gfx/tetrahedron.png}
\put (0,48) {+}
\put (45,8) {--}
\put (58,73) {+}
\put (90,30) {--}
\put (30,70) {$j_1$}
\put (48,57) {$j_2$}
\put (72,55) {$j_3$}
\put (36,35) {$j_5$}
\put (20,20) {$j_6$}
\put (80,20) {$j_4$}
\end{overpic}
\end{equation}
The arrows on the links are induced by the orientation of the faces and the signs on the nodes are induced by the orientation of the edges (+ for incoming). Any combination of arrows and signs can be found, but the topology of the graph is the same for every vertex. The labels $j$ on the links are inherited from the irreps on the faces.
The graphical calculus is defined with the following rules:
\begin{enumerate}
\item To each link $l$, associate a variable $m_l$ that will be summed over;
\item The 3jm-Wigner symbol\footnote{We refer to \cite{martin-dussaud2019} for an introduction to the mathematical material used in this section.} is associated to the following nodes:
\begin{equation}
\begin{split}
\begin{pmatrix}
j_1 & j_2 & j_3 \\
m_1 & m_2 & m_3
\end{pmatrix}
&=
\begin{array}{c}
\begin{overpic}[width = 0.3 \columnwidth]{gfx/3CG-out.png}
\put (40,0) {+}
\put (10,48) {$j_3$}
\put (50,60) {$j_2$}
\put (80,40) {$j_1$}
\end{overpic}
\end{array} \\
&=
\begin{array}{c}
\begin{overpic}[width = 0.3 \columnwidth]{gfx/3CG-in.png}
\put (40,0) {--}
\put (10,48) {$j_1$}
\put (55,60) {$j_2$}
\put (80,40) {$j_3$}
\end{overpic}
\end{array}
\end{split}
\end{equation}
The sign on the node indicates the sense in which the attached links shall be read.
\item If an arrow is reversed, replace in the formula above $m_l$ by $-m_l$ and multiply by $(-1)^{j_l-m_l}$, like
\begin{equation}
\begin{array}{c}
\begin{overpic}[width = 0.3 \columnwidth]{gfx/3CG-outin.png}
\put (40,0) {+}
\put (10,48) {$j_3$}
\put (50,60) {$j_2$}
\put (80,40) {$j_1$}
\end{overpic}
\end{array}
= (-1)^{j_3-m_3} \begin{pmatrix}
j_1 & j_2 & j_3 \\
m_1 & m_2 & -m_3
\end{pmatrix}
\end{equation}
or
\begin{equation}
\begin{array}{c}
\begin{overpic}[width = 0.3 \columnwidth]{gfx/3CG-inout.png}
\put (40,0) {--}
\put (10,48) {$j_1$}
\put (55,60) {$j_2$}
\put (80,40) {$j_3$}
\end{overpic}
\end{array}
= (-1)^{j_1 - m_1} \begin{pmatrix}
j_1 & j_2 & j_3 \\
-m_1 & m_2 & m_3
\end{pmatrix}
\end{equation}
A positive (resp. negative) node with an incoming (resp. outgoing) link corresponds to a counter-alignment of the face and the edge.
\item Multiply all factors and sum over all $m_l$ from $-j_l$ to $j_l$ (integer steps).
\end{enumerate}
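Rule 3 rests on the standard behaviour of the 3jm symbol under reversal of the magnetic indices. For instance, reversing all three arrows at once reproduces the known symmetry $(j_1\, j_2\, j_3;\, -m_1\, -m_2\, -m_3) = (-1)^{j_1+j_2+j_3} (j_1\, j_2\, j_3;\, m_1\, m_2\, m_3)$, which can be checked with sympy (a quick consistency check, not part of the model):

```python
from sympy import Rational
from sympy.physics.wigner import wigner_3j

j1, j2, j3 = 1, Rational(3, 2), Rational(1, 2)

def m_range(j):
    """All magnetic numbers m = -j, ..., j in integer steps."""
    return [j - k for k in range(int(2 * j) + 1)]

# Reversing all three arrows multiplies the 3jm symbol by (-1)^{j1+j2+j3}.
phase = (-1) ** int(j1 + j2 + j3)
symmetry_holds = all(
    abs(float(wigner_3j(j1, j2, j3, -m1, -m2, -m3)
              - phase * wigner_3j(j1, j2, j3, m1, m2, m3))) < 1e-12
    for m1 in m_range(j1) for m2 in m_range(j2) for m3 in m_range(j3)
)
```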
As an example, the graph \eqref{eq:tetrahedron} evaluates to
\begin{multline}
A_v = \sum_{m_i} (-1)^{j_4 - m_4 + j_1 - m_1} \begin{pmatrix}
j_1 & j_2 & j_3 \\
m_1 & m_2 & m_3
\end{pmatrix} \\
\times
\begin{pmatrix}
j_4 & j_5 & j_3 \\
m_4 & -m_5 & m_3
\end{pmatrix}
\begin{pmatrix}
j_6 & j_2 & j_4 \\
m_6 & m_2 & -m_4
\end{pmatrix}
\begin{pmatrix}
j_6 & j_5 & j_1 \\
m_6 & -m_5 & -m_1
\end{pmatrix}
\end{multline}
The power of graphical calculus is apparent when comparing this cumbersome formula to the diagram \eqref{eq:tetrahedron}. Up to a sign, the vertex amplitude equals the 6j-symbol:
\begin{equation}
A_v = \pm \begin{Bmatrix}
j_1 & j_2 & j_3 \\
j_4 & j_5 & j_6
\end{Bmatrix}.
\end{equation}
The sign $\pm$ is a function of the spins $j_i$, and it depends on the orientation of the links and nodes. It matters when several vertices are glued together.
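This relation can be checked numerically with sympy, which provides both the 3jm and 6j symbols. The sketch below uses the standard Wigner convention for the magnetic sum, which may differ from the orientation-dependent signs above by an overall phase, so the comparison is made in absolute value:

```python
from sympy.physics.wigner import wigner_3j, wigner_6j

def six_j_bruteforce(j1, j2, j3, j4, j5, j6):
    """6j symbol as a magnetic sum of four 3jm symbols (Wigner convention).
    Integer spins only, to keep the m-loops simple."""
    total = 0
    ms = lambda j: range(-j, j + 1)
    for m1 in ms(j1):
        for m2 in ms(j2):
            for m3 in ms(j3):
                if m1 + m2 + m3 != 0:
                    continue
                for m4 in ms(j4):
                    for m5 in ms(j5):
                        for m6 in ms(j6):
                            # the remaining 3jm symbols vanish unless
                            # their magnetic numbers sum to zero
                            if m1 - m5 + m6 != 0 or m4 + m2 - m6 != 0:
                                continue
                            sign = (-1) ** (j1 + j2 + j3 + j4 + j5 + j6
                                            - m1 - m2 - m3 - m4 - m5 - m6)
                            total += (sign
                                      * wigner_3j(j1, j2, j3, -m1, -m2, -m3)
                                      * wigner_3j(j1, j5, j6, m1, -m5, m6)
                                      * wigner_3j(j4, j2, j6, m4, m2, -m6)
                                      * wigner_3j(j4, j5, j3, -m4, m5, m3))
    return float(total)

brute = six_j_bruteforce(1, 1, 1, 1, 1, 1)
exact = float(wigner_6j(1, 1, 1, 1, 1, 1))
```

Larger spins work the same way, at the cost of a rapidly growing magnetic sum.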
The importance of the Ponzano-Regge model is revealed by its semi-classical limit, which makes it a good candidate for 3D euclidean quantum gravity \cite{ponzano1968}. Indeed, the vertex amplitude admits a graphical representation as the tetrahedron depicted in \eqref{eq:tetrahedron}. This shape initially expresses the invariance of the 6j-symbol under the action of the tetrahedral group. But it turns out that it also carries a deeper geometric meaning when the labels $j_i$ are interpreted as the edge lengths of the tetrahedron. Denoting by $V$ the volume of this tetrahedron, one can prove the following behaviour for the vertex amplitude $A_v(\lambda j_i)$ when $\lambda \to \infty$:
\begin{equation}
\label{eq:classical_limit_PR}
\begin{Bmatrix}
\lambda j_1 & \lambda j_2 & \lambda j_3 \\
\lambda j_4 & \lambda j_5 & \lambda j_6
\end{Bmatrix} \sim \frac{1}{4\sqrt{3 \pi \lambda^3 V}} \left( e^{i S} + e^{-i S} \right)
\end{equation}
with the action
\begin{equation}
S \overset{\text{def}}= \sum_i \left( \lambda j_i + \frac{1}{2} \right) \xi_i + \frac{\pi}{4}
\end{equation}
where $\xi_i$ is the exterior dihedral angle along the edge $i$ \cite{roberts1999}.
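For the equilateral case (all spins equal), the asymptotic formula is easy to test: the edge lengths are $\lambda j + 1/2$, the exterior dihedral angle is $\pi - \arccos(1/3)$, and $V$ is the volume of the corresponding regular tetrahedron. A numerical sketch (equilateral case only, with a generous tolerance on the subleading corrections):

```python
import numpy as np
from sympy.physics.wigner import wigner_6j

def pr_estimate(k):
    """Ponzano-Regge estimate of {k k k; k k k} for integer spin k,
    together with the slowly decaying envelope 1/sqrt(12 pi V)."""
    ell = k + 0.5                       # edge length j + 1/2
    V = np.sqrt(2.0) / 12.0 * ell ** 3  # volume of the regular tetrahedron
    xi = np.pi - np.arccos(1.0 / 3.0)   # exterior dihedral angle
    S = 6.0 * ell * xi + np.pi / 4.0    # Regge action plus the pi/4 phase
    envelope = 1.0 / np.sqrt(12.0 * np.pi * V)
    return np.cos(S) * envelope, envelope

# Compare the exact 6j symbol with the asymptotic estimate,
# measuring the error against the envelope.
relative_errors = []
for k in range(10, 21):
    exact = float(wigner_6j(k, k, k, k, k, k))
    approx, envelope = pr_estimate(k)
    relative_errors.append(abs(exact - approx) / envelope)
```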
Graphical calculus also clarifies how the orientation enters the computation of the vertex amplitude. The invariance of the total amplitude $Z_\mathcal{C}$ under changes of orientation of the edges and faces is checked in appendix \ref{sec:orientation_invariance}. To be precise, the invariance is only true for faces and edges which lie in the bulk of $\mathcal{C}$. In general, $\mathcal{C}$ is bounded by some 3-valent graph $\Gamma$ over which an orientation is induced by $\mathcal{C}$. The total amplitude $Z_\mathcal{C}$ is sensitive to the orientation of $\Gamma$. The case is similar to the amplitude defined by equation \eqref{eq:causal_amplitude}. Although topological in the bulk, BF theory is non-trivial on the boundary. The boundary orientation provides a prototype of boundary causal structure.
A change of boundary orientation affects the value of $Z_\mathcal{C}$ in a simple way:
\begin{itemize}
\item A flip of a link orientation brings a global factor $(-1)^{2j}$ if its two endpoints carry opposite signs, and no factor otherwise.
\item A flip of a node orientation brings an overall factor $(-1)^{j_1+j_2+j_3}$.
\end{itemize}
As for Feynman diagrams, this simple way of modifying the causal structure of the boundary can be understood as a \textit{crossing symmetry}.
\passage{Causal Ponzano-Regge model}
Now we want to go further and suggest a way to break the orientation-invariance in the bulk of the Ponzano-Regge model.
Consider $\mathcal{H}_j$, the spin-j irreducible representation of $SU(2)$, with the canonical basis $\ket{jm}$. The coherent states are defined as
\begin{equation}
\ket{j,z} \overset{\text{def}}= u(z) \ket{j,-j}
\end{equation}
with $u : \mathbb{C}^2 \to SU(2)$ a (well-chosen) surjective map. Then, the intertwiner can be written as the state
\begin{equation}
\ket{\iota} = \frac{1}{N} \int \dd g \, g \cdot \bigotimes_{i=1}^3 \ket{j_i, z_i}
\end{equation}
with $N = N(j_i,z_i)$ such that $\braket{\iota}{\iota}=1$. Up to a global phase, the vertex amplitude is
\begin{equation}\label{eq:A_v_K}
A_v = \frac{1}{N} \int_{SU(2)} [\dd g_n] \prod_l K_l(g_{t_l}, g_{s_l}) .
\end{equation}
Here, $s_l$ and $t_l$ are respectively the source and the target of $l$. The wedge amplitude is defined as
\begin{equation}
K (g_t, g_s) \overset{\text{def}}= \matrixel{j,z_t}{g^{-1}_t g_s }{j, z_s} = \matrixel{z_t}{g^{-1}_t g_s }{z_s}^{2j}.
\end{equation}
For the purpose of studying the semi-classical limit, it is convenient to write the wedge amplitude as
\begin{equation}
K (g_t, g_s) = e^{2j (\log r + i \theta)}
\end{equation}
with $r \in \mathbb{R}^+$ and $\theta \in (-\pi,\pi]$.
Now, to introduce a notion of causality, we can force some orientation structure to appear artificially: this is done here by writing the identity
\begin{equation}
K (g_t, g_s) = \sum_{\varepsilon \in \{1,-1\}} \Theta(\varepsilon \theta ) \, e^{2j (\log r + i \theta)},
\end{equation}
where $\Theta$ is the step function and we are summing over the signs $\varepsilon$. We then define the causal wedge amplitude as
\begin{equation}
K^\varepsilon (g_t, g_s) \overset{\text{def}}= \Theta(\varepsilon \theta ) \, e^{2j (\log r + i \theta)}.
\end{equation}
Here, the sign $\varepsilon$ can be understood as a choice of wedge orientation. Given one orientation $\varepsilon_l$ per wedge $l$, the causal vertex amplitude $A_v^{\varepsilon}$ is defined by replacing the wedge amplitude $K_l$ by its causal alternative $K_l^{\varepsilon_l}$ in equation \eqref{eq:A_v_K}. The BF vertex amplitude is recovered as
\begin{equation}
A_v = \sum_{[\varepsilon_l]} A_v^{\varepsilon_l},
\end{equation}
where the sum is made over all possible sign assignments to the wedges. There is a total of $2^6$ such configurations. This sum introduces a partition of the range of integration of \eqref{eq:A_v_K} into as many sectors. In the semi-classical limit, only two sectors survive, as appears in \eqref{eq:classical_limit_PR}. Since the exterior dihedral angles of any tetrahedron are always such that $\sin \xi \leq 0$, the two surviving sectors are those where the $\varepsilon_l$ are either all positive or all negative. Starting with the causal vertex amplitude with all $\varepsilon_l$ negative thus leads to the asymptotic limit
\begin{equation}
A^-_v \sim \frac{1}{4\sqrt{3 \pi \lambda^3 V}} e^{i S}.
\end{equation}
This provides a toy-model for the appearance of causality, which we will apply to the EPRL model.
\passage{$\{15j\}$ BF theory}
Before moving to the EPRL model, let us look at BF theory in 4 dimensions. The main difference with respect to the $3d$ case comes from the fact that $\Delta$ is a 4-dimensional simplicial complex and so the amplitude $Z_\mathcal{C}$ includes a sum over the intertwiners. The graphical representation of the intertwiners requires the introduction of an additional structure:
\begin{enumerate} \setcounter{enumi}{4}
\item at each edge, the surrounding faces are partitioned into two sets of two (there exists three such partitions);
\item these two sets are ordered (e.g. called left and right).
\end{enumerate}
Then the vertex amplitude is represented by a pentagram like
\begin{equation}
\label{eq:pentagram}
\begin{overpic}[width = 0.7 \columnwidth]{gfx/pentagram.png}
\put (67,10) {--}
\put (73,17) {--}
\put (25,76) {$j_1$}
\put (15,35) {$j_2$}
\put (50,8) {$j_3$}
\put (80,35) {$j_4$}
\put (70,80) {$j_5$}
\put (49,59) {$j_6$}
\put (38,50) {$j_7$}
\put (42,36) {$j_8$}
\put (55,39) {$j_9$}
\put (57,50) {$j_{10}$}
\put (48,90) {$\iota_1$}
\put (43,90) {+}
\put (53,90) {+}
\put (8,59) {$\iota_2$}
\put (10,64) {--}
\put (10,54) {--}
\put (25,12) {$\iota_3$}
\put (22,17) {+}
\put (30,9) {+}
\put (73,12) {$\iota_4$}
\put (87,59) {$\iota_5$}
\put (87,54) {+}
\put (87,64) {+}
\end{overpic}
\end{equation}
$\iota \in \mathbb{N}/2$ labels the intertwiners. Each intertwiner is surrounded by two positive (resp. negative) nodes when the edge is incoming (resp. outgoing). When the nodes are positive (resp. negative), the arrow goes from the right set to the left set (resp. the other way around). One can check with the rules above that the overall amplitude $Z_\mathcal{C}$ is insensitive to the additional structure introduced. The same procedure as before can be used to define the causal vertex amplitude $A_v^\varepsilon$.
\medskip
To sum up, discrete BF theory is defined over a 2-complex $\mathcal{C}$ equipped with a set of auxiliary orientation structures. The value of the partition function $Z_\mathcal{C}$ is sensitive to these structures on the boundary, but not in the bulk. However, we have made a proposal to select a causal structure in the bulk as well.
\section{EPRL model}
\label{sec:EPRL}
General relativity can be formulated in a language close to the one of BF theory. The essential difference is that GR has local degrees of freedom, which mathematically arise from the implementation of constraints on the BF variables. Spin-foam models build upon this insight and consist in a weak implementation of these constraints in the discrete BF theory.
The appearance of the local degrees of freedom shows up in the breaking of the topological invariance of BF theory. The question then arises whether these constraints also induce a causal structure on the 2-complex. The analysis of the previous sections suggests that such a causal structure can arise as an orientation structure on the edges or the wedges. Imposing constraints on these orientations then breaks the bulk orientation invariance of BF theory discussed above. In this section, we propose such a construction for the EPRL model.
\passage{Lorentzian EPRL model}
The EPRL model \cite{Engle:2007wy}, as formulated in \cite{Rovelli:2011eq}, is defined over a 2-complex $\mathcal{C}$ by the following partition function:
\begin{equation}
Z_\mathcal{C} = \int_{SU(2)} [\dd h_w ] \, \prod_f \delta(U_f) \prod_v A_v(h_w).
\end{equation}
The integral is made over the variables $h_w \in SU(2)$ associated to each wedge $w$ in the bulk of $\mathcal{C}$. $Z_\mathcal{C}$ is a function of the variables $h_l \in SU(2)$ associated to each link of the boundary graph $\Gamma$. Similarly to BF theory, the precise definition requires additional structure on $\Gamma$:
\begin{enumerate}
\item a starting wedge for each face;
\item an orientation for each face;
\item an orientation for each wedge, i.e. each wedge $w$ has a source edge $s_w$ and a target edge $t_w$;
\item a distinguished edge $E_v$ for each vertex $v$.
\end{enumerate}
Then, $U_f$ is defined as the circular product
\begin{equation}
U_f (h_w) \overset{\text{def}}= \prod_{w \in f}^\circlearrowleft h_w
\end{equation}
that starts with the starting wedge of $f$ and circulates in the sense given by the orientation of $f$; each $h_w$ is inverted when the orientation of the wedge does not match the orientation of the face. Besides, the vertex amplitude is
\begin{equation}
\label{eq:A_v_EPRL}
A_v(h_w) = \int_{SL_2(\mathbb{C})} [\dd g_e ] \, \delta(g_{E_v}) \, \prod_{w \in v} K(h_w,g_{s_w} g_{t_w}^{-1})
\end{equation}
where there is one integration over $SL_2(\mathbb{C})$ for each edge surrounding $v$. The $\delta(g_{E_v})$ is only there to make the integral finite; the value of $A_v$ is actually independent of the choice of $E_v$. The wedge amplitude $K$ is a function over $SU(2) \times SL_2(\mathbb{C})$ given by
\begin{equation}
K(h,g) = \sum_j (2j+1)^2 \int_{SU(2)} \dd k \, \overline{\chi^j (hk)} \, \chi^{\gamma j,j}(kg),
\label{eq:EPRL-K-def}
\end{equation}
with $\chi^j$ and $\chi^{p,k}$ respectively the characters of $SU(2)$ and $SL_2(\mathbb{C})$, and $\gamma \in \mathbb{R}$ the Barbero-Immirzi parameter.
The success of the EPRL model lies in its semi-classical limit, which was studied in \cite{Barrett:2009mw,Dona:2020yao} for a 2-complex $\mathcal{C}$ made of a single vertex dual to a lorentzian 4-simplex $\sigma$. In this case, the partition function reduces to a single vertex amplitude $A_v(h_w)$, a function of one $SU(2)$ element $h_w$ for each of the 10 links of the boundary graph. The boundary of $\sigma$ is made of 5 tetrahedra, which can be described by the areas $j$ and the normals $\vec n$ of their faces. Then, the kinematics of loop quantum gravity prescribes the construction of a coherent state $\Psi_{j,\vec n}(h_w)$ which is "peaked" on the boundary geometry of $\sigma$. In the limit of large areas, $\lambda \to \infty$,
\begin{equation}
\label{eq:asymptotic_EPRL}
\begin{split}
\braket{\Psi_{\lambda j, \vec n}}{A_v} &\overset{\text{def}}= \int_{SU(2)} [\dd h_w] \Psi^*_{j,\vec n}(h_w) A_v(h_w) \\
&\sim \frac{1}{\lambda^{12}} \left( N_\sigma e^{i \lambda S_R} + N_{P\sigma} e^{- i \lambda S_R} \right)
\end{split}
\end{equation}
with $N_\sigma, N_{P \sigma} \in \mathbb{R}$ and $S_R$ the lorentzian Regge action of $\sigma$. This result is encouraging because it is close to the classical expectation and takes the same form as the semi-classical limit \eqref{eq:classical_limit_PR} of the Ponzano-Regge model. However, it is not without flaws, as we will discuss later.
The EPRL model is blind to the bulk orientation structure that has been introduced to define it. Indeed, changing the starting wedge or the orientation of a face $f$ changes the value of $U_f$ but not of $\delta(U_f)$. Moreover, reversing the orientation of a wedge $w$ changes both $U_f$ and $A_v$. In $U_f$ it replaces $h_w$ by $h_w^{-1}$. In $A_v$, it interchanges $s_w$ and $t_w$, and so $K(h_w,g_{s_w} g_{t_w}^{-1})$ becomes $K(h_w,g_{t_w} g_{s_w}^{-1})$, which is easily shown to be equal to $K(h_w^{-1},g_{s_w} g_{t_w}^{-1})$. A change of variables $h_w \longrightarrow h_w^{-1}$ within the integral finally proves that the value of $Z_\mathcal{C}$ remains unchanged.
The model is nevertheless sensitive to the orientation structure on the boundary. This structure consists only of the orientation of the links on the boundary over which the variables $h_l$ live. In the light of our preceding analysis, we expect this structure to carry causal information.
\passage{Causal EPRL model}
Analogously to our previous toy-model for BF theory, one can break the orientation invariance in the bulk as follows. First, one performs the integral over $k\in SU(2)$ in \eqref{eq:EPRL-K-def}, which yields
\begin{equation}
K(h,g) = \sum_{j,m,n} (2j+1) \overline{D^j_{mn}(h) } D^{\gamma j , j }_{jmjn}(g).
\end{equation}
One can switch from the magnetic basis $\ket{jm}$ to the overcomplete basis of coherent states $\ket{j,z}$ by inserting a resolution of the identity
\begin{equation}
\mathbb{1}_j = \frac{2j+1}{\pi} \int_\Gamma \frac{\Omega(z)}{\|z\|^{4}} \dyad{j,z}
\end{equation}
with the measure
\begin{equation}
\Omega(z) \overset{\text{def}}= \frac{i}{2} (z_0 \dd z_1 - z_1 \dd z_0) \wedge (z_0^* \dd z_1^* - z_1^* \dd z_0^*)\,.
\end{equation}
Here $\Gamma$ is the image of a path $\mathbb{C}P^1 \to \mathbb{C}^2$ that crosses each vector line of $\mathbb{C}^2$ once and only once\footnote{More abstractly, it can be regarded as a section of the Hopf bundle.}.
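This resolution of the identity can be checked numerically. Choosing $\Gamma$ to be the unit sphere $\|z\|=1$, it reduces to the familiar angular form $\mathbb{1}_j = \frac{2j+1}{4\pi}\int \dd\Omega \, \dyad{j,n}$; the sketch below verifies this for $j=1$, writing the coherent-state components in the $\ket{jm}$ basis:

```python
import numpy as np
from scipy.integrate import dblquad

def coherent_j1(theta, phi):
    """Components <1,m|1,n> of the spin-1 coherent state along n(theta, phi)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([c ** 2,
                     np.sqrt(2) * c * s * np.exp(1j * phi),
                     s ** 2 * np.exp(2j * phi)])

def integrand(theta, phi, a, b, part):
    v = coherent_j1(theta, phi)
    val = v[a] * np.conj(v[b]) * np.sin(theta)  # dOmega = sin(theta) dtheta dphi
    return val.real if part == "re" else val.imag

# (2j+1)/(4 pi) times the sphere integral of |j,n><j,n| should give 1_j.
resolution = np.zeros((3, 3), dtype=complex)
for a in range(3):
    for b in range(3):
        re = dblquad(integrand, 0, 2 * np.pi, 0, np.pi, args=(a, b, "re"))[0]
        im = dblquad(integrand, 0, 2 * np.pi, 0, np.pi, args=(a, b, "im"))[0]
        resolution[a, b] = 3 / (4 * np.pi) * (re + 1j * im)
```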
One gets
\begin{multline}
K (h,g) = \sum_j \frac{(2j+1)^3}{\pi^2} \int_{\Gamma'\times \Gamma''} \frac{\Omega( z') \Omega( z'')}{\|z'\|^4 \|z''\|^4} \\ \times \matrixel{j,z'}{h^\dagger}{j,z''} \matrixel{\gamma j , j, j,z''}{D^{\gamma j, j}(g)}{\gamma j,j,j,z'}.
\end{multline}
Following \cite{Barrett:2009mw,Dona:2020yao}, the $SL_2(\mathbb{C})$ matrix element appearing above can be expressed in terms of an auxiliary $\mathbb{C}P^1$ as
\begin{equation}
\matrixel{\gamma j,j,j,z''}{D^{\gamma j, j}(g)}{\gamma j,j,j,z'} = \frac{2j+1}{\pi} \int_\Gamma \frac{\Omega(\zeta)}{\|\zeta\|^4}\; \mathscr{A}\; e^{i S_\gamma}\,,
\end{equation}
where
\begin{equation}
\mathscr{A} = \frac{\braket{\zeta}{z''^*}^{2j} \braket{z'^*}{g^T \zeta}^{2j} e^{2 i j (\arg z''_1 - \arg z'_1)} }{\|z'\|^{2j} \|z''\|^{2j} \| \zeta \|^{2j-2} \| g^T \zeta \|^{2j +2 }}
\end{equation}
and
\begin{equation}
S_\gamma = \gamma j \log \frac{\| g^T \zeta \|^2}{ \| \zeta \|^2}\,.
\end{equation}
One can thus define the causal wedge amplitude as
\begin{multline}
K_\varepsilon (h,g) = \sum_j \frac{(2j+1)^4}{\pi^3} \int
\frac{\Omega(\zeta)\Omega( z') \Omega( z'')}{\|\zeta\|^4 \|z'\|^4 \|z''\|^4} \\ \times \, \matrixel{z'}{h^\dagger}{z''}^{2j}\;\; \Theta(\varepsilon S_\gamma ) \,\mathscr{A}\, e^{i S_\gamma}.
\label{eq:causal-K}
\end{multline}
The causal vertex amplitude $A_v^\varepsilon$ is then defined by replacing $K$ with $K_\varepsilon$ in equation \eqref{eq:A_v_EPRL}, for some choice of $\varepsilon$ on each wedge. This defines a causal EPRL model.
At this level, the epithet "causal" is only motivated by the fact that the extra variable $\varepsilon \in \{ 1,-1 \}$ can potentially encode a causal structure on the wedges, as described previously. Let's motivate it further.
First, the usual vertex amplitude is recovered by summing over all possible sign assignments to the wedges:
\begin{equation}
\label{eq:sectors_EPRL}
A_v = \sum_{[\varepsilon_w]} A_v^{\varepsilon}.
\end{equation}
There are $2^{10}$ terms in the sum. If one interprets $\varepsilon$ as wedge orientations, then a configuration only properly defines a causal structure if the cycle condition \eqref{eq:cycle} is fulfilled. So, one can properly call $A_v^\varepsilon$ a "causal vertex amplitude" when the configuration $[\varepsilon_w]$ satisfies \eqref{eq:cycle}.
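The counting can be made explicit. For a single vertex, the wedges correspond to the 10 links of the complete graph on 5 nodes, and on a complete graph the cycle condition reduces to the triangles. Among the $2^{10}$ sign assignments, exactly $2^5/2 = 16$ survive, namely those induced by signs on the nodes (a combinatorial sketch):

```python
from itertools import combinations, product

nodes = range(5)
links = list(combinations(nodes, 2))  # the 10 wedges of the vertex graph K5

def satisfies_cycles(signs):
    """Cycle condition: the product of wedge signs around every triangle
    is +1 (on a complete graph, triangles generate all cycles)."""
    eps = dict(zip(links, signs))
    return all(eps[(a, b)] * eps[(b, c)] * eps[(a, c)] == 1
               for a, b, c in combinations(nodes, 3))

causal_configs = [s for s in product([1, -1], repeat=len(links))
                  if satisfies_cycles(s)]
```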
Secondly, assuming that $[\varepsilon_w]$ indeed defines a proper causal structure, the word "causal" for $A_v^\varepsilon$ is only deserved if, in the asymptotic limit, $\varepsilon$ really captures the causal orientation of the boundary state. More precisely, a lorentzian 4-simplex determines a sign $\tilde \varepsilon_w = \text{sign} \, \theta_w$ for each wedge, where $\theta_w$ is the dihedral angle of the wedge $w$. Then, its boundary coherent state $\Psi_{j,\vec n}$, when contracted with $A_v^{\tilde \varepsilon}$, should have the expected classical limit, i.e.
\begin{equation}
\label{eq:asymptotic_causal_EPRL}
\braket{\Psi_{\lambda j, \vec n}}{A_v^{\tilde \varepsilon}} \sim \frac{1}{\lambda^{12}} N_\sigma e^{i \lambda S_R}
\end{equation}
One can check that this is the case. Indeed, $\braket{\Psi_{\lambda j, \vec n}}{A_v}$ is an integral over the variables $g,z,z'$. The sum \eqref{eq:sectors_EPRL} introduces a partition of the range of integration in a number of sectors $[\varepsilon_w]$ characterised by $\varepsilon_w S_\gamma(x_w) > 0$, where $x_w$ stands for all the variables $g,z,z'$ on the wedge $w$. In the asymptotic limit, the two terms of \eqref{eq:asymptotic_EPRL} arise from two stationary points which are related by a parity transformation. The asymptotic analysis of \cite{barrett2009c} shows that one of them, denoted $\sigma$, is such that $\tilde \varepsilon_w S_\gamma(x^\sigma_w) > 0$. Then the other, denoted $P\sigma$, satisfies $S_\gamma (x^{P\sigma}_w) = - S_\gamma(x^\sigma_w)$, so that $P\sigma$ and $\sigma$ are in opposite sectors. By construction, $\braket{\Psi_{\lambda j, \vec n}}{A_v^{\tilde \varepsilon}}$ selects only the sector of $\sigma$.
The asymptotic behaviour \eqref{eq:asymptotic_causal_EPRL} is the only criterion that constrains the definition of the causal wedge amplitude. So, the dichotomy operated by $\Theta(\varepsilon S_\gamma )$ is to some extent arbitrary, and other functions could work as well. Another choice of $K_\varepsilon$ would define a different quantum theory with the same classical limit. Our choice appears to us as the simplest one. Its technical properties will be discussed in a second article.
The causal EPRL model is not a new model, but rather an interpretation of different components of the standard vertex amplitude in terms of causal structures. This interpretation is motivated \textit{a priori} by the understanding that discrete causal structures can be encoded on wedges, and \textit{a posteriori} by the asymptotics of the causal vertex. In the next section, we discuss how the proposal connects to earlier results, and this motivates the idea that taking causality into account could cure some of the EPRL model's known issues.
\section{Relation to earlier proposals}
\label{sec:relation_to_earlier_proposals}
\passage{Livine-Oriti Barrett-Crane causal model}
Our approach is closely related to an earlier proposal on the implementation of causality by Livine and Oriti \cite{livine2003}. In the context of the Barrett-Crane model, the wedge amplitude is
\begin{equation}
K^p(x_1,x_2) = \frac{2 \sin(\beta(x_1,x_2) \, p/2)}{p \sinh{\beta(x_1,x_2)}}.
\end{equation}
$x_1$ and $x_2$ can be understood as the normals to two boundary tetrahedra, and $\beta(x_1,x_2)$ is the lorentzian angle between them (see appendix \ref{sec:barrett_crane} for a complete definition of the symbols). The Livine-Oriti proposal consists of expanding the sine as
\begin{equation}
K^p(x_1,x_2) = \frac{-i}{p \sinh{\beta(x_1,x_2)}} \sum_{\varepsilon = \pm 1 } \varepsilon \, e^{ i \varepsilon \beta(x_1,x_2) \, p/2},
\end{equation}
interpreting $\varepsilon$ as an orientation on the wedges and selecting one of the two sectors only. The implementation of causality discussed in this paper can be understood as a direct generalization of the Livine-Oriti proposal to the EPRL model. The non-trivial step introduced here is identifying how to perform the analogous splitting in \eqref{eq:causal-K}.
\passage{Divergence and spikes}
The EPRL model and the Ponzano-Regge model both suffer from infrared divergences. In the latter case, the simplest example is provided by a triangulation $\Delta$ made of four tetrahedra subdividing a bigger tetrahedron. The dual 2-complex has 4 vertices and 10 faces. The four interior faces enclose a \textit{bubble}. The partition function is given by
\begin{equation}
Z_\mathcal{C} = \sum_{j_i} \left(\prod_f (2j_f+1)\right) A_{v_1} A_{v_2} A_{v_3} A_{v_4}.
\end{equation}
The sum is made over the spins $j_i$ attached to the interior faces. The spins of the other faces are not summed over because they are fixed by the boundary conditions. The range of the sum for each interior face is a priori restricted by its neighboring faces according to the Clebsch-Gordan conditions. However, the existence of a bubble implies that the sums are actually unbounded. This would not be a problem if the vertex amplitude $A_v$ decayed fast enough at large spins, but such is not the case.
In the asymptotic limit, $A_v$ consists of two conjugate terms like
\begin{equation}
A_v \sim A_v^+ + A_v^-,
\end{equation}
as shown in equation \eqref{eq:classical_limit_PR}.
The sign $\pm$ can be seen as an orientation of the tetrahedron dual to $v$. Thus,
\begin{equation}
A_{v_1}A_{v_2}A_{v_3}A_{v_4} \sim \sum_{\varepsilon_i} A^{\varepsilon_1}_{v_1}A^{\varepsilon_2}_{v_2}A^{\varepsilon_3}_{v_3}A^{\varepsilon_4}_{v_4},
\end{equation}
where the sum runs over all possible sign assignments to the four vertices. In \cite{christodoulou2013}, it is suggested that only some of the terms in this sum, dubbed ``spikes'', contribute to the divergence. In other words, a wise selection of vertex orientations could free the model from divergences. It is suggested that a similar mechanism could cure the EPRL model as well.
Our proposed causal EPRL model \eqref{eq:causal-K} satisfies the requirements identified in \cite{christodoulou2013}. However, there are two main differences with respect to their proposal:
\begin{enumerate}
\item We fix the orientation at the level of wedges, which is a finer scale than that of tetrahedra.
\item The orientation is fixed in the definition of the amplitude, while theirs only holds in the asymptotic limit.
\end{enumerate}
\passage{Engle's proper vertex}
The computation of the EPRL asymptotics \eqref{eq:asymptotic_EPRL} by Barrett et al.~\cite{barrett2009c} is promising but presents two difficulties:
\begin{itemize}
\item The cosine problem: the asymptotics of a single vertex has two critical points. This results in a problematic asymptotic behavior when several vertices are involved.
\item Euclidean boundary: the asymptotics does not suppress the euclidean boundary.
\end{itemize}
These two difficulties are traced back to a common origin in Engle's analysis \cite{Engle:2011un,engle2016a}: the EPRL model is built from discrete BF theory by imposing the simplicity constraint, but the latter is not strong enough as it admits three sectors out of the five of the Plebanski formulation.
The \emph{proper vertex} \cite{engle2016a} includes further constraints that restrict the model to the Einstein-Hilbert sector only. This is done by introducing in each wedge amplitude a spectral projector that concretely acts as a step-function $\Theta$. As a result, only one of the two terms in \eqref{eq:asymptotic_EPRL} is selected.
Our proposed causal vertex \eqref{eq:causal-K} shares a similar feature in that it amounts to the introduction of a step-function on each wedge. Yet, let us remark that:
\begin{enumerate}
\item The motivations are different. Engle's proper vertex is motivated by the restriction to the Einstein-Hilbert sectors while we are motivated by our analysis of the causal structure. Do the two requirements actually coincide? It would be interesting to investigate this relation, following the analysis of Immirzi in \cite{Immirzi:2016nnz}.
\item A priori, it is not clear that the two restrictions match away from the semi-classical limit. In fact, Engle's proper vertex introduces a step-function $\Theta$ on each wedge which depends on data on the full 4-simplex, while \eqref{eq:causal-K} is local on the wedge but includes the wedge orientations $\varepsilon_w$ as additional dynamical variables.
\end{enumerate}
\section{Conclusion}
We can summarise our main points as follows:
\begin{itemize}
\item The notion of causality in general relativity encompasses two related but conceptually different notions: bare-causality and time-orientability.
\item There is a natural way to translate these notions to a simplicial complex.
\item The causal structure can be implemented on the dual 1-skeleton (edges). It can be seen as the combination of a causal set with a neighbourhood relation.
\item It can also be encoded on the dual 2-skeleton (wedges) with a degeneracy of 2 that corresponds to a global time-reversal symmetry.
\item Starting from the set of all possible wedge orientations, the lorentzian Regge action determines equations of motion whose solutions fix a proper causal structure.
\item The metric propagator can be written as a sum over all possible wedge orientations. By fixing the causal structure from the beginning, one defines a causal metric propagator, similar to the Feynman propagator.
\item The discrete BF theory naturally carries an orientation structure on the edges and faces, although it is blind to it in the bulk.
\item The discrete BF theory is sensitive to the orientation on the boundary. There are simple rules of crossing symmetry to go from one orientation to another.
\item There is an easy way to break the orientation invariance in the bulk, which provides a toy-model to study causality in spin-foam models.
\item The EPRL amplitude can be regarded as a sum over all possible configurations of wedge orientations $\varepsilon_w$, which provide additional dynamical variables encoding the causal structure. Only a subset of these configurations corresponds to properly causal ones.
\item The causal EPRL vertex shares common traits with the Livine-Oriti causal version of the Barrett-Crane model and with Engle's proper vertex. It cures some of the flaws of the initial model.
\item The causal EPRL model shows a new form of rigidity: only discrete lorentzian geometries compatible with the fixed causal structure arise in the semiclassical limit. This phenomenon provides an analogue of Malament's theorem for semiclassical spinfoams.
\end{itemize}
\begin{acknowledgments}
The authors thank Pietro Don\`a and Francesco Gozzini for insights and constructive criticisms during the course of this work. PMD thanks Alexandra Elbakyan for her help to access the scientific literature.
This publication was made possible through the support of the ID\# 61466 grant from the John Templeton Foundation, as part of the “Quantum Information Structure of Spacetime (QISS)” project (\href{https://www.qiss.fr}{qiss.fr}). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. EB acknowledges support by the NSF via the Grant PHY-1806428.
\end{acknowledgments}
\section{Introduction}
\noindent Transferring knowledge from a labeled source domain to an unlabeled target domain is desirable in many real-world applications. However, deep learning models do not perform well in the presence of such domain shifts. For example, a model trained on synthetic data may fail to generalize well on real-world data. Unsupervised Domain Adaptation (UDA) seeks to solve this problem by learning domain-invariant features. Recent UDA methods harness transferable features learned by deep models pre-trained on large datasets like ImageNet~\cite{ganin2016domain, he2016deep, long2017deep, long2018conditional, zhang2019bridging, liu2019transferable, tang2020unsupervised, gu2020spherical, jiang2020implicit, dalib}. However, a large body of work has shown that these deep models are vulnerable to small adversarial changes in the input that can easily fool the trained models~\cite{biggio2013evasion, szegedy2013intriguing, goodfellow2014explaining, carlini2017towards}. The widespread use of these models in sensitive applications requires them to be robust against these changes.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figs/block.png}
\caption{An overview of the proposed method. Source and target images are passed through the backbone model and robust teachers to get features at different blocks. The intermediate features are transferred to the robust feature adaptation (RFA) module, which adapts the robustness. The output of the backbone model goes through the domain adaptation module, which utilizes an unsupervised domain adaption algorithm. The parameters of the UDA feature extractor are updated to minimize both domain adaptation and robust feature adaptation loss. Light colors show the features extracted for source domain inputs and dark colors for target domain inputs.}
\label{fig:block_rfa}
\end{figure*}
\noindent Significant attention has been devoted to countering adversarial examples, and many defense methods have been devised~\cite{goodfellow2014explaining, guo2017countering,tramer2017ensemble, madry2017towards, buckman2018thermometer, liao2018defense, ross2018improving, shafahi2019adversarial, tramer2019adversarial, wong2020fast}. Supervised adversarial training is among the most successful approaches~\cite{madry2017towards}. It is based on the simple idea of training a model on adversarial examples. It utilizes min-max optimization where adversarial examples are first generated by iterative maximization of the loss, and the model is then trained on these examples. However, the generation of these adversarial examples requires labels, and adversarial training implicitly assumes inputs from a single domain. These issues limit the applicability of adversarial training in UDA.
\noindent In this paper, we propose a simple, unsupervised, and domain agnostic method for robustness in UDA. It does not require labels and utilizes data from both domains, making it feasible for UDA. Our work is motivated by the recent line of work on transferability of robustness~\cite{goldblum2019adversarially, chan2020thinks}, and the observation that adversarially trained models learn ``fundamentally different'' features from their normally trained counterparts~\cite{tsipras2018robustness, ilyas2019adversarial, santurkar2019image}. The first line of work has demonstrated the transferability of adversarial robustness from a pre-trained robust model. The authors in~\cite{hendrycks2019pretraining, shafahi2019adversarially} show that adversarially pre-trained models can improve robustness in transfer learning; \cite{goldblum2019adversarially} shows that adversarial robustness can be distilled by matching softened labels produced by robust pre-trained models; \cite{chan2020thinks} shows that robustness can be distilled by matching input gradients of robust models to those of a non-robust model. These works focus on cutting the computational cost of adversarial training for single-domain classification and require labeled data.
\noindent Our proposed method, Robust Feature Adaptation (RFA), embeds the adaptation of robustness in the domain adaptation training by leveraging the feature space of robust pre-trained models. RFA uses ImageNet adversarially pre-trained models to extract robust features for inputs of source and target domains. It then instills robustness in UDA's feature extractor by minimizing its discrepancy with robust features. RFA enables the model to learn both domain invariant and robust features.
\noindent Unlike previous works on transferability, our method does not require labeled data as it only uses intermediate features of the robust models and a label-free distance measure between the feature spaces of the two models. Similarly, RFA does not require any adversarial intervention during the domain adaptation training as it does not generate adversarial examples. These characteristics make it possible to harness both the labeled source and unlabeled target domains. Moreover, RFA is a plug-in method that can be used with any UDA method to enhance its robustness. It only requires adversarially pre-trained models, just as existing UDA methods require normally pre-trained models. Our experiments show that RFA can equip UDA models with high adversarial robustness while keeping good generalization ability.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose a plug-in method that aligns the features of a UDA model with the robust features of multiple adversarially pre-trained ImageNet models. This way, it instills robustness in UDA models without adversarial intervention or label requirement.
\item To the best of our knowledge, we are the first to show that the adversarial robustness for a target task can be distilled from intermediate representations of robust models adversarially trained on a different task without any fine-tuning.
\item Comprehensive experimental results show that our method consistently improves the robustness of various UDA algorithms on widely-used benchmark datasets. For instance, it improves the adversarial robustness from 0\% to 43.49\% while maintaining the clean accuracy for CDAN as the UDA algorithm on challenging simulation-to-real adaptation task of the VisDA-2017 dataset.
\end{itemize}
\begin{table*}
\centering
\begin{adjustbox}{max width=1\textwidth, center}
\begin{threeparttable}
\begin{tabular}{lccccccccccccc}
\toprule
\multirow{1}{*}{{Dataset}} & \multicolumn{1}{r}{{Robust PT}} & \multicolumn{2}{c}{{Source-only}} & \multicolumn{2}{c}{DANN~\cite{ganin2016domain}} & \multicolumn{2}{c}{DAN~\cite{long2015learning}} & \multicolumn{2}{c}{CDAN~\cite{long2018conditional} } & \multicolumn{2}{c}{JAN~\cite{long2017deep}} & \multicolumn{2}{c}{MDD~\cite{zhang2019bridging} } \\
\hline
& &Acc. & Rob. &Acc. & Rob.&Acc. & Rob.&Acc. & Rob.&Acc. & Rob. &Acc. & Rob.\\
\hline
\multirow{2}{*}{VisDA-17} & $\times$ & 43.05 & 0 & 71.34 & 0 & 61.79 & 0.01 & 74.23 & 0 & 63.70 & 0 & 72.20 & 4.03 \\
& $\checkmark$ & 25.67 & 6.64 & 65.79 & 38.21 & 42.24 & 22.11 & 68.00 & 41.67 & 55.08 & 32.15 & 67.72 & 39.50 \\
\hline
\multirow{2}{*}{Office-31} & $\times$ & 77.80 & 0.02 & 85.79 & 0 & 81.72 & 0 & 86.90 & 0 & 85.68 & 0 & 88.31 & 1.70 \\
& $\checkmark$ & 69.51 & {41.11} & 77.30 & 62.38 & 73.71 & 42.29 & 79.67 & 65.53 & 75.12 & 60.24 & 80.72 & 67.54 \\
\hline
\multirow{2}{*}{Office-Home} & $\times$ & 58.29 & 0.06 & 63.39 & 0.05 & 59.64 & 0.23 & 67.03 & 0.04 & 64.61 & 0.07 & 67.91 & 5.81 \\
& $\checkmark$ & 53.89 & {31.46} & 58.10 & {37.25} & 55.18 & {24.21} & 63.04 & {43.81} & 60.74 & 33.09 & 63.30 & 43.42 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item \noindent $\times$: Normally Pre-Trained Model, $\checkmark$: Adversarially Pre-Trained Model, PT: Pre-Training.
\end{tablenotes}
\end{threeparttable}
\end{adjustbox}
\caption{Can Robust Pre-Training (PT) instill robustness in unsupervised domain adaptation setting? Comparison between normally and adversarially pre-trained models for clean accuracy and adversarial robustness (\%) with six UDA algorithms. Adversarial pre-training improves adversarial robustness but also causes a drop in clean accuracy.
}
\label{tab:pretraining_robustness}
\end{table*}
\section{Background}
\subsection{Related Work}
\noindent \textbf{Unsupervised Domain Adaptation. } Most of the unsupervised domain adaptation methods are motivated by the theoretical results in~\cite{ben2006analysis, ben2010theory}. These results suggested learning representations invariant across domains. In deep learning, this is often achieved by min-max training where a pre-trained deep neural network is fine-tuned such that not only does it minimize the loss on labeled data from the source domain but also fool a discriminator. This discriminator is simultaneously trained to distinguish between source and target domains~\cite{ganin2016domain}. In recent works, it has also been shown that large models, pre-trained on large-scale datasets such as ImageNet, improve unsupervised domain adaptation~\cite{long2015learning, ganin2016domain, he2016deep, long2017deep, long2018conditional, zhang2019bridging, liu2019transferable, tang2020unsupervised, gu2020spherical, jiang2020implicit}. Several unsupervised domain adaptation algorithms have been proposed that leverage pre-trained models~\cite {long2015learning, long2018conditional, zhang2019bridging, liu2019transferable}. However, these works do not consider robustness. Our work is complementary to these works as it improves the robustness of these methods.
\noindent \textbf{Adversarial Training and Robust Features.} Adversarial attacks are considered a security risk~\cite{biggio2013evasion, szegedy2013intriguing, goodfellow2014explaining, carlini2017towards}. Numerous methods have been proposed to defend against such examples~\cite{guo2017countering,tramer2017ensemble, madry2017towards, buckman2018thermometer, liao2018defense, ross2018improving, shafahi2019adversarial, tramer2019adversarial, wong2020fast, awais2020towards}. Adversarial training -- the most effective defense mechanism -- is devised to defend against $\ell_p$ bounded adversarial perturbations~\cite{goodfellow2014explaining, madry2017towards} in the inputs. However, adversarial training requires labels and is therefore not suitable for UDA training. In another direction, recent work has also shown that adversarially trained models learn ``fundamentally different'' representations~\cite{tsipras2018robustness, ilyas2019adversarial, engstrom2019adversarial, zhu2021towards}. Our work is motivated by this observation, and we propose an algorithm to leverage these robust features.
\noindent \textbf{Knowledge and Robustness Transfer.}
The main purpose of knowledge distillation is to decrease the size of a large model. It works by distilling the knowledge of a big pre-trained teacher model to a compact \textit{randomly initialized} student model for the same dataset~\cite{hinton2015distilling}. Many different settings have been explored to achieve this objective \cite{romero2014fitnets, yim2017gift, zagoruyko2016paying, tung2019similarity}. Our work is different from these works as we only want to adapt robustness from the teacher, without labels, while also learning domain-invariant features that perform well on both domains.
\noindent Our work is motivated by~\cite{goldblum2019adversarially, chan2020thinks, hendrycks2019pretraining, shafahi2019adversarially}, which showed the transferability of robustness. However, these methods are primarily motivated by decreasing the computational cost of adversarial training, and they require labels. In~\cite{goldblum2019adversarially}, the authors showed that robustness can be distilled from a large pre-trained model (e.g., ResNet) to a compact model (e.g., MobileNet) by utilizing soft class scores produced by the teacher model. Compared to the work in~\cite{goldblum2019adversarially}, our method distills robustness from the intermediate representations only. Furthermore, the distillation is performed from teachers trained on one task (i.e., supervised classification) to a student trained on another task (i.e., unsupervised domain adaptation), which has not been explored previously. In~\cite{chan2020thinks}, the distillation is performed by matching the gradients of the teacher and student. This method requires fine-tuning on target tasks, back-propagation to get gradients, and discriminator-based learning. Compared to~\cite{chan2020thinks}, our proposed method does not require any fine-tuning, and it adapts robust features from pre-trained models without requiring any extra back-propagation. Moreover, both of these distillation methods require labels and are designed for single-domain training.
\subsection{Preliminaries}
\label{sec:preliminaries}
\noindent Unsupervised Domain Adaptation aims to improve generalization on the target domain by reducing the domain discrepancy between source and target. Formally, we are given labeled data in the source domain $D_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s} \sim P$ and unlabeled data in the target domain $D_t = \{x_j^t\}_{j=1}^{n_t} \sim Q$, where $P\neq Q$. Most unsupervised domain adaptation methods fine-tune a pre-trained backbone model $f(x; \theta)$ and train a classifier $C(f(x; \theta); \psi)$ on top of it. The training is done in such a way that it reduces the error on the labeled source domain while learning features that are invariant across the source and target domains.
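To make this setup concrete, here is a minimal numpy sketch of the generic objective. The function names (\texttt{uda\_objective}, \texttt{softmax\_xent}) are ours, and the simple mean-feature (linear-MMD) discrepancy is an illustrative stand-in for whichever alignment criterion a particular UDA algorithm actually uses (adversarial, kernel-based, etc.).

```python
import numpy as np

def softmax_xent(logits, y):
    """Mean softmax cross-entropy for integer labels y."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def uda_objective(f, C, xs, ys, xt, trade_off=1.0):
    """Schematic UDA objective: supervised loss on the labeled source
    batch plus a discrepancy penalty pulling source and target features
    together.  The mean-feature distance below is only a stand-in for
    the alignment criterion of a concrete method."""
    fs, ft = f(xs), f(xt)                       # backbone features
    cls = softmax_xent(C(fs), ys)               # source classification loss
    disc = float(np.sum((fs.mean(axis=0) - ft.mean(axis=0)) ** 2))
    return cls + trade_off * disc
```

In practice $f$ and $C$ would be a deep backbone and classifier head trained jointly by gradient descent on this combined objective.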
\noindent Adversarial examples~\cite{szegedy2013intriguing, goodfellow2014explaining} are bounded and imperceptible perturbations in the input images that change the normal behavior of neural networks. Thus, the adversarial robustness of a model is its invariance to such small $\ell_p$ bounded perturbation in the input. To achieve this robustness, adversarial examples are created by maximizing the loss, and then it is minimized to train the model~\cite{madry2017towards}:
\[
\min_{\theta, \psi} \mathbb{E}_{(x, y) \sim D} \bigg[\max_{||\delta||_p \leq \epsilon} \mathcal{L}(x+\delta, y; \theta, \psi) \bigg],
\]
where $\epsilon$ is the perturbation budget that governs the adversarial robustness of the model. The model is trained to be robust in $\ell_p$-norm ball of radius $\epsilon$. Increasing $\epsilon$ means the model is stable for a larger radius. However, this framework is not appropriate for UDA as this requires labels and assumes data from a single domain.
\noindent Following \cite{madry2017towards}, we define the \textbf{adversarial robustness} as the accuracy of target dataset ($D_t$) perturbed with a perturbation budget of $\epsilon$ in $\ell_{\infty}$-norm ball. To find the adversarial example $x_{adv}$, we use Projected Gradient Descent (PGD) with 20 iterations~\cite{madry2017towards}. \textit{We have used terms robustness and adversarial robustness interchangeably.}
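For illustration, the PGD ascent-and-project loop can be sketched on a toy logistic model with analytic gradients (a hedged sketch: the actual evaluation applies the same loop to a deep network via back-propagation; the names \texttt{pgd\_linf} and \texttt{bce\_loss} are ours).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy of the toy model p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def pgd_linf(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """PGD attack inside the l_inf ball of radius eps around x:
    repeated signed gradient-ascent steps on the loss, each followed
    by projection back onto the ball."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                        # dL/dx for cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto l_inf ball
    return x_adv
```

The returned $x_{adv}$ stays within the perturbation budget while (approximately) maximizing the loss, which is exactly the inner maximization of the adversarial-training objective above.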
\section{Pre-Training and Robustness}
\label{sec:pretraining}
\noindent We start with a simple question: can we instill robustness in unsupervised domain adaptation by replacing the normally pre-trained feature extractor with a robust counterpart?
\noindent To answer this question, we replaced the normal backbone model with an adversarially trained one. We call this setup \textbf{Robust Pre-Training} or Robust PT. To demonstrate the effect of robust pre-training, we conducted a set of experiments with six UDA methods and three common datasets, i.e., Office-31~\cite{saenko2010adapting}, Office-Home~\cite{venkateswara2017Deep} and VisDA-2017~\cite{peng2017visda}. We employed a ResNet-50~\cite{he2016deep} adversarially trained with different perturbation budgets as defined in Section~\ref{sec:preliminaries}. Unless stated otherwise, robustness is reported with PGD-20 and a perturbation budget of $\epsilon=3$. For comparison, we use the default settings of all the hyper-parameters and report the average results over three independent runs. Here, we report only the best results, averaged over all tasks of each dataset. For detailed results, please refer to the supplementary material.
\noindent It is reasonable to expect that adversarial pre-training will not increase robustness for unsupervised domain adaptation. Previous work has shown that the transferability of robustness is due to the robust feature representations learned by the pre-trained models. Robustness is only preserved if we do not update the backbone~\cite{hendrycks2019pretraining, shafahi2019adversarially}. Specifically, to maintain the robustness, only an affine layer is trained on top of the fixed feature extractor with the help of the labeled data. However, we fine-tuned the backbone model to be accurate in the source domain and invariant for the source and target domains.
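The fixed-feature protocol mentioned above can be illustrated as follows (a hedged numpy sketch; \texttt{fit\_affine\_head} is a hypothetical name, and plain gradient descent stands in for however the affine layer is actually fitted in the cited works).

```python
import numpy as np

def fit_affine_head(feats, labels, n_classes, lr=0.5, epochs=500):
    """Fixed-feature transfer: the (robust) backbone is frozen, and only
    an affine head W, b is trained on its extracted features with labeled
    data -- the regime in which robustness was reported to be preserved."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    n = len(labels)
    for _ in range(epochs):
        z = feats @ W + b
        z -= z.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(n), labels] -= 1.0   # softmax cross-entropy gradient
        p /= n
        W -= lr * feats.T @ p            # update only the affine head
        b -= lr * p.sum(axis=0)
    return W, b
```

In contrast, UDA training must fine-tune the backbone itself, which is why robustness is not automatically preserved in our setting.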
\noindent The best robustness results averaged over all tasks in each dataset are shown in Table~\ref{tab:pretraining_robustness}. We find that an adversarially pre-trained backbone can improve the robustness under UDA settings. For example, robustness for CDAN \cite{long2018conditional} improves from 0\% to 41.67\%, with around a 6\% decrease in clean accuracy on the VisDA-2017 dataset. For the DAN algorithm, the improvement in robustness is 0\% to 22.11\%, at the cost of a nearly 20\% drop in clean accuracy. Similar improvements in robustness are also visible in the experiments involving the Office-31 and Office-Home datasets, as shown in Table~\ref{tab:pretraining_robustness}.
\noindent However, adversarially pre-trained backbone decreases the generalization ability of models for the UDA setting. The decrease in accuracy can go as high as 20\%. We hypothesize that robust pre-training is not the most efficient way of leveraging robust features of the backbone. In the next section, we design an algorithm to utilize these features more efficiently.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/student_model}
\caption{The clean accuracy of weak adversarially pre-trained (adversarial pre-training with small $\epsilon$) models on VisDA-2017 dataset. }
\label{fig:student_model}
\vspace{-10pt}
\end{figure}
\section{Robust Feature Adaptation}
\noindent In this section, we introduce our method and its motivation. The goal of Robust Feature Adaptation (RFA) is to improve the adversarial robustness of unsupervised domain adaptation (UDA) algorithms without causing a significant drop in accuracy. Based on our experiments in the previous section, we hypothesized that the direct use of robust pre-trained models as the backbone is not an efficient way to instill robustness in UDA training. These pre-trained models have significantly lower accuracy to begin with~\cite{robustness}. This low pre-training accuracy makes it hard for UDA training to reach good generalization on the task. Our hypothesis is based on previous observations~\cite{kornblith2019better} that have shown a direct relationship between the accuracy of a pre-trained model and its final performance on a given task.
\noindent In our method, we propose to adapt robust features instead of directly using robust models as the backbone. The main idea of the proposed method is to align the features of the UDA backbone model with the robust features provided by multiple adversarially pre-trained models. This alignment is performed jointly with the domain adaptation training that learns domain-invariant features.
\noindent Each part of our framework is based on insights from previous works and on detailed experimental investigation. In this section, we describe each component of our proposed algorithm along with its motivation. The empirical comparisons supporting our method are given in Section~\ref{sec:design_principles}. An overview of the proposed method is illustrated in Figure~\ref{fig:block_rfa}.
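Schematically, the joint training described above can be written as minimizing a combined objective (the trade-off weight $\lambda$ below is our illustrative notation, not a quantity defined so far):
\begin{equation}
\min_{\theta, \psi} \; \mathcal{L}_{\mathrm{UDA}}(\theta, \psi; D_s, D_t) \;+\; \lambda \sum_{l} \mathcal{L}_{\mathrm{RFA}}^{l}(\theta; D_s, D_t),
\end{equation}
where $\mathcal{L}_{\mathrm{UDA}}$ is the loss of the chosen domain adaptation algorithm and $\mathcal{L}_{\mathrm{RFA}}^{l}$ penalizes the discrepancy between the backbone's activations and those of the robust teachers at block $l$.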
\subsection{Feature Extractor for Domain Adaptation}
\noindent As described earlier, existing UDA algorithms fine-tune normally pre-trained ImageNet models. However, adversarially pre-trained models learn `fundamentally different features' compared to their normally pre-trained counterparts~\cite{tsipras2018robustness, engstrom2019adversarial, ilyas2019adversarial}. This difference can cause inconsistency between the features of the student and teacher models, which may make optimization difficult. Hence, we propose to use a weak adversarially pre-trained model (a model pre-trained with a small perturbation budget) as the backbone model.
\noindent As shown in Figure~\ref{fig:student_model}, these robust models do not hurt clean accuracy significantly but can solve the feature inconsistency problem. An experimental comparison is shown in Section~\ref{sec:design_principles}.
\begin{table}
\centering
\begin{adjustbox}{max width=0.5\textwidth, center}
\begin{tabular}{llcc}
\toprule
{Dataset} & {Method} & {Accuracy} & {Robustness} \\
\toprule
\multirow{3}{*}{Office-31} & Baseline & 88.31 & 1.70 \\
& Robust PT & 80.72 & 67.54 \\
& RFA & 84.21 & 74.31 \\
\hline
\multirow{3}{*}{VisDA-2017} & Baseline & 72.20 & 4.03 \\
& Robust PT & 67.72 & 39.50 \\
& RFA & 72.90 & 47.66 \\
\hline
\multirow{3}{*}{Office-Home} & Baseline & 67.91 & 5.81 \\
& Robust PT & 63.30 & 43.42 \\
& RFA & 65.37 & 51.13 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Comparison of robustness and clean accuracy for RFA with Robust Pre-Training and the baseline. RFA improves robustness compared to Robust Pre-Training while maintaining good generalization.}
\label{tab:main_results_datasets}
\end{table}
\begin{table}
\centering
\begin{adjustbox}{max width=1\linewidth, center}
\begin{tabular}{llll}
\toprule
\multicolumn{1}{c}{UDA Method} & Baseline & \multicolumn{1}{c}{Robust PT} & \multicolumn{1}{c}{RFA} \\
\midrule
Source Only & 43.05 / 0 & 25.67 / 6.64 & 44.65 / 11.10 \\
DANN & 71.34 / 0 & 65.79 / 38.21 & 65.32 / 34.11 \\
DAN & 61.79 / 0 & 42.24 / 22.11 & 55.70 / 21.59 \\
CDAN & 74.23 / 0 & 68.00 / 41.67 & 72.03 / 43.49 \\
JAN & 63.70 / 0 & 55.08 / 32.15 & 62.95 / 32.81 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Comparison of Robust Pre-Training and RFA for five UDA algorithms with the VisDA-2017 dataset. RFA significantly improves robustness while keeping good clean accuracy.}
\label{tab:rfa_visda_robustness}
\end{table}
\begin{table*}
\centering
\begin{tabular}{ccc}
\begin{adjustbox}{width=0.2\linewidth}
\begin{tabular}{lll}
\toprule
Student & Acc. & Rob. \\
\toprule
Baseline & 72.20 & 4.03 \\
Normal & 71.22 & 7.63 \\
Adv. & 72.71 & 40.61 \\
\bottomrule
\end{tabular}
\end{adjustbox}
&
\begin{adjustbox}{width=0.4\linewidth}
\begin{tabular}{lccccc}
\toprule
Loss & DANN & CDAN & MDD \\
\toprule
L1 & 45.02 / 9.58 & 55.16 / 13.53 & 54.52 / 18.89 \\
L2 & 54.28 / 1.45 & 58.16 / 1.76 & 64.20 / 8.29 \\
SP & 65.32 / 34.11 & 72.03 / 43.49 & 72.90 / 47.66 \\
\bottomrule
\end{tabular}
\end{adjustbox}
&
\begin{adjustbox}{width=0.33\linewidth}
\begin{tabular}{lll}
\toprule
Method & \multicolumn{1}{c}{RN-18} & \multicolumn{1}{c}{WRN-50-2} \\
\toprule
Baseline & 69.61 / 0.15 & 73.36 / 5.47 \\
Robust PT & 64.44 / 24.40 & 71.20 / 37.63 \\
Ours (RFA) & 65.05 / 36.46 & 74.98 / 50.47 \\
\bottomrule
\end{tabular}
\end{adjustbox} \\
(a)&(b)&(c)\\
\end{tabular}
\caption{(a) Comparison of normally and adversarially pre-trained students for the accuracy and robustness of our algorithm. (b) Comparison of pairwise losses vs. the similarity-preserving loss for robustness. (c) Comparison of accuracy and robustness (\%) for the MDD Baseline, Robust PT, and RFA with different neural network architectures. RFA consistently improves robustness for different architectures. Here RN denotes ResNet and WRN denotes WideResNet. All of these experiments are conducted on the VisDA-2017 dataset.}
\label{tab:ablation_studies_set2}
\end{table*}
\subsection{Joint Training for Adaptation of Robust and Domain Invariant Features}
\noindent Our robust feature adaptation method aims to fine-tune the UDA feature extractor in such a way that it adapts robust features from adversarially trained models along with domain-invariant features from UDA training.
\noindent In knowledge distillation, we initialize the student with random weights and force the student to mimic the feature space of the teacher by minimizing the pair-wise distance between features and/or softened class scores.
Our UDA feature extractor, on the other hand, is also pre-trained and has already learned a set of features. This means that the student and the teacher may have learned features in different ways, or the order of the learned feature maps may differ.
Furthermore, \textit{since the teacher is not trained directly on the target dataset, it cannot provide the softened class scores.} This is another reason not to minimize the pair-wise distance directly, as the teacher is trained on a different dataset. In conclusion, we only use the feature supervision of the teacher to align the student's features with it and thereby adapt robustness.
\noindent To align the features of the student with those of the robust teacher, we use the similarity preserving loss, which matches the similarity of activations between robust and non-robust features~\cite{tung2019similarity}. The main idea of this loss is to align the student's features in such a way that two inputs producing similar activations in the feature space of the teacher model should also produce similar activations in the feature space of the student model. Specifically, given a mini-batch of training data, let $Q_{T}^{l} \in \mathbb{R}^{b \times d}$ and $Q_{S}^{l} \in \mathbb{R}^{b \times d}$ denote the activations of the $l$-th layer from the teacher and student models, respectively, where $b$ is the batch size and $d$ is the dimension of the activations after reshaping. The similarity matrices of the $l$-th layer from the teacher and student models are defined as $G_{T}^{l} = Q_{T}^{l} \cdot {Q_{T}^{l}}^\intercal / ||Q_{T}^{l} \cdot {Q_{T}^{l}}^\intercal||_2$ and $G_{S}^{l} = Q_{S}^{l} \cdot {Q_{S}^{l}}^\intercal / ||Q_{S}^{l} \cdot {Q_{S}^{l}}^\intercal||_2$, respectively, where $||\cdot||_2$ is a row-wise $\ell_2$-norm. We then define the robust feature adaptation loss of the $l$-th layer as
\begin{equation}\label{eq:1}
\mathcal{L}_{RFA}^l = \dfrac{1}{b^2} ||G_{T}^{l} - G_{S}^{l}||_F^2,
\end{equation}
where $||\cdot||_F$ is the Frobenius norm.
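As a concrete illustration of the two formulas above, the following minimal pure-Python sketch computes the row-normalized similarity matrices and the per-layer loss on list-of-lists activations (illustrative only; an actual implementation would operate on framework tensors and batched layers):

```python
import math

def similarity_matrix(Q):
    """G = Q Q^T with row-wise l2 normalization, for a batch Q of b
    d-dimensional activation vectors (list of lists)."""
    b = len(Q)
    # Gram matrix over the batch: entry (i, j) is <Q_i, Q_j>
    G = [[sum(qi * qj for qi, qj in zip(Q[i], Q[j])) for j in range(b)]
         for i in range(b)]
    out = []
    for row in G:
        norm = math.sqrt(sum(v * v for v in row)) or 1.0  # guard zero rows
        out.append([v / norm for v in row])
    return out

def rfa_loss(Q_T, Q_S):
    """Similarity-preserving loss: (1/b^2) * ||G_T - G_S||_F^2."""
    G_T, G_S = similarity_matrix(Q_T), similarity_matrix(Q_S)
    b = len(Q_T)
    return sum((t - s) ** 2
               for rt, rs in zip(G_T, G_S)
               for t, s in zip(rt, rs)) / (b * b)
```

Note that the loss compares only the $b \times b$ batch-similarity structure, so the teacher and student activation dimensions need not match.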
\noindent We use the sum of robust feature adaptation losses of intermediate layers:
\begin{equation}\label{equ:rfa}
\mathcal{L}_{RFA} = \sum_{l=1}^{L} \mathcal{L}_{RFA}^l,
\end{equation}
where $L$ is the number of intermediate layers. The joint training loss is then defined as
\begin{equation}\label{equ:joint_loss}
\mathcal{L} = \mathcal{L}_{C} + \mathcal{L}_{DA} + \alpha \mathcal{L}_{RFA},
\end{equation}
where $\mathcal{L}_{C}$ is the classification loss on the source domain, $\mathcal{L}_{DA}$ is the loss term for domain adaptation, and $\alpha$ is a hyper-parameter that balances domain adaptation and robust feature adaptation. Note that our proposed method can be applied to different UDA algorithms by using the corresponding domain adaptation method with loss term $\mathcal{L}_{DA}$.
\subsection{Adapting Diverse Robust Features}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/activations_max_OfficeHome_Ar_spoon_final5.png}
\caption{Maximally activated neurons for an image from the Office-Home dataset. \textit{The first row shows activations for a normally pre-trained model} and the other rows show activations for robust pre-trained models trained with different perturbation budgets ($\epsilon$). Highlighted regions can be interpreted as the discriminative parts of the input that activate the neurons the most. Note that different models have learned different discriminative features.}
\label{fig:activatons}
\end{figure}
\noindent Figure~\ref{fig:activatons} shows the diversity of discriminative features learned by the same model trained with different perturbation budgets. More details are given in Section~\ref{sec:design_principles}. To leverage these diverse robust features, we propose to supervise the student with multiple teachers. To reduce the computing cost, we randomly choose one teacher at each training iteration. This means that we can guide the student model with the diversity of multiple teachers at the same computing cost as using one. The detailed procedure for our method is summarized in Algorithm \ref{algo1}.
\begin{algorithm}
\caption{RFA: Robust Feature Adaptation for UDA}
\label{algo1}
\begin{algorithmic}[1]
\REQUIRE Multiple robust teachers $\{f(\cdot\ ;\theta^m_T)\}_{m=1}^{M}$, training datasets $D_s, D_t$, batch size $b$, learning rate $\eta$, hyperparameter $\alpha$, iteration number $K$, UDA algorithm.
\ENSURE $\theta_S$, $\psi_S$.
\STATE Initialize $\theta_S$, $\psi_S$;
\FOR {$0 \leq k \leq K-1$}
\STATE Sample a random mini-batch of training examples $\{((x^s_i, y^s_i), (x^t_i))\}_{i=1}^b$ with a batch size of $b$;
\STATE $x \gets \{(x^s_i, x^t_i)\}_{i=1}^b$;
\STATE Randomly sample a teacher $f(\cdot\ ;\theta^m_T)$;
\STATE Compute $Q^l_T$ and $Q^l_S$ with $f(x;\theta^m_T)$ and $f(x;\theta_S)$ respectively, for $l=1,2,\cdots, L$;
\STATE Compute $\mathcal{L}_{RFA}$ according to Eq.~\eqref{equ:rfa};
\STATE Compute $\mathcal{L}_{C}$ and $\mathcal{L}_{DA}$ with the UDA algorithm;
\STATE Compute $\mathcal{L}$ according to Eq.~\eqref{equ:joint_loss};
\STATE Update $(\theta_S, \psi_S) \gets (\theta_S, \psi_S) - \eta \cdot \nabla_{(\theta_S, \psi_S)}\mathcal{L}$;
\ENDFOR
\end{algorithmic}
\end{algorithm}
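The core of Algorithm~\ref{algo1} can be sketched as a short training loop (an illustrative Python sketch, not the actual implementation; \texttt{grad\_fn} is a hypothetical callable standing in for backpropagation through the joint loss of Eq.~\eqref{equ:joint_loss}):

```python
import random

def rfa_train(theta, teachers, batches, *, alpha, lr, grad_fn, rng=None):
    """Sketch of Algorithm 1. At each iteration a single randomly chosen
    teacher supervises the student, so the per-step supervision cost is
    independent of the number of teachers M.

    grad_fn(theta, teacher, batch, alpha) is a hypothetical callable that
    returns the gradient of L = L_C + L_DA + alpha * L_RFA w.r.t. theta.
    """
    rng = rng or random.Random(0)
    for batch in batches:
        teacher = rng.choice(teachers)           # one teacher per iteration
        g = grad_fn(theta, teacher, batch, alpha)
        theta = [t - lr * gi for t, gi in zip(theta, g)]  # SGD step
    return theta
```

On a toy quadratic objective this loop behaves as expected, which is enough to convey the structure; in practice `theta` is the set of student parameters $(\theta_S, \psi_S)$.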
\section{Experiments}
\subsection{Setup}
\noindent We conduct experiments on 19 different tasks derived from 3 mainstream unsupervised domain adaptation (UDA) datasets.
\textbf{Office-31}~\cite{saenko2010adapting} is a standard domain adaptation dataset with 6 tasks based on three domains: Amazon (\textbf{A}), Webcam (\textbf{W}) and DSLR (\textbf{D}). The dataset is imbalanced across domains, with 2,817 images in \textbf{A}, 795 images in \textbf{W} and 498 images in \textbf{D}. \textbf{Office-Home}~\cite{venkateswara2017Deep} is a more complex dataset than Office-31 and contains more images (15,500) for 12 adaptation tasks based on 4 more diverse domains: Artistic (\textbf{Ar}), Clip Art (\textbf{Cl}), Product (\textbf{Pr}), and Real World (\textbf{Rw}). \textbf{VisDA-2017}~\cite{peng2017visda} is a simulation-to-real dataset with two extremely different domains: a synthetic domain, in which images are collected from 3D rendering models, and a real-world domain. It is also a large-scale dataset, containing 280k images in the synthetic domain and 50k images in the real-world domain. Due to its extremely different domains and scale, it is one of the most challenging datasets in UDA.
\noindent Unless stated otherwise, we use ResNet-50~\cite{he2016deep} as our backbone model and MDD~\cite{zhang2019bridging} as the domain adaptation algorithm. We use this setup to show that our method can improve robustness without a significant drop in accuracy. To show that Robust Feature Adaptation (RFA) can work as a plug-in method, we conduct experiments with six UDA algorithms: Source Only (fine-tuning the model on source data only), DAN~\cite{long2015learning}, DANN~\cite{ganin2016domain}, JAN~\cite{long2017deep}, CDAN~\cite{long2018conditional}, and MDD~\cite{zhang2019bridging}. We follow the experimental protocol of~\cite{ganin2016domain, long2018conditional} commonly used in UDA and adopt the hyper-parameters used in~\cite{dalib}. We compare RFA with \textbf{UDA algorithm Baseline} (adopting a normally pre-trained ImageNet model) and \textbf{Robust PT} (UDA algorithm adopting an adversarially pre-trained ImageNet model). For a fair comparison, we use the same values of all hyper-parameters for the UDA algorithm Baseline, Robust PT, and RFA. The new hyper-parameter of our proposed method is $\alpha$. We choose it based on the magnitude of the domain adaptation loss. Specifically, we multiply the robust feature adaptation loss $\mathcal{L}_{RFA}$ by $1000$ to bring it to a magnitude equivalent to that of the domain adaptation loss. We report average results over three runs for all experiments.
\subsection{Results}
\noindent\textbf{On Improving Robustness.} To achieve better robustness, we choose four strong teachers, i.e., ImageNet ResNet-50 models, trained with different perturbation budgets. More specifically, we use perturbation budgets of $\epsilon \in \{3, 5\}$ with the $\ell_2$-norm and $\epsilon \in \{2, 4\}$ with the $\ell_{\infty}$-norm. To show the effectiveness of our method, we choose a backbone adversarially trained with $\epsilon=1$. For the bulk of our experiments, we use MDD as the domain adaptation algorithm.
\noindent The average results for Office-31, Office-Home, and VisDA-2017 are shown in Table~\ref{tab:main_results_datasets}. These results clearly show that our method can improve the robustness of the backbone model by adapting the robust features without a significant drop in clean accuracy. The improvement in robustness is due to the robust teachers, while the improvement in clean accuracy is due to the backbone model used in RFA. This model has higher accuracy than the backbone used in Robust Pre-Training. Thus, our method has a significant advantage over Robust PT, as it can use backbone models with higher clean accuracy while adapting robustness from any teacher.
\begin{table*}
\begin{adjustbox}{ width=2.1\columnwidth, center}
\centering
\begin{tabular}{lccccccccccccc}
\toprule
Method & Ar $\shortrightarrow$ Cl & Ar $\shortrightarrow$ Pr & Ar $\shortrightarrow$ Rw & Cl $\shortrightarrow$ Ar & Cl $\shortrightarrow$ Pr & Cl $\shortrightarrow$ Rw & Pr $\shortrightarrow$ Ar & Pr $\shortrightarrow$ Cl & Pr $\shortrightarrow$ Rw & Rw $\shortrightarrow$ Ar & Rw $\shortrightarrow$ Cl & Rw $\shortrightarrow$ Pr & Avg \\
\midrule
Baseline & 54.59 & 72.38 & 77.19 & 61.52 & 71.19 & 71.54 & 63.04 & 50.31 & 79.0 & 72.5 & 57.66 & 83.92 & 67.91 \\
Robust PT & 55.07 & 73.87 & 78.26 & 60.82 & 71.84 & 71.88 & 60.65 & 51.89 & 79.02 & \textbf{72.64} & \textbf{60.50} & 82.81 & 68.27 \\
Ours (RFA) & \textbf{55.65} & \textbf{77.13} & \textbf{80.69} & \textbf{64.43} & \textbf{74.81} & \textbf{75.54} & \textbf{63.99} & \textbf{53.07} & \textbf{80.59} & 71.80 & 58.41 & \textbf{84.31} & \textbf{70.03} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Classification accuracy (\%) for all the twelve tasks from Office-Home dataset based on ResNet-50. Our method improves clean accuracy of 10 out of 12 tasks as well as the average.}
\label{tab:rfa_officehome_clean}
\vspace{-10pt}
\end{table*}
\begin{table}
\centering
\begin{tabular}{cc}
\begin{adjustbox}{width=0.55\linewidth}
\begin{tabular}{llccccc}
\toprule
$\alpha$ & 100 & 500 & 1000 & 5000 \\
\midrule
Acc.& 71.61 & 73.62 & 72.90 & 70.31 \\
Rob. & 40.07 & 46.36 & 47.66 & 47.27 \\
\bottomrule
\end{tabular}
\end{adjustbox}
&
\begin{adjustbox}{width=0.4\linewidth}
\begin{tabular}{lcc}
\toprule
Teachers & Acc. & Rob. \\
\midrule
Single & 70.31 & 40.15 \\
Multiple & \textbf{73.45} & \textbf{40.87} \\
\bottomrule
\end{tabular}
\end{adjustbox}\\
(a) & (b)\\
\end{tabular}
\caption{\textbf{Ablation Studies.} (a) The effect of varying $\alpha$ on accuracy and robustness (\%) for RFA on VisDA-2017 dataset. (b) The effect of multiple teachers on accuracy and robustness (\%) on VisDA-2017 dataset.}
\label{tab:ablation_studies_set1}
\end{table}
\noindent \textbf{On RFA as a Plug-in Method.} A salient characteristic of our method is that it can complement existing or new domain adaptation algorithms that use ImageNet pre-trained models. To show this, we conduct experiments with six different UDA algorithms (Source Only, DAN, DANN, JAN, CDAN, and MDD) on the challenging and large-scale VisDA-2017 dataset. As shown in Table~\ref{tab:rfa_visda_robustness}, RFA improves robustness for all six UDA algorithms.
\begin{figure}
\centering
\includegraphics[ width=0.5\textwidth]{figs/avg_robustness}
\caption{Comparison of MDD Baseline, Robust PT (Pre-Training), and RFA for average robustness and accuracy (\%) on Office-Home and VisDA-2017. The $x$-axis shows the perturbation budget of the pre-trained model.}
\label{fig:rfa_robustness_avg}
\end{figure}
\section{Discussion and Analysis}
\subsection{Empirical Investigation of Design Principles}
\label{sec:design_principles}
\noindent \textbf{Choosing Student Model.}
One major insight of our framework is the use of weak adversarially pre-trained models (adversarially pre-trained models with a small perturbation budget $\epsilon$) as feature extractors. To see the effect of the weak adversarially pre-trained model, we compare it with a normally pre-trained student in Table~\ref{tab:ablation_studies_set2}(a). A normally pre-trained student can improve robustness, albeit not significantly. A weak adversarially pre-trained student, on the other hand, can improve robustness significantly.
\noindent To further see how the UDA feature extractor model should be pre-trained, we compare the robustness and accuracy of different feature extractor models with different pre-training perturbation levels in Figure~\ref{fig:rfa_robustness_avg}.
\noindent \textbf{Comparison of Pairwise and Non-Pairwise Losses.}
An important aspect of our algorithm is the loss function. We hypothesized that the similarity preserving loss, which preserves similarity between activations, is better suited than a pair-wise loss. This is because our student model is already trained; we only want to fine-tune it, which requires weak supervision. To illustrate this, we compare the robustness and clean accuracy of two pair-wise losses and the similarity preserving loss in Table~\ref{tab:ablation_studies_set2}(b).
\noindent\textbf{Effect of Multiple Teachers.} We hypothesized that the same model trained with different perturbation budgets can supervise student models with diverse features. In Figure~\ref{fig:activatons}, we show the maximally activated neurons (maximum value across channels) of four different residual blocks of the robust ResNet-50 model. The first row shows activations of residual blocks for a normally pre-trained model, and the other rows represent activations for robust ResNet-50 models trained with different values of $\epsilon$. The figure shows the diversity of the learned discriminative features.
\noindent To illustrate the effect of multiple teachers, we compare it with a single teacher in Table~\ref{tab:ablation_studies_set1}(b). Supervision from a single model is enough to distill robustness. However, the diversity of supervision from multiple teachers improves both accuracy and robustness.
\subsection{Ablation Studies}
\noindent\textbf{Sensitivity of Weight of Robust Feature Adaptation ($\alpha$).} We study the sensitivity of our method to the weight of robust feature adaptation term $\alpha$ on VisDA-2017. Table~\ref{tab:ablation_studies_set1}(a) demonstrates the clean accuracy and adversarial robustness by varying $\alpha \in \{0, 100, 500, 1000, 5000\}$. Increasing $\alpha$ decreases the clean accuracy while increasing the robustness. These experimental results show that $\alpha$ can control the trade-off between clean accuracy and adversarial robustness.
\begin{table}
\centering
\begin{adjustbox}{width=0.5\textwidth, center}
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{Method} &\multirow{2}{*}{Clean} & \multirow{2}{*}{FGSM} & \multicolumn{4}{c}{PGD-k}\\
\cline{4-7}
& & & 10& 20 & 50 & 100 \\
\midrule
Baseline & 72.20 & 41.15 & 11.82 & 4.03 & 3.24 & 3.06 \\
Robust PT & 71.95 & 63.23 & 39.54 & 28.21 & 25.55 & 24.69 \\
Ours & \textbf{73.45} & \textbf{67.87} & \textbf{42.25} & \textbf{40.87} & \textbf{40.28} & \textbf{40.11} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{The effect of an increasing number of iterations for PGD attack. Results of the proposed method are consistent, showing a successful convergence of PGD attacks.}
\label{tab:iterations}
\end{table}
\noindent\textbf{Effect of the number of PGD iterations on robustness.} To further show the transferability of robustness, we test our method with an increasing number of iterations for PGD attack (PGD-k). The robustness of our method is consistent as shown in Table~\ref{tab:iterations}.
\begin{table*}
\centering
\begin{tabular}{cc}
\begin{adjustbox}{width=0.51\linewidth}
\begin{tabular}{lccccccc}
\toprule
Method & A $\shortrightarrow$ W & D $\shortrightarrow$ W & W $\shortrightarrow$ D & A $\shortrightarrow$ D & D $\shortrightarrow$ A & W $\shortrightarrow$ A & Avg. \\
\midrule
Baseline & 91.40 & 98.74 & 100.00 & 92.17 & 73.06 & 74.47 & 88.31 \\
Robust PT & 91.78 & 99.12 & 100.00 & 92.77 & 73.85 & 74.11 & 88.60\\
Ours (RFA) & \textbf{92.80} & \textbf{99.21} & \textbf{100.00} & \textbf{93.04} & \textbf{78.00} & \textbf{77.74} & \textbf{90.15} \\
\bottomrule
\end{tabular}
\end{adjustbox}
&
\begin{adjustbox}{width=0.46\linewidth}
\begin{tabular}{lcccccc}
\toprule
Method & Source & DANN & DAN & CDAN & JAN & MDD \\
\midrule
Baseline & 43.05 & 71.34 & 61.79 & 74.23 & 63.70 & 72.20 \\
Robust PT & 47.20 & 72.81 & 62.56 & 75.85 & 63.02 & 75.64 \\
Ours (RFA) & \textbf{59.00} & \textbf{75.05} & \textbf{65.58} & \textbf{77.54} & \textbf{66.68} & \textbf{79.42}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\\
(a)&(b)\\
\end{tabular}
\caption{\textbf{Improved Clean Accuracy.} (a) Classification accuracy (\%) for all the six tasks from Office-31 dataset based on ResNet-50. (b) Comparison of classification accuracy (\%) for Baseline, Robust PT and RFA with six UDA algorithms on VisDA-2017 dataset. RFA consistently improves accuracy for all UDA algorithms.}
\label{tab:office31-clean_acc}
\end{table*}
\begin{table}
\centering
\begin{adjustbox}{width=1\linewidth, center}
\begin{tabular}{lcccc|c}
\toprule
Method & Art-Painting & Cartoon & Sketch & Photo & Average \\
\hline
\multirow{2}{*}{Baseline} & 77.93 & 80.29 & 78.90 & 94.55 & 82.92 \\
& \rob{0} & \rob{0.13} & \rob{2.24} & \rob{0.18} & \rob{0.64} \\
\hline
\multirow{2}{*}{Ours (RFA)} & 76.56 & 76.83 & 75.97 & 94.61 & 81.00 \\
& \rob{23.15} & \rob{51.58} & \rob{62.82} & \rob{40.00} & \rob{44.38} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Comparison of accuracy and \rob{robustness} (\%) for DecAug Baseline, Robust PT and RFA for all the four tasks from PACS based on ResNet-18.}
\label{tab:rfa_decaug_pacs_robustness}
\end{table}
\begin{table}
\centering
\begin{adjustbox}{width=0.5\textwidth, center}
\begin{threeparttable}
\begin{tabular}{lcccccccc}
\hline
Dataset & Rob. & Source & DANN & DAN & CDAN & JAN & MDD \\
& PT & Only & \cite{ganin2016domain} & \cite{long2015learning} & \cite{long2018conditional} & \cite{long2017deep} & \cite{zhang2019bridging} \\
\hline
VisDA &$\times$ & 43.05 & 71.34 & 61.79 & 74.23 & 63.70 & 72.20 \\
2017 &$\checkmark$ & \textbf{48.95} & \textbf{72.81} & \textbf{62.70} & \textbf{75.85} & \textbf{65.51} & \textbf{75.64} \\
\hline
Office& $\times$ & \textbf{77.80} & 85.79 & 81.72 & 86.90 & 85.68 & 88.31 \\
31 & $\checkmark$ & 77.66 & \textbf{86.06} & \textbf{82.08} & \textbf{88.05} & \textbf{86.05} & \textbf{88.60} \\
\hline
Office & $\times$ & 58.29 & 63.39 & 59.64 & 67.03 & 64.61 & 67.91 \\
Home & $\checkmark$ & \textbf{58.87} & \textbf{64.08} & \textbf{60.38} & \textbf{67.67} & \textbf{65.60} & \textbf{68.27} \\
\hline
\end{tabular}
\begin{tablenotes}
\item $\times$: Normally Pre-Trained Model, $\checkmark$: Adversarially Pre-Trained Model, Rob. PT: Robust Pre-Training.
\end{tablenotes}
\end{threeparttable}
\end{adjustbox}
\caption{Comparison between normally and adversarially pre-trained models on classification accuracy (\%) with different UDA algorithms. Adversarial pre-training improves classification accuracy for UDA.
}
\label{tab:pretraining_clean}
\vspace{-15pt}
\end{table}
\noindent\textbf{Improvement by RFA is consistent across architectures.}
In Table~\ref{tab:ablation_studies_set2}(c), we demonstrate that our proposed method can improve robustness using different architectures. RFA improves the robustness of WideResNet-50-2 from 5.47\% to 50.47\% and the robustness of ResNet-18 from 0.15\% to 36.46\%.
\subsection{Can RFA Improve Robustness for Domain Generalization?}
\noindent An important aspect of our method is that it is domain-agnostic and can be applied to tasks involving more than one domain. To illustrate this, we also conduct experiments for Domain Generalization (DG) with our method on the PACS~\cite{Li2017} dataset. DG methods~\cite{Li2017,carlucci2019domain,zhou2020learning,bai2020decaug} learn models from multiple domains such that they generalize well to unseen domains. The PACS dataset contains four domains with different image styles: art painting, cartoon, sketch, and photo. We follow the same leave-one-domain-out validation protocol as in~\cite{Li2017}: each time, we select three domains for training and the remaining domain for testing. We apply RFA to the SOTA DG method DecAug~\cite{bai2020decaug} and report results in Table~\ref{tab:rfa_decaug_pacs_robustness}. The results illustrate that our method can also significantly improve robustness while maintaining good clean accuracy in domain generalization.
\subsection{Can Adversarially Pre-Trained Models Improve Clean Accuracy?}
\noindent A recent work~\cite{salman2020adversarially} has shown that weak adversarially pre-trained models (AT with small $\epsilon \in [0.01,0.5]$) can also improve clean accuracy for target tasks in transfer learning, e.g., ImageNet to the Pets dataset. In this section, we explore this hypothesis for unsupervised domain adaptation (UDA). Specifically, we conduct experiments in two settings: using weak adversarially pre-trained models as feature extractors, and using them as teachers in our proposed algorithm.
\noindent First, we use a weak adversarially pre-trained model as a feature extractor while keeping everything else the same as in UDA training. We found that this simple setup can improve clean accuracy. The results are shown in Table~\ref{tab:pretraining_clean}.
\noindent To further see the effect of robust features, we use these weak adversarially trained models in our robust adaptation algorithm. The results on different tasks from Office-31 and Office-Home, and the average accuracy for different UDA algorithms on VisDA-2017, are shown in Tables~\ref{tab:office31-clean_acc}(a), \ref{tab:rfa_officehome_clean}, and \ref{tab:office31-clean_acc}(b), respectively. RFA outperforms both Baseline and Robust Pre-Training by significant margins. Our method achieves 90.15\% compared to 88.31\% for Baseline and 88.60\% for Robust Pre-Training on Office-31. Similarly, on the more complex Office-Home dataset, it achieves 70.03\% compared to 67.91\% for Baseline and 68.27\% for Robust PT. On the challenging VisDA-2017 dataset, we achieve even larger improvements. For instance, MDD with normally pre-trained ResNet-50 achieves an accuracy of 72.20\%, but our proposed algorithm achieves 79.42\% -- an absolute 7\% improvement.
\noindent It is noteworthy that our method significantly \textit{improves accuracy on hard tasks}, e.g., for Office-31, \textbf{D} $\to$ \textbf{A} (73.06\% to 78\% ) and \textbf{W} $\to$ \textbf{A} (74.47\% to 77.74\% ); for Office-Home, \textbf{Cl} $\to$ \textbf{Ar} (61.52\% to 64.43\%), \textbf{Cl} $\to$ \textbf{Pr} (71.19\% to 74.81\%) and \textbf{Cl} $\to$ \textbf{Rw} (71.54\% to 75.54\%); for VisDA-2017, simulation to real (72.20\% to 79.42\%). This highlights the importance of adaptation of robust features for UDA.
\section{Conclusion}
\noindent Existing interventions for adversarial robustness require labels and assume learning from a single domain. This hinders their application in unsupervised domain adaptation. To make unsupervised domain adaptation robust, we introduced a simple, unsupervised and domain-agnostic method that does not require adversarial examples during training. Our method is motivated by the transferability of robustness. It utilizes adversarially pre-trained models and adapts robustness from their internal representations. Our results show that it significantly improves the robustness for UDA.
\noindent Our work may be extended in two dimensions. One direction is the applicability of our work in other problems involving learning from multiple domains. In this work, we primarily focused on UDA and also briefly discussed domain generalization. However, many different scenarios require learning from a diverse set of domains, such as open compound domain adaptation~\cite{liu2020open} and single domain generalization~\cite{qiao2020learning}. It would be interesting to see the performance of our algorithm under these circumstances. Another direction is to leverage a zoo of diverse pre-trained models~\cite{xu2021nasoa, shu2021zoo}. Systematic selection of relevant teachers and adaptive aggregation of their knowledge can further improve performance without significant computation overhead.
\noindent \textbf{Acknowledgements.} Authors are thankful to the anonymous reviewers, and to Dr. Nauman, Faaiz, Teerath, Salman and Asim for their help and constructive feedback.
\FloatBarrier
{\small
\bibliographystyle{ieee_fullname}
\section{Some comments on referee I report}
End of page 4: the authors remark that the testing of exchangeability can be restricted to the comparison
between two permutations, $(1,2)$ and $(1,2, \ldots n)$ - as underlined in (i) above. The authors do not provide a
proof nor a reference to the fact that $(1,2)$ and $(1,2, \ldots n)$ are generators of the permutation group and,
more importantly, they do not highlight that the same technique had been already used in Harder \&
Stadtmüller (2017). This can be misleading in judging the originality of their contribution. From a
methodological point of view, it would be interesting to understand how the choice of generators affects the result of the testing.
\
\it Response. The fact that the group of permutations can be generated by two permutations such as $(1,2)$ and $(1,2, \ldots, n)$, among many others, is an elementary, well-known result for the permutation group. We do mention the paper by Harder \&
Stadtmüller (2017), and we compare with their results in the manuscript. However, we did not find it necessary to highlight that Harder \&
Stadtmüller (2017) have also considered this fact. That said, we have no problem adding a comment along this line. \rm
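This elementary fact can even be checked computationally. The following illustrative Python sketch (ours, not from the manuscript) closes $\{(1\,2),\,(1\,2\,\cdots\,n)\}$ under composition, which in a finite group yields the generated subgroup, and recovers all $n!$ permutations for $n=5$:

```python
def generated_group(gens, n):
    """Closure of a set of permutations of {0, ..., n-1} (each a tuple
    mapping i -> p[i]) under composition; in a finite group this
    closure is exactly the generated subgroup."""
    group = set(gens)
    frontier = set(gens)
    while frontier:
        new = set()
        for p in frontier:
            for g in gens:
                q = tuple(p[g[i]] for i in range(n))  # q = p o g
                if q not in group:
                    group.add(q)
                    new.add(q)
        frontier = new
    return group

n = 5
swap = (1, 0) + tuple(range(2, n))   # the transposition (1 2)
cycle = tuple(range(1, n)) + (0,)    # the n-cycle (1 2 ... n)
G = generated_group({swap, cycle}, n)  # all n! = 120 permutations of 5 elements
```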
\
3. End of page 5, point 1.: The expression “split the sample at random” is too vague and needs a stronger
probabilistic setup. Moreover, I believe that the sentence “providing two independent samples, which we
will take typically of equal sizes” cannot be true. If we force the two samples to have the same number of
observations, we are creating a dependence between them. In particular, they will be independent
conditionally on their sample size, which makes them exchangeable by de Finetti’s Theorem.
\
\it Response.
We do not agree that the sentence “providing two independent samples, which we will take typically of equal sizes” cannot be true. There are many ways to do this that yield two independent samples of nearly the same size. For instance, we can take an iid sample of Bernoulli(1/2) random variables, independent of the data, and assign each observation to one of the two samples according to the label $0,1$ obtained at each trial. \rm
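The Bernoulli labelling scheme just described can be sketched as follows (an illustrative Python sketch of ours; the seed is arbitrary and stands in for randomness independent of the data):

```python
import random

def random_split(sample, rng=None):
    """Split a sample into two subsamples: each observation is assigned
    according to an iid Bernoulli(1/2) label drawn independently of the
    data, so the two subsamples are independent given the labels."""
    rng = rng or random.Random(2024)
    s0, s1 = [], []
    for x in sample:
        (s1 if rng.random() < 0.5 else s0).append(x)
    return s0, s1
```

By the law of large numbers the two subsamples are of nearly equal size with high probability, which is all the construction requires.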
\
The simulations show that using more than one projection increases the power of the test - as already
underlined by Cuesta-Albertos et al. (2007) - and the comparison with the method by Harder \& Stadtmüller
(2017) is based on 50 projections. Is there a way to establish how many projections one should use?
Would the test give better results with respect to Harder \& Stadtmüller (2017) also with 1 projection?
What happens if one considers a smaller sample size, e.g., 100? As a minor comment, in the plots
the authors use to indicate the number of projections, whereas in the main text indicates the number of
generators.
\
\it Response. Besides considering a different problem, our work considerably improves how several projections are handled. Indeed, in this case we consider a valid bootstrap procedure to obtain the critical value of the test, obtaining much better power results. We have no results regarding how many projections one should use, which should depend on the problem at hand. The case of using only one projection was already considered for the Harder \& Stadtmüller (2017) proposals (see Figures 3 and 4). \rm
\
Section 2 has many missing explanations and definitions and should be carefully revised in order to be self-explanatory. For example,
\begin{itemize}
\item The definition of projection is missing.
\it Response. We refer to the classical orthogonal projection. If necessary we can add the definition. \rm
\item The characterization of in terms of characteristic functions has neither a proof nor a reference.
\it We can add a reference regarding this well-known fact if necessary.\rm
\item Is always measurable?
\it Yes, it is a cone. \rm
\item Maybe it is better to write strictly positive Lebesgue measure.
\item In what sense the Carleman condition is optimal?
\it Response. This is shown in Theorem 3 of the manuscript, with a detailed and non-trivial proof. \rm
\item At the beginning of page 3 is referred to as a cone but this was never underlined before nor proved. Moreover, there is a typo in “positive Lebesgue measure”.
\it Response. It follows easily from the definition of the set, but we can add it if necessary. \rm
\item The notation for the pushforward probability is not standard and has not been introduced. \it Response. It is just the composition, but if necessary we can also define it more explicitly. \rm
\end{itemize}
\section{Some comments on referee II report}
(ii) Also, do the assumptions of Theorem 4 imply that P is fully characterised by all its moments? If
this is the case, is Theorem 2 really needed for the proof of consistency? Is a projection version of
Cramer-Wold’s Theorem really crucial in the derivation of the test or one can derive the same just by
manipulation of, say, moment generating functions?
\
\it Response. No .... \rm
(iii) The authors suggest multiple times that the suggested methodology could be extended to infinite
dimensions (as opposed to what allowed by the “competitors”). However I am not sure that, for
permutations acting on $N$, it is sufficient to consider a finite number of elements. It looks to me as
though one should consider the set of all inversions of $\{1, n\}$ for every $n$, which is a countable set. Would
this not imply testing again a countably infinite sequence of transformations $\{h_j\}$? Maybe there is
some algebraic shortcut that I am not seeing but in this case the authors should clarify how this would
work in infinite dimensions.
\
\it Response. Regarding the infinite dimensional problem at the conclusions we mention:
As well as being effective, the methods developed in this article are quite flexible.
there are several variants of the Cram\'er--Wold theorem,
and these can be exploited to provide tests for exchangeability in other contexts.
For example, using Theorem 4.1 in Cuevas, Fraiman and Ransford (2007) or Theorem 5 in Cuevas and Fraiman (2009),
our results can be applied to the case of multivariate functional data,
where we have vectors taking values in a Hilbert or Banach space.
We can still employ the same algorithm as
using several random directions and finding the critical value by bootstrap,
except that, in the infinite-dimensional case, we should choose the random directions
(or the elements on the dual space) according to a non-degenerate gaussian measure.
\
We refer to the following problem:
The setup is that of multivariate functional data, i.e., we have a sample $$\{(X_{11}(t), \ldots, X_{1d}(t)), \ldots, (X_{n1}(t), \ldots, X_{nd}(t))\},$$ of vectors of trajectories of some stochastic process taking values, for instance, in a Hilbert space, and we want to test whether the distribution of these vectors is exchangeable.
\rm
\
p 4 Remark starting from l46. Why is it relevant to consider the set G? Would it not be sufficient to
say ‘Thought $\Sigma_d$ contains $d!$ permutations, it can be generated using etc.’, without any consideration
of subgroup?
\
\it Response. We believe not. We need that the set be a subgroup, since then if it contains a generator then it must coincide with the whole group of permutations. Otherwise it is not true. \rm
\
Sec 4 : I could not really appreciate the extent
to which this method outperforms competing methodologies. When resorting to bootstrap for non distribution-free models, you do use one-dimensional projection, but potentially a lot of them. Does
it make it still more efficient than competing methodologies? (Are the latter actually also necessarily
not distribution free?)
\
\it Response. Yes, using many directions make the method more powerful as shown in the graphics in Figures 1,2,3 and 4, when we vary the value of the number of directions, including the case $k=1$. When using more than one directions the test is no longer distribution free, but we adress the problem using a valid bootstrap procedure.
\end{document}
\section{Some comments on referee I report}
End of page 4: the authors remark that the testing of exchangeability can be restricted to the comparison
between two permutations, $(1,2)$ and $(1,2, \ldots n)$ - as underlined in (i) above. The authors do not provide a
proof nor a reference to the fact that $(1,2)$ and $(1,2, \ldots n)$ are generators of the permutation group and,
more importantly, they do not highlight that the same technique had been already used in Harder \&
Stadtmüller (2017). This can be misleading in judging the originality of their contribution. From a
methodological point of view, it would be interesting to understand how the choice of generators affects the result of the testing.
\
\it Response. The fact that the group of permutations can be generated by two permutations such as $(1,2)$ and $(1,2, \ldots n)$, among many others, is an elementary and well-known result for the permutation group. We do mention the paper by Harder \&
Stadtmüller (2017), and we compare with their results in the manuscript. However, we did not consider it necessary to highlight that Harder \&
Stadtmüller (2017) have also used this fact. Nevertheless, we have no problem adding a comment along these lines. \rm
\
3. End of page 5, point 1.: The expression “split the sample at random” is too vague and needs a stronger
probabilistic setup. Moreover, I believe that the sentence “providing two independent samples, which we
will take typically of equal sizes” cannot be true. If we force the two samples to have the same number of
observations, we are creating a dependence between them. In particular, they will be independent
conditionally on their sample size, which makes them exchangeable by de Finetti’s Theorem.
\
\it Response.
We do not agree that the sentence “providing two independent samples, which we will take typically of equal sizes” cannot be true. There are many ways to split the data that yield two independent samples of nearly the same size. For instance, we can take an i.i.d.\ sample of Bernoulli$(1/2)$ random variables, independent of the data, and assign each observation to one of the two subsamples according to the label $0$ or $1$ obtained at each trial. \rm
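For illustration, the Bernoulli splitting scheme just described can be sketched as follows (an informal sketch of ours, not part of the manuscript; the function name is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_split(sample):
    """Split a sample into two independent subsamples using
    i.i.d. Bernoulli(1/2) labels drawn independently of the data."""
    labels = rng.integers(0, 2, size=len(sample))      # Bernoulli(1/2) labels
    first = [x for x, b in zip(sample, labels) if b == 0]
    second = [x for x, b in zip(sample, labels) if b == 1]
    return first, second

data = list(range(100))
a, b = random_split(data)
# every observation lands in exactly one subsample; sizes are random but
# concentrate near n/2 each
assert len(a) + len(b) == 100
```

Since the labels are independent of the data, conditioning on the labels leaves the two subsamples independent, with random (nearly equal) sizes.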
\
The simulations show that using more than one projection increases the power of the test - as already
underlined by Cuesta-Albertos et al. (2007) - and the comparison with the method by Harder \& Stadtmüller
(2017) is based on 50 projections. Is there a way to establish how many projections one should use?
Would the test give better results with respect to Harder \& Stadtmüller (2017) also with 1 projection?
What happens if one considers a smaller sample size, e.g., 100? As a minor comment, in the plots
the authors use to indicate the number of projections, whereas in the main text indicates the number of
generators.
\
\it Response. Besides considering a different problem, there is a considerable improvement with respect to how to work with several projections. Indeed, in this case we consider a valid bootstrap procedure to obtain the critical value of the test, yielding much better power results. We have no results regarding how many projections one should use; this should depend on the problem at hand. The case of using only one projection was already considered for the Harder \& Stadtmüller (2017) proposals (see Figures 3 and 4). \rm
\
Section 2 has many missing explanations and definitions and should be carefully revised in order to be self-explanatory. For example,
\begin{itemize}
\item The definition of projection is missing.
\it Response. We refer to the classical orthogonal projection. If necessary we can add the definition. \rm
\item The characterization of in terms of characteristic functions does not have neither a proof nor a reference.
\it Response. We can add a reference regarding this well-known fact if necessary. \rm
\item Is always measurable?
\it Response. Yes, it is a cone. \rm
\item Maybe it is better to write strictly positive Lebesgue measure.
\item In what sense the Carleman condition is optimal?
\it Response. This is shown in Theorem 3 of the manuscript, with a detailed and non-trivial proof. \rm
\item At the beginning of page 3 is referred to as a cone but this was never underlined before nor proved. Moreover, there is a typo in “positive Lebesgue measure”.
\it Response. It follows easily from the definition of the set, but we can add it if necessary. \rm
\item The notation for the pushforward probability is not standard and has not been introduced. \it Response. It is just the composition, but if necessary we can also define it more explicitly. \rm
\end{itemize}
\section{Some comments on referee II report}
(ii) Also, do the assumptions of Theorem 4 imply that P is fully characterised by all its moments? If
this is the case, is Theorem 2 really needed for the proof of consistency? Is a projection version of
Cramer-Wold’s Theorem really crucial in the derivation of the test or one can derive the same just by
manipulation of, say, moment generating functions?
\
\it Response. No .... \rm
(iii) The authors suggest multiple times that the suggested methodology could be extended to infinite
dimensions (as opposed to what is allowed by the “competitors”). However I am not sure that, for
permutations acting on $N$, it is sufficient to consider a finite number of elements. It looks to me as
though one should consider the set of all inversions of $\{1, n\}$ for every $n$, which is a countable set. Would
this not imply testing again a countably infinite sequence of transformations $\{h_j\}$? Maybe there is
some algebraic shortcut that I am not seeing but in this case the authors should clarify how this would
work in infinite dimensions.
\
\it Response. Regarding the infinite-dimensional problem, in the conclusions we mention:
As well as being effective, the methods developed in this article are quite flexible.
There are several variants of the Cram\'er--Wold theorem,
and these can be exploited to provide tests for exchangeability in other contexts.
For example, using Theorem 4.1 in Cuevas, Fraiman and Ransford (2007) or Theorem 5 in Cuevas and Fraiman (2009),
our results can be applied to the case of multivariate functional data,
where we have vectors taking values in a Hilbert or Banach space.
We can still employ the same algorithm,
using several random directions and finding the critical value by bootstrap,
except that, in the infinite-dimensional case, we should choose the random directions
(or the elements of the dual space) according to a non-degenerate Gaussian measure.
\
We refer to the following problem:
The setup is that of multivariate functional data, i.e.\ we have a sample $$\{(X_{11}(t), \ldots, X_{1d}(t)), \ldots, (X_{n1}(t), \ldots, X_{nd}(t))\},$$ of vectors of trajectories of some stochastic process taking values, for instance, in a Hilbert space, and we want to test whether the distribution of these vectors is exchangeable.
\rm
\
p 4 Remark starting from l46. Why is it relevant to consider the set G? Would it not be sufficient to
say ‘Though $\Sigma_d$ contains $d!$ permutations, it can be generated using etc.’, without any consideration
of subgroup?
\
\it Response. We believe not. We need the set to be a subgroup: if a subgroup contains a set of generators, then it coincides with the whole group of permutations, whereas this need not hold for an arbitrary subset. \rm
\
Sec 4 : I could not really appreciate the extent
to which this method outperforms competing methodologies. When resorting to bootstrap for non distribution-free models, you do use one-dimensional projections, but potentially a lot of them. Does
it make it still more efficient than competing methodologies? (Are the latter actually also necessarily
not distribution free?)
\
\it Response. Yes, using many directions makes the method more powerful, as shown in the graphics in Figures 1, 2, 3 and 4, where we vary the number of directions, including the case $k=1$. When using more than one direction the test is no longer distribution-free, but we address the problem using a valid bootstrap procedure.
\end{document}
\section{Introduction}
Let $d\ge2$ and let $P$ be a Borel probability measure on ${\mathbb R}^d$.
The \emph{image} or \emph{push-forward} of $P$ under a linear map $T:{\mathbb R}^d\to{\mathbb R}^d$
is the Borel probability measure $PT^{-1}$ on ${\mathbb R}^d$ defined by
\[
PT^{-1}(B):=P(T^{-1}(B)).
\]
The measure $P$ is said to be \emph{$T$-invariant} if $PT^{-1}=P$.
If $G$ is a group of invertible linear self-maps of ${\mathbb R}^d$, then
$P$ is \emph{$G$-invariant} if it is $T$-invariant for each $T\in G$.
For example, if $G$ is the group of $d\times d$
permutation matrices, then $P$ is $G$-invariant if and only if
it is an exchangeable distribution. If $G$ is the group of signed permutation
matrices, then $P$ is $G$-invariant if and only if it is a sign-invariant exchangeable
distribution. (These terms will be defined in detail later in the article.)
Of course one can imagine many more examples.
The purpose of this paper is to develop a methodology for testing whether a given
probability measure $P$ on ${\mathbb R}^d$ is $G$-invariant for a given group~$G$.
We know of no previous work on this subject in this degree of generality,
though certain special cases, notably testing for exchangeability or sign-invariant exchangeability,
have been extensively treated in the literature. We shall provide detailed references when we come to discuss these special cases later in the paper.
A potential obstacle is the fact that $G$ may be quite large, possibly even infinite.
Another possible difficulty is that the dimension $d$ of the underlying space may be very large.
To circumvent these problems, we exploit two devices.
The first idea is a very simple one, namely that, to test for $G$-invariance, it suffices to
test for $T$-invariance as $T$ runs through a set of generators for~$G$. The point is that
$G$ may be generated by a very small set of $T$, even though $G$ itself is large or even infinite.
This idea is explored in \S\ref{S:generators}.
The second idea is that, to test whether $P=PT^{-1}$,
one can project $P$ and $PT^{-1}$
onto a randomly chosen one-dimensional subspace,
and test whether the projected measures are equal.
This reduces a $d$-dimensional problem to a one-dimensional one,
for which well-known techniques are available.
The justification for this procedure is a variant of the
Cram\'er--Wold theorem. This idea is described in detail in \S\ref{S:CW}.
Based on these two ideas, we develop a testing procedure in \S\ref{S:testing}.
Under suitable hypotheses on $P$ and $G$, our test is consistent and distribution-free.
We also consider a variant of this procedure, based on projecting onto several randomly-chosen one-dimensional
subspaces. This has the effect of increasing the power of the test.
There follows a discussion of three special cases.
The case of exchangeable distributions is treated in \S\ref{S:exch}
and that of sign-invariant exchangeable distributions in \S\ref{S:signexch}.
We describe the background, perform some simulations, and,
in the case of exchangeability, we compare our results with those
obtained by other techniques in the literature.
The third case, treated in \S\ref{S:infdim},
illustrates the flexibility of our method. In this case,
${\mathbb R}^d$ is replaced by an infinite-dimensional Hilbert space.
The necessary adjustments to our method are described,
and illustrated with a further simulation.
The article concludes with two examples drawn from real datasets,
one concerning biometric measurements, and the other satellite images.
\section{Generators}\label{S:generators}
Recall that a group $G$ is said to be \emph{generated} by a subset $A$ if
every element $g$ of $G$ can be written as a finite product $g=g_1\cdots g_m$,
such that, for each $j$, either $g_j\in A$ or $g_j^{-1}\in A$.
Equivalently, $G$ is generated by $A$ if $A$ is contained in no proper subgroup of $G$.
In our context, the interest of this notion stems from the following simple proposition.
\begin{proposition}
Let $P$ be a Borel probability measure on ${\mathbb R}^d$,
let $G$ be a group of invertible linear maps $T:{\mathbb R}^d\to{\mathbb R}^d$,
and let $A$ be a set of generators for $G$.
Then $P$ is $G$-invariant if and only if $P$ is $T$-invariant for each $T\in A$.
\end{proposition}
\begin{proof}
Define
\[
G_0:=\{T\in G: PT^{-1}=P\}.
\]
Obviously $G_0$ contains the identity. Also, it is easily checked that,
if $T_1,T_2\in G_0$, then also $T_1T_2\in G_0$ and $T_1^{-1}\in G_0$.
Therefore $G_0$ is a subgroup of $G$. By assumption, $G_0$ contains $A$.
As $A$ generates $G$, it follows that $G_0=G$.
\end{proof}
\begin{example}\label{Ex:2gen}
Let $G$ be the group of $d\times d$ permutation matrices
(i.e.\ matrices with one entry $1$ in each row and each column, and zeros elsewhere).
Thus each $T\in G$ permutes the basis vectors of ${\mathbb R}^d$,
say $Te_j=e_{\sigma(j)}$,
where $\sigma$ is a permutation of $\{1,2,\dots,d\}$.
The correspondence $T\leftrightarrow\sigma$ is an isomorphism
between $G$ and $\Sigma_d$, the group of permutations of $\{1,2,\dots,d\}$.
It is well known that, even though $\Sigma_d$
contains $d!$ permutations, it can be generated using just two permutations,
for example the transposition $\sigma_1=(1,2)$ and the cycle $\sigma_2=(1,2,\dots,d)$
(see e.g.\ \cite[Example~2.30]{Ro78}).
Thus $G$ has a generating set consisting of just two matrices.
\end{example}
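This generation property is easy to verify computationally. The following sketch (ours, with hypothetical helper names) builds the closure of the two generators under composition for $d=5$ and recovers all $5!=120$ permutations:

```python
def generated_group(generators, d):
    """Closure of a set of permutations of {0,...,d-1} (stored as tuples,
    g[i] = image of i) under composition; in a finite group, positive
    words in the generators already yield the whole generated subgroup."""
    group = set(generators) | {tuple(range(d))}
    frontier = set(generators)
    while frontier:
        new = set()
        for g in frontier:
            for h in generators:
                gh = tuple(g[h[i]] for i in range(d))  # composition g o h
                if gh not in group:
                    new.add(gh)
        group |= new
        frontier = new
    return group

d = 5
sigma1 = (1, 0, 2, 3, 4)   # the transposition (1,2)
sigma2 = (1, 2, 3, 4, 0)   # the cycle (1,2,...,d)
G = generated_group([sigma1, sigma2], d)
assert len(G) == 120       # = 5!, i.e. the whole symmetric group
```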
\begin{example}\label{Ex:sign}
Let $G$ be the group of $d\times d$ signed permutation matrices
(i.e.\ matrices with one entry $\pm 1$ in each row and each column, and zeros elsewhere).
This is sometimes called the hyperoctahedral group, denoted~$B_d$.
It is the group of symmetries of the $d$-dimensional cube. It contains $d!2^d$ elements,
but, like the symmetric group $\Sigma_d$, it can be generated by just two elements.
However, these elements are more complicated to describe,
see \cite[Proposition~6]{Ja04}.
On the other hand, it is easy to give a set of three generators:
for example, one can take $\{T_1,T_2,T_3\}$,
where $T_1,T_2$ are the matrices corresponding to the same two permutations $\sigma_1,\sigma_2$ as before,
and $T_3$ is the diagonal matrix given by $T_3:=\diag(-1,1,1,\dots,1)$.
\end{example}
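Again this can be checked by brute force. The sketch below (ours, not from the article) closes the three matrices $\{T_1,T_2,T_3\}$ under multiplication for $d=3$ and recovers all $d!\,2^d=48$ signed permutation matrices:

```python
import numpy as np

def closure(gens):
    """Closure of a set of integer matrices under matrix multiplication;
    positive words suffice since the generated group is finite."""
    key = lambda M: tuple(map(tuple, M))
    group = {key(g): g for g in gens}
    frontier = list(gens)
    while frontier:
        new = []
        for A in frontier:
            for Bmat in gens:
                C = A @ Bmat
                k = key(C)
                if k not in group:
                    group[k] = C
                    new.append(C)
        frontier = new
    return list(group.values())

d = 3
T1 = np.eye(d, dtype=int)[[1, 0] + list(range(2, d))]   # transposition (1,2)
T2 = np.eye(d, dtype=int)[list(range(1, d)) + [0]]      # cycle (1,2,...,d)
T3 = np.diag([-1] + [1] * (d - 1))                      # one sign flip
B = closure([T1, T2, T3])
assert len(B) == 48        # = d! * 2^d = 6 * 8, the hyperoctahedral group B_3
```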
\section{Cram\'er--Wold theorems}\label{S:CW}
In this section we recall the Cram\'er--Wold theorem, as well as a more recent extension.
We also discuss the sharpness of this result in the context of testing for $G$-invariance.
Let $P,Q$ be Borel probability measures on ${\mathbb R}^d$, where $d\ge2$.
We denote by ${\mathcal E}(P,Q)$ the set of vectors $x\in{\mathbb R}^d$
such that $P\pi_x^{-1}=Q\pi_x^{-1}$, where $\pi_x:{\mathbb R}^d\to{\mathbb R}^d$ is the orthogonal projection onto the one-dimensional subspace ${\mathbb R} x$ spanned by $x$.
Equivalently, ${\mathcal E}(P,Q)$ is the set of $x\in{\mathbb R}^d$ such that $\phi_P(tx)=\phi_Q(tx)$ for all $t\in{\mathbb R}$,
where $\phi_P,\phi_Q$ denote the characteristic functions of $P,Q$ respectively.
The set ${\mathcal E}(P,Q)$ is a closed cone (not necessarily convex) in ${\mathbb R}^d$.
For detailed proofs of all these facts, see \cite[\S2]{CFR07}.
The following result is a restatement in our notation of a
well-known theorem of Cram\'er and Wold (\cite{CW36}).
\begin{theorem}[\protect{\cite[Theorem~I]{CW36}}]\label{T:CW}
Let $P,Q$ be Borel probability measures on~${\mathbb R}^d$, where $d\ge2$.
If ${\mathcal E}(P,Q)={\mathbb R}^d$, then $P=Q$.
\end{theorem}
There are several extensions of this theorem,
in which one assumes more about the nature of the measures $P,Q$
and less about the size of ${\mathcal E}(P,Q)$.
Articles on this subject include those of R\'enyi \cite{Re52},
Gilbert \cite{Gi55}, Heppes \cite{He56}, B\'elisle--Mass\'e--Ransford \cite{BMR97}
and Cuesta-Albertos--Fraiman--Ransford \cite{CFR07}.
We cite one such result, taken from \cite{CFR07}.
\begin{theorem}[\protect{\cite[Corollary~3.2]{CFR07}}]\label{T:CWgen}
Let $P,Q$ be Borel probability measures on~${\mathbb R}^d$, where $d\ge2$. Assume that
the absolute moments $m_N:=\int\|x\|^N\,dP(x)$ are all finite and satisfy
\begin{equation}\label{E:Carleman}
\sum_{N\ge1}m_N^{-1/N}=\infty.
\end{equation}
If the set ${\mathcal E}(P,Q)$ is of positive Lebesgue measure in ${\mathbb R}^d$,
then $P=Q$.
\end{theorem}
The moment condition \eqref{E:Carleman} is slightly less restrictive than demanding that the moment generating function of $P$ be finite.
Just how essential is this condition?
The brief answer is that, without it, Theorem~\ref{T:CWgen} breaks down dramatically.
Indeed, given a moment sequence $(m_N)$ that fails to satisfy \eqref{E:Carleman},
one can find probability measures $P,Q$ whose absolute moments are bounded by the $m_N$,
such that ${\mathcal E}(P,Q)$ has positive measure (indeed it even has non-empty interior),
yet $P\ne Q$.
See \cite{CFR07} for an extensive discussion of this topic.
However, we are eventually going to apply Theorem~\ref{T:CWgen} in the special case when
$Q=PT^{-1}$, where $T:{\mathbb R}^d\to{\mathbb R}^d$ is an invertible linear map.
It is \emph{a priori} conceivable that Theorem~\ref{T:CWgen} might be improvable
for pairs of measures $(P,Q)$ of this particular form.
The following sharpness result, which we believe to be new, shows that this is not so,
at least in the case when $T$ is of finite order.
\begin{theorem}\label{T:sharpexch}
Let $T:{\mathbb R}^d\to{\mathbb R}^d$ be a linear map
such that $T\ne I$ but $T^n=I$ for some $n\ge2$.
Let $(M_N)_{N\ge0}$ be a positive sequence satisfying
\[
M_0=1,
\quad
M_N^2\le M_{N-1}M_{N+1}~(N\ge1)
\quad\text{and}\quad
\sum_{N\ge1}M_N^{-1/N}<\infty.
\]
Then there exists a Borel probability measure $P$ on ${\mathbb R}^d$ such that
\begin{itemize}
\item $\int\|x\|^N\,dP(x)\le M_N$ for all $N\ge0$,
\item the cone ${\mathcal E}(P,PT^{-1})$ is of positive measure in ${\mathbb R}^d$, but
\item $PT^{-1}\ne P$.
\end{itemize}
\end{theorem}
\begin{remark}
The condition that $T^n=I$ for some $n\ge 2$ is automatically satisfied
if $T$ belongs to a finite group $G$, as will be the case in all the examples that we
shall study. The article \cite{Ko03} contains a criterion for when $T^n=I$.
\end{remark}
The main new idea in Theorem~\ref{T:sharpexch} is the construction given in
the following lemma.
\begin{lemma}\label{L:sharpexch}
Let $Q,R$ be Borel probability measures on ${\mathbb R}^d$.
Let $T:{\mathbb R}^d\to{\mathbb R}^d$ be a linear map such that $T^n=I$ for some $n\ge2$.
Define
\[
P:=\frac{1}{n}\Bigl(Q+\sum_{j=1}^{n-1}RT^{-j}\Bigr).
\]
Then $P$ is a Borel probability measure on ${\mathbb R}^d$ and, writing ${\mathcal E}_0:={\mathcal E}(Q,R)$, we have
\begin{equation}\label{E:sharpexch}
{\mathcal E}_0\cap T^{-1}({\mathcal E}_0)\subset {\mathcal E}(P,PT^{-1})\subset {\mathbb R}^d\setminus ({\mathcal E}_0\bigtriangleup T^{-1}({\mathcal E}_0)).
\end{equation}
\end{lemma}
\begin{proof}
Clearly $P$ is a Borel probability measure on ${\mathbb R}^d$.
Also, since $T^n=I$, the measure $\tilde{P}:=\sum_{j=0}^{n-1}RT^{-j}$ is $T$-invariant,
so, since $P=(1/n)(Q-R+\tilde{P})$, we get
\[
P-PT^{-1}=\frac{1}{n}\Bigl(Q-R-(Q-R)T^{-1}\Bigr).
\]
Using the characterization of ${\mathcal E}$ in terms of characteristic functions, it follows that
\begin{equation}\label{E:phiequiv}
\begin{aligned}
x\in{\mathcal E}(P,PT^{-1})
&\iff \phi_{P-PT^{-1}}(tx)=0 \quad\forall t\in{\mathbb R}\\
&\iff \phi_{(Q-R)}(tx)-\phi_{(Q-R)T^{-1}}(tx)=0 \quad\forall t\in{\mathbb R}\\
&\iff \phi_{(Q-R)}(tx)-\phi_{(Q-R)}(tTx)=0 \quad\forall t\in{\mathbb R}.
\end{aligned}
\end{equation}
To prove the first inclusion in \eqref{E:sharpexch},
let $x\in{\mathcal E}_0\cap T^{-1}({\mathcal E}_0)$.
Then both $x,Tx\in{\mathcal E}_0={\mathcal E}(Q,R)$,
so both $\phi_{(Q-R)}(tx)=0$ and $\phi_{(Q-R)}(tTx)=0$ for all $t\in{\mathbb R}$.
By the equivalence \eqref{E:phiequiv}, it follows that $x\in{\mathcal E}(P,PT^{-1})$.
This establishes the first inclusion.
For the second inclusion in \eqref{E:sharpexch},
let $x\in{\mathcal E}_0\bigtriangleup T^{-1}({\mathcal E}_0)$.
Then exactly one of $x$ and $Tx$ lies in ${\mathcal E}_0={\mathcal E}(Q,R)$,
so there exists $t\in{\mathbb R}$ such that exactly one of $\phi_{(Q-R)}(tx)$
and $\phi_{(Q-R)}(tTx)$ is zero.
Their difference is therefore non-zero,
so, by the equivalence \eqref{E:phiequiv} again,
it follows that $x\notin {\mathcal E}(P,PT^{-1})$.
This establishes the second inclusion and completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:sharpexch}]
Let $B$ be a closed ball in ${\mathbb R}^d$ such that $0\notin B$.
By \cite[Theorem~5.4]{BMR97},
there exist mutually singular Borel probability measures $Q,R$ on ${\mathbb R}^d$
such that both $\int\|x\|^N\,dQ(x)\le M_N$ and $\int\|x\|^N\,dR(x)\le M_N$ for all $N\ge0$,
and also such that ${\mathcal E}_0:={\mathcal E}(Q,R)$ contains all lines not meeting $B$.
Since $T\ne I$,
by choosing $B$ appropriately we may ensure that ${\mathcal E}_0\ne T^{-1}({\mathcal E}_0)$
and also that ${\mathcal E}_0\cap T^{-1}({\mathcal E}_0)$ has positive Lebesgue measure.
Let $P$ be the probability measure constructed from $Q,R$ in Lemma~\ref{L:sharpexch}.
By the left-hand inclusion in \eqref{E:sharpexch}, ${\mathcal E}(P,PT^{-1})$ has positive Lebesgue measure.
Also, by the right-hand inclusion, ${\mathcal E}(P,PT^{-1})$ is a proper subset of~${\mathbb R}^d$, so $P\ne PT^{-1}$.
It remains to check that $P$ satisfies the moment inequality.
From the definition of $P$ in Lemma~\ref{L:sharpexch}, we have
\begin{align*}
\int \|x\|^N\,dP(x)
&=\frac{1}{n}\Bigl(\int \|x\|^N\,dQ(x)+\sum_{j=1}^{n-1}\int\|T^j(x)\|^N\,dR(x)\Bigr)\\
&\le \frac{1}{n}\Bigl(M_N+\sum_{j=1}^{n-1}\|T\|^{jN}M_N\Bigr)\le \|T\|^{nN}M_N.
\end{align*}
This is almost what we want.
Repeating the argument with $(M_N)_{N\ge0}$ replaced by $(\|T\|^{-nN}M_N)_{N\ge0}$
at the outset, we get $\int\|x\|^N\,dP(x)\le M_N$.
\end{proof}
\section{General testing procedures}\label{S:testing}
\subsection{A distribution-free procedure}\label{SS:testing1}
Consider the following set-up.
Let $d\ge2$, let $P$ be a Borel probability measure on ${\mathbb R}^d$,
and let $G$ be a finite multiplicative group of $d\times d$ orthogonal matrices.
We want to test the null hypothesis
\[
H_0:\quad PT^{-1}=P \quad \text{for all~}T \in G.
\]
Assume that $G$ is generated by $k$ elements, say $\{T_1,\dots,T_k\}$.
Then, as we have seen in \S\ref{S:generators},
it suffices just to test whether $PT_j^{-1}=P$ for $j=1,2,\dots, k$.
We now proceed as follows.
Let $\aleph_n:=\{{\mathbf X}_1, \ldots, {\mathbf X}_n\}$ be an i.i.d.\ sample of vectors in ${\mathbb R}^d$ with common distribution $P$.
Consider the following testing procedure.
\begin{enumerate}
\item Split the sample $\aleph_n$ at random into two disjoint subsamples
\[
\aleph^{n_1}:=\{{\mathbf X}_1, \ldots, {\mathbf X}_{n_1}\},
\quad
\aleph^{n_2}:=\{{\mathbf X}'_1, \ldots, {\mathbf X}'_{n_2}\},
\]
providing two independent samples, which will typically be of nearly equal sizes.
\item Generate $k$ independent random directions $h_1, \ldots, h_k$,
uniformly distributed on the unit sphere of ${\mathbb R}^d$.
\item Given a unit vector $h\in{\mathbb R}^d$,
denote by $F_h$ the one-dimensional distribution of $\langle h,{\mathbf X}\rangle$,
and by $F_h^{T_j}$ that of $\langle h,T_j({\mathbf X})\rangle$ for $j=1, \ldots, k$.
The previous results suggest testing whether $F_{h_j}^{T_j}=F_{h_j}$ for $j=1, \ldots, k$.
\item Next, we perform $k$ Kolmogorov--Smirnov two-sample tests between
\[
\langle h_j,\aleph^{n_1}\rangle
\quad\text{and}\quad
\langle h_j,\aleph^{n_2}_{T_{j}}\rangle, \quad j=1, \ldots, k,
\]
each one at level $\alpha/k$, where
\[
\langle h, \aleph^{n_1}\rangle:=\Bigl\{\langle h,{\mathbf X}_1\rangle, \ldots, \langle h,{\mathbf X}_{n_1}\rangle \Bigr\}
\]
and
\[
\langle h, \aleph^{n_2}_{T}\rangle:=\Bigl\{\langle h,T({\mathbf X}'_1)\rangle, \ldots, \langle h,T({\mathbf X}'_{n_2})\rangle\Bigr\}.
\]
\item Given $h, T$, let $F_{n_1,h} ^{T}$ and $F_{n_2,h}^{T}$ be the empirical distributions of
$\langle h, \aleph^{n_1}\rangle$ and $\langle h, \aleph^{n_2}_{T}\rangle$ respectively.
The Kolmogorov--Smirnov statistic for our problem is given by
\[
KS(n_1,n_2, h, T):= \sup_{t \in {\mathbb R}} \bigl| F_{n_1,h}^{T}(t) - F_{n_2,h}^{T}(t)\bigr|.
\]
\item Given $\alpha$ with $0<\alpha<1/k$, choose $c_{\alpha/k}$ such that
\[
P\Bigl(KS(n_1, n_2, h_j,T_j)> c_{\alpha/k}\Bigr)= \alpha/k.
\]
Observe that $c_{\alpha/k}$ does not depend on~$j$.
This is because
\[
\langle h_j,T_j({\mathbf X})\rangle=\langle T_j^*(h_j),{\mathbf X}\rangle,
\]
and all the vectors $T_j^*(h_j)$ have the same distribution, namely the uniform distribution on the unit
sphere in ${\mathbb R}^d$ (it is here that we need the assumption that the $T_j$ are orthogonal matrices).
\item Reject the null hypothesis if at least one of the $k$ Kolmogorov--Smirnov tests results in rejection.
\end{enumerate}
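The procedure above can be sketched in a few lines of Python (an illustrative sketch of ours, not the authors' code; we rely on \texttt{scipy.stats.ks\_2samp} for the two-sample test, and function names are hypothetical):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def invariance_test(X, generators, alpha=0.05):
    """Steps 1-7: random split of the sample, one random direction per
    generator, and k Kolmogorov-Smirnov tests, each at level alpha/k."""
    n, d = X.shape
    k = len(generators)
    labels = rng.integers(0, 2, size=n)            # Bernoulli(1/2) random split
    X1, X2 = X[labels == 0], X[labels == 1]
    for T in generators:
        h = rng.standard_normal(d)
        h /= np.linalg.norm(h)                     # uniform direction on the sphere
        # compare <h, X> on the first half with <h, T(X')> on the second half
        pval = ks_2samp(X1 @ h, (X2 @ T.T) @ h).pvalue
        if pval < alpha / k:                       # Bonferroni-corrected level
            return True                            # reject H_0
    return False

# Toy example: exchangeability of a sample with i.i.d. N(0,1) coordinates
d = 3
T1 = np.eye(d)[[1, 0, 2]]                          # transposition (1,2)
T2 = np.eye(d)[[1, 2, 0]]                          # cycle (1,2,3)
reject = invariance_test(rng.standard_normal((500, d)), [T1, T2])
```

Under the null (as in the toy example) the rejection probability is at most $\alpha$; a non-exchangeable sample, e.g.\ with clearly different coordinate means, is rejected with high probability.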
\begin{theorem}
Let $P$ be a Borel probability measure on ${\mathbb R}^d$, where $d\ge2$.
Assume that:
\begin{enumerate}[\normalfont(i)]
\item the absolute moments $m_N:=\int\|x\|^N\,dP(x)$ are all finite and satisfy
the condition $\sum_{N\ge1}m_N^{-1/N}=\infty$;
\item $P$ is absolutely continuous with respect to Lebesgue measure on ${\mathbb R}^d$.
\end{enumerate}
Then the preceding test has a level between $\alpha/k$ and $\alpha$ under the null hypothesis~$H_0$,
and is consistent under the alternative.
\end{theorem}
\begin{proof}
By the choice of $c_{\alpha/k}$ and Bonferroni's inequality, we have
\[
\alpha/k \le P\Bigl(\bigcup_{j=1}^k\{KS(n_1, n_2, h_j, T_j) > c_{\alpha/k}\} \Bigr ) \le \alpha.
\]
Thus the test has a level between $\alpha/k$ and $\alpha$.
The proof of consistency follows the same lines as the one given in \cite[Theorem~3.1]{CFR06}.
Under the alternative hypothesis,
there exists $j_0\in\{1,\dots,k\}$ such that $P \ne PT^{-1}_{j_0}$.
Then, by Theorem~\ref{T:CWgen},
for almost all $h\in{\mathbb R}^d$, there exists $t_h\in{\mathbb R}$ such that
$\delta_h:=|F_{h}^{T_{j_0}}(t_h) - F_{h}(t_h)| >0$.
As $P$ is absolutely continuous with respect to Lebesgue measure on ${\mathbb R}^d$,
both $F_h$ and $F_h^{T_{j_0}}$ are continuous,
and so, by the strong law of large numbers,
\[
\lim_{n\to\infty}F_{n,h}(t_h) = F_{h}(t_h)
\quad\text{and}\quad
\lim_{n\to\infty}F_{n,h}^{T_{j_0}}(t_h) = F_{h}^{T_{j_0}}(t_h) \quad\text{a.s.}
\]
Hence, by the triangle inequality,
\[
\liminf_{n_1,n_2\to\infty}KS(n_1,n_2,h,T_{j_0})\ge \delta_h >0\quad\text{a.s.}
\]
This establishes consistency.
\end{proof}
\subsection{Testing using more than one random projection} \label{SS:testing2}
The algorithm presented in \S\ref{SS:testing1} is consistent and distribution-free.
In this section we consider the case where we use more than one projection,
as suggested in \cite{CFR07},
in order to increase the power of the test.
Moreover, we no longer need to assume that the matrices $T_j$ are orthogonal.
The price to be paid is that the proposed statistic is no longer distribution-free.
In this case we do not need to split the sample into two subsamples.
The procedure just consists of taking $\ell$ random directions $h_1, \ldots, h_\ell$,
calculating, for each direction $h_i$ and each generator $T_j$, the statistic $KS(n, h_i, T_j)$ for the univariate projected data,
and then taking the maximum over all of them:
$$
D_{n,\ell}:=
\max_{\substack{i=1, \ldots, \ell\\j=1,\dots,k}}KS(n, h_i, T_j)
=\max_{\substack{i=1, \ldots, \ell\\j=1,\dots,k}}\sup_{t \in \mathbb R} | F_{n,h_i}(t) - F_{n,h_i}^{T_j} (t)|.
$$
Here $F_{n,h_i}(t)$ and $F^{T_j}_{n,h_i}(t)$ are the empirical distributions of
\[
\langle h_i,\aleph^{n}\rangle
:=\{\langle h_i,{\mathbf X}_1\rangle, \ldots, \langle h_i,{\mathbf X}_{n}\rangle \}
\]
and
\[
\langle h_i,\aleph^{n}_{T_j}\rangle
:=\{\langle h_i,T_j({\mathbf X}_{1})\rangle , \ldots, \langle h_i,T_j({\mathbf X}_{n})\rangle \},
\]
for $i=1, \ldots, \ell$ and $j=1, \ldots, k$, where $k$ is the number of generators of the group.
Since the statistic is no longer distribution-free for $\ell\ge 2$,
in order to obtain the critical value for a level-$\alpha$ test,
we approximate its distribution using the bootstrap on the original sample ${\mathbf X}_1, \ldots, {\mathbf X}_n$,
generating a large enough number $B$ of values of $D_{n,\ell}$, one for each bootstrap sample.
More precisely, for $r=1, \ldots, B$ repeat:
\begin{enumerate}
\item Generate a bootstrap sample of $\mathbf X_1^*, \ldots, \mathbf X_n^*$, by random sampling with replacement from $\aleph_n$, and generate $(\ell+k)$ i.i.d.\ random directions $h_1, \ldots, h_{\ell+k}$.
\item Calculate $D_{n,\ell}^*$ based on $\mathbf X_1^*, \ldots, \mathbf X_n^*$ and $T_j(\mathbf X_1^*), \ldots, T_j(\mathbf X_n^*)$.
\end{enumerate}
We end up with a sample ${\mathcal D}^*:= \{D_{n,\ell}^{*1}, \ldots , D_{n,\ell}^{*B}\}$ of size $B$,
and take as critical value the $(1-\alpha)$-quantile of the empirical distribution of ${\mathcal D}^*$.
The validity of the bootstrap in this case follows from \cite[Theorems~3 and~4]{Pr95}.
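A simplified sketch of this bootstrap procedure follows (our illustration, not the authors' code; for brevity we draw $\ell$ fresh directions per bootstrap replicate, and all function names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def ks_stat(u, v):
    """Two-sample Kolmogorov-Smirnov statistic sup_t |F_u(t) - F_v(t)|."""
    grid = np.sort(np.concatenate([u, v]))
    Fu = np.searchsorted(np.sort(u), grid, side="right") / len(u)
    Fv = np.searchsorted(np.sort(v), grid, side="right") / len(v)
    return np.max(np.abs(Fu - Fv))

def D_stat(X, generators, directions):
    """The statistic D_{n,l}: maximum KS distance over all pairs (h_i, T_j)."""
    return max(ks_stat(X @ h, (X @ T.T) @ h)
               for h in directions for T in generators)

def bootstrap_critical_value(X, generators, ell=10, B=200, alpha=0.05):
    """(1 - alpha)-quantile of B bootstrap replicates of D_{n,l}."""
    n, d = X.shape
    vals = []
    for _ in range(B):
        Xb = X[rng.integers(0, n, size=n)]         # resample with replacement
        H = rng.standard_normal((ell, d))
        H /= np.linalg.norm(H, axis=1, keepdims=True)  # directions on the sphere
        vals.append(D_stat(Xb, generators, H))
    return np.quantile(vals, 1 - alpha)

X = rng.standard_normal((200, 3))                  # exchangeable toy sample
T1 = np.eye(3)[[1, 0, 2]]                          # transposition (1,2)
c = bootstrap_critical_value(X, [T1], ell=5, B=50)
```

The null hypothesis is then rejected when $D_{n,\ell}$, computed on the original sample, exceeds the bootstrap critical value `c`.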
\section{Application to exchangeability}\label{S:exch}
\subsection{Background}\label{SS:exchback}
A $d$-tuple of random variables ${\mathbf X}:=(X_1,\dots,X_d)$ is said to be \emph{exchangeable} if
it has the same distribution as $(X_{\sigma(1)},\dots,X_{\sigma(d)})$
for every permutation $\sigma$ of $\{1,\dots,d\}$.
This is equivalent to demanding that the distribution $P$ of ${\mathbf X}$ be $G$-invariant,
where $G$ is the group of $d\times d$ permutation matrices.
The notion of exchangeability plays a central role in probability and statistics.
See for instance \cite{Al85} for a detailed study of exchangeability,
and \cite{Ki78} on some of the important uses of it.
In particular, permutation tests provide exact level tests for a wide variety of practical testing problems,
but they rely on the assumption of exchangeability.
From the point of view of applications,
it is quite common to assume that
a given vector of data $(X_1, \ldots, X_n)$
(where the information $X_i$ corresponds to the $i$-th subject of a study)
does not depend on the order in which the data are collected.
Although independence of the $X_i$ is quite often assumed,
this is stronger than the notion of exchangeability.
The relationship is made precise by de Finetti's theorem,
according to which, if ${\mathbf X}:= (X_1, X_2, \ldots)$
is a sequence of random variables
whose distribution is invariant under permutations of finitely many coordinates,
then the law of ${\mathbf X}$ is a mixture of i.i.d.\ measures.
De Finetti's theorem applies to an infinite exchangeable sequence of random variables,
but there are also versions that apply to finite exchangeable sequences,
see e.g.\ \cite{DF80, JKY16, KY19} and the references therein.
The problem of testing for exchangeability has been considered quite extensively in the literature.
Modarres \cite{Mo08} compares a number of different methods: the run test, the nearest neighbour test,
the rank test, the sign test and the bootstrap test.
The problem of testing exchangeability in an on-line mode has been considered by
Vovk, Nouretdinov and Gammerman \cite{VNG03}.
The data are observed sequentially,
and the procedure provides a way to monitor on-line the evidence against the exchangeability assumption,
based on exchangeability martingales.
A test for the case of binary sequences can be found in \cite{RRLK21}, using a similar approach.
Genest, Ne\v{s}lehov\'a and Quessy \cite{GNQ12} propose an interesting test for a closely related problem: exchangeability of copulas for two-dimensional data. This has been extended to arbitrary finite dimensions by Harder and Stadtm\"uller \cite{HS17},
using the observation that $\Sigma_d$ can be generated by just two elements (see Example~\ref{Ex:2gen} above). See also the recent article \cite{BQ21} by Bahraoui and Quessy, where they also consider the problem of testing if the copula is exchangeable, but, instead of using empirical copulas as in the previous results, they consider a test based on the copula characteristic function. As they mention:
``From a modeling point-of-view, it may be of interest to check whether the copula of a population is exchangeable. The main point here is that this step may be accomplished without making any assumption on the marginal distributions. Hence, a vector can have exchangeable components at the level of its dependence structure, that is, its copula, whether or not its marginal distributions are identical." Under the extra assumption that all marginals have the same distribution, it provides a test for exchangeability of the distributions.
However, they need to use bootstrap in order to perform their test.
In the next subsection, we perform some simulations to show how the test procedures described in \S\ref{S:testing}
apply to this situation. In \S\ref{SS:exchsim2} we compare our test for exchangeability with the one proposed by Harder and Stadtm\"uller in \cite{HS17}.
\subsection{Simulations for the test for exchangeability}\label{SS:exchsim1}
We consider a sample of a multivariate normal distribution in dimension $d$, with mean zero and covariance matrix given by a mixture
$(1-\xi)\Sigma^{(1)} +\xi \Sigma^{(2)},$ where $\Sigma^{(1)}$ corresponds to an exchangeable distribution and $\Sigma^{(2)}$ is a Toeplitz matrix, given by
\[
\Sigma^{(1)}:=
\begin{pmatrix}
1 & \rho & \cdots & \rho\\
\rho& 1 & \cdots & \rho\\
\vdots & \vdots & \ddots & \vdots\\
\rho& \rho & \cdots & 1
\end{pmatrix},
\quad
\Sigma^{(2)}:=
\begin{pmatrix}
1 & \rho & \rho^2 & \cdots & \rho^{d-1}\\
\rho & 1 & \rho & \cdots & \rho^{d-2}\\
\rho^2 & \rho & 1 & \cdots & \rho^{d-3}\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
\rho^{d-1}& \rho^{d-2} & \cdots &\rho & 1
\end{pmatrix}.
\]
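For a zero-mean Gaussian vector, exchangeability is equivalent to invariance of the covariance matrix under simultaneous row and column permutations, so one can check directly which part of the mixture breaks the null hypothesis. A small NumPy sketch of the simulation setup (with $d=4$ for brevity; illustration only):

```python
import numpy as np
from itertools import permutations

d, rho, xi = 4, 0.5, 0.3
idx = np.arange(d)
S1 = (1 - rho) * np.eye(d) + rho * np.ones((d, d))       # exchangeable covariance
S2 = rho ** np.abs(np.subtract.outer(idx, idx))          # Toeplitz covariance
Sigma = (1 - xi) * S1 + xi * S2                          # mixture used in the simulation

def perm_invariant(S):
    """True iff P S P^T = S for every permutation matrix P."""
    return all(
        np.allclose(S[np.ix_(p, p)], S)
        for p in map(list, permutations(range(S.shape[0])))
    )
```

As expected, $\Sigma^{(1)}$ is permutation-invariant, while $\Sigma^{(2)}$, and hence the mixture for any $\xi>0$, is not.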
In this simulation, we take $\rho:=1/2$. If $\xi=0$, then the distribution is exchangeable,
and as it increases to $1$ we get further from the null hypothesis.
To analyze the power function of the proposed test,
we consider some different scenarios:
the dimension $d$ being $6$ and $10$,
the sample size $n$ being $1000$ and $5000$,
and the number of projections being $1,10,50, 100$.
In all cases, the level of the test is $0.05$.
In Figure~\ref{F:normal} we plot the empirical power function $g(\xi)$ for $\xi \in [0,1]$.
\begin{figure}[ht]
\centering
\subfloat[]{\includegraphics[scale=0.65]{Figuras/normal6b.pdf}}
\hfill
\subfloat[]{\includegraphics[scale=0.65]{Figuras/normal10b.pdf}}
\caption{\small Empirical power function $g(\xi)$ for $\xi \in [0,1]$. Upper panel (a) for dimension $d=6$, lower panel (b) for $d=10$.}\label{F:normal}
\end{figure}
\subsection{Comparison with a copula-based test}\label{SS:exchsim2}
As we mentioned in \S\ref{SS:exchback},
Harder and Stadtm\"uller \cite{HS17} proposed an interesting test for exchangeability of copulas,
which we describe briefly here. Assuming that all marginal distributions are equal it provides a exchangeability test for distributions. However, without the extra assumption that all marginals coincide, the exchangeability of the copula does not imply exchangeability of the probability distributions.
Thus, without this assumption, it is also necessary to test that all marginal distributions coincide, which, for even moderate dimensions, makes the problem harder. For the comparison below, we consider a case where all marginal distributions coincide.
Let ${\mathbf X}:=(X_1, \ldots, X_d)$ be a random vector with joint distribution $F$ and
continuous marginal distributions $F_1, \dots, F_d$.
Then $U_j:=F_j(X_j)$ is uniformly distributed on the interval $[0,1]$ for each~$j$,
and the distribution $C$ of $(U_1, \ldots, U_d)$ is the unique copula
such that $F(x_1,\dots,x_d)= C(F_1(x_1), \ldots, F_d(x_d))$.
Given an i.i.d. sample ${\mathbf X}_1, \ldots, {\mathbf X}_n \in {\mathbb R}^d$, set
\begin{equation}\label{E:empcop}
\hat C_n({\mathbf u}):= \frac{1}{n} \sum_{i=1}^n \prod_{j=1}^d {\mathcal I}_{\{\hat F_{jn}(X_{ij})\leq u_j\}},
\quad {\mathbf u}:=(u_1, \ldots, u_d) \in [0,1]^d,
\end{equation}
where $\hat F_{jn}, \ j=1, \ldots, d$ are the empirical marginal distributions of the sample.
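For concreteness, equation \eqref{E:empcop} can be implemented in a few lines (a sketch assuming continuous marginals, so that there are no ties among the $X_{ij}$ and the empirical marginals reduce to column-wise ranks):

```python
import numpy as np

def empirical_copula(X, u):
    """hat C_n(u): hat F_{jn}(X_{ij}) is the rank of X_{ij} in column j, divided by n."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1   # column-wise ranks 1..n (no ties)
    U = ranks / n                                           # pseudo-observations
    return np.mean(np.all(U <= np.asarray(u), axis=1))

X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])          # comonotone toy sample
```

For the comonotone toy sample, $\hat C_3(0.5,0.5)=1/3$ and $\hat C_3(1,1)=1$, as a hand computation confirms.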
Among many other interesting asymptotic results, Deheuvels \cite{De81} showed that
\begin{equation}\label{E:deh}
\sup_{{\mathbf u} \in [0,1]^d}\bigl\vert\hat C_n({\mathbf u}) - C({\mathbf u})\bigr\vert = O\bigl(n^{-\frac{1}{2}}(\log\log n)^\frac{1}{2}\bigr) \quad\text{a.s.},
\end{equation}
which suggests using this statistic for testing.
Based on $\hat C_n({\mathbf u})$, Harder and Stadtm\"uller \cite{HS17}
proposed a test of exchangeability for the problem
\begin{align*}
&H_0: \quad C({\mathbf u})= C({\mathbf u}_{\sigma}) \quad\text{for all ${\mathbf u} \in [0,1]^d$ and all $\sigma \in \Sigma_d$}, \\
\intertext{against}
&H_A:\quad C({\mathbf u})\ne C({\mathbf u}_{\sigma}) \quad\text{for some ${\mathbf u} \in [0,1]^d$ and some $\sigma \in \Sigma_d$},
\end{align*}
by performing a test based on statistics defined as integral versions of the difference
$(\hat C_n({\mathbf u})- \hat C_n({\mathbf u}_{\sigma}))$, such as
\begin{equation}\label{E:copulas}
S_n:= \sum_{\sigma \in {\mathcal G}_0} \int_{[0,1]^d}\bigl(\hat C_n({\mathbf u})- \hat C_n({\mathbf u}_{\sigma})\bigr)^2 w({\mathbf u}, \sigma)\,dC({\mathbf u}),
\end{equation}
where ${\mathcal G}_0$ is a set of generators for the permutation group $\Sigma_d$,
and $w({\mathbf u}, \sigma)$ is a bounded continuous weight function.
From equation~\eqref{E:deh} and the dominated convergence theorem, it follows that
$S_n \to S$ a.s., where
\[
S:= \sum_{\sigma \in {\mathcal G}_0} \int_{[0,1]^d}\bigl(C({\mathbf u})- C({\mathbf u}_{\sigma})\bigr)^2 w({\mathbf u}, \sigma)\,dC({\mathbf u}).
\]
This is shown in \cite[Lemma~3.1]{HS17},
while the asymptotic distribution is derived in \cite[Theorem~3.4]{HS17} under some regularity assumptions.
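A direct empirical version of \eqref{E:copulas} is easy to sketch: taking the constant weight $w \equiv 1$ for simplicity, the integral $dC$ is replaced by an average over the pseudo-observations themselves (an illustration only, not the exact statistic of \cite{HS17}):

```python
import numpy as np

def pseudo_obs(X):
    # column-wise ranks over n, assuming continuous marginals (no ties)
    return (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / X.shape[0]

def S_stat(X, generators):
    """Empirical S_n with w = 1: integrate dC_n by averaging over the pseudo-observations."""
    U = pseudo_obs(X)
    C_hat = lambda u: np.mean(np.all(U <= u, axis=1))
    return sum(
        np.mean([(C_hat(u) - C_hat(u[list(sigma)])) ** 2 for u in U])
        for sigma in generators
    )

swap = (1, 0)                                                     # transposition generating Sigma_2
X_sym = np.array([[1, 4], [4, 1], [2, 3], [3, 2]], dtype=float)   # swap-symmetric sample
X_asym = np.array([[1, 2], [2, 3], [3, 1]], dtype=float)
```

On the swap-symmetric sample the statistic vanishes, while on the asymmetric sample it is strictly positive.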
As in \cite{HS17}, we consider hierarchical copulas for two different scenarios:
\[
C^1_{\theta_0, \theta_1}(u):= C_{\theta_0} \bigl( u_1, C_{\theta_1}(u_2, u_3) \bigr),
\]
and
\[
C^2_{\theta_0, \theta_1}(u):= C_{\theta_0} \bigl( C_{\theta_1}(u_1, u_2) , u_3\bigr),
\]
where $C_{\theta}$ is the Clayton bivariate copula with parameter $\theta$.
The parameters $\theta_0$ and $\theta_1$ are chosen so that their Kendall indices $\tau$ are
\[
\tau_0:=5/12-\xi/60 \quad\text{and}\quad \tau_1:=5/12+\xi/60 \quad\text{for~}\xi \in \{0,1,\ldots,7\}.
\]
When $\xi=0$ we are under the null hypothesis, and as $\xi$ increases we are further away from the null.
The number of random directions and sample sizes are the same as in \S\ref{SS:exchsim1}.
The empirical power functions are shown in Figures~\ref{F:copula1} and~\ref{F:copula2}.
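The bivariate Clayton copula with parameter $\theta$ has Kendall index $\tau = \theta/(\theta+2)$, so the parameters above are obtained as $\theta = 2\tau/(1-\tau)$. The following sketch samples the bivariate building block via the standard gamma-frailty (Marshall--Olkin) construction; sampling the fully nested copulas $C^1$ and $C^2$ additionally requires the nested-Archimedean construction, which we omit:

```python
import numpy as np

def tau_to_theta(tau):
    # invert tau = theta / (theta + 2) for the Clayton family
    return 2.0 * tau / (1.0 - tau)

def rclayton(n, theta, rng):
    """n samples of the bivariate Clayton copula, via a gamma frailty V."""
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)     # frailty variable
    E = rng.exponential(size=(n, 2))                        # independent unit exponentials
    return (1.0 + E / V[:, None]) ** (-1.0 / theta)         # inverse generator applied to E/V

rng = np.random.default_rng(0)
U = rclayton(500, tau_to_theta(0.5), rng)                   # true Kendall tau = 0.5
```

For $\xi=0$ one has $\tau_0=\tau_1=5/12$, corresponding to $\theta=10/7$.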
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figuras/copula1b.pdf}
\caption{\small Empirical power function for the hierarchical copula for the exchangeability test as a function of $\xi \in \{0,1,\ldots,7\}$, in dimension 3 for the copula $C^1$.}
\label{F:copula1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figuras/copula2b.pdf}
\caption{\small Empirical power function for the hierarchical copula for the exchangeability test as a function of $\xi \in \{0,1,\ldots,7\}$, in dimension 3 for the copula $C^2$.}
\label{F:copula2}
\end{figure}
Our results show that the empirical power functions are better than those reported in \cite{HS17} for a sample size $n=1000$. We compare the best results obtained for this scenario in \cite{HS17}, using the statistic $S_{1000}$ defined by equation~\eqref{E:copulas}, with our proposal $D_{1000,50}$, for different values of $\xi$;
see Table~\ref{Ta:pot}.
{\begin{table}[ht]
\caption{\small Empirical power functions for the tests based on $S_{n}$ and $D_{n,50}$ for a sample of size $1000$.
We consider a random vector with uniform marginals
and the copulas $C^1_{\theta_0, \theta_1}$ and $C^2_{\theta_0, \theta_1}$ in dimension $3$,
for different values of $\xi$ when the level of the test is $5 \%$.}
\label{Ta:pot}
\begin{center}
\begin{tabular}{ccccc}\toprule
& \multicolumn{2}{c}{$C^1_{\theta_0, \theta_1}$} & \multicolumn{2}{c}{$C^2_{\theta_0, \theta_1}$}
\\\cmidrule(lr){2-3}\cmidrule(lr){4-5}
$\xi$ & $S_{1000}$ & $D_{1000,50}$ & $S_{1000}$ & $D_{1000,50}$ \\\midrule
0 & 0.044 & 0.043& 0.033 & 0.052 \\
1& 0.182& 0.338 & 0.112 & 0.246 \\
2 & 0.667 & 0.996 & 0.416 & 0.992 \\
3 & 0.981 &1.000 & 0.865 & 1.000\\
4 & 1.000 & 1.000 & 0.999 & 1.000\\
5 & 1.000 & 1.000 & 1.000 & 1.000\\
\bottomrule
\end{tabular}
\end{center}
\end{table}}
\section{Sign-invariant exchangeability}\label{S:signexch}
\subsection{Background}\label{SS:signexchback}
Berman \cite{Be62,Be65} introduced the notion of sign-invariant exchangeability,
which is defined as follows.
A $d$-tuple of random variables ${\mathbf X}:= (X_1, \ldots, X_d)$ is \emph{sign-invariant} if
it has the same distribution as $(\epsilon_1 X_1, \ldots, \epsilon_d X_d)$
for every choice of $(\epsilon_1,\dots,\epsilon_d) \in \{-1,1\}^d$.
It is \emph{sign-invariant exchangeable} if it is both sign-invariant and exchangeable.
Equivalently, $(X_1, \ldots, X_d)$ is sign-invariant exchangeable if and only if it has the same
distribution as
$(\epsilon_1 X_{\sigma(1)},\dots,\epsilon_d X_{\sigma(d)})$ for all $\sigma\in\Sigma_d$
and $(\epsilon_1,\dots,\epsilon_d) \in \{-1,1\}^d$.
This amounts to saying that the distribution $P$ of ${\mathbf X}$ is $G$-invariant,
where $G$ is the group of $d\times d$ signed permutation matrices.
As remarked in Example~\ref{Ex:sign}, $G$ can be generated by three matrices $T_1,T_2,T_3$,
so, to test for $G$-invariance, it suffices to test whether $P=PT^{-1}_j$ for $j=1,2,3$.
\subsection{Simulations for the test for sign-invariant exchangeability}\label{SS:signexchsim}
We consider a sample of a multivariate normal distribution in dimension $d$ for $d=3, 6, 10$,
with mean zero and covariance matrix given by $\Sigma^{(1)}$, defined in \S\ref{SS:exchsim1}
(where now $\rho$ is variable).
When $\rho=0$ the distribution is sign-invariant exchangeable.
We consider sample sizes of $200, 500, 1000$ and $5000$,
and we consider $1, 10, 50$ and $100$ random directions in ${\mathbb R}^d$.
A plot of the empirical power function of the test as a function of $\rho \in [0,1]$
using \S\ref{SS:signexchback} and \S\ref{SS:testing2} is given in Figure~\ref{F:signexch}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{Figuras/signo33b.pdf}
\caption{\small Empirical power function for sign-invariant exchangeable testing with $\alpha=0.05$ and varying $\rho \in [0,1]$.
We consider sample sizes of $200,500,1000$ and $5000$, and $1,10,50$ and $100$ random directions in ${\mathbb R}^d$.}
\label{F:signexch}
\end{figure}
\section{An infinite-dimensional example}\label{S:infdim}
\subsection{Background}
As mentioned in \S\ref{S:CW},
there are several variants of the Cram\'er--Wold theorem,
including versions for Hilbert spaces \cite{CFR07}
and even Banach spaces \cite{CF09}.
These can be exploited to provide tests for
$G$-invariance in infinite-dimensional spaces.
We consider here the case of a Hilbert space.
Let ${\mathcal H}$ be a separable infinite-dimensional Hilbert space,
let $P$ be a Borel probability measure on ${\mathcal H}$,
and let $G$ be a group of invertible continuous linear
maps $T:{\mathcal H}\to {\mathcal H}$. We want to test the null hypothesis
\[
H_0: \quad PT^{-1}=P \quad\text{for all~}T\in G.
\]
Nearly everything goes through as before.
The only adjustment needed is in Theorem~\ref{T:CWgen},
because Lebesgue measure no longer makes sense
in infinite dimensions. Its role is taken by a non-degenerate gaussian
measure on ${\mathcal H}$. What this means is explained in detail in \cite[\S4]{CFR07}, where one can find the proof of the following result,
which serves as a replacement for Theorem~\ref{T:CWgen}.
\begin{theorem}[\protect{\cite[Theorem~4.1]{CFR07}}]\label{T:cwgauss}
Let ${\mathcal H}$ be a separable Hilbert space,
and let $\mu$ be a non-degenerate gaussian measure on ${\mathcal H}$.
Let $P,Q$ be Borel probability measures on ${\mathcal H}$.
Assume that
the absolute moments $m_N:=\int\|x\|^N\,dP(x)$ are all finite and satisfy
\begin{equation}
\sum_{N\ge1}m_N^{-1/N}=\infty.
\end{equation}
If the set ${\mathcal E}(P,Q)$ is of positive $\mu$-measure in ${\mathcal H}$,
then $P=Q$.
\end{theorem}
\subsection{Simulations for an infinite-dimensional example}
In this example, we perform a simulation study on the exchangeability test
for multidimensional functional data in the Hilbert space $\mathcal{L}^3:=\otimes_{i=1}^3 L^2[0,1]$.
We consider a sample of i.i.d.\ random functional vectors $\{\mathbf{X}_i\}_{i=1, \ldots,n} \subset \mathcal{L}^3$,
where the marginal components are defined as
\[
X_{i,j}(t):= m(t)+ \epsilon_{i,j}(t), \quad i \in \{1,2,\dots, n\} \text{~and~} j \in \{1,2,3\},
\]
where $m(t):= \cos (2 \pi t)$, and where $\epsilon_{1,j}, \ldots, \epsilon_{n,j}$ are i.i.d.\ gaussian processes with
\[
\textrm{Cov}(\epsilon_{i,j}(t), \epsilon_{i,j}(s))= \exp \left( - \vert s-t \vert \right).
\]
Finally, we assume that the correlation function between the marginal components satisfies
\[
\textrm{Cor}(\epsilon_{i,j_1}(t), \epsilon_{i,j_2}(t))=
\begin{cases}
0.5 -\delta, &\text{if~} j_1=1,~j_2=3,\\
0.5, &\text{if~} j_1=2,~j_2=3,\\
0.5+\delta, &\text{if~} j_1=1,~j_2=2.
\end{cases}
\]
For $\delta=0$, the functional vector is exchangeable.
The random projections are taken with respect to a standard multivariate brownian motion $\mathbf{W}$ on $[0,1]^3$, that is,
$$\langle \mathbf{X}, \mathbf{W} \rangle = \sum_{j=1}^3 \langle {X}_j, {W}_j \rangle, $$
where the ${W}_j$ are i.i.d.\ standard brownian motions on $[0,1]$ and $$\langle {X}_j, {W}_j \rangle= \int_0^1 {X}_j(t){W}_j(t)\, dt.$$
The functions are discretized on an equispaced grid of size $100$ in $[0,1]$.
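Concretely, once the curves are discretized, each projection is just a Riemann sum. A minimal sketch (illustration only; the grid size and the form of $m$ are as above):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)               # equispaced grid of size 100 on [0,1]
dt = t[1] - t[0]

def brownian(rng):
    """Standard brownian motion on [0,1]: cumulative sum of N(0, dt) increments."""
    return np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), t.size - 1))))

def inner(x, w):
    """Riemann approximation of the integral of x(s) w(s) over [0,1]."""
    return float(np.sum(x * w) * dt)

def project(X, W):
    """<X, W> = sum_j <X_j, W_j> for a three-component functional datum."""
    return sum(inner(X[j], W[j]) for j in range(3))

m = np.cos(2 * np.pi * t)
X = np.stack([m, m, m])                      # noiseless curves, for illustration
W = np.stack([brownian(rng) for _ in range(3)])
```

The projection is linear in $\mathbf{X}$, as the continuous inner product it approximates.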
Figure~\ref{functional} depicts a realization of the functional random vector for $\delta=0.3$.
\begin{figure}[ht]
\centering
\subfloat{\includegraphics[width=125mm]{Figuras/functional.pdf}}
\caption{Marginal curves of a simulation of $\mathbf{X}$ for $\delta=0.3$}
\label{functional}
\end{figure}
The statistic $D_{n,\ell}$ is calculated (as in Section~\ref{SS:testing2}) for the values $n \in \{250,500,1000\}$
and $\ell \in \{10,50,100\}$. The empirical power functions, computed from $1000$ replicates, are reported in
Table~\ref{Tab:pvalores}. The results are quite good.
\begin{table}
\caption{Empirical power function $D_{n,\ell}$ for $\delta \in [0,0.3]$, $n \in \{250,500,1000\}$ and $\ell \in \{10,50,100\}$. Significance level $=0.05$.} \label{Tab:pvalores}
\begin{center}
\begin{tabular}{c|ccc|ccc|ccc}
\toprule[0.4 mm]
& \multicolumn{3}{c|}{$n=250$} &\multicolumn{3}{c|}{$n=500$} &\multicolumn{3}{c}{$n=1000$} \\
\midrule[0.4 mm]
$\delta \, \, \backslash \,\, \ell$ \ & $10$ & $50$& $100$&$10$ & $50$& $100$&$10$ & $50$& $100$ \\
\hline
$0$ & 0.044 & 0.035 &0.051& 0.044& 0.040 & 0.045 &0.051 & 0.04 & 0.044 \\
$0.05$ & 0.050& 0.045 &0.059& 0.043 & 0.040 & 0.061 &0.078 & 0.065 & 0.080 \\
$0.1$ & 0.078 & 0.069&0.089& 0.116 & 0.103 & 0.152 &0.264& 0.266& 0.350\\
$0.15$ & 0.116 & 0.121&0.194& 0.318 & 0.342 & 0.474 &0.727 &0.877 & 0.946 \\
$0.20$ & 0.298 & 0.331&0.470& 0.643 & 0.842& 0.945 &0.926 & 0.999& 1.000\\
$0.25$& 0.576 & 0.721 &0.853& 0.851 & 0.995 & 1.00 & 0.977 & 1.000& 1.000\\
$0.30$ & 0.781 & 0.977 &0.997& 0.989 & 0.995& 1.00 &0.996 & 1.000 & 1.000\\
\toprule[0.4 mm]
\end{tabular}
\end{center}
\end{table}
\section{Real-data examples}
We conclude the article with two examples drawn from real datasets.
In both cases, the test is for exchangeability.
\subsection{Biometric measurements}
These data were collected by the Australian Institute of Sport in a study of how various characteristics of the blood varied with the sport, body size and sex of the athlete. The data set, see \cite{CW94}, is available in the package \textit{locfit} of the R software. It is composed of $202$ observations and $13$ variables. As in \cite{BQ21}, five covariates are considered: red cell count (RCC), haematocrit (Hc), haemoglobin (Hg), lean body mass (LBM) and height (Ht). It is clear that the marginal distributions are different, so the distribution is not exchangeable. In order to check the exchangeability of the copula, the marginal distributions are standardized by the transformation $U_n = F_n(X)$, where $F_n$ is the empirical cumulative distribution of the random variable~$X$. Figure~\ref{F:symIndex}(A) displays the bivariate symmetry index $S_n$ developed in \cite{GNQ12},
\[
S_n:= \int_0^1 \int_0^1 \left[ \hat{C}_n(u,v) - \hat{C}_n(v,u) \right]^2 \textrm{d}\hat{C}_n(u,v),
\]
where $\hat{C}_n$ is the empirical copula as in (\ref{E:empcop}).
The greatest asymmetry is observed between variables LBM and Hg,
but in all cases the test proposed in \cite{GNQ12} does not reject the symmetry null hypothesis
(the p-value between LBM and Hg is $0.72$).
\begin{figure}[ht]
\centering
\subfloat[]{\includegraphics[scale=0.62]{Figuras/simIndex.pdf}}
\subfloat[]{\includegraphics[scale=0.58]{Figuras/pvalores.pdf}}
\caption{\small (A) Bivariate symmetry index values $S_n$ for all pairs of copulas among the 5 variables considered in the Biometric measurements. (B) P-values of the test (with statistic $D_{n,\ell=50}$) for sub-sample of sizes $n=50,150, \ldots 550$ in the Statlog Satellite dataset (horizontal red line $\alpha=0.05$).}
\label{F:symIndex}
\end{figure}
We test the global five-dimensional symmetry of the copula. The components of each observation of the standardized sample are randomly permuted. From this permuted sample, we obtain the empirical distribution of the statistic $D_{n,\ell=50}$ under the exchangeability hypothesis (over 10,000 replicates). The p-value obtained for the sample is $0.0126$. Therefore, as in \cite{BQ21}, the exchangeability hypothesis is rejected.
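The calibration step just described can be sketched generically as follows. Here `stat` is a stand-in statistic, not $D_{n,\ell}$ itself: for illustration we use the Kolmogorov--Smirnov distance between the first two coordinates, which is driven to its null distribution by randomly permuting the components within each observation.

```python
import numpy as np

def ks_dist(a, b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return float(np.max(np.abs(Fa - Fb)))

def perm_pvalue(X, stat, n_perm, rng):
    """Permutation p-value: permute the coordinates within each observation."""
    obs = stat(X)
    null = [
        stat(np.stack([row[rng.permutation(X.shape[1])] for row in X]))
        for _ in range(n_perm)
    ]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

rng = np.random.default_rng(2)
stat = lambda X: ks_dist(X[:, 0], X[:, 1])
X_bad = np.column_stack([rng.uniform(size=100), 100 + rng.uniform(size=100)])
p = perm_pvalue(X_bad, stat, 99, rng)        # grossly non-exchangeable data
```

For the grossly non-exchangeable sample above, the permutation p-value is small and the null is rejected.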
\subsection{Statlog Satellite dataset}
The database consists of the multi-spectral values of pixels in $3\times3$ neighbourhoods in a satellite image, and is available in the UCI Machine Learning Repository (\url{https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/satimage/}). The sample size is 6435 images. Each line contains the pixel values in the four spectral bands (converted to ASCII) of each of the 9 pixels in the $3\times3$ neighbourhood. That is, each observation is represented by $36$ attributes. In this example, too, the marginal distributions are clearly different, so, as in the previous example, the variables are standardized, and the statistic under the null hypothesis is determined to test exchangeability of the copula. The test $D_{n,\ell=50}$ is performed for different sub-sample sizes $n=50,150, \ldots, 550$
(from the first to the $n$-th observation), similar to \cite{FGNV2012}.
Figure~\ref{F:symIndex}(B) displays the p-values obtained for each~$n$.
We observe that, with a sample size greater than $300$, the hypothesis of exchangeability is rejected.
\section{Conclusions and potential further developments}\label{S:conclusion}
We have shown how to exploit an extension of the Cram\'er--Wold theorem, Theorem~\ref{T:CWgen}, to use one-dimensional projections
to test for invariance of a probability measure under a group of linear transformations. As special cases, we obtain tests for
exchangeability and sign-invariant exchangeability.
The simulation results are very good,
and the algorithms we propose are computationally fast and suitable even for high-dimensional data.
Regarding the comparison with the proposal in \cite{HS17},
Table \ref{Ta:pot} shows a relative efficiency of our method \emph{vis-\`a-vis} its competitor
of order around 100\% for small values of the parameter~$\xi$.
Moreover, the proposal in \cite{HS17} seems to be very hard to implement in high-dimensional spaces.
We illustrate our method with a short study of two real-data examples.
As well as being effective, the methods developed in this article are quite flexible.
Our results can be applied to the case of multivariate functional data,
where we have vectors taking values in a Hilbert space or even a Banach space.
We can still employ the same algorithm as in \S\ref{SS:testing2},
using several random directions and finding the critical value by bootstrap,
except that, in the infinite-dimensional case, we should choose the random directions
(or the elements on the dual space) according to a non-degenerate gaussian measure.
We have also shown that, even in the context of testing for $G$-invariance,
where one of the measures is a linear transformation of the other,
Theorem~\ref{T:CWgen} remains sharp. This is the content of Theorem~\ref{T:sharpexch}.
However, if we have further \emph{a priori} knowledge of the distribution in question, then
Theorem~\ref{T:CWgen} can be improved.
For example, Heppes \cite[Theorem~$1'$]{He56} showed that,
if $P$ is a discrete probability measure on ${\mathbb R}^d$ supported on $k$ points,
and if $Q$ is an arbitrary Borel probability measure on ${\mathbb R}^d$ such that ${\mathcal E}(P,Q)$ contains at least $(k+1)$ subspaces, no two of which are contained in any single hyperplane,
then $P=Q$. This opens the door to potential improvements of our procedures in the case where
$P$ is known to be discrete, for example testing for the exchangeability of
sequences of Bernoulli random variables.
This idea is explored further in \cite{FMR22}.
\section*{Acknowledgement}
Fraiman and Moreno were supported by grant FCE-1-2019-1-156054, Agencia Nacional de Investigaci\'on e Innovaci\'on, Uruguay. Ransford was supported by grants from NSERC and the Canada Research Chairs program.
\bibliographystyle{plain}
\section{Introduction}
Let $d\ge2$ and let $P$ be a Borel probability measure on ${\mathbb R}^d$.
The \emph{image} or \emph{push-forward} of $P$ under a linear map $T:{\mathbb R}^d\to{\mathbb R}^d$
is the Borel probability measure $PT^{-1}$ on ${\mathbb R}^d$ defined by
\[
PT^{-1}(B):=P(T^{-1}(B)).
\]
The measure $P$ is said to be \emph{$T$-invariant} if $PT^{-1}=P$.
If $G$ is a group of invertible linear self-maps of ${\mathbb R}^d$, then
$P$ is \emph{$G$-invariant} if it is $T$-invariant for each $T\in G$.
For example, if $G$ is the group of $d\times d$
permutation matrices, then $P$ is $G$-invariant if and only if
it is an exchangeable distribution. If $G$ is the group of signed permutation
matrices, then $P$ is $G$-invariant if and only if it is a sign-invariant exchangeable
distribution. (These terms will be defined in detail later in the article.)
Of course one can imagine many more examples.
The purpose of this paper is to develop a methodology for testing whether a given
probability measure $P$ on ${\mathbb R}^d$ is $G$-invariant for a given group~$G$.
We know of no previous work on this subject in this degree of generality,
though certain special cases, notably testing for exchangeability or sign-invariant exchangeability,
have been extensively treated in the literature. We shall provide detailed references when we come to discuss these special cases later in the paper.
A potential obstacle is the fact that $G$ may be quite large, possibly even infinite.
Another possible difficulty is that the dimension $d$ of the underlying space may be very large.
To circumvent these problems, we exploit two devices.
The first idea is a very simple one, namely that, to test for $G$-invariance, it suffices to
test for $T$-invariance as $T$ runs through a set of generators for~$G$. The point is that
$G$ may be generated by a very small set of $T$, even though $G$ itself is large or even infinite.
This idea is explored in \S\ref{S:generators}.
The second idea is that, to test whether $P=PT^{-1}$,
one can project $P$ and $PT^{-1}$
onto a randomly chosen one-dimensional subspace,
and test whether the projected measures are equal.
This reduces a $d$-dimensional problem to a one-dimensional one,
for which well-known techniques are available.
The justification for this procedure is a variant of the
Cram\'er--Wold theorem. This idea is described in detail in \S\ref{S:CW}.
Based on these two ideas, we develop a testing procedure in \S\ref{S:testing}.
Under suitable hypotheses on $P$ and $G$, our test is consistent and distribution-free.
We also consider a variant of this procedure, based on projecting onto several randomly-chosen one-dimensional
subspaces. This has the effect of increasing the power of the test.
There follows a discussion of three special cases.
The case of exchangeable distributions is treated in \S\ref{S:exch}
and that of sign-invariant exchangeable distributions in \S\ref{S:signexch}.
We describe the background, perform some simulations, and,
in the case of exchangeability, we compare our results with those
obtained by other techniques in the literature.
The third case, treated in \S\ref{S:infdim},
illustrates the flexibility of our method. In this case,
${\mathbb R}^d$ is replaced by an infinite-dimensional Hilbert space.
The necessary adjustments to our method are described,
and illustrated with a further simulation.
The article concludes with two examples drawn from real datasets,
one concerning biometric measurements, and the other satellite images.
\section{Generators}\label{S:generators}
Recall that a group $G$ is said to be \emph{generated} by a subset $A$ if
every element $g$ of $G$ can be written as a finite product $g=g_1\cdots g_m$,
such that, for each $j$, either $g_j\in A$ or $g_j^{-1}\in A$.
Equivalently, $G$ is generated by $A$ if $A$ is contained in no proper subgroup of $G$.
In our context, the interest of this notion stems from the following simple proposition.
\begin{proposition}
Let $P$ be a Borel probability measure on ${\mathbb R}^d$,
let $G$ be a group of invertible linear maps $T:{\mathbb R}^d\to{\mathbb R}^d$,
and let $A$ be a set of generators for $G$.
Then $P$ is $G$-invariant if and only if $P$ is $T$-invariant for each $T\in A$.
\end{proposition}
\begin{proof}
Define
\[
G_0:=\{T\in G: PT^{-1}=P\}.
\]
Obviously $G_0$ contains the identity. Also, it is easily checked that,
if $T_1,T_2\in G_0$, then also $T_1T_2\in G_0$ and $T_1^{-1}\in G_0$.
Therefore $G_0$ is a subgroup of $G$. By assumption, $G_0$ contains $A$.
As $A$ generates $G$, it follows that $G_0=G$.
\end{proof}
\begin{example}\label{Ex:2gen}
Let $G$ be the group of $d\times d$ permutation matrices
(i.e.\ matrices with one entry $1$ in each row and each column, and zeros elsewhere).
Thus each $T\in G$ permutes the basis vectors of ${\mathbb R}^d$,
say $Te_j=e_{\sigma(j)}$,
where $\sigma$ is a permutation of $\{1,2,\dots,d\}$.
The correspondence $T\leftrightarrow\sigma$ is an isomorphism
between $G$ and $\Sigma_d$, the group of permutations of $\{1,2,\dots,d\}$.
It is well known that, even though $\Sigma_d$
contains $d!$ permutations, it can be generated using just two permutations,
for example the transposition $\sigma_1=(1,2)$ and the cycle $\sigma_2=(1,2,\dots,d)$
(see e.g.\ \cite[Example~2.30]{Ro78}).
Thus $G$ has a generating set consisting of just two matrices.
\end{example}
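This can be checked computationally: a breadth-first closure under composition, starting from the two generators, recovers all $d!$ permutations (a sketch for $d=5$; in a finite group, closure under products of the generators alone already yields the generated subgroup):

```python
def closure(generators):
    """All finite products of the generators (sufficient in a finite group)."""
    group, frontier = set(generators), list(generators)
    while frontier:
        g = frontier.pop()
        for h in generators:
            gh = tuple(g[i] for i in h)      # the composition g after h, as index tuples
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

d = 5
sigma1 = (1, 0) + tuple(range(2, d))         # the transposition (1,2), 0-indexed
sigma2 = tuple(range(1, d)) + (0,)           # the cycle (1,2,...,d), 0-indexed
G = closure([sigma1, sigma2])
```

The closure has $5!=120$ elements and contains the identity, confirming that the two generators suffice.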
\begin{example}\label{Ex:sign}
Let $G$ be the group of $d\times d$ signed permutation matrices
(i.e.\ matrices with one entry $\pm 1$ in each row and each column, and zeros elsewhere).
This is sometimes called the hyperoctahedral group, denoted~$B_d$.
It is the group of symmetries of the $d$-dimensional cube. It contains $d!2^d$ elements,
but, like the symmetric group $\Sigma_d$, it can be generated by just two elements.
However, these elements are more complicated to describe,
see \cite[Proposition~6]{Ja04}.
On the other hand, it is easy to give a set of three generators:
for example, one can take $\{T_1,T_2,T_3\}$,
where $T_1,T_2$ are the matrices corresponding to the same two permutations $\sigma_1,\sigma_2$ as before,
and $T_3$ is the diagonal matrix given by $T_3:=\diag(-1,1,1,\dots,1)$.
\end{example}
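The count $d!\,2^d$ can be verified numerically by closing the set $\{T_1,T_2,T_3\}$ under matrix multiplication (a sketch for $d=3$, where the hyperoctahedral group has $48$ elements):

```python
import numpy as np

def matrix_closure(gens):
    """BFS closure of the generated matrix group (entries stay in {-1, 0, 1})."""
    key = lambda M: tuple(np.rint(M).astype(int).ravel())   # hashable exact key
    group = {key(g): g for g in gens}
    frontier = list(group.values())
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = g @ h
            if key(gh) not in group:
                group[key(gh)] = gh
                frontier.append(gh)
    return group

d = 3
T1 = np.eye(d)[[1, 0, 2]]                    # transposition sigma_1 = (1,2)
T2 = np.eye(d)[[1, 2, 0]]                    # cycle sigma_2 = (1,2,3)
T3 = np.diag([-1.0, 1.0, 1.0])               # a single sign change
B3 = matrix_closure([T1, T2, T3])
```

The closure has $3!\,2^3 = 48$ elements, as claimed.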
\section{Cram\'er--Wold theorems}\label{S:CW}
In this section we recall the Cram\'er--Wold theorem, as well as a more recent extension.
We also discuss the sharpness of this result in the context of testing for $G$-invariance.
Let $P,Q$ be Borel probability measures on ${\mathbb R}^d$, where $d\ge2$.
We denote by ${\mathcal E}(P,Q)$ the set of vectors $x\in{\mathbb R}^d$
such that $P\pi_x^{-1}=Q\pi_x^{-1}$, where $\pi_x:{\mathbb R}^d\to{\mathbb R}^d$ is the orthogonal projection onto the one-dimensional subspace ${\mathbb R} x$ spanned by $x$.
Equivalently, ${\mathcal E}(P,Q)$ is the set of $x\in{\mathbb R}^d$ such that $\phi_P(tx)=\phi_Q(tx)$ for all $t\in{\mathbb R}$,
where $\phi_P,\phi_Q$ denote the characteristic functions of $P,Q$ respectively.
The set ${\mathcal E}(P,Q)$ is a closed cone (not necessarily convex) in ${\mathbb R}^d$.
For detailed proofs of all these facts, see \cite[\S2]{CFR07}.
The following result is a restatement in our notation of a
well-known theorem of Cram\'er and Wold (\cite{CW36}).
\begin{theorem}[\protect{\cite[Theorem~I]{CW36}}]\label{T:CW}
Let $P,Q$ be Borel probability measures on~${\mathbb R}^d$, where $d\ge2$.
If ${\mathcal E}(P,Q)={\mathbb R}^d$, then $P=Q$.
\end{theorem}
There are several extensions of this theorem,
in which one assumes more about the nature of the measures $P,Q$
and less about the size of ${\mathcal E}(P,Q)$.
Articles on this subject include those of R\'enyi \cite{Re52},
Gilbert \cite{Gi55}, Heppes \cite{He56}, B\'elisle--Mass\'e--Ransford \cite{BMR97}
and Cuesta-Albertos--Fraiman--Ransford \cite{CFR07}.
We cite one such result, taken from \cite{CFR07}.
\begin{theorem}[\protect{\cite[Corollary~3.2]{CFR07}}]\label{T:CWgen}
Let $P,Q$ be Borel probability measures on~${\mathbb R}^d$, where $d\ge2$. Assume that
the absolute moments $m_N:=\int\|x\|^N\,dP(x)$ are all finite and satisfy
\begin{equation}\label{E:Carleman}
\sum_{N\ge1}m_N^{-1/N}=\infty.
\end{equation}
If the set ${\mathcal E}(P,Q)$ is of positive Lebesgue measure in ${\mathbb R}^d$,
then $P=Q$.
\end{theorem}
The moment condition \eqref{E:Carleman} is slightly less restrictive than demanding that the moment generating function of $P$ be finite.
Just how essential is this condition?
The brief answer is that, without it, Theorem~\ref{T:CWgen} breaks down dramatically.
Indeed, given a moment sequence $(m_N)$ that fails to satisfy \eqref{E:Carleman},
one can find probability measures $P,Q$ whose moments are bounded by $m_N$
and such that ${\mathcal E}(P,Q)$ has positive measure (indeed, it even has non-empty interior),
yet $P\ne Q$.
See \cite{CFR07} for an extensive discussion of this topic.
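As a sanity check on \eqref{E:Carleman}, one can verify numerically that a standard Gaussian satisfies the condition: its absolute moments are $m_N = 2^{N/2}\,\Gamma((N+1)/2)/\sqrt{\pi}$, so $m_N^{-1/N} \asymp N^{-1/2}$ and the partial sums grow without bound. A sketch using only the standard library:

```python
from math import exp, lgamma, log, pi

def log_abs_moment(N):
    """log m_N for Z ~ N(0,1), where m_N = 2^(N/2) Gamma((N+1)/2) / sqrt(pi)."""
    return 0.5 * N * log(2.0) + lgamma((N + 1) / 2.0) - 0.5 * log(pi)

def carleman_partial_sum(K):
    """Partial sum of m_N^(-1/N), computed on the log scale to avoid overflow."""
    return sum(exp(-log_abs_moment(N) / N) for N in range(1, K + 1))
```

Since the terms behave like $c/\sqrt{N}$, the partial sums grow like $\sqrt{K}$, roughly doubling when $K$ is multiplied by four.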
However, we are eventually going to apply Theorem~\ref{T:CWgen} in the special case when
$Q=PT^{-1}$, where $T:{\mathbb R}^d\to{\mathbb R}^d$ is an invertible linear map.
It is \emph{a priori} conceivable that Theorem~\ref{T:CWgen} might be improvable
for pairs of measures $(P,Q)$ of this particular form.
The following sharpness result, which we believe to be new, shows that this is not so,
at least in the case when $T$ is of finite order.
\begin{theorem}\label{T:sharpexch}
Let $T:{\mathbb R}^d\to{\mathbb R}^d$ be a linear map
such that $T\ne I$ but $T^n=I$ for some $n\ge2$.
Let $(M_N)_{N\ge0}$ be a positive sequence satisfying
\[
M_0=1,
\quad
M_N^2\le M_{N-1}M_{N+1}~(N\ge1)
\quad\text{and}\quad
\sum_{N\ge1}M_N^{-1/N}<\infty.
\]
Then there exists a Borel probability measure $P$ on ${\mathbb R}^d$ such that
\begin{itemize}
\item $\int\|x\|^N\,dP(x)\le M_N$ for all $N\ge0$,
\item the cone ${\mathcal E}(P,PT^{-1})$ is of positive measure in ${\mathbb R}^d$, but
\item $PT^{-1}\ne P$.
\end{itemize}
\end{theorem}
\begin{remark}
The condition that $T^n=I$ for some $n\ge 2$ is automatically satisfied
if $T$ belongs to a finite group $G$, as will be the case in all the examples that we
shall study. The article \cite{Ko03} contains a criterion for when $T^n=I$.
\end{remark}
The main new idea in Theorem~\ref{T:sharpexch} is the construction given in
the following lemma.
\begin{lemma}\label{L:sharpexch}
Let $Q,R$ be Borel probability measures on ${\mathbb R}^d$.
Let $T:{\mathbb R}^d\to{\mathbb R}^d$ be a linear map such that $T^n=I$ for some $n\ge2$.
Define
\[
P:=\frac{1}{n}\Bigl(Q+\sum_{j=1}^{n-1}RT^{-j}\Bigr).
\]
Then $P$ is a Borel probability measure on ${\mathbb R}^d$ and, writing ${\mathcal E}_0:={\mathcal E}(Q,R)$, we have
\begin{equation}\label{E:sharpexch}
{\mathcal E}_0\cap T^{-1}({\mathcal E}_0)\subset {\mathcal E}(P,PT^{-1})\subset {\mathbb R}^d\setminus ({\mathcal E}_0\bigtriangleup T^{-1}({\mathcal E}_0)).
\end{equation}
\end{lemma}
\begin{proof}
Clearly $P$ is a Borel probability measure on ${\mathbb R}^d$.
Also, since $T^n=I$, the measure $\tilde{P}:=\sum_{j=0}^{n-1}RT^{-j}$ is $T$-invariant,
so, since $P=(1/n)(Q-R+\tilde{P})$, we get
\[
P-PT^{-1}=\frac{1}{n}\Bigl(Q-R-(Q-R)T^{-1}\Bigr).
\]
Using the characterization of ${\mathcal E}$ in terms of characteristic functions, it follows that
\begin{equation}\label{E:phiequiv}
\begin{aligned}
x\in{\mathcal E}(P,PT^{-1})
&\iff \phi_{P-PT^{-1}}(tx)=0 \quad\forall t\in{\mathbb R}\\
&\iff \phi_{(Q-R)}(tx)-\phi_{(Q-R)T^{-1}}(tx)=0 \quad\forall t\in{\mathbb R}\\
&\iff \phi_{(Q-R)}(tx)-\phi_{(Q-R)}(tTx)=0 \quad\forall t\in{\mathbb R}.
\end{aligned}
\end{equation}
To prove the first inclusion in \eqref{E:sharpexch},
let $x\in{\mathcal E}_0\cap T^{-1}({\mathcal E}_0)$.
Then both $x,Tx\in{\mathcal E}_0={\mathcal E}(Q,R)$,
so both $\phi_{(Q-R)}(tx)=0$ and $\phi_{(Q-R)}(tTx)=0$ for all $t\in{\mathbb R}$.
By the equivalence \eqref{E:phiequiv}, it follows that $x\in{\mathcal E}(P,PT^{-1})$.
This establishes the first inclusion.
For the second inclusion in \eqref{E:sharpexch},
let $x\in{\mathcal E}_0\bigtriangleup T^{-1}({\mathcal E}_0)$.
Then exactly one of $x$ and $Tx$ lies in ${\mathcal E}_0={\mathcal E}(Q,R)$,
so there exists $t\in{\mathbb R}$ such that exactly one of $\phi_{(Q-R)}(tx)$
and $\phi_{(Q-R)}(tTx)$ is zero.
Their difference is therefore non-zero,
so, by the equivalence \eqref{E:phiequiv} again,
it follows that $x\notin {\mathcal E}(P,PT^{-1})$.
This establishes the second inclusion and completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:sharpexch}]
Let $B$ be a closed ball in ${\mathbb R}^d$ such that $0\notin B$.
By \cite[Theorem~5.4]{BMR97},
there exist mutually singular Borel probability measures $Q,R$ on ${\mathbb R}^d$
such that both $\int\|x\|^N\,dQ(x)\le M_N$ and $\int\|x\|^N\,dR(x)\le M_N$ for all $N\ge0$,
and also such that ${\mathcal E}_0:={\mathcal E}(Q,R)$ contains all lines not meeting $B$.
Since $T\ne I$,
by choosing $B$ appropriately we may ensure that ${\mathcal E}_0\ne T^{-1}({\mathcal E}_0)$
and also that ${\mathcal E}_0\cap T^{-1}({\mathcal E}_0)$ has positive Lebesgue measure.
Let $P$ be the probability measure constructed from $Q,R$ in Lemma~\ref{L:sharpexch}.
By the left-hand inclusion in \eqref{E:sharpexch}, ${\mathcal E}(P,PT^{-1})$ has positive Lebesgue measure.
Also, by the right-hand inclusion, ${\mathcal E}(P,PT^{-1})$ is a proper subset of~${\mathbb R}^d$, so $P\ne PT^{-1}$.
It remains to check that $P$ satisfies the moment inequality.
From the definition of $P$ in Lemma~\ref{L:sharpexch}, we have
\begin{align*}
\int \|x\|^N\,dP(x)
&=\frac{1}{n}\Bigl(\int \|x\|^N\,dQ(x)+\sum_{j=1}^{n-1}\int\|T^j(x)\|^N\,dR(x)\Bigr)\\
&\le \frac{1}{n}\Bigl(M_N+\sum_{j=1}^{n-1}\|T\|^{jN}M_N\Bigr)\le \|T\|^{nN}M_N.
\end{align*}
This is almost what we want.
Repeating the argument with $(M_N)_{N\ge0}$ replaced by $(\|T\|^{-nN}M_N)_{N\ge0}$
at the outset, we get $\int\|x\|^N\,dP(x)\le M_N$.
\end{proof}
\section{General testing procedures}\label{S:testing}
\subsection{A distribution-free procedure}\label{SS:testing1}
Consider the following set-up.
Let $d\ge2$, let $P$ be a Borel probability measure on ${\mathbb R}^d$,
and let $G$ be a finite multiplicative group of $d\times d$ orthogonal matrices.
We want to test the null hypothesis
\[
H_0:\quad PT^{-1}=P \quad \text{for all~}T \in G.
\]
Assume that $G$ is generated by $k$ elements, say $\{T_1,\dots,T_k\}$.
Then, as we have seen in \S\ref{S:generators},
it suffices just to test whether $PT_j^{-1}=P$ for $j=1,2,\dots, k$.
We now proceed as follows.
Let $\aleph_n:=\{{\mathbf X}_1, \ldots, {\mathbf X}_n\}$ be an i.i.d.\ sample of vectors in ${\mathbb R}^d$ with common distribution $P$.
Consider the following testing procedure.
\begin{enumerate}
\item Split the sample $\aleph_n$ at random into two disjoint subsamples
\[
\aleph^{n_1}:=\{{\mathbf X}_1, \ldots, {\mathbf X}_{n_1}\},
\quad
\aleph^{n_2}:=\{{\mathbf X}'_1, \ldots, {\mathbf X}'_{n_2}\},
\]
providing two independent samples with $n_1+n_2=n$, which we typically take to be of equal size.
\item Generate $k$ independent random directions $h_1, \ldots, h_k$,
uniformly distributed on the unit sphere of ${\mathbb R}^d$.
\item Given a unit vector $h\in{\mathbb R}^d$,
denote by $F_h$ the one-dimensional distribution of $\langle h,{\mathbf X}\rangle$,
and by $F_h^{T_j}$ that of $\langle h,T_j({\mathbf X})\rangle$ for $j=1, \ldots, k$.
The previous results suggest testing whether $F_{h_j}^{T_j}=F_{h_j}$ for $j=1, \ldots, k$.
\item Next, we perform $k$ Kolmogorov--Smirnov two-sample tests between
\[
\langle h_j,\aleph^{n_1}\rangle
\quad\text{and}\quad
\langle h_j,\aleph^{n_2}_{T_{j}}\rangle, \quad j=1, \ldots, k,
\]
each one at level $\alpha/k$, where
\[
\langle h, \aleph^{n_1}\rangle:=\Bigl\{\langle h,{\mathbf X}_1\rangle, \ldots, \langle h,{\mathbf X}_{n_1}\rangle \Bigr\}
\]
and
\[
\langle h, \aleph^{n_2}_{T}\rangle:=\Bigl\{\langle h,T({\mathbf X}'_1)\rangle, \ldots, \langle h,T({\mathbf X}'_{n_2})\rangle\Bigr\}.
\]
\item Given $h, T$, let $F_{n_1,h} ^{T}$ and $F_{n_2,h}^{T}$ be the empirical distributions of
$\langle h, \aleph^{n_1}\rangle$ and $\langle h, \aleph^{n_2}_{T}\rangle$ respectively.
The Kolmogorov--Smirnov statistic for our problem is given by
\[
KS(n_1,n_2, h, T):= \sup_{t \in {\mathbb R}} \bigl| F_{n_1,h}^{T}(t) - F_{n_2,h}^{T}(t)\bigr|.
\]
\item Given $\alpha$ with $0<\alpha<1$, choose $c_{\alpha/k}$ such that
\[
P\Bigl(KS(n_1, n_2, h_j,T_j)> c_{\alpha/k}\Bigr)= \alpha/k.
\]
Observe that $c_{\alpha/k}$ does not depend on~$j$.
This is because
\[
\langle h_j,T_j({\mathbf X})\rangle=\langle T_j^*(h_j),{\mathbf X}\rangle,
\]
and all the vectors $T_j^*(h_j)$ have the same distribution, namely the uniform distribution on the unit
sphere in ${\mathbb R}^d$ (it is here that we need the assumption that the $T_j$ are orthogonal matrices).
\item Reject the null hypothesis $H_0$ if at least one of the $k$ Kolmogorov--Smirnov tests rejects.
\end{enumerate}
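The steps above can be sketched in code. This is a minimal illustration, not the implementation used for the simulations: the function name is ours, we assume \texttt{scipy} is available, and the asymptotic p-value returned by \texttt{ks\_2samp} stands in for the exact critical value $c_{\alpha/k}$.

```python
import numpy as np
from scipy.stats import ks_2samp

def invariance_test(X, generators, alpha=0.05, rng=None):
    """Sketch of the split-sample projection test of H0: P T^{-1} = P.

    X          : (n, d) array, an i.i.d. sample from P.
    generators : list of k orthogonal (d, d) matrices T_1, ..., T_k.
    Each of the k projected two-sample KS tests is run at level alpha/k
    (Bonferroni), so the overall level lies between alpha/k and alpha.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    k = len(generators)
    # Step 1: split the sample at random into two disjoint halves.
    perm = rng.permutation(n)
    X1, X2 = X[perm[: n // 2]], X[perm[n // 2 :]]
    for T in generators:
        # Step 2: a random direction, uniform on the unit sphere.
        h = rng.standard_normal(d)
        h /= np.linalg.norm(h)
        # Steps 3-5: compare <h, X1> with <h, T(X2)> by a two-sample KS test.
        p1 = X1 @ h
        p2 = (X2 @ T.T) @ h
        # Steps 6-7: reject H0 if any individual test rejects at level alpha/k.
        if ks_2samp(p1, p2).pvalue < alpha / k:
            return True
    return False
```

For exchangeability, the generators would be the permutation matrices of Example~\ref{Ex:2gen}; any orthogonal generators of $G$ can be passed in.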
\begin{theorem}
Let $P$ be a Borel probability measure on ${\mathbb R}^d$, where $d\ge2$.
Assume that:
\begin{enumerate}[\normalfont(i)]
\item the absolute moments $m_N:=\int\|x\|^N\,dP(x)$ are all finite and satisfy
the condition $\sum_{N\ge1}m_N^{-1/N}=\infty$;
\item $P$ is absolutely continuous with respect to Lebesgue measure on ${\mathbb R}^d$.
\end{enumerate}
Then the preceding test has a level between $\alpha/k$ and $\alpha$ under the null hypothesis~$H_0$,
and is consistent under the alternative.
\end{theorem}
\begin{proof}
By the choice of $c_{\alpha/k}$ and Bonferroni's inequality, we have
\[
\alpha/k \le P\Bigl(\bigcup_{j=1}^k\{KS(n_1, n_2, h_j, T_j) > c_{\alpha/k}\} \Bigr ) \le \alpha.
\]
Thus the test has a level between $\alpha/k$ and $\alpha$.
The proof of consistency follows the same lines as the one given in \cite[Theorem~3.1]{CFR06}.
Under the alternative hypothesis,
there exists $j_0\in\{1,\dots,k\}$ such that $P \ne PT^{-1}_{j_0}$.
Then, by Theorem~\ref{T:CWgen},
for almost all $h\in{\mathbb R}^d$, there exists $t_h\in{\mathbb R}$ such that
$\delta_h:=|F_{h}^{T_{j_0}}(t_h) - F_{h}(t_h)| >0$.
As $P$ is absolutely continuous with respect to Lebesgue measure on ${\mathbb R}^d$,
both $F_h$ and $F_h^{T_{j_0}}$ are continuous,
and so, by the strong law of large numbers,
\[
\lim_{n\to\infty}F_{n,h}(t_h) = F_{h}(t_h)
\quad\text{and}\quad
\lim_{n\to\infty}F_{n,h}^{T_{j_0}}(t_h) = F_{h}^{T_{j_0}}(t_h) \quad\text{a.s.}
\]
Hence, by the triangle inequality,
\[
\liminf_{n_1,n_2\to\infty}KS(n_1,n_2,h,T_{j_0})\ge \delta_h >0\quad\text{a.s.}
\]
This establishes consistency.
\end{proof}
\subsection{Testing using more than one random projection} \label{SS:testing2}
The algorithm presented in \S\ref{SS:testing1} is consistent and distribution-free.
In this section we consider the case where we use more than one projection,
as suggested in \cite{CFR07},
in order to increase the power of the test.
Moreover, we no longer need to assume that the matrices $T_j$ are orthogonal.
The price to be paid is that the proposed statistic is no longer distribution-free.
In this case we do not need to split the sample into two subsamples.
The procedure simply consists of taking $\ell$ random directions $h_1, \ldots, h_\ell$,
calculating, for each direction $h_i$, the statistic $KS(n, h_i, T_j)$ for the univariate projected data,
and then taking the maximum over all of them:
$$
D_{n,\ell}:=
\max_{\substack{i=1, \ldots, \ell\\j=1,\dots,k}}KS(n, h_i, T_j)
=\max_{\substack{i=1, \ldots, \ell\\j=1,\dots,k}}\sup_{t \in \mathbb R} | F_{n,h_i}(t) - F_{n,h_i}^{T_j} (t)|.
$$
Here $F_{n,h_i}(t)$ and $F^{T_j}_{n,h_i}(t)$ are the empirical distributions of
\[
\langle h_i,\aleph^{n}\rangle
:=\{\langle h_i,{\mathbf X}_1\rangle, \ldots, \langle h_i,{\mathbf X}_{n}\rangle \}
\]
and
\[
\langle h_i,\aleph^{n}_{T_j}\rangle
:=\{\langle h_i,T_j({\mathbf X}_{1})\rangle , \ldots, \langle h_i,T_j({\mathbf X}_{n})\rangle \},
\]
for $i=1, \ldots, \ell$ and $j=1, \ldots, k$, where $k$ is the number of generators of the group.
Since the statistic is no longer distribution-free for $\ell\ge 2$,
in order to obtain the critical value for a level-$\alpha$ test,
we approximate the distribution using bootstrap on the original sample ${\mathbf X}_1, \ldots, {\mathbf X}_n$
by generating a large enough number $B$ of values of $D_{n,\ell}$ for each bootstrap sample.
More precisely, for $r=1, \ldots, B$ repeat:
\begin{enumerate}
\item Generate a bootstrap sample of $\mathbf X_1^*, \ldots, \mathbf X_n^*$, by random sampling with replacement from $\aleph_n$, and generate $(\ell+k)$ i.i.d.\ random directions $h_1, \ldots, h_{\ell+k}$.
\item Calculate $D_{n,\ell}^*$ based on $\mathbf X_1^*, \ldots, \mathbf X_n^*$ and $T_j(\mathbf X_1^*), \ldots, T_j(\mathbf X_n^*)$.
\end{enumerate}
We end up with a sample ${\mathcal D}^*:= \{D_{n,\ell}^{*1}, \ldots , D_{n,\ell}^{*B}\}$ of size $B$,
and take as critical value the $(1-\alpha)$-quantile of the empirical distribution of ${\mathcal D}^*$.
The validity of the bootstrap in this case follows from \cite[Theorems~3 and~4]{Pr95}.
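The statistic $D_{n,\ell}$ and its bootstrap critical value can be sketched as follows. This is an illustrative sketch only (function names are ours, \texttt{scipy} is assumed, and we simply redraw $\ell$ fresh directions for each bootstrap replicate):

```python
import numpy as np
from scipy.stats import ks_2samp

def D_stat(X, generators, H):
    """D_{n,l}: maximum over directions h_i (rows of H) and generators T_j
    of the KS distance between the projections <h_i, X> and <h_i, T_j(X)>."""
    D = 0.0
    for h in H:
        a = X @ h
        for T in generators:
            b = (X @ T.T) @ h
            # two-sample KS statistic = sup |F_{n,h}(t) - F_{n,h}^{T}(t)|
            D = max(D, ks_2samp(a, b).statistic)
    return D

def bootstrap_critical_value(X, generators, ell=50, B=500, alpha=0.05, rng=None):
    """(1-alpha)-quantile of B bootstrap replicates of D_{n,l}, each computed
    from a resample of X and a fresh set of ell random unit directions."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    boot = np.empty(B)
    for r in range(B):
        Xb = X[rng.integers(0, n, n)]                  # resample with replacement
        H = rng.standard_normal((ell, d))
        H /= np.linalg.norm(H, axis=1, keepdims=True)  # uniform unit directions
        boot[r] = D_stat(Xb, generators, H)
    return float(np.quantile(boot, 1 - alpha))
```

One then rejects $H_0$ when $D_{n,\ell}$ computed on the original sample exceeds this critical value.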
\section{Application to exchangeability}\label{S:exch}
\subsection{Background}\label{SS:exchback}
A $d$-tuple of random variables ${\mathbf X}:=(X_1,\dots,X_d)$ is said to be \emph{exchangeable} if
it has the same distribution as $(X_{\sigma(1)},\dots,X_{\sigma(d)})$
for every permutation $\sigma$ of $\{1,\dots,d\}$.
This is equivalent to demanding that the distribution $P$ of ${\mathbf X}$ be $G$-invariant,
where $G$ is the group of $d\times d$ permutation matrices.
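The fact, used repeatedly below, that this group is generated by just two elements (a transposition and a $d$-cycle, cf.\ Example~\ref{Ex:2gen}) is easy to check numerically. A small sketch, with helper names of our own choosing, builds the two generator matrices for $d=4$ and recovers all $4!=24$ permutation matrices by closing under products:

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix M with (M x)_i = x_{p(i)}."""
    d = len(p)
    M = np.zeros((d, d), dtype=int)
    for i, j in enumerate(p):
        M[i, j] = 1
    return M

d = 4
# Two generators of the symmetric group: the transposition (1 2)
# and the full d-cycle (1 2 ... d).
T1 = perm_matrix((1, 0) + tuple(range(2, d)))
T2 = perm_matrix(tuple(range(1, d)) + (0,))

# Close {T1, T2} under matrix products; since the group is finite,
# products of the generators exhaust the whole group.
group = {T1.tobytes(): T1, T2.tobytes(): T2}
frontier = [T1, T2]
while frontier:
    A = frontier.pop()
    for B in (T1, T2):
        C = A @ B
        if C.tobytes() not in group:
            group[C.tobytes()] = C
            frontier.append(C)
```

This is why, in the tests of \S\ref{S:testing}, $k=2$ suffices for exchangeability in any dimension.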
The notion of exchangeability plays a central role in probability and statistics.
See for instance \cite{Al85} for a detailed study of exchangeability,
and \cite{Ki78} for some of its important uses.
In particular, permutation tests provide exact level tests for a wide variety of practical testing problems,
but they rely on the assumption of exchangeability.
From the point of view of applications,
it is quite common to assume that
a given vector of data $(X_1, \ldots, X_n)$
(where the information $X_i$ corresponds to the $i$-th subject of a study)
does not depend on the order in which the data are collected.
Although the $X_i$ are quite often assumed to be independent and identically distributed,
this assumption is strictly stronger than exchangeability.
The relationship is made precise by de Finetti's theorem,
according to which, if ${\mathbf X}:= (X_1, X_2, \ldots)$
is a sequence of random variables
whose distribution is invariant under permutations of finitely many coordinates,
then the law of ${\mathbf X}$ is a mixture of i.i.d.\ measures.
De Finetti's theorem applies to an infinite exchangeable sequence of random variables,
but there are also versions that apply to finite exchangeable sequences,
see e.g.\ \cite{DF80, JKY16, KY19} and the references therein.
The problem of testing for exchangeability has been considered quite extensively in the literature.
Modarres \cite{Mo08} compares a number of different methods: the run test, the nearest neighbour test,
the rank test, the sign test and the bootstrap test.
The problem of testing exchangeability in an on-line mode has been considered by
Vovk, Nouretdinov and Gammerman \cite{VNG03}.
The data are observed sequentially,
and the procedure provides a way to monitor on-line the evidence against the exchangeability assumption,
based on exchangeability martingales.
A test for the case of binary sequences can be found in \cite{RRLK21}, using a similar approach.
Genest, Ne\v{s}lehov\'a and Quessy \cite{GNQ12} propose an interesting test for a closely related problem: exchangeability of copulas for two-dimensional data. This has been extended to arbitrary finite dimensions by Harder and Stadtm\"uller \cite{HS17},
using the observation that $\Sigma_d$ can be generated by just two elements (see Example~\ref{Ex:2gen} above). See also the recent article \cite{BQ21} by Bahraoui and Quessy, where they also consider the problem of testing if the copula is exchangeable, but, instead of using empirical copulas as in the previous results, they consider a test based on the copula characteristic function. As they mention:
``From a modeling point-of-view, it may be of interest to check whether the copula of a population is exchangeable. The main point here is that this step may be accomplished without making any assumption on the marginal distributions. Hence, a vector can have exchangeable components at the level of its dependence structure, that is, its copula, whether or not its marginal distributions are identical.'' Under the extra assumption that all marginals have the same distribution, this provides a test for exchangeability of the distributions.
However, they need to use bootstrap in order to perform their test.
In the next subsection, we perform some simulations to show how the test procedures described in \S\ref{S:testing}
apply to this situation. In \S\ref{SS:exchsim2} we compare our test for exchangeability with the one proposed by Harder and Stadtm\"uller in \cite{HS17}.
\subsection{Simulations for the test for exchangeability}\label{SS:exchsim1}
We consider a sample of a multivariate normal distribution in dimension $d$, with mean zero and covariance matrix given by a mixture
$(1-\xi)\Sigma^{(1)} +\xi \Sigma^{(2)},$ where $\Sigma^{(1)}$ corresponds to an exchangeable distribution and $\Sigma^{(2)}$ is a Toeplitz matrix, given by
\[
\Sigma^{(1)}:=
\begin{pmatrix}
1 & \rho & \cdots & \rho\\
\rho& 1 & \cdots & \rho\\
\vdots & \vdots & \ddots & \vdots\\
\rho& \rho & \cdots & 1
\end{pmatrix},
\quad
\Sigma^{(2)}:=
\begin{pmatrix}
1 & \rho & \rho^2 & \cdots & \rho^{d-1}\\
\rho & 1 & \rho & \cdots & \rho^{d-2}\\
\rho^2 & \rho & 1 & \cdots & \rho^{d-3}\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
\rho^{d-1}& \rho^{d-2} & \cdots &\rho & 1
\end{pmatrix}.
\]
In this simulation, we take $\rho:=1/2$. If $\xi=0$, then the distribution is exchangeable,
and as it increases to $1$ we get further from the null hypothesis.
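For concreteness, the two covariance models and their mixture can be written down directly (the helper name is ours):

```python
import numpy as np

def mixture_cov(d, rho, xi):
    """Covariance (1 - xi) * Sigma1 + xi * Sigma2, with Sigma1 exchangeable
    (compound symmetry) and Sigma2 Toeplitz with entries rho^|i-j|."""
    S1 = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
    S2 = rho ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    return (1 - xi) * S1 + xi * S2
```

Sampling from the multivariate normal with this covariance then gives the data for each scenario.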
To analyze the power function of the proposed test,
we consider some different scenarios:
the dimension $d$ being $6$ and $10$,
the sample size $n$ being $1000$ and $5000$,
and the number of projections being $1,10,50, 100$.
In all cases, the level of the test is $0.05$.
In Figure~\ref{F:normal} we plot the empirical power function as a function $g(\xi)$ of $\xi \in [0,1]$.
\begin{figure}[ht]
\centering
\subfloat[]{\includegraphics[scale=0.65]{Figuras/normal6b.pdf}}
\hfill
\subfloat[]{\includegraphics[scale=0.65]{Figuras/normal10b.pdf}}
\caption{\small Empirical power function $g(\xi)$ for $\xi \in [0,1]$. Upper panel (a) for dimension $d=6$, lower panel (b) for $d=10$.}\label{F:normal}
\end{figure}
\subsection{Comparison with a copula-based test}\label{SS:exchsim2}
As we mentioned in \S\ref{SS:exchback},
Harder and Stadtm\"uller \cite{HS17} proposed an interesting test for exchangeability of copulas,
which we describe briefly here. Assuming that all marginal distributions are equal, it provides an exchangeability test for distributions. However, without the extra assumption that all marginals coincide, the exchangeability of the copula does not imply exchangeability of the probability distributions.
Thus, without this assumption, it is also necessary to test that all marginal distributions coincide, which, for even moderate dimensions, makes the problem harder. For the comparison below, we consider a case where all marginal distributions coincide.
Let ${\mathbf X}:=(X_1, \ldots, X_d)$ be a random vector with joint distribution $F$ and
marginal distributions $F_1,\dots,F_d$.
Assume that $F_1,\dots,F_d$ are continuous.
Then $U_j:=F_j(X_j)$ is uniformly distributed on the interval $[0,1]$ for each~$j$,
and the distribution $C$ of $(U_1, \ldots, U_d)$ is the unique copula
such that $F(x_1,\dots,x_d)= C(F_1(x_1), \ldots, F_d(x_d))$.
Given an i.i.d. sample ${\mathbf X}_1, \ldots, {\mathbf X}_n \in {\mathbb R}^d$, set
\begin{equation}\label{E:empcop}
\hat C_n({\mathbf u}):= \frac{1}{n} \sum_{i=1}^n \prod_{j=1}^d {\mathcal I}_{\{\hat F_{jn}(X_{ij})\leq u_j\}},
\quad {\mathbf u}:=(u_1, \ldots, u_d) \in [0,1]^d,
\end{equation}
where $\hat F_{jn}$, $j=1, \ldots, d$, are the empirical marginal distributions of the sample.
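A minimal sketch of \eqref{E:empcop} (function and variable names are ours): evaluating the empirical marginal cdfs at the data itself reduces to normalized ranks, after which $\hat C_n$ is an average of indicator products.

```python
import numpy as np

def empirical_copula(X, U):
    """Evaluate the empirical copula C_n at the points U.

    X : (n, d) data sample; U : (m, d) points in [0,1]^d.
    Uses F_jn(X_ij) = rank of X_ij within column j, divided by n.
    """
    n, d = X.shape
    # double argsort gives 0-based ranks; +1 and /n gives F_jn at the data
    R = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n
    # average of the indicator products 1{F_jn(X_ij) <= u_j}
    return np.array([np.mean(np.all(R <= u, axis=1)) for u in U])
```

This assumes no ties in the sample, which holds almost surely for continuous marginals.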
Among many other interesting asymptotic results, Deheuvels \cite{De81} showed that
\begin{equation}\label{E:deh}
\sup_{{\mathbf u} \in [0,1]^d}\bigl|\hat C_n({\mathbf u}) - C({\mathbf u})\bigr| = O\bigl(n^{-\frac{1}{2}}(\log\log n)^{\frac{1}{2}}\bigr) \quad\text{a.s.},
\end{equation}
which suggests using this statistic for testing.
Based on $\hat C_n({\mathbf u})$, Harder and Stadtm\"uller \cite{HS17}
proposed a test of exchangeability for the problem
\begin{align*}
&H_0: \quad C({\mathbf u})= C({\mathbf u}_{\sigma}) \quad\text{for all ${\mathbf u} \in [0,1]^d$ and all $\sigma \in \Sigma_d$}, \\
\intertext{against}
&H_A:\quad C({\mathbf u})\ne C({\mathbf u}_{\sigma}) \quad\text{for some ${\mathbf u} \in [0,1]^d$ and some $\sigma \in \Sigma_d$},
\end{align*}
by performing a test based on statistics defined as integral versions of the difference
$(\hat C_n({\mathbf u})- \hat C_n({\mathbf u}_{\sigma}))$, such as
\begin{equation}\label{E:copulas}
S_n:= \sum_{\sigma \in {\mathcal G}_0} \int_{[0,1]^d}\bigl(\hat C_n({\mathbf u})- \hat C_n({\mathbf u}_{\sigma})\bigr)^2 w({\mathbf u}, \sigma)\,dC({\mathbf u}),
\end{equation}
where ${\mathcal G}_0$ is a set of generators for the permutation group $\Sigma_d$,
and $w({\mathbf u}, \sigma)$ is a bounded continuous weight function.
From equation~\eqref{E:deh} and the dominated convergence theorem, it follows that
$S_n \to S$ a.s., where
\[
S:= \sum_{\sigma \in {\mathcal G}_0} \int_{[0,1]^d}\bigl(C({\mathbf u})- C({\mathbf u}_{\sigma})\bigr)^2 w({\mathbf u}, \sigma)\,dC({\mathbf u}).
\]
This is shown in \cite[Lemma~3.1]{HS17},
while the asymptotic distribution is derived in \cite[Theorem~3.4]{HS17} under some regularity assumptions.
As in \cite{HS17}, we consider hierarchical copulas for two different scenarios:
\[
C^1_{\theta_0, \theta_1}(u):= C_{\theta_0} \bigl( u_1, C_{\theta_1}(u_2, u_3) \bigr),
\]
and
\[
C^2_{\theta_0, \theta_1}(u):= C_{\theta_0} \bigl( C_{\theta_1}(u_1, u_2) , u_3\bigr),
\]
where $C_{\theta}$ is the Clayton bivariate copula with parameter $\theta$.
The parameters $\theta_0$ and $\theta_1$ are chosen so that their Kendall indices $\tau$ are
\[
\tau_0:=5/12-\xi/60 \quad\text{and}\quad \tau_1:=5/12+\xi/60 \quad\text{for~}\xi \in \{0,1,\ldots,7\}.
\]
When $\xi=0$ we are under the null hypothesis, and as $\xi$ increases we are further away from the null.
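For the Clayton family, Kendall's $\tau$ satisfies $\tau=\theta/(\theta+2)$, so the parameters $\theta_0,\theta_1$ used above are obtained by inverting this relation. A one-line helper (name ours):

```python
def clayton_theta(tau):
    """Clayton parameter theta from Kendall's tau, inverting tau = theta/(theta+2)."""
    return 2.0 * tau / (1.0 - tau)

# the (theta_0, theta_1) pairs used in the simulation, for xi = 0, ..., 7
pairs = [(clayton_theta(5/12 - xi/60), clayton_theta(5/12 + xi/60)) for xi in range(8)]
```

At $\xi=0$ the two parameters coincide and the hierarchical copula is exchangeable.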
The number of random directions and sample sizes are the same as in \S\ref{SS:exchsim1}.
The empirical power functions are shown in Figures~\ref{F:copula1} and~\ref{F:copula2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figuras/copula1b.pdf}
\caption{\small Empirical power function for the hierarchical copula for the exchangeability test as a function of $\xi \in \{0,1,\ldots,7\}$, in dimension 3 for the copula $C^1$.}
\label{F:copula1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figuras/copula2b.pdf}
\caption{\small Empirical power function for the hierarchical copula for the exchangeability test as a function of $\xi \in \{0,1,\ldots,7\}$, in dimension 3 for the copula $C^2$.}
\label{F:copula2}
\end{figure}
Our results show that the empirical power functions are better than those reported in \cite{HS17} for sample size $n=1000$. In Table~\ref{Ta:pot} we compare the best results obtained for this scenario in \cite{HS17}, based on the statistic $S_{1000}$ defined by equation~\eqref{E:copulas}, with those of our proposal $D_{1000,50}$, for different values of $\xi$.
{\begin{table}[ht]
\caption{\small Empirical power functions for the tests based on $S_{n}$ and $D_{n,50}$ for a sample of size $1000$.
We consider a random vector with uniform marginals
and the copulas $C^1_{\theta_0, \theta_1}$ and $C^2_{\theta_0, \theta_1}$ in dimension $3$,
for different values of $\xi$ when the level of the test is $5 \%$.}
\label{Ta:pot}
\begin{center}
\begin{tabular}{ccccc}\toprule
& \multicolumn{2}{c}{$C^1_{\theta_0, \theta_1}$} & \multicolumn{2}{c}{$C^2_{\theta_0, \theta_1}$}
\\\cmidrule(lr){2-3}\cmidrule(lr){4-5}
$\xi$ & $S_{1000}$ & $D_{1000,50}$ & $S_{1000}$ & $D_{1000,50}$ \\\midrule
0 & 0.044 & 0.043& 0.033 & 0.052 \\
1& 0.182& 0.338 & 0.112 & 0.246 \\
2 & 0.667 & 0.996 & 0.416 & 0.992 \\
3 & 0.981 &1.000 & 0.865 & 1.000\\
4 & 1.000 & 1.000 & 0.999 & 1.000\\
5 & 1.000 & 1.000 & 1.000 & 1.000\\
\bottomrule
\end{tabular}
\end{center}
\end{table}}
\section{Sign-invariant exchangeability}\label{S:signexch}
\subsection{Background}\label{SS:signexchback}
Berman \cite{Be62,Be65} introduced the notion of sign-invariant exchangeability,
which is defined as follows.
A $d$-tuple of random variables ${\mathbf X}:= (X_1, \ldots, X_d)$ is \emph{sign-invariant} if
it has the same distribution as $(\epsilon_1 X_1, \ldots, \epsilon_d X_d)$
for every choice of $(\epsilon_1,\dots,\epsilon_d) \in \{-1,1\}^d$.
It is \emph{sign-invariant exchangeable} if it is both sign-invariant and exchangeable.
Equivalently, $(X_1, \ldots, X_d)$ is sign-invariant exchangeable if and only if it has the same
distribution as
$(\epsilon_1 X_{\sigma(1)},\dots,\epsilon_d X_{\sigma(d)})$ for all $\sigma\in\Sigma_d$
and $(\epsilon_1,\dots,\epsilon_d) \in \{-1,1\}^d$.
This amounts to saying that the distribution $P$ of ${\mathbf X}$ is $G$-invariant,
where $G$ is the group of $d\times d$ signed permutation matrices.
As remarked in Example~\ref{Ex:sign}, $G$ can be generated by three matrices $T_1,T_2,T_3$,
so, to test for $G$-invariance, it suffices to test whether $P=PT^{-1}_j$ for $j=1,2,3$.
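In the gaussian setting of the next subsection, $G$-invariance can also be checked directly at the level of covariance matrices: a centred gaussian vector is $G$-invariant precisely when $T\Sigma T^{t}=\Sigma$ for every generator $T$. A small sketch (helper names ours) with the three generators for $d=3$:

```python
import numpy as np

d = 3
# Generators of the signed permutation group for d = 3:
# a transposition, the 3-cycle, and a single sign flip.
T1 = np.eye(d)[[1, 0, 2]]
T2 = np.eye(d)[[1, 2, 0]]
T3 = np.diag([-1.0, 1.0, 1.0])

def gaussian_invariant(Sigma, gens):
    """A centred gaussian N(0, Sigma) is invariant under the group generated
    by gens iff T Sigma T^t = Sigma for each generator T."""
    return all(np.allclose(T @ Sigma @ T.T, Sigma) for T in gens)

rho = 0.5
# the exchangeable covariance Sigma^(1) of the earlier simulation
Sigma = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
```

As expected, $\Sigma^{(1)}$ is invariant under the permutation generators $T_1,T_2$ but fails the sign flip $T_3$ unless $\rho=0$, matching the observation below that the distribution is sign-invariant exchangeable only when $\rho=0$.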
\subsection{Simulations for the test for sign-invariant exchangeability}\label{SS:signexchsim}
We consider a sample of a multivariate normal distribution in dimension $d$ for $d=3, 6, 10$,
with mean zero and covariance matrix given by $\Sigma^{(1)}$, defined in \S\ref{SS:exchsim1}
(where now $\rho$ is variable).
When $\rho=0$ the distribution is sign-invariant exchangeable.
We consider sample sizes of $200, 500, 1000$ and $5000$,
and we consider $1, 10, 50$ and $100$ random directions in ${\mathbb R}^d$.
A plot of the empirical power function of the test as a function of $\rho \in [0,1]$
using \S\ref{SS:signexchback} and \S\ref{SS:testing2} is given in Figure~\ref{F:signexch}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{Figuras/signo33b.pdf}
\caption{\small Empirical power function for sign-invariant exchangeable testing with $\alpha=0.05$ and varying $\rho \in [0,1]$.
We consider sample sizes of $200,500,1000$ and $5000$, and $1,10,50$ and $100$ random directions in ${\mathbb R}^d$.}
\label{F:signexch}
\end{figure}
\section{An infinite-dimensional example}\label{S:infdim}
\subsection{Background}
As mentioned in \S\ref{S:CW},
there are several variants of the Cram\'er--Wold theorem,
including versions for Hilbert spaces \cite{CFR07}
and even Banach spaces \cite{CF09}.
These can be exploited to provide tests for
$G$-invariance in infinite-dimensional spaces.
We consider here the case of a Hilbert space.
Let ${\mathcal H}$ be a separable infinite-dimensional Hilbert space,
let $P$ be a Borel probability measure on ${\mathcal H}$,
and let $G$ be a group of invertible continuous linear
maps $T:{\mathcal H}\to {\mathcal H}$. We want to test the null hypothesis
\[
H_0: \quad PT^{-1}=P \quad\text{for all~}T\in G.
\]
Nearly everything goes through as before.
The only adjustment needed is in Theorem~\ref{T:CWgen},
because Lebesgue measure no longer makes sense
in infinite dimensions. Its role is taken by a non-degenerate gaussian
measure on ${\mathcal H}$. What this means is explained in detail in \cite[\S4]{CFR07}, where one can find the proof of the following result,
which serves as a replacement for Theorem~\ref{T:CWgen}.
\begin{theorem}[\protect{\cite[Theorem~4.1]{CFR07}}]\label{T:cwgauss}
Let ${\mathcal H}$ be a separable Hilbert space,
and let $\mu$ be a non-degenerate gaussian measure on ${\mathcal H}$.
Let $P,Q$ be Borel probability measures on ${\mathcal H}$.
Assume that
the absolute moments $m_N:=\int\|x\|^N\,dP(x)$ are all finite and satisfy
\begin{equation}
\sum_{N\ge1}m_N^{-1/N}=\infty.
\end{equation}
If the set ${\mathcal E}(P,Q)$ is of positive $\mu$-measure in ${\mathcal H}$,
then $P=Q$.
\end{theorem}
\subsection{Simulations for an infinite-dimensional example}
In this example, we perform a simulation study on the exchangeability test
for multidimensional functional data in the Hilbert space $\mathcal{L}^3:=\bigoplus_{i=1}^3 L^2[0,1]$.
We consider a sample of i.i.d.\ random functional vectors $\{\mathbf{X}_i\}_{i=1, \ldots,n} \subset \mathcal{L}^3$,
where the marginal components are defined as
\[
X_{i,j}(t):= m(t)+ \epsilon_{i,j}(t), \quad i \in \{1,2,\dots, n\} \text{ and } j \in \{1,2,3\},
\]
where $m(t):= \cos (2 \pi t)$, and where $\epsilon_{1,j}, \ldots, \epsilon_{n,j}$ are i.i.d.\ gaussian processes with
\[
\textrm{Cov}(\epsilon_{i,j}(t), \epsilon_{i,j}(s))= \exp \left( - \vert s-t \vert \right).
\]
Finally, we assume that the correlation function between the marginal components satisfies
\[
\textrm{Cor}(\epsilon_{i,j_1}(t), \epsilon_{i,j_2}(t))=
\begin{cases}
0.5 -\delta, &\text{if~} j_1=1,~j_2=3,\\
0.5, &\text{if~} j_1=2,~j_2=3,\\
0.5+\delta, &\text{if~} j_1=1,~j_2=2.
\end{cases}
\]
For $\delta=0$, the functional vector is exchangeable.
The random projections are taken with respect to a standard three-dimensional brownian motion $\mathbf{W}$ on $[0,1]$, that is,
$$\langle \mathbf{X}, \mathbf{W} \rangle = \sum_{j=1}^3 \langle {X}_j, {W}_j \rangle, $$
where the ${W}_j$ are i.i.d.\ standard brownian motions on $[0,1]$ and $$\langle {X}_j, {W}_j \rangle= \int_0^1 {X}_j(t){W}_j(t)\, dt.$$
The functions are discretized on an equispaced grid of size $100$ in $[0,1]$.
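The projection step can be sketched as follows (function name and array layout are ours): each $W_j$ is simulated by cumulating gaussian increments on the same grid as the data, and the three integrals are approximated by Riemann sums.

```python
import numpy as np

def brownian_projection(X, rng):
    """Random projection <X, W> = sum_j int_0^1 X_j(t) W_j(t) dt, with the W_j
    i.i.d. standard brownian motions discretized on the same grid as X.

    X   : (3, m) array, the three marginal curves on an equispaced grid of [0,1].
    rng : a numpy Generator used to draw the brownian increments.
    """
    k, m = X.shape
    dt = 1.0 / m
    # brownian motions: cumulative sums of gaussian increments of variance dt
    W = np.cumsum(rng.standard_normal((k, m)) * np.sqrt(dt), axis=1)
    # Riemann-sum approximation of the sum of the three integrals
    return float(np.sum(X * W) * dt)
```

Applying this map to each $\mathbf{X}_i$ (with $\mathbf{W}$ fixed across the sample) yields the univariate projected data on which the KS statistics are computed.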
Figure~\ref{functional} depicts a realization of the functional random vector for $\delta=0.3$.
\begin{figure}[ht]
\centering
\subfloat{\includegraphics[width=125mm]{Figuras/functional.pdf}}
\caption{Marginal curves of a simulated realization of $\mathbf{X}$ for $\delta=0.3$.}
\label{functional}
\end{figure}
The statistic $D_{n,\ell}$ is calculated (as in \S\ref{SS:testing2}) for the values $n \in \{250,500,1000\}$
and $\ell \in \{10,50,100\}$. The empirical power functions, estimated from $1000$ replicates, are reported in
Table~\ref{Tab:pvalores}, and show good power already for moderate sample sizes.
\begin{table}
\caption{Empirical power function $D_{n,\ell}$ for $\delta \in [0,0.3]$, $n \in \{250,500,1000\}$ and $\ell \in \{10,50,100\}$. Significance level= $0.05$. } \label{Tab:pvalores}
\begin{center}
\begin{tabular}{c|ccc|ccc|ccc}
\toprule[0.4 mm]
& \multicolumn{3}{c|}{$n=250$} &\multicolumn{3}{c|}{$n=500$} &\multicolumn{3}{c}{$n=1000$} \\
\midrule[0.4 mm]
$\delta \, \, \backslash \,\, \ell$ \ & $10$ & $50$& $100$&$10$ & $50$& $100$&$10$ & $50$& $100$ \\
\hline
$0$ & 0.044 & 0.035 &0.051& 0.044& 0.040 & 0.045 &0.051 & 0.04 & 0.044 \\
$0.05$ & 0.050& 0.045 &0.059& 0.043 & 0.040 & 0.061 &0.078 & 0.065 & 0.080 \\
$0.1$ & 0.078 & 0.069&0.089& 0.116 & 0.103 & 0.152 &0.264& 0.266& 0.350\\
$0.15$ & 0.116 & 0.121&0.194& 0.318 & 0.342 & 0.474 &0.727 &0.877 & 0.946 \\
$0.20$ & 0.298 & 0.331&0.470& 0.643 & 0.842& 0.945 &0.926 & 0.999& 1.000\\
$0.25$& 0.576 & 0.721 &0.853& 0.851 & 0.995 & 1.00 & 0.977 & 1.000& 1.000\\
$0.30$ & 0.781 & 0.977 &0.997& 0.989 & 0.995& 1.00 &0.996 & 1.000 & 1.000\\
\toprule[0.4 mm]
\end{tabular}
\end{center}
\end{table}
\section{Real-data examples}
We conclude the article with two examples drawn from real datasets.
In both cases, the test is for exchangeability.
\subsection{Biometric measurements}
These data were collected by the Australian Institute of Sport in a study of how various blood characteristics vary with the sport, body size and sex of the athlete. The data set, see \cite{CW94}, is available in the package \textit{locfit} of the $R$ software; it comprises $202$ observations of $13$ variables. As in \cite{BQ21}, five covariates are considered: red cell count (RCC), haematocrit (Hc), haemoglobin (Hg), lean body mass (LBM) and height (Ht). The marginal distributions are clearly different, so the distribution itself is not exchangeable. In order to check the exchangeability of the copula, the marginal distributions are standardized by the transformation $U_n = F_n(X)$, where $F_n$ is the empirical cumulative distribution function of the random variable~$X$. Figure~\ref{F:symIndex}(A) displays the bivariate symmetry index $S_n$ developed in \cite{GNQ12},
\[
S_n:= \int_0^1 \int_0^1 \left[ \hat{C}_n(u,v) - \hat{C}_n(v,u) \right]^2 \textrm{d}\hat{C}_n(u,v),
\]
where $\hat{C}_n$ is the empirical copula as in (\ref{E:empcop}).
The greatest asymmetry is observed between variables LBM and Hg,
but in all cases the test proposed in \cite{GNQ12} does not reject the null hypothesis of bivariate symmetry
(the p-value for the pair LBM and Hg is $0.72$).
\begin{figure}[ht]
\centering
\subfloat[]{\includegraphics[scale=0.62]{Figuras/simIndex.pdf}}
\subfloat[]{\includegraphics[scale=0.58]{Figuras/pvalores.pdf}}
\caption{\small (A) Bivariate symmetry index values $S_n$ for all pairs of copulas among the 5 variables considered in the biometric measurements data. (B) P-values of the test (with statistic $D_{n,\ell=50}$) for sub-samples of sizes $n=50,150, \ldots, 550$ in the Statlog Satellite dataset (horizontal red line: $\alpha=0.05$).}
\label{F:symIndex}
\end{figure}
We test the global five-dimensional symmetry of the copula. To obtain the empirical distribution of the statistic $D_{n,\ell=50}$ under the exchangeability hypothesis, the components of each observation of the standardized sample are randomly permuted (over $10{,}000$ replicates). The p-value obtained for the sample is $0.0126$; therefore, as in \cite{BQ21}, the exchangeability hypothesis is rejected.
\subsection{Statlog Satellite dataset}
The database consists of the multi-spectral values of pixels in $3\times3$ neighbourhoods in a satellite image, and is available in the UCI Machine Learning Repository (\url{https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/satimage/}). The sample size is 6435 images. Each record contains the pixel values in the four spectral bands (converted to ASCII) of each of the 9 pixels in the $3\times3$ neighbourhood, so each observation is represented by $36$ attributes. Here too the marginal distributions are clearly different, so, as in the previous example, the variables are standardized, and the null distribution of the statistic is determined to test exchangeability of the copula. The test based on $D_{n,\ell=50}$ is performed for sub-samples of sizes $n=50,150,\ldots,550$
(from the first to the $n$-th observation), similarly to \cite{FGNV2012}.
Figure~\ref{F:symIndex}(B) displays the p-values obtained for each~$n$.
We observe that, with a sample size greater than $300$, the hypothesis of exchangeability is rejected.
\section{Conclusions and potential further developments}\label{S:conclusion}
We have shown how to exploit an extension of the Cram\'er--Wold theorem, Theorem~\ref{T:CWgen}, to use one-dimensional projections
to test for invariance of a probability measure under a group of linear transformations. As special cases, we obtain tests for
exchangeability and sign-invariant exchangeability.
The simulation results are very satisfactory,
and the algorithms we propose are computationally fast and suitable even for high-dimensional data.
Regarding the comparison with the proposal in \cite{HS17},
Table~\ref{Ta:pot} shows a relative efficiency of our method \emph{vis-\`a-vis} its competitor
of around 100\% for small values of the parameter~$\xi$.
Moreover, the proposal in \cite{HS17} seems to be very hard to implement in high-dimensional spaces.
We illustrate our method with a short study of two real-data examples.
As well as being effective, the methods developed in this article are quite flexible.
Our results can be applied to the case of multivariate functional data,
where we have vectors taking values in a Hilbert space or even a Banach space.
We can still employ the same algorithm as in \S\ref{SS:testing2},
using several random directions and finding the critical value by bootstrap,
except that, in the infinite-dimensional case, we should choose the random directions
(or the elements of the dual space) according to a non-degenerate Gaussian measure.
We have also shown that, even in the context of testing for $G$-invariance,
where one of the measures is a linear transformation of the other,
Theorem~\ref{T:CWgen} remains sharp. This is the content of Theorem~\ref{T:sharpexch}.
However, if we have further \emph{a priori} knowledge of the distribution in question, then
Theorem~\ref{T:CWgen} can be improved.
For example, Heppes \cite[Theorem~$1'$]{He56} showed that,
if $P$ is a discrete probability measure on ${\mathbb R}^d$ supported on $k$ points,
and if $Q$ is an arbitrary Borel probability measure on ${\mathbb R}^d$ such that ${\mathcal E}(P,Q)$ contains at least $(k+1)$ subspaces, no two of which are contained in any single hyperplane,
then $P=Q$. This opens the door to potential improvements of our procedures in the case where
$P$ is known to be discrete, for example testing for the exchangeability of
sequences of Bernoulli random variables.
This idea is explored further in \cite{FMR22}.
\section*{Acknowledgement}
Fraiman and Moreno were supported by grant FCE-1-2019-1-156054, Agencia Nacional de Investigaci\'on e Innovaci\'on, Uruguay. Ransford was supported by grants from NSERC and the Canada Research Chairs program.
\bibliographystyle{plain}
\section{Introduction}
There is a long-standing debate over whether collisionless cold dark matter (DM) predicts too much structure at small scales. The conflict with observations is suggested in at least three ways: (1) {\it the cusp-versus-core problem} is the disagreement between the cuspy density profiles predicted from numerical simulations of collisionless cold DM and the cored profiles favored by well-studied dwarf galaxies~\cite{Moore:1994yx,Flores:1994gz,Navarro:1996gj}; (2) {\it the too-big-to-fail problem} is the observation that the most massive subhaloes in CDM simulations are too massive to host the satellites of the Milky Way, yet should be luminous given the observed dwarfs~\cite{BoylanKolchin:2011de,Walker:2012td}; and (3) the long-standing {\it missing satellites problem}, wherein the number of satellites found in simulations of Milky Way sized halos disagrees with observations by roughly a factor $\mathcal{O}(10)$~\cite{Klypin:1999uc,Moore:1999nt,Kauffmann:1993gv}. While improved numerical simulations, incorporating baryons, and better satellite statistics are clearly needed and may yet alleviate these problems, the alternative possibility -- involving additional physics in the dark matter sector -- has justifiably received considerable attention.
Particularly well-explored is the suggestion that DM could be self-interacting~\cite{Spergel:1999mh,Feng:2009mn,Loeb:2010gj,Tulin:2013teo,Fan:2013yva,Pearce:2013ola,Bellazzini:2013foa,Kahlhoefer:2013dca,Cyr-Racine:2013fsa,Bramante:2013nma,Kaplinghat:2013yxa,Cline:2013pca,Laha:2013gva,Curtin:2013qsa,Cline:2013zca,Boddy:2014yra,Hochberg:2014dra,Petraki:2014uza,Ko:2014bka,Wang:2014kja,Curtin:2014afa,Buckley:2014hja,Schutz:2014nka,Boddy:2014qxa}.
In this framework, the cusped centers of DM halos are smoothed into cores by providing DM with an efficient mechanism for transporting energy from the hot inner region to the cold outer region. A velocity-dependent cross section can nicely accommodate the desire to have significant scattering at dwarf scales without running afoul of the stringent limits on self-interactions coming from cluster and galactic scales.
Particle physics models with this feature employ $\mathcal{O}({\rm MeV})$ dark force mediators~\cite{Feng:2009mn,Loeb:2010gj,Aarssen:2012fx,Tulin:2013teo}.
Additional benefits can be obtained if the hidden sector interaction could somehow also include neutrinos~\cite{Aarssen:2012fx}.
It has been argued that this scenario may solve \emph{all} of the small-scale structure issues mentioned above.
Indeed, the efficient scattering of DM with energetic bath particles would lead to late kinetic decoupling, delaying the formation of the smallest protohalos and resolving the problem of missing satellites~\cite{Aarssen:2012fx,Shoemaker:2013tda,Dasgupta:2013zpn}. For this process to damp structure on the scales relevant for the missing satellites problem, decoupling temperatures of order $\mathcal{O}(0.1~{\rm keV})$ are needed, singling out neutrinos as a potential scattering partner for DM (though other realizations are possible~\cite{Chu:2014lja,Archidiacono:2014nda}).
From the particle-physics point of view, the immediate issue to be addressed is how neutrinos could be consistently coupled to a hidden gauge group with the necessary strength without running into a host of constraints, e.g.,~\cite{Laha:2013xua}.
A straightforward path is to use, instead of the three Standard Model (SM) neutrinos, a new light fermion that is an SM singlet~\cite{Bringmann:2013vra}. Provided this new \emph{secluded neutrino} can be produced in the early universe with sufficient relic abundance, the desired late kinetic decoupling of DM can be realized.
\begin{figure*}
\begin{center}
\includegraphics[width=.2\textwidth]{feyn1.pdf} ~~~~~~~~~~
\includegraphics[width=.2\textwidth]{feyn2.pdf} ~~~~~~~~~~
\includegraphics[width=.2\textwidth]{feyn3.pdf}
\caption{ The relevant Feynman diagrams for (1) the relic abundance, (2) DM-DM self-scattering, and (3) $\nu$-DM scattering relevant for addressing the missing satellites problem.}
\label{fig:feyn}
\end{center}
\end{figure*}
The phenomenology of this secluded neutrinophilic DM ($\nuDM$) model has several unique and salutary consequences, which are studied in this letter. First, we show that the thermal recoupling of the active and secluded neutrino sectors leads to a relic secluded-neutrino population which is both hotter and more numerous than found in previous studies~\cite{Dasgupta:2013zpn,Bringmann:2013vra}. Second, in contrast with~\cite{Aarssen:2012fx,Bringmann:2013vra}, who considered symmetric DM, we consider the larger parameter space in which DM carries a primordial particle-antiparticle asymmetry (though see~\cite{Dasgupta:2013zpn}). This has the effect of opening an interesting window in parameter space at low DM mass where self-scattering is in the perturbative regime.
Lastly and most importantly, we point out that, thanks to the active--secluded neutrino mixing, the neutrino self-interactions in this model modify the mean free path of ultra-high-energy (UHE) neutrinos as they propagate through the bath of relic neutrinos. We find that the bulk of the parameter space which simultaneously resolves all dark matter structure problems has direct observational consequences for the IceCube experiment.
\\
\indent
In Sec.~\ref{sec:model} we describe the general features of the {S}$\nu$DM model. In Sec.~\ref{sec:relic} we solve the Boltzmann equations to determine the region of parameter space favored by an (a)symmetric thermal relic. In Sec.~\ref{sec:sidm} we determine the self-scattering parameters relevant for addressing the cusp-versus-core and too-big-to-fail problems. The secluded neutrino temperature and the kinetic decoupling computation are addressed in Secs.~\ref{sec:steriletemp} and \ref{sec:KD}, respectively. Implications of the neutrino self-interactions for the high-energy IceCube data are discussed in Sec.~\ref{sec:icecube}. We summarize all of our results in Sec.~\ref{sec:discuss} and conclude.
\section{The $\nuDM$ Model }
\label{sec:model}
As already mentioned, the fact that simulations imply much cuspier density profiles than the cored profiles favored by observations~\cite{Moore:1994yx,Flores:1994gz,Navarro:1996gj} could be an indication that DM has non-negligible self-scattering~\cite{Spergel:1999mh}.
Detailed analysis shows that a velocity-dependent interaction is favored, as can be achieved with a light force carrier. The argument proceeds as follows. The strongest constraints on DM self-interactions come from Milky Way ellipticity and cluster collisions, roughly requiring $\sigma_{XX} /m_{X}\lesssim (0.1 -1)~{\rm cm}^{2}{\rm g}^{-1}$~\cite{Rocha:2012jg,Vogelsberger:2012ku,Peter:2012jh}. Note that these constraints are obtained from DM populations where the velocity dispersion is $\mathcal{O}(100~{\rm km/s})$ for the Milky Way and $\mathcal{O}(1000~{\rm km/s})$ for clusters. For the $\mathcal{O}(1~{\rm cm}^{2}~{\rm g}^{-1})$ cross sections at dwarf scales ($\mathcal{O}(10~{\rm km/s})$), as identified by Spergel and Steinhardt~\cite{Spergel:1999mh}, to be allowed, the self-scattering should exhibit a strong velocity dependence. Long-range interactions mediated by an $\mathcal{O}({\rm MeV})$ force carrier have precisely this feature and may thus solve the cusp-versus-core problem while remaining consistent with the constraints from galactic and cluster scales~\cite{Feng:2009mn,Loeb:2010gj,Aarssen:2012fx,Tulin:2013teo}.
{In this paper} we shall assume that the DM is a Dirac fermion, $X$, charged under a new $U(1)_{X}$ gauge interaction. There are two crucial ingredients for $\nuDM$, $\mathscr{L}_{\text{S}\nu\text{DM}} = \Delta \mathscr{L}_{\phi} + \Delta \mathscr{L}_{M}$, where the first term specifies the nature of the DM and neutrino coupling to the new gauge boson,
\begin{eqnarray}
\Delta \mathscr{L}_{\phi} = g_{\nu} \bar\nu_{s} \gamma_{\mu}\nu_{s} \phi^{\mu} + g_{X} \bar{X} \gamma_{\mu}X \phi^{\mu},
\end{eqnarray}
and the second term,
\begin{eqnarray}
\label{eq:Weinberg}
\Delta \mathscr{L}_{M} =
y_{\alpha} \frac{(L_{\alpha} H)(h_{X} \nu_{s})}{\Lambda},
\end{eqnarray}
allows the new $\nu_{s}$ to mass-mix with the active SM neutrinos in a gauge-invariant way via a $U(1)_{X}$ charged Higgs $h_{X}$ which acquires a {vacuum expectation value (VEV)}. This Higgs is also responsible for giving mass to the vector, $m_{\phi} = g_{h}~ \langle h_{X} \rangle$, where $g_{h}$ is the gauge charge of the Higgs and $\langle h_{X} \rangle$ is its VEV. Note that the active neutrinos are contained in their {electroweak (EW)} doublets, $L_{\alpha} = \left(
\begin{array}{c}
\nu_{\alpha}\\
\ell_{\alpha}\\
\end{array}
\right)$,
where $\alpha = e, \mu, \tau$.
We note that the presence of this mixing is entirely natural: the operator in Eq.~(\ref{eq:Weinberg}) is suppressed by only a single power of the new-physics scale $\Lambda$, and hence even new physics at very high scales could generate it. The situation is completely analogous to the standard Weinberg operator for the neutrino Majorana mass. Indeed, a simple ultraviolet completion of our model involves a see-saw type construction. One introduces right-handed singlet neutrinos with very large Majorana masses, couples them to both the SM and secluded neutrinos with Dirac mass terms, and then integrates the heavy right-handed states out, yielding Eq.~(\ref{eq:Weinberg}) at low energies.
The baryonic neutrino model of Pospelov~\cite{Pospelov:2011ha,Pospelov:2012gm,Pospelov:2013rha} employs similar features in order to endow neutrinos with new BSM interactions. We, however, do not assume any novel neutrino-baryon or neutrino-charged-lepton coupling. In fact, in $\nuDM$ when the universe is at temperatures below the high energy scale $\Lambda$, interactions between the dark and SM sectors can be mediated \emph{exclusively} through neutrino mixing. In this case, neither the ``dark photon'' searches nor DM direct detection experiments are expected to turn up a positive signal. The astrophysical and cosmological signatures discussed below, including the possible imprints of the dark sector in DM profiles and the IceCube data, become the \emph{primary} methods of discovering the dark sector.
{Additional features of the model and the constraints on it are discussed in a concurrent publication~\cite{us2}. }
\section{Dark Matter Thermal Relic Abundance}
\label{sec:relic}
{
In models like ours where DM is not self-conjugate, there exists the possibility that some high-scale physics has violated the Sakharov conditions~\cite{Sakharov:1967dj} and generated a primordial asymmetry. {This greatly expands the potential dynamic range of our model, as in our case the final abundance depends on both the annihilation cross section and the primordial asymmetry instead of the annihilation cross section alone.} To be as general as possible, and since we are interested in relatively low-energy physics {in this paper}, we do not specify the nature of the high-energy physics producing this asymmetry (though see e.g.~\cite{Petraki:2013wwa,Zurek:2013wia} for examples), but instead simply impose the relevant annihilation requirements for an asymmetric thermal relic~\cite{Graesser:2011wi}.}
To find the requisite annihilation cross section at a given DM mass we solve the coupled Boltzmann equations,
\begin{eqnarray} \frac{dn_{i}}{dt} + 3 H n_{i} = - \langle \sigma_{ann} v_{rel} \rangle \left[ n_{i}n_{j} - n_{eq}^2\right],
\label{eq:boltz}
\end{eqnarray}
where the indices run over $i,j = X, \overline{X}$, $H$ is the Hubble expansion rate, $n_{eq}(T)$ is the equilibrium number density, and $\langle \sigma_{ann} v_{rel} \rangle$ is the thermal average of the total annihilation cross section.
We find that when the asymmetry is zero for Dirac DM, the correct abundance is obtained for $\langle \sigma_{ann} v_{rel} \rangle \simeq 4.5 \times 10^{-26}~{\rm cm}^{3}~{\rm s}^{-1}$ for DM masses $\gtrsim 10$ GeV~\cite{Steigman:2012nb}. More generally, when the asymmetry is nonzero, for the combination of the asymmetric and symmetric components not to overclose the Universe the annihilation cross section must be $\gtrsim 4.5 \times 10^{-26}~{\rm cm}^{3}~{\rm s}^{-1}$~\cite{Graesser:2011wi,Bell:2014xta}. In what follows we will allow for a nonzero asymmetry between $X$ and $\overline{X}$, and employ this constraint.
Two processes contribute to the total annihilation cross section: $\overline{X}X \rightarrow \overline{\nu}\nu$ and $\overline{X}X \rightarrow \phi \phi$. When the DM mass is large compared to the mediator $\phi$ ($m_{X} > m_{\phi}$) the annihilation is governed by the diagram in Fig.~\ref{fig:feyn}(a), with cross section
\begin{eqnarray}
\langle \sigma_{X\overline{X} \rightarrow \phi \phi} v_{rel} \rangle \simeq \frac{g_{X}^{4}}{16 \pi m_{X}^{2}}\sqrt{1-\left(\frac{m_{\phi}} {m_{X}} \right)^{2}},
\label{eq:2phi}
\end{eqnarray}
while the cross section for the $s$-channel annihilation to a pair of neutrinos is
\begin{eqnarray}
\langle \sigma_{X\overline{X} \rightarrow \nu \nu} v_{rel} \rangle \simeq \frac{N_{\nu} g_X^2g_{\nu}^{2}}{32\pi m_{X}^2}\frac{\left(2+\frac{m_\nu^2}{m_X^2}\right)}{\left(1-\frac{m_{\phi}^2}{4m_X^2}\right)^2+\frac{\Gamma_{\phi}^2}{m_{\phi}^2}}.
\end{eqnarray}
Thus, in the light-mediator regime, $m_{X} > m_{\phi}$, the $\overline{X}X \rightarrow \phi \phi$ channel dominates so long as $g_{X} > g_{\nu}\sqrt{N_{\nu}}$.
With the assumption that the $\overline{X}X \rightarrow \phi \phi$ mode dominates, this effectively fixes the value of $g_{X}$ in the symmetric DM limit or serves as a lower limit on $g_{X}$ when there exists a nonzero particle asymmetry.
Therefore, using Eq.~(\ref{eq:2phi}) and imposing that the annihilation cross section be $\gtrsim 4.5 \times 10^{-26}~{\rm cm}^{3}~{\rm s}^{-1}$, we find that an (a)symmetric thermal relic roughly requires a DM coupling $g_{X} \simeq 0.02~ \sqrt{m_{X}/{\rm GeV}}$.
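The quoted coupling can be checked numerically. The sketch below inverts Eq.~(\ref{eq:2phi}) in the $m_{\phi} \ll m_{X}$ limit (ignoring the phase-space factor), using the conversion $1~{\rm GeV}^{-2} \simeq 3.894\times 10^{-28}~{\rm cm}^{2}$:

```python
import math

# Convert <sigma v> from natural units (GeV^-2) to cm^3/s:
# 1 GeV^-2 = 3.894e-28 cm^2, times c = 2.998e10 cm/s.
GEVM2_TO_CM3S = 3.894e-28 * 2.998e10

def g_x_min(m_x_gev, sv_target=4.5e-26):
    # Minimal g_X such that <sigma v>(XX -> phi phi) = g_X^4/(16 pi m_X^2)
    # reaches the target annihilation cross section (m_phi << m_X limit).
    sv_natural = sv_target / GEVM2_TO_CM3S         # in GeV^-2
    return (16 * math.pi * m_x_gev**2 * sv_natural) ** 0.25

for m in (1.0, 10.0, 100.0):
    print(f"m_X = {m:5.0f} GeV : g_X >= {g_x_min(m):.4f}"
          f"  (0.02*sqrt(m) = {0.02 * math.sqrt(m):.4f})")
```

The result reproduces the $g_{X} \simeq 0.02\,\sqrt{m_{X}/{\rm GeV}}$ scaling to within a few percent.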
{The above discussion is modified at large DM masses by the presence of Sommerfeld-enhanced scattering~\cite{Hisano:2004ds,ArkaniHamed:2008qn,Feng:2010zp,Tulin:2013teo}, though is largely irrelevant for the sub-TeV DM masses with which we are concerned here~\cite{Tulin:2013teo}.}
\section{DM Self-Scattering: turning cusps into cores}
\label{sec:sidm}
{
As discussed in Sec.~\ref{sec:model}, dwarf galaxies may indicate the need for DM self-interactions, while cluster and galactic observations only yield limits on DM self-interactions. Therefore, along the lines of~\cite{Loeb:2010gj,Tulin:2013teo,Aarssen:2012fx}, we are interested in exploring the parameter space of a thermal relic for which the DM-DM scattering cross section is large at dwarf speeds but small enough to accommodate galactic and cluster limits on self-interactions~\footnote{Though we stress that velocity-independent interactions around $\sim$0.5$~{\rm cm}^{2}{\rm g}^{-1}$ may be sufficient to modify the profiles of dwarfs while narrowly evading galactic and cluster scale limits~\cite{Rocha:2012jg,Peter:2012jh}.}. Given this set of constraints, we adopt a region of interest for self-interactions defined as~\cite{Tulin:2013teo}: $1$--$10~{\rm cm}^{2}\,{\rm g}^{-1}$ at characteristic dwarf speeds ($10~{\rm km/s}$), while $< 1~{\rm cm}^{2}\,{\rm g}^{-1}$ at galactic and cluster speeds ($100$ and $1000~{\rm km/s}$, respectively).
In an asymmetric DM context as we are considering, the parameter space for self-interactions is quite large given that relic abundance considerations only impose a lower bound on the coupling, $g_{X}$, rather than fixing it as in the symmetric relic case. In contrast with~\cite{Aarssen:2012fx,Bringmann:2013vra}, who considered symmetric DM, we shall see that thermal asymmetric DM allows for a potential resolution of the small-structure problems in the perturbative regime as well.
{In the small-coupling regime ($\alpha_{X}m_{X} / m_{\phi} \ll 1$) where the scattering can be computed perturbatively, one can use the Born approximation to compute the $t$-channel contribution to the transfer cross section~\cite{Feng:2009hw,Tulin:2013teo}. We agree with this calculation and find,}
\begin{eqnarray}
\sigma_{T} = \frac{g_{X}^4}{2\pi m_{X}^{2} v^{4}} \left[\ln \left(1+ \frac{m_{X}^{2}v^{2}}{m_{\phi}^{2}}\right) - \frac{m_{X}^{2} v^{2} }{m_{\phi}^{2}+m_{X}^{2}v^{2}}\right].
\end{eqnarray}
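For orientation, this Born formula can be evaluated at the three characteristic speeds. The parameter point below ($m_{X}=10$ GeV, $m_{\phi}=10$ MeV, $g_{X}=0.02\sqrt{10}$ from the relic estimate) is an illustrative assumption, not a fit; it satisfies $\alpha_{X} m_{X}/m_{\phi} \approx 0.3 < 1$, so the Born approximation applies:

```python
import math

GEV2_TO_CM2 = 3.894e-28   # 1 GeV^-2 in cm^2
GEV_TO_G = 1.783e-24      # 1 GeV/c^2 in grams
C_KMS = 2.998e5           # speed of light in km/s

def sigma_T_born(v_kms, m_x=10.0, m_phi=0.01, g_x=0.063):
    # Born-approximation transfer cross section (in GeV^-2),
    # as in the formula above; velocities in km/s.
    v = v_kms / C_KMS
    r = (m_x * v / m_phi) ** 2            # m_X^2 v^2 / m_phi^2
    return g_x**4 / (2 * math.pi * m_x**2 * v**4) * (
        math.log1p(r) - r / (1 + r))

for v in (10.0, 100.0, 1000.0):           # dwarf, galactic, cluster speeds
    s_over_m = sigma_T_born(v) * GEV2_TO_CM2 / (10.0 * GEV_TO_G)
    print(f"v = {v:6.0f} km/s : sigma_T/m_X = {s_over_m:.2e} cm^2/g")
```

The output illustrates the desired velocity dependence: the transfer cross section is largest at dwarf speeds and falls steeply by cluster speeds.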
{In the non-perturbative regime ($\alpha_{X}m_{X} / m_{\phi} \gg 1$), fitting expressions have been obtained in the limit that the DM de Broglie wavelength is small compared to the characteristic scale of the potential, $m_{X}v/m_{\phi} >1$. In this sense the scattering proceeds ``classically.'' For repulsive scattering these have been found to be \cite{Khrapak,Tulin:2013teo},}
\begin{equation}
\sigma_T \approx \left\{ \begin{array}{ll}
\frac{2\pi}{m^2_\phi}\, \beta^2 \ln\left(1+\beta^{-2}\right), & \beta<1, \\
\frac{\pi}{m^2_\phi} \left( \ln 2\beta - \ln \ln 2\beta \right)^2, & \beta > 1,
\end{array} \right.
\end{equation}
{where $\beta \equiv 2 \alpha_X m_\phi / (m_X v_\mathrm{rel}^2)$ is the ratio of the potential energy at the characteristic length scale $r = m_{\phi}^{-1}$ to the kinetic energy of the DM.}
Outside the realm of applicability of either of these analytic results, we solve the Schr\"{o}dinger equation using the numerical recipe outlined in~\cite{Tulin:2013teo}.
\section{{Secluded} Neutrino Abundance and Temperature}
\label{sec:steriletemp}
After the dark sector decouples from the Standard Model, the temperature ratio between radiation in the two sectors is easily estimated from the conservation of entropy, under the assumption that the two sectors shared a common temperature, $T_{d}$, in the past. This is found to be
\begin{eqnarray} \left. \frac{T_{s}}{T_{\gamma} }\right |_{T_{KD}} = \left[ \frac{ g_{*,s}(T_{d})~~g_{*,SM}(T_{KD})}{g_{*,SM}(T_{d})~~g_{*,s}(T_{KD})}\right]^{1/3}\, ,
\end{eqnarray}
where $T_{KD}$ is the temperature of kinetic decoupling (discussed in the next section) and $T_{s}$ is the temperature of the secluded neutrinos. Thus taking $T_{d} = 1$ TeV and assuming both the scalar and vector were in equilibrium at early times but not at kinetic decoupling, we find $T_{s}/T_{\gamma} \simeq 0.47$.
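The quoted value can be reproduced with a short entropy-bookkeeping sketch; the dark-sector degree-of-freedom assignments below are our assumptions about the field content (one Weyl secluded neutrino, the massive vector, and a real scalar), not values stated explicitly in the text:

```python
# Entropy-conservation estimate of T_s/T_gamma at kinetic decoupling.
g_dark_Td = 7 / 8 * 2 + 3 + 1   # nu_s (fermionic) + vector + real scalar = 5.75
g_dark_KD = 7 / 8 * 2           # only nu_s still relativistic at T_KD = 1.75
g_SM_Td = 106.75                # full Standard Model at T_d = 1 TeV
g_SM_KD = 3.36                  # photons + decoupled SM neutrinos at low T

# T_s/T_gamma = [g_dark(T_d) g_SM(T_KD) / (g_SM(T_d) g_dark(T_KD))]^(1/3)
ratio = (g_dark_Td * g_SM_KD / (g_SM_Td * g_dark_KD)) ** (1 / 3)
print(ratio)   # approximately 0.47
```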
{An important observation from} models of neutrinos with dark sector interactions is that immediately after decoupling the one-loop finite temperature self-energy contribution to the neutrino effective mass strongly suppresses the mixing angle between the active and {secluded} neutrinos~\cite{Hannestad:2013ana,Dasgupta:2013zpn}. {The effect is the direct analog of the standard model finite temperature potential for neutrinos~\cite{Notzold:1988fv}. This mixing angle suppression isolates} the dark sector from the standard model sector by reducing the rate of $\nu_{\rm active} - \nu_{\rm {s}}$ scattering, to the extent that it is much less than the expansion rate of the universe. This prevents {the secluded sector} from thermalizing with the Standard Model sector through neutrino scattering at temperatures above $15\,\rm MeV$. {As a result, the phenomenology of Big Bang Nucleosynthesis (BBN) is unaffected by the addition of secluded neutrino interactions.}
{A critical feature of the model we present in this paper which was missed by previous authors~\cite{Hannestad:2013ana,Dasgupta:2013zpn} is the recoupling of the secluded neutrinos to the active neutrino population at low (post BBN) temperatures. Once the temperature of the relic {secluded} neutrinos has dropped to a sufficiently low level, the thermal contribution to the {secluded} neutrino self energy will become small enough that it no longer suppresses the mixing angle between active and {secluded} neutrinos. Once the mixing angle becomes sufficiently large, the rate for 2 to 2 neutrino scattering through the dark sector interaction becomes fast enough to equilibrate the populations of secluded and active neutrinos.} This occurs roughly when the neutrino oscillation rate is comparable to the effective potential,
\begin{eqnarray}
\frac{\delta m^2}{2E_s}\cos 2\theta_s \simeq \frac{7\pi^2 g_\nu^2 E_s T_{s}^4}{45 m_\phi^4}\, ,
\end{eqnarray}
where we have employed the form of the effective potential valid at low-temperatures, $T_{s}, E_{s} \ll m_{\phi}$~\cite{Notzold:1988fv,Dasgupta:2013zpn}.
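A rough recoupling temperature follows from solving this condition with the thermal average $E_{s} \simeq 3.15\,T_{s}$; the coupling and mediator mass in the sketch below are hypothetical values within the allowed range, not fitted parameters:

```python
import math

def t_recouple(dm2_ev2=1.0, theta=0.1, g_nu=1e-4, m_phi_mev=1.0):
    # Solve  dm2 cos(2 theta) / (2 E_s) = 7 pi^2 g_nu^2 E_s T_s^4 / (45 m_phi^4)
    # for T_s, taking E_s = 3.15 T_s.  Returns T_s in keV.
    dm2 = dm2_ev2 * 1e-18          # eV^2 -> GeV^2
    m_phi = m_phi_mev * 1e-3       # MeV -> GeV
    t6 = (45 * dm2 * math.cos(2 * theta) * m_phi**4
          / (2 * 7 * math.pi**2 * g_nu**2 * 3.15**2))
    return t6 ** (1 / 6) * 1e6     # GeV -> keV

print(t_recouple())                # of order 100 keV for these inputs
```

For these illustrative inputs the recoupling temperature lands within the $5$--$500$ keV window quoted below.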
A number of anomalies in neutrino beam experiments provide a guide for our choice of neutrino mixing parameters~\cite{Aguilar:2001pj,Aguilar:2013ik}, which we take to be $\delta m^2 = 1~{\rm eV}^2$ and $\theta_s = 0.1$. This has the benefit of placing our neutrino mixing portal in a regime that future short-baseline neutrino experiments may be able to probe. For the range of $g_{\nu}$ and $m_\phi$ which satisfy the constraints placed on SIDM, along with the secluded-neutrino mixing parameters above, we find that the mixing angle suppression will cease in the temperature range $500\,{\rm keV} - 5\,{\rm keV}$. Without mixing angle suppression, the rate of scattering between active and secluded neutrinos through the dark sector interactions becomes much faster than the Hubble expansion rate.
Interestingly, the newly recoupled neutrino populations do not truly thermalize with each other. The temperature of decoupling is much lower than $2m_\phi$, so that neutrino number changing interactions are not possible. Further, recoupling occurs below the temperature at which the active neutrinos decouple from the SM plasma. Because no new entropy can enter or be generated within the dark-active coupled neutrino system, this leads to an equilibrium state which is dictated by detailed balance. The scattering processes which convert secluded neutrinos to active neutrinos and vice versa are both of the same magnitude in equilibrium, $\mathcal{O}\left( \sin^2\theta_s\right)$, so that the final state of the system is a sum of the Fermi-Dirac distributions of the secluded and active neutrinos weighted by the relative number of degrees of freedom in the secluded and active sectors. Though this process equilibrates the effective temperatures of all neutrinos, the spectrum of secluded (and active) neutrinos is distorted in such a way that a fractional value of a fully thermally populated neutrino species remains. We find that, to a good approximation, the scattering processes yield a detailed balance with $84\%$ of the number density of a fully thermalized neutrino distribution and $T_{s} \simeq T_{\nu} = (4/11)^{1/3} T_\gamma$, for our initial condition of a single secluded neutrino species which decouples at $T_{d} = 1\,\rm TeV$.
{This change in the effective temperature of the secluded neutrinos is a key factor in computing the correct kinetic decoupling temperature, $T_{KD}$, which we perform in the next section. As we will shortly see in Eq.~(\ref{T_KD}), the increased temperature of the secluded neutrinos is more impactful on $T_{KD}$ than the fractional thermal population engendered by detailed balance. As a result, our model predicts the effect of secluded neutrino scattering on the small scale structure of DM halos to be much more pronounced than previously calculated~\cite{Aarssen:2012fx,Dasgupta:2013zpn}.}
\section{Late kinetic decoupling: where the missing satellites went}
\label{sec:KD}
Even after the number-changing annihilation reactions cease being in equilibrium, kinetic equilibrium between DM and the {secluded neutrinos} can persist via elastic scattering, $X \nu_{s} \leftrightarrow X \nu_{s}$. This late kinetic decoupling delays the formation of the smallest protohalos and may offer a solution to the missing satellites problem~\cite{Aarssen:2012fx,Bringmann:2013vra,Dasgupta:2013zpn,Ko:2014bka,Chu:2014lja}.
\begin{figure*}
\begin{center}
\includegraphics[width=.45\textwidth]{Goldilocks3_v6.pdf}
\includegraphics[width=.45\textwidth]{Goldilocks2_v6.pdf}
\caption{In the {\it left} and {\it right} panels we fix the mediator mass to 10 MeV and 1 MeV, respectively. To address the cusp-versus-core and too-big-to-fail problems a parameter point should lie within the shaded blue region. Values to the left of the black curve are excluded by producing an over-abundance of DM, $\Omega_{DM} h^{2} > 0.12$. Regions to the right of the green curve are excluded by the Lyman-$\alpha$ requirement that $M_{halo} < 5 \times 10^{10}~M_{\odot}$, while regions to the right of the solid red line are excluded by having a mean free path $< 50$ Mpc. In the region to the right of the dashed red line, IceCube can perform source correlations at $3\sigma$ (see text for details). For reference, the dashed green curves are contours of constant $M_{halo} = 10^{5}~M_{\odot}, 10^{7}~M_{\odot}, 10^{9}~M_{\odot}$, from left to right. Arrows indicate the direction in which the parameter space is allowed.}
\label{fig:gold}
\end{center}
\end{figure*}
The momentum relaxation rate for this process can be roughly estimated as $\Gamma_{p} \sim \sigma_{X \nu}\, n_{s}\, \left(T_{KD}/m_{X}\right)$, where the factor in parentheses accounts for the fact that many neutrino scatterings are required to appreciably change the DM momentum. Then, using $n_{s} = \frac{3 }{2} \frac{\zeta(3)}{\pi^{2}} T_{s}^{3}$, one can estimate the temperature of kinetic decoupling by equating the momentum relaxation rate to the Hubble rate, $\Gamma_{p} = H$.
Though the above sketch is qualitatively correct, we employ a method along the lines of~\cite{Gondolo:2012vh} which incorporates the effects of Fermi-Dirac statistics and Pauli blocking. With this method we find the temperature of DM-{secluded} neutrino kinetic decoupling to be (see Appendix for details):
\begin{eqnarray}
T_{KD} = \left(\frac{104.58}{31 \pi^{3} N_{\nu}}\right)^{1/4} \frac{m_{\phi} g_{*}^{1/8}}{\sqrt{g_{X}g_{\nu}}} \left(\frac{m_{X}}{M_{Pl}}\right)^{1/4}\left(\frac{T_{\gamma}}{T_{s}}\right)^{3/2}_{KD},
\label{T_KD}
\end{eqnarray}
where $g_{*}$ is the effective number of relativistic energy degrees of freedom.
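As a sanity check, Eq.~(\ref{T_KD}) can be evaluated numerically. The parameter point below is an illustrative assumption (with $M_{Pl}=1.22\times10^{19}$ GeV and, after recoupling, $T_{s}=(4/11)^{1/3}T_{\gamma}$), not a preferred fit:

```python
import math

def t_kd_kev(m_x=10.0, m_phi=1e-3, g_x=0.063, g_nu=0.01,
             n_nu=1, g_star=3.36, m_pl=1.22e19):
    # Direct evaluation of the T_KD formula above; masses in GeV,
    # result in keV.  All parameter values are illustrative.
    pref = (104.58 / (31 * math.pi**3 * n_nu)) ** 0.25
    t_ratio = (11.0 / 4.0) ** 0.5          # (T_gamma / T_s)^(3/2)
    t_gev = (pref * m_phi * g_star**0.125 / math.sqrt(g_x * g_nu)
             * (m_x / m_pl) ** 0.25 * t_ratio)
    return t_gev * 1e6                     # GeV -> keV

print(t_kd_kev())                          # of order a keV for these inputs
```

Decoupling temperatures at or below the keV scale are exactly the regime relevant for suppressing small protohalos, as quantified next.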
{The final ingredient to make contact with the missing satellites problem is to relate the temperature of kinetic decoupling to the mass of the smallest DM protohalos. As observed in~\cite{Hofmann:2001bi,Loeb:2005pm,Bertschinger:2006nq}, as long as DM-SM interactions are in kinetic equilibrium, acoustic oscillations damp structure on sub-horizon scales. After kinetic decoupling, DM can stream freely out of over-densities and wipe out structure up to sub-free-streaming scales. The mass of the smallest protohalos is determined by the larger of these two effects. For the low kinetic-decoupling temperatures we consider here, acoustic damping is dominant~\cite{Bringmann:2009vf,Gondolo:2012vh,Cornell:2013rza}.} Thus the concomitant suppression in the halo mass function is simply estimated from the amount of DM in the horizon at temperature $T_{KD}$~\cite{Bringmann:2009vf}
{
\begin{eqnarray}
M_{halo} &=& \frac{4\pi}{3} \rho_{DM,0}\frac{g_{*,s}(T_{KD})}{g_{*,s}(T_{0})}\left(\frac{T_{KD}}{T_{0}}\right)^{3}H^{-3}(T_{KD}) \nonumber \\
&=&1.7 \times 10^{8}~M_{\odot} \left(\frac{{\rm keV}}{T_{KD}}\right)^{3}
\label{M_halo}
\end{eqnarray}
where $g_{*,s}$ is the effective number of entropy degrees of freedom, $T_{0}$ is the temperature of the present epoch, $\rho_{DM,0}$ is the present DM density and $M_{\odot} \simeq 1.1 \times 10^{57}$ GeV is the solar mass.} For halo cutoffs addressing the missing satellites problem, $10^{9-10}M_{\odot}$, this requires temperatures $T_{KD} \simeq 0.1-0.5$ keV.
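Inverting this relation gives the decoupling temperature required for a given protohalo cutoff; a minimal sketch:

```python
# Halo-mass cutoff vs kinetic-decoupling temperature, from
# M_halo = 1.7e8 M_sun * (keV / T_KD)^3 and its inversion.
def m_halo(t_kd_kev):
    return 1.7e8 * (1.0 / t_kd_kev) ** 3       # in solar masses

def t_kd_needed(m_halo_msun):
    return (1.7e8 / m_halo_msun) ** (1 / 3)    # in keV

for m in (1e9, 1e10):
    print(f"M_halo = {m:.0e} M_sun  ->  T_KD = {t_kd_needed(m):.2f} keV")
```

For cutoffs of $10^{9}$--$10^{10}\,M_{\odot}$ this yields decoupling temperatures of a few tenths of a keV, consistent with the window quoted above.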
\section{Implications for IceCube}
\label{sec:icecube}
\subsection{Basic argument}
We now come to the crucial stage of our presentation. The basic observation for what follows is that, if the dark force mediates DM-DM and DM-$\nu_{s}$ interactions, it should also mediate \emph{self-interactions of $\nu_{s}$}. If the secluded and active neutrinos appreciably mix, as in Eq.~(\ref{eq:Weinberg}), all neutrino mass eigenstates are endowed with this novel self-interaction. The result is anomalous scattering in collisions of neutrino particles. While a requisite laboratory neutrino-neutrino collider is presently lacking, the Universe provides an excellent setup for testing this scenario, with baselines on gigaparsec scales. The ``beam'' is supplied by the ultra-high-energy (UHE) neutrino flux recently observed at the IceCube experiment; the background, by the relic populations of secluded -- and, to a lesser extent, active -- neutrinos.
Indeed, consider a neutrino produced at an astrophysical source. Initially in an active state $\nu_{a}$, it quickly oscillates into other flavors, and eventually separates into several wave packets corresponding to the different mass eigenstates that $\nu_{a}$ projects onto. Let us assume a generic scenario in which the three mostly-active mass eigenstates, $\nu_{1,2,3}$, each have a similar, small amount of the secluded admixture, $\sim\sin\theta_{s}$. The fourth state, $\nu_{4}$, is then mostly made up of the secluded neutrino, $\sim\cos\theta_{s}$. The initial active state $\nu_{a}$ has a small probability, $\sin^{2}\theta_{s}$, of immediately projecting onto $\nu_{4}$. This component essentially disappears from the flux, having only a probability of $\sin^{2}\theta_{s}$ to interact as an active neutrino in the IceCube detector (see below).
A more interesting fate awaits the mostly-active eigenstates, $\nu_{1,2,3}$. While propagating through the relic background of $\nu_{4}$, these UHE neutrinos are subject to \emph{flavor-dependent} interactions. Specifically, only the $\nu_{s}$ components of $\nu_{1,2,3}$ enter the interactions. The final state of the scattering then consists of a $\nu_{s}-\nu_{s}$ pair, each of which most likely projects onto the $\nu_{4}$ state. Thus, the combination of dark-force-mediated scattering and the mixing-induced oscillations effectively converts active neutrinos into secluded ones, depleting the UHE neutrino flux.
Three important observations are in order. First, the above discussion assumes that the secluded neutrino is present at an appreciable level in all three states, $\nu_{1,2,3}$. If one or more of these states do not contain $\nu_{s}$, they are not subject to the scattering process, resulting in only partial suppression of the flux. Second, the active relic neutrino background also scatters the UHE flux, but with the probability suppressed by an additional factor of $\sin^{2}\theta_{s}$. Lastly, one may ask whether the active neutrino flux can be regenerated by subsequent scattering. After all, each subsequent event has a $\sim\sin^{2}\theta_{s}$ probability of producing $\nu_{1,2,3}$, and the $\nu_{4}$ neutrinos are subject to more frequent interactions with the relic background than $\nu_{1,2,3}$, owing to their larger $\nu_{s}$ content. It is important to note, however, that such a regenerated flux would have significantly lower energies, since each scattering event distributes the energy of the incident UHE neutrino between the two daughter states. Effectively, this flux disappears from the ultra-high-energy spectrum.
The efficiency of the scattering depends on the ratio of the squared center-of-mass energy $s=2E_{\nu}m_{s}$, where $m_{s}$ is the mass of the mostly-secluded state $\nu_{4}$, to the squared mediator mass $m_{\phi}^{2}$. Specifically, for a $t$-channel exchange, one has
\begin{eqnarray}
\sigma^t_{\nu\nu}(z) = \begin{cases}
\sin^2\theta_s \frac{s g_s^4}{2\pi m_\phi^4},\ &s \ll m_\phi^2\, , \\
\sin^2\theta_s \frac{3g_s^4}{4\pi m_\phi^2},\ &s\gg m_\phi^2 \,.\end{cases}
\end{eqnarray}
Below the mediator mass, the interaction becomes effectively contact and, just like for the SM Fermi interaction, the strength decreases with decreasing energy. For strong absorption, we need to be in the regime $s \gtrsim m_{\phi}^{2}$.
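A minimal sketch of these two limits (our own illustrative code; the symbols follow the equation above):

```python
import math

def sigma_t(s, m_phi, g_s, sin2_theta_s):
    """Piecewise limits of the t-channel nu-nu cross section (natural units),
    following the two regimes quoted above."""
    if s < m_phi**2:
        # contact regime: sigma grows with s, like a Fermi-type interaction
        return sin2_theta_s * s * g_s**4 / (2 * math.pi * m_phi**4)
    # continuum regime: sigma is set by the mediator mass alone
    return sin2_theta_s * 3 * g_s**4 / (4 * math.pi * m_phi**2)
```

For a 100 TeV neutrino on a $\sim 1$ eV background state, $s = 2E_{\nu}m_{s} \approx 200~{\rm MeV}^{2}$, so an MeV-scale mediator indeed sits near the $s \gtrsim m_{\phi}^{2}$ boundary.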
Remarkably, this is indeed realized for us. For a $\sim$ {100 TeV} astrophysical neutrino and a $\sim$1 eV secluded neutrino mass, as motivated by the short-baseline anomalies~\cite{Aguilar:2001pj,Aguilar:2013ik}, the center-of-mass energy is {$\sim 10$ MeV}, which is \emph{exactly the scale of the mediator masses favored by the velocity-dependent DM self-scattering in galactic cores}. In this case, the $t$-channel cross section is {$\sim9300$ fm$^{2}$ (1 MeV$/m_{\phi})^{2} g_{s}^{4}\sin^{2}\theta_{s}$} and the corresponding
mean free path (MFP) assuming the relic number density of ${\cal O}(10^{2})$ cm$^{-3}$ is {$\sim 30$ pc $ (m_{\phi}/\mbox{1 MeV})^{2} g_{s}^{-4}\sin^{-2}\theta_{s}$}.
Thus, the UHE neutrinos at IceCube provide an excellent probe of our scenario.
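The quoted mean free path can be checked with a one-line estimate (illustrative code; the inputs are the numbers quoted above):

```python
# Hedged back-of-envelope check: sigma ~ 9300 fm^2 (for m_phi = 1 MeV and
# g_s = sin(theta_s) = 1) against a relic density of ~100 cm^-3.
SIGMA_FM2 = 9300.0        # t-channel cross section quoted above, in fm^2
FM2_TO_CM2 = 1e-26
N_RELIC = 100.0           # relic secluded-neutrino density, cm^-3
CM_PER_PC = 3.0857e18

mfp_pc = 1.0 / (N_RELIC * SIGMA_FM2 * FM2_TO_CM2) / CM_PER_PC
print(f"mean free path ~ {mfp_pc:.0f} pc")
```

This gives a few tens of parsecs, in line with the $\sim 30$ pc quoted in the text; the MFP then scales as $(m_{\phi}/1~{\rm MeV})^{2} g_{s}^{-4}\sin^{-2}\theta_{s}$.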
In their recently released three-year dataset, the IceCube collaboration reported 37 events above the atmospheric neutrino background with energies between 30 and 2000 TeV, with a significance of 5.7$\sigma$~\cite{Aartsen:2014uq}. The origin of these high-energy neutrino events remains unknown, though they appear to be isotropically distributed, suggesting an extragalactic origin. If this is the case, the MFP of high-energy neutrinos as they scatter on the C$\nu$B cannot be too short, as most of the flux originating at cosmological distances would not reach us. This can be seen immediately from Fig.~\ref{fig:absorption}, which shows the fraction of events at IceCube expected to originate within a given redshift, $z$, assuming SM neutrino interactions and a source distribution that tracks the star formation history of the universe.
Even if one boosts the flux emitted by the nearby sources by a large factor, the observed flux would look highly anisotropic. This consideration leads to an {\it upper bound} on the coupling in $\nuDM$.
Indeed, taking $\sin^{2}\theta_{s}=0.01$ and demanding that the MFP be at least 50 Mpc for isotropy considerations, {we find $g_{s}\lesssim 0.09 (m_{\phi}/\mbox{1 MeV})^{1/2}$, or $\alpha_{s}=g_{s}^{2}/4\pi\lesssim 6\times 10^{-4} (m_{\phi}/\mbox{1 MeV})$,} a significant constraint. This constraint is illustrated in Fig.~\ref{fig:gold}: it is responsible for the right cutoff on the allowed regions. Similar considerations were made for toy models~\cite{Ioka:2014kca,Ng:2014pca}.
For couplings below this cutoff, we have an interesting possibility of \emph{probing} the $\nuDM$ scenario with future IceCube data. There are at least two smoking-gun signatures to look for in this case. The first is absorption in certain energy bands due to the $s$-channel resonance. The $s$-channel cross section is suppressed by only two powers of the coupling constant, being connected to the production of the physical particle in the process $\nu_{s}\nu_{s}\rightarrow\phi$. Thanks to the redshifting of the energy, the $s$-channel absorption could result in a gap in the energy spectrum. Such a gap, if confirmed with enough statistics, would be difficult to ascribe to physics at the source.
\begin{figure}
\includegraphics[width=.45\textwidth]{absorption_effect_on_redshift.pdf}
\caption{ Here we illustrate the effect of neutrino scattering on the fraction of events at IceCube originating within a given redshift, $z$. Notice that when absorption is present, a larger fraction of events originate from nearby, and may be more easily correlated with known sources.}
\label{fig:absorption}
\end{figure}
The second is the possibility that the observed UHE neutrinos could be correlated with known nearby sources. Indeed, correlation with distant sources is not expected, since most sources at cosmological distances (redshift of $z\sim1-5$) are not in any catalog. If nearby source correlations were to be observed, one would conclude that a large fraction of the flux is \emph{missing}. The argument is that, on generic grounds, one expects sources to follow a distribution similar to the star-formation history of the universe. Then, as seen in Fig.~\ref{fig:absorption}, the population at $z\sim1-5$ should contribute most of the flux. For example, if the observed neutrinos were to be correlated with sources lying within $z \lesssim 0.2$, distant sources would be expected to contribute some 50 times as much flux. Its absence would then imply \emph{a neutrino redshift horizon}, pointing towards the $\nuDM$ scenario. We next consider this scenario in some detail.
\subsection{Detailed example}
\begin{figure*}
\begin{center}
\includegraphics[width=.45\textwidth]{Goldylocks_gx_mx_projection_01_sintheta_v7.pdf}
\includegraphics[width=.45\textwidth]{Goldylocks_gx_mphi_projection_01_sintheta_v7.pdf}
\caption{ {\it Left}: 2D projection along the $m_\phi$ axis showing the regions of the $m_X$-$g_X$ parameter space which may potentially solve the dark matter structure problems while producing identifiable absorption features for the IceCube experiment. {\it Right}: the corresponding 2D projection along the $m_X$ axis for the $m_\phi$-$g_X$ parameter space. {Blue indicates the regime where the C$\nu$B is opaque to high energy neutrinos on distances less than $50\,\rm Mpc$; orange indicates regimes where the C$\nu$B is opaque to high energy neutrinos on distances short enough that absorption might be detected via IceCube source correlations at the level of $3\sigma$ statistical significance (using the 3 year data set~\cite{Aartsen:2014uq}); purple indicates the regime where the absorption of high energy neutrinos may alter the IceCube observed spectra without creating a significant source correlation; red indicates the regime where the absorption of high energy neutrinos reconciles the over-abundance of IceCube events correlated with BL Lacs at $z < 0.212$; and dark grey regions show the regime where the C$\nu$B is optically thin out to $z=10$.}}
\label{fig:projections}
\end{center}
\end{figure*}
Let us now proceed to a more detailed examination of the modification of the neutrino optical depth in $\nuDM$. Assume a relic background of {secluded} neutrinos with a number density of $n_{\nu_s}\vert_{z=0} = 94\,{\rm cm}^{-3}$ for our example of $T_{d} = 1\,\rm TeV$.
We will use the criterion that the optical depth
for a high energy neutrino to scatter with the C$\nu$B be greater than unity. We compute the optical depth as follows,
\begin{eqnarray}
\tau = \int_0^{r_p} n_{\nu_s}\left( z\right) \sigma_{\nu\nu}\left( z\right) dr_p = \int_0^{z_{i}} \frac{ c n_{\nu_s}\left( z\right) \sigma_{\nu\nu}\left( z\right) dz}{(1+z)H(z)}\, ,\nonumber \\
\end{eqnarray}
where $r_p$ is the proper distance along the neutrino world line, $z$ is the redshift, $H(z)$ is the Hubble expansion rate, and the number density of background secluded neutrinos is $n_{\nu_s} = n_{\nu_s}\vert_{z=0} (1+z)^3$. Contributions to $\sigma_{\nu\nu}\left( z\right)$ will come from resonant $s$-channel scattering and from $t$-channel scattering. The resonant scattering, as explained above, could yield distinct absorption bands, which would be smoking-gun signatures of our mechanism. At the same time, the locations of these bands depend sensitively on both the mediator mass and the absolute neutrino mass scale. Hence, whether or not resonant absorption features will appear in the IceCube data cannot be reliably predicted. Therefore we choose to use the $t$-channel scattering alone to constrain what portions of the parameter space may impact IceCube observations. For concreteness, we compute the bounds for our different scattering regimes using $E_\nu(z=0) = 63\,\rm TeV$, which corresponds to the lowest IceCube event energy which has been claimed to correlate with a gamma ray source at known redshift~\cite{Padovani:2014uq}.
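The optical depth integral above can be evaluated numerically; the sketch below (our own illustrative code, with placeholder cosmological parameters rather than the paper's exact inputs) assumes a flat $\Lambda$CDM background and the contact-limit $t$-channel cross section, which scales as $\sigma \propto s \propto (1+z)$:

```python
import math

H0 = 2.2e-18              # Hubble rate today, s^-1 (~68 km/s/Mpc); illustrative
OMEGA_M, OMEGA_L = 0.31, 0.69
C_CM = 2.998e10           # speed of light, cm/s
N0 = 94.0                 # relic secluded-neutrino density today, cm^-3

def hubble(z):
    """Flat LambdaCDM expansion rate."""
    return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def tau(z_i, sigma0, n_steps=2000):
    """Optical depth to redshift z_i for sigma(z) = sigma0 * (1 + z)
    (contact limit) and n(z) = N0 * (1 + z)^3, via midpoint integration."""
    dz = z_i / n_steps
    total = 0.0
    for k in range(n_steps):
        z = (k + 0.5) * dz
        n_z = N0 * (1 + z)**3
        sigma_z = sigma0 * (1 + z)
        total += C_CM * n_z * sigma_z / ((1 + z) * hubble(z)) * dz
    return total
```

The redshift horizon for a given coupling is then the $z$ at which $\tau$ crosses unity.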
The first scattering regime we define is the Optically Thin regime, where $\tau < 1$ out to a redshift of $z = 10$. This regime is unlikely to have an observable effect on the IceCube signal, as the potential sources of TeV--PeV neutrinos (such as AGNs, GRBs, or star-forming galaxies) have redshift distributions which typically peak at $z < 4$. The second regime is the MFP $< 50\,\rm Mpc$ regime, where the optical depth for neutrino absorption reaches $\tau = 1$ at a distance of $50\,\rm Mpc$ or less. In this regime IceCube sources can be directly correlated with local large scale structure around or within the Milky Way galaxy.
The third regime we define is the IceCube $3\sigma$ correlation projected limit. In this regime the absorption of high energy neutrinos is sufficiently strong that the three-year data release from IceCube~\cite{Aartsen:2014uq} can be used to detect a discrepancy between the low-redshift ($z \leq 0.2$) distribution of correlated IceCube sources and the redshift distribution of the same sources as identified with photons. To compute this limit we take the median number of events above background in the IceCube three-year data set, 20, and assume that the redshift distribution of IceCube neutrino sources tracks the star formation rate (which is likely for many potential sources~\cite{Aartsen:2014uq}). This yields the expectation that $0.34$ IceCube neutrino events should originate within the range $0 < z < 0.2$. We then compute the effects of absorption in our model on the redshift distribution of neutrinos emitted from sources distributed according to the star formation rate, adjusting the overall flux normalization such that the IceCube experiment is expected to observe 20 events {\it post-absorption}. The post-absorption expected number of events in the redshift range $0 < z < 0.2$ is then tallied, and the significance of the discrepancy between the absorbed and non-absorbed expectations is computed. Requiring the discrepancy to be at the $3\sigma$ confidence level or greater yields the constraints on $m_\phi$ and $g_\nu$ shown in Figure~\ref{fig:projections}, as well as the projection that the IceCube-detectable redshift horizon (beyond which $\tau>1$) for $t$-channel absorption is $z \leq 0.70\,(0.64)$ in the contact (continuum) interaction limit.
The fourth regime we define is the Isotropic Source regime, where the optical depth for high energy neutrino absorption exceeds unity only for redshifts greater than $z=0.70\,(0.64)$ in the contact (continuum) interaction limit. The absorption of high energy neutrinos in this regime may yet alter the spectral index of the high energy neutrino spectrum or create absorption lines, but it will not be detectable through correlating IceCube events with astrophysical sources.
The recent results of~\cite{Padovani:2014uq} are of further interest, as the authors point out that there is a significant correlation between several of the IceCube neutrino events and BL Lacs (3 events correlate with BL Lacs of known redshift, 4 events correlate with BL Lacs of unknown redshift). When normalizing the BL Lac signal to the total IceCube neutrino flux, we find that the number of neutrinos correlated with BL Lacs located closer than $z\sim0.2$ exceeds expectation by an order of magnitude. A possible explanation of this discrepancy is that the redshift distribution of BL Lacs capable of generating IceCube-energy neutrinos differs radically from the redshift distribution of all BL Lacs due to some unknown mechanism. However, our model suggests that the mean free path of high energy neutrinos may not extend to high redshift due to absorption, potentially biasing the sources correlated with IceCube neutrino events to low redshift. Beginning from this perspective, we define the BL Lac Source Correlation regime by supposing that absorption of high energy neutrinos on the C$\nu$B reconciles the over-abundance of correlated neutrino events at low redshift. The mean free path of high energy neutrinos is shortened by scattering with secluded neutrinos in the C$\nu$B such that the correlation of 3 IceCube neutrinos with BL Lac sources at redshift less than $z = 0.212$ (the largest redshift of a source correlated with an IceCube neutrino event~\cite{Padovani:2014uq}) is consistent with the total flux of neutrinos observed by IceCube originating from all BL Lacs in the observable universe. We also vary the expected number of background events in the IceCube three-year data set by $1\sigma$~\cite{Aartsen:2014uq} to establish upper and lower limits on the possible number of neutrinos originating from BL Lacs beyond a redshift of $z=0.212$.
We show the results of these bounds in Figures~\ref{fig:gold} and~\ref{fig:projections}, which for $t$-channel scattering correspond to an upper bound on the redshift horizon for high energy neutrinos of $z = 1.0\,(0.92)$ and a lower bound of $z = 0.42\,(0.38)$ in the contact (continuum) limit.
\section{Discussion of Results}
\label{sec:discuss}
In Figs.~\ref{fig:gold} and~\ref{fig:projections} we summarize our main results. In each panel of Fig.~\ref{fig:gold} we fix the mediator mass and keep $g_{X} = g_{\nu}$ throughout. We include the thermal relic constraint, and exclude values of $g_{X}$ too small to yield a sufficiently large annihilation cross section (black curve). Strictly speaking, this region of parameter space is excluded only in the minimal model considered here, since one could easily add new interactions to $X$ to open new annihilation channels. Next, we include DM self-interactions at the level of $\sigma_{XX}/m_{X} = 1-10~{\rm cm}^{2}{\rm g}^{-1}$ at dwarf scales where the DM dispersion is small, $v = 10~{\rm km}/{\rm s}$ (shaded blue). Self-interactions at this level have been argued to provide a potential resolution of both the ``too-big-to-fail'' and ``cusp-versus-core'' problems.
In addition, one may also wish to address the missing satellites problem. To this end, in Fig.~\ref{fig:gold} we display curves of constant DM protohalo mass (green dashed curves). The observed lack of satellites in the Milky Way may be indicative of cutoffs around $10^{9-10}~M_{\odot}$. It is known, however, that the suppression of DM substructure cannot be arbitrarily large. Lyman-$\alpha$ constraints are among the most restrictive and require $M_{halo} \lesssim 5\times 10^{10}~M_{\odot}$ (solid green)~\cite{SommerLarsen:1999jx,Tittley:1999jt}.
{Significantly, the original region for the viable solution of all dark matter structure problems via hidden sector interactions with neutrinos~\cite{Aarssen:2012fx} persists within the larger parameter space allowed by our model. Because the authors of~\cite{Aarssen:2012fx} explicitly considered only symmetric thermal relic dark matter which self-interacts in the classical scattering limit, their solution exists in the upper right hand portion of the plots in Figure~\ref{fig:gold}. This subset of the allowed parameter space is almost entirely within the MFP$<50\,\rm Mpc$ regime for the IceCube experiment. As such, the parameter space allowed by~\cite{Aarssen:2012fx} results in vigorous absorption of high energy neutrinos by the C$\nu$B, which can be readily probed by the IceCube detector.}
To summarize the allowed parameter space, we have projected the results of the calculations done for Fig.~\ref{fig:gold} onto the $m_\phi$ and $m_X$ planes in Fig.~\ref{fig:projections}. These projections show that a very large volume of the parameter space resolves the issues associated with dark matter structure while also producing absorption of high energy neutrinos by the C$\nu$B. Most importantly, the right hand panel of Fig.~\ref{fig:projections} shows that the bulk of the regime where IceCube might observe either isotropic or local high energy neutrino sources is also a viable solution for all of the dark matter structure problems. This means that the model predicts, at the very minimum, a redshift horizon beyond which high energy neutrinos cannot propagate to the IceCube detector without scattering and becoming undetectable. Most of the parameter space produces an observable feature in the IceCube signal, namely that the redshift distribution of high energy neutrino sources does not match the redshift distribution of observed events in the IceCube detector. Furthermore, the possibility exists (depending on the absolute neutrino mass scale) that absorption lines from resonant production of the mediator in $\nu$-$\nu$ collisions may be observable at the IceCube experiment.
Should the primary source of neutrinos in the IceCube detector ultimately prove to be BL Lacs, and should the correlation found in~\cite{Padovani:2014uq} persist in the future, we could infer data-preferred values of the strength of the secluded sector self-interactions. From the BL Lac Source Correlation contour in Fig.~\ref{fig:projections}, we estimate the strength of the secluded sector interaction in the contact limit to be $G_{\rm eff} = g_\nu^2/m_\phi^2 = (9 \pm 3) \times 10 ^{-5}\, {\rm MeV}^{-2}$, using $\sin\theta_s = 0.1$.
\section{Conclusions}
We have studied the consequences of endowing DM with new interactions with neutrinos. This is a generic possibility in models where secluded neutrinos and DM are charged under a new $U(1)$ gauge symmetry. The mass mixing with active neutrinos arises from unspecified high-scale physics, which we effectively parameterize as a dimension-five operator connecting the two sectors. The new light gauge boson in this model simultaneously provides an annihilation channel for DM, yields late kinetic decoupling of DM and neutrinos, gives strong self-interactions at dwarf galaxy scales, and modifies the mean free path of high-energy neutrinos. Of further importance, DM is not self-conjugate in such a scheme, allowing the DM relic abundance to be asymmetric. This admits much more strongly coupled DM-DM and $\nu_s$-DM interactions, expanding the regime where such a model could simultaneously solve all of the dark matter structure problems and lead to novel effects at IceCube. On the neutrino side, the new interactions discussed here may have additional cosmological implications, such as their impact on the CMB by delaying the onset of free-streaming~\cite{Bell:2005dr,Friedland:2007vv,Cyr-Racine:2013jua}. In addition, although present data have not yet reached thermal relic sensitivity~\cite{Abbasi:2011eq}, future IceCube limits on DM annihilation into neutrinos will be a further test of this model, and can be relevant even in the case of asymmetric annihilation~\cite{Graesser:2011wi,Bell:2014xta}.
The neutrino mixing portal which connects the dark and SM sectors at low temperature has a range of beneficial effects. Mixing-angle suppression~\cite{Hannestad:2013ana,Dasgupta:2013zpn} precludes such models from interfering with BBN. We have shown that late-time scattering via the dark mediator recouples the active and secluded neutrino populations, leading to a larger and hotter population of relic secluded neutrinos than previously thought~\cite{Dasgupta:2013zpn}. This late-time recoupling of neutrinos further increases the volume of the parameter space which may explain the missing satellites problem.
The most important feature of our model is $\nu$-$\nu$ scattering through the dark sector interaction, which produces observable consequences in the high energy neutrino flux observed by the IceCube experiment; IceCube data may thus be able to constrain or confirm $\nuDM$. The presence of secluded neutrinos in the C$\nu$B modifies the mean free path of high energy neutrinos, since they can scatter on the relic background of secluded neutrinos via mixing. The parameter space which reconciles issues with dark matter structure lies almost entirely within the regime which might be tested as IceCube's exposure time increases. The correlation of IceCube neutrino events with BL Lac sources at low redshift may be the first evidence of the absorption of high energy neutrinos through this mechanism. This represents a novel and unique opportunity to probe the dynamics of the dark sector using the IceCube neutrino telescope.
\vspace{.5cm}
\acknowledgements
{We would like to thank George Fuller and Kris Sigurdson for many helpful discussions and Bill Louis for comments on the manuscript. Results from this work have been presented at numerous conferences over the last year, including INFO13, MIAMI13, Neutrino 2014, SF14, Bright Ideas on Dark Matter at the University of Southern Denmark, and the NIAPP 2014 topical workshop. This work has been supported by the CP3-Origins centre which is partially funded by the Danish National Research Foundation, grant number DNRF90, the DOE Office of Science and the U.C. Office of the President in conjunction with the LDRD Program at LANL. We would also like to thank the Munich Institute for Astro- and Particle Physics (MIAPP) and the Kavli Institute for Theoretical Physics at UCSB for their hospitality.}
\section*{Appendix: Kinetic Decoupling}
\vspace{1cm}
\subsection{Simplified Approach}
The matrix element for $ X \nu \rightarrow X \nu$ is (averaging over initial DM spins)
\begin{eqnarray}
\langle |\mathscr{M} |^{2}\rangle = \frac{4g_{X}^{2}g_{\nu}^{2}}{\left(t- m_{\phi}^{2}\right)^{2}} \left[ (s-m_{X}^{2})^{2} + (u- m_{X}^{2})^{2} + 2 m_{X}^{2} t\right] \nonumber
\end{eqnarray}
Note that using $s+t+u = 2m_{X}^{2}$ we have $u - m_{X}^{2} = -2m_{X}E_{\nu}-t$.
The cross section is $d \sigma/d \cos \theta =\frac{ \langle |\mathscr{M} |^{2}\rangle }{32 \pi s}$. In the limit $t =0$, with $s = m_{X}^{2} + 2m_{X}E_{\nu}$, we see that $\langle |\mathscr{M} |^{2}\rangle = \frac{32 g_{X}^{2}g_{\nu}^{2}}{m_{\phi}^{4}}m_{X}^{2} E_{\nu}^{2}$.
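The kinematic identities used here can be cross-checked numerically (illustrative code, not from the paper):

```python
import random

# Hedged numeric cross-check of the kinematics quoted above: with
# s + t + u = 2 m_X^2 and s = m_X^2 + 2 m_X E_nu, the identity
# u - m_X^2 = -2 m_X E_nu - t follows, and at t = 0 the bracket in <|M|^2>
# reduces to 8 m_X^2 E_nu^2, reproducing the quoted
# |M|^2 = 32 g_X^2 g_nu^2 m_X^2 E_nu^2 / m_phi^4.
random.seed(0)
for _ in range(100):
    m_x = random.uniform(1.0, 10.0)   # DM mass (arbitrary units)
    e_nu = random.uniform(0.1, 5.0)   # neutrino energy
    t = random.uniform(-3.0, 0.0)     # momentum transfer squared
    s = m_x**2 + 2 * m_x * e_nu
    u = 2 * m_x**2 - s - t            # from s + t + u = 2 m_X^2
    assert abs((u - m_x**2) - (-2 * m_x * e_nu - t)) < 1e-9

# t = 0 limit of the bracket in <|M|^2>:
m_x, e_nu = 2.0, 3.0
s = m_x**2 + 2 * m_x * e_nu
u = 2 * m_x**2 - s
bracket = (s - m_x**2)**2 + (u - m_x**2)**2  # the 2 m_X^2 t term vanishes at t = 0
assert abs(bracket - 8 * m_x**2 * e_nu**2) < 1e-9
```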
The temperature of kinetic decoupling can be found by solving $H = \Gamma_{mom} = n_{s} \sigma_{X\nu}\sqrt{\frac{3}{2}} \left(\frac{T_{s}}{m_{X}}\right)$, where at $t=0$ the cross section is $\sigma_{X\nu} = \frac{g_{X}^{2}g_{\nu}^{2}E_{\nu}^{2}}{\pi m_{\phi}^{4}}$. Using $H = 1.66 \sqrt{g_{*}} T_{\gamma}^{2}/M_{Pl}$, $n_{s} = \frac{3}{2} \frac{\zeta(3)}{\pi^{2}} N_{\nu} T_{s}^{3}$, and $\langle E_{\nu}^{2} \rangle = 12.9\, T^{2}$, we obtain
\begin{eqnarray}
T_{KD} = \left(\frac{\pi^{3} 1.66 (2/3)^{3/2}}{12.9 \zeta(3) N_{\nu}}\right)^{1/4} \frac{m_{\phi} g_{*}^{1/8}}{\sqrt{g_{X}g_{\nu}}} \left(\frac{m_{X}}{M_{Pl}}\right)^{1/4}\left(\frac{T_{\gamma}}{T_{s}}\right)^{3/2}_{KD}. \nonumber
\end{eqnarray}
\subsection{Details}
The above approach is not fully accurate. A better method, which takes into account Fermi-Dirac statistics and Pauli blocking, is the following. We follow the method of~\cite{Gondolo:2012vh} and write the momentum-relaxation rate as
\begin{eqnarray} \gamma(T) = \frac{g_{SM}}{6 m_{X}T} \int_{0}^{\infty} \frac{d^{3}p}{(2\pi)^{3}} f(p) \left(1-f(p)\right) 8 p^{4} \left. \frac{d \sigma}{dt}\right\vert_{t=0}
\end{eqnarray}
where $\frac{d\sigma}{dt} = \frac{1}{64 \pi m_{X}^{2} p^{2}} \langle |\mathscr{M} |^{2}\rangle$.
\begin{eqnarray}
\left. \frac{d \sigma}{dt}\right\vert_{t=0} = \frac{g_{X}^{2}g_{\nu}^{2}}{2\pi m_{\phi}^{4}}
\end{eqnarray}
Using $\int_{0}^{\infty} p^{6} f(p) \left(1-f(p)\right)dp = \frac{31 \pi^{6}}{42} T^{7}$ and $g_{SM} =2 N_{\nu}$, we find
\begin{eqnarray}
\gamma(T) = \frac{31 N_{\nu}\pi^{3}}{63} \frac{g_{X}^{2}g_{\nu}^{2}}{m_{X}m_{\phi}^{4}} T_{\nu}^{6}
\end{eqnarray}
Equating this to the Hubble rate and solving for $T$ yields
\begin{eqnarray}
T_{KD} &=& \left(\frac{1.66 \cdot 63}{31 \pi^{3} N_{\nu}}\right)^{1/4} \frac{m_{\phi} g_{*}^{1/8}}{\sqrt{g_{X}g_{\nu}}} \left(\frac{m_{X}}{M_{Pl}}\right)^{1/4}\left(\frac{T_{\gamma}}{T_{s}}\right)^{3/2}_{KD}. \nonumber \\
&=& \frac{0.067~{\rm keV}}{N_{\nu}^{1/4}(g_{X}g_{\nu})^{1/2}} \left(\frac{T_{\gamma}}{T_{s}}\right)^{3/2} \left(\frac{m_{X}}{{\rm TeV}}\right)^{1/4} \left(\frac{m_{\phi}}{1~{\rm MeV}}\right) \nonumber,
\end{eqnarray}
which is in agreement with~\cite{Aarssen:2012fx}.
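As a hedged numerical check of the final expression (our own code; $M_{Pl} = 1.22\times10^{19}$ GeV and the illustrative value $g_{*}\simeq 4$ near keV temperatures are assumptions):

```python
import math

# Evaluate the prefactor of the detailed T_KD formula above for
# N_nu = 1, g_X = g_nu = 1, T_gamma = T_s, m_X = 1 TeV, m_phi = 1 MeV.
N_NU = 1.0
G_STAR = 4.0              # illustrative degrees of freedom near T ~ keV
M_X_MEV = 1.0e6           # 1 TeV in MeV
M_PHI_MEV = 1.0
M_PL_MEV = 1.22e22        # 1.22e19 GeV in MeV

prefactor = (1.66 * 63 / (31 * math.pi**3 * N_NU)) ** 0.25
t_kd_mev = prefactor * M_PHI_MEV * G_STAR**0.125 * (M_X_MEV / M_PL_MEV) ** 0.25
print(f"T_KD ~ {t_kd_mev * 1e3:.3f} keV")
```

This lands within a few percent of the quoted $0.067$ keV normalization.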
\section{Introduction}
A quantum spin liquid is a many-body singlet state that breaks neither the global spin rotation symmetry nor the lattice symmetry. In two dimensions, spin liquids may support fractional excitations. A gapped spin liquid has degenerate ground states on a torus and carries topological order.\cite{Wen89, WenNiu90, HCJiangNP12} It was proposed that spin liquids may exist in geometrically frustrated spin-1/2 anti-ferromagnetic systems owing to strong quantum fluctuations.\cite{Anderson1987} Searching for spin liquid states has attracted much research interest in condensed matter physics, from both the experimental\cite{Triangle03, Dmit08,SHLee07, THHan12} and theoretical sides.\cite{Motrunich05,JiangKagome08, Ran08, WhiteKagome11,DMRGKagome12,CSLKagome12,CSLSheng14}
Spin liquid states are also called resonating valence bond (RVB) states\cite{Anderson1987}, where a valence bond stands for a two-spin singlet just like a Cooper pair in $s$-wave superconductors and a RVB state is a superposition of all possible valence bond covering configurations. The validity of identifying a spin liquid state as a RVB state relies on a mathematical theorem, which says that the valence bond covering configurations, sometimes called singlet pair states (SPSs), are overcomplete to span the Hilbert space of many-body-singlet states.\cite{Caspers89} More precisely, if the system is arbitrarily divided into two subsystems A and B with equal number of spins, then the overcomplete bases can be restricted to the valence bond covering configurations where each valence bond contains one spin from A and one from B. The RVB picture yields several practical approaches to construct spin liquid states, such as Gutzwiller projection of BCS-type mean field wavefunctions\cite{Gros88}, Liang-Doucot-Anderson variational method\cite{LiangDoucotAnderson88} and tensor network (or projected entangled pair states) construction.\cite{PEPsRVB12} These approaches have been widely used to study antiferromagnetic spin models and high temperature superconductors.
On the other hand, spin-1 systems also have strong quantum fluctuations, and consequently spin liquid states may also exist in spin-1 frustrated anti-ferromagnetic systems. For example, the 1-dimensional (1D) gapped $S=1$ spin liquid --- the Haldane phase\cite{HaldanePLA1983, HaldanePRL1983} --- has attracted much research interest. Candidates for 2D $S=1$ spin liquids have also been reported in the literature\cite{NiGa2S4_05, Cheng-Spin-1SL2011, JiangS1SL09,LiuZhouNg2011_Spin-1SL, Xu-S1SL2012,PALee12Gutz}. An interesting aspect of $S=1$ anti-ferromagnets is that there may exist non-Abelian spin liquids which support non-Abelian anyon excitations.\cite{Senthil_NonAbelian11}
There is a similarity between spin-1/2 and spin-1 systems: $S=1/2$ is the fundamental representation of the $SU(2)$ group, whereas $S=1$ is the fundamental representation of the $SO(3)$ group. So it is natural to expect that $S=1$ spin liquid states are also RVB states, where a valence bond represents a singlet pair formed by two $S=1$ spins. In the remaining part of this paper, we show that this is indeed the case. We prove that for $S=1$ systems, singlet pair states are also overcomplete bases spanning the Hilbert space of many-body singlet states. Compared with spin-1/2 systems, there are two differences: first, the number of spins in a many-body singlet can be either even or odd; second, the subsystem-singlet-pair states, where each singlet pair connects one subsystem to the other, are no longer complete bases.
The above conclusion ensures that some methods used to study $S=1/2$ spin liquids can also be applied to $S=1$ systems. For instance, the Gutzwiller projection approach based on the fermionic slave particle representation provides very good trial wave functions for $S=1$ bilinear-biquadratic anti-ferromagnetic Heisenberg chains.\cite{LiuZhouTuWenNg2012, LiuZhouNgExt14} It was shown that the Haldane phase is a long-range RVB state (from this point of view, the resonating loop states\cite{YaoS1SL10} in 2D are also long-range RVB states) while the dimer phase is a short-range RVB state. Another example is the tensor network approach, which shows that the extremely short-ranged RVB state for $S=1$ spins on the Kagome lattice carries $Z_2$ topological order.\cite{YaoS1SL10,RAL14,CXPollman14}
However, it should be noted that the RVB picture of spin liquid states is no longer valid for $SO(3)$ symmetric spin systems with spin magnitude $S>1$. In these large spin systems, spin-singlet-clusters (3-body-singlet, or 4-body-singlet, so on and so forth) as well as two-body singlet pairs are necessary to span the Hilbert space of many-body singlets. The RVB representation will be valid for integer spin-$S$ systems\footnote{For half-odd-integer spin-$S$ systems, the $SO(3)$ symmetry can be enlarged into $SP(2S+1)$, since a $SO(3)$ singlet formed by two spin-$S$ spins is also a $SP(2S+1)$ singlet. The $SP(2S+1)$ symmetry is beyond the scope of the present paper and will be discussed in our future work.} when they have enhanced symmetry group $SO(2S+1)$.\cite{TuSO(n)13}
Recently, systems with $SU(n)$ symmetry have attracted considerable interest in cold atom physics,\cite{ScienceSU(n)14, NatPhysSU(n)24, SU(n)_Hubbard04, SU(n)_coldatom10} and $SU(n)$ spin liquid states have also been studied theoretically.\cite{HermeleSU(n)CSL09, TuSU(n)CSL14} In this context, we also discuss the complete bases for $S=1$ spin liquids with $SU(3)$ symmetry. The generalization of this result to $SU(n)$ systems is straightforward.
The remainder of the paper is organized as follows. In section \ref{sec:SO3}, we prove that singlet pair states form overcomplete bases for $S=1$ spin liquids with $SO(3)$ symmetry. The special case in which the $S=1$ system has $SU(3)$ symmetry is discussed in section \ref{sec:SU3}. In section \ref{sec:S=1apply} we discuss a simple application of the $S=1$ RVB representation in 1D spin liquids. Section \ref{sec:sum} is devoted to conclusions and discussions.
\section{Overcomplete bases for $SO(3)$ symmetric $S=1$ spin liquids}\label{sec:SO3}
\subsection{Tensor Representation and Young Tableau}
Under $SO(3)$ operations, the three components of an $S=1$ spin transform as a real vector. The three basis states for $S=1$ can be combined into the familiar vector form
\begin{eqnarray*}
&&|x\rangle={1\over\sqrt2}(|-1\rangle-|1\rangle),\\
&&|y\rangle={i\over\sqrt2}(|-1\rangle+|1\rangle),\\
&&|z\rangle=|0\rangle.
\end{eqnarray*}
We denote these basis states as $V^m=|m\rangle$ with $m=x,y,z$. Thus the Hilbert space of a system with $L$ spins forms a rank-$L$ reducible {\it real} tensor representation of $SO(3)$.
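As a quick numerical sanity check (a sketch; the ordering $|1\rangle,|0\rangle,|{-1}\rangle$ of the $S_z$ basis is our own choice), one can verify that the basis change above is unitary, so that $\{|x\rangle,|y\rangle,|z\rangle\}$ is indeed an orthonormal basis:

```python
import numpy as np

# Basis change from the S_z eigenbasis (|1>, |0>, |-1>) to the real
# ("Cartesian") basis |x>, |y>, |z>.  Each row of U holds the expansion
# coefficients of one Cartesian basis state.
U = np.array([
    [-1 / np.sqrt(2), 0, 1 / np.sqrt(2)],   # |x> = (|-1> - |1>)/sqrt(2)
    [1j / np.sqrt(2), 0, 1j / np.sqrt(2)],  # |y> = i(|-1> + |1>)/sqrt(2)
    [0, 1, 0],                              # |z> = |0>
])

# The new basis is orthonormal iff U is unitary.
assert np.allclose(U @ U.conj().T, np.eye(3))
```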
Similar to the reduction of $SU(2)$ tensors (see Appendix \ref{app:SU2}), $SO(3)$ tensor representations can be reduced according to the different representations of the permutation group of the tensor indices (\textit{i.e.}, the site indices). Different representations of the permutation group can be labeled by different Young diagrams.\cite{ChenJQBook04} Compared to the $SU(2)$ group, a complication of the $SO(3)$ group (and more generally of $SO(n)$ groups) is that the same Young diagram may stand for several different $SO(3)$ representations. \footnote{For $SU(n)$ systems, each Young diagram corresponds to an $SU(n)$ irreducible representation. However, for $SO(n)$ tensors, the representation space corresponding to a Young diagram is usually reducible, since the trace of two indices is invariant under $SO(n)$. So the same Young diagram may stand for a direct sum of different irreducible representations.}
We will illustrate this issue by two simple cases: $L=2$ and $L=3$.
The direct product of two spins is represented by a real rank-2 tensor $T^{mn}=V_1^m\otimes V^n_2$. The reduction of this tensor contains three channels, $1\otimes1=0\oplus1\oplus2$. The representations with total spin $S_t=0,2$ are symmetric under permutation of the two site indices, while the representation with $S_t=1$ is antisymmetric. As shown in Fig.~\ref{fig: 2site}, the permutation symmetries can be labeled by Young diagrams:
\begin{figure}[htbp]
\centering
\includegraphics[width=1.1in]{2site.pdf}
\caption{Young diagrams for a 2-spin system. (a) The antisymmetric channel $[1;1]$, the total spin is $S_t=1$, which is called a dual vector; (b) The symmetric channel $[2]$, which contains the trace part $S_t=0$ and the traceless part $S_t=2$. } \label{fig: 2site}
\end{figure}
{\bf Antisymmetric channel} [Fig.~\ref{fig: 2site}(a), $S_t=1$]
\begin{eqnarray*}
T^{\{mn\}}={1\over2}(T^{mn}-T^{nm}).
\end{eqnarray*}
Since the antisymmetric tensor has only three independent components, it can be rewritten as a dual vector with one free index:
\[
\tilde V^l=\sum_{mn}\varepsilon^{lmn}T^{\{mn\}},
\]
where $\varepsilon^{lmn}$ is the Levi-Civita symbol. In other words, the Young diagram $[1;1]$ is dual to the Young diagram $[1]$.
{\bf Symmetric channel} [Fig.~\ref{fig: 2site}(b), $S_t=0, 2$]
1) trace of $T$ ($S_t=0$)
\[\mathrm{Tr~}T=\sum_{mn}\delta^{mn}T^{mn}=\sum_m T^{mm};\]
2) traceless symmetric tensor ($S_t=2$)
\[T_0^{[mn]}=T^{[mn]}-{1\over3}\delta^{mn}\mathrm{Tr~} T,\]
where $T^{[mn]}={1\over2}(T^{mn}+T^{nm})$ and the subscript 0 in $T_0^{[mn]}$ means traceless, {\it i.e.} $\sum_{m,n}\delta^{mn}T_0^{[mn]}=0$.
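The decomposition of a rank-2 tensor into these three channels, with dimensions $9=3+1+5$ matching $1\otimes1=0\oplus1\oplus2$, can be checked numerically (a minimal sketch with a randomly generated tensor):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))          # generic real rank-2 tensor T^{mn}

antisym = (T - T.T) / 2              # [1;1] channel, 3 components (S_t = 1)
sym = (T + T.T) / 2
trace = np.trace(T)                  # trace part, 1 component (S_t = 0)
traceless_sym = sym - np.eye(3) * trace / 3   # 5 components (S_t = 2)

# The three channels reconstruct T and exhaust 9 = 3 + 1 + 5 dimensions.
assert np.allclose(antisym + traceless_sym + np.eye(3) * trace / 3, T)
assert np.isclose(np.trace(traceless_sym), 0)
```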
\begin{figure}[htbp]
\centering
\includegraphics[width=1.8in]{3site.pdf}
\caption{Young diagrams for a 3-spin system. (a) The fully antisymmetric diagram $[1;1;1]$, $S_t=0$; (b) The mixed-symmetric diagram $[2;1]$, $S_t=1\oplus1\oplus2\oplus2$; (c) The fully symmetric diagram $[3]$, $S_t=1\oplus3$. } \label{fig: 3site}
\end{figure}
Similarly, the reduction of a rank-3 tensor can be labeled by the Young diagrams shown in Fig.~\ref{fig: 3site}. The fully antisymmetric diagram $[1;1;1]$ [see Fig.~\ref{fig: 3site}(a)] stands for a singlet with $S_t=0$; the mixed-symmetric diagram $[2;1]$ [see Fig.~\ref{fig: 3site}(b)] stands for a direct sum $S_t=1\oplus1\oplus2\oplus2$ (each irreducible representation is twofold degenerate because the Young diagram $[2;1]$ labels a 2-dimensional representation of the permutation group); the fully symmetric diagram $[3]$ [see Fig.~\ref{fig: 3site}(c)] represents a direct sum $S_t=1\oplus3$. The bases for the irreducible representations are given in Appendix \ref{app: 1*3}.
The above examples reveal three important features of $S=1$ systems:
(1) three spins (or a rank-3 tensor) can form a singlet (or a scalar) if the indices are fully antisymmetric;
(2) a dual vector and a vector ({\it i.e.} a two-row unit and a one-row unit) cannot contract into a singlet;
(3) a rank-odd fully symmetric tensor cannot form a singlet.
\begin{figure}[htbp]
\centering
\includegraphics[width=2.6in]{abcdYD.pdf}
\caption{The Young diagrams that contain singlet representations. (a) The fully symmetric Young diagram $[2n_1]$; (b) the diagram $[2n_2;2n_2]$, which is dual to a fully symmetric one and contains singlet components; (c) the three-row diagram $[n_3;n_3;n_3]$; (d) the most general Young diagram $[m_1;m_2;m_3]$. Supposing the number of sites is $L$ ($L$ can be either even or odd), a Young diagram contains a singlet only if three conditions hold: $m_1+m_2+m_3=L$; $(m_1-m_2)$=even; $(m_2-m_3)$=even. } \label{fig: 3YT}
\end{figure}
According to the above properties, not every Young diagram contains singlet representations. In the following we list the Young diagrams whose tensor reduction contains singlet channels:
(A) A single row with an even number of columns,
see Fig.~\ref{fig: 3YT}(a);
(B) Two equal-length rows, each with an even number of columns,
see Fig.~\ref{fig: 3YT}(b);
(C) Three equal-length rows,
see Fig.~\ref{fig: 3YT}(c).
(D) A combination of the above three, see Fig.~\ref{fig: 3YT}(d).
If a Young diagram contains singlet channels, it must belong to one of the above cases. {\it The number of independent singlets corresponding to each Young diagram is equal to the dimension of the representation of the permutation group labeled by the same Young diagram} (see Appendix \ref{app:SU2}).
\subsection{Overcompleteness of SPSs}
Now we prove that all singlets corresponding to the diagrams in Fig.~\ref{fig: 3YT} can be expanded as superpositions of SPSs.
\subsubsection{Two Formulas}
First we give the following property of the fully antisymmetric tensor $\varepsilon^{abc}$,
\begin{eqnarray}
\varepsilon^{abc}\varepsilon^{def}&=&\delta^{ad}(\delta^{be}\delta^{cf}-\delta^{bf}\delta^{ce})
-\delta^{ae}(\delta^{bd}\delta^{cf}-\delta^{bf}\delta^{cd})\nonumber\\
&&-\delta^{af}(\delta^{be}\delta^{cd}-\delta^{bd}\delta^{ce}),\label{delt3}
\end{eqnarray}
in particular, setting $d=a$ and summing over $a$, Eq.~(\ref{delt3}) reduces to
\begin{eqnarray}
\sum_a\varepsilon^{abc}\varepsilon^{aef}=\delta^{be}\delta^{cf}-\delta^{bf}\delta^{ce}.\label{delt2}
\end{eqnarray}
These two identities are the key equations in proving the completeness of SPSs.
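Both identities can be verified numerically by brute force over all index values (a simple check script, not part of the proof):

```python
import numpy as np
from itertools import product

# Levi-Civita symbol and Kronecker delta
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
d = np.eye(3)

# Check Eq. (delt3), with the paper's (a,b,c,d,e,f) -> (a,b,c,e,f,g).
for a, b, c, e, f, g in product(range(3), repeat=6):
    rhs = (d[a, e] * (d[b, f] * d[c, g] - d[b, g] * d[c, f])
           - d[a, f] * (d[b, e] * d[c, g] - d[b, g] * d[c, e])
           - d[a, g] * (d[b, f] * d[c, e] - d[b, e] * d[c, f]))
    assert eps[a, b, c] * eps[e, f, g] == rhs

# Check the contracted identity, Eq. (delt2).
lhs2 = np.einsum('abc,aef->bcef', eps, eps)
rhs2 = np.einsum('be,cf->bcef', d, d) - np.einsum('bf,ce->bcef', d, d)
assert np.allclose(lhs2, rhs2)
```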
\subsubsection{Simple Applications}
Now we illustrate two applications of these formulas:
\begin{figure}[htbp]
\centering
\includegraphics[width=1.2in]{4_6site.pdf}
\caption{The singlets described by the following Young diagrams can be expanded as superpositions of SPSs: (a) 4 spins with permutation symmetry channel described by the diagram $[2;2]$; (b) 6 spins with symmetry channel $[2;2;2]$. } \label{fig: 4_6site}
\end{figure}
(1) The Young diagram $[2;2]$ in Fig.~\ref{fig: 4_6site}(a) is dual to that of Fig.~\ref{fig: 2site}(b). This diagram contains singlet channels, meaning that two dual vectors can contract into a singlet. From Eq.~(\ref{delt2}), we have
\begin{eqnarray}\label{T22}
|0,0\rangle&=&\sum_{abcef}T_{[2;2]}^{bcef}\varepsilon^{abc}\varepsilon^{aef}\nonumber\\
&=&\sum_{bc}(T^{bcbc}-T^{bccb})
\end{eqnarray}
Here the indices of $T_{[2;2]}^{bcef}$ respect the symmetry described by the Young diagram $[2;2]$: $\{bc\}$ and $\{ef\}$ are antisymmetric, while $[be]$ and $[cf]$ are symmetric under permutation. The above equation means that the 4-spin singlet described by $[2;2]$ can be expanded as a superposition of SPSs, $|0,0\rangle=(13)(24)-(14)(23)$, where each bracket denotes a singlet pair.
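One can also check Eq.~(\ref{T22}) directly at the level of states. The sketch below uses our own conventions (in the real basis the spin-1 matrices can be taken as $(S^a)_{bc}=-i\varepsilon_{abc}$, and an unnormalized singlet pair is $\sum_m|mm\rangle$; either sign convention for $S^a$ satisfies the $su(2)$ algebra) and verifies that $(13)(24)-(14)(23)$ is annihilated by $\mathbf S_{\rm tot}^2$, i.e.\ is a total singlet:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
S = -1j * eps   # spin-1 matrices in the real basis: (S^a)_{bc} = -i eps_{abc}

def pair_state(pairs):
    """Product of unnormalized singlet pairs sum_m |mm> on 4 spin-1 sites."""
    psi = np.zeros((3,) * 4)
    for idx in np.ndindex(*(3,) * 4):
        amp = 1.0
        for i, j in pairs:
            amp *= (idx[i] == idx[j])
        psi[idx] = amp
    return psi.ravel()

# |0,0> = (13)(24) - (14)(23), with sites labeled 0..3
psi = pair_state([(0, 2), (1, 3)]) - pair_state([(0, 3), (1, 2)])

# Total-spin operator S_tot^2 on the 81-dimensional 4-site Hilbert space
S_tot = [sum(np.kron(np.kron(np.eye(3**site), S[a]), np.eye(3**(3 - site)))
             for site in range(4)) for a in range(3)]
S2 = sum(Sa @ Sa for Sa in S_tot)

assert np.allclose(S2 @ psi, 0)   # psi is a total singlet
```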
(2) The Young diagram $[2;2;2]$ in Fig.~\ref{fig: 4_6site}(b) contains a singlet component. Owing to relation (\ref{delt3}), we have
\begin{eqnarray}\label{T222}
|0,0\rangle&=&\sum_{abcdef}T_{[2;2;2]}^{abcdef}\varepsilon^{abc}\varepsilon^{def}\nonumber\\
&=&\sum_{abc}[(T^{abcabc}-T^{abcacb})-(T^{abcbac}-T^{abccab})\nonumber\\
&&-(T^{abccba}-T^{abcbca})].
\end{eqnarray}
This means that the 6-spin singlet described by $[2;2;2]$ can be written as a superposition of SPSs $|0,0\rangle=[(14)(25)(36)-(14)(26)(35)]-[(15)(24)(36)-(15)(26)(34)]-[(16)(25)(34)-(16)(24)(35)]$.
\subsubsection{Proof of Overcomplete Bases}
With these preliminaries, we are ready to prove the overcompleteness of SPSs. First, suppose that the number of sites is even, $L=2N$. Each SPS then contains $N$ singlet pairs, and the total number of SPSs equals ${(2N)!\over N!\,2^N}$. We now show that all of the Young diagrams listed in Fig.~\ref{fig: 3YT} can be expanded in terms of these SPSs.
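The counting ${(2N)!\over N!\,2^N}$ is just the number of perfect matchings of $2N$ objects, which can be confirmed by explicit enumeration for small $N$ (an illustrative check):

```python
from math import factorial

def pairings(items):
    """Enumerate all perfect matchings (pairings) of an even-size list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for matching in pairings(remaining):
            yield [(first, partner)] + matching

# Compare the enumeration with the closed formula (2N)!/(N! 2^N).
for N in range(1, 6):
    count = sum(1 for _ in pairings(list(range(2 * N))))
    assert count == factorial(2 * N) // (factorial(N) * 2**N)
```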
(A) The single-row Young diagram $[2N]$
The Hilbert space of $2N$ spins can be described by a rank-$2N$ real tensor $T^{m_1m_2...m_{2N}}$. The Young diagram $[2N]$ means these indices are fully symmetric under permutation, namely,
\[
T^{m_1m_2...m_{2N}}_{[2N]} ={1\over (2N)!}\sum_{ P}T^{P(m_1m_2...m_{2N})},\]
where $P$ means permutation of the $2N$ indices $(m_1,m_2,...,m_{2N})$. To obtain a singlet, all indices should be contracted two by two:
\[
|0,0\rangle= \sum_{\{m_i\}}\prod_{k=1}^N \delta^{m_{i_k}m_{j_k}}T^{m_1m_2...m_{2N}}_{[2N]},
\]
where the two spins at sites $i_k$ and $j_k$ form a singlet pair, and $|0,0\rangle$ is an equal weight superposition of all SPSs.
(B) The two-row Young diagram $[N;N]$ contains singlet representations if $N$ is even. Since $[N;N]$ is dual to $[N]$, we can treat the two spins in each column as a dual vector, and the argument in (A) still works for the dual vectors. We can then use relation (\ref{delt2}) to express the singlet formed by dual vectors in terms of SPSs [cf.\ Eq.~(\ref{T22})];
(C) The three-row Young diagram $[M;M;M]$ [this diagram occurs only if $L$ is divisible by 3, namely $L=3M$ ($M$ is even since $L$ is even); otherwise it can only appear as part of the Young diagram in Fig.~\ref{fig: 3YT}(d)]. The three spins in each column form a singlet. Similar to Eq.~(\ref{T222}), the product of three-spin singlets can be expressed as a superposition of SPSs using relation (\ref{delt3});
(D) A general Young diagram $[m_1;m_2;m_3]$ is a combination of the above three diagrams with $m_1+m_2+m_3=L$, $(m_1-m_2)=$even and $(m_2-m_3)=$even. If $L$=even, then $m_3$=even; otherwise $m_3$=odd. The three-row part $[m_3;m_3;m_3]$ is already a singlet. The two-row part $[(m_2-m_3);(m_2-m_3)]$ cannot contract with the one-row part $[m_1-m_2]$ owing to our previous argument [see Fig.~\ref{fig: 3site}(b)]. So if the whole diagram contains singlet channels, the two-row part and the one-row part must form singlets independently. Thus, all three parts of the combined diagram can be considered independently, and the arguments in (A), (B), (C) still apply. As a result, the singlet states described by a general diagram $[m_1;m_2;m_3]$ can be written as superpositions of SPSs.
Now we consider the case $L=$odd, where the system cannot be completely grouped into singlet pairs. However, if we arbitrarily single out three spins to form a three-spin singlet, the remaining spins can be completely combined into singlet pairs. We now show that any singlet of the system can indeed be expanded as a superposition of such configurations, in each of which three spins form a singlet and the remaining spins form singlet pairs. Since $L$=odd, cases (A) and (B) are not relevant. The Young diagram (C) is relevant if $L$ is divisible by 3, namely $L=3M$ with $M$ odd, and if a Young diagram of case (D) contains singlets, $m_3$ must be odd. In both cases, if we single out the first column (which corresponds to three arbitrary spins), the remaining part can be expanded in terms of SPSs according to the previous discussion. As a result, a singlet with an odd number of spins can be decomposed into a superposition of products of a three-spin singlet and singlet pairs.
\subsubsection{Checking the Completeness of the Young Diagrams}
We denote by $\mathcal H_0$ the Hilbert space spanned by the singlets of an $L$-spin system. It is not difficult to see that the dimension of $\mathcal H_0$ equals the difference between the number of states with $S_z=0$ and the number of states with $S_z=1$:
\begin{eqnarray}\label{dimH0}
d_{\mathcal H_0,L}=\sum_{i=0}^{\{{L\over2}\}} C_{L}^iC_{L-i}^i-\sum_{i=1}^{\{{L+1\over2}\}} C_{L}^iC_{L-i}^{i-1},
\end{eqnarray}
where $C_L^i={L!\over i!(L-i)!}$, and $\{{L\over2}\}$ is the integer part of $L/2$.
To see whether the bases described by these Young diagrams are exhaustive, we can compare (\ref{dimH0}) with the sum of the dimensions of all permitted Young diagrams (refer to Appendix \ref{app:SU2}) listed in (A)$\sim$(D). The consistency can be checked for small $L$. For example, when $L=7$, the above formula gives $d_{\mathcal H_0,7}=36$, and the two possible Young diagrams give $d_{[5;1;1]}+d_{[3;3;1]}=15+21=36$; when $L=8$, the above formula gives $d_{\mathcal H_0,8}=91$, consistent with the result of summing all possible Young diagrams, $d_{[4;2;2]}+d_{[4;4]}+d_{[6;2]}+d_{[8]}=56+14+20+1=91$.
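These dimension checks are easy to automate. The sketch below evaluates Eq.~(\ref{dimH0}) and, independently, reads the $S_z$ multiplicities off the generating polynomial $(1+x+x^2)^L$:

```python
from math import comb

def d_singlet(L):
    """Singlet-space dimension of L spin-1 sites, Eq. (dimH0):
    (# states with S_z = 0) - (# states with S_z = 1)."""
    n0 = sum(comb(L, i) * comb(L - i, i) for i in range(L // 2 + 1))
    n1 = sum(comb(L, i) * comb(L - i, i - 1)
             for i in range(1, (L + 1) // 2 + 1))
    return n0 - n1

def d_singlet_poly(L):
    """Independent check via the coefficients of (1 + x + x^2)^L."""
    coef = [1]
    for _ in range(L):
        new = [0] * (len(coef) + 2)
        for k, c in enumerate(coef):
            new[k] += c; new[k + 1] += c; new[k + 2] += c
        coef = new
    return coef[L] - coef[L + 1]   # S_z = 0 count minus S_z = 1 count

assert d_singlet(7) == d_singlet_poly(7) == 36
assert d_singlet(8) == d_singlet_poly(8) == 91
```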
Above we verified that the Young diagrams in Fig.~\ref{fig: 3YT} include all states of $\mathcal H_0$. We have also shown that the singlet states corresponding to every Young diagram in Fig.~\ref{fig: 3YT} can be decomposed into superpositions of products of 2-body singlets (and a 3-body singlet if $L$=odd). Combining these two points, we conclude that every singlet of the system can be written as a superposition of products of 2-body singlets (and a 3-body singlet if $L$=odd). This completes the proof that SPSs form overcomplete bases for the many-body singlets of $SO(3)$ symmetric $S=1$ systems.
\section{overcomplete bases for $SU(3)$ symmetric systems}\label{sec:SU3}
With particular interactions, $S=1$ models may have an enhanced $SU(3)$ symmetry. For example, the $J$-$K$ model
\begin{eqnarray}\label{JK}
H=\sum_{\langle i,j\rangle}[J\mathbf S_i\cdot\mathbf S_j + K(\mathbf S_i\cdot\mathbf S_j)^2]
\end{eqnarray}
with $J=K$ is invariant under $SU(3)$, and then $S=1$ carries the fundamental representation of the $SU(3)$ group. For such $SU(3)$ systems, a singlet unit contains at least three spins. We will show that if the ground state of a many-body system does not break the $SU(3)$ symmetry (if lattice symmetry is also unbroken, it is an $SU(3)$ spin liquid), then it can be expanded as a superposition of products of three-body singlet clusters, called singlet-cluster states (SCSs).
Similar to $SU(2)$ systems, an irreducible representation of $SU(3)$ can be uniquely labeled by a Young diagram (see Appendix \ref{app:SU2}). $SU(3)$ singlets are described by Young diagrams of three rows with equal numbers of columns [see Fig.~\ref{fig: 3YT}(c)]. That is, if the ground state is an $SU(3)$ singlet, the system size must be divisible by 3, say $L=3M$. The dimension of the Hilbert space of singlets equals the dimension of the $[M;M;M]$ representation of the permutation group, $d_{\mathcal H_0,3M}=d_{[M;M;M]}={2(3M)!\over (M+2)!(M+1)!M!}$ (see Appendix \ref{app:SU2}).
The total number of possible SCSs is ${(3M)!\over M!\,6^M}$. It is easy to see that these bases are overcomplete for $\mathcal H_0$. However, the number of bases can be significantly reduced. To see this, we arbitrarily divide the $3M$-site system into three subsystems, each containing $M$ sites. If we require that the three spins in each singlet cluster come from three different subsystems, the total number of subsystem-SCSs becomes $(M!)^2$. Following the proof of overcompleteness of subsystem-SPSs for $SU(2)$ systems (see Appendix \ref{app:SU2}), one can show that any $SU(3)$ singlet can be expanded as a superposition of these $(M!)^2$ subsystem-SCSs.
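The closed-form dimension $d_{[M;M;M]}={2(3M)!\over (M+2)!(M+1)!M!}$ can be cross-checked against a direct count of standard Young tableaux of shape $[M;M;M]$ (a small sketch; the recursion simply fills the tableau one box at a time):

```python
from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def syt(a, b, c):
    """Number of standard Young tableaux of shape [a;b;c] (a >= b >= c >= 0),
    i.e. the dimension of that representation of the permutation group."""
    if (a, b, c) == (0, 0, 0):
        return 1
    total = 0
    if a > b:
        total += syt(a - 1, b, c)   # remove a box from row 1
    if b > c:
        total += syt(a, b - 1, c)   # remove a box from row 2
    if c > 0:
        total += syt(a, b, c - 1)   # remove a box from row 3
    return total

for M in range(1, 6):
    closed = 2 * factorial(3 * M) // (
        factorial(M + 2) * factorial(M + 1) * factorial(M))
    assert syt(M, M, M) == closed          # d_[M;M;M] = 1, 5, 42, ...

# Total number of SCSs, (3M)!/(M! 6^M), for M = 2:
assert factorial(6) // (factorial(2) * 6**2) == 10
```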
\section{Application in Gutzwiller approach of excited states}\label{sec:S=1apply}
In this section, we apply the SPS bases to the excited states of the 1D Haldane phase [namely, model (\ref{JK}) with $-1<K/J<1,\ J>0$] in the Gutzwiller approach, and prove that the one-magnon and two-magnon excited states so obtained are orthogonal. Since a single magnon carries spin 1, the total spin of two magnons can be 0, 1 or 2. The orthogonality between a one-magnon state and a two-magnon state is obvious if they carry different spin angular momenta or lattice momenta. In the following we show that they remain orthogonal even when the two states carry the same quantum numbers.
We first briefly review the Gutzwiller approach for the Haldane phase.\cite{LiuZhouTuWenNg2012, LiuZhouNgExt14} The Gutzwiller approach for $S=1$ spin models is based on the fermion representation of spin-1 spins, where three species of fermions (called spinons) $f_x, f_y, f_z$ are introduced to rewrite the spin operators as
\[S^\alpha_i=\sum_{\beta,\gamma}i\varepsilon^{\alpha\beta\gamma}f_{\beta,i}^\dag f_{\gamma,i}, \ \ {\rm with\ } \alpha, \beta, \gamma = x,y,z\]
under an onsite particle number constraint
\begin{eqnarray}\label{N=1}
f_{x,i}^\dag f_{x,i}+f_{y,i}^\dag f_{y,i}+f_{z,i}^\dag f_{z,i}=1.
\end{eqnarray}
In this fermion representation, the spin model (\ref{JK}) is rewritten as $H=-\sum_{\alpha,\beta,\langle i,j\rangle}[Jf_{\alpha,i}^\dag f_{\alpha,j} f_{\beta,j}^\dag f_{\beta,i}+(J-K)f_{\alpha,i}^\dag f_{\alpha,j}^\dag f_{\beta,j}f_{\beta,i}]$, and its ground state and low-energy excited states can be approximately described by Gutzwiller projected eigenstates of the following mean-field Hamiltonian:
\begin{eqnarray}\label{Hmf}
H_{\rm mf} &=& \sum_{\alpha,\langle i,j\rangle} (\chi f_{\alpha,i}^\dag f_{\alpha,j}+\Delta f_{\alpha,i}^\dag f_{\alpha,j}^\dag + {\rm h.c}) +\sum_{\alpha,i} \lambda f_{\alpha,i}^\dag f_{\alpha,i}\nonumber\\
&=& \sum_{\alpha,k} E_k \Gamma_{\alpha,k}^\dag \Gamma_{\alpha,k},
\end{eqnarray}
where $\Gamma_{\alpha,k}$ are Bogoliubov particles and $\chi, \Delta, \lambda$ are variational parameters determined by minimizing the trial ground state energy $E_{\rm Grd} = \langle{\rm Grd}|H|{\rm Grd}\rangle/\langle{\rm Grd}|{\rm Grd}\rangle$ with $|{\rm Grd}\rangle =P_G |{\rm mf(\chi, \Delta, \lambda)}\rangle$. Here $P_G$ denotes the Gutzwiller projection that enforces the constraint (\ref{N=1}). When $|\lambda|<2|\chi|$ and $\Delta\neq0$, the above mean-field model describes a topological superconductor and the Gutzwiller projected ground state belongs to the Haldane phase.
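For illustration, the quasiparticle spectrum of one flavor of (\ref{Hmf}) can be sketched numerically. The Bogoliubov-de Gennes form below assumes uniform real $\chi$ and $\Delta$ on a 1D chain, so that $\xi_k=2\chi\cos k+\lambda$ and the antisymmetrized pairing amplitude is $2\Delta\sin k$ (one standard convention; the parameter values are illustrative, not fitted):

```python
import numpy as np

chi, delta, lam = 1.0, 0.4, 0.6        # illustrative values with |lam| < 2|chi|
ks = 2 * np.pi * np.arange(200) / 200  # momentum grid (pbc)

def bdg_spectrum(k):
    xi = 2 * chi * np.cos(k) + lam     # hopping + chemical-potential part
    dk = 2 * delta * np.sin(k)         # antisymmetrized pairing amplitude
    h = np.array([[xi, 1j * dk], [-1j * dk, -xi]])   # 2x2 BdG matrix
    return np.linalg.eigvalsh(h)       # eigenvalues -E_k, +E_k

energies = np.array([bdg_spectrum(k)[1] for k in ks])  # upper branch E_k

assert np.isclose(2 * delta * np.sin(0.0), 0.0)  # pairing vanishes at k = 0
assert energies.min() > 0   # spectrum is gapped for |lam| < 2|chi|, delta != 0
```

Consistent with the text, the pairing amplitude vanishes at $k=0$, and the spectrum is fully gapped in the topological regime $|\lambda|<2|\chi|$, $\Delta\neq0$.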
A subtle property of a topological superconductor is that the fermion parity of its ground state depends on the boundary condition.\cite{Kitaev2001, LiuZhouTuWenNg2012} Without loss of generality, we assume that the length $L$ of the chain is even; then the fermion parity of the ground state is even under the anti-periodic boundary condition and odd under the periodic boundary condition. As a consequence, we need to choose boundary conditions carefully when constructing the low-energy states of the Haldane phase. For example, the ground state of the Haldane phase is given by
\[
|{\rm Grd}\rangle =P_G |{\rm mf}\rangle_{\rm apbc},
\]
where $|{\rm mf}\rangle_{\rm apbc}$ is the ground state of (\ref{Hmf}) under anti-periodic boundary condition.
To obtain one-magnon excited states, we should choose the periodic boundary condition,
\[
| (\alpha,k+\pi)_{\rm 1-mag}\rangle =P_G \Gamma_{\alpha,k}^\dag |{\rm mf}\rangle_{\rm pbc},
\]
where $k={2n\pi\over L},\ n=-{L\over2},-{L\over2}+1,...,{L\over2}-1$, and the extra momentum $\pi$ is owing to the change of boundary condition. Since the pairing term vanishes at momentum $k=0$, the three fermions $f^x_{k=0}, f^y_{k=0}, f^z_{k=0}$ are unpaired and form a singlet. Except for these three spinons and the excited magnon $\Gamma_{\alpha,k}$ (for $k\neq0$ the magnon is essentially a broken Cooper pair with one spinon removed), the remaining spinons form Cooper pairs. Since there is a three-body singlet (except when the momentum of the one-magnon excited state is $\pi$), the Young diagrams describing the one-magnon states have the shape shown in Fig.~\ref{fig: 3YT}(d) with $m_3=$odd, $m_2-m_3=$even, $m_1-m_2=$odd.
For two-magnon excited states, we should use the anti-periodic boundary condition. The state with total spin 1 is given by
\[
|(\alpha,k_1+k_2)_{\rm 2-mag} \rangle =P_G \sum_{\beta,\gamma}\varepsilon^{\alpha\beta\gamma}\Gamma_{\beta,k_1}^\dag \Gamma_{\gamma,k_2}^\dag |{\rm mf}\rangle_{\rm apbc},
\]
where $k_1,k_2 = {2n\pi\over L }+{\pi\over L},\ n=-{L\over2},...,{L\over2}-1$. In this case, all of the spinons form Cooper pairs except the two antisymmetrized magnons $\Gamma_{\beta,k_1}$ and $\Gamma_{\gamma,k_2}$. The Young diagrams describing the two-magnon states have the shape shown in Fig.~\ref{fig: 3YT}(d) with $m_3=$even, $m_2-m_3=$odd, $m_1-m_2=$even.
Since a one-magnon state and a two-magnon state are described by different Young diagrams, they must be orthogonal to each other. This result has been verified numerically. More generally, an odd-magnon excited state and an even-magnon state are orthogonal.
Although the one-magnon state $|(\alpha,k)_{\rm 1-mag}\rangle$ and the two-magnon state $|(\alpha,k)_{\rm 2-mag}\rangle$ are orthogonal, they are not eigenstates of the spin Hamiltonian $H$, and the off-diagonal matrix elements $\langle(\alpha,k)_{\rm 1-mag}|H|(\alpha,k)_{\rm 2-mag} \rangle$ are usually nonzero. These off-diagonal entries `mix' the two-magnon states with the one-magnon state. The mixing becomes significant when the diagonal terms are equal, $\langle (\alpha,k)_{\rm 1-mag}|H|(\alpha,k)_{\rm 1-mag}\rangle=\langle(\alpha,k)_{\rm 2-mag}|H|(\alpha,k)_{\rm 2-mag}\rangle$; in this case the one-magnon excitation is unstable and decays into two magnons. \cite{Magnondecay12, LiuZhouNgExt14} For this reason, the one-magnon excitations, which are well defined around the momentum $k=\pi$ (since there is a finite gap between the one-magnon state and the two-magnon state with the same momentum), become unstable in the vicinity of $k=0$, where the single-magnon dispersion merges into the two-magnon continuum.
\section{conclusion and discussion}\label{sec:sum}
In summary, we have shown that for an $S=1$ system with $L$ spins, every singlet of the system can be written as a superposition of singlet-pair states if $L=$even, or as a superposition of products of 2-body singlet pairs times a 3-body singlet if $L$=odd. We have also proved that if the system has $SU(3)$ symmetry, the products of 3-body singlet states form overcomplete bases for the many-body $SU(3)$ singlets. Our conclusions provide a solid foundation for generalizing the methods used for $S=1/2$ resonating valence bond states to $S=1$ systems. As a simple application, we showed that in the Gutzwiller approach the one-magnon and two-magnon excited states are orthogonal even if they carry the same quantum numbers.
Our conclusion for $SO(3)$ symmetric spin-1 systems can be straightforwardly generalized to $SO(2n+1)$ symmetric $S=n$ systems with integer $n$: an $SO(2n+1)$ singlet can be written as a superposition of singlet-pair states if $L=$even, or as a superposition of 2-body singlet pairs times a $(2n+1)$-body singlet if $L$=odd. This conclusion can also be generalized to $SO(2n)$ systems, where an $SO(2n)$ singlet contains an even number of objects, and the overcomplete bases include all of the following states: (1) products of 2-body singlet pairs; (2) products of a $2n$-body singlet times 2-body singlet pairs. However, the $SO(2n)$ symmetry CANNOT emerge in an $SO(3)$ symmetric spin-$(n-{1\over2})$ system, since an $SO(2n)$ singlet need not be invariant under $SO(3)$. For example, the 2-body $SO(2n)$ singlet (which is symmetric under exchange of the two objects) differs from the $SO(3)$ singlet formed by two spins (which is antisymmetric under exchange of the spins); that is, the 2-body $SO(2n)$ singlet is NOT invariant under $SO(3)$ spin rotations. This means that $SO(2n)$ systems are very different from the usual spin systems. Finally, our conclusion for $SU(3)$ systems can be generalized to $SU(n)$ systems if the physical degrees of freedom carry the fundamental representation of $SU(n)$ [notice that the $SO(3)$ symmetry of a spin-$({n-1\over2})$ system can be enlarged to $SU(n)$].
The author thanks Hong-Hao Tu for encouraging him to write this article and for valuable discussions and comments on the manuscript. We also thank Zhong-Qi Ma, Fa Wang, Xiong-Jun Liu and Jason Ho for helpful discussions, and Tai-Kai Ng and Yi Zhou for previous collaborations. This work was initiated at the IAS of HKUST during a program in 2013. We acknowledge support from NSFC 11204149 and the Tsinghua University Initiative Scientific Research Program.
\section{Introduction}
Symmetry breaking is a ubiquitous feature of the low-temperature behavior in condensed matter physics. Solids and N\'eel antiferromagnets are phases that break some essential symmetries of the physical laws:
translational symmetry or rotational spin symmetry. Understanding the nature of the broken symmetries, discrete or continuous, allows one to understand the nature of the elementary excitations and to predict the
low-energy behavior of the materials (Goldstone modes, Mermin-Wagner theorem, topological defects, ...). In some phases the symmetry content may at first glance be hidden, as for example in the helium liquids.
The first obvious feature is the absence of translational symmetry breaking and the absence of a solid phase at zero temperature. It was understood very early (F.~London) that this absence of solidification
is due to the many-body quantum dynamics, and the helium phases have been named quantum liquids, to be contrasted with the more ``classical'' liquids. It was only decades after the discovery of $^{4}${He}
superfluidity that the nature of the order parameter was unveiled. The understanding of the $^{3}${He} order parameter has also relied heavily on group-symmetry considerations.
A parallel can be drawn between this distinction of quantum liquids versus classical solids and that of spin liquids (SL's) versus N\'eel ordered phases. N\'eel ordered phases break at least the translational symmetry of the
lattice and the rotational symmetry of the spins. They can be described by a local order parameter and a Landau theory, whereas SL's break neither lattice symmetries nor spin rotation symmetry
and cannot be described by a \textit{local} order parameter. Similarly to $^{4}${He}, SL's can be characterized by an internal hidden, more or less complex, order.
In this paper we are mainly concerned with topological SL's. These SL's are characterized at $T=0$ by exponentially decaying correlations of all local observables (spin, dimer or spin-nematic operators) and a spin gap to
bulk excitations.
They contrast with critical SL's, which have algebraic correlations and gapless excitations. It was understood very early\cite{Read1989a,Kivelson1990} that the elementary excitations of these resonating valence bond (RVB) SL's
carry spin-$\frac{1}{2}$, in contrast to the spin-1 magnons of the N\'eel antiferromagnets. These emergent excitations are called spinons. A natural framework to describe the SL physics is the use of effective theories with the
fractional particles as elementary building blocks (parton construction). Going from the original spins to these fractionalized spinons implies the introduction of gauge fields in which the spinons are deconfined (SL) or confined (N\'eel order).
At first glance these approaches introduce, via the gauge fields, a considerable (infinite) number of degrees of freedom.
In fact the number of possible distinct SL's is limited by the requirement that their physical observables break no lattice or spin symmetry, and the enumeration of the different classes of distinct SL's can
be carried out through group-theory analysis.
This was understood ten years ago by X.-G.~Wen, who developed a classification of symmetric SL's using the projective symmetry group (PSG) technique.\cite{Wen_PSG} Wen's analysis for fermionic spinons on the square lattice was extended by
Wang and Vishwanath to bosonic spinons.\cite{PSG} In these works, the definition of a SL is limited to spin systems that do not break any symmetry: neither $SU(2)$ spin symmetry, nor lattice symmetries, nor time-reversal symmetry.
These SL's have been dubbed by Wen \textit{symmetric} SL's. This definition excludes \textit{chiral} SL's, which break time-reversal symmetry (and some minimal amount of lattice symmetry) but which do not break $SU(2)$ and do not have
long-range order in spin-spin or dimer-dimer correlations.
In the wake of Laughlin's theory of the FQHE, chiral SL's were very popular at the end of the eighties (\textcite{Kalmeyer_Laughlin,Kalmeyer_89,Wen_Wilczek,Yang1993}), but, in the absence of indisputable candidates, this option has nearly disappeared from discussions in the last decade.
Non-planar structures are quite ubiquitous in classical frustrated magnetism\cite{Regular_order}
and are associated with a scalar chirality, $\vec S_1\cdot(\vec S_2 \times \vec S_3)\ne 0$.
In some cases where the ground state is non-planar, this chirality can persist at finite temperature\cite{Momoi_classique,KagomeDomenge} although the magnetic order itself is absent for $T>0$ (Mermin-Wagner).
A similar phenomenon may take place in quantum systems at $T=0$.
There, the usual scenario is that of a gradual reduction of the N\'eel order parameter when the strength of the quantum fluctuations is increased.
At some point the sublattice magnetization vanishes and the $SU(2)$ symmetry is restored (leading to a SL).
Now, if the ordered magnetic structure is {\it chiral}, the time-reversal symmetry ${\cal T}$ may
still be broken at the point where the magnetic order disappears, hence leading to a time-reversal symmetry breaking (TRSB) SL.\footnote{We will see in the following that in chiral phases, the fluxes can evolve continuously with the increase of quantum fluctuations, leading eventually to a non chiral SL phase. But this quantum phase transition has no reason to be concomitant with the opening of a (spinon) gap and the appearance of a SL, as can be seen in Ref.~\onlinecite{cuboc1}.}
Some TRSB SL have indeed been recently proposed on the kagome lattice\cite{cuboc1,Faak2012} and there are probably other examples.\cite{wen2010,chu2011}
The goal of this paper is to revisit the PSG analysis by relaxing the time-reversal symmetry constraint in order to include chiral SL's.
The framework used here is the Schwinger-boson mean-field theory (SBMFT).\footnote{SBMFT also coincides with large-$N$ limit of an $Sp(N)$ generalization of the $SU(2)$ model.\cite{ReadSachdev_SpN,Auerbach}}
But, as for the symmetric PSG, the symmetry considerations used here should also be valid for classifying SL's in the presence of moderate fluctuations beyond mean field.
The paper is organized as follow.
Sections \ref{sec:recalls} and \ref{sec:SL} are reviews, to keep this article self-contained.
Section ~\ref{sec:recalls} is a description of SBMFT to fix the notations and precise the present understanding of this approach. Sec.~\ref{sec:SL} starts by recalling the gauge invariance of SBMFT and then describes how
the PSG is used to enforce the SL's symmetries on mean-field theories.
In Sec.~\ref{sec:LRO_CSL}, the concept of PSG is extended to include all chiral SL's.
In Sec.~\ref{sec:ansatze_tri} all the chiral and non chiral SL theories with explicit nearest neighbor gauge fields on the triangular lattice are derived. As an example of application we propose a chiral SL as
the ground state of a ring-exchange model on the triangular lattice.
The physical meaning of the fluxes and their expressions in terms of spin operators is developed in Sec.~\ref{sec:fluxes}, as well as the
question of topological loops on finite size samples.
Sec.~\ref{sec:ccl} is the conclusion.
Appendices contain proofs of some statements in the main text, technical details and further applications to the square and kagome lattices.
\tableofcontents
\section{Schwinger boson mean-field theory (SBMFT)}
\label{sec:recalls}
We consider a spin Hamiltonian $\widehat H_0(\{\widehat{\mathbf S}_i\}_{i=1\dots N_s})$ on a periodic lattice with $N_s$ spins, each of length $S$. $\widehat H_0$ can contain Heisenberg interaction or more complicated terms
such as cyclic exchange, all invariant under global spin rotations ($SU(2)$ symmetry) and by time-reversal transformation $\cal T$
($\widehat H_0(\{\widehat{\mathbf S}_i\})=\widehat H_0(\{-\widehat{\mathbf S}_i\})$).
We insist on these symmetries since they are the basis of our construction.
Finding the ground state (GS) of a quantum spin problem is a notoriously difficult task, and
the SBMFT provides an approximate way to treat it. This approach can be summarized by the following steps:
i) The spin operators (hence the Hamiltonian) are expressed using Schwinger bosons.
ii) A suitable rotationally-invariant mean-field decoupling leads to a quadratic Hamiltonian $H_{\rm MF}$.
iii) $H_{\rm MF}$ is diagonalized using a Bogoliubov transformation and solved self-consistently.
\subsection{Bosonic operators and bond operators}
Let $m$ be the number of sites per unit cell of the lattice, and $N_m$ the number of unit cells, so that $N_s=N_m m$ is the total number of sites.
We define the two bosonic operators $\widehat b_{i\sigma}^\dag$ that create a spin $\sigma=\pm1/2$ (or $\sigma=\uparrow$ or $\downarrow$) on site $i$.
The spin operators read:
\begin{subeqnarray}
\label{eq:def_S}
\widehat S_i^z=\sum_\sigma\sigma\widehat b_{i\sigma}^\dag\widehat b_{i\sigma}, \\
\widehat S_i^+=\widehat b_{i\uparrow}^\dag\widehat b_{i\downarrow}, \\
\widehat S_i^-=\widehat b_{i\downarrow}^\dag\widehat b_{i\uparrow}.
\end{subeqnarray}
The Hamiltonian is thus a polynomial in bosonic operators containing only even-degree terms.
These definitions imply that the commutation relations
$[\widehat S^\alpha_i,\widehat S^\beta_i]=i\epsilon^{\alpha\beta\delta}\widehat S^\delta_i$
are satisfied. As for the total spin, it reads
$
\vec{\widehat S_i}^2=\frac{\widehat n_i}{2}\left(\frac{\widehat n_i}{2}+1\right)
$,
where $\widehat n_i=\widehat b^\dagger_{i\uparrow}\widehat b_{i\uparrow}
+\widehat b^\dagger_{i\downarrow}\widehat b_{i\downarrow}$ is the total number of bosons at site $i$.
To fix the ``length'' of the spins, the following constraint must therefore be imposed on physical states:
\begin{equation}
\widehat n_i=\sum_\sigma\widehat b_{i\sigma}^\dag\widehat b_{i\sigma}=2S.
\label{eq:constraint}
\end{equation}
In traditional MF theories, the MF parameter is the order parameter (for example the magnetization $\langle \widehat{\mathbf S}_i\rangle$) and the MF Hamiltonian consequently breaks
the symmetries of the initial Hamiltonian, except in the high-temperature phase where the MF parameter is zero.
Here, we would like to describe SL's that do not break any symmetry.
Thus we are going to express $\widehat H_0$ using quadratic bosonic operators that are invariant under global spin rotations.
The expectation values of these operators will then be used as mean-field parameters, ensuring that the MF Hamiltonian respects the rotational invariance.
Only linear combinations of the two following operators and of their hermitian conjugates have this property:
\begin{subeqnarray}
\widehat A_{ij}&=&\frac12(\widehat b_{i\uparrow}\widehat b_{j\downarrow}-\widehat b_{i\downarrow}\widehat b_{j\uparrow}),\\
\widehat B_{ij}&=&\frac12(\widehat b^\dag_{i\uparrow}\widehat b_{j\uparrow}+\widehat b^\dag_{i\downarrow}\widehat b_{j\downarrow}).
\end{subeqnarray}
$i$ and $j$ are lattice sites and these operators are thus bond operators.
They are linked by the relation
\begin{equation}
:\widehat B_{ij}^\dag \widehat B_{ij}:+\widehat A_{ij}^\dag \widehat A_{ij}=\frac14\widehat n_i(\widehat n_j -\delta_{ij})
\label{eq:relation_AB}
\end{equation}
where $:.:$ means normal ordering.
Any Hamiltonian invariant by global spin rotation can be expressed in terms of these operators only.
For example, a Heisenberg term $\widehat{\mathbf S}_i\cdot\widehat{\mathbf S}_j$ where $i\neq j$ can be decoupled as
\begin{subeqnarray}
\label{eq:SiSj}
\widehat{\mathbf S}_i\cdot\widehat{\mathbf S}_j
&=&:\widehat B_{ij}^\dag \widehat B_{ij}:-\widehat A_{ij}^\dag \widehat A_{ij},\slabel{eq:SiSj_a}
\slabel{eq:decoupling_AB}\\
&=&2:\widehat B_{ij}^\dag \widehat B_{ij}:-S^2,
\slabel{eq:decoupling_B}\\
&=&S^2-2\widehat A_{ij}^\dag \widehat A_{ij}.
\slabel{eq:decoupling_A}
\end{subeqnarray}
where the first line holds whatever the boson number, whereas the last two lines use Eq.~\ref{eq:relation_AB} and suppose that the constraint of Eq.~\ref{eq:constraint} is strictly respected.
To make clear the physical significance of these two bond operators in the case $S=\frac{1}{2}$, we write them
in terms of projection operators $\widehat P_s$ on the singlet state and $\widehat P_t$ on the triplet states:
\begin{subeqnarray}
\label{eq:projector}
\widehat A_{ij}^\dag \widehat A_{ij}&=&\frac12 \widehat P_s\\
:\widehat B_{ij}^\dag \widehat B_{ij}:&=&\frac14 (\widehat P_t-\widehat P_s).
\end{subeqnarray}
We see in Eq.~\ref{eq:projector}, that $:\widehat B_{ij}^\dag \widehat B_{ij}:$ represents a ferromagnetic contribution to Eq.~\ref{eq:decoupling_AB}, whereas $\widehat A_{ij}^\dag \widehat A_{ij}$ gives the singlet contribution.
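The bond-operator identities above lend themselves to the same kind of numerical check. The following sketch (an illustration with our own helper names, not the authors' code) builds $\widehat A_{ij}$, $:\widehat B_{ij}^\dag \widehat B_{ij}:$ and the spin operators for two sites, and verifies Eqs.~\ref{eq:relation_AB}, \ref{eq:decoupling_AB} and \ref{eq:decoupling_A}:

```python
import numpy as np
from functools import reduce

def destroy(n):
    """Truncated single-mode boson annihilation operator."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

nmax = 4
a, I = destroy(nmax), np.eye(nmax)
dag = lambda M: M.conj().T

def mode(op, k):
    """Embed a one-mode operator at slot k of (i_up, i_dn, j_up, j_dn)."""
    ops = [I] * 4
    ops[k] = op
    return reduce(np.kron, ops)

bi = [mode(a, 0), mode(a, 1)]    # site i: up, down
bj = [mode(a, 2), mode(a, 3)]    # site j: up, down

A = 0.5 * (bi[0] @ bj[1] - bi[1] @ bj[0])
# Normal-ordered :B^dag B: with all creation operators to the left
BdB = 0.25 * sum(dag(bj[s]) @ dag(bi[t]) @ bi[s] @ bj[t]
                 for s in (0, 1) for t in (0, 1))

ni = sum(dag(b) @ b for b in bi)
nj = sum(dag(b) @ b for b in bj)
# Projector on states where no intermediate raise can overflow the truncation
occ = [np.diag(mode(a.T @ a, k)) for k in range(4)]
Pm = np.diag(np.all([o <= nmax - 2 for o in occ], axis=0).astype(float))

def spin(b):
    Sz = 0.5 * (dag(b[0]) @ b[0] - dag(b[1]) @ b[1])
    Sp = dag(b[0]) @ b[1]
    return [0.5 * (Sp + dag(Sp)), -0.5j * (Sp - dag(Sp)), Sz]

SiSj = sum(si @ sj for si, sj in zip(spin(bi), spin(bj)))

print(np.allclose(Pm @ (BdB + dag(A) @ A - 0.25 * ni @ nj) @ Pm, 0))  # Eq. (relation_AB)
print(np.allclose(Pm @ (SiSj - (BdB - dag(A) @ A)) @ Pm, 0))          # Eq. (decoupling_AB)
# On the physical S=1/2 sector (one boson per site): Eq. (decoupling_A) with S^2 = 1/4
Q = np.diag(((np.diag(ni) == 1) & (np.diag(nj) == 1)).astype(float))
print(np.allclose(Q @ (SiSj - (0.25 * np.eye(nmax**4) - 2 * dag(A) @ A)) @ Q, 0))
```

All three checks print True: the first two identities are operator identities valid for any boson number, while the last requires the strict constraint $\widehat n_i=2S=1$.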
\subsection{The mean-field approximation}
We now need two successive approximations to obtain a quadratic and solvable Hamiltonian.
We first relax the constraint on the boson number by imposing it only on {\it average}:
\begin{equation}
\langle \widehat n_i\rangle =\kappa.
\label{eq:constraint_mean}
\end{equation}
where $\kappa$ does not need to be an integer.
To implement this constraint, a Lagrange multiplier (or chemical potential) $\lambda_i$ is introduced at each site $i$ and the term $\sum_i \lambda_i(\kappa-\widehat n_i)$ is added to the Hamiltonian.
$\kappa$ can be continuously varied to interpolate between the classical limit ($\kappa=\infty$) and the extreme quantum limit ($\kappa\to0$).
One should keep in mind that fixing $\kappa=2S$ to study a spin-$S$ model is not necessarily the best choice, as in the SBMFT
$\langle \widehat{\mathbf S}_{i}^2\rangle = \frac38 \kappa(\kappa+2)$.\cite{Auerbach}
An alternative choice could be to fix $\kappa$ in such a way that the spin fluctuations and not the spin length have the correct value.\footnote{The
SBMFT is not meant to provide an accurate quantitative agreement with $SU(2)$ models, nor to replace controlled numerics.
SBMFT should instead be viewed as a tool to identify possible phases in competition, their instabilities, to provide indicative phase diagrams, and to identify the important degrees of freedom that
need to be incorporated in a description going beyond mean field.}
In a second step, bond-operator fluctuations are neglected and a MF Hamiltonian $\widehat H_{\textrm {MF}}$ that is linear in bond operators is obtained.
For instance:
\begin{equation}
:\widehat B_{ij}^\dag \widehat B_{ij}:\,\simeq \langle \widehat B_{ij}^\dag\rangle \widehat B_{ij}+ \widehat B_{ij}^\dag\langle \widehat B_{ij}\rangle -|\langle \widehat B_{ij}\rangle|^2.
\end{equation}
We replace $\langle \widehat A_{ij}\rangle$ and $\langle \widehat B_{ij}\rangle$ by complex bond parameters $\mathcal A_{ij}$ and $\mathcal B_{ij}$.
This MF approximation can be seen as the leading term of a large-$N$ expansion of an $Sp(N)$ theory.\cite{ReadSachdev_SpN}
The steps are explained in detail in Ref.~\onlinecite{Auerbach} in the very similar case of an $SU(N)$ theory.
This zeroth-order $1/N$ expansion can be pursued to first order.\cite{Trumper_fluct}
The MF Hamiltonian is now a quadratic bosonic operator.
It can be written in terms of a $2N_s\times 2N_s$ complex matrix $M$ and of a real number $\epsilon_0$ depending on the $\mathcal A_{ij}$ and $\mathcal B_{ij}$ and on the Lagrange multipliers $\lambda_i$:
\begin{equation}
\label{eq:HMF}
\widehat H_{\textrm {MF}}=\phi^\dag M \phi+\epsilon_0.
\end{equation}
where $\phi^\dag=(\widehat b_{1\uparrow}^\dag,\widehat b^\dag_{2\uparrow},\dots,\widehat b^\dag_{N_s\uparrow},\widehat b_{1\downarrow},\dots,\widehat b_{N_s\downarrow})$.\footnote{The most general quadratic
Hamiltonian would require the use of a larger vector
$(\widehat b_{1\uparrow}^\dag,\dots,\widehat b^\dag_{N_s\uparrow},\widehat b_{1\downarrow}^\dag,\dots,\widehat b^\dag_{N_s\downarrow},\widehat b_{1\uparrow},\dots,\widehat b_{N_s\downarrow})$
but the rotational invariance implies conservation of $\hat S^z_{\rm tot}=\sum_i \widehat b^\dagger_{i\uparrow}\widehat b_{i\uparrow}
+\widehat b^\dagger_{i\downarrow}\widehat b_{i\downarrow}$ and the allowed quadratic terms are therefore limited to
$\widehat b_{i\uparrow}^\dag\widehat b_{j\uparrow}$, $\widehat b_{i\downarrow}\widehat b_{j\downarrow}^\dag$, $\widehat b_{i\uparrow}^\dag\widehat b_{j\downarrow}^\dag$ and $\widehat b_{i\downarrow}\widehat b_{j\uparrow}$.}
The expressions for $M$ and $\epsilon_0$ depend on $\widehat H_0$ and on the chosen decoupling (for example using Eq.~\ref{eq:decoupling_AB}, \ref{eq:decoupling_B} or \ref{eq:decoupling_A}).
The set of mean-field parameters $\{\mathcal A_{ij},\mathcal B_{ij}\}$ appearing in $H_\textrm {MF}$ is called an \textbf{Ansatz}.
Up to an equivalence relation that will be described in the next section, an Ansatz defines a specific phase (ground state and excitations).
Depending on the value of $\kappa$, this state either has N\'eel long-range order,
or the bosons are gapped (several types of SL are then possible).
In the following we will explain and exploit the relation which exists between regular classical magnetic orders\cite{Regular_order} and SL's.
To enforce self-consistency, the following conditions should be obeyed:
\begin{equation}
\mathcal A_{ij}=\langle \widehat A_{ij}\rangle \,\textrm{ and }\, \mathcal B_{ij}=\langle \widehat B_{ij}\rangle,
\end{equation}
which are equivalent to
\begin{equation}
\label{eq:self_cons2}
\frac{\partial F_{MF}}{\partial \mathcal A_{ij}}=0 \,\textrm{ and }\, \frac{\partial F_{MF}}{\partial \mathcal B_{ij}}=0,
\end{equation}
where $F_{MF}$ is the MF free energy, together with the constraint
\begin{equation}
\label{eq:constraint3}
\langle \widehat n_{i}\rangle=\kappa\,\Leftrightarrow\,\frac{\partial F_{MF}}{\partial \lambda_{i}}=0.
\end{equation}
The next step is to calculate the mean values of the operators $\widehat A_{ij}$ and $\widehat B_{ij}$ either in the GS of $\widehat H_{\textrm {MF}}$ if the temperature is zero,
or in the equilibrium state for non-zero temperatures.
In both cases one needs to use a Bogoliubov transformation to diagonalize $H_{\rm MF}$.
As this transformation is often explained in the simple case of $2\times2$ matrices (or for particular sparse matrices),
we explain the algorithm in a completely general case in App.~\ref{app:bogoliubov}.
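For readers who prefer code to formulas, the standard para-unitary (Colpa-type) algorithm can be sketched as follows. This is a generic illustration on a random positive-definite matrix, with the simplified convention $\phi=(b_1\dots b_n,b_1^\dag\dots b_n^\dag)$; the general treatment, whose details may differ, is the one given in App.~\ref{app:bogoliubov}:

```python
import numpy as np

def bogoliubov(M, n):
    """Para-unitary diagonalization of H = phi^dag M phi, with
    phi = (b_1..b_n, b_1^dag..b_n^dag) and M Hermitian positive definite.
    Returns (energies, T) with T^dag M T diagonal and T^dag eta T = eta,
    so that the transformed operators obey bosonic commutation relations."""
    eta = np.diag(np.r_[np.ones(n), -np.ones(n)])
    K = np.linalg.cholesky(M).conj().T        # M = K^dag K (K upper triangular)
    w, U = np.linalg.eigh(K @ eta @ K.conj().T)
    order = np.argsort(w)[::-1]               # n positive eigenvalues first
    w, U = w[order], U[:, order]
    E = eta @ np.diag(w)                      # positive diagonal matrix
    T = np.linalg.solve(K, U) @ np.sqrt(E)    # K^{-1} U E^{1/2}
    return np.diag(E), T

rng = np.random.default_rng(0)
n = 3
X = rng.normal(size=(2*n, 2*n)) + 1j * rng.normal(size=(2*n, 2*n))
M = X.conj().T @ X + 0.5 * np.eye(2*n)        # a generic positive-definite example
E, T = bogoliubov(M, n)
eta = np.diag(np.r_[np.ones(n), -np.ones(n)])
print(np.allclose(T.conj().T @ M @ T, np.diag(E)))   # True: diagonalized
print(np.allclose(T.conj().T @ eta @ T, eta))        # True: commutations preserved
```

When $M$ ceases to be positive definite the Cholesky step fails, which is the numerical signature of the instability towards condensation discussed below.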
\subsection{Choice of bond fields: \texorpdfstring{$\widehat A_{ij}$}{} and \texorpdfstring{$\widehat B_{ij}$}{} or \texorpdfstring{$\widehat A_{ij}$}{} or \texorpdfstring{$\widehat B_{ij}$}{} only.}
As in the example of Eq.~\ref{eq:SiSj}, the relation~\ref{eq:relation_AB} can be used to
eliminate $\mathcal A_{ij}$ or $\mathcal B_{ij}$ from $\widehat H_{\textrm {MF}}$.
If we choose to keep only the $\mathcal B_{ij}$ parameters, $M$ is block diagonal with two blocks of size $N_s$ and the vacuum of bosons is a GS.
To obey the constraint of Eq.~\ref{eq:constraint}, we have to adjust the boson densities by filling some zero-energy mode(s), therefore breaking the $SO(3)$ symmetry.
The GS is thus completely classical.
Conversely, we can keep the $\mathcal A_{ij}$ only, but then the singlet weight is overestimated, which can introduce some bias on frustrated lattices where short-distance correlations are not collinear.
Keeping $\widehat A_{ij}$ only is a widespread practice in the literature, but Trumper {\it et al.}\cite{Trumper_AB_SBMFT} have explicitly shown that the bandwidth of the excitation spectrum of the Heisenberg model on the triangular lattice
is too large by a factor of two when using $\mathcal A_{ij}$ fields only. On the other hand, the use of both $\mathcal A_{ij}$ and $\mathcal B_{ij}$ restores the correct bandwidth and quantitatively improves the excitation spectrum.
Note that even on the square lattice the simultaneous use of both bond operators improves the ground state energy.\cite{CecattoSquare}
From a different point of view, Flint and Coleman\cite{Symplectic_SBMFT} advocate the use of both fields in order to have a large-$N$ limit where the spin generators are {\it odd} under time reversal, as is the case for $SU(2)$.
\section{The search for SL's}
\label{sec:SL}
Even when considering a Hamiltonian with nearest-neighbor interactions only, the dimension of the manifold of MF parameters is exponentially large.\footnote{
With a coordination number $z$ and two complex parameters per bond, it is naively $\mathbb C^{zN_s}$.
In fact, the modulus of self-consistent $\widehat A_{ij}$ and $\widehat B_{ij}$ cannot exceed a
bound, which depends on $\kappa$, see the details in App.~\ref{app:borne}. As for the gauge invariance, it allows one to fix some parameters to be real, see below.}
Moreover, the Lagrange multipliers $\lambda_i$ make the search for the stationary points of the MF free energy difficult (constrained optimization), as for each considered Ansatz all the $\lambda_i$
must be adjusted to calculate the MF free energy.
In Ref.~\onlinecite{SBMFT_alllinks} this optimization was carried out (without any simplifying symmetry assumption) on square and triangular lattices with up to 36 sites. In almost all cases the MF ground state turned
out to be highly symmetric, as expected, but the excited mean-field solutions are highly inhomogeneous (and often not yet understood).
The problem can be considerably simplified if we restrict our search to states respecting some (or all) of the symmetries of $\widehat H_0$.
Such symmetries are divided into global spin rotations, lattice symmetries and time reversal symmetry.
We have assumed from the beginning that $\widehat H_0$ is invariant by global spin rotations and chosen the MF approximation in such a way that it remains true for $\widehat H_{\textrm {MF}}$,
but the choice of a specific Ansatz may or may not break other discrete symmetries.
The forthcoming section explains how to find all Ans\"atze such that the physical quantities are invariant under all the lattice symmetries $\mathcal X$, either strictly (for symmetric SL's) or only up to a time-reversal transformation (for chiral SL's).
We will now define some groups specific to an Ansatz: the invariance gauge group in Sec.~\ref{sec:IGG} and the projective symmetry group in Sec.~\ref{sec:PSG}.
Then, in Sec.~\ref{sec:APSG}, we define the algebraic projective symmetry group, which is associated to a lattice symmetry group and not specific to a particular Ansatz on this lattice.
\subsection{Gauge invariance, fluxes and invariance gauge group (IGG)}
\label{sec:IGG}
Let $\mathcal G \backsimeq U(1)^{N_s}$ be the set of gauge transformations.
A gauge transformation is characterized by an angle $\theta(i) \in[0,2\pi[$ at each site and the operator $\widehat G$ which implements the associated gauge transformation
\begin{equation}
\label{eq:gauge_transf}
\widehat b_{j\sigma}\to \widehat b_{j\sigma}\,e^{i\theta(j)} = \widehat G^\dagger \widehat b_{j\sigma}\widehat G
\end{equation}
is given by
\begin{equation}
\widehat G=\exp\left(i\sum_j \widehat b_{j\sigma}^\dagger \widehat b_{j\sigma} \theta(j)\right).
\end{equation}
A wave function $|\phi\rangle$ respects a symmetry $\widehat F$ if all the physical observables measured in the state $\widehat F|\phi\rangle$ are identical to those measured in $|\phi\rangle$.
It does not mean that $|\phi\rangle=\widehat F|\phi\rangle$, but that the two wave functions are equal up to a gauge transformation: $\exists\, \widehat G\in\mathcal G, |\phi\rangle=\widehat G\widehat F|\phi\rangle$.
The action of $\widehat G$ on the Ansatz is:
\begin{equation}
\left\{
\begin{array}{l}
\mathcal A_{jk}\to \mathcal A_{jk}\,e^{i(\theta(j)+\theta(k))},\\
\mathcal B_{jk}\to \mathcal B_{jk}\,e^{i(-\theta(j)+\theta(k))},
\end{array}\right.
\end{equation}
such that $\widehat H_{\textrm {MF}}$ remains unaffected by $\widehat G$.
We note that $\langle \widehat A_{jk}\rangle$ and $\langle \widehat B_{jk}\rangle$ are gauge dependent: they are not physical quantities, as the operators do not preserve the on-site boson number.
As for any such quantity, their mean values calculated using $\widehat H_0$ vanish when the average is taken over all gauge choices.
Using $\widehat H_{\textrm {MF}}$, they can be non-zero, as the gauge symmetry is explicitly broken by the choice of the Ansatz.
We have seen that changing the gauge modifies the Ansatz but not the physical quantities.
Conversely, if two MF Hamiltonians give rise to the same physical quantities, then their Ans\"atze are linked by a gauge transformation.
In fact two types of physical quantities are directly related to the Ansatz: the MF parameter moduli (related to the scalar product of two spins), and the fluxes. The fluxes are defined as the arguments of Wilson
loop operators such as $\langle \widehat B_{ij}\widehat B_{jk}\widehat B_{ki}\rangle$ or of $\langle \widehat A^\dag_{ij}\widehat A_{jk}\widehat A^\dag_{kl}\widehat A_{li}\rangle$. By construction these quantities are
gauge invariant and define the Ansatz up to gauge transformations.
The physical meaning of fluxes will be addressed in Sec.~\ref{sec:fluxes}.
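The gauge invariance of the fluxes is easy to verify numerically. The sketch below (an illustration of ours) applies a random gauge transformation to arbitrary bond parameters on a triangle and on a square plaquette, and checks that the arguments of the loop operators are unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random complex bond parameters on a triangle (B loop) and a square plaquette (A loop)
B = {e: rng.normal() + 1j * rng.normal() for e in [(1, 2), (2, 3), (3, 1)]}
A = {e: rng.normal() + 1j * rng.normal() for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
theta = {i: rng.uniform(0, 2 * np.pi) for i in range(1, 5)}  # a random gauge choice

# Gauge action: A_jk -> A_jk e^{i(theta_j + theta_k)},  B_jk -> B_jk e^{i(-theta_j + theta_k)}
Bg = {(j, k): v * np.exp(1j * (-theta[j] + theta[k])) for (j, k), v in B.items()}
Ag = {(j, k): v * np.exp(1j * (theta[j] + theta[k])) for (j, k), v in A.items()}

# Fluxes: arguments of the loop operators B_12 B_23 B_31 and A*_12 A_23 A*_34 A_41
flux_B = lambda b: np.angle(b[(1, 2)] * b[(2, 3)] * b[(3, 1)])
flux_A = lambda a: np.angle(np.conj(a[(1, 2)]) * a[(2, 3)] * np.conj(a[(3, 4)]) * a[(4, 1)])

print(np.isclose(flux_B(B), flux_B(Bg)))   # True: the flux is gauge invariant
print(np.isclose(flux_A(A), flux_A(Ag)))   # True
```

The phases of individual bond parameters do change under the transformation; only the loop arguments survive as physical quantities.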
The gauge transformations that do not modify a specific Ansatz form a subgroup of $\mathcal G$ called the invariance gauge group (IGG).
It always contains the minimal group $\mathbb Z_2$ formed by the identity and by the transformation Eq.~\ref{eq:gauge_transf} with $\theta(i)=\pi$ for all lattice sites $i$.
In the particular cases where we can divide the lattice into two sublattices such that $\mathcal A_{ij}=0$ whenever $i$ and $j$ belong to the same sublattice (bipartite problem), the IGG is enlarged to $U(1)$.
The latter situation corresponds, for instance, to an Ansatz on a square lattice with only first-neighbor $\mathcal A_{ij}$.
The transformations of the IGG are then given by $\theta(i)=\theta$ on one sublattice and $\theta(i)=-\theta$ on the other, with arbitrary $\theta\in[0,2\pi[$.
\subsection{The projective symmetry group (PSG)}
\label{sec:PSG}
Let $\mathcal X$ be the group of the lattice symmetries of the Hamiltonian $\widehat H_0$ (translations, rotations, reflections\dots).
From now on, for the sake of simplicity, we discard the hat on the gauge and symmetry operators.
The effect of an element $X$ of $\mathcal X$ on the bosonic operators is
\begin{equation}
\label{eq:latt_sym}
X:\widehat b_{j\sigma}\to \widehat b_{X(j)\sigma}.
\end{equation}
The effect of $X$ on the Ansatz is:
\begin{equation}
\left\{
\begin{array}{l}
\mathcal A_{jk}\to \mathcal A_{X(j)X(k)},\\
\mathcal B_{jk}\to \mathcal B_{X(j)X(k)}.
\end{array}\right.
\end{equation}
We know that a gauge transformation does not change any physical quantity.
What about the lattice symmetries?
We know from Sec.~\ref{sec:IGG} that if the Ans\"atze before and after the action of $X$ have the same physical quantities, they are linked by a gauge transformation: there thus exists at least one gauge transformation
$G_X$ such that $G_XX$ leaves the Ansatz unchanged.
\textit{The set of such transformations of $\mathcal G\times \mathcal X$ is called the projective symmetry group (PSG) of this Ansatz.}
Note that this group only depends on the Ansatz and on $\mathcal X$, but not on the details of the Hamiltonian.
Thus, an Ansatz is said to respect a lattice symmetry $X$ if there exists a transformation $G_X \in \mathcal G$ such that the Ansatz is invariant under $G_XX$.
The IGG of an Ansatz is the PSG subgroup formed by the set of gauge transformations $G_I$ associated with the identity transformation $I$ of $\mathcal X$.
For each lattice symmetry $X \in \mathcal X$ respected by the Ansatz, the set of gauge transformations $G_X$ such that $G_XX$ is in the PSG is isomorphic to the IGG: for any $G_I$ in the IGG, $(G_IG_X)X$ is in the PSG.
Thus, the condition for an Ansatz to respect all the lattice symmetries is that its PSG is isomorphic to IGG$\times\mathcal X$.
\subsection{The algebraic projective symmetry groups}
\label{sec:APSG}
An Ansatz is (partially) characterized by its IGG and its PSG. In turn, we know from these groups which lattice symmetries it preserves.
Conversely, we now want to impose lattice symmetries and find all Ans\"atze that preserve them.
To reach this goal, we proceed in two steps.
The first one is to find the set of the so-called algebraic PSG's. \cite{Wen_PSG,PSG}
They are subgroups of $\mathcal G\times \mathcal X$ verifying algebraic conditions necessarily obeyed by a PSG.
Contrary to the PSG of an Ansatz, the algebraic PSG's exist independently of any Ansatz and only depend on the lattice symmetry group $\mathcal X$ and on the choice of an IGG (chosen to be as general as possible).
An algebraic PSG does not depend on the details of the lattice such as the positions of the sites.
However, depending on these details, an algebraic PSG may have zero, one, or many compatible Ans\"atze.
The second step consists, for a given lattice, in finding all the Ans\"atze compatible with a given algebraic PSG.
Let us detail the algebraic conditions verified by the algebraic PSG's.
The group $\mathcal X$ is characterized by its generators $x_1$\dots$x_p$.
A generator $x_a$ has an order $n_a\in \mathbb N^*$ such that $x_a^{n_a}$ is the identity
(if no such integer exists, we set $n_a=\infty$).
For any transformation $X\in\mathcal X$, there exists a unique ordered product $X=x_1^{k_1}\dots x_p^{k_p}$ with $0\leq k_a<n_a$ if $n_a$ is finite, $k_a\in\mathbb Z$ if not.
The rules used to transform an unordered product into an ordered one are the algebraic relations of the group.
Each of these rules implies a constraint on the $G_{x_a}$ (chosen as one of the gauge transformations associated with $x_a$).
Basically, it states that if a lattice symmetry $X$ can be written in several ways using the generators, the gauge transformation $G_X$ is independent of this writing (up to an IGG transformation).
The subgroups of $\mathcal G\times \mathcal X$ respecting all these constraints are the algebraic PSG's.
To illustrate the idea, let us consider a basic example where $\mathcal X$ is generated by two translations $x_1$ and $x_2$.
Both transformations have an infinite order $n_1=n_2=\infty$.
We have $X\in \mathcal X$ written as a product of generators $X=x_1^{m_1}x_2^{m_2}x_1^{m_3}x_2^{m_4}\dots$ and we would like to write it as $X=x_1^{p_1}x_2^{p_2}$.
The needed algebraic relation is simply the commutation of the two translations: $x_1x_2=x_2x_1$.
We then have $p_1=m_1+m_3+\dots$ and $p_2=m_2+m_4+\dots$.
We will now see that this implies a constraint on $G_{x_1}$ and $G_{x_2}$.
Suppose that we have an Ansatz unchanged by $G_{x_1}x_1$ and $G_{x_2}x_2$.
Then the inverses $x_1^{-1}G_{x_1}^{-1}$ and $x_2^{-1}G_{x_2}^{-1}$ are also in the PSG.
So, the product $G_{x_1}x_1G_{x_2}x_2x_1^{-1}G_{x_1}^{-1}x_2^{-1}G_{x_2}^{-1} \in$ PSG.
This product has been chosen to make the algebraic relation $x_1x_2=x_2x_1$ ($\Leftrightarrow x_1x_2x_1^{-1}x_2^{-1}=I$) appear after the following manipulations:
\begin{eqnarray*}
& G_{x_1}x_1G_{x_2}x_2x_1^{-1}G_{x_1}^{-1}x_2^{-1}G_{x_2}^{-1} \in \rm{PSG}\\
\Leftrightarrow&G_{x_1}(x_1G_{x_2}x_1^{-1})x_1x_2x_1^{-1}x_2^{-1}(x_2G_{x_1}^{-1}x_2^{-1})G_{x_2}^{-1} \in \rm{PSG}\\
\Leftrightarrow&G_{x_1}(x_1G_{x_2}x_1^{-1})(x_2G_{x_1}^{-1}x_2^{-1})G_{x_2}^{-1} \in \rm{PSG}.
\end{eqnarray*}
The expressions in parentheses in the last line are pure gauge transformations, so the full resulting expression is a product of gauge transformations.
Thus, we can more precisely write:
\begin{equation}
G_{x_1}(x_1G_{x_2}x_1^{-1})(x_2G_{x_1}^{-1}x_2^{-1})G_{x_2}^{-1} \in \rm{IGG}.
\end{equation}
If the IGG is $\mathbb Z_2$, this constraint can be written in terms of the phases $\theta_X(i)$ of the gauge transformations $G_X$ as:
\begin{equation}
\theta_{x_1}(i)+\theta_{x_2}(x_1^{-1}i)-\theta_{x_1}(x_2^{-1}i)-\theta_{x_2}(i)=p\pi,
\end{equation}
with $p=0$ or $1$.
This constraint coming from the commutation relation between $x_1$ and $x_2$ must be obeyed by all algebraic PSG's.
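This translation constraint can be checked explicitly for a simple gauge choice. The sketch below is illustrative (ours): it assumes $x_1$ and $x_2$ are the unit translations along $x$ and $y$ of a periodic $L\times L$ cluster, and tests the standard choice $\theta_{x_1}=0$, $\theta_{x_2}(x,y)=p\pi x$:

```python
import numpy as np

L = 6  # linear size of a periodic L x L cluster (L even for consistency)

def check(theta1, theta2, p):
    """Verify theta_{x1}(i) + theta_{x2}(x1^{-1} i) - theta_{x1}(x2^{-1} i)
    - theta_{x2}(i) = p*pi (mod 2*pi) at every site i = (x, y),
    with x1, x2 the unit translations along x and y."""
    ok = True
    for x in range(L):
        for y in range(L):
            lhs = (theta1(x, y) + theta2((x - 1) % L, y)
                   - theta1(x, (y - 1) % L) - theta2(x, y))
            # compare angles via a phase factor to avoid mod-2*pi pitfalls
            ok = ok and np.isclose(np.exp(1j * (lhs - p * np.pi)), 1)
    return ok

print(check(lambda x, y: 0.0, lambda x, y: np.pi * x, p=1))  # True: p=1 solution
print(check(lambda x, y: 0.0, lambda x, y: 0.0, p=0))        # True: trivial p=0 solution
print(check(lambda x, y: 0.0, lambda x, y: np.pi * x, p=0))  # False: wrong p
```

The $p=1$ gauge choice corresponds to a flux $\pi$ of the gauge transformations through each plaquette, which no smooth redefinition of the $\theta$'s can remove.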
It is useless to list all algebraic PSG's for the simple reason that some of them are equivalent and give Ans\"atze with the same physical observables.
Two (algebraic or not) PSG's are equivalent if they are related by a gauge transformation $G$: for any gauge transformation $G_X$ associated to the lattice symmetry $X$ in the first PSG, $GG_XG^{-1}$ belongs to the set of gauge transformations associated to $X$ in the second PSG.
We are only interested in equivalence classes of PSG's.
Taking algebraic PSG's in different classes does not imply that they have no common Ans\"atze: a trivial example is the Ansatz with all parameters zero, which belongs to any algebraic PSG.
But each class includes Ans\"atze that are in no other class and have specific physical properties.
Once all the classes of algebraic PSG's are determined, it remains to find the possible compatible Ans\"atze for one representative of each class.
As an example of compatibility condition, let us take the case where $X$ belongs to the considered algebraic PSG ({\it i.e.} $G_{X}=I$).
Then an Ansatz can be compatible with this algebraic PSG only if, for any pair of sites $(i,j)$, $\mathcal A_{ij}=\mathcal A_{X(i)X(j)}$.
If such compatible Ans\"atze exist, they respect the lattice symmetries by construction (in the sense that their physical quantities do so).
We now want to impose the time reversal symmetry: among the compatible Ans\"atze, we only keep those that are equivalent to a real Ansatz up to a gauge transformation.
We call them \textit{strictly symmetric} Ans\"atze (\textit{weakly symmetric} ones are defined in the next section).
To completely define an Ansatz, it is sufficient to give the algebraic PSG and the values of the MF parameters on non symmetry-equivalent bonds.
For example, on a square (or triangular or kagome) lattice with all usual symmetries (see Fig.~\ref{fig:sym_latt}) and only first neighbor interactions, the $\mathcal A_{ij}$ and $\mathcal B_{ij}$ of one bond are enough.
\section{From chiral long range orders to chiral SL's}
\label{sec:LRO_CSL}
We will now show that the zoo of N\'eel LRO obtained from the strictly symmetric Ans\"atze misses
the chiral states which are exact ground states of a large number of frustrated classical models. This will lead us in a straightforward manner to the construction of chiral algebraic PSG's in which time reversal and some lattice symmetries can be broken (Sec.~\ref{sec:CSL}). This generalized framework will then be illustrated on the triangular lattice in Sec.~\ref{sec:ansatze_tri} and on the square and kagome lattices in App.~\ref{app:EAPSG}.
\subsection{\texorpdfstring{$SU(2)$}{} symmetry breaking of symmetric Ans\"atze}
\label{sec:classical_order}
To simplify, we suppose that all lattice sites are equivalent by symmetry and only consider Ans\"atze such that the $\lambda_i$ are all equal to a single $\lambda$.
Even if an Ansatz is strictly symmetric, it does not always represent a SL phase.
As is well known in SBMFT, a Bose condensation of zero energy spinons can occur and leads to N\'eel order.
We will discuss how the symmetry of the Ans\"atze constrains the magnetic order obtained after condensation, and establish a relation with the \textit{regular states} introduced in Ref.~\onlinecite{Regular_order}.
The creation operators of the Bogoliubov bosons are linear combinations of the $\widehat b_{i\sigma}$ and $\widehat b_{i\sigma}^\dag$, such that their vacuum $|\tilde 0\rangle$ is a GS of
$\widehat H_{\rm{MF}}$ (see App.~\ref{app:bogoliubov}).
If the GS is unique, it must respect all the Hamiltonian symmetries and consequently, cannot break the global spin rotation invariance.
But when $\kappa$ increases (we continuously adapt the Ansatz to $\kappa$ so that the self-consistency conditions remain satisfied and the PSG remains the same), some eigenenergies decrease to zero.
The GS is then no longer unique, as the zero modes can be more or less populated and the phase of each zero mode is free.
It is then possible to develop a long range spin order.
This phenomenon occurs when no $\lambda$ satisfies condition~\ref{eq:constraint3}.
As $\lambda$ increases, the mean number of bosons per site increases up to a maximal value $\kappa_{\rm{max}}$.
At this point, some eigenenergies become zero.
Increasing $\lambda$ further is not possible, as the Bogoliubov transformation becomes unrealizable (the $M$ matrix of Eq.~\ref{eq:HMF} has non-positive eigenvalues).
To reach the required number of bosons per site, we have to fill the zero-energy modes ${\tilde b}^\dag_1$, ${\tilde b}^\dag_2$, \dots, using
coherent states $e^{\alpha_1{\tilde b}^\dag_1+\alpha_2{\tilde b}^\dag_2+\dots}|\tilde 0\rangle$ for example.
In the thermodynamic limit the fraction of missing bosons is macroscopic and a Bose condensation occurs in each of the soft modes.
The choice of the weight $\alpha_i$ of these modes fixes the direction of the on-site magnetization.
Detailed examples of magnetization calculations in a condensate are given by Sachdev.\cite{Sachdev}
In the classical limit ($\kappa\to\infty$), all bosons are in the condensate and contribute to the on-site magnetization $\mathbf m_i$.
The modulus $|\mathbf m_i|$ should be equal to $\kappa/2$ to satisfy Eq.~\ref{eq:constraint_mean}.
The $\widehat b_{i\sigma}$ operators acquire a non-zero expectation value $\langle \widehat b_{i\sigma}\rangle$, which is (up to a gauge transformation) linked to $\mathbf m_i$ by:
\begin{equation}
\left(
\begin{array}{c}
\langle \widehat b_{i\uparrow}\rangle\\
\langle \widehat b_{i\downarrow}\rangle
\end{array}\right)=
\left(
\begin{array}{c}
\sqrt{|\mathbf m_i|+ m_i^z}\\
\sqrt{|\mathbf m_i|- m_i^z}e^{i{\rm Arg}(m_i^x+i m_i^y)}
\end{array}\right),
\label{eq:ab_class}
\end{equation}
where $\rm Arg$ denotes the argument of a complex number and $m_i^{x,y,z}$ are the magnetization components.
These values are constrained by the Ansatz through:
\begin{subeqnarray}
\label{eq:AB_class}
\mathcal{A}_{ij}
&=&\frac{1}{2}(\langle \widehat b_{i\uparrow}\rangle\langle \widehat b_{j\downarrow}\rangle-\langle \widehat b_{j\uparrow}\rangle\langle \widehat b_{i\downarrow}\rangle),
\\
\mathcal{B}_{ij}
&=&\frac{1}{2}(\langle \widehat b^\dag_{i\uparrow}\rangle\langle \widehat b_{j\uparrow}\rangle+\langle \widehat b^\dag_{i\downarrow}\rangle\langle \widehat b_{j\downarrow}\rangle).
\end{subeqnarray}
The supplementary constraint reads:
\begin{equation}
\label{eq:constraint_class}
|\mathbf m_i|\sim \kappa/2
\end{equation}
This extra constraint can make the classical limit problem unsolvable: no classical magnetization pattern is then compatible with the Ansatz.
An example of such a situation was studied by \textcite{PSG} (see App.~\ref{app:strange}).
The classical limit problem can also be approached from the opposite direction.
We start from a classical state and calculate $\langle\widehat b_{i\sigma}\rangle$ and the associated Ansatz (using Eqs.~\ref{eq:ab_class} and \ref{eq:AB_class}).
What are the conditions on the classical state for the associated Ansatz to be strictly symmetric?
As we look for an Ansatz respecting all lattice symmetries, the rotationally invariant quantities (such as the spin-spin correlations) must be invariant under all lattice symmetries, which severely limits the possible classical magnetization patterns.
Such a state is called an $SO(3)$-regular state.
Mathematically, a state is said to be $SO(3)$-regular if for any lattice symmetry $X$ there is a global spin rotation $S_X\in SO(3)$ such that the state is invariant under $S_XX$.
Moreover, the time reversal symmetry ({\it i.e.} the Ansatz can be chosen to be real) imposes the co-planarity of the spins.\footnote{If we choose the $xz$ plane, we directly obtain a real Ansatz from Eqs.~\ref{eq:ab_class},\ref{eq:AB_class}.}
The set of coplanar $SO(3)$-regular states can thus be mapped onto the set of condensed states of strictly symmetric Ans\"atze.
In the same way, we define the $O(3)$-regular states by including global spin flips $\mathbf S_i\to-\mathbf S_i$ in the group of spin transformations.
These $O(3)$-regular states are listed in Ref.~\onlinecite{Regular_order} for several two-dimensional lattices.
The $O(3)$-regular states divide into coplanar $SO(3)$-regular states and chiral states.
In a chiral state, the global inversion $\mathbf S_i\to-\mathbf S_i$ cannot be ``undone'' by a global spin rotation. Equivalently, there exist three sites
$i$, $j$, $k$ such that the scalar chirality $\mathbf S_i\cdot (\mathbf S_j\land \mathbf S_k)$ is nonzero: the spins are not coplanar.
A strictly symmetric Ansatz can therefore only give coplanar $SO(3)$-regular states upon condensation in the classical limit, thereby missing all chiral $O(3)$-regular states.
This limitation may seem unimportant, as most of the usual long-range ordered spin models have planar GS's, but several counterexamples have recently been discovered.
The first example is the cyclic exchange model on the triangular lattice\cite{Momoi_classique} with a four sublattice tetrahedral chiral GS (see Fig.~\ref{fig:order_tri_tetra}).
More recently, two twelve sublattice chiral GS's, with the spins oriented towards the corners of a cuboctahedron, were discovered on the kagome lattice with first and second neighbor exchanges\cite{KagomeDomenge,Janson_2008} (studied in App.~\ref{app:kagome_ansatz}).
A systematic study of the classical GS's of simple models on different lattices has indeed revealed that the GS's are chiral for large ranges of interaction values.\cite{Regular_order}
\begin{figure}
\begin{center}
\includegraphics[height=.14\textwidth]{order_tri_tetra_spin-crop}\quad
\includegraphics[height=.14\textwidth]{order_tri_tetra_latt-crop}
\caption{(Color online) Tetrahedral order on the triangular lattice: the four sublattice magnetizations point towards the corners of a tetrahedron.}
\label{fig:order_tri_tetra}
\end{center}
\end{figure}
The theory of symmetric PSG is unable to encompass such chiral states.
In the following subsection, we will build TRSB SL Ans\"atze which include, upon condensation, all classical regular chiral states.
This method was already applied to the kagome lattice with up to third-neighbor interactions, leading to the surprising result of a chiral state even in the purely first-neighbor model.\cite{cuboc1}
Whether this state is physically relevant is still an open question, but it independently shows that the
omission of chiral Ans\"atze has prevented the discovery of more competitive MF solutions.
\subsection{The chiral algebraic PSG's: how to include weakly symmetric states}
\label{sec:CSL}
The time-reversal transformation $ \cal T$ acts on an Ansatz by complex conjugation of the MF parameters.\cite{Wen_PSG}
If an Ansatz respects this symmetry, it is sent to itself by $\cal T$ (up to a gauge transformation).
So, in an appropriate gauge, all parameters can then be chosen real.
In most previous SBMFT studies, the hypothesis of time reversal invariance of the GS was implicit, as only \textit{real} Ans\"atze were considered.
In contrast with the global $SU(2)$ spin symmetry, which can easily be broken through the Bose condensation process,
no transition is known to produce a chiral ordered state out of a $ \cal T$-symmetric Ansatz.
Indeed, chiral Ans\"atze have loops with complex-valued fluxes which evolve continuously with $\kappa$.
We do not expect any singular behavior of these (local) fluxes when crossing the condensation point, so the generic situation
is that a chiral LRO phase will give rise to a TRSB SL\cite{cuboc1,Faak2012} when decreasing $\kappa$.
It is of course possible that the lowest-energy Ansatz changes with $\kappa$, but such a first-order transition has no reason to coincide with the onset of magnetic LRO.
To obtain all chiral SL's we have to explicitly break time-reversal symmetry at the MF level, in the Ansatz.
For $SO(3)$ classical regular states, a lattice transformation from $\mathcal X$ is compensated by a global spin rotation (that leaves the Ansatz unchanged).
For $O(3)$ classical regular states, a lattice transformation $X\in\mathcal X$ is compensated by a global spin rotation possibly followed by an inversion $\mathbf S_i\to-\mathbf S_i$.
This defines a parity $\epsilon_X$ to be $+1$ if no spin inversion is needed, and $-1$ otherwise.
In a chiral SL, the parity will be deduced from the effect of $X$ on the fluxes: $\epsilon_X=1$ if they are unchanged, $-1$ if they are reversed.
With this distinction in mind, we will call \textit{weakly symmetric} (WS) Ans\"atze
those respecting the lattice symmetries up to $ \cal T$, whereas the Ans\"atze respecting strictly all lattice symmetries and $\cal T$ have already been called
\textit{strictly symmetric} (SS) Ans\"atze (all lattice symmetries are even).
The distinction between even and odd lattice symmetries (as defined by $\epsilon_X$) is the basis of the construction of all WS Ans\"atze via the chiral algebraic PSG's.
Let us consider $\mathcal X_e$, the subgroup of transformations of $\mathcal X$ that can only be even.
Mathematically, $\mathcal X_e$ is the subgroup of $\mathcal X$ whose elements are sent to the identity by {\it all} morphisms from $\mathcal X$ to $\mathbb Z_2$.
$\mathcal X_e$ contains at least all the squares of the elements of $\mathcal X$ as $\epsilon_{X^2}=\epsilon_X^2=1$.
But, depending on the algebraic relations of $\mathcal X$, it may contain more transformations as we show in the triangular case in Sec.~\ref{sec:APSG_tri}.
Once $\mathcal X_e$ is known, we define the {\it chiral} algebraic PSG's of $\mathcal X$ as the algebraic PSG's of $\mathcal X_e$.
The method described previously to find all algebraic PSG's applies the same way.
We define $\mathcal X_o$ as the set of transformations which may be odd ($\mathcal X\setminus\mathcal X_e$): it contains the transformations of undetermined parities.
To filter the weakly symmetric Ans\"atze from those compatible with the chiral algebraic PSG's, we have to take care of the transformations of $\mathcal X_o$.
This gives two types of extra constraints.
First, MF parameters of the same type ($\mathcal A$ or $\mathcal B$) on bonds linked by such a transformation must have the same modulus.
The second constraint concerns their phases, through the fluxes.
The phases are gauge dependent, but the fluxes are gauge independent.
Fluxes are sent to their opposites by $\cal T$, as well as by the odd transformations of $\mathcal X$, and are unchanged by even transformations.
To find all WS Ans\"atze we then have to determine a maximal set of independent elementary fluxes and distinguish all possible cases of parities for the transformations of $\mathcal X_o$ ($\epsilon_X=\pm1$).
We can now apply these theoretical considerations to find all WS Ans\"atze on some usual lattices such as the triangular, honeycomb, kagome and square lattices.
The calculations are detailed for the triangular lattice in the following subsections and some results for the kagome and square lattice are given in App.~\ref{app:EAPSG}.
\subsection{Chiral algebraic PSG's of lattices with a triangular Bravais lattice}
\label{sec:APSG_tri}
\begin{figure}
\begin{center}
\includegraphics[height=.19\textwidth]{sym_tri_R6sigma_rep120-crop}\quad
\includegraphics[height=.19\textwidth]{sym_square_R4sigma_rep90-crop}\\
\caption{(Color online) Generators of the lattice symmetries $\mathcal X$ on the triangular and square lattices.
$\mathcal V_i$ is a translation, $\sigma$ is a reflection and $\mathcal R_i$ is a rotation of order $i$. }
\label{fig:sym_latt}
\end{center}
\end{figure}
The first step is to find all chiral algebraic PSG's.
As already mentioned, they only depend on the symmetries of $\mathcal X_e$ and on the IGG.
We choose the most general case of IGG$\sim\mathbb Z_2$ and suppose that $\widehat H_0$ respects all the lattice symmetries with the generators described in Fig.~\ref{fig:sym_latt}.
These symmetries are those of a triangular lattice, but the actual (spin) lattice of $\widehat H_0$ can be any lattice with a triangular Bravais lattice such as a honeycomb, a kagome or more complex lattices.
The coordinates $(x,y)$ of a point are given in the basis of the translation vectors $\mathcal V_1$, $\mathcal V_2$ and the effect of the generators on the coordinates are
\begin{subeqnarray}
\mathcal V_1:(x,y)&\to&(x+1,y),\\
\mathcal V_2:(x,y)&\to&(x,y+1),\\
\mathcal R_6:(x,y)&\to&(x-y,x),\\
\sigma:(x,y)&\to&(y,x).
\end{subeqnarray}
The algebraic relations in $\cal X$ are:
\begin{subeqnarray}
\mathcal V_1\mathcal V_2&=&\mathcal V_2\mathcal V_1,\\
\sigma^2&=&I,\\
\mathcal R_6^6&=&I, \\
\mathcal V_1\mathcal R_6&=&\mathcal R_6\mathcal V_2^{-1}, \slabel{eq:cons-d}\\
\mathcal V_2\mathcal R_6&=&\mathcal R_6\mathcal V_1\mathcal V_2, \slabel{eq:cons-e}\\
\mathcal V_1\sigma&=&\sigma \mathcal V_2,\\
\mathcal R_6\sigma \mathcal R_6&=&\sigma.
\end{subeqnarray}
Let us now determine the subgroup $\mathcal X_e$ of transformations which are necessarily even.
It evidently includes $\mathcal V_1^2$, $\mathcal V_2^2$ and $\mathcal R_6^2$ (denoted $\mathcal R_3$).
But there are more even transformations in this subgroup. Using Eq.~\ref{eq:cons-e} we find $\epsilon_{\mathcal V_2}\epsilon_{\mathcal R_6}=\epsilon_{\mathcal R_6}\epsilon_{\mathcal V_1}\epsilon_{\mathcal V_2}$, so $\epsilon_{\mathcal V_1}=1$.
In the same way, using Eq.~\ref{eq:cons-d}, we get $\epsilon_{\mathcal V_2}=1$.
Thus $\mathcal X_e$ is generated by $\mathcal V_1$, $\mathcal V_2$ and $\mathcal R_3$.
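The parity argument above can be checked by brute force: we represent the generators by their action on the coordinates, verify the algebraic relations of $\mathcal X$, and enumerate all sign assignments (morphisms to $\mathbb Z_2$) compatible with them. A small sketch (function names are ours):

```python
from itertools import product

# Generators acting on the coordinates (x, y)
V1  = lambda p: (p[0] + 1, p[1])
V2  = lambda p: (p[0], p[1] + 1)
V2i = lambda p: (p[0], p[1] - 1)                   # V2^{-1}
R6  = lambda p: (p[0] - p[1], p[0])
Sg  = lambda p: (p[1], p[0])                       # sigma

PTS = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]

def compose(*fs):
    """compose(f, g)(p) = f(g(p)): right-to-left, as for operator products."""
    def h(p):
        for f in reversed(fs):
            p = f(p)
        return p
    return h

def equal(f, g):
    return all(f(p) == g(p) for p in PTS)

# The algebraic relations of the text, checked on a patch of the lattice
assert equal(compose(V1, V2), compose(V2, V1))
assert equal(compose(Sg, Sg), lambda p: p) and equal(compose(*[R6] * 6), lambda p: p)
assert equal(compose(V1, R6), compose(R6, V2i))    # V1 R6 = R6 V2^{-1}
assert equal(compose(V2, R6), compose(R6, V1, V2)) # V2 R6 = R6 V1 V2
assert equal(compose(V1, Sg), compose(Sg, V2))
assert equal(compose(R6, Sg, R6), Sg)

# A morphism X -> Z2 assigns +/-1 to each generator and must turn every
# relation into a valid sign identity (an inverse keeps the same sign)
morphisms = [(eV1, eV2, eR6, eS)
             for eV1, eV2, eR6, eS in product((1, -1), repeat=4)
             if eV1 * eR6 == eR6 * eV2             # from V1 R6 = R6 V2^{-1}
             and eV2 * eR6 == eR6 * eV1 * eV2      # from V2 R6 = R6 V1 V2
             and eV1 * eS == eS * eV2]             # from V1 sigma = sigma V2
```

All four surviving morphisms send $\mathcal V_1$ and $\mathcal V_2$ to $+1$: the translations are necessarily even, while $\epsilon_{\mathcal R_6}$ and $\epsilon_\sigma$ remain free.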
The algebraic relations in $\mathcal X_e$ are
\begin{subeqnarray}
\label{eq:constraints}
\mathcal V_1\mathcal V_2&=&\mathcal V_2\mathcal V_1,\\
\mathcal R_3^3&=&I, \\
\mathcal R_3\mathcal V_1&=&\mathcal V_2\mathcal R_3,\\
\mathcal R_3&=&\mathcal V_1\mathcal V_2\mathcal R_3\mathcal V_2.
\end{subeqnarray}
As explained in Sec.~\ref{sec:APSG}, each of these relations gives a constraint on the gauge transformations associated to these generators.
Eqs.~\ref{eq:constraints} imply that for any site $i$:
\begin{subeqnarray}
\label{eq:constraints2}
\theta_{\mathcal V_2}(\mathcal V_1^{-1}i)-\theta_{\mathcal V_2}(i)&=&p_1 \pi,\\
\theta_{\mathcal R_3}(i)+\theta_{\mathcal R_3}(\mathcal R_3i)+\theta_{\mathcal R_3}(\mathcal R^2_3i)&=&p_2\pi,\\
\theta_{\mathcal R_3}(i)-\theta_{\mathcal R_3}(\mathcal V_2^{-1}i)-\theta_{\mathcal V_2}(i)&=&p_3\pi,\\
\theta_{\mathcal V_2}(\mathcal V_1^{-1}i)+\theta_{R_3}(\mathcal V_2^{-1}\mathcal V_1^{-1}i)&&\nonumber\\
+ \theta_{\mathcal V_2}(\mathcal V_2\mathcal R_3^2i)-\theta_{\mathcal R_3}(i)&=&p_4\pi,
\end{subeqnarray}
where $p_1$ to $p_4$ can take either the value $0$ or $1$ (the equations are written modulo $2\pi$).
We denote by $\lbrack x\rbrack$ the integer part of $x$ and set $x^*=x-\lbrack x\rbrack$ ($0\leq x^*<1$).
By partially fixing the gauge, we can impose
\begin{subeqnarray}
\label{eq:partial_gauge_fixing}
\theta_{\mathcal V_1}(x_i,y_i)&=&0,
\nonumber\\
\theta_{\mathcal V_2}(x_i^*,y_i)&=&p_1 \pi x_i^*.
\nonumber
\end{subeqnarray}
Through a gauge transformation $G$ of argument $\theta_G$, the phase $\theta_X$ of a lattice transformation $X$ becomes
\begin{equation}
\label{eq:gauge_effect}
\theta_X(i) \to \theta_G(i)+\theta_X(i)-\theta_G(X^{-1}i),
\end{equation}
and the algebraic PSG is transformed into another element of its equivalence class.
Using the following gauge transformations:
\begin{subeqnarray}
G_3:(x,y)&\to&\pi x,\nonumber\\
G_4:(x,y)&\to&\pi y,\nonumber
\end{subeqnarray}
we see that a change of $p_3$ or $p_4$ is a gauge transformation, so we can set them to zero.
Solving the set of equations~\ref{eq:constraints2} leads to:
\begin{subeqnarray}
\label{eq:APSG_tri}
\theta_{\mathcal V_1}(x,y)&=&0 \\
\theta_{\mathcal V_2}(x,y)&=&p_1\pi x \\
\theta_{\mathcal R_3}(x,y)&=&
p_1\pi x\left( y-\frac{x+1}{2}\right)+g_{\mathcal R_3}(x^*, y^*),
\end{subeqnarray}
with a supplementary constraint that can only be treated once the spin lattice is defined:
\begin{eqnarray}
\label{eq:constraintgR}
g_{\mathcal R_3}(x^*, y^*)
+g_{\mathcal R_3}((-y)^*, (x-y)^*)&&\nonumber\\
+g_{\mathcal R_3}((y-x)^*,(-x)^*) &=&p_2\pi.
\end{eqnarray}
This constraint only depends on the coordinates of the sites in a unit cell ($x^*$ and $y^*$).
Eqs.~\ref{eq:APSG_tri} and \ref{eq:constraintgR} define the chiral algebraic PSG on the triangular Bravais lattice.
The full determination of the WS Ans\"atze requires the precise definition of the spin lattice (triangular ($m=1$), honeycomb ($m=2$) or kagome ($m=3$)) and depends on the number of interactions included
in the MF Hamiltonian (first neighbor only, or first and second neighbors; $ \cal{ A}$ and $\cal {B}$ parameters, or $\cal{ A}$ only, etc.).
The case of the triangular lattice ($m=1$) with nearest neighbor interactions and $\cal{ A}$ and $\cal {B}$ MF parameters is described in the next subsection.
\section{ Strictly and Weakly symmetric Ans\"atze on the triangular lattice with first neighbor interactions}
\label{sec:ansatze_tri}
\subsection{Construction of WS Ans\"atze on the triangular lattice}
The triangular lattice has a single site per unit cell and the values of $x^*$ and $y^*$ are the coordinates of this site in a unit cell, say $(0,0)$.
Eq.~\ref{eq:constraintgR} simplifies into:
\begin{equation}
6g_{\mathcal R_3}(0,0)=0.
\end{equation}
The solutions are $g_{\mathcal R_3}(0,0)=k\pi/3$, with $k$ integer. Because the IGG is $\mathbb Z_2$, only the three values $k=-1,0,1$ lead to physically different Ans\"atze.
Finally, we have 6 distinct algebraic PSG's for the reduced set of symmetries ${\cal X}_e$.
They are characterised by two integers $p_1=0,1$ and $k=-1,0,1$ and defined by:
\begin{subeqnarray}
\theta_{\mathcal V_1}(x,y)&=&0 \\
\theta_{\mathcal V_2}(x,y)&=&p_1\pi x \\
\theta_{\mathcal R_3}(x,y)&=&p_1\pi x\left(y-\frac{x+1}{2}\right)+\frac{k\pi}{3}
\end{subeqnarray}
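These six sets of phases can be checked directly against Eqs.~\ref{eq:constraints2}, with $p_2=k$ (mod 2), $p_3=p_4=0$ and $\mathcal R_3:(x,y)\to(-y,x-y)$. A numerical sketch, with helper names of our own:

```python
import math

R3  = lambda x, y: (-y, x - y)   # R3 = R6^2 acting on the coordinates
R3i = lambda x, y: (y - x, -x)   # R3^{-1} = R3^2

def zero_mod_2pi(a):
    # True when a is an integer multiple of 2*pi
    return math.isclose(math.cos(a), 1.0, abs_tol=1e-9)

def constraints_hold(p1, k, span=4):
    """Check Eqs. (constraints2) for the PSG (p1, k), with p2 = k mod 2, p3 = p4 = 0."""
    tV2 = lambda x, y: p1 * math.pi * x
    # x*y - x*(x+1)//2 is an integer, so tR3 is a multiple of pi plus k*pi/3
    tR3 = lambda x, y: p1 * math.pi * (x * y - x * (x + 1) // 2) + k * math.pi / 3
    for x in range(-span, span + 1):
        for y in range(-span, span + 1):
            x1, y1 = R3(x, y)
            x2, y2 = R3(x1, y1)
            xb, yb = R3i(x, y)
            checks = (
                tV2(x - 1, y) - tV2(x, y) - p1 * math.pi,              # (a)
                tR3(x, y) + tR3(x1, y1) + tR3(x2, y2) - k * math.pi,   # (b)
                tR3(x, y) - tR3(x, y - 1) - tV2(x, y),                 # (c)
                tV2(x - 1, y) + tR3(x - 1, y - 1)
                + tV2(xb, yb + 1) - tR3(x, y),                         # (d)
            )
            if not all(zero_mod_2pi(c) for c in checks):
                return False
    return True

all_psg_ok = all(constraints_hold(p1, k) for p1 in (0, 1) for k in (-1, 0, 1))
```

The check passes for all six $(p_1,k)$ combinations on a finite patch of the lattice.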
Now, we have to find all the Ans\"atze compatible with these PSG's.
\footnote{In the most general case, an Ansatz contains MF parameters for any pair of sites $(i,j)$, but in practice we study short-range interactions.
For this example we limit ourselves to first-neighbor parameters, but the procedure is easily generalized to further neighbors.}
The first useful insight is to count the number of independent bonds.
Here, one can obtain any bond from any other by a series of rotations and translations ({\it i.e.}, elements of $\mathcal X_e$).
Thus, if we fix the value of $\mathcal A_{ij}$ and $\mathcal B_{ij}$ on a bond $ij$, we can deduce all other bond parameters from the PSG.
Note that $\mathcal A_{ij}$ can be chosen real by using the gauge freedom.
The values of all bond parameters are represented in Fig.~\ref{fig:triangular_ansatz} as functions of their value on the reference bond.
The unit cell of the Ansatz contains up to two sites because $p_1$ may be non-zero.
\begin{figure}
\begin{center}
\includegraphics[width=.28\textwidth]{ansatze_chiral_triangulaire-crop.pdf}
\caption{(Color online)
Ans\"atze respecting the $\mathcal X_e$ symmetries on the triangular lattice.
All arrows carry $\mathcal B_{ij}$ parameters of modulus $B_1$ and of argument $\phi_{B_1}$ and $\mathcal A_{ij}$ parameters of modulus $A_1$ and of argument 0 on red arrows (choice of the gauge),
$2k\pi/3$ on blue ones and $4k\pi/3$ on green ones.
On dashed arrows $\mathcal A_{ij}$ and $\mathcal B_{ij}$ take an extra $p_1\pi$ phase.
}
\label{fig:triangular_ansatz}
\end{center}
\end{figure}
From now on we can forget about the PSG construction and only retain the definition of the Ansatz given by Fig.~\ref{fig:triangular_ansatz} and
its minimal set of parameters: two integers $p_1$ and $k$, two moduli $A_1$ and $B_1$, and one argument $\phi_{B_1}$.
Until now, we have only considered the subgroup $\mathcal X_e$ and looked for Ans\"atze strictly respecting these symmetries.
We now want to consider all symmetries in $\mathcal X$, but the symmetries in $\mathcal X_o$ will only be obeyed modulo a possible time-reversal operation.
This requires supplementary conditions on the Ans\"atze of Fig.~\ref{fig:triangular_ansatz}.
As explained in Sec.~\ref{sec:CSL}, the transformations of $\mathcal X_o$ imply relations between the modulus and the arguments of the Ansatz.
Since we are in a very simple case, where all bonds are equivalent under $\mathcal X_e$, no extra relation on the moduli can be extracted from $\mathcal X_o$.
However, some conditions can be found by examining how the fluxes
$\rm{Arg}(\mathcal A_{ij}\mathcal A_{jk}^*\mathcal A_{kl}\mathcal A_{li}^*)$ on an elementary rhombus and $\rm{Arg}(\mathcal A_{ij}\mathcal B_{jk}\mathcal A_{ki}^*)$ on an elementary triangle
transform under $\mathcal R_6$ and $\sigma$. Assuming that neither $A_1$ nor $B_1$ is zero, we find:
\begin{subeqnarray}
\label{eq:epsilon}
2 k\pi(1-\epsilon_{\mathcal R_6})/3&=&0,\\
2 k\pi(1+\epsilon_\sigma)/3&=&0,\\
(1+\epsilon_{\mathcal R_6})\phi_{B_1}&=&p_1\pi,\\
(1-\epsilon_\sigma)\phi_{B_1}&=&p_1\pi.
\end{subeqnarray}
For each pair $(\epsilon_{\mathcal R_6},\epsilon_\sigma)$, the compatible Ans\"atze are thus limited to:\\
i) $(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(1,1)$: $k=0$, $p_1=0$ and $\phi_{B_1}=0$ or $\pi$,\\
ii) $(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(-1,-1)$: $k=0$, $p_1=0$ and $\phi_{B_1}=0$ or $\pi$,\\
iii) $(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(1,-1)$: $\phi_{B_1}=p_1\pi/2$ or $\pi+p_1\pi/2$,\\
iv) $(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(-1,1)$: $k=0$, $p_1=0$ and no constraint on $\phi_{B_1}$.
A pair $(\epsilon_{\mathcal R_6},\epsilon_\sigma)$ does not characterize an Ansatz: a given Ansatz can be found for several pairs of parities.
For example, the Ans\"atze obtained for $(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(1,1)$ are also present for all other $(\epsilon_{\mathcal R_6},\epsilon_\sigma)$.
Indeed, as their MF parameters are real, they are not sensitive to time reversal and any $\epsilon_{\mathcal R_6}$, $\epsilon_\sigma$ can be chosen.
From the classical point of view, these Ans\"atze describe coplanar spin configurations, which are invariant under a global spin flip followed by a $\pi$ rotation around an axis perpendicular to the spin plane.
Finally, there are nine different WS Ans\"atze families, given in Table~\ref{tab:symmetric_Ansatze_tri}.
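The case analysis i)--iv) and the resulting count of nine families can be reproduced by exact enumeration of Eqs.~\ref{eq:epsilon}, restricting to $k=0,1$ (the $k=-1$ solutions being the $\cal T$-partners). A Python sketch, with helper names of our own:

```python
from fractions import Fraction
from itertools import product

def allowed_phis(eR, eS, p1, k):
    """Solve Eqs. (epsilon) for phi_B1; all phases are in units of pi (exact arithmetic).
    Returns 'any', a tuple of allowed phi in [0, 2), or None if (p1, k) is excluded."""
    # first two equations: 2k(1 -/+ eps)/3 must be an even multiple of pi
    if (2 * k * (1 - eR)) % 6 or (2 * k * (1 + eS)) % 6:
        return None
    cR, cS = 1 + eR, 1 - eS          # coefficients of phi_B1 in the last two equations
    if cR == 0 and cS == 0:
        return 'any' if p1 == 0 else None
    sols = tuple(phi for phi in (Fraction(j, 2) for j in range(4))
                 if (cR * phi - p1) % 2 == 0 and (cS * phi - p1) % 2 == 0)
    return sols or None

families = set()
for eR, eS, p1, k in product((1, -1), (1, -1), (0, 1), (0, 1)):
    phis = allowed_phis(eR, eS, p1, k)
    if phis == 'any':
        families.add((p1, k, 'any'))
    elif phis:
        families.update((p1, k, str(phi)) for phi in phis)
# families collects (p1, k, phi_B1 in units of pi): nine entries in total
```

The nine triples agree with Table~\ref{tab:symmetric_Ansatze_tri}, including the free-$\phi_{B_1}$ family $(p_1,k)=(0,0)$.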
We now conclude this section with a series of remarks concerning the solutions we have obtained:
\begin{itemize}
\item[i)] The number of WS Ans\"atze families is larger than the number of algebraic PSG's of $\mathcal X_e$, because the operators in $\mathcal X_o$ can act in different ways on the Ans\"atze.
\item[ii)] Amongst these 9 Ans\"atze families, only the first two are non-chiral; six others are TRSB Ans\"atze (under $\cal T$, $k=1$ is changed to $k=-1$ and $\phi_{B_1}$ to $-\phi_{B_1}$).
The six families obtained by applying $\cal T$ are not listed here.
\item[iii)] These solutions are called {\it families} because the moduli $A_1$ and $B_1$ can vary continuously without modifying the symmetries.
The third Ansatz has no fixed value of $\phi_{B_1}$ and includes the first and second Ans\"atze families (these are kept distinct because they are non-chiral).
\item[iv)] The fluxes of these Ans\"atze are easily calculated using Fig.~\ref{fig:triangular_ansatz}.
\item[v)] The detailed list of compatible Ans\"atze depends on the choice of the mean-field parameters (here, nonzero $\mathcal A_{ij}$ and $\mathcal B_{ij}$ on first-neighbor bonds), as we explain in App.~\ref{app:strange} by contrasting these results with those of Wang \textit{et al.} on the same lattice.\cite{PSG}
\end{itemize}
\begin{table}
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Ansatz number&~~$p_1$~~ &~~$k$ ~~&~~$\phi_{B_1}$~~ \\
\hline
1&\multirow{5}{*}{0}&\multirow{3}{*}{0}&0\\
\cline{4-4}
2&&&$\pi$\\
\cline{4-4}
3&&&any\\
\cline{3-4}
4&&\multirow{2}{*}{1}&0\\
\cline{4-4}
5&&&$\pi$\\
\cline{2-4}
6&\multirow{4}{*}{1}&\multirow{2}{*}{0}&$\pi/2$\\
\cline{4-4}
7&&&$3\pi/2$\\
\cline{3-4}
8&&\multirow{2}{*}{1}&$\pi/2$\\
\cline{4-4}
9&&&$3\pi/2$\\
\hline
\end{tabular}
\caption{The nine weakly symmetric Ans\"atze families on the triangular lattice, with the notations of Fig.~\ref{fig:triangular_ansatz}. The moduli $A_1$ and $B_1$ are not constrained, although they are supposed nonzero.
\label{tab:symmetric_Ansatze_tri}
}
\end{center}
\end{table}
\begin{table}
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
~~~~~~&~~~F~~~ &coplanar &~~tetra~~\\
\hline
$p_1$ &0 &0 &1\\
$k$&- &0 &1\\
$\epsilon_{\mathcal R_6}$ &? &? &1\\
$\epsilon_\sigma$&? &? &-1\\
\hline
$A_1$ &0 &$\frac{\sqrt3}{4}^*$ &$\frac1{\sqrt6}^*$\\
\hline
$B_1$ &$\frac12^*$&$\frac14^*$ &$\frac1{\sqrt{12}}^*$\\
$\phi_{B_1}$ &0 &$\pi$ &$\frac\pi2$\\
\hline
\end{tabular}
\caption{Values of the parameters of Fig.~\ref{fig:triangular_ansatz} for Ansatz families related to regular classical states on the triangular lattice.
The states are denoted by F for ferromagnetic, coplanar for the $\sqrt3\times\sqrt3$ state and tetra for tetrahedral.
These states are described in more detail in Ref.~\onlinecite{Regular_order}.
The question marks mean that both values $\epsilon=\pm1$ are possible (coplanar or collinear state).
The $^*$ means that the parameter value is free; we give its value in the fully magnetized state ($|\mathbf m|=1$).
\label{tab:regular_states_tri}
}
\end{center}
\end{table}
\subsection{Condensation of the WS Ans\"atze: the missing tetrahedral state}
\label{sec:tetrahedron}
The SBMFT has already been used to study the antiferromagnetic Heisenberg first-neighbor Hamiltonian on the triangular lattice with the $\mathcal A_{ij}$-only decoupling\cite{Sachdev, PSG}
(Eq.~\ref{eq:decoupling_A}) or with both $\mathcal A_{ij}$ and $\mathcal B_{ij}$\cite{Trumper_AB_SBMFT} (Eq.~\ref{eq:decoupling_AB}).
The classical limit of this model gives the well known three sublattice N\'eel order with coplanar spins at angles of $120^\circ$.
The bond parameters obtained from this classical order (see Eq.~\ref{eq:AB_class}) lead to a strictly symmetric Ansatz
(no need to break $\cal T$: we can choose to fix $(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(1,1)$) with $p_1=0$, $k=0$ and $\phi_{B_1}=\pi$.
We note that all MF parameters are real in this gauge choice (it is always possible to do so for coplanar states).
In this case the restriction to real bond parameters did not prevent obtaining the true MF ground state.
The tetrahedral state (Fig.~\ref{fig:order_tri_tetra})
is the unique GS of the multi-spin exchange Hamiltonian in a large range of parameters:\cite{Momoi_classique,Regular_order}
\begin{equation}
\widehat H=J_2\sum_{\langle i j \rangle} \hat P_{(ij)} + J_4 \sum_{\langle ijkl\rangle}(\hat P_{(ijkl)}+\hat P_{(ilkj)}),
\label{eq:Ham_J2J4}
\end{equation}
where the second sum runs over all elementary rhombi, $\hat P_{(ijkl)}$ performs a cyclic permutation of the four spins, $J_4>0$ and $\frac14<\frac{J_2}{J_4}<1$.
Moreover, it is one of the GS's of a Heisenberg Hamiltonian with first and second neighbor interactions
\begin{equation}
\widehat H=\sum_{\langle i j \rangle}\widehat{\mathbf S}_i\cdot \widehat{\mathbf S}_j+\alpha\sum_{\langle\langle i j \rangle\rangle}\widehat{\mathbf S}_i\cdot \widehat{\mathbf S}_j
\end{equation}
for $\frac18\leq\alpha\leq1$. In the latter situation the GS is however degenerate, and fluctuations (order by disorder) favor collinear orders.\cite{Jolicoeur_J1J2tri,Lecheminant1995}
The bond parameters obtained from this classical order (Eq.~\ref{eq:AB_class}) lead to the weakly symmetric Ansatz
($(\epsilon_{\mathcal R_6},\epsilon_\sigma)=(1,-1)$) with $p_1=1$, $k=1$ and $\phi_{B_1}=\pi/2$ (or opposite $k$ and $\phi_{B_1}$ for the opposite chirality).
The previous SBMFT studies of the ring exchange model (Eq.~\ref{eq:Ham_J2J4}) have been limited to real parameters\cite{Misguich_SBMFT}
and it would be interesting to perform a systematic search for a possible chiral MF ground-state.
If the chiral Ansatz indeed turns out to have the lowest energy -- as suggested by its classical limit -- then the spin-$\frac{1}{2}$ ground state might be
a chiral SL, since exact diagonalizations\cite{Misguich1999,Misguichmambrini2002} have shown the absence of N\'eel long-range order in some parameter range.
\section{Fluxes}
\label{sec:fluxes}
We have already given a brief definition of the fluxes in Sec.~\ref{sec:IGG}; in this section we will enlarge this definition and comment on the physical meaning of the various loop operators (local and non local) that can be defined on a lattice.
The gauge invariance of a product of $\widehat A_{ij}$, $\widehat A^\dag_{ij}$, $\widehat B_{ij}$ and $\widehat B^\dag_{ij}$ operators on a closed contour requires two conditions:
(i) each site $i$ appears in an even number of terms,
(ii) the set of operators containing a site $i$ can be organized into pairs such that the product of each pair is invariant under a local gauge transformation on site $i$ (for example $\widehat A_{ji}$ and $\widehat B_{ik}$).
Such a gauge-invariant operator is the analog of a Wilson loop operator in gauge theory and
the complex argument of its expectation value is called a flux.
${\rm Arg}\langle \widehat A_{ij}\widehat A^\dag_{jk}\dots\widehat A_{lm}\widehat A^\dag_{mi}\rangle$,
${\rm Arg}\langle \widehat B_{ij}\widehat B_{jk}\dots\widehat B_{li}\rangle$ are examples of fluxes with only $\widehat A_{ij}$ or $\widehat B_{ij}$ operators,
but it is possible to mix both as for example in ${\rm Arg}\langle \widehat A_{ij}\widehat A^\dag_{jk}\widehat B^\dag_{kl}\widehat A_{lm}\widehat A^\dag_{mi}\rangle$.
In SBMFT we approximate these averages of products by the product of the averages (this can be formally justified in the $N\to\infty$ limit).
For example: $\langle \widehat B_{ij}\widehat B_{jk}\dots\widehat B_{li}\rangle \to\mathcal B_{ij} \mathcal B_{jk}\dots \mathcal B_{li}$.
There is an infinite set of non-independent fluxes.\footnote{For example, we can deduce $\rm{Arg}(\mathcal A_{ij}\mathcal B_{jk}\mathcal A^*_{ki}\mathcal A_{im}\mathcal B_{mn}\mathcal A^*_{ni})$
from ${\rm Arg}(\mathcal A_{ij}\mathcal B_{jk}\mathcal A^*_{ki})$ and $\rm{Arg}(\mathcal A_{im}\mathcal B_{mn}\mathcal A^*_{ni})$.
We can thus limit ourselves to fluxes/loops such that each site is encountered exactly twice.
Still, this is not enough to have independent fluxes: when two loops share a common part carrying the same operators on the common bonds, their fluxes combine into the flux of the loop encircling both.
For example, we can deduce $\rm{Arg}(\mathcal A_{ij}\mathcal A^*_{jk}\mathcal A_{kl}\mathcal A^*_{li})$ from $\rm{Arg}(\mathcal A_{ij}\mathcal A^*_{jk}\mathcal B^*_{ki})$
and $\rm{Arg}(\mathcal A_{kl}\mathcal A^*_{li}\mathcal B^*_{ik})$ (using $\mathcal B_{ik}=\mathcal B^*_{ki}$).}
A method to determine the number of independent fluxes for a given set of non zero $\mathcal A_{ij}$ and $\mathcal B_{ij}$ is given in App.~\ref{app:ind_fluxes}.
To characterize a given Ansatz, we can limit ourselves to the minimal set of independent parameters that define its equivalence class unequivocally: essentially the nonzero bond field moduli and a minimal set of fluxes.
The first insight on the physical meaning of the fluxes is given in the classical limit (Sec.~\ref{sec:flux_class}), where they are simple geometric quantities related to the orientation of the spins.
Then, we come back to the quantum case and express the fluxes, which are physical quantities, with the exclusive use of spin operators (Sec.~\ref{sec:flux_quantum}).
\subsection{Definition and physical meaning in the classical limit}
\label{sec:flux_class}
We first concentrate on the mean-field flux formed by products of $\mathcal B_{ij}$ parameters.
In the classical limit, the flux of the $\mathcal B_{ij}$ around a loop $ijk\dots l$, $\rm{Arg}( \mathcal B_{ij}\mathcal B_{jk}\dots\mathcal B_{li})$,
is related to the solid angle associated with the contour described by the spins on the Bloch sphere.
We give here a simplified formulation of the calculation given in Ref.~\onlinecite{Auerbach}.
Let us suppose that the direction of the magnetization (with a modulus fixed to 1) evolves slowly along the loop and use the gauge of Eq.~\ref{eq:ab_class}, but in spherical coordinates:
\begin{equation}
\left(
\begin{array}{c}
\langle \widehat b_{i\uparrow}\rangle\\
\langle \widehat b_{i\downarrow}\rangle
\end{array}\right)=\sqrt{S}
\left(\begin{array}{c}
\cos \frac{\theta_i}{2} \\
\sin \frac{\theta_i}{2}e^{i\phi_i}
\end{array}\right).
\label{eq:ab_class2}
\end{equation}
Then:
\begin{equation}
\rm{Arg}(\mathcal B^*_{ij})\simeq (1-\cos\theta_i)\frac{\phi_j-\phi_i}{2}.
\label{eq:elem_solid_angle}
\end{equation}
This last quantity (to first order in the variation of the spin direction) is half of the solid angle of the spherical triangle defined by the $z$ axis and the spins at sites $i$ and $j$.
By summing such quantities around a closed contour, we obtain half of the solid angle spanned by the spins along the loop.
This illustrates the gauge dependence of a single $\mathcal B^*_{ij}$: a gauge transformation changes the direction of the $z$ axis and thus $\rm{Arg}(\mathcal B^*_{ij})$, but the total
solid angle of the closed loop is independent of the choice of $z$.
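This relation between the $\mathcal B$ flux and the solid angle can be illustrated numerically with unit spinors (the gauge of Eq.~\ref{eq:ab_class2} with $S=1$) and the Van Oosterom--Strackee formula for the solid angle of a spherical triangle. A sketch, with helper names of our own:

```python
import numpy as np

def spinor(n):
    """Unit spinor z with (1/2) z^dag sigma z = n/2, for a unit vector n."""
    theta, phi = np.arccos(np.clip(n[2], -1, 1)), np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

def B_flux(ns):
    """Arg(B_12 B_23 ... B_n1) with B_ij ~ z_i^dag z_j, for a loop of unit vectors ns."""
    zs = [spinor(n) for n in ns]
    prod = 1.0 + 0j
    for za, zb in zip(zs, zs[1:] + zs[:1]):
        prod *= np.vdot(za, zb)        # vdot conjugates its first argument
    return np.angle(prod)

def solid_angle(a, b, c):
    """Signed solid angle of the spherical triangle (a, b, c) (Van Oosterom-Strackee)."""
    num = np.dot(a, np.cross(b, c))
    den = 1 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2 * np.arctan2(num, den)

# Octant (z, x, y): solid angle 4*pi/8 = pi/2, so the B flux is pi/4
ez, ex, ey = np.eye(3)[2], np.eye(3)[0], np.eye(3)[1]
flux = B_flux([ez, ex, ey])

# A generic small spherical triangle: the flux is again half the solid angle
a = np.array([0., 0., 1.])
b = np.array([1., 0., 1.]) / np.sqrt(2)
c = np.array([0., 1., 1.]) / np.sqrt(2)
agree = np.isclose(B_flux([a, b, c]), solid_angle(a, b, c) / 2)
```

The overall sign of the flux depends on the orientation of the loop, matching the signed solid angle.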
In a similar way, the flux $\rm{Arg}(\mathcal A_{ij}(-\mathcal A^*_{jk})\dots\mathcal A_{lm}(-\mathcal A^*_{mi}))$ is associated with half of the solid angle defined by the spins along the loop,
but after flipping one spin every two sites
(the spin at $j$ for $\mathcal A_{ij}$, the one at $i$ for $-\mathcal A_{ij}^*$).
The $-1$'s in the above expression matter, as they can lead to a final difference of $\pi$.
For more complicated fluxes mixing $\mathcal A_{ij}$ and $\mathcal B_{ij}$ parameters,
we flip one spin every two sites on $\mathcal A_{ij}$ and $\mathcal A_{ij}^*$ bonds (as previously),
we flip all of them for $\mathcal B_{ij}$, and none for $\mathcal B_{ij}^*$. The flux is then half the solid angle associated to these modified spin directions.
We can now reformulate the previously discussed relation between chirality and fluxes.
If a classical state is chiral, it has non-trivial fluxes on contours where the spins are non-coplanar.
If the corresponding MF parameters are nonzero, we have then found a loop with a non-trivial flux: whatever the gauge choice, at least one MF parameter has to be complex.
Conversely, if a state is coplanar, then all fluxes are trivial and, in a gauge where the spin plane is $xz$, all MF parameters are real.
In the tetrahedral state described in Fig.~\ref{fig:order_tri_tetra}, the flux of the $\mathcal A_{ij}$ around an elementary rhombus is $\pm\pi/3$ and
the flux of the $\mathcal B_{ij}$ around an elementary triangle is $\pm\pi/2$ (depending on the choice $k=\pm1$, see Sec.~\ref{sec:tetrahedron}).
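The value $\pm\pi/2$ of the $\mathcal B$ flux can be recovered from the spinors of Eq.~\ref{eq:ab_class}: three of the four tetrahedral directions span a spherical triangle of solid angle $4\pi/4=\pi$, and the flux is half of it. A short numerical sketch (helper names are ours):

```python
import numpy as np

def spinor(n):
    """Unit spinor associated with the unit vector n (gauge of Eq. ab_class)."""
    theta, phi = np.arccos(np.clip(n[2], -1, 1)), np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

# Three of the four tetrahedral directions sit on any elementary triangle
corners = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1]]) / np.sqrt(3)
z1, z2, z3 = (spinor(n) for n in corners)

# B_ij ~ z_i^dag z_j: the flux around the triangle is half the enclosed solid angle,
# i.e. half of 4*pi/4 = pi for a face of the regular tetrahedron
B_flux = np.angle(np.vdot(z1, z2) * np.vdot(z2, z3) * np.vdot(z3, z1))
```

Choosing the opposite chirality (reversing the loop orientation) flips the sign of the flux.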
\subsection{Fluxes in quantum models}
\label{sec:flux_quantum}
In the quantum realm, the fluxes can no longer be expressed in terms of solid angles.
But as already noted, Wilson loop operators are gauge-invariant quantities; as such, they are physical observables and can be expressed in terms of the spin operators.
\subsubsection{Spin-1/2 formulas}
To simplify, we start by imposing that the constraint is strictly satisfied for $S=\frac{1}{2}$, so that there is exactly one boson per site.
We have noted that in the classical limit, the scalar chiralities are associated with the fluxes.
In the quantum case, we can express the flux operators in terms of permutation operators, generalizing some results of Ref.~\onlinecite{Wen_Wilczek}.
The operator that transports the spins at sites $1,2,3$ to sites $2,3,1$ is the permutation operator, noted $\widehat P_{(123)}$.
We recall that the permutation operator of spins between two sites can be written as:
\begin{equation}
\widehat P_{(ij)}=\frac12+2\widehat{\mathbf S}_i\cdot\widehat{\mathbf S}_j
\label{eq:perm}
\end{equation}
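Eq.~(\ref{eq:perm}) is easy to verify numerically. The sketch below is not part of the original derivation; the conventions (spin operators $S=\sigma/2$, basis index $2a+b$ for $|ab\rangle$, site 1 as the left Kronecker factor) are ours:

```python
import numpy as np

# Spin-1/2 operators S = sigma/2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# S_1 . S_2 on the two-site space; site 1 is the left Kronecker factor
S1dotS2 = sum(np.kron(s, I2) @ np.kron(I2, s) for s in (sx, sy, sz))

# Right-hand side of Eq. (perm)
P12 = 0.5 * np.eye(4) + 2 * S1dotS2

# The swap operator written directly: |ab> -> |ba>, basis index 2a+b
swap = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        swap[2 * b + a, 2 * a + b] = 1

assert np.allclose(P12, swap)
```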
This straightforwardly implies that the flux of the $\widehat B_{ij}$ operators is
\begin{equation}
:\widehat B^\dag_{12}\widehat B^\dag_{23}\dots\widehat B^\dag_{n1}: = \frac{1}{2^{n}}\widehat P_{(12\dots n)}
\label{eq:perm_B}
\end{equation}
The formula for the flux of the $\widehat A_{ij}$ operators is more involved. It reads
\begin{equation}
\begin{array}{rl}
:\widehat A_{12}^\dag\widehat A_{23} \widehat A_{34}^\dag&\dots\widehat A_{2n\,1}:\,= \frac{1}{2^{2n}}
\widehat P_{(12\dots 2n)}(1-\widehat P_{(23)})
\\
&(1-\widehat P_{(45)}) \dots
(1-\widehat P_{(2n\,1)}).
\end{array}
\label{eq:flux_SU2}
\end{equation}
To prove this last assertion, we first note that $\frac{1-\widehat P_{(ij)}}{2}$ is the projector on the singlet state of the two spins $i$ and $j$.
We then verify this equality in the basis of states $\bigotimes_{i=1}^n \psi_{2i,2i+1}$, where $\psi_{i,j}$ are eigenvectors of $\widehat P_{(ij)}$.
In the case where at least one bond is in a symmetric state (triplet), both sides of Eq.~\ref{eq:flux_SU2} are zero.
The final step is simply to check that the relation holds for the state which is a product of singlets.
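The first step of this proof, namely that $\frac{1-\widehat P_{(ij)}}{2}$ projects onto the two-spin singlet, can likewise be checked numerically (a minimal sketch with our basis conventions, index $2a+b$ for $|ab\rangle$):

```python
import numpy as np

# Two-site swap operator P_(ij): |ab> -> |ba>, basis index 2a+b
P = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        P[2 * b + a, 2 * a + b] = 1

Q = (np.eye(4) - P) / 2  # candidate projector onto the singlet

# Normalised singlet (|01> - |10>)/sqrt(2)
singlet = np.zeros(4)
singlet[1], singlet[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

assert np.allclose(Q @ Q, Q)                       # idempotent
assert np.allclose(Q, np.outer(singlet, singlet))  # rank-1, image = singlet
```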
\subsubsection{Fluxes in quantum spin S models}
\label{sec:flux_S}
For $S>1/2$, Eq.~\ref{eq:perm} is no longer valid, and Eqs.~\ref{eq:perm_B} and \ref{eq:flux_SU2} are no longer valid either.
But we can still replace the on-site number of bosons by $2S$ and obtain an expression depending only on the spin operators.
The expression of the product of four $\widehat A_{ij}$ operators is:
\begin{equation}
\label{eq:Aord}
\begin{array}{c}
8:\widehat A_{12}^\dag\widehat A_{23} \widehat A^\dag_{34}\widehat A_{41}:=
({\mathbf S}_1\cdot{\mathbf S}_2)({\mathbf S}_3\cdot{\mathbf S}_4)
+({\mathbf S}_2\cdot{\mathbf S}_3)({\mathbf S}_4\cdot{\mathbf S}_1)\\
-({\mathbf S}_1\cdot{\mathbf S}_3)({\mathbf S}_2\cdot{\mathbf S}_4)
+S^2({\mathbf S}_1\cdot{\mathbf S}_3+{\mathbf S}_2\cdot{\mathbf S}_4
-{\mathbf S}_1\cdot{\mathbf S}_2
\\
-{\mathbf S}_2\cdot{\mathbf S}_3
-{\mathbf S}_3\cdot{\mathbf S}_4
-{\mathbf S}_4\cdot{\mathbf S}_1)
+S^4
+iS(
{\mathbf S}_4\cdot({\mathbf S}_1 \times {\mathbf S}_2)
\\
-{\mathbf S}_1\cdot({\mathbf S}_2 \times {\mathbf S}_3)
+{\mathbf S}_2\cdot({\mathbf S}_3 \times {\mathbf S}_4)
-{\mathbf S}_3\cdot({\mathbf S}_4 \times {\mathbf S}_1)
)
\end{array}
\end{equation}
The expression of the product of three $\widehat B_{ij}$ operators is:
\begin{equation}
\label{eq:Bord}
\begin{array}{c}
4:\widehat B^\dag_{12}\widehat B^\dag_{23}\widehat B^\dag_{31}:=
S({\mathbf S}_1\cdot{\mathbf S}_2+{\mathbf S}_2\cdot{\mathbf S}_3+{\mathbf S}_3\cdot{\mathbf S}_1)
\\
+S^3
-i\,{\mathbf S}_1\cdot({\mathbf S}_2 \times {\mathbf S}_3)
\end{array}
\end{equation}
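For $S=\frac12$, Eq.~(\ref{eq:Bord}) must reduce to $\frac12\widehat P_{(123)}$ by Eq.~(\ref{eq:perm_B}). The sketch below (our own operator and basis conventions, basis index $4a+2b+c$ for $|abc\rangle$) verifies this consistency as a matrix identity on the three-site Hilbert space:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, site):
    """Embed a single-spin operator at `site` (0, 1 or 2) in the 3-site space."""
    ops = [np.eye(2, dtype=complex)] * 3
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

S = [[site_op(s, i) for s in (sx, sy, sz)] for i in range(3)]

def dot(i, j):
    return sum(S[i][a] @ S[j][a] for a in range(3))

# Scalar chirality S_1 . (S_2 x S_3)
chi = sum(eps * S[0][a] @ S[1][b] @ S[2][c]
          for a, b, c, eps in [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
                               (0, 2, 1, -1), (2, 1, 0, -1), (1, 0, 2, -1)])

# P_(123): transports the spins at sites 1,2,3 to sites 2,3,1,
# i.e. |s1 s2 s3> -> |s3 s1 s2>, basis index 4a+2b+c
P123 = np.zeros((8, 8))
for a in range(2):
    for b in range(2):
        for c in range(2):
            P123[4 * c + 2 * a + b, 4 * a + 2 * b + c] = 1

# S = 1/2 case of Eq. (Bord); by Eq. (perm_B) it must equal P_(123)/2
lhs = 0.5 * (dot(0, 1) + dot(1, 2) + dot(2, 0)) + np.eye(8) / 8 - 1j * chi
assert np.allclose(lhs, P123 / 2)
```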
\subsubsection{Fluxes in SBMFT}
\label{sec:flux_SBMFT}
In a state where the on-site number of bosons is not strictly conserved, the previous expressions become slightly more complicated.
The number operators can no longer be replaced by $2S$, and we have for example:
\begin{equation}
\label{eq:Bord2}
\begin{array}{c}
4:\widehat B^\dag_{12}\widehat B^\dag_{23}\widehat B^\dag_{31}:=
\frac12 \widehat n_3\,{\mathbf S}_1\cdot{\mathbf S}_2+\frac12 \widehat n_1\,{\mathbf S}_2\cdot{\mathbf S}_3+\frac12 \widehat n_2\,{\mathbf S}_3\cdot{\mathbf S}_1
\\
+\frac{\widehat n_1\widehat n_2\widehat n_3}8
-i\,{\mathbf S}_1\cdot({\mathbf S}_2 \times {\mathbf S}_3).
\end{array}
\end{equation}
\subsection{Finite-size calculations, lattice symmetries and non-local fluxes}
\label{sec:flux_periodicity}
For simple lattices such as the square or triangular lattice, the MF Hamiltonian $H_{\rm{MF}}$ of Eq.~\ref{eq:HMF} can be solved analytically, directly in the thermodynamic limit.
But in most cases, we have to solve the self-consistency conditions numerically on finite lattices.
To use the chiral PSG's on a finite periodic lattice, we have to be cautious about symmetries.
Indeed, all precautions have been taken so that the Ansatz (strictly or weakly) respects the lattice symmetries on an infinite lattice.
But we have to verify that the finite periodic lattice has the same symmetry group as the infinite one.
This verification is quite standard for local properties, but is more subtle for non-local ones and is most easily understood in terms of fluxes through large non-local loops.
PSG's impose that fluxes on local loops are preserved by lattice symmetries (or sent to their opposite in the case of a chiral state).
But additional care has to be taken concerning loops
which are topologically non-trivial (i.e., which cannot be shrunk to a point by a succession of local deformations).
These loops, which ``wind'' through the boundary conditions, do not exist on the infinite lattice.
For a symmetric Ansatz to remain symmetric on a finite periodic lattice, we have to verify that the fluxes associated with these topologically non-trivial loops also respect the lattice symmetries.
The way to treat the problem of the non-local loops is detailed in App.~\ref{app:fluxes}, together with several ways of understanding their meaning.
\section{Conclusion}
\label{sec:ccl}
In this paper we have extended the PSG construction to include time-reversal-symmetry-breaking states within the SBMFT.
These TRSB phases, which we generically call \textit{chiral}, can also break one or several discrete symmetries of the lattice
(in the triangular example, either $\sigma$ or $\mathcal R_6$). Using this constructive method we have built all the SS and WS Ans\"atze with two MF parameters on the triangular lattice.
All the regular $O(3)$ magnetically ordered phases can be obtained from these Ans\"atze by spinon condensation (the others have no regular classical limit).
The TRSB Ans\"atze have, when they condense, non-planar magnetic order and non-zero scalar chiralities.
The TRSB SL have short-range spin-spin correlations but non-trivial fluxes on various loops. The simplest of these fluxes are related to the imaginary part of the
permutation operator of three spins, which is directly related to their scalar chirality.
In some cases the time-reversal symmetry breaking fluxes might be more complex, as explained in Sec.~\ref{sec:fluxes} and illustrated in App.~\ref{app:kagome_ansatz} for the kagome lattice.
These various fluxes were initially defined within the MF Schwinger boson approach, but Sec.~\ref{sec:fluxes} has shown how these gauge-invariant
quantities can be expressed in terms of spin operators, independently of any MF approximation.
It should be noticed that in a TRSB SL, fluxes other than those deduced from the Ansatz may be non-zero and easier to compute. This is the case, for example, in the \textit{cuboc1} SL recently proposed for the nearest-neighbor Heisenberg model on the kagome lattice.\cite{cuboc1}
The flux of the $\widehat A$ bond operators around the hexagons can be expressed in terms of spin permutation operators, but the expression is relatively involved
(Eq.~\ref{eq:flux_SU2}) and has not yet been computed numerically.
In fact, in that phase (at least at the MF level), there are simpler fluxes which are non-zero, such as the triple product of second-neighbor spins around hexagons, or the triple product of three consecutive spins on a hexagon.
In Sec.~\ref{sec:tetrahedron}, another TRSB Ansatz was discussed in relation to the ring exchange model on the triangular lattice.
In spite of short-range spin-spin correlations, the TRSB SL have a local order parameter associated with the fluxes.
Since the symmetries broken at finite temperature are discrete, there are no Goldstone modes and these chiral phases should survive thermal fluctuations in 2D.
The phase transition associated to the restoration of the chiral symmetry has been studied in some classical spin models.\cite{KagomeDomenge,MessioDomenge,Triedres}
In spite of the Ising-like character of the order parameter, the phase transition was shown to be weakly first order due to the interplay of vortices in the magnetic texture
with domain walls of the chirality. It has been shown within the SBMFT framework in the \textit{cuboc1}
phase that thermal fluctuations tend to expel the chiral fluxes\cite{cuboc1} (favouring coplanar correlations),
but a more complete study (beyond MF) of the finite temperature properties of a TRSB SL would be required to understand the
specific properties of the chiral transition in these systems.
Finally, it would be useful to clarify the ``topological'' differences (entanglement, degeneracy, edge modes, ...) between the present chiral SL described in the SBMFT framework and
the chiral SL wave-functions related to fractional quantum Hall states (such as the Kalmeyer-Laughlin state\cite{Kalmeyer_89} or that of Yang, Warman and Girvin\cite{Yang1993}),
as well as the difference with conventional ($\mathcal T$-symmetric) $\mathbb{Z}_2$ liquids.
It would also be very interesting to analyze qualitatively the effects of (gauge) fluctuations in the present chiral SL.
\section*{Introduction}
Every city has evolved under a unique set of geographical, political and cultural conditions \cite{Hall_Cities1998} but despite heterogeneity in their historical trajectories, there appear to be certain characteristics common to all cities regardless of their location.
Such characteristics include fractal properties \cite{Batty_FractalCities,Batty_scaling2008}, Zipf distributions of city sizes \cite{Zipf1949} and population growth laws \cite{Gabaix_1999,Gabaix_Proc1999,Eeckhout2004,Rozenfeld_Batty_Makse08,Sembolini08}.
In the search for appropriate paradigms, a single characteristic of urban systems, city population size, reveals scaling laws quantifying urban attributes ranging from innovation, income and employment rates to household electrical consumption and road surface area, amongst many others \cite{Bettencourt_PNAS07,Bettencourt_West2010}. These are encompassed in the following relationship
\begin{equation}
Y(t)=Y_0N(t)^\beta
\label{scalingEq}
\end{equation}
where $Y(t)$ and $N(t)$ represent the urban indicator and the population size of a city at time $t$ respectively, and $Y_0$ is a time-dependent normalisation constant.
Evidence of such laws has been observed in the US, Germany and China \cite{Bettencourt_etal08,Bettencourt_etal10} among other countries.
The exponent $\beta$ has been found to lie within three universal categories \cite{Bettencourt_PNAS07}: (i) $\beta<1$ (sublinear) economies of scale associated with infrastructure and services, e.g. road surface area; (ii) $\beta\approx1$ (linear) associated with individual human needs, e.g. housing and household electrical consumption; and (iii) $\beta>1$ (superlinear) associated with outcomes from social interactions, e.g. number of patents and income. A summary of the exponents found in \cite{Bettencourt_PNAS07, Bettencourt_etal08} is given in Fig.~\ref{betas_Bettencourt_LUZ}A.
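In practice, $\beta$ in eq.~(\ref{scalingEq}) is estimated by an ordinary least-squares fit of $\log Y$ against $\log N$. A minimal sketch on synthetic data (the values of $\beta$, $Y_0$ and the noise level are illustrative, not taken from any dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic system of cities: Y = Y0 * N^beta with multiplicative noise
# (all values invented; beta = 1.15 mimics a superlinear indicator)
beta_true, Y0 = 1.15, 2.0
N = rng.integers(10_000, 5_000_000, size=200).astype(float)
Y = Y0 * N ** beta_true * np.exp(rng.normal(0.0, 0.05, size=200))

# OLS fit of log Y = beta log N + log Y0
beta_hat, logY0_hat = np.polyfit(np.log(N), np.log(Y), 1)
```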
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\linewidth]{betas_Bettencourt_LUZ.pdf}}
\caption{Exponents with $95\%$ CI for different urban indicators, colour-coded according to the category given in \cite{Bettencourt_PNAS07}. A) Results for the US, Germany and China, taken from table in Bettencourt et al 2007, B) Results for E\&W, Table~S2 contains details of variables.}\label{betas_Bettencourt_LUZ}
\end{figure}
Scaling laws of this sort have proven to be very successful in biology \cite{West_Science97,Schmidt-Nielsen1975}, where size alone provides sufficient information to predict many properties of an animal, such as its life span, metabolism and heart rate.
Consequently, identifying equivalent universal scaling laws for cities could be extremely important for furthering our understanding of urban dynamics, and may help to manage many contemporary global challenges that concern cities such as the effect of transport and industrial emissions on climate change, the use of natural resources and the growth of urban poverty \cite{Davis_Slums2006}.
Nevertheless, many challenges need to be overcome before universal laws can be established \cite{Noulas_Lambiotte_etal2012}. For example, questions remain as to how to generate reliable results both within and across national datasets.
The problem of ambiguity in the definition and measurement of urban indicators gives rise to incongruent comparisons between countries.
In addition, it is not always possible to specify the generative mechanism of an urban indicator, in order to unequivocally assign it as the outcome of either social interactions, human needs or services.
Furthermore, the quality of the data might not permit a full characterisation of the exponent in either one of the three regimes: sublinear, linear and superlinear.
These challenges bring forward a more fundamental aspect in the development of a theory of cities.
This refers to the robustness of the model, assessed through the sensitivity of the exponent to different city definitions and sample distributions.
In complex systems, it is well-known that an observed power-law is usually only valid for the tail of the distribution containing the largest set of events \cite{Newman2005,Stumpf_Porter2012,Perline2005,Pinto_etal2012}.
This is the case for the Zipf distribution of city sizes, as has been pointed out in \cite{Eeckhout2004,Cristelli_Batty_Pietronero2012,Jiang_Jia2011,Batty_Complexity2010}.
Hence it is crucial to identify how city demarcations, and minimum population size cutoffs, give rise to large fluctuations in the value of the exponent.
In this work, we analyse the scaling behaviour of a range of indicators using census data on cities in England and Wales. After initially adopting a predefined set of standard city delineations, we find that the observed scaling relationships depart from the expected behaviour.
We hypothesise that the unanticipated results may be due to the given definition of city boundaries. In response, we explore new methodologies to generate more realistic city boundaries from more disaggregate data. Our method identifies city extent by clustering very small scale geographic units according to population density and journey-to-work commuting trips, removing the need for an a priori assumption about boundary demarcation. This enables us to define cities based on both their morphological and functional extent.
A series of urban areas is generated by clustering adjacent neighbourhoods that lie within a defined density threshold. Then the effective commuting hinterland of each core is identified by calculating the proportion of journey to work trips that are destined for each core. We can then probe the whole parameter space of density and commuting thresholds to obtain more than $20\times10^3$ realisations of cities.
When the scaling laws are tested for all these system descriptions, we find that the exponent is consistent for variables in the linear regime, but that variables in the non-linear regimes are highly sensitive to boundary definition and sample distribution.
These results emphasise the need for a consistent methodology to define urban systems, in order to derive congruent comparisons between cities.
They also reveal the importance of understanding the origin of deviations from a model, in order to construct a theory of cities.
\section*{Results}
\subsection*{Scaling laws in England and Wales (E\&W)}
The complex, multi-layered nature of cities means that more than one reasonable definition of their extent can be produced depending on whether the political, economic or geographic reach of the system is being considered \cite{Batty_edit_Large2011,Batty_comment2011}.
Although this plurality is a well known problem in the study of urban systems, there is no consensus as to how a city should be defined.
For the purposes of an initial exploration of scaling behaviour in England and Wales (E\&W), we ignore the challenge of boundary delineation by adopting the definition employed in \cite{Bettencourt_PNAS07}, where cities are considered as integrated economic and social units.
In the case of the US this definition corresponds to Metropolitan Statistical Areas (MSAs)\cite{MSA_ref}, and an analogous set of areas for the European Union are Larger Urban Zones (LUZs)\cite{LUZ_ref}.
We select a range of observed socio-demographic indicators produced by the UK Office for National Statistics (ONS) from the 2001 census for E\&W, and aggregate high resolution census units up to the LUZ classification. We exclude Scotland from the analysis since the National Records of Scotland applies a different methodology for data collation than the ONS in England and Wales (details of all data sources are provided in the SI Text in the `Individual Data Tables' section). Analysis of scaling laws is then undertaken by classifying each indicator in terms of the above mentioned three regimes: sublinear, linear and superlinear.
When attempting to classify the resulting scaling behaviour from this initial exploration, many of the indicators could not be placed within a unique domain. See Table~S1 and discussion in SI Text.
Nevertheless, some urban indicators in the list can be clearly categorised in one of the three regimes. A summary of these results is given in Fig.~\ref{betas_Bettencourt_LUZ}B, where $\beta$ is the exponent in eq.~(\ref{scalingEq}). These exponents pertain to cities defined in terms of LUZs for E\&W.
Details of the variables and the values for $\beta$, $R^2$, and the confidence intervals can be found in Table~S2.
Although we observe many variables that do behave as expected in the sublinear and linear regime, given the confidence intervals associated with each exponent, further verification is clearly required for some of them.
On the other hand, the values for the scaling exponent in the superlinear regime do not corroborate the expected ones.
This is particularly surprising for three variables that are clearly outcomes of social and economic interactions: number of patents, household income and crime incidents.
The latter can be regarded as one kind of social activity, that therefore is expected to increase superlinearly with city size as discussed in \cite{Bettencourt_etal10, GomezLievano_etal2012}.
Let us now investigate whether all these discrepancies are due to an inappropriate definition of cities.
Fig.~S2 gives a representation of cities in E\&W in terms of LUZs and their size distribution.
The map shows a high degree of arbitrariness in the selection of cities and in the delimitation of their boundaries. The size distribution roughly follows a Zipf law.
However, according to this plot, the sizes of the next biggest cities after London appear to be underestimated.
The misrepresentation of these cities, out of a total of only 21, together with the absence of important cities such as Oxford and Reading, calls into question the soundness of the LUZ representation.
In the next section we look into new ways of elucidating cities in E\&W in an attempt to answer the question about the influence of boundary definition on observed scaling behaviour.
Our first aim is to look for contours of cities that are consistent with the built environment and their economic flows. We achieve this by using population density as a fundamental urban property to construct the initial settlements. This is followed by an expansion of the boundaries driven by commuting to work flows.
Our second aim is to explore scaling laws in a comprehensive set of systems of cities, so that we can analyse the behaviour of the scaling exponent under these different definitions.
This will give us insight into the stability or sensitivity of such laws to city boundaries.
\subsection*{Redefining city boundaries through density}
In order to redefine cities, we construct a clustering algorithm parametrised by population density. A similar algorithm can be found in \cite{Rozenfeld_Gabaix_Makse2011}. The unit of agglomeration is a \emph{ward}, which is the smallest geographical unit in the census data across many variables (see SI Text, `Unit of Geography' section for details).
Let $\rho$ be our density parameter. We cluster all adjacent wards with density $\rho_w$ such that $\rho_w \geq \rho$. If a ward $i$ has a density $\rho_{w_i} < \rho$, but is surrounded by wards such that $\rho_w \geq \rho$, then the ward is also included in the cluster. This is done in order to avoid cities with holes.
For example, if a ward contains a big park, such as in Richmond in Greater London, its density will be much lower than its adjacent wards. If left out of the cluster, the city will not only have a hole, but will be missing an important functional area. Cities are hence considered as continuous entities in this first approach.
In detail, the parameter $\rho$ is varied within the interval $[1;40]$ persons/hectare. The result is 40 different realisations of systems of cities for E\&W, varying from very large clusters containing various settlements, to clusters containing only the core of cities for the highest density values. Maps for some of these cases are featured in the SI Text (see Fig.~S3).
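A minimal sketch of the clustering step follows. This is our own simplified reading of the algorithm: connected components of the above-threshold ward adjacency graph, followed by absorption of below-threshold wards entirely surrounded by a single cluster. Ward names and densities are invented:

```python
def density_clusters(density, adjacency, rho):
    """Cluster adjacent wards with density >= rho (persons/ha), then fill
    holes: a below-threshold ward whose neighbours all belong to the same
    cluster is absorbed, so that cities remain continuous entities.
    `density`: {ward: persons/ha}; `adjacency`: {ward: set of neighbours}."""
    label, clusters = {}, []
    # connected components of the above-threshold adjacency subgraph
    for seed in density:
        if density[seed] < rho or seed in label:
            continue
        stack, comp = [seed], set()
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            label[w] = len(clusters)
            stack.extend(n for n in adjacency.get(w, ())
                         if density[n] >= rho and n not in comp)
        clusters.append(comp)
    # hole filling (e.g. a large park inside a city)
    for w in density:
        if density[w] >= rho:
            continue
        neigh = {label.get(n) for n in adjacency.get(w, ())}
        if len(neigh) == 1 and None not in neigh:
            clusters[neigh.pop()].add(w)
    return clusters

# Invented toy data: a triangle of dense wards around a low-density "park" c;
# the isolated low-density ward e stays out
dens = {'a': 20, 'b': 18, 'd': 25, 'c': 3, 'e': 2}
adj = {'a': {'b', 'd', 'c'}, 'b': {'a', 'd', 'c'},
       'd': {'a', 'b', 'c'}, 'c': {'a', 'b', 'd'}, 'e': set()}
cities = density_clusters(dens, adj, rho=14)  # one cluster {a, b, c, d}
```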
When we follow the growth of cluster sizes resulting from the change in density from high to low values, we observe a sharp transition of the rank $3$ cluster between $\rho=13$ and $\rho=12$, see Fig.~\ref{cluster14_all}A.
\begin{figure}
\centering
\includegraphics{clust14_Corine_Zipf.pdf}
\caption{Clusters of cities for a cutoff of 14 prs/ha. A) Transition in cluster size; B) Zipf distribution of city size; C) Corine satellite map of E\&W: red corresponds to the built area and the black contours are the clusters defined for $\rho=14$ prs/ha}\label{cluster14_all}
\end{figure}
This transition corresponds to the joining of Liverpool and Manchester. The biggest cluster encompasses London, and this grows steadily including small settlements as the density lowers, but does not merge with another big city within the interval considered, and this is why no transition is observed.
If we select a density threshold before the merging of Liverpool and Manchester takes place, but near the transition since these two cities are very close spatially, we reproduce a system of cities that is very similar to the one prescribed by the built environment. This corresponds to $\rho=14$, and we see from Fig.~\ref{cluster14_all}C that it recreates almost exactly the urbanised areas defined using the CORINE landcover dataset \cite{Corine_ref}. In addition cities follow a Zipf distribution. Fig.~\ref{cluster14_all}B shows the cumulative density function with an exponent of $2.07$ and a very high $p$-value of $0.8$, using the method for fitting a power-law distribution proposed in \cite{Clauset_etalPL09}.
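For reference, the core of the fitting method of Clauset et al.\ is the continuous maximum-likelihood estimator $\hat\alpha = 1 + n\left[\sum_i \ln(x_i/x_{\min})\right]^{-1}$ for the density exponent; the full procedure, which also selects $x_{\min}$ by minimising the Kolmogorov-Smirnov distance and bootstraps the $p$-value, is omitted in this sketch:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous ML estimate of alpha for a density P(x) ~ x^(-alpha),
    x >= xmin. The full Clauset et al. procedure additionally selects xmin
    by minimising the KS distance and bootstraps a p-value (omitted here)."""
    x = np.asarray([xi for xi in x if xi >= xmin], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Sanity check on synthetic Pareto data with a known exponent
rng = np.random.default_rng(1)
alpha_true, xmin = 2.07, 1e4
u = rng.random(50_000)
x = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling
alpha_hat = powerlaw_mle(x, xmin)
```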
\subsection*{Extending boundaries to include commuters}
At this point, we expand our definition of the city from a purely morphological description to include some sense of its functional extent through an incorporation of commuting flow data into the clustering algorithm. Data on the total journey to work flows of commuters from every ward in E\&W is provided by the ONS and is used to define the commuting hinterland for the original 40 cluster systems. The procedure operates as follows. For each realisation for $\rho \in [1;40]$, we select only clusters whose population size $N$ is such that $N\geq N_0$, where $N_0 \in \{0,10,50,100,150\}\times10^3$ individuals.
Each ward is then added to the cluster to which the largest percentage $\tau$ of its residents commute, provided $\tau > \tau_0$, where $\tau_0\in[0;100]$.
No continuity condition is imposed on the new clusters.
The extreme value of $\tau_0=100$ reproduces the original system.
This procedure leads to a comprehensive list of $20.2\times10^3$ realisations of systems of cities. See the SI Text for a visual representation of some of these composites (Fig.~S4).
Exploring the full parameter space is useful to assess the behaviour of the scaling exponent, bearing in mind that for the extreme values of $\rho$ and low values of $\tau$, the sets of aggregates move further and further away from realistic descriptions of cities.
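The commuting extension can be sketched as follows. This is our reading of the procedure (the threshold $\tau_0$ is applied to the share of trips into the winning cluster, and the population cutoff is assumed to have been applied beforehand); ward names and flows are invented:

```python
def extend_with_commuters(clusters, flows, tau0):
    """Add each unclustered ward to the cluster receiving the largest share
    of its journey-to-work trips, if that share exceeds tau0 (in percent).
    `clusters`: list of sets of wards (population cutoff N >= N0 assumed
    already applied); `flows`: {origin ward: {destination ward: trips}}."""
    member = {w: k for k, c in enumerate(clusters) for w in c}
    extended = [set(c) for c in clusters]
    for ward, dests in flows.items():
        if ward in member:
            continue
        total = sum(dests.values())
        if total == 0:
            continue
        into = {}  # trips from `ward` into each cluster
        for d, trips in dests.items():
            if d in member:
                into[member[d]] = into.get(member[d], 0) + trips
        if not into:
            continue
        best = max(into, key=into.get)
        if 100 * into[best] / total > tau0:
            extended[best].add(ward)
    return extended

# Invented toy data: ward w sends 70% of its trips into the first core
cores = [{'a', 'b'}, {'x', 'y'}]
flows = {'w': {'a': 50, 'b': 20, 'x': 10, 'other': 20}}
out = extend_with_commuters(cores, flows, tau0=50)  # w joins the first core
```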
\subsection*{Sensitivity of the power law exponent}
We make use of heatmaps to represent the values of $\beta$ in eq.~(\ref{scalingEq}), for each of the five initial population cutoffs pre-commuting clustering: $N\geq N_0$, in the whole parameter space for $\tau_0\in[0;100]$ and $\rho \in [1;40]$.
Linear relationships between variables and population size are generally not affected by the different definitions of cities presented above. For this reason, heatmaps for these variables tend to be homogeneous over the whole parameter space.
On the other hand, non-linear dependencies arising from aggregation effects will exhibit variations in the scaling exponent if the initial conditions, given by the city limits and the population aggregated through commuting patterns, are changed.
A first inspection indicates high variability between heatmaps for the same urban indicator but different population size cutoffs. The effect of imposing a minimum size on a settlement in order to consider it a city is a reduction in the number of cities included in the system used to test for scaling laws, see Fig.~S4.
For the extreme scenarios, i.e. very high density and a minimum population size of $150\times10^3$, the number of cities included in the distribution can vary greatly: from $429$ with no cutoff down to only $5$ cities if a large population size cutoff is imposed.
Variations between the different heatmaps at the extreme values of the density parameter are therefore mainly due to the small sample size, and no statistically sound conclusion can be drawn from these cases. This is particularly the case for income, as shown in Fig.~\ref{Income0_150}.
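Schematically, each heatmap is produced by repeating the log-log fit over the grid of realisations; the sketch below is a schematic stand-in (container names and toy values are ours) for the $\sim20\times10^3$ realisations of the text:

```python
import numpy as np

def beta_heatmap(realisations, indicator, population):
    """beta(rho, tau0) over a grid of city-system realisations.
    `realisations[(rho, tau0)]` is the list of city ids of one realisation;
    `indicator` and `population` map city id -> value."""
    rhos = sorted({r for r, _ in realisations})
    taus = sorted({t for _, t in realisations})
    H = np.full((len(rhos), len(taus)), np.nan)
    for (rho, tau), cities in realisations.items():
        if len(cities) < 3:  # too few cities for a meaningful fit
            continue
        logN = np.log([population[c] for c in cities])
        logY = np.log([indicator[c] for c in cities])
        H[rhos.index(rho), taus.index(tau)] = np.polyfit(logN, logY, 1)[0]
    return H

# Invented toy grid: a perfectly superlinear indicator, Y = N^1.2
pop = {c: 10_000 * (c + 1) for c in range(12)}
ind = {c: pop[c] ** 1.2 for c in range(12)}
grid = {(14, 0): list(range(12)), (14, 50): list(range(6))}
H = beta_heatmap(grid, ind, pop)  # both entries close to 1.2
```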
\begin{figure}
\centering
\includegraphics{Income_Combined.pdf}
\caption{Heatmap for income for different minimum population size thresholds: no cutoff, $10^4$, $10^5$ and $150\times10^3$.}\label{Income0_150}
\end{figure}
For the extreme scenarios, i.e. where there is only a very small number of cities included in the distribution, the weight of London becomes important, biasing the exponent to superlinearity.
London is a positive outlier, and relevant economic agglomeration effects are clearly present, but do not seem to exist for the next biggest cities. We also observe that in this case no effects are recorded by including commuters into the analysis.
For properties that belong to the linear regime, and where London is not an outlier, such as number of households and of people employed, the exponent remains consistent across the whole parameter space, see Fig.~\ref{dwellings_employed}.
\begin{figure}
\centering
\includegraphics{Household_Employed_Combined.pdf}
\caption{Heatmap for observables that belong to the linear regime. The exponent remains consistent in the whole parameter space}\label{dwellings_employed}
\end{figure}
Conversely, the exponent for the observables in the sublinear regime can be greatly affected by variations in the threshold for the percentage of commuters. See Fig.~\ref{agric0_150} for employment in agriculture.
\begin{figure}
\centering
\includegraphics{AgHF_Combined.pdf}
\caption{Heatmap for employment in agriculture, hunting and fishing for different minimum population size thresholds: no cutoff, $10^4$, $10^5$ and $150\times10^3$.}\label{agric0_150}
\end{figure}
In this case, extending the area of cities to include commuters, makes the exponent shift from the sublinear to the superlinear regime. This effect is pronounced if in addition settlements are removed through the constraint of minimum population size of $10^5$.
Furthermore, for the range where clusters are just the central cores of cities, given by $\rho>30$, the exponent also increases if population size constraints are applied. For variables such as distance to work, see Fig.~S6, the exponent becomes superlinear, although as stated earlier, for these extreme cases no statistical validity can be obtained.
There are also many variables that should present economies of scale, such as infrastructure variables, e.g. area of roads; and some employment categories such as elementary occupations and manufacturing, that nevertheless show a linear exponent if no cutoff on population size is imposed, see Fig.~S7. If on the other hand only the tail of the distribution containing the large cities is taken into account, the exponent tends to sublinearity, although very weakly, and only for some of these variables.
The expected agglomeration effects for variables that are the outcome of social and economic interactions are in general not observed. Most of the employment categories corresponding to this regime have linear exponents, see Fig.~S8 with the exception of Financial Intermediation. Nevertheless the value of the exponent is nowhere close to the expected $1.15$ from supercreative employment or $1.30$ from R\&D employment. In addition, once the cities are extended as integrated economic entities by including commuters, the effect on the exponent is to lower its value towards linearity, instead of increasing it.
The heatmaps show how these non-linear effects are highly sensitive to city definition and sample distribution, see Fig.~\ref{Finance0_150}.
\begin{figure}
\centering
\includegraphics{FinanceInt_Combined.pdf}
\caption{Heatmap for employment in financial intermediation for different minimum population size thresholds: no cutoff, $10^4$, $10^5$ and $150\times10^3$.}\label{Finance0_150}
\end{figure}
Different cutoffs on population size give very different results, and once again the effect of London as a positive outlier becomes important for small sample sizes. Other variables displaying superlinearity are the area of non-domestic buildings and patents. For the former, the superlinearity, which is considerable if no constraint on population size is imposed, is completely washed out if a minimum population size of $50\times10^3$ individuals is imposed, see Fig.~\ref{NoDBldg0_150}.
\begin{figure}[h]
\centering
\includegraphics{NonDBuild_Combined.pdf}
\caption{Heatmap for area of non-domestic buildings for different minimum population size thresholds: no cutoff, $10^4$, $10^5$ and $150\times10^3$.}\label{NoDBldg0_150}
\end{figure}
Patents, on the other hand, present the highest volatility, and each heatmap for a different population size cutoff gives a very different result, see Fig.~\ref{Patents10_150}.
\begin{figure}[h]
\centering
\includegraphics{Patents_Combined.pdf}
\caption{Heatmap for total number of patents from 2000-2011 for different minimum population size thresholds: no cutoff, $50\times10^3$, $10^5$ and $150\times10^3$.}\label{Patents10_150}
\end{figure}
For this variable we need to impose a minimum population size threshold of $10^4$ in order to obtain a dataset that does not contain many zeroes from the small settlements.
\\
In conclusion, our results show that cities in E\&W do not present economic agglomeration effects, with the exception of London, which is a positive outlier.
Furthermore, if non-linear effects are present, the scaling exponent is highly sensitive to city delimitation.
Therefore the value of the exponent for a single definition cannot be taken as a proxy to draw comparisons between cities if no consistent way to construct cities has been devised.
On the other hand, we were also able to confirm that all the urban indicators that have a linear dependency on population size are robust to city demarcations.
Any discrepancies between the observed linear dependencies and the non-linear ones expected according to \cite{Bettencourt_PNAS07, Bettencourt_etal10} are therefore not due to a poor definition of cities: more than $20\times10^3$ different configurations were explored, and linearity persisted.
The main incongruity between observed and expected outcomes is the lack of superlinearity for exponents belonging to observables that are the product of economic and social dynamics, such as income and some employment categories requiring particular skills.
Our methodology provides a tool to construct multiple city limits in a systematic way, enabling us to define consistent systems of cities. Moreover, it emphasises the sensitivity to boundary definition for urban indicators that do not show a linear dependency with population size.
This is specifically highlighted for patents, whose exponent is highly volatile across the whole parameter space. The sensitivity of non-linear exponents to boundaries indicates that comparisons drawn between cities based on the value of the exponent can be misleading.
\section*{Discussion}
This work shows that the search for patterns in urban indicators is more intricate than previously thought.
The specific demarcation and definition of cities play a crucial role in the distribution and measurement of urban attributes.
Any dependencies found between the latter and population size are strongly affected by city selection and definition.
The argument could, however, be reversed if universality existed: contours of cities could be constructed according to expected statistical or scaling laws.
This can only be done if these laws are not too sensitive to borders, since otherwise one would face the dilemma of what comes first: the boundary upon which the theory rests, or the theory that defines the boundary.
The lack of superlinear exponent values for indicators driven by social and economic interactions marks a significant discrepancy between the results for E\&W and those found for the US and China in the literature.
This raises many important questions.
On the one hand, the distinct results obtained from different population size cutoffs indicate that E\&W may be too small a system for agglomeration effects to be measured properly.
This also prompts us to investigate the necessary conditions that need to be fulfilled in order to observe the expected scaling exponents. In addition to the relatively diminutive scale of England and Wales compared to the US and China, the spatial distribution of its cities might be responsible for obscuring superlinear dependencies. Individual cities within the UK are perhaps too close to each other to be easily treated as discrete entities, or maybe London is simply too big relative to the rest of the system. This can be investigated further, by looking for agglomeration effects in other countries that contain a primate city such as London \cite{Jefferson1939}.
Alternatively, the main discrepancies might be due to the fact that the UK has experienced a process of de-industrialisation for longer than most western countries.
The combined impact of globalisation and de-industrialisation may be causing a slow down in the growth of the largest cities outside of London, in turn affecting the value of the exponent.
This effect might be particularly noticeable in the results, since the exponent of a power law is driven by the largest events in the distribution and the weight of London is not big enough to skew the exponent.
This hypothesis can also be put to the test by calculating the exponent for various indicators at the height of the British industrial revolution, during the growth phase of the above mentioned industrial cities.
The scaling behaviour during this time period could be more in line with the expected scaling laws drafted in \cite{Bettencourt_PNAS07, Bettencourt_etal10}.
If this is the case, is there perhaps a timeline that all countries follow after industrialisation, one that alters the expected scaling behaviour?
On the other hand, the maturity of the UK with respect to industrialisation, makes it a unique integrated urban system \cite{Batty_Ferguson_edit2011}, in which, following governmental policies of regionalisation and decentralisation \cite{Champion_etal2007}, critical functionalities were removed from the core of main cities and placed in other areas, smoothing away any agglomeration effects from economic output.
Such regionalisation policies were applied by the government from the 1920s onwards, aiming to reduce the congestion of employment in London, stem the so-called `drift to the south' and reinvigorate the regions outside London and south-east England.
The geographic scale of the system of interest might need to reflect the reach of a city's interactions. London's dominance as a financial and business services hub relates as much to the global organisation of trade and interaction as to its domestic role, a characteristic reflected in the regressions, where London is a positive outlier for attributes that are expected to have superlinear dependencies. The performance of cities such as London should perhaps be evaluated relative to other global hubs operating within a larger-scale network of interactions.
Following Sornette's idea on the emergence of big things \cite{Sornette2012,Yukalov_Sornette2012,Pisarenko_Sornette2012}, a different perspective on the description of cities could be adopted, in which these global hubs are evaluated separately from their domestic counterparts. Sornette refers to the former as \emph{dragon-kings}. A two-system theory of cities might then emerge: a regime for cities driving international dynamics, the dragon-kings, and a regime for the remaining cities composing a country.
\\
The methodology employed in this paper had the purpose of recreating several representations of cities in England and Wales in order to explore the sensitivity of the scaling exponent, and to distinguish between linear and non-linear effects by looking at the fluctuations between different realisations.
Across more than twenty thousand descriptions, we were able to record and characterise the behaviour of the scaling exponent over the whole set. It is our intention to apply this methodology to other European countries, distinguishing between systems with and without primate cities. In addition, we intend to re-analyse the US by applying clustering algorithms to more disaggregate data than MSAs, and to assess historic datasets for the UK to evaluate the stability of the scaling exponent over time as well as space.
The challenge of studying cities in a consistent manner is clearly considerable due to spatial and qualitative differences in data from location to location.
It is however a necessary step for identifying truly universal patterns of behaviour. It is our hope that the approach described here goes some way to meeting this challenge, progresses the debate on scaling and city size and moves us closer to a better understanding of systems of cities.
\section*{Materials and Methods}
Most of the variables come from the 2001 UK census dataset, produced by the Office for National Statistics.
The data is of high spatial resolution, and is given at the level of wards. This is aggregated for each of the different realisations of cities described in the text. Each of the tables from which the indicators were obtained is described in detail in the SI Text. Data on patents was provided by the Intellectual Property Office at the postcode level, for the years 2000 to 2011. The dataset on household income was taken from UK census experimental statistics for 2001/02, and it was produced using a model-based process. Crime data was obtained from the Home Office, and is the average between the years 2003 and 2011. Finally, infrastructure data, such as the area of roads, paths and buildings, come from the 2001 Generalised Land Use Database. Details for all the variables are provided in the SI Text.
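The exponents analysed throughout come from regressions of each indicator against population size across the different city realisations. As a minimal sketch of such a fit (on synthetic data, and assuming an ordinary least-squares fit in log-log space, since the regression details are given in the SI Text), the following estimates the scaling exponent $\beta$ in $Y \propto N^{\beta}$ under different minimum population cutoffs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic system of cities: population N and indicator Y ~ Y0 * N^beta (hypothetical data)
pop = rng.lognormal(mean=10.0, sigma=1.2, size=500)
beta_true = 1.1
indicator = np.exp(0.5) * pop**beta_true * rng.lognormal(0.0, 0.1, size=500)

def scaling_exponent(pop, y, min_pop=0.0):
    """OLS fit of log y = log y0 + beta * log pop over cities above a population cutoff."""
    mask = pop >= min_pop
    beta, _ = np.polyfit(np.log(pop[mask]), np.log(y[mask]), 1)
    return beta

# Re-estimate beta for cutoffs like those used in the heatmaps
for cutoff in [0.0, 1e4, 1e5, 150e3]:
    print(f"cutoff {cutoff:>9.0f}: beta = {scaling_exponent(pop, indicator, cutoff):.3f}")
```

With clean synthetic data the estimate is stable across cutoffs; the volatility reported above for patents corresponds to this estimate moving substantially as the cutoff changes.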
\section*{Acknowledgements}
EA, EH, PF, AJ and MB acknowledge the support of ERC Grant 249393-ERC-2009-AdG. Useful discussions with Luis Bettencourt, and Geoffrey West of the Santa Fe Institute, and Jos\'e Lobo of Arizona State University helped clarify many issues.
HY acknowledges the support of grants from the Rockefeller Foundation and the James McDonnell Foundation (no. 220020195).
\begin{spacing}{0.9}
\section{Introduction}
\subsection{ A Class of Stochastically Forced Linear Equations }
The original goal of this work was to develop a framework
for analyzing the stability of the stochastically forced
Mathieu equation:
\begin{equation} \label{eq:Mathieu}
\ddot{x} + \gamma \dot{x} + (\omega_0^2 + \varepsilon f(t))x = 0,
\end{equation}
where $f$ is a stochastic process, and the
stability is determined by the boundedness of the
second moment $\avg{x^2(t)}$ \cite{Arnold,khas}.
Here, $\avg{\cdot}$ denotes
the sample-average.
We wanted to avoid heuristic methods, and consider cases
where $f(t)$ is a stochastic process with a
realistic power spectral density. In particular, we do
not want to assume that $f$ is white noise. Hence
we want to analyze the case where $f(t)$ is colored noise. However,
in order to rigorously derive a Fokker-Planck equation for
a stochastic differential equation,
the governing equation must include only white noise
\cite{Arnold}.
We can achieve both goals of rigor and
realistic power spectral density by letting $f$ be the output of a linear
filter that is forced by a vector white noise ${\boldsymbol \xi}$.
That is,
\begin{align}
&\dot{ {\bf s}} = {\bf H} {\bf s} +{\boldsymbol \xi}(t), \label{eq:filter}\\
& f(t) = \langle \bfa , \bfs (t)\rangle ,\label{eq:filter_f}
\end{align}
where $ {\bf H}$ is an $n\! \times \! n$ real, diagonalizable matrix, whose eigenvalues
have negative real parts,
$ {\bf a}\in \mathbb{R}^n$, and
$\langle \cdot,\cdot\rangle$ is the
standard inner product on $\mathbb{C}^n$.
We will take deterministic initial condition $ {\bf s} (0) = {\bf 0}$.
We assume the noise vector ${\boldsymbol \xi}$ is weighted white noise, meaning
\begin{equation}
\label{eq:xi}
\avg{ {\boldsymbol \xi}}=0, \qquad
\avg{{{\boldsymbol \xi}}(t+\tau)
{\boldsymbol \xi}^T (t)} = {\bf B} \delta(\tau),
\end{equation}
where $ {\bf B}$ is symmetric and positive semi-definite.
Thus, when $ {\bf s} (t)$ solves \eqref{eq:filter}
it is a standard vector-valued Ornstein-Uhlenbeck process.
We refer to the scalar process,
$f(t) = \langle \bfa , \bfs (t)\rangle$, as colored noise or as an $n$th-order filter
provided $ {\bf s} (t)$ solves \eqref{eq:filter}.
We will make only mild requirements on the matrices $ {\bf H}$ and $ {\bf B}$,
thereby allowing
for wide variability in the power spectral density of the resulting
process $\langle \bfa , \bfs (t)\rangle$. Thus,
in allowing for a wide range of
choices of $ {\bf H}, {\bf B}$, and $ {\bf a}$,
our approach accommodates a broad class of colored noise
forcing terms.
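A sample path of such a colored noise $f(t)=\langle \bfa , \bfs (t)\rangle$ can be generated by integrating \eqref{eq:filter} directly. The sketch below uses a simple Euler-Maruyama scheme (our choice of scheme and parameter values, not taken from the paper) for a two-dimensional filter in which only the first component is forced, so that each discrete noise increment has covariance ${\bf B}\,\Delta t$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, sigma, beta = 0.7, 1.3, 0.9, 2.0   # illustrative values
H = np.array([[-mu1, 0.0], [beta, -mu2]])
a = np.array([beta, -mu2])

dt, T = 1e-2, 500.0
steps = int(T / dt)
S = np.zeros((steps, 2))        # path of s(t)
s = np.zeros(2)                 # deterministic initial condition s(0) = 0
for k in range(steps):
    # B = diag(sigma, 0): white noise of intensity sigma enters the first component only
    s = s + H @ s * dt + np.array([np.sqrt(sigma * dt) * rng.standard_normal(), 0.0])
    S[k] = s

f = S @ a                       # colored-noise output f(t) = <a, s(t)>

# The first component alone is a scalar OU process; its stationary variance is sigma/(2*mu1)
print(np.var(S[:, 0]), sigma / (2 * mu1))
```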
In this paper we will be concerned with the more general problem
of linear equations that are being parametrically forced
by the function $f(t)$
in equation (\ref{eq:filter_f}); that is, equations of the form
\begin{equation}\label{eq:x}
\dot{{\bf x}} = {\bf A}_0 {\bf x} + \varepsilon \langle \bfa , \bfs \rangle {\bf A}_1 {\bf x},
\end{equation}
where
${\bf s}$ is the solution to the stochastic equation
(\ref{eq:filter}), ${\bf x}(t) \in \mathbb{R}^N$ for
some $N\geq 1$,
and ${\bf A}_0, \, {\bf A}_1$ are $N\times N$ constant
matrices.
The purpose of this paper is to present a perturbation
method (assuming $\varepsilon$ is small) for determining the stability of
the solution ${\bf x}(t)$ of \eq{x}, by which we mean the boundedness
of the second moments of ${\bf x}(t)$. However, our method applies to the $p$th
moment, so we will not limit our analysis to second moments only.
Van Kampen has presented a heuristic approach to the
case of colored noise forcing, \cite{Kampen}. Though derived by
completely different means, his result
for the Mathieu equation \eq{Mathieu} is the same as
ours when considering only the first moments, and without damping.
He arrives at his result by truncating certain
series at order $\varepsilon^2$, and it
is only expected to be valid to this order. Our method is rigorous,
can be applied to find solutions to any order in $\varepsilon$, and applies
to any moment. We discuss this further in \S \ref{sec:compare}.
We were originally interested in
equation (\ref{eq:Mathieu}) as a model for the response
of capillary gravity waves to a time-varying gravitational field
arising from random vertical motions of a container with a
free surface (as in \cite{repetto}).
Here
$f(t)$ represents the random fluctuations in acceleration.
Since the Fourier transform of an acceleration should
vanish at zero, along with its derivative, the power spectral
density of
a realistic
process $f$ should satisfy $S(0) = S'(0)=0$.
For example, we can construct a
two-dimensional filter using the system (\ref{eq:filter})
that
has the power spectral density
\begin{equation}
\label{eq:exampleS}
S(\omega) = \frac{ \sigma \beta^2 \omega^2}{(\omega^2+\mu_1^2)
(\omega^2+\mu_2^2)},
\end{equation}
by choosing
\[
{\bf H} = \left( \begin{array}{cc} -\mu_1 & 0 \\
\beta & -\mu_2 \end{array}\right),
\quad {\bf B} = \left( \begin{array}{cc} \sigma & 0 \\
0 & 0 \end{array}\right),
\quad {\bf a} = \left( \begin{array}{c} \beta \\
-\mu_2 \end{array}\right),\quad \mu_1,\mu_2,\sigma >0.
\]
The formula for $S(\omega)$ in equation \eq{exampleS}
follows from Corollary \ref{cor:S} in Appendix C.
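This closed form can be checked numerically. For a stable filter of the form \eqref{eq:filter}, the power spectral density of $\langle \bfa , \bfs (t)\rangle$ is given by the standard expression $S(\omega)= {\bf a}^{T}(i\omega {\bf I}-{\bf H})^{-1}{\bf B}(-i\omega {\bf I}-{\bf H}^{T})^{-1}{\bf a}$; we assume here that this, with the normalisation matching \eq{exampleS}, is the content of Corollary \ref{cor:S}. A quick numerical comparison:

```python
import numpy as np

mu1, mu2, sigma, beta = 0.7, 1.3, 0.9, 2.0   # illustrative values
H = np.array([[-mu1, 0.0], [beta, -mu2]])
B = np.array([[sigma, 0.0], [0.0, 0.0]])
a = np.array([beta, -mu2])

def S_matrix(w):
    # a^T (iwI - H)^{-1} B (iwI - H)^{-H} a  (real by construction)
    M = np.linalg.inv(1j * w * np.eye(2) - H)
    return (a @ M @ B @ M.conj().T @ a).real

def S_closed(w):
    # the closed form of eq. (exampleS)
    return sigma * beta**2 * w**2 / ((w**2 + mu1**2) * (w**2 + mu2**2))

for w in [0.0, 0.5, 1.0, 3.0, 10.0]:
    assert abs(S_matrix(w) - S_closed(w)) < 1e-10
print("matrix and closed-form spectra agree")
```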
The stochastically forced Mathieu equation
has been analyzed before, for instance in
\cite{adams-bloch2008,adams-bloch2009,adams-bloch2010,bobryk-chrz,lee-etal,roy}
but not for the case \eqref{eq:Mathieu}, or the
general setting \eqref{eq:x}.
In \cite{lee-etal, roy} they consider additive forcing,
and in
\cite{adams-bloch2008,adams-bloch2009,adams-bloch2010}
they consider a different type of parametric forcing.
In \cite{bobryk-chrz} they consider a different class of colored noises
and study stability by truncating an infinite hierarchy of moment equations.
Other studies concern Lyapunov stability, or rely on numerical methods.
Our analysis applies to a broad class of equations \eqref{eq:x}
with a wide variety of forcing terms \eqref{eq:filter},
is semi-analytical (relying only on numerics for the
computation of eigenvalues
of small matrices),
and can be applied to any moment.
\subsection{Ladder Operators and the Vector Ornstein-Uhlenbeck
Process}
Our perturbation analysis of
the moment stability of equation (\ref{eq:x}) relies heavily
on a simple characterization of the eigenvalues and eigenfunctions of the Fokker-Planck equation
associated with equation (\ref{eq:filter}).
In particular, in \S \ref{sec:lad} and \S \ref{sec:eig}
we characterize the spectrum using ladder operators
by generalizing
Dirac's
creation and annihilation operator approach to the quantum harmonic oscillator \cite{Dirac}.
An understanding of the spectrum and eigenfunctions in terms of its
ladder operators is crucial to developing the perturbation theory in \S \ref{sec:pert}.
Though
other authors have used ladder operators for Ornstein-Uhlenbeck processes,
they have only considered the scalar case $n=1$
\cite{resibois,risken,titulaer,wilkinson}.
We believe the extension to the vector case is by no means trivial, and
is interesting in its own right.
The probability density function $P(s_1, \ldots, s_n, t)$
associated with the process $ {\bf s}(t)$
defined by equations (\ref{eq:filter}) and (\ref{eq:xi})
satisfies the Fokker-Planck equation
\begin{equation}\label{eq:fps}
\partial_t P = \mathcal{D} P ,
\quad \mbox{with}\qquad
\mathcal{D} P = \frac{1}{2}\Div{ {\bf B} \nabla P} - \Div{ {\bf H} {\bf s} P}.
\end{equation}
$\mathcal{D}$ is called the Fokker-Planck operator associated to \eqref{eq:filter}.
See \cite{Gardiner} for a derivation of this equation.
We note that the Fokker-Planck equation
\eq{fps} is the same in both the It\^o and Stratonovich interpretations because
the matrix ${\bf B}$ is independent of ${\bf s}$ (see \cite{Gardiner}). The operator
$\mathcal{D}$ will play a crucial role in our stability analysis.
In \S \ref{sec:lad}
we begin by analyzing
the operator $\mathcal{D}$ in terms of its associated ladder operators. That is,
operators $\mathcal{L}$ satisfying the commutator equation
\begin{equation}\label{eq:ladEq}
[\mathcal{D}, \mathcal{L}]=\mu \mathcal{L} .
\end{equation}
As in Dirac's theory of the harmonic oscillator, the significance
of the ladder operators stems from the fact that if $\phi$ is
an eigenfunction of $\mathcal{D}$ with eigenvalue $\lambda$, then the function
$\mathcal{L} \phi $ will either vanish, or be an eigenfunction of $\mathcal{D}$ with
eigenvalue $\lambda + \mu$.
In \S \ref{sec:lad} we show that we can construct the
ladder operators by solving a matrix eigenvalue problem
\begin{equation}\label{eq:TDA}
{\bf T} {\bf y} = \mu {\bf y} , \qquad {\bf T} = {\bf D} {\bf A},
\end{equation}
where $ {\bf A}$ is an antisymmetric matrix and
$ {\bf D}$ is a symmetric matrix, expressed in terms of $ {\bf H}$, $ {\bf B}$.
We show there are $n$ raising operators, $\mathcal{L}_k$,
which generate new eigenfunctions of $\mathcal{D}$ with an increase in the real
part of the eigenvalue, and $n$ lowering operators $\mathcal{L}_{-k}$ that
correspondingly decrease the real part of the eigenvalue.
We also show that $\mathcal{D}$ can be expressed in terms of its
ladder operators. In particular,
\begin{equation}\label{eq:Dladder}
\mathcal{D} = \sum_{k=1}^n \mu_k \mathcal{L}_{-k}\mathcal{L}_k,
\end{equation}
where $\mu_k$ is the increment of the ladder operator $\mathcal{L}_k$. That
is, $[\mathcal{D}, \mathcal{L}_k] = \mu_k \mathcal{L}_k $.
This representation is useful for determining the spectrum of $\mathcal{D}$.
In \S \ref{sec:eig} we characterize the solutions of
\begin{equation}
\label{eq:eig_prob}
\mathcal{D} \phi = \chi \phi
\end{equation}
in terms of the ladder operators, $\mathcal{L}$, and increments, $\mu$, solving \eq{ladEq}.
In particular, we show that any eigenvalue $\chi$ of $\mathcal{D}$ can
be written as
\begin{equation}
\chi_{{\bf k} } = -\sum_{j=1}^n k_j \mu_j ,
\label{eqn_integ}
\end{equation}
where $\mu_j$ are the increments of the ladder operators with
positive real parts, and the $k_j$ are non-negative integers.
We will see that the increments $\mu_j$ are the negative of the
eigenvalues of the matrix ${\bf H}$ defining the filter in
equation (\ref{eq:filter_f}).
We also show that any eigenfunction of $\mathcal{D}$ can be obtained by
applying the ladder operator to the eigenfunction
$\Phi_0({\bf s})$
associated with the eigenvalue $\chi=0$ of $\mathcal{D}$,
which is the eigenvalue with the largest real part.
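These facts are easy to verify symbolically in the scalar case $n=1$, where ${\bf H}=-\mu$ and the Fokker-Planck operator reduces to $\mathcal{D}\phi = \frac{B}{2}\phi'' + \mu(s\phi)'$. The ground state is $\Phi_0(s)=e^{-\mu s^2/B}$, and repeated differentiation generates the remaining eigenfunctions, with eigenvalues $\chi_k=-k\mu$ as in \eqref{eqn_integ}. A sketch (the explicit scalar formulas are our own illustration):

```python
import sympy as sp

s, mu, B = sp.symbols('s mu B', positive=True)
Phi0 = sp.exp(-mu * s**2 / B)   # zero-eigenvalue eigenfunction of the scalar operator

def D(phi):
    # scalar (n = 1) Fokker-Planck operator: D phi = (B/2) phi'' + mu (s phi)'
    return sp.Rational(1, 2) * B * sp.diff(phi, s, 2) + mu * sp.diff(s * phi, s)

for k in range(4):
    phik = sp.diff(Phi0, s, k)
    # check D phi_k = -k mu phi_k, i.e. chi_k = -k mu
    residual = sp.simplify((D(phik) + k * mu * phik) / Phi0)
    assert residual == 0
print("D phi_k = -k*mu phi_k verified for k = 0, 1, 2, 3")
```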
The results summarized in the last paragraph
rely on the fact that
the real parts of the eigenvalues of $\mathcal{D}$ are bounded above (see
Lemma \ref{lem:boundD}), which is proved
in \cite{Liberzon, Metafune, roy};
we give a different and simpler proof in Appendix B.
Here, the domain of $\mathcal{D}$ is the set of functions
that have bounded moments of any order.
The spectrum and eigenfunctions of $\mathcal{D}$ have been studied
before (see \cite{Liberzon, Metafune, roy}) but not in the context of ladder operators.
\subsection{Perturbation Expansion for Moment Stability Analysis}
In \S \ref{sec:pert}
we use the classical perturbation theory of eigenvalues to carry
out an analysis of the stability of equation (\ref{eq:x}).
Our analysis begins by considering the ODEs for $ {\bf s} (t)$ and $ {\bf x} (t)$
together as a single ODE system.
The probability density function
$P(s_1,\ldots,s_n,x_1,\ldots,x_N,t)$ for
the combined system \eq{filter} and \eq{x}
solves the Fokker-Planck equation
\begin{equation} \label{eq:fp}
\partial_t P = \frac{1}{2}\mbox{div}_{\bf s}\left( {\bf B}\nabla_{\bf s} P\right) - \mbox{div}_{\bf s}\left(
{\bf H} {\bf s} P\right) - \mbox{div}_{\bf x} \left( ({\bf A}_0 + \varepsilon \langle \bfa , \bfs \rangle {\bf A}_1){\bf x} P \right).
\end{equation}
The notation $\mbox{div}_{\bf s}$ and $\nabla_{\bf s}$ refer to divergence and gradient
with respect to only the $s_j$ variables, and similarly
$\mbox{div}_{\bf x}$ is divergence in $ {\bf x}$ variables.
Equation \eq{fp}
is the same in both the It\^o and Stratonovich interpretations because
the matrix ${\bf B}$ is independent of $ {\bf s}$ and $ {\bf x}$ (see \cite{Gardiner}).
We can derive an equation for the $p$th marginal moments by multiplying
\eq{fp} by monomials ${\bf x}^\alpha$ and integrating with respect to $d{\bf x}$, where
$\alpha$ is a multi-index of order $p$. The result is an equation
for ${\bf m} ({\bf s},t)$,
a vector of the $p$th marginal moments,
which is of the form
\begin{equation}\label{eq:gzero}
\partial_t {\bf m} = \mathcal{D} {\bf m} + {\bf \Gamma}_0 {\bf m} + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}.
\end{equation}
Note that $\mathcal{D}$ is a differential operator in the ${\bf s}$ variables only,
\begin{equation}\label{eq:defD}
\mathcal{D} \varphi = \frac{1}{2}\mbox{div}_{\bf s}\left( {\bf B}\nabla_{\bf s} \varphi \right) - \mbox{div}_{\bf s}\left(
{\bf H} {\bf s} \varphi \right).
\end{equation}
In equation (\ref{eq:gzero})
each component of $ {\bf m}( {\bf s},t)$ is of the form
$\int_{\mathbb{R}^N} {\bf x}^\alpha P( {\bf x}, {\bf s},t)d {\bf x}$
for some multi-index $\alpha$ with $|\alpha|=p$.
$\mathcal{D} {\bf m}$ indicates $\mathcal{D}$ applied to each component of ${\bf m}$.
For much of our analysis we can assume that the matrices ${\bf \Gamma}_0$ and ${\bf \Gamma}_1$
are given to us, but we illustrate how to obtain
these matrices from
the matrices ${\bf A} _0$ and ${\bf A} _1$ for the particular case of
the Mathieu equation in \S \ref{sec:app}.
The matrices ${\bf \Gamma}_0, \, {\bf \Gamma}_1$ in \eq{gzero} are constant
and depend on
which moments one is considering (see example in equation \eq{gamma}).
There are $J=\left(
\begin{array}{c}
N+p-1 \\ p
\end{array}\right)$ distinct $p$th order monomials in $N$ variables,
therefore ${\bf \Gamma}_0$ and ${\bf \Gamma}_1$ are $J\times J$ matrices.
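The count $J$ is simply the number of monomials of degree $p$ in $N$ variables (multisets of size $p$), which can be checked by direct enumeration:

```python
from math import comb
from itertools import combinations_with_replacement

N, p = 3, 2   # e.g. the second moments of a three-dimensional x(t)
monomials = list(combinations_with_replacement(range(N), p))
J = comb(N + p - 1, p)
print(len(monomials), J)   # both count the p-th order monomials in N variables
assert len(monomials) == J
```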
As in a standard stability analysis,
in order to determine the stability of \eq{gzero}, we look for solutions of the form
$\widetilde{ {\bf m}}({\bf s},t) = e^{\lambda t} {\bf m}( {\bf s})$. Our equation for ${\bf m}({\bf s})$ becomes
\begin{equation} \label{eq:general}
\lambda {\bf m} = \mathcal{D} {\bf m} + {\bf \Gamma}_0 {\bf m} + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}.
\end{equation}
That is, the equation for the $p$th marginal moments of ${\bf x}(t)$
can be written as an eigenvalue problem, and stability is
decided by the sign of the
real part of the largest eigenvalue.
We do a perturbation analysis assuming that the magnitude $\varepsilon$
of the forcing is small.
Our analysis relies on the fact that when $\varepsilon=0$
the eigenfunctions
of equation (\ref{eq:general}) are the direct product of the eigenfunctions
of $\mathcal{D}$ and the eigenvectors of the matrix ${\bf \Gamma}_0$.
A key observation (see Lemma \ref{lem:alpha_beta}) for the perturbation analysis is that
for any vector ${\bf a}$ we can determine constants $\alpha_k$ and $\beta_k$
such that
$\langle \bfa , \bfs \rangle$ can be written as
\begin{equation}\label{eq:coeff_intro}
\langle \bfa , \bfs \rangle = \sum_{k=1}^n \left(\alpha_k \mathcal{L}_k + \beta_k \mathcal{L}_{-k}\right),
\end{equation}
where $\mathcal{L}_{\pm k}$ are the ladder operators satisfying \eqref{eq:ladEq}.
The proof of Lemma \ref{lem:alpha_beta} is given in Appendix D.
In \S \ref{sec:pert} we show that
when $\varepsilon=0$, the eigenvalue of equation (\ref{eq:general}) with the
largest real part is
the same as the largest eigenvalue of the matrix $ {\bf \Gamma}_0$.
If $\lambda_0$ is this unperturbed eigenvalue,
then $ \lambda(\varepsilon) = \lambda_0 + \lambda_2
\varepsilon^2 +\ldots$ with
\begin{equation}\label{eq:lamb2}
\lambda_2 = \sum_{j=1}^J \ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_j}\ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}
G(\nu_1 - \nu_j) ,
\end{equation}
where $\nu_1 = \lambda_0$, $\nu_j$ are the eigenvalues of ${\bf \Gamma}_0$,
and $\boldsymbol \phi_j , \boldsymbol \psi_j$ are the eigenvectors and
normalized adjoint eigenvectors of ${\bf \Gamma}_0$.
Equation (\ref{eq:lamb2})
uses the \emph{extended power
spectral density} $G(z)$, which is defined for a general
stationary random process in \eq{Gdef},
and is given explicitly in \eq{G} for the filter
$\langle \bfa , \bfs (t)\rangle$.
The form of $\lambda_2$
in \eq{lamb2} is derived for forcing terms that have the form \eq{filter_f},
however, the fact that this is simply a weighted sum of
values of $G$, whose coefficients depend only on ${\bf \Gamma}_0, {\bf \Gamma}_1$
(which do not depend on the filter),
suggests such a formula could hold for any process with a well-defined
extended power spectral density. We have carried the perturbation analysis to
higher orders, but the higher-order
coefficients do not appear to have such a simple
form as in equation \eqref{eq:lamb2}.
The method in \S \ref{sec:pert}
involves constructing matrices ${\bf \Gamma}_0$ and ${\bf \Gamma}_1$,
which, as mentioned earlier,
depend on ${\bf A}_0, {\bf A}_1$, and the representation of the $p$th
marginal moments as a vector.
We use the stochastic Mathieu equation as a specific example
in \S \ref{sec:app}.
In \S \ref{sec:numerics} we discuss a numerical method
for determining the stability of \eq{x} without assuming that
$\varepsilon$ is small. We compare these numerical results to our
perturbation results up to both second and fourth order
for the Mathieu equation, and show they are in excellent agreement. In
\S \ref{sec:tensor} we give a second representation (whose
derivation is given in \cite{SAND})
for $\lambda_2$ that does not involve the matrices ${\bf \Gamma}_0$ and
${\bf \Gamma}_1$, but deals directly with the matrices ${\bf A} _0$ and ${\bf A} _1$.
We have found that this representation
simplifies numerical computations.
\section{Existence and Properties of Ladder Operators}\label{sec:lad}
In this section we define the
notion of a ladder operator,
show how to construct these operators, and
prove some basic lemmas about them.
Lemma \ref{lem_one} shows how ladder operators can be used to generate new
eigenfunctions that have their eigenvalue changed by
the increment of the ladder operator.
Lemma \ref{matrixEig} shows how to
find the ladder operators $\mathcal{L}_k$ and their increments $\mu_k$ by
solving a matrix eigenvalue problem.
Lemma \ref{lem:T} shows that the increments of the ladder
operators are
zero and
$\pm \mu_k$, where $- \mu_k$ are the eigenvalues of the matrix
${\bf H}$ defining our filter (see equation (\ref{eq:filter})).
Lemma \ref{lem_commute} gives the commutator relations for the ladder
operators, and Lemma \ref{lem:Dsum} shows that the operator $\mathcal{D}$ can
be expressed as a weighted sum of $\mathcal{L}_{-k} \mathcal{L}_k$, where
$\mathcal{L}_k$ are the ladder operators.
Throughout this section (and the rest of the paper)
the operator $\mathcal{D}$ is that defined in \eqref{eq:defD}.
The basic lemmas in this section are
crucial to the rest of this paper, and hence we have
tried to write this section so that
the lemmas stand out clearly. Though the lemmas are all easily
stated, the proofs of some of the lemmas are
quite technical, especially when attention is given to ensuring that they
apply for complex eigenvalues of the matrix $ {\bf T}$.
For this reason, we have relegated many of the proofs to Appendix A.
Before discussing ladder operators it should be
noted that we define
the domain of $\mathcal{D}$ as the set of functions
that have bounded moments of any order.
Thus, our definition of the domain of $\mathcal{D}$
differs from that given
in \cite{Liberzon}. In that paper they defined the domain based on the
exponential decay
of the eigenfunction $\Phi_0$ that we define
in equation (\ref{eqn_defPhi0}) and discuss in \S \ref{sec:eig}.
The two definitions of the domain give identical eigenfunctions,
but we believe ours is more natural since it does not require knowing
the solution ahead of time.
In \cite{Liberzon} they discuss a continuous
spectrum that arises if the domain is defined so that the eigenfunctions
of $\mathcal{D}$ are only required to be square integrable (or some similarly
less restrictive condition).
An examination of these eigenfunctions shows that
they have power-law decay as ${\bf s}$ goes to
infinity, and therefore do not have moments of all orders.
Hence our definition of the domain also excludes this continuous spectrum.
We now give a definition of a ladder operator of $\mathcal{D}$.
\begin{definition}
An operator $\mathcal{L}$ is a ladder operator for $\mathcal{D}$ with increment $\mu$
if $[\mathcal{D} , \mathcal{L} ]=\mu \mathcal{L}$
for some $\mu \in \mathbb{C}$, where $[\cdot \, , \cdot ]$ denotes the commutator
$[\mathcal{D},\mathcal{L}] = \mathcal{D} \mathcal{L} - \mathcal{L} \mathcal{D}$.
\end{definition}
The following lemma shows that ladder operators can be used to
generate new eigenfunctions from ones that we already know.
\begin{lemma}
\label{lem_one}
Suppose $\mathcal{L}$ is a ladder operator such that
$[\mathcal{D},\mathcal{L} ] = \mu \mathcal{L}$. Let $\phi$ be an eigenfunction of
$\mathcal{D}$ with eigenvalue $\chi$. Then either $\mathcal{L} \phi=0$, or
$\mathcal{L} \phi$ is an eigenfunction of $\mathcal{D}$ with eigenvalue
$\chi + \mu$.
\end{lemma}
\begin{proof}
We have $\mathcal{D} \mathcal{L} \phi - \mathcal{L} \mathcal{D} \phi = \mu \mathcal{L} \phi$.
Since $\phi$ is an eigenfunction of $\mathcal{D}$, this gives us
$\mathcal{D} \mathcal{L} \phi = (\chi + \mu ) \mathcal{L} \phi $.
\end{proof}
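For the scalar case $n=1$ these relations can be written out explicitly and verified symbolically. With $\mathcal{D}\phi=\frac{B}{2}\phi''+\mu(s\phi)'$, the operators $\partial_s$ and $s+\frac{B}{2\mu}\partial_s$ are ladder operators with increments $-\mu$ and $+\mu$ respectively, and the representation \eqref{eq:Dladder} reduces to $\mathcal{D}=\mu\,\mathcal{L}_{-1}\mathcal{L}_{1}$. A symbolic check (the explicit scalar operators are our own illustration, not taken from the paper):

```python
import sympy as sp

s, mu, B = sp.symbols('s mu B', positive=True)
f = sp.Function('f')(s)

def D(phi):
    # scalar (n = 1) Fokker-Planck operator
    return sp.Rational(1, 2) * B * sp.diff(phi, s, 2) + mu * sp.diff(s * phi, s)

def lower(phi):
    # candidate ladder operator with increment -mu
    return sp.diff(phi, s)

def raise_(phi):
    # candidate ladder operator with increment +mu
    return s * phi + B / (2 * mu) * sp.diff(phi, s)

# [D, L] = mu_L L, applied to a generic test function f(s)
assert sp.simplify(D(lower(f)) - lower(D(f)) + mu * lower(f)) == 0
assert sp.simplify(D(raise_(f)) - raise_(D(f)) - mu * raise_(f)) == 0
# the representation D = mu * L_{-1} L_{1}
assert sp.simplify(D(f) - mu * lower(raise_(f))) == 0
print("ladder relations verified for n = 1")
```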
We defined the domain of $\mathcal{D}$ to be the set of functions
that have moments of all orders.
It should be noted that Lemma \ref{lem_one} would not apply
if the domain had been (for example) the set of all square integrable
functions.
In that case a third possibility would exist. It could be
that the function $\phi$ is square integrable, but the
function $\mathcal{L} \phi$ is not. Thus $\mathcal{L} \phi$ would not generate
a new eigenfunction.
We will show that $\mathcal{D}$ has $2n+1$ ladder operators $\mathcal{L}_{\pm k}$, $k=0,\ldots,n$.
We begin by decomposing $\mathcal{D}$ into simple differential
and multiplicative operators.
\begin{definition}\label{def:L}
We define the operators $L_j, j=1,\ldots, 2 n+1$ as follows.
\begin{equation}
\label{eqn_defL}
\begin{aligned}
& L_j \phi = \partial_{s_j} \phi \;\;\;\;\; \mbox{for $j=1,\ldots , n$} \\
& L_{j+n} \phi = s_j \phi \;\;\; \mbox{for $j=1,\ldots , n$}\\
& L_{2n+1} \phi = {\cal I} \phi\;\;\;\; \mbox{for $j=2n+1$}
\end{aligned}
\end{equation}
\end{definition}
Here $ {\cal I} $ is the identity operator.
Note that
$[L_j,L_k]=0$ unless $|j-k|=n$, and $[L_j,L_{j+n}]= {\cal I} $.
In particular, we have
\begin{equation}
[ L_j, L_{k+n} ] = \delta_{jk} {\cal I} \;\;\;\mbox{ $j,k=1,\ldots,n$}
\label{eqn_comL}
\end{equation}
We note that $\mathcal{D}$ can be expressed in the operators
$L_j$ as
\begin{equation} \label{eq:Dsum}
\mathcal{D} = \sum_{k,j=1}^{2n+1}\frac{1}{2}d_{jk}L_jL_k.
\end{equation}
We let $ {\bf D}$ denote the symmetric matrix with components
$d_{jk}$ in \eqref{eq:Dsum}. The choice of $d_{jk}$
in \eqref{eq:Dsum} is not unique, but we make an
explicit choice that makes this matrix symmetric.
If we let $ {\bf A}$ denote the antisymmetric matrix
with components $a_{jk}$ given by
\begin{equation}
[L_j,L_k] = a_{jk} {\cal I} ,
\label{eqn_defA}
\end{equation}
then we have explicit expressions for ${\bf A}$ and ${\bf D}$
\begin{equation}
\label{eq:AD}
{\bf A} = \left(\begin{array}{ccc}
{\bf 0}_n & {\bf I}_n & 0 \\ -{\bf I}_n & {\bf 0}_n & \vdots \\ 0& \cdots &0\end{array} \right),
\quad
{\bf D} = \left( \begin{array}{ccc} {\bf B} & - {\bf H} & 0\\
-{\bf H}^T & \; {\bf 0}_n & \vdots \\
0 & \ldots & - \mbox{tr}({\bf H}) \end{array} \right).
\end{equation}
For details regarding the construction of $ {\bf D}$ see
Lemma \ref{Dsymm} in Appendix A.
Just as $\mathcal{D}$ has a representation in terms of the
operators $L_j$, its
ladder operators will also be expressed in terms of the $L_j$.
Consider an operator
\begin{equation}\label{eq:L}
\mathcal{L} = \sum_{j=1}^{2n+1}y_jL_j.
\end{equation}
We write $ {\bf y}$ for the vector of coefficients of $\mathcal{L}$.
From the representations \eqref{eq:Dsum} and \eqref{eq:L},
we see that the commutator $[\mathcal{D},\mathcal{L}]$ involves
sums of terms of the form $L_iL_jL_k - L_k L_i L_j$, which do not
at first sight appear to be linear in the operators $L_m$,
$m=1,\ldots,2n+1$.
However, by twice applying the
commutator relations in equation (\ref{eqn_comL}) we can show that
$L_i L_j L_k - L_k L_i L_j$ is in fact
a sum of the $L_m$.
Determining the coefficient vector $ {\bf y}$
and increment $\mu$ thus becomes a matrix eigenvalue problem.
The details of how we arrive at this form are
given in Appendix A. Here we will merely state the result of
these manipulations.
\begin{lemma} \label{matrixEig}
If $ {\bf y}$ is the vector of coefficients for $\mathcal{L}$, defined as in equation \eq{L},
then the equation $[\mathcal{D},\mathcal{L}]=\mu \mathcal{L}$ can be written as
a matrix eigenvalue problem
$ {\bf T} {\bf y} = \mu {\bf y}$, where $ {\bf T} = {\bf D} {\bf A}$,
and $ {\bf D}$ and $ {\bf A}$ are
defined in \eqref{eq:AD}.
\end{lemma}
We make the assumption that the eigenvalues of $ {\bf H}$ have negative real parts,
and the eigenvectors form a complete set. For simplicity of the arguments, we will
also assume that the eigenvalues of $ {\bf H}$ are simple.
By explicitly writing out the eigenvalue problem
$ {\bf T} {\bf y} = \mu {\bf y}$ we can determine the eigenvalues $\mu_k$
in terms of the eigenvalues of the matrix ${\bf H}$.
We will give the details of the proof in Appendix A.
\begin{lemma} \label{lem:T}
The eigenvalues of $ {\bf T} = {\bf D} {\bf A}$ are $\{0,\pm \mu_k \}$, $k=1,\ldots , n$,
where $-\mu_k$ are the eigenvalues of the matrix ${\bf H}$.
\end{lemma}
Note that $\mathcal{L}_0$ is the identity operator with increment $0$.
Thus, our analysis only involves the $2n$ ladder operators $\mathcal{L}_{\pm k}$
for $k=1,\ldots, n$.
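As a numerical sanity check of Lemma \ref{lem:T} (not part of the proof), the spectrum of ${\bf T}={\bf D}{\bf A}$ can be computed in the scalar case $n=1$, where ${\bf H}=(h)$ and ${\bf B}=(b)$ are $1\times 1$ and the block matrices of \eqref{eq:AD} reduce to $3\times 3$ matrices. The values $h=-2$, $b=1$ below are illustrative assumptions.

```python
import numpy as np

# Scalar case n = 1: H = (h), B = (b), with Re(h) < 0.
h, b = -2.0, 1.0

A = np.array([[0.0,  1.0, 0.0],     # antisymmetric commutator matrix
              [-1.0, 0.0, 0.0],
              [0.0,  0.0, 0.0]])

D = np.array([[b,   -h,  0.0],      # symmetric coefficient matrix;
              [-h,  0.0, 0.0],      # lower-right entry is -tr(H) = -h
              [0.0, 0.0, -h]])

T = D @ A
eigs = np.sort(np.linalg.eigvals(T).real)
# The eigenvalue of H is h = -mu_1, so mu_1 = 2 here, and the lemma
# predicts the spectrum of T is {0, +mu_1, -mu_1}.
print(eigs)   # -> [-2.  0.  2.]
```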
In doing the perturbation expansion it will be necessary
to have the commutator relations of the operators $\mathcal{L}_i$.
Finding the commutator relations for $[\mathcal{L}_j,\mathcal{L}_k]$ can be
turned into a linear algebra problem involving the eigenvectors of
the matrix $ {\bf T}$.
In particular, using equations (\ref{eq:L}) and (\ref{eqn_defA}) we
get
\begin{equation}
[\mathcal{L}_j,\mathcal{L}_k ] = \left({\bf y} _j ^T {\bf A} {\bf y} _k \right) {\cal I} .
\label{eqn_com1}
\end{equation}
From equation (\ref{eq:AD}) it is easily seen that
\begin{equation}
{\bf A} {\bf D} =
-({\bf D} {\bf A} )^T = - {\bf T} ^T
\label{eqn_antic}
\end{equation}
If ${\bf D} {\bf A} {\bf y} = \mu
{\bf y}$, then multiplying both sides of this by ${\bf A}$ and using
equation (\ref{eqn_antic}) we see that ${\bf A} {\bf y} $ is an
eigenvector of ${\bf T} ^T$ with eigenvalue $ - \mu$.
With this in mind, the right hand side of equation (\ref{eqn_com1}) can
be written as the inner product between the vector ${\bf y} _j$ and the
adjoint eigenvector of ${\bf T}$ associated with $ - \mu_k$.
Using the fact that the eigenvectors and adjoint eigenvectors of a
matrix form a bi-orthogonal set, we can arrive at a simple
expression for the commutators.
When dealing with complex quantities, the notation in this
argument gets to be a bit tedious, and we will leave the details
to Appendix A.
The final commutator result is given by the
following lemma.
\begin{lemma} \label{lem_commute}
For $j,k \geq 1$ we have $[\mathcal{L}_j,\mathcal{L}_k] = 0$ and $[\mathcal{L}_{-j},\mathcal{L}_{k}] =
\delta_{jk} {\cal I} $.
\end{lemma}
In his theory of the harmonic oscillator, Dirac showed that
the Hamiltonian operator can be written as the
product of the raising and lowering operators.
We now generalize this result
to the vector case. In this case the operator $\mathcal{D}$ can be
written as a weighted sum of the products of the raising and
lowering operators.
The next lemma shows that the weights are in fact the
eigenvalues $\mu_k$ of the matrix $ {\bf T}$.
We leave the proof of this lemma to Appendix A, but note
that its proof is probably the most subtle one in this
paper.
\begin{lemma} \label{lem:Dsum}
The differential operator $\mathcal{D} = \sum_{i,j=1}^{2n+1}\frac{1}{2}d_{i,j}L_iL_j$
can be written as
\begin{equation}
\label{eq:Dlad}
\mathcal{D} = \sum_{k=1}^n \mu_k \mathcal{L}_{-k}\mathcal{L}_{k}.
\end{equation}
\end{lemma}
An important feature of the decomposition
\eqref{eq:Dlad} is that only terms of the form $\mathcal{L}_{-k}\mathcal{L}_{k}$, $k>0$, appear (there
are no terms of the form $\mathcal{L}_{k}\mathcal{L}_{-k}$ for $k>0$).
\section{Eigenvalues and Eigenfunctions of $\mathcal{D}$}\label{sec:eig}
In this section we will use the ladder operator formalism to completely characterize the
eigenvalues and eigenfunctions of the operator $\mathcal{D}$.
We note that the spectrum of $\mathcal{D}$ has already been studied and
characterized \cite{Liberzon,Metafune,roy}, but not in terms of ladder operators.
We include another proof of those results because the characterization in
terms of ladder operators is used in the perturbation analysis in \S \ref{sec:pert}.
As with Dirac's theory of the quantum harmonic oscillator, the
analysis of the spectrum using ladder operators requires that the
real part of the spectrum is bounded above.
We will now state this as a lemma, but leave the proof to
Appendix B.
\begin{lemma}\label{lem:boundD}
The real part of the spectrum of the operator $\mathcal{D}$, as defined in \eqref{eq:defD}, is bounded above.
\end{lemma}
The following theorem will allow us to characterize the eigenfunction
associated with the eigenvalue with the largest real part.
\begin{theorem}
\label{thm:lkphi}
Let $\Phi({\bf s})$ be an eigenfunction of $\mathcal{D}$ (as in
equation \eqref{eq:defD}) associated with the
eigenvalue having the largest real part.
We must have $\mathcal{L} _k \Phi =0 $ for $k=1, \ldots, n$.
\end{theorem}
\begin{proof}
Suppose $\Phi({\bf s})$ is an eigenfunction of $\mathcal{D}$ with
eigenvalue $\chi$. If $ \Psi = \mathcal{L}_k \Phi \neq 0$, then $\Psi ({\bf s})$
will be an eigenfunction of $\mathcal{D}$ with eigenvalue $\chi + \mu_k$.
Since $\rp{\mu_k}>0$, this gives an eigenvalue with a larger real part than $\chi$.
Hence if $\chi$ is the eigenvalue with the largest real part, then
$\mathcal{L}_k \Phi =0$ for all $k$.
\end{proof}
\begin{remark}
The system $\mathcal{L}_k \Phi = 0$ for each $k=1,\ldots, n$ is an over-determined system of
first order differential equations. The fact that a solution exists is non-trivial.
However, because $[\mathcal{L}_k,\mathcal{L}_j] =0$, the Frobenius Theorem
applies (see \cite{Marsden, ArnoldV}), which
guarantees that the system is solvable.
\end{remark}
\begin{remark}
As in the comment following Lemma \ref{lem_one}, we should note that
the domain of $\mathcal{D}$ is defined as the set of functions that
have moments of all orders.
If the domain of $\mathcal{D}$ were defined using the less stringent
requirement that the eigenfunctions were square integrable, it would not
be necessary that $\mathcal{L} _k \Phi=0$ for all $k$. This is because
in this case
$\mathcal{L}_k \Phi$ does not have to generate
a new eigenfunction. It could instead produce a function that
is not square integrable.
\end{remark}
By Theorem \ref{thm:lkphi}, the ``top'' eigenfunction $\Phi_0({\bf s})$ (i.e.
the eigenfunction associated with the eigenvalue of $\mathcal{D}$
having the largest real part) must satisfy the
equations $\mathcal{L} _k \Phi _0 =0$.
If ${\bf y} ^k$ is the eigenvector of $ {\bf T}$ associated with
the eigenvalue $\mu_k$, and if $\mu_k \neq 0$, then
the last component of ${\bf y} ^k$ vanishes (see the proof of
Lemma \ref{lem:T} in Appendix A).
That is, we can write
\begin{equation}
{\bf y}^k = \threevec{{\bf p} _k}{{\bf q} _k}{0}.
\end{equation}
Using equation (\ref{eq:L}),
and the definition of the operators $L_k$ in (\ref{eqn_defL}),
the equations $\mathcal{L}_k \Phi_0=0$ can thus be written as
\begin{equation}
\label{eqn_lcmPhi0}
{\bf p} _k \cdot \nabla \Phi_0 + ( {\bf q} _k \cdot {\bf s} ) \Phi_0 =0
\;\;\;\;k=1,\ldots,n
\end{equation}
If we make the ansatz that $\Phi_0({\bf s}) =
\exp( - \frac{1}{2} {\bf s} ^T \boldsymbol \Sigma {\bf s} )$, then
equations (\ref{eqn_lcmPhi0}) will be satisfied if and only if
\begin{equation}
{\bf P} ^T \boldsymbol \Sigma = {\bf Q} ^T ,
\end{equation}
where
\begin{equation}
{\bf P} = [{\bf p} _1,{\bf p} _2,\ldots,{\bf p} _n],\qquad
{\bf Q} = [{\bf q} _1,{\bf q} _2,\ldots,{\bf q} _n].
\end{equation}
If ${\bf P}$ is invertible, this gives us
$\boldsymbol \Sigma = ({\bf P} ^T)^{-1} {\bf Q} ^T $.
It is not clear that ${\bf P}$ is invertible, or that
$\boldsymbol \Sigma $ is symmetric. However, under
certain weak assumptions
on ${\bf H}$ and ${\bf B}$ (see Definition \ref{def:control}
and Lemma \ref{lem:Sigma} below)
this will be the case. If these assumptions hold, it is convenient to
write $\boldsymbol \Sigma^{-1} = {\bf P} {\bf Q} ^{-1}$.
We now define the notion of a controllable pair.
\begin{definition}\label{def:control}
The matrices ${\bf H}$ and ${\bf B}$ will be said to form
a controllable pair if
there is no nontrivial vector ${\bf z}$ such that
$ {\bf z} ^T {\bf H} ^{k} {\bf B} =0$ for $k=0,\ldots,n-1$.
This is equivalent to requiring ${\rm rank}\,{\bf C} = n$, where
${\bf C}$ is the $n\times n^2$ matrix
${\bf C}=\left[{\bf B},\,{\bf H}{\bf B},\, \ldots, \, {\bf H}^{n-1}{\bf B} \right]$.
\end{definition}
In Appendix B we prove the following lemma.
\begin{lemma}\label{lem:Sigma}
Assuming all of the eigenvalues of ${\bf H}$ have real parts less than
zero, the eigenvectors of $ {\bf H}$ are complete,
and that ${\bf B}$ is positive semidefinite,
then ${\boldsymbol \Sigma} ^{-1} = {\bf P} {\bf Q} ^{-1} $
is symmetric and positive semi-definite.
If ${\bf H}$ and ${\bf B}$ also form a controllable pair, then
the matrix ${\bf \Sigma}^{-1}$ is positive definite, and hence
the matrices ${\bf \Sigma} = {\bf Q} {\bf P} ^{-1}$ and ${\bf P}$
are non-singular.
\end{lemma}
Requiring $({\bf H},{\bf B})$ to be a controllable pair
eliminates some ``degenerate'' types of filters.
For instance, if ${\bf H}={\rm diag}(-\mu_1,-\mu_2)$
and ${\bf B}={\rm diag}(1,0)$ then $({\bf H},{\bf B})$ is not a controllable
pair. In this case,
$s_1(t)$ is a scalar Ornstein-Uhlenbeck process, but $s_2(t)$ is deterministic,
so $ {\bf s} (t)$ is not a genuine two-dimensional Ornstein-Uhlenbeck process,
but rather it is a one-dimensional process with an appended deterministic component.
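The rank condition in Definition \ref{def:control} is straightforward to test numerically. The sketch below checks the degenerate example above; the values $\mu_1=1$, $\mu_2=3$ are illustrative assumptions.

```python
import numpy as np

# Degenerate example: H = diag(-mu1, -mu2), B = diag(1, 0).
H = np.diag([-1.0, -3.0])
B = np.diag([1.0, 0.0])

# Controllability matrix C = [B, HB] for n = 2; (H, B) is a controllable
# pair exactly when rank(C) = n.
C = np.hstack([B, H @ B])
print(np.linalg.matrix_rank(C))   # -> 1, so (H, B) is not a controllable pair
```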
\begin{definition}\label{def:bc}
We will say that the $n\times n$ real matrices
${\bf H}$ and ${\bf B}$
satisfy the \emph{basic conditions} if
\begin{itemize}
\item[(i)] ${\bf B}$ is symmetric
and positive semi-definite
\item[(ii)] ${\bf H}$ has simple eigenvalues
$\{-\mu_k\}_{k=1}^n$ with $\rp{\mu_k}>0$ for $k=1,\ldots,n$
\item[(iii)] $({\bf H},{\bf B})$ form a controllable pair (Def. \ref{def:control}).
\end{itemize}
The requirement of simple eigenvalues for ${\bf H}$ is for convenience
and could be replaced with the requirement of a complete
set of eigenvectors.
\end{definition}
\begin{lemma}
\label{lem:phi0}
Assuming ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
the eigenvalue $\chi_0$ of $\mathcal{D}$ with the largest real part is simple, and
the eigenfunction $\Phi_0({\bf s})$ associated with
it is given by
\begin{equation}
\label{eqn_defPhi0}
\Phi_0({\bf s})
= {\rm exp}\left( - \frac{1}{2} \langle{\bf s} , {\bf \Sigma} {\bf s}\rangle\right),
\end{equation}
where $\boldsymbol \Sigma = {\bf Q} {\bf P} ^{-1} $.
Moreover, $\chi_0 = 0$.
\end{lemma}
\begin{proof}
Without loss of generality we look for solutions of the form
$\Phi_0({\bf s}) = e^{ \psi_0({\bf s}) }$.
In order to satisfy equations (\ref{eqn_lcmPhi0}) we must have
\begin{equation}
{\bf p} _k \cdot \nabla \psi_0 + ({\bf q} _k \cdot {\bf s}) =0
\end{equation}
A direct calculation shows that $\psi_0 = - \frac{1}{2} \langle
{\bf s}, \boldsymbol \Sigma {\bf s} \rangle$ satisfies this equation.
If we have another solution to this equation, say $\psi_1$, then the difference
$\psi_0- \psi_1$ between these solutions will
satisfy
$ {\bf p} _k \cdot \nabla (\psi_0- \psi_1) =0$, for $k=1,\ldots,n$.
The vectors ${\bf p} _k$ are complete
(they are the eigenvectors of ${\bf H}^T$),
which implies that
$\psi_0-\psi_1$ is a constant. This in turn implies that
the eigenfunctions associated with any two such solutions
are multiples of each other, hence
$\chi_0$ is simple.
From Lemma \ref{lem:boundD}, the real part of the spectrum of $\mathcal{D}$ is
bounded above.
From Lemma \ref{lem:Dsum}, we can write $\mathcal{D} = \sum_{k=1}^n \mu_k \mathcal{L}_{-k}\mathcal{L}_k$.
Hence, when we apply $\mathcal{D}$ to the eigenfunction $\Phi_0({\bf s})$
associated with $\chi_0$,
we will get $\mathcal{D} \Phi_0 = 0$
because $\mathcal{L}_k \Phi_0=0$, $k=1,\ldots,n,$
by Theorem \ref{thm:lkphi}. Hence $\chi_0 = 0$.
\end{proof}
\begin{theorem}\label{thm:integ}
Let ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}).
Then $\chi$ is an eigenvalue of $\mathcal{D}$ if and only if it can be written as
in equation (\ref{eqn_integ}),
where $k_j$ are non-negative integers, and $- \mu_j$ are the eigenvalues of
${\bf H}$.
The eigenfunction associated with
$\chi_{{\bf k}}$ is given by
$\Phi_{{\bf k}}({\bf s}) = \mathcal{L}_{-1}^{k_1} \mathcal{L}_{-2}^{k_2}
\cdots\mathcal{L}_{-n}^{k_n} \Phi_0({\bf s})$,
where $\Phi_0({\bf s})$ is defined as in equation (\ref{eqn_defPhi0}).
\end{theorem}
\begin{proof}
From Lemmas \ref{lem:boundD} and \ref{lem:phi0}, the real part of the spectrum of $\mathcal{D}$ is
bounded above by $0$, and $\chi = 0$ is an eigenvalue of $\mathcal{D}$ that has the
form \eqref{eqn_integ}.
If $\chi$ is any other eigenvalue,
and $\Phi$ is its eigenfunction, then there must be at least one
value of $k$ such that $\mathcal{L} _k \Phi \neq 0$.
If this new eigenvalue has the form
given in equation (\ref{eqn_integ}), then the previous one will too. We
can keep carrying out this process obtaining eigenvalues with
larger real parts. This process must eventually end since the
real part of the spectrum is bounded
above. The only way it can end is when we arrive at the largest eigenvalue,
which, as we have already seen, is zero. This implies
equation (\ref{eqn_integ}).
The argument in the last paragraph shows that any eigenvalue of
$\mathcal{D}$ must be of the form (\ref{eqn_integ}).
To show that any number $\chi_{{\bf k}}$ of this form
must be an eigenvalue of $\mathcal{D}$ we show that
$\Phi_{{\bf k}}({\bf s}) = \mathcal{L}_{-1}^{k_1} \mathcal{L}_{-2}^{k_2}
\cdots\mathcal{L}_{-n}^{k_n} \Phi_0({\bf s})$ is
the eigenfunction associated with
$\chi_{{\bf k}}$. This
follows from
Lemma \ref{lem_efuncs} in Appendix B.
\end{proof}
\section{Perturbation Method}\label{sec:pert}
The marginal-moment equation \eq{gzero}
is derived by multiplying \eq{fp} by a monomial
$ {\bf x}^\alpha$ for some multi-index $\alpha$, then integrating with
respect to ${\bf x}$.
If this is done for each multi-index of order $p$, we derive a set of equations
for the $p$th marginal moments. If we collect the $p$th marginal moments into a
vector $ {\bf m}$, we arrive at \eq{gzero}. The matrices ${\bf \Gamma}_0 ,\,{\bf \Gamma}_1$
depend not only on ${\bf A}_0, {\bf A}_1$, but also on our mapping of the $p$th marginal
moments into $ {\bf m}$. For this reason, we do not write the explicit form of
${\bf \Gamma}_0, {\bf \Gamma}_1$ in this section, but we do write them out for the example of
second marginal moments for the Mathieu equation in \S \ref{sec:app}.
We let $\boldsymbol \phi_j$ denote the eigenvectors of ${\bf \Gamma}_0$ with
eigenvalues $\nu_j$.
We let $\boldsymbol \psi_j$ be the normalized
adjoint eigenvectors, so that $\ob{\boldsymbol \psi}_k^T\boldsymbol \phi_j = \ip{\boldsymbol \psi_k}{\boldsymbol \phi_j} = \delta_{kj}$.
We may assume without loss of generality that the $\nu_j$ are ordered
so that $\rp{\nu_1} \geq \rp{\nu_j}$ for all $j$.
We expand the unknowns as series in $\varepsilon$,
\begin{equation}\label{eq:expand}
\lambda = \lambda_0+\varepsilon\lambda_1 +
\varepsilon^2 \lambda_2 + \ldots ,\qquad {\bf m}( {\bf s}) =
{\bf m}_0( {\bf s}) + \varepsilon {\bf m}_1( {\bf s})+ \varepsilon^2 {\bf m}_2({\bf s}) +\ldots,
\end{equation}
and solve for the terms of these series.
If we substitute these expansions into \eq{general},
and collect
the zeroth-order terms, we get
\begin{equation}\label{eq:zero}
\lambda_0 {\bf m}_0 = \mathcal{D} {\bf m}_0 + {\bf \Gamma}_0 {\bf m}_0.
\end{equation}
The eigenfunctions of $\mathcal{D}$ are scalar-valued, and the eigenvectors of ${\bf \Gamma}_0$
are constant vectors. Assuming that both the eigenfunctions of
$\mathcal{D}$ and the
eigenvectors of ${\bf \Gamma}_0$ are complete, then
the most general
solution $ {\bf m}_0$ to \eq{zero} will be a product of an eigenfunction of $\mathcal{D}$
with an eigenvector of ${\bf \Gamma}_0$, and $\lambda_0$ will be the sum of the eigenvalues
of $\mathcal{D}$ and ${\bf \Gamma}_0$.
We are interested in the largest eigenvalue, so we take
\begin{equation}
\label{eq:m0}
{\bf m}_0( {\bf s}) = \Phi_0( {\bf s}) \boldsymbol \phi_1,
\end{equation}
and $\lambda_0 = \nu_1$ because $0$ is the largest eigenvalue of $\mathcal{D}$
and $\mathcal{D} \Phi_0 = 0$ (Lemma \ref{lem:phi0}), and $\nu_1$ was selected to have the
largest possible real part (note that the choice of $\nu_1$ need not
be unique).
The form of the forcing in \eqref{eq:x} allows us to
represent $\langle \bfa , \bfs \rangle$ in terms of the ladder operators.
In particular, the parametric forcing by the linear filter
$\langle \bfa , \bfs (t)\rangle$ leads to the presence of the first-order
polynomial $\langle \bfa , \bfs \rangle$ in the Fokker-Planck equation, and thus to the
term $\varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}$ in the moment equation \eqref{eq:general}.
Since the ladder operators, $\mathcal{L}_{\pm k}$, are
linear combinations of first-order operators
$\partial_{s_j}$ and monomials $s_j$,
it is reasonable to try
to write $\langle \bfa , \bfs \rangle$ as a linear combination of $\mathcal{L}_{\pm k}$.
The completeness of the
eigenvectors of $ {\bf H}$ allows us to do this,
greatly simplifying our perturbation analysis.
\begin{lemma}\label{lem:alpha_beta}
If the eigenvectors of $ {\bf H}$ are complete,
and $\alpha_k, \, \beta_k$ are defined as in
equation \eqref{eq:coeffs1} (Appendix D), then
\begin{equation}\label{eq:as}
\langle \bfa , \bfs \rangle = \sum_{k=1}^n \left(\alpha_k \mathcal{L}_k + \beta_k \mathcal{L}_{-k}\right).
\end{equation}
\end{lemma}
The proof of Lemma \ref{lem:alpha_beta} is in Appendix D.
Formula \eqref{eq:as} ensures that the
coefficients $\alpha_k$ and $\beta_k$ will appear in the
coefficients of the perturbation expansions \eqref{eq:expand}.
We show in Appendix C that the
extended power spectral density, $G(z)$, of $\langle \bfa , \bfs (t)\rangle$ can also be expressed
in terms of $\alpha_k$ and $\beta_k$
(Theorem \ref{thm:G}). This allows us
to derive a simple formula for the
order $\varepsilon^2$ coefficient of $\lambda(\varepsilon)$
in terms of $G(z)$ (Theorem \ref{thm:m2}).
Recall that there are
$J=\binom{N+p-1}{p}$ distinct $p$th order monomials in $N$ variables,
and that ${\bf \Gamma}_0$ and ${\bf \Gamma}_1$ are $J\times J$ matrices.
We will assume that ${\bf \Gamma}_0$ has a complete set of eigenvectors,
which is the case for the Mathieu equation, and occurs whenever
the eigenvectors of ${\bf A}_0$ are complete.
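As a small illustration (a sketch using only the Python standard library), the count $J$ for the Mathieu example of \S \ref{sec:app}, which uses $N=2$ state variables and second moments ($p=2$):

```python
from math import comb

# J = C(N+p-1, p): the number of distinct p-th order monomials in N variables.
N, p = 2, 2
J = comb(N + p - 1, p)
print(J)   # -> 3: the monomials x1^2, x1*x2, x2^2
```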
The following lemma gives solvability conditions that
will be used repeatedly in our analysis.
\begin{lemma}\label{lem:solve}
Let ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}).
Suppose that ${\bf \Gamma}_0$ has a complete set of
eigenvectors
$\{ \boldsymbol \phi_j \}_{j=1}^J$,
with eigenvalues $\nu_j$,
normalized adjoint eigenvectors
$\{ \boldsymbol \psi_j \}_{j=1}^J$,
and that
$\Phi(s)$ is an eigenfunction of $\mathcal{D}$ with eigenvalue
$- \mu $.
If $\mu \neq 0$, then the equation
\[
(\lambda_0 -\mathcal{D} - {\bf \Gamma}_0){\bf m} = \Phi({\bf s}){\bf b}
\]
has a solution given by
\begin{equation}
\label{eq:sol1}
{\bf m}({\bf s}) = \Phi({\bf s}) \sum_{j=1}^J \frac{\ip{\boldsymbol \psi_j}{{\bf b}}}{\nu_1-\nu_j +\mu} \boldsymbol \phi_j
\end{equation}
On the other hand, if $\mu=0$ (and hence
$\Phi(s) = \Phi_0({\bf s} )$) and $\nu_1 \neq \nu_j$ for $j>1$, then the equation
\[
(\lambda_0 -\mathcal{D} - {\bf \Gamma}_0){\bf m} = \Phi_0({\bf s}){\bf b}
\]
has a solution if and only if $\ip{\boldsymbol \psi_1}{{\bf b}}=0$. In this case,
the solution is
\begin{equation}
\label{eq:sol2}
{\bf m}({\bf s}) = \kappa \Phi_0(s) \boldsymbol \phi_1 +
\Phi_0({\bf s}) \sum_{j=2}^J \frac{\ip{\boldsymbol \psi_j}{{\bf b}}}{\nu_1-\nu_j} \boldsymbol \phi_j .
\end{equation}
where $\kappa$ is an arbitrary constant.
\end{lemma}
The constant $\kappa$ can be used to choose a normalization for
${\bf m}$; since we do not need a specific normalization, we set
$\kappa = 0$ for convenience. One can check
that if ${\bf A}_0$ has a complete set of eigenvectors then ${\bf \Gamma}_0$
will too.
\begin{proof}
If $\mu \neq 0$, then when we write ${\bf b}$ in the $\boldsymbol \phi_j$ basis,
and make the ansatz ${\bf m}({\bf s}) = \Phi({\bf s}){\bf c}$,
where ${\bf c}$ is a constant vector, we arrive at the
expression for ${\bf m}$ in equation (\ref{eq:sol1}). If $\mu=0$, and hence $\Phi({\bf s})
= \Phi_0({\bf s})$, then we cannot solve this equation if
${\bf b}$ has any component in the direction of $\boldsymbol \phi_1$.
This gives the compatibility condition $\ip{\boldsymbol \psi_1 }{{\bf b}}=0$.
Assuming this holds, the solution is given by equation (\ref{eq:sol2}).
\end{proof}
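A numerical sanity check of formula \eqref{eq:sol1} in the case $\mu \neq 0$: with ${\bf m}({\bf s}) = \Phi({\bf s}){\bf c}$ and $\mathcal{D}\Phi = -\mu\Phi$, the equation reduces to $\left((\lambda_0+\mu){\bf I} - {\bf \Gamma}_0\right){\bf c} = {\bf b}$, which the spectral sum should solve. The matrix ${\bf \Gamma}_0$, the vector ${\bf b}$, and the value of $\mu$ below are illustrative assumptions; the adjoint eigenvectors are taken as the rows of the inverse eigenvector matrix.

```python
import numpy as np

# Illustrative data: a random 3x3 Gamma0 (generically has a complete set
# of eigenvectors), a right-hand side b, and some mu != 0.
rng = np.random.default_rng(0)
Gamma0 = rng.standard_normal((3, 3))
b = rng.standard_normal(3)
mu = 0.7

nu, Phi = np.linalg.eig(Gamma0)    # columns of Phi are the eigenvectors phi_j
Psi = np.linalg.inv(Phi)           # rows of Psi: adjoint eigenvectors,
                                   # normalized so Psi[j] @ Phi[:, k] = delta_jk
order = np.argsort(-nu.real)       # nu_1 = eigenvalue with largest real part
nu, Phi, Psi = nu[order], Phi[:, order], Psi[order, :]

# Spectral sum from eq. (sol1):  c = sum_j <psi_j, b> / (nu_1 - nu_j + mu) phi_j
c = sum((Psi[j] @ b) / (nu[0] - nu[j] + mu) * Phi[:, j] for j in range(3))

# Direct solve of ((lambda_0 + mu) I - Gamma0) c = b, with lambda_0 = nu_1.
c_direct = np.linalg.solve((nu[0] + mu) * np.eye(3) - Gamma0, b)
print(np.allclose(c, c_direct))    # -> True
```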
We now outline the perturbation analysis. The following definition will
help us describe it.
\begin{definition}
We say a function
${\bf f}({\bf s})$ is in ${\cal V} _k$ if
it can be written as the sum of eigenfunctions of $\mathcal{D}$ times constant
vectors, where each of the eigenfunctions is the product of
$k$ or fewer ladder operators $\mathcal{L}_{-j}, j=1,\ldots, n$
applied to the eigenfunction
$\Phi_0({\bf s})$.
\end{definition}
The following lemma will be used in our perturbation analysis.
\begin{lemma}
If $h({\bf s}) \in {\cal V}_k$, then $g({\bf s}) = \langle {\bf a} , {\bf s}
\rangle h({\bf s} )$ is in ${\cal V } _{k+1} $.
\label{lem:V}
\end{lemma}
\begin{proof}
This is almost a direct consequence of Lemmas \ref{lem_commute} and \ref{lem:alpha_beta}.
From Lemma \ref{lem:alpha_beta} we know that
$g(s)$ can be written as a sum of terms involving $\mathcal{L}_{-j} h({\bf s})$
and $\mathcal{L}_j h({\bf s})$ where $j>0$. By definition, each of the
terms $\mathcal{L}_{-j} h({\bf s})$ are in ${\cal V }_{k+1}$. On the other hand,
the commutator relations $[ \mathcal{L} _j, \mathcal{L} _{-k}] = - \delta_{jk} {\cal I} $
from Lemma \ref{lem_commute}, and the fact that $\mathcal{L}_j
\Phi_0({\bf s}) =0$ for $j>0$, can be used to show that $\mathcal{L}_j h({\bf s})$
is in ${\cal V }_{k-1}$. That is, $\mathcal{L}_j$ has either canceled out
a previous term $\mathcal{L}_{-j}$
applied to $\Phi_0$, or it commutes with all of the previous operators applied
to $\Phi_0$, yielding the zero function because $\mathcal{L}_j \Phi_0 = 0$
for $j>0$.
\end{proof}
The perturbation analysis proceeds as follows. We have a zeroth-order solution
$ {\bf m}_0 = \boldsymbol \phi_1\Phi_0$, which is clearly in ${\cal V}_0$.
We will see by induction, that the function
${\bf m} _k({\bf s})$ will be in ${\cal V}_k$.
The equation at each higher order will be
of the form
\begin{equation}\label{eq:genk}
(\lambda_0 -\mathcal{D} - {\bf \Gamma}_0 ){\bf m}_k = -\lambda_k {\bf m}_0 +
{\bf r}_{k-1}(s)
\end{equation}
where ${\bf r} _{k-1}({\bf s})$ is a function that can be computed using
the ${\bf m}_j$ and $\lambda_j$ for $j<k$.
In particular, we have
\begin{equation*}
{\bf r} _{k-1}({\bf s} ) = - \sum_{j=1}^{k-1} \lambda_j {\bf m} _{k-j}
+ \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m} _{k-1}
\end{equation*}
Assuming that for $j<k$
the functions ${\bf m} _j({\bf s} )$ are
in ${\cal V}_j$,
then Lemma \ref{lem:V} ensures that
the term ${\bf r} _{k-1}({\bf s})$ will
be in ${\cal V}_k$.
We can write
\begin{equation*}
{\bf r} _{k-1}({\bf s} ) = \Phi_0({\bf s}) {\bf b}_ {k-1}
+ \hat{{\bf r }} _{k-1}({\bf s})
\end{equation*}
where the term $\hat{{\bf r }}_{k-1} ({\bf s})$ can be written as
a sum of eigenfunctions of $\mathcal{D}$ times constant vectors, in which none of the eigenfunctions is
$\Phi_0({\bf s})$.
With this in mind we use Lemma \ref{lem:solve} to see that
we will be able to solve equation (\ref{eq:genk}) if and only if
$\lambda_k \ip{{\boldsymbol \psi} _1}{ {\boldsymbol \phi} _1}= \ip{{\boldsymbol \psi} _1}{{\bf b} _{k-1}}$, and hence
\begin{equation*}
\lambda_k = \ip{ {\boldsymbol \psi} _1}{{\bf b} _{k-1}}.
\end{equation*}
Once we have chosen $\lambda_k$ in this way, we can
solve for ${\bf m} _k$, and it will clearly be in ${\cal V}_k$, thus
allowing us to continue the process to the next value of $k$ by induction.
Terms in ${\bf r} _{k-1}({\bf s})$
proportional to $\Phi_0$ can only arise at even steps in the process (i.e. equations
for $\lambda_{2j}, {\bf m}_{2j}$) because $\mathcal{L}_k\Phi_k=-\Phi_0$ (see equation
\ref{eq:lem:phik}).
These terms proportional to $\Phi_0$ must satisfy the compatibility condition
$\ip{\boldsymbol \psi_1}{ {\bf b}}=0$ as in Lemma \ref{lem:solve}.
\subsection{First Order}\label{sec:pert1}
To simplify notation, we make the following definition.
\begin{definition}\label{def:phik}
The functions $\Phi_k({\bf s})$ are defined as
\begin{equation}\label{eq:phik}
\Phi_k( {\bf s})=\mathcal{L}_{-k}\Phi_0( {\bf s}), \qquad k = 1,\dots ,n,
\end{equation}
where $\Phi_0({\bf s})$ is the eigenfunction of $\mathcal{D}$ associated with the
eigenvalue with the largest real part. Note that,
from Lemma \ref{lem_commute} and Theorem \ref{thm:lkphi}, we have
\begin{equation}
\mathcal{L}_k \Phi_k( {\bf s}) = -\Phi_0( {\bf s}),\qquad k=1,\ldots, n,
\label{eq:lem:phik}
\end{equation}
because $\mathcal{L}_k \Phi_k = \mathcal{L}_k\mathcal{L}_{-k}\Phi_0=
(\mathcal{L}_{-k}\mathcal{L}_{k}-1)\Phi_0=-\Phi_0$.
\end{definition}
Substituting \eq{expand} into \eq{general} and
collecting terms of order $\varepsilon$,
we get the equation for ${\bf m}_1$
\begin{equation}\label{eq:eq1}
(\lambda_0 -\mathcal{D} - {\bf \Gamma}_0 ){\bf m}_1 = -\lambda_1 {\bf m}_0 + \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}_0.
\end{equation}
It is not hard to show that the eigenvalue $\lambda(\varepsilon) = \lambda_0+\varepsilon \lambda_1 +\ldots$
must be an even function of $\varepsilon$. This is also intuitive because the sign of $\varepsilon$
plays no role in \eq{x}. Thus, it is no surprise that $\lambda_1 = 0$.
\begin{lemma}\label{lem:m1}
If ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
we have $\lambda_1 = 0$ and
\begin{equation}\label{eq:m1}
{\bf m} _1 = \sum_{k=1}^n \Phi_k({\bf s}) {\bf c} _k
\end{equation}
where $\Phi_k$ is defined in \eqref{eq:phik}, and
\begin{equation}\label{eq:ck}
{\bf c} _k = \sum_{j=1}^J \frac{\beta_k \ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}}
{\nu_1 - \nu_j + \mu_k}\boldsymbol \phi_j
\end{equation}
\end{lemma}
\begin{proof}
Using (\ref{eq:as}) and that $\mathcal{L}_k\Phi_0 = 0$, $\mathcal{L}_{-k}\Phi_0 = \Phi_k$, we have
\begin{equation} \label{eq:eq11}
\langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}_0 = \sum_{k=1}^n \beta_k {\bf \Gamma}_1 \boldsymbol \phi_1 \Phi_k({\bf s})
\end{equation}
Thus, the right side of \eq{eq1} is a finite sum of terms proportional to $\Phi_0, \Phi_1,
\ldots, \Phi_n$, and each term can be treated separately. We now apply
Lemma \ref{lem:solve} to equation (\ref{eq:eq1}) using
equation (\ref{eq:eq11}).
The only term proportional to $\Phi_0$ is $-\lambda_1 {\bf m}_0$. But according to
Lemma \ref{lem:solve} this means $\ip{\boldsymbol \psi_1}{\lambda_1 \boldsymbol \phi_1}=0$. Hence, $\lambda_1 = 0$.
The expression in \eq{sol1} applied to the $\Phi_k$ terms
for $k>0$ gives the expression in \eq{m1}.
\end{proof}
\subsection{Second Order}\label{sec:pert2}
Substituting \eq{expand} into \eq{general} and
collecting terms of order $\varepsilon^2$,
we get the equation for ${\bf m}_2$
\begin{equation}\label{eq:eq2}
(\lambda_0 -\mathcal{D} - {\bf \Gamma}_0 ){\bf m}_2 = -\lambda_2 {\bf m}_0 + \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}_1.
\end{equation}
The situation here is similar to that for $ {\bf m}_1$, except that
the terms proportional to $\Phi_0( {\bf s})$ come from $ {\bf m}_0$ as well as
from terms of the form $\mathcal{L}_k\Phi_k( {\bf s}) = -\Phi_0( {\bf s})$.
\begin{lemma}\label{lem:quick}
If ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
the compatibility condition for \eq{eq2} implies
\begin{equation}\label{eq:lam2}
\lambda_2 =-\sum_{k=1}^n\ip{\boldsymbol \psi_1}{\alpha_k {\bf \Gamma}_1 {\bf c}_k},
\end{equation}
where ${\bf c} _k$ is defined as in \eq{ck}.
\end{lemma}
\begin{proof}
Lemma \ref{lem:m1} shows that
$ {\bf m}_1 = \sum_{k=1}^n \Phi_k({\bf s}) {\bf c}_k $. This fact, and an application of
the result in Lemma \ref{lem:alpha_beta} implies that
\begin{equation}
\langle \bfa , \bfs \rangle{\bf \Gamma}_1 {\bf m}_1 = \left(\sum_{l=1}^n \alpha_l \mathcal{L}_{l}+\beta_l\mathcal{L}_{-l}\right)
{\bf \Gamma}_1 \sum_{k=1}^n \Phi_k({\bf s}) {\bf c}_k = -
\Phi_0({\bf s}) \left(\sum_{k=1}^n\alpha_k {\bf \Gamma}_1 {\bf c}_k \right) + \ldots
\nonumber
\end{equation}
where the term on the right is the only term proportional to $\Phi_0$.
We used \eqref{eq:lem:phik}
to write $\mathcal{L}_k\Phi_k( {\bf s})=\mathcal{L}_k\mathcal{L}_{-k}\Phi_0( {\bf s}) = -\Phi_0( {\bf s})$.
Using the form of ${\bf m}_0$ in \eqref{eq:m0},
the compatibility condition from Lemma \ref{lem:solve}
implies $ -\lambda_2\ip{\boldsymbol \psi_1}{\boldsymbol \phi_1} = \ip{\boldsymbol \psi_1}{\sum_{k=1}^n\alpha_k {\bf \Gamma}_1 {\bf c}_k }$.
Hence
\begin{equation}
\lambda_2 =-\ip{\boldsymbol \psi_1}{\sum_{k=1}^n\alpha_k {\bf \Gamma}_1 {\bf c}_k}=
-\sum_{k=1}^n\ip{\boldsymbol \psi_1}{\alpha_k {\bf \Gamma}_1 {\bf c}_k}.
\nonumber
\end{equation}
\end{proof}
Computing the expression for $ {\bf m}_2$ is a simple exercise, but we do not
write it here. Continuing this process for higher order terms is straightforward,
though grows more tedious with each successive order.
Lemma \ref{lem:quick}
allows us to compute $\lambda_2$, but a nice feature of this
second-order term is that it can also be expressed by a simple
formula involving the extended power spectral density $G$ of
the process $\langle \bfa , \bfs (t)\rangle$ (see Appendix C).
We prove the following theorem in Appendix D.
\begin{theorem}\label{thm:m2}
If $\rp{\nu_1-\nu_j + \mu_k}>0$
for each $j=1,\ldots, J$ and $k=1,\ldots , n$,
and if
${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
then
\begin{equation}\label{eq:lambda2}
\lambda_2 = \sum_{j=1}^J \ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_j}\ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}
G(\nu_1 - \nu_j).
\end{equation}
Here $G(z)$ is the extended power spectral density of the forcing
term $\langle {\bf a}, {\bf s} \rangle$.
\end{theorem}
\begin{remark}
Note that the coefficients $\ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_j}\ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}$
and the differences $\nu_1 - \nu_j$ depend only on the differential equation for $ {\bf x}$
(i.e. only on the matrices $ {\bf A}_0$ and $ {\bf A}_1$), and the function $G$ depends only
on the filter $\langle \bfa , \bfs (t)\rangle$ (i.e. on $ {\bf H}, {\bf B}, {\bf a}$). It would be interesting to investigate
whether the same form as in \eqref{eq:lambda2}
would hold for \emph{any} asymptotically stationary filter.
That is, whether the expression for $\lambda_2$ would be
a linear combination of values of $G$, where the coefficients depend only on the
physical system, and the places where $G$ is evaluated are
given by the eigenvalues of that system.
\end{remark}
\section{Applications}\label{sec:app}
\subsection{Second Moments for the Mathieu Equation}\label{sec:mat}
We can write the Mathieu equation (\ref{eq:Mathieu})
as in \eq{x} using
a two-dimensional vector
${\bf x}^T = ( x_1,x_2)$.
In this case the matrices ${\bf A}_0, {\bf A}_1$ in equation \eq{x} are
\begin{equation}\label{eq:MA0A1}
{\bf A}_0 = \left(
\begin{array}{cc}
0 & 1 \\ -\omega_0^2 & -\gamma
\end{array}\right), \quad
{\bf A}_1 = \left(
\begin{array}{cc}
0 & 0 \\ 1 & 0
\end{array}\right).
\end{equation}
We will consider the stability of the second moments.
We define
\begin{equation}\label{eq:Mmjk}
m_{jk}({\bf s},t) = \int_{\mathbb{R}^2} x_j x_k P(x_1,x_2,{\bf s},t)dx_1 dx_2.
\end{equation}
In this case the Fokker-Planck equation (\ref{eq:fp}) can be written as
\begin{equation}
\partial_t P = \mathcal{D} P -
\pdone{}{x_1} \left( x_2 P \right)
- \pdone{}{x_2} \left(\left( - \omega_0^2 x_1 - \gamma x_2 +
\langle {\bf a} , {\bf s} \rangle x_1 \right) P\right)
\label{eqn_mat}
\end{equation}
If we multiply equation (\ref{eqn_mat}) by $x_1^2$, integrate over all values of
$x_1$ and $x_2$, and integrate by parts, we get the equation
\begin{equation*}
\partial _t m_{11} = \mathcal{D} m_{11} + 2 m_{12}
\end{equation*}
Similarly multiplying equation (\ref{eqn_mat}) by $x_1 x_2$ and $x_2^2$,
integrating over all $x_1$ and $x_2$, and applying integration by parts, we
get the equations
\begin{equation*}
\partial _t m_{12} = \mathcal{D} m_{12} - \omega_0^2 m_{11} - \gamma m_{12}
+ m_{22} + \langle {\bf a},{\bf s} \rangle m_{11}
\end{equation*}
and
\begin{equation*}
\partial _t m_{22} = \mathcal{D} m_{22} - 2\omega_0^2 m_{12} - 2 \gamma m_{22}
+ 2 \langle {\bf a},{\bf s} \rangle m_{12} .
\end{equation*}
If we let
${\bf m} = (m_{11}, m_{12}, m_{22})^T $,
these equations can be written
in the form of equation (\ref{eq:gzero}), where
the matrices ${\bf \Gamma}_0, {\bf \Gamma}_1$ in \eq{gzero} are given by
\begin{equation}\label{eq:gamma}
{\bf \Gamma}_0 = \left(
\begin{array}{ccc}
0 & 2 & 0 \\ -\omega_0^2 & -\gamma & 1 \\ 0 & -2\omega_0^2 & -2 \gamma
\end{array} \right), \quad
{\bf \Gamma}_1 = \left(
\begin{array}{ccc}
0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 2 & 0
\end{array} \right).
\end{equation}
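The relationship between ${\bf A}_0$ and the second-moment matrix ${\bf \Gamma}_0$ can be sanity-checked numerically: the eigenvalues of ${\bf \Gamma}_0$ are the pairwise sums of the eigenvalues of ${\bf A}_0$, a fact used in the perturbation analysis below. A minimal sketch, assuming illustrative values of $\omega_0$ and $\gamma$:

```python
import numpy as np

# Illustrative parameter values (any omega_0, gamma with 4*omega_0**2 > gamma**2 work).
omega0, gamma = 0.5, 0.01

A0 = np.array([[0.0, 1.0],
               [-omega0**2, -gamma]])
G0 = np.array([[0.0, 2.0, 0.0],
               [-omega0**2, -gamma, 1.0],
               [0.0, -2.0*omega0**2, -2.0*gamma]])

key = lambda z: (round(z.real, 8), round(z.imag, 8))
sig = np.linalg.eigvals(A0)
# Pairwise sums sigma_l + sigma_m over unordered pairs (l, m).
sums = sorted([sig[0] + sig[0], sig[0] + sig[1], sig[1] + sig[1]], key=key)
nus = sorted(np.linalg.eigvals(G0), key=key)
assert np.allclose(sums, nus)
```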
After assuming temporal behavior of the form $e^{ \lambda t}$
we arrive at the eigenvalue problem
\begin{equation}
\lambda {\bf m} = \mathcal{D} {\bf m} + {\bf \Gamma}_0 {\bf m} + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m}
\end{equation}
for ${\bf m}({\bf s})$ and $\lambda$, which is the same as
\eq{general}.
We will now apply the
results of Theorem \ref{thm:m2} to this set of equations.
In the case of second moments, the eigenvalues $\nu_j$ of ${\bf \Gamma}_0$
are given by sums of two eigenvalues of ${\bf A}_0$; that is,
$\nu_j = \sigma_\ell +\sigma_m$, where the $\sigma_k$ are
eigenvalues of ${\bf A}_0$.
In the case of the Mathieu equation,
the eigenvalues of ${\bf A}_0$ are $\sigma_1,\, \sigma_2$, where
$\sigma_1 = \ob{\sigma}_2 = \frac{-\gamma + i\sqrt{4\os - \gamma^2}}{2}$.
Hence, there are three choices of $\nu_1$, given by
$\nu_1 = -\gamma,$ or $\nu_1 =-\gamma \pm i\sqrt{4\os - \gamma^2}$, since they all have the same
real part.
In the case $\nu_1 = -\gamma$, we have
$\ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_1} = 0$,
so the $G(0)$ term does not appear. We also have
$\ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_2}\ip{\boldsymbol \psi_2}{{\bf \Gamma}_1 \boldsymbol \phi_1}=
\ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_3}\ip{\boldsymbol \psi_3}{{\bf \Gamma}_1 \boldsymbol \phi_1} = \frac{2}{4\os - \gamma^2}$.
Hence
\begin{equation}\label{eq:lam2mat}
\lambda_2 = \frac{2}{4\os - \gamma^2}\left(G(\nu_1-\nu_2) + G(\nu_1-\nu_3)\right)
=\frac{2}{4\os - \gamma^2}S\left(\sqrt{4\os - \gamma^2}\right),
\end{equation}
where $S(\omega)$ is the power spectral density of $\langle \bfa , \bfs (t)\rangle$.
This follows
because (without loss of generality, taking
$\nu_2 = -\gamma - i\sqrt{4\os - \gamma^2} = \overline{\nu}_3$) we have
$\nu_1-\nu_2 = i\sqrt{4\os - \gamma^2} = -(\nu_1-\nu_3)$, and $G(i\omega) + G(-i\omega) = S(\omega)$
(see Appendix C).
If we take either $\nu_1=-\gamma \pm i\sqrt{4\os - \gamma^2}$, then the
expressions for $\lambda_2$ are
\begin{align*}
&(+)\quad \lambda_2 = \frac{2}{4\os - \gamma^2}\left(G\left( i\sqrt{4\os - \gamma^2}\right) - 2G(0)\right)\\
&(-)\quad \lambda_2 = \frac{2}{4\os - \gamma^2}\left(G\left( -i\sqrt{4\os - \gamma^2}\right) - 2G(0)\right).
\end{align*}
Both cases have the same real part of $\lambda_2$:
\begin{equation*}
\rp{\lambda_2} = \frac{1}{4\os - \gamma^2}\left( S\left(\sqrt{4\os - \gamma^2}\right)-2S(0)\right),
\end{equation*}
which is less than the expression in \eq{lam2mat}. Hence, we have proved
\begin{theorem}\label{thm:lam2}
If ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
then the second moments of the Mathieu equation \eq{Mathieu} become unstable when
$\lambda(\varepsilon)>0$ where
\begin{equation}
\lambda (\varepsilon) = -\gamma + \frac{2}{4\os - \gamma^2}S\left(\sqrt{4\os - \gamma^2}\right)\varepsilon^2 + \ldots
\end{equation}
\end{theorem}
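The coefficient computations leading to \eq{lam2mat} are easy to verify numerically. In the sketch below (illustrative parameter values; the dual vectors $\boldsymbol\psi_j$ are realized as rows of the inverse eigenvector matrix, so that $\ip{\boldsymbol\psi_j}{\boldsymbol\phi_k}=\delta_{jk}$), the $G(0)$ coefficient vanishes and the remaining two coefficients equal $2/(4\os - \gamma^2)$:

```python
import numpy as np

omega0, gamma = 0.5, 0.01            # illustrative values with 4*omega0**2 > gamma**2
Delta2 = 4*omega0**2 - gamma**2

G0 = np.array([[0.0, 2.0, 0.0],
               [-omega0**2, -gamma, 1.0],
               [0.0, -2.0*omega0**2, -2.0*gamma]])
G1 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 2.0, 0.0]])

nu, Phi = np.linalg.eig(G0)          # columns of Phi are the phi_j
Psi = np.linalg.inv(Phi)             # rows are dual vectors psi_j, <psi_j, phi_k> = delta_jk
i1 = np.argmin(np.abs(nu + gamma))   # index of the purely real eigenvalue nu_1 = -gamma

coef = [(Psi[i1] @ G1 @ Phi[:, j]) * (Psi[j] @ G1 @ Phi[:, i1]) for j in range(3)]
assert abs(coef[i1]) < 1e-10                    # the G(0) coefficient vanishes
others = [coef[j] for j in range(3) if j != i1]
assert np.allclose(others, 2.0/Delta2)          # both equal 2/(4*omega0**2 - gamma**2)
```

The products $\ip{\boldsymbol\psi_1}{{\bf \Gamma}_1\boldsymbol\phi_j}\ip{\boldsymbol\psi_j}{{\bf \Gamma}_1\boldsymbol\phi_1}$ are invariant under rescaling of the eigenvectors, so no further normalization is needed.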
\subsection{Comparing Moments for the Mathieu Equation}\label{sec:compare}
If we perform the same analysis as in \S \ref{sec:mat}, but
for the first and third
marginal moment equations instead of the
second marginal moment equation, we obtain results similar to Theorem \ref{thm:lam2}.
If we denote the largest eigenvalue of the $p$th moment operator
$\mathcal{D} + {\bf \Gamma}_0 + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1$ as $\lambda^{(p)}$, then up to
second order, we have
\begin{align}
& \lambda^{(1)}(\varepsilon) = \frac{-\gamma + i \sqrt{4\os - \gamma^2}}{2} + \left(\frac{G(i\sqrt{4\os - \gamma^2\, })}{4\os - \gamma^2}
-\frac{G(0)}{4\os - \gamma^2}\right)\varepsilon^2 + \ldots \nonumber \\
& \lambda^{(3)}(\varepsilon) =\frac{-3\gamma + i \sqrt{4\os - \gamma^2}}{2} + \nonumber \\
& \qquad \qquad
\left( \frac{3G\left(-i\sqrt{4\os - \gamma^2}\right)}{4\os - \gamma^2} + \frac{4G\left(i\sqrt{4\os - \gamma^2}\right)}{4\os - \gamma^2}-\frac{G(0)}{4\os - \gamma^2}
\right)\varepsilon^2 + \ldots \nonumber
\end{align}
(${\bf \Gamma}_1$ and ${\bf \Gamma}_0$ depend on $p$, but we do not make that explicit in our notation.)
It is only the real parts of the eigenvalues that factor into the stability. We have
\begin{align}\label{eq:l1}
& \rp{\lambda^{(1)}(\varepsilon)} = -\frac{\gamma}{2} +
\frac{1}{2(4\os - \gamma^2)}\left(S\left(\sqrt{4\os - \gamma^2\, }\right)- S(0)\right)\varepsilon^2 +\ldots \\
\label{eq:l2}
& \rp{\lambda^{(2)}(\varepsilon)} = -\gamma + \frac{2}{4\os - \gamma^2}S\left(\sqrt{4\os - \gamma^2}\right)\varepsilon^2 + \ldots \\
\label{eq:l3}
&\rp{\lambda^{(3)}(\varepsilon) } = -\frac{3}{2}\gamma +
\frac{1}{2(4\os - \gamma^2)}\left(7S\left(\sqrt{4\os - \gamma^2}\right) - S(0)\right)\varepsilon^2 + \ldots
\end{align}
In \cite{Kampen}, there is a heuristic treatment of the first moments of $ {\bf x}(t)$.
There, Van Kampen writes a series for $ {\bf x}(t)$, which he truncates at the $\varepsilon^2$ term
and then averages to get an expression for $\avg{ {\bf x}(t)}$ up to order $\varepsilon^2$.
He then points out that this new series is the solution to an ODE, up to order $\varepsilon^2$.
The stability of $\avg{ {\bf x}(t)}$ is then analyzed in terms of this new ODE. His result
for the Mathieu equation matches ours up to order $\varepsilon^2$ (although he considers the case
$\gamma = 0$). Our treatment is rigorous, applies to higher moments,
and can be carried out to any order in $\varepsilon$; we stop at $\varepsilon^2$ in this paper
only for convenience.
If we assume that $\rp{\lambda^{(p)}}$ becomes positive while $\varepsilon$ is small (so we
may neglect the $\varepsilon^4$ and higher terms), then we can use \eq{l1}, \eq{l2}, and \eq{l3} to solve
$\rp{\lambda^{(p)}}=0$ for $p=1,2,3$. We find that the second moments will become
unstable before the first moments. If $S\left(\sqrt{4\os - \gamma^2}\right) > S(0)$, then
the third moment will become unstable before the second moment. If $S\left(\sqrt{4\os - \gamma^2}\right) \leq S(0)$,
then the second moment becomes unstable before the third.
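This comparison can be captured in a few lines by solving $\rp{\lambda^{(p)}}=0$ for $\varepsilon^2$ using the truncated expansions \eq{l1}--\eq{l3}. The function name and the sample spectrum values below are our own illustrative choices:

```python
# Solve Re(lambda^(p)) = 0 for eps^2 using the truncated second-order expansions.
def thresholds(gamma, Delta2, S_Delta, S0):
    """Return (eps1^2, eps2^2, eps3^2); None if the O(eps^2) term cannot destabilize.

    Delta2 = 4*omega0**2 - gamma**2, S_Delta = S(sqrt(Delta2)), S0 = S(0).
    """
    e1 = gamma*Delta2/(S_Delta - S0) if S_Delta > S0 else None
    e2 = gamma*Delta2/(2.0*S_Delta)
    e3 = 3.0*gamma*Delta2/(7.0*S_Delta - S0) if 7.0*S_Delta > S0 else None
    return e1, e2, e3

# Spectrum peaked at the resonance, S(Delta) > S(0): third moment destabilizes
# first, then the second, then the first.
e1, e2, e3 = thresholds(gamma=0.01, Delta2=0.9999, S_Delta=1.0, S0=0.2)
assert e3 < e2 < e1
# Spectrum larger at zero, S(Delta) <= S(0): second moment destabilizes before the third.
e1, e2, e3 = thresholds(gamma=0.01, Delta2=0.9999, S_Delta=0.5, S0=0.8)
assert e2 < e3
```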
\subsection{Numerical Results} \label{sec:numerics}
In this subsection
we discuss the computation of the eigenvalue
that determines the stability of the Mathieu
equation (\ref{eq:Mathieu}), with ${\bf A}_0, {\bf A}_1$ from (\ref{eq:MA0A1}) and
${\bf \Gamma}_0, {\bf \Gamma}_1$ from (\ref{eq:gamma}).
We do not restrict ourselves to small values of
$\varepsilon$.
We carry out these calculations by converting the eigenvalue problem
to an infinite dimensional system of linear equations, and
truncating this system after a finite number of terms.
Our procedure converges rapidly as the number of terms in our
expansion is
increased.
We limit ourselves to the case of
a second-order filter given by \eq{filter},
with $ {\bf H}$, $ {\bf B}$, and $ {\bf a}$ given by
\begin{equation} \label{eq:filter2}
{\bf H} = \left(\begin{array}{cc} -\mu_1 & 0\\ \beta & -\mu_2 \end{array} \right), \quad
{\bf B}= \left(\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array} \right) , \quad
{\bf a} = \left( \begin{array}{c} a_1 \\ a_2 \end{array} \right),
\end{equation}
where $\beta, a_1, a_2 \in \mathbb{R}$, $\beta \neq 0$, and $\mu_1, \mu_2 > 0$.
The vector of second marginal moments $ {\bf m} ( {\bf s})$, given by
(\ref{eq:Mmjk}), satisfies
\begin{align}\label{eq:num1}
\lambda {\bf m} &= \mathcal{D} {\bf m} + {\bf \Gamma}_0 {\bf m} + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m} \nonumber \\
& = \frac{1}{2}\partial_{s_1}^2 {\bf m} + \mu_1 \partial_{s_1} (s_1 {\bf m})
-\partial_{s_2}((\beta s_1 - \mu_2 s_2) {\bf m})+ {\bf \Gamma}_0 {\bf m} + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1 {\bf m} .
\end{align}
If we multiply \eq{num1} by $s_2^j$ and integrate with respect to $s_2$, then we get
\begin{align}\label{eq:num2}
\lambda {\bf m}_j &= \frac{1}{2}\partial_{s_1}^2 {\bf m}_j + \mu_1 \partial_{s_1} (s_1 {\bf m}_j)
+ j\beta s_1 {\bf m}_{j-1} \nonumber\\
& \qquad \qquad - j\mu_2 {\bf m}_j+ {\bf \Gamma}_0 {\bf m}_j +
\varepsilon a_1 s_1 {\bf \Gamma}_1 {\bf m}_j + \varepsilon a_2{\bf \Gamma}_1 {\bf m}_{j+1},
\end{align}
where
\begin{equation*}
{\bf m}_j (s_1) = \int_\mathbb{R} s_2^j {\bf m}(s_1,s_2)ds_2.
\end{equation*}
This is an infinite set of equations for the marginals $\{ {\bf m}_j(s_1)\}$.
Let $\varphi_k(s_1) = H_k(\sqrt{\mu_1}s_1)e^{-\mu_1 s_1^2}$, where $H_k$ is the $k$th Hermite
polynomial. We expand $ {\bf m}_j$ in the basis $\varphi_k$ as
\begin{equation*}
{\bf m}_j (s_1) = \sum_k {\bf c}^k_j \varphi_k(s_1).
\end{equation*}
The $\varphi_k$ are eigenfunctions of the differential operator in the
$s_1$ variable in \eq{num2}; explicitly
\begin{equation*}
\frac{1}{2}\partial_{s_1}^2 \varphi_k + \mu_1 \partial_{s_1} (s_1
\varphi_k) = -k\mu_1 \varphi_k \quad k \geq 0.
\end{equation*}
The Hermite polynomials satisfy the
recursion relation $H_{k+1}(y) = 2yH_{k}(y)-2kH_{k-1}(y)$, hence
\begin{equation*}
s_1\varphi_k(s_1) = \frac{1}{2\sqrt{\mu_1}}(\varphi_{k+1}(s_1) + 2k\varphi_{k-1}(s_1))\quad k\geq 0.
\end{equation*}
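Both identities for the $\varphi_k$ are easy to check numerically. The sketch below (illustrative $\mu_1$; the grid and step size are arbitrary choices) verifies the multiplication rule exactly, and the eigenfunction property by central finite differences:

```python
import numpy as np
from numpy.polynomial.hermite import hermval   # physicists' Hermite polynomials H_k

mu1 = 1.8   # illustrative; any mu1 > 0 works

def phi(s, k):
    """phi_k(s) = H_k(sqrt(mu1)*s) * exp(-mu1*s**2)."""
    c = np.zeros(k + 1); c[k] = 1.0
    return hermval(np.sqrt(mu1)*s, c) * np.exp(-mu1*s**2)

s = np.linspace(-1.5, 1.5, 7)

# Multiplication rule: s*phi_k = (phi_{k+1} + 2k*phi_{k-1}) / (2*sqrt(mu1)).
for k in range(1, 4):
    assert np.allclose(s*phi(s, k), (phi(s, k+1) + 2*k*phi(s, k-1))/(2*np.sqrt(mu1)))

# Eigenfunction property: (1/2)*phi'' + mu1*(s*phi)' = -k*mu1*phi, via finite differences.
h = 1e-4
for k in range(4):
    d2 = (phi(s+h, k) - 2*phi(s, k) + phi(s-h, k))/h**2
    d1 = ((s+h)*phi(s+h, k) - (s-h)*phi(s-h, k))/(2*h)
    assert np.max(np.abs(0.5*d2 + mu1*d1 + k*mu1*phi(s, k))) < 1e-3
```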
Thus, \eq{num2} simplifies and becomes an equation for $ {\bf c}_j^k$
\begin{align}\label{eq:num3}
\lambda {\bf c}_j^k &= \left({\bf \Gamma}_0 - (k\mu_1 + j\mu_2) {\bf I}\right) {\bf c}_j^k
+\frac{j\beta}{2\sqrt{\mu_1}}\left( {\bf c}_{j-1}^{k-1} + 2(k+1) {\bf c}_{j-1}^{k+1}\right)\nonumber \\
&\qquad \qquad +\frac{\varepsilon a_1}{2\sqrt{\mu_1}}{\bf \Gamma}_1
\left( {\bf c}_{j}^{k-1} + 2(k+1) {\bf c}_{j}^{k+1}\right)+\varepsilon a_2 {\bf \Gamma}_1 {\bf c}_{j+1}^k.
\end{align}
If we consider a finite number of moments ${\bf m}_j$ for $j\leq N_m$, and truncate the
expansion in $\varphi_k$ at $k\leq N_h$, then we get an approximation to the
doubly infinite system \eq{num3}. This can be written as a matrix equation
\begin{equation}\label{eq:evaleq}
{\bf L} {\bf z} = \lambda {\bf z}
\end{equation}
where $ {\bf L}$ is an $(N_mN_hJ)\times(N_mN_hJ)$ matrix. This eigenvalue
problem can be solved quickly on a computer.
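A minimal implementation of this truncation is sketched below, using the parameter values of Table 1. The index ranges ($j < N_m$, $k < N_h$) and the block ordering are our own conventions, so computed values may differ slightly from the table at a fixed truncation; the qualitative behavior is the same. For $\varepsilon = 0$ the truncated matrix is block triangular and its largest growth rate is exactly $-\gamma$, which serves as a check:

```python
import numpy as np

# Parameter values from Table 1; the index ranges j < Nm, k < Nh and the block
# ordering below are our own conventions for the truncation.
mu1, mu2, beta = 1.8, 0.9, 1.0
gamma, omega0 = 0.01, 0.5
a1, a2 = 1.0, 0.9
Nm, Nh, J = 7, 5, 3

G0 = np.array([[0.0, 2.0, 0.0],
               [-omega0**2, -gamma, 1.0],
               [0.0, -2.0*omega0**2, -2.0*gamma]])
G1 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 2.0, 0.0]])
I3 = np.eye(3)
idx = lambda j, k: (j*Nh + k)*J   # start of block (j, k)

def build_L(eps):
    L = np.zeros((Nm*Nh*J, Nm*Nh*J))
    for j in range(Nm):
        for k in range(Nh):
            r = idx(j, k)
            L[r:r+J, r:r+J] += G0 - (k*mu1 + j*mu2)*I3
            if j >= 1 and k >= 1:     # (j*beta/(2*sqrt(mu1))) * c_{j-1}^{k-1}
                L[r:r+J, idx(j-1, k-1):idx(j-1, k-1)+J] += j*beta/(2*np.sqrt(mu1))*I3
            if j >= 1 and k+1 < Nh:   # (j*beta*(k+1)/sqrt(mu1)) * c_{j-1}^{k+1}
                L[r:r+J, idx(j-1, k+1):idx(j-1, k+1)+J] += j*beta*(k+1)/np.sqrt(mu1)*I3
            if k >= 1:                # eps*a1/(2*sqrt(mu1)) * Gamma_1 c_j^{k-1}
                L[r:r+J, idx(j, k-1):idx(j, k-1)+J] += eps*a1/(2*np.sqrt(mu1))*G1
            if k+1 < Nh:              # eps*a1*(k+1)/sqrt(mu1) * Gamma_1 c_j^{k+1}
                L[r:r+J, idx(j, k+1):idx(j, k+1)+J] += eps*a1*(k+1)/np.sqrt(mu1)*G1
            if j+1 < Nm:              # eps*a2 * Gamma_1 c_{j+1}^k
                L[r:r+J, idx(j+1, k):idx(j+1, k)+J] += eps*a2*G1
    return L

lam = lambda eps: np.max(np.linalg.eigvals(build_L(eps)).real)

assert abs(lam(0.0) + gamma) < 1e-8   # unforced: largest growth rate is -gamma
assert lam(0.1) > lam(0.01)           # growth rate increases with forcing strength
```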
\begin{table}[!h!b!t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& $\lambda(\varepsilon)$ & $E_2$ & $E_4$ \\
\hline
$\varepsilon = 0.01$ &$-9.89\times 10^{-3}$ & $5.74\times 10^{-8}$ &
$2.20\times 10^{-11}$ \\
\hline
$\varepsilon = 0.05$ &$-7.20\times 10^{-3}$ & $3.62\times 10^{-5}$& $3.44\times 10^{-7}$ \\
\hline
$\varepsilon = 0.10$ & $1.65\times 10^{-3}$ & $5.96\times 10^{-4}$&
$2.21\times 10^{-5}$ \\
\hline
\end{tabular}
\caption{Computed values of $\lambda(\varepsilon)$ (for second moments) and the
errors of its perturbation expansions, for three values of $\varepsilon$.
$E_2$ is the error from the second-order expansion, and $E_4$ is the
error from the fourth-order expansion.
Parameter values: $\mu_1 = 1.8, \mu_2 = 0.9, \beta = 1, \gamma = 0.01,
\omega_0 = 0.5, a_1=1, a_2=0.9$, $N_m = 7,\, N_h = 5$.}
\end{center}
\label{table_one}
\end{table}
Table 1 shows
the computed value of $\lambda(\varepsilon)$ for second moments,
which is the largest eigenvalue of $ {\bf L}$ in (\ref{eq:evaleq}).
That is, $\lambda(\varepsilon)$ is the largest eigenvalue
for the Mathieu equation with filter \eq{filter2} (in this case the largest
eigenvalue is real). $E_2$ is the error from a second-order perturbation
expansion. That is, $E_2(\varepsilon) = | \lambda_0 + \lambda_2\varepsilon^2
-\lambda(\varepsilon)|$ with $\lambda_0 = -\gamma$ and $\lambda_2$
is given in equation (\ref{eq:lam2mat}).
$E_4$ is the fourth-order error, $E_4(\varepsilon) = | \lambda_0 +
\lambda_2\varepsilon^2 +\lambda_4\varepsilon^4-\lambda(\varepsilon)|$, where
$\lambda_4$ is computed by performing the perturbation
analysis to order four (the formula for $\lambda_4$ is not presented
here).
The method converges rapidly; the values of
$\lambda(\varepsilon)$ in the table were computed for $N_m = 7$ and $N_h = 5$.
\subsection{Alternative Representation of $\lambda_2$}\label{sec:tensor}
We present a formula for $\lambda_2$ that involves only
${\bf A}_0$ and ${\bf A}_1$, avoiding construction of ${\bf \Gamma}_0, {\bf \Gamma}_1$.
We do not present all of the details because the bookkeeping
can be quite cumbersome
(an interested reader can find the details in \cite{SAND}),
but we believe the formula for
$\lambda_2$ will be useful for applications.
For instance,
if one wants to compute the perturbation coefficients on a computer,
it is easy to build an algorithm based on equation \eqref{eq:lam2tensor}
below,
since one only needs to input the filter
$( {\bf H}, {\bf B}, {\bf a})$ and the matrices ${\bf A}_0$ and ${\bf A}_1$.
The equation
for the second marginal moments can be written as
\begin{equation*}
\partial_t {\bf M} = \mathcal{D} {\bf M} + {\bf A}_0 {\bf M} + {\bf M} {\bf A}_0^T + \varepsilon \langle \bfa , \bfs \rangle \left({\bf A}_1 {\bf M}
+ {\bf M} {\bf A}_1^T \right)
\end{equation*}
where ${\bf M}$ is the $N\!\times \! N$
symmetric matrix with ${\bf M}_{ij} =\int_{\mathbb{R}^N}x_ix_jP({\bf s},{\bf x},t)d{\bf x}$.
In this case the stability is determined by an eigenvalue problem
whose unknowns are eigenvalues and eigenmatrices.
Looking for solutions of the form
$\widetilde{ {\bf M}}( {\bf s},t) = e^{\lambda t} {\bf M}( {\bf s})$,
yields the eigenvalue problem for $ {\bf M}( {\bf s})$
\begin{equation*}
\lambda {\bf M} = \mathcal{D} {\bf M} +{\bf A}_0 {\bf M} + {\bf M} {\bf A}_0^T + \varepsilon \langle \bfa , \bfs \rangle \left({\bf A}_1 {\bf M}
+ {\bf M} {\bf A}_1^T \right).
\end{equation*}
The marginal moment tensor $ {\bf M}$ is symmetric ($M_{jk}=M_{kj}$), so we will
use a basis of symmetric tensors to express $ {\bf M}$ and, in turn,
reproduce the results of \S \ref{sec:pert}. The simplest such basis is given by
the eigenmatrices ${ {\bf E}}_{jk}$, with adjoints ${ {\bf F}}_{jk}$
and inner product $\ip{ {\bf E}}{ {\bf F}} = {\rm tr}( {\bf E}^T {\bf F})$:
\begin{equation*}
{ {\bf E}}_{jk} = \frac{1}{2}\left( {\bf h}_j {\bf h}_k^T + {\bf h}_k {\bf h}_j^T\right),
\quad { {\bf F}}_{jk} = \frac{1}{2}\left( {\bf g}_j {\bf g}_k^T + {\bf g}_k {\bf g}_j^T\right),
\end{equation*}
where $ {\bf h}_j$ are eigenvectors of ${\bf A}_0$ with eigenvalues $\sigma_j$,
and $ {\bf g}_k$ are the normalized
adjoint eigenvectors of
${\bf A}_0$, $\ip{{\bf g} _j}{{\bf h} _k} = \delta_{jk}$. The
eigenvalues of the ${ {\bf E}}_{jk}$ are sums of the $\sigma_i$;
${\bf A}_0 { {\bf E}}_{jk} + { {\bf E}}_{jk} {\bf A}_0^T = (\sigma_j +\sigma_k){ {\bf E}}_{jk}$.
The analogous result to Lemma \ref{lem:solve} is straightforward to
show, and following the steps in \S \ref{sec:pert} we arrive at
the following result (note that the eigenvalues $\nu_j$ of ${\bf \Gamma}_0$
from \S \ref{sec:pert} and \S \ref{sec:compare}
are sums $\sigma_\ell + \sigma _m$).
\begin{theorem}\label{thm:tensorm2}
Let ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
and let $\{ {\bf h}_j \}_{j=1}^N$ form a complete set.
For $q,r$ fixed, if $\rp{\sigma_q+\sigma_r-\sigma_j-\sigma_k + \mu_\ell}>0$
for each $j,k=1,\ldots, N$ and $\ell =1,\ldots , n$, then
the order-two coefficient in the
expansion $\lambda = \lambda_0 + \lambda_2 \varepsilon^2 +\ldots$, with
$\lambda_0 = \sigma_q + \sigma_r$, is given by
\begin{equation}\label{eq:lam2tensor}
\lambda_2=8\sum_{j,k=1}^N
\frac{C_{jkqr}C_{qrjk} }{1+\delta_{qr}}
G(\sigma_q + \sigma_r -\sigma_j-\sigma_k),
\end{equation}
where
\begin{equation}
C_{jk\ell m} = \frac{1}{4}\left(\delta_{jm}\ip{ {\bf g}_k}{{\bf A}_1 {\bf h}_\ell}+
\delta_{km}\ip{ {\bf g}_j}{{\bf A}_1 {\bf h}_\ell}
+\delta_{j\ell}\ip{ {\bf g}_k}{{\bf A}_1 {\bf h}_m}+\delta_{k\ell}\ip{ {\bf g}_j}{{\bf A}_1 {\bf h}_m}\right),\nonumber
\end{equation}
and $ {\bf h}_j$ are eigenvectors of ${\bf A}_0$ with eigenvalues $\sigma_j$,
and $ {\bf g}_k$ are the normalized
adjoint eigenvectors of ${\bf A}_0$, $\ip{{\bf g} _j}{{\bf h} _k}
= \delta_{jk}$.
\end{theorem}
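For the Mathieu example, \eqref{eq:lam2tensor} can be checked directly against the coefficients found in \S \ref{sec:mat}. In the sketch below (illustrative parameter values; the pairing $\ip{{\bf g}_j}{{\bf h}_k}=\delta_{jk}$ is realized by taking the ${\bf g}_j$ as rows of the inverse eigenvector matrix), the cross terms sit at $G(0)$ with vanishing coefficients, while the diagonal terms sit at $G(\mp i\sqrt{4\os-\gamma^2})$ with coefficient $2/(4\os-\gamma^2)$, in agreement with \eq{lam2mat}:

```python
import numpy as np

omega0, gamma = 0.5, 0.01                # illustrative values
Delta2 = 4*omega0**2 - gamma**2
A0 = np.array([[0.0, 1.0], [-omega0**2, -gamma]])
A1 = np.array([[0.0, 0.0], [1.0, 0.0]])

sig, Hc = np.linalg.eig(A0)              # columns of Hc are the h_j
Gc = np.linalg.inv(Hc)                   # rows are g_j with <g_j, h_k> = delta_jk

def C(j, k, l, m):
    d = lambda a, b: 1.0 if a == b else 0.0
    return 0.25*(d(j, m)*(Gc[k] @ A1 @ Hc[:, l]) + d(k, m)*(Gc[j] @ A1 @ Hc[:, l])
               + d(j, l)*(Gc[k] @ A1 @ Hc[:, m]) + d(k, l)*(Gc[j] @ A1 @ Hc[:, m]))

q, r = 0, 1                              # lambda_0 = sigma_1 + sigma_2 = -gamma
coefs, freqs = {}, {}
for j in range(2):
    for k in range(2):
        coefs[(j, k)] = 8.0*C(j, k, q, r)*C(q, r, j, k)/(1.0 + (q == r))
        freqs[(j, k)] = sig[q] + sig[r] - sig[j] - sig[k]

# Cross terms: frequency 0 and vanishing coefficients (no G(0) contribution).
assert abs(freqs[(0, 1)]) < 1e-12 and abs(coefs[(0, 1)]) < 1e-10
assert abs(coefs[(1, 0)]) < 1e-10
# Diagonal terms: coefficient 2/(4*omega0**2 - gamma**2), as in the Gamma formulation.
assert np.allclose([coefs[(0, 0)], coefs[(1, 1)]], 2.0/Delta2)
```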
\section{Conclusions}
We have carried out a perturbation analysis
to characterize the moment
stability
of parametrically forced
linear equations, where the forcing
is colored noise generated by an Ornstein-Uhlenbeck process.
Our analysis applies to arbitrary linear systems, and can in principle
be carried out
to any order.
Our analysis depends on characterizing
the spectrum of the vector Ornstein-Uhlenbeck process using
ladder operators. Though this spectrum has been
characterized elsewhere
\cite{Liberzon, Metafune, roy}, we found the ladder operator approach
particularly useful in carrying out our perturbation analysis.
\section*{Acknowledgements}
We would like to thank John Torczynski for motivating and
finding funding for this work. We also
thank Jim Ellison, Nawaf Bou Rabee, and Rich Field for
several fruitful discussions concerning stochastic differential equations.
\section{Appendix A: Supplementary Material for \S \ref{sec:lad} }\label{sec:appA}
In this appendix we state several lemmas
used in \S \ref{sec:lad} and supply the proofs of
several of the lemmas from that section.
\begin{lemma} \label{Dsymm}
The operator $\mathcal{D}$ defined in equation \eqref{eq:defD} can be expressed as in
equation \eqref{eq:Dsum},
where the $d_{jk}$ are the components of
the symmetric matrix $ {\bf D}$, given in
equation \eqref{eq:AD}.
\end{lemma}
\begin{proof}
With $d_{jk}$ as the components of $ {\bf D}$ given in
equation \eqref{eq:AD}, we have
\begin{equation}
\frac{1}{2} \sum_{i=1}^{2n+1}\sum_{j=1}^{2n+1} d_{ij}
L_i L_j =
\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n
\left(b_{ij} L_i L_j
- h_{ij} L_i L_{j+n} - h_{ji} L_{i+n} L_j \right)
- \frac{1}{2} \tr{ {\bf H}}
\label{eqn_temp}
\end{equation}
The part of the operator involving the coefficients $b_{ij}$ is
clearly equal to the operator $\frac{1}{2}\Div{ {\bf B}\nabla \cdot}$.
To show that
the left hand side of equation (\ref{eqn_temp}) is actually $\mathcal{D}$, we need to show
that the terms involving $h_{ij}$ are in fact the same
as $-\sum_{i=1}^n \sum_{j=1}^n h_{ij} L_i L_{j+n}=-\Div{ {\bf H} {\bf s} \cdot}$.
We compute
\begin{align*}
&\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n
\left(
h_{ij} L_i L_{j+n} + h_{ji} L_{i+n} L_j \right)
+ \frac{1}{2} \tr{ {\bf H}} \\
&= \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n
\left(
h_{ij} L_i L_{j+n} + h_{ij} L_{j+n} L_i \right)
+ \frac{1}{2} \tr{ {\bf H}} \\
&= \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n
\left(
h_{ij} L_i L_{j+n} + h_{ij} \left(L_i L_{j+n} - \delta_{ij} \right) \right)
+ \frac{1}{2} \tr{ {\bf H}}
= \sum_{i=1}^n \sum_{j=1}^n
h_{ij} L_i L_{j+n}
\end{align*}
In the second to last line above, we used the commutator
relation from (\ref{eqn_comL}).
\end{proof}
\begin{center}
{\bf Proof of Lemma \ref{matrixEig} }
\end{center}
\begin{proof}
We compute
an expression for $[\mathcal{D},\mathcal{L}]$ in terms of ${\bf D}$ and ${\bf A}$.
\begin{align*}
[\mathcal{D},\mathcal{L}] & = \sum_{i,j,m}\frac{1}{2}d_{ij}y_m (L_i L_j L_m - L_mL_iL_j) \\
& = \sum_{i,j,m}\frac{1}{2}d_{ij}y_m (L_i [L_j, L_m] + [L_i,L_m]L_j)\\
& = \sum_{i,j,m}\frac{1}{2}d_{ij}y_m (L_i a_{j,m} + a_{i,m}L_j)
=\sum_{i,m}\left(\frac{1}{2}\left({\bf D}+ {\bf D}^T\right){\bf A}\right)_{i,m}y_mL_i.
\end{align*}
For the equation $[\mathcal{D},\mathcal{L}]=\mu \mathcal{L}$, this implies that we have
\[
\sum_{i,m}\left(\frac{1}{2}({\bf D}+ {\bf D}^T){\bf A}\right)_{i,m}y_mL_i
= \mu \sum_i y_i L_i.
\]
In matrix notation, this is just $ {\bf D}{\bf A} {\bf y}
= \mu {\bf y}$, because $ {\bf D} = {\bf D}^T$.
\end{proof}
This proof holds even if we do not assume that ${\bf D}$ is symmetric.
In that case the analysis that follows would be done in terms
of the symmetric matrix ${\bf S} = \frac{1}{2}({\bf D} + {\bf D}^T)$,
instead of ${\bf D}$.
Thus, it is only for convenience that we use the symmetric form of $ {\bf D}$
in \eqref{eq:AD}.
\begin{center}
{\bf Proof of Lemma \ref{lem:T} }
\end{center}
\begin{proof}
We denote the eigenvalues of ${\bf H}$ as $-\mu_k$
with $\rp{\mu_k} >0$ for $k=1,2,\ldots,n$.
Let $ {\bf u}_k$ be the eigenvectors of $ {\bf H}$ and $ {\bf v}_k$ be the adjoint eigenvectors
\begin{equation}
{\bf H} {\bf u} _k = - \mu_k {\bf u} _k ,
\qquad
{\bf H}^T {\bf v} _k = - \overline{\mu}_k {\bf v} _k
\label{eqn_Hu}
\end{equation}
normalized so that
\begin{equation*}
\ip{ {\bf v}_k}{ {\bf u}_j}=\delta_{jk}.
\label{eqn_Huv}
\end{equation*}
Recall that $ {\bf H}$ is a real matrix, so complex eigenvalues
come in complex conjugate pairs.
If we write $ {\bf y} = ( {\bf p}, {\bf q}, r)^T$ then
$ {\bf T} {\bf y} = \mu {\bf y}$ becomes
\begin{equation}\label{eq:T_eig}
\left( \begin{array}{ccc} {\bf H} & {\bf B} & 0\\ \,\, {\bf 0}_n & -{\bf H}^T & 0 \\
0 & 0& 0 \end{array} \right)
\left( \begin{array}{c} {\bf p} \\ {\bf q} \\ r \end{array} \right) = \mu
\left( \begin{array}{c} {\bf p} \\ {\bf q} \\ r \end{array} \right).
\end{equation}
There is a solution with
$\mu = 0$ and $ {\bf y}^0 = (0,\ldots,0,1)^T$. If $\mu \neq 0$
then $r=0$, and
we have two cases. If $ {\bf q} = {\bf 0}$ then \eq{T_eig} reduces to $ {\bf H} {\bf p} = \mu {\bf p}$.
Hence, $\mu = -\mu_k$ and $ {\bf p} = {\bf u}_k$ for some $k$. We will denote this
solution as $ {\bf y}^{-k} = ( {\bf u}_k, {\bf 0}, 0)^T$. If $ {\bf q} \neq {\bf 0}$ then we
must have $ {\bf H}^T {\bf q} = -\mu {\bf q}$, so $\mu = \mu_k$ and $ {\bf q} = \ob{ {\bf v}}_k$ for some
$k$. We denote the solution in this case as
$ {\bf y}^k = (-( {\bf H} -\mu_k {\bf I})^{-1} {\bf B}\ob{ {\bf v}}_k, \ob{ {\bf v}}_k , 0)^T$.
\end{proof}
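The explicit eigenvector formulas can be confirmed numerically. A sketch using the second-order filter \eq{filter2} with illustrative parameter values (here $ {\bf H}$ has real eigenvalues, so $\ob{ {\bf v}}_k = {\bf v}_k$):

```python
import numpy as np

# Second-order filter from eq. (filter2) with illustrative parameter values.
mu1, mu2, beta = 1.8, 0.9, 1.0
H = np.array([[-mu1, 0.0], [beta, -mu2]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])
T = np.block([[H, B, np.zeros((2, 1))],
              [np.zeros((2, 2)), -H.T, np.zeros((2, 1))],
              [np.zeros((1, 5))]])

mus, V = np.linalg.eig(H.T)             # H^T v_k = -mu_k v_k (real here, so vbar_k = v_k)
for k in range(2):
    mu_k = -mus[k]
    v_k = V[:, k]
    # y^k = ( -(H - mu_k I)^{-1} B vbar_k , vbar_k , 0 )
    y_k = np.concatenate([-np.linalg.solve(H - mu_k*np.eye(2), B @ v_k), v_k, [0.0]])
    assert np.allclose(T @ y_k, mu_k*y_k)

# The zero eigenvalue with y^0 = (0, 0, 0, 0, 1):
assert np.allclose(T @ np.array([0, 0, 0, 0, 1.0]), 0.0)
```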
\begin{remark}\label{rem:mu}
$ {\bf T}$ has the eigenvalue $0$, with
corresponding ladder operator $\mathcal{L}_0=1$. This implies that $\mu_0 = 0$. However,
in this degenerate case, it is convenient for notational purposes
to define $\mu_0 = -\tr{ {\bf H}}$. We will also write $\mu_{-k}$ in
place of $-\mu_k$ to accommodate negative indices in the proof of Lemma \ref{lem:Dsum}.
\end{remark}
\begin{center}
{\bf The Proof of Lemma \ref{lem_commute} }
\end{center}
We denote by $ {\bf w}^{\pm k}$ the normalized adjoint eigenvectors of $ {\bf T}$.
That is, $ {\bf T}^T {\bf w}^{\pm k} = \pm\overline{\mu}_k {\bf w}^{\pm k}$,
$\ip{ {\bf w}^{\pm j}}{ {\bf y}^{\pm k}}=\delta_{jk}$, and
$\ip{ {\bf w}^{\pm j}}{ {\bf y}^{\mp k}} = 0$.
We begin with a preliminary lemma.
\begin{lemma}
\label{lem:EigT}
Let ${\bf u} _k$ and ${\bf v} _k$ be the eigenvectors of
${\bf H}$ as in equations (\ref{eqn_Hu}).
Let
${\bf y}^{\pm k}$, $k=1,\ldots, n$, be the eigenvectors of ${\bf T}$ associated with
the eigenvalue
$\pm \mu_{k}$, and let ${\bf w} ^{\pm k},k=1,\ldots, n$ be the normalized adjoint
eigenvectors. Then for each $k=1,\ldots, n$,
$ {\bf A} {\bf y} ^{\pm k} = {\bf \overline{w} } ^{\mp k}$.
For $k=0,1,\ldots, n$,
$ {\bf D} {\bf w} ^{ \pm k} = \overline{\mu}_k \overline{{\bf y}}^{\mp
k}$ (using $\mu_0 = -\tr{ {\bf H}}$ from Remark \ref{rem:mu}).
Finally, $\sum_{k=-n}^n \ob{w}^k_iy^k_j = \delta_{ij}$.
\end{lemma}
\begin{proof}
Note that $ {\bf y}^{\pm k}$ are given explicitly in the proof of Lemma
\ref{lem:T} and for $k\neq 0$
\begin{equation}
\label{eq:yk}
{\bf y}^{-k} = ( {\bf u}_k, {\bf 0}, 0)^T,\qquad
{\bf y}^k = (-( {\bf H} -\mu_k {\bf I})^{-1} {\bf B}\ob{ {\bf v}}_k, \ob{ {\bf v}}_k , 0)^T.
\end{equation}
We define
\begin{equation}
\label{eq:wk}
{\bf w}^k = ({\bf 0}, -\ob{ {\bf u}}_k,0)^T, \qquad
{\bf w}^{-k}=( {\bf v}_k, ( {\bf H}-\overline{\mu}_k {\bf I})^{-1} {\bf B} {\bf v}_k,0)^T
\end{equation}
and ${\bf y}^0 = {\bf w}^0 = (0,\ldots,0,1)^T$.
It is straightforward
to check that
$\ip{ {\bf w}^{\pm j}}{ {\bf y}^{\pm k}}=\delta_{jk}$,
$\ip{ {\bf w}^{\pm j}}{ {\bf y}^{\mp k}} = 0$, $ {\bf T}^T {\bf w}^0 ={\bf 0}$,
and for $k\neq 0$, $ {\bf T}^T {\bf w}^{\pm k} = \pm\overline{\mu}_k {\bf w}^{\pm k}$,
so $ {\bf w}^{\pm k}$
are the normalized adjoint eigenvectors.
Applying ${\bf A}$ to the
${\bf y}^{\pm k}$ in \eqref{eq:yk}
gives ${\bf A} {\bf y} ^{\pm k} = {\bf \overline{w} } ^{\mp k}$ for
$k \neq 0$, and hence
applying $ {\bf D}$ to ${\bf A} {\bf y} ^{\pm k} = {\bf \overline{w} } ^{\mp k}$
gives $ {\bf D} {\bf w} ^{ \pm k} = \overline{\mu}_k \overline{{\bf
y}}^{\mp k}$ for $k \neq 0$.
With $\mu_0 = -\tr{ {\bf H}}=\overline{\mu}_0 $ (since ${\bf H}$ is real), we have
$ {\bf D} {\bf w} ^{ 0} = \overline{\mu}_0 \overline{{\bf y}}^{0}$.
(Note that ${\bf A} {\bf y} ^{0} ={\bf 0}$, so without the convention
in Remark \ref{rem:mu} we would not have
$ {\bf D} {\bf w} ^{ 0} = \overline{\mu}_0 \overline{{\bf y}}^{0}$.)
We define the $(2n+1)\!\times \! (2n+1)$
matrices $ {\bf Y} = [ {\bf y}^{-n},\ldots, {\bf y}^n]$ and
$ {\bf W} = [ {\bf w}^{-n},\ldots, {\bf w}^n]$,
then $ {\bf W}^* {\bf Y} = {\bf I}_{2n+1}$ because $(\ob{ {\bf w}}^i)^T {\bf y}^j = \delta_{ij}$ for
$-n \leq i,j \leq n$. But this means $ {\bf Y} {\bf W}^*= {\bf I}_{2n+1}$ as well, and the
components of $ {\bf Y} {\bf W}^*$ are $( {\bf Y} {\bf W}^*)_{ij} = \sum_{k=-n}^n \ob{w}^k_iy^k_j$.
\end{proof}
We now give the proof of
Lemma \ref{lem_commute}.
\begin{proof}[Proof of Lemma \ref{lem_commute}]
Recall ${\bf A}$ was defined as having coefficients $a_{mp}=[L_m,L_p]$.
Writing out $[\mathcal{L}_{\pm j},\mathcal{L}_k]$ in terms of the $L_m$ we have
\begin{align*}
[\mathcal{L}_{\pm j},\mathcal{L}_k] &= \sum_{m,p=1}^{2n+1} y^{\pm j}_my^k_p[L_m,L_p]
= \sum_{m,p=1}^{2n+1} y^{\pm j}_my^k_pa_{mp} \\
& = ( {\bf y}^{\pm j})^T {\bf A} {\bf y}^k = \langle \ob{ {\bf y}}^{\pm j}, {\bf A} {\bf y}^k \rangle.
\end{align*}
Using $ {\bf A} {\bf y} ^{\pm k} = {\bf \overline{w} } ^{\mp k}$
we have
$\langle \ob{ {\bf y}}^{\pm j}, {\bf A} {\bf y}^k \rangle
=
\langle \ob{ {\bf y}}^{\pm j}, \overline{{\bf w}}^{-k} \rangle
= \overline{\langle {\bf y}^{\pm j}, {\bf w}^{-k} \rangle}$.
Hence, $[\mathcal{L}_j,\mathcal{L}_k] = \overline{ \ip{{\bf y} ^{+j}} {{\bf w } ^{-k}} }=0$
and $[\mathcal{L}_{-j},\mathcal{L}_{k}] =
\overline{\ip{ {\bf y}^{-j}}{ {\bf w}^{-k}}}=\delta_{jk}$.
\end{proof}
\begin{center}
{\bf The Proof of Lemma \ref{lem:Dsum} }
\end{center}
\begin{proof}[Proof of Lemma \ref{lem:Dsum}]
We first consider $\frac{\mu_k}{2}\mathcal{L}_{-k}\mathcal{L}_{k} =
\sum_{p,m=1}^{2n+1}\frac{\mu_k}{2}y^{-k}_my^k_pL_mL_p$,
for each $k =-n, \ldots, n$, using the conventions in Remark
\ref{rem:mu}.
For each $k$, ${\bf D} \overline{{\bf w} }^k=\mu_k {\bf y}^{-k}$, which
follows from Lemma \ref{lem:EigT}. Hence,
$y^{-k}_m = \frac{1}{\mu_k}\sum_{q=1}^{2n+1}d_{mq}\ob{w}^{k}_q$, so
if we replace the term $y^{-k}_m$ in the above expression
for $\frac{\mu_k}{2}\mathcal{L}_{-k}\mathcal{L}_{k}$, and sum over $k$, we get
\begin{align*}
\sum_{k=-n}^n \frac{\mu_k}{2}\mathcal{L}_{-k}\mathcal{L}_{k}
& = \sum_{k=-n}^n \sum_{p,m,q=1}^{2n+1}\frac{\mu_k}{2}\frac{1}{\mu_k}d_{mq}
\ob{w}_q^k y^k_pL_mL_p \\
& = \sum_{p,m,q=1}^{2n+1} \frac{1}{2}d_{mq}L_mL_p \sum_{k=-n}^n \ob{w}_q^k y^k_p.
\end{align*}
From Lemma \ref{lem:EigT}, $\sum_{k=-n}^n \ob{w}_q^k y^k_p = \delta_{qp}$, so
\begin{equation}\label{eq:D1}
\sum_{k=-n}^n \frac{\mu_k}{2}\mathcal{L}_{-k}\mathcal{L}_{k}= \sum_{p,m,q=1}^{2n+1}
\frac{1}{2}d_{mq} \delta_{qp} L_mL_p
= \sum_{p,m=1}^{2n+1}\frac{1}{2}d_{mp}L_mL_p = \mathcal{D}.
\end{equation}
For each $k > 0$,
we can write $\frac{\mu_{k}}{2}\mathcal{L}_{k}\mathcal{L}_{-k} = \frac{\mu_{k}}{2}
\left( \mathcal{L}_{-k}\mathcal{L}_{k} -1\right)$ by the result of
Lemma \ref{lem_commute}. Combining this with \eq{D1} and using $\mu_0 = -\tr{ {\bf H}}$
we can write $\mathcal{D}$ as
\begin{equation*}
\mathcal{D} = -\frac{1}{2}\tr{ {\bf H}} + \sum_{k=1}^n \left\{
\frac{\mu_k}{2}\mathcal{L}_{-k}\mathcal{L}_{k} +\frac{\mu_{k}}{2}
\left( \mathcal{L}_{-k}\mathcal{L}_{k} -1\right) \right\}.
\end{equation*}
But the eigenvalues of $ {\bf H}$ are $-\mu_k$, hence $\tr{ {\bf H}} = -\sum_{k=1}^n\mu_k$ and
we have $\mathcal{D} = \sum_{k=1}^n\mu_k\mathcal{L}_{-k}\mathcal{L}_{k}$.
\end{proof}
\section{Appendix B: Supplementary Material for \S \ref{sec:eig}}
\begin{center}
{\bf Proof of Lemma \ref{lem:boundD} }
\end{center}
\begin{proof}
Suppose $\chi$ is an eigenvalue of $\mathcal{D}$ with eigenfunction $\phi$,
$\int_{\mathbb{R}^n}|\phi|^2d {\bf s} = 1$.
If we multiply \eq{eig_prob} by $\overline{\phi}$, use the
definition of $\mathcal{D}$ in \eqref{eq:defD}, integrate over all
of space, and integrate the term involving ${\bf B}$ by parts, we get
\begin{align}
\chi \int_{\mathbb{R}^n}|\phi|^2d {\bf s} = \int_{\mathbb{R}^n}\ob{\phi}\mathcal{D} \phi d {\bf s} &= \int_{\mathbb{R}^n}
\ob{\phi}\frac{1}{2}\Div{ {\bf B} \nabla \phi}-\ob{\phi}\Div{ {\bf H} {\bf s} \phi} d {\bf s} \nonumber \\
& = -\int_{\mathbb{R}^n}\frac{1}{2}\ip{\nabla \phi}{ {\bf B} \nabla \phi}+
\ob{\phi}\Div{ {\bf H} {\bf s} \phi} d {\bf s} \nonumber
\end{align}
The matrix $ {\bf B}$ is positive semi-definite, so $\ip{\nabla \phi}{ {\bf B} \nabla \phi}
\geq 0$, hence $\rp{\chi} \leq \rp{-\int_{\mathbb{R}^n}\ob{\phi}\Div{ {\bf H} {\bf s} \phi} d {\bf s}}$.
But, because $ {\bf H}$ is real,
\begin{align}
2\rp{\int_{\mathbb{R}^n}\ob{\phi}\Div{ {\bf H} {\bf s} \phi} d {\bf s}} &=
\int_{\mathbb{R}^n}\ob{\phi}\Div{ {\bf H} {\bf s} \phi}+\phi\Div{ {\bf H} {\bf s} \ob{\phi}} d {\bf s}. \nonumber
\end{align}
If we integrate the first term on the right in this expression by parts, and
expand the second term we get
\begin{align}
2\rp{\int_{\mathbb{R}^n}\ob{\phi}\Div{ {\bf H} {\bf s} \phi} d {\bf s}} &=
\int_{\mathbb{R}^n}-\nabla \ob{\phi}\cdot (\phi {\bf H} {\bf s}) +
\phi(\ob{\phi}\tr{ {\bf H}} + ( {\bf H} {\bf s}) \cdot \nabla\ob{\phi})d {\bf s} \nonumber \\
& = \int_{\mathbb{R}^n}|\phi|^2\tr{ {\bf H}}d {\bf s} = \tr{ {\bf H}}.\nonumber
\end{align}
Hence, $\rp{\chi}\leq -\frac{1}{2}\tr{ {\bf H}}$.
\end{proof}
\begin{center}
{\bf Proof of Lemma \ref{lem:Sigma} }
\end{center}
The proof of Lemma \ref{lem:Sigma} follows almost
immediately from a few preliminary lemmas.
\begin{lemma}\label{lem:SigmaInverse}
Suppose the eigenvectors
${\bf q}_k$ of ${\bf H}$ are complete and the adjoint eigenvectors
${\bf p}_k$
are normalized so $\ip{{\bf p}_j}{{\bf q}_k}=\delta_{jk}$.
Let ${\bf P} = [{\bf p} _1,{\bf p} _2,\ldots,{\bf p} _n]$,
${\bf Q} = [{\bf q} _1,{\bf q} _2,\ldots,{\bf q} _n]$.
We have
\begin{equation}
{\bf H} {\bf \Sigma} ^{-1} + {\bf \Sigma} ^{-1} {\bf H} ^T = - {\bf B}
\label{eqn_siginv}
\end{equation}
where ${\bf \Sigma}^{-1} = {\bf P} {\bf Q} ^{-1}$.
\end{lemma}
\begin{remark}
Lemmas \ref{lem_carlson} and \ref{lem:Sigma}
show that, with the appropriate assumptions on $ {\bf H}$ and $ {\bf B}$,
the matrix $ {\bf P}$ is invertible
and thus there exists a nonsingular matrix ${\bf \Sigma} = {\bf Q} {\bf P}^{-1}$, so our use of the notation
${\bf \Sigma}^{-1}$ is appropriate.
\end{remark}
\begin{proof}
According to equation (\ref{eq:T_eig}) we have
${\bf H} {\bf p} _k + {\bf B} {\bf q} _k = \mu_k {\bf p} _k$, and
$- {\bf H} ^T {\bf q}_k = \mu_k {\bf q} _k$.
Writing this out in matrix form we get
${\bf H} {\bf P} + {\bf B} {\bf Q} = {\bf P} {\bf M}$,
$- {\bf H} ^T {\bf Q} = {\bf Q} {\bf M}$.
Here ${\bf M}$ is the diagonal matrix with $\mu_k$ as its $k$th
diagonal entry. Using the second of these equations to
write ${\bf M}$ in terms of ${\bf Q}$ and ${\bf H}$,
and assuming ${\bf Q}$ is invertible (the eigenvectors of
${\bf H}$ are complete) we get
${\bf M} = - {\bf Q}^{-1} {\bf H} ^T {\bf Q} $.
Substituting this into the first equation we get
$ {\bf H} {\bf P} + {\bf B} {\bf Q} = - {\bf P} {\bf Q} ^{-1} {\bf H} ^T {\bf Q}$.
If we multiply this by ${\bf Q} ^{-1}$ on the right and rearrange,
we get the result of the
lemma.
\end{proof}
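Lemma \ref{lem:SigmaInverse} is straightforward to check numerically. The Python sketch below (our illustration; the particular stable ${\bf H}$ and positive semi-definite ${\bf B}$ are random choices, not taken from the text) builds ${\bf Q}$ from the eigenvectors of $-{\bf H}^T$, solves for the columns of ${\bf P}$, and verifies that ${\bf \Sigma}^{-1} = {\bf P}{\bf Q}^{-1}$ satisfies the Lyapunov equation \eqref{eqn_siginv}.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 4
H = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # shift makes H stable
M = rng.standard_normal((n, n))
B = M @ M.T                                         # positive semi-definite

# Columns of Q are eigenvectors of -H^T:  -H^T q_k = mu_k q_k.
mu, Q = np.linalg.eig(-H.T)
# Columns of P solve (H - mu_k I) p_k = -B q_k, i.e. H p_k + B q_k = mu_k p_k.
P = np.column_stack([np.linalg.solve(H - mu[k] * np.eye(n), -B @ Q[:, k])
                     for k in range(n)])

Sigma_inv = P @ np.linalg.inv(Q)

# Lemma: H Sigma^{-1} + Sigma^{-1} H^T = -B.
assert np.allclose(H @ Sigma_inv + Sigma_inv @ H.T, -B)
# Cross-check against the standard Lyapunov solver.
assert np.allclose(Sigma_inv, solve_continuous_lyapunov(H, -B))
```

In practice ${\bf \Sigma}^{-1}$ would be computed directly with a Lyapunov solver; the eigenvector construction above simply mirrors the proof.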
We will use the following result for controllable pairs,
which follows immediately from Theorem 2 in \cite{carlson}.
\begin{lemma}
\label{lem_carlson}
If ${\bf B}$ is positive semi-definite, and the eigenvalues of
${\bf H}$ all have real parts less than zero, then
the solution to
${\bf H} {\bf R} + {\bf R} {\bf H} ^T = - {\bf B}$ is symmetric and
positive definite provided $({\bf H}, {\bf B})$
form a controllable pair.
\end{lemma}
Lemma \ref{lem:Sigma}
follows almost immediately from the previous two lemmas.
\begin{center}
{\bf Some lemmas used in the proof of Theorem \ref{thm:integ} }
\end{center}
\begin{lemma}\label{lem_commute_pow}
For any integer $m \geq 0$,
the operators $\mathcal{L}_k$ and $\mathcal{L}_{-k}$ satisfy
\begin{equation}
\left[ \mathcal{L}_{-k}^{m+1}, \mathcal{L}_k \right] =
(m+1) \mathcal{L}_{-k}^m
\end{equation}
\end{lemma}
\begin{proof}
For $m=0$, this follows immediately from Lemma
\ref{lem_commute}.
We can now proceed by induction. In particular, if
$\mathcal{L}_{-k} ^m \mathcal{L} _k - \mathcal{L} _k \mathcal{L}_{-k}^m = m \mathcal{L}_{-k} ^{m-1}
$, then if we multiply both sides of this equation on the right by $\mathcal{L}_{-k}$ and
use $\mathcal{L}_{-k}^m \mathcal{L}_k \mathcal{L}_{-k}=
\mathcal{L}_{-k}^m \left( -I + \mathcal{L}_{-k} \mathcal{L}_k \right) =
\mathcal{L}_{-k}^{m+1} \mathcal{L}_k - \mathcal{L}_{-k}^m$, we find that
$\mathcal{L}_{-k}^{m+1} \mathcal{L}_k - \mathcal{L}_k \mathcal{L}_{-k}^{m+1} = (m+1)
\mathcal{L}_{-k}^m$, which proves the lemma.
\end{proof}
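The only property of the ladder operators used here is the commutation relation $[\mathcal{L}_{-k},\mathcal{L}_k]=I$. As an illustrative sanity check (not part of the proof), one can realize this relation with the simplest pair of operators satisfying it, differentiation and multiplication by $s$, and verify the lemma symbolically:

```python
import sympy as sp

s = sp.symbols('s')
f = sp.exp(-s**2 / 2) * (1 + s + s**3)   # an arbitrary smooth test function

# Model: L_minus f = f'(s), L_plus f = s f(s), so [L_minus, L_plus] = I,
# the same commutation relation as the ladder operators (illustration only).
for m in range(4):
    lhs = sp.diff(s * f, s, m + 1) - s * sp.diff(f, s, m + 1)  # [L_minus^{m+1}, L_plus] f
    rhs = (m + 1) * sp.diff(f, s, m)
    assert sp.simplify(lhs - rhs) == 0
```

For this model pair the identity is just Leibniz's rule, $(sf)^{(m+1)} - s f^{(m+1)} = (m+1) f^{(m)}$, which is the statement of the lemma.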
\begin{lemma}\label{lem_efuncs}
Let ${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
and let ${\bf k} = (k_1,k_2,\ldots,k_n)$ be a vector of nonnegative integers.
Let
\begin{equation}
\Phi_{{\bf k}}({\bf s}) = \mathcal{L}_{-1}^{k_1}
\mathcal{L}_{-2}^{k_2}\cdots\mathcal{L}_{-n}^{k_n}
\Phi_0({\bf s}),
\end{equation}
then $\Phi_{{\bf k}}( {\bf s}) $ is a nonzero eigenfunction of $\mathcal{D}$, with eigenvalue
\begin{equation}\label{eq:chi_k}
\chi_{{\bf k}} = - \sum_{j=1}^n k_j \mu_j
\end{equation}
\end{lemma}
\begin{proof}
We begin by showing that $\mathcal{L}_{-k}^m \Phi_0({\bf s})$ is
nonzero for all $m\geq 0$. This clearly holds for $m=0$ by Lemma \ref{lem:phi0}.
By induction we can see that if it is nonzero for $m-1$, then it is non-zero for $m$.
This follows from the fact that
$\left[ \mathcal{L}_{-k}^m,\mathcal{L}_k \right] = m \mathcal{L}_{-k}^{m-1}$, and the
fact that $\mathcal{L}_k \Phi_0 =0$. Combining these two facts we get
$- \mathcal{L}_{k} \mathcal{L}_{-k}^m \Phi_0({\bf s}) = m \mathcal{L}_{-k}^{m-1} \Phi_0({\bf s})$.
This shows that if $\mathcal{L}_{-k}^m \Phi_0({\bf s})$ vanished, then
$\mathcal{L}_{-k}^{m-1} \Phi_0({\bf s})$ would also have to vanish.
Since, by the induction hypothesis, the latter is nonzero,
it follows that $\mathcal{L}_{-k}^m \Phi_0({\bf s} )$ does not vanish, and
hence, by induction, it is nonzero
for every $m\geq 0$.
To show that a general function $\Phi_{{\bf k}}({\bf s})$ does not
vanish, we can proceed by a different induction proof. In
particular, since the operator $\mathcal{L}_{-1}$ commutes with both
$\mathcal{L}_{-2}$ and $\mathcal{L}_2$ we see that for any operator
$Z$ of the form $Z= \mathcal{L}_{-1}^p$ where $p$ is a non-negative integer, we have
$\left[Z \mathcal{L}_{-2}^m , \mathcal{L}_{2} \right] = m Z\mathcal{L}_{-2}^{m-1} $.
We can now use almost the identical argument as in the last paragraph to
show that any function of the form $Z \mathcal{L}_{-2}^m \Phi_0$ will be non-zero.
We can now carry out this process by induction to see that
any function of the form $\Phi_{{\bf k} }({\bf s})$ will be nonzero.
Once we know that $\Phi_{{\bf k} }( {\bf s})$ is nonzero, it is clear from
the ladder operator formalism that its eigenvalue must have the
form in \eqref{eq:chi_k}.
\end{proof}
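A concrete illustration of Lemma \ref{lem_efuncs} (our reduction, for the scalar case only) is $n=1$ with ${\bf H} = -\mu$ and ${\bf B} = b$, $\mu, b > 0$. The operator becomes $\mathcal{D}\phi = \tfrac{b}{2}\phi'' + \mu (s\phi)'$, the ground state is the Gaussian $\Phi_0 = e^{-\mu s^2/b}$, and, up to normalization, the ladder operator $\mathcal{L}_{-1}$ reduces to $d/ds$ (our identification for this special case), so $\Phi_k = \Phi_0^{(k)}$ should have eigenvalue $-k\mu$, in agreement with \eqref{eq:chi_k}:

```python
import sympy as sp

s = sp.symbols('s')
mu, b = sp.symbols('mu b', positive=True)

# n = 1 instance of the operator: D phi = (b/2) phi'' + mu (s phi)'.
def D(phi):
    return sp.Rational(1, 2) * b * sp.diff(phi, s, 2) + mu * sp.diff(s * phi, s)

Phi0 = sp.exp(-mu * s**2 / b)        # stationary density, up to normalization

for k in range(5):
    Phik = sp.diff(Phi0, s, k)       # ladder construction reduces to d^k/ds^k here
    assert sp.simplify(D(Phik) + k * mu * Phik) == 0   # eigenvalue chi_k = -k mu
```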
There is one subtle point we would like to discuss in
our proof of Theorem \ref{thm:integ}.
Our proof relies on the fact that if $\phi$ is an eigenfunction of
$\mathcal{D}$, then either $\mathcal{L} _k \phi=0$, or $\mathcal{L}_k \phi$ gives a new
eigenfunction whose eigenvalue has a smaller real part.
This relies on the assumption that $\mathcal{L}_k \phi$ remains in the domain
of our operator.
The domain of our operator consists of functions that have moments of
all orders. Clearly, if this is true of $\phi$, this will be true of
$\mathcal{L} _k \phi$. However, we must also make sure that the function
$\mathcal{L} _k \phi$ has sufficient numbers of derivatives to satisfy
our differential equation.
This is clearly true of the eigenfunctions we have found. That is,
they clearly have infinitely many derivatives.
However, we should consider the possibility that there are other
eigenfunctions that we have not accounted for that
are not infinitely differentiable.
General theorems on elliptic operators rule out such eigenfunctions if
${\bf B}$ is positive definite. However, we have only required that
${\bf B}$ be positive semi-definite, and that ${\bf H}$ and ${\bf B}$
form a controllable pair.
A heuristic argument that we have found all of the eigenfunctions
in this less restrictive case is as follows.
If we perturb the matrix ${\bf B}$ to make it positive definite, then
we know we have all of the eigenfunctions. As our perturbation parameter
goes to zero, there is nothing unusual happening to our spectrum (such
as eigenvalues going off to infinity, or clustering about a point).
Hence, if the eigenfunctions are complete for positive definite ${\bf B}$
they are clearly complete in the less restrictive case
where ${\bf B}$ and ${\bf H}$ form a controllable pair.
\section{Appendix C}\label{sec:appB}
In this appendix, we provide formulas for the
asymptotic autocorrelation function of
the process $ {\bf s} (t)$ and the
extended power spectral density
(defined in (\ref{eq:Gdef})) for $ {\bf s} (t)$ as well
as for the filter $\langle \bfa , \bfs (t)\rangle$. In
particular, the results of Theorem \ref{thm:G} and Corollary \ref{cor:S},
are used to
express $\lambda_2$
in Theorems \ref{thm:m2}, \ref{thm:lam2}, and \ref{thm:tensorm2},
and throughout \S \ref{sec:app}. Corollary \ref{cor:S} gives a practical formula
for computing the power spectral densities of $ {\bf s} (t)$ and $\langle \bfa , \bfs (t)\rangle$.
\subsection{The Asymptotic Autocorrelation Function} \label{sec:auto}
We begin by proving a lemma concerning the autocorrelation function
of ${\bf s}(t)$ as defined in equation \eqref{eq:filter}. The process $ {\bf s}(t)$ is not
stationary, but as $t \to \infty$ it approaches a
stationary process; we therefore refer to it as \emph{asymptotically stationary}.
\begin{lemma}
\label{lem_app2}
Suppose
${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
and
let ${\bf s}(t)$ be the solution to equation \eqref{eq:filter} with
zero initial conditions.
As $ t \rightarrow \infty$
the autocorrelation function ${\bf R}(\tau) =
\avg{{\bf s}(t) {\bf s}^T(t+\tau)} $ is given by
\begin{equation}
\label{eqn_Rapp}
{\bf R}(\tau) = {\bf \Sigma}^{-1} e^{ {\bf H} ^T \tau }
\;\;\;\;\mbox{ for $\tau>0$},
\end{equation}
and ${\bf \Sigma}^{-1} = {\bf P} {\bf Q}^{-1}$
satisfies equation (\ref{eqn_siginv}).
\end{lemma}
\begin{proof}
We define
\begin{equation}
\label{eqn_defK}
{\bf K}(t) = e^{ {\bf H} t } {\bf B} e^{ {\bf H}^T t}, \qquad
{\bf K}_0 = \lim_{t \rightarrow \infty}
\int_0^t {\bf K}(t-\sigma ) d\sigma.
\end{equation}
The solution to equation \eqref{eq:filter} (with zero initial conditions)
is given by
\begin{equation*}
{\bf s}(t) = \int_0^t e^{ {\bf H} (t-s)} {\boldsymbol \xi}(s) ds.
\end{equation*}
We can write
\begin{equation*}
{\bf s}(t) {\bf s}^T(t+ \tau)
= \int_0^t \int_0^{t+\tau} e^{ {\bf H} (t-s) }
{\boldsymbol \xi}(s) {\boldsymbol \xi}^T(r) e^{ {\bf H^T} (t+\tau-r) }
dr ds.
\end{equation*}
If we take the expected value of both sides of this equation,
and use the fact that
$\avg{ {\boldsymbol \xi}(s) {\boldsymbol \xi}^T(r) } =
{\bf B} \delta(r-s),$
we arrive at the equation
\begin{equation}
\avg{ {\bf s}(t) {\bf s}^T(t+\tau) }
= \int_0^t e^{ {\bf H} (t-\sigma) }
{\bf B} e^{ {\bf H}^T (t - \sigma)} e^{ {\bf H}^T \tau } d\sigma =
\int_0^t{\bf K}(t-\sigma) d\sigma e^{ {\bf H}^T \tau }.
\label{eqn_temp2}
\end{equation}
When deriving equation (\ref{eqn_temp2}) we have assumed that the
delta function $\delta(r-s)$ is supported inside the region of
integration. This is only guaranteed if $\tau>0$, and
hence equation (\ref{eqn_temp2}) is only valid for $\tau>0$. The expression for
$\tau<0$, is obtained by using the fact that the autocorrelation
function must satisfy ${\bf R} (-\tau) = {\bf R}^T ( \tau)$.
Assuming that all of the eigenvalues of ${\bf H}$ have negative real part,
the process ${\bf s}(t)$ will become stationary as $t \rightarrow \infty$.
We take the limit of equation (\ref{eqn_temp2}) as $t \rightarrow
\infty$ to get
\begin{equation*}
{\bf R} (\tau) = {\bf K}_0 e^{ {\bf H} ^T \tau },
\end{equation*}
where ${\bf K} _0$ is defined in equation (\ref{eqn_defK}).
We now show ${\bf K}_0={\bf \Sigma}^{-1}$ by showing
${\bf K}_0$ satisfies equation (\ref{eqn_siginv}), i.e.
${\bf H} {\bf K}_0 + {\bf K}_0 {\bf H}^T = -{\bf B} $.
We have from \eqref{eqn_defK}
\begin{equation*}
\odone{}{s} {\bf K}(s)
= {\bf H} {\bf K}(s) + {\bf K}(s) {\bf H} ^T .
\end{equation*}
It follows that
\begin{equation*}
{\bf H} {\bf K}_0 + {\bf K}_0 {\bf H}^T =
- \lim_{t \rightarrow \infty }
\int_0^t \odone{}{s} \left( {\bf K} (t-s) \right) ds.
\end{equation*}
We can evaluate this integral using the fundamental theorem of calculus.
When we do this we find that the contribution at $s=0$ vanishes in the limit
as $t \rightarrow \infty$. Since ${\bf K}(0) = {\bf B}$, the
contribution at $s=t$ is just $-{\bf B}$, which completes the proof of
the lemma.
\end{proof}
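The limit ${\bf K}_0$ and the identity ${\bf K}_0 = {\bf \Sigma}^{-1}$ can be checked numerically by direct quadrature of the defining integral (again with random illustrative ${\bf H}$ and ${\bf B}$, not taken from the text):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 3
H = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # stable
M = rng.standard_normal((n, n))
B = M @ M.T                                         # noise covariance

# K0 = int_0^infty e^{H t} B e^{H^T t} dt, by composite trapezoidal quadrature;
# the integrand decays exponentially, so truncating at t = 10 is ample here.
ts = np.linspace(0.0, 10.0, 8001)
K = np.array([expm(H * t) @ B @ expm(H.T * t) for t in ts])
dt = ts[1] - ts[0]
K0 = (K[0] + K[-1]) / 2 * dt + K[1:-1].sum(axis=0) * dt

# K0 satisfies the Lyapunov equation H K0 + K0 H^T = -B, hence K0 = Sigma^{-1}.
assert np.allclose(H @ K0 + K0 @ H.T, -B, atol=1e-3)
assert np.allclose(K0, solve_continuous_lyapunov(H, -B), atol=1e-3)
```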
\subsection{The Extended Power Spectral Density}
The expression for the eigenvalue (with largest real part)
of the perturbed operator
$\mathcal{D} +{\bf \Gamma}_0 + \varepsilon \langle \bfa , \bfs \rangle {\bf \Gamma}_1$ will be written in terms of
the Laplace transform of the asymptotic autocorrelation function
of the
asymptotically stationary filter $\langle \bfa , \bfs (t)\rangle$, which we denote by $G$.
$G$ can be viewed as an extension of the power spectral density,
and has the advantage that
it can be evaluated at points in the complex plane,
outside of the domain of the power spectral density.
\begin{definition}
Let $ {\bf s}(t)$ be an asymptotically stationary stochastic process
(i.e. stationary in the limit
$t\to\infty$) with asymptotic autocorrelation function $ {\bf R}(\tau)$.
We define the \emph{extended power spectral density} of $ {\bf s}(t)$ as
\begin{equation}
\label{eq:Gdef}
{\bf G} (z) = \int_0^\infty {\bf R} (\tau) e^{-z\tau}\, d\tau.
\end{equation}
With this definition, the scalar filter $\langle \bfa , \bfs (t)\rangle$ has extended power spectral density
$G(z) = \ip{ {\bf a}}{ {\bf G} (z) {\bf a}}$. $ {\bf G}$ is indeed an extension of the
power spectral density $ {\bf S}(\omega) = \int_\mathbb{R} {\bf R}(\tau)e^{-i\omega \tau}d\tau$,
because the domain of $ {\bf G}$ contains the set $\{z\in \mathbb{C} : \rp{z}\geq 0 \}$.
In particular, $\rp{ {\bf G} (i\omega)} = \frac{1}{2} {\bf S}(\omega)$, which follows from
$ {\bf R}^T(\tau) = {\bf R}(-\tau)$.
\end{definition}
\begin{theorem}\label{thm:G}
If
${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
then
the extended power spectral density ${\bf G}(z)$ for the
asymptotically stationary process ${ {\bf s}} (t)$,
defined in \eqref{eq:filter}, is given by
\begin{equation}
\label{eq:Gepsd}
{\bf G}(z) = -{\bf \Sigma}^{-1}\left( {\bf H}^T - z {\bf I} \right)^{-1},
\end{equation}
provided $\rp{\mu_{l}+z} >0$ for $l=1,\ldots , n$.
Furthermore,
the extended power spectral density $G(z)$ for the asymptotically stationary
filter $\langle \bfa , \bfs (t)\rangle$ can be written as
\begin{equation}
\label{eq:G}
G(z) = \ip{ {\bf a}}{ {\bf G} (z) {\bf a}} = -\sum_{l=1}^n \frac{\alpha_l \beta_l}{\mu_{l}+z},
\end{equation}
where $\alpha_l, \, \beta_l$ are defined in \eq{coeffs1}.
\end{theorem}
\begin{proof}
In Lemma \ref{lem_app2}, we showed that the
autocorrelation function of
the asymptotically stationary
process $ {\bf s}(t)$, in the limit $t\to \infty$, is given by
$ {\bf R} (\tau) = {\bf \Sigma}^{-1} e^{ {\bf H}^T \tau}$ where
\begin{equation}\label{eq:r0}
{\bf \Sigma}^{-1} = \lim_{t\to \infty}\int_0^t e^{ {\bf H} (t-s)}\, {\bf B}\, e^{ {\bf H}^T (t-s)} \, ds.
\end{equation}
From $ {\bf R} (\tau) = {\bf \Sigma}^{-1} e^{ {\bf H}^T \tau}$, we have
\[
\int_0^\infty {\bf R}(t)e^{-zt} \, dt = -{\bf \Sigma}^{-1}\left( {\bf H}^T - z {\bf I} \right)^{-1},
\]
assuming that $\rp{\mu_{l}+z} >0$ for $l=1,\ldots , n$ so that the integral converges.
Since $ {\bf a}$ is real, we can use \eqref{eq:coeffs2} to write
$ {\bf a} = \sum_{k=1}^n \alpha_k \ob{ {\bf v}}_k = \sum_{k=1}^n \ob{\alpha}_k {\bf v}_k$.
Recall, we defined $ {\bf v}_l$ so that $ {\bf H}^T {\bf v}_l = -\ob{\mu}_l {\bf v}_l$, so we have
$( {\bf H}^T - z {\bf I} )^{-1} \ob{ {\bf v}}_l = \frac{-1}{\mu_{l}+z}\ob{ {\bf v}}_l$
and
$e^{ {\bf H}^T (t-s)}\ob{ {\bf v}}_l = e^{-\mu_{l}(t-s)}\ob{ {\bf v}}_l$ .
Using these expressions along with \eqref{eq:coeffs2}, \eqref{eq:r0},
and $ {\bf B} = {\bf B}^T$, we compute
\begin{align}
G(z) &= -\lim_{t\to \infty}\int_0^t\sum_{l,m=1}^n\ob{\alpha}_m \alpha_l
{\bf v}_m^T e^{ {\bf H} (t-s)}\, {\bf B}\, e^{ {\bf H}^T (t-s)} ( {\bf H}^T - z {\bf I} )^{-1} \ob{ {\bf v}}_l \,
ds \nonumber \\
& = \sum_{l,m=1}^n\frac{\ob{\alpha}_m \alpha_l}{\mu_{l} + z}\ip{\ob{ {\bf v}}_m}{ {\bf B} \ob{ {\bf v}}_l}
\lim_{t\to \infty}\int_0^t e^{-(\ob{\mu}_{m} +\mu_{l})(t-s)}ds\nonumber \\
& = \sum_{l,m=1}^n\frac{\ob{\alpha}_m \alpha_l}{(\ob{\mu}_{m} + \mu_{l})(\mu_{l} + z)}
\ip{ {\bf v}_l}{{\bf B} {\bf v}_m}
=-\sum_{l=1}^n \frac{\alpha_l \beta_l}{\mu_{l}+z} .
\end{align}
\end{proof}
\begin{corollary}
\label{cor:S}
If
${\bf H}$ and ${\bf B}$ satisfy the basic conditions (Def. \ref{def:bc}),
then
the power spectral density $S(\omega)$ of the
asymptotically stationary filter
$\langle \bfa , \bfs (t)\rangle$ is given by $S(\omega) = \ip{ {\bf a}}{ {\bf S} (\omega) {\bf a}}$, where
${\bf S}(\omega)$ is the power spectral density of the
asymptotically stationary process
${ {\bf s}} (t)$, defined in \eqref{eq:filter}, and
\begin{equation}\label{eq:Spsd}
{\bf S} (\omega) = \left( {\bf H} + i \omega {\bf I} \right)^{-1}
{\bf B} \left( {\bf H} ^T - i \omega {\bf I} \right)^{-1}.
\end{equation}
\end{corollary}
\begin{proof}
Using the expression for ${\bf G}$ in equation (\ref{eq:Gepsd}) we get
\begin{align*}
{\bf S}(\omega) &= 2\rp{{\bf G}(i\omega) } = {\bf G}(i\omega)+{\bf
G}(i\omega)^* \\
& = -{\bf \Sigma}^{-1}\left( {\bf H}^T - i \omega {\bf I} \right)^{-1} -
\left( {\bf H} + i \omega {\bf I} \right)^{-1} {\bf \Sigma}^{-1} \\
& = - \left( {\bf H} + i \omega {\bf I} \right) ^{-1}
\left( ({\bf H} + i \omega {\bf I} ) {\bf \Sigma}^{-1} +
{\bf \Sigma}^{-1} ({\bf H}^T - i \omega {\bf I} ) \right)
\left( {\bf H}^T - i \omega {\bf I} \right)^{-1} \\
& = - \left( {\bf H} + i \omega {\bf I} \right) ^{-1}
\left( {\bf H} {\bf \Sigma}^{-1} +
{\bf \Sigma}^{-1} {\bf H}^T \right)
\left( {\bf H}^T - i \omega {\bf I} \right)^{-1}\\
&= \left( {\bf H} + i \omega {\bf I} \right) ^{-1}
{\bf B}
\left( {\bf H}^T - i \omega {\bf I} \right)^{-1}.
\end{align*}
The asymptotic autocorrelation function for $\langle \bfa , \bfs (t)\rangle = {\bf a}^T {\bf s} (t)$ is
given by
$\avg{ {\bf a}^T {\bf s}(t) {\bf s}^T(t+\tau) {\bf a}}=\ip{ {\bf a}}{ {\bf R}(\tau) {\bf a}}$.
Hence $S (\omega) = \ip{ {\bf a}}{ {\bf S}(\omega) {\bf a}}$.
\end{proof}
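Both formulas can be verified numerically at once: with ${\bf \Sigma}^{-1}$ obtained from the Lyapunov equation, ${\bf S}(\omega) = {\bf G}(i\omega) + {\bf G}(i\omega)^*$ reproduces the product form of \eqref{eq:Spsd}, and the resulting matrix is Hermitian positive semi-definite, as a power spectral density must be (random illustrative ${\bf H}$, ${\bf B}$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 3
H = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # stable
M = rng.standard_normal((n, n))
B = M @ M.T

Sigma_inv = solve_continuous_lyapunov(H, -B)        # H X + X H^T = -B
I = np.eye(n)

for omega in (0.0, 0.7, 3.0):
    G = -Sigma_inv @ np.linalg.inv(H.T - 1j * omega * I)   # extended PSD, eq. (eq:Gepsd)
    S = np.linalg.inv(H + 1j * omega * I) @ B @ np.linalg.inv(H.T - 1j * omega * I)
    assert np.allclose(G + G.conj().T, S)           # S(omega) = 2 Re G(i omega)
    assert np.all(np.linalg.eigvalsh(S) >= -1e-10)  # Hermitian positive semi-definite
```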
\section{Appendix D: Supplementary Material for \S \ref{sec:pert}}
\begin{center}
{\bf Proof of Lemma \ref{lem:alpha_beta} }
\end{center}
\begin{proof}
We begin by defining
\begin{equation}\label{eq:coeffs1}
\alpha_k = \left( {\bf U}^T {\bf a}\right)_k, \quad
\beta _k = -\sum_{m=1}^n\frac{\ob{\alpha}_m}{\ob{\mu}_{m} + \mu_{k}}\ip{ {\bf v}_k}{ {\bf B} {\bf v}_m}
\end{equation}
where $ {\bf U} = [ {\bf u}_1 , {\bf u}_2, \ldots , {\bf u}_n]$. Recall, $\{ {\bf u}_j\}$ are the
eigenvectors of $ {\bf H}$ and $\{ {\bf v}_j\}$ are the normalized adjoint vectors.
With $ {\bf y}^{\pm k} = ( {\bf p}_{\pm k} , {\bf q}_{\pm k}, 0)^T$, from Lemma \ref{lem:T}, we know
the ladder operators can be written as
\begin{equation*}
\mathcal{L}_k = {\bf p}_k \cdot \nabla + {\bf q}_k \cdot {\bf s}, \quad k=-n,\ldots,-1,1,\ldots,n,
\end{equation*}
with $ {\bf p}_{\pm k}$ and $ {\bf q}_{\pm k}$ given explicitly in the proof of Lemma \ref{lem:T}.
From these we see that for \eq{as} to be satisfied we must have
\begin{equation}\label{eq:coeffs2}
\sum_{k=1}^n \alpha_k \ob{ {\bf v}}_k = {\bf a},\quad
\sum_{k=1}^n \beta_k {\bf u}_k = \sum_{j=1}^n \alpha_j ( {\bf H} -\mu_j {\bf I})^{-1} {\bf B} \ob{ {\bf v}}_j.
\end{equation}
Hence, $\ob{ {\bf V}}{\boldsymbol \alpha} = {\bf a}$, where $ {\bf V}=[ {\bf v}_1 , {\bf v}_2, \ldots , {\bf v}_n]$.
But $ {\bf V}^* {\bf U} = {\bf I}$, so the first expression in
\eq{coeffs2} is equivalent to the definition of $\alpha_k$ in \eq{coeffs1}.
Also, since the $\{ {\bf u}_k\}$ are complete,
and $(( {\bf H}-\mu_j {\bf I})^{-1})^* {\bf v}_k =-(\ob{\mu}_k + \ob{\mu}_j)^{-1} {\bf v}_k$
we conclude
\begin{align}
\beta_k & = \ip{ {\bf v}_k}{\sum_{j=1}^n \alpha_j
( {\bf H} -\mu_j {\bf I})^{-1} {\bf B} \ob{ {\bf v}}_j} \nonumber \\
& = -\sum_{j=1}^n \frac{\alpha_j}{\mu_k + \mu_j} \ip{ {\bf v}_k}{ {\bf B} \ob{ {\bf v}}_j}
= -\sum_{j=1}^n \frac{\ob{\alpha}_j}{\mu_k + \ob{\mu}_j} \ip{ {\bf v}_k}{ {\bf B} {\bf v}_j}, \nonumber
\end{align}
where the last equality follows from a rearrangement of the sum over $j$, and the
fact that the eigenvectors $ {\bf v}_j$ and eigenvalues $\mu_j$ come in conjugate pairs.
Thus, with $\alpha_k,\, \beta_k$ defined as in \eq{coeffs1}, the equations in \eq{coeffs2}
are satisfied, and therefore \eq{as} holds.
\end{proof}
\begin{center}
{\bf Proof of Theorem \ref{thm:m2} }
\end{center}
\begin{proof}
From Lemma \ref{lem:quick} we
have
\begin{align}
\lambda_2 &=-\ip{\boldsymbol \psi_1}{\sum_{k=1}^n\alpha_k {\bf \Gamma}_1 {\bf c}_k}=
-\ip{\boldsymbol \psi_1}{ \sum_{m=1}^n \sum_{j=1}^J \frac{\alpha_m \beta_m \ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}}
{\nu_1 - \nu_j + \mu_m}{\bf \Gamma}_1 \boldsymbol \phi_j } \nonumber \\
& = -\sum_{j=1}^J \ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_j}\ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}
\sum_{m=1}^n \frac{\alpha_m \beta_m }
{\nu_1 - \nu_j + \mu_m} \nonumber \\
& = \sum_{j=1}^J \ip{\boldsymbol \psi_1}{{\bf \Gamma}_1 \boldsymbol \phi_j}\ip{\boldsymbol \psi_j}{{\bf \Gamma}_1 \boldsymbol \phi_1}
G(\nu_1 - \nu_j).\nonumber
\end{align}
The last equality follows from \eq{G}.
\end{proof}
\nocite{Arnold} \nocite{Dirac} \nocite{Kloeden} \nocite{Asmussen}
\nocite{Lamb} \nocite{Kampen}
\bibliographystyle{plain}
\section{Introduction}
Topological insulators (TIs) belong to a new class of quantum matter, in which unique gapless surface states with linear energy-momentum dispersion (i.e., Dirac fermion states) coexist with gapped bulk states. \cite{Hasan10,Moore10,Qi12} As a consequence of the inherent strong spin-orbit coupling, the spin orientations of the helical surface states are locked transversely to their translational crystal momenta. In the absence of any magnetic scattering, coherent backscattering between time-reversed paths is thus suppressed, leading to the well-known weak antilocalization (WAL) effect, which manifests itself as positive magnetoresistances (MRs) in low perpendicular magnetic fields. \cite{Bergmann84,Hikami80,McCann06,Tkachov11}
Although an observation of the two-dimensional (2D) WAL effect can be a signature of the existence of TI surface states, the strong spin-orbit coupling in the bulk conduction channel can also contribute to a WAL effect. This situation often arises in three-dimensional (3D) TIs, such as p-type Bi$_{2}$Te$_{3}$ and n-type Bi$_{2}$Se$_{3}$, owing to the high levels of unintentional doping which readily occur during sample growth as well as device fabrication. Therefore, the MR data of 3D TIs have often been analyzed in terms of a multichannel-conduction model which considers the potential contributions from both the surface and the bulk states. \cite{Takagaki12,Steinberg11,Chen11,Chen10,Liu11,He11,Checkelsky11} Due to the small surface-to-volume ratios in real samples, a clear-cut separation of the possible surface contribution from the overall carrier transport has remained nontrivial. Even if a surface contribution were separated, it would remain very difficult to associate that contribution with the top surface states, the bottom surface states, or both. Reaching a consensus on this issue will require more experiments employing specifically designed samples (e.g., with both top and bottom gates) and improved material qualities. Equally important, a good theoretical understanding of several issues, such as the coupling mechanisms (e.g., inelastic electron relaxation) between the surface and the bulk states, the electron/hole dephasing processes, the Coulomb interaction effect in the limit of strong spin-orbit scattering, and the band bending effect in individual TI materials, is needed before a quantitative analysis and definitive conclusion about coherent surface transport can be unambiguously drawn.
In this work we have studied the MRs of two exfoliated Bi$_{2}$Te$_{3}$ microflakes between 0.3 and 10 K and under applied backgate voltages $V_{\rm BG}$. We find positive MRs characteristic of the WAL effect in small perpendicular magnetic fields $B$. Our results indicate the emergence of two coherent conduction channels as either the temperature is reduced below 1 K or $V_{\rm BG}$ is increased above a few tens of volts. That is, the prefactor $\alpha$ [Eq.~(\ref{2DWAL})] which characterizes the WAL MR magnitude increases by a factor of $\approx$ 2, from $\approx$ 0.35 to $\approx$ 0.7. These observations are discussed in terms of a possible (partial) decoupling of the surface states from the bulk states. Moreover, we have observed the 2D electron-electron interaction (EEI) effect in the weakly disordered regime, which causes a logarithmic temperature-dependent resistance rise at low temperatures. The extracted Coulomb screening parameter is negative, which faithfully reflects the strong spin-orbit scattering inherent in the TI materials.
This paper is organized as follows. Section II contains our experimental method for sample fabrication and electrical-transport measurements. Section III contains our experimental results of resistance and MR as functions of temperature, magnetic field, and backgate voltage. Comparison and analyses based on existing theoretical concepts and predictions are made. Possible limitations on the deduced information based on current theoretical understanding are discussed. Our conclusion is given in Sec. IV.
\section{Experimental method}
Single crystals of Bi$_{2}$Te$_{3}$ were prepared by melting and annealing high-purity Bi$_{2}$Te$_{3}$ and Te powders (99.999\% purity) in a sealed quartz ampoule which was continuously stirred. The temperature was rapidly increased to 1000 $^\circ$C for a few hours, and then slowly reduced to 500 $^\circ$C to allow the crystalline nucleation during a period of five days. A subsequent annealing for another five days was followed to allow the ampoule temperature to reduce slowly from 500 to 420 $^\circ$C. The mixture was then cooled to room temperature. The x-ray powder diffraction study demonstrated the genuine crystalline condition of no. 166 space group as referred to the PDF card 820358.
\begin{table*}[tb]
\caption{\label{table_1}
Parameters for two Bi$_2$Te$_3$ microflake devices. $t$ is the thickness, $w$ is the width, $L$ is the device length (i.e., the closest distance between the two voltage probes in a four-probe configuration), $\mu$ is the hole mobility, $p$ is the hole concentration, $l$ is the elastic mean free path, and $D$ is the diffusion constant. The values of $\mu$, $p$, $l$ and $D$ are for 10 K.}
\begin{ruledtabular}
\begin{tabular}{ccccccccccccc}
Device & $t$ & $w$ & $L$ & $R(300\,{\rm K})$ & $\rho(300\,{\rm K})$ & $R(10\,{\rm K})$ & $\rho(10\,{\rm K})$ & $\mu$ & $p$ & $l$ & $D$ \\
& (nm) & ($\mu$m) & ($\mu$m) & ($\Omega$) & (m$\Omega$\,cm) & ($\Omega$) & (m$\Omega$\,cm) & (cm$^2$/V\,s) & (cm$^{-3}$) & (nm) & (cm$^2$/s) \\ \hline
BT-15 & 270 & 1.9 & 3.9 & 111 & 1.48 & 35.0 & 0.464 & 1690 & 8.0$\times$$10^{18}$ & 69 & 36 \\
BT-24 & 65 & 2.8 & 1.9 & 424 & 4.06 & 84.0 & 0.805 & 1360 & 5.7$\times$$10^{18}$ & 50 & 23 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
Bi$_2$Te$_3$ microflakes were placed on 300-nm SiO$_2$ capped highly doped n-type Si substrates. Four-probe 20-nm thick Au electrodes on the microflakes were fabricated by photolithography. The inset of Fig.~\ref{fig_1}(a) shows an optical micrograph of the BT-24 device. The thickness $t$ and width $w$ of the microflakes were measured with an atomic force microscope. In the present experiment, the relevant sample length $L$ was taken to be the closest distance between the two voltage probes in a four-probe configuration. This manner of defining the effective sample length could cause an error bar in the extracted sample resistivity (and thus, the sheet resistance $R_\square = \rho /t$) by an amount of at most 10\%, see the Supplemental Material. \cite{supplementary}
MR measurements were performed on an Oxford Heliox $^3$He cryostat with a base temperature of 250 mK and equipped with a 4-T superconducting magnet. The temperature was monitored with calibrated RuO$_2$ and Cernox thermometers. The resistances were measured using a Linear Research LR-700 ac resistance bridge and a Stanford SIM-921 ac resistance bridge. An excitation current of 100 nA was applied to avoid electron heating. That is, the voltage drop along the sample length was $\lesssim k_BT/e$ with this amount of excitation current, where $k_B$ is the Boltzmann constant, and $e$ is the electronic charge. A Keithley Model 2635A dc sourcemeter was utilized to provide backgate voltages. Our device parameters are listed in Table~\ref{table_1}.
\section{Results and discussion}
\subsection{Temperature dependence of sheet resistance}
\begin{figure}[tb]
\includegraphics[scale=0.22]{fig_1.eps}
\caption{\label{fig_1}
(Color online) (a) Resistivity as a function of temperature for BT-15 and BT-24 devices. Inset: an optical micrograph of the BT-24 device. The scale bar is 10 $\mu$m. (b) Sheet resistance as a function of the logarithm of temperature of BT-15 and BT-24 at low temperatures. The straight lines are least-squares fits to Eq.~(\ref{2DEEI}).}
\end{figure}
The temperature dependence of resistivity $\rho$ of BT-15 and BT-24 reveals overall metallic behavior, as shown in Fig.~\ref{fig_1}(a). A metallic feature of $\rho (T)$ has often been observed in Bi$_{2}$Te$_{3}$ and Bi$_{2}$Se$_{3}$ materials because the unintentional defects (free carriers) are readily generated during the sample growth as well as the device fabrication processes, which cause the Fermi level to shift into the bulk valence or conduction band. In Bi$_{2}$Te$_{3}$, the anti-structural defects in which Bi atoms are found on Te sites are responsible for the hole doping. \cite{Hor-jpcm10} At liquid-helium temperatures, our two devices show very different $T$ dependences [Fig.~\ref{fig_1}(b)], strongly suggesting a disorder related phenomenon. The sheet resistance $R_\square$ of BT-24 increases with decreasing $T$ below about 4 K. In contrast, the BT-15 device has a sheet resistance of $R_\square$(10\,K) = 17.1 $\Omega$, which is nearly one order of magnitude smaller than that (124 $\Omega$) of BT-24. As a consequence, $R_\square$ remains roughly constant below 6 K in BT-15. The ln\,$T$ increase in BT-24 between 0.4 and 4 K is due to the 2D EEI effect in the weakly disordered regime. Similar logarithmic temperature dependence of $R_\square$ has been seen in several TI samples. \cite{Liu11,Wang11,Takagaki12} The EEI correction to the residual resistance in a quasi-2D conductor is given by \cite{Altshuler-prl80,Lin-prb87a}
\begin{equation}
\frac{\triangle R _\square (T)}{R_\square (T_0)} = - \frac{e^2}{2 \pi^2 \hbar} \left( 1 - \frac34 F \right) R_\square \ln \left( {\frac{T}{T_0} } \right) \,,
\label{2DEEI}
\end{equation}
where $\triangle R_\square (T) = R_\square (T) - R_\square (T_0)$, $2\pi\hbar$ is the Planck constant, \textit{F} is an electron screening factor, and \textit{T}$_{0}$ is a reference temperature (taken to be 4 K in this work). From least-squares fits to Eq.~(\ref{2DEEI}), we obtain 1$-$3\textit{F}/4 = 1.33 and \textit{F} = $-$0.43, while the EEI theory in its simplest form predicts $0 \lesssim F \lesssim 1$. \cite{F-tilde} Recently, a negative $F$ value has also been extracted by Takagaki {\it et al.} \cite{Takagaki12} in Cu-doped Bi$_{2}$Se$_{3}$, but they did not connect their observation to the strong spin-orbit coupling of the TI materials. In the original theory of Altshuler {\it et al.}, \cite{Altshuler-ssc82} the triplet term of the EEI effect in the diffusion channel is suppressed by strong spin-orbit coupling; however, that theory does not yield a negative $F$ value. Empirically, Wu {\it et al.} \cite{Wu-prb95} previously demonstrated, using a series of TiAl alloys doped with dilute heavy Au atoms, that sufficiently strong spin-orbit coupling can cause a more negative $F$ value. Whether this is also the origin in the TI materials deserves further theoretical clarification. \cite{F-value} We also point out that, due to the finite width of our voltage leads as compared with the relevant sample length in a four-probe configuration, our evaluation of $R_\square$ could be overestimated by at most 10\% (see the Supplemental Material \cite{supplementary}). Taking this largest possible 10\% uncertainty in $R_\square$ [which appears on the right-hand side of Eq.~(\ref{2DEEI})] into account, the extracted $F$ value becomes $F \simeq$ $-$0.52$\pm$0.09.
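For scale, Eq.~(\ref{2DEEI}) with the BT-24 values quoted above predicts only a sub-percent resistance rise over the measured temperature decade; the short numerical sketch below (our plug-in of the quoted numbers, not a fit to the data) makes this explicit.

```python
import math

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s

# Values quoted in the text for device BT-24 (assumed here for illustration)
R_sq = 124.0             # sheet resistance at 10 K, ohm
F = -0.43                # extracted screening factor
T0, T = 4.0, 0.4         # reference temperature and low end of the ln T regime, K

prefactor = e**2 / (2 * math.pi**2 * hbar)                   # ~1.23e-5 S
dR_over_R = -prefactor * (1 - 0.75 * F) * R_sq * math.log(T / T0)

print(f"fractional resistance rise from {T0} K to {T} K: {dR_over_R:.2%}")
```

The predicted rise is roughly half a percent, which also illustrates why, since the correction scales with $R_\square$, the effect is minute in the much lower-$R_\square$ device BT-15.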
Figure~\ref{fig_1}(b) also plots the $R_\square (T)$ of BT-24 under an applied backgate voltage of \textit{V}$_{\rm BG}$ = $+$60 V. The overall $R_\square (T)$ curve is slightly higher than the corresponding curve under zero backgate voltage, consistent with hole-type conduction. The straight line is a least-squares fit to Eq.~(\ref{2DEEI}); it yields a coefficient 1$-$3\textit{F}/4 = 1.23 and \textit{F} = $-$0.31. (Taking a possible 10\% overestimate in the $R_\square$ value into account, the $F$ value would read $ F = -$0.39$\pm$0.08.) This result illustrates that the EEI effect persists in Bi$_2$Te$_3$ when a large positive \textit{V}$_{\rm BG}$ is applied and the surface and bulk states become (partly) decoupled. (The possible decoupling under $V_{\rm BG}$ = $+$60 V is inferred from the measurements of the MR dips in the WAL effect, see below.) On the other hand, since the magnitude of the resistance correction due to the 2D EEI effect in Eq.~(\ref{2DEEI}) scales linearly with $R_\square$, the ln\,$T$ increase in $R_\square$ must thus be minute in BT-15, as mentioned.
In the above discussion, we have ignored the WAL correction to $R_\square (T)$. Theoretically, the WAL effect is known to cause an opposite decrease of $R_\square$ with decreasing $T$, and is given by \cite{Lin-prb87a,Abrahams-prl79} $\triangle R_\square (T) / R_\square (T_0) = -\alpha \tilde{p} (e^2/2 \pi^2 \hbar) R_\square {\rm ln} (T/T_0)$, where $\alpha$ is defined in Eq.~(\ref{2DWAL}), and $\tilde{p}$ is the exponent of temperature in the electron dephasing time $\tau_\varphi \propto T^{- \tilde{p}}$. In our case, this contribution does not seem to become important until $T$ is lowered to subkelvin temperatures. Indeed, the seeming saturation of $R_\square$ below $\sim$ 0.4 K in BT-24 under zero backgate voltage is not due to Joule heating (as was discussed in the Experimental Method); it is most likely a signature of the onset of the WAL effect. This interpretation is supported by the fact that an onset of the downward deviation from the ln\,$T$ dependence occurs at a slightly higher $T \approx$ 0.7 K when a $V_{\rm BG}$ = $+$60 V was applied, which caused a slightly more negative $\alpha$ value [Fig.~\ref{fig_4}(b)], and hence a slightly larger WAL contribution. \cite{Bergmann84, Lin-prb87a} We may roughly estimate that the EEI term is $\sim$ 5 times greater than the WAL term, i.e., $(1 - 3F/4)/|\alpha \tilde{p}| \approx 5$, where we have used our experimental values of $\alpha \sim - 0.5$ and $\tilde{p} \approx$ 0.5 (see Sec. III B). In short, the observation of increasing $R_\square$ with decreasing $T$ in Fig.~\ref{fig_1}(b) confirms the important role played by the 2D EEI effect in the presence of both weak disorder and strong spin-orbit coupling in Bi$_2$Te$_3$. \cite{strictly} To the best of our knowledge, the connection of negative $F$ values with strong spin-orbit coupling in TIs has not been pointed out in any previous theoretical or experimental study.
Figure~\ref{fig_2}(a) shows the MR data of the BT-15 and BT-24 devices at 0.3 K with the $B$ field applied perpendicular to the microflake plane. Note that the MR (ignoring the dip around $B$ = 0) reveals an approximate \textit{B}$^{2}$ dependence in the low magnetic field regime of $|B| \lesssim 1$ T, see the inset of Fig.~\ref{fig_2}(a). This may be tentatively ascribed to the classical MR and approximated by \cite{MR} $[R(B)-R(0)]/R(0) \simeq (\mu B)^{2}$. We thus obtain the mobility $\mu \approx$ 1690 (1360) cm$^{2}$/V\,s in BT-15 (BT-24). The hole concentration $p = 1/(\rho |e| \mu)$ is then calculated to be $\approx 8.0 \times 10^{18}$ ($\approx 5.7 \times 10^{18}$) cm$^{-3}$ in BT-15 (BT-24). \cite{Hall} These values of $\mu$ and $p$ are in line with those deduced for Bi$_2$Te$_3$ samples with comparable resistivities. \cite{He11} Using the free-electron model, we estimate the thermal diffusion length $L_T = \sqrt{D \hbar/k_BT} >$ 65 nm at $T \lesssim$ 4 K in BT-24 ($D$ being the diffusion constant). Therefore, the 2D EEI effect may occur at low $T$ in this device, as we have seen above.
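The arithmetic behind the quoted carrier parameters is easily reproduced. The sketch below uses the BT-24 numbers stated in the text ($R_\square$ = 124 $\Omega$, thickness 65 nm, $\mu$ = 1360 cm$^2$/V\,s) together with $p = 1/(\rho |e| \mu)$; it is an illustrative back-of-envelope check, not the authors' analysis.

```python
e = 1.602176634e-19      # elementary charge, C

R_square = 124.0         # sheet resistance of BT-24, ohm
t = 65e-9                # microflake thickness, m
rho = R_square * t       # resistivity, ohm m  (~ 8.1e-6 ohm m)

mu = 1360e-4             # mobility, m^2/(V s)
p = 1.0 / (rho * e * mu) # hole concentration, m^-3

print(p * 1e-6)          # ~ 5.7e18 cm^-3, matching the value quoted for BT-24
```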
Furthermore, the overall shape of the MR curves of BT-15 and BT-24 in the wide magnetic field range $|B| \leq$ 4 T is nearly $T$ independent between 0.3 and 10 K and can be satisfactorily described by the power law $[R(B) - R(0)] \propto |B|^\gamma$, with $\gamma$ = 1.53$\pm$0.01 (1.55$\pm$0.01) for BT-15 (BT-24), see the solid curves drawn through the MR data in the main panel of Fig.~\ref{fig_2}(a). The authors of Refs. \onlinecite{Takagaki12} and \onlinecite{Takagaki07} proposed that an intermediate power $1 < \gamma < 2$ would be suggestive of the presence of multiple types of carriers.
\begin{figure}[tb]
\includegraphics[scale=0.21]{fig_2.eps}
\caption{\label{fig_2}
(Color online) (a) MR curves of BT-15 and BT-24 in perpendicular \textit{B} field at $T$ = 0.3 K. The solid curve drawn through BT-15 (BT-24) describes a $|B|^\gamma$ power law, with $\gamma$ = 1.53 (1.55). The inset shows the low magnetic field regime of $|B| \leq$ 0.4 T. The solid curve drawn through BT-24 describes a $B^2$ power law. (b) MR curves of BT-24 in perpendicular and parallel \textit{B} fields at $T$ = 0.3 K. The solid curve drawn through the parallel MR data is the 2D WAL theoretical prediction with an electron dephasing length $L_\varphi \simeq$ 420 nm (see Refs. \onlinecite{ng-prb93} and \onlinecite{Chiu-unpublished}). The inset shows the large magnetic field regime of $|B| \leq$ 4 T.}
\end{figure}
In the inset of Fig.~\ref{fig_2}(a) we plot the MR curves measured in the low field regime of $|B| \leq$ 0.4 T and at $T$ = 0.3 K. A resistance dip in the MR curve of BT-24 is clearly seen, manifesting the WAL effect. On the other hand, the WAL effect in BT-15 is obscured by the relatively large universal conductance fluctuations (UCFs). \cite{Lee87,Matsuo12,Yang-prb12} In the rest of this paper, we shall thus focus our analysis of the WAL effect only on the BT-24 sample. It is, however, worth noting in passing that the UCF signals do allow us to extract the electron dephasing length $L_\varphi (T)$. Our analysis of the root-mean-square UCF magnitudes at various temperatures led to $L_\varphi$ values which are consistent to within $\sim$ 40\% with the corresponding values deduced from the WAL method, see Fig.~\ref{fig_3}(d).
\subsection{Weak-antilocalization magnetoresistance: Temperature dependence}
In Fig.~\ref{fig_2}(b), we plot the MR curves of BT-24 at $T$ = 0.3 K and with the \textit{B} field applied either perpendicular to the microflake plane or parallel to the microflake plane and in the direction of the current flow. In both $B$ field orientations, the MR dips around $B$ = 0 are evident. Note that in the parallel $B$ field orientation, only the bulk states can possibly contribute to the quantum-interference correction to the classical parabolic MR. Therefore, an observed MR dip in this case unambiguously indicates that the bulk states must lie in the strong spin-orbit scattering limit, giving rise to the WAL effect. This observation does not support the recent theoretical prediction of a weak-localization effect (which should manifest an MR ``peak'') in the Bi$_2$Te$_3$ material. \cite{Lu-prb11} Also, we have carried out least-squares fits of the parallel MR curve to the pertinent 2D WAL theoretical prediction \cite{ng-prb93,Chiu-unpublished} [the solid curve in Fig.~\ref{fig_2}(b)] and obtained an electron dephasing length of $L_\varphi$(0.3\,K) $\approx$ 420 nm. This length scale is larger than the thickness (65 nm) of the BT-24 microflake, and hence our bulk channel must be 2D with regard to the WAL effect. \cite{He-2D} Thus, in the case of perpendicular $B$ field orientation, both the surface and bulk states would contribute to the 2D WAL effect, and the measured MR curves have to be analyzed in terms of a multichannel-conduction model.
\begin{figure}[tb]
\includegraphics[scale=0.2]{fig_3.eps}
\caption{\label{fig_3}
(Color online) BT-24 microflake in perpendicular $B$ fields and under $V_{\rm BG}$ = 0. (a) MR curves at several $T$ values (from top down): 0.30, 0.50, 1.0, 2.0, 5.0, and 10 K. The curves are vertically offset for clarity. (b) MR curves after subtracting away the 10-K background curve and at four representative $T$ values, as indicated. The solid curves are the theoretical predictions of Eq.~(\ref{2DWAL}). Note that universal conductance fluctuations are visible. (c) Variation of extracted parameter $- \alpha$ with temperature. As $T$ decreases from 10 to 0.3 K, $- \alpha$ increases by a factor of $\approx$ 2. (d) Variation of extracted electron dephasing length \textit{L}$_{\varphi}$ with temperature. The straight dashed line is drawn proportional to $T^{-0.24}$ and is a guide to the eye.}
\end{figure}
In Fig.~\ref{fig_3}(a), we show the MR curves of BT-24 measured in the low, perpendicular magnetic field regime of $|B| \leq$ 0.4 T and at several $T$ values between 0.3 and 10 K, as indicated in the caption to Fig.~\ref{fig_3}. Note that the curves are vertically offset for clarity. \cite{offset} It can be seen that the MR dip deepens with decreasing $T$, indicating more pronounced quantum-interference transport at lower $T$. As $T$ increases, the MR dip eventually diminishes around $T \sim$ 10 K. Therefore, we take this 10-K MR curve as the background curve and subtract it from those MR curves measured at lower $T$ values. This procedure gives rise to the WAL MR curves shown in Fig.~\ref{fig_3}(b) (for clarity, only four representative curves are plotted). The 2D WAL MR in perpendicular $B$ fields and in the limit of strong spin-orbit scattering (which is pertinent to TIs) can be written as \cite{Hikami80,McCann06,Tkachov11,Lin-prb87b}
\begin{equation}
\frac{\triangle R_\square (B)}{[R_\square (0)]^2} = - \alpha \frac{e^2}{2\pi ^2 \hbar} \left[ \Psi \left( \frac{1}{2} + \frac{B_\varphi} {B} \right) - \ln \left( \frac{B_\varphi}{B} \right) \right] \,,
\label{2DWAL}
\end{equation}
where $\triangle R_\square (B) = R_\square(B) - R_\square (0)$, $\Psi (x)$ is the digamma function, the characteristic magnetic field $B_\varphi = \hbar /(4eL_\varphi^2)$, and $L_\varphi = \sqrt{D \tau_\varphi}$. $\alpha$ is a parameter whose value reflects the number of conduction channels. For the 2D surface states in a 3D TI, $\alpha = - 1/2$ for a single coherent topological surface channel, and $\alpha = - 1$ for two independent coherent transport channels with similar $L_\varphi$'s. In practice, for reasons yet to be identified, the experimentally extracted $\alpha$ values often differ from these two ideal values. For simplicity, and also in order to reduce the number of adjustable parameters, in this work we take $L_\varphi$ as an effective dephasing length averaged over the channels. \cite{average} The main objective of this paper is thus to show that the value of $\alpha$ is indeed tunable, which to a good extent seems to signify the existence of conductive surface states.
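Equation (2) is straightforward to evaluate numerically, with $\Psi$ available as scipy.special.digamma. A minimal sketch follows; the parameter values are assumed here for illustration, chosen in the range extracted for BT-24 at 0.3 K.

```python
import numpy as np
from scipy.special import digamma
from scipy.constants import e, hbar   # elementary charge, reduced Planck constant

def wal_dR(B, alpha, L_phi, R0):
    """Eq. (2): Delta R_sq(B) for 2D WAL in a perpendicular field B (tesla).
    L_phi in meters; R0 = R_square(0) in ohm."""
    B_phi = hbar / (4.0 * e * L_phi**2)   # characteristic field B_phi
    bracket = digamma(0.5 + B_phi / np.abs(B)) - np.log(B_phi / np.abs(B))
    return -alpha * e**2 / (2.0 * np.pi**2 * hbar) * bracket * R0**2

alpha, L_phi, R0 = -0.64, 440e-9, 124.0   # BT-24-like values at 0.3 K
B = np.linspace(0.01, 0.4, 100)
dR = wal_dR(B, alpha, L_phi, R0)
print(dR[0], dR[-1])   # resistance rises away from B = 0: the WAL dip
```

With $\alpha < 0$ the returned $\triangle R_\square$ is positive and grows with $|B|$; fitting $\alpha$ and $L_\varphi$ to measured $\triangle R_\square(B)$ data with a least-squares routine mirrors the analysis described in the text.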
Our measured MR data plotted in Fig.~\ref{fig_3}(b) can be well described by the predictions of Eq.~(\ref{2DWAL}) (the solid curves). The extracted $\alpha (T)$ and $L_{\varphi} (T)$ values are plotted in Figs.~\ref{fig_3}(c) and \ref{fig_3}(d), respectively. Our experimental value of $- \alpha$ monotonically increases from $\simeq$ 0.35 to $\simeq$ 0.64 as $T$ is reduced from 10 to 0.3 K. (If a possible 10\% uncertainty in the $R_\square$ value is taken into account, our extracted $-$$\alpha$ value would be slightly modified, varying from 0.39 to 0.70.) This variation of $\alpha$ with $T$ is well beyond our experimental uncertainties, and it is plausible to think that such a change in the $\alpha$ value by a factor of $\sim$ 2 does reflect a decoupling (doubling) of the charge transport channels at sufficiently low $T$. Note that several recent studies of Bi$_{2}$Se$_{3}$ have also reported temperature dependent $\alpha$ values which lie between $-$0.3 and $-$0.6, but not between $-$0.5 and $-$1. \cite{Liu11,Chen11,Chen10,Steinberg11,He11,Checkelsky11} For the Bi$_2$Te$_3$ material, to our knowledge, the temperature behavior of $\alpha$ has not yet been reported in the literature. He {\it et al.} have recently reported $\alpha$ = $-$0.39 at $T$ = 2 K, \cite{He11} while our value is $\alpha$(2\,K) $\simeq$ $-$0.54. In our case, a $- \alpha$(0.3\,K) value smaller than unity might imply that the decoupling is incomplete (partly due to the high carrier density in BT-24). The exact reason why ultralow temperature facilitates decoupling must await further theoretical investigations. Tentatively, if there should exist an inelastic relaxation process between the surface carriers and the bulk low-lying excitations (e.g., surface-carrier--bulk-phonon scattering \cite{Sebastien11}), the scattering strength would decrease with decreasing $T$. Then, a higher degree of decoupling of the surface and bulk states could take place at lower $T$.
\cite{phonon} For comparison, we would like to stress that in the conventional metals and alloys with strong spin-orbit scattering, such as Au-doped Cu (Ref. \onlinecite{Huang-prl07}) and AuPd (Refs. \onlinecite{Lin-prb87b} and \onlinecite{Zhong-prl10}) thin films, the MR dips in 2D WAL effect have been firmly observed, where one {\em always} finds a constant $\alpha$ = $-$1/2 in the wide $T$ range from subkelvin temperatures up to above 20 K.
Our extracted \textit{L}$_{\varphi}$ values of BT-24 are plotted in Fig.~\ref{fig_3}(d). As $T$ decreases from 10 to 0.3 K, $L_\varphi$ increases from $\simeq$ 200 to $\simeq$ 440 nm. The straight dashed line is drawn proportional to $L_\varphi \propto T^{-0.24}$ and is a guide to the eye. This slope corresponds to an effective exponent of temperature $\tilde{p} \simeq$ 0.48 in $\tau_\varphi \propto T^{-\tilde{p}}$, which is considerably lower than the value $\tilde{p}$ = 1 expected for the quasielastic Nyquist electron-electron scattering process in 2D. \cite{Altshuler82,Wu-prb12} This suggests the existence of additional, weakly $T$ dependent, magnetic or nonmagnetic electron dephasing processes which are noticeable in this device over our measurement temperature range. \cite{Huang-prl07,Lin02} Our extracted $L_\varphi$ values are larger than the thickness of the microflake, justifying the 2D WAL characteristics in our samples. Furthermore, our value of $L_\varphi$(0.3\,K) inferred from the perpendicular MR, Eq.~(\ref{2DWAL}), is in good agreement with that ($\simeq$ 420 nm) deduced from the parallel MR, see Fig.~\ref{fig_2}(b) and Ref. \onlinecite{Chiu-unpublished}. This close agreement between the extracted $L_\varphi (B_\perp)$ and $L_\varphi (B_\parallel)$ values strongly indicates the self-consistency of our experimental method and data analysis, as well as the reasonable validity of applying the WAL theory in its current form \cite{Hikami80} to the TI Bi$_2$Te$_3$ material. Whether any modifications need to be incorporated into Eq.~(\ref{2DWAL}) to take into account any subtle effect(s) that might arise from the multichannel feature and the specific materials properties of Bi$_2$Te$_3$ must await future theoretical investigations. For instance, it would be of interest and importance to see whether the frequent experimental observations of an $|\alpha|$ value smaller than 0.5 could thereby be explained.
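The factor of 2 between the fitted exponent of $L_\varphi$ and $\tilde{p}$ follows directly from $L_\varphi = \sqrt{D\tau_\varphi}$: if $\tau_\varphi \propto T^{-\tilde{p}}$, then $L_\varphi \propto T^{-\tilde{p}/2}$. A one-line check:

```python
# Fitted slope of ln(L_phi) vs ln(T), read off Fig. 3(d)
slope = -0.24
# L_phi = sqrt(D * tau_phi) with tau_phi ~ T^(-p) gives L_phi ~ T^(-p/2)
p_tilde = -2.0 * slope
print(p_tilde)   # 0.48, as quoted in the text
```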
\subsection{Weak-antilocalization magnetoresistance: Backgate-voltage dependence}
We have further performed the MR measurements under different applied backgate voltages $V_{\rm BG}$ and at $T$ = 0.3 K. Figure~\ref{fig_4}(a) clearly shows that the size of the MR dip increases monotonically as \textit{V}$_{\rm BG}$ progressively increases from $-$40 to $+$60 V. From least-squares fits to Eq.~(\ref{2DWAL}), we have obtained the values of $\alpha$ and \textit{L}$_{\varphi}$ under various \textit{V}$_{\rm BG}$ values. Figure~\ref{fig_4}(b) reveals that, as \textit{V}$_{\rm BG}$ increases from $-$40 to $+$60 V, $- \alpha$ monotonically increases from $\simeq$ 0.36 to $\simeq$ 0.72, while \textit{L}$_{\varphi}$ monotonically decreases from $\simeq$ 450 to $\simeq$ 250 nm. (If a possible 10\% overestimate in the $R_\square$ value is taken into account, our $-$$\alpha$ value would be slightly modified, varying from 0.40 to 0.79, while the extracted $L_\varphi$ values are barely affected.) These amounts of variation in $\alpha$ and $L_\varphi$ are remarkable, since a change of $V_{\rm BG}$ from $-$40 V to $+$60 V only causes a $\simeq$ 2\% reduction in the hole concentration $p$ in this device. [Such a minor change in $p$ can be ascribed to the facts (1) that our applied $V_{\rm BG}$ partly dropped in the highly doped n-type Si substrate, (2) that the relatively thick SiO$_2$ layer reduced the ability to electrostatically modulate the interface carrier density, and (3) that our sample possessed a relatively high $p$ level. Recall that, in Fig.~\ref{fig_1}(b), the $R_\square$ value of BT-24 increased only by a tiny amount of $\lesssim$ 0.2\% as $V_{\rm BG}$ was varied from 0 to $+$60 V.] For comparison, we would like to point out that several previous experiments using n-type Bi$_{2}$Se$_{3}$ have also found that, as the electron density ($n$) is lowered by a sufficiently large negative $V_{\rm BG}$, the value of $- \alpha$ increases (i.e., $\alpha$ becomes more negative) while the value of $L_{\varphi}$ decreases.
\cite{Chen11,Chen10,Steinberg11,Checkelsky11} Since the $p$/$n$ carriers in the bulk channel are (partly) depleted away under a large positive/negative $V_{\rm BG}$, a decoupling of the two channels thus could possibly take place. This in turn is reflected by the increase in the $-$$\alpha$ value by a factor of $\approx$ 2.
Because our extracted $- \alpha$ value is always smaller than unity, we have presumed that there exists only one conductive surface channel in BT-24. This presumption is in accord with a recent report by Takagaki {\it et al.}, \cite{Takagaki12} who have pointed out that the seeming absence of the WAL effect for one of the surfaces is a common finding in a number of studies of 3D TIs. \cite{Liu11,Chen11,Chen10,Steinberg11,He11,Wang11} In our case, the top surface layer might have deteriorated owing to its constant exposure to air as well as to the device fabrication process. Furthermore, our BT-24 is 65 nm thick and highly doped, and hence the top surface state, if any exists, would be too far away to be tuned by $V_{\rm BG}$. Therefore, we tentatively ascribe the observed conductive surface states to the bottom surface of the microflake. It is also important to note that we have ascribed the additional coherent transport channel to the TI bottom states, rather than to an electron accumulation layer. To rule out the latter scenario, we have measured the low-$T$ resistance as a function of backgate voltage in a BT-23 microflake which possessed very similar electronic properties to those of the BT-24 microflake. Our measured resistance curve $R(V_{\rm BG})$ at every temperature always revealed a maximum at a characteristic backgate voltage $V_{\rm BG}^\ast$, which signified that the Fermi energy was indeed shifted to become aligned with the Dirac point at this particular backgate voltage (see further discussion in the Supplemental Material \cite{supplementary}).
\begin{figure}[tb]
\includegraphics[scale=0.22]{fig_4.eps}
\caption{\label{fig_4}
(Color online) BT-24 microflake at $T$ = 0.3 K. (a) MR data measured in perpendicular \textit{B} field and under several \textit{V}$_{\rm BG}$ values, as indicated. The data are vertically offset for clarity. The solid curves are least-squares fits to Eq.~(\ref{2DWAL}). Note that resistance fluctuations are visible. (b) Variation of the extracted parameter $- \alpha$ and electron dephasing length \textit{L}$_{\varphi}$ with $V_{\rm BG}$. The dashed curves are guides to the eye.}
\end{figure}
Finally, the reason for the marked effect of $V_{\rm BG}$ on $L_\varphi$ [Fig.~\ref{fig_4}(b)] is theoretically less clear. Under a large positive $V_{\rm BG}$, the carriers are slightly depleted away from the bottom surface. The depletion would certainly weaken the Coulomb screening between the carriers, enhancing the electron-electron (hole-hole) scattering rate and thus shortening $L_\varphi$. However, as mentioned, the hole concentration is reduced only by 2\% under $V_{\rm BG}$ = $+$60 V in this BT-24 device. It is thus questionable whether such a small reduction in $p$ alone could cause a notable change in $L_\varphi$. There could be other explanations, for example, based on a change in the degree of coupling between bulk and surface states, which would take place in the presence of large backgate voltages. A quantitative account of the suppression of $L_\varphi$ by $V_{\rm BG}$ must await further theoretical investigations.
\section{Conclusion}
We have measured the resistances and magnetoresistances of two Bi$_{2}$Te$_{3}$ microflakes at low temperatures and under applied backgate voltages. Low-temperature resistance corrections due to the 2D EEI effect in the presence of weak disorder are observed. The extracted Coulomb screening parameter $F$ is negative, which is in line with the strong spin-orbit scattering inherent in the TI materials. Positive MR dips in small perpendicular magnetic fields are measured, which can be satisfactorily described by the existing 2D WAL theory by taking into account a multichannel-conduction model in a TI material. Our extracted $\alpha (T,V_{\rm BG})$ values, whose magnitude increases by a factor of $\approx$ 2 from $\approx$ 0.35 to $\approx$ 0.7, suggest the emergence of two coherent conduction channels as either the temperature decreases to subkelvin temperatures or the backgate voltage increases to several tens of volts. This doubling of the conduction channels seems to signify the (partial) decoupling of the surface and the bulk states. In short, our overall results provide qualitative, but not yet quantitative, evidence for the likely existence of Dirac fermion states in the 3D Bi$_2$Te$_3$ material. To achieve a clear-cut separation of the contributions from the individual surface and bulk states will undoubtedly require further theoretical formulations and experimental measurements.
\begin{acknowledgments}
The authors thank Zhao-guo Li, Tai-shi Chen and Feng-qi Song for providing us with the samples used in this study and for reading through an early version of the manuscript. This work was supported by the Taiwan National Science Council through Grant No. NSC 100-2120-M-009-008 and by the MOE ATU Program.
\end{acknowledgments}
\section{Introduction}
Let $H, A, B$ be smooth classical observables on $\mathbb{R}^{2d}$ in the variables $X=(x,\xi)$. The Poisson bracket
is defined as $\{A,B\} =\partial_\xi A\cdot\partial_xB-\partial_xA\cdot\partial_\xi B$. The classical time evolution of $A$ determined by the Hamilton equations for $H$ is then the solution of the equation:
\begin{eqnarray}
\frac {d}{dt}A(t) &=& \{A(t), H\} \\
A(0) &=&A. \nonumber
\end{eqnarray}
The Weyl quantization $\widehat A$ of $A$ is defined as the following operator:
\begin{equation}
\widehat A f(x) :=({\rm Op}^w_\hbar A) f(x)=(2\pi\hbar)^{-d} \int_{\mathbb{R}^{2d}} A\!\left(\frac{x+y}{2}, \xi\right)
{\rm e}^{i\xi\cdot(x-y)/\hbar} f(y) \,dy\, d\xi
\end{equation}
for any $f\in{\mathcal S}(\mathbb{R}^d)$. \\
The quantum time evolution of the quantum observable $\widehat A$ must satisfy the Heisenberg equation
\begin{eqnarray}
\frac {d}{dt}\widehat A(t) &=& \frac{i}{\hbar}[ \widehat A(t), \widehat H]\\
\widehat A(0) &=&\widehat A
\end{eqnarray}
where $[\hat A, \hat B] = \hat A\hat B -\hat B\hat A$.\\
The Moyal bracket of the observables $A, H$ is defined such that
\begin{equation}
\frac{i}{\hbar}[ \widehat A, \widehat H]={\rm Op}^w_\hbar(\{A,H\}_\circledast).
\end{equation}
Notice that it results from the Weyl quantization calculus with a small parameter $\hbar$ that we have
$$
\lim_{\hbar \searrow 0}\{A,H\}_\circledast = \{A,H\}.
$$
There is an equivalent definition by introducing the Moyal product $A\circledast B$ (see the next section)
such that
$$
({\rm Op}^w_\hbar A)({\rm Op}^w_\hbar B) = {\rm Op}^w_\hbar(A\circledast B).
$$
Then we have
$$
\{A,B\}_\circledast = \frac{i}{\hbar}(A\circledast B-B\circledast A).
$$
These definitions make sense for $A, B\in {\mathcal S}(\mathbb{R}^{2d})$ and can be extended to suitable classes of symbols with moderate growth. To be more explicit, we introduce the classes
${\mathbb{S}}_\delta^\mu$, for $\delta<1$, $\mu\in\mathbb{R}$: $A\in{\mathbb{S}}_\delta^\mu$ if and only if $A\in C^\infty(\mathbb{R}^{2d})$ and for any multi-index $\gamma\in\mathbb{N}^{2d}$
we have:
$$
\vert\partial^\gamma_X A(X)\vert \leq C_\gamma\langle X\rangle^{\mu+\delta\vert\gamma\vert}
$$
Using Theorem A.1 in \cite{BR}, we can see that $A\circledast H$ is a smooth symbol if $H\in{\mathbb{S}}_\delta^\mu$ and $A\in{\mathbb{S}}^0_0$.
Our aim here is to prove the following result.
\begin{theorem}\label{MainThm} Assume that $\hbar$ is fixed ($\hbar =1$).
Let $H\in {\mathbb{S}}_\delta^\mu$ for some $\mu\in\mathbb{R}$ and $\delta<1$. Assume that for any $A\in{\mathbb{S}}^0_0$ we have
$ \{A, H\}_\circledast = \{A, H\}$. Then $H(X)$ is a polynomial in $X=(x,\xi)$ of degree at most 2.
\end{theorem}
\begin{remark}
It is well known that if $H$ is a polynomial of degree at most 2 then $\{A, H\}_\circledast = \{A, H\}$ for any $A\in{\mathbb{S}}^\nu_0$. I do not know any reference for a proof of the converse statement. The proof given here is a direct consequence of basic properties of the Weyl quantization.
\end{remark}
\begin{remark}
The usual proofs of the Groenewold-van Hove Theorem on the phase space $\mathbb{R}^{2d}$ concern more general quantization procedures but are restricted to polynomial symbols $A, H$.\\
A quotation from \cite{vHove}, pp.~66--67 (translated from the French):\\
``One then establishes that a one-to-one correspondence between classical and quantum quantities, having the character of a Lie-algebra isomorphism, exists for the quantities represented by polynomials of degree 0, 1, 2 in the variables $p_1,\cdots, p_N,q_1,\cdots, q_N$, but cannot be extended to the set of all classical quantities without losing its essential properties.'' \\
The Theorem of Groenewold-van Hove is detailed on p.~76, and the quadratic case on p.~87, of \cite{vHove}.\\
Notice that the quadratic case is related to the metaplectic representation \cite{Gotay}.
\end{remark}
\section{Weyl calculus}
\subsection{Introduction to the Weyl quantization}
In this section, we recall some basic properties of the Weyl calculus (for more details see \cite{ho}).\\
Weyl quantization starts with the quantization of exponentials of linear forms $L_Y(X) = \sigma(Y,X)=\eta\cdot x-y\cdot\xi$
with $X=(x,\xi)$, $Y=(y,\eta)$. Apart from the usual properties required of an admissible quantization, the Weyl quantization is uniquely determined by imposing
that the Weyl symbol of ${\rm e}^{i\widehat{L_Y}}$ is ${\rm e}^{iL_Y}$. Recall that $T(Y):={\rm e}^{-\frac{i}{\hbar}\widehat{L_Y}}$ is the Weyl-Heisenberg translation operator by $Y$ in the phase space $\mathbb{R}^{2d}$.
Then for any observable $A$, using a Fourier transform, the Weyl quantization $\widehat A$ is defined, for any $\psi\in {\mathcal S}(\mathbb{R}^d)$, as
$$
\widehat A\psi = (2\pi)^{-d}\int_{\mathbb{R}^{2d}}{\tilde A}_\sigma(Y) T(Y)\psi dY
$$
where ${\tilde A}_\sigma(Y)=\int_{\mathbb{R}^{2d}} A(z){\rm e}^{-i\sigma(Y,z)}dz$ is the symplectic Fourier transform of $A$ (in the sense of distributions).
Thus the family $\{T(Y)\}_{Y\in\mathbb{R}^{2d}}$ is an over-complete basis for operators between the Schwartz spaces ${\mathcal S}(\mathbb{R}^d)$ and
${\mathcal S}'(\mathbb{R}^d)$.
\subsection{The Moyal Product}
We first recall the formal product rule for quantum observables
with Weyl
quantization. Let $A, B \in {\mathcal S}(\mathbb{R}^{2d})$. The Moyal product $C:=A\circledast B$ is the
observable
$C$ such that $\widehat{A}\cdot\widehat{B} = \widehat{C}$. Some computations with
the Fourier
transform give the following well known formula \cite{ho}
\begin{equation}\label{moy1}
C(x,\xi) =
\exp\left(\frac{i\hbar}{2}\sigma(D_q,D_p;D_{q^\prime},D_{p^\prime})\right)A(q,p)
B(q^\prime,p^\prime)\vert_{(q,p)=(q^\prime,p^\prime)=(x,\xi)},
\end{equation}
where $\sigma$ is the symplectic bilinear form $\sigma((q,p), (q',p'))=p\cdot q'-p'\cdot q$ and
$D=i^{-1}\hbar\nabla$.
By expanding the exponential term in a formal power series in $\hbar$ we get
\begin{equation}\label{prod2}
C(x,\xi) =
\sum_{j\geq
0}\frac{\hbar^j}{j!}\left(\frac{i}{2}\sigma(D_q,D_p;D_{q^\prime},D_{p^\prime}) \right)^j
A(q,p)B(q^\prime,p^\prime)\vert_{(q,p)=(q^\prime,p^\prime)=(x,\xi)}.
\end{equation}
Thus $C(x,\xi)$ is a formal power series in $\hbar$ with coefficients
given by
\begin{equation} \label{exprod}
C_j(A,B;x,\xi) = \frac{1}{2^j}\sum_{\vert\alpha+\beta\vert=j}
\frac{(-1)^{\vert \beta\vert}}{\alpha!\beta!}
(D^\beta_x\partial^\alpha_\xi A).( D^\alpha_x\partial^\beta_\xi B)(x,\xi).
\end{equation}
\medskip
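The expansion \eqref{exprod} is easy to check symbolically in one dimension. The sketch below (a hypothetical sympy implementation with $\hbar = 1$, not part of the paper) verifies that $C_0(A,B) = AB$, that $i\,(C_1(A,B)-C_1(B,A))$ reproduces the Poisson bracket, and that $C_3(A,H)$ vanishes when $H$ is a quadratic polynomial, since every term of $C_j$ carries $j$ derivatives of each factor:

```python
import sympy as sp

x, xi = sp.symbols('x xi', real=True)

def moyal_term(A, B, j):
    # C_j(A,B) of Eq. (exprod) in one dimension (hbar = 1):
    # C_j = 2^{-j} sum_{a+b=j} (-1)^b/(a! b!) (D_x^b d_xi^a A)(D_x^a d_xi^b B),
    # with D = (1/i) d/dx.
    D = lambda f, n: (-sp.I)**n * sp.diff(f, x, n)
    total = sp.Integer(0)
    for a in range(j + 1):
        b = j - a
        total += ((-1)**b / (sp.factorial(a) * sp.factorial(b))
                  * D(sp.diff(A, xi, a), b) * D(sp.diff(B, xi, b), a))
    return sp.expand(total / 2**j)

A = x**3 * xi + sp.sin(x)     # an arbitrary test observable
H = x**2 + x * xi + xi**2     # a quadratic "Hamiltonian"

poisson = sp.diff(A, xi) * sp.diff(H, x) - sp.diff(A, x) * sp.diff(H, xi)

# C_0 is the pointwise product:
assert sp.simplify(moyal_term(A, H, 0) - A * H) == 0
# i (C_1(A,H) - C_1(H,A)) reproduces the Poisson bracket:
assert sp.simplify(sp.I * (moyal_term(A, H, 1) - moyal_term(H, A, 1)) - poisson) == 0
# Every term of C_3(A,H) carries three derivatives of H, so it vanishes for quadratic H:
assert moyal_term(A, H, 3) == 0
```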
Furthermore, we need a remainder estimate for the expansion of the Moyal product.\\
For every $N \geq 1$, we denote
\begin{equation}
R_N(A,B;z):=
A\circledast B(z) - \sum_{0\leq j \leq N}\hbar^jC_j(A,B;z) .
\end{equation}
The following estimate is a particular case of Theorem A.1 in \cite{BR} see also Remark A.3.
\begin{lem}\label{lem:Moyalest}
Let $A\in{\mathbb S}^{\mu_A}_\delta$ and $B\in{\mathbb S}^{\mu_B}_\delta$; then for any $N\geq 1$, $\gamma\in\mathbb{N}^{2d}$, and $M\geq M_0$ there exists $C_{N,\gamma,M}>0$ (independent of $A$ and $B$)
such that
\begin{eqnarray}\label{remest1}
\vert\partial_z^\gamma R_N(A,B;z)\vert &\leq& C_{N,\gamma,M}\hbar^{N+1}\sum_{\substack{\vert\alpha+\beta\vert=N+1\\ \vert\mu+\nu\vert\leq M+\vert\gamma\vert}}
\sup_{u, v\in\mathbb{R}^{2d}} (1+\vert u\vert^2+\vert v\vert^2)^{(M_0-M)/2}\nonumber\\
&&\times\;\vert\partial_u^{(\alpha,\beta)+\mu}A(z+u)\vert\,\vert\partial_v^{(\beta,\alpha)+\nu}B(z+v)\vert
\end{eqnarray}
In particular $R_N(A,B;z)\in \mathbb S_\delta^{\mu_{AB}}$ for some $\mu_{AB}\geq \mu_A+\mu_B$.
\end{lem}
To prove this Lemma, one first assumes that $A, B\in{\mathcal{S}}(\mathbb{R}^{2d})$. For the general case we put $A_\varepsilon(X)={\rm e}^{-\varepsilon\vert X\vert^2}A(X)$,
$B_\varepsilon(X)={\rm e}^{-\varepsilon\vert X\vert^2}B(X)$ and pass to the limit $\varepsilon\searrow 0$.
\section{Proof of Theorem \ref{MainThm}}
Here $\hbar=1$. It is enough to consider the test observables $ T_Y:={\rm e}^{-iL_Y}$ ($Y\in\mathbb{R}^{2d}$). \\
We have
$$
\widehat T_Y\widehat H\widehat T_Y^* = [\widehat T_Y, \widehat H]\widehat T_Y^* + \widehat H
$$
Using the assumption of Theorem \ref{MainThm} we get
\begin{equation}\label{MPC1}
\frac{1}{i}(\{T_Y^*, H\}\circledast T_Y)(X) = H(X+Y)-H(X), \quad \forall X, Y\in\mathbb{R}^{2d}.
\end{equation}
Computing the Poisson bracket in \eqref{MPC1} gives
\begin{equation}\label{MPC2}
(((y\cdot\partial_x H +\eta\cdot\partial_\xi H)T_Y^*)\circledast T_Y)(X) = H(X+Y) -H(X),\quad\forall X, Y\in\mathbb{R}^{2d}.
\end{equation}
Our aim is to prove that \eqref{MPC2} implies that $H(X)$ is a polynomial of degree at most 2. For that purpose we shall compute
the asymptotic expansion as $Y\rightarrow 0$ of the left hand side of \eqref{MPC2} and compare it with the Taylor expansion for $H(X+Y)$ modulo
$O(\vert Y\vert^4)$. From that we shall conclude that all the third-order derivatives of $H$ vanish at every $X$, whence the conclusion follows.
We have
$$
\partial_x^\alpha\partial_\xi^\beta T_Y = (-i)^{\vert\alpha\vert}\, i^{\vert\beta\vert}\,\eta^\alpha y^\beta\, T_Y.
$$
Let us denote by $C(X,Y)$ the left hand side in \eqref{MPC2}.
So using Lemma \ref{lem:Moyalest} we have
$$
C(X,Y) = \sum_{0\leq j\leq 2}C_j(X,Y) + O(\vert Y\vert^4),
$$
where
\begin{eqnarray}} \def\eea{\end{eqnarray}
C_0(X,Y) &= & Y\cdot\nabla_XH(X)\\
C_1(X,Y) &=& \frac{1}{2}Y\cdot \nabla_X^2H(X)Y,
\eea
where $\nabla_X^2H(X)$ is the Hessian matrix of $H$.\\
Let us now compute $C_2(X,Y)$, which is a homogeneous polynomial of degree 3 in $Y$.\\
For simplicity let us consider the 1-D case. The same computation can clearly be done for $d>1$.\\
Using \eqref{prod2} we get with $Y=(y,\eta)$,
\begin{equation}} \def\eeq{\end{equation}\label{C2}
C_2(X,Y) = \frac{1}{8}\left(y^3\partial_x^3H +\eta^3\partial_\xi^3H -y^2\eta\partial_\xi\partial_x^2H - y\eta^2\partial_\xi^2\partial_xH\right).
\eeq
According to \eqref{MPC2}, $C_2(X,Y)$ must coincide with the term of order 3 in $Y$ of the Taylor expansion in $X$ of $H(X+Y)-H(X)$.
But this is possible only if $\partial_x^3H=\partial_\xi^3H =\partial_\xi\partial_x^2H =\partial_\xi^2\partial_xH=0$ for any $(x,\xi)\in \mathbb{R}^2$.
So $H$ must be a polynomial of degree $\leq 2$. $\square$
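For concreteness, the coefficient comparison in the last step can be spelled out (a worked expansion in our notation). The order-3 Taylor term of $H(X+Y)-H(X)$ in one dimension is

```latex
T_3(X,Y) = \frac{1}{6}\,(y\partial_x+\eta\partial_\xi)^3 H
         = \frac{1}{6}\left(y^3\partial_x^3H + 3y^2\eta\,\partial_\xi\partial_x^2H
           + 3y\eta^2\,\partial_\xi^2\partial_xH + \eta^3\partial_\xi^3H\right),
```

and matching each monomial of $C_2(X,Y)$ against $T_3(X,Y)$ for arbitrary $(y,\eta)$ forces, e.g., $\tfrac18\partial_x^3H=\tfrac16\partial_x^3H$ and $-\tfrac18\partial_\xi\partial_x^2H=\tfrac12\partial_\xi\partial_x^2H$, so all four third-order derivatives must vanish.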
\section{Extension to polynomials of arbitrary degree }
The asymptotic expansion in $\hbar$ of the Moyal product suggests introducing the following semi-classical approximations of the Moyal bracket:
$$
\{ A,B\}_{\circledast,m} = \{A,B\} +\hbar^2\{A,B\}_3 +\cdots +\hbar^{2m}\{A,B\}_{2m+1},
$$
where $\{A,B\}_j = \frac{i}{\hbar}(C_j(A,B)-C_j(B,A))$ (notation of \eqref{exprod}).
Notice that $\{A,B\}_j =0$ for $j$ even.\\
It is clear that if $H$ is a polynomial of degree at most $2m+2$ then we have $ \{ A,H\}_{\circledast,m} = \{ A,H\}_{\circledast}$
for any $A$. Conversely we have
\begin{theorem}\label{thm:orderm} Assume $\hbar =1$. If for any $A\in {\mathbb S}^0_0$ we have $ \{ A,H\}_{\circledast,m} = \{ A,H\}_{\circledast}$
then $H$ must be a polynomial of degree at most $2m+2$.
\end{theorem}
{\em Proof}. Here we give a proof different from that of the case $m=0$, with no connection to the Taylor formula, which leads to simpler computations.\\
Using Lemma \ref{lem:Moyalest} we have
\begin{equation}} \def\eeq{\end{equation}\label{comp1}
T_Y^*\left( \{T_Y,H\}_{\circledast} - \{T_Y,H\}_{\circledast,m} \right)= {\mathcal O}(\vert Y\vert^{2m+3}),\; Y\rightarrow 0.
\eeq
Moreover from \eqref{exprod} we get:
\begin{equation}} \def\eeq{\end{equation}\label{comp2}
T_Y^*\{T_Y,H\}_{2j+1} =\frac{1}{2^{j+1}}\sum_{\vert\alpha+\beta\vert=2j+1}\frac{y^\alpha\eta^\beta}{\alpha!\beta!}\partial_x^\alpha\partial_\xi^\beta H.
\eeq
Using the assumption of Theorem \ref{thm:orderm} and \eqref{comp1} we get that
$$
T_Y^*\{T_Y,H\}_{2m+3} = {\mathcal O}(\vert Y\vert^{2m+5}).
$$
But $T_Y^*\{T_Y,H\}_{2m+3}$ is a homogeneous polynomial of degree $2m+3$ in $Y$, so this polynomial vanishes identically, and from
\eqref{comp2} we get that $\partial_x^\alpha\partial_\xi^\beta H =0$ for $\vert\alpha+\beta\vert=2m+3$. Then we can conclude that $H(X)$ is a polynomial of degree at most $2m+2$ in $X\in\mathbb{R}^{2d}$. $\square$
\section{Comparison with CTA results}
\label{sec:App_CTA_comparison}
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{figures/limit_uncertainties_model_ww_interp.pdf}~~~~
\includegraphics[width=0.48\textwidth]{figures/limit_uncertainties_model_bb_interp.pdf}
\includegraphics[width=0.48\textwidth]{figures/limit_uncertainties_model_tau_interp.pdf}
\caption{Comparison of our expected upper limits on the DM annihilation cross-section to those predicted by the CTA collaboration \cite{CTAConsortium:2017dvg} for the individual annihilation channels $W^+W^-$, $b\bar{b}$, and $\tau^+ \tau^-$. Our results are in agreement with the results of CTA (solid red lines) within the $1\sigma$ and $2\sigma$ uncertainty bands.}
\label{Fig:Comparison_CTA}
\end{figure}
\section{Spectrum comparison of the contributing channels}
\label{sec:App_spectra_cirelli}
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 1.6cm 0.0cm},clip]{figures/Singlet_scalar_paper_diff_spec_channels005TeV_add_channels.pdf}~~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 1.6cm 0.0cm},clip]{figures/Singlet_scalar_paper_diff_spec_channels0081TeV_add_channels.pdf}
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 1.6cm 0.0cm},clip]{figures/Singlet_scalar_paper_diff_spec_channels05TeV_add_channels.pdf}~~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 1.6cm 0.0cm},clip]{figures/Singlet_scalar_paper_diff_spec_channels1TeV_add_channels.pdf}
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 1.6cm 0.0cm},clip]{figures/Singlet_scalar_paper_diff_spec_channels10TeV_add_channels.pdf}~~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 1.6cm 0.0cm},clip]{figures/Singlet_scalar_paper_diff_spec_channels100TeV_add_channels.pdf}
\caption{Differential $\gamma$-ray energy spectra of the most contributing dark matter annihilation channels at $m_S = 0.05$, $0.081$, $0.5$, $1$, $10$, and $100$~TeV.}
\label{Fig:Cirelli_spectra_diff}
\end{figure}
\section{Simulated observations of Sculptor with the Cherenkov Telescope Array}
\label{sec:Astro_model}
To study the impact of a more complex particle model on the resulting upper limits, we produce mock data that simulate the observations expected for the dSph Sculptor with the Cherenkov Telescope Array (CTA). CTA is a telescope array currently under construction. It is divided into two sites, one in La Palma on the Canary Islands in the Northern hemisphere and the second in the Atacama desert in Chile in the Southern hemisphere. The array will cover an energy range between 20~GeV and 300~TeV and will consist of a total of about a hundred telescopes of three different sizes: large-sized telescopes to capture the lowest-energy $\gamma$ rays, medium-sized telescopes to cover the core energy range, and small-sized telescopes covering the highest-energy events. The average Point Spread Function (PSF) of the instrument is designed to be about $0.1\degree$. CTA will observe the most promising dSphs, i.e.\ those with the highest $J$-factors, as part of its upcoming dark matter search programme, starting with Sculptor and Draco, one in each hemisphere.
We make use of the \texttt{Gammapy} software package \cite{gammapy:website} for our mock data production. We focus on the case of upper limit derivation, where no signal from dark matter is detected. We build our model with no significant excess towards the source of interest and simulate the resulting mock data from \textit{wobble mode} observations of 500 hours in total. The wobble mode corresponds to an observation strategy where the telescopes point in a direction offset by a small angle, typically $0.5\degree$, from the nominal source position. The source is observed using four pointing positions alternating the offset in the positive and negative declination and right ascension. This method allows a simultaneous estimate of the background thanks to the other side of the field of view which serves as a control region \cite{Fomin:1994aj}. We use the multiple OFF technique \cite{HESS:2006fka} to estimate the background noise due to cosmic rays. The \textit{OFF region} or background region is defined by multiple circular regions of the same size as the \textit{ON region} or signal region which are equidistant from the pointing position, i.e.\ the centre of the cameras. As we treat our target dSph as a point-like source in the $\gamma$-ray sky, following previous CTA work \cite{CTAConsortium:2017dvg}, the size of each ON/OFF region is set to a radius of $0.1\degree$ corresponding to the average PSF of CTA. We use the Instrument Response Functions \texttt{prod3b-v2} publicly available on the CTA performance website \cite{CTAperformance} for the south site at zenith angle $z = 20\degree$, the lowest zenith angle to capture the lowest energy events, and for an observation time of 500 hours.
In this work, we focus on Sculptor, a dSph satellite of the Milky Way located in the Southern hemisphere at Galactic coordinates ($\ell=287.62\degree$, $b = -83.16\degree$) at a distance of $86 \pm 6$~kpc. The dynamics of the dSph, and hence its DM density distribution, are estimated based on 1365 member stars \cite{Walker:2008fc}. We use the $J$-factor profile and its associated uncertainties provided in Ref.\ \cite{Bonnivard:2015xpq} as a function of angular radius, whose total $J$-factor reaches $\log_{10}(J/{\rm GeV}^2{\rm cm}^{-5}\rm{sr}) = 18.63^{+0.14}_{-0.08}$. We interpolate the value of the $J$-factor for an angular radius of $\theta = 0.1\degree$, corresponding to the point-like treatment of the source. We find a value of $\log_{10} (J_{0.1\degree} / \text{GeV}^2 \text{cm}^{-5} \rm{sr}) = 18.3 \pm 0.3$.
\section{Conclusions} \label{sec:conlusions}
Current limits on indirect dark matter detection cross-sections are mainly derived based on the assumption of one single annihilation channel and without considering specific particle physics models. We have first demonstrated that this assumption is not valid within a given framework providing a viable candidate for WIMP dark matter. In the singlet scalar dark matter model, the typical channels $b\bar{b}$ and $W^+W^-$ dominate the annihilation cross-section in only a restricted part of the viable parameter space, while, e.g., annihilations into $\tau^+\tau^-$ and $t\bar{t}$ remain subdominant in this model. Second, we have shown that taking into account the full annihilation pattern of the WIMP particle can have a significant impact on the derived limits of the dwarf spheroidal galaxy Sculptor which can shift by more than an order of magnitude. Depending on the exact situation, the obtained limits may be more or less constraining than those from individual channels only. Based on these results, one can see that it is necessary to take into account all annihilation channels producing different $\gamma$-ray spectral shapes to capture additional features in the DM annihilation cross-section upper limits. This conclusion can also be drawn, e.g., from Refs.\ \cite{GAMBIT:2018eea,Baldes:2020hwx} which focus on the reinterpretation of the indirect detection results in the context of specific models including specific energy spectra.
Our analysis has been performed using CTA mock data of Sculptor. A similar impact can naturally be expected for other $\gamma$-ray sources as well as for other $\gamma$-ray observatories. The numerical setup that has been elaborated for this study, namely combining the particle physics and the astrophysics aspects, could be used on the future observations of the CTA dark matter programme or on the data of any $\gamma$-ray experiment.
Let us finally point out that our demonstration has been carried out in a very simple particle physics model, where the Standard Model is extended by only a singlet scalar, which is the WIMP dark matter candidate. Even in this setup the assumption of a single annihilation channel basically never holds. Consequently, it generally cannot be expected to hold in more complex extensions of the Standard Model involving a richer field content or even several possible dark matter candidates such as, e.g., supersymmetric models \cite{Ellis:1983ew, Martin:1997ns}, the inert doublet model \cite{Deshpande:1977rw, LopezHonorez:2006gr, Goudelis:2013uca}, or scotogenic models \cite{Restrepo:2013aga, Esch:2018ccs, Sarazin:2021nwo}. We suggest to derive upper limits on the dark matter annihilation cross-section within concrete particle physics frameworks rather than considering generic individual annihilation channels.
\section{Expected $\gamma$-ray flux} \label{sec:exp_flux}
In the framework of dark matter (DM) indirect detection, WIMPs annihilate into Standard Model particles which subsequently hadronise and/or decay into observable particles such as $\gamma$ rays. Particular interest is generally given to hadronisation into neutral pions, which decay almost exclusively into photons. The expected differential $\gamma$-ray flux generated by DM annihilation is given by \cite{Bertone:2004pz, armand:tel-03560610}
\begin{equation}
\frac{{\rm d}\Phi_{\gamma}}{{\rm d}E_{\gamma}} ~=~ \frac{1}{\xi} \frac{\langle\sigma v\rangle}{4\pi m_{\chi}^2} \, \sum_f B_f \frac{{\rm d}N_{\gamma}^f}{{\rm d}E_{\gamma}} \times J \,,
\label{Eq:dPhidE}
\end{equation}
where the sum runs over all possible annihilation channels. The prefactor $1/\xi$ depends on the nature of the DM particle: $\xi = 2$ if the DM is its own anti-particle (e.g.\ Majorana fermion or neutral scalar), $\xi = 4$ otherwise (e.g.\ Dirac fermion).
We distinguish two key components: the particle physics factor (before the multiplication sign) carries the DM annihilation cross-section averaged over the velocity distribution $\langle\sigma v\rangle$, the DM particle mass $m_{\chi}$, and the differential spectrum ${\rm d}N_{\gamma}^f/{\rm d}E_{\gamma}$ of each annihilation channel $f$ weighted by their respective branching ratio $B_f$. More details on the particle physics model that we use in this study will be given in Sec.\ \ref{Sec:SingletScalarModel}.
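As an illustration, the flux formula of Eq.\ \eqref{Eq:dPhidE} can be sketched numerically. The function below is a toy implementation with our own naming; the placeholder power-law spectra are purely illustrative and are not the tabulated {\tt PYTHIA} spectra used in the actual analysis:

```python
import math

def differential_flux(E, sigma_v, m_chi, J, branching, spectra, xi=2):
    """Differential gamma-ray flux dPhi/dE of Eq. (Eq:dPhidE).

    branching: dict channel -> branching ratio B_f (should sum to 1)
    spectra:   dict channel -> callable dN/dE [1/GeV]
    Units: sigma_v [cm^3/s], m_chi [GeV], J [GeV^2 cm^-5].
    """
    # Particle physics factor: (1/xi) <sigma v> / (4 pi m_chi^2)
    particle = sigma_v / (xi * 4.0 * math.pi * m_chi**2)
    # Branching-ratio-weighted sum over annihilation channels
    dNdE = sum(B * spectra[ch](E) for ch, B in branching.items())
    return particle * dNdE * J

# Toy example with placeholder power-law spectra:
spectra = {
    "bb": lambda E: 10.0 / E**1.5,
    "WW": lambda E: 5.0 / E**1.5,
}
branching = {"bb": 0.8, "WW": 0.2}
flux = differential_flux(E=100.0, sigma_v=3e-26, m_chi=1000.0,
                         J=10**18.3, branching=branching, spectra=spectra)
```

Weighting each channel spectrum by its branching ratio before summing is exactly what distinguishes the full-model limits from the single-channel ones discussed later.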
The second term (after the multiplication sign) is the so-called astrophysical $J$-factor which describes the DM distribution and the amount of DM annihilations within the source, i.e.\ it quantifies the strength of the signal emitted by the DM annihilations. The $J$-factor is defined as
\begin{equation}
J ~=~ \iint \: \rho_{\rm{DM}}^2\big(r(s, d, \theta)\big) \:{\rm d}s \: {\rm d}\Omega \,,
\label{dwarf_J} \\
\end{equation}
where $\rho_{\text{DM}}$ is the DM density distribution profile defined as a function of the distance $r$ from the centre of the source. Here, $\Omega$ is the solid angle associated with the source. The distance $r$ can also be expressed as $r^2(s, d, \theta) = s^2+d^2-2sd\cos\theta$, where $s$ is the distance from Earth along the line of sight and $\theta$ is the angular distance with respect to the centre of the source. The quantity $d$ is the distance between the Earth and the nominal position of the source. We note that the derivation of the density distribution profile $\rho_{\textrm{DM}}$ is performed through the Jeans analysis using the spherical Jeans equation formalism \cite{Bonnivard:2014kza, binney2011galactic, Bonnivard:2015xpq}. This method makes use of the spectroscopic data to reconstruct the galactic dynamics under the assumptions that the dSphs under consideration are in steady-state hydrodynamic equilibrium, are spherically symmetric, and are non-rotating objects.
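A minimal numerical sketch of the $J$-factor integral of Eq.\ \eqref{dwarf_J} is given below. It assumes an illustrative NFW profile with arbitrary parameter values (the paper instead reconstructs $\rho_{\rm DM}$ from the Jeans analysis; all names and numbers here are our own):

```python
import math

def nfw_density(r, rho_s=0.1, r_s=1.0):
    # Illustrative NFW profile [GeV/cm^3], r in kpc; NOT the
    # Jeans-analysis profile used in the paper.
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def j_factor(d=86.0, theta_max=math.radians(0.1), rho=nfw_density,
             n_s=1000, n_t=100, s_max=None):
    """J = int int rho^2(r(s,d,theta)) ds dOmega via a midpoint Riemann sum.
    r^2 = s^2 + d^2 - 2 s d cos(theta); dOmega = 2 pi sin(theta) dtheta.
    Returned in GeV^2 cm^-5 after the kpc -> cm conversion."""
    s_max = s_max or 2.0 * d
    ds = s_max / n_s
    dt = theta_max / n_t
    J = 0.0
    for i in range(n_t):
        theta = (i + 0.5) * dt
        los = 0.0  # line-of-sight integral at this angular offset
        for j in range(n_s):
            s = (j + 0.5) * ds
            r = math.sqrt(max(s*s + d*d - 2.0*s*d*math.cos(theta), 1e-12))
            los += rho(r)**2 * ds
        J += 2.0 * math.pi * math.sin(theta) * los * dt
    KPC_TO_CM = 3.086e21
    return J * KPC_TO_CM
```

Shrinking the integration cone `theta_max` reduces $J$, which is why the point-like treatment quotes $J_{0.1\degree}$ rather than the total $J$-factor.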
\section{Introduction}
Numerous observational probes indicate that about 85\% of the total matter of the Universe is composed of non-baryonic cold dark matter (DM). This exotic form of matter is responsible for many phenomena at different scales, such as the formation of large-scale structures, the motion of galaxies and clusters, and the bending of the path of light. In addition to astrophysical evidence, the presence of dark matter is confirmed by cosmological measurements. More precisely, within the cosmological $\Lambda$CDM model, the relic density of cold dark matter (CDM) has been restricted to the rather narrow interval
\begin{equation}
\Omega_{\rm CDM}h^2 ~=~ 0.1200 \pm 0.0012
\label{Eq:omh2Planck}
\end{equation}
by combining {\it Planck} data with additional cosmological observations \cite{Planck:2018vyg}. However, the exact nature of dark matter still remains a mystery and represents one of the leading questions in modern particle and astroparticle physics. A popular assumption is that cold dark matter consists of so-called Weakly Interacting Massive Particles (WIMPs) that are predicted by many extensions of the Standard Model (SM). Such a particle is supposed to be stable and massive, and to interact only through weak and gravitational interactions.
Experimentally, the nature of WIMP dark matter can be challenged by different approaches: production at colliders, direct detection, or indirect detection. In the present work, we focus on the latter and assume that WIMPs annihilate into SM particles (bosons, quarks, leptons), which in turn hadronise and/or decay into stable particles such as $\gamma$ rays. The corresponding signals might be detected by $\gamma$-ray telescopes and can be used as probes for indirect DM searches \cite{Feng:2010gw, Hooper:2009zm, MultiMessengers}. High-energy $\gamma$ rays present several advantages compared to charged particles as they do not get deflected by the Galactic magnetic field, and hence their source of emission can be well localised in the sky. Moreover, $\gamma$ rays do not experience significant energy losses during their propagation at Galactic scales. These properties allow us to point our $\gamma$-ray instruments directly at the sources in order to search for signals reaching the Earth.
A vast choice of targets is available for DM indirect searches. We look at DM-rich environments where the DM annihilation rate is the highest to maximise the chance of detection of a possible DM signal. The selection of ideal targets requires a balance between a high enough $J$-factor and dealing with the potential astrophysical $\gamma$-ray background.
Among the most promising targets for DM annihilation searches are the dwarf spheroidal galaxies (dSphs), satellites of the Milky Way Galaxy. These sources lie at ${\cal O}$(100 kpc) galactocentric distance at high latitudes and hence away from the Galactic plane. They host a small amount of luminous mass made of old stellar populations and do not contain much gas or dust. Therefore, new star formation is impossible and dSphs are left with an old stellar population of red giants only. Moreover, dSphs are not rotationally supported but pressure-supported, as their kinematics are dominated by the random motion of the stars, whose amplitudes are driven by the gravitational potential of the galaxy \cite{Biney_book}. The measurements of the galactic dynamics are based on the line-of-sight velocities of individual stars, from which a velocity dispersion profile is derived \cite{Walker:2007ju} to constrain the dark matter distribution profile. Their high mass and low luminosity indicate that the dSphs are DM-dominated with negligible astrophysical background \cite{Cowan:2010js}.
Numerous studies based on data from dSphs obtained from several $\gamma$-ray telescopes have been performed in order to identify a potential excess stemming from DM annihilation. They cover different energy ranges starting from a few tens of MeV with Fermi-LAT \cite{Fermi-LAT:2009ihh} up to several tens of TeV with the Air Cherenkov telescopes such as H.E.S.S.\ \cite{HESS:2006fka}, MAGIC \cite{MAGIC:2014zas}, or VERITAS \cite{Park:2015ysa} and the water Cherenkov detector HAWC \cite{Abeysekara:2017mjj}.
In the absence of any excess in the data over the estimated $\gamma$-ray background, only upper limits on the dark matter annihilation cross-section have been derived as a function of the presumed DM mass. These limits are obtained from either one dSph or by performing a stack of their respective observations with either a continuous spectrum \cite{Fermi-LAT:2015att, Fermi-LAT:2016uux, HESS:2014zqa, HESS:2010epq, HAWC:2017mfa, Viana:2011qaf, HESS:2007ora, Aleksic:2013xea, MAGIC:2017avy, MAGIC:2021mog, VERITAS:2017tif, HESS:2020zwn, HESS:2021zzm} or a mono-energetic line \cite{Fermi-LAT:2015kyq, HESS:2018kom, HESS:2020zwn, HESS:2021zzm}. More recently, combined DM searches have been carried out to increase the statistics and the sensitivity to a potential DM signal. Their results present more constraining upper limits than those of individual experiments \cite{MAGIC:2016xys, Alvarez:2020cmw, Hess:2021cdp}.
The current astrophysical constraints on DM annihilation cross-section are mainly derived assuming one single annihilation channel, e.g.\ annihilation into $W$-bosons or $\tau$-leptons. The main goal of this work is to explore to which extent such limits are affected when relaxing this assumption, i.e.\ taking into account the full annihilation pattern of the presumed DM particles. Contrary to previous studies, we intend to quantify the impact of this more precise procedure on the obtained limits and point out the importance of taking into account the full underlying particle physics model. A secondary goal is to compare our obtained upper limits to the annihilation cross-section predicted in the respective particle physics model.
In a similar context, a recent study of the Cherenkov Telescope Array (CTA) sensitivities to two classes of dark matter portal models has been published in Ref.\ \cite{Duangchan:2022jqn}.
We simulate mock data for CTA and perform a statistical analysis to derive constraints on the DM annihilation cross-section within the singlet scalar dark matter model as an example of a complete particle physics framework. In particular, we compare the results to those obtained assuming only the individual annihilation channels. We focus on the dwarf spheroidal galaxy Sculptor, which is selected for the DM search programme of CTA \cite{CTAConsortium:2017dvg}.
This article is organised as follows: in Sec.\ \ref{sec:exp_flux}, we recall the $\gamma$-ray flux computation and its key components. In Sec.\ \ref{sec:Astro_model}, we describe the properties of the considered dwarf spheroidal galaxy Sculptor as well as the CTA mock data simulations. We explain our statistical analysis in Sec.\ \ref{sec:stat_analysis}. Then, Sec.\ \ref{Sec:SingletScalarModel} presents the particle physics model that we use in this study. We discuss the obtained results in Sec.\ \ref{sec:results} and conclude in Sec.\ \ref{sec:conlusions}.
\section{Singlet scalar dark matter}
\label{Sec:SingletScalarModel}
In order to illustrate the impact of various annihilation channels on the limits derived from indirect dark matter detection experiments, we consider a very simple framework, where a real singlet scalar $S$ is added to the Standard Model particle content \cite{Silveira:1985rk, McDonald:1993ex}. This scalar is odd under a discrete $\mathbb{Z}_2$ symmetry and thus a viable WIMP dark matter candidate. Note that the scalar is its own antiparticle, corresponding to the case $\xi = 2$ in Eq.\ \eqref{Eq:dPhidE}. The scalar potential is given by
\begin{equation}
V_{\rm scalar} ~=~ \mu_H^2 |H|^2 + \lambda_H |H|^4 + \frac{1}{2} \mu_S^2 S^2 + \frac{1}{4} \lambda_S S^4 + \frac{1}{2} \lambda_{SH} S^2 |H|^2 \,.
\end{equation}
After electroweak symmetry breaking, the Higgs doublet $H$ is expressed in terms of the physical Higgs boson $h$ and the vacuum expectation value $v = \langle H \rangle \approx 246$ GeV. Moreover, minimising the potential leads to $m_h^2 = 2 \lambda_H v^2 = -2 \mu_H^2$, the Higgs mass being measured as $m_h = 125.23 \pm 0.17$ GeV \cite{PDG2022}. At tree-level, the physical mass of the singlet scalar is given by
\begin{equation}
m_S^2 ~=~ \mu_S^2 + \frac{1}{2} \lambda_{SH} v^2 \,.
\end{equation}
The phenomenology of the model can then be fully described by only two parameters: the dark matter mass $m_S$ and the scalar coupling parameter $\lambda_{SH}$. Note that the quartic interactions $h^4$ and $S^4$ are irrelevant for dark matter phenomenology (as long as all calculations are performed at tree-level).
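The tree-level relations above can be illustrated in a few lines (a sketch with our own naming; the numerical values are rounded):

```python
import math

V_EW = 246.0   # Higgs vacuum expectation value v [GeV]
M_H  = 125.0   # Higgs boson mass [GeV], rounded

def lambda_h():
    # From minimising the potential: m_h^2 = 2 lambda_H v^2
    return M_H**2 / (2.0 * V_EW**2)

def singlet_mass(mu_s, lam_sh, v=V_EW):
    # Tree-level physical singlet mass: m_S^2 = mu_S^2 + lam_SH v^2 / 2
    return math.sqrt(mu_s**2 + 0.5 * lam_sh * v**2)
```

For instance, a vanishing mass parameter $\mu_S = 0$ with $\lambda_{SH} = 0.1$ already yields a physical singlet mass of about 55 GeV from the Higgs portal term alone.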
Dark matter pair annihilation can occur into final states containing gauge and Higgs bosons, leptons, and quarks. DM annihilation into fermions proceeds solely through $s$-channel Higgs exchange, and will thus depend on the coupling parameter $\lambda_{SH}$ as well as the relevant Yukawa couplings, preferring annihilation into heavy quarks ($b$ and $t$) and $\tau$-leptons. DM annihilation into bosonic states can proceed through $s$-channel Higgs exchange, $t$- or $u$-channel singlet exchange, and through direct four-vertex interactions. Again, the parameter $\lambda_{SH}$ plays a key role in most of the contributing diagrams. Note that DM annihilations into photon ($\gamma\gamma$) or gluon ($gg$) final states involve loop-mediated diagrams and are typically included through effective couplings to the Higgs boson. All relevant Feynman diagrams are shown in Fig.\ \ref{Fig:SingletScalarAnn}.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/feynmandiagrams.pdf}
\caption{Tree-level Feynman diagrams for scalar pair annihilation into fermions ($f = e, \mu, \tau, u, d, c, s, t, b$), gauge ($V=W^{\pm},Z^0$) and Higgs bosons ($h$). Diagrams corresponding to $u$-channels obtained through crossing are not separately depicted. Annihilation into $\gamma\gamma$ and $gg$ final states proceeds through effective couplings to the Higgs boson $h^0$.}
\label{Fig:SingletScalarAnn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth, trim={0.2cm 0.4cm 0.2cm 0.0cm},clip]{figures/fig_grid_mS_lamSH.pdf}
\caption{Dark matter relic density in the $m_S$--$\lambda_{SH}$ plane. The black line corresponds to $\Omega_S h^2 \approx 0.12$, the blue area corresponds to $\Omega_S h^2 < 0.12$. The white region is excluded because of $\Omega_S h^2 > 0.12$.}
\label{Fig:SingletRelicDensity}
\end{figure}
We make use of the package {\tt micrOMEGAs 5.2.13} \cite{MO2001, MO2004, MO2007a, MO2007b, MO2013, MO2018} which includes an implementation of the singlet scalar model to describe the DM phenomenology of the particle physics part of our study. A delicate interplay between the two key parameters $m_S$ and $\lambda_{SH}$ is needed in order to meet the stringent relic density constraint of Eq.\ \eqref{Eq:omh2Planck}. Figure \ref{Fig:SingletRelicDensity} presents the parameter space regions which are cosmologically favoured or excluded in view of the relic density constraint value, while Figs.\ \ref{Fig:Gradient_BR1} and \ref{Fig:Gradient_BR2} illustrate the most contributing DM annihilation channels in terms of branching ratios (colour bars) in the $m_S$--$\lambda_{SH}$ plane. For low dark matter masses, $m_S \lesssim 50$ GeV, the relic density constraint is met for couplings of about $\lambda_{SH} \sim {\cal O}(0.1)$. In this regime, dark matter particle annihilations occur dominantly into $b\bar{b}$ final states due to the larger Yukawa coupling, with subdominant contributions into $\tau^+ \tau^-$, $gg$ and $c\bar{c}$ as shown in Fig.\ \ref{Fig:Gradient_BR1}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/bb_gradient.pdf}~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/tautau_gradient.pdf}\\
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/cc_gradient.pdf}~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/gg_gradient.pdf}
\caption{Gradients showing the relative contributions in terms of branching ratio of the dominant dark matter annihilation channels in the $m_S$-$\lambda_{SH}$ plane for $m_S \lesssim m_W = 80.4$ GeV. We only display parameter points leading to cosmologically viable configurations, i.e.\ $\Omega_S h^2 \leq 0.12$ (see also Fig.\ \ref{Fig:SingletRelicDensity}). The dashed lines indicate the cosmologically preferred region where $\Omega_S h^2 \approx 0.12$. Note the different scales on the colour bars.}
\label{Fig:Gradient_BR1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/WW_gradient.pdf}~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/ZZ_gradient.pdf}\\
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/tt_gradient.pdf}~~~
\includegraphics[width=0.48\textwidth, trim={0.2cm 0.0cm 0.7cm 0.0cm},clip]{figures/hh_gradient.pdf}
\caption{Same as Fig.\ \ref{Fig:Gradient_BR1} for the dominant annihilation channels for $m_S \gtrsim m_W$.}
\label{Fig:Gradient_BR2}
\end{figure}
Around $m_S = m_h/2 \approx 62.5$ GeV, the Higgs-boson resonance increases the annihilation cross-section significantly. This increase has to be compensated by smaller couplings in order to maintain the singlet scalar relic density at $\Omega_S h^2 \approx 0.12$. Consequently, the value of $\lambda_{SH}$ drops as low as $10^{-4}$ in a very small mass interval around the resonance (see Fig.\ \ref{Fig:SingletRelicDensity}). After the resonance region, several kinematical thresholds are crossed at $m_S = m_W \approx 80.4$ GeV, $m_S = m_Z \approx 91.2$ GeV, $m_S = m_h \approx 125.0$ GeV, and $m_S = m_t \approx 175.3$ GeV, where the corresponding annihilation channels open up and dominate the total annihilation cross-section just above the respective kinematical threshold. Note that annihilation into $h^0h^0$ depends more strongly on the coupling $\lambda_{SH}$ leading to the observed non-uniform behaviour in the mass range between 125 GeV and approximately 1 TeV.
Finally, above $m_S \gtrsim 200$ GeV, increasing the dark matter mass requires an increase in the coupling $\lambda_{SH}$ following the relic density constraint. Here, the annihilation cross-section is dominated by the bosonic final states with $W^+ W^-$ (about 50\%), $Z^0Z^0$ (about 25\%), and $h^0h^0$ (about 25\%). While in the following we focus on indirect dark matter detection, an extensive analysis of the singlet scalar model taking into account numerous constraints has been published in Ref.\ \cite{GAMBIT:2017gge}. Let us note that although relatively large couplings $\lambda_{SH} \gtrsim 1$ may be disfavoured by arguments related to perturbativity, we include this part of the parameter space as it allows us to cover a large part of the energy range of CTA.
Based on the various regimes described above, one can see that the assumption of one single DM annihilation channel is therefore not valid, especially around the resonance and the kinematic thresholds in the $m_S$-$\lambda_{SH}$ parameter space.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth, trim={0.2cm 0.2cm 0.2cm 0.0cm},clip]{figures/fig_mS_BRs.pdf}
\caption{Branching ratios of the individual annihilation channels as a function of the dark matter mass $m_S$ when following the cosmologically preferred parameter space region where $\Omega_S h^2 \approx 0.12$ corresponding to the black band in Fig.\ \ref{Fig:SingletRelicDensity}.}
\label{Fig:BR}
\end{figure}
In the following, we assume -- without loss of generality for our study -- that the singlet scalar accounts for the total cold dark matter present in the Universe. We are thus interested in the parameter region where $\Omega_S h^2 \approx 0.12$ according to Eq.\ \eqref{Eq:omh2Planck} manifesting as the black band in Fig.\ \ref{Fig:SingletRelicDensity}. In Fig.\ \ref{Fig:BR}, we show the different branching ratios as a function of the dark matter mass $m_S$ following precisely this parameter space region. For each mass value, the value of $\lambda_{SH}$ has been chosen such that the relic density constraint is satisfied. Again, it becomes clear that, except for the very narrow interval between $m_W$ and $m_Z$, the assumption of a single 100\% branching ratio is never satisfied. Let us finally note that, since this conclusion holds within such a minimal and simple framework, it is also expected to hold in any extension of the Standard Model providing viable DM candidates. Dedicated interpretations within specific particle physics models are therefore in order.
\section{Constraints on DM annihilation cross section}
\label{sec:results}
In the absence of any significant excess found in the data obtained from the observation of, e.g., Sculptor, upper limits on the dark matter (DM) annihilation cross-section $\langle \sigma v \rangle$ can be derived as a function of the DM mass using a log-likelihood ratio test statistic as discussed in Sec.\ \ref{sec:stat_analysis}.
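As a schematic illustration of such a test statistic, the sketch below scans a Poisson log-likelihood ratio for the signal count in the ON region until the one-sided 95\% threshold $2\,\Delta\ln\mathcal{L} = 2.71$ is crossed. It is a deliberately simplified stand-in (known background, no OFF-region profiling, no $J$-factor nuisance parameter) for the full analysis of Sec.\ \ref{sec:stat_analysis}; all function names are ours:

```python
import math

def logL(s, n_on, b):
    # Poisson log-likelihood for the ON region with known background b
    mu = s + b
    return n_on * math.log(mu) - mu   # constant terms dropped

def upper_limit_95(n_on, b, s_step=0.01, s_max=1000.0):
    """One-sided 95% CL upper limit on the signal count s from a
    log-likelihood-ratio scan (stop when TS = 2 Delta lnL >= 2.71)."""
    s_hat = max(n_on - b, 0.0)          # MLE of s, bounded at zero
    L_hat = logL(s_hat, n_on, b)
    s = s_hat
    while s < s_max:
        if 2.0 * (L_hat - logL(s, n_on, b)) >= 2.71:
            return s
        s += s_step
    return s_max
```

In the full analysis, the upper limit on the signal count is then translated into a limit on $\langle \sigma v \rangle$ through Eq.\ \eqref{Eq:dPhidE} folded with the instrument response.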
In the present study, we perform the computation of predicted upper limits based on CTA mock data prepared for 500 hours of observation time. We consider the singlet scalar DM model presented in Sec.\ \ref{Sec:SingletScalarModel} assuming that the scalar field $S$ accounts for all DM present in the Universe according to Eq.\ \eqref{Eq:omh2Planck}. We take into account all relevant annihilation products -- $W^+W^-$, $Z^0Z^0$, $h^0h^0$, $gg$, $b\bar{b}$, $c\bar{c}$, $t\bar{t}$, $e^+e^-$, $\mu^+\mu^-$, $\tau^+\tau^-$, the mono-energetic channel $\gamma \gamma$, and $q\bar{q}$ including the three light quarks $u\bar{u}$, $d\bar{d}$, and $s\bar{s}$ --, all weighted by their respective branching ratio throughout the model parameter space. The differential $\gamma$-ray spectra of all annihilation channels are taken from Ref.\ \cite{Cirelli:2010xx}, obtained using {\tt PYTHIA} (version 8.135) \cite{Sjostrand:2007gs} including the final state radiative corrections. We use the mean $J$-factor value, and its uncertainty $\sigma_J$, $\log_{10}(J_{0.1\degree}/{\rm GeV}^2{\rm cm}^{-5}) = 18.3 \pm 0.3$ \cite{Bonnivard:2015xpq} integrated up to $\theta = 0.1\degree$.
\begin{figure}
\centering{
\includegraphics[scale=0.8, trim={0.2cm 0.2cm 0.05cm 0.0cm},clip]{figures/limit_uncertainties_model_omh2_ms_wrd_refined_500h_raptor_fixed_interp_ww_tau_bb_color.pdf}}
\caption{Upper limits obtained at 95\% confidence level within the singlet scalar DM model taking into account the full annihilation pattern. We also show the limits obtained from three individual annihilation channels ($W^+W^-$, $\tau^+\tau^-$, and $b\bar{b}$) for comparison. The limits are presented as a function of the dark matter mass $m_S$ following the cosmologically preferred region where $\Omega_S h^2 \approx 0.12$ corresponding to the black band in Fig.\ \ref{Fig:SingletRelicDensity}.}
\label{Fig:UL_CTA}
\end{figure}
In Fig.\ \ref{Fig:UL_CTA}, we present the predicted upper limit and its uncertainty bands at the $1\sigma$ and $2\sigma$ confidence levels derived from a sample of 500 Poisson realizations of the background event mock data in the ON and OFF regions. The mean expected upper limit and its uncertainty bands correspond to the mean and the standard deviations at $1\sigma$ and $2\sigma$ of the $\langle \sigma v \rangle$ distribution for each DM mass.
In the low mass regime, the limit becomes more constraining when approaching $m_S \approx m_W \approx 80$ GeV. As the DM particles get heavier, they produce more energetic SM particles which in turn generate more $\gamma$ rays. This implies a lower annihilation cross-section to compensate for the higher $\gamma$-ray yield. We also notice an inflection point at $m_S \approx m_h/2 \approx 62.5$ GeV corresponding to the Higgs resonance. Here, the annihilation rate is increased (see Sec.\ \ref{Sec:SingletScalarModel}) such that the obtained limit decreases.
A striking increase of the limit is then observed at $m_S \approx m_W \approx 80$ GeV, where the annihilation into the $W^+W^-$ channel opens up and dominates the total annihilation cross-section. This channel produces less $\gamma$ rays as compared to hadrons ($b\bar{b}$) (see Fig.~\ref{Fig:Cirelli_spectra_diff} in App.~\ref{sec:App_spectra_cirelli}), such that the limit increases. Our predicted upper limit on $\langle \sigma v \rangle$ reaches $3.8 \times 10^{-24}~\rm{cm}^3\,\rm{s}^{-1}$ at a DM mass of 1~TeV at 95\% confidence level. For $m_S \gtrsim 1$ TeV, the obtained limit becomes less constraining due to decreasing statistics.
Figure\ \ref{Fig:UL_CTA} also indicates the limits obtained assuming DM particle annihilations into the individual annihilation channels $W^+W^-$, $\tau^+\tau^-$, and $b\bar{b}$, i.e.\ assuming a branching ratio of 100\% in each case. For the sake of a proper comparison, we have performed the CTA likelihood analysis on the same simulated dataset for these individual channels. We show in Fig.~\ref{Fig:Comparison_CTA} in App.~\ref{sec:App_CTA_comparison} that our predicted upper limits in the case of the individual channels are compatible with those published by the CTA collaboration \cite{CTAConsortium:2017dvg}.
While the overall shape of our limit within the singlet scalar model follows the results obtained assuming individual annihilation channels, several differences are observed:
\begin{itemize}
\item The limit obtained within the singlet scalar DM model proves to be more conservative than the one from the individual $W^+W^-$ channel. Below $m_S = m_W \approx 80.4$~GeV, no upper limit can be derived as the $W^+W^-$ channel is kinematically forbidden. Above this value, new channels open up (see Sec.\ \ref{Sec:SingletScalarModel}) and hence the total $\gamma$-ray spectrum contains additional contributions. Therefore, we observe a slight difference in favour of the $W^+W^-$ channel which produces more $\gamma$ rays than the remaining channels (see also App.\ \ref{sec:App_spectra_cirelli}). Above approximately 1 TeV, the singlet scalar DM model is dominated by annihilation into $W^+W^-$ (up to approx.\ 62\%) with subdominant contributions into $Z^0Z^0$ (approx.\ 30\%) and $h^0h^0$ (approx.\ 8\%). Here, the relative error ranges between -6\% and -22\%. Note that for masses just above the threshold, the individual $W^+W^-$ channel reaches values of relative errors beyond $100\%$.
\item In all indirect DM searches, the $\tau^+\tau^-$ channel yields the most constraining upper limits since its $\gamma$-ray production is higher than for the other channels \cite{Cirelli:2010xx}. However, in the singlet scalar DM model, the $\tau^+\tau^-$ channel is never dominant (see also Figs.\ \ref{Fig:Gradient_BR1}, \ref{Fig:Gradient_BR2} and \ref{Fig:BR} and the associated discussion in Sec.\ \ref{Sec:SingletScalarModel}). Therefore, treating $\tau^+\tau^-$ as an individual channel translates into an overestimation of the $\gamma$-ray production and consequently leads to more constraining upper limits. Below the $W$-mass threshold, the relative error varies between $12\%$ and $-65\%$. Just above the threshold, the error reaches $-98\%$ before it decreases in magnitude, reaching about $-3\%$ around 100 TeV.
\item Regarding the hadronic channel $b\bar{b}$, the limit obtained within the singlet scalar model is generally more stringent than the one obtained considering this channel alone. For $m_S \lesssim m_W$, we observe an important difference between the two limits of slightly more than one order of magnitude. This is explained by the presence, albeit subdominant, of the $\tau^+\tau^-$ channel, which yields a larger amount of $\gamma$ rays. Although $\tau^+\tau^-$ accounts for only about 10\% of the total annihilation cross-section, its contribution decreases the obtained limits in a significant way. For $m_S \gtrsim m_W$, the $b\bar{b}$ channel is suppressed in the singlet scalar model. As for the $\tau^+\tau^-$ channel discussed above, considering this channel alone leads to an inaccurate estimation of the upper limit. Here, deriving the limit based on $b\bar{b}$ alone yields a less constraining upper limit due to its softer $\gamma$-ray spectrum. In this case, the relative error below the $W$ mass is of the order of $\mathcal{O}(1000\%)$ due to the important discrepancies between the two limits. The error then drops to $-56\%$ at the $W$ mass and then remains in the range of $-10\%$ to $+38\%$ above the $W$ threshold.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth, trim={0.2cm 0.2cm 0.2cm 0.0cm},clip]{figures/fig_mS_sigv.pdf}
\caption{Dark matter annihilation cross-section (solid green line) in the singlet scalar model compared to the expected limit (dashed blue line) presented in Fig.\ \ref{Fig:UL_CTA} as a function of the dark matter mass $m_S$ following the parameter space region where $\Omega_S h^2 \approx 0.12$ corresponding to the black band in Fig.\ \ref{Fig:SingletRelicDensity}.}
\label{Fig:XSec}
\end{figure}
Let us note that combining the individual limits (obtained from the individual annihilation channels assuming a 100\% branching ratio) by simply reweighting them with the corresponding branching ratios from the particle physics models does not lead to an accurate estimation of the complete limit on the annihilation cross-section for all DM masses. While such an approximation may be reasonable in the case where the contributing channels feature similar $\gamma$-ray spectra (e.g.\ in the singlet scalar model for $m_S \gtrsim 1$ TeV), it will not be valid in the case of rather different spectra (e.g. singlet scalar model for $m_S \lesssim m_W$).
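A toy scaling argument makes this point quantitative. Suppose, purely for illustration, that in a counting-limited regime the limit from a single channel scales inversely with its photon yield $Y_c$. The limit on the combined spectrum then goes as $1/\sum_c B_c Y_c$, while reweighting the individual limits gives $\sum_c B_c/Y_c$, which by convexity is always weaker and diverges from the true limit when the yields differ strongly. This is a hypothetical scaling, not the actual CTA statistics.

```python
# Toy scaling sketch (assumption, not the actual CTA likelihood analysis):
# the limit from a single channel is taken to scale as 1/Y_c, with Y_c the
# photon yield of channel c.
def combined_limit(brs, yields):
    """Limit on the branching-ratio-weighted total spectrum."""
    return 1.0 / sum(b * y for b, y in zip(brs, yields))

def reweighted_limit(brs, yields):
    """Naive combination: reweight individual limits by branching ratios."""
    return sum(b / y for b, y in zip(brs, yields))

brs = [0.9, 0.1]
similar = [1.0, 1.1]     # channels with similar spectra: approximation holds
different = [1.0, 30.0]  # very different spectra: approximation fails badly
```

With similar yields the two prescriptions agree to better than a percent; with strongly different yields the naive reweighting overshoots the true combined limit by a large factor.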
We finally show in Fig.\ \ref{Fig:XSec} the comparison of the obtained limit, taking into account the full model information, and the predicted total annihilation cross-section within the singlet scalar model. Although the model would not be excluded by the presented limit, the graph illustrates again to what extent the resonance and the kinematical thresholds affect both the total annihilation cross-section and the expected exclusion limit. In a situation where the two curves are generally closer to each other, the observed fluctuations may easily lead to an exclusion in the corresponding mass range.
\section{Statistical analysis}
\label{sec:stat_analysis}
We perform a log-likelihood ratio statistical test on the mock data in order to constrain the DM annihilation cross-section by setting upper limits. We scan over the DM particle mass ranging from 30~GeV to 100~TeV divided into 100 logarithmically-spaced DM mass bins. In order to capture new features in kinematically specific regions, e.g.\ thresholds of annihilation channels or presence of a resonance, we add a selection of refined mass bins between 76 GeV and 174 GeV (see Figs.\ \ref{Fig:Gradient_BR1}, \ref{Fig:Gradient_BR2} and \ref{Fig:BR} in Sec.~\ref{Sec:SingletScalarModel} for a specific case). We assume a positive signal $\langle \sigma v \rangle > 0$, based on the method proposed in Ref.\ \cite{Cowan}. The test statistic (TS) is defined as
\begin{equation}
\rm{TS} = \left\{
\begin{array}{rcl}
&0
& \:\: \text{~~for} \: \widehat{\langle \sigma v \rangle} > \langle \sigma v \rangle \,, \quad \\ \\
&\displaystyle{-2 \ln \frac{\mathcal{L}(\langle \sigma v \rangle, \doublehat{\boldsymbol{N}}_{\text{B}}(\langle \sigma v \rangle), \doublehat{J}(\langle \sigma v \rangle))}{\mathcal{L}(\widehat{\langle \sigma v \rangle}, \hat{\boldsymbol{N}}_{\text{B}}, \hat{J})}} & \:\: \text{~~for} \: 0 \leq \widehat{\langle \sigma v \rangle} \leq \langle \sigma v \rangle \,, \quad \\ \\
&\displaystyle{-2 \ln \frac{\mathcal{L}(\langle \sigma v \rangle, \doublehat{\boldsymbol{N}}_{\text{B}}(\langle \sigma v \rangle), \doublehat{J}(\langle \sigma v \rangle))}{\mathcal{L}(0, \doublehat{\boldsymbol{N}}_{\text{B}}(0), \doublehat{J}(0))}} & \:\: \text{~~for} \: \widehat{\langle \sigma v \rangle} < 0 \,,\quad
\end{array} \right.
\label{coef_qNS0}
\end{equation}
where $\langle \sigma v \rangle$ is the parameter of interest and ($\boldsymbol{N_\text{B}}$, $J$) are the nuisance parameters. The denominator contains the annihilation cross-section $\widehat{\langle \sigma v \rangle}$, the vector of background event numbers $\hat{\boldsymbol{N}}_{\text{B}}$, and the $J$-factor value $\hat{J}$ that unconditionally maximize the likelihood function. The numerator contains the quantities $\doublehat{\boldsymbol{N}}_{\text{B}}(\langle \sigma v \rangle)$ and $\doublehat{J}(\langle \sigma v \rangle)$, the vector of background event numbers and the $J$-factor value that maximize the likelihood function conditionally for a given annihilation cross-section $\langle \sigma v \rangle$. The upper limit on $\langle \sigma v \rangle$ for a given DM mass is the value at which the test statistic $\rm TS$ reaches its criterion value. In this work, we derive constraints on $\langle \sigma v \rangle$ at 95\% confidence level, which corresponds to a criterion value ${\rm TS} = 2.71$ for a one-sided test, following previous $\gamma$-ray telescope analyses.
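The inversion of the test statistic for the limit amounts to a one-dimensional root search: scan $\langle \sigma v \rangle$ upward until $\rm TS$ crosses 2.71. The quadratic toy profile below is an assumption standing in for the full profiled likelihood, valid in the Gaussian limit.

```python
import math

# Toy test statistic: -2 ln(likelihood ratio) for a Gaussian-limit profile,
# zero below the best-fit value as in the piecewise definition above.
def ts(sv, sv_hat=0.0, sigma=1.0):
    if sv <= sv_hat:
        return 0.0
    return ((sv - sv_hat) / sigma) ** 2

def upper_limit(ts_func, crit=2.71, lo=0.0, hi=100.0, tol=1e-9):
    """Bisect for the smallest sv with ts_func(sv) >= crit."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ts_func(mid) < crit:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy profile the 95\% C.L. limit is analytically $\sqrt{2.71}\,\sigma$ above the best fit, which the bisection reproduces.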
The total likelihood function $\mathcal{L}$ is the product of a Poisson likelihood $\mathcal{L}^{\mathcal P}_i$ on the events of all energy bins $i$ with a log-normal distribution $\mathcal{L}^J$ of the $J$-factor, which reads
\begin{equation}
\mathcal{L}\big(\langle \sigma v\rangle, \boldsymbol{N_\text{B}}, J\big) ~=~ \prod_i \mathcal{L}^{\mathcal{P}}_i \big( N^i_{\text{S}}(\langle \sigma v\rangle,J), N^i_{\text{B}} \big| N^i_{\text{ON}}, N^i_{\text{OFF}},\alpha \big) \times \mathcal{L}^J(J|\bar{J},\sigma_J) \,,
\end{equation}
where $N^i_{\text{S}}$ is the number of predicted signal events for a given energy bin $i$, and $N^i_{\text{B}}$ the associated number of expected background events, with $\boldsymbol{N_\text{B}}$ the corresponding vector. The values $N^i_{\text{ON}}$ and $N^i_{\text{OFF}}$ represent the number of ON and OFF events in the energy bin $i$, respectively, and $\alpha$ is the acceptance corrected exposure ratio between both ON and OFF regions. The energy bins are logarithmically-spaced and, for the sake of sufficient statistics, they are merged with the next neighbouring one if they contain less than four ON or OFF events \cite{Feldman:1997qc}. For each energy bin $i$, $\mathcal{L}^{\mathcal{P}}_i$ is the product of two Poisson likelihood functions, corresponding to the ON and OFF regions, respectively,
\begin{equation}
\mathcal{L}^{\mathcal{P}}_i ~=~ \frac{\big(N^i_{\text{S}}(\langle \sigma v \rangle, J) + N^i_{\text{B}}\big)^{N^i_{\text{ON}} }}{N^i_{\text{ON}}!} e^{-(N^i_{\text{S}} + N^i_{\text{B}})}
\times \frac{\big(\alpha N^i_{\text{B}}\big)^{N^i_{\text{OFF}}}}{N^i_{\text{OFF}}!} e^{-\alpha N^i_{\text{B}}} \,.
\end{equation}
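The per-bin ON/OFF likelihood above can be sketched directly in code; all event counts below are toy numbers, and the exponential-family form uses the standard Poisson log-probability.

```python
import math

# Per-bin ON/OFF Poisson log-likelihood mirroring the product above:
# mu_ON = N_S + N_B and mu_OFF = alpha * N_B.
def log_poisson(n, mu):
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def logL_bin(n_s, n_b, n_on, n_off, alpha):
    return log_poisson(n_on, n_s + n_b) + log_poisson(n_off, alpha * n_b)
```

In the profiling step, the background expectation $N^i_{\text{B}}$ in each bin is adjusted to maximize this expression for every trial value of $\langle \sigma v \rangle$.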
Here, $N^i_{\text{S}}$ is the predicted number of signal events in the energy bin $i$ obtained through the convolution of the expected differential $\gamma$-ray flux given in Eq.\ \eqref{Eq:dPhidE} with the energy-dependent acceptance function $A_{\rm{eff}}(E_{\gamma})$, the observation time $T_{\rm{obs}}$, and the energy resolution function $R(E_{\gamma}, E'_{\gamma})$ which relates the detected energy $E'_{\gamma}$ to the true energy $E_{\gamma}$ of the events.
We then perform the integral of the convolution over the bin energy width $\Delta E_i$. The number of signal events obtained for an energy bin $i$ is computed as
\begin{equation}
N_{\text{S}_i}\big(\langle \sigma v \rangle, J\big) ~=~ J \times \frac{1}{\xi} \frac{\langle \sigma v \rangle}{4\pi m_{\chi}^2} \int_{\Delta E_i} \int^{\infty}_0 \sum_f B_f \frac{{\rm d}N^f_{\gamma}}{{\rm d}E_{\gamma}} \: R(E_{\gamma}, E'_{\gamma}) \: A_{\rm{eff}}(E_{\gamma}) \: T_{\rm{obs}}\: {\rm d}E_{\gamma} \: {\rm d}E'_{\gamma} \,.
\end{equation}
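Numerically, the double convolution can be approximated on a grid. The sketch below uses a midpoint rule with a Gaussian energy resolution of an assumed 10\% relative width; the response functions, spectra, and the `prefactor` argument (standing in for $J \langle \sigma v \rangle / (4\pi \xi m^2)$ times the spectrum sum) are all placeholders, not the CTA instrument response.

```python
import math

def R(E_true, E_rec, rel_res=0.1):
    """Gaussian energy resolution, normalized in reconstructed energy
    (the 10% relative width is an assumption for illustration)."""
    s = rel_res * E_true
    return math.exp(-0.5 * ((E_rec - E_true) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def n_signal(dNdE, A_eff, T_obs, prefactor, bin_lo, bin_hi, n=200):
    """Midpoint-rule estimate of the double integral over E_rec (the bin)
    and E_true (a window wide enough to cover the resolution tails).
    `prefactor` stands in for the flux normalization in front of the integral."""
    total = 0.0
    dEr = (bin_hi - bin_lo) / n
    Emin, Emax = 0.5 * bin_lo, 2.0 * bin_hi
    dEt = (Emax - Emin) / n
    for i in range(n):
        Er = bin_lo + (i + 0.5) * dEr
        inner = 0.0
        for j in range(n):
            Et = Emin + (j + 0.5) * dEt
            inner += dNdE(Et) * R(Et, Er) * A_eff(Et) * dEt
        total += inner * dEr
    return prefactor * T_obs * total
```

A quick sanity check: with a flat spectrum, unit acceptance, and unit normalization, the number of events in a bin reduces to approximately the bin width, since the resolution function is normalized.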
In our analysis, we take into account the $J$-factor uncertainties with a log-normal distribution given by
\begin{equation}
\mathcal{L}^J ~=~ \frac{1}{\ln(10) \sqrt{2\pi} \,\sigma_J \, J} \, \exp\left[-\frac{\big( \log_{10}J - \log_{10}\bar{J}\big)^2}{2 \sigma^2_J}\right] \,,
\label{L^J}
\end{equation}
where $J$ is the true value of the $J$-factor, $\bar{J}$ is the mean $J$-factor, and $\sigma_J$ is the uncertainty of $\log_{10}J$.
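As a consistency check, the expression above is a properly normalized log-normal density in $J$. The sketch below evaluates it with the central value quoted for Sculptor and verifies the normalization numerically.

```python
import math

def logJ_likelihood(J, Jbar, sigma_J):
    """Log-normal J-factor likelihood of the equation above."""
    norm = math.log(10.0) * math.sqrt(2.0 * math.pi) * sigma_J * J
    x = math.log10(J) - math.log10(Jbar)
    return math.exp(-x * x / (2.0 * sigma_J ** 2)) / norm

# Values quoted above: log10(J / GeV^2 cm^-5) = 18.3 +/- 0.3
Jbar, sigma_J = 10.0 ** 18.3, 0.3
```

Integrating over $J$ (conveniently in $\log_{10}J$, with ${\rm d}J = J \ln 10 \, {\rm d}\log_{10}J$) returns unity, as required of a probability density.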
\section{Introduction} \label{sec:Introduction}
The near-ultraviolet (NUV; $\sim$1700-3200 \AA) is a historically under-studied region for stellar flares, but has an enormous potential for diagnosing flare physics. In flare models such as \citet{Kowalski2013}, the broad-band visible light from a flare comes from a sunspot-sized patch of hot, $\sim$10,000 K plasma on the stellar photosphere, making NUV measurements potentially more efficient at constraining this flaring mechanism than any other wavelength range. The NUV is also thought to be an essential energy source for triggering abiogenesis, the origin of life, and open questions remain about whether the cool photospheres of M dwarfs could provide enough NUV photons to initiate chemical reactions that ultimately lead to the production of RNA precursors and whether flares could make up for the deficit in NUV energy \citep{Ranjan2017, Rimmer2018}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Visit18_flare.png}
\caption{Swift\ grism data from the brightest flaring epoch show that AU Mic regularly exceeds the abiogenesis threshold and that the flaring emission is comprised of chromospheric emission lines, Balmer continuum, and possibly blackbody continuum. Left: the average stellar flux from 2000-2800 \AA\ is shown as a function of time from the start of Visit 18. The red shaded area shows the abiogenesis threshold level from \cite{Rimmer2018} at 0.31 AU (AU Mic's conservative inner habitable zone edge), and the dashed grey line shows the quiescent stellar flux level. Right: a flare-only spectrum (i.e., the quiescent stellar spectrum is subtracted from the in-flare spectrum) of the brightest data point from the left panel is shown as a black line with the gray shading representing the 1-$\sigma$ uncertainty. Various emission lines and the Balmer continuum are labeled. The colored lines show blackbody curves with temperatures ranging from 5000-18,000 K normalized to match the spectrum at 4200 \AA.}
\label{fig:flares}
\end{figure}
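The blackbody comparison in the figure can be reproduced by pinning a Planck function to the measured flux at 4200 \AA. The CGS constants below are standard; the anchor flux value is an arbitrary placeholder.

```python
import math

# CGS physical constants
h = 6.62607015e-27   # erg s
c = 2.99792458e10    # cm / s
kB = 1.380649e-16    # erg / K

def planck_lambda(wav_cm, T):
    """Planck spectral radiance B_lambda (erg s^-1 cm^-2 cm^-1 sr^-1)."""
    x = h * c / (wav_cm * kB * T)
    return 2.0 * h * c**2 / wav_cm**5 / (math.exp(x) - 1.0)

def normalized_bb(wav_A, T, anchor_A=4200.0, anchor_flux=1.0):
    """Blackbody shape scaled to match a measured flux at the anchor wavelength."""
    return anchor_flux * planck_lambda(wav_A * 1e-8, T) / planck_lambda(anchor_A * 1e-8, T)
```

As expected from Wien behavior, hotter curves rise more steeply into the NUV relative to the 4200 \AA\ anchor, which is what allows the temperature range to be bracketed by eye in the figure.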
\section{Observations \& Reductions} \label{sec:ObservationsReductions}
We designed a pilot program to determine the viability of NUV spectroscopic monitoring of flare stars with the Neil Gehrels Swift Observatory (Swift) UltraViolet Optical Telescope (UVOT) grism. The UVOT UV grism is optimized for 1700-2900 \AA\ first-order spectra at spectral resolving power $R\sim$150. We targeted AU Mic, a pre-main sequence M0Ve star at 9.7 pc, for its brightness (V=8.6 mag), high flare rate ($\sim$0.5 hour$^{-1}$ based on archival UVOT UVM2 photometry; E. Gilbert et al., 2022 in preparation), and sky location away from the Galactic center where source confusion and oversubscription of Swift\ observing time are less of a concern. We observed AU Mic for 52 ks (14.4 hours) over 42 individual spacecraft visits between 2020 April 5 and 2021 April 2 using the UVOT 0x0384 mode, which alternates between the UV grism (UGRISM, clocked mode, 120~s exposures) and UVM2 images (30~s exposures, used for aspect correction, 1997-2495 \AA), resulting in $\sim$14-18 minutes elapsed time per visit. After overhead, the total target exposure time was 34.5 ks (9.6 hours). Event mode is generally not used with the grism because of the burden on telemetry.
We reduced the UVOT data using standard procedures from \texttt{HEASoft} \citep{Heasoft} and \texttt{uvotpy} \citep{Kuin2014}. We corrected for offsets in the \texttt{uvotpy} wavelength solutions by fitting the blended Mg II h\&k emission lines with a Gaussian and shifting the centroid to 2800 \AA\ \citep{Kuin2015}. We corrected for contamination by background stars by visually inspecting every 2D spectrum and using linear interpolation to splice out parts of the spectrum contaminated by background stars. These corrections were different for every spectrum because the roll angle of the telescope varies over the course of a year.
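The wavelength-solution correction can be sketched as follows. For brevity, a flux-weighted moment centroid stands in for the Gaussian fit used in the actual reduction, and the toy line below is offset by 5 \AA\ from the Mg II rest wavelength.

```python
import math

def centroid(wave, flux):
    """Flux-weighted moment centroid (simple stand-in for a Gaussian fit)."""
    return sum(x * f for x, f in zip(wave, flux)) / sum(flux)

def correct_wavelengths(wave, flux, rest=2800.0):
    """Shift the wavelength solution so the line centroid lands at `rest`."""
    shift = rest - centroid(wave, flux)
    return [x + shift for x in wave]

# Toy blended line centered at 2805 A, i.e. a 5 A offset to be corrected.
wave = [2795.0 + i for i in range(21)]
flux = [math.exp(-0.5 * ((x - 2805.0) / 3.0) ** 2) for x in wave]
```

After the shift, the line centroid sits at 2800 \AA, mimicking the zero-point correction applied to each spectrum.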
\section{Results} \label{sec:Results}
We established median flux levels for the ensemble UVM2 (11.4 counts/s) and grism data and searched for data that lay $>$3$\sigma$ above the median in order to identify flares. In the total 34.5 ks spent on target, we detected four flares (one shown in Figure~\ref{fig:flares}) and three additional epochs where the count rate was significantly elevated above the median, but the light curve morphology was inconsistent with flaring. Note, however, that the cadence and total duration of our visits do not allow us to rule out flaring for these three elevated-flux epochs. Without event mode, 30-120~s cadence combined with 14-18 minute visit durations is a challenging setup for fully resolving flare light curves. We could not determine the start and stop times of any of our flares, so we adopt the visit durations (16.6-18.2 minutes) as the minimum flare durations. For the three elevated epochs, the visit durations were only 14.2 minutes.
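The $>$3$\sigma$-above-median criterion can be sketched on a toy light curve. The MAD-based robust scatter estimate below is our choice for illustration (a plain standard deviation would be inflated by the flare itself), not necessarily the estimator used in the survey.

```python
import statistics

# Flag epochs whose count rate lies more than n_sigma above the median.
# Sigma is estimated robustly from the median absolute deviation (MAD),
# so that a bright flare does not inflate the scatter estimate.
def flag_flares(rates, n_sigma=3.0):
    med = statistics.median(rates)
    mad = statistics.median([abs(r - med) for r in rates])
    sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma
    return [i for i, r in enumerate(rates) if r > med + n_sigma * sigma]

# Toy UVM2-like count rates (counts/s) with one flare-like spike.
rates = [11.2, 11.5, 11.4, 11.3, 30.0, 11.6, 11.4]
```

On this toy series only the spiked epoch is flagged; in practice the light-curve morphology within the visit is then inspected to distinguish flares from other elevated-flux epochs, as described above.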
The dates corresponding to the four flaring epochs are: 2020 May 10, 2020 July 05, 2020 July 31 (Figure~\ref{fig:flares}), and 2020 October 18. The three periods of elevated flux that could not confidently be identified as flares occurred on 2020 April 26, 2020 June 28, and 2020 July 26. The observed flare rate of 0.42-0.73 hour$^{-1}$ is consistent with that expected from archival UVM2 photometry of AU Mic (E. Gilbert et al., 2022 in preparation).
Sufficient NUV photons may be a necessary ingredient for triggering photoreactions that kick off prebiotic chemical pathways. We compared all spectra to the 2000-2800 \AA\ abiogenesis threshold from \cite{Rimmer2018} to determine how flares could impact the ability of M dwarf stars to meet this flux threshold. Unlike older, later-type M dwarfs, AU Mic provides $\sim4\times$ the abiogenesis threshold energy during quiescence to its conservative inner-habitable zone edge (0.31 AU; \cite{Kane2022}). Our results indicate that AU Mic spends 98\% of the time $>$5$\times$ above the threshold, 26\% $>10\times$ above the threshold, 4\% $>25\times$ above the threshold, and 0.3\% $>50\times$ above the threshold. Note that AU Mic's two known Neptune-sized exoplanets at 0.06 and 0.11 AU are closer to the star than the habitable zone \citep{Cale2021}.
We subtracted the median spectrum from each of the flare spectra in order to analyze the type of emission processes responsible for the flux increases we observe during flaring and/or elevated epochs. We obtained useful flare spectra between $\sim$1730-5000 \AA\ for our largest flare (Figure~\ref{fig:flares}), which lasted $>$18.2 minutes and released a minimum of 6$\times$10$^{33}$ erg in the 1730-5000 \AA\ bandpass, with roughly equal energy contribution from emission above and below the Balmer series limit at 3646 \AA. We find Balmer emission from H$\beta$ through the Balmer series limit at 3646 \AA, and \ion{Ca}{2}\ H\&K emission. Continuum in the 4000-5000 \AA\ region is roughly flat and consistent with a $\sim$6,000-15,000 K blackbody. Shortward of 3646 \AA, we observe bright Balmer continuum emission that dominates over any hot blackbody component. The emission lines shortward of the Balmer series limit are blended multiplets from \ion{Mg}{2}\ and \ion{Fe}{2}. This flare spectral energy distribution is consistent with that found for GJ 1243 (M4Ve) by \cite{Kowalski2019}.
\section{Conclusion} \label{sec:Conclusions}
The goal of this Swift\ pilot survey of AU Mic was to explore the utility of Swift's UV grism mode for stellar flare monitoring. Aside from the Hubble Space Telescope, which has detector protection restrictions in place against flaring M dwarfs, Swift's grism uniquely provides NUV spectroscopy for studying stellar flares. We have shown that although the 0x0384 mode's lack of event mode and short visit duration make determining flare morphology and total duration challenging, flux-calibrated flare spectra are readily measurable. Future Swift\ flare surveys should explore the feasibility of consecutive executions of the 0x0384 mode so that individual visits could extend longer.
\begin{acknowledgments}
The authors thank Paul Kuin for advice in setting up this observing program and using \texttt{uvotpy}. This work was supported by NASA's Swift\ Guest Investigator Program under award 80NSSC20K1112.
\end{acknowledgments}
\section{Introduction}
The growing popularity of smartphones and tablets has resulted in the increasing demand for high data rate services, and a huge amount of data traffic normally needs to be transmitted through cellular networks, which in turn leads to severe traffic overload problems. Recently, Device-to-Device (D2D) communication has emerged as a new data-offloading solution by enabling direct communication between two mobile users without traversing the base station (BS) \cite{D2Dsurvey}.
D2D communication can be implemented over the cellular spectrum (i.e. inband) or the unlicensed spectrum (i.e. outband).
Inband D2D can be further classified into spectrum overlay and spectrum underlay.
In the overlay scenario, the cellular and D2D transmitters use orthogonal time/frequency resources, while in the underlay scenario the D2D transmitters access the same time/frequency resources occupied by cellular users.
The rate performance is evaluated in \cite{D2DoverlayAndrews} for both overlay and underlay scenarios. It is observed that D2D mobiles in both scenarios can enjoy much higher data rates than regular cellular mobiles. As for cellular mobiles in the overlay scenario, their rate performance also improves due to the offloading capability of D2D communication.
Besides performance improvement over the pure cellular mode, inband overlay D2D is also more tractable in analysis since it does not interfere with regular cellular mobiles or suffer from random interference from unlicensed band.
In \cite{D2DoverlayAndrews} the authors use a simple spatial Aloha access scheme to support D2D scheduling.
In this paper we assume that all D2D links use carrier sense multiple access (CSMA) as the multiple access scheme to share a dedicated inband overlay channel.
Spatial reuse is considered, i.e., different transmit-receive (Tx-Rx) pairs at a sufficient distance away that do not cause interference are allowed to transmit simultaneously\cite{LiewBoE}. Although D2D communication does not route the data traffic through the cellular network, the available network infrastructure can still be an effective means to exert light control over all the D2D links when performing resource allocation. In our model, the D2D links have heterogeneous service requirements and different willingness to pay, and the central entity (e.g., evolved node B (eNB))\cite{D2Dsurvey} controls the transmission behaviors of all links by modifying the price per unit service rate.
A simple example is given in Fig. \ref{D2D}, where there are three D2D links and a single BS oversees/controls them in the control plane. Each D2D link consists of a Tx-Rx pair, and hence the D2D links resemble the situation where distributed pairs are transmitting. Hereafter, the terms ``D2D link", ``CSMA Tx-Rx pair" and ``CSMA user" are used synonymously. The involvement of the cellular network in the control plane is the key difference between our system model and that defined in Mobile Ad-hoc NETworks (MANET)\cite{MANET}. Moreover, D2D communication is mainly used for single-hop communications, which does not inherit the multihop routing problem of MANET and wireless sensor networks\cite{JMchenDataGatheringTNET}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth, trim=0 90 0 0,clip]{Fig1_D2Doverlay}
\caption{An Example of Overlay D2D Communications. The solid arrow represents the data communication of a D2D link; the dashed arrow represents the control information exchange between a D2D transmitter and the BS.} \label{D2D}
\vspace{-1em}
\end{figure}
\IEEEpubidadjcol
When spatial reuse is considered in CSMA methods, the carrier-sense relationships among the CSMA users become \textit{non-all-inclusive}, i.e., each CSMA user may only sense a subset, but not all other users. As commented by Liew \textit{et al.}\cite{LiewBoE}, it is extremely difficult to extend the analytic methods for all-inclusive carrier-sense networks (e.g., \cite{Bianchi,YangXiaoDCFtvt,BianchiDCFrefineTVT}) to the non-all-inclusive case because of the inhomogeneity in the state spaces of the CSMA users. In fact, the problem of computing user throughputs in a spatial CSMA network is shown to be NP-hard\cite{LiewBoE}. In order to perform tractable analysis, existing literature \cite{FirstIdealCSMA,KarCSMA, LiewBoE, JiangLibinCSMA} have adopted an \textit{ideal CSMA network (ICN)} model to capture the essence of the CSMA mechanism under spatial reuse.
In this paper we also leverage on the ICN model to address the physical channel access issue.
Our contributions in this paper are twofold. First of all, we propose a Stackelberg game\cite{Fudenberg} which maximizes the total throughput of the D2D links, where these links have heterogeneous utility functions. The BS in the cellular network will act as a Stackelberg leader to regulate the D2D link transmissions by modifying the service price, so that the payoff of each individual D2D link can be maximized while maintaining the D2D network to function within the feasible throughput region determined by the CSMA access mechanism. The problem is shown to be quasi-convex and can be solved by a sequence of equivalent convex optimization problems. The pricing strategies are designed so that the network always operates within the feasible throughput region.
Secondly, each D2D link will acquire a rate based on its actual demand and willingness to pay.
We explicitly model the possible selfish behaviors among the D2D links with spatial reuse. Under a given network price, the transmitter of each D2D link competes for channel usage by choosing its transmission parameters in order to maximize its own payoff. Such user dynamics are studied in the setting of non-cooperative games, and the resulting CSMA game model serves as the follower-subgame in the proposed Stackelberg game. An algorithm is proposed followed by proofs for the existence and convergence of the equilibrium solution.
The rest of the paper is organized as follows. We introduce related works in Section \ref{RelatedWorks} and the network model in Section \ref{CSMAModel}, and summarize some important results on ICN. In Section \ref{SectionFeasibleThroughput}, the feasible throughput region for a CSMA network is defined, while some important properties are derived. The Stackelberg game is detailed in Section \ref{Stackelberg}. Performance of the proposed game is evaluated through simulations in Section \ref{Simulation}. We conclude the paper in Section \ref{Summary}.
The notations used in this paper are as follows. An arrow over a variable $\vec{\cdot}$ represents a vector, or a system state consisting of the binary status of the $N$ links. The variable $\theta$ is used to denote the throughput from the channel access point of view, where the solution is controlled by the ICN model. On the other hand, $\tilde{\theta}$ is used to represent a desired throughput from the link layer aspect, whose value is derived from the price and link utility function. If the final solution is within the feasible throughput region, these two values should match. There are two types of equilibria to be differentiated: the subgame equilibrium is denoted by a superscript `*' whilst that for the Stackelberg game is denoted by a superscript `opt'.
\section{Related Works}\label{RelatedWorks}
\subsection{Spectrum Sharing Games for D2D Communication}
In the licensed spectrum, a potential D2D pair can communicate through conventional cellular mode (relay through the BS), dedicated D2D mode (spectrum overlay), or underlay sharing mode (share with cellular users).
Game theoretic approaches have been applied in D2D communications for mode selection and resource management \cite{GameD2Doverview}.
In particular, Cai \textit{et al}. \cite{CaiYuemingICC} model the spectrum sharing mode selection as a coalition formation game, and propose a distributed coalition formation algorithm to improve the total achievable rate. Wu \textit{et al}. \cite{WuDanTVT} study the underlay spectrum sharing problem among potential D2D pairs and cellular users with quality-of-service requirements. A coalition formation game and a distributed coalition formation algorithm are proposed to decide for the most energy-efficient spectrum sharing strategy.
The focus of these works is to look for efficient spectrum sharing solutions among the D2D pairs and cellular users, and spectrum underlay is adopted in the sharing mode under the constraints on the amount of mutual interference. In this paper, we focus on spectrum overlay mode, in which the number of orthogonal channels is limited and multiple D2D pairs share a common channel via distributed transmission scheduling.
\subsection{CSMA Distributed Transmission Scheduling}
A survey on applying game theory to CSMA can be found in \cite{GameCSMAsurvey}, where several non-cooperative contention control games in CSMA methods are presented. For example, Jin and Kesidis \cite{JinKesidisCSMA} analyze the non-cooperative user behaviors in CSMA wireless networks where users have the freedom to choose the contention window sizes according to the network congestion level. The existence and uniqueness of the equilibrium point are investigated, as well as a distributed iterative method to approach the equilibrium. However, as commented by \cite{GameCSMAsurvey}, most of the proposed CSMA games assume all-inclusive carrier sensing.
The analysis cannot be directly applied in the presence of spatial reuse.
The ICN model captures the essence of spatial reuse CSMA networks. Jiang and Walrand \cite{JiangLibinCSMA} develop an elegant distributed CSMA algorithm for throughput and total utility maximization based on the ICN model, under assumptions on the concavity and monotonicity of the user utility functions. Their algorithm requires no knowledge of the underlying link topology, and the transmission parameters can be updated distributively. However, the approach implicitly assumes best-effort transmission aimed at total utility maximization, with no explicit treatment of users that have heterogeneous rate requirements and different willingness to pay. In other words, while optimizing the sum-rate, there is no mechanism to weigh the individual user utilities so as to differentiate the services. Moreover, the global optimization approach does not reflect the fact that users are selfish and behave non-altruistically in maximizing their own payoffs. In fact, Cagalj \textit{et al.} have shown that even the presence of a few selfish users may lead such a CSMA network to collapse \cite{SelfishCSMA}, while proper pricing or penalty mechanisms lead to overall improvement \cite{PenaltyWLAN}.
Indeed, when users with heterogeneous rate requirements coexist in the network and the collective target rates are outside the feasible throughput region, the self-interested actions of the CSMA users would drive the network into a heavily congested state.
In this paper we incorporate a game theoretic framework into the ICN model to harness the selfish behaviors of a group of non-cooperative spatially distributed CSMA users with heterogeneous rate requirements.
\section{Spatial Reuse CSMA Network}\label{CSMAModel}
\subsection{Spatial Reuse and Contention Graph}
Assume there are $N$ D2D links in the network sharing a dedicated inband overlay channel via CSMA-like random access. These D2D links can transmit in the same frequency band simultaneously if they do not cause any performance degradation to each other. We assume that the CSMA network is hidden-node-free, which can be achieved by properly setting the carrier-sensing power threshold as in \cite{HiddenNodeTVT,HiddenNodeFreeCS}.
Such a spatial reuse model can be characterized by a ``contention graph" as in \cite{LiewBoE}. For simplicity, only a connected network is considered; a disconnected network can be divided into several independent connected sub-networks that are dealt with separately. We assume that the contention graph is undirected and that the transmission queue of each D2D link is continuously backlogged, i.e., the transmitter of every D2D link always has a packet to transmit to its designated receiver. An example with three D2D links is shown in Fig. \ref{pair}, where D2D links 1 and 3 can transmit concurrently without collisions, but neither of them can transmit together with link 2. In this example, link 2 is a neighbor of link 1, but link 3 is not.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth, trim=0 0 0 0,clip]{Fig2_pair}
\caption{3 Tx-Rx Pairs and the corresponding Contention Graph\cite{AlohaGamesSpatialReuse}. In the upper part of the figure, the solid-thick arrow represents the transmission link from a transmitter to its designated receiver; the solid-thin and the dash-thin arrows represent the non-negligible and negligible interference, respectively.} \label{pair}
\vspace{-1em}
\end{figure}
\subsection{Ideal CSMA Network Model}
In the CSMA random access method, a link senses the channel before transmitting. Based on such a carrier-sensing relationship, a link will refrain from transmitting if any of its neighbors is transmitting.
In the ICN model, each link maintains a countdown timer, whose value $t_{cd}$ is modelled as a continuous random variable with an arbitrary distribution\cite{LiewBoE}. The timer value $t_{cd}$ counts down if the channel is sensed as idle, and is frozen if the channel is sensed as busy. When the channel becomes idle again, the countdown of $t_{cd}$ resumes until $t_{cd}=0$, upon which the link transmits a packet. The transmission time $t_{tr}$ is a random variable with an arbitrary distribution. For simplicity, we have adopted uniform distributions for both $t_{cd}$ and $t_{tr}$ in our simulations.
At any time, a link is either transmitting or idle. Denote the state of link $i$ as $s_i\in \{0, 1\}$, where $s_i=1$ if link $i$ is transmitting and $s_i=0$ otherwise. When $s_i=0$, link $i$ is either actively counting down or frozen, depending on whether a neighboring link $j$ is transmitting or not. We shall denote the system state of an $N$-link ICN by an $N$-tuple binary vector $\vec{s}=[s_1,s_2,\cdots,s_N]$ or simply by a string $s_1s_2\cdots s_N$. Notice that $s_i=s_j=1$ is not allowed if links $i$ and $j$ are neighbors, because the two links can sense each other, and the probability that they count down to zero simultaneously is negligibly small in the ICN model since the countdown times are continuous random variables\cite{LiewBoE}. Therefore, each feasible state corresponds to an independent set\cite{LiewBoE} of the contention graph.
For the example in Fig. \ref{pair}, the five independent sets are $\emptyset$, $\{1\},\{2\},\{3\},\{1,3\}$. By default, we also include $\emptyset$, which corresponds to $\vec{s}=\vec{0}$, as an independent set. The collection of these feasible system states is denoted by the set
\begin{equation}\label{SystemStates}
\mathcal{S}=\{[0,0,0],[1,0,0],[0,1,0],[0,0,1],[1,0,1]\}.
\end{equation}
If we denote the state $\vec{s}$ with $s_j=0,\forall j\in\mathcal{N}=\{1,2,\cdots,N\}$ as $\vec{e}_0$, and the state $\vec{s}$ with $s_i=1, s_j=0, \forall j\neq i$ as $\vec{e}_i$, then $\mathcal{S}$ can be denoted as
\begin{equation}\label{SystemStatesE}
\mathcal{S}=\{\vec{e}_0,\vec{e}_1,\vec{e}_2,\vec{e}_3,\vec{e}_1+\vec{e}_3\}.
\end{equation}
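As an illustration of this correspondence, the short sketch below (our illustration, not part of the cited works; the helper name \texttt{feasible\_states} is hypothetical) enumerates the independent sets of a contention graph by brute force and reproduces $\mathcal{S}$ for the three-link example:

```python
# Sketch: enumerate the feasible system states of an ICN as the
# independent sets of its contention graph (links are 0-indexed here).
from itertools import combinations

def feasible_states(n_links, conflict_edges):
    """Return all independent sets of the contention graph as binary tuples."""
    conflicts = set(map(frozenset, conflict_edges))
    states = []
    for k in range(n_links + 1):
        for subset in combinations(range(n_links), k):
            # A state is feasible iff no two transmitting links are neighbors.
            if all(frozenset(pair) not in conflicts
                   for pair in combinations(subset, 2)):
                states.append(tuple(1 if i in subset else 0
                                    for i in range(n_links)))
    return states

# Three-link example of the contention graph: link 2 (index 1) conflicts
# with links 1 and 3 (indices 0 and 2).
print(feasible_states(3, [(0, 1), (1, 2)]))
# -> [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 0, 1)]
```

For larger networks this brute-force enumeration is impractical, which is consistent with the NP-hardness of computing all independent sets discussed later.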
\subsection{Stationary Distribution}
Here we summarize the stationary distribution of the system states based on the results in \cite{LiewBoE}.
If the transmission time and countdown time are exponentially distributed, then the system state $\vec{s}(t)$ is a time-reversible Markov process.
The state transition diagram of the example in Fig. \ref{pair} is shown in Fig. \ref{Markov3detail}, where there are 5 feasible system states. Each transition from a state on the left to a state on the right represents the beginning of a link transmission, while the reverse transition represents the ending of the same link transmission.
For example, the transition from 001 to 101 represents the beginning of link 1's transmission while link 3 is transmitting. Similarly, the transition from 101 to 001 represents the ending of link 1's transmission while link 3 continues its transmission.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth, trim=0 0 0 0,clip]{Fig3_Markov3detail}
\caption{State Transition Diagram for Fig. \ref{pair}} \label{Markov3detail}
\vspace{-1em}
\end{figure}
The transition rate of a link from the idle state to the transmission state is $\lambda=1/E[t_{cd}]$, while the transition rate from the transmission state to the idle state is $\mu=1/E[t_{tr}]$. We define the \textit{access intensity} (AI) \cite{LiewBoE} of a link as the ratio of its mean transmission time to its mean countdown time: $\rho=E[t_{tr}]/E[t_{cd}]=\lambda/\mu$. A higher $\lambda$ and a lower $\mu$, i.e., a higher AI, indicate a stronger intensity of the link to access the channel.
We further define the \textit{transmission aggressiveness} (TA) \cite{JiangLibinCSMA} as the natural logarithm of the AI, i.e., $r=\log_e \rho$. Since the natural logarithm is monotonically increasing, a higher AI corresponds to a higher TA, indicating that the link transmits more aggressively.
Given a profile of AIs $\vec{\rho}=[\rho_1,\rho_2,\cdots,\rho_N]$, the stationary probability of the state $\vec{s}\in\mathcal{S}$ is shown in \cite{LiewBoE} to be given by
\begin{equation}\label{DistributionRho}
p_{\vec{s}}=1/Z\cdot\textstyle\prod_{i: s_i=1}\rho_i, \forall \vec{s} \in \mathcal{S},
\end{equation}%
where
\begin{equation}\label{Zrho}
Z=\textstyle\sum_{\vec{s}\in\mathcal{S}}\textstyle\prod_{i: s_i=1}\rho_i.
\end{equation}%
By convention, the empty product gives $p_{\vec{e}_0}=1/Z$. In (\ref{DistributionRho}) and (\ref{Zrho}), the notation $\prod_{i: s_i=1}\rho_i$ means that, for each state $\vec{s}$, only the transmitting links enter the product.
Collectively, we can write the state probability distribution as a vector $\overrightarrow{p}=[p_{\vec{s}_1},p_{\vec{s}_2},\cdots,p_{\vec{s}_{|\mathcal{S}|}}]$, where $|\mathcal{S}|$ is the cardinality of the set $\mathcal{S}$, i.e., the number of feasible states.
Similarly, if we replace AIs by TAs and define a profile of TAs $\vec{r}=[r_1,r_2,\cdots,r_N]=\log_e\vec{\rho}$ for all links, the stationary state probabilities are given by
\begin{equation}\label{Distribution}
p_{\vec{s}}=1/Z\cdot\exp(\textstyle\sum_{i=1}^N s_i r_i), \forall \vec{s} \in \mathcal{S},
\end{equation}%
where
\begin{equation}\label{Z}
Z=\textstyle\sum_{\vec{s}\in\mathcal{S}}\exp(\textstyle\sum_{i=1}^N s_i r_i).
\end{equation}%
As an illustration, consider the state transition diagram in Fig. \ref{Markov3detail}. Since the system state is a time-reversible Markov process, the stationary probability distribution should satisfy
\begin{equation}\label{MarkovCompute}
\left\{
\begin{array}{l l l l l}
p_{100}=\rho_1\cdot p_{000},\\
p_{010}=\rho_2\cdot p_{000},\\
p_{001}=\rho_3\cdot p_{000},\\
p_{101}=\rho_1\cdot p_{001}=\rho_3\cdot p_{100}=\rho_1\cdot \rho_3\cdot p_{000},\\
p_{000}+p_{100}+p_{010}+p_{001}+p_{101}=1.
\end{array} \right.
\end{equation}%
Solving the equations in (\ref{MarkovCompute}) yields
\begin{equation}\label{P000}
p_{000}=1/(1+\rho_1+\rho_2+\rho_3+\rho_1\rho_3)=1/Z,
\end{equation}%
where $Z$ is given in (\ref{Zrho}). Once $Z$ is evaluated, other state probabilities
can be easily computed.
Despite the idealized assumptions of instantaneous sensing and continuous backoff time, the ICN model does capture the essence of CSMA under spatial reuse. It is shown in \cite{LiewBoE} that the stationary probability distribution in (\ref{DistributionRho}) holds even if the transmission time and countdown time are not exponentially distributed, provided that the ratio of their means $\rho_i=E[t_{tr,i}]/E[t_{cd,i}]$ for each link $i\in\mathcal{N}$ remains unchanged. On the other hand, in the discrete time model, the stationary probability distribution will deviate from (\ref{DistributionRho}) due to collisions. Fortunately, when RTS/CTS handshaking is used and under the same TA, the stationary distribution approaches that given in (\ref{DistributionRho}), since the collision period is comparatively small for a sufficiently large holding time \cite{JiangCSMAcollision}.
Finally, it then follows from (\ref{Distribution}) and (\ref{Z}) that the throughput or mean service rate of link $i$ is given by
\begin{equation}\label{throughput}
\theta_i=\textstyle\sum_{\vec{s}\in\mathcal{S}} s_i p_{\vec{s}}=\frac{\textstyle\sum_{\vec{s}\in\mathcal{S}} s_i \exp(\textstyle\sum_{j=1}^N s_j r_j)}{\textstyle\sum_{\vec{s}\in\mathcal{S}}\exp(\textstyle\sum_{j=1}^N s_j r_j)}, \forall i\in\mathcal{N},
\end{equation}%
which is the sum of the stationary probabilities, defined in (\ref{Distribution}), of the states in which link $i$ is actively transmitting (i.e., $s_i=1$). In vector form, if we define the vector $\vec{\theta}=[\theta_1, \theta_2, \cdots, \theta_N]$, then the $N$ equations in (\ref{throughput}) can be collectively written as
\begin{equation}
\vec{\theta}=\textstyle\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}\vec{s},
\end{equation}
where $\vec{s}$ is the $N$-dimensional vector used to represent a system state.
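As a numerical sanity check of (\ref{DistributionRho}), (\ref{Zrho}) and (\ref{throughput}), the following sketch evaluates the stationary distribution and per-link throughputs for the three-link example; the access intensities $\rho_i$ here are arbitrary values chosen for illustration:

```python
# Sanity check of the stationary distribution and throughputs for the
# three-link example; the access intensities rho_i are arbitrary choices.
from math import prod

states = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 0, 1)]
rho = [2.0, 1.0, 0.5]        # rho_i = E[t_tr,i] / E[t_cd,i]

def weight(s):
    # Product of rho_i over the transmitting links; the empty product is 1.
    return prod(rho[i] for i, si in enumerate(s) if si)

Z = sum(weight(s) for s in states)        # 1 + 2 + 1 + 0.5 + 1 = 5.5
p = {s: weight(s) / Z for s in states}    # stationary distribution
# Throughput of link i: total probability of the states with s_i = 1.
theta = [sum(s[i] * weight(s) for s in states) / Z for i in range(3)]
print(round(p[(0, 0, 0)], 4), [round(t, 4) for t in theta])
# -> 0.1818 [0.5455, 0.1818, 0.2727]
```

Note that link 1 obtains a higher throughput than link 3 both because of its larger AI and because the two can transmit concurrently in state $101$.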
\section{Feasible Throughput Region in Spatial CSMA Networks}\label{SectionFeasibleThroughput}
In this section, we state and derive the key results on ICN which are important to our proposed game theoretic framework to be presented in Section \ref{Stackelberg}.
\subsection{Feasible and Strictly Feasible Throughput Region}
Each feasible system state $\vec{s}\in \mathcal{S}$ corresponds to a feasible scheduling vector of link transmissions. The feasible throughput region is therefore the convex hull \cite[pp. 24]{Boyd} of $\mathcal{S}$, namely,
\begin{equation}\label{ConvexHull}
\bar{\mathcal{C}}=\{\vec{\theta}|(\vec{\theta}=\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}\vec{s})\wedge (p_{\vec{s}}\geq 0, \forall {\vec{s}}) \wedge (\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}=1) \}.
\end{equation}%
Eq.~(\ref{ConvexHull}) shows that the feasible throughputs are the convex combinations of the feasible system states, subject to the nonnegativity and normalization constraints on the state probabilities. The feasible region is thus a polytope whose vertices are the feasible system states $\vec{s}\in \mathcal{S}$.
The interior of $\bar{\mathcal{C}}$ is the \textit{strictly feasible region} $\mathcal{C}$:
\begin{equation}\label{StrictFeasible}
\mathcal{C}=\{\vec{\theta}|(\vec{\theta}=\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}\vec{s})\wedge (p_{\vec{s}}> 0, \forall {\vec{s}}) \wedge (\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}=1) \}.
\end{equation}%
Consider the contention graph in Fig. \ref{pair} as an example. The set of feasible system states $\mathcal{S}$ is given in (\ref{SystemStates}). The feasible throughput region $\bar{\mathcal{C}}$ is shown in Fig. \ref{CSMAfeasibleRegion}: it is the polyhedron whose vertices are the feasible system states (the region enclosed by the mesh surface and its intersections with the $\theta_1$-$\theta_2$, $\theta_1$-$\theta_3$ and $\theta_2$-$\theta_3$ planes). The strictly feasible region $\mathcal{C}$ is the interior of this polyhedron.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth, trim=0 0 0 0,clip]{Fig4_CSMAfeasibleRegion}
\caption{Feasible Throughput Region for Contention Graph in Fig. \ref{pair}} \label{CSMAfeasibleRegion}
\vspace{-1em}
\end{figure}
\subsection{Transmission Aggressiveness}\label{SectionLemma}
CSMA is a distributed and randomized way to schedule the transmissions among the feasible system states.
It is shown in \cite{JiangLibinCSMA} using the ICN model that any throughput in the strictly feasible region can be achieved through a properly chosen TA $\vec{r}$, which is stated in the following lemma.
\newtheorem{lemma}{Lemma}
\begin{lemma}[Lemma 8 in \cite{ICNuniqueProof}]\label{CSMAlemma}
In the ICN model, for any desired throughput for all the $N$ links $\vec{\tilde{\theta}}=[\tilde{\theta}_1, \tilde{\theta}_2, \cdots, \tilde{\theta}_N] \in \mathcal{C}$ (strictly feasible region), there exists a unique finite-valued $\vec{r}=[r_1, r_2, \cdots, r_N]\in \mathcal{R}^N$ such that $\theta_i(\vec{r})= \tilde{\theta}_i, \forall i\in \mathcal{N}$.
\end{lemma}
A detailed proof can be found in \cite{JiangLibinCSMA} and \cite{ICNuniqueProof}. Here we only present a sketch of the proof.
\textit{Proof}: Given a $\vec{\tilde{\theta}}\in \mathcal{C}$, we use the maximum log-likelihood method to estimate the parameters $\vec{r}^*$ which result in $\vec{\theta}(\vec{r}^*)=\vec{\tilde{\theta}}$, or equivalently, result in the desired state probability distribution $\overrightarrow{p}^{\tilde{\theta}}$ such that
$\vec{\theta}(\vec{r}^*)=\textstyle\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\vec{s}$.
The log-likelihood function \cite{LogLikelihood} is defined as:
\begin{equation}\label{LogLikelihoodOrigin}
F(\vec{r};\vec{\tilde{\theta}})=\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\log_e(p_{\vec{s}}).
\end{equation}%
By applying $\vec{\tilde{\theta}}=\textstyle\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\vec{s}$ and substituting the expression for $p_{\vec{s}}$ given in (\ref{Distribution}), and after some manipulations, we have
\begin{equation}\label{LogLikelihood}
F(\vec{r};\vec{\tilde{\theta}})=\textstyle\sum_{i=1}^N \tilde{\theta}_i r_i-\log_e [\textstyle\sum_{\vec{s}\in\mathcal{S}}\exp(\textstyle\sum_{i=1}^N s_i r_i)].
\end{equation}%
Since $\textstyle\sum_{i=1}^N \tilde{\theta}_i r_i$ is affine in $\vec{r}$ and $\log_e [\textstyle\sum_{\vec{s}\in\mathcal{S}}\exp(\textstyle\sum_{i=1}^N s_i r_i)]$ is a log-sum-exp function and thus is convex in $\vec{r}$,
the function $F(\vec{r};\vec{\tilde{\theta}})$ is concave in $\vec{r}$ \cite[pp. 72]{Boyd}.
Therefore, the max-log-likelihood problem below is a convex optimization problem with $\vec{r}$ as the variables to be solved and $\vec{\tilde{\theta}}$ as the parameters:
\begin{equation}\label{MaxLikelihood1}
\max\limits_{\vec{r}} ~~F(\vec{r};\vec{\tilde{\theta}})\quad \textrm{(Maximize log-likelihood)}.
\end{equation}
It is then shown in \cite{JiangLibinCSMA} that the max-log-likelihood problem in (\ref{MaxLikelihood1}) is the dual problem of the max-entropy problem in (\ref{MaxEntropy1}), where $-\textstyle\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}\log_e p_{\vec{s}}$ is the entropy of the distribution vector $\overrightarrow{p}$, whose element $p_{\vec{s}}$ is the state probability for the state $\vec{s}, \forall \vec{s}\in\mathcal{S}$. The max-entropy problem is also a convex optimization problem, with $\overrightarrow{p}$ as the variables and $\vec{\tilde{\theta}}$ as the parameters.
\begin{equation}\label{MaxEntropy1}
\begin{array}{l l}
\max\limits_{\overrightarrow{p}} ~~ -\textstyle\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}\log_e p_{\vec{s}} \quad \textrm{(Maximize entropy)}\\
\mbox{s.t.} ~~\left\{
\begin{array}{l l l}
\textstyle\sum_{\vec{s}\in\mathcal{S}} s_i p_{\vec{s}}= \tilde{\theta}_i, \forall i\in \mathcal{N},\\
p_{\vec{s}}\geq 0, \forall \vec{s}\in\mathcal{S},\\
\textstyle\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}=1.\\
\end{array} \right.
\end{array}
\end{equation}%
We are now ready to prove Lemma \ref{CSMAlemma}.
We need to verify that Slater's condition \cite[pp. 226]{Boyd} is satisfied, so that the optimal solutions to the two convex optimization problems (\ref{MaxLikelihood1}) and (\ref{MaxEntropy1}) exist with zero duality gap, given that $\vec{\tilde{\theta}}\in \mathcal{C}$ (the strictly feasible region).
Since all the constraints in (\ref{MaxEntropy1}) are linear equalities and inequalities, we only need to verify that there exists a feasible $\overrightarrow{p}$ in the relative interior \cite[pp. 23]{Boyd} of the domain $\mathcal{D}$ of the objective function $-\textstyle\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}\log_e p_{\vec{s}}$, which is $\mathcal{D}=\{\overrightarrow{p} | p_{\vec{s}}\geq 0, \forall \vec{s}\in\mathcal{S}\}$.
The relative interior of $\mathcal{D}$ is $\textbf{relint} \mathcal{D}=\{\overrightarrow{p} | p_{\vec{s}}> 0, \forall \vec{s}\in\mathcal{S}\}$.
Since $\vec{\tilde{\theta}}\in \mathcal{C}$, from (\ref{StrictFeasible}) we can write $\vec{\tilde{\theta}}=\textstyle\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\vec{s}$
where $p_{\vec{s}}^{\tilde{\theta}}>0,\forall \vec{s}\in\mathcal{S}$ and $\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}=1$.
By letting $\overrightarrow{p}=\overrightarrow{p}^{\tilde{\theta}}\in \textbf{relint} \mathcal{D}$, we find a feasible $\overrightarrow{p}$ which satisfies all the constraints in (\ref{MaxEntropy1}). Therefore, Slater's condition is satisfied.
As a result, the optimal solutions to the two convex optimization problems (\ref{MaxLikelihood1}) and (\ref{MaxEntropy1}) exist with zero duality gap. Moreover, the dual optimal value is attainable, i.e., there exists a finite $\vec{r}^*$ such that $F(\vec{r}^*;\vec{\tilde{\theta}})=\max_{\vec{r}} F(\vec{r};\vec{\tilde{\theta}})$.
Therefore, the first-order condition \cite[pp. 457]{Boyd} of the unconstrained differentiable convex optimization problem in (\ref{MaxLikelihood1}) is satisfied at $\vec{r}^*$, i.e.,
\begin{equation}\label{FirstOrder}
\nabla F(\vec{r};\vec{\tilde{\theta}})\mid_{\vec{r}=\vec{r}^*}=\vec{0},
\end{equation}%
which yields
\begin{multline}\label{FirtOrderPartial}
\frac{\partial F(\vec{r};\vec{\tilde{\theta}})}{\partial r_i}\Big|_{\vec{r}=\vec{r}^*}=\tilde{\theta}_i-\frac{\textstyle\sum_{\vec{s}\in\mathcal{S}} s_i \exp(\textstyle\sum_{j=1}^N s_j r_j^*)}{\textstyle\sum_{\vec{s}\in\mathcal{S}}\exp(\textstyle\sum_{j=1}^N s_j r_j^*)}\\
=\tilde{\theta}_i-\textstyle\sum_{\vec{s}\in\mathcal{S}} s_i p_{\vec{s}}=\tilde{\theta}_i-\theta_i^*=0, \forall i\in \mathcal{N}.
\end{multline}%
Therefore, for any $\vec{\tilde{\theta}}\in \mathcal{C}$ (strictly feasible region),
the log-likelihood function
$F(\vec{r};\vec{\tilde{\theta}})$ attains its maximum value at a finite-valued $\vec{r}=\vec{r}^*\in \mathcal{R}^N$. At the optimal solution $\vec{r}^*$, the first-order optimality condition (\ref{FirstOrder}) is satisfied, which corresponds to $\theta_i^*(\vec{r}^*)= \tilde{\theta}_i, \forall i\in \mathcal{N}$.
It is further shown in \cite{ICNuniqueProof} that $F(\vec{r};\vec{\tilde{\theta}})$ is strictly concave in $\vec{r}$. Therefore, the optimal solution $\vec{r}^*$ is unique.
$\blacksquare$
Lemma \ref{CSMAlemma} suggests that, if $\vec{\tilde{\theta}}\in \mathcal{C}$, then a unique solution $\vec{r}^*$ exists such that $\theta_i^*(\vec{r}^*)= \tilde{\theta}_i, \forall i\in \mathcal{N}$. The above proof also suggests that we can solve for $\vec{r}^*$ by maximizing the concave function $F(\vec{r};\vec{\tilde{\theta}})$. This is useful for the design of our game iteration algorithm presented in Section \ref{CSMAgame}.
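A minimal centralized sketch of this idea for the three-link example is given below; it performs gradient ascent on $F(\vec{r};\vec{\tilde{\theta}})$ using the gradient components $\tilde{\theta}_i-\theta_i(\vec{r})$ from (\ref{FirtOrderPartial}). The step size, iteration count and target throughput are our arbitrary choices, and this is not the distributed algorithm of \cite{JiangLibinCSMA}:

```python
# Sketch: compute the unique r* of Lemma 1 for the three-link example by
# gradient ascent on the concave log-likelihood F(r; theta~), whose
# gradient component i is (theta~_i - theta_i(r)).
from math import exp

states = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 0, 1)]
N = 3

def throughput(r):
    # theta_i(r) from the stationary distribution in TA form.
    w = [exp(sum(si * ri for si, ri in zip(s, r))) for s in states]
    Z = sum(w)
    return [sum(s[i] * wi for s, wi in zip(states, w)) / Z for i in range(N)]

def solve_ta(target, step=1.0, iters=5000):
    r = [0.0] * N
    for _ in range(iters):
        th = throughput(r)
        r = [ri + step * (ti - thi) for ri, ti, thi in zip(r, target, th)]
    return r

target = [0.4, 0.2, 0.3]       # strictly feasible for this topology
r_star = solve_ta(target)
print([round(t, 3) for t in throughput(r_star)])   # -> [0.4, 0.2, 0.3]
```

Because $F$ is strictly concave, the iteration converges to the unique $\vec{r}^*$ regardless of the starting point, provided the step size is small enough.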
\subsection{Feasible Throughput Region Under ICN}
Previously we defined the feasible throughput region for any given set of feasible system states. The polytopes derived from the ICN model have a special property, which we discuss here.
We first introduce a binary relation ``$ \preceq $" between two real-valued vectors $\vec{\tilde{\vartheta}}$ and $
\vec{\tilde{\theta}}$, which is defined as component-wise less than or equal to, i.e.,
\begin{equation}
\vec{\tilde{\vartheta}}\preceq \vec{\tilde{\theta}}\Leftrightarrow \tilde{\vartheta}_i\leq \tilde{\theta}_i,\forall i\in \mathcal{N}.
\end{equation}%
We now establish the following theorem which will be useful when presenting our proposed games.
\newtheorem{theorem}{Theorem}
\begin{theorem}\label{PartialOrderTheorem}
In the ICN model, given that $\vec{\tilde{\theta}}\in \bar{\mathcal{C}}$ (i.e., $\vec{\tilde{\theta}}$ is in the feasible region), then any desired throughput $\vec{\tilde{\vartheta}}$, where $\vec{0}\preceq\vec{\tilde{\vartheta}}\preceq \vec{\tilde{\theta}}$, is also in $\bar{\mathcal{C}}$.
\end{theorem}
\textit{Proof}:
A first glance at Fig. \ref{CSMAfeasibleRegion} might suggest that the theorem is trivial, but it is not. Fig. \ref{ConvexSetNotPartialOrder} shows a convex set $\mathcal{A}_1$
in the two-dimensional space. For a $\vec{\tilde{\theta}}\in \mathcal{A}_1$ as shown in Fig. \ref{ConvexSetNotPartialOrder}, it is easy to find a point $\vec{\tilde{\vartheta}}$ such that $\vec{\tilde{\vartheta}}\preceq \vec{\tilde{\theta}}$ and yet $\vec{\tilde{\vartheta}}$ is not within the convex region $\mathcal{A}_1$. On the other hand, it is easy to verify that the convex set $\mathcal{A}_2$ in Fig. \ref{ConvexSetPartialOrder} has the property stated in Theorem \ref{PartialOrderTheorem}.
In the ICN model, for a target throughput vector $\vec{\tilde{\theta}}$ where $\vec{\tilde{\theta}}\in \bar{\mathcal{C}}$, there exists a probability distribution $\overrightarrow{p}^{\tilde{\theta}} = \{p_{\vec{s}}^{\tilde{\theta}}, \forall \vec{s} \in \mathcal{S}\}$ where $\vec{\tilde{\theta}}=\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\vec{s}$ according to (\ref{ConvexHull}).
To prove that $\vec{\tilde{\vartheta}} \preceq \vec{\tilde{\theta}} \in \bar{\mathcal{C}}$, we need to similarly show that there exists another probability distribution $\overrightarrow{p}^{\tilde{\vartheta}} = \{ p_{\vec{s}}^{\tilde{\vartheta}}, \forall \vec{s}\in \mathcal{S} \}$ that fulfills (\ref{ConvexHull}).
However, it is difficult to obtain the distribution $\overrightarrow{p}^{\tilde{\vartheta}}$ directly from $\overrightarrow{p}^{\tilde{\theta}}$ since it depends on the underlying link topology.
Our approach is to define an orthotope $\mathcal{B}$ whose ``vertices" are obtained by projecting $\vec{\tilde{\theta}}$ onto all the coordinate planes. A 3-dimensional illustration is shown in Fig. \ref{CSMAfeasibleRegion}. The problem then becomes equivalent to showing that all the ``vertices" of $\mathcal{B}$ are in $\bar{\mathcal{C}}$. Finally, because $\vec{\tilde{\vartheta}}\preceq \vec{\tilde{\theta}}$, $\vec{\tilde{\vartheta}}$ lies within the orthotope and hence within $\bar{\mathcal{C}}$.
We first perform a projection parallel to the $i$-th axis. Consider a throughput vector $\vec{\tilde{\psi}}$ with $\tilde{\psi}_i=0$ and $\tilde{\psi}_j=\tilde{\theta}_j, \forall j\neq i$, i.e., the $i$-th link has zero throughput. Clearly, $\vec{\tilde{\psi}}$ is one of the ``vertices" of $\mathcal{B}$. To show that $\vec{\tilde{\psi}}$ is in $\bar{\mathcal{C}}$, we need to show that its state probability distribution $\overrightarrow{p}^{\tilde{\psi}}$ can be obtained from $\overrightarrow{p}^{\tilde{\theta}}$ such that $\vec{\tilde{\psi}}$ can be expressed in the form of (\ref{ConvexHull}). This can be done as follows.
For $\vec{\tilde{\theta}}\in \bar{\mathcal{C}}$, its state distribution $\overrightarrow{p}^{\tilde{\theta}}$ satisfies $\vec{\tilde{\theta}}=\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\vec{s}$, $p_{\vec{s}}^{\tilde{\theta}}\geq 0,\forall \vec{s}\in\mathcal{S}$ and $\sum_{\vec{s}\in\mathcal{S}} p_{\vec{s}}^{\tilde{\theta}}=1$.
We next describe how to construct the state distribution $\overrightarrow{p}^{\tilde{\psi}}$ for $\vec{\tilde{\psi}}$.
For those states in $\mathcal{S}$ with $s_i=1$, choose $p_{\vec{s}}^{\tilde{\psi}}=0$ and $p_{\vec{s}-\vec{e}_i}^{\tilde{\psi}}=p_{\vec{s}-\vec{e}_i}^{\tilde{\theta}}+p_{\vec{s}}^{\tilde{\theta}}$.
For the remaining states, choose $p_{\vec{s}}^{\tilde{\psi}}=p_{\vec{s}}^{\tilde{\theta}}$. In other words, each state $\vec{s}$ with $s_i=1$ now has state probability $p_{\vec{s}}^{\tilde{\psi}}=0$, and the ``removed" probability $p_{\vec{s}}^{\tilde{\theta}}$ is attributed to the state $\vec{s}-\vec{e}_i$. It is not difficult to verify that, by doing so, the total probability remains one and the throughputs of all unaffected links remain the same as before. This state probability distribution $\overrightarrow{p}^{\tilde{\psi}}$ clearly satisfies (\ref{ConvexHull}); hence we conclude that the vertex $\vec{\tilde{\psi}}$ is within $\bar{\mathcal{C}}$, and so are the other vertices of $\mathcal{B}$.
For the example shown in Fig. \ref{Markov3detail},
assume that we have a throughput $\vec{\tilde{\theta}}=[\tilde{\theta}_1,\tilde{\theta}_2,\tilde{\theta}_3]\in \bar{\mathcal{C}}$. We now show that $\vec{\tilde{\psi}}=[\tilde{\psi}_1,\tilde{\psi}_2,\tilde{\psi}_3]=[0,\tilde{\theta}_2,\tilde{\theta}_3]$ is also in $\bar{\mathcal{C}}$.
Note that the throughput $\vec{\tilde{\psi}}$ is equivalent to the case in which link 1 powers off and stops transmitting. In such a case, there are only three feasible system states left: 000, 010, 001. In other words, the states 100 and 101 disappear and are merged into the states 000 and 001 respectively, since link 1 is no longer transmitting. State 010 remains unchanged.
Merging the state probability $p_{101}$ with $p_{001}$ will ensure the throughput for link 3 remains the same, since $\tilde{\theta}_3=p_{001}+p_{101}$. Merging the state probability $p_{100}$ with $p_{000}$ will not affect the throughput of any remaining links.
The total probability still sums to one; link 1 no longer transmits, while links 2 and 3 transmit as before.
Therefore, the throughput $\vec{\tilde{\psi}}$ resulting from the above state merging operations is still in $\bar{\mathcal{C}}$.
Other vertices of $\mathcal{B}$ can be similarly shown to be in $\bar{\mathcal{C}}$. Since $\bar{\mathcal{C}}$ is a convex set, the convex combinations of these ``vertices" are all in $\bar{\mathcal{C}}$, and hence $\mathcal{B}\subset\bar{\mathcal{C}}$. Since $\vec{\tilde{\vartheta}}\preceq \vec{\tilde{\theta}}$ implies that $\vec{\tilde{\vartheta}}$ is enclosed in the hyperrectangle ($N$-orthotope) $\mathcal{B}$, $\vec{\tilde{\vartheta}}$ is also in $\bar{\mathcal{C}}$.
$\blacksquare$
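The state-merging construction used in the proof can be sketched as follows (our illustration; the example distribution is arbitrary but strictly positive and normalized):

```python
# Sketch of the projection step in the proof: zero out link i's throughput
# by merging each state with s_i = 1 into the state s - e_i.
def project_out(p, i):
    """p maps state tuples to probabilities; returns the merged
    distribution in which link i never transmits."""
    q = {}
    for s, ps in p.items():
        t = tuple(0 if j == i else sj for j, sj in enumerate(s)) if s[i] else s
        q[t] = q.get(t, 0.0) + ps
    return q

# Arbitrary strictly positive distribution over the five feasible states.
p = {(0, 0, 0): 0.25, (1, 0, 0): 0.25, (0, 1, 0): 0.125,
     (0, 0, 1): 0.125, (1, 0, 1): 0.25}
q = project_out(p, 0)     # merge 100 -> 000 and 101 -> 001
print(q)                  # {(0, 0, 0): 0.5, (0, 1, 0): 0.125, (0, 0, 1): 0.375}
```

The merged distribution still sums to one, and the throughputs of links 2 and 3 ($0.125$ and $0.125+0.25=0.375$) are unchanged, exactly as the proof requires.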
\textit{Remark}: Theorem \ref{PartialOrderTheorem} is not generally true for any convex set. It holds here because each $s_i$ takes values in $\{0,1\}$ only, and because the subset of feasible states induced by a maximal independent set is a complete partially ordered set\cite{AlohaGamesSpatialReuse} under how ICN is modelled. For the example in Fig. \ref{Markov3detail}, the maximal independent set $\{1,3\}$ induces the subset of feasible states $\mathcal{Q}=\{[0,0,0],[1,0,0],[0,0,1],[1,0,1]\}$, which is a complete partially ordered set with the least element $[0,0,0]$ and the largest element $[1,0,1]$ under the partial order ``$\preceq$". Hence the theorem must be applied with care.
Theorem \ref{PartialOrderTheorem} will be used in Section \ref{QuasiConvex} to show that the pricing problem is a valid quasi-convex optimization problem.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=1\linewidth, trim=0 0 0 0,clip]{Fig5a_ConvexSetNotPartialOrder}
\caption{Convex Polytope $\mathcal{A}_1$ without Property in Theorem \ref{PartialOrderTheorem}}
\label{ConvexSetNotPartialOrder}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=1\linewidth, trim=0 0 0 0,clip]{Fig5b_ConvexSetPartialOrder}
\caption{Convex Polytope $\mathcal{A}_2$ with Property in Theorem \ref{PartialOrderTheorem}}
\label{ConvexSetPartialOrder}
\end{subfigure}
\caption{Examples of 2-Dimensional Convex ``Polytopes"}\label{Polytope}
\end{figure}
\subsection{D2D Network Model}\label{D2Dmodel}
This subsection discusses how to efficiently model the resulting D2D network if CSMA is adopted by all D2D links. If the objective of the network is to maximize the sum-rate of all transmitting links, and the BS exerts no control on the admission and transmission of links, then \cite{JiangLibinCSMA} has successfully solved this problem, with the solution computed in a completely distributed manner. However, as pointed out earlier, such a fully cooperative model is too idealistic, and there is no consideration of the utility heterogeneity and selfish behaviors of the links. It is better to build a pricing framework in which each link maximizes its payoff function when competing for resources, rather than letting selfish links take advantage of the network in operation and drive it into unstable states.
Furthermore, maximizing the sum-rate may not distribute the resources according to demand, because links with low demand may be assigned higher rates due to their spatial locations.
In this paper, we argue that the BS can take a more proactive role to assist D2D transmission. In fact, the problem can be formulated separately in terms of the objectives of the D2D links and the BS. The objective of the BS is to maximize the sum-rate while satisfying the physical layer constraints:
\begin{equation}\label{BSObjective}
\begin{array}{l l}
\max ~~ \textstyle\sum_{i=1}^N \tilde{\theta}_i\\
\mbox{s.t.} ~~\vec{\tilde{\theta}}\in\mathcal{C},
\end{array}
\end{equation}%
where $\tilde{\theta}_i$ is the target rate the network has to support link $i$, and the solution must fulfill the CSMA channel access constraint, i.e., the final rates to support all D2D links must be in the strictly feasible throughput region $\mathcal{C}$ defined in (\ref{StrictFeasible}).
Each link is a player of a non-cooperative game. Each player tries to maximize its payoff $v_i(\theta_i, \theta_{-i})$ while satisfying the physical layer constraints.
\begin{equation}\label{D2DObjective}
\begin{array}{l l}
\max ~~ v_i(\theta_i, \theta_{-i}), \forall i\\
\mbox{s.t.} ~~\vec{\theta}\in\mathcal{C}.
\end{array}
\end{equation}%
In (\ref{BSObjective}), $\tilde\theta_i$ represents the rate demand from the utility point of view and should be differentiated from $\theta_i$ in (\ref{D2DObjective}) or (\ref{throughput}), which is used in the ICN model as the result of competing for channel access. At the equilibrium state, these two quantities have to coincide, and the pricing mechanism aims to achieve this.
There are two challenges in the formulation. First, the above two optimization problems both involve the constraint defined by the strictly feasible throughput region $\mathcal{C}$. From Lemma \ref{CSMAlemma} we know that, for any desired throughput $\vec{\tilde{\theta}}$ in the strictly feasible region $\mathcal{C}$, there exists an operating point $\vec{r}$ such that $\vec{\theta}(\vec{r})=\vec{\tilde{\theta}}$. However, in order to obtain $\mathcal{C}$ as in (\ref{StrictFeasible}), we need to know all the feasible system states, which correspond to all the independent sets \cite{LiewBoE} in the contention graph. As is shown in \cite{LiewBoE}, computing all the independent sets (including the maximal independent sets) is an NP-hard problem. Hence it is practically difficult to obtain $\mathcal{C}$.
The second challenge is how to align the solution of (\ref{BSObjective}) with the self-interested solutions of (\ref{D2DObjective}).
Our approach is to develop a simple mechanism that does not require a priori knowledge of $\mathcal{C}$, yet allocates the radio resource to the heterogeneous D2D links efficiently while satisfying the objectives of both the BS and the D2D links. A pricing mechanism is introduced for this purpose. The payoff function of each link is made dependent on the resource price; the BS broadcasts the resource price and uses it to control the transmission behavior of each link.
Mathematically, the BS solves the following optimization problem:
\begin{equation}\label{LeaderProblemC}
\max\limits_{M\geq 0} ~~g(M):=\textstyle\sum_{i=1}^N \tilde{\theta}_i(M)
\end{equation}%
where $\tilde{\theta}_i(M)$ is the target rate of D2D link $i$ under the service price $M$, which will be presented in (\ref{TargetRate}).
The D2D links are the followers in the overall Stackelberg game, each of which chooses its transmission strategy so that its individual payoff is maximized under the service price chosen by the BS, i.e., the Stackelberg leader.
In the next section, we describe how our proposed Stackelberg game model can achieve the above purposes.
\section{Stackelberg Games for Non-Cooperative D2D Links}\label{Stackelberg}
Stackelberg games\cite{Fudenberg} are a class of non-cooperative games in which a leader, who makes the first move in the game, anticipates the actions of the followers based on a model of how the followers would respond to its actions. We propose a Stackelberg game in which the BS in the cellular network acts as the Stackelberg leader, regulating the transmission behaviors of all the D2D links by broadcasting a proper service price $M$. The D2D links are the followers, each of which responds to the price $M$ by choosing its transmission strategy in an attempt to maximize its individual payoff.
In Section \ref{UtilityFunction}, we first define the utility functions for the D2D links, each of which characterizes the individual service requirements and willingness to pay. In Section \ref{CSMAgame}, we study the non-cooperative behaviors of the D2D links under a given network price $M$, which defines the follower-subgame in the Stackelberg game.
The Stackelberg game is analyzed in Section \ref{QuasiConvex}. Based on the analysis, the pricing strategies of the Stackelberg leader are proposed in Section \ref{PricingStrategies}. A brief complexity analysis is given in Section \ref{SectionComplexity}.
\subsection{D2D Link Utility Function}\label{UtilityFunction}
We modify the traffic model used in \cite{alohaprice} to fit our system. Suppose D2D link $i$ has a target rate $\tilde\theta_i$ in the range $[\gamma_i, \pi_i]$, i.e., $\gamma_i \leq {\tilde\theta}_i \leq \pi_i$.
A rate below $\gamma_i$ yields zero utility to link $i$, and link $i$ has no intention to go beyond $\pi_i$. The exact target rate $\tilde\theta_i$ is controlled by the service price $M$ through the following relationship:
\begin{equation}\label{TargetRate}
\begin{array}{l l}
\tilde\theta_i(M)=\left\{
\begin{array}{l l}
0, &\textrm{ $M>m_i$,}\\
\min\{\gamma_i-b_i(M-m_i),\pi_i\}, & \textrm{ $0\leq M\leq m_i$},\\
\end{array} \right.
\end{array}
\end{equation}
where $\gamma_i$, $\pi_i$
and $m_i$ together determine how much link $i$ is willing to pay for its transmission. For simplicity, we have adopted a monotonically decreasing linear function for $\tilde\theta_i(M)$ over the range $\gamma_i \leq {\tilde\theta}_i \leq \pi_i$, where $b_i$ is a positive coefficient and $-b_i$ is the slope.
Eq. (\ref{TargetRate}) is interpreted as follows. The parameter $m_i$ is the highest price that link $i$ is willing to pay for its transmission. When $M=m_i$, link $i$ will only desire a minimum throughput of $\gamma_i$. When the price is too high (i.e., $M>m_i$), link $i$ chooses not to transmit, and thus its target rate drops to zero, i.e., $\tilde{\theta}_i(M)=0$. Over the range $0\leq M\leq m_i$, link $i$ is willing to pay for its transmission, and the lower the price $M$, the higher throughput it desires, unless it has already reached its maximum desired throughput $\pi_i$.
In this range, we have used a linear function to simplify the above monotonic relationship. Other functional forms, such as hyperbolic, parabolic, or cubic, can also be used, as long as the monotonic relationship is preserved.
As a result, the relationship in (\ref{TargetRate}) is a piecewise linear function. In the special case where the minimum desired throughput is $\gamma_i=0$ and the maximum desired throughput $\pi_i\geq b_i m_i$, the piecewise linear relationship in (\ref{TargetRate}) simply reduces to a smooth linear relationship.
A smooth monotonic curve can be similarly obtained when other functional forms are adopted. Note that our algorithm works as long as the monotonicity property holds.
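As a concrete illustration, the demand curve in (\ref{TargetRate}) can be sketched in a few lines of code. The parameter values below are purely illustrative and not taken from the system model:

```python
def target_rate(M, gamma, pi, m, b):
    """Piecewise-linear demand curve of (TargetRate): zero above the
    reservation price m, then decreasing linearly in M (slope -b),
    capped at the maximum desired throughput pi."""
    if M > m:
        return 0.0
    return min(gamma - b * (M - m), pi)

# Illustrative (hypothetical) parameters: gamma=0.1, pi=0.4, m=2.0, b=0.3
assert target_rate(3.0, 0.1, 0.4, 2.0, 0.3) == 0.0  # price above m: no demand
assert target_rate(2.0, 0.1, 0.4, 2.0, 0.3) == 0.1  # minimum demand at M = m
assert target_rate(0.0, 0.1, 0.4, 2.0, 0.3) == 0.4  # capped at pi
```

The assertions trace the three regimes discussed above: no transmission, linear demand, and saturation at $\pi_i$.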
The utility function is designed to provide differentiated treatment of the links based on their actual demand and willingness to pay for the desired transmission rate. In the example given in Fig. \ref{TargetRate3}, D2D link 1 and link 3 have the same range of target rates, i.e., $\pi_1=\pi_3$ and $\gamma_1=\gamma_3$, but link 3 has a higher willingness to pay, i.e., $m_3>m_1$. When the price $M$ decreases continually from a large value toward zero, link 3 will be admitted into the system first.
From the game-theoretic perspective, link $i$ will try to choose its target rate $\tilde{\theta}_i$ in order to maximize its own payoff $v_i(\theta_i)=U_i(\theta_i)-M\theta_i$ (utility minus cost). To be compatible with such an incentive, we can derive the utility function of D2D link $i$ in reverse, as follows. If the utility function $U_i(\theta_i)$ is concave, then the $\theta_i$ value that maximizes $v_i(\theta_i)$ is given by the first-order condition $v_i'(\theta_i)=0$, i.e., $\tilde{\theta}_i=(U_i')^{-1}(M)$. Equating this with the example of (\ref{TargetRate}), we obtain the utility function for D2D link $i$:
\begin{equation}\label{Utility}
U_i(\theta_i)=\left\{
\begin{array}{l l l}
m_i\theta_i,&\textrm{ $0\leq \theta_i < \gamma_i$,}\\
m_i\theta_i-\frac{(\theta_i-\gamma_i)^2}{2b_i},&\textrm{ $\gamma_i\leq \theta_i< \pi_i$,}\\
m_i\pi_i-\frac{(\pi_i-\gamma_i)^2}{2b_i},&\textrm{ $\pi_i\leq\theta_i\leq 1$}.\\
\end{array} \right.
\end{equation}
The utility functions for the three links' example in Fig. \ref{TargetRate3} are plotted in Fig. \ref{Utility3}.
Taking the derivative of $U_i(\theta_i)$ with respect to $\theta_i$ in (\ref{Utility}), we see that a higher $m_i$ value corresponds to a steeper slope, which indicates a higher willingness to pay. This can be seen in Fig. \ref{Utility3}, in which the utility function of link 3 has a steeper slope than that of link 1.
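The first-order condition behind this construction can be made explicit. For $\gamma_i\leq\theta_i<\pi_i$, differentiating (\ref{Utility}) gives
\[
U_i'(\theta_i)=m_i-\frac{\theta_i-\gamma_i}{b_i},
\qquad
v_i'(\theta_i)=U_i'(\theta_i)-M=0
\;\Longrightarrow\;
\theta_i=\gamma_i-b_i(M-m_i),
\]
which recovers the middle branch of (\ref{TargetRate}); the linear branch of (\ref{Utility}) with slope $m_i$ yields $\tilde\theta_i=0$ whenever $M>m_i$, and the flat branch beyond $\pi_i$ caps the target rate at $\pi_i$.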
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=1\linewidth, trim=0 0 0 0,clip]{Fig6a_TargetRate3}
\caption{Target Rates of the CSMA Users under the Price $M$}
\label{TargetRate3}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=1\linewidth, trim=0 0 0 0,clip]{Fig6b_Utility3}
\caption{Utility Functions of the Three CSMA Users}
\label{Utility3}
\end{subfigure}
\caption{Target Rates and Utility Functions of the CSMA Users}\label{TRandUF}
\end{figure}
\subsection{A Subgame of Noncooperative CSMA Users}\label{CSMAgame}
The Stackelberg game at the $l$-th stage begins with the BS broadcasting a price $M^{(l)}$. Each D2D link $i$ ($i \in \mathcal{N}$) aims to maximize its payoff $v_i(\theta_i)=U_i(\theta_i)-M^{(l)}\theta_i$. According to the analysis in Section \ref{UtilityFunction}, link $i$'s objective is equivalent to attaining a target rate $\tilde\theta_i^{(l)}$ under the service price $M^{(l)}$, as given in (\ref{TargetRate}). Whether these target rates are achievable still depends on whether the underlying CSMA mechanism can support these transmissions. The D2D links therefore play a CSMA game among themselves to determine their individual TAs so as to make the throughput ${\theta_i}$ as close to $\tilde\theta_i^{(l)}$ as possible. This CSMA game is therefore the follower-subgame in the Stackelberg game.
To prevent the links from transmitting too aggressively and driving the network into unstable states as a result of congestion, a simple approach is to let $M$ begin with a large value and then gradually decrease.
We formally state the CSMA subgame as follows:
\textit{Players}: Distributed Tx-Rx pairs (D2D links), $i\in \mathcal{N}$, who compete to transmit in the ideal CSMA network.
\textit{Strategies}: Each player $i$ chooses its TA $r_i\in \mathcal{R}$, $\forall i \in \mathcal{N}$.
\textit{Objectives}: Each player $i$ ($i \in \mathcal{N}$) aims to achieve its target rate $\tilde\theta_i^{(l)}$ by maximizing its payoff
\begin{equation}\label{payoff}
v_i(\theta_i)=U_i(\theta_i)-M^{(l)}\theta_i
\end{equation}
under the given service price $M^{(l)}$.
Note that the throughput $\theta_i$ that player $i$ can achieve is determined by its own TA $r_i$ and the TAs of all the other players $r_{-i}$ based on the relationships in (\ref{throughput}). The equilibrium solution of the CSMA subgame is a \textit{Nash Equilibrium (NE)}\cite{Fudenberg}, which is defined as a strategy profile $\vec{r}^*=[r_1^*,\cdots,r_N^*]$ in which player $i$'s strategy $r_i^*$ is a best response to the strategies $r_{-i}^*$ of all the other players, i.e.,
\begin{equation}\label{NashEquilibrium}
r_i^*= \arg \min_{r_i\in (-\infty,+\infty)} |\tilde{\theta}_i^{(l)}-{\theta}_i(r_i,r_{-i}^*)|, \forall i \in \mathcal{N},
\end{equation}%
where $\theta_i(\vec{r})$ is the achieved throughput of D2D link $i$ in the ICN model, as given in (\ref{throughput}).
It is clear that the objective of the subgame is to find the equilibrium TAs for all D2D links so that every link achieves a throughput as close as possible to what is desired.
According to Lemma \ref{CSMAlemma}, if the target rate ${\vec{\tilde\theta}}$ is in $\mathcal{C}$, i.e., it is achievable, then there exists a unique TA $\vec{r}^*$ such that $\theta_i^*(\vec{r}^*) =\tilde\theta_i^{(l)}, \forall i\in\mathcal{N}$. On the other hand, if the target rate ${\vec{\tilde\theta}}$ is beyond $\mathcal{C}$, then during the myopic best response updates, all players will keep increasing their TAs as long as their throughputs are lower than their respective target rates, i.e., all links transmit aggressively, resulting in undesired network congestion.
The existence and uniqueness of the NE in the CSMA subgame can then be established in the following proposition.
\newtheorem{proposition}{Proposition}
\begin{proposition}\label{csmaNEexistence}
For the target rate $\vec{\tilde{\theta}}\in \mathcal{C}$ (strictly feasible region), there exists a unique finite-valued NE $\vec{r}^*\in \mathcal{R}^N$ in the CSMA subgame. Moreover, the target rate $\vec{\tilde{\theta}}$ is achieved at the NE, i.e., $\vec{\theta}^*(\vec{r}^*)=\vec{\tilde{\theta}}$.
\end{proposition}
\textit{Proof}:
From Lemma \ref{CSMAlemma}, if the target rate $\vec{\tilde{\theta}}\in \mathcal{C}$ (strictly feasible region), then there exists a unique finite-valued $\vec{r}^*\in \mathcal{R}^N$ such that $\vec{\theta}^*(\vec{r}^*)=\vec{\tilde{\theta}}$.
As can be seen from (\ref{NashEquilibrium}), there exists a unique NE $\vec{r}^*$, since the payoff of each player is maximized when $\vec{\theta}^*(\vec{r}^*)=\vec{\tilde{\theta}}$ and no player has the incentive to deviate from this NE unilaterally.
$\blacksquare$
In practice, the strategies of all the other players $r_{-i}$ are usually not known to player $i$, assuming there is no explicit information exchange among the players. We therefore design a distributed updating method for the players to arrive at the NE. Each player updates its strategy by measuring its own local statistics, e.g., the measured throughput $\hat{\theta}_i$. For the $k$-th measurement period $\tau(k)$, player $i$ keeps a record of its accumulated transmission time, $T_i(k)$, and obtains the empirical average throughput as
\begin{equation}\label{MessuredTheta}
\hat{\theta}_i(k)=T_i(k)/\tau(k), \forall i \in \mathcal{N}.
\end{equation} %
A distributed way for player $i$ to update its strategy can be
\begin{equation}\label{IterationDynamics}
r_i(k+1)=r_i(k)+\alpha\cdot(\tilde{\theta}_i^{(l)}-\hat{\theta}_i(k)), \forall i \in \mathcal{N},
\end{equation} %
where $\alpha$ is a small positive step size.
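The behavior of the update rule in (\ref{IterationDynamics}) can be illustrated with a small numerical sketch. The toy example below (hypothetical, not part of the system model) uses a fully-connected contention graph, for which the ICN stationary throughputs have the closed form $\theta_i(\vec{r})=e^{r_i}/(1+\sum_j e^{r_j})$, in place of the measured throughput $\hat{\theta}_i$:

```python
import math

def throughputs(r):
    """Stationary ICN throughputs for a fully-connected contention
    graph: the feasible states are 'all idle' or 'only link i active'."""
    z = 1.0 + sum(math.exp(ri) for ri in r)
    return [math.exp(ri) / z for ri in r]

def run_updates(targets, alpha=1.0, iters=2000):
    """Gradient-style TA updates of (IterationDynamics)."""
    r = [0.0] * len(targets)
    for _ in range(iters):
        th = throughputs(r)
        r = [ri + alpha * (ti - thi) for ri, ti, thi in zip(r, targets, th)]
    return r, throughputs(r)

# Feasible targets: they sum to less than 1, leaving some idle time.
targets = [0.3, 0.5]
r_star, th_star = run_updates(targets)
assert all(abs(t - th) < 1e-4 for t, th in zip(targets, th_star))
```

Since the targets are strictly feasible, the iteration converges to the unique NE with $\vec{\theta}^*(\vec{r}^*)=\vec{\tilde{\theta}}$, consistent with Propositions \ref{csmaNEexistence} and \ref{csmaNEconvergence}.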
The conditions for the convergence of the NE in the CSMA subgame are summarized in the following proposition.
\begin{proposition}\label{csmaNEconvergence}
For the target rate $\vec{\tilde{\theta}}\in \mathcal{C}$ (strictly feasible region), the iteration dynamics in (\ref{IterationDynamics}) with a small enough step size $\alpha$ and a long enough measurement period $\tau$ will always converge to the NE $\vec{r}^*\in \mathcal{R}^N$ in the CSMA subgame.
\end{proposition}
\textit{Proof}:
We apply the same idea used when proving Lemma \ref{CSMAlemma}. Given a $\vec{\tilde{\theta}}\in \mathcal{C}$, we use the maximum log-likelihood method to estimate the parameters $\vec{r}^*$ which result in $\vec{\theta}(\vec{r}^*)=\vec{\tilde{\theta}}$, or equivalently, result in the desired state probability distribution $\overrightarrow{p}^{\tilde{\theta}}$ such that
$\vec{\theta}(\vec{r}^*)=\textstyle\sum_{\vec{s}\in\mathcal{S}}p_{\vec{s}}^{\tilde{\theta}}\vec{s}$.
The log-likelihood function $F(\vec{r};\vec{\tilde{\theta}})$ is given in (\ref{LogLikelihood}).
It has been shown in Section \ref{SectionLemma} that $F(\vec{r};\vec{\tilde{\theta}})$ is a strictly concave function in $\vec{r}$ and attains its maximum when $\vec{\theta}(\vec{r}^*)=\vec{\tilde{\theta}}$.
Therefore, we can use the subgradient method \cite{boyd2003subgradient}, implemented by the iteration in (\ref{IterationDynamics}), to obtain the optimal solution $\vec{r}^*$,
where $\tilde{\theta}_i^{(l)}-\hat{\theta}_i(k)$ is an estimate of the gradient $\frac{\partial F(\vec{r};\vec{\tilde{\theta}})}{\partial r_i}$ (see (\ref{FirtOrderPartial})) in the $k$-th measurement period.
Since the objective function $F(\vec{r};\vec{\tilde{\theta}})$ is differentiable and concave in $\vec{r}$, the subgradient method with constant step size $\alpha$ yields convergence to the optimal value, provided the step size $\alpha$ is small enough and the measurement period is long enough \cite{boyd2003subgradient}.
In summary, the above proposition follows.
$\blacksquare$
As mentioned above, under the myopic best response update approach, if the target rate ${\vec{\tilde\theta}}$ is beyond $\mathcal{C}$, all players will keep increasing their TAs as long as their throughputs are lower than their respective target rates, and the network will be pushed into an undesired congested situation. To overcome this problem,
we impose an upper limit $r_{max}$ on $r_i, \forall i\in \mathcal{N}$ as an implementation constraint.
The iteration dynamics in (\ref{IterationDynamics}) then become:
\begin{equation}\label{IterationDynamicsRmax}
r_i(k+1)=\min\{ r_i(k)+\alpha\cdot(\tilde{\theta}_i^{(l)}-\hat{\theta}_i(k)), r_{max}\}, \forall i \in \mathcal{N}.
\end{equation} %
The physical meaning of imposing the $r_{max}$ constraint is to restrain the D2D links from transmitting too aggressively, so that local congestion at some links will not affect the whole network.
The outcome of introducing such a restriction is that the feasible throughput region shrinks to a subset of the original one. Hence the solution obtained with this constraint imposed is always guaranteed to be within $\mathcal{C}$. During the myopic play, if any link's TA reaches $r_{max}$, the BS will be informed. The price $M$ is then frozen and the whole D2D network operates at the boundary of the ``shrunken" feasible throughput region. If this happens, not all users are able to achieve their desired rates, or even be admitted, as the network is in ``congestion".
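The role of $r_{max}$ as a congestion detector can be seen in the same fully-connected toy model used above (hypothetical, not part of the system model): when the targets are infeasible, the capped updates in (\ref{IterationDynamicsRmax}) pin the TAs at $r_{max}$ while the achieved throughputs stay below their targets:

```python
import math

R_MAX = 5.0

def throughputs(r):
    # Stationary throughputs for a fully-connected contention graph.
    z = 1.0 + sum(math.exp(ri) for ri in r)
    return [math.exp(ri) / z for ri in r]

def run_capped(targets, alpha=1.0, iters=3000):
    # Capped TA updates of (IterationDynamicsRmax).
    r = [0.0] * len(targets)
    for _ in range(iters):
        th = throughputs(r)
        r = [min(ri + alpha * (ti - thi), R_MAX)
             for ri, ti, thi in zip(r, targets, th)]
    return r, throughputs(r)

# Infeasible targets: they sum to more than the unit capacity.
r, th = run_capped([0.6, 0.6])
assert any(abs(ri - R_MAX) < 1e-9 for ri in r)  # some TA hits the cap
assert all(thi < 0.6 for thi in th)             # targets are not reached
```

Observing $r_i^*=r_{max}$ together with $\theta_i^*<\tilde{\theta}_i$ is exactly the signal the BS uses to conclude that the current target rates are outside the feasible region.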
\subsection{Analysis of the Stackelberg Game}\label{QuasiConvex}
In this subsection we analyze the game structure of the Stackelberg game.
From Proposition \ref{csmaNEexistence} and Proposition \ref{csmaNEconvergence}, verifying $\vec{\tilde{\theta}}\in\mathcal{C}$ is equivalent to checking that the CSMA subgame converges to the unique subgame NE $\vec{r}^*$ defined in (\ref{NashEquilibrium}) under the given service price $M$.
Therefore, the leader problem in (\ref{LeaderProblemC}) is equivalent to the following optimization problem:
\begin{equation}\label{LeaderProblem}
\begin{array}{l l}
\max\limits_{M\geq 0} ~~g(M)\\
\mbox{s.t.} ~~\left\{
\begin{array}{l l l}
\textrm{equality constraint (\ref{TargetRate})}, \forall i\in \mathcal{N}, \\
\textrm{equality constraint (\ref{NashEquilibrium})},\\
r_i^*< r_{max}, \forall i\in \mathcal{N},\\
\end{array} \right.
\end{array}
\end{equation}%
where $r_i^*$ is the TA of D2D link $i$ at the NE of the CSMA subgame, as given in (\ref{NashEquilibrium}).
The constraint $r_i^*< r_{max}, \forall i\in \mathcal{N}$ implies that the target rate $\vec{\tilde{\theta}}$ as given in (\ref{TargetRate}) is strictly feasible at the subgame NE, i.e., the throughput will converge to $\vec{\theta}^*(\vec{r}^*)=\vec{\tilde{\theta}}$ in the CSMA subgame.
To distinguish it from the NE $\vec{r}^*$ of the CSMA subgame, we refer to the equilibrium solution $(M^{opt},\vec{r}^{opt})$ of the Stackelberg game as the Stackelberg Equilibrium (SE) \cite{StackelbergEquilibrium}, where $M^{opt}$ is the optimal solution to (\ref{LeaderProblem}) and $\vec{r}^{opt}$ is the subgame NE under the price $M^{opt}$.
The problem in (\ref{LeaderProblem}) is non-convex \cite[pp. 136]{Boyd} since the objective function $g(M)=\textstyle\sum_{i=1}^N \tilde{\theta}_i(M)$ is non-concave in $M$ and the equality constraints (\ref{TargetRate}) and (\ref{NashEquilibrium}) are nonlinear.
Fortunately, the problem can be converted into a quasi-convex optimization problem \cite[pp. 144]{Boyd}, whose solution can be iteratively evaluated by solving a sequence of convex optimization problems. This can be seen as follows.
Since the target rate $\tilde{\theta}_i(M)$ of each D2D link $i$ is non-increasing with the price $M$,
the chain of prices $M^{(0)}>M^{(1)}>\cdots>M^{(l)}>M^{(l+1)}$ induces a chain of target rates $\vec{\tilde{\theta}}^{(0)}\preceq\vec{\tilde{\theta}}^{(1)}\preceq\cdots \preceq\vec{\tilde{\theta}}^{(l)}\preceq\vec{\tilde{\theta}}^{(l+1)}$.
Therefore, the objective function $g(M)=\textstyle\sum_{i=1}^N \tilde{\theta}_i(M)$ is also non-increasing in $M$, and hence quasi-concave in $M$.
Regarding the constraints in (\ref{LeaderProblem}),
from Lemma \ref{CSMAlemma}, if the target rate $\vec{\tilde{\theta}}^{(l)}\in \mathcal{C}$ (strictly feasible throughput region, which is the interior of the feasible throughput region $\bar{\mathcal{C}}$), then it is achievable with finite-valued TAs $\vec{r}^*$.
On the other hand, from Theorem \ref{PartialOrderTheorem}, if the target rate $\vec{\tilde{\theta}}^{(l)}\not\in \bar{\mathcal{C}}$, then any target rate $\vec{\tilde{\theta}}^{(l+1)}\succeq\vec{\tilde{\theta}}^{(l)}$ is not in $\bar{\mathcal{C}}$, i.e., it is not achievable and the constraints in (\ref{LeaderProblem}) are not satisfied.
The crossing from within $\bar{\mathcal{C}}$ to beyond can be detected by the use of $r_{max}$.
Therefore, the superlevel set $\{M|g(M)\geq G\}$ is convex, which is equivalent to the line segment $\{M|g^{-1}(\sup g)\leq M\leq g^{-1}(G)\}$, where $G$ is a constant, $g^{-1}$ is the inverse function of $g(M)$, and $\sup g$ is the optimal value of (\ref{LeaderProblem}).
In summary, the problem in (\ref{LeaderProblem}) is quasi-convex \cite[pp. 137, pp.144]{Boyd}, since the objective function $g(M)$ to be maximized is quasi-concave, and the superlevel set $\{M|g(M)\geq G\}$ is convex.
As a result, the problem in (\ref{LeaderProblem}) can be reduced to a sequence of feasibility problems:
\begin{equation}\label{FeasibilityProb}
\begin{array}{l l}
\textrm{find} ~~M\\
\mbox{s.t.} ~~\left\{
\begin{array}{l l l l}
g(M)\geq G,\\
\textrm{equality constraint (\ref{TargetRate})}, \forall i\in \mathcal{N},\\
\textrm{equality constraint (\ref{NashEquilibrium})},\\
r_i^*< r_{max}, \forall i\in \mathcal{N}.\\
\end{array} \right.
\end{array}
\end{equation}%
If the problem (\ref{FeasibilityProb}) is feasible, then the maximum total throughput $\sup g$ is not less than $G$. Conversely, if the problem (\ref{FeasibilityProb}) is infeasible, then we can conclude $\sup g<G$.
In order to find the optimal value $\sup g$ to the problem (\ref{LeaderProblem}), we can test different superlevels $G$ in the feasibility problem (\ref{FeasibilityProb}).
For each superlevel $G$,
from the proof of Proposition \ref{csmaNEconvergence}, the feasibility problem in (\ref{FeasibilityProb}) is equivalent to the following max-log-likelihood problem:
\begin{equation}\label{MaxLog}
\begin{array}{l l}
\max\limits_{\vec{r}} ~~F(\vec{r};\vec{\tilde{\theta}})\\
\mbox{s.t.} ~~\left\{
\begin{array}{l l l}
\textrm{equality constraint (\ref{TargetRate})}, \forall i\in \mathcal{N},\\
M=g^{-1}(G),\\
r_i< r_{max}, \forall i\in \mathcal{N},\\
\end{array} \right.
\end{array}
\end{equation}
where $F(\vec{r};\vec{\tilde{\theta}})$ is the log-likelihood function defined in (\ref{LogLikelihood}).
In other words, if the problem (\ref{FeasibilityProb}) is feasible,
then there exists a price $M=g^{-1}(G)$, such that the target rate $\vec{\tilde{\theta}}(M)$ is achievable with finite TA $r_i< r_{max}, \forall i\in \mathcal{N}$.
Therefore, we can use the max-log-likelihood method to estimate the parameters $\vec{r}$ which achieve the target rate $\vec{\tilde{\theta}}(M)|_{M=g^{-1}(G)}$, as given in (\ref{TargetRate}).
Notice that given the constant $G$, the price $M$ and the target rate $\vec{\tilde{\theta}}$ become constant values as well.
Moreover, as shown in the proof of Proposition \ref{csmaNEconvergence}, the log-likelihood function is concave in $\vec{r}$.
As a result, the max-log-likelihood problem in (\ref{MaxLog}) is a convex optimization problem, and can be solved by the subgradient updating method in (\ref{IterationDynamicsRmax}).
If the iteration dynamics converge to a subgame NE with $r_i^*< r_{max},\forall i\in\mathcal{N}$, then the optimal solution to (\ref{MaxLog}) exists, i.e., the problem (\ref{FeasibilityProb}) is feasible. Otherwise, if the iteration dynamics in (\ref{IterationDynamicsRmax}) converge to a subgame NE with $r_i^*= r_{max}$ and $\theta_i^*<\tilde{\theta}_i$ for some D2D link $i$, then the optimal solution to (\ref{MaxLog}) does not exist and the problem (\ref{FeasibilityProb}) is infeasible, i.e., not all D2D links' target rates are being achieved.
In summary, the problem in (\ref{LeaderProblem}) can be reduced to a sequence of convex optimization problems. A simple bisection method can be used to choose the superlevels $G$ (or equivalently, the prices $M=g^{-1}(G)$) and test the feasibility problem (\ref{FeasibilityProb}). Alternatively, we can borrow ideas from the feasible direction method \cite[Chap. 10]{Bazaraa}, which avoids testing in the infeasible region, and design the pricing strategies so as to keep the network operating in the feasible region while tuning the price $M$.
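The bisection just mentioned can be sketched as follows. Here the subgame feasibility test of (\ref{FeasibilityProb}) is abstracted into a hypothetical monotone oracle `is_feasible(M)`; in practice it would be answered by running the CSMA subgame and checking $r_i^*<r_{max}$ for all links:

```python
def bisect_price(is_feasible, m_hi, tol=1e-4):
    """Approximate the smallest feasible price by bisection, assuming
    feasibility is monotone in M: a high price means low demand and
    hence feasibility; a low price may overload the network."""
    lo, hi = 0.0, m_hi          # invariant: hi is always feasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            hi = mid            # mid supports all target rates
        else:
            lo = mid            # network congested at price mid
    return hi

# Toy stand-in for the subgame test: total demand 2.0 - M must stay
# below a capacity of 0.9, so exactly the prices above 1.1 are feasible.
m_opt = bisect_price(lambda M: 2.0 - M < 0.9, m_hi=3.0)
assert abs(m_opt - 1.1) < 1e-3
```

Each call to the oracle corresponds to one full run of the CSMA subgame, so the number of subgame rounds grows only logarithmically in the required price precision.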
\subsection{Pricing Strategies of the Stackelberg Leader}\label{PricingStrategies}
We call each round of the CSMA subgame under a certain price $M$ a \textit{stage} of the Stackelberg game. In each stage, the leader needs feedback from each D2D link $i$ about its target rate $\tilde{\theta}_i$, its converged TA $r_i^*$, and its throughput $\theta_i^*$.
Notice that the leader only needs to know that each D2D link's target rate is monotonic in the price $M$; no information about the specific form of (\ref{TargetRate}) for any link is required.
Under low load, all links achieve their maximum desired throughputs $\pi_i, \forall i\in \mathcal{N}$; this situation is detected when, for all links $i\in\mathcal{N}$, the target rate satisfies $\tilde{\theta}_i>0$ and remains unchanged between two consecutive prices $M^{(l)}$ and $M^{(l+1)}$.
Under heavy load situations, the pricing strategies of the Stackelberg leader need to be carefully designed to converge to the optimal price $M^{opt}$.
To detect convergence, we
define $\Delta_i=r_{max}-r_i^*$ as the ``margin" of transmission aggressiveness for each D2D link $i\in \mathcal{N}$. When the achieved target rates are close to the capacity boundary, the leader can make use of $\Delta_{min}=\min \{\Delta_i,\forall i\in \mathcal{N}\}$ as an indication of how close the current throughput $\vec{\theta}^*$ is to the boundary of $\mathcal{C}$.
Since the total throughput $g(M)$ is non-increasing in the price $M$, the leader can gradually decrease $M$ to increase $g(M)$ until the constraint $r_i^*< r_{max}$ is ``critically" satisfied for some D2D link, say link $i$, i.e., $\Delta_{min}\leq\epsilon$, where $\epsilon$ is a small positive threshold.
The algorithm at the BS works as follows.
In the 0-th stage, the leader can start with a large price $M^{(0)}$ so that the network starts with low load.
Similar to the Newton method \cite[pp. 488]{Boyd}, which applies line search to narrow down the search region before using Newton steps to refine the optimal solution, the adjustment of our pricing strategy consists of two phases as well. In the first phase, the leader uses a relatively large decrement step $\phi$ to decrease the price $M$ until $\Delta_{min}\leq\eta$, where $\eta>\epsilon$ is a threshold for entering the second phase. In the second phase, the decrement steps are refined using $\Delta_{min}$, since $\Delta_{min}$ becomes smaller as the target rates approach the boundary of the feasible throughput region.
In summary, the leader can update its price $M$ based on $\Delta_{min}$ at the end of the $l$-th stage as follows:
\begin{equation}\label{PriceUpdate}
\begin{array}{l l}
M^{(l+1)}=\left\{
\begin{array}{l l}
M^{(l)}-\phi, &\textrm{ $\Delta_{min}>\eta$,}\\
\max\{ M^{(l)}-\beta\cdot\Delta_{min}, M_{lower}\}, & \textrm{ $\Delta_{min}\leq\eta$},\\
\end{array} \right.
\end{array}
\end{equation}
where $\phi$ is a positive constant and $\beta$ is a positive parameter. $M_{lower}$ is initially set to 0 and is updated to take the value of the current $M^{(l)}$ once it is detected that the solution for the target rate $\vec{\tilde{\theta}}$ is outside the feasible region. Its purpose is to ensure that the subsequent prices $M^{(l+1)}, \cdots$ do not go below this value.
The parameter $\beta$ can be chosen to be small enough so that the price gradually decreases until $\Delta_{min}\leq\epsilon$.
However, for faster convergence a larger $\beta$ may be preferred, and it might happen that the initially chosen $\beta$ is too large, such that the new price $M^{(l+1)}$ pushes the target rate $\vec{\tilde{\theta}}$ outside the feasible region, i.e., $r_i^*=r_{max}$ but $\theta_i^*<\tilde{\theta}_i$ for some D2D link $i$.
In such cases, the leader stores the current unachievable price as the new lower bound $M_{lower}$, resets the price to the previously found achievable price $M_{prev}$, and reduces $\beta$ by a discount factor $\sigma$, e.g., $\sigma=0.9$.
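The leader's two-phase adjustment in (\ref{PriceUpdate}) can be sketched as follows (a simplified, hypothetical rendering; the margins $\Delta_{min}$ below are illustrative stand-ins for values obtained by running the subgame):

```python
def next_price(M, delta_min, phi, beta, eta, M_lower):
    """One leader update per (PriceUpdate): a coarse constant step far
    from the boundary, a margin-scaled step near it, floored at M_lower."""
    if delta_min > eta:
        return M - phi                          # phase 1: coarse step
    return max(M - beta * delta_min, M_lower)   # phase 2: refined step

# Hypothetical trace: the margin shrinks as the price nears the optimum.
M, M_lower = 3.0, 0.0
for delta_min in [2.0, 1.5, 0.4, 0.1]:  # illustrative margins per stage
    M = next_price(M, delta_min, phi=0.5, beta=1.0, eta=0.5, M_lower=M_lower)
assert abs(M - 1.5) < 1e-9  # 3.0 - 0.5 - 0.5 - 0.4 - 0.1 = 1.5
```

The backtracking branch (resetting to $M_{prev}$ and discounting $\beta$ by $\sigma$ when the new price turns out infeasible) is handled separately in Algorithm \ref{IterationProcessStackelberg}.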
The pricing strategies of the leader and the CSMA subgame are summarized in Algorithm \ref{IterationProcessStackelberg}.
Through Algorithm \ref{IterationProcessStackelberg}, the Stackelberg game gradually converges to the optimal price $M^{opt}$, under which the total throughput of the CSMA users is maximized while their heterogeneous target rates are all satisfied.
\begin{algorithm}
\caption{Iteration Process of the Stackelberg Game}\label{IterationProcessStackelberg}
\begin{algorithmic}[1]
\small
\State\hskip-\ALG@thistlm \textbf{Initialize}:
\State The BS chooses the initial price $M=M^{(0)}$ and informs the D2D links in the control plane;
\State Each D2D link $i\in \mathcal{N}$ chooses the initial TA $r_i(0)$;
\State
\Repeat:
\State In the $l$-th stage:
\For {$i=1,\cdots,N$ D2D links}:
\State \begin{varwidth}[t]{0.85\linewidth}
Set target rate $\tilde{\theta}_i^{(l)}$ based on price $M^{(l)}$, as in (\ref{TargetRate});
\end{varwidth}
\EndFor
\State
\Repeat:
\State In the $k$-th measurement period:
\For {$i=1,\cdots,N$ users}:
\State \begin{varwidth}[t]{0.8\linewidth}
Estimate the empirical throughput $\hat{\theta}_i(k)$, as in (\ref{MessuredTheta});
\end{varwidth}
\State \begin{varwidth}[t]{0.8\linewidth}
Update the TA $r_i(k+1)$, as in (\ref{IterationDynamicsRmax});
\end{varwidth}
\EndFor
\State $k\leftarrow k+1$;
\Until{
\begin{varwidth}[t]{0.7\linewidth}
$\vec{r}$ converges to the subgame NE $\vec{r}^*$.
\end{varwidth}
}
\State Each user $i\in \mathcal{N}$ informs the BS about $\tilde{\theta}_i^{(l)}$, $r_i^*$ and $\theta_i^*$;
\State
\State At the BS:
\State $\Delta_{min}=\min\limits_{i\in \mathcal{N}} \Delta_i=\min\limits_{i\in \mathcal{N}} (r_{max}-r_i^*)$;
\If {$\Delta_{min}>\epsilon$}:
\State set $M^{(l+1)}$ as in (\ref{PriceUpdate}); $M_{prev}=M^{(l)}$; $\Delta_{prev}=\Delta_{min}$.
\ElsIf {$0<\Delta_{min}\leq\epsilon$ or ($\vec{\tilde{\theta}}^{(l)}\succ\vec{0}$ and $\vec{\tilde{\theta}}^{(l)}=\vec{\tilde{\theta}}^{(l-1)}$)}:
\State The Stackelberg game converges with $M^{opt}=M^{(l)}$; go to \textbf{END}.
\ElsIf {$r_i^*=r_{max}$ but $\theta_i^*<\tilde{\theta}_i$ for some user $i$}:
\State $M_{lower}=M^{(l)}$; $\beta\leftarrow\sigma\cdot\beta$; $M^{(l+1)}= \max\{ M_{prev}-\beta\cdot\Delta_{prev}, M_{lower}\}$.
\EndIf
\State $l\leftarrow l+1$;
\Until{The Stackelberg game converges. \textbf{END}.}
\end{algorithmic}
\end{algorithm}
An important piece of side information provided by the proposed algorithm is the identification of the bottleneck link in the heterogeneous D2D network.
Upon convergence of the Stackelberg game, the D2D link $L=\arg\min \{\Delta_i,\forall i\in \mathcal{N}\}=\arg\min \{r_{max}-r_i^{opt},\forall i\in \mathcal{N}\}$ is the bottleneck link of the network, since any further decrease of the price below $M^{opt}$ would drive the target rate $\vec{\tilde\theta}$ outside the capacity region, and link $L$ could no longer achieve its target rate. The identification of such bottleneck links can provide valuable information, for example, in data offloading, for re-assigning these links back to the cellular network when necessary.
How to achieve the optimal trade-off remains an interesting topic for future work.
\subsection{Complexity of Algorithm \ref{IterationProcessStackelberg}}\label{SectionComplexity}
Algorithm \ref{IterationProcessStackelberg} consists of two loops. In the outer loop, the BS chooses a service price $M^{(l)}$ at the $l$-th stage according to the pricing strategies in Section \ref{PricingStrategies}. In the inner loop, for each given service price $M^{(l)}$, the D2D links play the CSMA subgame distributively and iteratively until converging to their respective target rates. We analyze the complexity in terms of the number of iterations required, first for the CSMA subgame, then for the pricing strategies.
For the CSMA subgame, assume that the target rate $\vec{\tilde\theta}$ under the given service price $M^{(l)}$ is in the strictly feasible region $\mathcal{C}$. According to Proposition \ref{csmaNEexistence} and Proposition \ref{csmaNEconvergence}, the distributed strategy updates of the CSMA users in (\ref{IterationDynamics}) are equivalent to the gradient method in maximizing the log-likelihood function $F(\vec{r};\vec{\tilde{\theta}})$ which is differentiable and strictly concave in $\vec{r}$.
In particular, the gradient of $F(\vec{r};\vec{\tilde{\theta}})$ is $\nabla F(\vec{r};\vec{\tilde{\theta}})=\vec{\tilde\theta}-\vec{\theta}(\vec{r})$, as shown in (\ref{FirtOrderPartial}). Since the maximum value of $F(\vec{r};\vec{\tilde{\theta}})$ is finite and attained at $\vec{r}^*$, this means that $\vec{\theta}^*(\vec{r}^*)=\vec{\tilde\theta}$ can be solved by setting the gradient $\nabla F(\vec{r}^*;\vec{\tilde{\theta}})=\vec{0}$.
Since the norms of the throughput $\vec{\theta}(\vec{r})$ and its gradient $\nabla \vec{\theta}(\vec{r})$ are both bounded, it can be shown that $\nabla F(\vec{r};\vec{\tilde{\theta}})$ is Lipschitz continuous \cite{polyak1987} in $\vec{r}$, i.e., $\|\nabla F(\vec{r}_a;\vec{\tilde{\theta}})-\nabla F(\vec{r}_b;\vec{\tilde{\theta}})\|=\|\vec{\theta}(\vec{r}_a)-\vec{\theta}(\vec{r}_b)\|\leq H\|\vec{r}_a-\vec{r}_b \|$, $\forall \vec{r}_a,\vec{r}_b\in \mathcal{R}^N$, where $H$ is a positive constant.
According to Theorem 1 in \cite[Section 1.4]{polyak1987} and Theorem 2.1.14 in \cite[Section 2.1.5]{nesterov2004}, for a small enough step size $\alpha$ ($0<\alpha\leq 1/H$), the number of iterations to reach
$\|\nabla F(\vec{r};\vec{\tilde{\theta}})\|=\|\vec{\tilde\theta}-\vec{\theta}(\vec{r})\|<\xi$ is $O(1/\xi)$ (i.e., no more than a fixed multiple of $1/\xi$).
It is worth mentioning that although this complexity bound $O(1/\xi)$ on the number of required iterations is independent of the number of users, we have implicitly assumed that the measurement period $\tau$ is long enough to provide an accurate estimation of throughputs. In fact, the choice of $\tau$ depends on the number of users and the underlying topology. The purpose of choosing a large $\tau$ is to ensure that the Markov chain corresponding to the updated $\vec{r}$ reaches its stationary distribution, allowing for an accurate estimation of throughputs. In general, a larger number of users requires a larger value of $\tau$. More comparisons and discussions on how to choose $\tau$ for a given number of users and different topologies can be found in \cite{ICNuniqueProof}.
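To make the gradient dynamics concrete, the following sketch (a hypothetical toy example, not the paper's simulator) runs the update $\vec{r} \leftarrow \vec{r} + \alpha(\vec{\tilde\theta}-\vec{\theta}(\vec{r}))$ for two mutually conflicting links, for which the ICN stationary throughput has the closed form $\theta_i(\vec{r})=e^{r_i}/(1+e^{r_1}+e^{r_2})$; the function names and target values are illustrative assumptions.

```python
import math

def throughput(r):
    """Stationary ICN throughput for two mutually conflicting links:
    the feasible states are {idle, link 1 active, link 2 active}."""
    z = 1.0 + math.exp(r[0]) + math.exp(r[1])
    return [math.exp(r[0]) / z, math.exp(r[1]) / z]

def csma_subgame(target, r0, alpha=0.4, xi=0.01, max_iter=2000):
    """Distributed TA update r_i <- r_i + alpha*(target_i - theta_i(r)),
    i.e. gradient ascent on the log-likelihood F(r; target).
    Returns the final TAs and the number of iterations used."""
    r = list(r0)
    for k in range(1, max_iter + 1):
        theta = throughput(r)
        err = max(abs(t - th) for t, th in zip(target, theta))
        if err < xi:
            return r, k
        for i in range(len(r)):
            r[i] += alpha * (target[i] - theta[i])
    return r, max_iter

# Feasible targets (0.3 + 0.4 < 1): the subgame converges.
r_star, iters = csma_subgame([0.3, 0.4], [-2.0, -2.0])
```

The loop exits once the maximum rate error falls below $\xi$, within the $O(1/\xi)$ bound above.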
We now briefly discuss how to estimate the number of pricing stages required in the outer loop.
This analysis is complicated by the fact that the step sizes for $M$ are changing each time. Assume that the maximum value of the price is $M_{max}$. In phase 1 of the price setting, since the price is decreasing at a large constant step $\phi$, the number of pricing stages in phase 1 is capped by $\lceil M_{max}/\phi \rceil$, or $\lceil M_{max}/\phi \rceil/2$ on average.
In phase 2, the TA margin $\Delta_{min}\leq\eta$, and the price is already close to the optimal. In the algorithm, we refine the price change $\delta M^{(l+1)}=M^{(l+1)}-M^{(l)}$ at stage $l$ according to $\delta M^{(l+1)}=-\beta \Delta_{min}^{(l)}$ progressively until the TA margin gradually approaches the required precision $\epsilon$, i.e., $\Delta_{min}\leq\epsilon$.
Assuming that the interval $\epsilon<\Delta_{min}\leq\eta$ is small, through simulations we find that the relationship between the TA margin $\Delta_{min}^{(l+1)}$ and the price change $\delta M^{(l+1)}$ can be approximated by $\Delta_{min}^{(l+1)}=\Delta_{min}^{(l)}+B\cdot \delta M^{(l+1)}$, where $B$ is a positive constant.
Since we set $\delta M^{(l+1)}=-\beta \Delta_{min}^{(l)}$, we have $\Delta_{min}^{(l+1)}=\Delta_{min}^{(l)}+B\cdot (-\beta \Delta_{min}^{(l)})=(1-\beta B)\Delta_{min}^{(l)}$.
The TA margin then follows a geometric progression, and we can estimate the value of $h$ such that $\Delta_{min}^{(l+h)}\leq \epsilon$.
It can then be easily shown that
the number of stages for $\Delta_{min}$ to decrease from $\eta$ to $\epsilon$ is approximately $\frac{\log_{10}(\eta/\epsilon)}{\log_{10} 1/(1-\beta B)}$, i.e., $O(d\log_{10} (\eta/\epsilon))$ for suitable choices of $\beta$ and $B$ ($0<\beta B<1$), where $d=\frac{1}{\log_{10} 1/(1-\beta B)}$.
Note that $\beta$ can be chosen according to the value of $B$, but $B$ is topology and utility dependent.
As a result, the total number of stages required for convergence is $O(1/\phi)+O(d\log_{10} (\eta/\epsilon))$.
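The phase-2 stage count can be checked numerically with the linearized recursion $\Delta_{min}^{(l+1)}=(1-\beta B)\Delta_{min}^{(l)}$ assumed above; this is a sketch using illustrative values $\beta=5$, $B=0.07$ (the value of $B$ later estimated in the simulations):

```python
import math

def phase2_stages(eta, eps, beta, B):
    """Count stages for the TA margin to fall from eta to eps under the
    linearized recursion Delta <- (1 - beta*B) * Delta."""
    assert 0 < beta * B < 1
    delta, stages = eta, 0
    while delta > eps:
        delta *= (1.0 - beta * B)
        stages += 1
    return stages

# With beta*B = 0.35 and eta/eps = 10:
simulated = phase2_stages(eta=1.0, eps=0.1, beta=5.0, B=0.07)
predicted = math.log10(1.0 / 0.1) / math.log10(1.0 / (1.0 - 0.35))
# simulated is the ceiling of predicted: 6 stages vs. d*log10(eta/eps) ~ 5.3
```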
In summary, the number of iterations required for convergence in the proposed game is given by the number of iterations per stage multiplied by the required number of stages, i.e., $O(1/\xi)\cdot(O(1/\phi)+O(d\log_{10} (\eta/\epsilon)))$.
\section{Simulation Study}\label{Simulation}
In this section we demonstrate the Stackelberg game via an example.
Consider the 8 D2D links' contention graph in Fig. \ref{asym8}.
Assume that the relationships between the links' target rates and the price $M$ are given as in Fig. \ref{utility8}.
In the ICN model, we assume that the links' transmission time is uniformly distributed with mean of 1 ms in the range $[0.5, 1.5]$ ms. Further assume that link $i$'s backoff time is uniformly distributed with mean of $1/\exp(r_i)$ ms in the range $[0,2/\exp(r_i)]$ ms.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=1\linewidth, trim=0 0 0 0,clip]{Fig7a_asym8}
\caption{Contention Graph}
\label{asym8}
\end{subfigure}%
~
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=1\linewidth, trim=0 0 0 0,clip]{Fig7b_utility8}
\caption{Target Rates under Price $M$}
\label{utility8}
\end{subfigure}
\caption{Topology and Target Rates of 8 D2D Links}\label{Stackelberg8}
\end{figure}
\subsection{CSMA Subgame}\label{SimSub}
Assume that the current price $M=30$, then from Fig. \ref{utility8} we know that the D2D links' target rates are $\vec{\tilde{\theta}}=[0.270,0.297,0.347,0.315,0.242,0.176,0.132,0.220]$.
Assume that the initial TAs $r_i=-2,\forall i\in\mathcal{N}$. The CSMA subgame is then played according to Lines 10 to 17 in Algorithm \ref{IterationProcessStackelberg}.
In the $k$-th measurement period, we apply a simple averaging filter to smooth the measured throughput as:
\begin{equation}\label{MessuredThetaFactor}
\hat{\theta}_i(k)=(1-\delta)\cdot\hat{\theta}_i(k-1)+\delta\cdot T_i(k)/\tau, \forall i \in \mathcal{N},
\end{equation} %
where $\delta$ is the weight of the new measurement. In our simulations, we choose $\delta=0.05$ and the measurement time $\tau=200$ ms.
A smaller value of $\delta$ makes the measured throughput smoother, but also increases the convergence time.
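The filter in (\ref{MessuredThetaFactor}) is a standard exponentially weighted moving average; the sketch below (with hypothetical measurement values) illustrates the smoothing behavior for $\delta=0.05$:

```python
def ewma_throughput(samples, delta=0.05, theta0=0.0):
    """Smooth per-period throughput measurements T_i(k)/tau, putting
    weight delta on the newest sample, as in the measurement filter."""
    theta = theta0
    history = []
    for s in samples:
        theta = (1.0 - delta) * theta + delta * s
        history.append(theta)
    return history

# Measurements oscillating around a true throughput of 0.3:
samples = [0.3 + (0.05 if k % 2 == 0 else -0.05) for k in range(400)]
smoothed = ewma_throughput(samples)
# After the initial transient the estimate stays near 0.3 with small ripple.
```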
To update TAs as in (\ref{IterationDynamicsRmax}), we choose the step size $\alpha=0.4$ and the maximum allowable TA $r_{max}=3$.
Note that a smaller value of $\alpha$ guarantees the convergence of the CSMA subgame, but also increases the convergence time.
The iteration process of the CSMA subgame is then plotted in Fig. \ref{resultM}.
The CSMA subgame converges to 99\% of the target rates ($\xi=1\%$) in around 150 iterations.
According to the complexity analysis in Section \ref{SectionComplexity}, the number of required iterations is $O(1/\xi)$, i.e., on the order of a fixed multiple of $1/\xi=100$. Thus our simulation result is of the same order as the above prediction.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth, trim=0 0 0 0,clip]{Fig8_resultM}
\caption{CSMA Subgame of the 8 D2D links under $M=30$} \label{resultM}
\end{figure}
\subsection{Stackelberg Game}
Assume that the initial price $M^{(0)}=55$, $
\phi=5$, $\beta=5$, $\eta=1$, $\epsilon=0.1$, $\sigma=0.9$, and the rest of the parameters are the same as in Section \ref{SimSub}.
The iteration process of the Stackelberg game is shown in Fig. \ref{result}.
The game converges after 11 stages, in which $\Delta_{min}=r_{max}-r_3=3-r_3$ and gradually approaches 0. The first 6 stages undergo a constant price decrement ($\phi=5$), i.e., $M=55,50,45,40,35,30$ until $\Delta_{min}\leq\eta=1$ is detected. After the CSMA subgame converges under the price $M=30$, we have $\Delta_{min}=r_{max}-r_3^*=3-2.4=0.6$ and hence $0.1=\epsilon<\Delta_{min}<\eta$. Therefore, the Stackelberg game enters the second pricing phase, which consists of 5 stages ($M=27.10,25.00,23.68,22.73,22.12$), according to (\ref{PriceUpdate}). We consider the game converged when $\Delta_{min}\approx 0.08<\epsilon$ and the optimal price is $M^{opt}=22.12$.
After convergence, the average error of the measured throughputs as compared to the target rates is around $\xi=1\%$.
Notice that we cannot decrease $M$ any further since the network is already close to the capacity boundary ($r_3^*=2.92\approx r_{max}=3$). In other words, any further decrease in $M$ would drive the target rate $\vec{\tilde\theta}$ outside the capacity region, and some D2D links (e.g., link 3) could no longer achieve their target rates.
In summary, the proposed Stackelberg game is able to maximize the total throughput of the CSMA users while the target rates of the heterogeneous users can all be satisfied.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth, trim=50 0 0 0,clip]{Fig9_result}
\caption{Stackelberg Games of the 8 D2D Links} \label{result}
\end{figure}
The number of required stages before convergence is consistent with the analysis in Section \ref{SectionComplexity}.
In the above simulations, the maximum value of the price is $M_{max}=60$, and the constant step $\phi=5$.
Therefore, the maximum number of stages in phase 1 is $\lceil M_{max}/\phi \rceil=12$, or 6 on average. In the simulations, phase 1 actually consists of 6 stages before entering phase 2. In phase 2, the required precision $\epsilon=0.1, \eta=1$ and the required number of stages is $O(d\log_{10} (\eta/\epsilon))$, where $d=\frac{1}{\log_{10} 1/(1-\beta B)}$. The value of $B$ can be estimated by using $(\Delta_{min}^{(l+1)}-\Delta_{min}^{(l)})/(M^{(l+1)}-M^{(l)})$, which is approximately 0.07 in the small interval $\epsilon<\Delta_{min}\leq\eta$.
Since we have chosen $\beta=5$, we have $0<\beta B=0.35<1$ and $d=\frac{1}{\log_{10} 1/(1-\beta B)}=5.3$, so the required number of stages in phase 2 is on the order of a fixed multiple of $d\log_{10} (\eta/\epsilon)=5.3$. In the simulations, phase 2 actually consists of 5 stages before convergence.
Note that a smaller $\beta$ could be used to guarantee $\beta B<1$; however, this also increases $d$ and hence requires more stages for convergence.
Finally, the total number of iterations required in the Stackelberg game is $O(1/\xi)\cdot(O(1/\phi)+O(d\log_{10} (\eta/\epsilon)))$, which is on the order of $100\cdot(6+5.3)=1130$.
In the simulations, the Stackelberg game actually converges in around 1000 iterations, which is in the same order as the above prediction.
\subsection{Effect of Parameter $r_{max}$}\label{oscilate}
In the above simulations, we have used the parameter $r_{max}=3$, and obtained the optimal price $M^{opt}$ that maximizes the total throughput $g(M)$ for the 8 users in Fig. \ref{Stackelberg8} with heterogeneous rate requirements.
As is discussed at the end of Section \ref{CSMAgame}, by introducing the $r_{max}$ constraint, the feasible throughput region shrinks to a subset of the original one.
In this subsection, we apply different values of $r_{max}$ to the network and obtain the optimal price $M^{opt}$ that maximizes the total throughput $g(M)$ for the 8 users in Fig. \ref{asym8}. To see the effect of parameter $r_{max}$ only, we assume that the 8 users are homogeneous in their rate requirements, i.e., $\gamma_i=0.05, \pi_i=0.55, b_i=0.0125, m_i=50, \forall i\in\mathcal{N}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth, trim=0 0 0 0,clip]{Fig10_Mrmax}
\caption{Effect of Parameter $r_{max}$} \label{Mrmax}
\end{figure}
The optimal price $M^{opt}$ and the corresponding total throughput $g(M^{opt})$ under each value of the parameter $r_{max}$ are plotted in Fig. \ref{Mrmax}. From Fig. \ref{Mrmax} we can see that, as the value of $r_{max}$ increases, the achievable total throughput $g(M^{opt})$ also increases; moreover, the rate of increase gradually slows down.
Recall that $r_{max}$ is used to restrain the D2D links from transmitting too aggressively. The outcome of introducing such a restriction is that the feasible throughput region shrinks to a subset of the original one.
In particular, when $r_{max}=3$, the achievable total throughput is 2.39. For $r_{max}>3$, the total throughput curve becomes almost flat and approaches the upper bound 2.65 as $r_{max}$ tends to infinity (each user achieves a throughput of 0.33), at which point the shrunken capacity region expands back to the original feasible throughput region $\bar{\mathcal{C}}$.
The corresponding bound on the optimal price is $M^{opt}=27.5$.
In other words, we cannot further
reduce the price $M$ below 27.5 to increase the target rate $\vec{\tilde{\theta}}$, as it is already on the boundary of the feasible throughput region $\bar{\mathcal{C}}$.
It is observed from Fig. \ref{Mrmax} that a larger $r_{max}$ value leads to a larger capacity region, but this also allows for longer transmission durations.
However, the transmission duration should not be too long in practice, otherwise it would lead to large access delay (where the access delay refers to the time between the onset of two consecutive successful transmissions of a link) and large variations of the delay.
The readers are referred to \cite[Sec. IV]{JiangCSMAcollision} for more discussions.
As a result of the above observations, we have adopted $r_{max}=3$ in the above two subsections.
\subsection{Unstable Network Behavior without Price Control from BS}
Finally, we illustrate the outcome of the CSMA game when there is no price control from the BS and when the collective target rates are outside the feasible throughput region $\bar{\mathcal{C}}$ (corresponding to $r_{max}=+\infty$). Suppose the 8 users in Fig. \ref{asym8} all desire a target rate of 0.5 (which is larger than the upper bound 0.33 achieved by the Stackelberg game in Section \ref{oscilate}), and they choose their TAs as in (\ref{IterationDynamics}) in order to achieve their own target rates. The behaviors of the D2D links are plotted in Fig. \ref{unstable}. Since the target rates are not achievable even when $r_{max}=+\infty$, each selfish user chooses an ever-increasing TA when its throughput is below its own target rate. When the TAs are high enough, the users intermittently capture the whole channel and prevent their neighbors from transmitting for a long time, which results in unstable network behaviors and large access delay.
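The runaway-TA effect can be reproduced in miniature (a hypothetical two-link sketch under the closed-form ICN throughput for a complete conflict graph, not the paper's 8-link simulator): the two links' throughputs sum to strictly less than 1, so a per-link target of 0.6 is infeasible and the uncapped TA updates grow without bound.

```python
import math

def throughput(r):
    """Stationary throughput of two mutually conflicting links (ICN model)."""
    z = 1.0 + math.exp(r[0]) + math.exp(r[1])
    return [math.exp(r[0]) / z, math.exp(r[1]) / z]

def run_uncapped(target, r0, alpha=0.4, steps=300):
    """TA updates with r_max = +infinity; infeasible targets make TAs diverge."""
    r = list(r0)
    for _ in range(steps):
        theta = throughput(r)
        for i in range(len(r)):
            r[i] += alpha * (target[i] - theta[i])
    return r

# theta_1 + theta_2 < 1 always, so the target (0.6, 0.6) is infeasible:
r_final = run_uncapped([0.6, 0.6], [-2.0, -2.0])
# Both TAs keep growing without bound, mirroring the unstable behavior
# observed without price control.
```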
On the other hand, with the pricing mechanism, the network is stable and the maximum total throughput $g(M^{opt})$ is achieved as in Fig. \ref{Mrmax}, according to the $r_{max}$ value chosen. Therefore, the BS plays an important role in tuning the service price so that the network always operates within the feasible throughput region.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth, trim=0 0 0 0,clip]{Fig11_unstable}
\caption{Unstable Network Behavior without Price Control from BS} \label{unstable}
\end{figure}
\section{Conclusions and Future Work}\label{Summary}
We study a group of D2D links which share a dedicated inband overlay channel via CSMA. The ICN model is leveraged to analyze their behaviors and interactions under spatial reuse. We further assume that the D2D links have heterogeneous rate requirements and different willingness to pay, and that they act non-altruistically to achieve their target rates and maximize their own payoffs. To manage such non-cooperative user dynamics, we propose a Stackelberg game in which the BS in the cellular network acts as a Stackelberg leader to regulate the D2D link transmissions by adjusting the service price, so that the total throughput is maximized while the heterogeneous target rates of the D2D links are all satisfied. The problem is shown to be quasi-convex and can be solved by a sequence of equivalent convex optimization problems. The pricing strategies are designed so that the network always operates within the capacity region. The results are verified by simulations.
The joint optimization of D2D link scheduling and cellular data off-loading is our future work.
\section{Introduction}
\begin{pg} Let $k$ be an algebraically closed field of odd
characteristic and let $X$ and $Y$ be K3 surfaces over $k$. Let $$
\Phi :D(X)\rightarrow D(Y) $$ be an equivalence between their bounded
triangulated categories of coherent sheaves given by a Fourier-Mukai kernel
$P\in D(X\times Y)$, so $\Phi $ is the functor given by sending $M\in D(X)$
to $$ R\text{pr}_{2*}(L\text{pr}_1^*M\otimes ^{\mathbb{L}}P). $$ As
discussed in \cite[2.9]{LO} the kernel $P$ also induces an isomorphism on
rational Chow groups modulo numerical equivalence $$ \Phi
_P^{A^*}:A^*(X)_{\text{num}, \Q}\rightarrow A^*(Y)_{\text{num}, \Q}. $$
We can consider how a given equivalence $\Phi$ interacts with the codimension
filtration on $A^\ast$, or how it acts on the ample cone of $X$ inside $A^1(X)$.
The underlying philosophy of this work is that tracking filtrations and ample
cones (in ways we will make precise in Section \ref{S:stronglyfiltered}) gives a
semi-linear algebraic gadget that behaves a lot like a Hodge structure. In
Section \ref{S:stronglyfiltered} we will define a notion of \emph{strongly
filtered} for an equivalence $\Phi$ that imposes conditions reminiscent of the
classical Torelli theorem for K3 surfaces.
With this in mind, the purpose of this paper is to prove the following result.
\end{pg}
\begin{thm}\label{T:1.2} If $\Phi _P:D(X)\rightarrow D(Y)$ is a strongly
filtered equivalence then there exists an isomorphism $\sigma :X\rightarrow
Y$ such that the maps on the crystalline and \'etale realizations of the
Mukai motive induced by $\Phi _P$ and $\sigma $ agree.
\end{thm}
For the definition of the realizations of the Mukai motive see
\cite[\S 2]{LO}. In
\cite[Proof of 6.2]{LO} it is shown that any filtered equivalence can be modified to be
strongly filtered. As a consequence, we get a new proof of the following
result.
\begin{thm}[{\cite[6.1]{LO}}]\label{T:1.1} If $\Phi _P^{A^*}$ preserves the
codimension filtrations on $A^*(X)_{\text{\rm num}, \Q}$ and
$A^*(Y)_{\text{\rm num}, \Q}$ then $X$ and $Y$ are isomorphic.
\end{thm}
Whereas the original proof of Theorem \ref{T:1.1} relied heavily on liftings to
characteristic $0$ and Hodge theory, the proof presented here works primarily in
positive characteristic using algebraic methods.
In Section \ref{S:section8} we present a proof of Theorem \ref{T:1.2} using certain
results about ``Kulikov models'' in positive characteristic (see Section
\ref{S:section4}). This argument implicitly uses Hodge theory which is an
ingredient in the proof of Theorem \ref{T:2.2v1}. In Section \ref{S:section9} we discuss
a characteristic $0$ variant of Theorem \ref{T:1.2}, and finally in the last
section, Section \ref{S:section10}, we explain how to bypass the use of the Hodge theory
ingredient of Theorem \ref{T:2.2v1}. This makes the argument entirely algebraic, except
for the Hodge theory aspects of the proof of the Tate conjecture. This also
gives a different algebraic perspective on the statement that any Fourier-Mukai
partner of a K3 surface is a moduli space of sheaves, essentially inverting
the methods of \cite{LO}.
The bulk of this paper is devoted to proving Theorem \ref{T:1.2}. The basic idea is to
consider a certain moduli stack $\mathscr S_d$ classifying data $((X, \lambda ), Y,
P)$ consisting of a primitively polarized K3 surface $(X, \lambda )$ with
polarization of some degree $d$, a second K3 surface $Y$, and a complex $P\in
D(X\times Y)$ defining a strongly filtered Fourier-Mukai equivalence $\Phi
_P:D(X)\rightarrow D(Y)$. The precise definition is given in Section \ref{S:3},
where it is shown that $\mathscr S_d$ is an algebraic stack which is naturally a
$\mathbb{G}_m$-gerbe over a Deligne-Mumford stack $\overline {\mathscr S}_d$ \'etale
over the stack $\mathscr M_d$ classifying primitively polarized K3 surfaces of
degree $d$. The map $\overline {\mathscr S}_d\rightarrow \mathscr M_d$ is induced by the
map sending a collection $((X, \lambda ), Y, P)$ to $(X, \lambda )$. We then
study the locus of points in $\mathscr S_d$ where Theorem \ref{T:1.2} holds showing that it
is stable under both generization and specialization. From this it follows that
it suffices to consider the case when $X$ and $Y$ are supersingular where we can
use Ogus' crystalline Torelli theorem \cite[Theorem I]{Ogus2}.
\begin{rem} Our restriction to odd characteristic is because we appeal to the
Tate conjecture for K3 surfaces, proven in odd characteristics by Charles,
Maulik, and Pera \cite{Ch, Maulik, Pera}, which at present is not known in
characteristic $2$.
\end{rem}
\begin{pg} (Acknowledgements) Lieblich was partially supported by NSF CAREER Grant
DMS-1056129 and Olsson was partially supported by NSF grant DMS-1303173 and a grant
from The Simons Foundation. Olsson is grateful to F. Charles for inspiring
conversations at the Simons Symposium ``Geometry Over Nonclosed Fields'' which
led to results of this paper. We also thank E. Macr\`\i\, D. Maulik, and K. Pera for
useful correspondence.
\end{pg}
\section{Strongly filtered equivalences}\label{S:stronglyfiltered}
\begin{pg} Let $X$ and $Y$ be K3 surfaces over an algebraically closed field
$k$ and let $P\in D(X\times Y)$ be an object defining an equivalence $$ \Phi
_P:D(X)\rightarrow D(Y), $$ and let $$ \Phi ^{A^*_{\text{num},
\Q}}_P:A^*(X)_{\text{num}, \Q}\rightarrow A^*(Y)_{\text{num}, \Q} $$ denote the
induced map on Chow groups modulo numerical equivalence and tensored with $\Q$.
We say that $\Phi _P$ is \emph{filtered} (resp.\ \emph{strongly filtered},
resp.\ \emph{Torelli}) if $\Phi ^{A^*_{\text{num}, \Q}}_P$ preserves the
codimension filtration (resp.~is filtered, sends $(1, 0, 0)$ to $(1, 0, 0)$,
and sends the ample cone of $X$ to plus or minus the ample cone of $Y$;
resp.~is filtered, sends $(1, 0, 0)$ to $\pm (1, 0, 0)$, and sends the ample
cone of $X$ to the ample cone of $Y$). \end{pg}
\begin{rem}
Note that if $P$ is strongly filtered then either $P$ or $P[1]$ is Torelli. If
$P$ is Torelli then either $P$ or $P[1]$ is strongly filtered.
\end{rem}
\begin{rem} Note that $A^1(X)$ is the orthogonal complement of
$A^0(X)\oplus A^2(X)$ and similarly for $Y$. This implies that if $\Phi _P$
is filtered and sends $(1, 0, 0)$ to $\pm (1, 0, 0)$ then $\Phi
_P(A^1(X)_{\text{num}, \Q})\subset A^1(Y)_{\text{num}, \Q}$.
\end{rem}
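For the reader's convenience, we recall the shape of the Mukai pairing underlying this remark (with the standard sign conventions; see \cite[\S 2]{LO}): for Mukai vectors $(a, b, c)$ and $(a', b', c')$ in $A^0\oplus A^1\oplus A^2$ one has
$$
\langle (a, b, c), (a', b', c')\rangle = b\cdot b' - ac' - a'c,
$$
so that $A^1$ is visibly orthogonal to $A^0\oplus A^2$.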
\begin{rem} It is shown in \cite[6.2]{LO} that if $\Phi _P:D(X)\rightarrow
D(Y)$ is a filtered equivalence, then there exists a strongly filtered
equivalence $\Phi :D(X)\rightarrow D(Y)$. In fact it is shown there that
$\Phi $ can be obtained from $\Phi _P$ by composing with a sequence of
shifts, twists by line bundles, and spherical twists along $(-2)$-curves.
\end{rem}
\begin{pg} As noted in \cite[2.11]{LO} an equivalence $\Phi _P$ is filtered if
and only if the induced map on Chow groups $$ \Phi ^{A^*_{\text{num},
\Q}}_P:A^*(X)_{\text{num}, \Q}\rightarrow A^*(Y)_{\text{num}, \Q} $$ sends
$A^2(X)_{\text{num}, \Q}$ to $A^2(Y)_{\text{num}, \Q}$.
\end{pg}
\begin{lem}\label{L:2.5b} Let $\ell $ be a prime invertible in $k$, let
$\widetilde H(X, \Q_\ell )$ (resp.~$\widetilde H(Y, \Q_\ell )$) denote the
$\Q_\ell $-realization of the Mukai motive of $X$ (resp.~$Y$) as defined in
\cite[2.4]{LO}, and let $$ \Phi _P^{\et }:\widetilde H(X, \Q_\ell
)\rightarrow \widetilde H(Y, \Q_\ell ) $$ denote the isomorphism defined by
$P$. Then $\Phi _P$ is filtered if and only if $\Phi _P^\et $ preserves the
filtrations by degree on $\widetilde H(X, \Q_\ell )$ and $\widetilde H(Y,
\Q_\ell )$.
\end{lem}
\begin{proof} By the same reasoning as in
\cite[2.4]{LO} the map $\Phi _P^{\et }$ is filtered if and only if $$\Phi
_P^\et (H^4(X, \Q_\ell )) = H^4(Y, \Q_\ell ).$$ Since the cycle class maps $$
A^2(X)_{\text{num}, \Q}\otimes _{\Q}\Q_\ell \rightarrow H^4(X, \Q_\ell ), \ \
A^2(Y)_{\text{num}, \Q}\otimes _{\Q}\Q_\ell \rightarrow H^4(Y, \Q_\ell ) $$
are isomorphisms and the maps $\Phi _P$ and $\Phi _P^\et $ are compatible in
the sense of \cite[2.10]{LO} it follows that if $\Phi _P$ is filtered then so
is $\Phi _P^\et $. Conversely if $\Phi _P^\et $ is filtered then since the
cycle class maps $$ A^*(X)_{\text{num}, \Q}\rightarrow \widetilde H(X,
\Q_\ell ), \ \ A^*(Y)_{\text{num}, \Q}\rightarrow \widetilde H(Y, \Q_\ell )
$$ are injective it follows that $\Phi _P$ is also filtered.
\end{proof}
\begin{rem} The same proof as in Lemma \ref{L:2.5b} gives variant results for
crystalline cohomology and in characteristic $0$ de Rham cohomology.
\end{rem}
The condition that $\Phi _P$ takes the ample cone to plus or minus the ample
cone appears more subtle. A useful observation in this regard is the
following.
\begin{lem}\label{L:2.7} Let $P\in D(X\times Y)$ be an object
defining a filtered equivalence $\Phi _P:D(X)\rightarrow D(Y)$ such that
$\Phi _P^{A^*_{\text{\rm num}}}$ sends $(1, 0, 0)$ to $(1, 0, 0)$. Then
$\Phi _P$ is strongly filtered if and only if for some ample invertible sheaf
$L$ on $X$ the class $\Phi _P^{A^*_{\text{\rm num}}}(L)\in NS(Y)_\Q$ is plus
or minus an ample class.
\end{lem}
\begin{proof} Following \cite[p.
366]{Ogus2} define $$ V_X:= \{x\in NS(X)_\mathbb R|x^2>0, \ \text{and $\langle x,
\delta \rangle \neq 0$ for all $\delta \in NS(X)$ with $\delta ^2 = -2$}\},
$$ and define $V_Y$ similarly. Since $\Phi _P^{A^*_{\text{\rm num}}}$ is an
isometry it induces an isomorphism $$ \sigma :V_X\rightarrow V_Y. $$ By
\cite[Proposition 1.10 and Remark 1.10.9]{Ogus2} the ample cone $C_X$ (resp.
$C_Y$) of $X$ (resp. $Y$) is a connected component of $V_X$ (resp. $V_Y$) and
therefore either $\sigma (C_X)\cap C_Y = \emptyset $ or $\sigma (C_X) = C_Y$,
and similarly $\sigma (-C_X) \cap C_Y = \emptyset$ or $\sigma (-C_X) = C_Y$. Since $C_X$ is a connected component of $V_X$, its image $\sigma (C_X)$ is a connected component of $V_Y$; thus if $\Phi _P^{A^*_{\text{\rm num}}}(L)$ lies in $C_Y$ (resp. $-C_Y$) for a single ample class $L$, then $\sigma (C_X) = C_Y$ (resp. $\sigma (C_X) = -C_Y$), which proves the lemma.
\end{proof}
\begin{prop}\label{P:2.8} Let $X$ and $Y$ be K3-surfaces over a scheme $S$
and let $P\in D(X\times _SY)$ be a relatively perfect complex. Assume that
$X/S$ is projective. Then the set of points $s\in S$ for which the induced
transformation on the derived category of the geometric fibers $$ \Phi
_{P_{\bar s}}:D(X_{\bar s})\rightarrow D(Y_{\bar s}) $$ is a strongly
filtered equivalence is an open subset of $S$.
\end{prop}
\begin{proof} By a
standard reduction we may assume that $S$ is of finite type over $\Z$.
First note that the condition that $\Phi _{P_{\bar s}}$ is an equivalence is an
open condition. Indeed as described in \cite[discussion preceding 3.3]{LO}
there exists a morphism of $S$-perfect complexes $\epsilon :P_1\rightarrow P_2$
in $D(X\times _SY)$ such that $\Phi _{P_{\bar s}}$ is an equivalence if and
only if $\epsilon _{\bar s}:P_{1, \bar s}\rightarrow P_{2, \bar s}$ is an
isomorphism in $D(X_{\bar s}\times Y_{\bar s})$ (in loc. cit. we considered two
maps of perfect complexes but one can just take the direct sum of these to get
$\epsilon $). Let $Q$ be the cone of $\epsilon $, and let $Z\subset X\times
_SY$ be the support of the cohomology sheaves of $Q$. Then the image of $Z$ in
$S$ is closed and the complement of this image is the maximal open set over
which the fiber transformations $\Phi _{P_{\bar s}}$ are equivalences.
Replacing $S$ by an open set we may therefore assume that $\Phi _{P_{\bar s}}$
is an equivalence in every fiber.
Next we show that the condition that $\Phi _P$ is filtered is an open and
closed condition. For this we may assume we have a prime $\ell $ invertible
in $S$. Let $f_X:X\rightarrow S$ (resp. $f_Y:Y\rightarrow S$) be the structure
morphism. Define $\widetilde {\mathscr H}_{X/S}$ to be the lisse $\Q_\ell $-sheaf
on $S$ given by $$ \widetilde {\mathscr H}_{X/S}:= (R^0f_{X*}\Q_\ell (-1))\oplus
(R^2f_{X*}\Q_\ell )\oplus (R^4f_{X*}\Q_\ell )(1), $$ and define $\widetilde
{\mathscr H}_{Y/S}$ similarly. The kernel $P$ then induces a morphism of lisse
sheaves $$ \Phi _{P/S}^{\et, \ell }:\widetilde {\mathscr H}_{X/S}\rightarrow
\widetilde {\mathscr H}_{Y/S} $$ whose restriction to each geometric fiber is the
map on the $\Q_\ell $-realization of the Mukai motive as in \cite[2.4]{LO}. In
particular, $\Phi _{P/S}^{\et, \ell }$ is an isomorphism. By Lemma \ref{L:2.5b} for
every geometric point $\bar s\rightarrow S$ the map $\Phi _{P_{\bar s}}$ is
filtered if and only if the stalk $\Phi _{P/S, \bar s}^{\et, \ell }$ preserves
the filtrations on $\widetilde {\mathscr H}_{X/S}$ and $\widetilde {\mathscr H}_{Y/S}$.
In particular this is an open and closed condition on $S$. Shrinking on $S$ if
necessary we may therefore further assume that $\Phi _{P_{\bar s}}$ is filtered
for every geometric point $\bar s\rightarrow S$.
It remains to show that in this case the set of points $s$ for which $\Phi _P$
takes the ample cone $C_{X_{\bar s}}$ of $X_{\bar s}$ to $\pm C_{Y_{\bar s}}$
is an open subset of $S$. For this we can choose, by our assumption that $X/S$
is projective, a relatively ample invertible sheaf $L$ on $X$. Define $$ M:=
\text{det}(R\text{pr}_{2*}(L\text{pr}_1^*(L)\otimes P)), $$ an invertible sheaf
on $Y$. Then by Lemma \ref{L:2.7} for a point $s\in S$ the transformation $\Phi
_{P_{\bar s}}$ is strongly filtered if and only if the restriction of $M$ to
the fiber $Y_{\bar s}$ is plus or minus the class of an ample divisor. By
openness of the ample locus \cite[III, 4.7.1]{EGA} we get that being strongly
filtered is an open condition.
\end{proof}
\begin{prop}\label{P:2.10}
Let $P\in D(X\times Y)$ be a complex such that the induced transformation
$$
\Phi _P^{A^*_{\text{\rm num}, \Q}}:A^*(X)_{\text{\rm num}, \Q}\rightarrow A^*(Y)_{\text{\rm num}, \Q}
$$
preserves the codimension filtration, takes $(1, 0, 0)$ to $(1, 0, 0)$, and takes the ample cone of $X$ to plus or minus the ample cone of $Y$ (so $P$ does not necessarily define an equivalence but otherwise behaves like a strongly filtered Fourier-Mukai equivalence).
Suppose there exists an equivalence $\Phi _Q:D(X)\rightarrow D(Y)$ which is
Torelli, and such that the induced map $NS(X)\rightarrow NS(Y)$ agrees with the
map defined by $\pm \Phi _P$. Then $\Phi _P^{A^*_{\text{\rm num}, \Q}}$
preserves the ample cones.
\end{prop}
\begin{proof}
Suppose that $\Phi _P$ takes the ample cone of $X$ to the negative of the ample
cone of $Y$. Consider the auto-equivalence $\Phi := \Phi _Q^{-1}\circ \Phi
_{P[1]}$ of $D(X)$. The induced automorphism
$$
\Phi ^{A^*_{\text{num}, \Q}}:A^*(X)_{\text{num} , \Q}\rightarrow
A^*(X)_{\text{num}, \Q}
$$
then preserves the codimension filtration, Mukai pairing, and is the identity
on $NS(X)_{\text{num} , \Q}$ and multiplication by $-1$ on $A^0(X)_{\text{num},
\Q}$ and $A^2(X)_{\text{num}, \Q}$. By the compatibility of $\Phi $ with the
Mukai pairing this implies that for any $H\in NS(X)$ we have
$$
-H^2 = \Phi \langle (0, H, 0), (0, H, 0)\rangle = \langle (0, H, 0), (0, H,
0)\rangle = H^2,
$$
which is a contradiction. Thus $\Phi _{P[1]}$ must take $(0, 0, 1)$ to $(0, 0, 1)$
which implies that $\Phi _{P[1]}$ takes $(1, 0,0)$ to $(1, 0, 0)$, a contradiction.
\end{proof}
\section{Moduli spaces of K3 surfaces}\label{S:3}
\begin{pg} For an integer $d$ invertible in $k$ let $\mathscr M_{d}$ denote the
stack over $k$ whose fiber over a scheme $T$ is the groupoid of pairs $(X,
\lambda )$ where $X/T$ is a proper smooth algebraic space all of whose
geometric fibers are K3 surfaces and $\lambda :T\rightarrow
\text{Pic}_{X/T}$ is a morphism to the relative Picard functor such that in
every geometric fiber $\lambda $ is given by a primitive ample line bundle
$L_\lambda $ whose self-intersection is $2d$. The following theorem
summarizes the properties of the stack $\mathscr M_d$ that we will need.
\end{pg}
\begin{thm}\label{T:2.2} (i) $\mathscr M_d$ is a Deligne-Mumford stack, smooth over
$k$ of relative dimension $19$.
(ii) If $p\geq 3$ and $p^2\nmid d$ then the geometric fiber of $\mathscr M_d$ is
irreducible.
(iii) The locus $\mathscr M_{d, \infty }\subset \mathscr M_d$ classifying supersingular
K3 surfaces is closed of dimension $\geq 9$.
\end{thm}
\begin{proof} A
review of (i) and (iii) can be found in \cite[p.\ 1]{Ogus}. Statement (ii) can
be found in \cite[2.10 (3)]{Liedtke}.
\end{proof}
\begin{rem} The stack $\mathscr M_d$ is defined over $\Z$, and it follows from (ii)
that the geometric generic fiber of $\mathscr M_d$ is irreducible (this follows
also from the Torelli theorem over $\mathbb{C}$ and the resulting description of
$\mathscr M_{d, \mathbb{C}}$ as a period space). Furthermore over $\Z[1/d]$ the stack
$\mathscr M_d$ is smooth. In what follows we denote this stack over $\Z[1/d]$ by
$\mathscr M_{d, \Z[1/d]}$ and reserve the notation $\mathscr M_d$ for its reduction to
$k$.
\end{rem}
\begin{rem} Note that in the definition of $\mathscr M_d$ we consider ample
invertible sheaves, and don't allow contractions in the corresponding morphism
to projective space.
\end{rem}
\begin{pg} Let $\mathscr S_d$ denote the fibered category over $k$ whose fiber
over a scheme $S$ is the groupoid of collections of data
\begin{equation}\label{E:theobject} ((X, \lambda ), Y, P), \end{equation}
where $(X, \lambda )\in \mathscr M_{d}(S)$ is a polarized K3 surface, $Y/S$ is
a second K3 surface over $S$, and $P\in D(X\times _SY)$ is an $S$-perfect
complex such that for every geometric point $\bar s\rightarrow S$ the induced
functor $$ \Phi ^{P_{\bar s}}:D(X_{\bar s})\rightarrow D(Y_{\bar s}) $$ is
strongly filtered.
\end{pg}
\begin{thm}\label{T:4.5} The fibered category $\mathscr S_d$ is an algebraic stack
locally of finite type over $k$.
\end{thm}
\begin{proof} By fppf descent
for algebraic spaces we have descent for both polarized and unpolarized K3
surfaces.
To verify descent for the kernels $P$, consider an object \eqref{E:theobject}
over a scheme $S$. Let $P^\vee $ denote $\mathscr RHom (P, \mathscr O_{X\times _SY})$. Since $P$
is a perfect complex we have $\mathscr RHom (P, P)\simeq P^\vee \otimes P$. By
\cite[2.1.10]{L} it suffices to show that for all geometric points $\bar
s\rightarrow S$ we have $H^i(X_{\bar s}\times Y_{\bar s}, P^\vee _{\bar
s}\otimes P_{\bar s}) = 0$ for $i<0$. This follows from the following
result (we discuss Hochschild cohomology further in Section \ref{S:section4b} below):
\begin{lem}[{\cite[5.6]{Toda}, \cite[5.1.8]{Grigg}}]\label{L:2.5} Let $X$ and
$Y$ be K3 surfaces over an algebraically closed field $k$, and let $P\in
D(X\times Y)$ be a complex defining a Fourier-Mukai equivalence $\Phi
_P:D(X)\rightarrow D(Y)$. Denote by $HH^*(X)$ the Hochschild cohomology of
$X$ defined as $$ \text{\rm RHom}_{X\times X}(\Delta _*\mathscr O_X, \Delta
_*\mathscr O_X). $$
(i) There is a canonical isomorphism $\text{\rm Ext}^*_{X\times Y}(P, P)\simeq
HH^*(X)$.
(ii) $\text{\rm Ext}^i_{X\times Y}(P, P) = 0$ for $i<0$ and $i=1$.
(iii) The natural map $k\rightarrow \text{\rm Ext}^0_{X\times Y}(P, P)$ is an
isomorphism.
\end{lem}
\begin{proof} Statement (i) is \cite[5.6]{Toda}.
Statements (ii) and (iii) follow immediately from this, since for a K3 surface
$HH^1(X) = 0$ and the natural map $k\rightarrow HH^0(X)$ is an isomorphism.
\end{proof}
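For the reader's convenience we recall the computation behind these vanishing
statements (it is standard, and implicit in the references above). Granting the
Hochschild-Kostant-Rosenberg decomposition $HH^n(X)\simeq \oplus
_{p+q=n}H^p(X, \bigwedge ^qT_X)$, for a K3 surface $X$ we have
$$
HH^0(X) = H^0(X, \mathscr O_X) = k, \ \ HH^1(X) = H^0(X, T_X)\oplus H^1(X,
\mathscr O_X) = 0,
$$
since a K3 surface has no nonzero global vector fields and satisfies $H^1(X,
\mathscr O_X) = 0$.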
Next we show that for an object \eqref{E:theobject} the polarization $\lambda $
on $X$ induces a polarization $\lambda _Y$ on $Y$. To define $\lambda _Y$ we
may work \'etale locally on $S$, so we may assume there exists an ample invertible
sheaf $L$ on $X$ defining $\lambda $. The complex $$ \Phi _P(L):=
R\text{pr}_{2*}(\text{pr}_1^*L\otimes ^{\mathbb{L}}P) $$ is $S$-perfect, and
therefore a perfect complex on $Y$. Let $M$ denote the determinant of $\Phi
_P(L)$, so $M$ is an invertible sheaf on $Y$. By our assumption that $\Phi
^{P_s}$ is strongly filtered for all $s\in S$, the restriction of $M$ to any
fiber is either ample or antiample. It follows that either $M$ or $M^\vee $ is
a relatively ample invertible sheaf and we define $\lambda _Y$ to be the
resulting polarization on $Y$. Note that this does not depend on the choice of
line bundle $L$ representing $\lambda $ and therefore by descent $\lambda _Y$
is defined even when no such $L$ exists.
The degree of $\lambda _Y$ is equal to $d$. Indeed if $s\in S$ is a point then
since $\Phi ^{P_s}$ is strongly filtered the induced map $NS(X_{\bar
s})\rightarrow NS(Y_{\bar s})$ is compatible with the intersection pairings and
therefore $\lambda _Y^2 = \lambda ^2 = 2d$.
From this we deduce that $\mathscr S_d$ is algebraic as follows. We have a
morphism \begin{equation}\label{E:2.4.1} \mathscr S_d\rightarrow \mathscr M_d\times
\mathscr M_d, \ \ ((X, \lambda ), Y, P)\mapsto ((X, \lambda ), (Y, \lambda _Y)),
\end{equation} and $\mathscr M_d\times \mathscr M_d$ is an algebraic stack. Let $\mathscr
X$ (resp. $\mathscr Y$) denote the pullback to $\mathscr M_d\times \mathscr M_d$ of the
universal family over the first factor (resp. second factor). Sending a
triple $((X, \lambda ), Y, P)$ to $P$ then realizes $\mathscr S_d$ as an open
substack of the stack over $\mathscr M_d\times \mathscr M_d$ of simple universally
gluable complexes on $\mathscr X\times _{\mathscr M_d\times \mathscr M_d}\mathscr Y$ (see for
example \cite[\S 5]{LO}).
\end{proof}
\begin{pg} Observe that for any object $((X, \lambda ), Y, P)\in \mathscr S_d$ over
a scheme $S$ there is an inclusion $$ \mathbb{G}_m\hookrightarrow \underline
{\text{Aut}}_{\mathscr S_d}((X, \lambda ), Y, P) $$ given by scalar
multiplication on $P$. We can therefore form the rigidification of $\mathscr
S_d$ with respect to $\mathbb{G}_m$ (see for example \cite[\S 5]{ACV}) to get
a morphism $$ g:\mathscr S_d\rightarrow \overline {\mathscr S}_d $$ realizing $\mathscr
S_d$ as a $\mathbb{G}_m$-gerbe over another algebraic stack $\overline {\mathscr
S}_d$. By the universal property of rigidification the map $\mathscr
S_d\rightarrow \mathscr M_d$ sending $((X, \lambda ), Y, P)$ to $(X, \lambda )$
induces a morphism \begin{equation}\label{E:etalemap} \pi :\overline {\mathscr
S}_d\rightarrow \mathscr M_d. \end{equation}
\end{pg}
\begin{thm}\label{T:4.8} The stack $\overline {\mathscr S}_d$ is Deligne-Mumford
and the map \eqref{E:etalemap} is \'etale.
\end{thm}
\begin{proof} Consider
the map \eqref{E:2.4.1}. By the universal property of rigidification this
induces a morphism $$ q:\overline {\mathscr S}_d\rightarrow \mathscr M_d\times \mathscr
M_d. $$ Since $\mathscr M_d\times \mathscr M_d$ is Deligne-Mumford, to prove that
$\overline {\mathscr S}_d$ is a Deligne-Mumford stack it suffices to show that
$q$ is representable. This follows from Lemma \ref{L:2.5} (iii) which implies that
for any object $((X, \lambda ), Y, P)$ over a scheme $S$ the automorphisms of
this object which map under $q$ to the identity are given by scalar
multiplication on $P$ by elements of $\mathscr O_S^*$.
It remains to show that the map \eqref{E:etalemap} is \'etale, and for this it
suffices to show that it is formally \'etale.
Let $A\rightarrow A_0$ be a surjective map of artinian local rings with kernel
$I$ annihilated by the maximal ideal of $A$, and let $k$ denote the residue
field of $A_0$ so $I$ can be viewed as a $k$-vector space. Let $((X_0, \lambda
_0), Y_0, P_0)\in \mathscr S_d(A_0)$ be an object and let $(X, \lambda )\in \mathscr
M_d(A)$ be a lifting of $(X_0, \lambda _0)$ so we have a commutative diagram of
solid arrows $$ \xymatrix{ \text{\rm Spec} (A_0)\ar@{^{(}->}[dd]^-i\ar[r]^-{x_0}& \mathscr
S_d\ar[d]\\ & \overline {\mathscr S}_d\ar[d]\\ \text{\rm Spec} (A)\ar@{-->}[ru]^-{\bar
x}\ar@{-->}[ruu]^-x\ar[r]^-{y}& \mathscr M_d.} $$ Since $\mathscr S_d$ is a
$\mathbb{G}_m$-gerbe over $\overline {\mathscr S}_d$, the obstruction to lifting a
map $\bar x$ as indicated to a morphism $x$ is given by a class in $H^2(\text{\rm Spec}
(A), \widetilde I) = 0$, and therefore any such map $\bar x$ can be lifted to a
map $x$. Furthermore, the set of isomorphism classes of such liftings $x$ of
$\bar x$ is given by $H^1(\text{\rm Spec} (A), \widetilde I) = 0$ so in fact the lifting
$x$ is unique up to isomorphism. The isomorphism is not unique but determined
up to the action of $$ \text{Ker}(A^*\rightarrow A_0^*) \simeq I. $$ From this
it follows that it suffices to show the following:
\begin{enumerate}
\item [(i)] The lifting $(X, \lambda )$ of $(X_0, \lambda _0)$ can be extended
to a lifting $((X, \lambda ), Y, P)$ of $((X_0, \lambda _0), Y_0, P_0)$.
\item [(ii)] This extension $((X, \lambda ), Y, P)$ of $(X, \lambda )$ is
unique up to isomorphism. \item [(iii)] The automorphisms of the triple
$((X, \lambda ), Y, P)$ which are the identity on $(X, \lambda )$ and
reduce to the identity over $A_0$ are all given by scalar multiplication
on $P$ by elements of $1+I\subset A^*$.
\end{enumerate}
Statement (i) is
shown in \cite[6.3]{LO}.
Next we prove the uniqueness statements in (ii) and (iii). Following the
notation of \cite[Discussion preceding 5.2]{LO}, let $s\mathscr D_{X/A}$ denote the
stack of simple, universally gluable, relatively perfect complexes on $X$, and
let $sD_{X/A}$ denote its rigidification with respect to the
$\mathbb{G}_m$-action given by scalar multiplication. The complex $P_0$ on
$X_0\times _{A_0}Y_0$ defines a morphism $$ Y_0\rightarrow sD_{X/A}\otimes
_AA_0 $$ which by \cite[5.2 (ii)]{LO} is an open imbedding. Any extension of
$(X, \lambda )$ to a lifting $((X, \lambda ), Y, P)$ defines an open imbedding
$Y\hookrightarrow sD_{X/A}$. This implies that $Y$, viewed as a deformation of
$Y_0$ for which there exists a lifting $P$ of $P_0$ to $X\times _AY$, is unique
up to unique isomorphism.
Let $Y$ denote the unique lifting of $Y_0$ to an open subspace of $sD_{X/A}$.
By \cite[3.1.1 (2)]{L} the set of isomorphism classes of liftings of $P_0$ to
$X\times _AY$ is a torsor under $$ \text{Ext}^1_{X_k\times Y_k}(P_k, P_k)\otimes I, $$
which is $0$ by Lemma \ref{L:2.5} (ii). From this it follows that $P$ is unique up
to isomorphism, and also by Lemma \ref{L:2.5} (iii) we get the statement that the
only infinitesimal automorphisms of the triple $((X, \lambda ), Y, P)$ are
given by scalar multiplication by elements of $1+I$.
\end{proof}
\begin{pg} There is an automorphism $$ \sigma :\mathscr S_d\rightarrow \mathscr S_d $$
satisfying $\sigma ^2 = \text{id}$. This automorphism is defined by sending
a triple $((X, \lambda ), Y, P)$ to $((Y, \lambda _Y), X, P^\vee [2])$.
This automorphism induces an involution $\bar \sigma :\overline {\mathscr
S}_d\rightarrow \overline {\mathscr S}_d$ over the involution $\gamma :\mathscr
M_d\times \mathscr M_d\rightarrow \mathscr M_d\times \mathscr M_d$ switching the factors.
\end{pg}
\begin{rem}\label{R:4.10} In fact the stack $\mathscr S_d$ is defined over
$\Z[1/d]$ and Theorems \ref{T:4.5} and \ref{T:4.8} also hold over $\Z[1/d]$.
In what follows we write $\mathscr S_{d, \Z[1/d]}$ for this stack over $\Z[1/d]$.
\end{rem}
\section{Deformations of autoequivalences}\label{S:section4b}
In this section, we describe the obstructions to deforming
Fourier-Mukai equivalences. The requisite technical machinery for this is worked
out in \cite{HMS} and \cite{HT}. The results of this section will play a crucial
role in Section \ref{sec:supers-reduct}.
Throughout this section let $k$ be a perfect field of positive characteristic
$p$ with ring of Witt vectors $W$. For an integer $n$ let $R_n$ denote the ring
$k[t]/(t^{n+1})$, and let $R$ denote the ring $k[[t]]$.
\begin{pg} Let $X_{n+1}/R_{n+1}$ be a smooth proper scheme over $R_{n+1}$ with
reduction $X_n$ to $R_n$. We then have the associated \emph{relative
Kodaira-Spencer class}, defined in
\cite[p. 486]{HMS}, which is the morphism in $D(X_n)$
$$
\kappa _{X_n/X_{n+1}}:\Omega ^1_{X_n/R_n}\rightarrow \mathscr O_{X_n}[1]
$$
defined as the morphism corresponding to the short exact sequence
$$
\xymatrix{ 0\ar[r]& \mathscr O_{X_n}\ar[r]^-{\cdot dt}& \Omega
^1_{X_{n+1}/k}|_{X_n}\ar[r]& \Omega ^1_{X_n/R_n}\ar[r]& 0.}
$$
\end{pg}
\begin{pg} We also have the \emph{relative universal Atiyah class} which is a
morphism
$$
\alpha _n:\mathscr O_{\Delta _n}\rightarrow i_{n*}\Omega ^1_{X_n/R_n}[1]
$$
in $D(X_n\times _{R_n}X_n)$, where $i_n:X_n\rightarrow X_n\times
_{R_n}X_n$ is the diagonal morphism and $\mathscr O_{\Delta _n}$ denotes $i_{n*}\mathscr
O_{X_n}$.
This map $\alpha _n$ is given by the class of the short exact sequence
$$
0\rightarrow I/I^2\rightarrow \mathscr O_{X_n\times _{R_n}X_n}/I^2\rightarrow \mathscr
O_{\Delta _n}\rightarrow 0,
$$
where $I\subset \mathscr O_{X_n\times _{R_n}X_n}$ is the ideal of the diagonal.
Note that to get the morphism $\alpha _n$ we need to make a choice of
isomorphism $I/I^2\simeq \Omega ^1_{X_n/R_n}$, which implies that the relative
universal Atiyah class is not invariant under the map switching the factors, but
rather changes by $-1$.
\end{pg}
\begin{pg} Define the \emph{relative Hochschild cohomology} of $X_n/R_n$ by
$$
HH^*(X_n/R_n):= \text{Ext}^*_{X_n\times _{R_n}X_n}(\mathscr O_{\Delta _n}, \mathscr
O_{\Delta _n}).
$$
The composition
$$
\xymatrix{ \mathscr O_{\Delta _n}\ar[r]^-{\alpha _n}& i_{n*}\Omega
^1_{X_n/R_n}[1]\ar[rr]^-{i_{n*}\kappa _{X_n/X_{n+1}}}& &\mathscr O_{\Delta _n}[2]}
$$
is a class
$$
\nu _{X_n/X_{n+1}}\in HH^2(X_n/R_n).
$$
\end{pg}
\begin{pg}\label{P:4.4b} Suppose now that $Y_{n}/R_n$ is a second smooth proper
scheme with a smooth lifting $Y_{n+1}/R_{n+1}$ and that $E_n\in D(X_n\times
_{R_n}Y_n)$ is a $R_n$-perfect complex.
Consider the class
$$
\nu := \nu _{X_n\times _{R_n}Y_n/X_{n+1}\times _{R_{n+1}}Y_{n+1}}:\mathscr
O_{\Delta _{n, X_n\times _{R_n}Y_n}}\rightarrow \mathscr O_{\Delta _{n, X_n\times
_{R_n}Y_n}}[2].
$$
Viewing this as a morphism of Fourier-Mukai kernels
$$
D(X_n\times _{R_n}Y_n)\rightarrow D(X_n\times _{R_n}Y_n)
$$
and applying it to $E_n$ we get a class
$$
\omega (E_n)\in \text{Ext}^2_{X_n\times _{R_n}Y_n}(E_n, E_n).
$$
In the case when
$$
\text{Ext}^1_{X_0\times Y_0}(E_0, E_0) = 0,
$$
which will hold in the cases of interest in this paper, we know by \cite[Lemma
3.2]{HMS} that the class $\omega (E_n)$ is $0$ if and only if $E_n$ lifts to a
perfect complex on $X_{n+1}\times _{R_{n+1}}Y_{n+1}$.
\end{pg}
\begin{pg} To analyze the class $\omega (E_n)$ it is useful to translate it into
a statement about classes in $HH^2(Y_n/R_n)$. This is done using Toda's argument
\cite[Proof of 5.6]{Toda}. Let
$$
E_n\circ :D(X_n\times _{R_n}X_n)\rightarrow D(X_n\times _{R_n}Y_n)
$$
denote the map sending an object $K\in D(X_n\times _{R_n}X_n)$ to the complex
representing the Fourier-Mukai transform $\Phi _{E_n}\circ \Phi _K$. Explicitly
it is given by the complex
$$
p_{13*}(p_{12}^*K\otimes p_{23}^*E_n),
$$
where $p_{ij}$ denote the various projections from $X_n\times _{R_n}X_n\times
_{R_n}Y_n$. As in loc.~cit., the diagram
$$
\xymatrix{ D(X_n)\ar[d]^-{i_{n*}}\ar[rd]^-{p_1^*(-)\otimes E_n}& \\
D(X_n\times _{R_n}X_n)\ar[r]^-{E_n\circ }& D(X_n\times _{R_n}Y_n)}
$$
commutes.
In particular we get a morphism
$$
\eta _{X}^*:HH^*(X_n/R_n)\rightarrow \text{Ext}^*_{X_n\times _{R_n}Y_n}(E_n,
E_n).
$$
Now assume that both $X_n$ and $Y_n$ have relative dimension $d$ over $R_n$
and that the relative canonical sheaves of $X_n$ and $Y_n$ over $R_n$ are trivial. Let $E_n^\vee $ denote
$\mathscr RHom (E_n, \mathscr O_{X_n\times _{R_n}Y_n})$ viewed as an object of $D(Y_n\times _{R_n}X_n)$. In this
case the functor
$$
\Phi _{E_n^\vee [d]}:D(Y_n)\rightarrow D(X_n)
$$
is both a right and left adjoint of $\Phi _{E_n}$ \cite[4.5]{Bridgeland}. By the same argument, the functor
$$
\circ E_n^\vee [d]:D(X_n\times _{R_n}Y_n)\rightarrow D(Y_n\times _{R_n}Y_n),
$$
defined in the same manner as $E_n\circ $, has left and right adjoints given by
$$
\circ E_n:D(Y_n\times _{R_n}Y_n)\rightarrow D(X_n\times _{R_n}Y_n).
$$
Composing with the adjunction maps
\begin{equation}\label{E:adjunctionmaps}
\alpha :\text{id}\rightarrow \circ E_n\circ E_n^\vee [d], \ \ \beta :\circ E_n\circ E_n^\vee
[d]\rightarrow \text{id}
\end{equation}
applied to the diagonal $\mathscr O_{\Delta _{Y_n}}$
we get a morphism
$$
\eta _{Y*}:\text{Ext}^*_{X_n\times _{R_n}Y_n}(E_n, E_n)\rightarrow
HH^*(Y_n/R_n).
$$
We denote the composition
$$
\eta _{Y*}\eta _X^*:HH^*(X_n/R_n)\rightarrow HH^*(Y_n/R_n)
$$
by $\Phi _{E_n}^{HH^*}$. In the case when $E_n$ defines a Fourier-Mukai
equivalence this agrees with the standard definition (see for example
\cite{Toda}).
\end{pg}
\begin{pg}
Evaluating the adjunction maps \eqref{E:adjunctionmaps} on $\mathscr O_{\Delta _{Y_n}}$ we get a morphism
\begin{equation}\label{E:4.6.1}
\xymatrix{ \mathscr O_{\Delta _{Y_n}}\ar[r]^-{\alpha }&\mathscr O_{\Delta
_{Y_n}}\circ E_n\circ E_n^\vee [d]\ar[r]^-{\beta }& \mathscr O_{\Delta _{Y_n}}.}
\end{equation}
We say that $E_n$ is \emph{admissible} if this composition is the identity map.
If $E_n$ is a Fourier-Mukai equivalence then it is clear that $E_n$ is
admissible. Another example is if there exists a lifting $(\mathscr X, \mathscr Y, \mathscr
E)$ of $(X_n, Y_n, E_n)$ to $R$, where $\mathscr X$ and $\mathscr Y$ are smooth proper
$R$-schemes with trivial relative canonical bundles and $\mathscr E$ is a
$R$-perfect complex on $\mathscr X\times _R\mathscr Y$, such that the restriction of $\mathscr
E$ to the generic fiber defines a Fourier-Mukai equivalence. Indeed in this case
the map \eqref{E:4.6.1} is the reduction of the corresponding map $\mathscr
O_{\Delta _{\mathscr Y}}\rightarrow \mathscr O_{\Delta _{\mathscr Y}}$ defined over $R$,
which in turn is determined by its restriction to the generic fiber.
\end{pg}
\begin{pg} Consider Hochschild homology
$$
HH_i(X_n/R_n):= H^{-i}(X_n, Li_n^*\mathscr O_{\Delta _n}).
$$
By the argument of \cite[\S 5]{Caldararu1} we also get an action
$$
\Phi _{E_n}^{HH_*}:HH_*(X_n/R_n)\rightarrow HH_*(Y_n/R_n).
$$
Hochschild homology is a module over Hochschild cohomology, and an
exercise (that we do not write out here) shows that the following diagram
$$
\xymatrixcolsep{5pc}\xymatrix{ HH^*(X_n/R_n)\times HH_*(X_n/R_n)\ar[r]^-{\Phi _{E_n}^{HH^*}\times
\Phi _{E_n}^{HH_*}}\ar[d]^-{\text{mult}}&HH^*(Y_n/R_n)\times
HH_*(Y_n/R_n)\ar[d]^-{\text{mult}}\\ HH_*(X_n/R_n)\ar[r]^-{\Phi _{E_n}^{HH_*}}&
HH_*(Y_n/R_n)}
$$
commutes.
\end{pg}
\begin{pg} Using this we can describe the obstruction $\omega (E_n)$ in a
different way, assuming that $E_n$ is admissible. First note that viewing the relative Atiyah class of $X_n\times
_{R_n}Y_n$ as a morphism of Fourier-Mukai kernels we get the Atiyah class of
$E_n$ which is a morphism
$$
A(E_n):E_n\rightarrow E_n\otimes \Omega ^1_{X_n\times _{R_n}Y_n/R_n}[1]
$$
in $D(X_n\times _{R_n}Y_n)$. There is a natural decomposition
$$
\Omega ^1_{X_n\times _{R_n}Y_n/R_n}\simeq p_1^*\Omega ^1_{X_n/R_n}\oplus
p_2^*\Omega ^1_{Y_n/R_n},
$$
so we can write $A(E_n)$ as a sum of two maps
$$
A(E_n)_X:E_n\rightarrow E_n\otimes p_1^*\Omega ^1_{X_n/R_n}[1], \ \
A(E_n)_Y:E_n\rightarrow E_n\otimes p_2^*\Omega ^1_{Y_n/R_n}[1].
$$
Similarly the Kodaira-Spencer class of $X_n\times _{R_n}Y_n$ can be written as
the sum of the two pullbacks
$$
p_1^*\kappa _{X_n/X_{n+1}}:p_1^*\Omega ^1_{X_n/R_n}\rightarrow p_1^*\mathscr
O_{X_n}[1], \ \ p_2^*\kappa _{Y_n/Y_{n+1}}:p_2^*\Omega ^1_{Y_n/R_n}\rightarrow
p_2^*\mathscr O_{Y_n}[1].
$$
It follows that the obstruction $\omega (E_n)$ can be written as a sum
$$
\omega (E_n) = (p_1^*\kappa _{X_n/X_{n+1}}\circ A(E_n)_X)+ (p_2^*\kappa
_{Y_n/Y_{n+1}}\circ A(E_n)_Y).
$$
Now by construction we have
$$
\eta _{X_n}^*(\nu _{X_n/X_{n+1}}) = p_1^*\kappa _{X_n/X_{n+1}}\circ A(E_n)_X,
$$
and
$$
\eta _{Y_n*}(p_2^*\kappa _{Y_n/Y_{n+1}}\circ A(E_n)_Y) = -\nu _{Y_n/Y_{n+1}},
$$
the sign coming from the asymmetry in the definition of the relative Atiyah
class (it is in the verification of this second formula that we use the assumption that $E_n$ is admissible). Summarizing we find the formula
\begin{equation}\label{E:keyformula} \eta _{Y_n*}(\omega (E_n)) = \Phi
_{E_n}^{HH^*}(\nu _{X_n/X_{n+1}})-\nu _{Y_n/Y_{n+1}}.
\end{equation}
In the case when $\Phi _{E_n}$ is an equivalence the maps $\eta _{Y_n*}$ and
$\eta _{X_n}^*$ are isomorphisms, so the obstruction $\omega (E_n)$ vanishes if
and only if we have
$$
\Phi _{E_n}^{HH^*}(\nu _{X_n/X_{n+1}})-\nu _{Y_n/Y_{n+1}} = 0.
$$
\end{pg}
\begin{rem} By \cite[Remark 2.3 (iii)]{HMS}, the functor $\Phi _{E_n}$ is an
equivalence if and only if $\Phi _{E_0}:D(X_0)\rightarrow D(Y_0)$ is an
equivalence.
\end{rem}
\begin{cor} Suppose $F_n\in D(X_n\times _{R_n}Y_n)$ defines a Fourier-Mukai
equivalence, and that $E_n\in D(X_n\times _{R_n}Y_n)$ is another admissible $R_n$-perfect
complex such that $\Phi _{F_n}^{HH^*} = \Phi _{E_n}^{HH^*}$. If $E_n$ lifts to a
$R_{n+1}$-perfect complex $E_{n+1}\in D(X_{n+1}\times _{R_{n+1}}Y_{n+1})$ then
so does $F_n$.
\end{cor}
\begin{proof} Indeed the condition that $\Phi _{F_n}^{HH^*} = \Phi
_{E_n}^{HH^*}$ ensures that
$$
\eta _{Y_n*}(\omega (E_n) ) = \eta _{Y_n*}(\omega (F_n)),
$$
and since $\omega (E_n) = 0$ we conclude that $\omega (F_n) = 0$.
\end{proof}
\begin{pg} The next step is to understand the relationship between $\Phi
_{E_n}^{HH^*}$ and the action of $\Phi _{E_n}$ on the cohomological realizations
of the Mukai motive.
Assuming that the characteristic $p$ is bigger than the dimension of $X_0$
(which in our case will be a K3 surface so we just need $p>2$) we can
exponentiate the relative Atiyah class to get a map
$$
\exp (\alpha _n):\mathscr O_{\Delta _n}\rightarrow \oplus _ii_{n*}\Omega
^i_{X_n/R_n}[i]
$$
which by adjunction defines a morphism
\begin{equation}\label{E:HKRiso} Li_n^*\mathscr O_{\Delta _n}\rightarrow \oplus
_i\Omega ^i_{X_n/R_n}[i]
\end{equation} in $D(X_n)$. By \cite[Theorem 0.7]{AC}, which also holds
in positive characteristic subject to the bounds on dimension, this map is an
isomorphism. We therefore get an isomorphism
$$
I^{HKR}:HH^*(X_n/R_n)\rightarrow HT^*(X_n/R_n),
$$
where we write
$$
HT^*(X_n/R_n) := \oplus _{p+q=*}H^p(X_n, \bigwedge ^qT_{X_n/R_n}).
$$
We write
$$
I^K_{X_n}:HH^*(X_n/R_n)\rightarrow HT^*(X_n/R_n)
$$
for the composition of $I^{HKR}$ with multiplication by the inverse square root of
the Todd class of $X_n/R_n$, as in \cite[1.7]{Caldararu2}.
The isomorphism \eqref{E:HKRiso} also defines an isomorphism
$$
I_{HKR}:HH_*(X_n/R_n)\rightarrow H\Omega _*(X_n/R_n),
$$
where
$$
H\Omega _*(X_n/R_n):= \oplus _{q-p=*}H^p(X_n, \Omega ^q_{X_n/R_n}).
$$
We write
$$
I_{K}^{X_n}:HH_*(X_n/R_n)\rightarrow H\Omega _*(X_n/R_n)
$$
for the composition of $I_{HKR}$ with multiplication by the square root of the
Todd class of $X_n/R_n$.
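To illustrate these gradings in the case of interest, note that if $X_n$ is a
relative K3 surface over $R_n$ then, granting the decomposition
\eqref{E:HKRiso}, the even graded pieces of Hochschild homology are
$$
H\Omega _{-2}(X_n/R_n) = H^2(X_n, \mathscr O_{X_n}), \ \ H\Omega
_{2}(X_n/R_n) = H^0(X_n, \Omega ^2_{X_n/R_n}),
$$
$$
H\Omega _0(X_n/R_n) = H^0(X_n, \mathscr O_{X_n})\oplus H^1(X_n, \Omega
^1_{X_n/R_n})\oplus H^2(X_n, \Omega ^2_{X_n/R_n}),
$$
while the odd graded pieces vanish. These are free $R_n$-modules of ranks $1$,
$1$, and $22$ respectively, matching the graded pieces of the Mukai motive.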
We will consider the following condition $(\star )$ on a $R_n$-perfect complex
$E_n\in D(X_n\times _{R_n}Y_n)$:
\end{pg}
\begin{enumerate}
\item [$(\star )$] The diagram
$$
\xymatrix{ HH_*(X_n/R_n)\ar[r]^-{\Phi _{E_n}^{HH_*}}\ar[d]^-{I_K^{X_n}}&
HH_*(Y_n/R_n)\ar[d]^-{I_K^{Y_n}}\\ H\Omega _*(X_n/R_n)\ar[r]^-{\Phi
_{E_n}^{H\Omega _*}}& H\Omega _*(Y_n/R_n)}
$$
commutes.
\end{enumerate}
\begin{rem} We expect this condition to hold in general. Over a field of
characteristic $0$ this is shown in \cite[1.2]{MS}. We expect that a careful
analysis of the denominators occurring in their proof will verify $(\star )$ quite
generally with some conditions on the characteristic relative to the dimension
of the schemes. However, we will not discuss this further in this paper.
\end{rem}
\begin{pg} There are two cases we will consider in this paper where $(\star )$ is
known to hold:
\begin{enumerate}
\item [(i)] If $E_n = \mathscr O_{\Gamma _n}$ is the structure sheaf of the graph
of an isomorphism $\gamma _n:X_n\rightarrow Y_n$. In this case the induced maps
on Hochschild cohomology and $H\Omega _*$ are simply given by pushforward
$\gamma _{n*}$ and condition $(\star )$ immediately holds.
\item [(ii)] Suppose $B\rightarrow R_n$ is a morphism from an integral domain
$B$ which is flat over $W$ and that there exists a lifting $(\mathscr X, \mathscr Y,
\mathscr E)$ of $(X_n, Y_n, E_n)$ to $B$, where $\mathscr X$ and $\mathscr Y$ are proper and
smooth over $B$ and $\mathscr E\in D(\mathscr X\times _B\mathscr Y)$ is a $B$-perfect
complex pulling back to $E_n$. Suppose further that the groups $HH^*(\mathscr X/B)$
and $HH^*(\mathscr Y/B)$ are flat over $B$ and their formation commutes with base
change (this holds for example if $\mathscr X$ and $\mathscr Y$ are K3 surfaces). Then
$(\star )$ holds. Indeed it suffices to verify the commutativity of the
corresponding diagram over $B$, and this in turn can be verified after passing
to the field of fractions of $B$. In this case the result holds by
\cite[1.2]{MS}.
\end{enumerate}
\end{pg}
\begin{lem}\label{L:4.13} Let $E_n, F_n\in D(X_n\times _{R_n}Y_n)$ be two
$R_n$-perfect complexes satisfying condition $(\star )$. Suppose further that
the maps $\Phi _{E_0}^{\text{\rm crys}}$ and $\Phi _{F_0}^{\text{\rm crys}}$ on
the crystalline realizations $\widetilde H(X_0/W)\rightarrow \widetilde
H(Y_0/W)$ of the Mukai motive are equal. Then the maps $\Phi _{E_n}^{HH_*}$ and
$\Phi _{F_n}^{HH_*}$ are also equal. Furthermore if the maps on the crystalline
realizations are isomorphisms then $\Phi _{E_n}^{HH_*}$ and $\Phi _{F_n}^{HH_*}$
are also isomorphisms.
\end{lem}
\begin{proof} Since $H\Omega _*(X_n/R_n)$ (resp.~$H\Omega _*(Y_n/R_n)$) is
obtained from the de Rham realization $\widetilde H_{\text{dR}}(X_n/R_n)$
(resp.~$\widetilde H_{\text{dR}}(Y_n/R_n)$) of the Mukai motive of $X_n/R_n$
(resp.~$Y_n/R_n$) by passing to the associated graded, it suffices to show that
the two maps
$$
\Phi _{E_n}^{\text{dR}}, \Phi _{F_n}^{\text{dR}}: \widetilde
H_{\text{dR}}(X_n/R_n)\rightarrow \widetilde H_{\text{dR}}(Y_n/R_n)
$$
are equal, and isomorphisms when the crystalline realizations are
isomorphisms. By the comparison between crystalline and de Rham cohomology it
suffices in turn to show that the two maps on the crystalline realizations
$$
\Phi _{E_n}^{\text{crys}}, \Phi _{F_n}^{\text{crys}}: \widetilde
H_{\text{crys}}(X_n/W[t]/(t^{n+1}))\rightarrow \widetilde
H_{\text{crys}}(Y_n/W[t]/(t^{n+1}))
$$
are equal. Via the Berthelot-Ogus isomorphism \cite[2.2]{BO}, which is
compatible with Chern classes, these maps are identified after tensoring with
$\Q$ with the two maps obtained by base change from
$$
\Phi _{E_0}^{\text{crys}}, \Phi _{F_0}^{\text{crys}}: \widetilde
H_{\text{crys}}(X_0/W)\rightarrow \widetilde H_{\text{crys}}(Y_0/W).
$$
The result follows.
\end{proof}
\begin{pg} In the case when $X_n$ and $Y_n$ are K3 surfaces the action of
$HH^*(X_n/R_n)$ on $HH_*(X_n/R_n)$ is faithful. Therefore from Lemma \ref{L:4.13} we
obtain the following.
\end{pg}
\begin{cor}\label{C:4.15} Assume that $X_n$ and $Y_n$ are K3 surfaces and that
$E_n, F_n\in D(X_n\times _{R_n}Y_n)$ are two $R_n$-perfect complexes satisfying
condition $(\star )$. Suppose further that $\Phi ^{\text{\rm crys}}_{E_0}$ and
$\Phi ^{\text{\rm crys}}_{F_0}$ are equal on the crystalline realizations of the
Mukai motives of the reductions. Then $\Phi _{E_n}^{HH^*}$ and $\Phi
_{F_n}^{HH^*}$ are equal.
\end{cor}
\begin{proof} Indeed since homology is a faithful module over cohomology the
maps $\Phi _{E_n}^{HH^*}$ and $\Phi _{F_n}^{HH^*}$ are determined by the maps on
Hochschild homology which are equal by Lemma \ref{L:4.13}.
\end{proof}
\begin{cor}\label{P:4.15} Let $X_{n+1}$ and $Y_{n+1}$ be K3 surfaces over
$R_{n+1}$ and assume given an admissible $R_{n+1}$-perfect complex $E_{n+1}$ on
$X_{n+1}\times _{R_{n+1}}Y_{n+1}$ such that the reduction $E_n$ satisfies condition $(\star
)$. Assume given an isomorphism $\sigma _n:X_n\rightarrow Y_n$ over $R_n$ such
that the induced map $\sigma _0:X_0\rightarrow Y_0$ defines the same map on
crystalline realizations of the Mukai motive as $E_0$. Then $\sigma _n$ lifts to
an isomorphism $\sigma _{n+1}:X_{n+1}\rightarrow Y_{n+1}$.
\end{cor}
\begin{proof} Indeed by \eqref{E:keyformula} and the fact that $\Phi
^{HH^*}_{E_n}$ and $\Phi ^{HH^*}_{\Gamma _{\sigma _n}}$ are equal by
Corollary \ref{C:4.15}, we see that the obstruction to lifting $\sigma _{n}$ is equal to
the obstruction to lifting $E_n$, which is zero by assumption.
\end{proof}
\section{A remark on reduction types}\label{S:section4}
\begin{pg} In the proof of Theorem \ref{T:1.2} we need the following Theorem
\ref{T:2.2v1}, whose proof relies on known characteristic $0$ results obtained
from Hodge theory. In Section \ref{S:section10} below we give a different
algebraic argument for Theorem \ref{T:2.2v1} in a special case which suffices
for the proof of Theorem \ref{T:1.2}.
\end{pg}
\begin{pg} Let $V$ be a complete discrete valuation ring with field of fractions
$K$ and residue field $k$. Let $X/V$ be a projective K3 surface with generic fiber $X_K$,
and let $Y_K$ be a second K3 surface over $K$ such that the geometric fibers
$X_{\overline K}$ and $Y_{\overline K}$ are derived equivalent.
\end{pg}
\begin{thm}\label{T:2.2v1} Under these assumptions the K3 surface $Y_K$ has
potentially good reduction.
\end{thm}
\begin{rem} Here potentially good reduction means that after possibly replacing
$V$ by a finite extension there exists a K3 surface $Y/V$ whose generic fiber
is $Y_K$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{T:2.2v1}] We use \cite[1.1 (1)]{LO} which implies
that after replacing $V$ by a finite extension $Y_K$ is isomorphic to a moduli
space of sheaves on $X_K$.
After replacing $V$ by a finite extension we may assume that we have a complex
$P\in D(X\times Y)$ defining an equivalence $$ \Phi _P:D(X_{\overline
K})\rightarrow D(Y_{\overline K}). $$ Let $E\in D(Y\times X)$ be the complex
defining the inverse equivalence $$ \Phi _E:D(Y_{\overline K})\rightarrow
D(X_{\overline K}) $$ to $\Phi _P$. Let $\nu := \Phi _E(0,0,1)\in
A^*(X_{\overline K})_{\text{num}, \Q}$ be the Mukai vector of a fiber of $E$ at
a closed point $y\in Y_{\overline K}$ and write $$ \nu = (r, [L_X], s)\in
A^0(X_{\overline K})_{\text{num}, \Q}\oplus A^1(X_{\overline K})_{\text{num},
\Q}\oplus A^2(X_{\overline K})_{\text{num}, \Q}. $$ By \cite[8.1]{LO} we may
after possibly changing our choice of $P$, which may involve another extension
of $V$, assume that $r$ is prime to $p$ and that $L_X$ is very ample. Making
another extension of $V$ we may assume that $\nu $ is defined over $K$, and
therefore by specialization also defines an element, which we denote by the same
letter, $$ \nu = (r, [L_X], s)\in \Z\oplus \text{Pic}(X)\oplus \Z. $$ This class
has the property that $r$ is prime to $p$ and that there exists another class
$\nu '$ such that $\langle \nu, \nu '\rangle = 1$. This implies in particular
that $\nu $ restricts to a primitive class on the closed fiber. Fix an ample
class $h$ on $X$, and let $\mathscr M_h(\nu )$ denote the moduli space of semistable
sheaves on $X$ with Mukai vector $\nu $. By \cite[3.16]{LO} the stack $\mathscr
M_h(\nu )$ is a $\mu _r$-gerbe over a relative K3 surface $M_h(\nu )/V$, and
by \cite[8.2]{LO} we have $Y_{\overline K}\simeq M_h(\nu )_{\overline K}$. In
particular, $Y_K$ has potentially good reduction.
\end{proof}
\begin{rem} As discussed in \cite[p.~2]{LM}, to obtain Theorem \ref{T:2.2v1} it suffices
to know that every K3 surface $Z_K$ over $K$ has potentially semistable
reduction and this would follow from standard conjectures on resolution of
singularities and toroidization of morphisms in mixed and positive
characteristic. In the setting of Theorem \ref{T:2.2v1}, once we know that $Y_K$ has
potentially semistable reduction then by \cite[Theorem on bottom of p. 2]{LM} we
obtain that $Y_K$ has good reduction since the Galois representation
$H^2(Y_{\overline K}, \Q_\ell )$ is unramified, being isomorphic to a direct
summand of the $\ell $-adic realization $\widetilde H(X_{\overline K}, \Q_\ell
)$ of the Mukai motive of $X_K$.
\end{rem}
\begin{pg}\label{P:4.6} One can also consider the problem of extending $Y_K$
over a higher dimensional base. Let $B$ denote a normal finite type $k$-scheme
with a point $s\in B(k)$ and let $X/B$ be a projective family of K3 surfaces.
Let $K$ be the function field of $B$ and let $Y_K$ be a second K3 surface over
$K$ Fourier-Mukai equivalent to $X_K$. Dominating $\mathscr O_{B, s}$ by a suitable
complete discrete valuation ring $V$ we can find a morphism $$ \rho :\text{\rm Spec}
(V)\rightarrow B $$ sending the closed point of $\text{\rm Spec} (V)$ to $s$ and an
extension $Y_V$ of $\rho ^*Y_K$ to a smooth projective K3 surface over $V$. In
particular, after replacing $B$ by its normalization in a finite extension of
$K$ we can find a primitive polarization $\lambda _K$ on $Y_K$ of degree prime
to the characteristic such that $\rho ^*\lambda _K$ extends to a polarization on
$Y_V$. We then have a commutative diagram of solid arrows $$ \xymatrix{ \text{\rm Spec}
(\text{Frac}(V))\ar@{^{(}->}[d]\ar[r]& \text{\rm Spec} (K)\ar@{^{(}->}[d]\ar@/^1pc/[rdd]& \\
\text{\rm Spec} (V)\ar@/_1pc/[rrd]\ar[r]& B\ar@{-->}[rd]&\\ && \mathscr M_d} $$ for a suitable
integer $d$. Base changing to a suitable \'etale neighborhood $U\rightarrow \mathscr
M_d$ of the image of the closed point of $\text{\rm Spec} (V)$, with $U$ an affine scheme,
we can after shrinking and possibly replacing $B$ by an alteration find a
commutative diagram $$ \xymatrix{ \text{\rm Spec} (\text{Frac}(V))\ar@{^{(}->}[d]\ar[r]& \text{\rm Spec}
(K)\ar@{^{(}->}[d]\ar@/^1pc/[rd]& \\ \text{\rm Spec} (V)\ar@/_2pc/[rr]\ar[r]& B\ar[rd]&
U\ar@{^{(}->}[d]^-j\\ && \overline U,} $$ where $j$ is a dense open imbedding
and $\overline U$ is projective over $k$. It follows that the image of $s$ in
$\overline U$ in fact lands in $U$ which gives an extension of $Y_K$ to a
neighborhood of $s$. This discussion implies the following:
\end{pg}
\begin{cor}\label{C:4.7} In the setup of Proposition \ref{P:4.6}, we can, after replacing
$(B, s)$ by a neighborhood of a point in the preimage of $s$ in an alteration of
$B$, find an extension of $Y_K$ to a K3 surface over $B$.
\end{cor}
\section{Supersingular reduction}\label{sec:supers-reduct}
\begin{pg}\label{P:5.1} Let $B$ be a normal scheme of finite type over an
algebraically closed field $k$ of odd positive characteristic $p$. Let $K$
denote the function field of $B$ and let $s\in B$ be a closed point. Let
$f:X\rightarrow B$ be a projective K3 surface over $B$ and let $Y_K/K$ be a second K3
surface over $K$ such that there exists a strongly filtered Fourier-Mukai
equivalence $$ \Phi _P:D(X_K)\rightarrow D(Y_K) $$ defined by an object $P\in
D(X_K\times _KY_K)$. Assume further that the fiber $X_s$ of $X$ over $s$ is a
supersingular K3 surface.
\end{pg}
\begin{pg} Using Corollary \ref{C:4.7} we can, after possibly replacing $B$ by a
neighborhood of a point over $s$ in an alteration, assume that we have a smooth
K3 surface $Y/B$ extending $Y_K$ and an extension of the complex $P$ to a
$B$-perfect complex $\mathscr P$ on $X\times _BY$, and furthermore that the complex
$Q$ defining the inverse of $\Phi _P$ also extends to a complex $\mathscr Q$ on
$X\times _BY$. Let $f_{\mathscr X}:\mathscr X\rightarrow B$ (resp. $f_{\mathscr Y}:\mathscr
Y\rightarrow B$) be the structure morphism, and let $\mathscr
H^i_{\text{crys}}(X/B)$ (resp. $\mathscr H^i_{\text{crys}}(Y/B)$) denote the
$F$-crystal $R^if_{\mathscr X*}\mathscr O_{\mathscr X/W}$ (resp. $R^if_{\mathscr Y*}\mathscr O_{\mathscr
Y/W}$) on $B/W$ obtained by forming the $i$-th higher direct image of the
structure sheaf on the crystalline site of $\mathscr X/W$ (resp. $\mathscr Y/W$).
Because $\Phi _{\mathscr P}$ is strongly filtered, it induces an isomorphism of
$F$-crystals $$ \Phi ^{\text{crys}, i}_{\mathscr P}:\mathscr
H^i_{\text{crys}}(X/B)\rightarrow \mathscr H^i_{\text{crys}}(Y/B) $$ for all $i$,
with inverse defined by $\Phi _{\mathscr Q}$. Note that since we are working here
with K3 surfaces these morphisms are defined integrally.
We also have the de Rham realizations $\mathscr H^i_{\text{dR}}(X/B)$ and $\mathscr H^i_\text{dR}
(Y/B)$ which are filtered modules with integrable connection on $B$ equipped
with filtered isomorphisms compatible with the connections
\begin{equation}\label{E:A} \Phi ^{\text{dR} , i}_{\mathscr P}:\mathscr
H^i_{\text{dR}}(X/B)\rightarrow \mathscr H^i_{\text{dR}}(Y/B), \end{equation} as well as \'etale
realizations $\mathscr H^i_\et (X/B)$ and $\mathscr H^i_\et (Y/B)$ equipped with
isomorphisms \begin{equation}\label{E:B} \Phi ^{\et , i}_{\mathscr P}:\mathscr H^i_\et
(X/B)\rightarrow \mathscr H^i_\et (Y/B). \end{equation}
\end{pg}
\begin{pg} Let $H^i_{\text{crys}}(X_s/W)$ (resp. $H^i_{\text{crys}}(Y_s/W)$)
denote the crystalline cohomology of the fibers over $s$. The isomorphism $\Phi
^{\text{crys}, i}_{\mathscr P}$ induces an isomorphism $$ \theta ^i
:H^i_{\text{crys}}(X_s/W)\rightarrow H^i_{\text{crys}}(Y_s/W) $$ of
$F$-crystals. By \cite[Theorem I]{Ogus2} this implies that $X_s$ and $Y_s$ are
isomorphic. However, we may not necessarily have an isomorphism which induces
$\theta ^2$ on cohomology.
\end{pg}
\begin{pg} Recall that as discussed in \cite[10.9 (iii)]{Huybrechts} if
$C\subset X_K$ is a $(-2)$-curve then we can perform a spherical twist $$
T_{\mathscr O_{C}}:D(X_K)\rightarrow D(X_K) $$ whose action on $NS(X_K)$ is the
reflection $$ r_C(a) := a+\langle a, C\rangle C. $$
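Note that, using $\langle C, C\rangle = -2$, one checks directly that $r_C$ is an involutive isometry:
$$ r_C(r_C(a)) = a + \langle a, C\rangle \left( 2 + \langle C, C\rangle \right) C = a, \qquad
\langle r_C(a), r_C(b)\rangle = \langle a, b\rangle + \langle a, C\rangle \langle b, C\rangle \left( 2 + \langle C, C\rangle \right) = \langle a, b\rangle . $$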
\end{pg}
\begin{prop}\label{T:5.5} After possibly changing our choice of model $Y$ for
$Y_K$, replacing $(B, s)$ by a neighborhood of a point in an alteration over
$s$, and composing with a sequence of spherical twists $T_{\mathscr O_C}$ along
$(-2)$-curves in the generic fiber $Y_K$, there exists an isomorphism $\sigma
:X_s\rightarrow Y_s$ inducing the isomorphism $\theta ^2$ on the second
crystalline cohomology group. If $\theta ^2$ preserves the ample cone of the
generic fiber then we can find an isomorphism $\sigma $ inducing $\theta ^2$.
\end{prop}
\begin{proof} By \cite[4.4 and 4.5]{Ogus3} there exists an isomorphism $\theta
_0:NS(X_s)\rightarrow NS(Y_s)$ compatible with $\theta ^2$ in the sense that the
diagram
\begin{equation}\label{E:5.4.1} \xymatrix{ NS(X_s)\ar[r]^-{\theta
_0}\ar[d]^-{c_1}& NS(Y_s)\ar[d]^-{c_1}\\ H^2_{\text{crys}}(X_s/W)\ar[r]^-{\theta
^2}& H^2_{\text{crys}}(Y_s/W)}
\end{equation} commutes. Note that as discussed in \cite[4.8]{Liedtke} the map
$\theta _0$ determines $\theta ^2$ by the Tate conjecture for K3 surfaces,
proven by Charles, Maulik, and Pera \cite{Ch, Maulik, Pera}. In particular, if
we have an isomorphism $\sigma :X_s\rightarrow Y_s$ inducing $\pm \theta _0$ on
N\'eron-Severi groups then $\sigma $ also induces $\pm \theta ^2$ on crystalline
cohomology. We therefore have to study the problem of finding an isomorphism
$\sigma $ compatible with $\theta _0$.
Ogus shows in \cite[Theorem II]{Ogus2} that there exists such an isomorphism
$\sigma $ if and only if the map $\theta _0$ takes the ample cone to the ample
cone. So our problem is to choose a model of $Y$ in such a way that $\pm \theta
_0$ preserves ample cones. Set $$ V_{X_s}:= \{x\in NS(X_s)_{\mathbb R}|\text{$x^2>0$
and $\langle x, \delta \rangle \neq 0$ for all $\delta \in NS(X_s)$ with $\delta
^2 = -2$}\}, $$ and define $V_{Y_s}$ similarly. Being an isometry the map
$\theta _0$ then induces an isomorphism $V_{X_s}\rightarrow V_{Y_s}$, which we
again denote by $\theta _0$. Let $R_{Y_s}$ denote the group of automorphisms of
$V_{Y_s}$ generated by reflections in $(-2)$-curves and multiplication by $-1$.
By \cite[Proposition 1.10 and Remark 1.10.9]{Ogus2} the ample cone of $Y_s$ is a
connected component of $V_{Y_s}$ and the group $R_{Y_s}$ acts simply
transitively on the set of connected components of $V_{Y_s}$.
Let us show how to change the model to account for reflections by $(-2)$-curves in
$Y_s$. We show that after replacing $(P, Y)$ by a new pair $(P', Y')$, consisting
of the complex $P'\in D(X_K\times _KY_K)$ obtained by composing $\Phi _P$ with a
sequence of spherical twists along $(-2)$-curves in $Y_K$ and of a new model $Y'$,
there exists an isomorphism $\gamma :Y'_s\rightarrow Y_s$ such
that the composition $$ \xymatrix{ NS(X_s)\ar[r]^-{\theta _0}&
NS(Y_s)\ar[r]^-{r_C}& NS(Y_s)\ar[r]^-{\gamma ^*}& NS(Y'_s)} $$ is equal to the
map $\theta '_0$ defined as for $\theta _0$ but using the model $Y'$.
Let $C\subset Y_s$ be a $(-2)$-curve, and let
$$ r_C:NS(Y_s)\rightarrow NS(Y_s)$$ be the reflection in the $(-2)$-curve.
If $C$ lifts to a curve in the family $Y$ we get a $(-2)$-curve in the generic
fiber and so by replacing our $P$ by the complex $P'$ obtained by composition
with the spherical twist by this curve in $Y_K$ (see \cite[10.9
(iii)]{Huybrechts}) and setting $Y' = Y$ we get the desired new pair. If $C$
does not lift to $Y$, then we take $P' = P$ but now replace $Y$ by the flop of
$Y$ along $C$ as explained in \cite[2.8]{Ogus2}.
Thus after making a sequence of replacements $(P, Y)\mapsto (P', Y')$ we can
arrange that $\theta _0$ sends the ample cone of $X_s$ to plus or minus the
ample cone of $Y_s$, and therefore we get our desired isomorphism $\sigma $.
To see the last statement, note that we have modified the generic fiber by
composing with reflections along $(-2)$-curves. Therefore if $\lambda $ is an
ample class on $X$ with restriction $\lambda _K$ to $X_K$, and for a general
ample divisor $H$ we have $\langle \Phi _P(\lambda ), H\rangle >0$, then the
same holds on the closed fiber. This implies that the ample
cone of $X_s$ gets sent to the ample cone of $Y_s$ and not its negative.
\end{proof}
\begin{rem} One can also consider \'etale or de Rham cohomology in Proposition \ref{T:5.5}.
Assume we have applied suitable spherical twists and chosen a model $Y$ such
that we have an isomorphism $\sigma :X_s\rightarrow Y_s$ inducing $\pm \theta
_0$. We claim that the maps $$ \theta _\text{dR} :H_\text{dR} ^i(X_s/k)\rightarrow H_\text{dR}
^i(Y_s/k), \ \ \theta _\et :H_\et ^i(X_s, \Q_\ell )\rightarrow H_\et ^i(Y_s,
\Q_\ell ) $$ induced by the maps \eqref{E:A} and \eqref{E:B} also agree with the
maps defined by $\pm \sigma $. For de Rham cohomology this is clear using the
comparison with crystalline cohomology, and for the \'etale cohomology it
follows from compatibility with the cycle class map.
\end{rem}
\begin{pg} With notation and assumptions as in Proposition \ref{T:5.5}, assume further
that $B$ is a curve or the spectrum of a complete discrete valuation ring, and that we have
chosen a model $Y$ such that each of the reductions satisfies condition $(\star
)$ and such that the map $\theta _0$ in \eqref{E:5.4.1} preserves plus or minus
the ample cones. Let $\sigma :X_s\rightarrow Y_s$ be an isomorphism inducing
$\pm \theta _0$.
\end{pg}
\begin{lem}\label{P:5.7} The isomorphism $\sigma $ lifts to an isomorphism $\tilde
\sigma :X\rightarrow Y$ over the completion $\widehat B$ at $s$ inducing the
maps defined by $\pm \Phi ^{\text{\rm crys}, i} _{\mathscr P}$.
\end{lem}
\begin{proof} By Proposition \ref{P:2.10} in fact $\Phi _{\mathscr P}$ preserves the ample cone
of the closed fiber and so we can choose $\sigma $ such that the map on
cohomology is $\theta _0$. By Corollary \ref{P:4.15} $\sigma $ lifts uniquely to each
infinitesimal neighborhood of $s$ in $B$, and therefore by the Grothendieck
existence theorem we get a lifting $\tilde \sigma $ over $\widehat B$. That the
realization of $\tilde \sigma $ on cohomology agrees with $\pm \Phi ^{\text{\rm
crys}, i}_{\mathscr P}$ can be verified on the closed fiber where it holds by
assumption.
\end{proof}
\begin{lem}\label{C:5.8} With notation and assumptions as in Lemma \ref{P:5.7} the map
$\Phi _P^{A^*_{\text{\rm num}, \Q}}$ preserves the ample cones of the generic
fibers.
\end{lem}
\begin{proof} The statement can be verified after making a field extension of
the function field of $B$. The result therefore follows from Lemma \ref{P:5.7} and
Proposition \ref{P:2.10}.
\end{proof}
\begin{rem}\label{R:6.10} In the case when the original $\Phi _P$ preserves the
ample cones of the geometric generic fibers, no reflections along $(-2)$-curves
in the generic fiber are needed. Indeed, by the above argument we get an
isomorphism $\sigma _K:X_K\rightarrow Y_K$ such that the induced map on
crystalline and \'etale cohomology
agrees with $\Phi _P\circ \alpha $ for some sequence $\alpha $ of spherical
twists along $(-2)$-curves in $X_K$ (also using Corollary \ref{C:5.8}). Since both $\sigma $
and $\Phi _P$ preserve ample cones it follows that $\alpha $ also preserves the
ample cone of $X_{\overline K}$. By \cite[1.10]{Ogus2} it follows that $\alpha $ acts
trivially on the N\'eron-Severi group of $X_{\overline K}$. We claim that this implies that
$\alpha $ also acts trivially on any of the cohomological realizations. We give
the proof in the case of \'etale cohomology $H^2(X_{\overline K}, \Q_\ell )$ (for a prime
$\ell $ invertible in $k$) leaving slight modifications for the other cohomology
theories to the reader. Let $\widetilde R_X$ denote the subgroup of
$GL(H^2(X_{\overline K},\Q_\ell ))$ generated by $-1$ and the action induced by spherical twists along
$(-2)$-curves in $X_{\overline K}$, and consider the inclusion of $\Q_\ell $-vector spaces
with inner products $$ NS(X_{\overline K})_{\Q_\ell }\hookrightarrow H^2(X_{\overline K}, \Q_\ell ). $$ By
\cite[Lemma 8.12]{Huybrechts} the action of the spherical twists along
$(-2)$-curves in $X_{\overline K}$ on $H^2(X_{\overline K}, \Q_\ell )$ is by reflection across classes in
the image of $NS(X_{\overline K})_{\Q_\ell }$. From this (and Gram-Schmidt!) it follows that
the group $\widetilde R_X$ preserves $NS(X_{\overline K})_{\Q_\ell }$, acts trivially on
the quotient of $H^2(X_{\overline K}, \Q_\ell )$ by $NS(X_{\overline K})_{\Q_\ell }$, and that the
restriction map $$ \widetilde R_X\rightarrow GL(NS(X_{\overline K})_{\Q_\ell }) $$ is
injective. In particular, if an element $\alpha \in \widetilde R_X$ acts
trivially on $NS(X_{\overline K})$ then it also acts trivially on \'etale cohomology.
It follows that $\sigma $ and $\Phi _{P}$ induce the same map on realizations.
\end{rem}
\section{Specialization}\label{S:section7}
\begin{pg} We consider again the setup of Proposition \ref{P:5.1}, but now we don't assume
that the closed fiber $X_s$ is supersingular. Further we restrict attention to
the case when $B$ is a smooth curve, and assume we are given a smooth model
$Y/B$ of $Y_K$ and a $B$-perfect complex $\mathscr P\in D(X\times _BY)$ such that
for all geometric points $\bar z \rightarrow B$ the induced complex $\mathscr P_{\bar
z}$ on $X_{\bar z}\times Y_{\bar z}$ defines a strongly filtered equivalence
$D(X_{\bar z})\rightarrow D(Y_{\bar z})$.
Let $\mathscr H^i(X/B)$ (resp. $\mathscr H^i(Y/B)$) denote either $\mathscr H^i_\et (X/B)$
(resp. $\mathscr H^i_\et (Y/B)$) for some prime $\ell \neq p$ or $\mathscr H^i_\text{\rm
crys} (X/B)$ (resp. $\mathscr H^i_\text{\rm crys} (Y/B)$). Assume further given an
isomorphism $$ \sigma _K:X_K\rightarrow Y_K $$ inducing the map given by
restricting $$ \Phi ^{ i}_{\mathscr P}:\mathscr H^i(X/B)\rightarrow \mathscr H^i(Y/B) $$ to
the generic point.
\end{pg}
\begin{rem} If we work with \'etale cohomology in this setup we could also
consider the spectrum of a complete discrete valuation ring instead of $B$, and
in particular also a mixed characteristic discrete valuation ring.
\end{rem}
\begin{rem} When the characteristic of $k$ is zero we can also use de Rham
cohomology instead of \'etale cohomology.
\end{rem}
\begin{prop}\label{P:6.4} The isomorphism $\sigma _K $ extends to an isomorphism
$\sigma :X\rightarrow Y$.
\end{prop}
\begin{proof} We give the argument here for \'etale cohomology in the case when
$B$ is the spectrum of a discrete valuation ring, leaving the minor
modifications for the other cases to the reader.
Let $Z\subset X\times _BY$ be the closure of the graph of $\sigma _K$, so $Z$
is an irreducible flat $V$-scheme of dimension $3$ and we have a correspondence
$$ \xymatrix{ & Z\ar[ld]_-{p}\ar[rd]^-q& \\ X&& Y.} $$ Fix an ample line bundle
$L$ on $X$ and consider the line bundle $M:= \text{det}(Rq_*p^*L)$ on $Y$. The
restriction of $M$ to $Y_K$ is simply $\sigma _{K*}L$, and in particular the
\'etale cohomology class of $M$ is equal to the class of $ \Phi _{\mathscr P}(L)$.
By our assumption that $\Phi _{\mathscr P}$ is strongly filtered in the fibers the
line bundle $M$ is ample on $Y$. Note also that by our assumption that $\Phi
_{\mathscr P}$ is strongly filtered in every fiber we have $$ \Phi _{\mathscr
P}(L^{\otimes n})\simeq \Phi _{\mathscr P}(L)^{\otimes n}. $$ In particular we can
choose $L$ very ample in such a way that $M$ is also very ample. The result then follows from Matsusaka-Mumford \cite[Theorem 2]{MM}.
\end{proof}
\section{Proof of Theorem \ref{T:1.2}}\label{S:section8}
\begin{pg} Let $K$ be an algebraically closed field extension of $k$ and let $X$
and $Y$ be K3 surfaces over $K$ equipped with a complex $P\in D(X\times _KY)$
defining a strongly filtered Fourier-Mukai equivalence $$ \Phi
_P:D(X)\rightarrow D(Y). $$ We can then choose a primitive polarization $\lambda
$ on $X$ of degree prime to $p$ such that the triple $((X, \lambda ), Y, P)$
defines a $K$-point of $\mathscr S_d$. In this way the proof of Theorem \ref{T:1.2} is
reformulated into showing the following: For any algebraically closed field $K$
and point $((X, \lambda ), Y, P)\in \mathscr S_d(K)$ there exists an isomorphism
$\sigma :X\rightarrow Y$ such that the maps on crystalline and \'etale
realizations defined by $\sigma $ and $\Phi _P$ agree.
\end{pg}
\begin{pg}\label{P:extendfield}
To prove this it suffices to show that there exists such an isomorphism after replacing $K$ by a field extension. To see this let $I$ denote the scheme of isomorphisms between $X$ and $Y$, which is a
locally closed subscheme of the Hilbert scheme of $X\times _KY$. Over $I$ we
have a tautological isomorphism $\sigma ^u:X_I\rightarrow Y_I$. The condition
that the induced action on $\ell $-adic \'etale cohomology agrees with $\Phi _P$
is an open and closed condition on $I$. It follows that there exists a subscheme
$I'\subset I$ classifying isomorphisms $\sigma $ as in the theorem. This implies
that if we can find an isomorphism $\sigma $ over a field extension of $K$ then
such an isomorphism also exists over $K$.
\end{pg}
\begin{pg} By Proposition \ref{P:6.4} it suffices to show that the result holds for each
generic point of $\mathscr S_d$. By Theorem \ref{T:4.8} any such generic point maps to a
generic point of $\mathscr M_d$ which by Theorem \ref{T:2.2} admits a specialization to a
supersingular point $x\in \mathscr M_d(k)$ given by a family $(X_R, \lambda _R)/R$,
where $R$ is a complete discrete valuation ring over $k$ with residue field
$\Omega $, for some algebraically closed field $\Omega $. By Theorem \ref{T:2.2v1} the point $(Y, \lambda _Y)\in \mathscr M_d(K)$ also has a limit $y\in \mathscr M_d(\Omega )$ given by a second family $(Y_R, \lambda _R)/R$. Let $P'$ be the complex on $X\times Y$ giving the composition of $\Phi _P$ with suitable twists by $(-2)$-curves such that after replacing $Y_R$ by a sequence of flops the map $\Phi _{P'}$ induces an isomorphism on crystalline cohomology on the closed fiber preserving plus or minus the ample cone. By the Cohen structure theorem we have $R\simeq \Omega [[t]]$, and $((X, \lambda ), Y, P')$ defines a point of $\mathscr S_d(\Omega ((t)))$.
Let $B$ denote the completion of the strict henselization of $\mathscr M_{\Z[1/d]}\times \mathscr M_{\Z[1/d]}$ at the point $(x, y)$. So $B$ is a regular complete local ring with residue field $\Omega $. Let $B'$ denote the formal completion of the strict henselization of $\overline {\mathscr S}_{d, \Z[1/d]}$ at the $\Omega ((t))$-point given by $((X, \lambda ), Y, P')$. So we obtain a commutative diagram
\begin{equation}\label{E:ringmagic}
\xymatrix{
B\ar[r]\ar[d]& \Omega [[t]]\ar[d]\\
B'\ar[r]& \Omega ((t)).}
\end{equation}
Over $B$ we have universal families $\mathscr X_B$ and $\mathscr Y_B$, and over the base changes to $B'$ we have, after trivializing the pullback of the gerbe $\mathscr S_{d, \Z[1/d]}\rightarrow \overline {\mathscr S}_{d, \Z[1/d]}$, a complex $\mathscr P_{B'}'$ on $\mathscr X_{B'}\times _{B'}\mathscr Y_{B'}$, which reduces to the triple $(X, Y, P')$ over $\Omega ((t))$. The map $B\rightarrow B'$ is a filtering direct limit of \'etale morphisms. We can therefore replace $B'$ by a finite type \'etale $B$-subalgebra over which all the data is defined and we still have the diagram \eqref{E:ringmagic}. Let $\overline B$ denote the integral closure of $B$ in $B'$ so we have a commutative diagram
$$
\xymatrix{
\text{\rm Spec} (B')\ar@{^{(}->}[r]\ar[rd]& \text{\rm Spec} (\overline B)\ar[d]\\
& \text{\rm Spec} (B),}
$$
where $\overline B$ is flat over $\Z[1/d]$ and normal. Let $Y\rightarrow \text{\rm Spec} (\overline B)$ be an alteration with $Y$ regular and flat over $\Z[1/d]$, and let $Y'\subset Y$ be the preimage of $\text{\rm Spec} (B')$. Lifting the map $B\rightarrow \Omega [[t]]$ to a map $\text{\rm Spec} (\widetilde R)\rightarrow Y$ for some finite extension of complete discrete valuation rings $\widetilde R/R$ and letting $C$ denote the completion of the local ring of $Y$ at the image of the closed point of $\text{\rm Spec} (\widetilde R)$ we obtain a commutative diagram
$$
\xymatrix{
C\ar[r]\ar[d]& \Omega [[t]]\ar[d]\\
C'\ar[r]& \Omega ((t)),}
$$
where $C\rightarrow C'$ is a localization, we have K3 surfaces $\mathscr X_C$ and $\mathscr Y_C$ over $C$ and a perfect complex $\mathscr P_{C'}'$ on $\mathscr X_{C'}\times _{C'}\mathscr Y_{C'}$ defining a Fourier-Mukai equivalence, with the triple $(\mathscr X_{C'}, \mathscr Y_{C'}, \mathscr P_{C'}')$ reducing to $(X, Y, P')$ over $\Omega ((t))$. By \cite[5.2.2]{TT} we can extend the complex $\mathscr P'_{C'}$ to a $C$-perfect complex $\mathscr P'_C$ on $\mathscr X_C\times _C\mathscr Y_C$ (here we use that $C$ is regular). It follows that the base change $(X_{\Omega [[t]]}, Y_{\Omega [[t]]}, P'_{\Omega [[t]]})$ gives an extension of $(X, Y, P')$ to $\Omega [[t]]$ all of whose reductions satisfy our condition $(\star )$.
This puts us in the setting of Proposition \ref{P:5.7}, and we conclude that there exists an isomorphism
$\sigma
:X\rightarrow Y$ (over $\Omega ((t))$, but as noted above we are allowed to make a field extension of $K$) such that the induced map on crystalline and \'etale cohomology
agrees with $\Phi _P\circ \alpha $ for some sequence $\alpha $ of spherical
twists along $(-2)$-curves in $X$ (using also Corollary \ref{C:5.8}). By the
same argument as in Remark \ref{R:6.10} it follows that $\sigma $ and $\Phi _{P}$ induce the same map on realizations,
which concludes the proof of Theorem \ref{T:1.2}. \qed
\end{pg}
\begin{rem} One consequence of the proof is that in fact any strongly filtered
equivalence automatically takes the ample cone to the ample cone, and not its
negative. This is closely related to \cite[4.1]{HMS}.
\end{rem}
\section{Characteristic $0$}\label{S:section9}
From our discussion of positive characteristic results one can also deduce the
following result in characteristic $0$.
\begin{thm}\label{T:8.1} Let $K$ be an algebraically closed field of
characteristic $0$, let $X$ and $Y$ be K3 surfaces over $K$, and let $\Phi
_P:D(X)\rightarrow D(Y)$ be a strongly filtered Fourier-Mukai equivalence
defined by an object $P\in D(X\times Y)$. Then there exists an isomorphism
$\sigma :X\rightarrow Y$ whose action on $\ell $-adic and de Rham cohomology
agrees with the action of $\Phi _P$.
\end{thm}
\begin{proof} It suffices to show that we can find an isomorphism $\sigma $
which induces the same map on $\ell $-adic cohomology as $\Phi _P$ for a single
prime $\ell $. For then by compatibility of the comparison isomorphisms with
$\Phi _P$, discussed in \cite[\S 2]{LO}, it follows that $\sigma $ and $\Phi _P$
also define the same action on the other realizations of the Mukai motive.
Furthermore as in Proposition \ref{P:extendfield} it suffices to prove the existence of $\sigma $ after making a field extension of $K$.
As in Proposition \ref{P:extendfield} let $I'$ denote the scheme of isomorphisms $\sigma :X\rightarrow Y$ as in the theorem. Note that since the action of such $\sigma $ on the ample cone is fixed, the scheme $I'$ is in fact of finite type.
Since $X$, $Y$, and $P$ are all locally finitely presented over $K$ we can
find a finite type integral $\Z$-algebra $A$, K3 surfaces $X_A$ and $Y_A$ over
$A$, and an $A$-perfect complex $P_A\in D(X_A\times _AY_A)$ defining a strongly
filtered Fourier-Mukai equivalence in every fiber, and such that $(X, Y, P)$ is
obtained from $(X_A, Y_A, P_A)$ by base change along a map $A\rightarrow K$.
The scheme $I'$ then also extends to a finite type $A$-scheme $I'_A$ over $A$.
Since $I'_A$ is of finite type over $A$, to prove that $I'$ is nonempty it
suffices to show that $I'_A$ has a nonempty fiber over $\mathbb{F}_p$ for
infinitely many primes $p$. This holds by Theorem \ref{T:1.2}.
\end{proof}
\section{Bypassing Hodge theory}\label{S:section10}
\begin{pg}
The appeal to analytic techniques implicit in the results of Section
\ref{S:section4}, where characteristic $0$ results based on Hodge theory are
used to deduce Theorem \ref{T:2.2v1}, can be bypassed in the following way using
results of \cite{Maulik} and \cite{LM}.
\end{pg}
\begin{pg} Let $R$ be a complete discrete valuation ring of equicharacteristic
$p>0$ with residue field $k$ and fraction field $K$. Let $X/R$ be a smooth K3
surface with supersingular closed fiber. Let $Y_K$ be a K3 surface over $K$
and $P_K\in D(X_K\times Y_K)$ a perfect complex defining a Fourier-Mukai
equivalence $\Phi _{P_K}:D(X_{\overline K})\rightarrow D(Y_{\overline K})$.
\end{pg}
\begin{thm}\label{T:GR} Assume that $X$ admits an ample invertible sheaf $L$
such that $p>L^2+4$. Then after replacing $R$ by a finite extension there exists
a smooth projective K3 surface $Y/R$ with generic fiber $Y_K$.
\end{thm}
\begin{proof}
Changing our choice of Fourier-Mukai equivalence $P_K$, we may assume that
$P_K$ is strongly filtered. Setting $M_K$ equal to $\text{det}(\Phi _{P_K}(L))$
or its dual, depending on whether $\Phi _{P_K}$ preserves ample cones, we get an
ample invertible sheaf on $Y_K$ of degree $L^2$. By \cite[2.2]{LM}, building on
Maulik's work \cite[Discussion preceding 4.9]{Maulik} we get a smooth K3 surface
$Y/R$ with $Y$ an algebraic space. Now after replacing $P_K$ by the composition
with twists along $(-2)$-curves and the model $Y$ by a sequence of flops, we can
arrange that the map on crystalline cohomology of the closed fibers induced by
$\Phi _{P_K}$ preserves ample cones. Let $P\in D(X\times _RY)$ be an extension
of $P_K$ and let $M$ denote $\text{det}(\Phi _P(L))$. Then $M$ is a line bundle
on $Y$ whose reduction is ample on the closed fiber. It follows that $M$ is also
ample on $Y$ so $Y$ is a projective scheme.
\end{proof}
\begin{pg} We use this to prove Theorem \ref{T:1.2} in the case of \'etale realization
in the following way. First observe that using the same argument as in Section
\ref{S:section8}, but now replacing the appeal to Theorem \ref{T:2.2v1} by the above
Theorem \ref{T:GR}, we get Theorem \ref{T:1.2} under the additional assumption that $X$
admits an ample invertible sheaf $L$ with $p>L^2+4$. By the argument of
Section \ref{S:section9} this suffices to get Theorem \ref{T:1.2} in characteristic $0$,
and by the specialization argument of Section \ref{S:section7} we then get also the
result in arbitrary characteristic.
\end{pg}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\section{\label{sec:intro}Introduction}
As quantum information processors become more complex, a key challenge is the validation and verification of integrated systems. Individual gates can be characterized by quantum process tomography \cite{Pauli_twirling_Chuang}, randomized benchmarking \cite{Emerson07,RB,RB3,RB4,Rand_Knill}, and related methods \cite{Flammia_tomo,Flammia_tomo2,Poulin_tomo,BlumeKohout2013,MerkelPRA2013,WallmanNJP2015}. Full characterization scales exponentially with the size of the gate, but efficient characterization is possible when the faulty gates have sparse descriptions \cite{qdyn_comp_sens} or only limited information, such as the average fidelity, is obtained \cite{RB5,RB6}. Efficient characterizations scale polynomially with the system size but can still become impractical for larger gate sizes. An alternative method for testing larger devices is to compare the physical algorithmic output to the expected algorithmic output. For many algorithms, like the quantum linear system algorithm \cite{HarrowPRL2009,CladerPRL2013}, the ideal output may not be known and the effect of errors on the output cannot be calculated.
Fortunately there are classes of quantum circuits that can be efficiently computed, with the prime example being circuits composed of only Clifford gates, which can be simulated efficiently by the Gottesman-Knill theorem \cite{CHP,Faster_than_CHP}. The circuits can then be decorated with random Pauli errors and the output can be sampled using Monte-Carlo methods. This Monte-Carlo sampling can be extended to include Clifford errors \cite{Cory} and Clifford gates conditional on measurements in a Pauli basis \cite{PRA_us,GutierrezPRA2015}. Since the Clifford group transforms Pauli errors to Pauli errors, all of the errors can be pushed to the end of the circuit. This transformation is the basis of fault-path methods, which identify the sets of errors that result in failure by following how Pauli operators propagate through the correction circuit \cite{AGP_method}. For low-distance codes, these methods are used to rigorously bound the fault-tolerant threshold of specific protocols. Exact calculations are not practical due to the exponential number of possible combinations of errors, and these methods rely on cutoffs that consider only a certain number of errors. This is well motivated by the reduced probability of having multiple errors and the limited distance of the codes.
Here we apply the fault-path method to algorithms made from Clifford circuits. While these algorithms provide at most only a polynomial advantage, they are ideal for testing the integration of many qubits into a quantum computer. Most quantum error correction codes expect that the errors are independent probabilistic Pauli operators. Implementing a non-fault-tolerant circuit of Clifford gates and testing the output distribution relative to this model provides confidence in the accuracy of this error model for a given implementation.
In contrast to quantum error correction codes, we find that the fault-path method can efficiently calculate the exact success rate for certain tree-like quantum algorithms in polynomial time for Pauli error models. We show this can be determined from the graph structure of the circuit and discuss how the cost of exact simulation can be related to the weight of the nodes and the number of cycles in the graph. We then apply our tools to the Bernstein-Vazirani algorithm and exactly simulate the success rate for circuits containing up to 1350 qubits \cite{BValgo}. Finally, we apply error truncation to our method to estimate the threshold of the Steane [[7,1,3]] code with Shor ancilla \cite{Steane_QEC,TomitaPRA2013}.
\section{\label{sec:background} Background and Definitions: Pauli Errors and Clifford Circuits }
\begin{figure}[tb]
\begin{minipage}{0.65\textwidth}
\includegraphics[width = 1\textwidth]{Fig1.pdf}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\caption{\label{fig:error_prop} Two types of error ($X$ and $Z$) propagating across a controlled-NOT, Hadamard, and Phase gates.}
\end{minipage}
\end{figure}
The Pauli operators on $n$ qubits are composed from the tensor product of the single-qubit Pauli operators $X$, $Y$, and $Z$, and the identity, $I$. The weight of a Pauli operator is the number of non-identity elements in the tensor product. For $n$ qubits there are $4^n$ Pauli operators. The Clifford group is defined as the group of unitary operations that transform Pauli operators to Pauli operators. The Clifford group can be generated from one- and two-qubit operations: \textit{CNOT}, $H$, and $S$. For additional information, we refer the reader to any quantum computation textbook \cite{MikeandIke}.
A Pauli error channel $\mathbf{\mathcal{E}}$ is equivalent to a random application of a set of Pauli operators. The action of the channel is defined by Kraus operators $\mathbf{\mathcal{E}}(\rho)=\sum_j \mathbf{A}_j \rho \mathbf{A}^\dagger_j$, where $\mathbf{A}_j=\sqrt{p_j}\mathbf{P}_j$, $\mathbf{P}_j$ is a Pauli operator, and $p_j$ is the probability that the operator is applied. We define the number of non-zero $p_j$ as the rank of the channel, $r$. Clifford operators map Pauli error channels to Pauli error channels, and although the weight of the Pauli operators can be changed, the rank of the channel is preserved. Pauli error channels compose with other Pauli error channels to create new Pauli error channels with a rank that is bounded by the product of the ranks of the channels or the maximum rank allowed by the system.
A standard model for errors is that each gate $g$ acting on $k$ qubits has an associated Pauli error channel $\mathbf{\mathcal{E}}_g$ composed of Pauli operators that also act on the same $k$ qubits, limiting the rank to $r_g \leq 4^k$. Assuming a circuit constructed from one- and two-qubit Clifford operators, the maximum rank for each error channel is $16$. It is very convenient to push all of the error operators to the end of the circuit. The subsequent Clifford operations transform the error channel to $\mathbf{\mathcal{E}}^\prime_g$ but preserve the rank. If there are $G$ gates, the Pauli error channel of the entire circuit can be composed from $G$ Pauli error channels of low rank. The cost of this composition determines whether we can efficiently determine the probability distribution of outcomes and the success rate.
It is convenient to introduce the notion of an error vector, $\mathbf{\Psi}$, which contains the $4^k$ probabilities for the state to suffer each specific Pauli error. Each Clifford gate, $g$, first transforms $\mathbf{\Psi}$ by mapping one Pauli error to another. This can be represented by a $4^k \times 4^k$ transformation matrix $\mathbf{T}_g$ with only $4^k$ non-zero entries, each equal to 1, which preserves the error-free entry of the error vector. Then, the associated error channel $\mathbf{\mathcal{E}}_g$ is applied, which in this representation is a $4^k \times 4^k$ error matrix $\mathbf{E}_g$ with $r_g$ distinct coefficients and $4^k r_g$ non-zero entries. The transformation matrices for $H$, $S$, and \textit{CNOT} are given graphically in Fig. \ref{fig:error_prop}, alongside the full-rank single-qubit error matrix. To calculate the full error vector of $k$ qubits after $G$ gates, we apply the formula:
\begin{equation}\label{eq:transform_state}
\mathbf{\Psi}_{final} = \left(\prod_{i=1}^{G} \mathbf{E}_i\mathbf{T}_i\right) \mathbf{\Psi}_{initial}.
\end{equation}
This calculation is impractical in general, but can be used for small problem sizes.
We often combine the error matrix and transformation matrix into a single bi-stochastic matrix, $\mathbf{M}_i = \mathbf{E}_i\mathbf{T}_i$. As per Fig. \ref{fig:error_prop}, $H$ exchanges $X$ and $Z$ errors, while Pauli operations such as $Z$ leave the Pauli error types unchanged. Assuming the same error matrices for the two gates, we present two example bi-stochastic matrices:
\begin{equation*}
\mathbf{M}_{Z} =
\begin{bmatrix}
p_I & p_X & p_Y & p_Z \\
p_X & p_I & p_Z & p_Y \\
p_Y & p_Z & p_I & p_X \\
p_Z & p_Y & p_X & p_I
\end{bmatrix}, ~ \mathbf{M}_{H} =
\begin{bmatrix}
p_I & p_Z & p_Y & p_X \\
p_X & p_Y & p_Z & p_I \\
p_Y & p_X & p_I & p_Z \\
p_Z & p_I & p_X & p_Y
\end{bmatrix}
\end{equation*}
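The two displayed matrices can be transcribed directly into NumPy; the probability values below are hypothetical, chosen only to illustrate that both matrices are bi-stochastic (every row and column sums to one):

```python
import numpy as np

# Hypothetical Pauli-channel probabilities with p_I + p_X + p_Y + p_Z = 1;
# the values are illustrative, not taken from the paper.
p_I, p_X, p_Y, p_Z = 0.97, 0.01, 0.01, 0.01

# Bi-stochastic matrices M = E T in the Pauli-error basis (I, X, Y, Z),
# transcribed from the M_Z and M_H matrices above.
M_Z = np.array([[p_I, p_X, p_Y, p_Z],
                [p_X, p_I, p_Z, p_Y],
                [p_Y, p_Z, p_I, p_X],
                [p_Z, p_Y, p_X, p_I]])
M_H = np.array([[p_I, p_Z, p_Y, p_X],
                [p_X, p_Y, p_Z, p_I],
                [p_Y, p_X, p_I, p_Z],
                [p_Z, p_I, p_X, p_Y]])

# Every row and every column sums to 1, confirming bi-stochasticity.
for M in (M_Z, M_H):
    assert np.allclose(M.sum(axis=0), 1.0)
    assert np.allclose(M.sum(axis=1), 1.0)
```

Applying either matrix to the error-free vector $[1,0,0,0]$ returns the gate's own error distribution, as expected.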
Let us examine two simple scenarios. In the first example, there are $G$ qubits, each acted on by a single one-qubit gate, and each gate has a distinct rank-four error channel. In this case, every $\mathbf{\mathcal{E}}_g$ is equivalent to $\mathbf{\mathcal{E}}^\prime_g$, since there are no sequential Clifford gates. Finding the complete Pauli error channel requires multiplying all combinations of error probabilities to yield $4^G$ coefficients, which is inefficient in the circuit size. If we define the success probability as the probability of no qubit having an error, we only need to consider the $I$ component of each error channel, yielding a success rate $P_{I,G}=\prod_g p_{I,g}$ that can be calculated efficiently with $G$ multiplications.
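This first scenario amounts to a single product over identity probabilities. A minimal sketch, with hypothetical per-gate identity probabilities drawn near one:

```python
import numpy as np

# First example: G independent one-qubit gates, each with its own identity
# probability p_I (hypothetical values near 1).  The overall success rate
# is just the product of the p_I's, costing only G multiplications.
rng = np.random.default_rng(0)
G = 50
p_I = 1.0 - rng.uniform(0.0, 0.01, size=G)   # each gate fails with < 1% chance
P_success = float(np.prod(p_I))
```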
In a second example, there is one qubit with $G$ one-qubit gates, each with a distinct rank-four error channel. Now the gates are in sequence and the channels are transformed by the gates to $\mathbf{\mathcal{E}}^\prime_g$. Unlike the previous example, the final rank of the error channel is bounded by 4. We can compose two error channels by multiplying the 4 coefficients of each channel to yield only 4 coefficients. As a result, the complete error distribution can be found efficiently with only $16 G$ multiplications of error probabilities after the error transformation. Generalizing to $k$ qubits, we require $16^k G$ multiplications, which is efficient in $G$ but inefficient in $k$. Formally, we calculate the bi-stochastic matrix for a sub-circuit $F$:
\begin{equation} \label{eq:matrix_multi}
\mathbf{M}_F = \prod_{g \in F} \mathbf{M}_g
\end{equation}
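The sub-circuit composition of Eq. \ref{eq:matrix_multi} can be sketched as a chain of $4\times4$ matrix products; here we use $G=20$ identical one-qubit depolarizing channels as a hypothetical example:

```python
import numpy as np

def compose_subcircuit(matrices):
    """Multiply the bi-stochastic matrices of a sub-circuit F, with later
    gates acting on the left, to obtain the composed matrix M_F."""
    M_F = np.eye(4)
    for M in matrices:
        M_F = M @ M_F
    return M_F

# Hypothetical example: 20 identical one-qubit depolarizing channels.
p = 0.01
M = np.full((4, 4), p / 3)
np.fill_diagonal(M, 1 - p)

psi = compose_subcircuit([M] * 20) @ np.array([1.0, 0.0, 0.0, 0.0])
# psi[0] is the probability of no residual error; psi still sums to 1.
```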
The crux of our method for calculating success rates is to cut every circuit into these two examples by identifying circuit components whose failure rates can be calculated independently and by limiting the size of the dependent blocks to a small number of qubits. If the circuit naturally has small dependency, we can calculate the success rate exactly; otherwise, we use approximations to truncate the dependency.
\section{Fault Path Method}
\begin{figure}[tb]
(a) \includegraphics[width = 0.45\textwidth]{Fig2a.pdf} (b) \includegraphics[width = 0.45\textwidth]{Fig2b.pdf}
\caption{\label{fig:circuits} Demonstration of a standard circuit converted to a directed graph which contains: \textbf{a} an undirected cycle and \textbf{b} a tree-like pattern. Intersecting lines represent multi-qubit gates.}
\end{figure}
We start with a circuit of $G$ one- and two-qubit gates. We convert the circuit to a directed graph where each gate is a node with incoming and outgoing edges corresponding to the qubits acted on by the gate. A fault-path is defined by starting at an output qubit of the circuit and then walking the graph backwards to the input qubits. The fault-path shows where errors can arise that may propagate to the final qubit output. We refer to our methods for using fault-paths to calculate or estimate success rates as the Fault-Path Tracing (FPT) method.
Two circuits and their related graphs are shown in Fig. \ref{fig:circuits}. The fault-path, $fp (q)$, contains all gates where errors can be introduced to the final state of qubit $q$ (Construction \ref{con:fault-path}). To calculate the error on that qubit for circuits composed of one- and two-qubit gates, we break the fault-path into sub-paths of single-qubit gates connected by two-qubit gates. We can calculate the error matrix for the single-qubit gate paths efficiently as described earlier. Starting from the input nodes, we then combine these single-qubit error matrices with the two-qubit error transformation matrix and gate error to generate a two-qubit error matrix on the outputs. We then ask whether both output qubits are in the fault-path. If so, we keep the two-qubit error matrix. If not, we reduce the two-qubit error matrix to a one-qubit error matrix by tracing over the error state of the output qubit that is off the path. Either way, we then continue along the graph towards the output qubit.
For tree-like graphs and a single fault-path, we can always reduce to a single-qubit error matrix after each gate. This simplification allows us to work with only single-qubit error matrices, except at the two-qubit nodes where we need to calculate a two-qubit error matrix before reducing it. The result is an efficient method for calculating error states of single qubits without knowledge of the error states of other qubits (Construction \ref{con:qubit_success}). For undirected cycles in the underlying graph, the error matrices can continue to grow. In Fig. \ref{fig:circuits}a, we see that a two-qubit error matrix must be kept for a few nodes and that a three-qubit error matrix must briefly be constructed for the triangle-shaped loop. If we treat the undirected cycle as a single three-qubit Clifford gate, the graph becomes tree-like again, but a three-qubit error matrix must still be generated. The number of qubits that input to the undirected cycle determines the size of the error matrix that must be constructed.
For any algorithm, a lower-bound on the success probability can be determined by calculating the independent probability of each output qubit having no error and then multiplying these probabilities. This will overestimate the error, since output errors on different qubits are correlated. In order to calculate the correlations, we need to look at the overlap between fault-paths that affect our output of interest.
Our procedure for calculating error rates from overlapping fault-paths is described in Construction \ref{con:gen_success}. The four cases mentioned are: error on no branch, error on the control branch, error on the target branch, and error on both branches. We often assume that the output qubit is measured in a specific basis, $X$ or $Z$. As a result, the fault-path is simplified, reducing the Pauli errors to simply an error ($X$ or $Y$ for $Z$ measurements) or no error ($I$ or $Z$ for $Z$ measurements). We refer to this fault-path as $fp (q;X)$. By breaking the overlapping fault-points into non-overlapping fault-points, we can calculate the correlations exactly and handle each subgraph exactly. However, if there is an undirected cycle with more than 2 qubit inputs or 2 qubit outputs, this method can no longer exactly calculate the success rate. Instead, a lower bound is used to estimate the rate for each subgraph.
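The measurement-basis simplification above is a simple marginalization of the Pauli distribution; a minimal sketch for a $Z$-basis measurement:

```python
import numpy as np

def z_basis_reduce(psi):
    """For a qubit measured in the Z basis, only X and Y flip the outcome,
    so the Pauli distribution (I, X, Y, Z) collapses to (no error, error).
    This is the simplification behind fault-paths of the form fp(q; X)."""
    return np.array([psi[0] + psi[3], psi[1] + psi[2]])

z_basis_reduce(np.array([0.97, 0.01, 0.01, 0.01]))  # -> [0.98, 0.02]
```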
\begin{construct}{\label{con:fault-path} Finding a fault-path}{The particular circuit being studied, $\mathbb{C}$; the qubit the fault-path is being formed from, $q$; and the assumed end error type, $\mathbb{E}$ if any.}{The fault-path containing a list of potential fault-points: $fp(q;\mathbb{E})$.}
\item Find the last gate implemented on $q$ in $\mathbb{C}$ $\rightarrow$ $g$.
\item The first fault-point is ($g$;$\mathbb{E}$). If no $\mathbb{E}$ is specified, then use two points ($X$ and $Z$).
\item Based on $g$ and each $\mathbb{E}$, use the reverse error propagation rules to find all potential error sources $\rightarrow$ $\mathbb{S}$.
\item \textbf{for source in $\mathbb{S}$ do}
\item \hspace{10pt} Find the gate previous to $g$ that corresponds to the source, which may or may not be on the same $q$ $\rightarrow$ $g$.
\item \hspace{10pt} Determine the new error type after error transformation $\rightarrow$ $\mathbb{E}$.
\item \hspace{10pt} $fp(q;\mathbb{E})$ += ($g$;$\mathbb{E}$).
\item \textbf{end for}
\item Repeat steps 3-8 until the beginning of the circuit is reached.
\end{construct}
\begin{construct}{\label{con:qubit_success} Probability of success for single tree-like fault-path}{The fault-path for a single qubit, $fp$; a dictionary of bistochastic matrices for each gate on the fault-path, $\mathbb{M}$.}{Probability of the qubit yielding the correct output, $\bar{\varepsilon}$.}
\item Ensure that points in $fp$ are well-ordered based on the circuit, $\mathbb{C}$.
\item Let $\Psi$ be [1,0,0,0].
\item \textbf{for gate in $fp$ do}
\item \hspace{10pt} Find the matrix in $\mathbb{M}$ corresponding to the gate.
\item \hspace{10pt} If the gate is a two-qubit gate, condense the matrix to a $4\times4$ matrix.
\item \hspace{10pt} Apply the matrix to $\Psi$ using \textbf{Eq. \ref{eq:matrix_multi}}.
\item \textbf{end for}
\item $\bar{\varepsilon}$ is the first element in $\Psi$.
\end{construct}
\begin{construct}{\label{con:gen_success}Approximate probability of success for multiple fault-paths}{The list of fault-paths, $\mathbb{F}$.}{Probability of all fault-paths having no error, $\bar{\varepsilon}$, and the probability of all fault-paths having error, $\varepsilon$.}
\item Split $\mathbb{F}$ into independent groups.
\item \textbf{for each independent group do}
\item \hspace{10pt} Find the fault-points common to all fault-paths, and remove points from each path.
\item \hspace{10pt} Determine if all fault-paths (with common points removed) separate into $n$ independent branches.
\item \hspace{10pt} For the common fault-points, find the probabilities of all $2^n$ possible cases (all combinations of each branch having or not having error).
\item \hspace{10pt} Build the probability state, $\mathbf{\Psi}$, from these rates.
\item \hspace{10pt} \textbf{IF} Branches are independent \textbf{THEN} call \textbf{Construction \ref{con:gen_success}} for each branch.
\item \hspace{10pt} \textbf{IF} Branches are dependent (therefore part of a cycle) \textbf{THEN} use $\varepsilon = \prod_{i} \epsilon_i$, for each fault-path (worst case).
\item \hspace{10pt} Using the branch error rates, build a stochastic matrix.
\item \hspace{10pt} Apply matrix to $\mathbf{\Psi}$.
\item \hspace{10pt} $\mathbf{\Psi}$[first] $\rightarrow$ Success rate.
\item \hspace{10pt} $\mathbf{\Psi}$[last] $\rightarrow$ Error rate.
\item \textbf{end for}
\item $\bar{\varepsilon}$ is the product of all independent groups success rates.
\item $\varepsilon$ is the product of all independent groups error rates.
\end{construct}
\subsection{\label{sec:BV}Bernstein-Vazirani Algorithm}
\begin{figure}[tb]
\begin{minipage}{0.5\textwidth}
\includegraphics[width = \textwidth]{Fig3.pdf}
\end{minipage}
\begin{minipage}{0.05\textwidth}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\caption{\label{fig:faultpath_example} The Bernstein-Vazirani Algorithm for one bit, with a Hamming weight of one. A possible X error on the first qubit could have resulted from various previous gates, found through backwards error propagation rules. The controlled-X gate leads to a branch in the fault-path. Each possible error source is a fault-point and has an associated error type that affects the output.}
\end{minipage}
\end{figure}
\begin{figure}[tb]
(a) \includegraphics[width = 0.45\textwidth]{Fig4a.pdf} (b) \includegraphics[width = 0.45\textwidth]{Fig4b.pdf}
\caption{\label{fig:tree} \textbf{a} The Bernstein-Vazirani Algorithm for three bits, with a Hamming weight of three, showing how errors spread in the circuit. Only part of the fault-paths are highlighted to emphasize the tree-pattern formed from fault-paths. \textbf{b} The same circuit represented as a directed-graph with the full fault-path labeled.}
\end{figure}
The Bernstein-Vazirani Algorithm finds the value of an unknown string, $s$, composed of $m$ unknown bits \cite{BValgo}. It requires the oracle operation $U_{BV}(s)$ that changes the output qubit state $y$ based on the data qubits $x$ and the function $f_s(x)$:
\begin{eqnarray*}\label{eq:bv_oracle}
f_s(x) = \vec{x} \cdot \vec{s} &=& \left( x_0s_0 + x_1s_1 + \dots + x_{m-1}s_{m-1} \right)\mod 2 \\
U_{BV}(s) \ket{x} \ket{y} &=& \ket{x} \ket{y\oplus f_s(x)}.
\end{eqnarray*}
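The oracle function is a parity of the bits of $x$ selected by $s$; a short sketch, which also demonstrates the classical $m$-query strategy discussed below:

```python
def f_s(x, s):
    """The oracle function f_s(x) = x . s mod 2 defined above;
    x and s are bit lists of length m."""
    return sum(xi * si for xi, si in zip(x, s)) % 2

# Classical recovery: probing with the m unit strings reveals s bit by bit.
s = [1, 0, 1, 1]
m = len(s)
recovered = [f_s([int(j == i) for j in range(m)], s) for i in range(m)]
assert recovered == s
```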
Like all oracle-based algorithms, the construction of the oracle is not specified. We choose the simplest oracle, consisting of CNOTs between the output qubit and each data qubit for which the corresponding bit of $s$ is 1. The number of gates depends on the Hamming weight of $s$ and, to determine worst-case probabilities, we assume that $s$ has maximum Hamming weight.
Classically, one sends in data strings with a single bit flipped and determines $s$ in $m$ steps. Quantum mechanically, by using Hadamard transformations and a Pauli $Z$, one can obtain $s$ in a single oracle call. For this procedure, success is having no bit flips on the data qubits. The output qubit is free to have any error.
Each of the data qubits is measured in the $Z$ basis, implying that only $X$/$Y$ errors are malignant. $fp_Z(q)$ for each qubit is found using Construction \ref{con:fault-path}. Fig. \ref{fig:faultpath_example} shows the fault-path branching due to the multi-qubit gate. By mapping the overlap between all of the fault-paths, a tree structure emerges. To emphasize the tree structure in Fig. \ref{fig:tree}a, some fault-points were deliberately left unhighlighted. This tree structure meets the main assumption that none of the branches cross each other. To find the success rate for this 3-qubit circuit, each highlighted portion is analyzed separately. Construction \ref{con:qubit_success} gives the error probability state of the Q123 region, which represents the fault-points that affect all three data qubits. By tensoring this state with a unit vector, the first CNOT error matrix can be applied to this state to produce a 16-dimensional vector. This larger vector can be divided into four distinct cases: no error ($\bar{\epsilon}$), error on the control branch ($\epsilon_{c}$), error on the target branch ($\epsilon_{t}$), and error on both branches ($\epsilon_{ct}$):
\begin{equation}\label{eq:black_vector}
\begin{bmatrix}
\bar{\epsilon}\\
\epsilon_{c} \\
\epsilon_{t} \\
\epsilon_{ct}
\end{bmatrix}_{Q123}
\end{equation}
\noindent After the overlap, each branch is calculated recursively. Since the control branch only contains one fault-path, the probability of no error, $\bar{\epsilon}_{Q1}$, can be found using Construction \ref{con:qubit_success}. The target branch contains two fault-paths, which have a second overlap region and two additional branches. Similar to the Q123 region, a four-case vector can be found for the Q23 region:
\begin{equation}\label{eq:orange_vector}
\begin{bmatrix}
\bar{\epsilon}\\
\epsilon_{c} \\
\epsilon_{t} \\
\epsilon_{ct}
\end{bmatrix}_{Q23}
\end{equation}
\noindent Similar to before, after the Q23 overlap, the control and target branches have one fault-path each. The probabilities of no error, $\bar{\epsilon}_{Q2}$ and $\bar{\epsilon}_{Q3}$ respectively, are found using Construction \ref{con:qubit_success}. All of these error rates can be combined using Eq. \ref{eq:overlap_matrix} to find the success rate. By dividing the circuit into parts at the nodes, the matrices do not change size regardless of the number of qubits.
\begin{equation*}
\begin{bmatrix}
\bar{\epsilon} & \epsilon_{c} & \epsilon_{t} & \epsilon_{ct} \\
\epsilon_{c} & \bar{\epsilon} & \epsilon_{ct} & \epsilon_{t} \\
\epsilon_{t} & \epsilon_{ct} & \bar{\epsilon} & \epsilon_{c} \\
\epsilon_{ct} & \epsilon_{t} & \epsilon_{c} & \bar{\epsilon}
\end{bmatrix}_{Q2,Q3}
\begin{bmatrix}
\bar{\epsilon}\\
\epsilon_{c} \\
\epsilon_{t} \\
\epsilon_{ct}
\end{bmatrix}_{Q23}
=
\begin{bmatrix}
\bar{\epsilon}\\
~ \\
~ \\
\epsilon
\end{bmatrix}_{Q2,Q3,Q23}
\end{equation*}
\begin{equation}\label{eq:overlap_matrix}
\begin{bmatrix}
\bar{\epsilon} & \epsilon_{c} & \epsilon_{t} & \epsilon_{ct} \\
\epsilon_{c} & \bar{\epsilon} & \epsilon_{ct} & \epsilon_{t} \\
\epsilon_{t} & \epsilon_{ct} & \bar{\epsilon} & \epsilon_{c} \\
\epsilon_{ct} & \epsilon_{t} & \epsilon_{c} & \bar{\epsilon}
\end{bmatrix}_{(Q1),(Q2,Q3,Q23)}
\begin{bmatrix}
\bar{\epsilon}\\
\epsilon_{c} \\
\epsilon_{t} \\
\epsilon_{ct}
\end{bmatrix}_{Q123}
=
\begin{bmatrix}
Success\\
~ \\
~ \\
Error
\end{bmatrix}
\end{equation}
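The matrix-vector step in Eq. \ref{eq:overlap_matrix} is a single $4\times4$ stochastic product per overlap region; a minimal sketch with hypothetical branch rates:

```python
import numpy as np

def branch_matrix(no_err, e_c, e_t, e_ct):
    """Stochastic matrix built from branch rates (no error, control-only,
    target-only, both), matching the structure of the matrices above."""
    return np.array([[no_err, e_c,    e_t,    e_ct],
                     [e_c,    no_err, e_ct,   e_t],
                     [e_t,    e_ct,   no_err, e_c],
                     [e_ct,   e_t,    e_c,    no_err]])

# Hypothetical rates for the Q2/Q3 branches and the Q23 overlap vector.
M = branch_matrix(0.970, 0.012, 0.012, 0.006)
psi_Q23 = np.array([0.960, 0.015, 0.015, 0.010])

out = M @ psi_Q23
success, error = out[0], out[3]   # first and last entries, as in the text
```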
As with the lower-bound method, various other sub-sets of the Pauli channel can be found by exchanging $\epsilon$s and $\bar{\epsilon}$s. For example, consider the scenario where the first and third qubits have no error, but the second qubit does. To solve for this probability, only a minor exchange of the error rates for the second qubit, $\bar{\epsilon}_{r}$ and $\epsilon_r$, is necessary:
\begin{equation*}
\begin{bmatrix}
\bar{\epsilon} & \epsilon_{c} & \epsilon_{t} & \epsilon_{ct} \\
\epsilon_{c} & \bar{\epsilon} & \epsilon_{ct} & \epsilon_{t} \\
\epsilon_{t} & \epsilon_{ct} & \bar{\epsilon} & \epsilon_{c} \\
\epsilon_{ct} & \epsilon_{t} & \epsilon_{c} & \bar{\epsilon}
\end{bmatrix}_{Q2,Q3}
\rightarrow
\begin{bmatrix}
\epsilon_{c} & \bar{\epsilon} & \epsilon_{ct} & \epsilon_{t} \\
\bar{\epsilon} & \epsilon_{c} & \epsilon_{t} & \epsilon_{ct} \\
\epsilon_{ct} & \epsilon_{t} & \epsilon_{c} & \bar{\epsilon} \\
\epsilon_{t} & \epsilon_{ct} & \bar{\epsilon} & \epsilon_{c}
\end{bmatrix}_{Q2,Q3}
\end{equation*}
\subsection{\label{sec:QECC}Steane-Shor QECC}
The FPT method was previously used to evaluate syndrome extraction methods for the Steane code on a model ion trap architecture \cite{TomitaPRA2013}. Here we describe the details of the process for a specific syndrome extraction method assuming a quantum machine without geometry, i.e. two-qubit gates are possible between any qubits. The presented FPT method for quantum error correction is an extension and generalization of the previous method described in Ref. \cite{Tomita} and used in Ref. \cite{TomitaPRA2013}.
For distance-3 codes, all single-qubit errors can be decoded. For the Steane code, $X$ and $Z$ errors are decoded independently, allowing some two-qubit errors to be fixed. The success rate is therefore the probability that, after the correction is applied, the data qubits do not have two errors of the same type on two different qubits. Unlike before, this rate allows multiple correlated output errors, which renders the previous methods inefficient. To reduce the size of the circuit, every syndrome is assumed to be independent, which means they can be analyzed separately. The syndrome is divided into three sub-groups: detectable fault-paths, $S_d$, undetectable fault-paths, $S_u$, and ancilla fault-paths, $S_a$. Detectable fault-paths are sub-groups of data fault-paths where errors will affect the ancilla measurement. In contrast, undetectable fault-paths are those fault-points where the errors will not affect the ancilla measurement. Finally, ancilla fault-paths are the complete fault-paths from ancilla qubits. For our FPT method, we assume these three categories share no fault-points in common. This implies that a single error in any of the three sub-groups will result in a single data-qubit error. Since each FPT calculation depends on the individual gate errors, the fault-path only produces a pseudothreshold curve, not a real threshold point. To find the real threshold, the circuit is encoded to level $k$ and the error matrices are recursively modified to reflect the level-$(k-1)$ error rates. The method is outlined in Construction \ref{con:QECC_success}.
\begin{construct}{\label{con:QECC_success} Probability of QECC successful}{The code style, the error correcting style, and the current level, $k$.}{The probability of error at level $k+1$, $\varepsilon$.}
\item If the matrix dictionary is not populated at level $k$, populate it by calculating rates for all gates with the level $k-1$ QECC circuit.
\item Based on the code style and the error correcting style, make the circuit, $\mathbb{C}$.
\item \textbf{for error\_type in [X, Z]}
\item \hspace{10pt} Find all fault-paths for data qubits $\rightarrow$ D.
\item \hspace{10pt} Find all fault-paths for ancilla qubits $\rightarrow$ A.
\item \hspace{10pt} \textbf{for path in D do}
\item \hspace{10pt} \hspace{10pt} Separate path into $S_d$ and $S_u$
\item \hspace{10pt} \textbf{end for}
\item \hspace{10pt} \textbf{for path in A do}
\item \hspace{10pt} \hspace{10pt} Separate path into $S_a$ and benign fault-points
\item \hspace{10pt} \textbf{end for}
\item \hspace{10pt} Find $\varepsilon_{d}$, $\varepsilon_{u}$, and $\varepsilon_{a}$
\item \hspace{10pt} $\overline{\varepsilon_{error\_type}}$ = $\overline{\varepsilon_{d}}\overline{\varepsilon_{u}}\overline{\varepsilon_{a}} + \varepsilon_{d}\overline{\varepsilon_{u}}\overline{\varepsilon_{a}} + \overline{\varepsilon_{d}}\varepsilon_{u}\overline{\varepsilon_{a}} + \overline{\varepsilon_{d}}\overline{\varepsilon_{u}}\varepsilon_{a}$
\item \textbf{end for}
\item $(1-\varepsilon) = (1-\varepsilon_X) (1-\varepsilon_Z)$
\end{construct}
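The final combination steps of Construction \ref{con:QECC_success} can be sketched directly; the group error rates below are hypothetical placeholders for the values the construction computes:

```python
def syndrome_success(e_d, e_u, e_a):
    """Combination step for one error type: the extraction succeeds with no
    error anywhere, or with a single error confined to exactly one of the
    detectable, undetectable, or ancilla groups (a correctable fault)."""
    nd, nu, na = 1 - e_d, 1 - e_u, 1 - e_a
    return nd * nu * na + e_d * nu * na + nd * e_u * na + nd * nu * e_a

# Hypothetical group error rates; combine X and Z as in the last step.
e_X = 1 - syndrome_success(1e-3, 1e-3, 1e-3)
e_Z = 1 - syndrome_success(1e-3, 1e-3, 1e-3)
eps = 1 - (1 - e_X) * (1 - e_Z)   # level-(k+1) error rate
```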
The exact procedure to find $\varepsilon_{d}$, $\varepsilon_{u}$, and $\varepsilon_{a}$ varies with each QECC. Here we describe how it is applied to the Steane QECC with Shor ancilla and the decoding scheme proposed by DiVincenzo and Aliferis to account for the overlap between $S_d$ and $S_a$ in each syndrome \cite{Steane}. An example syndrome measurement circuit is shown in Fig. \ref{fig:steane_shor}.
This QECC measures each syndrome ($X$ and $Z$) three times and employs a majority vote to ensure accurate corrections. Since each syndrome is independent, calculations can be reduced by assuming $\varepsilon_{d1} = \varepsilon_{d2} = \varepsilon_{d3}$. For each syndrome, the fault-paths for the DiVincenzo and Aliferis correction are found first. Based on the probability that an error will spread to both the ancilla measurements and the data measurements, additional gates are added to the data qubits to represent the probability of a correction occurring. For the Steane-Shor QECC, the detectable and ancilla groups share a number of fault-points; therefore, the overlap between these groups is treated as a fourth group, $S_b$. The data-qubit fault-paths are divided between the undetectable and detectable groups, while the remaining ancilla fault-paths remain intact. Construction \ref{con:gen_success} is used to find $\varepsilon_{d}$, $\varepsilon_{u}$, $\varepsilon_{b}$, and $\varepsilon_{a}$. For this particular QECC, a single error in any of the four categories will render the entire syndrome faulty. Using the probability of a single $X$ or $Z$ syndrome measurement being faulty, the probability of the three syndromes giving the right correction is easy to calculate.
In general, this method is accurate when there is very little or no overlap between $S_d$ and $S_a$. In addition, many QECCs require decoding schemes to reduce the number of relevant qubits and account for any classical computations. Without these decoding schemes, the number of possible outcomes quickly renders the FPT method ill-suited. In general, the FPT method cannot simultaneously calculate multiple parts of the Pauli channel. To find the full Pauli channel exactly requires $G$ $4^m \times 4^m$ matrices where $G$ is the number of gates and $m$ is the number of data and ancilla qubits. These matrices would act on a size $4^m$ probability error state vector. Any correction steps would also need to be represented as $4^m \times 4^m$ matrices, as no classical corrections can be applied in this context.
\begin{figure}[tb]
\includegraphics{Fig5.pdf}
\caption{\label{fig:steane_shor} A single syndrome measurement for the Steane-Shor QECC with DiVincenzo decoding. The method generates an undirected cycle in the circuit diagram, precluding the use of our methods for tree-like circuits.}
\end{figure}
\section{\label{sec:results}Results}
All matrix and vector math is done using the NumPy python package \cite{NumPy}.
\subsection{\label{sec:BV_results}Bernstein-Vazirani Algorithm}
\begin{figure}[tb]
\begin{center}
(a) \includegraphics[width = 0.7\textwidth]{Fig6a.pdf}
(b) \includegraphics[width = 0.7\textwidth]{Fig6b.pdf}
\end{center}
\caption{\label{fig:accuracy}\textbf{a} Comparison of exact and approximate FPT methods to Monte Carlo for the Bernstein-Vazirani algorithm with CNOT error rate = $1.0\times 10^{-3}$ and Hamming weight equal to the string size. \textbf{b} Here we vary the error rate for a string size and Hamming weight equal to 6.}
\end{figure}
For testing purposes, we model errors as Markovian depolarizing noise. Depolarizing noise assigns every gate the same error rate, $\epsilon$. Since single-qubit gates have three types of error ($X$, $Y$, and $Z$), each type of error has an equal chance of occurring ($\frac{\epsilon}{3}$). For two-qubit gates, this fraction becomes $\frac{\epsilon}{15}$ to account for the additional types of error ($XX$, $YZ$, etc.).
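The depolarizing distribution described above generalizes to $k$-qubit gates, where each of the $4^k - 1$ non-identity Pauli errors occurs with probability $\epsilon/(4^k - 1)$; a short sketch:

```python
import itertools

def depolarizing_probs(eps, k):
    """Depolarizing model from the text: each of the 4**k - 1 non-identity
    Pauli errors on a k-qubit gate occurs with probability eps/(4**k - 1),
    i.e. eps/3 for one-qubit gates and eps/15 for two-qubit gates."""
    probs = {}
    for combo in itertools.product('IXYZ', repeat=k):
        label = ''.join(combo)
        probs[label] = 1 - eps if set(label) == {'I'} else eps / (4**k - 1)
    return probs

p1 = depolarizing_probs(1e-3, 1)   # p1['X'] == 1e-3 / 3
p2 = depolarizing_probs(1e-3, 2)   # p2['XY'] == 1e-3 / 15
```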
When comparing the fault-path method to Monte Carlo simulations, there are two main parameters: accuracy of the success rate and computation speed. We tested both of these parameters against two circuit variables: the gate error rate, $\epsilon$, and the size of the unknown string, $s$. The Monte Carlo results consisted of many trials. Each ($\epsilon$,$s$) combination was simulated $\frac{10 \cdot s}{\epsilon}$ times with a minimum of 100,000 samples, and each combination is an average of at least three trials. As seen in Fig. \ref{fig:accuracy}, the success rate behavior is reasonable: it decreases for higher error rates and increases for smaller circuit sizes.
The exact FPT method accurately predicts all Monte Carlo results, both when the size of the string and when the error rate are varied (Fig. \ref{fig:accuracy}). In contrast, the lower-bound FPT method has regions of ($\epsilon$,$s$) where it is more accurate than others. As the string size increases, the lower-bound method loses accuracy at an exponential rate. Comparatively, at error rates less than 0.002 and higher than 0.4, the percent error is less than 5\%, while the region in between has percent errors as high as 60\%. In general, the lower-bound method reasonably predicts the correct success rate, with a percent error of less than 5\%, when $s\cdot \epsilon < 0.01$.
Both the exact FPT method and the lower-bound FPT method consistently take less time than Monte Carlo, as expected for an analytical method. Because the fault-path method for tree circuits is fully independent of the error rate, its timing does not change with error rate, unlike Monte Carlo methods.
\subsection{\label{sec:QECC_results}Steane-Shor QECC}
\begin{figure}[tb]
\includegraphics[width = 0.95\textwidth]{Fig7.pdf}
\caption{\label{fig:steane} The threshold curves based on two methods: the FPT and Monte Carlo simulations for a EC circuit. The FPT shows the threshold curve for different levels of encoding to find the threshold. MC results were found at level one. The AGP result represents the predicted threshold at an infinite level.}
\end{figure}
A key figure of merit for any error-correcting code is the point where the logical error rate drops below the physical error rate. This first error threshold is called the pseudothreshold. The threshold is defined for a code family and is the error rate below which one can achieve arbitrarily low failure probability by increasing the code distance. Fig. \ref{fig:steane} compares the FPT method to Monte Carlo. We expect Monte Carlo to give exact results, but it requires more statistics as error rates are reduced \cite{new}. Here we use it to benchmark the pseudothreshold for an isolated error correction implementation. We see that the FPT method yields similar results.
Using the fault-path tracing method, the threshold curve was found for the first five levels of encoding (Fig. \ref{fig:steane}). The Steane-Shor circuit does not follow the binary-tree pattern; therefore, the FPT method only produces a lower bound on the threshold. It estimates the pseudothreshold at $3.25\times 10^{-4}$, which is lower than the Monte Carlo simulations. Since the difference between these two curves is a second-degree polynomial, this emphasizes how the tracing method misses some errors that cancel. Under the assumption that logical measurement and preparation operations have failure rates as if they were transversal, a level-$k$ circuit can be analyzed in terms of level-$(k-1)$ error rates. Our method estimates the real threshold at $1.91\times 10^{-4}$. Here we examine a circuit of identity followed by error correction.
The method of Aliferis, Gottesman, and Preskill (AGP), based on fault-paths and malignant pair counting, produces a conservative estimate of the threshold. We implemented the AGP method using code from Andrew Cross \cite{Cross}. We were able to predict a memory threshold assuming error correction, an identity gate, and then error correction. We found a threshold of $5.91 \times 10^{-5}$. We expect that the real Pauli error threshold lies above both our estimate and this AGP estimate.
\section{Conclusions}
The analytic methods based on fault-paths can be used to accurately assess the integrated performance of quantum devices. We have shown the utility of fault-paths for understanding the failure of simple algorithms and error-correcting codes. Although the method is limited to non-universal circuits with relatively simple structures, it is scalable to many qubits. We expect that testing the performance of faulty quantum computers on easy problems will be an important method for showing that errors between gates are sufficiently independent for error correction to work.
The work also suggests that a tensor network approach could be applied to calculate the error of the circuits \cite{OrusAnnPhys2014}. Tensor networks are typically used to describe quantum states and to calculate their properties. In this case the tensor network describes the error states and sampling different error output configurations would correspond to changing output error vectors. We expect similarities with the graphical methods for stabilizer circuits \cite{Faster_than_CHP}. Tensor network contraction also naturally allows for partial parallelization of algorithms and this may lead to faster algorithms for more accurate estimation of error correcting circuit thresholds.
\begin{acknowledgements}
We would like to thank Andrew Cross, Ryan Sheehan, Alonzo Hernandez, and Silas Fradley for useful discussions. This project was supported by Office of the Director of National Intelligence (ODNI) - Intelligence Advanced Research Project Activity (IARPA) through Army Research Office Grant No. W911NF-10-1-0231 and Department of Interior Contract No. D11PC20167. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government.
\end{acknowledgements}
\section{Introduction}
Dielectric optical microcavities and microlasers have received a lot of interest both as mesoscopic model systems and as devices in micro-optics applications \cite{Vahala,Microcavities_review}.
They were found, \textit{e.g.}, to show directional far-field emission characteristics \cite{limacon,review_directional}.
One interesting class of those optical billiards are polygonal cavities with triangular resonators as a simple representative.
In the past, polygonally shaped optical microcavities (modelled often with rounded corners) have been studied both experimentally and theoretically \cite{rounded_isosceles,triangle_MH,Wiersig_BEM,superscars}.
True polygonal billiards with sharp corners were modelled in detail especially for the closed (hard-wall) case (see \textit{e.g.} \cite{geometry_and_billiards,billiards-in-polygons_1986,billiards-in-polygons_1996,Boshernitzan_polygons} and references therein).
Here, we examine triangular optical billiards with sharp corners as open optical systems.
It has been shown that the properties of trajectories in generic triangular billiards display a rich behavior depending crucially on the realized geometry \cite{Veech_triangles,equilateral-triangles_periodic,right-triangles_periodic,right-triangles_unstable,nearly_isosceles_triangles}.
In a recent experiment \cite{triangle_experiments}, various triangular microlasers displaying different symmetries are analyzed.
The far-field emission patterns of some triangles appear to originate from modes localized on short periodic orbits, whereas the emission of others cannot be explained in this picture.
We shall see below that indeed another class of orbits, that we will call maximum intensity trajectories, determines and explains the observed far-field emission.
The experiments are performed with cavities made out of a thin layer of a dye-doped polymer which can be treated as two-dimensional.
The material has a relatively low refractive index of $n < 2$ that corresponds to a rather poor confinement of light by total internal reflection in comparison to typical semiconductor lasers with refractive indices around $n\approx3$.
Nonetheless, these organic lasers are interesting for applications \cite{organic_dye_lasers,solid-state_organic_lasers}, can be easily processed and optically pumped.
To gain a better understanding of triangular microlasers with low refractive index, we perform ray-tracing simulations as justified for systems with large size parameters $nkL\gg1$.
Here $L$ is the characteristic length of the structure, $k=2\pi/\lambda$ the vacuum wavenumber, and $n$ the relative refractive index.
We include amplification of light relevant in the present case of poor confinement in order to extend the geometrical-optics description to active cavities.
We also use the wave description for comparison and to analyze the properties of triangular microcavities.
The paper is organized as follows.
We first discuss the selection of trajectories that contribute to the far-field in the case of the equilateral triangle.
Further, we examine the influence of light amplification in this system.
Then, we compare the ray optics results to the results of full electromagnetic wave simulations and experimental results.
The simplicity and high symmetry of the equilateral triangle allow us to study the influence of amplification due to an active material in detail and without the obscuring effects of a more complex geometry.
\section{Maximum intensity trajectories}
The far-field emission of the triangular microlasers studied in \cite{triangle_experiments} could, in many cases, be explained by short and simple periodic orbits.
One example is given by the (generalized) Fabry-Perot orbits (\textit{cf.} figure \ref{fig:orbit_selection}) where light hits the resonator boundary vertically on two sides, with a total reflection on the third side in between.
In order to generalize this picture, and to make a connection to chaotic microlasers where the unstable manifold is known to determine the far-field characteristics \cite{unstable_manifold1,unstable_manifold2,limacon,low-index}, we discuss \textit{all} possible ray trajectories in the equilateral triangle.
This will enable us to derive a criterion for which trajectories contribute to the far-field response.
A trajectory in a billiard is fully described by its angle of incidence and its position on the boundary at each reflection point.
These two coordinates define the Poincar\'{e} surface of section, a projection of the four-dimensional phase space spanned by the two-dimensional billiard dynamics onto the plane.
Due to symmetry, each trajectory in an equilateral triangle is characterized by exactly three angles of incidence $\chi_1, \chi_2, \chi_3$.
Each trajectory is then given by a certain sequence of the $\chi_i$ that depends sensitively on the initial condition (the starting direction chosen at a certain point on one side of the triangle).
Clearly, a generic trajectory will close only after infinite time and, therefore, is not periodic.
Let $0^\circ \leq \chi_1 < 30^\circ$ be the initial angle of incidence, then $\chi_2=60^\circ-\chi_1$ and $\chi_3=-(60^\circ+\chi_1)$ \cite{equilateral-triangles_periodic}.
The sign of the angle $\chi$ specifies the directions of the incoming and outgoing rays at the corresponding reflection point, where the opposite sign changes the direction of the trajectory.
The angle $\chi_3$ with the largest absolute value always has the opposite sign than the two smaller angles, $\chi_1$ and $\chi_2$.
Trajectories with reversed signs in all angles are equivalent except for their sense of rotation; we can thus restrict our considerations to the case $\chi_1 \geq 0^\circ$.
Note that the larger $\chi_1$ is, the less frequently $\chi_3$ occurs along the trajectory sequence.
The two limiting cases are the ``quasi-Fabry-Perot orbit'' ($\chi_1=0^\circ$ and $\chi_{2/3}=\pm60^\circ$, sequence $\chi_1, \chi_2, \chi_1, \chi_3, \chi_1, \ldots$) and the inscribed triangle (and the corresponding family of period-doubled orbits) with $\chi_1=\chi_2=30^\circ$ where $\chi_3$ does not occur any more.
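As a sanity check, the three-angle structure can be reproduced by tracing a ray in the closed billiard. This is a minimal sketch; the triangle orientation, the starting point, and the $10^\circ$ launch direction are arbitrary choices, not taken from the text:

```python
import math

# Equilateral triangle billiard with unit side length (orientation arbitrary).
V = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
SIDES = [(V[i], V[(i + 1) % 3]) for i in range(3)]

def unit_normal(a, b):
    tx, ty = b[0] - a[0], b[1] - a[1]
    L = math.hypot(tx, ty)
    return (ty / L, -tx / L)            # overall sign is irrelevant below

NORMALS = [unit_normal(a, b) for a, b in SIDES]

def bounce(p, d):
    """Advance the ray to the next wall; return new point, direction, incidence angle."""
    hits = []
    for (a, _), n in zip(SIDES, NORMALS):
        denom = d[0] * n[0] + d[1] * n[1]
        if abs(denom) < 1e-12:
            continue
        t = ((a[0] - p[0]) * n[0] + (a[1] - p[1]) * n[1]) / denom
        if t > 1e-9:
            hits.append((t, n))
    t, n = min(hits)                     # first line crossing = exit side (convexity)
    q = (p[0] + t * d[0], p[1] + t * d[1])
    dn = d[0] * n[0] + d[1] * n[1]
    chi = math.degrees(math.acos(min(1.0, abs(dn))))   # angle measured from the normal
    return q, (d[0] - 2 * dn * n[0], d[1] - 2 * dn * n[1]), chi

p = (0.32, 0.17)
d = (math.cos(math.radians(10)), math.sin(math.radians(10)))
angles = set()
for _ in range(200):
    p, d, chi = bounce(p, d)
    angles.add(round(chi, 4))

print(sorted(angles))   # -> [20.0, 40.0, 80.0] for the 10-degree launch
```

For a launch at $\chi_1=20^\circ$ the observed magnitudes are exactly $\{\chi_1,\,60^\circ-\chi_1,\,60^\circ+\chi_1\}$, in line with the three-angle structure described above.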
So far, we have not considered the intrinsic openness of the dielectric cavities.
Hence, we discuss now which of the possible trajectories can be made responsible for the emission characteristics of the dielectric triangular cavity.
We find that trajectories, which maximize the reflected intensity inside the cavity, dominate the far-field emission.
Although these trajectories are, in general, neither periodic nor simple, they determine the far-field emission for the following reason (maximum intensity trajectory selection rule):
For the equilateral triangle cavities with relatively low refractive index considered here, at least one of the angles of incidence lies below the critical angle since for $n<2$ the critical angle of total internal reflection is $\chi_{\rm{c}}>30^\circ$.
Therefore refractive losses are important, and after some initial transition time, trajectories which retain the most reflected intensity will dominate the far-field emission.
In other words, trajectories that minimize the loss rate along their paths are favored; it is this optimization problem that has to be solved.
\begin{figure}
{\centering
\includegraphics[width=.9\textwidth]{Figure1}
}
\caption{\label{fig:orbit_selection}
\textit{Left:} Intensity reflection coefficient $R(\chi)$ for relative refractive index $n=1.5$ for both polarizations, TE and TM.
Critical angle $\chi_{\rm{c}}\approx41.8^\circ$ and Brewster angle (TE) $\chi_{\rm{B}} = \arctan (1/n) \approx33.7^\circ$ are indicated.
The intervals of possible incident angles in the equilateral triangle are marked by shading.
Triangles and diamonds denote the incident angles of the maximum intensity trajectories for TE and TM polarization, respectively.
\textit{Right:} Examples of the families of maximum intensity trajectories for both polarizations.
Quasi-Fabry-Perot orbits with $\chi_1=0^\circ$, $\chi_2=60^\circ$, $\chi_3=-60^\circ$ for TE polarization.
Non-periodic trajectories with $\chi_1=18^\circ$, $\chi_2=42^\circ$, $\chi_3=-78^\circ$ for TM polarization.
}
\end{figure}
Now, we apply this method to equilateral triangles with relative refractive index $n=1.5$ yielding a critical angle $\chi_{\rm{c}}\approx41.8^\circ$ and a Brewster angle (vanishing reflectivity for TE polarization) $\chi_{\rm{B}} = \arctan (1/n) \approx33.7^\circ$.
All following results will be restricted to this refractive index.
The resulting intensity reflection coefficients $R(\chi)$ are depicted in figure \ref{fig:orbit_selection} for both TE and TM polarization.
For TE polarization, the optimization procedure leads to trajectories with $\chi_1=0^\circ$, $\chi_2=60^\circ$, $\chi_3=-60^\circ$, the so-called ``quasi-Fabry-Perot orbits''.
The maximum intensity trajectories for TM polarization are found to be the family of trajectories with $\chi_1=18^\circ$, $\chi_2=42^\circ$, $\chi_3=-78^\circ$ which are not periodic, as can be seen in figure \ref{fig:orbit_selection}.
For both polarizations, the respective family of trajectories minimizes the losses along their paths independent of the initial position on the boundary under the constraint that the trajectory does not directly hit one of the corners of the triangle.
The large difference between the TM and TE maximum intensity trajectories and, consequently, between the expected far-field emissions
might be less pronounced in other cavities.
Each geometry has a particular set of possible trajectories and, hence, specific maximum intensity trajectories depending on the refractive index.
If the maximum intensity trajectories happen to be the same for TE and TM polarizations, the difference between the far-field emission for the two polarizations is expected to be small.
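The quantities entering this selection argument can be evaluated directly. The sketch below computes the internal Fresnel reflectances at the reported angle triples for $n=1.5$; the mapping of the cavity polarizations onto the classical Fresnel formulas (TM $\leftrightarrow E_z$, s-polarized with respect to the plane of incidence; TE $\leftrightarrow H_z$, p-polarized) is our assumption about the convention used:

```python
import math

n = 1.5  # relative refractive index of the cavity

def R_internal(chi_deg, pol):
    """Intensity reflectance for light hitting the boundary from inside.

    pol='TM' (E_z; s-polarized w.r.t. the plane of incidence) or
    pol='TE' (H_z; p-polarized) -- the assumed 2D-microcavity convention.
    """
    chi = math.radians(chi_deg)
    s = n * math.sin(chi)
    if s >= 1.0:
        return 1.0                        # total internal reflection
    ci, ct = math.cos(chi), math.cos(math.asin(s))
    if pol == 'TM':
        r = (n * ci - ct) / (n * ci + ct)
    else:
        r = (ci - n * ct) / (ci + n * ct)
    return r * r

chi_c = math.degrees(math.asin(1 / n))    # critical angle, ~41.8 deg
chi_B = math.degrees(math.atan(1 / n))    # Brewster angle (TE), ~33.7 deg

# TE family (quasi-Fabry-Perot, angles 0/60/60): only the perpendicular hit is lossy.
te = [R_internal(c, 'TE') for c in (0, 60, 60)]
# TM family (18/42/78): 42 and 78 deg lie beyond the critical angle, only 18 deg is lossy.
tm = [R_internal(c, 'TM') for c in (18, 42, 78)]
print(round(chi_c, 1), round(chi_B, 1), te, tm)
```

The evaluation confirms that each reported family has at most one lossy bounce angle per period, which is the content of the maximum-intensity selection rule.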
\section{Amplification in the ray-description}
The usual ray optics simulations follow the rules of classical geometrical optics using the laws of reflection, $\chi_{\rm{ref}}=\chi_{\rm{in}}$, and Snell's law, $\sin(\chi_{\rm{trans}})=n\sin(\chi_{\rm{in}})$, as well as the Fresnel coefficients where $\chi_{\rm{in}}$, $\chi_{\rm{ref}}$ and $\chi_{\rm{trans}}$ are the incident, reflected and transmitted angles, respectively.
Here, we include amplification along the light path in order to extend the ray model to active, lasing microcavities.
The reflected and transmitted intensities, $I^{\rm{ref}}$ and $I^{\rm{trans}}$, are obtained from the incident intensity $I^{\rm{in}}$ using the Fresnel equations \cite{Jackson}.
At the $m$th reflection point of the ray they are given by
\begin{equation}
I_m^{\rm{ref}} = R(\chi_m) I_m^{\rm{in}} \qquad {\rm{and}} \qquad I_m^{\rm{trans}} = T(\chi_m) I_m^{\rm{in}}
\label{eq:intensities}
\end{equation}
with the corresponding angle of incidence, $\chi_m$, and the Fresnel reflection and transmission coefficients, $R(\chi_{\rm{in}})$ and $T(\chi_{\rm{in}}) = 1-R(\chi_{\rm{in}})$ that differ for TE and TM polarization, \textit{cf.} figure \ref{fig:orbit_selection}.
In the case of a passive cavity, the incident intensity is just the reflected intensity of the last bounce, $I_m^{\rm{in}}=I_{m-1}^{\rm{ref}}$.
We assume no absorption or scattering losses inside the cavity; the only loss mechanism is transmission by refractive escape through the resonator boundary.
To model gain in an active cavity, we assume uniform pumping and a uniform distribution of the active medium throughout the cavity.
In previous works, this situation was studied within a semiclassical laser theory \cite{laser-theory_Stone} or using the Schr\"odinger-Bloch model
\cite{sbmodel,spiral_microlasers,gain_MH}.
A non-uniform gain distribution in chaotic cavities has been studied in the ray model in Ref.~\cite{chaotic_explosions}.
Generalizing the concept of Husimi functions \cite{MH_husimi_epl} to active cavities illustrated the role of amplification along the light trajectory, and how transmission and reflection of light depend on the previously accumulated intensity \cite{gain_MH}.
These findings suggest that amplification can be taken into account in an effective manner.
Here, we model the amplification as
\begin{equation}
I_m^{\rm{in}}=I_{m-1}^{\rm{ref}}\rm{e}^{\alpha \ell_m}
\label{eq:amp}
\end{equation}
where $\alpha>0$ is the gain coefficient of the active material and $\ell_m$ is the optical pathlength between the $(m-1)$th and $m$th bounce \cite{Siegman_lasers}.
This means that the intensity gain is proportional to the intensity $I_{m-1}^{\rm{ref}}$ that enters the piece of trajectory under consideration.
In experiments with cavities made of a polymer doped with a laser dye, the above stated assumptions are usually fulfilled.
Uniform pumping can be obtained when the cavities are optically pumped with the pump beam covering the whole cavity area.
An approximately uniform distribution of the dye in the polymer matrix is ensured during the liquid phase processing of the material.
Finally, lasing modes can be assumed to be fully developed even in the case of pulsed pumping as long as the photon round trip time is much shorter than the pump pulse.
Typical gain coefficients for thin dye-doped polymer layers are of the order of magnitude of $\alpha \sim 10\,\rm{cm}^{-1} \rm{-} 100\,\rm{cm}^{-1}$ \cite{Gozhyk_PRB,Gozhyk_thesis}.
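The passive and amplified recursions can be compared in a toy calculation. The bounce pattern and the segment length below are illustrative assumptions (they mimic a quasi-Fabry-Perot-like sequence with two lossy bounces per period), not the actual orbit geometry:

```python
import math

# Illustrative intensity recursion: a ray that alternates one lossy bounce
# (R = 0.04, e.g. perpendicular TE incidence at n = 1.5) with a totally
# reflecting bounce (R = 1).  The segment length ell is an assumed value.
R_cycle = [0.04, 1.0, 0.04, 1.0]
ell = 0.6          # path length between bounces, in units of the side length L
alpha = 3.0        # gain coefficient, in units of 1/L (as in the text)

def intensity_after(n_bounces, gain):
    I = 1.0
    for m in range(n_bounces):
        if gain:
            I *= math.exp(alpha * ell)   # amplification along the segment
        I *= R_cycle[m % len(R_cycle)]   # Fresnel reflection at the bounce
    return I

I_passive = intensity_after(100, gain=False)
I_active = intensity_after(100, gain=True)
print(I_passive, I_active)   # passive decays to ~1e-70, active grows to ~1e8
```

With these numbers the passive intensity becomes numerically negligible long before the long-time limit is reached, while the amplified intensity grows, which is why the gain term stabilizes the far-field computation.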
\begin{figure}
{\centering
\includegraphics[width=.7\textwidth]{Figure2}
}
\caption{\label{fig:farfield_passive}
Non-universal far-field emission of the equilateral triangle from usual, passive ray optics for TE polarization.
The far-field is collected in the time interval $\tau_1 \leq t \leq \tau_2$ with $\tau_1$ varied and $\tau_2 = 86\,nL/c$ for all curves.
Time is given in units of the optical path length, with the length $L$ of one triangle side and $c/n$ the speed of light in the medium.
The intensity is scaled such that the maximum equals 1.
\textit{Left:} Full far-field in polar plot.
\textit{Right:} Close-up for $0^\circ\leq\phi\leq60^\circ$.
}
\end{figure}
Now, we examine the ray-optical calculations in more detail.
In figure \ref{fig:farfield_passive} the far-field emission from usual passive ray optics is shown
for a cavity emitting TE polarized light.
To obtain these results, we started $600\,000$ rays ($100\,000$ rays on each side in both directions) with unit initial intensity and random initial conditions uniformly distributed in the angle and the position on the boundary.
Each trajectory is followed for $100$ reflections.
During this time the total intensity inside the cavity has dropped to less than $10^{-100}$.
To calculate the far-field, the emitted intensities are collected in a time interval $\tau_1 \leq t \leq \tau_2$ where the starting time $\tau_1$ is varied and the end time $\tau_2$ is fixed at the value corresponding to the total length of the shortest trajectory.
We see that the far-field emission is, indeed, dominated by the predicted maximum intensity trajectories, \textit{i.e.}, for TE polarization the ``quasi-Fabry-Perot orbits'' leading to emission perpendicular to the triangle sides.
We find, however, a strong sensitivity on the time interval chosen to calculate the far-field emission.
Depending on the chosen starting time $\tau_1$, other directions than those from the predicted maximum intensity trajectories can have a considerable contribution.
After a very long time, we expect the differences to vanish as the family of maximum intensity trajectories will eventually outperform all other trajectories \cite{chaotic_explosions}.
For practical reasons, however, the calculations cannot be done for infinitely long times.
In particular, the rapidly decreasing intensities limit the maximum number of reflections for which reasonable and numerically reliable results can be obtained.
Hence, we cannot deduce a reliable prediction from the passive ray calculations.
We find that this problem can be solved if amplification in the active material is included in the ray simulation.
The far-field pattern calculated from the ray model including amplification according to (\ref{eq:amp}) is shown in figure \ref{fig:farfield_amp}.
The gain coefficient is chosen to be $\alpha=3L^{-1}$, where $L$ is the length of one side of the triangle, such that the total intensity inside the cavity increases with time.
All other parameters are the same as in the passive calculation.
Using amplification, the long-time limit, which is not easily reached in the passive calculation, can now be established, and the influence of the time interval used to collect the far-field intensities is diminished.
\begin{figure}
{\centering
\includegraphics[width=.65\textwidth]{Figure3}
}
\caption{\label{fig:farfield_amp}
Far-field emission of the equilateral triangle from ray optics including amplification according to (\ref{eq:amp}) for (a) TE and (b) TM polarization.
The far-field is collected in the same time intervals as in figure \ref{fig:farfield_passive}.
Obviously, all curves coincide.
}
\end{figure}
For both polarizations, the calculated far-field is now independent of the chosen time interval and can be nicely explained by the predicted families of maximum intensity trajectories.
In the case of TM polarization, the angle of incidence $\chi_{\rm{in}} = 18^\circ$ leads to the angle of transmission $\chi_{\rm{trans}} = \arcsin(n\sin(\chi_{\rm{in}})) \approx 27.6^\circ$.
Taking into account the threefold symmetry of the cavity and the two possible travelling directions along the trajectory gives the six observed far-field angles.
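This count can be checked in a few lines; the orientation of the side normals (one side horizontal) is an assumed choice:

```python
import math

n = 1.5
chi_in = 18.0
chi_trans = math.degrees(math.asin(n * math.sin(math.radians(chi_in))))

# Side normals of the equilateral triangle, for an assumed orientation with
# one side horizontal (outward normals at 90, 210 and 330 degrees).
normals = [90.0, 210.0, 330.0]
# Two travelling directions along the trajectory -> emission on either side
# of each normal, at the transmission angle.
far_field = sorted((nrm + s * chi_trans) % 360
                   for nrm in normals for s in (+1, -1))
print(round(chi_trans, 1), far_field)   # -> 27.6 and six distinct angles
```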
\section{Comparison to wave simulations and experimental results}
Next, we compare the ray optics results to results from full electromagnetic wave simulations.
These simulations are performed with the finite-difference time-domain (FDTD) method \cite{fdtd,fdtd_Taflove}, using a freely available software package \cite{meep}.
In a first step, the resonant modes of the cavity are calculated.
Then, the far-field emission is determined for the longest-lived modes where the life time is given in terms of the quality factor $Q$.
The far-field emission patterns obtained from both approaches, ray optics and wave simulations, are shown in figure \ref{fig:far-fields}.
For TE polarization (first and second panel), we find good agreement of the wave simulations with the ray optics results discussed above.
In both cases, one observes narrow emission peaks perpendicular to the triangle sides as predicted from the maximum intensity trajectories.
The same emission pattern has also been seen in the experiment reported in \cite{triangle_experiments} where only samples with TE polarized lasing emission have been studied.
\begin{figure}
{\centering
\includegraphics[width=\textwidth]{Figure4}
}
\caption{\label{fig:far-fields}
Far-field from ray optics including amplification and from electromagnetic wave simulations for both polarizations.
Wave results are calculated from the modes with the longest life time found in the equilateral triangle cavity.
The dimensionless wave numbers and quality factors of the modes are $\rm{Re}(kL)\approx81$, $Q=85$ for TE and $\rm{Re}(kL)\approx96$, $Q=230$ for TM where $k$ is the wavenumber and $L$ is the side length of the triangle.
The additional red curve in the ray optics result for TM polarization is the result of amended ray optics which accounts
for finite wavelength effects.
}
\end{figure}
For TM polarization, however, the agreement between the two approaches is not perfect.
Therefore, we have included semiclassical correction terms in the ray picture in order to account for wave effects \cite{TureciStone02,Aiello_overview,Shinohara2011,SchomerusHentschel_phase-space,PS_EPL2014,Unterhinninghofen_PRE2008}.
The need for wave corrections is obvious as ray optics is strictly only valid in the limit $kL\rightarrow\infty$,
whereas the wave simulations are performed in the regime $kL\approx100$.
Here, the effect known as Fresnel filtering or angular Goos-H\"anchen effect is of importance and gives corrections to the reflected and transmitted angles, $\chi_{\rm{ref}} = \chi_{\rm{in}} + \Delta\chi_{\rm{ref}}(\chi_{\rm{in}})$ and $\chi_{\rm{trans}} = \arcsin(n\sin(\chi_{\rm{in}})) + \Delta\chi_{\rm{trans}}(\chi_{\rm{in}})$ \cite{TureciStone02,GoetteShinoharaHentschel2013,Microcavities_review,Aiello_overview}.
The black curve in the third panel of figure \ref{fig:far-fields} shows the far-field emission for TM polarization calculated from the ray-optical approach as described before, the red curve is the semiclassically corrected far-field which agrees much better with the wave-optical result.
The semiclassical correction terms have the largest contribution for incident angles near the critical angle \cite{Aiello_overview,PS_PIERS2015}, therefore, the maximum intensity trajectory for TM polarization with one angle close to the critical angle is strongly affected by the corrections.
Especially, the correction to the transmitted angle is negative \cite{TureciStone02,GoetteShinoharaHentschel2013} which explains the observed change in the far-field for TM polarization if the wave corrections are included in the ray optics (compare black and red curve in figure \ref{fig:far-fields}(c)).
The family of maximum intensity trajectories for TE polarization, however, has incident angles far away from the critical angle, therefore, it is not affected by the corrections and the expected far-field is not changed.
The deviations of the wave simulations from ray optics are assumed to vanish when the limit $kL\rightarrow\infty$ is approached, \textit{e.g.}, when the system size gets larger while keeping the wavelength fixed.
In figure \ref{fig:modes}, the mode patterns of the longest-lived modes of the wave simulations are shown together with examples of the predicted maximum intensity trajectories.
The qualitative agreement between the wave and ray patterns further illustrates the correspondence of the two approaches and the validity of ray-wave correspondence in triangular microcavities.
The waves which emerge from the sides of the triangle and account for the far-field are clearly visible.
The spherical waves that emerge from the three corners, observed in the wave pattern for TM polarization, indicate diffraction in the near-field (in agreement with the Huygens-Fresnel principle).
However, these diffracted contributions fall off as (distance)$^{-2}$ and are thus only visible in the near-field.
Indeed, they do not leave a trace in the far-field (compare figure \ref{fig:far-fields}(d)).
\begin{figure}
{\centering
\includegraphics[width=.9\textwidth]{Figure5}
}
\caption{\label{fig:modes}
Mode patterns obtained from full electromagnetic wave simulations in the equilateral triangle cavity in comparison with the predicted maximum intensity trajectories.
The wave patterns, the magnetic field component $H_z$ for TE polarization and the electric field component $E_z$ for TM polarization, show the modes used to calculate the far-fields in figure \ref{fig:far-fields}.
}
\end{figure}
\section{Discussion and conclusion}
We have presented a ray optics description of triangular low-refractive-index microlasers confirming ray-wave correspondence.
Including amplification by the active medium into the ray model leads to excellent agreement with full electromagnetic wave simulations and with the experimental results reported in \cite{triangle_experiments}.
Hence, we conclude that amplification is important to obtain reliable results
in ray optics simulations of non-chaotic optical cavities, especially in the case of low-index materials and highly lossy geometries.
Ray optics has often been found to be a useful model for determining the far-field of microcavities and microlasers, see \textit{e.g.} \cite{Microcavities_review} and references therein.
In many cases, simple periodic orbits of the classical ray dynamics dominate the spectral properties and the emission, \textit{e.g.} \cite{periodic-orbits_Lebental}.
We find, however, that this assumption is not always sufficient to explain all findings.
Here, we suggest that maximum intensity trajectories determine the far-field emission characteristics.
Whereas they may coincide with short periodic orbits as in the case of TE polarization in the equilateral triangle, we demonstrated that this is not generally the case:
We find non-periodic trajectories to dominate the far-field emission of the equilateral triangle cavity for TM polarization.
For classically chaotic systems, the unstable manifold of the chaotic saddle, i.e., the way light rays cross the critical line and get (partially) refracted out of the cavity, was found to determine the far-field characteristics \cite{unstable_manifold1,unstable_manifold2,limacon,low-index,deformed-cylinders_Schwefel}.
However, this concept is not applicable to polygonal cavities and other systems having integrable or pseudointegrable classical dynamics.
The concept of maximum intensity trajectories fills this gap and can provide a more general point of view:
Trajectories that retain, in the limit of long times, more intensity than others, while still contributing to the transmission, will dominate the far-field.
This is a similar line of argument as the unstable manifold considerations:
The unstable manifold of a chaotic saddle is constituted by trajectories that undergo many total internal reflections, thus, keeping all their intensity before they are eventually transmitted and contribute to the far-field pattern.
While we have applied the method of maximum intensity trajectories introduced here to the equilateral triangle which has integrable classical dynamics, we assume that this selection rule
is applicable to
a large class of cavity and microlaser geometries.
\ack
This work is partially funded by the German Research Foundation (DFG) via the Emmy Noether Program.
The authors thank Joseph Zyss, Stefan Bittner and Melanie Lebental for useful discussions.
\section*{References}
\section{On chromatic numbers of spaces} \label{Obzor}
This topic is being actively developed, and many interesting results and bounds have been obtained in various settings (see, e.g.,~\cite{Soi2}). We give only a brief list of the main results achieved. Chromatic numbers of spaces have been actively studied, for instance, in the school of A.~M.~Raigorodskii. More detailed information on the Nelson--Hadwiger problem and related questions can be found in the following surveys: P.~K.~Agarwal and J.~Pach~\cite{AP}; P.~Brass, W.~Moser, and J.~Pach~\cite{BMP}; M.~Benda and M.~Perles~\cite{BP}; K.~B.~Chilakamarri~\cite{Ch}; V.~Klee and S.~Wagon~\cite{KW}; A.~M.~Raigorodskii~\cite{Rai1},~\cite{Rai2},~\cite{Rai8},~\cite{Rai9},~\cite{Rai10}; A.~Soifer~\cite{Soi2,Soi3}; and L.~A.~Sz\'ekely~\cite{Szek}.
\subsection{On the chromatic number of the plane}
We begin with relaxed versions of the problem. If every monochromatic set splits into connected regions bounded by Jordan curves, then at least 6 colors are necessary, as was proved by D.~R.~Woodall back in 1973~\cite{W}. K.~J.~Falconer showed in 1981 that if the sets of points of each color are required to be Lebesgue measurable, then at least $5$ colors are needed for a proper coloring of the plane~\cite{F}.
Of course, the $7$-coloring of the plane provides an upper bound for the relaxed formulations as well (see Fig.~1).
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{pic1a.jpg}\\
\caption{A coloring of the plane in $7$ colors. The sides of the regular hexagons have length $\frac{1}{\sqrt{7}}$.}
\end{figure}
One of the main difficulties is that the answer \textit{may depend} on the set-theoretic axiomatics, as S.~Shelah and A.~Soifer showed in 2003~\cite{ShS}.
If we assume the axiom of choice, then by the Erd\H{o}s--de Bruijn theorem~\cite{EB} the chromatic number of an infinite graph is attained on a finite subgraph. However, computer search has found no subgraphs with chromatic number at least 5, which suggests that in the standard axiomatics the chromatic number equals four.
If, instead, one abandons the axiom of choice, augments the standard Zermelo--Fraenkel axiomatics with the axiom of dependent choice, and additionally requires all subsets to be Lebesgue measurable, then Falconer's proof applies and the chromatic number lies between five and seven.
One can also consider a bounded subset of the plane, as was done, for example, in~\cite{Kr}.
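By the Erd\H{o}s--de Bruijn reduction mentioned above, lower bounds for the plane come from finite unit-distance graphs; the classical example is the Moser spindle, a 7-vertex unit-distance graph with chromatic number 4. The sketch below builds it from two unit rhombi (a standard construction, not taken from the text) and brute-forces its chromatic number:

```python
import math
from itertools import combinations, product

# Moser spindle: two unit rhombi sharing a vertex, one rotated so that the
# far tips end up at distance exactly 1.
def rhombus(theta):
    # vertices e(theta), e(theta + 60 deg), and their sum (the "tip")
    pts = [(math.cos(math.radians(t)), math.sin(math.radians(t)))
           for t in (theta, theta + 60)]
    return pts + [(pts[0][0] + pts[1][0], pts[0][1] + pts[1][1])]

# Tips sit at distance sqrt(3) from the shared vertex; rotating by phi puts
# them at chord distance 2*sqrt(3)*sin(phi/2) = 1.
phi = math.degrees(2 * math.asin(1 / (2 * math.sqrt(3))))
points = [(0.0, 0.0)] + rhombus(0.0) + rhombus(phi)        # 7 points

edges = [(i, j) for i, j in combinations(range(len(points)), 2)
         if abs(math.dist(points[i], points[j]) - 1.0) < 1e-9]

def chromatic_number(nv, edges):
    # exhaustive search: smallest k admitting a proper coloring
    for k in range(1, nv + 1):
        for col in product(range(k), repeat=nv):
            if all(col[i] != col[j] for i, j in edges):
                return k

print(len(edges), chromatic_number(7, edges))   # -> 11 4
```

The graph has 11 unit-distance edges and needs four colors, which is exactly the known lower bound $\chi(\mathbb{R}^2)\geq 4$ discussed above.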
\subsection{The problem for arbitrary metric spaces}
Let us turn to various generalizations. For an arbitrary metric space $(X, d)$ and a number $a > 0$, define the graph $G_a(X,d)$ as follows:
its vertex set coincides with the points of the space, and its edges are the pairs of points at distance $a$. As before, we are interested in the chromatic number of this graph, $\chi(X,d,a)$.
Most often one takes $(X,d)$ to be $\mathbb{R}^n$ or $\mathbb{Q}^n$ with the Euclidean metric. We restrict ourselves to these cases with $a=1$.
Note that in the real case all the graphs $G_a$ are obviously isomorphic.
\subsubsection{Chromatic numbers of real spaces}
For the line the answer is obvious: $\chi(\mathbb{R}) = 2$. For $n = 2$ we recover exactly the original question about the chromatic number of the plane.
For $n=3$ the problem appears to be even harder than the classical Nelson--Hadwiger problem: the latest bounds on both sides were obtained in the current century.
\begin{theorem}
$$6 \leq \chi(\mathbb{R}^3) \leq 15.$$
\end{theorem}
The lower bound is due to O.~Nechushtan~\cite{Nech}, the upper one to D.~Coulson~\cite{Coul}.
Asymptotically, the following bounds hold:
\begin{theorem}
$$\left(1.239\ldots+o(1)\right)^n \le \chi\left({\mathbb R}^n\right) \le (3+o(1))^n.$$
\end{theorem}
The lower bound is due to A.~M.~Raigorodskii~\cite{Rai7}, the upper one to D.~G.~Larman and C.~A.~Rogers~\cite{LR}. It is worth noting that the asymptotic lower bounds in this and related problems were obtained by the \textit{linear-algebraic method}, which is interesting in its own right; moreover, using this method, J.~Kahn and G.~Kalai constructed in 1993 a counterexample~\cite{KK} to Borsuk's conjecture, which by that time had stood open for more than fifty years.
For more details on Borsuk's conjecture and the linear-algebraic method, see~\cite{Rai8}.
\subsubsection{Chromatic numbers of rational spaces}
The one-dimensional case, as for the reals, is trivial: $\chi(\mathbb{Q})=2$.
It may come as a surprise that the exact value of the chromatic number of $\mathbb{Q}^n$ is known not only in dimension $2$ but also in dimensions $3$ and $4$
(see D.~R.~Woodall~\cite{W}, P.~D.~Johnson~\cite{J}, and M.~Benda and M.~Perles~\cite{BP}).
\begin{theorem} \label{Th4}
$\chi(\mathbb{Q}^2) = \chi(\mathbb{Q}^3) = 2$, $\chi(\mathbb{Q}^4) = 4$.
\end{theorem}
The best asymptotic lower bound is due to E.~I.~Ponomarenko and A.~M.~Raigorodskii~\cite{PR1},~\cite{PR2}, the upper one to D.~G.~Larman and C.~A.~Rogers~\cite{LR}.
\begin{theorem} \label{Th5}
$$\left(1.199\ldots + o(1)\right)^n \le \chi\left({\mathbb Q}^n\right) \le \left(3+o(1)\right)^n.$$
\end{theorem}
We also note that the mixed case of the space $\mathbb{R} \times \mathbb{Q}$ was considered in~\cite{Ax}.
\subsection{The polychromatic number}
In the book~\cite{HD}, H.~Hadwiger and H.~Debrunner, prompted by P.~Erd\H{o}s, posed the natural question of finding the \textit{polychromatic number of the plane}: the smallest number of colors needed to color the plane so that for every color $i$ there is a distance $d$ such that no two points of color $i$ lie at distance $d$ from each other.
We use the notation $\chi_p$ for this quantity, proposed by Soifer in~\cite{Soi}.
The best bounds were obtained by Raiskii and Stechkin in~\cite{Rai} (Stechkin's example was published, with his permission, in that paper).
\begin{theorem}
$$4 \leq \chi_p \leq 6.$$
\end{theorem}
\noindent Alternative proofs of the same bounds can be found in the paper of D.~R.~Woodall~\cite{W}.
\section{Main results}
We are interested in the chromatic numbers of spaces of the form $\mathbb{K}^n \times [0,\varepsilon]^k$, where $\mathbb{K}\in\{\mathbb{R},\mathbb{Q}\}$ and $n,k \geq 1$, equipped with the Euclidean metric. We call such metric spaces ``slabs'', and in the case $n = k = 1$ ``strips''.
In this paper we focus mainly on the case $n=2$.
\subsection{Chromatic numbers of one-dimensional slabs}
We begin with a simple observation. The lower bound appears in~\cite{bauslaugh1998tearing} and~\cite{Ch}, but we include it for completeness.
\begin{prop} \label{Utv1}
Let $0 < h\leq \sqrt{\frac{3}{4k}}$. Then $$\chi(\mathbb{R} \times [0,h]^k) = 3.$$
Let $\sqrt{\frac{3}{4k}} < h \leq \sqrt{\frac{8}{9k}}$. Then $$\chi(\mathbb{R}\times [0, h]^k) = 4.$$
\end{prop}
\noindent{\bf Upper bound.}\ Let $0 < h \leq\sqrt{\frac{3}{4k}}$. Color $\mathbb{R}$ in $3$ colors by alternating monochromatic half-open intervals of length $1/2$ (colors $1, 2, 3, 1, 2, 3$, and so on). Then assign to every point of $\mathbb{R}\times [0,h]^k$ the color of its projection onto the real line. The diameter of a monochromatic box $[0; \frac{1}{2}] \times [0,h]^k$ does not exceed $1$, and in the case of equality the endpoints of a diameter are colored differently.
A coloring in $4$ colors for ${\sqrt{\frac{3}{4k}} < h\leq\sqrt{\frac{8}{9k}}}$ is constructed analogously; in this case the half-open intervals have length $1/3$.
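The threshold values of $h$ in the two colorings above come from a one-line diameter computation. The following check (our illustration in Python, not part of the original argument) confirms that at $h = \sqrt{3/(4k)}$ and $h = \sqrt{8/(9k)}$ the monochromatic boxes have diameter exactly $1$:

```python
import math

def box_diameter(side, h, k):
    """Diameter of the monochromatic box [0, side] x [0, h]^k."""
    return math.sqrt(side ** 2 + k * h ** 2)

for k in (1, 2, 4):
    h3 = math.sqrt(3 / (4 * k))  # threshold for the 3-coloring (intervals of length 1/2)
    h4 = math.sqrt(8 / (9 * k))  # threshold for the 4-coloring (intervals of length 1/3)
    assert math.isclose(box_diameter(1 / 2, h3, k), 1.0)
    assert math.isclose(box_diameter(1 / 3, h4, k), 1.0)
    # strictly below the threshold the diameter drops strictly below 1
    assert box_diameter(1 / 2, 0.99 * h3, k) < 1
```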
\vskip 0.2cm
\noindent{\bf Lower bound.}\
We illustrate, on a trivial example, the proof scheme that will be used later in dimensions 3 and 4.
Suppose the strip $\mathbb{R} \times [0,\varepsilon]$ is properly colored in several colors. Let $l \in \mathbb{N}$ be such that $1/l = \delta \leq \varepsilon^2$. On the boundary $\mathbb{R} \times \{0\}$ of the slab choose differently colored points $u=(x, 0)$, $v = (x+\delta,0)$ at distance $\delta$ from each other. Such a choice is possible because the points $(0, 0)$, $(\delta,0)$, \dots, $(1,0)$ cannot all be of the same color. Put $w = (x+\delta/2, \varepsilon)$. At least one of the pairs $u, w$ and $v, w$ is bichromatic; say it is $u, w$. Then the point $\xi$ that lies inside the strip at distance 1 from both $u$ and $w$ requires yet another color. \qed
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{stripe.png}\\
\caption{The lower bound for $\chi(\mathbb{R} \times [0, \varepsilon])$.}
\end{figure}
For $\sqrt{\frac{3}{4k}} < h \leq \sqrt{\frac{8}{9k}}$ the set $\mathbb{R} \times [0, h]^k$ contains a strip $\mathbb{R} \times [0, h_1]$ with $h_1 >\sqrt{3}/2$, namely the product of $\mathbb{R}$ with the long diagonal of the $k$-dimensional cube of side $h$.
Such a strip contains the distance graph shown in Fig.~2, in which $d(y,z)$ can take any value in the segment
$[0,3-2\sqrt{3-h_1^2}]$.
Choose a realization of this graph in which the points $x,y,z$ lie on the boundary of the strip and $d(y,z)=1/m$ for some $m \in \mathbb{N}$. Copying this construction $m$ times, we obtain a distance graph with chromatic number $4$. \qed
\vskip 0.2cm
\textbf{Remark.} The number of vertices of the critical graph tends to infinity as $h$ approaches a value at which the chromatic number is discontinuous ($h=0$ and $h=\sqrt{\frac{3}{4k}}$), but the graph can be placed in a region whose diameter does not depend on $h$.
\vskip 0.2cm
Obviously, the function $\xi_{n,k}(h) = \chi (\mathbb{R}^n\times [0,h]^k)$, defined for $h \geq 0$, is nondecreasing. For any fixed $n$, $k$ it takes only finitely many values, since $\chi(\mathbb{R}^n)\leq\xi_{n,k}(h) \leq \chi(\mathbb{R}^{n+k})$; consequently, it has only finitely many points of discontinuity. Apparently, for $n>1$ no point of discontinuity of $\xi_{n,k}(h)$ can be found without improving the known bounds on $\chi(\mathbb{R}^n)$.
\vskip 0.2cm
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{pic2a.jpg}\\
\caption{A chain of $\theta$-graphs inside a strip.}
\end{figure}
We now describe a wider class of sets for which the lower bound of Proposition \ref{Utv1} remains valid.
\begin{prop} \label{Utv2}
Let $\varepsilon$ be an arbitrary positive number and let $Q$ be the $\varepsilon$-neighborhood of some curve $\xi$ of diameter at least $2$.
Then $\chi(Q) \geq 3$.
\end{prop}
\subsection{Chromatic numbers of two-dimensional slabs}
We pass to the intermediate case between the plane and three-dimensional space, namely
$\mathbb{R}^2 \times [0,\varepsilon]$ (a slab of height $\varepsilon$). Although this set still admits a proper coloring in $7$ colors, the lower bound is less trivial than for the plane:
\begin{theorem} \label{Th6}
Let $\varepsilon$ be a positive number smaller than $\sqrt{3/7}$. Then
$$5\leq\chi(\mathbb{R}^2\times [0,\varepsilon])\leq 7.$$
\end{theorem}
In contrast to the case of a strip ($n=1$), here we cannot even prove that the function $\chi (\mathbb{R}^2 \times [0, \varepsilon])$ is discontinuous at $\varepsilon=0$.
Let us now consider a ``thickening''\ of the plane inside a space of higher dimension.
Since the $7$-coloring of the plane avoids all distances in a certain interval, the upper bound is preserved as the dimension grows.
\begin{theorem} \label{Th6prim}
Let $k$ be a positive integer and let $\varepsilon < \varepsilon_0(k)$ be a positive number. Then
$$\chi(\mathbb{R}^2\times [0,\varepsilon]^k)\leq 7.$$
\end{theorem}
The lower bound can be improved already for $k=2$.
\begin{theorem} \label{Th7}
Let $\varepsilon$ be an arbitrary positive number. Then $$\chi(\mathbb{R}^2\times [0,\varepsilon]^2) \geq 6.$$
\end{theorem}
Note that, as in the case $n=1$, to obtain the above lower bounds it suffices to consider the coloring of a bounded region whose diameter does not depend on $\varepsilon$.
The proof of Theorem \ref{Th7} uses the following lemma, which is of interest in its own right.
\begin{lemma} \label{L1}
Suppose the Euclidean plane is properly colored in $k$ colors. Then for every $\varepsilon>0$ there is a disk of radius $\varepsilon$ containing points of at least three different colors.
\end{lemma}
\begin{corr} \label{Cor1}
Suppose the Euclidean plane is properly colored in $k$ colors. Then for every $\varepsilon>0$ there is a circle of radius less than $\varepsilon$ containing points of at least three different colors.
\end{corr}
A statement stronger than Lemma~\ref{L1} also holds:
\begin{theorem} \label{Th8}
Suppose the space $\mathbb{R}^n$ is properly colored in $m$ colors. In other words, denoting by $C_i$ the set of points of the $i$-th color, we have $$\bigcup_{i=1}^{m} C_i = \mathbb{R}^n,$$ and none of the sets $C_i$ contains two points at distance $1$. Then there exist $n+1$ sets of this family whose closures have a nonempty intersection.
\end{theorem}
The statement of the theorem is obvious in the case when the connected components of the closures $\overline{C}_i$ are polytopes, but it holds for an arbitrary cover with one forbidden distance.
\subsection{Chromatic numbers of rational spaces}
In the rational case the following theorem holds:
\begin{theorem} \label{Th9}
Для достаточно малого положительного $\varepsilon$ выполняется $$\chi(\mathbb{Q}\times [0,\varepsilon]_\mathbb{Q}^3)=3. $$
\end{theorem}
Obviously, one cannot replace $[0,\varepsilon]_\mathbb{Q}^3$ by $[0,\varepsilon]_\mathbb{Q}^2$ in the statement of the theorem, since $\chi(\mathbb{Q}^3)=2$.
\section{Proofs}
We begin with a construction that is used repeatedly in the proofs.
\begin{definition}
Let $\omega_r$ be a circle of radius $r$. We call a number $r > 0$ a {\em forbidden radius} if $G_1(\omega_r)$ contains an odd cycle.
\end{definition}
\begin{prop} \label{Utv3}
The forbidden radii are dense in $[1/2, \ \infty).$
\end{prop}
\begin{proof}
Indeed, for every number $q \in \mathbb{Q} \cap (0,\frac{1}{2})$ representable as $q = \frac{l}{2k+1}$ with $k,l \in \mathbb{N}$, one can construct the forbidden radius $$r = \frac{1}{2 \sin {\pi q}}.$$
\end{proof}
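For illustration (ours, not from the paper), the construction in the proof can be checked numerically: stepping around the circle of radius $r = 1/(2\sin\pi q)$ by the central angle $2\pi q$ with $q = l/(2k+1)$ produces $2k+1$ unit chords and returns to the starting point, i.e. an odd cycle in $G_1(\omega_r)$.

```python
import math

def forbidden_radius_cycle(l, k):
    """Walk around the circle of radius r = 1/(2 sin(pi*q)), q = l/(2k+1),
    in steps of central angle 2*pi*q.  Each step is a chord of length
    2*r*sin(pi*q) = 1, and after 2k+1 (an odd number of) steps the total
    angle is 2*pi*l, so the walk closes into an odd cycle of unit edges."""
    n = 2 * k + 1
    q = l / n
    r = 1.0 / (2.0 * math.sin(math.pi * q))
    pts = [(r * math.cos(2 * math.pi * q * j), r * math.sin(2 * math.pi * q * j))
           for j in range(n + 1)]
    edge_lengths = [math.dist(pts[j], pts[j + 1]) for j in range(n)]
    closes = math.dist(pts[n], pts[0]) < 1e-9  # walk returns to its start
    return r, edge_lengths, closes

r, edges, closes = forbidden_radius_cycle(l=2, k=3)   # q = 2/7
assert r >= 0.5 and closes
assert all(math.isclose(d, 1.0) for d in edges)       # 7 unit chords: an odd cycle
```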
\subsection{Proof of the lower bound in Theorem \ref{Th6}}
We denote the standard metric on the plane by $\rho$, and the full-dimensional sphere of radius $r$ centered at a point $u$ by $S(u; r)$.
Suppose the slab $\mathbb{R}^2 \times [0, \varepsilon]$ is properly colored in several colors. Let $0<\delta < \varepsilon^2$. On the boundary $\mathbb{R}^2 \times \{0\}$ of the slab choose differently colored points $u,v$ with $\rho(u, v) = \delta$.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{slice.png}\\
\caption{A circle of forbidden radius inside the slab $\mathbb{R}^2 \times [0, \varepsilon]$.}
\end{figure}
Let $\varepsilon_1$ satisfy $$\sqrt{ \delta} \leq \varepsilon_1 < \varepsilon$$
and, in addition, let $r = \sqrt{1-\varepsilon^2_1/4}$ be a forbidden radius. Construct an isosceles triangle $u v w$ whose altitude $w w_1$ is perpendicular to the boundary of the slab and whose lateral sides have length $\varepsilon_1$. Since $u$ and $v$ are colored differently, at least one of the pairs $u, w$ and $v, w$ is bichromatic. Without loss of generality assume that the points $u, w$ are colored 1 and 2 respectively. Then the circle
$$\omega = S(u; 1) \cap S(w; 1)$$
lies inside the slab, contains no points of colors 1 and 2, and has forbidden radius $r$; hence its proper coloring requires 3 more colors. \qed
\subsection{Proof of the upper bound in Theorems~\ref{Th6} and~\ref{Th6prim}}
Consider the standard proper coloring of the plane in 7 colors (see Fig.~1). In it, no two points of the same color lie at a distance between $2/\sqrt{7}$ and $1$.
Color the slab as follows: each cube ${(x,y)} \times [0, \varepsilon]^k$ receives the color of the point $(x,y)$ in the coloring of the plane. This coloring of the slab is proper provided $(2/\sqrt{7})^2 + k\varepsilon^2 < 1$, which is equivalent to the inequalities in the statements of the theorems. \qed
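Numerically, the properness condition $(2/\sqrt{7})^2 + k\varepsilon^2 < 1$ amounts to $\varepsilon < \sqrt{3/(7k)}$; for $k=1$ this is exactly the bound $\sqrt{3/7}$ in Theorem~\ref{Th6}. A quick check (our Python sketch, not part of the proof):

```python
import math

def eps_threshold(k):
    """Largest height eps for which the column coloring over the 7-coloring
    of the plane stays proper: (2/sqrt(7))**2 + k*eps**2 < 1."""
    return math.sqrt((1 - 4 / 7) / k)

assert math.isclose(eps_threshold(1), math.sqrt(3 / 7))
for k in (1, 2, 5):
    eps = 0.99 * eps_threshold(k)
    # same plane color => squared 2+k dimensional distance stays strictly below 1
    assert (2 / math.sqrt(7)) ** 2 + k * eps ** 2 < 1
```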
\subsection{Proof of Proposition~\ref{Utv2}}
Without loss of generality we may assume that $\varepsilon < 1$.
Assume the contrary: there is a proper coloring of the set $Q$ in two colors. Denote the corresponding graph by $G(Q)$; we will find an odd cycle in it.
\vskip+0.2cm
Consider a point $u \in \xi$. Since $\operatorname{Diam}{\xi} \geq 2$, the intersection of $S(u;1)$ and $\xi$ is nonempty. Let $v \in S(u;1) \cap \xi$;
$\lVert u-v_1 \rVert = 1$; $\lVert v_i-v_{i+1} \rVert = 1$, $i = 1,2,3$.
If the angles between adjacent unit links of the polyline $v u v_1 v_2 v_3 v_4$ do not exceed $\frac{\varepsilon}{2}$, then $\lVert v - v_1 \rVert < \frac{\varepsilon}{2}$, $\lVert u - v_2 \rVert < \frac{\varepsilon}{2}$, $\lVert v - v_3 \rVert < \varepsilon$, $\lVert u - v_4 \rVert < \varepsilon$, and hence $v_i \in Q$, $i = 1,2,3,4$.
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{prop2.png}\\
\caption{A path of length 4 between the points $u$ and $v_4$.}
\end{figure}
\noindent Moreover, the lengths
$$l_1 = \lVert u - v_2 \rVert \in \left [0; 2 \sin \frac{\varepsilon}{4} \right],$$
$$l_2 = \lVert v_2 - v_4 \rVert \in \left [0; 2 \sin \frac{\varepsilon}{4} \right]$$
can be chosen arbitrarily, while the oriented angle between the vectors $\overrightarrow{v_2 u}$ and $\overrightarrow{v_2 v_4}$ is chosen independently from the segment $[-\frac{\varepsilon}{4}; \frac{\varepsilon}{4}]$.
Fix the line containing the vector $\overrightarrow{v_2 u}$ (say, orthogonal to $u v$). Then the possible positions of the point $v_4$ form a figure containing a rhombus centered at $u$ with side $2 \sin \frac{\varepsilon}{4}$ and angle $\frac{\varepsilon}{2}$. Consequently, there is a path of length 4 between $u$ and any point of the $\gamma$-neighborhood of $u$, where $\gamma = \sin \frac{\varepsilon}{2} \sin \frac{\varepsilon}{4}$.
\vskip+0.2cm
Thus, walking along the curve $\xi$ from $u$ to $v$ with steps of size $\gamma$, we construct a path of even length between the points $u$ and $v$, and hence an odd cycle in $G(Q)$.
\qed
\subsection{Proof of Lemma~\ref{L1}}
We show that there is a disk of arbitrarily small radius containing points of at least three colors.
Assume the contrary: there exist a proper coloring of the plane and an $\varepsilon>0$ such that every disk of radius $\varepsilon$ contains points of at most two different colors.
Partition the plane into squares of side $$\delta \leq \frac{2}{ \sqrt{10}} \varepsilon.$$
Then every such square is colored in at most two colors.
By Proposition~\ref{Utv2}, the outer boundary of any connected two-colored region bounds some figure (otherwise, joining the centers of neighboring boundary cells, we would obtain a sufficiently long polyline).
Since the outer boundary of any connected two-colored region consists of monochromatic squares, the diameter of the region is finite.
Hence we may consider a connected two-colored region whose outer boundary bounds a figure of maximal area.
Adding an arbitrary square adjacent to this region from the outside, we obtain a contradiction.
\qed
\subsection{Proof of Corollary~\ref{Cor1}}
By Lemma~\ref{L1}, for an arbitrary $\varepsilon > 0$ there is a disk of diameter $\varepsilon$ containing points of three different colors.
We show that some points of three different colors lie on a circle of radius at most $\varepsilon$.
Inside the disk of diameter $\varepsilon$ consider a trichromatic triangle $ABC$. It has an obtuse angle (otherwise the circumcircle of the triangle $ABC$ works); without loss of generality, it is the angle at $A$. Consider the point $D$ such that $\angle ADB = \angle ADC = \pi/3$.
Then $\angle BDC = 2\pi/3$. Observe that, whatever the color of the point $D$, one of the triangles $ABD, ACD, BCD$
is trichromatic. The circumradius of each of these triangles does not exceed $$\frac{\varepsilon}{2\sin \angle D} = \frac{\varepsilon}{\sqrt 3} < \varepsilon.$$ The proof is complete, since $\varepsilon$ can be chosen arbitrarily.\qed
\subsection{Proof of Theorem~\ref{Th7}}
The proof is based on the following construction: if the slab contains a triangle with trichromatic vertices $v_1,v_2,v_3$ and circumcenter $u_0$, then the slab contains the circle that is the intersection of the three unit spheres centered at $v_1,v_2,v_3$. For a suitable choice of the vertices of the triangle this circle contains an odd cycle, so its coloring requires three more colors, which gives the bound on the chromatic number of the slab stated in the theorem.
Suppose there is a proper coloring of the slab $$\mathbb{R}^2\times[0, \varepsilon]^2 = \left\{(x,y,z,t) \mid x,y \in \mathbb{R},\ z,t \in [0, \varepsilon] \right\}$$ in several colors. We will need the following auxiliary
\begin{prop}
Let $\phi (v_1,v_2,v_3)$ denote the angle between the two-dimensional plane containing the points $v_1$, $v_2$, $v_3$ and the plane $\{(0,0,z,t)\}$. For arbitrary $\varepsilon_2>0$, $\varepsilon_3>0$ the slab contains a triangle with vertices $v_1$, $v_2$, $v_3$ of three different colors and angles $\alpha_1$, $\alpha_2$, $\alpha_3$ satisfying the conditions:
\begin{equation}\label{ineq_phi}
\phi(v_1,v_2,v_3) \leq \varepsilon_2;
\end{equation}
\begin{equation}\label{ineq_alpha}
\alpha_i \geq \frac{\pi}{5} - \varepsilon_3, \; i = 1,2,3.
\end{equation}
\end{prop}
\begin{proof} Choose $\varepsilon_1<\varepsilon/2$. Let $M = \left\{(z_1,t_1), (z_2,t_2), \dots (z_5,t_5)\right\}$ be the vertices of a regular pentagon inscribed in the circle of radius $\varepsilon_1$ centered at the point $(\varepsilon/2,\varepsilon/2)$. To every point $(x,y) \in \mathbb{R}^2$ associate the vertices of the pentagon lying in the infinitesimal square:
$$Q_{x, y} = \{(x,y)\} \times M.$$
If for some $x,y$ the set $Q_{x,y}$ is colored in at least 3 colors, then a triangle with trichromatic vertices from $Q_{x,y}$ is the desired one. Suppose the opposite is true: for all $x,y$ at most 2 colors occur in $Q_{x,y}$. Then some color is used at least 3 times in the coloring of $Q_{x,y}$; denote this color by $c(x,y)$.
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{pent1a.png}\\
\caption{The quintuple of points $Q_{x, y} = \{(x,y)\} \times M$.}
\end{figure}
Associate with the coloring of the slab an auxiliary coloring of the plane $P = \mathbb{R}^2$: a point $(x,y)$ gets the color $c(x,y)$. Note that this coloring is proper. Indeed, if the points $(x_1,y_1)$ and $(x_2,y_2)$ are at distance 1 and have the same color, then in each of the corresponding quintuples $Q_{x_1,y_1}=\left\{q_{1i}\right\}$, $Q_{x_2,y_2}=\left\{q_{2i}\right\}$ at least three points have this color. But since $$\rho(q_{1i},q_{2i})=1, \, \, i = 1, \dots, 5,$$
at most 5 points of one color can occur in $Q_{x_1,y_1} \cup Q_{x_2,y_2}$.
Apply Lemma~\ref{L1} to the coloring of the plane $P$: for an arbitrary $\delta>0$ there are points $u,v,w$ of three different colors $c(u)$, $c(v)$, $c(w)$ whose pairwise distances do not exceed $\delta$. This means that the quintuples $Q_u$, $Q_v$, $Q_w$ contain at least 3 points of colors $c(u)$, $c(v)$, $c(w)$ respectively. One can choose one point from each so that their projections onto the plane $(0,0,z,t)$ are pairwise distinct and their colors are $c(u)$, $c(v)$, $c(w)$. It is straightforward to check that if the conditions
$$16\left(\frac{\delta}{\varepsilon_1}+2 \frac{\delta^2}{\varepsilon_1^2}\right) \leq \sin \varepsilon_2; $$
$$\delta \leq \frac{\varepsilon_1}{2} \sin \frac{\varepsilon_3}{2}$$
hold, then inequalities (\ref{ineq_phi}) and (\ref{ineq_alpha}) are satisfied.
\end{proof}
We are now ready to prove Theorem~\ref{Th7}. Consider points $v_1, v_2, v_3$ satisfying the conditions of Proposition 4. Let $u_0$ be the circumcenter of the triangle $v_1 v_2 v_3$; let $\overline{n}$ be a unit vector orthogonal to the two-dimensional plane containing the triangle; let $u_1 = u_0 + \delta_1 \overline{n}$; and let $L(u_1, v_1, v_2, v_3)$ be the hyperplane passing through the points $u_1$, $v_1$, $v_2$, $v_3$.
Let $B(u_1; \delta_2) \subset L(u_1, v_1, v_2, v_3)$ be the three-dimensional open ball of radius $\delta_2 > 0$ centered at $u_1$.
For a point $w \in B(u_1; \delta_2)$ define
$$T_{1}(w) = S(v_2; 1) \cap S(v_3; 1) \cap S(w; 1),$$
where $S(v;1)$ is the unit sphere centered at $v$.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{pent3.png}\\
\caption{Construction of a circle of forbidden radius.}
\end{figure}
Let $r_1(w)$ denote the radius of the circle $T_{1}(w)$; the circles $T_{2}(w)$, $T_{3}(w)$ and their radii $r_2(w),r_3(w)$ are defined analogously.
Since the vertices of the triangles $w v_1 v_2$, $w v_2 v_3$, $w v_1 v_3$ lie at distance at most $\delta +\delta_1 +\delta_2$ from the vertices of a triangle lying in the section $\{(0,0)\} \times [0, \varepsilon]^2$, for sufficiently small $\delta$, $\delta_1$, $\delta_2$ the corresponding circles lie inside the slab.
For the three-dimensional ball $B(u_1; \delta_2)$ defined above, introduce the map
$$r: B(u_1; \delta_2) \rightarrow \mathbb{R}^3;$$
$$r(w) = (r_1(w),r_2(w),r_3(w)).$$
Observe that at $w = u_1$ the gradients of the functions $r_i(w)$ are collinear to the medians of the isosceles triangles $u_1 v_2 v_3$, $u_1 v_1 v_3$, $u_1 v_1 v_2$ drawn from the vertex $u_1$:
$$
\nabla r_1(u_1) = \lambda_1 \left(u_1 - (v_2+v_3)/2 \right); \quad
\nabla r_2(u_1) = \lambda_2 \left(u_1 - (v_1+v_3)/2 \right); \quad
\nabla r_3(u_1) = \lambda_3 \left(u_1 - (v_1+v_2)/2 \right),
$$
where $\lambda_i \neq 0$ and the simplex $u_1 v_1 v_2 v_3$ is nondegenerate. Consequently, at $w = u_1$ the Jacobian $\partial r / \partial w$ is nonzero, and in a neighborhood of $u_1$ the map $r(\cdot)$ satisfies the hypotheses of the inverse function theorem. Since the forbidden radii are everywhere dense in a neighborhood of each of the values $r_1(w)$, $r_2(w)$, $r_3(w)$, there is a triple of forbidden radii $r^*_1$, $r^*_2$, $r^*_3$ that has a preimage $u^*$ in $B(u_1; \delta_2)$.
Then, whatever the color of the point $u^*$, at least one of the triangles $u^*v_1v_2$, $u^*v_2v_3$, $u^*v_1v_3$ has trichromatic vertices, and the corresponding circle of forbidden radius must contain at least 3 more colors. Thus $\chi(\mathbb{R}^2\times[0, \varepsilon]^2) \geq 6$. \qed
\subsection{Proof of Theorem \ref{Th8}}
The idea of the proof is to construct a family of closed sets, each of diameter at most 2, that also cover $\mathbb{R}^n$. Once the desired family is found, the claim follows directly from the definition of topological dimension. The standard topology of $\mathbb{R}^n$ is used.
Recall that $C_i$ is the set of all points of $\mathbb{R}^n$ colored with the $i$-th color, $1\leq i \leq m$, and put
$${C^*_i:=\overline{\operatorname{Int} \, \overline{C_i}}}\qquad \mbox{(the closure of the interior of the closure)}.$$
\noindent Decompose each of the sets $C^*_i$ into connected components (with respect to the standard topology): $$C^*_i = \bigcup_{\alpha\in A_i} D_{\alpha}.$$
\noindent For brevity, write $\{ D_\alpha \} = \bigcup\limits_{i=1}^m \bigcup\limits_{\alpha \in A_i} D_\alpha$.
\medskip
\noindent{\bf (i).} {\it The sets $C^*_i$ cover $\mathbb{R}^n$.}
\medskip
\noindent Assume the contrary: $\exists v: \forall i \ \ v\notin C^*_i$. Then there is an open ball $B(v; \varepsilon)$ such that
$$B(v; \varepsilon)\cap C^*_i =\emptyset; \ \ B(v; \varepsilon)\subset\bigcup C_i.$$
Consider some ball $$B^1 \subset B(v; \varepsilon) \setminus \overline{C}_1.$$ Obviously, $B^1$ cannot be a subset of $\overline{C}_i$, for otherwise the intersection of the interior of $\overline{C}_i$ with $B(v; \varepsilon)$ would be nonempty. Define a sequence of nested balls $$B^{k+1} \subset B^k \setminus \overline{C}_k.$$ The points of $B^{m+1}$ belong to none of the sets $\overline{C}_i$, which contradicts the initial assumptions.
\medskip
\noindent{\bf (ii).} {\it If the sphere $S$ of radius 1 centered at a point $v$ contains interior points of the sets $\overline{C}_i$, ${1\leq i\leq k \leq n}$, then $v$ belongs to at least one of the sets $C^*_j$, $k+1\leq j\leq m$.}
\medskip
\noindent
One can choose points $x_1, \dots , x_n$ in such a way that
$$x_i\in S \cap \operatorname{Int} \overline{C}_i, \quad \ 1\leq i \leq k;$$
$$ x_i \in S, \quad k+1 \leq i \leq n;$$
and $\{v, x_1, \dots , x_n\}$ are in general position (in the sense of nondegeneracy of simplices). Choose $\varepsilon > 0$ such that $B(x_i; \varepsilon) \subset \overline{C}_i$, $1 \leq i \leq k$.
The color of the point
$$w = w(q_1,\dots,q_n) \in \bigcap\limits_{1 \leq i \leq n} S(q_i; 1),$$
whenever it is defined, differs from each of the colors of the points $q_1, \dots , q_n$.
Let $$z \in B(0; \varepsilon); \quad y_i = x_i + z.$$ In a sufficiently small neighborhood of the tuple of points $y_i$ the function $w(\cdot)$ is defined and continuous in each argument. Choose points
$$y'_i \in C_i, \quad 1 \leq i \leq k;$$
$$y'_i = y_i, \quad k+1 \leq i \leq n,$$ such that $w(y'_1, \dots , y'_n)$ exists. Then
$$w(y'_1,\dots,y'_k) \in \bigcup_{j=k+1}^{m} {C}_j.$$
Moreover,
$$\delta(y'_1, \dots ,y'_k) = \max \limits_{1 \leq i \leq k} \lVert y'_i - y_i \rVert$$
can be made arbitrarily small; consequently,
$$w(y_1,\dots,y_k) \in \bigcup_{j=k+1}^{m} \overline{C}_j.$$
\noindent Since $z \in B(0; \varepsilon)$ is arbitrary,
$$B(v; \varepsilon) \subset \bigcup_{j=k+1}^{m} \overline{C}_j.$$
Consequently, at least one of the sets $\overline{C}_j$, $j = k+1, \dots ,m$, is everywhere dense in some neighborhood of $v$.
\medskip
\noindent{\bf (iii).}\ {\it If some point $v\in\mathbb{R}^n$ is covered by at most $n$ sets from $\{D_\alpha\}$, then the diameter of at least one of these sets does not exceed $2$.}
\medskip
\noindent Otherwise, every set from $\{D_\alpha\}$ covering $v$ has a nonempty intersection with the sphere $S$ of radius 1 centered at $v$. Without loss of generality assume that the point $v$ is covered by the sets $D_1,\dots,D_n$, which are connected components of $C^*_1,\dots,C^*_n$ respectively. Let $\operatorname{min}\{\operatorname{Diam} D_i\} = 2+\delta$.
Let $w \in \mathbb{R}^n, \, \|w\| = 1$, be some direction, and let $S_\eta$ be the sphere of radius $1$ centered at the point $u(\eta) = v+\eta w$, $\eta\in\mathbb{R}_+$.
Then each of the sets $$T_i = \{\eta\in\mathbb{R}_+: \ \ S_\eta \cap\operatorname{Int} D_i \neq \emptyset \}, \quad 1 \leq i \leq n,$$ is everywhere dense in the segment $[0,\delta]$.
Consequently, $$\forall \eta \in [0,\delta] \ \ u(\eta) \in \bigcup^m_{j=n+1} \overline{C}_j\ .$$
The same reasoning applies to an arbitrary unit vector $w$.
But then every neighborhood of $v$ contains a ball that is a subset of $\bigcup^m_{j=n+1} \overline{C}_j$, and hence an interior point of at least one of the sets $\overline{C}_j, \ j>n$. The resulting contradiction proves (iii).
\medskip
\noindent{\bf (iv).}\ {\it If every point of $\mathbb{R}^n$ is covered by at most $n$ sets from $\{D_\alpha\}$, then the family of sets $\Delta = \{D_\alpha \mid \ \operatorname{Diam}(D_\alpha)\leq 2\}$ covers $\mathbb{R}^n$}.
\medskip
\noindent Follows immediately from (iii).
\medskip
\noindent{\bf (v).}\ {\it There exist sets $D'_1, D'_2,\dots,D'_{n+1} \in \Delta$ with nonempty intersection.}
\medskip
\noindent If some point of $\mathbb{R}^n$ is covered by at least $n+1$ sets from $\{D_\alpha\}$, the conclusion of the theorem follows at once by the argument of step (vi) below; hence, by (iv), we may assume that the family $\Delta$ covers $\mathbb{R}^n$. Consider the ball $B(0; R)\subset\mathbb{R}^n$ and its cover by the family $\Delta$. According to one of the definitions of topological dimension (see P.~S.~Alexandrov and B.~A.~Pasynkov~\cite{AlPas}), for sufficiently large $R$, among the closed sets of diameter at most $2$ that cover $B(0; R)$ there are $n+1$ with nonempty intersection.
\medskip
\noindent{\bf (vi).}\ {\it There exist $n+1$ sets of the family $\{C_i\}$ whose closures have a nonempty intersection.}
\medskip
\noindent Suppose that the sets $D'_1, D'_2,\dots,D'_{n+1} \in \Delta$ satisfy
$$
\bigcap \limits_{i=1}^{n+1} D'_i \neq \emptyset;
$$
$$
D'_i \subset C^*_{l_i}, \, \, i=1, 2, ... , n+1.
$$
Note that the indices $l_i$ are pairwise distinct, since otherwise the pairwise intersecting sets $D'_i$ could not be distinct connected components of the same $C^*_i$. Consequently,
$$
\emptyset \neq \bigcap \limits_{i=1}^{n+1} D'_i \subset \bigcap \limits_{i=1}^{n+1} C^*_{l_i} \subset \bigcap \limits_{i=1}^{n+1} \overline{C}_{l_i},
$$
and $\{C_{l_i}\}$ is the desired subfamily. \qed
\medskip
\textbf{Remark.} Using Sperner's lemma, it is not hard to obtain a bound on the radius of a ball that contains at least one point belonging to $n+1$ of the sets $\overline{C}_i$.
\subsection{Proof of Theorem \ref{Th9}}
Let the coordinate $x$ be full-scale and the coordinates $y$, $z$, $t$ infinitesimal.
\noindent{\bf Upper bound.}\ Color the points with coordinates $(x,y,z,t)$, where $\frac{2k}{3} < x \leq \frac{2(k+1)}{3}$, with the color $k \ \mbox{mod}\ 3$ ($k$ an integer).
\noindent{\bf Lower bound.}\ We exhibit an odd cycle in the distance graph $G_1 (\mathbb{Q}\times [0,\varepsilon]_\mathbb{Q}^3)$.
Take an even $n > 2\varepsilon^{-2}$ and a vector $e = (1-n^{-1},bn^{-1},cn^{-1},dn^{-1})$ such that $b^2+c^2+d^2=2n-1$. This vector has unit length.
Note that $e$ fits into our strip, since
$$\max (|b|n^{-1}, |c|n^{-1}, |d|n^{-1}) < \sqrt{\frac{2}{n}} < \varepsilon.$$
Also consider the vector $e'=(1-n^{-1},-bn^{-1},-cn^{-1},-dn^{-1})$ and the sequence of points $A_i$ defined as follows:
$$A_0 := (0,0,0,0); \ \ \ A_{2k+1} := A_{2k}+e; \ \ \ A_{2k+2} := A_{2k+1}+e'.$$
It is easy to see that $A_n=(n-1,0,0,0)$, since $n$ is even. Thus the points $A_0,\dots,A_n$ together with the points $(1,0,0,0),\dots,(n-2,0,0,0)$ form the desired odd cycle.
It remains to note that for every $\varepsilon>0$ there exist integers $n$, $b$, $c$, $d$ satisfying our conditions (for instance, $b = c = d = 2l+1$ and $n = 6l^2 + 6l + 2$ for sufficiently large $l$). \qed
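The final arithmetic can be verified with exact rational arithmetic; the script below (our illustration, not part of the proof) checks, for the family $b=c=d=2l+1$, $n=6l^2+6l+2$, that $b^2+c^2+d^2=2n-1$, that $e$ has unit length, that the infinitesimal coordinates of $e$ fit in the strip, and that $n$ alternating steps $e, e', e, e', \dots$ indeed land at $(n-1,0,0,0)$:

```python
from fractions import Fraction

def rational_cycle_data(l):
    """Explicit parameters from the proof of Theorem 9."""
    b = 2 * l + 1
    n = 6 * l * l + 6 * l + 2                      # even by construction
    assert n % 2 == 0 and 3 * b * b == 2 * n - 1   # b^2 + c^2 + d^2 = 2n - 1
    e  = (1 - Fraction(1, n),  Fraction(b, n),  Fraction(b, n),  Fraction(b, n))
    ep = (1 - Fraction(1, n), -Fraction(b, n), -Fraction(b, n), -Fraction(b, n))
    assert sum(x * x for x in e) == 1              # |e| = 1, exactly
    assert Fraction(b, n) ** 2 < Fraction(2, n)    # fits in the strip: b/n < sqrt(2/n)
    A = (Fraction(0),) * 4
    for i in range(n):                             # alternate steps e, e', e, e', ...
        A = tuple(a + s for a, s in zip(A, e if i % 2 == 0 else ep))
    return A, n

A, n = rational_cycle_data(l=3)                    # n = 74
assert A == (n - 1, 0, 0, 0)
```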
\section{Conclusion and questions for further research}
We have shown that $5 \leq \chi(\mathbb{R}^2 \times [0,\varepsilon]) \leq 7$ and ${6 \leq \chi(\mathbb{R}^2 \times [0,\varepsilon]^2) \leq 7}$, and also that $\chi(\mathbb{R}^2 \times [0,\varepsilon]^k) \leq 7$ for sufficiently small $\varepsilon$. A natural question arises: does there exist a $k$ such that \\
$\chi(\mathbb{R}^2 \times [0,\varepsilon]^k) = 7$ for arbitrarily small $\varepsilon$?
As one can observe, in those cases where we can compute the chromatic number of a real slab, discrete continuity holds.
Does this property hold in general? In other words, is the function $\chi(\mathbb{K}^n \times [0,\varepsilon]^m)$, where $\mathbb{K} \in \{\mathbb{R}, \mathbb{Q}\}$, discretely continuous in $\varepsilon$?
In the one-dimensional and two-dimensional cases, adding one infinitesimal dimension increases the lower bound on the chromatic number of the space.
General considerations suggest that the bound $\chi(\mathbb{R}^3\times [0,\varepsilon]) \geq 7$ should hold for every positive $\varepsilon$, but we were unable to prove it. Moreover, one is tempted to conjecture that $\chi (\mathbb{R}^n \times [0, \varepsilon]) > \chi (\mathbb{R}^n)$, but this is a much harder statement.
A.~B.~Kupavskii in~\cite{K} raised the question of the maximal guaranteed number of colors on an $m$-dimensional sphere of radius $r$ in a proper coloring of the $n$-dimensional
space in a finite number of colors. The same paper gives bounds for $r$ bounded away from zero. Lemma~\ref{L1} complements these results for infinitesimal $r$, but only in the case $n = 2$, $m = 1$. Apparently, Theorem~\ref{Th8} yields an analogous result for $n = m+1$ with arbitrary $n>2$.
The inverse problem is also of interest:
given a natural number $k$, construct a ``reasonable'' space with chromatic number exactly $k$.
For example, such a statement would be very interesting for a space containing a large affine subspace, in particular for $[0,h_1] \times\dots\times [0,h_m] \times [0,\varepsilon]^l \times \mathbb{R}^s$ with $s>0$.
\vskip 0.5cm
\textbf{Acknowledgements.}
The work was supported by the Russian Science Foundation:
Theorems 7 and 11 and Lemma 1 by grant 16-11-10039, and Theorems 8 and 9 by grant 17-11-01377.
The authors thank Misha Basok and Fedor Petrov for fruitful discussions and valuable remarks, and the anonymous referee for pointing out a great number of inaccuracies and for suggestions on correcting them.
\vskip 0.5cm
\textbf{About the authors.}
Alexei Yakovlevich Kanel-Belov, Laboratory of Advanced Combinatorics, Moscow Institute of Physics and Technology, Institutskiy per. 9, Dolgoprudny, Moscow Region, 141701, Russia.
kanelster@gmail.com
Vsevolod Alexandrovich Voronov, Matrosov Institute for System Dynamics and Control Theory, Siberian Branch of the Russian Academy of Sciences, 664033, Irkutsk, ul. Lermontova 134, P.O. Box 292, Russia.
v-vor@yandex.ru
Danila Dmitrievich Cherkashin, Laboratory of Advanced Combinatorics, Moscow Institute of Physics and Technology, Institutskiy per. 9, Dolgoprudny, Moscow Region, 141701, Russia; Chebyshev Laboratory, St.~Petersburg State University, 14th Line V.O. 29B, St.~Petersburg 199178, Russia;
St.~Petersburg Department of the Steklov Mathematical Institute of the Russian Academy of Sciences,
191023, St.~Petersburg,
nab. r. Fontanki 27,
Russia.
matelk@mail.ru
\section{Technical appendices}
\subsection{Proof of Lemma~\ref{lemma:nice-families-l1-differentiable}}
\label{sec:proof-nice-families-l1-differentiable}
By the triangle inequality, we have
\begin{align}
&\int |p_{\theta_0 + h} - p_{\theta_0} -
h^T \dot{\ell}_{\theta_0} p_{\theta_0}|d\mu \nonumber \\
&\leq \underbrace{\int \left|p_{\theta_0 + h} - p_{\theta_0} -
\half h^T \dot{\ell}_{\theta_0} \sqrt{p_{\theta_0}}
({\sqrt{p_{\theta_0 + h}} + \sqrt{p_{\theta_0}}})\right|
d\mu}_{\defeq I_1(h; \theta_0)}
+ \underbrace{\int \left| \half h^T \dot{\ell}_{\theta_0}
\sqrt{p_{\theta_0}} (\sqrt{p_{\theta_0 + h}} - \sqrt{p_{\theta_0}})\right|
d\mu}_{\defeq I_2(h; \theta_0)}.
\nonumber
\end{align}
We show that each of the integral terms $I_1$ and $I_2$ is
$o(\norm{h})$ as $h \to 0$.
By algebraic
manipulation and the Cauchy--Schwarz inequality,
\begin{align*}
I_1(h; \theta_0) &= \int |\sqrt{p_{\theta_0 + h}} +
\sqrt{p_{\theta_0}}| \cdot \left|\sqrt{p_{\theta_0 + h}} -
\sqrt{p_{\theta_0}} - \half h^T \dot{\ell}_{\theta_0}
\sqrt{p_{\theta_0}}\right| d\mu \nonumber \\
&\leq \left(\int |\sqrt{p_{\theta_0 + h}} + \sqrt{p_{\theta_0}}|^2
d\mu \right)^\half \cdot \left(\int \left|\sqrt{p_{\theta_0 + h}} -
\sqrt{p_{\theta_0}} - \half h^T \dot{\ell}_{\theta_0}
\sqrt{p_{\theta_0}}\right|^2 d\mu\right)^\half
\end{align*}
Jensen's inequality gives
$\int |\sqrt{p_{\theta_0 + h}} + \sqrt{p_{\theta_0}}|^2 d\mu
\leq 2 \int (p_{\theta_0 + h} + p_{\theta_0}) d\mu = 2$.
The assumption
that $\mc{P}$ is QMD at $\theta_0$ immediately yields
$I_1(h; \theta_0) = o(\norm{h})$.
To bound $I_2$, we again apply the Cauchy--Schwarz inequality, obtaining
\begin{equation*}
2 I_2 (h; \theta_0) \leq \left(\int |h^T \dot{\ell}_{\theta_0}
\sqrt{p_{\theta_0}} |^2 d\mu \right)^\half
\cdot \left(\int |\sqrt{p_{\theta_0 + h}} -
\sqrt{p_{\theta_0}}|^2 d\mu \right)^\half
\end{equation*}
Since $\mc{P}$ is QMD at $\theta_0$, we have $\int |\sqrt{p_{\theta_0 + h}}
- \sqrt{p_{\theta_0}}|^2 d\mu = \int |\half h^T \dot{\ell}_{\theta_0}
\sqrt{p_{\theta_0}} |^2 d\mu + o(\norm{h}^2) = O(\norm{h}^2)$
(see~\cite[Ch.~7.2]{VanDerVaart98}). Thus $I_2(h; \theta_0) = O(\norm{h}^2)$,
giving the lemma.
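As a numerical illustration of the quadratic-mean-differentiability structure driving this proof (a sketch, not part of the argument), one can check for the Gaussian location family $N(\theta, 1)$, which is QMD with score $\dot{\ell}_\theta(x) = x - \theta$, that the squared remainder appearing in the QMD definition is indeed $o(\norm{h}^2)$:

```python
import math

def sq_remainder(h, n=20001, lim=12.0):
    # Trapezoidal approximation of the squared QMD remainder
    #   int (sqrt(p_h) - sqrt(p_0) - (h/2) * score * sqrt(p_0))^2 dx
    # for the N(theta, 1) location family at theta_0 = 0, score(x) = x.
    dx = 2 * lim / (n - 1)
    total = 0.0
    for i in range(n):
        x = -lim + i * dx
        p0 = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        ph = math.exp(-(x - h) ** 2 / 2) / math.sqrt(2 * math.pi)
        r = math.sqrt(ph) - math.sqrt(p0) - 0.5 * h * x * math.sqrt(p0)
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * r * r * dx
    return total

# o(h^2) behavior: the remainder divided by h^2 shrinks as h -> 0
r1 = sq_remainder(0.2) / 0.2 ** 2
r2 = sq_remainder(0.02) / 0.02 ** 2
assert r2 < r1 / 10
```

For this family the remainder is in fact $O(h^4)$, so halving $h$ by a factor of ten shrinks the normalized remainder by roughly a factor of one hundred.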
\subsection{Proof of Proposition~\ref{proposition:achievable}}
\label{sec:proof-achievable}
Let $P_0$ and $P_1$ be distributions on $\mc{X}$, each with
densities $p_0,p_1$ according to some base measure $\mu$.
Let $\theta_a = \theta(P_a)$, and consider the problem of privately
collecting observations and deciding whether $\theta = \theta_0$ or $\theta
= \theta_1$. We define a randomized-response based estimator
for this problem using a simple hypothesis test.
Define the ``acceptance'' set
\begin{equation*}
A \defeq \left\{x \in \mc{X} \mid p_0(x) > p_1(x) \right\}.
\end{equation*}
Then a direct calculation gives $P_0(A) - P_1(A) = \tvnorm{P_0 - P_1}$.
Now, consider the following estimator:
for each $X_i$, define
\begin{equation*}
T_i = \indic{X_i \in A}
~~ \mbox{and} ~~
Z_i \mid \{T_i = t\}
= \begin{cases}
1 & \mbox{with~probability~}
(e^\diffp + 1)^{-1}\left(e^{\diffp} t + 1-t\right) \\
0 & \mbox{with~probability~}
(e^{-\diffp} + 1)^{-1}\left(e^{-\diffp} t + 1-t\right)
\end{cases}
\end{equation*}
Then the channel $\channel(\cdot \mid X_i)$ for $Z_i \mid X_i$ is
$\diffp$-differentially-private by inspection, and
setting $\delta_\diffp = \frac{e^\diffp - 1}{e^\diffp + 1}$,
we have
\begin{equation*}
\E_0[Z_i]
= \frac{1 + \delta_\diffp}{2} P_0(A)
+ \frac{1 - \delta_\diffp}{2} P_0(A^c)
= \frac{1- \delta_\diffp}{2} + \delta_\diffp P_0(A)
~~ \mbox{and} ~~
\E_1[Z_i]
= \frac{1- \delta_\diffp}{2} + \delta_\diffp P_1(A)
\end{equation*}
while $Z_i \in \{0, 1\}$.
Now, define the statistic
\begin{equation*}
K_n \defeq \frac{1}{\delta_\diffp}
\left(\frac{1}{n} \sum_{i = 1}^n Z_i - \frac{1-\delta_\diffp}{2}\right),
\end{equation*}
so that $\E_0[K_n] = P_0(A)$ and $\E_1[K_n] = P_1(A)$.
We define our estimator to be
\begin{equation*}
\what{\theta} \defeq \begin{cases}
\theta_0 & \mbox{if~} K_n \ge \frac{P_0(A) + P_1(A)}{2} \\
\theta_1 & \mbox{if~} K_n < \frac{P_0(A) + P_1(A)}{2}. \end{cases}
\end{equation*}
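As a quick numerical sanity check (a sketch, not part of the proof), the following verifies that the channel above is a valid probability kernel whose likelihood ratios are bounded by $e^\diffp$, and that the debiased statistic recovers $P_0(A)$ in expectation; it assumes the normalization $\delta_\diffp = (e^\diffp - 1)/(e^\diffp + 1)$, which is the value making $K_n$ unbiased:

```python
import math

def channel(eps, t):
    # (P(Z=1 | T=t), P(Z=0 | T=t)) for the randomized-response channel above
    p1 = (math.exp(eps) * t + 1 - t) / (math.exp(eps) + 1)
    p0 = (math.exp(-eps) * t + 1 - t) / (math.exp(-eps) + 1)
    return p1, p0

def mean_Z(eps, PA):
    # E[Z_i] when P(X_i in A) = PA
    return channel(eps, 1)[0] * PA + channel(eps, 0)[0] * (1 - PA)

eps = 1.0
delta = (math.exp(eps) - 1) / (math.exp(eps) + 1)  # assumed delta_eps

for t in (0, 1):
    p1, p0 = channel(eps, t)
    assert abs(p1 + p0 - 1) < 1e-12       # a valid conditional law

# likelihood ratios bounded by e^eps: eps-differential privacy
for z in (0, 1):
    a = channel(eps, 1)[1 - z]
    b = channel(eps, 0)[1 - z]
    assert max(a / b, b / a) <= math.exp(eps) + 1e-12

for PA in (0.0, 0.3, 0.9):
    EZ = mean_Z(eps, PA)
    assert abs(EZ - ((1 - delta) / 2 + delta * PA)) < 1e-12  # matches display
    K = (EZ - (1 - delta) / 2) / delta                        # debiasing step
    assert abs(K - PA) < 1e-12                                # E[K_n] = P(A)
```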
We now analyze the performance of our randomized choice $\what{\theta}$. By
construction of the acceptance set $A$, note that
\begin{equation*}
\frac{P_0(A) + P_1(A)}{2}
= P_0(A) + \frac{P_1(A) - P_0(A)}{2}
= P_0(A) - \half \tvnorm{P_1 - P_0}
= P_1(A) + \half \tvnorm{P_1 - P_0}.
\end{equation*}
By Hoeffding's inequality, we thus have
\begin{equation*}
\max\left\{P_0\left(K_n \le \frac{P_0(A) + P_1(A)}{2}
\right),
P_1\left(K_n \ge \frac{P_0(A) + P_1(A)}{2}\right)
\right\}
\le \exp\left(- \frac{n \delta_\diffp^2 \tvnorm{P_0 - P_1}^2}{2}\right).
\end{equation*}
In particular, we have
\begin{equation*}
\E_0[\loss(\what{\theta}, P_0)]
+ \E_1[\loss(\what{\theta}, P_1)]
\le \left[\loss(\theta_1, P_0)
+ \loss(\theta_0, P_1)\right]
\exp\left(-\frac{n \delta_\diffp^2 \tvnorm{P_0 - P_1}^2}{2}\right).
\end{equation*}
Using the reverse triangle condition~\ref{cond:reverse-triangle} on the
distance function's growth, we obtain
\begin{align}
\nonumber
\E_0[\loss(\what{\theta}, P_0)]
+ \E_1[\loss(\what{\theta}, P_1)]
& \le
\Creverse \lossdist(P_0, P_1)
\exp\left(-\frac{n \delta_\diffp^2 \tvnorm{P_0 - P_1}^2}{2}
\right) \\
& \le \Creverse \sup_{P_1 \in \mc{P}}
\lossdist(P_0, P_1)
\exp\left(-\frac{n \delta_\diffp^2 \tvnorm{P_0 - P_1}^2}{2}
\right) \nonumber \\
& = \Creverse \sup_{r \ge 0}
\left\{\modcont(r; P_0)
\exp\left(-\frac{n \delta_\diffp^2 r^2}{2}\right)\right\}.
\label{eqn:minimax-upper-exponential}
\end{align}
The bound~\eqref{eqn:minimax-upper-exponential} is the key inequality.
Let us substitute $\tau^2 = \frac{n r^2 \delta_\diffp^2}{2}$,
or $r = \frac{\sqrt{2} \tau}{\delta_\diffp \sqrt{n}}$ in the expression,
which yields
\begin{equation*}
\E_0[\loss(\what{\theta}, P_0)]
+ \E_1[\loss(\what{\theta}, P_1)]
\le \Creverse \sup_{\tau \ge 0}
\left\{ \modcont\left(\frac{\sqrt{2} \tau}{\delta_\diffp \sqrt{n}};
P_0\right) e^{-\tau^2}\right\}.
\end{equation*}
For all $\tau \le 1$, this gives the result; otherwise, we use
the growth condition on the modulus of continuity to obtain
\begin{equation*}
\E_0[\loss(\what{\theta}, P_0)]
+ \E_1[\loss(\what{\theta}, P_1)]
\le \Creverse \modcont\left(\frac{\sqrt{2}}{
\delta_\diffp \sqrt{n}}; P_0\right)
\Cgrow^\alpha \sup_{\tau \ge 1}
\tau^\alpha e^{-\tau^2}
\end{equation*}
Noting that $\sup_{\tau \ge 0}
\tau^\alpha e^{-\tau^2} = (\alpha/2)^{\alpha/2} e^{-\alpha/2}$ gives
the result.
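The closed form used in the last step can be spot-checked numerically; a brute-force grid search over $\tau$ (a sketch, with $\alpha = 3$ chosen arbitrarily) recovers $(\alpha/2)^{\alpha/2} e^{-\alpha/2}$:

```python
import math

alpha = 3.0  # an arbitrary test exponent
closed_form = (alpha / 2) ** (alpha / 2) * math.exp(-alpha / 2)

# brute-force maximization of tau^alpha * exp(-tau^2) over a fine grid;
# the maximizer is tau = sqrt(alpha / 2), from setting the derivative to zero
grid_max = max((i * 1e-4) ** alpha * math.exp(-(i * 1e-4) ** 2)
               for i in range(1, 100000))
assert abs(grid_max - closed_form) < 1e-6
```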
\subsection{Proof of
Proposition~\ref{proposition:uniform-achievability-one-dim-exp}}
\label{sec:proof-uniform-achievability-one-dim-exp}
We require one additional piece of notation before we begin the proof.
Let $W_i = Z_i - V_i$ be the error in the private version of the
quantity $V_i$, so that $\E[W_i \mid V_i] = 0$, and
\begin{equation*}
W_i = \begin{cases}
\frac{2}{e^\diffp - 1} V_i
- \frac{1}{e^\diffp - 1}
& \mbox{w.p.}~ \frac{e^\diffp}{e^\diffp + 1} \\
\frac{-2 e^\diffp}{e^\diffp - 1} V_i
+ \frac{e^\diffp}{e^\diffp - 1}
& \mbox{w.p.}~ \frac{1}{e^\diffp + 1}.
\end{cases}
\end{equation*}
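As a direct numerical check of the display above (a sketch, not part of the proof), the two-point conditional law of $W_i$ has mean zero and variance $e^\diffp/(e^\diffp - 1)^2$ for either value of $V_i$:

```python
import math

eps = 0.7  # an arbitrary privacy level
e = math.exp(eps)
for V in (0, 1):
    # the two possible values of W_i and their probabilities, as displayed
    w_hi = 2 * V / (e - 1) - 1 / (e - 1)        # w.p. e^eps / (e^eps + 1)
    w_lo = -2 * e * V / (e - 1) + e / (e - 1)   # w.p. 1 / (e^eps + 1)
    p_hi, p_lo = e / (e + 1), 1 / (e + 1)
    mean = p_hi * w_hi + p_lo * w_lo
    var = p_hi * w_hi ** 2 + p_lo * w_lo ** 2
    assert abs(mean) < 1e-12                     # E[W_i | V_i] = 0
    assert abs(var - e / (e - 1) ** 2) < 1e-12   # Var(W_i | V_i)
```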
Recall our definitions of $V_i = \indics{T(X_i) \ge \what{T}_n}$ and $Z_i$ as
the privatized version of $V_i$. Letting $\wb{Z}_n = \frac{1}{n} \sum_{i =
1}^n Z_i$, and similarly for $\wb{V}_n$ and $\wb{W}_n$, recall also the
definition of the random variable $G_n \defeq \Psi(\what{T}_n, \theta_0) =
P_{\theta_0}(T(X) \ge \what{T}_n)$.
By mimicking the delta method, we will show that
\begin{equation}
\sqrt{n}(\what{\theta}_n - \theta_0)
= 2 \information[\theta_0]^{-1}
\cdot \sqrt{n} \left(\wb{V}_n - G_n + \wb{W}_n \right) + o_P(1).
\label{eqn:delta-method-expfam}
\end{equation}
Deferring the proof of the expansion~\eqref{eqn:delta-method-expfam},
let us show how it implies the proposition.
First, with our definition of the $W_i$, we have
\begin{equation*}
\var(W_i \mid V_i) =
\E[W_i^2 \mid V_i]
= \frac{e^\diffp }{(e^\diffp - 1)^2}
= \delta_\diffp^{-2},
\end{equation*}
so that $\wb{W}_n = \frac{1}{n} \sum_{i = 1}^n W_i$ satisfies $\sqrt{n}
\wb{W}_n \cd \normal(0, \delta_\diffp^{-2})$ by the Lindeberg CLT. Thus,
assuming the expansion~\eqref{eqn:delta-method-expfam}, it remains to show
the weak convergence result
\begin{equation}
\label{eqn:asymp-Z-n}
\frac{\sqrt{n}\left(\wb{V}_n - G_n\right)}{\sqrt{G_n(1-G_n)}} \cd \normal(0, 1),
\end{equation}
where $G_n = \Psi(\what{T}_n, \theta_0)$.
By definition, the $\{X_i\}_{i=1}^n$ are independent of $\what{T}_n$, and
hence
\begin{equation*}
\E [V_i \mid \what{T}_n]= \Psi(\what{T}_n, \theta_0) = G_n~~\text{and}~~
\var(V_i \mid \what{T}_n) =
\Psi(\what{T}_n, \theta_0)(1-\Psi(\what{T}_n, \theta_0))
= G_n(1-G_n).
\end{equation*}
The third central moments of the $V_i$ conditional on $\what{T}_n$
have the bound
\begin{equation*}
\E \left[ \left|V_i - \E [V_i \mid \what{T}_n]\right|^3 \mid \what{T}_n \right]
\leq \Psi(\what{T}_n, \theta_0)(1-\Psi(\what{T}_n, \theta_0))
= G_n(1-G_n).
\end{equation*}
Thus, we may apply the Berry-Esseen Theorem
\cite[Thm 11.2.7]{LehmannRo05} to obtain
\begin{equation*}
\sup_{t\in \R}
\left|\P\left(\frac{\sqrt{n}\left(\wb{V}_n - G_n\right)}{\sqrt{G_n(1- G_n)}}
\leq t \mid \what{T}_n\right)
- \Phi(t) \right|
\leq U_n \defeq \frac{1}{\sqrt{n G_n(1- G_n)}} \wedge 2.
\end{equation*}
Jensen's inequality then implies
\begin{equation*}
  \sup_{t \in \R}
  \left|\P\left(\frac{\sqrt{n}\left(\wb{V}_n - G_n\right)}{\sqrt{G_n(1- G_n)}}
  \leq t\right) - \Phi(t)\right|
  \leq
  \E\left[\sup_{t \in \R} \left|
      \P\left(\frac{\sqrt{n}\left(\wb{V}_n - G_n\right)}{\sqrt{G_n(1- G_n)}} \leq t \mid \what{T}_n\right)
      - \Phi(t) \right|\right]
  \leq \E [U_n].
\end{equation*}
To show the convergence~\eqref{eqn:asymp-Z-n}, it is thus sufficient to show
that $\E [U_n] \to 0$ as $n \uparrow \infty$. To that end, the
following lemma on the behavior of $\Psi(t, \theta)
= P_\theta(T(X) \ge t)$ is useful.
\begin{lemma}
\label{lemma:overlap-nonzero-lemma}
Let $t_0 = \E_{\theta_0}[T(X)]$ and
assume that $\var_{\theta_0} (T(X)) > 0$. Then there exist
$\epsilon > 0$ and $c \in (0, \half)$ such that
if $t \in [t_0 \pm \epsilon]$ and
$\theta \in [\theta_0 \pm \epsilon]$, then
$\Psi(t, \theta) \in [c, 1-c]$.
\end{lemma}
\begin{proof}
By the dominated convergence theorem
and our assumption that
$\var_{\theta_0}(T(X)) > 0$,
where $t_0 = \E_{\theta_0}[T(X)]$,
we have
\begin{equation*}
\liminf_{t \uparrow t_0}
\Psi(t, \theta_0)
= P_{\theta_0}(T(X) \ge t_0) \in (0, 1)
~~ \mbox{and} ~~
\limsup_{t \downarrow t_0}
\Psi(t, \theta_0)
= P_{\theta_0}(T(X) > t_0) \in (0, 1).
\end{equation*}
The fact that
$t \mapsto \Psi(t, \theta_0)$ is non-increasing implies
that for some $\epsilon_1 > 0, c \in (0, \frac{1}{4})$,
we have
$\Psi(t, \theta_0) \in [2c, 1-2c]$ for
$t \in [t_0 - \epsilon_1, t_0 + \epsilon_1]$.
Fix this $\epsilon_1$ and $c$.
By~\cite[Thm 2.7.1]{LehmannRo05},
we know that for any $t \in \R$,
the function $\theta \mapsto \Psi(t, \theta)$ is continuous and
non-decreasing. Thus for any
$\epsilon_2 > 0$, we have
\begin{equation*}
\Psi(t_0 + \epsilon_1, \theta_0 - \epsilon_2)
\le \Psi(t, \theta)
\le \Psi(t_0 - \epsilon_1, \theta_0 + \epsilon_2)
~~ \mbox{for}~~
(t, \theta) \in [t_0 \pm \epsilon_1] \times [\theta_0 \pm \epsilon_2].
\end{equation*}
Using the continuity of $\theta \mapsto \Psi(t, \theta)$, we may
choose $\epsilon_2 > 0$ small enough that
\begin{equation*}
\Psi(t, \theta) \in [ c, 1- c]~~\mbox{for}~~
(t, \theta) \in \{t_0 - \epsilon_1, t_0 + \epsilon_1\}
\times \{\theta_0 - \epsilon_2, \theta_0 + \epsilon_2\}.
\end{equation*}
The lemma
follows by taking $\epsilon = \epsilon_1 \wedge \epsilon_2$.
\end{proof}
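To illustrate the lemma concretely (a sketch with an assumed family; the lemma itself is general), take the Gaussian location family $X \sim N(\theta, 1)$ with $T(x) = x$, so $\Psi(t, \theta) = P_\theta(X \ge t)$; then $\Psi$ stays uniformly inside $[c, 1-c]$ on a small box around $(t_0, \theta_0) = (0, 0)$:

```python
import math

def Psi(t, theta):
    # Psi(t, theta) = P_theta(T(X) >= t) for X ~ N(theta, 1), T(x) = x
    return 0.5 * math.erfc((t - theta) / math.sqrt(2))

t0, eps, c = 0.0, 0.25, 0.28  # here t0 = E_{theta_0}[T(X)] = 0
for i in range(11):
    for j in range(11):
        t = t0 - eps + 2 * eps * i / 10
        theta = -eps + 2 * eps * j / 10
        assert c <= Psi(t, theta) <= 1 - c
```

The worst case over the box occurs at the corners, e.g. $\Psi(t_0 + \epsilon, -\epsilon) = 1 - \Phi(2\epsilon) \approx 0.309$ for $\epsilon = 0.25$.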
As $\var_{\theta_0} (T(X)) > 0$ by assumption,
Lemma~\ref{lemma:overlap-nonzero-lemma} and the
fact that $\what{T}_n \cp t_0$ imply
\begin{equation}
\label{eqn:range-of-G-n}
G_n \defeq \Psi(\what{T}_n, \theta_0)
= P_{\theta_0}(T(X) \ge \what{T}_n)\in [c+o_P(1), 1-c + o_P(1)].
\end{equation}
The bounds~\eqref{eqn:range-of-G-n} imply
that $G_n(1-G_n) \geq c(1-c) + o_P(1)$, so
$U_n \cp 0$.
By construction $|U_n| \leq 2$ for all $n$,
so the bounded convergence theorem implies $\E [U_n] \rightarrow 0$,
which was what we required to show the weak convergence
result~\eqref{eqn:asymp-Z-n}.
The joint convergence in the proposition follows because
$\wb{W}_n$ and $\wb{V}_n - G_n$ are conditionally uncorrelated.
\paragraph{The delta method expansion}
We now return to demonstrate the claim~\eqref{eqn:delta-method-expfam}. For
$p \in [0, 1]$, recall the
definition~\eqref{eqn:private-expfam-estimator}
of the function $H$, and define
\begin{equation}
\label{eqn:def-H-n}
H_n(p)
\defeq H(p, \what{T}_n)
= \inf \left\{\theta\in \R \mid P_\theta(T(X) \ge \what{T}_n) \ge p\right\},
\end{equation}
where the value is $-\infty$ or $+\infty$ for $p$ below or above the range
of $\theta \mapsto P_\theta(T(X) \ge \what{T}_n)$, respectively. Then
$\what{\theta}_n = H_n(\wb{Z}_n)$ by
construction~\eqref{eqn:private-expfam-estimator}. We would like to apply
Taylor's theorem and the inverse function theorem to $\what{\theta}_n -
\theta_0 = H_n(\wb{Z}_n) - \theta_0$, but this requires a few additional
steps.
By the inverse function theorem, $p \mapsto H_n(p)$ is
$\mc{C}^\infty$ on the interval $(\inf_\theta \Psi(\what{T}_n, \theta),
\sup_\theta \Psi(\what{T}_n, \theta))$, and letting
\begin{equation*}
\dot{\Psi}_\theta(t, \theta) = \frac{\partial}{\partial \theta}
\Psi(t, \theta)
= \E_\theta[\indic{T(X) \ge t} (T(X) - A'(\theta))]
\end{equation*}
be the derivative of $P_\theta(T(X) \ge t)$ with respect to $\theta$, we
have $H_n'(p) = \dot{\Psi}_\theta(\what{T}_n, H_n(p))^{-1}$ whenever $p$ is
interior to the range of $\theta \mapsto P_\theta(T(X) \ge \what{T}_n)$. To
show that $\wb{Z}_n$ is (typically) in this range, we require a bit of
analysis on $\dot{\Psi}_\theta$.
\begin{lemma}
\label{lemma:continuity-expfam}
The function $(t, \theta) \mapsto \dot{\Psi}_\theta(t, \theta)
= \E_{\theta}[\indic{T(X) \ge t} (T(X) - A'(\theta))]$
is continuous at $(t_0, \theta_0)$, where $t_0 =
\E_{\theta_0}[T(X)] = A'(\theta_0)$.
\end{lemma}
\noindent
To avoid disrupting the flow, we defer the proof to
Section~\ref{sec:proof-continuity-expfam}.
Now, we have that $\dot{\Psi}_{\theta}(t_0, \theta_0) = \half
\E_{\theta_0}[|T(X) - t_0|] > 0$, so Lemma~\ref{lemma:continuity-expfam}
implies there exists $\epsilon > 0$ such that
\begin{equation}
\label{eqn:positive-expfam-deriv}
\inf_{|t - t_0| \le \epsilon
,|\theta - \theta_0| \le \epsilon}
\dot{\Psi}_{\theta}(t, \theta) \ge c > 0
\end{equation}
for some constant $c$. Thus, we obtain that
\begin{align}
\nonumber
\P\left(\wb{Z}_n \not \in \range(\Psi(\what{T}_n, \cdot))\right)
& \le \P\left(\wb{Z}_n \not \in \range(\Psi(\what{T}_n, \cdot)),
\what{T}_n \in [t_0 \pm \epsilon] \right)
+ \P\left(\what{T}_n \not \in [t_0 \pm \epsilon]\right) \\
& \stackrel{(i)}{\le} \P\left(\wb{Z}_n \not \in
[\Psi(\what{T}_n, \theta_0)
\pm c \epsilon] \right)
+ o(1)
\to 0,
\label{eqn:z-in-range}
\end{align}
where inequality~$(i)$ follows because
$\range(\Psi(t, \cdot)) \supset [\Psi(t, \theta_0) \pm c \epsilon]$
for all $t$ such that $|t - t_0| \le \epsilon$
by condition~\eqref{eqn:positive-expfam-deriv}, and the
final convergence because $\wb{Z}_n - \Psi(\what{T}_n, \theta_0)
\cp 0$ and $\what{T}_n$ is consistent for $t_0$.
We recall also that for
any fixed $t$, $\theta \mapsto \Psi(t, \theta)$ is analytic on the interior
of the natural parameter space and strictly increasing at all $\theta$ for
which $\Psi(t, \theta) \in (0, 1)$ (cf.~\cite[Thm.~2.7.1,
Thm.~3.4.1]{LehmannRo05}). Thus,
\begin{equation*}
H_n(\Psi(\what{T}_n, \theta)) = \theta
~~ \mbox{whenever} ~~
\Psi(\what{T}_n, \theta) \in (0, 1).
\end{equation*}
As $G_n = \Psi(\what{T}_n, \theta_0)
\in [c + o_P(1), 1- c+o_P(1)]$ by definition~\eqref{eqn:range-of-G-n} of
$G_n$, we obtain
\begin{equation*}
\P \left(H_n(\Psi(\what{T}_n, \theta_0)) \neq \theta_0\right) \to 0.
\end{equation*}
Now, by the differentiability of $H_n$ on the interior of its
domain (i.e.\ the range of $\Psi(\what{T}_n, \cdot)$), we use
the convergence~\eqref{eqn:z-in-range}
and Taylor's intermediate value theorem to obtain that for
some $p_n$ between $\wb{Z}_n$ and $\Psi(\what{T}_n, \theta_0)$, we have
\begin{align}
\nonumber \sqrt{n}(\what{\theta}_n - \theta_0)
& = \sqrt{n} (\what{\theta}_n - H_n(\Psi(\what{T}_n, \theta_0)))
+ o_P(1) \\
& = H_n'(p_n) \sqrt{n} \left(\wb{Z}_n - \Psi(\what{T}_n, \theta_0)
\right) + o_P(1) \nonumber \\
& = \dot{\Psi}_\theta(\what{T}_n, H_n(p_n))^{-1}
\sqrt{n} \left(\wb{Z}_n - \Psi(\what{T}_n, \theta_0)
\right) + o_P(1)
\label{eqn:expfam-almost-done}
\end{align}
as $p_n \in \interior \dom H_n$ with high probability
by~\eqref{eqn:z-in-range}.
It remains to show that $H_n(p_n) \cp \theta_0$. To see this,
note that whenever $\what{T}_n \in [t_0 \pm \epsilon]$,
the growth condition~\eqref{eqn:positive-expfam-deriv} implies
that
\begin{align*}
\Psi(\what{T}_n,
\theta_0 + \epsilon)
= P_{\theta_0 + \epsilon}(T(X) \ge \what{T}_n)
& \ge P_{\theta_0}(T(X) \ge \what{T}_n)
+ c \epsilon
= \Psi(\what{T}_n, \theta_0) + c \epsilon \\
\Psi(\what{T}_n,
\theta_0 - \epsilon)
= P_{\theta_0 - \epsilon}(T(X) \ge \what{T}_n)
& \le P_{\theta_0}(T(X) \ge \what{T}_n)
- c \epsilon
= \Psi(\what{T}_n, \theta_0) - c \epsilon,
\end{align*}
and thus
\begin{equation*}
\P(|H_n(p_n) - \theta_0| \ge \epsilon)
\le \P(|\wb{Z}_n - \Psi(\what{T}_n, \theta_0)| \ge c \epsilon)
+ \P(|\what{T}_n - t_0| \ge \epsilon)
\to 0.
\end{equation*}
We have the convergence $\dot{\Psi}_\theta(\what{T}_n, H_n(p_n)) \cp \half
\E_{\theta_0}[|T(X) - A'(\theta_0)|] = \half \information[\theta_0]$ by the
continuous mapping theorem, and Slutsky's theorem applied to
Eq.~\eqref{eqn:expfam-almost-done} gives the delta-method
expansion~\eqref{eqn:delta-method-expfam}.
\subsubsection{Proof of Lemma~\ref{lemma:continuity-expfam}}
\label{sec:proof-continuity-expfam}
We have
\begin{align*}
\dot{\Psi}_\theta(t_0, \theta_0)
- \dot{\Psi}_\theta(t, \theta)
& = \E_{\theta_0}[\indic{T(X) \ge t_0}
(T(X) - A'(\theta_0))]
- \E_\theta[\indic{T(X) \ge t}
(T(X) - A'(\theta))] \\
& \stackrel{(i)}{=} \E_{\theta_0}\left[
\hinge{T(X) - t_0}\right]
- \E_\theta[\indic{T(X) \ge t}
(T(X) - t + t - A'(\theta))] \\
& = \E_{\theta_0}\left[\hinge{T(X) - t_0}\right]
- \E_\theta\left[\hinge{T(X) - t}\right]
+ P_\theta(T(X) \ge t) (t - A'(\theta)) \\
& \stackrel{(ii)}{\in} \E_{\theta_0}\left[\hinge{T(X) - t_0}\right]
- \E_\theta\left[\hinge{T(X) - t_0}\right]
\pm |t - t_0| \pm |t - A'(\theta)|,
\end{align*}
where step~$(i)$ follows because $t_0 = A'(\theta_0) = \E_{\theta_0}[T(X)]$,
while the inclusion~$(ii)$ is a consequence of the 1-Lipschitz continuity of
$t \mapsto \hinge{t}$. Now we use the standard
facts that $A(\theta)$ is analytic in $\theta$
and that $\theta \mapsto \E_\theta[f(X)]$ is continuous for any $f$
(cf.~\cite[Thm.~2.7.1]{LehmannRo05}) to see that for
any $\epsilon > 0$, we can choose $\delta > 0$ such that
$|t - t_0| \le \delta$ and $|\theta - \theta_0| \le \delta$ imply
\begin{equation*}
|t - t_0| \le \epsilon, ~~
|t - A'(\theta)| \le \epsilon, ~~ \mbox{and} ~~
\left|
\E_{\theta_0}\left[\hinge{T(X) - t_0}\right]
- \E_\theta\left[\hinge{T(X) - t_0}\right]\right| \le \epsilon.
\end{equation*}
This gives the result.
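The 1-Lipschitz property of the positive-part map $t \mapsto \hinge{t}$ used in step~$(ii)$ admits a one-line numerical check (a trivial sketch):

```python
hinge = lambda t: max(t, 0.0)  # the positive-part map t -> (t)_+
# |max(a,0) - max(b,0)| <= |a - b| for any a, b
for a, b in [(-2.0, 1.5), (0.3, -0.7), (2.2, 2.9), (-1.0, -3.0)]:
    assert abs(hinge(a) - hinge(b)) <= abs(a - b) + 1e-12
```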
\subsection{Proof of Lemma~\ref{lemma:mis-expfam-growth}}
\label{sec:proof-mis-expfam-growth}
Indeed, for $P \in \mc{P}$ let
$\mu(P) = \E_P[X]$
and
$\mu_0 = \E_{P_0}[X]$, and define the diameter
$D \defeq \mathop{\rm diam} (\mathcal{X})
= \sup\{\ltwo{x_1-x_2} \mid x_1, x_2 \in \mathcal{X}\}$.
As $A$ and $A^*$ are $\mc{C}^\infty$,
if we define the remainder
$R(\mu; \mu') = \nabla A^*(\mu) - \nabla A^*(\mu')
- \nabla^2 A^*(\mu') (\mu - \mu')$, then
there exists $G < \infty$ such that
\begin{equation*}
|R(\mu; \mu')|
\le \half G \ltwo{\mu - \mu'}^2
~~ \mbox{for~all~} \mu, \mu' \in \mathcal{X}.
\end{equation*}
Now, define the constant
\begin{equation*}
K \defeq \sup_{x \in \mc{X}} \ltwo{\nabla^2 A^*(\mu_0)(x - \mu_0)},
\end{equation*}
which is positive whenever $\mc{X}$ has at least two elements, as
$\nabla^2 A^* \succ 0$ (recall Eq.~\eqref{eqn:inverse-hessian-conjugate}).
If $K = 0$ then the example becomes trivial as $\card(\mathcal{X}) \le 1$.
We claim that
\begin{equation}
\delta K - 2 G^2 D^2 \delta^2
\le \modltwo(\delta) \le
2 \delta K
+ 2 G^2 D^2 \delta^2
\label{eqn:modltwo-exp-mean}
\end{equation}
A Taylor approximation is useful for this result.
Define the linearized modulus
\begin{equation*}
\modltwo^{\rm lin}(\delta) =
\sup_{P \in \mc{P}} \left\{\ltwo{\nabla^2 A^*(\mu_0)(\mu(P) - \mu_0)}:
\tvnorm{P - P_0} \le \delta\right\}
\end{equation*}
Then the remainder guarantee $|R(\mu; \mu')| \le
\half G \ltwo{\mu - \mu'}^2$ implies
\begin{equation*}
  \left|\modltwo(\delta) - \modltwo^{\rm lin}(\delta)\right|
  \le \half G \cdot \sup\left\{\ltwo{\mu(P) - \mu_0}^2 \mid \tvnorm{P-P_0}
  \le \delta\right\}
  \stackrel{(i)}{\le} 2G \delta^2 K^2,
\end{equation*}
where inequality~$(i)$ follows from Eq.~\eqref{eqn:modltwo-mean}. The
bounds on $\delta K \le \modltwo^{\rm lin}(\delta) \le 2 \delta K$ are immediate
consequences of inequality~\eqref{eqn:modltwo-mean}, yielding
the bounds~\eqref{eqn:modltwo-exp-mean}.
We now show how inequalities~\eqref{eqn:modltwo-exp-mean}
imply the polynomial growth condition~\ref{cond:polynomial-growth}
on $\modltwo$, that
is, that for some constant $\beta < \infty$,
\begin{equation}
\label{eqn:goal-of-example-two}
\sup_{c\ge 1, \delta > 0}
\frac{\modltwo(c\delta)}{c \cdot \modltwo(\delta)}
\le \beta.
\end{equation}
Define $\delta_0 = \frac{K}{4 G^2 D^2}$.
We consider three cases. In the first, when $\delta \ge \delta_0$,
we have for $c \ge 1$ that
\begin{equation*}
\frac{\modltwo(c\delta)}{c \cdot \modltwo(\delta)}
\le \frac{\modltwo(1)}{\modltwo(\delta_0)}
\le \frac{2K + 2G^2 D^2}{\delta_0 K - 2 G^2 D^2 \delta_0^2}
= 16 (K + G^2 D^2) \frac{G^2 D^2}{K^2},
\end{equation*}
In the case that $\delta \le \delta_0$ and
$c\delta \le 1$, we have
$\delta K - 2 G^2 D^2 \delta^2 \ge \delta K / 2$, and
\begin{equation*}
\frac{\modltwo(c\delta)}{c \cdot \modltwo(\delta)}
\le \frac{2 c \delta K + 2G^2 D^2 (c\delta)^2}{c \delta K
- 2 c G^2 D^2 \delta^2}
\le 4 \frac{c \delta K + G^2 D^2 (c\delta)^2}{c \delta K}
\le 4 + 4 \frac{G^2 D^2}{K},
\end{equation*}
and in the final case that $c\delta \ge 1$, we have
$\frac{1}{c} \le \delta$ and so
\begin{equation*}
\frac{\modltwo(c\delta)}{c \cdot \modltwo(\delta)}
\le \frac{\delta \modltwo(1)}{\modltwo(\delta)}
\le \delta \frac{2K + 2 G^2 D^2 }{ \delta K / 2}
= 4 + 4 \frac{G^2 D^2}{K}.
\end{equation*}
These cases combined yield
inequality~\eqref{eqn:goal-of-example-two} for large enough $\beta$,
and thus Condition~\ref{cond:polynomial-growth} holds.
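The algebra in the second case can be spot-checked numerically (a sketch; the constants $K$, $G$, $D$ below are arbitrary sample values, and the check uses only the envelope bounds~\eqref{eqn:modltwo-exp-mean} rather than the modulus itself):

```python
# sample constants; any K, G, D > 0 with the same structure would do
K, G, D = 2.0, 1.5, 0.8
GD2 = G ** 2 * D ** 2
delta0 = K / (4 * GD2)

lower = lambda d: d * K - 2 * GD2 * d ** 2      # lower envelope of modulus
upper = lambda d: 2 * d * K + 2 * GD2 * d ** 2  # upper envelope of modulus
beta2 = 4 + 4 * GD2 / K                          # claimed case-two bound

# for delta <= delta0 and c*delta <= 1: upper(c d) <= beta2 * c * lower(d)
for i in range(1, 1000):
    d = delta0 * i / 1000
    for c in (1.0, 1.5, 2.0):
        if c * d <= 1:
            assert upper(c * d) <= beta2 * c * lower(d) + 1e-9
```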
\subsection{Proof of Lemma~\ref{lemma:le-cam-method}}
\label{sec:proof-le-cam-method}
Let us define the minimal loss function
\begin{equation*}
L_{i,*}(\theta)
\defeq \inf_{P \in \mc{P}_i} L(\theta, P).
\end{equation*}
Then by the definition of the quantity $\lossdist$ and assumption that
$\lossdist(P_0, P_1) \ge \delta$ for all $P_0 \in \mc{P}_0, P_1 \in
\mc{P}_1$, we immediately see that $L_{0,*}(\theta) + L_{1,*}(\theta) \ge
\delta$ for all $\theta$. For any measure $\pi$ on $\mc{P}$, define $P_\pi
= \int P d\pi(P)$ to be the mixture distribution according to $\pi$. Then
for any measures $\pi_0$ on $\mc{P}_0$ and $\pi_1$ on $\mc{P}_1$, we have
\begin{align*}
2 \sup_{P \in \mc{P}}
\E_P[L(\what{\theta}, P)]
& \ge
\int \E_P[\loss(\what{\theta}, P)] d\pi_0(P)
+ \int \E_P[\loss(\what{\theta}, P)] d\pi_1(P) \\
& \ge \int \E_P[L_{0,*}(\what{\theta})] d\pi_0(P)
+ \int \E_P[L_{1,*}(\what{\theta})] d\pi_1(P) \\
& = \E_{P_{\pi_0}}[L_{0,*}(\what{\theta})]
+ \E_{P_{\pi_1}}[L_{1,*}(\what{\theta})].
\end{align*}
Using the standard variational equality $1 - \tvnorm{P_0 - P_1} =
\inf\{\E_{P_0}[f_0] + \E_{P_1}[f_1] \mid f_0, f_1 \ge 0, ~ f_0 + f_1 \ge 1\}$,
the fact that $L_{0,*} + L_{1,*} \ge \delta$ implies
\begin{equation*}
  2 \sup_{P \in \mc{P}}
  \E_P[L(\what{\theta}, P)]
  \ge \delta \inf_{f_0 + f_1 \ge 1, f_0, f_1 \ge 0}
  \left\{\E_{P_{\pi_0}}[f_0]
  + \E_{P_{\pi_1}}[f_1] \right\}
  = \delta \left(1 - \tvnorm{P_{\pi_0} - P_{\pi_1}}\right),
\end{equation*}
which is the desired result.
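The variational equality $1 - \tvnorm{P_0 - P_1} = \inf\{\E_{P_0}[f_0] + \E_{P_1}[f_1] \mid f_0, f_1 \ge 0, f_0 + f_1 \ge 1\}$ is easy to verify on finite spaces, where pointwise minimization puts all constraint weight on the smaller density, so the infimum equals the overlap $\sum_x \min(p_0(x), p_1(x))$ (a sketch with arbitrary p.m.f.s):

```python
# two p.m.f.s on a three-point space, chosen arbitrarily
P0 = [0.5, 0.3, 0.2]
P1 = [0.2, 0.2, 0.6]

tv = 0.5 * sum(abs(a - b) for a, b in zip(P0, P1))
# pointwise optimum of f0*p0 + f1*p1 subject to f0 + f1 >= 1, f >= 0:
# set f = 1 on the smaller density, giving the overlap integral
overlap = sum(min(a, b) for a, b in zip(P0, P1))
assert abs(overlap - (1 - tv)) < 1e-12
```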
\subsection{Proof of Theorem~\ref{theorem:big-tensor}}
\label{sec:proof-big-tensor}
We begin with the arguments common to both results, then specialize
to prove the two inequalities claimed in the theorem.
By the convexity and tensorization properties of the KL-divergence, respectively,
we have
\begin{align}
\nonumber
\dkl{\marginprob_0^n}{\meanmarginprob^n}
& \le \frac{1}{|\packset|}
\sum_{\packval \in \packset} \dkl{\marginprob_0^n}{\marginprob_\packval^n}
\\
& = \frac{1}{|\packset|} \sum_{\packval \in \packset}
\sum_{i = 1}^n \int \dkl{\marginprob_0^{(i)}(\cdot \mid z_{1:i-1})}{
\marginprob^{(i)}_\packval(\cdot \mid z_{1:i-1})}
d\marginprob_0(z_{1:i-1}) \nonumber \\
& = \sum_{i = 1}^n
\int \left[\frac{1}{|\packset|} \sum_{\packval \in \packset}
  \dkl{\marginprob_0^{(i)}(\cdot \mid z_{1:i-1})}{
    \marginprob^{(i)}_\packval(\cdot \mid z_{1:i-1})}
\right]
d\marginprob_0(z_{1:i-1}),
\label{eqn:easy-kl-tensorize}
\end{align}
where $\marginprob^{(i)}$ denotes the distribution of $Z_i$.
Without loss of generality, we may assume that the $Z_i$ are finitely
supported (as all $f$-divergences can be approximated by those of
finitely supported distributions~\cite{CoverTh06}),
and we let $\margindens$ and $\channeldens$ denote the p.m.f.s of
$\marginprob$ and $\channel$, respectively.
Then, as $X_i$ is independent of $Z_{1:i-1}$ for all $i \in \N$, we
obtain that
\begin{equation*}
\margindens_\packval(z_i \mid z_{1:i-1})
= \int \channeldens(z_i \mid x_i, z_{1:i-1}) dP_\packval(x_i
\mid z_{1:i-1})
= \int \channeldens(z_i \mid x_i, z_{1:i-1}) dP_\packval(x_i).
\end{equation*}
Returning to expression~\eqref{eqn:easy-kl-tensorize}
and using that $\dkl{P}{Q} \le \log(1 + \dchis{P}{Q})$ for any
$P$ and $Q$ (see~\cite[Lemma 2.7]{Tsybakov09}), we obtain
\begin{align*}
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\dkl{\marginprob_0^{(i)}(\cdot \mid z_{1:i-1})}{
\marginprob_\packval^{(i)}(\cdot \mid z_{1:i-1})}
& \le
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\dchis{\marginprob_0^{(i)}(\cdot \mid z_{1:i-1})}{
\marginprob_\packval^{(i)}(\cdot \mid z_{1:i-1})}.
\end{align*}
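The bound $\dkl{P}{Q} \le \log(1 + \dchis{P}{Q})$ invoked above is easy to spot-check on finite distributions (a sketch with arbitrary p.m.f.s):

```python
import math

# arbitrary p.m.f.s on a three-point space
P = [0.4, 0.35, 0.25]
Q = [0.3, 0.3, 0.4]

kl = sum(p * math.log(p / q) for p, q in zip(P, Q))        # KL divergence
chi2 = sum((p - q) ** 2 / q for p, q in zip(P, Q))          # chi^2 divergence
assert kl <= math.log(1 + chi2) + 1e-12
```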
Thus, abstracting the entire history as $\what{z}$, the theorem
reduces to proving upper bounds on the quantity
\begin{align}
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\dchis{\marginprob_0^{(i)}(\cdot \mid z_{1:i-1})}{
\marginprob_\packval^{(i)}(\cdot \mid z_{1:i-1})}
& = \frac{1}{|\packset|}
\sum_{\packval \in \packset}
\sum_z \frac{(\margindens_0(z \mid \what{z})
- \margindens_\packval(z \mid \what{z}))^2}{
\margindens_\packval(z \mid \what{z})} \nonumber \\
& = \frac{1}{|\packset|}
\sum_{\packval \in \packset}
\sum_z \frac{(\int \channeldens(z \mid x, \what{z})
(dP_0(x) - dP_\packval(x)))^2}{
\int \channeldens(z \mid x, \what{z}) dP_\packval(x)}.
\label{eqn:chi-square-to-bound}
\end{align}
We provide upper bounds for the quantity~\eqref{eqn:chi-square-to-bound}
in the two cases of the theorem, that is, when the channel
$\channel$ is $\diffp$-differentially private and when it is
$\diffp^2$-$\chi^2$-private.
\paragraph{The differentially private case}
We begin with the case in which $\channeldens(z \mid x, \what{z})
/ \channeldens(z \mid x', \what{z}) \le e^\diffp$ for all
$x, x', z$, and $\what{z}$. Let
$P$ be any distribution on $\mathcal{X}$, which we are free to choose,
and define $\margindens(z \mid \what{z})
= \int \channeldens(z \mid x, \what{z}) dP(x)$. In this case, we note that
$\int \margindens(z \mid \what{z})
(dP_0 - dP_\packval)(x) = 0$ for any $\packval \in \packset$, so that
the quantity~\eqref{eqn:chi-square-to-bound}
is equal to
\begin{align*}
\lefteqn{\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\sum_z \left(
\int\frac{\channeldens(z \mid x, \what{z})
- \margindens(z \mid \what{z})}{
\margindens_\packval(z \mid \what{z})}
(dP_0(x) - dP_\packval(x))
\right)^2
\margindens_\packval(z \mid \what{z})} \\
& =
\sum_z
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(
\int\frac{\channeldens(z \mid x, \what{z})
- \margindens(z \mid \what{z})}{
\margindens(z \mid \what{z})}
(dP_0(x) - dP_\packval(x))
\right)^2
\frac{\margindens(z \mid \what{z})}{
\margindens_\packval(z \mid \what{z})}
\margindens(z \mid \what{z}) \\
& \le
\bigg[\sum_z
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(
\int\frac{\channeldens(z \mid x, \what{z})
- \margindens(z \mid \what{z})}{
\margindens(z \mid \what{z})}
(dP_0(x) - dP_\packval(x))
\right)^2
\margindens(z \mid \what{z})
\bigg]
\cdot
\max_{\packval', z'} \frac{\margindens(z' \mid \what{z})}{
\margindens_{\packval'}(z' \mid \what{z})}.
\end{align*}
Let $C = \max_{z,\packval} \frac{\margindens(z \mid \what{z})}{
  \margindens_{\packval}(z \mid \what{z})}$, which is evidently bounded by
$e^\diffp$. For any $z, \what{z}$, and $x$, we have
\begin{equation*}
\frac{\channeldens(z \mid x, \what{z})
- \margindens(z \mid \what{z})}{
\margindens(z \mid \what{z})}
\in \left[e^{-\diffp} - 1, e^\diffp - 1 \right],
\end{equation*}
so we may choose the constant $c_\diffp = \half (e^{\diffp} + e^{-\diffp}) -
1$ to see that inequality~\eqref{eqn:chi-square-to-bound} has the further
upper bound
\begin{align*}
C \cdot \sum_z &
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(
\int\left(\frac{\channeldens(z \mid x, \what{z})
- \margindens(z \mid \what{z})}{
\margindens(z \mid \what{z})} - c_\diffp \right)
(dP_0(x) - dP_\packval(x))
\right)^2
\margindens(z \mid \what{z})
\nonumber \\
& \le
\frac{C}{4} (e^{\diffp/2} - e^{-\diffp/2})^2
\sum_z \sup_{\linf{f} \le 1}
\frac{1}{|\packset|} \sum_{\packval \in \packset}
\left(\int f(x) (dP_0(x) - dP_\packval(x))\right)^2
\margindens(z \mid \what{z}) \nonumber \\
& = \frac{C}{4} (e^{\diffp/2} - e^{-\diffp/2})^2
\complexity_\infty(\{P_\packval\}_{\packval \in \packset}).
\end{align*}
Recalling that for our choice of $C = \max_{\packval, z}
\frac{\margindens(z \mid \what{z})}{ \margindens_\packval(z \mid
\what{z})}$, we have $C \le e^\diffp$ and $C \le \max_\packval \sup_x
\frac{dP(x)}{dP_\packval(x)}$, we obtain the desired
result~\eqref{eqn:diffp-big-le-cam-tensor} for the differentially private
case.
\paragraph{The $\chi^2$-private case}
In the second case, when the channel is $\chi^2$-private, we require
a slightly different argument.
We begin by using the convexity of the function
$t \mapsto 1 / t$ and, as in the proof of the differentially private case,
that $\int c (dP_0 - dP_\packval) = 0$, to obtain
\begin{align*}
\frac{(\margindens_\packval(z \mid \what{z})
- \margindens_0(z \mid \what{z}))^2}{
\margindens_\packval(z \mid \what{z})}
& =
\frac{\inf_{x_0} (\int (\channeldens(z \mid x, \what{z})
- \channeldens(z \mid x_0, \what{z}))
(dP_0(x) - dP_\packval(x)))^2}{
\int \channeldens(z \mid x', \what{z})
dP_\packval(x')} \\
& \le
\int \frac{\inf_{x_0} (\int (\channeldens(z \mid x, \what{z})
  - \channeldens(z \mid x_0, \what{z}))
  (dP_0(x) - dP_\packval(x)))^2}{
  \channeldens(z \mid x', \what{z})}
dP_\packval(x') \\
& \le
\int \frac{(\int (\channeldens(z \mid x, \what{z})
  - \channeldens(z \mid x', \what{z}))
  (dP_0(x) - dP_\packval(x)))^2}{
  \channeldens(z \mid x', \what{z})}
dP_\packval(x').
\end{align*}
Now, let $P$ be an arbitrary distribution on $\mathcal{X}$.
As a consequence of the preceding
display, and writing $\what{z} = z_{1:i-1}$ for the history,
we may upper bound the average~\eqref{eqn:chi-square-to-bound}
by
\begin{align}
\lefteqn{\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\dchis{\marginprob_0(\cdot \mid \what{z})}{
\marginprob_\packval(\cdot \mid \what{z})}} \nonumber \\
& \le
\sum_z
\int \frac{1}{|\packset|}
\sum_{\packval \in \packset}
\frac{(\int (\channeldens(z \mid x, \what{z})
- \channeldens(z \mid x', \what{z}))(dP_0(x) - dP_\packval(x)))^2}{
\channeldens(z \mid x', \what{z})}
dP_\packval(x') \nonumber \\
& \le \sup_{\packval, x}
\frac{dP_\packval(x)}{dP(x)}
\cdot \bigg[
\int \sum_z
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\frac{(\int (\channeldens(z \mid x, \what{z})
- \channeldens(z \mid x', \what{z}))(dP_0(x) - dP_\packval(x)))^2}{
\channeldens(z \mid x', \what{z})}
dP(x')\bigg]. \nonumber
\end{align}
Rearranging, the final expression is equal to
\begin{align}
\lefteqn{ \max_{\packval} \linf{\frac{dP_\packval}{dP}}
\cdot \bigg[
\int \sum_z
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(\int \frac{\channeldens(z \mid x, \what{z})
- \channeldens(z \mid x', \what{z})}{
\channeldens(z \mid x', \what{z})}
(dP_0(x) - dP_\packval(x))\right)^2
\channeldens(z \mid x', \what{z})
dP(x')\bigg]} \nonumber \\
& = \max_{\packval} \linf{\frac{dP_\packval}{dP}}
\cdot \bigg[
\int \sum_z
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(\int \Delta(z \mid x, x', \what{z})
(dP_0(x) - dP_\packval(x))\right)^2
\channeldens(z \mid x', \what{z})
dP(x')\bigg],
\label{eqn:get-to-something-to-square}
\end{align}
where in the equality~\eqref{eqn:get-to-something-to-square}
we defined the quantity
\begin{equation*}
\Delta(z \mid x, x', \what{z})
\defeq \frac{\channeldens(z \mid x, \what{z})}{
\channeldens(z \mid x', \what{z})} - 1.
\end{equation*}
For any distribution $P^*$ supported on $\mathcal{X}$, we can further upper
bound the innermost summation over $\packval \in \packset$ in
expression~\eqref{eqn:get-to-something-to-square} by
\begin{align*}
\lefteqn{\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(\int \Delta(z \mid x, x', \what{z})
(dP_0(x) - dP_\packval(x))\right)^2} \\
& \le \sup_{f : \mc{X} \to \R} \left\{
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(\int f(x) (dP_0(x) - dP_\packval(x))\right)^2
\mid
\int f(x)^2 dP^*(x)
\le \int \Delta(z \mid x, x', \what{z})^2 dP^*(x)
\right\}.
\end{align*}
Recall the definition~\eqref{eqn:chi-square-complexity}
of $\complexity_2$, and assume without loss of generality that
the supremum in that definition is attained at $P^*$. In this case,
we obtain
\begin{align*}
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(\int \Delta(z \mid x, x', \what{z})
(dP_0(x) - dP_\packval(x))\right)^2
& \le \complexity_2(\{P_\packval\}_{\packval \in \packset})
\cdot \int \Delta(z \mid x, x', \what{z})^2 dP^*(x).
\end{align*}
Substituting this into the upper bound~\eqref{eqn:get-to-something-to-square}
and applying Fubini's theorem, we obtain
\begin{align}
\lefteqn{\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\dchis{\marginprob_0(\cdot \mid \what{z})}{
\marginprob_\packval(\cdot \mid \what{z})}} \nonumber \\
& \le \max_\packval
\linf{\frac{dP_\packval}{dP}}
\complexity_2(\{P_\packval\}_{\packval \in \packset})
\int \int
\bigg[\sum_z \Delta(z \mid x, x', \what{z})^2
\channeldens(z \mid x', \what{z})\bigg]
dP(x') dP^*(x) \nonumber \\
& =
\max_\packval
\linf{\frac{dP_\packval}{dP}}
\complexity_2(\{P_\packval\}_{\packval \in \packset})
\int \int
\dchis{\channel(\cdot \mid x, \what{z})}{
\channel(\cdot \mid x', \what{z})}
dP(x') dP^*(x).
\nonumber
\end{align}
Of course, by assumption we know that $\dchis{\channel(\cdot \mid
x,\what{z})}{\channel(\cdot \mid x', \what{z})} \le \diffp^2$, which gives
the result~\eqref{eqn:chi-square-big-le-cam-tensor} by
way of equality~\eqref{eqn:chi-square-to-bound}.
\subsection{Proof of Proposition~\ref{proposition:sparse-logistic-regression}}
\label{sec:proof-sparse-logistic-regression}
As in our proof of Proposition~\ref{proposition:highdim-mean-hard}
(Section~\ref{sec:proof-highdim-mean-hard}) we construct distributions for
which the parameters are well-separated but for which the complexity measure
$\complexity_2$ is reasonably small. With this in mind, let $\theta_0 \ge 0$
without loss of generality, and consider the base distribution $P_0$ defined
by the conditional p.m.f.
\begin{equation*}
p_0(y \mid x) = \frac{1}{1 + \exp(-y\theta_0)}
\end{equation*}
where we assume $X \sim \uniform(\{-1, 1\}^d)$. Now, as in the proof of
Proposition~\ref{proposition:highdim-mean-hard}, we construct the
well-separated packing set $\packset = \{\pm
e_j\}_{j = 1}^d$, and for a parameter $\delta \in [0, 1]$ to be chosen, we
set
\begin{equation*}
p_\packval(y \mid x)
= \frac{1}{1 + \exp(-y (\theta_0 + \delta \packval^T x))},
\end{equation*}
letting $X \sim \uniform(\{-1, 1\}^d)$ as in the case of $P_0$. With this
setting, we have $\theta_\packval = \theta(P_\packval) = \delta \packval$.
Our starting point is the following lemma, which controls the
complexity measure $\complexity_2$ for this class.
(See Appendix~\ref{sec:proof-highdim-logreg-complexity} for a proof.)
\begin{lemma}
\label{lemma:highdim-logreg-complexity}
Let $P_0$ and $P_\packval$ be as above, and
define
\begin{equation}
\label{eqn:funny-quantities-for-logreg}
\alpha = \frac{e^{\theta_0}}{e^{\theta_0} + 1} - \frac{e^{\theta_0}}{
e^{\theta_0} + e^{-\delta}}
~~ \mbox{and} ~~
\beta = \frac{e^{\theta_0}}{e^{\theta_0} + 1} - \frac{e^{\theta_0}}{
e^{\theta_0} + e^{\delta}}.
\end{equation}
Then
\begin{equation*}
\complexity_2(\{P_\packval\}_{\packval \in \packset})
\le 2 \max\left\{\frac{(\alpha - \beta)^2}{d},
(\alpha + \beta)^2 \right\}.
\end{equation*}
\end{lemma}
Inspecting these quantities,
for $\theta_0 \ge 0$ we have
\begin{equation*}
0 \le \alpha + \beta
= \frac{e^{\theta_0} (e^{\theta_0} - 1)
(e^\delta + e^{-\delta} - 2)}{
1 + e^{\theta_0 - \delta}
+ e^{\theta_0} + e^{\theta_0 + \delta}
+ e^{2 \theta_0 - \delta}
+ e^{2\theta_0} + e^{2 \theta_0 + \delta}
+ e^{3 \theta_0}}
\le e^{-\theta_0} \frac{e^{\theta_0} - 1}{e^{\theta_0}}
(e^\delta + e^{-\delta} - 2),
\end{equation*}
while
\begin{equation*}
0 \le \beta - \alpha
= \frac{e^{\theta_0} (e^\delta - e^{-\delta})}{
e^{\theta_0 - \delta} + e^{\theta_0} + e^{\theta_0 + \delta}
+ e^{2 \theta_0}}
\le e^{-\theta_0} (e^\delta - e^{-\delta}).
\end{equation*}
For $0 \le \delta \le 1$, we thus obtain that
$0 \le \alpha + \beta \le 2 e^{-\theta_0} (1 - e^{-\theta_0}) \delta^2$
and $0 \le \beta - \alpha \le 2 e^{-\theta_0} \delta$, so that
in this regime, we have the complexity bound
\begin{equation}
\label{eqn:highdim-logreg-complexity}
\complexity_2(\{P_\packval\}_{\packval \in \packset})
\le \frac{8\delta^2}{e^{2 \theta_0}}
\max\left\{d^{-1}, (1 - e^{-\theta_0})^2 \delta^2 \right\}.
\end{equation}
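As a numerical sanity check (not part of the formal argument), the two
displayed bounds on $\alpha + \beta$ and $\beta - \alpha$ can be verified
directly from the definition~\eqref{eqn:funny-quantities-for-logreg}; the
grid of $(\theta_0, \delta)$ values below is arbitrary:

```python
import math

def alpha_beta(theta0, delta):
    """alpha and beta of the lemma: differences of logistic probabilities."""
    e0 = math.exp(theta0)
    alpha = e0 / (e0 + 1) - e0 / (e0 + math.exp(-delta))
    beta = e0 / (e0 + 1) - e0 / (e0 + math.exp(delta))
    return alpha, beta

tol = 1e-12
for theta0 in [0.0, 0.1, 0.5, 1.0, 2.0, 5.0]:
    for delta in [0.0, 0.1, 0.3, 0.5, 0.8, 1.0]:
        alpha, beta = alpha_beta(theta0, delta)
        # 0 <= alpha + beta <= e^{-theta0}(1 - e^{-theta0})(e^d + e^{-d} - 2)
        ub_sum = math.exp(-theta0) * (1 - math.exp(-theta0)) * (
            math.exp(delta) + math.exp(-delta) - 2)
        assert -tol <= alpha + beta <= ub_sum + tol
        # 0 <= beta - alpha <= e^{-theta0}(e^d - e^{-d})
        ub_diff = math.exp(-theta0) * (math.exp(delta) - math.exp(-delta))
        assert -tol <= beta - alpha <= ub_diff + tol
```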
Using the upper bound~\eqref{eqn:highdim-logreg-complexity}, we can
substitute into Theorem~\ref{theorem:big-tensor}, choosing $P = P_0$ in the
theorem so that $\linf{dP_\packval / dP_0} \le 1 + e^\delta \le 4$ for
$\delta \le 1$; we thus obtain
\begin{equation*}
\tvnorm{\marginprob_0^n - \meanmarginprob^n}^2
\le \half \dkl{\marginprob_0^n}{\meanmarginprob^n}
\le \frac{16 n \diffp^2}{e^{2 \theta_0}}
\max\left\{d^{-1}, (1 - e^{-\theta_0})^2 \delta^2\right\} \delta^2.
\end{equation*}
If we solve for $\delta^2$ to obtain
$\tvnorms{\marginprob_0^n - \meanmarginprob^n} \le \half$,
we see that the choice
\begin{equation*}
\delta_n^2
\defeq \min\left\{
\frac{d e^{2 \theta_0}}{64 n \diffp^2},
\frac{e^{\theta_0}}{8 (1 - e^{-\theta_0})^2 \sqrt{n \diffp^2}},
1
\right\}
\end{equation*}
is sufficient to guarantee that $\tvnorms{\marginprob_0^n -
\meanmarginprob^n} \le \half$. Using
Lemma~\ref{lemma:private-general-le-cam} and the separation lower
bound~\eqref{eqn:loss-sep-highdim-mean}, we obtain the minimax lower bound
\begin{equation*}
\minimax_n
\ge \min_{\packval \in \packset}
\half \Phi(\delta_n \psi(\packval) / 2)
=
\min_{j = 1, \ldots, d}
\half \Phi\left(
\delta_n \psi(e_j) / 2 \right).
\end{equation*}
\subsubsection{Proof of Lemma~\ref{lemma:highdim-logreg-complexity}}
\label{sec:proof-highdim-logreg-complexity}
We take $P^*$ in the definition~\eqref{eqn:chi-square-complexity} to be
uniform on $(x, y) \in \{-1, 1\}^d \times \{-1, 1\}$. We then have
\begin{align}
\complexity_2(\{P_\packval\})
& \le \sup_{f : P^* f^2 \le 1}
\frac{1}{4^d \cdot 2d} \sum_{\packval \in \packset}
\left(
\sum_{x \in \{\pm 1\}^d}
\sum_{y \in \{\pm 1\}}
f(x, y) (p_0(y \mid x) - p_\packval(y \mid x))\right)^2 \nonumber \\
& \le \sup_{f : P^*f^2 \le 1}
\frac{1}{4^d \cdot d}
\sum_{\packval \in \packset}
\left(
\sum_{x \in \{\pm 1\}^d}
f(x) (p_0(1 \mid x) - p_\packval(1 \mid x))\right)^2 + \cdots \nonumber \\
& ~~~~~
\sup_{f : P^*f^2 \le 1}
\frac{1}{4^d \cdot d}
\sum_{\packval \in \packset}
\left(
\sum_{x \in \{\pm 1\}^d}
f(x) (p_0(-1 \mid x) - p_\packval(-1 \mid x))\right)^2,
\label{eqn:complexity-logreg-big}
\end{align}
where we have used that $(a + b)^2 \le 2 a^2 + 2b^2$. Fixing $f$
temporarily, we consider the terms inside the squares, recalling that
$\packval \in \{\pm e_j\}_{j=1}^d$. For a fixed $\packval \in \{\pm e_j\}$,
because $x$ is uniform, we have
\begin{align*}
\lefteqn{\sum_{x \in \{\pm 1\}^d} f(x) (p_0(1 \mid x)
- p_\packval(1 \mid x))} \\
& = \sum_{x : \packval_j x_j = 1}
f(x) \left[\frac{e^{\theta_0}}{1 + e^{\theta_0}}
- \frac{e^{\theta_0}}{e^{\theta_0} + e^{-\delta}}\right]
+ \sum_{x : \packval_j x_j = -1}
f(x) \left[\frac{e^{\theta_0}}{1 + e^{\theta_0}}
- \frac{e^{\theta_0}}{e^{\theta_0} + e^{\delta}}\right] \\
& = \sum_{x : \packval_j x_j = 1} f(x) \alpha
+ \sum_{x : \packval_j x_j = -1} f(x) \beta
= \sum_x f(x) \frac{\packval^T x}{2}(\alpha - \beta)
+ \sum_x f(x) \frac{\alpha + \beta}{2},
\end{align*}
where $\alpha$ and $\beta$ are as defined in
expression~\eqref{eqn:funny-quantities-for-logreg}.
A similar calculation yields that
\begin{align*}
\sum_x
f(x) \left(p_0(-1 \mid x) - p_\packval(-1 \mid x)\right)
& = -\sum_{x : \packval_j x_j = 1}
f(x) \alpha - \sum_{x : \packval_j x_j = -1} f(x) \beta \\
& =
\sum_x f(x) \frac{\packval^T x}{2} (\beta - \alpha)
- \sum_x f(x) \frac{\alpha + \beta}{2}.
\end{align*}
Returning to expression~\eqref{eqn:complexity-logreg-big},
we thus obtain
\begin{align*}
\complexity_2(\{P_\packval\})
& \le
\sup_{f : \E[f(X)^2] \le 1}
\frac{1}{2 \cdot 4^d \cdot d}
\sum_{\packval \in \packset}
\left(\sum_x f(x) \packval^T x (\alpha - \beta)
+ \sum_x f(x) (\beta + \alpha) \right)^2 \\
& \le
\sup_{f : \E[f(X)^2] \le 1}
\frac{1}{d}
\left(\sum_{\packval \in \packset}
(\alpha - \beta)^2 \E[f(X) X^T \packval]^2
+ (\alpha + \beta)^2 \E[f(X)]^2 \right)
\end{align*}
where $X \sim \uniform(\{-1, 1\}^d)$. Finally, we use that
$\sum_{\packval \in \packset} \packval\packval^T = 2 I_{d \times d}$ to
obtain that
\begin{align*}
\complexity_2(\{P_\packval\})
& \le \frac{2}{d} \sup_{f : \E[f(X)^2] \le 1}
\left((\alpha - \beta)^2 \ltwo{\E[f(X) X]}^2
+ d (\alpha + \beta)^2 \E[f(X)]^2 \right).
\end{align*}
As the final step, we apply an argument analogous to that in
the proof of Proposition~\ref{proposition:highdim-mean-hard}
to bound the expectations involving $f(X)$. Indeed,
for any $a, b \ge 0$ and $\E[f(X)^2] = 1$, we have
\begin{align*}
\lefteqn{a^2 \ltwo{\E[f(X) X]}^2
+ b^2 \E[f(X)]^2
= \sup_{\ltwo{w} \le 1}
\E\left[f(X) \left[\begin{matrix} a X \\ b \end{matrix}
\right]^T w\right]^2} \\
& ~~~~~ \le
\E[f(X)^2]
\sup_{\ltwo{w} \le 1} \E[\<(aX ~ b), w\>^2]
= \sup_{\ltwo{w} \le 1}
w^T \left[\begin{matrix} a^2 I & 0 \\ 0 & b^2 \end{matrix}
\right] w = \max\{a^2, b^2\}.
\end{align*}
Returning to the preceding bound on $\complexity_2$, we find that
\begin{align*}
\complexity_2(\{P_\packval\})
& \le
2 \max\left\{\frac{(\alpha - \beta)^2}{d}, (\alpha + \beta)^2\right\}.
\end{align*}
\subsection{Logistic regression and Bernoulli estimation}
\label{sec:bernoulli-lr}
Let us consider the local minimax complexity metrics and $L^1$-information
in
the context of two problems for which the results are particularly
evocative and simple to describe: estimating the parameter of a Bernoulli
random variable and estimation in a 1-dimensional logistic regression. The
problems are related, so we treat them simultaneously.
\subsubsection{Private Bernoulli estimation}
For the Bernoulli case, we start by taking $P_0$ to be the distribution
under which $X_i \simiid \bernoulli(p_0)$, that is, $P_0(X_i = 1) = p_0$. In this case,
the sample mean $\wb{X}_n = \frac{1}{n} \sum_{i = 1}^n X_i$ achieves
mean-squared error $\E[(\wb{X}_n - p_0)^2] = \frac{p_0(1 - p_0)}{n}$, so
that for $p_0$ near 0 or 1, the problem is easy. In the private case,
however, the difficulty of the problem is \emph{independent of the parameter
$p_0$}. Indeed, let $P$ be the $\bernoulli(p)$ distribution, which
satisfies
$\tvnorm{P - P_0} = |p - p_0|$.
Theorem~\ref{theorem:modulus-of-continuity} then yields the following
corollary.
\begin{corollary}
\label{corollary:bernoulli-lower}
There exists a numerical constant $c > 0$ such that the following
holds. Let $P_0$ be $\bernoulli(p_0)$, $\mc{P} = \{\bernoulli(p)\}_{p \in
[0, 1]}$ the family of Bernoulli distributions, and $L(\theta, P_p) =
(\theta - p)^2$ be the squared error. Then for the collection
$\qfam$ of $\diffp^2$-$\chi^2$-private channels,
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}, \qfam)
\ge c \frac{1}{n \diffp^2}.
\end{equation*}
\end{corollary}
\noindent
Corollary~\ref{corollary:bernoulli-lower} shows that under our
notions of privacy, it is impossible to adapt to problem difficulty. The
lower bound in Corollary~\ref{corollary:bernoulli-lower} is tight to within
numerical constants for $\diffp$ not too large (even under
$\diffp$-differential privacy); the classical randomized response
estimator~\cite{Warner65, DuchiJoWa18} achieves the risk.
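To make this concrete, the following is a minimal sketch of a randomized
response estimator (an illustration, not the exact construction of the cited
works; the sample size, seed, and parameter values are arbitrary):

```python
import math
import random

def randomized_response(x, eps, rng):
    """Report x in {0,1} truthfully w.p. e^eps/(1+e^eps); flip otherwise.

    The likelihood ratio of any output under x vs. 1-x is exactly e^eps,
    so this channel is eps-differentially private.
    """
    keep = math.exp(eps) / (1 + math.exp(eps))
    return x if rng.random() < keep else 1 - x

def estimate_p(zs, eps):
    """Debias the privatized mean: E[Z] = q + (1 - 2q) p with q = 1/(1+e^eps)."""
    q = 1 / (1 + math.exp(eps))
    zbar = sum(zs) / len(zs)
    return (zbar - q) / (1 - 2 * q)

rng = random.Random(0)
p0, eps, n = 0.05, 1.0, 200_000
xs = [1 if rng.random() < p0 else 0 for _ in range(n)]
zs = [randomized_response(x, eps, rng) for x in xs]
p_hat = estimate_p(zs, eps)  # close to p0, with variance of order 1/(n eps^2)
```

The variance of $\what{p}$ here scales as $1/(n \diffp^2)$ regardless of the
value of $p_0$, consistent with the lower bound in
Corollary~\ref{corollary:bernoulli-lower}.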
\subsubsection{Private 1-dimensional logistic regression}
\label{sec:1-dim-logreg}
A result similar to Corollary~\ref{corollary:bernoulli-lower} holds for
logistic regression problems, a setting relevant
for modern uses of private estimation, such as learning a classifier from
(privately shared) user data~\cite{ErlingssonPiKo14,AbadiErGoMcPaMiTaZh17,
ApplePrivacy17}. In this case, the lower
bound and the gap between private and non-private estimation are more
striking. To see this, let $P_0$ be the distribution on pairs $(x, y) \in
\{-1, 1\}^2$ such that $x$ is uniform and
\begin{equation*}
P_0(y \mid x) = \frac{1}{1 + e^{-y \theta_0 x}},
\end{equation*}
where $\theta_0 > 0$. The Fisher information for the parameter $\theta$ in
this model is $I_{\theta}^{-1} = (1 + e^{\theta})(1 + e^{-\theta})$,
so that given an i.i.d.\ sample $(X_i, Y_i) \sim P_0$, the maximum
likelihood estimator $\what{\theta}_n^{\rm ml}$ satisfies
\begin{equation}
\label{eqn:logreg-asymptotics}
\sqrt{n} (\what{\theta}_n^{\rm ml} - \theta_0)
\cd \normal\left(0, 2 + e^{\theta_0} + e^{-\theta_0}\right).
\end{equation}
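As a quick check of the Fisher information calculation, one can compare the
closed form with a finite-difference computation of
$-\E[\partial_\theta^2 \log p_\theta(Y \mid X)]$ by enumerating the four
outcomes (a sketch; the evaluation point and step size are arbitrary):

```python
import math

def loglik(theta, x, y):
    """Log-likelihood log p_theta(y | x) for the 1-d logistic model."""
    return -math.log(1 + math.exp(-y * theta * x))

def fisher_info(theta, h=1e-4):
    """-E[d^2/dtheta^2 log p_theta(Y|X)] via enumeration + central differences."""
    total = 0.0
    for x in (-1, 1):
        for y in (-1, 1):
            # joint mass of (x, y): x uniform on {-1, 1}, y logistic given x
            p = 0.5 / (1 + math.exp(-y * theta * x))
            d2 = (loglik(theta + h, x, y) - 2 * loglik(theta, x, y)
                  + loglik(theta - h, x, y)) / h**2
            total += p * d2
    return -total

theta0 = 1.3
closed_form = 1 / ((1 + math.exp(theta0)) * (1 + math.exp(-theta0)))
assert abs(fisher_info(theta0) - closed_form) < 1e-6
```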
The asymptotics~\eqref{eqn:logreg-asymptotics} are not the entire story. In
many applications of logistic regression, especially in machine
learning~\cite{HastieTiFr09}, one wishes to construct a
classifier with low classification risk or to provide good confidence
estimates $p(y \mid x)$ of a label $y$ given covariates $x$. We expect in
such situations that large parameter values $\theta_0$ should make the
problem easier; this is the case. To make this concrete,
consider the absolute error in the conditional probability $p_\theta(y
\mid x)$, a natural error metric for classification or confidence accuracy:
for a logistic distribution $P_0$ parameterized by $\theta_0$, we define the
loss
\begin{equation*}
L_{\rm pred}(\theta, P_0)
\defeq \E_{P_0}\left[\left|p_\theta(Y \mid X) - p_{\theta_0}(Y \mid X)
\right|\right],
\end{equation*}
where $p_\theta(y \mid x) = \frac{1}{1 + e^{-y \theta x}}$.
By the delta method
and convergence~\eqref{eqn:logreg-asymptotics},
setting $\phi(t) = \frac{1}{1 + e^t}$, we have
\begin{equation*}
\sqrt{n} \cdot L_{\rm pred}(\what{\theta}_n^{\rm ml}, P_0)
\cd \frac{1}{\sqrt{2 + e^{\theta_0} + e^{-\theta_0}}}
|W| ~~~ \mbox{where~} W \sim \normal(0, 1),
\end{equation*}
and because $L_{\rm pred}$ is bounded in $[0, 1]$, we have
\begin{equation}
\label{eqn:logistic-non-private}
\E_{P_0}\left[L_{\rm pred}(\what{\theta}_n^{\rm ml}, P_0)\right]
= \frac{\sqrt{2 / \pi}}{\sqrt{2 + e^{\theta_0} + e^{-\theta_0}}}
\cdot \frac{1}{\sqrt{n}} (1 + o(1)).
\end{equation}
The asymptotic value of the loss $L_{\rm pred}$, normalized
by $\sqrt{n}$, scales as $e^{-|\theta_0|/2}$, so that the problem
is easier when the parameter $\theta_0$ is large.
In the private case, large parameters yield no such easy classification
problems, and there
is an \emph{exponential} gap (in the parameter $\theta$) between
the prediction risk of private and non-private estimators.
\begin{corollary}
\label{corollary:logistic-lower}
There exists a numerical constant $c > 0$ such that the following
holds. Let $\mc{P}_{\rm log}$ be the family of 1-parameter logistic
distributions on pairs $(x, y) \in \{\pm 1\}^2$ and let $\qfam$ be the
collection of $\diffp^2$-$\chi^2$-private channels.
Then the local minimax prediction error satisfies
\begin{equation*}
\localminimax_n(P_{\theta_0}, L_{\rm pred}, \mc{P}_{\rm log}, \qfam)
\ge c \min\left\{\frac{1}{\sqrt{n \diffp^2}},
\frac{1}{1 + e^{|\theta_0|}}\right\}.
\end{equation*}
\end{corollary}
\begin{proof}
Without loss of generality, we assume that $\theta_0 > 0$.
The variation distance between logistic distributions $P_\theta$
and $P_{\theta_0}$ for $\theta = \theta_0 + \Delta \in \R$ is
\begin{equation*}
\tvnorm{P_\theta - P_{\theta_0}}
= \left|\frac{e^\theta - e^{\theta_0}}{
1 + e^\theta + e^{\theta_0} + e^{\theta + \theta_0}}\right|
= \left|\frac{e^{\theta_0}}{1 + e^{\theta_0 + \Delta}
+ e^{\theta_0} + e^{2 \theta_0 + \Delta}}\right|
|e^\Delta - 1|
\le e^{-\theta_0} |1 - e^{-\Delta}|.
\end{equation*}
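Because the proof hinges on this closed form, a direct enumeration over the
four outcomes provides a useful sanity check (a sketch; the test values of
$(\theta, \theta_0)$ are arbitrary):

```python
import math

def p_joint(theta, x, y):
    """Joint mass of (x, y) in {-1,1}^2 under the logistic model, x uniform."""
    return 0.5 / (1 + math.exp(-y * theta * x))

def tv(theta, theta0):
    """Variation distance by direct enumeration."""
    return 0.5 * sum(abs(p_joint(theta, x, y) - p_joint(theta0, x, y))
                     for x in (-1, 1) for y in (-1, 1))

def tv_closed(theta, theta0):
    """The closed form from the display above."""
    return abs(math.exp(theta) - math.exp(theta0)) / (
        1 + math.exp(theta) + math.exp(theta0) + math.exp(theta + theta0))

for theta, theta0 in [(0.3, 0.0), (1.5, 1.0), (2.0, -1.0), (0.0, 0.0)]:
    assert abs(tv(theta, theta0) - tv_closed(theta, theta0)) < 1e-12

# the displayed upper bound, for theta = theta0 + Delta with theta0, Delta >= 0
for theta0, Delta in [(0.5, 0.2), (1.0, 1.0), (2.0, 0.1)]:
    bound = math.exp(-theta0) * (1 - math.exp(-Delta))
    assert tv_closed(theta0 + Delta, theta0) <= bound + 1e-12
```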
For any $\delta > 0$, to have $\tvnorm{P_{\theta_0 + \Delta}
- P_{\theta_0}} \le \delta$, then, it suffices to have
\begin{equation}
e^{-\theta_0} (1 - e^{-\Delta}) \le \delta
~~ \mbox{that is,} ~~
0 \le \Delta \le -\log\hinge{1 - \delta e^{\theta_0}}.
\label{eqn:sufficient-logistic-tv}
\end{equation}
Now we evaluate $\lossdist(P_0, P_1)$, the separation between $P_0$ and $P_1$ for
the loss $L_{\rm pred}$. Denote $\phi(t) = 1 / (1 + e^t)$. For distributions
$P_a$ with parameters $\theta_a$, $L_{\rm pred}(\theta, P_0) =
\phi(-\theta_0) |\phi(-\theta) - \phi(-\theta_0)| + \phi(\theta_0)
|\phi(\theta) - \phi(\theta_0)|$, and thus $\lossdist(P_0, P_1)$ satisfies
\begin{align*}
\lossdist(P_0, P_1)
= \inf_\theta \left\{L_{\rm pred}(\theta, P_0)
+ L_{\rm pred}(\theta, P_1) \right\}
& = |\phi(-\theta_0) - \phi(-\theta_1)|
= |\phi(\theta_0) - \phi(\theta_1)|,
\end{align*}
where we have used that $\phi(\theta) + \phi(-\theta) = 1$ for all
$\theta$. For $\delta > 0$, then, using the sufficient
condition~\eqref{eqn:sufficient-logistic-tv}
for $\tvnorms{P_{\theta_0 + \Delta} - P_{\theta_0}} \le \delta$, the choice
$\Delta = -\log \hinge{1 - \delta e^{\theta_0}}$ yields that
whenever $\delta < e^{-\theta_0}$,
\begin{align*}
\modcont[L_{\rm pred}](P_{\theta_0}, \delta, \mc{P}_{\rm log})
\ge \lossdist(P_{\theta_0}, P_{\theta_0 + \Delta}) =
\left|\frac{\delta}{(1 + e^{-\theta_0})(1 + e^{-\theta_0}
(1 - \delta e^{\theta_0}))}\right|
\ge \frac{1}{(1 + e^{-\theta_0})^2} \cdot \delta.
\end{align*}
For $\delta \ge e^{-\theta_0}$, we have $\modcont[L_{\rm pred}](P_{\theta_0},
\delta) \ge (1 + e^{\theta_0})^{-1}$. Substituting the choice
$\delta = 1 / \sqrt{8 n \diffp^2}$ as in
Theorem~\ref{theorem:modulus-of-continuity}
gives the desired lower bound.
\end{proof}
\section{Definitions of privacy}
\label{sec:definition-of-privacy}
Our starting point is a formalization of our notions of local privacy.
With the notion~\eqref{eqn:sequential-interactive} of sequentially
interactive channels, where the $i$th private observation
is drawn conditionally on the past as $Z_i \mid X_i = x, Z_1, \ldots, Z_{i-1}
\sim \channel(\cdot \mid x, Z_{1:i-1})$, we consider several notions of
privacy, going from the strongest to the weakest.
First is \emph{local differential privacy}, which
Warner~\cite{Warner65} implicitly proposed
in his 1965 work on survey sampling and which
\citet{EvfimievskiGeSr03} and \citet{DworkMcNiSm06} later defined explicitly.
\begin{definition}
\label{definition:local-dp}
The channel $\channel$ is \emph{$\diffp$-locally differentially private}
if for all $i \in \N$, $x, x' \in \mc{X}$, and
$z_{1:i-1} \in \mc{Z}^{i-1}$, we have
\begin{equation*}
\sup_{A \in \sigma(\mc{Z})}
\frac{\channel(A \mid x, z_{1:i-1})}{\channel(A \mid x', z_{1:i-1})}
\le e^\diffp.
\end{equation*}
The channel $\channel$ is \emph{non-interactive} if
\begin{equation*}
\channel(A \mid x, z_{1:i-1}) = \channel(A \mid x)
\end{equation*}
for all $z_{1:i-1} \in \mc{Z}^{i-1}$ and $A \in \sigma(\mc{Z})$.
\end{definition}
\citet{DuchiJoWa18} consider this notion of privacy, developing its
consequences for minimax optimal estimation. It is a satisfying definition
from a privacy point of view, and an equivalent view is that an adversary
knowing the data is either $x$ or $x'$ cannot accurately test, even
conditional on the output $Z$, whether the generating data was $x$ or
$x'$. To mitigate the consequent difficulties for estimation and
learning with differentially private procedures,
researchers have proposed a number of
weakenings of Definition~\ref{definition:local-dp}, which we also consider.
To that end, a second notion of privacy, which \citet{DworkRo16} propose and
\citet{BunSt16} develop, reposes on R\'{e}nyi-divergences. For an $\alpha \ge
1$, the R\'{e}nyi-divergence of order $\alpha$ is
\begin{equation*}
\rendiv{P}{Q}
\defeq \frac{1}{\alpha - 1}
\log \int \left(\frac{dP}{dQ}\right)^\alpha dQ,
\end{equation*}
where for $\alpha = 1$ one takes the downward limit as $\alpha \downarrow
1$, yielding $\rendiv{P}{Q} = \dkl{P}{Q}$. We then have the following
definition.
\begin{definition}
\label{definition:local-cdp}
The channel $\channel$ is \emph{$(\kappa, \rho)$-zero-concentrated
locally differentially private} (zCDP) if for all $i \in \N$,
$\alpha \ge 1$, $x, x' \in \mc{X}$, and $z_{1:i-1} \in \mc{Z}^{i-1}$, we have
\begin{equation*}
\rendiv{\channel(\cdot \mid x, z_{1:i-1})}{
\channel(\cdot \mid x', z_{1:i-1})}
\le \kappa + \rho \alpha.
\end{equation*}
\end{definition}
\noindent
An equivalent definition is that the log likelihood ratio
$\log \frac{dQ(Z \mid x, z_{1:i-1})}{dQ(Z \mid x', z_{1:i-1})}$ has
sub-Gaussian tails, that is, for
$Z \sim \channel(\cdot \mid x', z_{1:i-1})$, we have
\begin{equation*}
L \defeq \log \frac{dQ(Z \mid x, z_{1:i-1})}{dQ(Z \mid x', z_{1:i-1})}
~~ \mbox{satisfies} ~~
\E\left[\exp\left(
\lambda L \right)\right]
\le \exp\left(\rho \lambda^2
+ \lambda (\rho + \kappa)\right)
\end{equation*}
for all $\lambda \ge 0$ (and $\E[\exp(L)] = 1$). \citet{Mironov17} proposes
a natural relaxation of Definition~\ref{definition:local-cdp},
suggesting that we require the bound to hold only for a \emph{single} fixed $\alpha >
1$. This yields the following definition.
\begin{definition}
\label{definition:local-renyi}
The channel $\channel$ is \emph{$(\alpha, \diffp)$-R\'{e}nyi locally
differentially private} if for all $i \in \N$, $x, x' \in \mc{X}$, and $z_{1:i-1}
\in \mc{Z}^{i-1}$, we have
\begin{equation*}
\rendiv{\channel(\cdot \mid x, z_{1:i-1})}{
\channel(\cdot \mid x', z_{1:i-1})}
\le \diffp.
\end{equation*}
\end{definition}
Perhaps the most salient point in Definition~\ref{definition:local-renyi} is
the choice $\alpha = 2$, which will be important in our subsequent
analysis. Consider a prior distribution on two points $x, x'$, represented
by $\pi(x) \in [0,1]$ and $\pi(x') = 1 - \pi(x)$, and then consider
the posterior $\pi(x \mid Z)$ and $\pi(x' \mid Z)$ after observing the
private quantity $Z \sim \channel(\cdot \mid x)$. Then $(2,
\diffp)$-R\'{e}nyi privacy is equivalent~\cite[Sec.~VII]{Mironov17} to the
condition that the prior and posterior odds ratios of $x$ against $x'$ do
not change much in expectation:
\begin{equation}
\label{eqn:renyi-prior-posterior}
\E\left[\frac{\pi(x \mid Z) / \pi(x' \mid Z)}{
\pi(x) / \pi(x')} \mid x \right] \le e^\diffp
\end{equation}
for all two-point priors $\pi$, where the expectation is taken
over $Z \mid x$. (For $\diffp$-differential
privacy, the inequality holds for \emph{all} $Z$ without expectation).
Because R\'{e}nyi divergences are monotonic in $\alpha$
(cf.~\cite[Thm.~3]{ErvenHa14}), any
channel that is $(\alpha, \diffp)$-R\'{e}nyi private
is also $(\alpha', \diffp)$-R\'{e}nyi private for $\alpha' \le \alpha$.
Our final notion of privacy, which is related to
Definition~\ref{definition:local-renyi}, is based on $f$-divergences. Recall that for a convex function $f :
\R_+ \to \R \cup \{+\infty\}$ with $f(1) = 0$, the $f$-divergence between
distributions $P$ and $Q$ is
\begin{equation*}
\fdiv{P}{Q} \defeq \int f\left(\frac{dP}{dQ}\right) dQ,
\end{equation*}
which is non-negative and strictly positive when $P \neq Q$ and $f$ is
strictly convex at the point $1$. We consider $f$-divergences parameterized
by $k \in \openright{1}{\infty}$ of the
form
\begin{equation*}
f_k(t) \defeq |t - 1|^k.
\end{equation*}
\begin{definition}
\label{definition:f-divergence-dp}
For $k \in \openright{1}{\infty}$, the channel $\channel$ is
\emph{$\diffp$-$f_k$-divergence locally private} if for all $i \in \N$, $x,
x' \in \mc{X}$, and $z_{1:i-1} \in \mc{Z}^{i-1}$, we have
\begin{equation*}
\fdivf{f_k}{\channel(\cdot \mid x, z_{1:i-1})}{
\channel(\cdot \mid x', z_{1:i-1})}
\le \diffp^k.
\end{equation*}
\end{definition}
\noindent
When $k = 2$, this is the familiar
$\chi^2$-divergence~\cite{LieseVa06,Tsybakov09}, and it is equivalent to
$(2, \log(1 + \diffp^2))$-R\'{e}nyi differential privacy. We
describe this special situation as \emph{$\diffp^2$-$\chi^2$-privacy}.
The definitions provide varying levels of privacy. It is
immediate that if a channel is $\diffp$-differentially private, then it is
$(e^\diffp - 1)$-$f_k$-divergence locally private. For $\diffp \le 1$, this
implies $(e - 1)\diffp$-$f_k$-divergence local privacy.
It is also clear that Definition~\ref{definition:local-dp}
is stronger than Definition~\ref{definition:local-cdp},
which is in turn stronger than Definition~\ref{definition:local-renyi}.
We can quantify this as well: $\diffp$-differential
privacy implies $(0, \half \diffp^2)$-zero-concentrated differential
privacy. For $k = 2$, we also find that if the channel $\channel$ is
$(\kappa, \rho)$-zCDP, then it is immediate that it satisfies
$\diffp^2$-$\chi^2$-divergence privacy with $\diffp^2 = e^{\kappa + 2
\rho} - 1$, where we take $\alpha = 2$ in the definition of the
R\'{e}nyi-divergence.
Our results all apply for $\chi^2$-private channels, so that
$\chi^2$-privacy (Definition~\ref{definition:local-renyi} with
$\alpha = 2$ or Definition~\ref{definition:f-divergence-dp} with
$k = 2$) implies strong lower bounds on estimation.
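These relationships are easy to check numerically for a concrete channel.
The sketch below uses binary randomized response, which is
$\diffp$-differentially private, and verifies the $\chi^2$ and R\'{e}nyi
claims above for a few arbitrary values of $\diffp$ (an illustration only):

```python
import math

def rr_channel(x, eps):
    """eps-DP binary randomized response: output distribution over {0, 1}."""
    keep = math.exp(eps) / (1 + math.exp(eps))
    return {x: keep, 1 - x: 1 - keep}

def chi_sq(p, q):
    """chi^2-divergence between two distributions on {0, 1}."""
    return sum((p[z] - q[z]) ** 2 / q[z] for z in (0, 1))

def renyi(p, q, alpha=2):
    """Renyi divergence of order alpha between distributions on {0, 1}."""
    return math.log(sum(p[z] ** alpha * q[z] ** (1 - alpha)
                        for z in (0, 1))) / (alpha - 1)

for eps in [0.1, 0.5, 1.0, 2.0]:
    p, q = rr_channel(0, eps), rr_channel(1, eps)
    # eps-DP: likelihood ratios bounded by e^eps
    assert all(p[z] / q[z] <= math.exp(eps) + 1e-12 for z in (0, 1))
    # hence (e^eps - 1)^2-chi^2 privacy ...
    assert chi_sq(p, q) <= (math.exp(eps) - 1) ** 2 + 1e-12
    # ... which coincides with (2, log(1 + chi^2))-Renyi privacy
    assert abs(renyi(p, q) - math.log(1 + chi_sq(p, q))) < 1e-12
```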
\section{Discussion}
\label{sec:discussion}
By the careful construction of locally optimal and adaptive estimators, as
well as our local minimax lower bounds, we believe the results in this paper
indicate more precisely the challenges associated with locally private
estimation. To illustrate this, let us reconsider the estimation of a
linear functional $v^T \theta$ in a classical statistical problem. Indeed,
let $\{P_\theta\}$ be a family with Fisher information matrices
$\{\fisher[\theta]\}$ and a score function $\dot{\ell}_\theta : \mc{X} \to
\R^d$. Then a classical estimator $\what{\theta}_n$ of the parameter
$\theta_0$ is efficient~\cite[Sec.~8.9]{VanDerVaart98} (among regular
estimators) if and only if
\begin{equation*}
\what{\theta}_n
- \theta_0
= \frac{1}{n} \sum_{i = 1}^n
-\fisher[\theta_0]^{-1} \dot{\ell}_{\theta_0}(X_i)
+ o_P(1 / \sqrt{n}),
\end{equation*}
and an efficient estimator $\what{\psi}_n$ of $v^T\theta$ satisfies
$\what{\psi}_n = v^T \theta_0 - n^{-1} \sum_{i = 1}^n v^T
\fisher[\theta_0]^{-1} \dot{\ell}_{\theta_0}(X_i) + o_P(n^{-1/2})$. In
contrast, in the private case, our locally minimax optimal estimators
(recall Sections~\ref{sec:one-param-expfams}
and~\ref{sec:mis-specified-expfam}) have the asymptotic form
\begin{equation*}
\what{\psi}_{{\rm priv},n} = v^T \theta_0
-v^T
\bigg(\frac{1}{n} \sum_{i = 1}^n \fisher[\theta_0]^{-1}\dot{\ell}_{\theta_0}(X_i)
\bigg)
+ \frac{1}{n} \sum_{i = 1}^n W_i
+ o_P(1 / \sqrt{n}),
\end{equation*}
where the random variables $W_i$ must add noise of a magnitude scaling as
$\frac{1}{\diffp} \sup_x |v^T \fisher[\theta_0]^{-1}
\dot{\ell}_{\theta_0}(x)|$, because otherwise it is possible to distinguish
examples for which $v^T \fisher[\theta_0]^{-1} \dot{\ell}_{\theta_0}(X_i)$
is large from those for which it has small magnitude. This enforced lack of
distinguishability of ``easy'' problems (those for which the scaled score
$\fisher[\theta_0]^{-1} \dot{\ell}_{\theta_0}(X_i)$ is typically small) from
``hard'' problems (for which it is large) is a feature of local privacy
schemes, and it helps to explain the difficulty of estimation.
We thus believe it prudent to more carefully explore feasible definitions of
privacy, especially in local senses. Regulatory decisions and protection
against malfeasance may require less stringent notions of privacy than pure
differential privacy, but local notions of privacy---where no sensitive
non-privatized data leaves the hands of a sample participant---are
desirable. The asymptotic expansions above suggest that a notion of privacy
allowing some type of \emph{relative} noise addition, which would preserve
the easiness of ``easy'' problems, will help.
Perhaps large values of $\diffp$, at least for high-dimensional
problems, may still provide acceptable privacy protection, especially in
concert with centralized privacy guarantees. We look forward
to continuing study of these fundamental limitations and acceptable
tradeoffs between data utility and protection of study participants.
\section{Examples}
\label{sec:examples}
The ansatz of finding a locally most difficult problem via the modulus of
continuity gives an approach to lower bounds that leads to non-standard
behavior for a number of classical and not-so-classical problems. In this
section, we investigate examples to illustrate the consequences of assuming
local privacy, showing how it leads to a different geometry of lower bounds
than classical cases. Our first step is to provide a private analogue of
the Fisher information (Sec.~\ref{sec:fisher-information}), showing in
particular that the Fisher information no longer governs the complexity of
estimation. We use this to prove lower bounds for estimation in Bernoulli
and logistic models (Sec.~\ref{sec:bernoulli-lr}), showing that even in one
dimension there are substantial consequences to (locally) private
estimation. In the final two subsections,
we develop a methodology based on Fisher scoring and one-step
corrected estimators to adaptively achieve our local minimax
bounds for exponential families with and without mis-specification.
\input{fisher-information-bounds}
\input{basic-logistic-regression}
\input{one-param-exponential-families}
\input{multi-param-exponential-families}
\section{Experimental investigation: generalized linear modeling}
\label{sec:experiments}
In this section, we perform experiments investigating the behavior of
our proposed locally optimal estimators, comparing their performance both to
non-private estimators and to minimax optimal estimators developed by
\citet{DuchiJoWa18} for locally private estimation.
In our experiments, we consider fitting a generalized linear model
for a variable $Y$ conditioned on $X$, where the model has the form
\begin{equation}
\label{eqn:glm}
p_\theta(y \mid x)
= \exp\left(T(x, y)^T \theta - A(\theta \mid x)\right),
\end{equation}
where $A(\theta \mid x) = \log \int e^{T(x, y)^T \theta} d\basemeasure(y)$ for
some base measure $\basemeasure$ and $T : \mc{X} \times \mc{Y} \to \R^d$ is
the sufficient statistic. We give a slight elaboration of the techniques of
\citet[Sec.~5.2]{DuchiJoWa18} for generalized linear models. In our context,
we wish to model $P(Y \mid X)$ using the GLM model~\eqref{eqn:glm}, which
may be mis-specified as in Section~\ref{sec:mis-specified-expfam}, but where
we assume that the base distribution on $X$ is known. This assumption is
strong, but may (approximately) hold in practice; in biological
applications, for example, we may have a large collection of covariate data
and wish to estimate the conditional distribution of $Y \mid X$ for a new
outcome $Y$~\cite[e.g.][]{CandesFaJaLv18}.
For a distribution $P$ on the pair $(X, Y)$,
let $P_{\mathsf{x}}$ denote the marginal over $X$, which we assume is fixed and known,
$P_{\mathsf{y}|\mathsf{x}}$ be the conditional distribution over $Y$ given $X$,
and $P = P_{\mathsf{y}|\mathsf{x}} P_{\mathsf{x}}$ for shorthand.
Define the population risk, using the log loss
$\loss_\theta(y \mid x)
= -\log p_\theta(y \mid x)$, by
\begin{equation}
\label{eqn:population-risk}
\risk_P(\theta) = \E_P[\loss_\theta(Y \mid X)]
= \E_P[-T(X, Y)]^T \theta
+ \E_{P_{\mathsf{x}}}[A(\theta \mid X)]
= -\E_P[T(X, Y)]^T \theta + A_{P_{\mathsf{x}}}(\theta),
\end{equation}
where we use the shorthand $A_{P_{\mathsf{x}}}(\theta) \defeq \E_{P_{\mathsf{x}}}[A(\theta \mid
X)]$. Now, let $\mc{P}_{\mathsf{y}}$ be a collection of conditional distributions
of $Y$ given $X$, and for $P_{\mathsf{y}|\mathsf{x}} \in \mc{P}_{\mathsf{y}}$, we define
\begin{equation*}
\theta(P_{\mathsf{y}|\mathsf{x}})
\defeq \argmin_\theta \risk_{P_{\mathsf{y}|\mathsf{x}} P_{\mathsf{x}}}(\theta)
= \nabla A_{P_{\mathsf{x}}}^*(\E_{P_{\mathsf{y}|\mathsf{x}} P_{\mathsf{x}}}[T(X, Y)]),
\end{equation*}
in complete analogy to the general exponential family case in
Section~\ref{sec:mis-specified-expfam}.
In our experiments, we study estimation of the linear
functional $v^T \theta$, where the loss
for an estimator $\what{\functional}$ is
\begin{equation*}
L(\what{\functional}, P_{\mathsf{y}|\mathsf{x}})
\defeq \Phi\big(\big|\what{\functional} - v^T \theta(P_{\mathsf{y}|\mathsf{x}})\big|\big),
\end{equation*}
where $\Phi : \R \to \R_+$ is nondecreasing. As motivation, consider the
problem of testing whether a covariate $X_j$ is relevant to a binary outcome
$Y \in \{-1, 1\}$. In this case, a logistic GLM model~\eqref{eqn:glm} has
the form $p_\theta(y \mid x) = \exp(yx^T \theta) / (1 + \exp(y x^T
\theta))$, and using $v = e_j$, the standard basis vector, estimating $v^T
\theta$ corresponds to asking whether $\theta_j \lessgtr 0$, while
controlling for the other covariates.
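As a sanity check on the parametrization~\eqref{eqn:glm}, the following sketch (our illustration, not code from any reference implementation) uses the binary case with counting base measure on $y \in \{-1, 1\}$, so that $T(x, y) = yx$ and $A(\theta \mid x) = \log(e^{x^T\theta} + e^{-x^T\theta})$, matching the form used in the flow-cytometry model later, and verifies that $p_\theta(\cdot \mid x)$ normalizes:

```python
import numpy as np

def T(x, y):
    # sufficient statistic of the binary GLM: T(x, y) = y * x
    return y * x

def A(theta, x):
    # log-partition: A(theta | x) = log sum_{y in {-1, +1}} exp(T(x, y)^T theta)
    u = x @ theta
    return np.logaddexp(u, -u)

def p(theta, x, y):
    # conditional density p_theta(y | x) = exp(T(x, y)^T theta - A(theta | x))
    return np.exp(T(x, y) @ theta - A(theta, x))

rng = np.random.default_rng(0)
theta, x = rng.normal(size=3), rng.normal(size=3)
total = p(theta, x, 1) + p(theta, x, -1)
print(total)  # the conditional probabilities over y sum to 1
```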
\subsection{Optimal estimation for linear functionals of GLM parameters}
\label{sec:optimal-glm-experiments}
Extending the results of Section~\ref{sec:mis-specified-expfam} is nearly
immediate for our situation. Let us assume for this that the range of the
sufficient statistic $T(x, y)$ is contained in a norm ball $\{t \in \R^d
\mid \norm{t} \le 1\}$. Then
Proposition~\ref{proposition:mis-specified-expfam},
applied
to the loss $L(\what{\functional}, P_{\mathsf{y}|\mathsf{x}}) =
\Phi(\what{\functional} - v^T \theta(P_{\mathsf{y}|\mathsf{x}}))$,
yields the following lower bound.
\begin{corollary}
Let $\mc{P}_{\mathsf{y}}$ be a collection of conditional distributions on
$Y \mid X$, $P_0 \in \mc{P}_{\mathsf{y}}$, and
$\channeldistset_\diffp$ be the collection
of $\diffp^2$-$\chi^2$-sequentially interactive private channels.
Then
for numerical constants $c_0, c_1 > 0$ and all
large enough $n$,
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}_{\mathsf{y}}, \channeldistset_\diffp)
\ge c_0
\sup_{P_{\mathsf{y}|\mathsf{x}} \in \mc{P}_{\mathsf{y}}}
\Phi\left(c_1 \left|\frac{v^T \nabla^2 A_{P_{\mathsf{x}}}(\theta_0)^{-1}
(\E_{P_0 P_{\mathsf{x}}}[T(X, Y)] - \E_{P_{\mathsf{y}|\mathsf{x}} P_{\mathsf{x}}}[T(X, Y)])}{
\sqrt{n \diffp^2}}\right|\right).
\end{equation*}
\end{corollary}
\noindent
If the set $\mc{P}_{\mathsf{y}}$ and distribution $P_{\mathsf{x}}$ are such that
$\{\E_{P_{\mathsf{y}|\mathsf{x}} P_{\mathsf{x}}}[T] \mid P_{\mathsf{y}|\mathsf{x}} \in \mc{P}_{\mathsf{y}}\} \supset \{t \in \R^d
\mid \norm{t} \le r\}$,
then we have the simplified lower bound
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}_{\mathsf{y}}, \channeldistset_\diffp)
\ge c_0
\Phi\Big(c_1 \frac{r \dnorm{\nabla^2 A_{P_{\mathsf{x}}}(\theta_0)^{-1} v}}{
\sqrt{n \diffp^2}}\Big).
\end{equation*}
An optimal estimator is similar to that we describe in
Section~\ref{sec:mis-specified-expfam}. Consider a non-private sample
$\{(X_i, Y_i)\}_{i=1}^n$, and split it into samples of size $n_1 =
\ceil{n^{2/3}}$ and $n_2 = n - n_1$. As in
Section~\ref{sec:mis-specified-expfam},
for $i = 1, \ldots, n_1$,
let $Z_i$ be any $\diffp$-locally differentially private
estimate of $T(X_i, Y_i)$ with $\E[Z_i \mid X_i, Y_i] = T(X_i, Y_i)$ and
$\E[\norm{Z_i}^2] < \infty$, and define
$\wt{\mu}_n = \wb{Z}_{n_1} = \frac{1}{n_1} \sum_{i = 1}^{n_1} Z_i$
and
$\wt{\theta}_n = \nabla A^*_{P_{\mathsf{x}}}(\wt{\mu}_n)
= \argmin_\theta \{-\wt{\mu}_n^T \theta + A_{P_{\mathsf{x}}}(\theta)\}$.
Then, for $i = n_1 + 1, \ldots, n$, let
\begin{equation*}
Z_i = v^T \nabla^2 A_{P_{\mathsf{x}}}(\wt{\theta}_n)^{-1} T(X_i, Y_i)
+ \frac{r \dnorms{\nabla^2 A_{P_{\mathsf{x}}}(\wt{\theta}_n)^{-1} v}}{\diffp}
W_i
~~ \mbox{where} ~~
W_i \simiid \laplace(1),
\end{equation*}
where we recall that $\nabla^2 A_{P_{\mathsf{x}}}(\theta)
= \E_{P_{\mathsf{x}}}[\cov_{P_\theta}(T(X,Y) \mid X)]$ for $P_\theta$ the
GLM model~\eqref{eqn:glm}. The $Z_i$ are evidently $\diffp$-differentially
private, and we then define the private estimator
\begin{equation}
\label{eqn:private-glm-estimator}
\what{\functional}_n
\defeq \wb{Z}_{n_2} + v^T \left(\wt{\theta}_n
- \nabla^2 A_{P_{\mathsf{x}}}(\wt{\theta}_n)^{-1} \wt{\mu}_n\right).
\end{equation}
An identical analysis to that we use to prove
Proposition~\ref{proposition:private-expfam-mis-estimator} then gives the
following corollary, in which we recall the Mahalanobis norm $\norm{x}_C^2 =
x^T C x$.
\begin{corollary}
\label{corollary:local-glm-asymptotics}
Let $\what{\functional}_n$ be the
estimator~\eqref{eqn:private-glm-estimator} and
$\theta_0 = \nabla A_{P_{\mathsf{x}}}^*(\E_P[T(X, Y)])
= \argmin_\theta \risk_P(\theta)$. Then
\begin{equation*}
\sqrt{n} (\what{\functional}_n - v^T \theta_0)
\cd \normal\left(0,
\norm{\nabla^2 A_{P_{\mathsf{x}}}(\theta_0)^{-1} v}_{\cov(T(X, Y))}^2
+ \frac{2}{\diffp^2}
\dnorm{\nabla^2 A_{P_{\mathsf{x}}}(\theta_0)^{-1} v}^2\right).
\end{equation*}
\end{corollary}
\noindent
In this case, when $\diffp$ is large, the estimator becomes nearly
efficient---the local minimax variance is precisely the first term in the
variance of Corollary~\ref{corollary:local-glm-asymptotics}.
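To make the two-stage construction~\eqref{eqn:private-glm-estimator} concrete, the following is a minimal numerical sketch (our illustration) in a covariate-free one-parameter family with $A(\theta) = \log(e^\theta + e^{-\theta})$, so that $\nabla A(\theta) = \tanh\theta$, $\nabla A^*(\mu) = \operatorname{arctanh}(\mu)$, and $\nabla^2 A(\theta) = 1 - \tanh^2\theta$, with $T(y) = y \in \{-1, 1\}$ and $v = r = 1$. The stage-one Laplace release with scale $2/\diffp$ is one simple unbiased $\diffp$-private mechanism for a statistic bounded in $[-1, 1]$, not the $\ell_\infty$ mechanism of \citet{DuchiJoWa18}:

```python
import numpy as np

rng = np.random.default_rng(1)
eps, n, theta0 = 2.0, 200_000, 0.5
# covariate-free toy family: A(theta) = log(e^theta + e^{-theta}), T(y) = y
pT = (1 + np.tanh(theta0)) / 2
T = rng.choice([-1.0, 1.0], size=n, p=[1 - pT, pT])

n1 = int(np.ceil(n ** (2 / 3)))
# stage 1: unbiased eps-DP releases (|T_i| <= 1, so Laplace scale 2/eps)
Z1 = T[:n1] + rng.laplace(scale=2 / eps, size=n1)
mu_tilde = float(np.clip(Z1.mean(), -0.99, 0.99))
theta_tilde = np.arctanh(mu_tilde)       # grad A*(mu) = arctanh(mu)
H = 1 - np.tanh(theta_tilde) ** 2        # Hessian of A at theta_tilde

# stage 2: privatize the corrected statistics v H^{-1} T_i  (v = r = 1)
Z2 = T[n1:] / H + rng.laplace(scale=(1 / H) / eps, size=n - n1)

# one-step corrected private estimate of v^T theta_0
chi_hat = Z2.mean() + theta_tilde - mu_tilde / H
print(chi_hat)  # close to theta0 = 0.5
```

The one-step term $\wt{\theta}_n - \nabla^2 A(\wt{\theta}_n)^{-1} \wt{\mu}_n$ cancels the first-order error of the noisy plug-in estimate, so the final estimate is accurate even though stage one uses only $n^{2/3}$ observations.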
\subsection{Flow cytometry experiments}
\newcommand{\thetamle}[1]{\theta_{\rm ml}^{({#1})}}
\newcommand{\thetasgm}[1]{\what{\theta}_{\rm sg}^{({#1})}}
\newcommand{\thetainit}{\what{\theta}_{\rm init}}
\newcommand{\thetaonestep}[1]{\what{\theta}_{\rm os}^{(#1)}}
In this section, we investigate the performance of our locally private
estimators on a flow-cytometry dataset for predicting protein
expression~\cite[Ch.~17]{HastieTiFr09}. We compare our local optimal
one-step estimators against minimax optimal (parameter) estimators that
\citet{DuchiJoWa18} develop. The flow-cytometry dataset contains
measurements of the expression levels of $d = 11$ proteins on $n = 7466$
cells, and the goal is to understand the network structure linking the
proteins: how the expression level of protein $j$ depends on the
remaining proteins. The raw data is heavy-tailed and skewed, so we apply
an inverse tangent transformation, mapping each expression level $x_{ij}
\mapsto \tan^{-1} (x_{ij})$. We treat the data as a matrix $X \in \R^{n
\times d}$ and then consider the problem of predicting column $i$ of $X$
from the remaining columns. To compare the methods and to guarantee
existence of a ground truth in our experiments, we treat $X$ as the
\emph{full population}, so that each experiment consists of sampling rows of
$X$ with replacement.
Let $x \in \R^d$ denote a row of $X$. For each $i \in [d]$, we
wish to predict $x_i$ based on $x_{-i} \in \R^{d-1}$, the remaining
covariates, and we use a logistic regression model to
perform the slightly simpler task of predicting $y = \sign(x_i)$. That
is, for each $i$ we model
\begin{equation*}
\log \frac{P_\theta(\sign(x_i) = 1 \mid x_{-i})}{
P_\theta(\sign(x_i) = -1 \mid x_{-i})}
= \theta^T x_{-i} + \theta_{\rm bias},
\end{equation*}
so that $T(x_{-i}, y) = y [x_{-i}^T ~ 1]^T$ and $A(\theta \mid x_{-i}) =
\log(e^{\theta^T x_{-i} + \theta_{\rm bias}} + e^{-\theta^T x_{-i} -
\theta_{\rm bias}})$, where $y = \sign(x_i)$ is the sign of the expression
level of protein $i$.
For each $i \in \{1, \ldots, d\}$, we let
$\thetamle{i} \in \R^d$ be the parameter (including the bias) maximizing
the likelihood for this logistic model of predicting $x_i$
using the full data $X$.
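For concreteness, the following sketch shows one way to compute such a per-protein fit (our illustration: synthetic correlated data stands in for the cytometry matrix $X$, and we use undamped Newton's method; any maximum likelihood solver works):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 4
# synthetic stand-in for the arctan-transformed cytometry matrix, with a
# common latent factor so that the columns ("proteins") are correlated
latent = rng.standard_normal((n, 1))
X = np.arctan(latent + 0.8 * rng.standard_normal((n, d)))

i = 0                                  # predict the sign of protein i from the rest
y = np.sign(X[:, i])
Z = np.hstack([np.delete(X, i, axis=1), np.ones((n, 1))])  # covariates + bias

# Newton's method on the logistic log loss
#   loss(theta) = sum_k [ log(e^{u_k} + e^{-u_k}) - y_k u_k ],  u = Z theta
theta = np.zeros(d)
for _ in range(25):
    u = Z @ theta
    grad = Z.T @ (np.tanh(u) - y)
    hess = (Z * (1 - np.tanh(u) ** 2)[:, None]).T @ Z
    theta -= np.linalg.solve(hess, grad)
theta_ml = theta                       # analogue of theta_ml^{(i)}
print(theta_ml)
```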
We perform multiple experiments, where each is as follows. We sample $N$
rows of $X$ uniformly (with replacement), and we perform two private
procedures (and one non-private procedure) on the sampled data
$X_{\text{new}} \in \R^{N \times d}$. We vary the privacy parameter in
$\diffp \in \{1, 4\}$.
\begin{enumerate}[(i)]
\item The first procedure is the minimax optimal stochastic gradient
procedure of \citet[Secs.~4.2.3 \& 5.2]{DuchiJoWa18}. In brief, this
procedure begins from $\theta^0 = 0$, and at iteration $k$ draws a pair
$(x,y)$ uniformly at random, then uses a carefully designed
$\diffp$-locally private version $Z^k$ of $T = T(x,y)$ with the property
that $\E[Z^k \mid x, y] = T(x, y)$ and $\sup_k \E[\norm{Z^k}^2] < \infty$,
updating
\begin{equation*}
\theta^{k + 1} = \theta^k - \eta_k \left(\nabla A_{P_{\mathsf{x}}}(\theta^k) - Z^k\right),
\end{equation*}
where $\eta_k > 0$ is a stepsize sequence. (We use the optimal $\ell_\infty$
sampling mechanism~\cite[Sec.~4.2.3]{DuchiJoWa18} to construct $Z^k$.)
We use stepsizes $\eta_k = 1 / (20 \sqrt{k})$, which gave the best
performance over many choices of the stepsize multiplier and of the power
$\beta$ in $\eta_k \propto k^{-\beta}$.
We perform $N$ steps of this stochastic gradient method,
yielding estimator $\thetasgm{i}$ for prediction of protein $i$ from
the others.
\item The second procedure is the one-step corrected
estimator~\eqref{eqn:private-glm-estimator}. To construct the initial
$\wt{\theta}_n$, we again use \citeauthor{DuchiJoWa18}'s $\ell_\infty$
sampling mechanism to construct an approximate estimate $\wt{\mu}_{n} =
\frac{1}{n_1} \sum_{i=1}^{n_1} Z_i$ and let $\what{\theta}_{\rm init} = \wt{\theta}_n$. For
each coordinate $i = 1, \ldots, d$, we then construct $\thetaonestep{i}$
precisely as in Eq.~\eqref{eqn:private-glm-estimator},
using $v = e_1, \ldots, e_d$.
\item The final procedure is the non-private maximum likelihood
estimator based on the resampled data of size $N$.
\end{enumerate}
\noindent
We perform each of these three-part tests $T = 100$ times,
where within each test, each method uses an identical sample
(the samples are of course independent across tests).
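The private stochastic gradient procedure of item (i) can be sketched numerically as follows (our illustration: a covariate-free toy family with $\nabla A(\theta) = \tanh\theta$ stands in for the logistic GLM, and a simple Laplace release replaces the $\ell_\infty$ sampling mechanism):

```python
import numpy as np

rng = np.random.default_rng(3)
eps, n, theta0 = 2.0, 100_000, 0.5
# toy family: A(theta) = log(e^theta + e^{-theta}), so grad A(theta) = tanh(theta)
pT = (1 + np.tanh(theta0)) / 2
T = rng.choice([-1.0, 1.0], size=n, p=[1 - pT, pT])
Z = T + rng.laplace(scale=2 / eps, size=n)  # unbiased eps-DP view of each T_k

theta = 0.0
for k in range(1, n + 1):
    eta = 1 / (20 * np.sqrt(k))                  # the stepsizes used in the text
    theta -= eta * (np.tanh(theta) - Z[k - 1])   # grad A(theta^k) - Z^k
print(theta)  # approaches theta0
```

Because every iterate is a function of the private $Z^k$ alone, the whole trajectory is $\diffp$-locally private; the cost is that the mechanism must privatize the full statistic at each step.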
We give summaries of our results in Figure~\ref{fig:histogram-results} and
Table~\ref{table:who-wins}. In Figure~\ref{fig:histogram-results}, we show
histograms of the errors across all coordinates of $\thetamle{i}$, $i = 1,
\ldots, d$, and all $T = 100$ tests, of the three procedures: the minimax
stochastic gradient procedure~\cite{DuchiJoWa18}, our one-step correction,
and the maximum likelihood estimator. For each, we use a sample of size $N
= 10n$, though results are similar for sample sizes $N = 4n, 6n$ and
$8n$. In the figures, we see that the non-private estimator is quite
concentrated in its errors around the ``population'' solution based on the
data $X$ (we truncate the top of the plot). In the case that we have
``high'' privacy ($\diffp = 1$, the left of
Fig.~\ref{fig:histogram-results}), we see that the one-step estimator has
errors more concentrated around zero than the minimax estimator, though the
two have comparable performance. In the slightly lower privacy regime,
corresponding to $\diffp = 4$, the one-step-corrected estimator performs
much better: although the non-private maximum likelihood estimator still
substantially outperforms it, the one-step-corrected estimator has
much tighter concentration of its errors than does the minimax procedure.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\begin{overpic}[width=.5\columnwidth]{%
Data/hist-errors-p1-n2}
\put(73, 64.6){\small Non-private}
\put(73, 58.9){\small Minimax}
\put(73, 53){\small One-step}
\end{overpic}
&
\begin{overpic}[width=.5\columnwidth]{%
Data/hist-errors-p2-n2}
\put(73, 64.6){\small Non-private}
\put(73, 58.9){\small Minimax}
\put(73, 53){\small One-step}
\end{overpic}
\end{tabular}
\caption{\label{fig:histogram-results} Histogram of errors
across all experiments
in estimation of $v^T \thetamle{i}$, for $v = e_1, \ldots, e_d$
and $i = 1, \ldots, d$, in the
logistic regression model. Left: privacy level $\diffp = 1$,
right: privacy level $\diffp = 4$.}
\end{center}
\end{figure}
In Table~\ref{table:who-wins}, we compare the performance of the one-step
estimator with other possible estimators more directly. For the estimators
$\what{\theta}_{\rm init}{i}$, $\thetasgm{i}$, and $\thetaonestep{i}$ of the true
parameter $\thetamle{i}$, we count the number of experiments (of $T$) and
parameters $j = 1, \ldots, d$ for which
\begin{equation*}
\left|[\thetaonestep{i}]_j - [\thetamle{i}]_j\right|
< \left|[\what{\theta}_{\rm init}^{(i)}]_j - [\thetamle{i}]_j\right|
~~ \mbox{and} ~~
\left|[\thetaonestep{i}]_j - [\thetamle{i}]_j\right|
< \left|[\thetasgm{i}]_j -
[\thetamle{i}]_j\right|,
\end{equation*}
that is, the number of experiments in which the one-step estimator provides
a better estimate than its initializer or the minimax stochastic
gradient-based procedure. Table~\ref{table:who-wins} shows these results,
displaying the proportion of experiments in which the one-step method has
higher accuracy than the other procedures for sample sizes of $N = 2n$ and
$N = 10n$ and privacy levels $\diffp \in \{1, 4\}$. The table shows that the
one-step estimator does typically provide better performance than the other
methods---though this comes with some caveats. When the privacy level is
high, meaning $\diffp = 1$, the performance between the methods is more
similar, as it is at smaller sample sizes. A plausible explanation is that
at small sample sizes the initializer is inaccurate enough that the
one-step correction has a poor Hessian estimate, so that it becomes a weak
estimator. In the low privacy regime, the full
minimax procedure~\cite{DuchiJoWa18} adds more noise than is necessary, as
it privatizes the \emph{entire} statistic $xy$ in each iteration---a
necessity because it iteratively builds the estimates
$\thetasgm{\cdot}$---thus causing an increase in sample complexity over the
local minimax estimator, which need not explicitly estimate $\theta$.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
Sample size & \multicolumn{2}{|c|}{$N = 2n$} &
\multicolumn{2}{|c|}{$N = 10n$} \\
\hline
Privacy $\diffp$ & $\diffp = 1$ & $\diffp = 4$ & $\diffp = 1$
& $\diffp = 4$ \\
\hline
vs.\ initializer & 0.501 & 0.82 & 0.808 & 0.851 \\
\hline
vs.\ minimax (stochastic gradient) & 0.321 & 0.677 & 0.659 & 0.79 \\
\hline
\end{tabular}
\caption{\label{table:who-wins}
Frequency with which the one-step estimator outperforms
initialization and minimax (stochastic-gradient-based) estimator
over $T = 100$ tests, all coordinates $j$ of the parameter
and proteins $i = 1, \ldots, d$ for the flow-cytometry data.}
\end{center}
\end{table}
In summary, a one-step correction---which we demonstrate is locally minimax
optimal---typically outperforms alternative (non-optimal) approaches, though
the benefits become apparent only in reasonably large sample regimes. This
type of behavior may be acceptable, however, in scenarios in which we
actually wish to apply local privacy. As our results and those of
\citet{DuchiJoWa18} make clear, there are substantial costs to local
privacy protections, and so they may only make sense for situations (such as
web-scale data) with very large sample sizes.
\subsection{Private analogues of the Fisher Information}
\label{sec:fisher-information}
Our first set of results builds off of
Theorem~\ref{theorem:modulus-of-continuity} by performing asymptotic
approximations to the variation distance for regular enough parametric
families of distributions. By considering classical families, we can more
easily relate the modulus of continuity-based lower bounds to classical
results, such as the H\'{a}jek--Le Cam local asymptotic minimax theorem. One
major consequence of our results is that, under the notions of locally
private estimation we consider, the Fisher information is \emph{not} the
right notion of complexity and difficulty in estimation, but
a precise analogy is possible.
We begin by considering parametric families of distributions that are
parameterized in a way reminiscent of the classical
quadratic mean differentiability conditions of Le
Cam~\cite[Ch.~7]{VanDerVaart98}. Define the collection $\mc{P} =
\{P_\theta\}_{\theta \in \Theta}$, parameterized by $\theta \in \R^d$, all
dominated by a measure $\mu$ (at least for $\theta$ in a neighborhood of
some fixed $\theta_0 \in \interior \Theta$), with densities $p_\theta =
dP_\theta / d\mu$. We say that $\mc{P}$ \emph{is $L^1$-differentiable at
$\theta_0$} with score function $\dot{\ell}_{\theta_0} : \mc{X} \to \R^d$
if
\begin{equation}
\label{eqn:l1-differentiable}
2 \tvnorm{P_{\theta_0 + h} - P_{\theta_0}}
= \int_{\mc{X}} \left|h^T \dot{\ell}_{\theta_0}(x)\right|
p_{\theta_0}(x) d\mu(x) + o(\norm{h})
\end{equation}
as $h \to 0$.
An evidently sufficient condition for this to hold is that
\begin{equation*}
\int |p_{\theta_0 + h} - p_{\theta_0}
- h^T \dot{\ell}_{\theta_0} p_{\theta_0}| d\mu
= o(\norm{h}),
\end{equation*}
which makes clear the appropriate differentiability notion.
Recall that a family of distributions is \emph{quadratic
mean differentiable} (QMD) if
\begin{equation}
\label{eqn:qmd}
\int \Big(\sqrt{p_{\theta_0 + h}} - \sqrt{p_{\theta_0}}
- \half h^T \dot{\ell}_{\theta_0} \sqrt{p_{\theta_0}}\Big)^2 d\mu
= o(\norm{h}^2),
\end{equation}
which is the analogue of definition~\eqref{eqn:l1-differentiable} for the
Hellinger distance. Most ``classical'' families (e.g.\ exponential
families) of distributions are QMD
with the familiar score function $\dot{\ell}_\theta(x) = \nabla_\theta \log
p_\theta(x)$ (cf.~\cite{LehmannRo05,VanDerVaart98}). For QMD families,
$L^1$-differentiability is automatic.
\begin{lemma}
\label{lemma:nice-families-l1-differentiable}
Let the family $\mc{P} \defeq \{P_\theta\}_{\theta \in \Theta}$ be
QMD~\eqref{eqn:qmd} at the point $\theta_0$. Then
$\mc{P}$ is $L^1$-differentiable at $\theta_0$
with score identical to the QMD score $\dot{\ell}_{\theta_0}$.
\end{lemma}
\noindent
As the proof of Lemma~\ref{lemma:nice-families-l1-differentiable} is
a more or less standard exercise, we defer it to
Appendix~\ref{sec:proof-nice-families-l1-differentiable}.
In the classical case of quadratic-mean-differentiable
families~\eqref{eqn:qmd}, the Fisher information matrix is
$\fisher[\theta] = \E_{P_\theta}[\dot{\ell}_\theta \dot{\ell}_\theta^T]$, and
we have
\begin{equation*}
\dhel^2(P_{\theta + h}, P_\theta)
= \frac{1}{8}
h^T \fisher[\theta] h + o(\norm{h}^2).
\end{equation*}
Written differently, if we define the Mahalanobis norm
$\norm{h}_Q = \sqrt{h^T Q h}$ for a matrix $Q \succeq 0$, then
the Fisher information is the
unique matrix $\fisher[\theta]$ such that $\dhel(P_{\theta + h},
P_\theta) = \frac{1}{2\sqrt{2}} \norm{h}_{\fisher[\theta]} + o(\norm{h})$.
By analogy with this notion of Fisher information, we define
the \emph{$L^1$-information} as the (semi)norm
\begin{equation}
\label{eqn:l1-information}
\information[\theta_0] : \R^d \to \R_+,
~~~ \information[\theta_0](h)
\defeq \int \left|h^T \dot{\ell}_{\theta_0}(x)\right|
dP_{\theta_0}(x),
\end{equation}
which is the unique (semi)norm $\norm{\cdot}_\theta$ for which
$\tvnorm{P_{\theta + h} - P_\theta} = \half \norm{h}_\theta
+ o(\norm{h})$.
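As a worked example (ours), take the Gaussian location family $P_\theta = \normal(\theta, 1)$ on $\R$, with score $\dot{\ell}_\theta(x) = x - \theta$, so that $\information[\theta](h) = |h| \, \E|X - \theta| = |h|\sqrt{2/\pi}$. The sketch below checks the expansion $\tvnorm{P_{\theta + h} - P_\theta} = \half \information[\theta](h) + o(|h|)$ numerically, using the closed form for the total variation distance between Gaussians:

```python
import numpy as np
from math import erf, sqrt, pi

def tv(h):
    # TV distance between N(0,1) and N(h,1): 2*Phi(h/2) - 1 = erf(h / (2 sqrt 2))
    return erf(h / (2 * sqrt(2)))

h = 1e-3
lhs = tv(h)
rhs = 0.5 * h * sqrt(2 / pi)  # (1/2) J_theta(h) with J(h) = |h| E|X - theta|
print(lhs, rhs)               # agree to leading order in h

# Monte Carlo check that E|X - theta| = sqrt(2/pi) for the standard Gaussian
s = np.abs(np.random.default_rng(5).standard_normal(200_000)).mean()
print(s)
```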
With these definitions, we can establish information lower bounds for
$L^1$-differentiable families. We consider a somewhat general case in which
we wish to estimate the value $\functional(\theta)$ of a functional
$\functional : \Theta \to \R$, where $\functional$
is continuously differentiable in a
neighborhood of $\theta_0$. We measure our error by a
nondecreasing loss $\Phi : \R_+ \to \R_+$, where $\Phi(0) = 0$ and
$L(\theta, P_{\theta_0}) = \Phi(|\functional(\theta) -
\functional(\theta_0)|)$. Before stating the proposition, we also recall the
definition of the dual norm $\dnorm{\cdot}$ to a norm $\norm{\cdot}$,
defined by $\dnorm{v} = \sup_{\norm{h} \le 1} v^T h$. Let
$\information[\theta]^*$ denote the dual norm to $\information[\theta]$. In
the classical case of the Fisher information, where the norm $\norm{h} =
\sqrt{h^T \fisher[\theta_0] h}$ is Euclidean, we have dual norm $\dnorm{h} =
\sqrt{h^T \fisher[\theta_0]^{-1} h}$, that is, the usual inverse Fisher
information. In the case of $L^1$-information, because the norm is no longer
quadratic, such explicit formulae are no longer possible.
\begin{proposition}
\label{proposition:generic-private-lb}
Let $\mc{P} = \{P_\theta\}_{\theta \in \Theta}$ be $L^1$-differentiable at
$\theta_0$ with score $\dot{\ell}_{\theta_0}$, and assume that the
classical Fisher information $\fisher[\theta_0] =
\E_{\theta_0}[\dot{\ell}_{\theta_0} \dot{\ell}_{\theta_0}^T] \succ 0$.
Let $\channeldistset_\diffp$ be the family of $\diffp^2$-$\chi^2$-private,
sequentially interactive channels. Then
\begin{equation*}
\localminimax_n(P_{\theta_0}, L, \mc{P}, \channeldistset_\diffp)
\geq
\frac{1 - o(1)}{16\sqrt{2}}
\cdot \Phi\left(\frac{1}{2 \sqrt{2 n \diffp^2}}
\information[\theta_0]^*(\nabla \functional(\theta_0))\right).
\end{equation*}
\end{proposition}
\begin{proof}
We apply Theorem~\ref{theorem:modulus-of-continuity}. We have that
\begin{equation}
\label{eqn:theorem-modulus-of-continuity-example}
\localminimax_n(P_{\theta_0}, L, \mc{P}, \channeldistset) \geq
\frac{1}{8} \modcont\left(\frac{1}{\sqrt{8 n \diffp^2}}; P_{\theta_0}\right).
\end{equation}
Now, we evaluate $\modcont\left(\delta; P_{\theta_0}\right)$ for small
$\delta > 0$. Note that $\lossdist(P_{\theta_0 + h}, {P_{\theta_0}}) \ge
\Phi(\half |\functional(\theta_0 + h) - \functional(\theta_0)|)$ by
the calculation~\eqref{eqn:lossdist-for-quasi-convex},
so that
\begin{align*}
\modcont(\delta; P_{\theta_0})
& \ge \sup_h
\left\{\Phi\left(\half |\functional(\theta_0 + h) - \functional(\theta_0)|\right)
\mid \tvnorm{P_{\theta_0 + h} - P_{\theta_0}}
\le \delta \right\} \\
& \ge \sup_h
\left\{\Phi\left(\half |\nabla \functional(\theta_0)^T h
+ o(\norm{h})|\right)
\mid \information[\theta_0](h) + o(\norm{h})
\le 2 \delta \right\}.
\end{align*}
The assumption that the score $\dot{\ell}_{\theta_0}$ has
positive definite second moment matrix guarantees that
$\information[\theta_0](h) > 0$ for all $h \neq 0$, and as
$\information[\theta_0]$ is
homogeneous, we see that for $\delta \downarrow 0$, we have
\begin{equation*}
\modcont(\delta; P_{\theta_0})
\ge \sup_h \left\{\Phi\left(\half
|\nabla \functional(\theta_0)^T h |
+ o(\delta)\right)
\mid \information[\theta_0](h) \le 2 \delta (1 + o(1)) \right\}
= \Phi\left((1 - o(1)) \delta
\information[\theta_0]^*(\nabla \functional(\theta_0))
\right).
\end{equation*}
Substituting this
into inequality~\eqref{eqn:theorem-modulus-of-continuity-example}
and setting $\delta = \frac{1}{\sqrt{8 n \diffp^2}}$ gives the proposition.
\end{proof}
To understand the proposition more clearly, let us compare with the
classical Fisher information bounds. In this case,
the analogous lower bound is
\begin{equation*}
\Phi\left(\sqrt{\frac{1}{n} \nabla \functional(\theta_0)^T
\fisher[\theta_0]^{-1} \nabla \functional(\theta_0)}\right),
\end{equation*}
which is the familiar
local minimax complexity for estimating a one-dimensional
functional~\cite[Ch.~7]{VanDerVaart98}. In the case that $\Phi$ is the
squared error, $\Phi(t) = t^2$, for example, the non-private lower bound
becomes $\frac{1}{n}\nabla \functional(\theta_0)^T \fisher[\theta_0]^{-1} \nabla
\functional(\theta_0)$. In the one-dimensional case, the lower bounds become
somewhat cleaner to state and are more easily comparable to Fisher
information bounds. To that end, consider direct estimation of a
real-valued parameter $\theta_0$.
\begin{corollary}
\label{corollary:one-param-information-bound}
Let the conditions of Proposition~\ref{proposition:generic-private-lb}
hold, but specialize to the truncated squared error $\Phi(t) = t^2 \wedge 1$.
Assume $\Theta \subset \R$, and let $\functional(\theta) = \theta$ be the
identity map. Then
there exists a numerical constant $C > 0$ such that
\begin{equation*}
\localminimax_n(P_{\theta_0}, L, \mc{P}, \channeldistset_\diffp)
\ge \frac{C}{n \diffp^2}
\cdot \frac{1}{\E_{\theta_0}[|\dot{\ell}_{\theta_0}|]^2} \wedge 1.
\end{equation*}
\end{corollary}
\noindent
Because $\E_{\theta_0}[|\dot{\ell}_{\theta_0}|]^2 \le
\E_{\theta_0}[\dot{\ell}_{\theta_0}^2]$, the $L^1$-information is
always smaller than the Fisher information. In some cases,
as we shall see, it can be much smaller.
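As a simple numerical illustration (ours, not one of the paper's examples): for the Gaussian location score $\dot{\ell}_{\theta_0}(X) = X - \theta_0$ the gap is the constant factor $2/\pi$, while a score that takes large values only rarely makes the ratio $\E|\dot{\ell}|^2 / \E[\dot{\ell}^2]$ arbitrarily small:

```python
import numpy as np

rng = np.random.default_rng(4)
# Gaussian location family N(theta, 1): score(x) = x - theta
s = rng.standard_normal(1_000_000)
l1_info = np.mean(np.abs(s)) ** 2   # (E|score|)^2, approx 2/pi ~ 0.637
fisher = np.mean(s ** 2)            # E[score^2],   approx 1
print(l1_info, fisher)

# a score of magnitude M with probability 1/M^2 makes the gap a factor M^2
M = 10.0
t = rng.choice([-M, 0.0, M], size=1_000_000,
               p=[0.5 / M**2, 1 - 1 / M**2, 0.5 / M**2])
heavy_l1 = np.mean(np.abs(t)) ** 2   # approx 1/M^2
heavy_fisher = np.mean(t ** 2)       # approx 1
print(heavy_l1, heavy_fisher)
```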
\section{Introduction}
The increasing collection of data at large scale---medical records, location
information from cell phones, internet browsing history---points to the
importance of a deeper understanding of the tradeoffs inherent between
privacy and the utility of using the data collected. Classical mechanisms
for preserving privacy, such as permutation, small noise addition, releasing
only mean information, or basic anonymization are insufficient, and notable
privacy compromises with genomic data~\cite{HomerSzReDuTeMuPeStNeCr08} and
movie rating information~\cite{NarayananSh08} have caused the NIH to
temporarily stop releasing genetic information and Netflix to cancel a
proposed competition for predicting movie ratings. Balancing the tension
between utility and the risk of disclosure of sensitive information
is thus essential.
In response to these challenges, researchers in the statistics, databases,
and computer science communities have studied \emph{differential
privacy}~\cite{Warner65, EvfimievskiGeSr03, DworkMcNiSm06, DworkSm09,
HardtTa10, DuchiJoWa18} as a formalization of disclosure risk limitation,
providing strong privacy guarantees. This literature discusses two notions
of privacy: \emph{local} privacy, in which data is privatized before it is
even shared with a data collector, and \emph{central} privacy, where a
centralized curator maintains the sample and guarantees that any information
it releases is appropriately private. The local model is stronger, and
consequently it is more challenging to develop statistically efficient
algorithms. Yet the strong privacy protections local privacy provides
encourage its adoption. Whether for ease of compliance with regulatory
strictures, for example with European Union privacy rules~\cite{EU18}; for
reasons of transparency and belief in the importance of privacy; or to avoid
the risks proximate to holding sensitive data, like hacking or subpoena
risk, because private data never leaves an individual's device in the clear;
major technology companies have adopted local differential privacy
protections in their data collection and machine learning tools. Apple
provides local differential privacy in many of its iPhone
systems~\cite{ApplePrivacy17}, and Google has built systems supplying
central and local differential
privacy~\cite{ErlingssonPiKo14,AbadiErGoMcPaMiTaZh17}. The broad impact of
privacy protections in billions of devices suggest we should carefully
understand the fundamental limitations and possibilities of learning with
local notions of privacy.
To address this challenge, we study the \emph{local minimax complexity} of
estimation and learning under local notions of privacy.
Worst-case notions of complexity may be too stringent for statistical
practice~\cite{DuchiJoWa18}, and in real-world use, we wish to understand
how difficult the \emph{actual} problem we have is, and whether we can adapt
to this problem difficulty, so that our procedures more efficiently solve
easy problems as opposed to being tuned to worst-case notions of
difficulty. Our adoption of local minimax complexity is thus driven by three
desiderata: we seek fundamental limits on estimation and learning that (i)
are instance specific, applying to the particular problem at hand, (ii) are
uniformly attainable, in that there exist adaptive procedures to achieve the
instance-specific difficulty, and (iii) have super-efficiency limitations, so
that if a procedure achieves \emph{better} behavior than the lower bounds
suggest is possible, there should be problem instances in which the
procedure must have substantially worse behavior. In this paper, we provide
a characterization of the difficulty of estimation of one-dimensional
quantities under local privacy that satisfies these desiderata.
The celebrated Le Cam--H\'{a}jek local asymptotic minimax
theory~\cite{Hajek72, LeCam72, LeCam86, VanDerVaart98, LeCamYa00} cleanly
delineates efficient from inefficient estimators in classical statistics and
highlights the importance of local notions of optimality (making
Fisher information bounds rigorous). As an example encapsulating the
differences between global and local minimax complexity, consider the
one-dimensional logistic regression problem of predicting $y \in \{\pm 1\}$
from $x \in \R$, with $p_\theta(y \mid x) = (1 + \exp(-y x\theta))^{-1}$,
where---taking our motivation from applications of machine
learning~\cite{HastieTiFr09, ApplePrivacy17,AbadiErGoMcPaMiTaZh17}---we wish
only to accurately estimate $p(y \mid x)$. This problem is easier the larger
$|\theta|$ is, and a calculation
shows that the maximum likelihood estimator has misclassification error
decreasing exponentially in $|\theta|$; a fully minimax analysis provides a
lower bound at $\theta = 0$, or random guessing, with convergence lower
bound $1/\sqrt{n}$ independent of $\theta$. For most applications of
statistical learning, we hope our model substantially outperforms random
guessing, so such (global worst-case) analyses are of limited utility in the
design of (near-) optimal procedures. To that end, any practicable theory of
optimal private estimation should encapsulate a \emph{local} notion of
problem difficulty.
\subsection{Contributions, outline, and related work}
Our development of instance-specific (local) notions of problem complexity
under privacy constraints allows us to more precisely quantify the
statistical price of privacy. Identifying the tension here is of course of
substantial interest, and \citet*{DuchiJoWa18,DuchiJoWa13_focs} develop a
set of statistical and information-theoretic tools for understanding the
\emph{minimax} risk in locally differentially private settings, providing
the point of departure for our work. To understand their and our coming
approach, let us formally define our setting.
We have i.i.d.\ data
$X_1, \ldots, X_n$ drawn according to a distribution $P$ on a space
$\mc{X}$. Instead of observing the original sample $\{X_i\}$, however, the
statistician or learner sees only \emph{privatized} data $Z_1, \ldots, Z_n$,
where the data $Z_i$ is drawn from a Markov kernel $\channel(\cdot \mid
X_i)$ conditional on $X_i$ (following information-theoretic parlance, we
often call $\channel$ the privacy \emph{channel}~\cite{CoverTh06}; in the
privacy literature $\channel$ is the
\emph{mechanism}~\cite{DworkMcNiSm06}). In full generality, we allow the
channel to be \emph{sequentially interactive}~\cite{DuchiJoWa18}, meaning
that at observation $i$, the channel may depend on the previous (private)
observations $Z_1, \ldots, Z_{i - 1}$. That is, we have
\begin{equation}
\label{eqn:sequential-interactive}
Z_i \mid X_i = x, Z_1, \ldots, Z_{i-1}
\sim \channel(\cdot \mid x, Z_{1:i-1}).
\end{equation}
This notion of interactivity is important for procedures, such as stochastic
gradient methods~\cite{DuchiJoWa18} or the one-step-corrected estimators we
develop in the sequel, which modify the mechanism after some number of
observations to more accurately perform inference.
The statistical problems we consider are, abstractly, as follows. Let
$\mc{P}$ be a family of distributions, and let $\theta: \mc{P} \to \Theta$
be a parameter belonging to a parameter space $\Theta$ we wish to
estimate, where $\theta(P)$ denotes the estimand. Let
$L : \Theta \times \mc{P} \to \R_+$ be a loss function measuring the loss of
an estimated value $\theta$ for the distribution $P$, where we assume that
$L(\theta(P), P) = 0$ for all distributions $P$. As an example, we
may consider the mean $\theta(P) = \E_P[X] \in \R$ and squared error
$L(\theta, P) = (\theta - \theta(P))^2 = (\theta - \E_P[X])^2$.
Let $\channeldistset$ be a collection of appropriately private
channels, for example, $\diffp$-differentially private channels (which
we define in the sequel). The \emph{private minimax risk}~\cite{DuchiJoWa18}
is
\begin{equation}
\label{eqn:private-minimax}
\minimax_n(L, \mc{P}, \channeldistset)
\defeq \inf_{\what{\theta}, \channel \in \channeldistset}
\sup_{P \in \mc{P}}
\E_{\channel \circ P}\left[L(\what{\theta}(Z_1, \ldots, Z_n), P)\right]
\end{equation}
where $\channel \circ P$ denotes
the marginal $X_i \sim P$ and $Z_i$ drawn
conditionally~\eqref{eqn:sequential-interactive}. \citet{DuchiJoWa18}
provide upper and lower bounds on this quantity when $\channeldistset$ is
the collection of $\diffp$-locally differentially private channels,
developing strong data processing inequalities to quantify the costs of
privacy.
The worst-case nature of the formulation~\eqref{eqn:private-minimax} may
suggest lower bounds that are too pessimistic for practice, and it does not
allow a characterization of
problem-specific difficulty, which is important for a deeper understanding
of adaptive and optimal procedures. Accordingly, we adopt a \emph{local
minimax approach}, which builds out of the classical statistical
literature on hardest one-dimensional alternatives that begins with
Stein~\cite{Stein56a, Birge83, DonohoLi87,DonohoLi91a,DonohoLi91b, CaiLo15,
ChatterjeeDuLaZh16}. In the same setting as the above,
we define the \emph{local minimax risk} at
the distribution $P_0$ for the set of channels $\channeldistset$ as
\begin{equation}
\label{eqn:private-local-minimax}
\localminimax_n(P_0, L, \mc{P}, \channeldistset)
\defeq \sup_{P_1 \in \mc{P}}
\inf_{\what{\theta},
\channel \in \channeldistset}
\max_{P \in \{P_0, P_1\}}
\E_{\channel \circ P}\left[L(\what{\theta}(Z_1, \ldots, Z_n), P)\right].
\end{equation}
The quantity~\eqref{eqn:private-local-minimax} measures the
difficulty of the loss minimization problem for a \emph{particular
distribution} $P_0$ under the privacy constraints that $\channeldistset$
characterizes; at this distinguished distribution, we look for the
hardest alternative distribution $P_1 \in \mc{P}$.
To situate our contributions, let us first consider the non-private variant
of the minimax complexity~\eqref{eqn:private-minimax} and local minimax
complexity~\eqref{eqn:private-local-minimax}, when $\channeldistset =
\{{\rm id}\}$ (the identity mapping), and we use the squared error loss
$L_{\rm sq}(\theta, P) = (\theta - \theta(P))^2$. Let us first consider a
classical setting, in which we wish to estimate a linear function $v^T
\theta$ of a parameter $\theta$ in a parametric family $\mc{P} =
\{P_\theta\}_{\theta \in \Theta}$ with Fisher information matrix
$\fisher[\theta]$. The
Fisher information bound~\cite{LeCamYa00} for the parameter $\theta_0$
is
\begin{equation*}
\localminimax_n(P_{\theta_0}, L_{\rm sq}, \mc{P}, \{{\rm id}\})
\asymp \frac{1}{n} \E\left[\left(v^T Z\right)^2\right]
~~ \mbox{for} ~~
Z \sim \normal\big(0, \fisher[\theta_0]^{-1}\big).
\end{equation*}
More generally,
if we wish to estimate a functional $\theta(P) \in \R$ of
a distribution $P$, \citet{DonohoLi87,DonohoLi91a,DonohoLi91b} show
how the \emph{modulus of continuity} takes the place of the classical
information bound. Again considering the squared error, define the
modulus of continuity of $\theta(\cdot)$ over $\mc{P}$ with respect
to Hellinger distance by
\begin{equation}
\label{eqn:hellinger-modcont}
\modcont[{\rm hel}](\delta; \mc{P})
\defeq \sup_{P_0, P_1 \in \mc{P}}
\left\{(\theta(P_0) - \theta(P_1))^2 \mid
\dhel(P_0, P_1) \le \delta \right\}
\end{equation}
where $\dhel^2(P_0, P_1) = \half \int (\sqrt{dP_0} - \sqrt{dP_1})^2$.
Then under mild regularity conditions,
\begin{equation*}
\minimax_n(L_{\rm sq}, \mc{P}, \{{\rm id}\})
\asymp \modcont[{\rm hel}] (n^{-1/2}; \mc{P}),
\end{equation*}
which highlights that separation in Hellinger distance precisely governs
problem difficulty in non-private classical statistical problems. In the
local minimax case, similar characterizations via a local modulus of
continuity are available in some problems, including estimation of the value
of a convex function~\cite{CaiLo15} and stochastic
optimization~\cite{ChatterjeeDuLaZh16}.
In contrast, the work of \citet{DuchiJoWa18,DuchiJoWa13_focs} suggests that
for $\diffp$-locally differentially private estimation, we should replace
the Hellinger distance by \emph{variation distance}. In the case of
higher-dimensional problems, there are additional dimension-dependent
penalties in estimation that local differential privacy makes unavoidable,
at least in a minimax sense~\cite{DuchiJoWa18}. In work independent of and
contemporaneous to our own, \citet{RohdeSt18} build off
of~\cite{DuchiJoWa18} to show that (non-local) minimax rates of convergence
under $\diffp$-local differential privacy are frequently governed by a
modulus of continuity~\eqref{eqn:hellinger-modcont}, except that the
variation distance $\tvnorm{P_0 - P_1} = \sup_A |P_0(A) - P_1(A)|$ replaces
the Hellinger distance $\dhel$. \citeauthor{RohdeSt18} also exhibit a
mechanism that is minimax optimal for ``nearly'' linear functionals based on
randomized response~\cite[Sec.~4]{Warner65,RohdeSt18}. Thus, locally
differentially private procedures give rise to a different geometry than
classical statistical problems.
Now we are in a position for a high-level description of our results. Our
results apply in a variety of locally private estimation settings, whose
definitions we formalize in Section~\ref{sec:definition-of-privacy}, but all
of them consist of weakenings of $\diffp$-differential privacy (including
concentrated and R\'{e}nyi-differential
privacy~\cite{DworkMcNiSm06, DworkRo16, BunSt16, Mironov17}). We provide a
precise characterization of the local minimax
complexity~\eqref{eqn:private-local-minimax} in these settings.
If we define the local modulus of continuity (for the squared
error) at $P_0$ by
\begin{equation*}
\modcont[{\rm TV}](\delta; P_0, \mc{P})
\defeq \sup_{P \in \mc{P}} \left\{(\theta(P_0) - \theta(P))^2
\mid \tvnorm{P - P_0} \le \delta \right\},
\end{equation*}
then a consequence of our
Theorem~\ref{theorem:modulus-of-continuity} is that for
the squared loss and family $\channeldistset_\diffp$ of $\diffp$-locally
private channels,
\begin{equation*}
\localminimax_n(P_0, L_{\rm sq}, \mc{P}, \channeldistset_\diffp)
\asymp \modcont[{\rm TV}]\left((n\diffp^2)^{-1/2}; P_0, \mc{P}\right).
\end{equation*}
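To give a feel for the difference this makes, we sketch the Bernoulli mean
estimation calculation here (our own illustration, anticipating the examples
of Section~\ref{sec:examples}):

```latex
\[
  \tvnorm{P_\theta - P_{\theta_0}} = |\theta - \theta_0|
  \quad \mbox{for~} P_\theta = \mathsf{Bernoulli}(\theta),
\]
so that, for $\delta$ small enough that
$[\theta_0 - \delta, \theta_0 + \delta] \subset [0, 1]$,
\[
  \modcont[{\rm TV}]\left(\delta; P_{\theta_0}, \mc{P}\right)
  = \sup_{|\theta - \theta_0| \le \delta} (\theta - \theta_0)^2
  = \delta^2,
  \quad \mbox{whence} \quad
  \localminimax_n(P_{\theta_0}, L_{\rm sq}, \mc{P}, \channeldistset_\diffp)
  \asymp \frac{1}{n \diffp^2},
\]
```

independent of $\theta_0$; in contrast, the classical (non-private) local
complexity scales as $\theta_0 (1 - \theta_0) / n$.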
We provide this characterization in more detail and for general losses in
Section~\ref{sec:local-minimax}. Moreover, we show a super-efficiency
result that any procedure that achieves risk better than the local minimax
complexity at a distribution $P_0$ \emph{must} suffer higher risk at another
distribution $P_1$, so that this characterization does indeed satisfy our
desiderata of an instance-specific complexity measure.
The departure of these risk bounds from the typical Hellinger-based moduli
of continuity~\eqref{eqn:hellinger-modcont} has consequences for
locally private estimation and adaptivity of estimators, which we address
via examples in Section~\ref{sec:examples}. For instance,
instead of the classical Fisher information, an alternative we
term the \emph{$L_1$-information} characterizes the complexity of locally
private estimation: the classical Fisher information bounds are
unobtainable. A challenging consequence of these results is that, for some
parametric models (including Bernoulli estimation and binomial logistic
regression), the local complexity~\eqref{eqn:private-local-minimax} is
\emph{independent} of the underlying parameter: problems that are easy (in
the classical Fisher information sense) are never easy under local privacy
constraints. Our proofs, building off of those of \citet{DuchiJoWa18}, rely
on novel Markov contraction inequalities for divergence measures, which
strengthen classical strong data processing
inequalities~\cite{CohenKeZb98,DelMoralLeMi03}.
Developing adaptive procedures uniformly achieving the instance-specific
local minimax risk~\eqref{eqn:private-local-minimax} is challenging, but we
show that such optimal design is possible in a number of cases in
Section~\ref{sec:examples}, including well- and mis-specified exponential
family models, using an extension of classical one-step corrected
estimators. We compare these locally optimal procedures with the minimax
optimal procedures \citet{DuchiJoWa18} propose on a protein
expression-prediction problem in Section~\ref{sec:experiments}; the
experimental results suggest that the local minimax perspective indeed
outperforms the global minimax procedures; however, the costs of privacy are
\emph{still} nontrivial.
Lastly, because we consider weaker notions of privacy, one might ask whether
it is possible to improve the minimax bounds that \citet{DuchiJoWa18,
DuchiJoWa13_focs} develop. Unfortunately, this appears impossible (see
Section~\ref{sec:high-dim}). \citeauthor{DuchiJoWa18} show
only that \emph{non-interactive privacy} mechanisms (i.e.\ the channel
$\channel$ may depend only on $X_i$ and not the past observations) must
suffer poor performance in high dimensions under stringent
differential privacy constraints. Our results show that this is unavoidable,
even with weaker notions of privacy and allowing interactive mechanisms. We
provide some additional discussion and perspective in
the closing of the paper in Section~\ref{sec:discussion}.
\section{Local minimax complexity and private estimation}
\label{sec:local-minimax}
We turn to our main goal of establishing
localized minimax complexities for locally private estimation.
To that end, we begin in
Section~\ref{sec:modulus-of-continuity-private} by defining the
\emph{modulus of continuity} of estimands, showing how it provides a tight
lower bound on localized complexity for private estimation.
Section~\ref{sec:super-efficiency} continues the development of
Section~\ref{sec:modulus-of-continuity-private} by establishing a
super-efficiency result, showing that any estimator achieving lower risk
than our localized modulus of continuity for some distribution $P_0$ must be
inefficient on other distributions $P$. In
Section~\ref{sec:contraction-probability-measures}, we present the main
technical tools that underlie our results, providing new strong
data-processing inequalities showing precisely how locally private channels
degrade the information in statistical problems.
\subsection{The modulus of continuity and local minimax complexities}
\label{sec:modulus-of-continuity-private}
Recall our setting, where we wish to estimate a parameter $\theta(P)$ of
a distribution $P \in \mc{P}$, a collection of possible distributions, and
we measure performance of an estimate $\theta$ via a loss $L : \Theta \times
\mc{P} \to \R_+$ satisfying $L(\theta(P), P) = 0$. We define the
``distance'' between distributions $P_0$ and $P_1$ for the loss $L$ by
\begin{equation*}
\lossdist(P_0, P_1) \defeq
\inf_{\theta \in \Theta} \left\{L(\theta, P_0) + L(\theta, P_1)\right\},
\end{equation*}
which is always non-negative. As an example, if $\theta$
is 1-dimensional and we use the squared error $L(\theta, P) = \half (\theta
- \theta(P))^2$,
\begin{equation*}
\lossdist(P_0, P_1)
= \frac{1}{4} (\theta(P_0) - \theta(P_1))^2.
\end{equation*}
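A one-line verification: the infimand is a convex quadratic in $\theta$,
minimized at the midpoint $\theta = (\theta_0 + \theta_1)/2$ where
$\theta_a = \theta(P_a)$, so

```latex
\[
  \inf_{\theta \in \R}
  \left\{\half (\theta - \theta_0)^2 + \half (\theta - \theta_1)^2\right\}
  = \half \left(\frac{\theta_1 - \theta_0}{2}\right)^2
  + \half \left(\frac{\theta_0 - \theta_1}{2}\right)^2
  = \frac{1}{4} (\theta_0 - \theta_1)^2.
\]
```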
More generally, for any symmetric convex function $\Phi : \R^d \to \R_+$,
if $L(\theta, P) = \Phi(\theta - \theta(P))$,
\begin{equation}
\lossdist(P_0, P_1)
= 2 \Phi\left(\half (\theta_0 - \theta_1)\right)
\label{eqn:lossdist-for-convex}
\end{equation}
where $\theta_a = \theta(P_a)$.
A similar result holds for general losses; if $\Phi : \R_+
\to \R_+$ is non-decreasing and we measure the parameter
error $L(\theta, P) = \Phi(\ltwo{\theta - \theta(P)})$, then
\begin{equation}
\label{eqn:lossdist-for-quasi-convex}
\begin{split}
2 \Phi\left(\half \ltwo{\theta_0 - \theta_1}\right)
& \ge \lossdist(P_0, P_1) \\
& = \inf_{\lambda \in [0, 1]}
\left\{\Phi(\lambda \ltwo{\theta_0 - \theta_1})
+ \Phi((1 - \lambda) \ltwo{\theta_0 - \theta_1}) \right\}
\ge \Phi\left(\half \ltwo{\theta_0 - \theta_1}\right),
\end{split}
\end{equation}
as it is no loss of generality to take $\theta$ on the segment between
$\theta_0$ and $\theta_1$ in the definition of the distance.
For a family of distributions $\mc{P}$,
the \emph{modulus of continuity}
associated with the loss $L$ at the distribution $P_0$ is
\begin{equation}
\label{eqn:private-moc}
\modcont(\delta; P_0, \mc{P})
\defeq \sup_{P \in \mc{P}} \left\{ \lossdist(P, P_0)
\mid \tvnorm{P - P_0} \le \delta \right\}.
\end{equation}
As we shall see, this modulus of continuity fairly precisely
characterizes the difficulty of locally private estimation of functionals.
The key in this definition is that the modulus of continuity is
defined with respect to \emph{variation distance}.
This is in contrast to
classical results on optimal estimation, where the more familiar modulus of
continuity with respect to Hellinger distance characterizes problem
difficulty. Indeed, Le Cam's theory of quadratic mean differentiability,
contiguity, local asymptotic normality, and local alternatives for testing
all reposes on closeness in Hellinger distance~\cite{LeCam86, LeCamYa00,
Pollard97, VanDerVaart98}, which justifies the use of Fisher Information
in classical statistical problems. In nonparametric problems, as we
mentioned briefly in the introduction, the modulus of continuity of the
parameter $\theta(P)$ with respect to Hellinger distance also characterizes
minimax rates for estimation of functionals of distributions~\cite{Birge83,
DonohoLi87, DonohoLi91a} (at least in a global minimax sense), and in some
instances, it governs local minimax guarantees as well~\cite{CaiLo15}.
These results all correspond to replacing the variation
distance $\tvnorm{\cdot}$ in definition~\eqref{eqn:private-moc} with the
Hellinger distance between $P$ and $P_0$. As we illustrate, the difference
between the classical Hellinger-based modulus of continuity and
ours~\eqref{eqn:private-moc} leads to substantially different behavior
for private and non-private estimation problems.
With this, we come to our first main result, which lower bounds the local
minimax risk using the modulus~\eqref{eqn:private-moc}. We defer the proof
to Section~\ref{sec:proof-modulus-of-continuity}, where we prove it using
our forthcoming results on strong data-processing inequalities.
\begin{theorem}
\label{theorem:modulus-of-continuity}
Let $\channeldistset$ be the
collection of $\diffp$-$\chi^2$-locally private channels. Then
for any distribution $P_0$, we have
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}, \channeldistset)
\ge
\frac{1}{8} \modcont\left(\frac{1}{2 \diffp}
\sqrt{e^\frac{1}{2n} - 1}; P_0, \mc{P}\right).
\end{equation*}
\end{theorem}
\noindent
Noting that the modulus of continuity is increasing in its first
argument and that $e^x - 1 \ge x$ for all $x$, we have the simplified
lower bound
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}, \channeldistset)
\ge
\frac{1}{8} \modcont\left(\frac{1}{\sqrt{8 n \diffp^2}}; P_0,
\mc{P} \right).
\end{equation*}
In (nearly) simultaneous independent work, \citet{RohdeSt18} provide a
global minimax lower bound, akin to~\eqref{eqn:private-minimax}, using a
global modulus of continuity with respect to variation distance,
extending~\cite{DonohoLi87,DonohoLi91a,DonohoLi91b} to the private case.
Our focus here is on instance-specific bounds, with the hope that we may
calculate practically useful quantities akin to classical information
bounds~\cite{VanDerVaart98,LeCamYa00}.
Achievability in the theorem is a somewhat more delicate argument;
demonstrating procedures that achieve the lower bound uniformly is typically
nontrivial. With that said, under two reasonable conditions on our loss,
distance, and growth of the modulus of continuity, we can show a converse to
Theorem~\ref{theorem:modulus-of-continuity}, showing that the modulus
$\modcont$ indeed \emph{describes} the local minimax complexity to within
numerical constants.
\begin{condition}[Reverse triangle inequality]
\label{cond:reverse-triangle}
There exists $\Creverse < \infty$ such that for $\theta_a = \theta(P_a)$,
\begin{equation*}
L(\theta_1, P_0) + L(\theta_0, P_1)
\le \Creverse \lossdist(P_0, P_1).
\end{equation*}
\end{condition}
\noindent
In the case that the loss is based on the
error $L(\theta, P) = \Phi(\norm{\theta - \theta(P)})$
for $\Phi \ge 0$ nondecreasing,
the inequality~\eqref{eqn:lossdist-for-quasi-convex} shows that
Condition~\ref{cond:reverse-triangle}
holds whenever $\Phi(2t) \le C \Phi(t)$ for all $t \ge 0$.
In addition, we sometimes use
the following condition on the modulus of continuity.
\begin{condition}[Polynomial growth]
\label{cond:polynomial-growth}
At the distribution $P_0$,
there exist constants $\alpha, \Cgrow < \infty$ such that
for all $c \ge 1$
\begin{equation*}
\modcont(c \delta; P_0, \mc{P})
\le (\Cgrow c)^\alpha \modcont(\delta; P_0, \mc{P}).
\end{equation*}
\end{condition}
\noindent
Condition~\ref{cond:polynomial-growth} is similar to the typical
H\"older-type continuity properties assumed on the modulus
of continuity for estimation problems~\cite{DonohoLi87, DonohoLi91a}.
We give examples satisfying
Condition~\ref{cond:polynomial-growth} presently.
The conditions yield the following converse to
Theorem~\ref{theorem:modulus-of-continuity}, which shows that the modulus of
continuity characterizes the local minimax complexity. See
Appendix~\ref{sec:proof-achievable} for a proof of the result.
\begin{proposition}
\label{proposition:achievable}
Let Conditions~\ref{cond:reverse-triangle} and~\ref{cond:polynomial-growth}
on $L$ and $\mc{P}$ hold.
Let $\diffp \ge 0$ and
$\delta_\diffp = \frac{e^\diffp}{e^\diffp + 1} - \half$,
and let $\channeldistset$ be the collection
of non-interactive $\diffp$-differentially private
channels (Definition~\ref{definition:local-dp}).
Then
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}, \channeldistset)
\le \Creverse \Cgrow^\alpha
e^{\frac{\alpha}{2} [\log \frac{\alpha}{2} - 1]}
\modcont\left(\frac{\sqrt{2}}{\delta_\diffp \sqrt{n}}; P_0,
\mc{P} \right).
\end{equation*}
\end{proposition}
\noindent
The proposition as written is a bit unwieldy, so we unpack it slightly. For
$\diffp \le \frac{7}{4}$, we have $\delta_\diffp \ge \frac{\diffp}{5}$, so
that for a constant $c$ that may depend on $\alpha$, $\Cgrow$, and $\Creverse$, for
each $P_1 \in \mc{P}$ there exists a non-interactive $\diffp$-differentially
private channel $\channel$ and estimator $\what{\theta}$ such that
\begin{equation*}
\max_{P \in \{P_0, P_1\}} \E_{\channel \circ P}
\left[L(\what{\theta}(Z_{1:n}), P)\right]
\le c \cdot \modcont\left(\frac{5\sqrt{2}}{\sqrt{n \diffp^2}},
P_0, \mc{P}\right).
\end{equation*}
This matches the lower bound in Theorem~\ref{theorem:modulus-of-continuity}
up to a numerical constant.
We briefly discuss one example satisfying
Condition~\ref{cond:polynomial-growth} to demonstrate that we typically
expect it to hold, so that the modulus of continuity characterizes
the local minimax rates. We also show that
(potentially mis-specified) exponential family models
also satisfy Condition~\ref{cond:polynomial-growth} in
Section~\ref{sec:mis-specified-expfam}, Lemma~\ref{lemma:mis-expfam-growth}.
\begin{example}[Modulus of continuity in (nonparametric) mean estimation]
\label{example:mean-estimation}
Consider loss $L(\theta, P) = \Phi(\ltwo{\theta -
\theta(P)})$ where $\Phi$ is
nondecreasing. Inequality~\eqref{eqn:lossdist-for-quasi-convex}
implies that using the
shorthand
\begin{equation*}
\modltwo(\delta) \defeq
\sup_{P \in \mc{P}} \left\{\ltwo{\theta_0 - \theta(P)} \mid \tvnorm{P - P_0}
\leq \delta \right\},
\end{equation*}
we have
\begin{equation*}
\Phi\left(\half \modltwo(\delta)\right)
\le \modcont(\delta; P_0, \mc{P})
\le 2\Phi\left(\half \modltwo(\delta)\right),
\end{equation*}
and assuming that the function $\Phi$ itself satisfies $\Phi(2 t) \le C
\Phi(t)$ for $t \ge 0$, to
show $\modcont$ satisfies Condition~\ref{cond:polynomial-growth}, it is
sufficient to show that $\modltwo(\delta)$ satisfies
Condition~\ref{cond:polynomial-growth}.
Now consider the problem of estimating
$\theta(P) = \E_P [X]$, where the unknown distribution $P$
belongs to $\mc{P} = \{P: \supp{P} \subset \mathcal{X}\}$
for some compact set $\mathcal{X}$.
Denote $\theta_0 = \E_{P_0}[X]$. We claim the following upper and lower
bounds on $\modltwo(\delta)$:
\begin{equation}
\label{eqn:modltwo-mean}
\delta \cdot \sup_{x\in \mathcal{X}} \ltwo{x - \theta_0} \leq
\modltwo(\delta)
\leq 2 \delta \cdot \sup_{x\in \mathcal{X}} \ltwo{x - \theta_0},
\end{equation}
which of course combine to imply Condition~\ref{cond:polynomial-growth}.
To see the lower bound~\eqref{eqn:modltwo-mean}, for any $x\in \mathcal{X}$,
define $P_x = (1-\delta)P_0 + \delta \cdot \ones_x$, where $\ones_x$
denotes a point mass at $x$. Then $\tvnorm{P_x - P_0} \leq \delta$ for all
$x \in \mathcal{X}$, so $\modltwo(\delta) \geq \sup_x \ltwo{\theta_0 -
\theta_{P_x}} = \delta \cdot \sup_{x\in \mathcal{X}} \ltwo{x -
\theta_0}$. The upper bound~\eqref{eqn:modltwo-mean} is similarly
straightforward: for all $P \in \mc{P}$, we have
\begin{align*}
\ltwo{\theta(P) - \theta_0}
& = \ltwo{\int (x - \theta_0) (dP(x) - dP_0(x))}
\le 2 \sup_{x \in \mathcal{X}} \ltwo{x - \theta_0} \tvnorm{P - P_0}
\end{align*}
by the triangle inequality, which is our desired result.
\end{example}
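A small numerical sanity check of the example's bounds may be useful; the
following sketch (our own illustration, not part of the development, with a
hypothetical finite support and base distribution) verifies that the tilted
mixtures $P_x$ satisfy $\tvnorm{P_x - P_0} \le \delta$ and shift the mean by
exactly $\delta \ltwo{x - \theta_0}$:

```python
import numpy as np

# Sanity check of the mean-estimation example: for the tilted distribution
#   P_x = (1 - delta) * P0 + delta * point_mass(x),
# the variation distance to P0 is at most delta and the induced mean shift
# is exactly delta * |x - theta0|.

support = np.array([-1.0, 0.0, 0.5, 2.0])  # a finite (hence compact) domain X
p0 = np.array([0.1, 0.4, 0.3, 0.2])        # base distribution P0 on X
theta0 = support @ p0                      # theta(P0) = E_{P0}[X]
delta = 0.25

def tilt(i):
    """The mixture P_x with a point mass at support[i]."""
    e = np.zeros_like(p0)
    e[i] = 1.0
    return (1 - delta) * p0 + delta * e

for i in range(len(support)):
    px = tilt(i)
    tv = 0.5 * np.abs(px - p0).sum()       # variation distance on a finite set
    shift = abs(support @ px - theta0)     # |theta(P_x) - theta0|
    assert tv <= delta + 1e-12
    assert np.isclose(shift, delta * abs(support[i] - theta0))

# The best point mass attains the lower bound delta * sup_x |x - theta0|:
best = max(abs(support @ tilt(i) - theta0) for i in range(len(support)))
assert np.isclose(best, delta * np.max(np.abs(support - theta0)))
```

Here the asserted identities mirror the display~\eqref{eqn:modltwo-mean}
specialized to a discrete distribution.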
\subsection{Super-efficiency}
\label{sec:super-efficiency}
To demonstrate that the local modulus of continuity is indeed the ``correct''
lower bound on estimation, we consider the third of the desiderata
for a strong lower bound that we identify in the introduction:
a super-efficiency
result. We provide this via a constrained risk inequality~\cite{BrownLo96,
DuchiRu18}. Our result applies in the typical setting in which the loss
is $L(\theta, P) \defeq \Phi(\ltwo{\theta - \theta(P)})$ for some increasing
function $\Phi: \R_+ \to \R_+$, and we use the shorthand $\risk(\what{\theta},
\theta, P) \defeq \E_P[\Phi(\ltwos{\what{\theta}(Z) - \theta})]$ for the
risk (expected loss) of the estimator $\what{\theta}$ under the distribution
$P$. The starting point for our development is an inequality extending
\citet[Thm.~1]{BrownLo96} showing that if $\what{\theta}$ has small risk for
a parameter $\theta$ under a distribution $P_0$, then its risk under a
distribution $P_1$ close to $P_0$ may be large (see
also~\cite[Thm.~6]{Tsybakov98}). In the lemma and the remainder of this
section, for measures $P_0$ and $P_1$ we define the $\chi^2$-affinity
\begin{equation}
\label{eqn:shorthand-chi}
\chipone{P_0}{P_1}
\defeq \dchi{P_0}{P_1} + 1
= \E_{P_1}\left[\left(\frac{dP_0}{dP_1}\right)^2\right]
= \E_{P_0}\left[\frac{dP_0}{dP_1}\right],
\end{equation}
which measures the similarity between distributions $P_0$ and $P_1$.
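The two expectation expressions agree by a change of measure (a one-line
check, writing densities with respect to a common dominating measure):

```latex
\[
  \chipone{P_0}{P_1}
  = \int \left(\frac{dP_0}{dP_1}\right)^2 dP_1
  = \int \frac{dP_0}{dP_1}\, dP_0
  = \E_{P_0}\left[\frac{dP_0}{dP_1}\right].
\]
```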
With these definitions, we have the following constrained risk inequality.
\begin{lemma}[\cite{DuchiRu18}, Theorem 1]
\label{lemma:constrained-risk}
Let $\theta_0 = \theta(P_0)$, $\theta_1 = \theta(P_1)$,
and define $\Delta = \Phi(\half \ltwo{\theta_0 - \theta_1})$.
If the estimator $\what{\theta}$ satisfies
$\risk(\what{\theta}, \theta_0, P_0) \le \delta$ for some $\delta \ge 0$, then
\begin{equation*}
\risk(\what{\theta}, \theta_1, P_1)
\ge
\hinge{\Delta^{1/2} - (\chipones{P_1}{P_0} \cdot \delta)^{1/2}}^2.
\end{equation*}
\end{lemma}
\noindent
The lemma shows that if an estimator has small risk under
distribution $P_0$, then its risk for a nearby (in $\chi^2$-divergence)
distribution $P_1$ must be nearly the distance between the associated
parameters $\theta_0$ and $\theta_1$.
With Lemma~\ref{lemma:constrained-risk} in hand, we can prove
a super-efficiency result, showing that improvement over
our modulus of continuity lower bound at a point $P_0$ implies
worse performance elsewhere.
\begin{proposition}
\label{proposition:super-efficiency}
Let $\channel$ be a sequentially
interactive $\diffp^2$-$\chi^2$-private
channel (Defs.~\ref{definition:local-renyi}
or~\ref{definition:f-divergence-dp}) with associated marginal
distributions $\marginprob_a^n(\cdot) = \int \channel(\cdot \mid x_{1:n})
dP_a^n(x_{1:n})$. Let Condition~\ref{cond:reverse-triangle}
hold with parameter $\Creverse$. If for some $\eta \in [0, 1]$ the
estimator $\what{\theta}$ satisfies
\begin{equation*}
\risk(\what{\theta}, \theta_0, \marginprob_0^n)
\le \eta \modcont\left(\frac{1}{\sqrt{4 n \diffp^2}};
P_0, \mc{P} \right),
\end{equation*}
then for all $t \in [0, 1]$
there exists a distribution $P_1 \in \mc{P}$ such that
\begin{equation*}
\risk(\what{\theta}, \theta(P_1), \marginprob_1^n)
\ge \Creverse^{-1}
\hinge{\half - \eta^\frac{ (1 - t)}{2}}^2
\modcont\Bigg(\frac{1}{4} \sqrt{\frac{t \log \frac{1}{\eta}}{n \diffp^2}};
P_1, \mc{P}\Bigg).
\end{equation*}
\end{proposition}
\noindent
See Section~\ref{sec:proof-super-efficiency} for a proof.
The proposition depends on a number of constants, but roughly, it shows (for
small enough $\eta$, where we simplify by taking $t = 1/2$) that if an estimator
$\what{\theta}$ is super-efficient at $P_0$, in that $\risk(\what{\theta},
\theta_0, \marginprob_0^n) \le \eta \cdot \modcont(1 / \sqrt{4 n \diffp^2}; P_0)$,
then there exists a constant $c > 0$ such that for some $P_1$ we have
$\risk(\what{\theta}, \theta_1, \marginprob_1^n) \ge c \cdot
\modcont(\sqrt{\log (1/\eta)} / \sqrt{32 n \diffp^2}; P_1)$. In this sense,
our local modulus of continuity bounds
are sharp: no estimator can achieve much better risk than the local
modulus of continuity at a distribution $P_0$ without paying nontrivial cost
elsewhere.
\subsection{Contractions of probability measures}
\label{sec:contraction-probability-measures}
The main technical tool underpinning our lower bounds is that our
definitions of privacy imply strong contractions on the space of probability
measures. Such contractive properties have been important in the study of
information channels~\cite{CohenKeZb98,DelMoralLeMi03}, where one studies
\emph{strong data processing} inequalities, and in the mixing properties of
Markov chains under so-called \emph{strong mixing conditions}, such as the
Dobrushin condition~\cite{Dobrushin56}. For $a \in \{0, 1\}$, define the
marginal distributions
\begin{equation*}
\marginprob_a(S) \defeq \int \channel(S \mid x)
dP_a(x).
\end{equation*}
The goal is then to provide upper bounds on the $f$-divergence
$\fdiv{\marginprob_0}{\marginprob_1}$ in terms of the channel $\channel$;
the standard data-processing inequality~\cite{CoverTh06,LieseVa06}
guarantees $\fdiv{\marginprob_0}{\marginprob_1} \le
\fdiv{P_0}{P_1}$. Dobrushin's celebrated
ergodic coefficient $\alpha(\channel) \defeq 1 - \sup_{x, x'}
\tvnorm{\channel(\cdot \mid x) - \channel(\cdot \mid x')}$
guarantees that
for any
$f$-divergence (see~\cite{CohenKeZb98,DelMoralLeMi03}),
\begin{equation}
\label{eqn:strong-data-processing}
\fdiv{\marginprob_0}{\marginprob_1}
\le \sup_{x, x'}
\tvnorm{\channel(\cdot \mid x) - \channel(\cdot \mid x')}
\fdiv{P_0}{P_1}.
\end{equation}
Thus, as long as the Dobrushin coefficient is strictly positive, one
obtains a strong data processing inequality. In our case, our privacy
guarantees provide a stronger condition than the positivity of the Dobrushin
coefficient. Consequently, we are able to provide substantially stronger data
processing inequalities: we can even show that it is possible to modify the
underlying $f$-divergence.
Thus, we reconsider the notions of privacy based on divergences between the
channels $\channel(\cdot \mid x)$ and $\channel(\cdot \mid x')$. We have
the following proposition, which provides a strong data processing
inequality for all channels satisfying the divergence-based notion of
privacy (Definition~\ref{definition:f-divergence-dp}).
\begin{proposition}
\label{proposition:fk-contractions}
Let $f_k(t) = |t - 1|^k$ for some $k > 1$, and let
$P_0$ and $P_1$ be arbitrary distributions on a common
space $\mc{X}$. Let $\channel$ be a Markov kernel
from $\mc{X}$ to $\mc{Z}$
satisfying
\begin{equation*}
\fdivf{f_k}{\channel(\cdot \mid x)}{\channel(\cdot \mid x')}
\le \diffp^k
\end{equation*}
for all $x, x' \in \mc{X}$
and $\marginprob_a(\cdot) = \int \channel(\cdot \mid x) dP_a(x)$.
Then
\begin{equation*}
\fdivf{f_k}{\marginprob_0}{\marginprob_1}
\le (2 \diffp)^k
\tvnorm{P_0 - P_1}^k.
\end{equation*}
\end{proposition}
Jensen's inequality implies that $2^k \tvnorm{P_0 - P_1}^k \le
\fdivf{f_k}{P_0}{P_1}$, so Proposition~\ref{proposition:fk-contractions}
provides a stronger guarantee than the classical
bound~\eqref{eqn:strong-data-processing} for the specific divergence
associated with $f_k(t) = |t - 1|^k$. Because $\tvnorm{P_0 -
P_1} \le 1$ for all $P_0, P_1$, the bound is finite even when the
$f_k$-divergence between $P_0$ and $P_1$ is infinite, so the marginals
may be far closer together than the original distributions. It is this
transfer from power divergence to variation distance, that is, $f_k$ to
$f_1(t) = |t - 1|$, that allows us to prove the strong localized lower
bounds depending on variation distance such as
Theorem~\ref{theorem:modulus-of-continuity}.
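The contraction in the proposition can be checked numerically on a simple
channel. The following sketch (our own illustration; the channel, parameter
names, and test distributions are hypothetical choices) verifies the $k = 2$
case for a binary randomized-response channel and Bernoulli source
distributions:

```python
import numpy as np

# Check of the k = 2 case of the contraction proposition on a binary
# randomized-response channel: Z = X with prob. flip_p, else 1 - X.
# eps2 plays the role of diffp^2: the channel's chi^2-privacy level.

def chi2(q, r):
    """chi^2-divergence between p.m.f.s q and r on a common finite set."""
    return float(np.sum((q - r) ** 2 / r))

flip_p = 0.7
Q = {0: np.array([flip_p, 1 - flip_p]),   # law of Z given X = 0
     1: np.array([1 - flip_p, flip_p])}   # law of Z given X = 1
eps2 = max(chi2(Q[x], Q[y]) for x in Q for y in Q if x != y)

for theta0, theta1 in [(0.1, 0.2), (0.3, 0.9), (0.5, 0.55)]:
    P0 = np.array([1 - theta0, theta0])   # Bernoulli(theta0) on {0, 1}
    P1 = np.array([1 - theta1, theta1])
    M0 = P0[0] * Q[0] + P0[1] * Q[1]      # marginal law of Z under P0
    M1 = P1[0] * Q[0] + P1[1] * Q[1]
    tv = 0.5 * float(np.abs(P0 - P1).sum())
    # Proposition (k = 2): chi^2(M0 || M1) <= (2 eps)^2 * TV(P0, P1)^2.
    assert chi2(M0, M1) <= 4 * eps2 * tv ** 2 + 1e-12
```

In each case the marginal $\chi^2$-divergence is well below the bound,
consistent with the slack Jensen's inequality introduces.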
As a corollary of Proposition~\ref{proposition:fk-contractions}, we may
parallel the proof of~\cite[Theorem 1]{DuchiJoWa18} to obtain a
tensorization result. In this context, the most important divergence for us
is the $\chi^2$ divergence, which corresponds to the case $k = 2$ in
Proposition~\ref{proposition:fk-contractions}, that is, $f(t) = (t - 1)^2$,
which also corresponds to R\'{e}nyi differential privacy with $\alpha = 2$
(Def.~\ref{definition:local-renyi}) with a guarantee that prior and
posterior odds of discovery do not change much
(Eq.~\eqref{eqn:renyi-prior-posterior}). Recall our
formulation~\eqref{eqn:sequential-interactive}, in which the channel
$\channel(\cdot)$ may be defined sequentially as $\channel_i(\cdot \mid
x, z_{1:i-1})$, and let
\begin{equation*}
\channel^n(S \mid x_{1:n})
\defeq \int_{z_{1:n} \in S}
\prod_{i = 1}^n d\channel(z_i \mid x_i, z_{1:i-1}).
\end{equation*}
Now, let $P_a$, $a = 0, 1$, be product distributions on $\mc{X}^n$, where
the distribution of $X_i$ follows either $P_{0,i}$ or $P_{1,i}$, and
define $\marginprob_a^n(\cdot) = \int \channel^n(\cdot \mid x_{1:n})
dP_a(x_{1:n})$, noting that $dP_a(x_{1:n}) = \prod_{i = 1}^n dP_{a,i}(x_i)$
as $P_a$ is a product distribution. We have the following corollary.
\begin{corollary}
\label{corollary:tensorized-contraction-chi}
Let $\channel$ be a sequentially interactive channel satisfying
$\diffp^2$-$\chi^2$-divergence privacy, that is, $\dchis{\channel(\cdot
\mid x, z_{1:i})}{\channel(\cdot \mid x', z_{1:i})} \le \diffp^2$ for
all $x, x' \in \mc{X}$ and $z_{1:i} \in \mc{Z}^i$. Then
\begin{equation*}
\dchi{\marginprob_0^n}{\marginprob_1^n}
\le \prod_{i=1}^n \left(1
+ 4 \diffp^2 \tvnorm{P_{0,i} - P_{1,i}}^2\right) - 1.
\end{equation*}
\end{corollary}
\noindent
See Section~\ref{sec:proof-tensorized-contraction-chi} for a proof.
Combining
Corollary~\ref{corollary:tensorized-contraction-chi} with the
fact~\cite[Lemma 2.7]{Tsybakov09} that $\dkl{P_0}{P_1} \le \log(1 +
\dchi{P_0}{P_1})$ yields
\begin{equation}
\label{eqn:tensorized-contraction-kl}
\dkl{\marginprob_0^n}{\marginprob_1^n}
\le \sum_{i = 1}^n \log\left(1 + 4 \diffp^2
\tvnorm{P_{0,i} - P_{1,i}}^2 \right)
\le 4 \diffp^2 \sum_{i = 1}^n
\tvnorm{P_{0,i} - P_{1,i}}^2.
\end{equation}
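To preview how this bound is used, specialize to the i.i.d.\ case
$P_{a,i} \equiv P_a$ (a sketch of the scaling behind
Theorem~\ref{theorem:modulus-of-continuity}):

```latex
\[
  \dkl{\marginprob_0^n}{\marginprob_1^n}
  \le 4 n \diffp^2 \tvnorm{P_0 - P_1}^2,
\]
```

so the private marginals $\marginprob_0^n$ and $\marginprob_1^n$ remain
statistically indistinguishable whenever $\tvnorm{P_0 - P_1} \lesssim
(n \diffp^2)^{-1/2}$, which is precisely the radius at which we evaluate the
modulus of continuity~\eqref{eqn:private-moc}.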
The tensorization~\eqref{eqn:tensorized-contraction-kl} is
the key to our results, as we see in the later sections.
\subsection{Proofs}
We collect the proofs of our main results in this section, as they are
reasonably brief and (we hope) elucidating. We begin with the key
contraction inequality in Proposition~\ref{proposition:fk-contractions},
as it underlies all subsequent results.
\subsubsection{Proof of Proposition~\ref{proposition:fk-contractions}}
\label{sec:proof-fk-contractions}
Let $p_0$ and $p_1$ be the densities of $P_0, P_1$ with respect to some
base measure $\mu$ dominating $P_0, P_1$. Without loss of generality, we
may assume that $\mc{Z}$ is finite, as all $f$-divergences are
approximable by finite partitions~\cite{Vajda72}; we let $\margindens_a$
denote the associated p.m.f. For $k > 1$, the function $t \mapsto t^{1 -
k}$ is convex on $\R_+$. Thus, applying Jensen's inequality, we may
bound $\fdivf{f_k}{\marginprob_0}{\marginprob_1}$ by
\begin{align}
\nonumber
\fdivf{f_k}{\marginprob_0}{\marginprob_1}
= \sum_z \frac{|\margindens_0(z) - \margindens_1(z)|^k}{
\margindens_1(z)^{k - 1}}
& \leq
\sum_z \int \frac{|\margindens_0(z) - \margindens_1(z)|^k}{
\channeldens(z \mid x_0)^{k-1}} p_1(x_0) d\mu(x_0) \\
& = \int \underbrace{\left(\sum_z
\frac{|\margindens_0(z) - \margindens_1(z)|^k}{
\channeldens(z \mid x_0)^{k-1}} \right)}_{
\eqdef W(x_0)} p_1(x_0) d\mu(x_0).
\label{eqn:prop-k-moment-starting-point}
\end{align}
It thus suffices to upper bound $W(x_0)$.
To do so, we rewrite $\margindens_0(z) - \margindens_1(z)$ as
\begin{equation*}
\margindens_{0}(z) - \margindens_{1}(z)
= \int \channeldens(z \mid x) (dP_0(x) - dP_1(x))
= \int \left(\channeldens(z \mid x)
- \channeldens(z \mid x_0)\right) (dP_0(x) - dP_1(x)),
\end{equation*}
where we have used that $\int (dP_0 - dP_1) = 0$.
Now define the function
\begin{equation*}
\Delta(z \mid x, x_0)
\defeq \frac{\channeldens(z\mid x) -
\channeldens(z\mid x_0)}{\channeldens(z \mid x_0)^{1-1/k}}.
\end{equation*}
By Minkowski's integral inequality, we have the upper bound
\begin{align}
\label{eqn:key-step-Minkovski}
\lefteqn{W(x_0)^{1/k}
= \left(\sum_z \left|\int
\Delta(z \mid x, x_0) (p_0(x) - p_1(x))d\mu(x)
\right|^k\right)^{1/k}} \\
& \leq \int \left(\sum_z \big|\Delta(z \mid x, x_0)
(p_0(x) - p_1(x))\big|^k \right)^{1/k}
d\mu(x)
= \int \left(\sum_z |\Delta(z \mid x, x_0)|^k
\right)^\frac{1}{k} |dP_0(x) - dP_1(x)|. \nonumber
\end{align}
Now we compute the inner summation: we have
that
\begin{equation*}
\sum_z |\Delta(z \mid x, x_0)|^k
=
\sum_z \left|\frac{\channeldens(z \mid x)}{\channeldens(z \mid x_0)}
- 1 \right|^k \channeldens(z \mid x_0)
= \fdivf{f_k}{\channel(\cdot \mid x)}{\channel(\cdot \mid x_0)}.
\end{equation*}
Substituting this into our upper bound~\eqref{eqn:key-step-Minkovski}
on $W(x_0)$, we obtain that
\begin{equation*}
W(x_0)
\le \sup_{x \in \mc{X}}
\fdivf{f_k}{\channel(\cdot \mid x)}{\channel(\cdot \mid x_0)}
2^k \tvnorm{P_0 - P_1}^k,
\end{equation*}
as $\int|dP_0 - dP_1| = 2 \tvnorm{P_0 - P_1}$.
Substitute this upper bound into
inequality~\eqref{eqn:prop-k-moment-starting-point}
to obtain the proposition.
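The proposition is easy to stress-test numerically for $k = 2$: for any finite channel and any pair of input distributions, the marginal $\chi^2$-divergence should be at most the channel's worst-case pairwise $\chi^2$-divergence times $2^2 \tvnorm{P_0 - P_1}^2$. The sketch below, with illustrative alphabet sizes and randomly drawn channels, checks exactly this.

```python
import numpy as np

def chi2(p, q):
    """Chi-square divergence between two probability vectors."""
    return float(np.sum((p - q) ** 2 / q))

def tv(p, q):
    """Total variation distance: (1/2) * ell_1 distance."""
    return 0.5 * float(np.sum(np.abs(p - q)))

rng = np.random.default_rng(0)
for _ in range(100):
    Q = rng.dirichlet(np.ones(5), size=4)       # channel: rows are Q(. | x)
    p0, p1 = rng.dirichlet(np.ones(4), size=2)  # input distributions on X
    m0, m1 = p0 @ Q, p1 @ Q                     # output marginals on Z
    # Worst-case pairwise chi^2-divergence over the channel's rows.
    eps_sq = max(chi2(Q[i], Q[j])
                 for i in range(4) for j in range(4) if i != j)
    # Proposition with k = 2: chi^2(M0, M1) <= eps_sq * 2^2 * TV(P0, P1)^2.
    assert chi2(m0, m1) <= eps_sq * 4 * tv(p0, p1) ** 2 + 1e-9
```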
\subsubsection{Proof of Corollary~\ref{corollary:tensorized-contraction-chi}}
\label{sec:proof-tensorized-contraction-chi}
We use an inductive argument. The base case $n = 1$ follows
immediately from Proposition~\ref{proposition:fk-contractions}. Now, suppose that
Corollary~\ref{corollary:tensorized-contraction-chi} holds at $n - 1$; we
show that the claim holds at $n$. We use the shorthand
$\margindens_a(z_{1:k})$ for the density of the measure $\marginprob_a^k$,
$a \in \{0, 1\}$ and $k \in \N$, which we may assume exists w.l.o.g. Then, by
definition of $\chi^2$-divergence, we have,
\begin{equation*}
\dchi{\marginprob_0^n}{\marginprob_1^n} + 1 =
\E_{\marginprob_1}\left[
\frac{\margindens_0^2(Z_{1:n})}{\margindens_1^2(Z_{1:n})} \right]
= \E_{\marginprob_1} \left[
\frac{\margindens_0^2(Z_{1:{n-1}})}{\margindens_1^2(Z_{1:{n-1}})}
\E_{\marginprob_1}
\left[
\frac{\margindens_0^2(Z_n \mid Z_{1:n-1})}{
\margindens_1^2(Z_n \mid Z_{1:n-1})}\mid Z_{1:{n-1}}\right] \right].
\end{equation*}
Noting that the $k$th marginal distribution satisfies $\marginprob_{a,k}(\cdot \mid z_{1:k-1})
= \int \channel(\cdot \mid x, z_{1:k-1}) dP_{a,k}(x)$ for $a \in \{0, 1\}$,
we see that for any $z_{1:n-1} \in \mc{Z}^{n-1}$,
\begin{align*}
\E_{\marginprob_1}
\left[\frac{\margindens_0^2(Z_n \mid z_{1:n-1})}{
\margindens_1^2(Z_n \mid z_{1:n-1})}
\mid z_{1:{n-1}}\right]
& = 1 + \dchi{\marginprob_{0,n}(\cdot \mid z_{1:n-1})}{\marginprob_{1,n}(\cdot \mid z_{1:n-1})}
\\
& \le
1 + 4 \diffp^2 \tvnorm{P_{0,n}(\cdot \mid z_{1:n-1})
- P_{1,n}(\cdot \mid z_{1:n-1})}^2 \\
& = 1 + 4 \diffp^2 \tvnorm{P_{0,n} - P_{1,n}}^2,
\end{align*}
where the inequality is Proposition~\ref{proposition:fk-contractions}
and the final equality follows because $X_n$ is independent of $Z_{1:n-1}$.
This yields the inductive step and completes the proof
once we recall the inductive hypothesis and
that $\E_{\marginprob_1}[\frac{\margindens_0^2(Z_{1:n-1})}{
\margindens_1^2(Z_{1:n-1})}]
= \dchi{\marginprob_0^{n-1}}{\marginprob_1^{n-1}} + 1$.
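In the non-interactive case, where round $i$ applies a fixed channel to $X_i \sim P_{a,i}$ and the conditional marginals do not depend on $z_{1:k-1}$, the inductive step reduces to the exact product identity $1 + \dchi{\marginprob_0^n}{\marginprob_1^n} = \prod_{i=1}^n (1 + \dchi{\marginprob_{0,i}}{\marginprob_{1,i}})$. The following sketch verifies this identity by brute-force enumeration (alphabet sizes are illustrative).

```python
import numpy as np
from itertools import product

def chi2(p, q):
    """Chi-square divergence between two probability vectors."""
    return float(np.sum((p - q) ** 2 / q))

rng = np.random.default_rng(1)
n, nz = 3, 3
m0s, m1s = [], []
for i in range(n):
    Q = rng.dirichlet(np.ones(nz), size=2)      # round-i channel, |X| = 2
    p0, p1 = rng.dirichlet(np.ones(2), size=2)  # P_{0,i} and P_{1,i}
    m0s.append(p0 @ Q)
    m1s.append(p1 @ Q)
# Joint marginals on Z^n are products of the per-round marginals.
joint0 = np.array([np.prod([m0s[i][z[i]] for i in range(n)])
                   for z in product(range(nz), repeat=n)])
joint1 = np.array([np.prod([m1s[i][z[i]] for i in range(n)])
                   for z in product(range(nz), repeat=n)])
lhs = 1 + chi2(joint0, joint1)
rhs = float(np.prod([1 + chi2(m0s[i], m1s[i]) for i in range(n)]))
assert abs(lhs - rhs) < 1e-9
```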
\subsubsection{Proof of Theorem~\ref{theorem:modulus-of-continuity}}
\label{sec:proof-modulus-of-continuity}
We follow the typical reduction of estimation to testing, common in the
literature on lower bounds~\cite{AgarwalBaRaWa12,
DuchiJoWa18,Tsybakov09,Yu97}.
By definition of the ``distance'' $\lossdist$, we have
the following mutual exclusion, valid for any $\theta$:
\begin{equation}
\label{eqn:exclusion}
L(\theta, P_0) < \half \lossdist(P_0, P_1)
~~ \mbox{implies} ~~
L(\theta, P_1) \ge \half \lossdist(P_0, P_1).
\end{equation}
Let $\marginprob_0^n$ and $\marginprob_1^n$ be the marginal probabilities
over observations $Z_{1:n}$ under $P_0$ and $P_1$ for a
channel $\channel \in \channeldistset$. Using Markov's inequality,
we have for any estimator $\what{\theta}$
based on $Z_{1:n}$ and any $\delta \ge 0$ that
\begin{align*}
\E_{\marginprob_0^n}\left[L(\what{\theta}, P_0)\right]
+
\E_{\marginprob_1^n}\left[L(\what{\theta}, P_1)\right]
& \ge \delta \left[
\marginprob_0^n(L(\what{\theta}, P_0) \ge \delta)
+ \marginprob_1^n(L(\what{\theta}, P_1) \ge \delta)
\right] \\
& = \delta \left[1 -
\marginprob_0^n(L(\what{\theta}, P_0) < \delta)
+ \marginprob_1^n(L(\what{\theta}, P_1) \ge \delta)
\right].
\end{align*}
Setting $\delta = \delta_{01} \defeq \half \lossdist(P_0, P_1)$
and using the
implication~\eqref{eqn:exclusion}, we obtain
\begin{align}
\nonumber
\E_{\marginprob_0^n}\left[L(\what{\theta}, P_0)\right]
+
\E_{\marginprob_1^n}\left[L(\what{\theta}, P_1)\right]
& \ge
\delta_{01} \left[1 -
\marginprob_0^n(L(\what{\theta}, P_0) < \delta_{01})
+ \marginprob_1^n(L(\what{\theta}, P_1) \ge \delta_{01})
\right] \\
& \ge
\delta_{01} \left[1 -
\marginprob_0^n(L(\what{\theta}, P_1) \ge \delta_{01})
+ \marginprob_1^n(L(\what{\theta}, P_1) \ge \delta_{01})
\right] \nonumber \\
& \ge \delta_{01}
\left[1 - \tvnorm{\marginprob_0^n - \marginprob_1^n}\right],
\label{eqn:le-cam-application}
\end{align}
where in the last step we used the definition of the
variation distance.
Now we make use of the contraction inequality of
Corollary~\ref{corollary:tensorized-contraction-chi}
and its consequence~\eqref{eqn:tensorized-contraction-kl}
for KL-divergences.
By Pinsker's inequality and the corollary, we have
\begin{equation*}
2 \tvnorm{\marginprob_0^n - \marginprob_1^n}^2
\le \dkl{\marginprob_0^n}{\marginprob_1^n}
\le \log(1+\dchi{\marginprob_0^n}{\marginprob_1^n})
\le n \log \left(1 + 4 \diffp^2
\tvnorm{P_0 - P_1}^2\right).
\end{equation*}
Substituting this into our preceding lower
bound~\eqref{eqn:le-cam-application} and using that $\what{\theta}$ is
arbitrary and $\delta_{01} = \half \lossdist(P_0, P_1)$, we have that for
any distributions $P_0$ and $P_1$,
\begin{equation*}
\inf_{\what{\theta}}
\inf_{\channel \in \channeldistset}
\max_{P \in \{P_0, P_1\}} \E_P \left[L(\what{\theta}, P)\right]
\ge \frac{1}{4} \lossdist(P_0, P_1)
\left[1
- \sqrt{\frac{n}{2} \log\left(1 + 4 \diffp^2 \tvnorm{P_0 - P_1}^2\right)
}\right].
\end{equation*}
Now, for any $\delta \ge 0$, if
$\frac{n}{2} \log(1 + 4 \diffp^2 \delta^2)
\le \frac{1}{4}$, or equivalently,
$\delta^2 \le \frac{1}{4 \diffp^2} (\exp(\frac{1}{2n}) - 1)$,
then $1 - \sqrt{\frac{n}{2} \log(1 + 4 \diffp^2 \delta^2)} \ge \half$.
Applying this to the bracketed term in the preceding display,
we obtain
\begin{align*}
\localminimax_n(P_0, L, \mc{P}, \channeldistset)
& \ge \frac{1}{8} \sup_{P_1 \in \mc{P}}
\left\{\lossdist(P_0, P_1)
\mid \tvnorm{P_0 - P_1}^2
\le \frac{1}{4 \diffp^2} \left[e^{\frac{1}{2n}} - 1\right]\right\} \\
& = \frac{1}{8}
\modcont\left(\frac{1}{2 \diffp}
\sqrt{e^\frac{1}{2n} - 1}; P_0, \mc{P} \right).
\end{align*}
\subsubsection{Proof of Proposition~\ref{proposition:super-efficiency}}
\label{sec:proof-super-efficiency}
For shorthand let
$\risk_a(\what{\theta}) = \risk(\what{\theta}, \theta_a, \marginprob_a^n)$
denote the risk under the marginal $\marginprob_a^n$.
By Lemma~\ref{lemma:constrained-risk},
for any distributions $P_0$ and $P_1$, we have
\begin{equation*}
\risk_1(\what{\theta})
\ge \hinge{\Phi \left(\half \ltwo{\theta_0 - \theta_1}\right)
- \left(\chipone{\marginprob_1^n}{\marginprob_0^n}
\risk(\what{\theta}, \marginprob_0^n)\right)^{1/2}}^2,
\end{equation*}
and by Corollary~\ref{corollary:tensorized-contraction-chi}
we have
\begin{equation*}
\chipone{\marginprob_1^n}{\marginprob_0^n}
\le \left(1 + 4 \diffp^2 \tvnorm{P_0 - P_1}^2\right)^n
\le \exp\left(4 n \diffp^2 \tvnorm{P_0 - P_1}^2\right).
\end{equation*}
For $t \in [0, 1]$, let $\mc{P}_t$ be the collection of
distributions
\begin{equation*}
\mc{P}_t
\defeq \left\{P \in \mc{P}
\mid \tvnorm{P_0 - P}^2
\le t \frac{ \log \frac{1}{\eta}}{4 n \diffp^2}\right\},
\end{equation*}
so that under the conditions of the proposition,
any distribution $P_1 \in \mc{P}_t$ satisfies
\begin{equation}
\label{eqn:risk-almost-transferred}
\risk_1(\what{\theta})
\ge \hinge{\Phi \left(\half \ltwo{\theta_0 - \theta_1}\right)
- \eta^{\frac{(1 - t)}{2}}
\modcont\left((4 n \diffp^2)^{-1/2}; P_0\right)^{1/2}}^2.
\end{equation}
By inequality~\eqref{eqn:lossdist-for-quasi-convex},
$2 \Phi \left(\half \ltwo{\theta_0 - \theta(P_1)}\right) \geq
\lossdist(P_0, P_1)$. Thus,
inequality~\eqref{eqn:risk-almost-transferred}
implies that for all $t
\in [0, 1]$, there exists $P_1 \in \mc{P}_t$ such that
\begin{equation*}
\risk(\what{\theta}, \marginprob_1^n)
\ge
\hinge{\half \modcont\left(\frac{\sqrt{t
\log\frac{1}{\eta}}}{\sqrt{4 n \diffp^2}}; P_0\right)^{1/2}
- \eta^{\frac{(1 - t)}{2}}
\modcont\left(\frac{1}{\sqrt{4 n \diffp^2}}; P_0\right)^{1/2}
}^2.
\end{equation*}
Because $\delta \mapsto \modcont(\delta)$ is non-decreasing,
if $t \in [0, 1]$ we may choose $P_1 \in \mc{P}_t$
such that
\begin{equation}
\risk(\what{\theta}, \marginprob_1^n)
\ge
\hinge{\half - \eta^{(1 - t)/2}}^2
\modcont\left(\frac{\sqrt{t
\log\frac{1}{\eta}}}{\sqrt{4 n \diffp^2}}; P_0\right).
\label{eqn:intermediate-eta-risk-bound}
\end{equation}
Lastly, we lower bound the modulus of continuity at $P_0$ by
a modulus at $P_1$.
We claim that under Condition~\ref{cond:reverse-triangle},
for all $\delta > 0$, if
$\tvnorm{P_0 - P_1} \le \delta$ then
\begin{equation}
\label{eqn:transfer-modcont}
\modcont(2 \delta; P_0)
\ge \Creverse^{-1} \modcont(\delta; P_1).
\end{equation}
Deferring the proof of this claim, note that
by
taking $\delta^2 = t \log \frac{1}{\eta} / (16 n \diffp^2)$
in inequality~\eqref{eqn:transfer-modcont},
Eq.~\eqref{eqn:intermediate-eta-risk-bound} implies
that there exists $P_1 \in \mc{P}_t$ such that
\begin{equation*}
\risk(\what{\theta}, \marginprob_1^n)
\ge
\hinge{\half - \eta^{(1 - t)/2}}^2
\modcont\left(2 \delta; P_0\right)
\ge
\Creverse^{-1} \hinge{\half - \eta^{(1 - t)/2}}^2
\modcont\left(\frac{1}{4} \frac{\sqrt{t
\log\frac{1}{\eta}}}{\sqrt{n \diffp^2}}; P_1\right).
\end{equation*}
Let us return to the claim~\eqref{eqn:transfer-modcont}.
For distributions $P_0, P_1, P_2$ with
associated parameters $\theta_a = \theta(P_a)$, we
use that $L(\theta, P) = \Phi(\ltwo{\theta - \theta(P)})$ to obtain
\begin{align*}
\lossdist(P_0, P_2)
\le \Phi(\ltwo{\theta_1 - \theta_0})
+ \Phi(\ltwo{\theta_1 - \theta_2})
& \leq \frac{\Creverse}{2} \lossdist(P_0, P_1)
+ \frac{\Creverse}{2} \lossdist(P_1, P_2)
\end{align*}
by Condition~\ref{cond:reverse-triangle}.
Then for any $\delta \ge 0$ and $P_1$ with $\tvnorm{P_1 - P_0} \le \delta$, we have
\begin{align*}
\sup_{\tvnorm{P_0 - P}
\le 2 \delta}
\lossdist(P_0, P)
& \ge \sup_{\tvnorm{P_1 - P} \le
\delta} \lossdist(P_0, P) \\
& \ge
\sup_{\tvnorm{P - P_1} \le \delta}
\left\{2 \Creverse^{-1} \lossdist(P_1, P)
- \lossdist(P_0, P_1)\right\}
\ge 2 \Creverse^{-1} \modcont(\delta; P_1)
- \modcont(\delta; P_0).
\end{align*}
Rearranging, we have for any distribution $P_1$
such that $\tvnorm{P_0 - P_1} \le \delta$,
\begin{equation*}
2 \modcont(2 \delta; P_0)
\ge
\modcont(\delta; P_0)
+ \modcont(2 \delta; P_0)
\ge 2 \Creverse^{-1} \modcont(\delta; P_1),
\end{equation*}
which is inequality~\eqref{eqn:transfer-modcont}.
\section{Generalized Le Cam's method and the failure of
high-dimensional estimation}
\label{sec:high-dim}
\providecommand{\complexity}{\mc{C}}
Our localized minimax complexity results essentially characterize the
difficulty of locally private estimation---under many accepted notions of
privacy---for \emph{functionals} of distributions. It is important
to investigate more complex inferential and estimation problems, including
in high dimensional settings, and we complement our local minimax results by
studying a few such problems here. \citet*[Sec.~4]{DuchiJoWa18} develop
sophisticated contraction inequalities for divergences to study local
$\diffp$-differentially private estimation in high dimensions. Their results
suggest that local differential privacy is too strong a notion to allow
high-dimensional estimation, but their results apply only to
\emph{non-interactive} channels. Thus, we might hope that by weakening the
notion of local privacy (to, say, R\'{e}nyi-differential privacy,
Def.~\ref{definition:local-renyi})
or allowing (sequential) interaction, we mitigate this curse of
dimensionality. This hope is misplaced.
In this case, a number of tools are available, but to keep our
presentation tighter, we focus on the generalized version of Le Cam's
method~\cite{LeCam73,Yu97}.
We begin with a result
essentially due to \citet[Lemma 1]{Yu97} (see also \cite{Wainwright18}),
known as the generalized Le Cam's method. Let $\mc{P}_0$
and $\mc{P}_1$ be two collections of distributions on a space $\mc{X}$. We
say that the sets $\mc{P}_0$ and $\mc{P}_1$ are $\delta$-separated if
\begin{equation*}
\lossdist(P_0, P_1) \ge \delta
~~
\mbox{for all~} P_0 \in \mc{P}_0, P_1 \in \mc{P}_1.
\end{equation*}
We then have the following lemma.
\begin{lemma}[Le Cam's method]
\label{lemma:le-cam-method}
Let $\mc{P}$ be a collection of probability distributions. Let $\mc{P}_0
\subset \mc{P}$ and $\mc{P}_1 \subset \mc{P}$ be $\delta$-separated for
the loss $L$. For any estimator $\what{\theta}$,
\begin{equation*}
\sup_{P \in \mc{P}}
\E_P\big[L(\what{\theta}, P)\big]
\ge \frac{\delta}{2} \sup_{P_i \in \conv(\mc{P}_i)}
\left(1 - \tvnorm{P_0 - P_1}\right).
\end{equation*}
\end{lemma}
\noindent
See Appendix~\ref{sec:proof-le-cam-method} for the proof of
Lemma~\ref{lemma:le-cam-method}, which we include for completeness.
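To make the lemma concrete, consider a minimal numerical instance (our illustration, not from the text): two product-Bernoulli laws with means $\half \pm \delta$ under the absolute-error loss, so that $\lossdist(P_0, P_1) = 2\delta$ and the lemma lower-bounds the worst-case risk by $\delta (1 - \tvnorms{P_0^n - P_1^n})$, which any estimator, here the sample mean, must respect.

```python
import numpy as np
from itertools import product

def product_pmf(p, n):
    """P.m.f. of n i.i.d. Bernoulli(p) bits over all outcomes in {0,1}^n."""
    return np.array([np.prod([p if b else 1 - p for b in bits])
                     for bits in product([0, 1], repeat=n)])

n, delta = 6, 0.1
P0, P1 = product_pmf(0.5 - delta, n), product_pmf(0.5 + delta, n)
tv = 0.5 * float(np.sum(np.abs(P0 - P1)))
# The means are 2*delta-separated, so lossdist(P0, P1) = 2*delta and
# Le Cam's method gives the lower bound delta * (1 - TV).
bound = delta * (1 - tv)
# Worst-case risk of the sample mean (exact, by enumeration).
outcomes = np.array(list(product([0, 1], repeat=n)), dtype=float)
means = outcomes.mean(axis=1)
risk0 = float(np.sum(P0 * np.abs(means - (0.5 - delta))))
risk1 = float(np.sum(P1 * np.abs(means - (0.5 + delta))))
assert max(risk0, risk1) >= bound
```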
In our context of private estimation, we use a slight reformulation and
simplification of the lemma to prove our results. Consider a (sequentially
interactive) private channel $\channel$ taking input data from a space
$\mathcal{X}$ and outputting $Z \in \mathcal{Z}$. Now, we consider any estimation
problem in which the set of possible distributions $\mc{P}$ contains a
collection $\{P_\packval\}_{\packval \in \packset} \subset \mc{P}$ of
distributions on $\mathcal{X}$, indexed by $\packval \in \packset$, as well as
a distribution $P_0 \in \mc{P}$. For each of these distributions, we
have i.i.d.\ observations $X_i$, that is, samples from the product
with density
\begin{equation*}
dP_\packval^n(x_{1:n})
= \prod_{i = 1}^n dP_{\packval}(x_i).
\end{equation*}
We define the marginal distributions
$\marginprob_\packval^n(\cdot) \defeq \int \channel(\cdot \mid x_{1:n})
dP^n_\packval(x_{1:n})$ and $\meanmarginprob^n \defeq
\frac{1}{|\packset|} \sum_{\packval \in \packset} \marginprob_\packval^n$.
Then an immediate consequence of Le Cam's method is the following private
analogue:
\begin{lemma}[Private generalized Le Cam's method]
\label{lemma:private-general-le-cam}
Let the conditions above hold.
For any estimator $\what{\theta}$ based on the privatized
observations $Z_1, \ldots, Z_n$ drawn from the
channel $\channel$,
\begin{equation}
\label{eqn:private-general-le-cam}
\sup_{P \in \mc{P}}
\E_{Q,P}
\big[L(\what{\theta}(Z_1, \ldots, Z_n), P)\big]
\ge \half
\cdot \min_{\packval \in \packset}
\lossdist(P_0, P_\packval)
\cdot \left(1 - \tvnorm{\marginprob_0^n
- \meanmarginprob^n}\right).
\end{equation}
\end{lemma}
Based on Lemma~\ref{lemma:private-general-le-cam}, our approach to proving
minimax bounds is roughly a two-step process, which follows the classical
approaches to ``local'' minimax
bounds~\cite{Yu97,YangBa99,DuchiJoWa18}. First, we choose a collection of
distributions $\{P_\packval\}$ and base distribution $P_0$ that are
well-separated for our loss $L$, that is, $\min_{\packval} \lossdist(P_0,
P_\packval) > 0$. If we can then show that the variation distance
satisfies $\tvnorms{\marginprob_0^n - \meanmarginprob^n} \le \half$, or is
otherwise bounded by a constant less than $1$, for all appropriately private
channels $\channel$, then we obtain the minimax lower bound
$\frac{1}{4} \min_{\packval \in \packset} \lossdist(P_0, P_\packval)$ using
inequality~\eqref{eqn:private-general-le-cam}. Often, we scale the
separation $\lossdist(P_0, P_\packval)$
by a parameter $\delta > 0$ to choose the optimal (worst-case)
tradeoff.
With this rough outline in mind, the most important step is to control the
variation distance between $\marginprob_0^n$ and $\meanmarginprob^n$, and a
variational quantity bounds this distance. For each $\packval \in
\packset$, define the linear functional
\begin{equation*}
\varphi_\packval(f) \defeq \int f(x) (dP_{0}(x) - dP_{\packval}(x)).
\end{equation*}
We then define the following measures, which we parameterize to apply for
different variants of privacy of the channel $\channel$, by a power
$r \in [1, \infty]$ as follows:
\begin{equation}
\label{eqn:chi-square-complexity}
\complexity_r(\{P_\packval\}_{\packval \in \packset})
\defeq \inf_{\supp P^* \subset \mathcal{X}}
\sup_{f}
\bigg\{\frac{1}{|\packset|} \sum_{\packval \in \packset}
\varphi_\packval(f)^2
\mid \norm{f}_{L^r(P^*)} \le 1 \bigg\},
\end{equation}
where the infimum is taken over all distributions $P^*$ supported
on $\mathcal{X}$.
In their study of locally private estimation, \citet{DuchiJoWa18} also
consider a similar quantity governing private estimation and mutual
information; their Theorem 2 shows that for a \emph{single
observation} of a privatized random variable (the case $n = 1$), one
obtains a result similar to ours in the $\diffp$-differentially private
case. However, their results do not extend to interactive channels, that is,
the important and more general scenario in which the private random
variables $Z_1, \ldots, Z_{i-1}$ may influence the choice of the channel
used to privatize observation $Z_i$. This interactivity, as we have seen, is
often useful for building optimal estimators. With the
definition~\eqref{eqn:chi-square-complexity}, we have the following
tensorization and contraction result, which we prove in
Appendix~\ref{sec:proof-big-tensor}.
\begin{theorem}
\label{theorem:big-tensor}
Let the channel $\channel$ be $\diffp$-differentially private. Then
for any distribution $P$ on $\mathcal{X}$,
\begin{equation}
\label{eqn:diffp-big-le-cam-tensor}
\dkl{\marginprob_0^n}{\meanmarginprob^n}
\le \frac{n (e^{\diffp/2} - e^{-\diffp/2})^2}{4}
\cdot
\complexity_\infty(\{P_{\packval}\}_{\packval \in \packset})
\cdot
\min\left\{e^\diffp,
\max_{\packval \in \packset}
\linf{dP / dP_\packval}\right\}.
\end{equation}
If the channel $\channel$ is $\diffp^2$-$\chi^2$-private, then
for any distribution $P$ on $\mathcal{X}$,
\begin{equation}
\label{eqn:chi-square-big-le-cam-tensor}
\dkl{\marginprob_0^n}{\meanmarginprob^n}
\le n \diffp^2 \cdot
\complexity_2(\{P_\packval\}_{\packval \in \packset})
\cdot \max_{\packval \in \packset}
\linf{dP_\packval / dP}.
\end{equation}
\end{theorem}
In the remainder of this section, we apply Theorem~\ref{theorem:big-tensor}
to high-dimensional estimation problems, deriving new and stronger results
showing that---even if we allow interactive channels or relaxed
$\diffp^2$-$\chi^2$ privacy---local privacy precludes high-dimensional
estimation.
\subsection{High-dimensional mean estimation}
\label{sec:highdim-mean-estimation}
There has been substantial recent interest in estimation problems in which
the nominal dimension $d$ of the estimand is much larger than the sample
size $n$, but some underlying latent structure---such as sparsity---makes
consistent estimation possible~\cite{BuhlmannGe11, NegahbanRaWaYu12}.
The simplest version of this problem is sparse
high-dimensional mean
estimation. \citet[Corollary 5]{DuchiJoWa18}
consider this problem as well, but one of the weaknesses of their paper is
that they do not allow sequentially interactive channels in high-dimensional
settings; there are subtle difficulties in the application of Fano's method
that our approach via the generalized Le Cam's method circumvents.
We consider the class of
distributions with $s$-sparse means supported on the radius $1$ box
in $\R^d$,
\begin{equation*}
\mc{P}_s \defeq
\left\{\mbox{distributions}~ P ~ \mbox{supported~on~} [-1, 1]^d
~ \mbox{s.t.}~
\norm{\E_P[X]}_0 \le s \right\}.
\end{equation*}
In the non-private case, an $\ell_1$-regularized (soft-thresholded mean)
estimator achieves $\E[\ltwos{\what{\theta}_n - \theta}^2] \lesssim \frac{s
\log (d/s)}{n}$ for the $s$-sparse case~\cite{Johnstone13, BuhlmannGe11,
Wainwright18}. In the private case, the problem is much more
difficult. To make this concrete, let us consider estimation of some linear
function $\psi : \R^d \to \R^k$ of the mean $\E_P[X]$ under a symmetric
convex loss $\Phi : \R^k \to \R_+$ with $\Phi(\zeros) = 0$. Then for $P \in
\mc{P}_s$, we have loss $L(\theta, P) \defeq \Phi(\theta - \psi(\E_P[X]))$.
With this choice of loss function, we have the following result, where we
recall that $e_j$ denotes the $j$th standard basis vector.
\begin{proposition}
\label{proposition:highdim-mean-hard}
Let $L$ be the parameter-based error $L(\theta, P) =
\Phi(\theta - \psi(\E_P[X]))$ above and let $\qfam$ denote the collection of
$\diffp^2$-$\chi^2$-private and sequentially interactive channels. Then
\begin{equation*}
\minimax_n\left(\theta(\mc{P}_1), L,
\qfam\right)
\ge \half \cdot \min_{j \in [d]}
\Phi\left(\left(\sqrt{\frac{d}{4n \diffp^2}} \wedge 1
\right) \psi(e_j) \right).
\end{equation*}
\end{proposition}
\noindent
To demonstrate the technique to develop this result from the divergence bounds
in Theorem~\ref{theorem:big-tensor}, we provide
the proof in Section~\ref{sec:proof-highdim-mean-hard}.
A few consequences of Proposition~\ref{proposition:highdim-mean-hard} are
illustrative. If we use the mean-squared error $\Phi(\theta) = \half
\ltwo{\theta}^2$, then this result says that the minimax risk for estimation
of a 1-sparse mean scales at least as $\frac{d}{n \diffp^2} \wedge 1$, so
that estimation when $d \gtrsim n / \diffp^2$ is effectively impossible,
even for channels satisfying weaker definitions of privacy than
$\diffp$-differential privacy and allowing interactivity. This
contrasts with the non-private case, where non-asymptotic rates of
$\frac{\log d}{n}$ are possible~\cite{BuhlmannGe11, NegahbanRaWaYu12}. Even
estimating linear functionals is challenging: suppose that we wish to
estimate the sum $\sum_{j = 1}^d \theta_j$, so that we consider
$\Phi(\theta) = |\ones^T \theta|$ and loss $L(\theta, P) = |\ones^T (\theta
- \E_P[X])|$. In this case, results on $\ell_1$-consistency of sparse
estimators (e.g.~\cite[Corollary 2]{NegahbanRaWaYu12} or~\cite{Johnstone13})
yield that in the non-private case, a soft-thresholded sample mean estimator
obtains $\E[\lones{\what{\theta} - \E[X]}] \lesssim \sqrt{\log d / n}$.
Thus, in the non-private case, we have $\E[|\ones^T(\what{\theta} - \E[X])|]
\le \linf{\ones} \E[\lones{\what{\theta} - \E[X]}] \lesssim \sqrt{\log d /
n}$. In contrast, in our private case,
\begin{equation*}
\sup_{P \in \mc{P}_1} \E_{P,Q}\left[\left|\what{\theta}(Z_{1:n})
- \ones^T \E_P[X]\right|\right]
\ge \half \left(\sqrt{\frac{d}{4 n \diffp^2}} \wedge 1\right)
\end{equation*}
for all $\diffp^2$-$\chi^2$-private channels $\channel$ and
\emph{any} estimator $\what{\theta}$ of $\sum_j \E[X_j]$.
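For contrast with these private lower bounds, the non-private soft-thresholded mean mentioned above is simple to sketch (our illustration; the sampling scheme, threshold constant, and assertion constant are assumptions for the simulation, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, s = 500, 4000, 5
theta = np.zeros(d)
theta[:s] = 0.5                        # an s-sparse mean in [-1, 1]^d
# Draw X_ij in {-1, +1} with E[X_ij] = theta_j (one member of P_s).
X = np.where(rng.random((n, d)) < (1 + theta) / 2, 1.0, -1.0)
xbar = X.mean(axis=0)
lam = np.sqrt(2 * np.log(d) / n)       # universal soft threshold
theta_hat = np.sign(xbar) * np.maximum(np.abs(xbar) - lam, 0.0)
err = float(np.sum((theta_hat - theta) ** 2))
# Squared error of order s * log(d) / n: no d / (n eps^2) penalty.
assert err <= 10 * s * np.log(d) / n
```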
\subsection{High-dimensional sparse logistic regression}
The lower bounds for the 1-sparse mean estimation problem in
Section~\ref{sec:highdim-mean-estimation} are illustrative of the
difficulties one must encounter when performing locally private inference:
under the notions of privacy we use, at least in a minimax sense,
there must be additional dimension dependence. As we develop in
Section~\ref{sec:examples}, the dependence of estimation methods
on the parameters of the underlying problem also causes difficulties;
Section~\ref{sec:1-dim-logreg} shows this in the case of logistic
regression. A similar difficulty arises in high-dimensional
problems, which we demonstrate here.
Let $\theta_0 \in \R$ be a fixed base parameter,
and consider the following family of $d$-dimensional 1-sparse logistic
models on pairs $(x, y) \in \{-1, 1\}^d \times \{-1, 1\}$:
\begin{equation*}
\mc{P}_{\log,\theta_0,d}
\defeq \Big\{P_\theta \mid p_\theta(y \mid x)
= \frac{e^{y (\theta^T x + \theta_0)}}{1 + e^{y (\theta^T x + \theta_0)}},
X \sim \uniform\{-1, 1\}^d,
\ltwo{\theta} \le 1,
\norm{\theta}_0 \le 1 \Big\}.
\end{equation*}
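A small sketch (illustrative parameters, ours) confirms two basic features of this family: the conditional probabilities are properly normalized, and at $\theta = 0$ they reduce to the null model $p(y \mid x) = 1/(1 + e^{-y \theta_0})$, independent of $x$.

```python
import math

def p_cond(y, x, theta, theta0):
    """Conditional p.m.f. p_theta(y | x) of the logistic model above."""
    u = y * (sum(t * xi for t, xi in zip(theta, x)) + theta0)
    return math.exp(u) / (1 + math.exp(u))

theta0, theta = 0.5, [0.8, 0.0, 0.0]    # 1-sparse, ||theta||_2 <= 1
for x in [(-1, -1, 1), (1, 1, -1)]:
    assert math.isclose(p_cond(1, x, theta, theta0)
                        + p_cond(-1, x, theta, theta0), 1.0)
# At theta = 0, the conditional law no longer depends on x.
assert math.isclose(p_cond(1, (1, 1, 1), [0.0, 0.0, 0.0], theta0),
                    1 / (1 + math.exp(-theta0)))
```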
Thus, the family contains the ``null'' distribution, with $\theta = 0$ and
conditional distributions $p(y \mid x) = \frac{1}{1 + e^{-y \theta_0}}$
determined by the fixed bias parameter $\theta_0$, while the parameter to be
estimated is an at most $1$-sparse
vector $\theta \in \R^d$ with $\ltwo{\theta} \le 1$. For this class, we
have the following lower bound, which applies to convex
losses $\Phi$ for estimating linear functions $\psi$ of $\theta$, as
in Proposition~\ref{proposition:highdim-mean-hard}
(see Appendix~\ref{sec:proof-sparse-logistic-regression} for a proof).
\begin{proposition}
\label{proposition:sparse-logistic-regression}
Let $\mc{P}_{{\rm log},\theta_0, d}$ denote the
family of $1$-sparse logistic regression models,
and let $\qfam$ denote the collection of $\diffp^2$-$\chi^2$-private
and sequentially interactive channels. Define
\begin{equation*}
\delta_n^2 \defeq
\min\left\{\frac{e^{2 \theta_0} d}{64 n \diffp^2},
\frac{e^{\theta_0}}{8 (1 - e^{-\theta_0}) \sqrt{n \diffp^2}},
1 \right\}.
\end{equation*}
Then
\begin{equation*}
\minimax_n\left(\mc{P}_{\log,\theta_0,d}, L_\Phi,
\qfam\right)
\ge
\min_{j = 1, \ldots, d}
\half \Phi\left(
\delta_n \psi(e_j) \right).
\end{equation*}
\end{proposition}
As a particular example, the $\ell_2$-error satisfies
$\E[\ltwos{\what{\theta} - \theta}^2] \gtrsim \min\{\frac{e^{2 \theta_0}
d}{n \diffp^2}, \frac{e^{\theta_0}}{(1 - e^{-\theta_0}) \sqrt{n
\diffp^2}}, 1\}$. The contrast with the non-private case is
striking. A careful tracking of constants in
\citet[Sec.~4.4]{NegahbanRaWaYu09, NegahbanRaWaYu12} shows the following
result. Consider the empirical logistic loss
\begin{equation*}
L_n(\theta) = \frac{1}{n}
\sum_{i = 1}^n \log(1 + \exp(-Y_i (\theta_0 + \<X_i, \theta\>))),
\end{equation*}
and for $\Delta \in \R^d$ let $D_{L_n}(\Delta, \theta^*) = L_n(\theta^* +
\Delta) - L_n(\theta^*) - \<\nabla L_n(\theta^*), \Delta\>$ be the
first-order error in $L_n$, which is non-negative. Then if $(X_i, Y_i)
\simiid P_{\theta^*} \in \mc{P}_{\log,\theta_0,d}$, there are numerical constants $c_1,
c_2$ such that with high probability, the restricted strong convexity
condition
\begin{equation}
\label{eqn:rsc-logloss}
D_{L_n}(\Delta, \theta^*)
\ge c_1 e^{-2 \theta_0} \ltwo{\Delta}^2
- c_2 \frac{\log d}{n} \lone{\Delta}^2
~~ \mbox{for~all~} \ltwo{\Delta} \le 1
\end{equation}
holds.
\citet[Thm.~1]{NegahbanRaWaYu12} show that
for the non-private $\ell_1$-regularized estimator
$\what{\theta}_{\lambda_n} \defeq \argmin_\theta \{L_n(\theta) + \lambda_n
\lone{\theta}\}$,
if $2 \linf{\nabla L_n(\theta^*)} \le \lambda_n$ and
$\frac{n}{\log d} \gtrsim e^{-2 \theta_0}$, then
$\ltwos{\what{\theta}_{\lambda_n} - \theta^*}^2 \le c \cdot e^{2 \theta_0}
\lambda_n^2$ for a numerical constant $c < \infty$. In the case of 1-sparse
logistic regression, an argument with Bernstein's inequality immediately
yields that $\linf{\nabla L_n(\theta^*)} \le C \frac{1}{1 + e^{\theta_0}}
\sqrt{n^{-1} \log d}$ with high probability; thus, the choice $\lambda_n
= C e^{-\theta_0} \sqrt{n^{-1} \log d}$ yields an estimator
that w.h.p.\ achieves
\begin{equation*}
\ltwos{\what{\theta}_{\lambda_n} - \theta^*}^2
\le c e^{\theta_0} \frac{\log d}{n}.
\end{equation*}
An argument similar to our derivation of the lower bounds
in Corollary~\ref{corollary:logistic-lower} shows
that for the prediction loss $|p_\theta(y \mid x) - p_{\theta^*}(y \mid x)|$,
we must have a minimax lower bound of at least
$\sqrt{d / n \diffp^2}$ for any fixed bias term $\theta_0$---the problem
never gets easier, and the dimension dependence is unavoidable.
\subsection{Proof of Proposition~\ref{proposition:highdim-mean-hard}}
\label{sec:proof-highdim-mean-hard}
As we outline following the statement
(Lemma~\ref{lemma:private-general-le-cam}) of Le Cam's method, the proof
proceeds in two phases: first, we choose a well-separated collection of
distributions $P_\packval$, scaled by some $\delta \ge 0$ to be chosen. We
then upper bound the variation distance between their (private) mixtures
using Theorem~\ref{theorem:big-tensor}.
Fix $\delta \ge 0$, which we will optimize later.
Define the base (null) distribution $P_0$ to be uniform on $\{-1, 1\}^d$, so
that $\theta_0 \defeq \E_{P_0}[X] = 0$, with p.m.f.\ $p_0(x) =
\frac{1}{2^d}$. We let the collection $\packset = \{\pm e_j\}_{j = 1}^d$
be the collection of the standard basis vectors and their negatives,
and for each $\packval \in \packset$ we define $P_\packval$ to
be the slightly tilted distribution with p.m.f.
\begin{equation*}
p_\packval(x) = \prod_{j = 1}^d \frac{1 + \delta \packval_j x_j}{2}
= \frac{1}{2^d} (1 + \delta \packval^T x)
~~ \mbox{for~} x \in \{\pm 1\}^d.
\end{equation*}
For each of these, by inspection we have $P_\packval \in \mc{P}_1$ and
$\E_{P_\packval}[X] = \delta \packval$, which yields the separation
condition that
\begin{equation}
\label{eqn:loss-sep-highdim-mean}
\lossdist(P_0, P_\packval)
\ge \inf_{\theta \in \R^k}
\left\{\Phi(\theta + \psi(\delta \packval))
+ \Phi(\theta) \right\}
= 2 \Phi(\psi(\delta \packval) / 2).
\end{equation}
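The tilted construction is easy to check numerically for small $d$ (a sanity check, ours; the dimension and tilt are illustrative): each $p_\packval$ is a valid p.m.f.\ on $\{\pm 1\}^d$ with the $1$-sparse mean $\E_{P_\packval}[X] = \delta \packval$.

```python
import numpy as np
from itertools import product

d, delta = 4, 0.25
X = np.array(list(product([-1, 1], repeat=d)), dtype=float)  # all of {-1,1}^d
for v in np.vstack([np.eye(d), -np.eye(d)]):                 # packing set
    pv = (1 + delta * X @ v) / 2 ** d                        # tilted p.m.f.
    assert np.all(pv >= 0) and np.isclose(pv.sum(), 1.0)
    assert np.allclose(pv @ X, delta * v)                    # E[X] = delta * v
```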
Following our standard approach, we now upper bound the complexity measure
$\complexity_2(\{P_\packval\}_{\packval \in \packset})$. Indeed, we may
take $P^*$ in the definition~\eqref{eqn:chi-square-complexity} to be $P_0$,
in which case we obtain
\begin{align*}
\complexity_2(\{P_\packval\}_{\packval \in \packset})
& \le \sup_{f : P_0 f^2 \le 1}
\frac{1}{|\packset|}
\sum_{\packval \in \packset}
\left(\int f(x) (dP_0(x) - dP_\packval(x))\right)^2 \\
& = \sup_{f : P_0 f^2 \le 1}
\frac{\delta^2}{2 d} \frac{1}{4^d}
\sum_{\packval \in \packset} \sum_{x_1 \in \mathcal{X}}
\sum_{x_2 \in \mathcal{X}}
f(x_1) f(x_2) (x_1^T \packval) (x_2^T \packval) \\
& \stackrel{(i)}{=} \sup_{f : P_0 f^2 \le 1}
\frac{\delta^2}{d} \frac{1}{4^d}
\sum_{x_1, x_2} f(x_1) f(x_2) x_1^T x_2 \\
& = \frac{\delta^2}{d}
\sup_{f : P_0 f^2 \le 1}
\ltwo{\E_{P_0}[f(X) X]}^2,
\end{align*}
where in line~$(i)$ we have used that $\sum_{\packval \in \packset}
\packval \packval^T = 2 I_{d \times d}$.
To bound the final quantity, note that $\ltwo{a}^2 = \sup_{\ltwo{v} \le 1}
\<v, a\>^2$ for any vector $a$, and thus, by Cauchy--Schwarz,
\begin{align*}
\sup_{f : P_0 f^2 \le 1}
\ltwo{\E_{P_0}[f(X) X]}^2
& = \sup_{f : P_0 f^2 \le 1,
\ltwo{v} \le 1}
\E_{P_0}[f(X) v^T X]^2 \\
& \le \sup_{f : P_0 f^2 \le 1}
\sup_{\ltwo{v} \le 1}
\E_{P_0}[f(X)^2]
\E_{P_0}[(v^T X)^2]
= 1,
\end{align*}
where we have used that $\E[XX^T] = I_{d \times d}$.
As a consequence, we have the upper bound
\begin{equation*}
\complexity_2(\{P_\packval\}_{\packval \in \packset})
\le \frac{\delta^2}{d}.
\end{equation*}
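With $P^*$ fixed to $P_0$, the supremum over $\{f : P_0 f^2 \le 1\}$ is an eigenvalue problem: writing $u_\packval(x) = (p_0(x) - p_\packval(x))/\sqrt{p_0(x)}$, the quantity equals the top eigenvalue of $\frac{1}{|\packset|} \sum_\packval u_\packval u_\packval^T$. For small $d$ this can be computed exactly, confirming the $\delta^2/d$ bound numerically (a sanity check, ours, not part of the proof):

```python
import numpy as np
from itertools import product

d, delta = 3, 0.3
X = np.array(list(product([-1, 1], repeat=d)), dtype=float)
p0 = np.full(2 ** d, 2.0 ** -d)                 # uniform base p.m.f.
V = np.vstack([np.eye(d), -np.eye(d)])          # packing set {+/- e_j}
# u_v(x) = (p_0(x) - p_v(x)) / sqrt(p_0(x)); with P* = P0 the complexity
# is the top eigenvalue of (1 / |V|) * sum_v u_v u_v^T.
U = [(p0 - p0 * (1 + delta * X @ v)) / np.sqrt(p0) for v in V]
M = sum(np.outer(u, u) for u in U) / len(V)
top = float(np.linalg.eigvalsh(M).max())
assert abs(top - delta ** 2 / d) < 1e-12
```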
We may substitute this upper bound on the complexity into
Theorem~\ref{theorem:big-tensor}, choosing $P = P_0$ in the theorem
so that $\linf{dP_v / dP_0}
= 1 + \delta$, whence we obtain that for our choices of $P_\packval$
and $P_0$, for any sequentially interactive channel $\channel$, we have
\begin{equation*}
2 \tvnorm{\marginprob_0^n
- \meanmarginprob^n}^2
\le \dkl{\marginprob_0^n}{\meanmarginprob^n}
\le \frac{n \diffp^2}{d} \delta^2 (1 + \delta).
\end{equation*}
Now we choose $\delta$ to guarantee that
$\frac{n \diffp^2}{d} \delta^2 (1 + \delta) \le \half$:
because $\delta \le 1$, the choice
$\delta^2 = \min\{\frac{d}{4 n \diffp^2}, 1\}$ guarantees that
$\delta^2 (1 + \delta) \le 2 \delta^2 \le \frac{d}{2 n \diffp^2}$. Thus, using the
separation lower bound~\eqref{eqn:loss-sep-highdim-mean}
and Lemma~\ref{lemma:private-general-le-cam}, we obtain
\begin{equation*}
\minimax_n
\ge \min_{\packval \in \packset} \Phi(\psi(\delta \packval) / 2)
\left(1 - \sqrt{\frac{n \diffp^2}{2 d} \delta^2(1 + \delta)}
\right)
\ge \min_{j = 1, \ldots, d}
\half \Phi\left(\min\left\{\sqrt{\frac{d}{4 n \diffp^2}}, 1
\right\} \psi(e_j)\right).
\end{equation*}
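The algebra behind this choice of $\delta$ is easy to check mechanically. The following Python sketch (ours, purely illustrative; the grid of $(n, d, \diffp)$ values is arbitrary) verifies that $\delta^2 = \min\{d / (4 n \diffp^2), 1\}$ implies $\delta^2 (1 + \delta) \le d / (2 n \diffp^2)$ in both regimes of the minimum:

```python
import math

def delta_bound_holds(n, d, eps):
    """Check that delta^2 = min(d / (4 n eps^2), 1) gives
    delta^2 * (1 + delta) <= d / (2 n eps^2)."""
    delta2 = min(d / (4 * n * eps**2), 1.0)
    delta = math.sqrt(delta2)
    # Small tolerance guards against floating-point rounding at the boundary.
    return delta2 * (1 + delta) <= d / (2 * n * eps**2) + 1e-12

# Scan a grid of problem sizes; the bound holds in every regime.
assert all(
    delta_bound_holds(n, d, eps)
    for n in (1, 10, 1000, 10**6)
    for d in (1, 5, 100)
    for eps in (0.01, 0.1, 1.0, 5.0)
)
```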
\subsection{Mis-specified models and multi-parameter exponential families}
\label{sec:mis-specified-expfam}
While Section~\ref{sec:one-param-expfams} provides a procedure that achieves
the optimal behavior for parametric exponential families, it relies strongly
on the model's correctness and its single-dimensionality.
In this section, we consider the situation in which we wish to estimate
a functional of an exponential family model that may be mis-specified.
To describe
the results, we first review some of the basic properties of
exponential families. Let $\{P_\theta\}_{\theta \in \Theta}$ be a
$d$-parameter exponential family with densities $p_\theta(x) = \exp(\theta^T
x - A(\theta))$ with respect to some base measure, where for simplicity we
assume that the exponential family is regular and minimal, meaning that
$\dom A$ is open, $\nabla^2 A(\theta) = \cov_\theta(X) \succ 0$ for all
$\theta \in \dom A$, and
the log partition function $A(\theta)$ is analytic on the interior of its
domain~\cite[Thm.~2.7.1]{LehmannRo05}. We record here a few standard facts
on the associated convex analysis (for more, see the books~\cite{Brown86,
WainwrightJo08, HiriartUrrutyLe93ab}). First, recall the conjugate
function $A^*(x) \defeq \sup_{\theta} \{\theta^T x - A(\theta)\}$. Standard
convex analysis results~\cite[Ch.~X]{HiriartUrrutyLe93ab} give that
\begin{equation}
\label{eqn:conjugate-gradient-inverse}
\nabla A^*(x) = \theta_x ~~\mbox{for~the~unique~} \theta_x ~
\mbox{such that}~~ \E_{\theta_x}[X] = x.
\end{equation}
In addition, $\nabla A^*$ is continuously differentiable, one-to-one, and
\begin{equation*}
\dom A^* \supset \range(\nabla A(\cdot))
= \{\E_\theta[X] \mid \theta \in \dom A\}.
\end{equation*}
Moreover, by the inverse function theorem, we also have that on the
interior of $\dom A^*$,
\begin{equation}
\label{eqn:inverse-hessian-conjugate}
\nabla^2 A^*(x)
= (\nabla^2 A(\theta_x))^{-1}
= \cov_{\theta_x}(X)^{-1}
~~ \mbox{for~the~unique~} \theta_x ~ \mbox{s.t.}~
\E_{\theta_x}[X] = x.
\end{equation}
The uniqueness follows because $\nabla A^*$ is one-to-one, a consequence
of the minimality of the exponential family and that $\nabla^2 A(\theta)
\succ 0$.
For a distribution
$P$ with mean $\E_P[X]$, so long as the mean belongs to the
range of $\nabla A(\theta) = \E_\theta[X]$ under the exponential
family model as $\theta$ varies, the minimizer of the log loss
$\loss_\theta(x) = -\log p_\theta(x)$ is
\begin{equation*}
\theta(P) \defeq \argmin_\theta
\E_P[\loss_\theta(X)]
= \nabla A^*(\E_P[X]).
\end{equation*}
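These duality relations can be made concrete in a specific family. As an illustration (our sketch; the Bernoulli family with $A(\theta) = \log(1 + e^\theta)$ is an assumed example, not one appearing in the text), the maps $\nabla A$ and $\nabla A^*$ are explicit, mutually inverse, and give $\theta(P) = \nabla A^*(\E_P[X])$ directly:

```python
import math

def A(theta):          # log partition function of the Bernoulli family
    return math.log(1 + math.exp(theta))

def A_prime(theta):    # grad A(theta) = E_theta[X], the mean parameter
    return 1 / (1 + math.exp(-theta))

def A_star_prime(x):   # grad A*(x): maps a mean back to its natural parameter
    return math.log(x / (1 - x))

# Duality check: grad A and grad A* are inverse bijections on (0, 1).
for x in (0.1, 0.25, 0.5, 0.9):
    assert abs(A_prime(A_star_prime(x)) - x) < 1e-12

# theta(P) = grad A*(E_P[X]): the log-loss projection of any P with mean 0.3.
theta_P = A_star_prime(0.3)
assert abs(A_prime(theta_P) - 0.3) < 1e-12
```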
Mis-specified exponential families are sufficiently regular---as we
discuss after Theorem~\ref{theorem:modulus-of-continuity} and
Proposition~\ref{proposition:achievable}---to guarantee the polynomial
growth condition~\ref{cond:polynomial-growth}, so that the modulus of
continuity $\modcont$ characterizes the local minimax complexity. The
following lemma shows that this is the case for the $\ell_2$ modulus of
continuity, and the extension to general losses of the form $L(\theta, P) =
\Phi(\ltwo{\theta - \theta(P)})$ from this case is immediate and exactly as
in Example~\ref{example:mean-estimation}.
\begin{lemma}
\label{lemma:mis-expfam-growth}
Let $\{P_\theta\}$ be an exponential family model as above
and $\mc{P} = \{P \mid \supp{P} \subset \mathcal{X}\}$, where
$\mathcal{X} \subset \R^d$ is compact and
$\dom A^* \supset \mathcal{X}$. Then
\begin{align*}
\modltwo(\delta)
& \defeq
\sup\left\{\ltwo{\theta(P) - \theta(P_0)}
\mid P \in \mc{P}, \tvnorm{P - P_0} \le \delta \right\} \\
& = \sup\left\{\ltwo{\nabla A^*(\E_P[X]) - \nabla A^*(\E_{P_0}[X])}
\mid P \in \mc{P}, \tvnorm{P - P_0} \le \delta\right\}
\end{align*}
satisfies growth condition~\ref{cond:polynomial-growth}
with $\alpha = 1$ and some $\Cgrow < \infty$.
\end{lemma}
\noindent
(See Appendix~\ref{sec:proof-mis-specified-expfam} for the somewhat
technical proof.)
With these preliminaries and basic results in place,
we describe our estimation setting. We consider estimation of functionals
$\functional : \R^d \to \R$ of the parameters $\theta$ of the form
$\functional(\theta)$, where we assume that $\functional$ is
differentiable. We measure the loss of an estimated value
$\what{\functional}$ by
\begin{equation*}
L(\what{\functional}, P)
= \Phi(\what{\functional} - \functional(\theta(P))),
\end{equation*}
where the loss function $\Phi : \R \to \R_+$ is assumed to be convex and
symmetric about zero. In the next two sections, we show local lower bounds
on estimation under loss $L$ and develop a near-optimal estimator.
\subsubsection{A lower bound on estimation}
In our mis-specified exponential family setting, we have the following
local minimax lower bound.
\begin{proposition}
\label{proposition:mis-specified-expfam}
Let $\mc{P}$ be a family of distributions on $\mathcal{X}$ such that
the collection of means $\{\E_P[X]\}_{P \in \mc{P}}$ is bounded
with $\{\E_P[X]\}_{P \in \mc{P}} \subset \interior \dom A^*$.
Let $\channeldistset_\diffp$ denote the collection
of all $\diffp^2$-$\chi^2$-private sequentially interactive channels.
Then
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}, \channeldistset)
\ge \frac{1}{4} \sup_{P \in \mc{P}}
\Phi\left(\frac{\nabla \functional(\theta_0)^T \nabla^2 A(\theta_0)^{-1}
(\E_{P_0}[X] - \E_{P}[X])}{2 \sqrt{8 n \diffp^2}}
+ O\left(\frac{1}{n \diffp^2}\right)\right).
\end{equation*}
\end{proposition}
The proof is similar to our previous results, so we defer it to
Section~\ref{sec:proof-mis-specified-expfam}. Let us instead give a
corollary. Assume that $\Phi(t) = t^2$ is the squared error and
additionally that the set $\mc{P}$ consists of all
distributions supported on the norm ball $\{x \in \R^d \mid
\norm{x} \le r \}$. Then we have
\begin{corollary}
\label{corollary:mis-specified-expfam}
Let the conditions of Proposition~\ref{proposition:mis-specified-expfam}
and the preceding paragraph hold. Then there exists a numerical
constant $c > 0$ such that
\begin{equation*}
\localminimax_n(P_0, L, \mc{P}, \channeldistset_\diffp)
\ge c \frac{r^2 \dnorm{\nabla^2 A(\theta_0)^{-1}
\nabla \functional(\theta_0)}^2}{n \diffp^2}
+ O\left(\frac{1}{n^2 \diffp^4}\right).
\end{equation*}
\end{corollary}
Before we turn to estimation in the private case, we compare
Proposition~\ref{proposition:mis-specified-expfam} and
Corollary~\ref{corollary:mis-specified-expfam} to the non-private case. In
this case, a simple estimator is to take the sample mean $\what{\mu}_n =
\frac{1}{n} \sum_{i = 1}^n X_i$ and then set $\what{\theta}_n = \nabla
A^*(\what{\mu}_n)$. Letting $\theta_0 = \nabla A^*(\E_{P_0}[X])$, classical
Taylor expansion arguments and the
delta-method~\cite[Chs.~3--5]{VanDerVaart98} yield
\begin{equation*}
\sqrt{n} (\what{\theta}_n - \theta_0)
\cd \normal\left(0, \nabla^2 A(\theta_0)^{-1}
\cov_P(X) \nabla^2 A(\theta_0)^{-1} \right)
\end{equation*}
and
\begin{equation*}
\sqrt{n} (\functional(\what{\theta}_n)
- \functional(\theta_0))
\cd \normal\left(0,
\nabla \functional(\theta_0)^T \nabla^2 A(\theta_0)^{-1}
\cov_P(X) \nabla^2 A(\theta_0)^{-1}
\nabla \functional(\theta_0)\right).
\end{equation*}
The lower bound in Corollary~\ref{corollary:mis-specified-expfam} is always
worse than this classical limit.
In this sense, the private lower bounds exhibit the lack of adaptivity
that is common in this paper: the local lower bounds show that in
the private case, no estimator can adapt to ``easy'' problems where
the covariance $\cov_P(X)$ is small.
\subsubsection{An optimal one-step procedure}
\label{sec:one-step-mis-expfam}
An optimal procedure for functionals of (possibly) mis-specified exponential
family models has strong similarities to classical one-step estimation
procedures~\cite[e.g.][Ch.~5.7]{VanDerVaart98}.
To motivate the approach, let us assume
we have a ``good enough'' estimate $\wt{\mu}_n$ of $\mu_0 \defeq
\E_P[X]$. Then we observe that if $\wt{\theta}_n = \nabla A^*(\wt{\mu}_n)$,
we have
\begin{align}
\functional(\theta_0)
& = \functional(\wt{\theta}_n)
+ \nabla \functional(\wt{\theta}_n)^T (\theta_0 - \wt{\theta}_n)
+ O(\norms{\theta_0 - \wt{\theta}_n}^2) \nonumber \\
& = \functional(\wt{\theta}_n)
+ \nabla \functional(\wt{\theta}_n)^T
(\nabla A^*(\mu_0) - \nabla A^*(\wt{\mu}_n))
+ O(\norms{\mu_0 - \wt{\mu}_n}^2) \nonumber \\
& = \functional(\wt{\theta}_n)
+ \nabla \functional(\wt{\theta}_n)^T
\nabla^2 A(\wt{\theta}_n)^{-1}(\mu_0 - \wt{\mu}_n)
+ O(\norms{\mu_0 - \wt{\mu}_n}^2),
\nonumber
\end{align}
where each equality freely uses the duality
relationships~\eqref{eqn:conjugate-gradient-inverse}
and~\eqref{eqn:inverse-hessian-conjugate}.
In this case, if $\wt{\mu}_n - \mu_0 = o_P(n^{-1/4})$ and
we have an estimator $T_n$ satisfying
\begin{equation*}
\sqrt{n}
\left(T_n -
\nabla \functional(\wt{\theta}_n)^T \nabla^2 A(\wt{\theta}_n)^{-1}
\mu_0\right)
\cd \normal(0, \sigma^2),
\end{equation*}
then the estimator
\begin{equation}
\what{\functional}_n
\defeq \functional(\wt{\theta}_n)
+ T_n - \nabla \functional(\wt{\theta}_n)^T
\nabla^2 A(\wt{\theta}_n)^{-1} \wt{\mu}_n
\label{eqn:private-expfam-mis-estimator}
\end{equation}
satisfies $\sqrt{n} (\what{\functional}_n - \functional(\theta_0)) \cd
\normal\left(0, \sigma^2\right)$ by Slutsky's theorem.
We now exhibit such an estimator. To avoid some of the difficulties
associated with estimation from unbounded data~\cite{DuchiJoWa18}, we assume
the data $\mathcal{X} \subset \R^d$ are contained in a norm ball $\{x \in
\R^d \mid \norm{x} \le 1\}$. Let us split the sample of size $n$ into two
sets of size $n_1 = \ceil{n^{2/3}}$ and $n_2 = n - n_1$. For the first set,
let $Z_i$ be any $\diffp$-locally differentially private estimate of $X_i$
satisfying $\E[Z_i \mid X_i] = X_i$ and $\E[\norm{Z_i}^2] < \infty$, so that
the $Z_i$ are i.i.d.; for example, $X_i + W_i$ for a random vector of
appropriately large Laplace noise suffices~\cite{DworkMcNiSm06,DuchiJoWa18}.
Define $\wt{\mu}_n = \frac{1}{n_1} \sum_{i = 1}^{n_1} Z_i$, in which case
$\wt{\mu}_n - \mu_0 = O_P(n^{-1/3})$, and let $\wt{\theta}_n = \nabla
A^*(\wt{\mu}_n)$. Now, for $i = n_1 + 1, \ldots, n$, define the
$\diffp$-differentially private
quantity
\begin{equation*}
Z_i \defeq
\nabla \functional(\wt{\theta}_n)^T
\nabla^2 A(\wt{\theta}_n)^{-1} X_i
+ \frac{
\dnorms{\nabla^2 A(\wt{\theta}_n)^{-1} \nabla \functional(\wt{\theta}_n)}
}{\diffp}
W_i
~~ \mbox{where} ~~
W_i \simiid \laplace(1).
\end{equation*}
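The second stage is essentially the Laplace mechanism applied to the linear map $x \mapsto v^T x$ with $v = \nabla^2 A(\wt{\theta}_n)^{-1} \nabla \functional(\wt{\theta}_n)$. Here is a Python sketch of this step (ours; we use the $\ell_2$ norm for concreteness and write the sensitivity constant explicitly, whereas the display above leaves constants implicit in the noise scale):

```python
import math, random

def privatize(x, v, eps, rng):
    """Release v^T x plus Laplace noise.  Over the unit l2 ball the
    sensitivity of x -> v^T x is 2 * ||v||_2, so Laplace noise of scale
    2 * ||v||_2 / eps yields eps-differential privacy."""
    vnorm = math.sqrt(sum(c * c for c in v))
    # Difference of two independent Exp(1) variables is standard Laplace.
    w = rng.expovariate(1.0) - rng.expovariate(1.0)
    return sum(a * b for a, b in zip(v, x)) + (2 * vnorm / eps) * w

# Worst-case sensitivity is attained at x = v/||v|| versus x' = -v/||v||.
v = [3.0, -4.0]                       # ||v||_2 = 5
vnorm = 5.0
x_plus = [c / vnorm for c in v]
x_minus = [-c for c in x_plus]
sens = abs(sum(a * b for a, b in zip(v, x_plus)) -
           sum(a * b for a, b in zip(v, x_minus)))
assert abs(sens - 2 * vnorm) < 1e-12

z = privatize(x_plus, v, eps=1.0, rng=random.Random(0))  # one private release
```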
Letting $\wb{X}_{n_2} = \frac{1}{n_2} \sum_{i = n_1 + 1}^{n} X_i$
and similarly for $\wb{W}_{n_2}$ and $\wb{Z}_{n_2}$, we find that
\begin{align*}
\lefteqn{\sqrt{n}
\left(\wb{Z}_{n_2}
- \nabla \functional(\wt{\theta}_n)^T \nabla^2 A(\wt{\theta}_n)^{-1}\mu_0
\right)} \\
& = \sqrt{n}
\left[\nabla\functional(\wt{\theta}_n)^T
\nabla^2 A(\wt{\theta}_n)^{-1}
\left(\wb{X}_{n_2} - \mu_0\right)
+ \frac{ \dnorms{\nabla^2 A(\wt{\theta}_n)^{-1} \nabla \functional(\wt{\theta}_n)}}{\diffp}
\wb{W}_{n_2}\right]
\cd \normal(0, \sigma^2(P,\functional,\diffp))
\end{align*}
by Slutsky's theorem, where for
$\theta_0 = \nabla A^*(\E_P[X])$ we define
\begin{equation}
\label{eqn:variance-private-expfam-mis-estimator}
\sigma^2(P,\functional, \diffp)
\defeq
\nabla \functional(\theta_0)^T \nabla^2 A(\theta_0)^{-1}
\cov_P(X) \nabla^2 A(\theta_0)^{-1} \nabla \functional(\theta_0)
+ \frac{2}{\diffp^2}
\dnorm{\nabla^2 A(\theta_0)^{-1} \nabla \functional(\theta_0)}^2.
\end{equation}
Summarizing, we have the following proposition, which
shows that the two-step estimator~\eqref{eqn:private-expfam-mis-estimator}
is (asymptotically) locally minimax optimal.
\begin{proposition}
\label{proposition:private-expfam-mis-estimator}
Let $\what{\functional}_n$ be the
estimator~\eqref{eqn:private-expfam-mis-estimator} with the choices $T_n =
\wb{Z}_{n_2}$ and $\wt{\theta}_n = \nabla A^*(\wt{\mu}_n)$ as above, and
let $\theta_0 = \nabla A^*(\E_P[X]) = \argmin_\theta
\E_P[\loss_\theta(X)]$ and $\sigma^2(P, \functional, \diffp)$ be as
in~\eqref{eqn:variance-private-expfam-mis-estimator}. Then
\begin{equation*}
\sqrt{n}
(\what{\functional}_n - \functional(\theta_0))
\cd \normal\left(0,
\sigma^2(P, \functional,\diffp)\right).
\end{equation*}
\end{proposition}
\subsubsection{Proof of Proposition~\ref{proposition:mis-specified-expfam}}
\label{sec:proof-mis-specified-expfam}
Let $P_1 \in \mc{P}$ be a distribution with mean $x = \E_{P_1}[X]$, and
for $t \in [0, 1]$, define $P_t = (1 - t) P_0 + t P_1$.
Then
$\tvnorm{P_0 - P_t} \le t$.
Let us first consider the distance $\lossdist$, which
for any distribution $P$ satisfies
\begin{equation*}
\lossdist(P_0, P)
= 2 \Phi\left(\half (\functional(\theta(P_0)) -
\functional(\theta(P)))\right),
\end{equation*}
by a calculation identical to that for the convex
case~\eqref{eqn:lossdist-for-convex}. Now, with our
choice of $P_t$, let us consider the value $\theta(P_t)$,
which evidently satisfies
\begin{equation*}
\theta(P_0) - \theta(P_t)
= \nabla A^*(\E_{P_0}[X])
- \nabla A^*(\E_{P_0}[X] + t(x - \E_{P_0}[X]))
= t \nabla^2 A^*(\E_{P_0}[X]) (\E_{P_0}[X] - x)
+ O(t^2).
\end{equation*}
Recalling Eq.~\eqref{eqn:inverse-hessian-conjugate} and using
the shorthand $\theta_t = \nabla A^*(\E_{P_t}[X])$,
we have
\begin{equation*}
\theta_0 - \theta_t
= t \nabla^2 A(\theta_0)^{-1} (\E_{P_0}[X] - x) + O(t^2).
\end{equation*}
Because $\functional$ is smooth by assumption, this yields
\begin{equation*}
\functional(\theta_0) - \functional(\theta_t)
= \nabla \functional(\theta_0)^T (\theta_0 - \theta_t)
+ O(\norm{\theta_t - \theta_0}^2)
= t \nabla \functional(\theta_0)^T \nabla^2 A(\theta_0)^{-1}(\E_{P_0}[X] - x)
+ O(t^2).
\end{equation*}
As a consequence of these derivations, that
$\functional$ is smooth by assumption, and that
convex functions are locally Lipschitz,
the modulus of continuity of $L$ at $P_0$ has lower bound
\begin{equation*}
\modcont\left(t; P_0, \mc{P}\right)
\ge 2 \Phi\left(\frac{t}{2} \nabla \functional(\theta_0)^T
\nabla^2 A(\theta_0)^{-1}
(\E_{P_0}[X] - x) + O(t^2)\right)
\end{equation*}
as $t \to 0$. Substituting $t = \frac{1}{\sqrt{n \diffp^2}}$
and applying Theorem~\ref{theorem:modulus-of-continuity} gives
the result.
\subsection{Uniform achievability in one-parameter exponential families}
\label{sec:one-param-expfams}
Corollary~\ref{corollary:one-param-information-bound} shows a lower bound of
$(n \diffp^2 \information[\theta_0]^2)^{-1}$, where $\information[\theta_0]
= \E_{\theta_0}[|\dot{\ell}_{\theta_0}|]$, for the estimation of a single parameter
in a suitably smooth family of distributions. One of our three desiderata
for a ``good'' lower bound is uniform achievability, that is, the existence
of an estimator that uniformly achieves the instance-specific lower bound.
In this section, we develop a locally private estimation scheme that does
this for single parameter exponential family models, and show how a
methodology based on Fisher scoring can adaptively attain our
local minimax lower bounds.
Let $\mc{P} = \{P_\theta\}_{\theta \in \Theta}$ be a one-parameter exponential
family, so that for a
base measure $\mu$ on $\mc{X}$, each distribution $P_{\theta}$ has
density
\begin{equation*}
p_\theta(x) \defeq
\frac{dP_\theta}{d\mu} (x) = \exp\left(\theta T(x) - A(\theta)\right),
\end{equation*}
where $T(x)$ is the sufficient statistic
and $A(\theta) = \log \int e^{\theta T(x)} d\mu(x)$ is the log partition function.
It is well known
(cf.~\cite[Ch.~2.7]{Brown86,LehmannRo05}) that $A$ satisfies $A'(\theta) =
\E_\theta[T(X)]$ and $A''(\theta) = \var_\theta(T(X))$. In this case, the
$L^1$-information~\eqref{eqn:l1-information} is
the mean absolute deviation
\begin{equation*}
\information[\theta] = \E_\theta[|T(X) - A(\theta)|] =
\E_\theta[|T(X) - \E_\theta [T(X)]|].
\end{equation*}
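For intuition about the scale of $\information[\theta]$, consider the Gaussian location family $X \sim \normal(\theta, 1)$ with $T(x) = x$ (an assumed example, not one from the text), for which $\information[\theta] = \E_\theta|X - \theta| = \sqrt{2/\pi}$ for every $\theta$. A small Python sketch confirms this by direct numerical integration:

```python
import math

def l1_information_gaussian(theta, half_width=10.0, n_grid=200_000):
    """E_theta |T(X) - E_theta T(X)| for the N(theta, 1) location family
    (T(x) = x), computed by a midpoint Riemann sum."""
    h = 2 * half_width / n_grid
    total = 0.0
    for k in range(n_grid):
        x = theta - half_width + (k + 0.5) * h
        density = math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)
        total += abs(x - theta) * density * h
    return total

# Closed form: E|Z| = sqrt(2/pi) for Z ~ N(0, 1), independent of theta.
assert abs(l1_information_gaussian(0.0) - math.sqrt(2 / math.pi)) < 1e-4
assert abs(l1_information_gaussian(2.5) - math.sqrt(2 / math.pi)) < 1e-4
```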
We now provide a procedure asymptotically achieving mean square error
scaling as $(n\diffp^2 \information[\theta]^2)^{-1}$, which
Corollary~\ref{corollary:one-param-information-bound} shows is optimal. Our
starting point is the observation that for a one-parameter exponential
family, the functional $\theta \mapsto P_\theta(T(X) \ge t)$ is strictly
increasing in $\theta$ for any fixed $t \in \supp \{T(X)\}$~\cite[Lemma
3.4.2]{LehmannRo05}. A natural idea is thus to first estimate
$P_{\theta}(T(X) \ge t)$ and then invert this value to give a good estimate of
$\theta$. To make this (near) optimal, we develop a two-sample procedure,
where with the first sample we estimate $t \approx \E[T(X)]$ and then use the
second sample to approximate $P_\theta(T(X) \ge t)$ for this particular $t$,
which we invert.
With this motivation, we now formally define our $\diffp$-differentially
private one-step corrected estimator. Define the function $\Psi : \R^2 \to
\R_+$ by
\begin{equation}
\Psi(t, \theta) \defeq P_\theta (T(X) \geq t)
= \int \indic{T(x) \geq t}
\exp\left(\theta T(x) - A(\theta)\right) d\mu (x).
\end{equation}
The private two-stage algorithm we develop splits a total sample of
size $2n$ in half. In the first stage, the algorithm uses the first half of
the sample to construct a crude estimate $\what{T}_n$ of the value
$A'(\theta) = \E_{\theta}[T]$; we require only that $\what{T}_n$ be
consistent (we may use Duchi et al.'s $\diffp$-differentially private mean
estimators, which provide consistent estimates of $\E[T(X)]$ so long as
$\E[|T(X)|^k] < \infty$ for some $k > 1$~\cite[Corollary~1]{DuchiJoWa18}).
In the second stage, the algorithm uses the crude estimate $\what{T}_n$ and
the second half of the sample in a randomized response procedure
as follows: we construct $V_i$ and the private $Z_i$ as
\begin{equation*}
V_i = \indics{T(X_i) \ge \what{T}_n},
~~~
Z_i =
\frac{e^\diffp + 1}{e^\diffp - 1}
\cdot \left[\left\{
\begin{array}{cl}
V_i & \mbox{w.p.}~
\frac{e^\diffp}{e^\diffp + 1} \\
1 - V_i & \mbox{w.p.}~
\frac{1}{e^\diffp + 1}
\end{array}
\right\}
- \frac{1}{e^\diffp + 1}
\right].
\end{equation*}
By inspection, this is $\diffp$-differentially private and
$\E[Z_i \mid V_i] = V_i$. Now, define the inverse function
\begin{equation*}
H(p, t) \defeq \inf \left\{\theta \in \R \mid P_\theta(T(X) \ge t)
\ge p \right\}
= \inf \left\{\theta \in \R \mid \Psi(t, \theta) \ge p \right\}.
\end{equation*}
Setting $\wb{Z}_n = \frac{1}{n} \sum_{i = 1}^n Z_i$, our final
$\diffp$-differentially private estimator is
\begin{equation}
\label{eqn:private-expfam-estimator}
\what{\theta}_n = H(\wb{Z}_n, \what{T}_n).
\end{equation}
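To make the two-stage construction concrete, the following Python sketch (ours) implements its two ingredients under an assumed Gaussian location family $X \sim \normal(\theta, 1)$, $T(x) = x$ (not a family singled out in the text): the exact debiasing identity $\E[Z_i \mid V_i] = V_i$ of the randomized-response step, and the inversion $H$ computed by bisection, which is well defined because $\theta \mapsto \Psi(t, \theta)$ is strictly increasing:

```python
import math

# --- Randomized-response step: conditional mean of Z given the bit V. ---
def rr_conditional_mean(v, eps):
    """E[Z | V = v]: keep V w.p. e^eps/(e^eps + 1), flip otherwise, debias."""
    p_keep = math.exp(eps) / (math.exp(eps) + 1)
    scale = (math.exp(eps) + 1) / (math.exp(eps) - 1)
    shift = 1 / (math.exp(eps) + 1)
    return scale * (p_keep * v + (1 - p_keep) * (1 - v) - shift)

for eps in (0.1, 1.0, 4.0):
    assert abs(rr_conditional_mean(1, eps) - 1) < 1e-12   # E[Z | V = 1] = 1
    assert abs(rr_conditional_mean(0, eps)) < 1e-12       # E[Z | V = 0] = 0

# --- Inversion step for X ~ N(theta, 1), where Psi(t, theta) = P(X >= t). ---
def Phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def Psi(t, theta):
    return 1 - Phi(t - theta)   # strictly increasing in theta

def H(p, t, lo=-50.0, hi=50.0, iters=200):
    """Smallest theta with Psi(t, theta) >= p, found by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if Psi(t, mid) >= p:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Psi(t, t) = 1/2, so H recovers theta = t from p = 1/2; H is monotone in p.
assert abs(H(0.5, 1.7) - 1.7) < 1e-8
assert H(0.8, 0.0) > H(0.2, 0.0)
```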
We then have the following convergence result, which shows that the
estimator~\eqref{eqn:private-expfam-estimator} (asymptotically) has risk
within a constant factor of the local minimax bounds. The proof is
somewhat involved, so we defer it to
Appendix~\ref{sec:proof-uniform-achievability-one-dim-exp}.
\newcommand{\mathcal{E}}{\mathcal{E}}
\begin{proposition}
\label{proposition:uniform-achievability-one-dim-exp}
Assume that $\var_{\theta} \left(T(X)\right) > 0$ and $\what{T}_n \cp t_0
\defeq \E_{\theta_0}[T(X)]$. Define $\delta_\diffp^2 = \frac{e^{\diffp}}{(e^{\diffp} - 1)^2}$.
Then there exist random variables $G_n = \Psi(\what{T}_n, \theta_0) \in [0, 1]$,
$\mathcal{E}_{n, 1}$, and $\mathcal{E}_{n,2}$ such that under $P_{\theta_0}$, the
estimator~\eqref{eqn:private-expfam-estimator} satisfies
\begin{equation*}
\sqrt{n} \left(\what{\theta}_n - \theta_0\right)
= 2 \information[\theta_0]^{-1} (\mathcal{E}_{n, 1} + \mathcal{E}_{n, 2}) + o_P(1)
\end{equation*}
where
\begin{equation}
\left(\mathcal{E}_{n, 1},
\frac{1}{G_n (1 - G_n)} \mathcal{E}_{n,2}\right)
\cd \normal\left(0, \diag(\delta_\diffp^{-2}, 1)\right).
\end{equation}
\end{proposition}
\noindent
The complexity of the statement arises because the distribution of $T(X)$
may be discontinuous, including at $\E_{\theta_0}[T(X)]$, necessitating the
construction of the random variables $\mathcal{E}_{n,1}, \mathcal{E}_{n,2}$, and $G_n$
to demonstrate a limit distribution.
\section{Introduction}
For space-time to be an emergent feature at the quantum level, it must equivalently originate in individual interactions, since interactions constitute what can be observed and hence what models can reliably be built on. A clear picture of what the relevant interactions are would be useful for understanding quantum gravity, and so it is of interest to identify and analyse them and how they give rise to space-time. In recent years, entanglement entropy in regions of conformal field theory (CFT), giving rise to gravity through gauge/gravity duality \cite{Maldacena:1997re}, has been widely discussed and entanglement has been recognized as important for space-time emergence \cite{VanRaamsdonk:2010pw}. Focussing on emergent space instead of gravity, one requirement is apparent: for orientation to emerge, the relevant interactions must be able to communicate a scalar product. In $D=4$, the only readily available candidate for this is spin 1/2 correlations. The objective of this text is to argue for the importance of identifying and analysing the different correlations (fundamental interactions) relevant for space-time emergence and how entangling processes may give rise to equilibration between vacuum fluctuations, in a conjecture of thermalization giving rise to the appearance of space-time at large scales. Our key example is spin 1/2 correlations and emergence of space orientation, but conceptually a large part of the discussion carries over to interactions relevant for space-time emergence in general.
Connections between space-time and thermodynamic properties have long been observed \cite{Bekenstein:1973ur,Jacobson:1995ab,Wald:1999xu}. Recently even a model of thermalization of entanglement \cite{Hubeny:2018tah} was proposed, highlighting that interacting systems typically thermalize and that such equilibration is likely to carry over to quantum systems \cite{Short:2012qvq}, providing an emergence of macroscopically perceived entities such as dynamic space-time. While these types of approaches are interesting, including entanglement entropy investigations, the presence of interactions involving complementarity makes it plausible that the individual interactions are of central importance for issues with quantum gravity (e.g. strong interactions). If so, the very assumption of space-time must be dispensed with as a starting point, including space and fields in space-time, and allowed to emerge through entangling processes. The conjecture of thermalization of those interactions then conceptually overlaps with \cite{Hubeny:2018tah} and arguments therein. While the general motivation partly overlaps with \cite{Penrose:1972jq}, our approach through complementarity is different from twistor theory.
Note that metrics $g_{\mu\nu}$ will not be discussed, since the present analysis concerns interactions at a sublevel. We consider what would be required for large-scale emergence of the tension structure a metric describes, analogously to how temperature arises from collisions. Since we defer a discussion of quantization of time, no prescription for a metric can be given. The issue of space is a sufficiently interesting first subject.
We focus on a requirement for space to emerge locally: the existence of a local scalar product, arising through interactions (\S\ref{s.emerge}). Individual interactions encoding scalar products are currently considered non-local due to the proof of Bell's theorem \cite{Bell:1964kc,Clauser:1969ny}. Our key finding is that the information structure of complementarity (\S\ref{s.comp}) supports a formalism alteration which sidesteps Bell's theorem and provides local EPR correlations (\S\ref{s.beyond}). Complementarity can equally be regarded as information stored orthogonally (like different vector entries). Once non-scalar information is considered, classical probability theory must be extended to a vector concept too, for validity, and this precisely coincides with removing the structure which in Bell's theorem enforces non-locality. Hence our alternative interpretation contains the necessary and sufficient extension for a space-time formulation to be \emph{local}, which is an important concept, making the construction worth considering. We note that the non-locality of Bell's theorem arises from a classical interpretation of quantum entities, both for variables/operators \emph{and} probability theory. How complementary information would be encoded is discussed in detail in \S\ref{s.emerge}. While we use the scalar product as a key example, the vector concept extends to complementary information in general.
As apparent for spin, the behaviour of interactions related to space-time is irrevocably connected to complementarity, for information on position in and movement through space-time is complementary. Indeed, complementarity is as important a feature of quantum physics as the existence of quanta: both are fundamental quantum features distinct from classical physics. However, as identified in the Einstein--Podolsky--Rosen (EPR) paradox \cite{Einstein:1935rr}, the wave function formalism fails to encode complementarity, a limitation which is important to recognize. While entanglement entropy constructions skirt this dilemma, an approach based on individual interactions cannot. To find and motivate a suitable way of modelling the interactions, we begin with a discussion on complementarity in relation to the wave function, before moving on to how to best model complementary relations. Since a restriction to a classical (scalar) setting with simultaneous variables is not compatible with measurements, our conjecture involves information existing in parallel in a vector structure. We discuss how this is strictly required by complementarity, measurements and the EPR correlations, how the principle of locality would be respected, and how spin 1/2 and photon polarization can be well understood in this setting.
We end with a rough conjecture of what thermalization would require. The entangling processes between spin 1/2 correlated particles would have to differ from measurements in not being destructive, so that local equilibrium configurations can be reached, followed by some thermalization process inferring space on large scales. The finer points would be dependent on effects of time and gravity, which we only briefly comment upon. In total, the complementary sets relevant for space-time emergence ought to be position/momentum, the angular counterparts, spin and time/energy,
\begin{equation}\label{eq.sp.comp.vars}
(x_i,p_i)\,,\quad(r,L)\,,\quad\{J_i\}\sim(r,\{L_j\})\,,\quad(t,E)\,,
\end{equation}
in $d$ dimensions with $i,j\in\mathbb{Z}^+,\,i\leq d,\,j\leq d-1$. In a scenario of emergent space, two features are crucial: generation of relative \emph{distance} and \emph{orientation}, through fundamental interactions. Position/momenta $(x_i,p_i)$ readily provide the former, with an angular version in $(r,L)$. For the latter, an identification of a scalar product is required, as mentioned above, and spin 1/2 has the only $D=4$ correlations which in pair production encode multiple directional correlations simultaneously. The alternative representation of a set of $J_i$ in terms of angular position and rotations will be motivated below. For now, we note that spin has a dual in one dimension lower in photon polarization, also described by these representations. Also, while entanglement strictly does not require complementarity, in this text entanglement and entangling processes always refer to correlations and interactions involving complementarity.
\section{Complementarity \& the wave function}\label{s.comp}
Complementarity is a quantum feature which initially was thought of as a disturbance by measurement \cite{Heisenberg1927,Bohr:1928vqa}, but since has been identified as an inherent property. Its nature is conceptually clear in relations like that for position and momentum\footnote{\cite{Heisenberg1927} discusses how this goes beyond the observation that a measurement of one entity infers uncertainty of the other(s).}
\begin{equation}\label{pdx}
p_i\propto dx_i\,.
\end{equation}
A value of one entity is irrevocably connected to all its complementary variables being undefined, similar to how a change $d x_i$ is required for a value of $p_i$ and incompatible with a value of $x_i$. Complementary variables, e.g. $(x_i,p_i)$, are not simultaneous scalar entities\footnote{Note that vectors in space such as $\vec{p}$ have parts that are simultaneous scalars in this respect. A function of the set of $p_i$: $f(\{p_i\})$ is consistent, while the same is not true for spin. We also discuss complementarity rather than conjugate variables, since sets of complementary variables are not restricted to pairs.}. They cannot be probed to arbitrary accuracy from one and the same particle, nor does it make sense to use them side by side, as scalar entities, in any theoretical model:
\begin{equation}\label{fneq}
f\neq f(x_i,p_i)\,.
\end{equation}
In modelling theory after experiment, this deviation from classical reality has presented a challenge to modern physics. Pair production permits measurements on complementary variables (one per particle), and the pair correlations cause the EPR paradox \cite{Einstein:1935rr}.
The issue concerns what represents physical reality. Classically, physically real entities can be measured simultaneously. In quantum physics, complementarity prohibits this. At a measurement (destructive) at most one complementary variable can be determined. Yet quantum theory states that complementary variables are correlated in pair production, e.g. through conservation of momenta. Correlations of this type are measurable one at a time, and can thus only be statistically inferred on each specific type of pair production. While such statistical correlations do not represent physical reality in the classical sense (and it is a logical fallacy to infer the correlations for single particles), when inferred by measurements the ensemble behaviour must represent physical reality, by definition. Measurements are what define physical reality.
To clarify the role of complementarity vs. the wave function with respect to physical reality, we now summarise some relevant history. EPR \cite{Einstein:1935rr} highlighted a paradox between two theoretical frameworks: that the quantum theory correlations for complementary variables are not captured by the wave function unless non-local. There are three ways out of that: \emph{(1)} non-locality, \emph{(2)} replacing the wave function, or \emph{(3)} a breakdown of the quantum theory predictions. Either way, causality holds since EPR correlations do not represent signals. To exclude the correlations being a product of system settings at the time of pair production \cite{Bohr:1935af}, decisive experiments also need to have the settings for measurement changed during the flight of the particles. A precise way to analyse the issue \cite{PhysRev.108.1070} is in terms of spin or linear photon polarization correlations,
\begin{subequations}\label{pai}
\begin{equation}\label{pai.1}
P(a,i)=\frac{1}{2}\,,\quad \left\{\begin{array}{llll}
a\in S^1\,,&i\in\{1,0\}\,,&\text{photon},\\
a\in S^2\,,&i\in\{+,-\}\,,&\text{spin 1/2},
\end{array}\right.
\end{equation}
\begin{equation}\label{pai.2}
\sum_iP(a,i;b,i)=\left\{\begin{array}{ll}
{\displaystyle(a\cdot b)^2}\,,&\text{photon},\\[1ex]
{\displaystyle\frac{1-a\cdot b}{2}}\,,&\text{spin 1/2},
\end{array}\right.
\end{equation}
\end{subequations}
with measurements in the directions $(a,b)$ on the separated parts of the pair, at $A$ and $B$. Recently, experiments \cite{Hensen:2015ccp,PhysRevLett.115.250401,PhysRevLett.115.250402} showed the actual `in-flight', space-like separated correlations to be within experimental error of \eqref{pai} \emph{and} outside what can be locally captured by the wave function, even if improved by hidden variables, as identified for spin in Bell's theorem \cite{Bell:1964kc,Clauser:1969ny}.
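To make the correlations~\eqref{pai} concrete, the following Python sketch (our illustration, independent of the cited experiments) evaluates the two-particle correlations they imply for coplanar analyser settings, and the CHSH combination for spin 1/2, which reaches $2\sqrt{2}$, outside the bound $|S| \le 2$ obeyed by any local hidden-variable model:

```python
import math

def E_spin(angle_a, angle_b):
    # P(same outcome) = (1 - a.b)/2 with a.b = cos(angle_a - angle_b),
    # so the spin 1/2 correlation is E = P(same) - P(diff) = -a.b.
    p_same = (1 - math.cos(angle_a - angle_b)) / 2
    return 2 * p_same - 1

def E_photon(angle_a, angle_b):
    # P(same outcome) = (a.b)^2, so E = 2 (a.b)^2 - 1 = cos(2(angle_a - angle_b)).
    p_same = math.cos(angle_a - angle_b) ** 2
    return 2 * p_same - 1

# Perfect (anti-)correlation at equal analyser settings.
assert abs(E_spin(0.3, 0.3) + 1) < 1e-12
assert abs(E_photon(0.3, 0.3) - 1) < 1e-12

# CHSH combination at the standard spin angles: |S| = 2*sqrt(2),
# while any local hidden-variable model obeys |S| <= 2.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E_spin(a, b) - E_spin(a, b2) + E_spin(a2, b) + E_spin(a2, b2)
assert abs(abs(S) - 2 * math.sqrt(2)) < 1e-12
```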
The essential message of the above is that the quantum correlations stand up to experimental tests, on macroscopic scales. The physical reality of their complementarity and statistical correlations is inferred by measurements. Moreover, the wave function, by now nearly synonymous with quantum physics, characterises EPR correlations as causal and non-local. Interpretations of this range from the view that the wave function fails to capture conditional probabilities concerning complementarity, to the ER=EPR conjecture \cite{Maldacena:2013xja}, equating EPR correlations with Einstein--Rosen bridges \cite{Einstein:1935tc} (wormholes).
Returning to the discussion on theory construction, it is clear from \eqref{pdx} that complementarity is a fundamental feature of \emph{quantum} physics with central conceptual implications. It may well be as important as the existence of quanta, and ought to be a natural part of a complete description of quantum physics. However, it is not present in the wave function formalism. The wave function is a probability distribution for what an observer will encounter, constructed from modern probability theory and measure theory. Conditional probability
\begin{equation}\label{basis}
\int \mathrm{d}\lambda \,\rho(\lambda) \,\ldots
\end{equation}
(for some probability density $\rho$) is defined for any set of variables $\{\lambda\}$, but relies on this set being `measurable'. The mathematical term involves countable additivity of disjoint sets; effectively, the considered variables are required to be simultaneously measurable. Properties which cannot be divided into independent entities, as with complementarity, are disregarded by construction. For example, no set of independent variables can capture the physics of spin in $d=3$ (multiple separate, dependent ones are required), and so the construction \eqref{basis} in Bell's theorem \cite{Bell:1964kc,Clauser:1969ny} is tantamount to leaving out complementary relations. This restriction is also why the wave function is defined only for domains of classically simultaneous variables, with a dual in position/momentum space.
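The role of the construction \eqref{basis} in Bell's theorem can be illustrated numerically. Any mixture over a hidden variable $\lambda$ is bounded by its deterministic extreme points, so enumerating the 16 deterministic outcome assignments bounds the CHSH combination by 2, while the quantum singlet correlation $E(a,b)=-a\cdot b$ reaches $2\sqrt{2}$. A minimal sketch (the settings are the textbook optimal angles, chosen here for illustration):

```python
import itertools
import math

def chsh(A, Ap, B, Bp):
    """CHSH combination for deterministic outcomes A(a), A(a'), B(b), B(b')."""
    return A * B + A * Bp + Ap * B - Ap * Bp

# A local hidden-variable model, Eq. (basis), is a mixture over deterministic
# strategies; its maximum is attained at one of the 16 extreme points.
lhv_max = max(abs(chsh(*s)) for s in itertools.product([1, -1], repeat=4))

# Quantum singlet correlation E(a, b) = -a.b with standard optimal settings.
angles = {'a': 0.0, 'ap': math.pi / 2, 'b': math.pi / 4, 'bp': -math.pi / 4}
E = lambda x, y: -math.cos(angles[x] - angles[y])
quantum = abs(E('a', 'b') + E('a', 'bp') + E('ap', 'b') - E('ap', 'bp'))

assert lhv_max == 2                                # bound for any mixture over lambda
assert math.isclose(quantum, 2 * math.sqrt(2))     # Tsirelson value, exceeds the bound
```

The gap between the two numbers is exactly what the loophole-free experiments cited above resolve in favour of the quantum prediction.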
A definition of what is measurable that leaves out complementarity is absurd in the quantum regime. Through pair production, two complementary variables can be measured, and statistical correlations inferred. Here is where the classical definition of measure theory ceases to be valid, and where it needs to be extended to capture the physical reality verified through experiments. \emph{The wave function does not describe quantum physics in the same sense that probability theory restricting to simultaneously measurable variables does not describe complementarity}. Having identified this limitation of the wave function, we conclude that to accurately capture complementarity it is necessary to consider probability theory beyond classical concepts, while remaining compatible with \eqref{fneq} (e.g. different from phase space considerations).
A more pragmatic angle on identifying the shortcomings of the wave function is the following: if a theory displays causal non-local correlations, the non-locality clearly is an artefact of the formulation, since non-locality can only be proven physical through the verification of non-local signals. Since the wave function formalism cannot be altered to locality \cite{Bell:1964kc,Clauser:1969ny} and the measured correlations would require non-locality, a new definition is required.
The wave function is very useful when complementarity is disregarded. We emphasise that its limitation with respect to complementarity is only clarified through experimental verification of statistical correlations not representing signals, and through those correlations appearing non-local due to a restriction to a classical notion of what is measurable, built into the formalism. Since what is measurable differs from this classical notion, it is necessary, and possible, to formulate a theory with local EPR correlations.
\section{Beyond the wave function}\label{s.beyond}
To model quantum physics with complementarity, a construction different from the wave function formalism is required. The physical reality of complementarity includes \emph{(1)} not simultaneously measurable variables and \emph{(2)} pair production with statistical correlations (causal and space-like separated) between complementary variables, which cannot be locally described in the absence of \emph{(1)}. The argument is circular when only simultaneously accessible variables are considered, so the logical solution is to extend the concept of information from a scalar entity to multiple entities existing in parallel. Here, the clearest analogy is that of vectors (not in space-time). As $(x_i,p_i)$ are not simultaneous scalar entities, nor is the information pertaining to them, which rather is mutually orthogonal, existing in parallel, and only accessible to an observer through a single projection (per particle). In this analogy, one complementary variable can be fully measured or multiple ones be partially accessed, in compliance with the uncertainty principle \cite{Heisenberg1927}. Meanwhile, entangling processes represent simultaneous interactions in multiple channels, and pair production gives one-to-one correlations in each channel of information, i.e. full entanglement. Complementarity is reinterpreted as orthogonality of information, with complementary variables simultaneous and orthogonal to each other in terms of accessibility of the information they represent. While unorthodox, to our knowledge nothing prohibits this type of consideration.
A second conceptual change is that a model with information encoded in parallel represents a system formulation, whereas the wave function describes the observer experience, where a classical probability setting is intuitive. Since theory strictly is determined only by what is measured, it is equally viable to aim for an objective model of the system instead. While necessary for complementarity, this change has further consequences. Modelling on the observer experience effectively puts the observer at the centre of the universe, a choice entailing more artefacts than the EPR paradox, e.g. that of Schr\"odinger's cat. As much as the wave function is a prediction of what the observer will encounter, it is also a statement on the lack of information on the part of the observer. Without previous interaction, a foreign subsystem appears undetermined, without necessarily being so except to the observer in question. However, while many paradoxes are of a philosophical nature, locality is not. Hence, it is interesting that complementarity requires an objective formulation.
An objective picture with information stored in parallel opens up several different formulations, beyond the scope of this text. Focussing on the spin/photon pair correlations in \eqref{pai}, we conjecture the complementarity formulation to simply be \eqref{pai} with
\begin{equation}\label{pai.ext}
a\cdot b \rightarrow a\big|_A\cdot b\big|_B\,.
\end{equation}
Here, locality is made possible through the consideration of ($d$) multiple channels of orthogonal information. \eqref{pai} represents the simplest objective observation to make, without any specification of the individual systems prior to measurement, yet capturing the relations inferred by pair production. Meanwhile, \eqref{pai.ext} includes an assumption of acceleration to change the notion of orientation at particle level, so that the pair correlations accurately capture relative curvature between $(A,B)$.
Reconnecting to emergence of space-time, the complementarity picture furnishes a way to investigate how space-time might arise from individual interactions. The scalar product in the spin/photon correlations singles them out as candidate origins of emergence of \emph{orientation}. However, a first question must be how to understand spin and photon polarization. The general argument of parallel information accommodates the correlations, but gives little explanation of their characteristics. Below, we will discuss how to best understand both the individual spin/photon pair correlations and the limits to what can be measured, as well as what might be involved in an emergent picture.
\section{Relative orientation from pair correlations}\label{s.emerge}
Emergent directions require an identification of a scalar product at the level of individual interactions,
\begin{equation}\label{svp}
\sum_{i=1}^d |\hat e_i\rangle\langle\hat e_i| \,\,\,\, \longleftrightarrow \,\,\,\,\frac{1}{N}\sum_{n=1}^N f(a_n,b_n) \xrightarrow{N\rightarrow\infty} a\cdot b\,,
\end{equation}
for all $n$ with $a_n\cdot b_n=a\cdot b$, or something corresponding to the rhs in a less idealized setting. At the quantum level, this abstract connection $f(a_n,b_n)$ may also be expected to give output in terms of quanta, i.e. discrete values at each site of measurement $(A,B)$. On the lhs, the scalar product shows the classically counterintuitive nature of this type of connection: parallel, simultaneous correlations through multiple channels, where classically only one at a time is possible. Correlations of this type are required to be complementary, and the only candidates (identified so far) are spin 1/2 and linear photon polarization entanglement, since their pair correlations \eqref{pai} represent versions of \eqref{svp}. For spin, the uncertainty relation $[J_a,J_b]\propto (a\times b)$ also illustrates the overlap in information of $a\cdot b$.
The central role of the spin 1/2 and photon polarization correlations makes it desirable to understand their general characteristics and similarities. Importantly, they are directional in nature (probed at an angle), with non-trivial rotation symmetry. It turns out to be useful to describe them through a (rotation symmetric) representation $(r,\{L_i\})$, with an angular position $r$ and a set of angular rotation vectors $L_i$, in total giving $d$ operators. While this basis represents intrinsic qualities, not literal rotation, the structure is analogous in terms of conservation and elucidates the correlations, the limitations to what is measurable, the connection to a scalar product and the duality of the $2d$ and $3d$ settings, i.e. photons vs. spin.
Beginning with the simpler case of $d=2$ (photons), the correlations\footnote{The circular photon polarization is set apart from the EPR correlations. With $j\in\{+,-\}$ along the direction of propagation, $P(j,A)=P(j,A;j,B)=1/2$. The circular--linear correlation is trivial: $P(j,A;b,i)=1/4$.} are reproduced by rotation matrices in the plane perpendicular to the direction of propagation ($\vec{p}$), with a symmetry of $\varphi\sim\varphi+\pi$. With a reference direction $r$ and positive orientation set by a direction of rotation $L=\pm\vec{ p}/|p|$, a measurement can be given relative to the internally defined reference frame as
\begin{equation}\label{photon.a}
\pm a \,\,\rightarrow\,\, \mathfrak{a}=R(\varphi_a)\hat r=\begin{bmatrix}\cos\varphi_a&-\sin\varphi_a\\\sin\varphi_a&\cos\varphi_a\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}\,,
\end{equation}
with $\varphi_a\in [0,\pi)$. The correlations are then captured by
\begin{equation}\label{papb}
m_\mathfrak{a}=R(\varphi_a)\begin{bmatrix}1&0\\0&0\end{bmatrix}R(-\varphi_a)\,:
\quad \text{tr}(m_\mathfrak{a}m_\mathfrak{b})=(a\cdot b)^2\,,
\end{equation}
in an overlap picture that contains no way of assigning a definite outcome at either site $(A,B)$, giving \eqref{pai.1} as a consequence of the randomness of $r$.
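The overlap relation in \eqref{papb} is straightforward to verify numerically; the sketch below builds $m_\mathfrak{a}$ from explicit $2\times 2$ rotation matrices (the particular angles are arbitrary illustration values).

```python
import math

def R(phi):
    """2x2 rotation matrix R(phi)."""
    return [[math.cos(phi), -math.sin(phi)],
            [math.sin(phi),  math.cos(phi)]]

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def m(phi):
    """Projector m = R(phi) diag(1, 0) R(-phi) onto the direction at angle phi."""
    return matmul(matmul(R(phi), [[1, 0], [0, 0]]), R(-phi))

phi_a, phi_b = 0.3, 1.1
overlap = sum(matmul(m(phi_a), m(phi_b))[i][i] for i in range(2))  # tr(m_a m_b)

# Eq. (papb): tr(m_a m_b) = cos^2(phi_a - phi_b) = (a.b)^2
assert math.isclose(overlap, math.cos(phi_a - phi_b) ** 2)
```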
That a pairwise shared\footnote{For photons, a shared $L$ means $L\big|_A=\hat{\vec{p}}\big|_A\Rightarrow L\big|_B=\hat{\vec{p}}\big|_B$.} $(r,L)$ captures the correlations is illuminating in terms of what can be determined, and for the duality of circular/linear polarization. $L$ literally represents circular polarization evolving upon interaction, while the linear outcome depends on a combination of $(r,L)$. Of these, only one can be fully determined at a time. Selecting on $L$ for one half of the pair would, at the other end, give a fully correlated outcome for circular polarization, and random results for the linear case. In selecting on linear polarization instead, $L$ remains undetermined but present in the rotational correlations, equivalently posed in terms of one shared, undetermined $L$ and an angular distance between measurement angles. The setting is equivalent to $e^{2i\varphi}$, which in terms of $e^{2i\theta}e^{2i\varphi_a}$ aptly illustrates the further correlations required due to the fact that a vector $a$ is not uniquely defined by its components squared\footnote{Models on these typically require negative probabilities, to correct overestimated correlations.} $\{a_i^2\}$. These correlations describe a classically unorthodox mixing of relative probabilities.
The assignment of a definite value is in turn equivalent to the unitary outcome of $e^{2i\varphi}$ (norm $1$) with a simultaneous parallel assignment of values\footnote{To see this requirement, consider what is required for consistent assignments for different $\varphi$, connected by rotations. For the explicit decomposition in \eqref{e.multiple}, recall that $\cos2\varphi=\cos^2\varphi-\sin^2\varphi$, and the corresponding for $\sin2\varphi$.} in the real and imaginary channels
\begin{gather}\begin{aligned}\label{e.multiple}
&e^{2i\varphi}=\cos 2\varphi+i\sin2\varphi\\
&\Leftrightarrow\quad\left\{\begin{array}{ll}1:\,\cos^2\varphi\,,&-1:\,\sin^2\varphi\\ 1:\,\cos^2(\varphi-\pi/4)\,,&-1:\,\sin^2(\varphi-\pi/4)\end{array}\right.
\end{aligned}\end{gather}
exactly the requirement for an emergent scalar product. Note that the $\pi$ periodicity translates into orthogonal axes in the $2d$ plane at $\varphi\in\{0,\pi/4\}$. Again, the assignment of a definite value is complementary (`orthogonal') to the relative correlation, and so both cannot be simultaneously discussed --- they are only available through pair production.
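The parallel value assignment in \eqref{e.multiple} can be checked directly: outcomes $\pm1$ weighted by $\cos^2$/$\sin^2$ reproduce the real and imaginary parts of $e^{2i\varphi}$.

```python
import math

phi = 0.83  # arbitrary illustration angle

# Real channel: +1 with weight cos^2(phi), -1 with weight sin^2(phi).
re = (+1) * math.cos(phi) ** 2 + (-1) * math.sin(phi) ** 2
# Imaginary channel: the same assignment shifted by pi/4.
im = (+1) * math.cos(phi - math.pi / 4) ** 2 + (-1) * math.sin(phi - math.pi / 4) ** 2

# Together the two channels recover e^{2i*phi} = cos(2*phi) + i*sin(2*phi).
assert math.isclose(re, math.cos(2 * phi))
assert math.isclose(im, math.sin(2 * phi))
```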
In the light of the above, the $2d$ picture is clear, and in higher dimensions the construction can be extended to $(r,\{L_j\})$ with a set of angular rotations spanning the unit sphere $S^{(d-1)}$ in a set order. In addition, while pair produced photons share the same $(r,L)$, spin 1/2 particles require opposite characteristics. However, in $d>2$, models of the correlations like \eqref{papb} are not tractable. In $d=3$ two disparate, non-commutative rotations are present and remain undetermined, including the \emph{order} of rotation. The outcome depends on two complementary variables, neither of which can be eliminated or further specified. Here, the rotational origin only gives the observed trigonometric dependence of \eqref{pai.2}. In addition to a scalar product, spin 1/2 and $3d$ orientation also have an observed $4\pi$ rotation symmetry. While it is not apparent how this arises (though it would be desirable to understand), the set of two $L_i$ suggests that $2\pi$ rotations are required for each, both in terms of acceleration by rotation for individual particles, and the entangling process discussed below.
As such, a $d=3$ rephrasing of $\{J_i\}$ into $(r,\{L_i\})$ mostly illuminates the complementary qualities of spin (in relation to measurement, same as for photons) and the connection to a scalar product, in $3d$ with correlations through $(\hat x,\hat y, \hat z)$ in a cartesian coordinate system, instead of \eqref{e.multiple}. In addition, it shows a duality of spin 1/2 and photon polarization, similar in their nature in terms of $(r,\{L_i\})$. Allowing for the different dimensionality and the opposite statistics (anti-/correlation), the different periodicity of the internal systems $(\pi,2\pi)$ translates into that the $3d$ correlations are dual to the $2d$ relation with a doubled period ($\pi\rightarrow 2\pi$),
\begin{equation}
\mathfrak{a}\xrightarrow{\varphi_a\rightarrow2\varphi_a}\tilde{\mathfrak{a}}\,: \quad (\mathfrak{a}\cdot\mathfrak{b})^2= \frac{1+\tilde{\mathfrak{a}}\cdot\tilde{\mathfrak{b}}}{2}\,.
\end{equation}
The dual nature of the correlations makes it plausible that space-time, with $3d$ space orientation emerging from spin 1/2 entanglement, in certain geometries would be dual to $2d$ gauge theory, as in gauge/gravity duality.
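The duality is, at the level of the correlation functions, the half-angle identity $\cos^2\theta = (1+\cos 2\theta)/2$ in disguise, as a quick check confirms.

```python
import math

theta = 0.47  # arbitrary relative angle between a and b

lhs = math.cos(theta) ** 2              # (a.b)^2, the 2d (photon) correlation
rhs = (1 + math.cos(2 * theta)) / 2     # (1 + a~.b~)/2 with the doubled angle

# Doubling the internal period, phi -> 2*phi, maps one form onto the other.
assert math.isclose(lhs, rhs)
```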
\section{Emergence of space orientation}
With pair correlations encoding relative orientation identified, the next requirement for emergence of (large-scale) space orientation is entangling processes besides pair production. For a local equilibrium to be reached, the entities must readily entangle with each other, and the result from two-particle interactions must retain information of the initial configurations. The former is certainly true for spin, which is also what is required for $3d$ geometry ($2d$ geometry is naturally restricted to notions of rotation and parity).
Focussing on $3d$ and spin: a measurement, by comparison, represents a destructive process, altering the complementary qualities. But measurements are also projective interactions, i.e. not entangling, and it is reasonable to believe entangling processes to be of an averaging, stabilizing kind. At least, for space to emerge through interactions, with agreement on orientation at larger scales, spin is the best candidate so far.
Figure \ref{f1}
\begin{figure}[tbp]
{\begin{center}
\setlength{\unitlength}{0.8pt}
\begin{picture}(70,115)(0,-95)
\put(0,-5){\circle*{4}}
\put(-0.1,-5){\line(0,-1){25}}
\put(5,-9){$1$}
\put(5,-34){$2$}
\put(0,-30){\circle*{4}}
\put(0,-55){\circle*{4}}
\put(-0.1,-55){\line(0,-1){25}}
\put(5,-59){$3$}
\put(5,-84){$4$}
\put(0,-80){\circle*{4}}
\put(27,-45){$\longrightarrow$}
\put(70,15){\circle*{4}}
\put(69.9,15){\line(0,-1){20}}
\put(75,11){$1$}
\put(75,-10){$2$}
\put(75,-34){$2'$}
\put(70,-30){\circle*{4}}
\put(70,-55){\circle*{4}}
\put(75,-59){$3'$}
\put(69.9,-100){\line(0,1){20}}
\put(75,-85){$3$}
\put(75,-104){$4$}
\put(70,-100){\circle*{4}}
\put(69.9,-30){\line(0,-1){25}}
\end{picture}
\end{center}}
\caption{Illustration of an entangling process between two particles, previously entangled with two other particles. An emergence of orientation through spin entanglement would require the new system to have $(r,\{L_i\})'$ symmetrically dependent on the two initial configurations, with randomness only through how those two initial systems differ, providing interactions towards an equilibrium.\label{f1}}
\end{figure}
gives a rough illustration of the general idea. Two particles, each entangled to third party entities, entangle into a shared state equally dependent on the two initial configurations, with the outcome fixed if the two initial configurations already are fully entangled, and otherwise distributed in-between the two initial configurations with some probability, giving equilibration instead of copies. Here, `in-between' would be determined by the relative initial configurations. For spin, the easiest approach is in terms of $\{J_i\}$ with a conjectured selection on pairs maximizing $|J_{1,i}\cdot J_{2,j}|$ and a new probability distribution of $J'_{k}$ within each such interval, while accommodating for the orthogonality of $J'_k$. Here, changes $\pm \{J_i\}$ would also have to be considered within the same equivalence class. For the purpose of this text, the precise reassignment is not central --- instead, we focus on the presence of an entangling process furnishing interactions which lead to equilibrium configurations. A suitable overall description might require different parts, such as how local equilibrium is reached (possibly through something like tensor networks) followed by a hydrodynamic formulation.
The general conjecture, as such, entails local pair correlations to entangle in multiple stages and produce an agreement on orientation on a large scale, with smooth changes over large distances. On a local scale, such a structure would have to be supported mostly by vacuum fluctuations, and given boundary conditions by stationary matter (pure geometry). Here, the lifetime of the fluctuations must be greater than the local equilibration time
\begin{equation}
\tau>\tau_{eq}
\end{equation}
for the fluctuations to encode any structure. For example, this type of construction could provide a notion of straight lines to moving particles through interactions with their momenta $\vec{p}$, e.g. giving an explanation of the double-slit experiment (particle/wave duality) in terms of geometry through stabilised entanglement exchange through and around the slits. Since any equilibrating process in $3d$ must be crucially affected by gravity, the equilibration is expected to be more complex than discussed above, requiring further analysis beyond the scope of this text. We will merely add some brief comments in relation to time and gravity below. As stated above, the objective of this text is to argue for the relevance of identifying and analysing the different interactions relevant for space-time emergence, using spin 1/2 as an example. A better understanding of all of these interactions is necessary for a more precise understanding of the thermalization, e.g. in terms of something corresponding to a Boltzmann equation.
In modelling emergence of time, the fundamental interaction of relevance most likely is $(t,E)$ entanglement, communicated through interactions. Since acceleration implies time dilation, or a change of some intrinsic definition of a time period $T$ (in addition to length), time and space variations (gravity) might be connected through taking variations in $T$ to define orientation entanglement equilibrium configurations. However, in this framework time would arise both from internal and external causes, anything representing change (i.e. interactions), not quite the type of `clock time' (internal or external) commonly discussed, e.g. in \cite{Page:1983uc}. For the specific example of black holes, there might be a breakdown of space-time structure in close proximity to the horizon. If the individual interactions play a central role near the horizon, instead of the normally present effective picture of a thermalized system, a model of the physics involved would be incompatible with an a priori definition of space-time, while also including strong interactions in the sense of complementarity. The precise effects would depend on the quantum model involving emergence of time. Considering that the relative rate of interaction within a subsystem decreases as it approaches the horizon, time as emergent from individual interactions might give very unorthodox effects, such as a boundary akin to the Zeno paradox of Achilles and the tortoise (black holes `frozen' in time).
\section{Summary}
In emergence of space-time at the level of quantum interactions, complementarity is a crucial part of quantum physics. An accurate formulation (beyond scalar entities like entanglement entropy) requires inherent complementarity in addition to quantum features, and for this it is necessary to go beyond the wave function and consider information not as restricted to a scalar setting, but existing in parallel. Considerations of this type allow for EPR locality and for intuitive explanations of limitations and results of measurements for e.g. spin and photon polarization, as we have shown. This makes the information structure here advocated, in terms of complementary entities and an extended concept of probability theory, of interest and worth considering: the question of locality is a crucial part of physics.
The construction also allows for conjectures of how the pair correlations, encoding the scalar product required for space, through entangling processes may give rise to a thermalization process resulting in space-time. While our discussion makes an example of emergent space orientation and spin 1/2 correlations, it is relevant for complementary correlations in general, and the approach is of interest for further analysis in terms of effects due to time and gravity. An identification of the relevant fundamental interactions, their entangling properties and how they (possibly) result in a thermalization process might give a better understanding of quantum gravity.
\begin{acknowledgements}
This work is supported by the Swedish Research Council grant 2017-00328. Its initiation was supported by the Knut and Alice Wallenberg Foundation.
\end{acknowledgements}
\section{Introduction}
The solar magnetic field has long been recognized as playing a key role in the
transport, storage and release of energy from the photosphere to the corona
\citep{1960MNRAS.120...89G, 1981ApJ...246..331S}. \cite{1972ApJ...174..499P,
1994ISAA....1.....P} proposed that photospheric motions set the coronal
magnetic field in ``dynamic non-equilibrium'', which leads to the formation of
current sheets on fast ideal timescales \citep{2013ApJ...773L...2R} where
magnetic reconnection releases energy in small impulsive heating events termed
``nanoflares" \citep{1988ApJ...330..474P}. This process has been shown to have
the characteristics of magnetically dominated MHD turbulence
\citep{1996ApJ...457L.113E, 1997ApJ...484L..83D, 2003PhPl...10.3584D,
2007ApJ...657L..47R, 2008ApJ...677.1348R}, where the out-of-equilibrium
magnetic field generates small broad-band velocity fluctuations that create small scales
distorting the magnetic islands and pushing field lines together
\citep{2011PhRvE..83f5401R}. Similar dynamics are also displayed in cold plasma
\citep{1996ApJ...467..887H} and full MHD simulations \citep{1996JGR...10113445G,
2012A&A...544L..20D}.
A first connection to observations has been provided by the statistics of these
bursty dissipative events, that have been shown to follow a power-law behavior
in total energy, peak dissipation and duration with indices not far from those
determined observationally in X-rays \citep{1998ApJ...497..957G,
1998ApJ...505..974D}.
But to constrain any model, advance our understanding of coronal heating and
correctly interpret observations it is crucial to study the thermodynamics of
such a system. Simulations of entire active regions allow the investigation of
the geometric properties of radiative emission and thermodynamical quantities
\citep[e.g., temperature, mass flows and average volumetric heating rates,][]
{2002ApJ...572L.113G, 2011A&A...531A..97Z, 2013A&A...555A.123B}, but their
coarse resolution at scales below energy injection (about the granular
scale $\sim 10^3$\,km), necessary to include an entire active region, does
not allow the full development of nonlinear dynamics leading to the
formation of strong current sheets where energy is deposited.
Magnetic reconnection is not directly observable in the corona because it has become
increasingly clear that the effective heating and particle acceleration
occur at scales of the order of the ion (proton) inertial
length $d_i$, which for an ion density $n_i \sim 10^8$~cm$^{-3}$ becomes $d_i = c/\omega_{pi} \sim 23$~m
(the proton plasma frequency is $\omega_{pi} = \sqrt{4\pi n_i e^2/m_i}$, $c$ the speed of light,
$e$ the electron charge, and $m_i$ the proton mass), \emph{well below the
resolution limits of present instrumentation} --- to
date the highest spatial resolution achieved for direct observations of the
corona is approximately $150$\,km by the Hi-C imager
\citep{2013Natur.493..501C}. Additionally for typical active region
temperatures $\sim 10^6$~K and magnetic field intensities $\sim 50$~G
the ion gyroradius is of the same order of magnitude as $d_i$.
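The quoted value of $d_i$ follows directly from the definitions above; a back-of-the-envelope check in Gaussian (CGS) units, with the physical constants rounded:

```python
import math

c = 2.998e10      # speed of light [cm/s]
e = 4.803e-10     # elementary charge [esu]
m_i = 1.673e-24   # proton mass [g]
n_i = 1.0e8       # ion number density [cm^-3], as in the text

# Proton plasma frequency omega_pi = sqrt(4 pi n_i e^2 / m_i) [rad/s]
omega_pi = math.sqrt(4 * math.pi * n_i * e ** 2 / m_i)

# Ion inertial length d_i = c / omega_pi [cm]
d_i = c / omega_pi

# About 23 m, i.e. well below the ~150 km Hi-C resolution quoted above.
assert 22e2 < d_i < 24e2
```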
\emph{What can be observed directly is radiation}. By
analyzing the spectral properties of the observed radiation it is possible to
infer some of the physical properties of the plasma in the solar upper atmosphere,
such as the number density and temperature distribution along the line of
sight. Thus comparisons between observations and models must focus on the
analysis of the spectral properties of the plasma.
Here we analyze results from the HYPERION compressible MHD code. HYPERION is a
parallelized Fourier collocation finite difference code with Runge-Kutta time
discretization that solves the compressible MHD equations with parallel thermal
conduction and radiation included \citep{2010AIPC.1216...40D,
2012A&A...544L..20D}. HYPERION is able to produce temperatures and number
densities obtained in a framework where the ``heating function'' is due only to
the resistive and viscous dissipation induced in the corona by the footpoint shuffling.
Recent simulations \citep{2012A&A...544L..20D} have shown that temperature is highly
structured at scales below observational resolution in loops whose
magnetic field lines are shuffled at their footpoints by random
photospheric-mimicking motions: temperature peaks around
current sheets forming similarly shaped structures, approximately elongated in
the strong guide field direction, surrounded by cooler plasma.
In this paper we use our simulations of resolved loops to return
predictions for simulated ``observables'', such as the number density and
differential emission measure distribution, that can be compared with
observations. There has been considerable interest in the temperature
distribution observed in coronal loops
\citep[e.g.,][]{2003A&A...406.1089D,2007ApJ...656..577A,2002ApJ...578L.161S,2008ApJ...686L.131W,
2012ApJ...759..141W}. Many of these studies have found relatively narrow
emission measure distributions, and it has been unclear how these observations
could be reconciled with theory.
We simulate loops of $50,000$\,km length and axial magnetic fields of 100,
200, and 400\,G. The resulting temperatures and densities are used to synthesize
the emission line intensities that the Extreme Ultraviolet (EUV) Imaging
Spectrometer (EIS, \citealt{2007SoPh..243...19C}) would observe. These
intensities are input into the same analysis software used in many observational
studies. For these first calculations we find very good agreement between the
emission measure distributions derived from the simulations and the general
trends in the distributions derived from data. The distributions are relatively
narrow, peak at temperatures between $\log T = 6.0$ and 6.4, and show very
little emission at flare-like temperatures ($\log T\sim7$). The mean
temperature in the distribution, along with its width,
also rises with increasing field strength, consistent with observations.
\section{Formulation of the problem} \label{sec:fp}
In this section we describe our extension of the Parker coronal heating model
from RMHD to a formulation that includes many more significant physical processes.
We first describe our magnetohydrodynamic model in which physical augmentations, such
as thermal conduction and optically thin radiation, are contained.
Line-tied boundary conditions appropriate to the upper chromosphere are then given.
The velocity forcing function at the boundaries is also described.
The formulations for the elliptical gravity model, initial temperature and initial number density are
also given.
\subsection{Governing equations}
We model the solar corona as a compressible, dissipative magnetofluid with nonlinear thermal conduction and optically thin radiation losses.
The governing equations, written here in dimensionless
form, are:
\begin{eqnarray}
{{\partial n}\over {\partial t}} &=& -\nabla\cdot (n {\bf v}), \label{eq:eqn} \\[.4em]
{{\partial n {\bf v}}\over{\partial t}} &=& -\nabla\cdot({n \bf v v})
-{\beta}\nabla p + {\bf J}\times{\bf B}+
{1\over S_v}\nabla\cdot{\bf \zeta} \nonumber \\
&&+ \frac{1}{Fr^2}\ n \Gamma(z)\, {\bf\hat e}_z \label{eq:eqnv} \\[.4em]
{{\partial T}\over{\partial t}} &=& -{\bf v}\cdot\nabla T
- (\gamma - 1) (\nabla\cdot {\bf v}) T
\nonumber \\
&& +\frac{1}{n} \Bigg\{ \frac{1}{ Pr\, S_v }
\bigg[{\bf B}\cdot\nabla
\bigg( \kappa_{\parallel}\ T^{5/2}\ {{\bf B}\cdot\nabla T\over B^2}\bigg)
\nonumber \\
&& +\kappa_{\perp} (n, \rho, T)\ \nabla\cdot
\bigg( {{\bf B}\times (\nabla T \times {\bf B})\over B^2}\bigg) \bigg]
\nonumber \\
&& + {(\gamma -1)\over\beta} \bigg[
{ 1\over S_v} \zeta_{ij} {\partial v_i\over\partial x_j}
+{1\over S} (\nabla\times{\bf B})^2
\nonumber \\
&& -{1\over P_{rad} S_v} n^2\Lambda (T)
+ {\beta\over(\gamma - 1)} n C_N \bigg] \Bigg\}, \label{eq:eqT}\\[.4em]
{{\partial {\bf B}}\over{\partial t}} &=& \nabla\times{\bf v}\times{\bf B}
- \frac{1}{S}\nabla\times \nabla\times {\bf B}, \label{eq:b}
\end{eqnarray}
with the solenoidality condition $\nabla\cdot{\bf B} = 0$.
The system is closed by the equation of state
\begin{equation} \label{eq:eqp}
p = n T.
\end{equation}
The non-dimensional variables are defined in the following way:
$n ({\bf x}, t)$ is the number density,
${\bf v}({\bf x}, t) = (u, v, w)$ is the flow velocity,
$p({\bf x}, t)$ is the thermal pressure,
${\bf B}({\bf x}, t) = (B_x, B_y, B_z) $ is the magnetic induction field,
${\bf J} = \nabla\times{\bf B}$ is the electric current density,
$T({\bf x}, t)$ is the plasma temperature,
$\zeta_{ij}= \mu (\partial_j v_i + \partial_i v_j) -
\lambda \nabla\cdot {\bf v} \delta_{ij}$ is the viscous stress tensor,
$e_{ij}= (\partial_j v_i + \partial_i v_j)$ is the strain tensor,
and $\gamma$ is the adiabatic ratio.
To render the equations dimensionless we set characteristic values at the
walls of the computational box: a number density $n_*$,
vertical Alfv{\'e}n speed at the boundaries $V_{A*}$,
the orthogonal box width $L_*$, and the temperature $T_*$.
Time ($t$) is therefore measured in units of the Alfv\'en time
$\tau_A=L_* /V_{A*}$ (note that this is not the axial loop-length transit time).
The parallel thermal conductivity is given by $\kappa_\parallel$,
while the perpendicular thermal conduction is considered negligible and hence
$\kappa_{\perp}$ is set to zero.
The magnetic resistivity $\eta$ and shear viscosity $\mu$
are assumed to be constant and uniform, and the Stokes relationship is assumed,
so that the bulk viscosity $\lambda = (2/3) \mu$.
In our previous paper \citep{2012A&A...544L..20D} the function
$\Lambda(T)$ that describes the temperature dependence of the radiation
was evaluated in the same way as \cite{1974SoPh...35..123H}.
Here we use instead the radiation function based on the CHIANTI atomic database
\citep{2012ApJ...744...99L}, normalized by its value at the base temperature
$T_* = 10^4$~K.
The Newton cooling term $C_N$ is described in section~\ref{sec:grav}.
The important dimensionless numbers are:
$S_v = n_* m_p V_{A*} L_* / \mu \equiv$ viscous Lundquist number
($m_p = 1.673\times 10^{-27}$ kg is the proton mass),
$S = \mu_0 V_{A*} L_* / \eta \equiv$ Lundquist number
($\mu_0 = 1.256\times10^{-6}$ H~m$^{-1}$ is the magnetic permeability),
$\beta = \mu_0 p_* / B_*^2 \equiv$ pressure ratio at the wall,
$Pr = C_v \mu / \kappa_{\parallel} T_*^{5/2} \equiv$ Prandtl number
($C_v$ is the specific heat at constant volume), and
$P_{rad} = \mu / \tau_A^{2} n_*^2 \Lambda (T_*) \equiv$ radiative Prandtl number.
The magnetohydrodynamic Froude number ($Fr$) is equal to $V_A/(g L_*)^{1/2}$,
where $g=274$~m~s$^{-2}$ is the solar surface gravity.
In what follows we assume normalizing quantities representative of the upper solar chromosphere:
$n_*=10^{17}\, $m$^{-3}$,
$T_*=10^{4}$ K, and $L_* = 4 \times 10^{6}$ m.
$B_*$ is the only physical quantity that is varied among the numerical simulations ($B_*= 0.01$, 0.02, and 0.04 T; see Table~\ref{tab:table 1}).
We set $\ln \Lambda = 10$. A loop length of $L_{z*}= 12.5\, L_* = 50000$ km is used in all of the simulations.
The normalized time scale of the forcing, $t^*$, is set to represent
a five minute convection time scale. The normalized
velocity $V_*$ is $10^3$~m~s$^{-1}$. This velocity is expressed in dimensionless form as $\Xi=V_* / V_{A*}$.
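As a concrete illustration, the wall-based dimensionless numbers of Table~\ref{tab:table 1} can be recovered from the normalizing quantities. The sketch below is our own (not part of HYPERION) and assumes a single-fluid pressure $p_* = n_* k_B T_*$; the viscous and resistive Lundquist numbers also require $\mu$ and $\eta$, which are not quoted in the text, so only the derivable quantities are computed.

```python
import math

# Physical constants (SI), as quoted in the text
MU0 = 1.256e-6      # magnetic permeability [H/m]
M_P = 1.673e-27     # proton mass [kg]
K_B = 1.381e-23     # Boltzmann constant [J/K]

def dimensionless_numbers(B_star, n_star=1e17, T_star=1e4,
                          L_star=4e6, V_star=1e3, g=274.0):
    """Wall-based dimensionless numbers from the normalizing values.

    Assumes p* = n* k_B T* (an assumption, not stated explicitly);
    S_v and S also need the unspecified viscosity and resistivity,
    so they are not computed here.
    """
    V_A = B_star / math.sqrt(MU0 * n_star * M_P)     # Alfven speed
    beta = MU0 * n_star * K_B * T_star / B_star**2   # pressure ratio
    Fr = V_A / math.sqrt(g * L_star)                 # Froude number
    M_A = V_star / V_A                               # Alfven Mach number
    return V_A, beta, Fr, M_A

# Case A: B* = 0.01 T
V_A, beta, Fr, M_A = dimensionless_numbers(0.01)
```

Evaluating this for case A reproduces $\beta \simeq 1.735\times10^{-4}$ and $Fr \simeq 20.8$ of Table~\ref{tab:table 1}.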
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth]{figures/NOAA_11082_field.eps}
\caption{An illustrative observation of a small solar active region. Here we
show magnetic field lines with total lengths between 45 and 55\,Mm
extrapolated from an HMI magnetogram. The magnetic field strengths are
color coded.
\label{h1}}
\end{centering}
\end{figure}
\subsection{Boundary and initial conditions} \label{sec:inibc}
We solve the governing equations in a Cartesian domain of size
$L_x \times L_y \times L_z = 1 \times 1 \times L_z$,
where $L_z$ is the loop aspect ratio determined by the loop length
and the characteristic length ($0\le x,y \le1,\ -L_z/2 \le z \le L_z/2$).
The system has periodic boundary conditions in $x$ and $y$, line-tied boundary
conditions at the top and bottom $z$-plates,
and it is threaded by a strong guide magnetic field $B_0 = 1$ in the $z$-direction.
As explained later in subsection~\ref{subsec:nm},
we utilize the magnetic vector potential rather than the magnetic induction field;
our implementation of a staggered mesh in $z$ is also described there.
Using the normalizing quantities given above,
the dimensionless line-tied boundary conditions which are enforced at the top and bottom walls of the simulation take the following form:
\begin{equation}
n = 1,
\end{equation}
\begin{equation}
T = 1,
\end{equation}
\begin{equation}
nu = n{\partial\psi\over\partial y},
\end{equation}
\begin{equation}
nv = -n{\partial\psi\over\partial x},
\end{equation}
\begin{equation}
nw = 0,
\end{equation}
\begin{equation}
{\partial A_x\over\partial t} = v B_z,
\end{equation}
\begin{equation}
{\partial A_y\over\partial t} = -u B_z,
\end{equation}
and
\begin{equation}
B_z = 1.
\end{equation}
The velocity stream function ($\psi$) is described in section \ref{sec:vel}.
The magnetic field is expressed as
$\mathbf{B} = B_0 \mathbf{\hat e}_z + \mathbf{b}$
with $\mathbf{b} (x,y,z,t)= \nabla \times \mathbf{A}$, where $\mathbf{A}$ is the vector
potential associated with the fluctuating magnetic field.
At the top and bottom $z$-plates $B_z$, $n$ and $T$
are kept constant at their initial values $B_0$, $n_0$ and $T_0$,
while the magnetic vector potential is convected by the resulting flows.
\subsubsection{Velocity forcing function} \label{sec:vel}
At the boundaries we employ a time-dependent
forcing function analogous to those used in previous studies
\citep{1996ApJ...467..887H, 1996ApJ...457L.113E, 1999PhPl....6.4146E},
i.e., at the top boundary $z=L_z/2$ we evolve a function
\begin{equation} \label{eq:fc1}
\phi_t (x, y, t) = f_1 \sin^2 \left(\frac{\pi t}{2 t^*}\right) + f_2 \sin^2 \left(\frac{\pi t}{2 t^*} + \frac{\pi}{2}\right),
\end{equation}
and at the bottom boundary $z=-L_z/2$ we evolve a similar function
\begin{equation} \label{eq:fc2}
\phi_b (x, y, t) = f_3 \sin^2 \left(\frac{\pi t}{2 t^*}+ \frac{\pi}{4}\right) + f_4 \sin^2 \left(\frac{\pi t}{2 t^*} + \frac{3\pi}{4}\right),
\end{equation}
where
\begin{equation} \label{eq:fc3}
f_i (x,y) = \sum_{m, p} \frac{a_{mp}^i\ \sin \left[2\pi (m x + p y + \chi_{mp}^i) \right]}
{ \sqrt{m^2 + p^2} },
\end{equation}
in which all wave-numbers with $3\le \sqrt{m^2 + p^2} \le4$ are excited, so that
the typical length-scale of the eddies is $\sim 1/4$.
$a_{mp}^i$ and $\chi_{mp}^i$ are random numbers chosen such that
$0\le a_{mp}^i, \chi_{mp}^i \le1$.
Every $t^*$, the coefficients $a_{mp}^i$ and $ \chi_{mp}^i$ are randomly
changed, alternately for eddies 1 through 4.
At each timestep a provisional wall velocity is computed from:
\begin{equation}
u_{prov}={\partial\phi\over\partial y}
\end{equation}
and
\begin{equation}
v_{prov}=-{\partial\phi\over\partial x}
\end{equation}
To ensure that the kinetic energy at the wall remains constant, we compute
\begin{equation}
K=\sum_{j=1}^{n_y}\sum_{i=1}^{n_x} \bigl[ u_{prov}^2(i,j) + v_{prov}^2(i,j) \bigr]
\end{equation}
separately at the top and bottom boundaries (these are denoted by $K_t$ and $K_b$).
To achieve the desired velocity we then have the following stream functions at the top and bottom boundaries:
\begin {equation}
\psi_t = {\Xi\over K_t } \phi_t
\end{equation}
and
\begin {equation}
\psi_b = {\Xi\over K_b} \phi_b,
\end{equation}
where $\Xi=V_* / V_{A*}$.
Based on these stream functions, the top boundary velocity is given by:
\begin{equation}
u_t={\partial\psi_t\over\partial y}~~{\rm and}~~v_t=-{\partial\psi_t\over\partial x},
\end{equation}
and the bottom boundary velocity is given by:
\begin{equation}
u_b={\partial\psi_b\over\partial y}~~{\rm and}~~v_b=-{\partial\psi_b\over\partial x}.
\end{equation}
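The construction of the boundary forcing can be sketched as follows. This is a stdlib illustration, not HYPERION's bookkeeping: the restriction to non-negative $m,p$ and all function names are our assumptions. Note that since $\sin^2 s + \sin^2(s+\pi/2) = 1$, the two eddies in each pattern cross-fade with weights that always sum to one.

```python
import math
import random

def make_coefficients(seed=0):
    """Random amplitudes a and phases chi in [0, 1] for the excited
    modes, i.e. all wavenumbers with 3 <= sqrt(m^2 + p^2) <= 4.
    (Non-negative m, p assumed here for illustration.)"""
    rng = random.Random(seed)
    modes = [(m, p) for m in range(5) for p in range(5)
             if 3.0 <= math.hypot(m, p) <= 4.0]
    return {(m, p): (rng.random(), rng.random()) for m, p in modes}

def eddy(x, y, coeffs):
    """Evaluate one pattern f_i(x, y) of equation (fc3) in the text."""
    return sum(a * math.sin(2.0 * math.pi * (m * x + p * y + chi))
               / math.hypot(m, p)
               for (m, p), (a, chi) in coeffs.items())

def phi_top(x, y, t, t_star, c1, c2):
    """Top-boundary pattern, equation (fc1) in the text: the two
    eddies cross-fade, with sin^2 weights summing to one."""
    w1 = math.sin(math.pi * t / (2.0 * t_star)) ** 2
    return w1 * eddy(x, y, c1) + (1.0 - w1) * eddy(x, y, c2)
```

With non-negative wavenumbers there are eight excited modes; at $t=0$ the weight of eddy 1 vanishes and the pattern is entirely that of eddy 2.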
\begin{table*}
\begin{center}
\caption{\label{tab:table 1} Dimensionless numbers based on solar values. }
\bigskip
\begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} ccccccc}
\hline \hline\noalign{\vspace{.5em}}
Case & $B_0$ (Tesla) & $\beta$ & $S_v$ & $S$ & $Fr$ & $Pr$ & $P_{rad}$ \\[.6em]
\hline\noalign{\vspace{.5em}}
A&0.01& $1.735\times 10^{-4}$&$2.088\times 10^9$& $2.694\times 10^9$ & $2.083\times 10^1$ & $2.533\times 10^{-2}$ & $7.339\times10^{-7}$ \\[.3em]
B&0.02& $4.339\times 10^{-5}$&$4.176\times 10^9$& $5.389\times 10^9$ & $4.166\times 10^1$ & $2.533\times 10^{-2}$ & $2.935\times10^{-6}$ \\[.3em]
C&0.04& $1.085\times 10^{-5}$&$8.352\times 10^9$& $1.078\times 10^{10}$ & $8.332\times 10^1$ & $2.533\times 10^{-2}$ & $1.174\times10^{-5}$ \\[.3em]
D&0.01& $1.735\times 10^{-4}$&$2.088\times 10^9$& $2.694\times 10^{9}$ & $2.083\times 10^1$ & $2.533\times 10^{-2}$ & $7.339\times10^{-7}$ \\[.3em]
E&0.01& $1.735\times 10^{-4}$&$2.088\times 10^9$& $2.694\times 10^9$ & $2.083\times 10^1$ & $2.533\times 10^{-2}$ & $7.339\times10^{-7}$ \\[.3em]
F&0.01& $1.735\times 10^{-4}$&$2.088\times 10^9$& $2.694\times 10^9$ & $2.083\times 10^1$ & $2.533\times 10^{-2}$ & $7.339\times10^{-7}$ \\[.3em]
G&0.01& $1.735\times 10^{-4}$&$2.088\times 10^9$& $2.694\times 10^9$ & $2.083\times 10^1$ & $2.533\times 10^{-2}$ & $7.339\times10^{-7}$ \\[.3em]
\end{tabular*}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{\label{tab:table 2} Numerical resolution and rescaled dimensionless numbers used in the numerical simulations. }
\bigskip
\begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} cccccc}
\hline \hline\noalign{\vspace{.5em}}
CASE & Resolution ($n_x \times n_y \times n_z$) & $\tilde{R}$ & $\tilde{S}_v$ & $\tilde{S}$ & $\widetilde{Pr}$ & $\tilde{P}_{rad}$ \\[.6em]
\hline\noalign{\vspace{.5em}}
A& $64\times64\times144$ &$50 $ & $3.448\times 10^4$ & $4.449\times 10^4$ & $1.534\times 10^3$ & $4.444\times10^{-2}$ \\[.3em]
B& $64\times64\times144$ &$50 $ & $6.896\times 10^4$ & $8.898\times 10^4 $ & $1.534\times 10^3$ & $1.778\times10^{-1}$ \\[.3em]
C& $64\times64\times144$ &$50 $ & $1.379\times 10^5$ & $1.780\times 10^5 $ & $1.534\times 10^3$ & $7.111\times10^{-1}$ \\[.3em]
D& $128\times128\times144$ &$100 $ & $6.896\times 10^4$ & $8.898\times 10^4 $ & $7.670\times 10^2$ & $2.222\times10^{-2}$ \\[.3em]
E& $64\times64\times 288$ &$50 $ & $3.448\times 10^4$ & $4.449\times 10^4$ & $1.534\times 10^3$ & $4.444\times10^{-2}$ \\[.3em]
F& $64\times64\times 576$ &$50 $ & $3.448\times 10^4$ & $4.449\times 10^4$ & $1.534\times 10^3$ & $4.444\times10^{-2}$ \\[.3em]
G& $64\times64\times 1620$ &$50 $ & $3.448\times 10^4$ & $4.449\times 10^4$ & $1.534\times 10^3$ & $4.444\times10^{-2}$ \\[.3em]
\end{tabular*}
\end{center}
\end{table*}
\subsubsection{Initial condition for dynamical variables}
As explained later in subsection~\ref{subsec:nm},
we utilize the magnetic vector potential rather than the magnetic induction field.
The initial values for the momentum and magnetic vector potential are given by:
\begin{equation}
nu = 0,
\end{equation}
\begin{equation}
nv = 0,
\end{equation}
\begin{equation}
nw = 0,
\end{equation}
\begin{equation}
A_x = 0,
\end{equation}
\begin{equation}
A_y = 0,
\end{equation}
and
\begin{equation}
A_z = 0.
\end{equation}
\subsubsection{Initial temperature, number density and gravity specification} \label{sec:grav}
Here we describe how we initialize the temperature and number density, as well as specify the gravity function.
The loop gravity is determined by an elliptical model, with
\begin{equation}
\Gamma(z)={b z \over a^2 (1-z^2/a^2)^{1/2}},
\end{equation}
where $a$ is the semi-major and $b$ is the semi-minor axis of the ellipse.
The elliptical model decouples the loop height from the loop length,
since $a$ and $b$ can be specified independently.
The footpoints of the loop are located where the gravitational acceleration is fully
vertical, i.e., where $\Gamma=\pm 1$; solving for $z$ then gives
the loop length $L_z=2 \left[ a^4 / \left(a^2 + b^2 \right) \right]^{1/2}$.
We impose as initial condition a temperature profile ($T_i$) with dimensionless temperature 1
at the boundaries and 100 in the center (this corresponds to dimensional values of $10^4$~K
and $10^6$~K).
Let $T_{apex} = 100$, then:
\begin{equation}
T_i (z) = T_{apex} - {T_{apex}-1\over(0.5 L_z) ^ q}~~z^q. \label{eq:tempinit}
\end{equation}
The parameter $q$ determines the steepness of the temperature profile at the boundaries, as well
as the flatness of the temperature profile in the center of the system.
We set $q=8$ to ensure a rapid increase of temperature away from the boundaries.
The number density can then be solved for in the usual manner, i.e.,
\begin{equation}
\frac{d}{dz} nT_i = n{dT_i\over dz} + T_i {dn\over dz} = {1\over \beta Fr^2} n \Gamma (z).
\end{equation}
Rearranging this equation, we have:
\begin{equation}
{1\over n} {dn\over dz} = \frac{d}{dz} \ln n =
- \frac{d}{dz} \ln T_i + {1\over \beta Fr^2} {1\over T_i} \Gamma (z).
\end{equation}
We solve this numerically with a shooting method. Calculating the
number density in this way allows us to consider longer loops. In this
paper we choose $a = b = 6.25 \sqrt{2}$, consistent with $L_z =
12.5$ (and a dimensional loop length of 50000 km). Combined with our choices for $B_*$, this places our
loop within the range of what is typically observed in the solar corona. To
illustrate this, in Figure~\ref{h1} we show active region NOAA 11082 with field lines
in the range 45000 -- 55000~km computed from a potential extrapolation of a Helioseismic
and Magnetic Imager \citep{2012SoPh..275..229S} magnetogram.
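Under the parameters quoted above (case~A values of $\beta$ and $Fr$; all names are ours), the hydrostatic density profile can be sketched by direct integration of $d\ln n/dz$, rather than the shooting method actually used:

```python
import math

# Dimensionless loop parameters from the text (case A values of
# beta and Fr; a = b = 6.25*sqrt(2) gives L_z = 12.5, q = 8).
A = B = 6.25 * math.sqrt(2.0)
L_Z, Q, T_APEX = 12.5, 8, 100.0
BETA, FR = 1.735e-4, 20.83

def gamma(z):
    """Elliptical gravity profile Gamma(z)."""
    return B * z / (A**2 * math.sqrt(1.0 - z**2 / A**2))

def t_init(z):
    """Initial temperature profile (q = 8 is even, so z^q = |z|^q)."""
    return T_APEX - (T_APEX - 1.0) * abs(z)**Q / (0.5 * L_Z)**Q

def log_n_rhs(z):
    """d(ln n)/dz from the hydrostatic balance in the text."""
    dT = -(T_APEX - 1.0) * Q * z * abs(z)**(Q - 2) / (0.5 * L_Z)**Q
    return -dT / t_init(z) + gamma(z) / (BETA * FR**2 * t_init(z))

def density_profile(nsteps=50000):
    """RK4 integration of ln n from the lower wall (n = 1) upward."""
    z, ln_n, dz = -0.5 * L_Z, 0.0, L_Z / nsteps
    profile = [(z, 1.0)]
    for _ in range(nsteps):
        k1 = log_n_rhs(z)
        k2 = log_n_rhs(z + 0.5 * dz)
        k3 = k2                          # RHS depends on z only
        k4 = log_n_rhs(z + dz)
        ln_n += dz * (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0
        z += dz
        profile.append((z, math.exp(ln_n)))
    return profile
```

Because the right-hand side is even in $z$, the integration returns $n\simeq1$ at the upper wall, and the apex density falls to roughly $10^{-2}$--$10^{-3}$ of the wall value, i.e., a coronal number density of order $10^{14}$--$10^{15}$~m$^{-3}$.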
The term $C_N$ in equation~(\ref{eq:eqT}) denotes a Newton cooling function which is enforced
close to the $z$ boundaries \citep{2001A&A...365..562D, bp11}.
In dimensionless form we use $C_N = {1\over\ \tau_N}~[T_i (z) - T(z)] e^{-(z+0.5L_z)/h_N}$ at the lower boundary and
$C_N = {1\over\ \tau_N}~[T_i (z) - T(z)] e^{-(0.5L_z-z)/h_N}$ at the upper boundary.
Here $\tau_N$ is the Newton cooling time and $h_N$ is the Newton cooling height.
We use $\tau_N = 10$ and $h_N = 1/4$.
In dimensional terms this corresponds to times between 0.145 s and 0.58 s for the various magnetic field cases (see Table 1),
and a height of 1000 km.
The Newton cooling term is only effective over the first few points in $z$ at each boundary.
Note as well that the radiation function is suppressed exponentially near the boundaries,
in the complementary manner, to account for the
increasing optical thickness of the upper chromosphere.
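A sketch of the cooling term (function names are ours; we sum the two wall contributions, since each is exponentially negligible away from its own boundary):

```python
import math

# Parameters from the text: tau_N = 10, h_N = 1/4, L_z = 12.5.
TAU_N, H_N, L_Z = 10.0, 0.25, 12.5

def newton_cooling(z, T, t_init):
    """Dimensionless Newton cooling term C_N of the energy equation.

    The exponentials confine the relaxation toward the initial
    temperature profile t_init(z) to thin layers near z = +/- L_z/2;
    summing both wall terms is our simplification."""
    lower = math.exp(-(z + 0.5 * L_Z) / H_N)
    upper = math.exp(-(0.5 * L_Z - z) / H_N)
    return (t_init(z) - T) * (lower + upper) / TAU_N
```

At the loop apex the exponentials are of order $e^{-25}$, so the term is effective only over the first few mesh points in $z$, as stated above.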
\section{Numerical considerations} \label{sec:nc}
\subsection{Numerical method} \label{subsec:nm}
With previous definitions equation~(\ref{eq:b}) and the magnetic field solenoidality condition
($\nabla\cdot{\bf B} = 0$) can be replaced by the
magnetic vector potential equation:
\begin{equation} \label{eq:eqbp}
{\partial {\bf A}\over\partial t}={\bf v}\times ( B_0\, \mathbf{\hat e}_z + \nabla\times{\bf A} )
- {1\over S}~ \nabla\times\nabla\times {\bf A}
\end{equation}
A staggered mesh is employed in the $z$-direction \citep{1987JCP..70...300T}.
The fields that are defined at the
$z$ boundaries are advanced in time on the standard mesh. Other quantities of interest are defined and
advanced in time on the staggered mesh.
That is, on the standard mesh we evaluate $n, n u, ~n v, ~n w, ~A_x, ~A_y,~B_z$ and $T$. Some
derived fields such as $\omega_x, ~\omega_y, ~\omega_z, ~j_x,$ and $j_y$ are also defined on the standard
mesh. On the staggered mesh we evaluate $A_z, ~B_x,~ B_y,$ and $j_z$. Note that for plotting purposes we
interpolate these latter fields onto the standard mesh (at the boundaries an extrapolation is performed).
We solve numerically equations~(\ref{eq:eqn})-(\ref{eq:eqT}) and (\ref{eq:eqbp})
together with equation~(\ref{eq:eqp}). When solving for the $z$ magnetic field we add
the DC magnetic field contribution to $(\nabla\times{\bf A})_z$.
Space is discretized in $x$ and
$y$ with a Fourier collocation scheme \citep{1989PhFlB...1.2153D} with isotropic
truncation dealiasing.
Spatial derivatives are calculated in Fourier space, and nonlinear product terms are
advanced in configuration space.
A second-order central difference technique on a uniform mesh is used for the
discretization in $z$ \citep{1986JFM...169...71D}.
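The two discretizations can be illustrated with a minimal stdlib sketch. Here a slow DFT stands in for the FFT, the 2/3-rule shown is one common truncation choice (HYPERION's isotropic two-dimensional truncation differs in detail), and boundary treatment in $z$ is omitted:

```python
import cmath
import math

def spectral_dx(field):
    """x-derivative of a real periodic field sampled on [0, 1) by
    Fourier collocation: DFT, multiply by 2*pi*i*k with modes beyond
    the truncation radius zeroed (dealiasing), inverse DFT."""
    n = len(field)
    kmax = n // 3                        # 2/3-rule truncation
    coeffs = [sum(field[j] * cmath.exp(-2j * math.pi * k * j / n)
                  for j in range(n)) / n for k in range(n)]
    deriv = []
    for j in range(n):
        val = 0.0 + 0.0j
        for k in range(n):
            kk = k if k <= n // 2 else k - n     # signed wavenumber
            if abs(kk) <= kmax:
                val += coeffs[k] * (2j * math.pi * kk) * \
                       cmath.exp(2j * math.pi * k * j / n)
        deriv.append(val.real)
    return deriv

def central_dz(field, dz):
    """Second-order central difference at interior points of a
    uniform z mesh (boundary treatment omitted in this sketch)."""
    return [(field[i + 1] - field[i - 1]) / (2.0 * dz)
            for i in range(1, len(field) - 1)]
```

Applied to $\sin 2\pi x$, the spectral derivative returns $2\pi\cos 2\pi x$ to machine precision, while the central difference is exact only for linear fields.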
Variables are advanced in time by a low-storage Runge-Kutta scheme. Several
options are available: two-step second-order,
three-step third-order, four-step third-order and five-step fourth-order
\citep{carpenter1994fourth}.
Results presented in this paper use the last option, as it permits the largest time
step.
Thermal conduction is advanced with second-order Super TimeStepping
\citep{2012MNRAS.422.2102M}.
HYPERION, which previously used only MPI for parallel execution, was modified for hybrid parallelization using a combination of OpenMP and MPI. The code retains its original MPI-only strategy of assigning groups of $x$--$y$ planes to each MPI rank by decomposing the three-dimensional simulation domain along the $z$ direction. This keeps all of the data needed for FFTs in the periodic $x$ and $y$ directions local to each MPI rank. Scalability of the original MPI-only code was limited, however, because the maximum number of MPI ranks that could be used in a given simulation could not exceed the number of $x$--$y$ planes in the domain. In the hybrid code, OpenMP multithreading is used to exploit parallel work within the groups of $x$--$y$ planes assigned to each MPI rank, for example by computing one-dimensional FFTs in the $x$--$y$ planes in parallel. This allows more CPU cores to be utilized than was possible with the MPI-only version and, for a fixed number of cores, reduces the overhead of MPI communication relative to the MPI-only code.
\subsection{Simulation rescaling}
The dimensionless numbers based on the physical parameters are given in Table 1.
Note that physical Lundquist and Reynolds numbers are far too large for
present day computations.
Consider, for example, case A which has a characteristic flow velocity given by
$V_* = 1.0\times 10^3$ m~s$^{-1}$ and a characteristic
Alfv\'en velocity given by $V_{A*} = 6.896\times 10^5$~m~s$^{-1}$.
We have a physical Reynolds number equal to:
\begin{equation}
R={V_*\over V_{A*}} S_v = 3.028\times 10^6
\end{equation}
and a physical magnetic Reynolds number equal to:
\begin{equation}
R_m={V_*\over V_{A*}} S = 3.905\times 10^6
\end{equation}
(here ${V_*/ V_{A*}} = M_A = 1.45\times 10^{-3}$ can be thought of as an Alfv\'en Mach number).
Rather than use these numerically unresolvable Reynolds numbers
we present the results obtained running the code with smaller Reynolds
numbers that can be used with the currently achievable numerical
resolution. For example in case~A they are $\tilde{R} = 50$ and
$\tilde{R}_m = (S/S_v)\tilde{R}= 64.51$,
with a horizontal resolution of $64^2$, i.e.,
for case~A we use
\begin{equation}
\tilde{S_v} = {\tilde{R}\over M_A} = 3.448 \times 10^4
\end{equation}
and
\begin{equation}
\tilde{S} = {\tilde{R}_m\over M_A} = 4.449 \times 10^4.
\end{equation}
These somewhat conservative values of the Reynolds numbers are chosen based on previous numerical simulations of
turbulent magnetofluids \citep{1989PhFlB...1.2153D}.
In order to keep the same relative efficiency of the radiative and conductive terms in the energy equation
as in the real corona, we have rescaled $Pr$ and $P_{rad}$ accordingly with the choice
of $\tilde{S_v}$, i.e., we set
\begin{equation}
\widetilde{Pr}~\tilde{S_v} = Pr~S_v
\end{equation}
and
\begin{equation}
\tilde{P}_{rad}~\tilde{S_v} = P_{rad}~S_v
\end{equation}
so that for case A:
$\widetilde{Pr} = 1.534 \times 10^3$
and
$\tilde{P}_{rad} = 4.444\times10^{-2}$.
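The rescaling chain for case~A can be summarized numerically (a sketch using the Table~\ref{tab:table 1} values; the variable names are ours):

```python
# Rescaling of case A: Pr*S_v and P_rad*S_v are kept fixed so the
# relative efficiency of conduction and radiation in the energy
# equation matches the physical (coronal) values.
M_A = 1.45e-3                  # Alfven Mach number V*/V_A*
S_V, S = 2.088e9, 2.694e9      # physical Lundquist numbers (Table 1)
PR, P_RAD = 2.533e-2, 7.339e-7

R_tilde = 50.0                          # target Reynolds number
Rm_tilde = (S / S_V) * R_tilde          # magnetic Reynolds number
Sv_tilde = R_tilde / M_A                # rescaled viscous Lundquist
S_tilde = Rm_tilde / M_A                # rescaled Lundquist
Pr_tilde = PR * S_V / Sv_tilde          # rescaled Prandtl number
Prad_tilde = P_RAD * S_V / Sv_tilde     # rescaled radiative Prandtl
```

These expressions reproduce the case~A row of Table~\ref{tab:table 2}.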
This rescaling is motivated by the result found in the RMHD model
\citep{2008ApJ...677.1348R}
that turbulent dissipative processes are independent of viscosity and
resistivity when an inertial range is well resolved.
The rescaled values, along with the numerical resolutions for all of the simulations, are given in Table 2.
These resolutions are smaller than our previous RMHD simulations.
However, the present simulations integrate more complex governing
equations, evolving eight different field components (number density, temperature,
the magnetic vector potential field and the velocity field) compared to only
two scalar fields in RMHD.
In addition, the density stratification from the upper chromosphere to the corona
constrains us, at present, to compute with a very small time step due to the large
variation in the Alfv\'en speed along the loop.
\section{Results} \label{sec:res}
We here discuss the transmission, storage and release of
energy in the simulated coronal loop.
At $t=0$ the system starts out in a ground state, defined by
the constant initial axial magnetic field $B_0 \mathbf{\hat{e}}_z$,
zero magnetic field fluctuations $\mathbf{b}$, while the
initial number density and temperature profiles are as described
in section~\ref{sec:grav}. The velocity field vanishes everywhere
initially except at the top and bottom boundaries ($z = \pm L_z/2$) as described
in section~\ref{sec:vel}.
Radiation and thermal conduction are ramped up linearly until they
attain their full values at $t = t^*$.
\subsection{Loop energization}
The random velocity fields at the top and bottom boundaries
twist the field lines in a disordered way (since the forcing velocity is
not symmetric), creating a magnetic field component $\mathbf{b}$
predominantly orthogonal to the DC magnetic field. Initially $\mathbf{b}$ evolves quasi-statically
thus growing linearly with time
\citep{2008ApJ...677.1348R, 2015arXiv150504370R}.
But as soon as the intensity of $\mathbf{b}$ grows beyond
a certain threshold, that depends on the loop parameters,
current sheets form on a fast ideal timescale, with their
width thinning down to the dissipative scale in about
an axial Alfv\'en transit time $\tau_A = L_z/V_A$
\citep{2013ApJ...773L...2R}.
Furthermore thinning current sheets have been recently
shown to be unstable to tearing modes with ``ideal'' (i.e.,
of the order of $\tau_A$) growth rates even for thicknesses
larger than Sweet-Parker \citep{2014ApJ...780L..19P}. Overall this implies that once
the field lines are twisted beyond a certain threshold, or equivalently
once the magnetic field intensity grows beyond a corresponding threshold,
the magnetic field is no longer in equilibrium and transitions on the
ideal timescale to a magnetically dominated MHD turbulence
regime, where magnetic fluctuations are stronger than velocity
fluctuations \citep{1996ApJ...457L.113E, 1997ApJ...484L..83D,
2011PhRvE..83f5401R}.
The work done by boundary motions on the magnetic field line
footpoints corresponds to a Poynting flux whose axial component
gives the energy flux entering the system from the $z$-boundaries
$S_z = B_0 \mathbf{u_s} \cdot \mathbf{b}$
\citep[e.g., see][]{2008ApJ...677.1348R},
where $\mathbf{u_s}$ is the velocity at the $z$-boundary and
$\mathbf{b}$ the magnetic field at the $z$-boundary.
Because the characteristic $z$-boundary velocity timescales
are much longer than the Alfv\'en transit time $\tau_A$,
initially $S_z$ grows linearly in time akin to $\mathbf{b}$.
But once the dynamics transition to a fully turbulent regime,
the system attains a statistically steady state in which
the Poynting flux is on average balanced by energy dissipation,
so that the velocity and magnetic field also saturate, fluctuating around
their mean values.
Figure~\ref{h2} shows the Joule heating and Poynting flux in dimensional form
as functions of time for case~A, integrated respectively over the entire volume and over
both $z$-boundary surfaces. Akin to our previous reduced MHD simulations
the Poynting flux exhibits large fluctuations about its average value.
This occurs because the Poynting flux contains the scalar product of the velocity at
the $z$-boundary $\mathbf{u_s}$, a given quantity, and the perpendicular
component of the magnetic field $\mathbf{b}$ that is determined by the
nonlinear turbulent dynamics of the system. This input energy flux is therefore also a
turbulent quantity with large fluctuations in time.
Note that because the $z$-boundary velocity field
changes only slowly in time, the correlation between the velocity and the magnetic
field at the $z$-boundaries is always strong so that the Poynting flux is
always positive, i.e., energy is never removed from the loop by the boundary motions
\cite[for a study of the correlation between boundary velocity
and magnetic field see][]{2010ApJ...722...65R}.
For the latter to occur, the $z$-boundary velocity field should change over
time-scales comparable to or faster than the Alfv\'en transit time along the
loop.
In addition, random forcing of the kind we employ is not conducive
to the formation of loop structures capable of storing a large amount of
energy. Our forcing does not inject into the loop a net magnetic helicity,
which is associated with inverse cascades and therefore with potentially
large energy storage. With our forcing, the injected energy is sufficient to power a
hot corona, but clearly not a major solar flare: most of the injected Poynting flux is
efficiently converted into thermal energy as well as kinetic energy (the remaining
injected energy persists as magnetic energy of the perturbed field).
Previous reduced MHD investigations have shown that the time-averaged
Poynting flux varies approximately quadratically with the strength
of the guide magnetic field ($B_0$)
\citep[e.g.,][]{2007ApJ...657L..47R, 2008ApJ...677.1348R}.
Figure~\ref{h2b} shows the Joule heating and Poynting flux in dimensional form
as functions of time for case~C, integrated respectively over the entire volume and over
both $z$-boundary surfaces. Recall that $B_0$ = 0.01 Tesla for Case A and
$B_0$=0.04 Tesla for Case C.
From Figure~\ref{h2} it is seen that the Poynting flux is of order
$5\times10^2$\,J~m$^{-2}$~s$^{-1}$ for Case A.
From Figure~\ref{h2b} it is seen that the Poynting flux is of order
$1\times10^4$\,J~m$^{-2}$~s$^{-1}$ for Case C.
Hence the Poynting flux increases by a factor of about twenty as the
guide magnetic field increases by a factor of four,
which within the error due to the short duration of the simulations
is consistent with a quadratic relation.
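As a quick arithmetic check (flux values are read off Figures~\ref{h2} and \ref{h2b}, so this is order-of-magnitude only):

```python
# Quadratic Poynting-flux scaling: a factor 4 increase in B_0
# (case A -> case C) should give roughly a factor 16 increase in
# flux; the measured factor from the figures is about 20.
flux_A = 5.0e2                      # J m^-2 s^-1, case A (Fig. h2)
flux_C = 1.0e4                      # J m^-2 s^-1, case C (Fig. h2b)
measured = flux_C / flux_A          # factor ~20
predicted = (0.04 / 0.01) ** 2      # quadratic scaling: 16
```

Given the short duration of the runs, a measured factor of $\sim$20 against a predicted 16 is consistent with the quadratic relation.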
Figure~\ref{h2} also shows the Joule heating as a function of time for case A.
It can be seen that the Joule heating is somewhat correlated with the
Poynting flux -- it exhibits the same pattern of relative maxima and minima but
with a time lag (in Case A this time lag is about 200
seconds). This lag represents the time it takes the energy to propagate into the loop and
for the appropriate magnetic structure, i.e., electric current sheets, to form and permit
dissipation. Similar remarks apply to Figure~\ref{h2b}, which considers Case~C.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb = 150 40 635 500]{figures/pfandja}
\caption{Poynting flux ($S_z$) and Joule heating (J) vs.\ time (t) for case A.
\label{h2}}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb = 150 40 635 500]{figures/pfandjc}
\caption{Poynting flux ($S_z$) and Joule heating (J) vs.\ time (t) for case C.
\label{h2b}}
\end{centering}
\end{figure}
\subsection{Three-dimensionality and intermittency}
The fluctuations seen in the Joule heating in Figure~\ref{h2}
are also evidence of temporal intermittency.
Although the numerical simulations presented here have a relatively
low spatial resolution they do present some level of intermittency.
As expected, and as shown for this problem in our previous reduced
MHD simulations, both temporal and spatial intermittency increase
at higher resolutions, i.e., with Reynolds number
\citep{2008ApJ...677.1348R, 2010ApJ...722...65R, 2013ApJ...771...76R}.
Evidence of spatial intermittency in the current density and temperature
was already shown in our previous fully compressible simulations \citep{2012A&A...544L..20D}.
It was found that the temperature is not uniform in space;
rather it strongly increases in and around electric current sheets, forming
similarly shaped spatial structures elongated in the direction
of the strong guide field $B_0 \mathbf{\hat{e}}_z$
\citep{2012A&A...544L..20D}.
Note that both temporal and spatial intermittency should increase as the Lundquist numbers increase.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb = 150 40 635 500]{figures/jandtmaxc}
\caption{Temperature maximum ($T_{max}$) and current maximum ($j_{max}$) {\it vs} time (t) for run C.
\label{h3}}
\end{centering}
\end{figure}
Our 3D compressible MHD simulations
allow exploration of some of the thermodynamic implications of this turbulent
and intermittent type of heating.
The coronal loop in our simulation is in a self-consistent state, energetically determined by
the balance between boundary-forcing, nonlinear dynamics, heating and cooling.
To this must be added the non-trivial caveat that the energy flux entering the system is not
determined simply by the $z$-boundary velocity $\mathbf{u_s}$,
but also by the nonlinear, turbulent dynamics developing in the loop.
This is a consequence of
the Poynting flux being given by the scalar product between the $z$-boundary
velocity and the orthogonal magnetic field component generated by the
nonlinear dynamics $S_z = B_0 \mathbf{u_s} \cdot \mathbf{b}$.
The heating is \emph{only} due to resistive and viscous dissipation
which happens at different locations at
different times where small scales are produced, i.e., within current sheets continuously
forming and disrupting.
The behavior of the volume-averaged quantities, such as kinetic and fluctuating
magnetic energies and resistive and viscous dissipation show a temporal behavior
similar to previous RMHD results
\citep[e.g.,][]{2007ApJ...657L..47R, 2008ApJ...677.1348R, 2010ApJ...722...65R}.
Fully compressible simulations with HYPERION show the time evolution of
the maximum electric current, as seen in Figure~\ref{h3} for case~C, which
already exhibits some fluctuations.
Figure~\ref{h3} also shows the maximum temperature $T_{max}$ as a function
of time, which correlates strongly with $j_{max}$. Though not shown here, this
correlation is seen to strengthen in simulations where the axial magnetic field strength is
increased. Indeed, increasing the axial field brings
our 3D MHD simulations closer in nature to the RMHD case.
\cite{2012A&A...544L..20D} showed that the temperature is
spatially structured, i.e., it is spatially intermittent.
Figure~\ref{h4} shows the $x$ and $y$ positions of $T_{max}$ in space for case~D
at selected times. It can be seen that $T_{max}$ wanders about,
observationally resulting in a changing radiation emission pattern
that can easily give the mistaken impression of an oscillating loop.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb= 140 40 610 500]{figures/txynd}
\caption{The $x$ and $y$ maximum temperature locations for run D.
The term $L_T$ denotes the location of the temperature maximum.
\label{h4}}
\end{centering}
\end{figure}
As seen in Figure~\ref{h1}, there is considerable variation in loop lengths
and magnetic field strengths in the solar corona.
Here we briefly consider the influence of the axial magnetic field strength
on our results.
Figure~\ref{h5} shows how the maximum temperature depends on the axial magnetic field
strength (cases A, B, and C).
It can be seen that the maximum temperature increases with the magnetic
field strength, with a slightly weaker than linear dependence on field strength.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb= 100 85 620 455]{figures/tmaxcomp1}
\caption{Comparison of maximum temperature versus time for cases A, B, and C.
\label{h5}}
\end{centering}
\end{figure}
How do the results change with horizontal ($x$ and $y$) resolution and Lundquist numbers?
Case D has twice the horizontal resolution as Case A.
In addition, the Lundquist numbers are doubled and the Prandtl
numbers are halved for Case D.
Figure~\ref{h6} shows how the maximum temperature depends on numerical resolution and
the Lundquist numbers (cases A and D).
Note that the RMS temperatures are not too different for the two cases, but the
temperature oscillations in the higher Lundquist number case are somewhat stronger.
Of course, as in all turbulent systems, a full understanding of the high
Reynolds number regime is nontrivial, and it will be investigated
in future work.
Nevertheless our previous reduced MHD simulations indicate
that dissipation rates and Poynting flux saturate at resolutions
of about $256^2 \times 128$, and as shown here maximum temperature
variation is weak at $128^2 \times 144$.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb = 100 85 620 455]{figures/tmaxcomp2}
\caption{Comparison of maximum temperature {\it vs} time for cases A and D.
\label{h6}}
\end{centering}
\end{figure}
\subsection{Effects of vertical ($z$) numerical resolution}
The energization and response of the system depend on gradients at the $z$ boundaries \citep{2011ApJS..194...26B, 2013ApJ...773...94}.
The most significant are the gradients of the magnetic vector potential, the temperature, and the number density.
The Poynting flux depends on the magnitude of the $x$ and $y$ magnetic fields.
In HYPERION these fields are computed as the curl of the magnetic vector potential.
For the $x$ and $y$ magnetic fields there is a component due to the $z$ gradients of the $y$ and $x$ magnetic vector potential,
hence the energization of the system depends on the accurate computation of these gradients.
In the same way the response of the system to heating depends on the evolution of thermodynamic gradients near the $z$ boundaries.
At first glance it might appear that we have under-resolved these gradients.
The scale height of the initial temperature [$T_i / (dT_i / dz)$] can be estimated in nondimensional terms from equation~\ref{eq:tempinit}.
At the $z$ boundaries the temperature scale length is found to be 0.00789 (31.56 km in dimensional terms).
For our system with $L_z = 12.5$ (or 50000 km), this is resolved using 1585 uniformly spaced mesh points.
Note that the initial number density scale height will be approximately the same.
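The resolution estimate quoted above can be rechecked with a few lines of arithmetic; the sketch below simply reproduces the numbers 0.00789, 12.5, and 50000 km taken from the text.

```python
import math

# Re-check of the z-resolution estimate quoted in the text.
# Values below are those quoted above (assumed exact as written).
L_T_nondim = 0.00789   # initial temperature scale length at the z boundary
L_z_nondim = 12.5      # nondimensional loop length
L_z_km = 50000.0       # dimensional loop length

# Dimensional scale length: 0.00789 * (50000 / 12.5) = 31.56 km
L_T_km = L_T_nondim * (L_z_km / L_z_nondim)

# Uniformly spaced mesh points needed so the grid spacing matches L_T:
# 12.5 / 0.00789 ~ 1584.3, rounded up to 1585, as quoted in the text.
n_points = math.ceil(L_z_nondim / L_T_nondim)

print(L_T_km)    # ~31.56
print(n_points)  # 1585
```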
We will determine {\it a~posteriori} how the $z$ resolution affects the energization and plasma response.
Case A will be used as the baseline. In these simulations all of the physical parameters are the same as in case A; only the $z$ resolution is changed.
Case A has 144 points in $z$, case E has 288 points, case F has 576 points and case G has 1620 points (sufficient to resolve the temperature and
number density scales at the boundaries).
All of these cases have a dimensional magnetic field of 0.01 Tesla, so the stiffening effect of the DC magnetic field is
weak relative to the other runs.
Case G was only simulated for approximately 1800 seconds to allow for the computation of synthetic emissions and an emission measure, to be
shown in a subsequent part of this paper.
A comparison of the Poynting flux for the simulations with different $z$ resolutions is shown
in Figure~\ref{pflux_aefg}. It can be seen that all of the cases oscillate about approximately the same average value in time.
In HYPERION the magnetic vector potential is advanced in time.
Recall that the Poynting flux depends on the perpendicular component of the magnetic field.
Hence the perpendicular magnetic field is a derived quantity -- in particular
it will depend on $z$ derivatives. The value of these derivatives will vary somewhat with the $z$ resolution. The nonlinearity of
the system is reflected in the temporal variability shown in Figure~\ref{pflux_aefg}.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb = 100 85 620 455]{figures/pflux_aefg}
\caption{Comparison of Poynting flux {\it vs} time for cases A, E, and F.
\label{pflux_aefg}}
\end{centering}
\end{figure}
A comparison of the maximum temperatures for the simulations with different $z$ resolutions is shown
in Figure~\ref{tmax_aefg}. The behavior here is similar to that exhibited by the Poynting flux. The maximum temperature
oscillates about approximately the same value for all of the cases, with some variability seen in the details of the fluctuations.
We conclude that, for the range of $z$ resolutions we have considered here, there is not a significant change in the numerical results.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth, bb = 100 85 620 455]{figures/tmax_aefg}
\caption{Comparison of maximum temperature {\it vs} time for cases A, E, and F.
\label{tmax_aefg}}
\end{centering}
\end{figure}
\subsection {Emission measure distribution}
\begin{figure*}
\center
\includegraphics[height=.65\textwidth]{figures/summary_c001_306.eps}
\includegraphics[height=.65\textwidth]{figures/summary_c002_610.eps}
\includegraphics[height=.65\textwidth]{figures/summary_c004_1220.eps}
\includegraphics[height=.65\textwidth]{figures/summary_c001a_306.eps}
\caption{ Temperature and density distributions along and across the
loops for all cases A through D.
\label{h7}}
\end{figure*}
\begin{figure*}
\centerline{\includegraphics[bb=0 26 595
350,clip,width=0.9\textwidth]{figures/sum_int_009.eps}}
\centerline{\includegraphics[bb=0 26 595
116,clip,width=0.9\textwidth]{figures/sum_int_020.eps}}
\centerline{\includegraphics[bb=0 26 595
116,clip,width=0.9\textwidth]{figures/sum_int_041.eps}}
\centerline{\includegraphics[bb=0 15 595
116,clip,width=0.9\textwidth]{figures/sum_int_009a.eps}}
\centerline{\includegraphics[bb=0 115 595
340,clip,width=0.9\textwidth]{figures/sum_int_009a.eps}}
\caption{Synthesized intensities along and across the simulation box for a subset of the spectral lines used for the emission measure analysis. Middle panels show intensities integrated along five voxels in the $z$ direction. Top and bottom are integrations along the full range in $y$. The same scaling and units apply to all of the panels.
\label{h9}}
\end{figure*}
Since an emission line is formed over a relatively narrow temperature
range, spectrally resolved observations can be used to infer the
temperature structure of the solar atmosphere. This is often achieved by
computing the differential emission measure distribution (DEM), which is
a solution to the equation
\begin{equation}
I_{i} = \frac{1}{4\pi}\int \epsilon_i(T)\xi(T)\,dT.
\end{equation}
Here $I_{i}$ and $\epsilon_i(T)$ are the intensity and emissivity of the
emission line. The emissivity includes all of the information specific
to the atomic transition. The quantity $\xi(T)$ is the DEM, which
describes the conditions in the solar atmosphere, and is written
\begin{equation}
\xi(T) = n_e^2\frac{ds}{dT},
\end{equation}
where $n_e$ is the electron density and $s$ is a coordinate along the
line of sight. Further details and an application to solar observations
can be found, for instance, in \cite{2008ApJ...686L.131W}.
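As an illustration of the forward problem defined by the two equations above, the following sketch synthesizes a single line intensity from an assumed DEM. The Gaussian emissivity and DEM shapes (and their magnitudes) are hypothetical placeholders; real emissivities come from an atomic database such as CHIANTI.

```python
import numpy as np

# Illustrative forward computation of
#   I = (1 / 4 pi) * integral of eps(T) * xi(T) dT,
# with made-up Gaussian shapes for the emissivity eps and the DEM xi.

logT = np.linspace(5.5, 7.0, 301)
T = 10.0 ** logT

eps = 1e-24 * np.exp(-0.5 * ((logT - 6.2) / 0.10) ** 2)  # hypothetical line
xi = 1e28 * np.exp(-0.5 * ((logT - 6.1) / 0.15) ** 2)    # hypothetical DEM

f = eps * xi
I = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T)) / (4.0 * np.pi)  # trapezoid rule
print(I > 0.0)  # True
```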
The density and temperature for each voxel, that is each element of the
simulation volume, were used
to calculate the intensity of EUV spectral
lines. We chose a set of 25 EUV lines ranging from $3\times10^5$ K to
$7\times10^6$ K in temperature of formation. With the exception of
\ion{Fe}{18} 974.86 \AA, the lines selected are all in the observed
wavelength range of the EIS instrument on board Hinode
\citep{2007SoPh..243...19C} and cover a variety of ionization stages of
Mg, Si, Fe, S, Ar and Ca. Data from the EIS instrument have been
routinely used to calculate emission measure distributions in different
coronal conditions. The \ion{Fe}{18} 974.86 \AA\ line was added to improve
the constraints on the high-temperature end and mimics the use of AIA
94\,\AA, which images \ion{Fe}{18} \citep{2012ApJ...759..141W}. The
emissivities for each line were calculated using the CHIANTI atomic
database \citep{1997A&AS..125..149D, 2013ApJ...763...86L} assuming
coronal abundances \citep{1992ApJS...81..387F} and the CHIANTI
ionization equilibrium tables.
Figure~\ref{h7} shows the density and temperature distributions along
and across sections of the simulation domain, for times from 1770 s to 1830 s,
for the simulated cases A through D. The top and bottom panels of Figure~\ref{h9} show
the synthesized intensities of a set of seven spectral lines integrated
along the direction perpendicular to
the loops' axis, similar to observing the loops side-on. The panels in the
center are the intensities of the loops' mid-section integrated along five
voxels in the $z$ direction. The integration times are 60 s in all four
cases.
The emitting volume selected for the EM exercise corresponds to the apex of the loops. To compute line intensities we integrate the emissivities over a region 1750 km wide centered at the mid-plane of the computational domain. The volume of integration corresponds therefore to a
$4000\times1750~\rm
km^2$ area on a hypothetical plane of the image, 4000 km deep, namely a
cross-section of about 5\arcsec, typical of loop observations in the
corona. The integrated intensities in each spectral line, with an assumed
uncertainty of 20\%, serve as input
to a Differential Emission Measure calculation algorithm. We used the
Monte Carlo Markov Chain (MCMC) code \citep{1998ApJ...503..450K},
applied in the manner described in \cite{2012ApJ...759..141W}. The MCMC
algorithm calculates multiple (250) solutions with perturbed values for
the intensities, providing an estimate of the error in the EM
distribution calculation.
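The error-estimation idea can be sketched as follows. This toy version replaces the actual MCMC inversion with a simple non-negative-clipped least-squares solve, and the response matrix and "true" EM are random placeholders; only the structure (250 realizations, 20\% intensity perturbations) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the MCMC error estimate described above: perturb the
# line intensities by their assumed 20% uncertainty and re-invert for the
# EM in each realization. A clipped least-squares solve replaces the MCMC.

n_lines, n_bins = 25, 12
K = rng.random((n_lines, n_bins))          # hypothetical response matrix
em_true = rng.random(n_bins)               # hypothetical "true" EM
I_obs = K @ em_true

solutions = []
for _ in range(250):                       # 250 perturbed solutions, as in the text
    I_pert = I_obs * (1 + 0.2 * rng.standard_normal(n_lines))
    em, *_ = np.linalg.lstsq(K, I_pert, rcond=None)
    solutions.append(np.clip(em, 0, None))

spread = np.std(solutions, axis=0)         # per-bin error estimate
print(spread.shape)  # (12,)
```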
\begin{figure}
\centerline{\includegraphics[width=1.\columnwidth]{figures/dem_sum_int_009.ps}}
\centerline{\includegraphics[width=1.\columnwidth]{figures/dem_sum_int_020.ps}}
\centerline{\includegraphics[width=1.\columnwidth]{figures/dem_sum_int_041.ps}}
\centerline{\includegraphics[width=1.\columnwidth]{figures/eis_l1_20100619_014433.roi02.dem.ps}}
\caption{\label{h11}Emission measure distributions in the mid-section
(loop apex) for cases A, B, and C. The bottom panel shows the DEM
computed using observed intensities from the region shown in Figure~\ref{h13}.}
\end{figure}
Figure~\ref{h11} shows the EM solutions for the A, B and C runs, where the red
line corresponds to the best-fit solution. The plot also shows, for
context, color-coded lines representing the emission measure loci for
the different atomic species of the EIS spectral lines. They illustrate
the range of temperature dependent emission measures compatible with the
intensity of a particular spectral line. A set of intensities at
different temperatures constrain the EM distribution compatible with the
complete dataset.
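A minimal sketch of how such emission measure loci are constructed, assuming illustrative Gaussian emissivities and invented intensities (real loci would use CHIANTI emissivities and the observed line intensities):

```python
import numpy as np

# Emission measure loci: for each line, the isothermal EM that would
# reproduce the observed intensity if all plasma were at temperature T,
#   EM_loci(T) = 4 pi I_i / eps_i(T).
# Shapes and values below are illustrative placeholders.

logT = np.linspace(5.8, 6.8, 201)

def emissivity(logT0, width=0.12):
    return 1e-24 * np.exp(-0.5 * ((logT - logT0) / width) ** 2)

lines = {"Fe XII": (6.2, 250.0), "Fe XV": (6.35, 180.0)}  # (peak logT, I_i)
loci = {name: 4 * np.pi * I / emissivity(peak)
        for name, (peak, I) in lines.items()}

# Each locus is an upper envelope: the true EM curve must lie below it.
print(all(np.all(v > 0) for v in loci.values()))  # True
```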
The computed emission measure distributions are qualitatively similar to
the emission measure distributions computed from observed intensities,
with a characteristic Gaussian-like distribution in the 1--4 MK
temperature range. The weighted mean temperatures for the three
cases are, respectively, $\log T = 6.00$, 6.13, and 6.22. These are characteristic
temperatures for the spectral windows (e.g., \ion{Fe}{12}) and filter
bandpasses (171 \AA, 195 \AA) where we observe a significant fraction of
the loop emission in the corona and are consistent with the peak
emission measure temperatures of some of these loops
\citep{2008ApJ...686L.131W}. For comparison, the bottom panel of Figure~\ref{h11} shows
the EM distribution for a sample area in active region NOAA
11082 (Figure~\ref{h13}), illustrating the similarity of the simulated
emission measures to those of regions of the corona.
The EM distributions are perhaps the easiest way to compare the simulations
with the observations. The simulated intensities for individual spectral lines
can differ from typical observed values by factors of 2--10. Such line-by-line
comparisons are beyond the scope of this work, but will be considered in the future.
Temperatures can be higher at the core of
active regions, with emission measure distributions peaking at $\sim$4
MK and exhibiting asymmetric profiles with a steeper drop in the high
temperature end \citep{2012ApJ...759..141W}.
\begin{figure}
\centerline{
\includegraphics[width=1.\columnwidth]{figures/eis_l1_20100619_014433.roi.eps}
}
\caption{\label{h13}NOAA 11082 as seen by HMI, AIA and EIS on June 19, 2010. The box marks the region
of integration for emission measure analysis at the bottom of Fig.~\ref{h11}.}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=1.\columnwidth]{figures/dems_comparison_5.eps}
}
\caption{\label{h12}A comparison of the true emission measure
and the best fit solution from MCMC for case A (Figure~\ref{h11}).
Low and high resolution runs (cases A, D and G) exhibit
nearly identical emission measures.}
\end{figure}
Figure~\ref{h12} shows that the emission measure analysis is able to
recover the true line-of-sight emission measure, that is, the true density
distribution as a function of temperature in the volume of integration ($\sum_i n_{e_i}^2 V_i/\mathrm{Area}_{\rm int}$).
This is applicable for the four cases A through D which are shown. The figure also
demonstrates that we do not find significant differences in the EM distribution
between the low and high resolution runs.
\section{Discussion and conclusions}
In this paper we have examined the dynamics of a coronal loop threaded by a
strong axial magnetic field, where the field line footpoints
are advected by random motions.
Consistent with previous two-dimensional and 3D reduced MHD simulations
\citep{1996ApJ...457L.113E, 1997ApJ...484L..83D, 2003PhPl...10.3584D,
2007ApJ...657L..47R, 2008ApJ...677.1348R, 2011PhRvE..83f5401R},
and our previous fully compressible work \citep{2012A&A...544L..20D},
the loop dynamics are found to be nonlinear, with a turbulent
MHD cascade transporting the energy injected by boundary motions
at large perpendicular scales to smaller scales. This leads to
the formation of approximately field-aligned current sheets
where energy is dissipated and around which temperature strongly increases,
with small scales mainly in planes perpendicular to the axial magnetic field,
along which both current and temperature structures are elongated.
These small scales are not uniformly distributed in the loop; rather, the dynamics become
increasingly intermittent, both in space and in time, at higher Lundquist numbers.
Localized electric current sheets continuously form and disrupt, leading to
localized heating of the plasma on short time scales.
Our results show that the loop is the site of a continuous occurrence of reconnection
events that present observations are unable to resolve both spatially and temporally.
In the presence of a strong guide field magnetic reconnection
occurs at the X-points of the orthogonal magnetic field
component, leading to a continuous change
of connectivity of the field lines that cross the reconnection sites (``interchange'')
where the heating occurs \citep{2007ApJ...662L.119S}.
These many sub-resolution ``heating'' events add up to produce the
observed emission, giving the impression at larger (observational) scales
of a continuous diffuse heating.
What is called ``coronal heating'' is actually the superposition of all events due to localized
energy deposition along the subsequent different field lines that cross the reconnection sites,
at the many current sheets elongated along the axial direction present in the
loop volume at any given time
\citep[for a visualization of such current sheets, see, e.g.,][]{2008ApJ...677.1348R}.
Clearly the heating deposited along ``strands'' (small elemental flux tubes)
is much smaller than the total dissipated power in the heating peaks shown
in Figures~2--5, which is of the order of $10^{15}$ to $10^{16}$ Watts with a duration
of about 1000~s. This suggests that the energy released along each
strand is reasonably expected to be much smaller than $10^{16}$~J,
which is $\sim 10^{-9}$ times the typical energy released in a flare.
We expect the energy deposited along strands in typical heating events
to exhibit a distribution with a peak at energy smaller than $10^{16}$~J,
and plan to investigate more in depth the energy release mechanism
and statistics in future work.
Recall that in our calculations we have used values of resistivity and viscosity that are much
larger than the real ones. In the real Sun even smaller spatial and temporal scales are
attained leading to even smaller energies being involved in each event. Evidence for this
has been seen in RMHD calculations \citep{2008ApJ...677.1348R, 2013ApJ...773L...2R}.
Considering that the values of resistivity and viscosity we adopt are much higher
than the solar values, we expect the energy release in each event to be much smaller than that of a nanoflare.
We will study this point in detail with higher-resolution simulations in future work.
We have employed an emission measure analysis to investigate whether the simulated
intensities of the computational loops are representative of plasma in the corona and
find great similarities both in peak temperature and distribution.
We find that the simulated intensities and corresponding emission measures are in excellent
agreement and they are accurate representations of the true emission measures.
\cite{2012ApJ...758...54T},
looking into 3D simulations of active regions, found that this method can be inaccurate
when structures with significantly different density overlap along the same line-of-sight.
The temperatures, which increase as the value of the
axial magnetic field increases, are characteristic of warm loop
structures visible in EUV channels.
The loop is found to be a multi-temperature structure with isolated regions at temperatures
of several million degrees and most of the loop at much lower temperatures.
For each case presented in the previous sections the emission measure retains the same
form for the entire hour of the computation in spite of the strong spatial and temporal intermittency.
In this paper we have adopted a Cartesian model of a coronal loop.
The random motions at the boundaries shuffle the magnetic footpoints such
that there is no ordered twisting of the field-lines.
This random twisting does not facilitate the formation of magnetic structures
that can store large amounts of energy.
Rather the system reaches a statistically steady state where integrated
physical quantities (magnetic energy, Poynting flux, dissipation rates and
radiative losses) fluctuate around their time-average values, so that
the injected energy per unit time is entirely dissipated on the average
(i.e., considering a long enough time interval).
In a Cartesian model the only way to store a large amount of energy, that can
subsequently give rise to larger magnetic energy release events, is to apply
a spatially isolated and symmetric $z$-boundary velocity.
For instance, a vortex with intensity
stronger than surrounding $z$-boundary motions can twist the coronal magnetic
field lines quasi-statically, thus storing magnetic energy, until a kink instability
develops \citep{2013ApJ...771...76R}. Similarly, an isolated $z$-boundary vortex,
even in the presence of strong nonlinear dynamics in the corona, can store energy
at large spatial scales via an inverse cascade of energy, with subsequent
energy release events in the micro-flare range \citep{2013ApJ...771...76R}.
Similarly also $z$-boundary shear motions, isolated or stronger than
surrounding motions, can store a large amount of energy as sheared
magnetic field lines that can subsequently be released impulsively
\citep{2005ApJ...622.1191D, 2009ApJ...704.1059D, 2010ApJ...722...65R}.
We want to stress the fact that the amount of energy entering from the footpoints is an \emph{outcome} of
the simulation, since this energy depends on $B_\perp$, which cannot be specified as a
boundary condition. This means that the loop's \emph{nonlinear} dynamics
determines how much energy can be injected into the system and that the
``heating function" cannot be assigned a priori. Furthermore the almost perfect correlation
between $T_{max}$ and $J_{max}$, confirming that the peaks in temperature are due to the
local enhancement of the current, shows that the heating is due to local phenomena. These
phenomena are the result of the complex perpendicular dynamics driven by $z$-boundary
motions, which induce a local increase of the heating, which in this framework is due
exclusively to magnetic reconnection. Most of the dissipation occurs within localized current
sheets which disrupt rapidly on Alfv\'en time-scales when their perpendicular size decreases
to the smallest spatial scale present in our simulation. It is interesting to note the good
correlation between the behavior of the Poynting flux and the energy dissipation. The two
curves are very similar but shifted in time, which means that when the system admits a larger
average energy flux from the two bases, current starts piling up locally, leading to an increase
in the total dissipated energy and to the formation and disruption of localized current sheets. The
average Poynting flux depends on the length of the loop.
The time averaged flux is of the order of $1\times10^4$\,J~m$^{-2}$~s$^{-1}$
for the hotter loop (case C) and $5\times10^2$\,J~m$^{-2}$~s$^{-1}$ for the
cooler one (case A).
The Poynting flux thus increases almost quadratically with magnetic guide field intensity
as in previous reduced MHD studies \citep{2008ApJ...677.1348R}.
As already mentioned, the resolution of our simulations is coarse compared with the real
scales present in the corona, and consequently we are using values of resistivity and viscosity
much higher than the real ones, since the latter are unachievable with present computers.
We have verified that doubling the numerical
resolution, and therefore halving the resistivity and viscosity, changes the
significant results only weakly (as expected).
Previous reduced MHD simulations have shown that total dissipation
rates and Poynting flux increase with increasing resolutions, saturating at resolutions of about
$256^2 \times 128$. In a fully developed turbulent regime dissipation rates are not expected
to depend on the Reynolds numbers beyond a certain threshold.
Previous simulations suggest that the resolutions adopted in this paper
are below but not too far from such a threshold
\citep{2008ApJ...677.1348R, 2010ApJ...722...65R}, confirming
that the results presented here for the radiative losses are realistic.
The challenging investigation of the dynamics in the high
Reynolds number regime, and of their impact on observations, which is expected to increase intermittency effects and ultimately to require kinetic calculations at the dissipation scale, is left to dedicated future work.
Most phenomenological studies of coronal heating have so far concentrated on the thermodynamic response
of the coronal plasma
using one-dimensional hydrodynamic loop models with a prescribed heating function.
These models have been in common use in coronal physics for
over thirty years, and have been fundamental in providing a basic
framework for a wide variety of coronal
phenomena, including loop temperature distributions
\citep{1978ApJ...220..643R}, prominence formation \citep{1982ApJ...254..349O}
and, more germane to this paper, coronal heating
\citep{2014LRSP...11....4R,2006SoPh..234...41K,2004ApJ...605..911C}.
In the one-dimensional model heating is often represented as a constant
\citep[see, e.g.,][]{1981A&A....97...27C, 1982A&A...105L...1T}.
It also can be generalized to be a function of some combination of mass density,
temperature and thermal pressure. In the latter cases the heating can be both
spatially and temporally dependent.
A particular case of interest is that of impulsive heating, in which the heating function is turned
on for some span of time, and then shut off to allow the loop to evolve to a new equilibrium.
The heating can be localized in the photosphere or appear as bursts in the corona.
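As a concrete (hypothetical) example, a 1D heating function of the kind described above, a constant background plus a spatially localized impulsive burst, might be sketched as follows; all names and magnitudes are illustrative, not taken from any specific model.

```python
import math

# Illustrative 1D heating function: constant background H0 plus an
# impulsive Gaussian burst of amplitude H1, centered at loop coordinate
# s0, switched on only during the interval [t_on, t_off].

def heating(s, t, H0=1e-4, H1=1e-2, s0=0.0, w=5e6,
            t_on=100.0, t_off=300.0):
    burst = H1 * math.exp(-((s - s0) / w) ** 2) if t_on <= t <= t_off else 0.0
    return H0 + burst  # illustrative magnitudes only

print(heating(0.0, 200.0) > heating(0.0, 400.0))  # True: burst is on at t=200
```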
The main limitation of one-dimensional models
is that whatever functional dependence is chosen, the heating remains an
{\it ad hoc} function and the main task of the researcher is to see which of these
dependencies provides the best fit to observations. The chosen functional dependence is
thought to lead to some understanding of which mechanism heats the corona,
but the link with coronal heating theories and models remains
essentially undetermined.
For instance, in the case of the Parker model investigated in this paper, it is not
at all obvious what heating function in 1D models would best represent it.
One might be tempted, for example, to use a heating function varying very slowly with time,
since photospheric motions have a very low frequency compared to the
fast Alfv\'en crossing time. But previous reduced MHD and our simulations
show that the system develops turbulent dynamics, with the timescale
of the system strongly decreasing at smaller scales
\citep{2007ApJ...657L..47R, 2008ApJ...677.1348R, 2011PhRvE..83f5401R}.
This is independently confirmed by the recent finding that in such
systems current sheets form on very fast ideal timescales, with
their thickness thinning at least exponentially and reaching the
dissipative scale in about an Alfv\'en crossing time \citep{2013ApJ...773L...2R}.
Such thinning current sheets have also been shown to be
unstable to tearing modes with ideal growth rates \citep{2014ApJ...780L..19P},
with the formation of many magnetic islands and X-points
and the complex dynamics of so-called super-tearing or plasmoid instability
\citep{bss78, bisk86, 2007PhPl...14j0703L, 2008PhRvL.100w5001L} ensuing.
Determining the equivalent heating function
for 1D simulations from this framework of coronal heating is therefore
a complex task. In particular, such a heating function for the Parker model has never been investigated, and therefore
1D hydrodynamic models have \emph{de facto} not been
able to test it \citep{2015TESS....120308K}.
As observational evidence has accumulated that many loops are not isothermal, it has
become apparent that coronal loops cannot be modeled using a single flux tube
\citep{2010ApJ...723.1180S}.
The narrow temperature distributions \citep{2008ApJ...686L.131W} and their transient
nature \citep{2009ApJ...695..642U} point to multiple structures and coherence.
In an effort to account for these observations, refined multi-strand models
\citep[e.g.,][]{2009ASPC..415..221K} have been developed, in which an
ensemble of one dimensional loops is assembled in an attempt to construct a
three-dimensional loop. Our numerical simulations give strong support
to a multi-temperature coronal loop structure whose specific temperature
distribution is likely to depend on the loop parameters, similar
to the Emission Measure Distribution shown in Figure~10,
that we plan to further investigate in future work.
It is important to emphasize that,
as far as the thermodynamics is concerned, we are solving the same equations that are used in a reduced form in one-dimensional models. Looking at the large differences in
temperature appearing in the 2D plots in the mid-plane, it is easy to understand that the
temperature profiles along different field lines originating at different points of the mid-plane
can differ substantially, since for all field lines the temperature at the footpoints is
$10^4$ K. No single field line can be considered representative of what happens in the loop.
The limitation to one spatial dimension would leave out the self-consistent nonlinear
dynamics with the most significant energy transfers, responsible for the formation
of current sheets and thus the energy deposition at small scales, occurring in the
perpendicular directions. Additionally for energy to be transferred from the magnetic
field to the plasma magnetic reconnection must occur,
hence magnetic field lines are constantly being broken and reconnected
\citep{2007ApJ...662L.119S} strongly impacting the energy
distribution along different strands.
\acknowledgments
We thank an anonymous referee for helpful remarks.
We thank J. P. Dahlburg and J. M. Laming for helpful conversations.
This research was supported in part by NRL 6.2 funds, the NASA SR\&T program,
and by NASA through subcontracts with the Jet Propulsion Laboratory, California
Institute of Technology.
Computational resources supporting this
work were provided by LCP\&FD, and in part by the DOD HPCMP.
AIA and HMI data are courtesy of NASA/SDO and the AIA and HMI science teams.
Hinode is a Japanese mission developed and launched by ISAS/JAXA,
with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is
operated by these agencies in co-operation with ESA and NSC (Norway).
\section{Introduction}\label{sec:1}
In this paper we study optimal investment problems for a financial market
model with memory. This market model $\mathcal{M}$ consists of $n$ risky
and one riskless assets. The price of the riskless
asset is denoted by $S_0(t)$ and that of the $i$th risky asset by $S_i(t)$.
We put $S(t)=(S_1(t),\dots,S_n(t))'$, where $A'$ denotes the transpose of a
matrix $A$. The dynamics of the $\mathbf{R}^n$-valued process
$S(t)$ are described by the stochastic differential
equation
\begin{equation}
\begin{split}
dS_i(t)&=S_i(t)\left[\mu_i(t) dt
+\sum\nolimits_{j=1}^n\sigma_{ij}(t)dY_j(t)\right],
\quad t\ge 0, \\
\quad S_i(0)&=s_i,\qquad\qquad\qquad\qquad\qquad\qquad\qquad i=1,\dots,n,
\end{split}
\label{eq:1.1}
\end{equation}
while those of $S_0(t)$ by the ordinary differential equation
\begin{equation}
dS_0(t)=r(t)S_0(t)dt,\quad t\ge 0,\quad S_0(0)=1,
\label{eq:1.2}
\end{equation}
where the coefficients $r(t)\ge 0$, $\mu_i(t)$, and
$\sigma_{ij}(t)$ are continuous deterministic functions on
$[0,\infty)$ and the initial prices $s_i$ are positive constants.
We assume that the $n\times n$ volatility matrix
$\sigma(t)=(\sigma_{ij}(t))_{1\le i,j\le n}$ is nonsingular for $t\ge 0$.
The major feature of the model $\mathcal{M}$ is the $\mathbf{R}^n$-valued
driving noise process $Y(t)=(Y_1(t),\dots,Y_n(t))'$ which has memory.
We define the $j$th component $Y_j(t)$ by the autoregressive type equation
\begin{equation}
\frac{dY_j(t)}{dt}=-\int_{-\infty }^{t}p_je^{-q_j(t-s)}\frac{dY_j(s)}{ds}ds
+\frac{dW_j(t)}{dt},\quad t\in\mathbf{R},\quad Y_j(0)=0,
\label{eq:1.3}
\end{equation}
where $W(t)=(W_1(t),\dots,W_n(t))'$, $t\in\mathbf{R}$, is an
$\mathbf{R}^n$-valued standard Brownian motion defined on a complete
probability
space $(\Omega,\mathcal{F},P)$, the derivatives $dY_j(t)/dt$ and $dW_j(t)/dt$
are in the random distribution sense, and $p_j$'s and $q_j$'s are
constants such that
\begin{equation}
0<q_j<\infty,\quad -q_j<p_j<\infty,\quad j=1,\dots,n
\label{eq:1.4}
\end{equation}
(cf.\ Anh and Inoue \cite{AI}).
Equivalently, we may define $Y_j(t)$ by the moving-average type
representation
\begin{equation}
Y_j(t)=W_j(t)-\int_{0}^{t}\left[ \int_{-\infty }^{s}p_je^{-(q_j+p_j)(s-u)}
dW_j(u)\right]
ds,\quad t\in\mathbf{R}
\label{eq:1.5}
\end{equation}
(see \cite[Examples 2.12 and 2.14]{AI}).
The components $Y_j(t)$, $j=1,\dots,n$, are Gaussian processes with
stationary increments
that are independent of each other. Each $Y_j(t)$ has short
memory that is described by the two parameters $p_j$ and $q_j$.
In the special case $p_j=0$,
$Y_j(t)$ reduces to the Brownian motion $W_j(t)$.
Driving noise processes with short or long memory of this kind
are considered in \cite{AI}, Anh et al.\ \cite{AIK} and
Inoue et al.\ \cite{INA}, for the case $n=1$.
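For readers who want to experiment numerically, the moving-average representation (\ref{eq:1.5}) can be simulated by noting that the inner integral $Z_j(s)=\int_{-\infty}^s p_j e^{-(q_j+p_j)(s-u)}\,dW_j(u)$ is an Ornstein--Uhlenbeck process, so that $dY_j = dW_j - Z_j\,dt$ with the same Brownian increments. The Euler--Maruyama sketch below uses that observation, with arbitrarily chosen parameters satisfying (\ref{eq:1.4}).

```python
import numpy as np

# Euler-Maruyama sketch of one component of the driving noise (1.5):
#   dZ = -(q+p) Z dt + p dW   (OU process; Z(0) drawn from its stationary law)
#   dY = dW - Z dt            (same Brownian increments dW)
# Setting p = 0 recovers Y = W, i.e. plain Brownian motion.

rng = np.random.default_rng(1)
p, q = 0.5, 1.0            # parameters satisfying (1.4): q > 0, p > -q
dt, n = 1e-3, 10_000

Z = rng.standard_normal() * p / np.sqrt(2 * (q + p))  # stationary std of Z
Y = 0.0
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal()
    Y += dW - Z * dt
    Z += -(q + p) * Z * dt + p * dW

print(np.isfinite(Y))  # True
```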
We define
\[
\mathcal{F}_t:=\sigma\left(\sigma(Y(s): 0\le s\le t)\cup\mathcal{N}\right),
\quad t\ge 0,
\]
where $\mathcal{N}$ denotes the collection of $P$-null sets in $\mathcal{F}$.
This filtration $(\mathcal{F}_t)_{t\ge 0}$ is the underlying
information structure of the market model $\mathcal{M}$.
From (\ref{eq:1.5}), we can easily show that
$(Y(t))_{t\ge 0}$ is a semimartingale with respect to
$(\mathcal{F}_t)$ (cf.\ \cite[Section 3]{AI}).
In particular, we can interpret the stochastic differential equation
(\ref{eq:1.1}) in the usual sense. In actual calculations, however, we need
explicit semimartingale representations of $Y(t)$.
It should be noticed that (\ref{eq:1.5}) is not a semimartingale
representation of $Y(t)$ (except in the special case $p_j=0$),
since $W_j(t)$ involves the information of $Y_j(s)$ with $s<0$
and vice versa.
The following two kinds of semimartingale representations of
$Y(t)$ are obtained in \cite[Example 5.3]{AIK} and \cite[Theorem 2.1]{INA},
respectively:
\begin{align}
Y_j(t)&=B_j(t)-\int_0^t\left[\int_0^s k_j(s,u)dY_j(u)\right]ds,
\quad t\ge 0,\quad j=1,\dots,n,
\label{eq:1.6} \\
Y_j(t)&=B_j(t)-\int_0^t\left[\int_0^s l_j(s,u)dB_j(u)\right]ds,
\quad t\ge 0,\quad j=1,\dots,n,
\label{eq:1.7}
\end{align}
where, for $j=1,\dots,n$, $(B_j(t))_{t\ge 0}$ is
the so-called \textit{innovation process\/}, i.e., an
$\mathbf{R}$-valued standard Brownian motion such that
$$
\sigma(Y_j(s): 0\le s\le t)=\sigma(B_j(s): 0\le s\le t),
\quad t\ge 0.
$$
Notice that $B_j$'s are independent of each other.
The point of \eqref{eq:1.6} and \eqref{eq:1.7} is that
the deterministic kernels $k_j(t,s)$ and $l_j(t,s)$ are
given explicitly by
\begin{align}
&k_j(t,s)=p_j(2q_j+p_j)\frac{(2q_j+p_j)e^{q_js}
-p_je^{-q_js}}{(2q_j+p_j)^2e^{q_jt}-p_j^2e^{-q_jt}},
\quad 0\le s\le t,
\label{eq:1.8}\\
&l_j(t,s)=e^{-(p_j+q_j)(t-s)}l_j(s), \quad 0\le s\le t,
\end{align}
with
\begin{equation}
l_j(s):=p_j\left[1-\frac{2p_jq_j}{(2q_j+p_j)^2e^{2q_js}-p_j^2}\right],
\quad s\ge 0.
\label{eq:1.10}
\end{equation}
We have the equalities
\begin{equation}
\label{eq:1.11}
\int_0^tk_j(t,s)dY_j(s)=\int_0^tl_j(t,s)dB_j(s), \quad t\ge 0,
\quad j=1,\dots,n.
\end{equation}
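As a quick numerical sanity check (illustrative and not part of the paper), the deterministic kernels \eqref{eq:1.8}--\eqref{eq:1.10} can be coded directly; one diagonal agreement to expect, consistent with the identity \eqref{eq:1.11}, is $k_j(t,t)=l_j(t,t)=l_j(t)$, and both kernels vanish when $p_j=0$:

```python
import math

def k(t, s, p, q):
    # innovation kernel k_j(t,s) from (1.8)
    A = 2.0 * q + p
    return p * A * (A * math.exp(q * s) - p * math.exp(-q * s)) / (
        A * A * math.exp(q * t) - p * p * math.exp(-q * t))

def l_bar(s, p, q):
    # l_j(s) from (1.10)
    A = 2.0 * q + p
    return p * (1.0 - 2.0 * p * q / (A * A * math.exp(2.0 * q * s) - p * p))

def l(t, s, p, q):
    # l_j(t,s) = e^{-(p+q)(t-s)} l_j(s)
    return math.exp(-(p + q) * (t - s)) * l_bar(s, p, q)
```

For instance, with $p_j=q_j=1$ one gets $l_j(0)=1-2/8=0.75$, and $l_j(s)$ increases towards $p_j=1$ exponentially fast.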
Many authors consider financial market models in which the standard driving
noise, that is, Brownian motion, is replaced by a different one, such as
fractional Brownian motion, so that the model can capture
\textit{memory effect}.
To name some related contributions, let us mention here
Comte and Renault \cite{CR1,CR2}, Rogers \cite{R}, Heyde \cite{He},
Willinger et al.\ \cite{WTT}, Barndorff-Nielsen and Shephard \cite{BaS},
Barndorff-Nielsen et al.\ \cite{BaNS}, Hu and {\O}ksendal \cite{HO},
Hu et al.\ \cite{HOS}, Elliott and van der Hoek \cite{EV}, and
Heyde and Leonenko \cite{HeL}.
In most of these references, the driving noise processes are assumed to have
stationary increments, since this is a natural simplifying requirement.
Among such models, the above model $\mathcal{M}$, driven by the Gaussian
process $Y(t)$ with {\it stationary increments\/}, is possibly
the simplest one. One advantage of $\mathcal{M}$ is that, by the
semimartingale representations (\ref{eq:1.6}) and (\ref{eq:1.7}) of $Y(t)$,
it admits {\it explicit calculations\/} in problems such as those
considered in
this paper. Another advantageous feature of the model $\mathcal{M}$ is
that, assuming $\sigma_{ij}(t)=\sigma_{ij}$,
real constants, we can easily estimate the characteristic parameters
$p_j$, $q_j$ and $\sigma_{ij}$ from stock price data.
We consider this parameter estimation in Appendix C.
For the market model $\mathcal{M}$, we consider an agent who
has initial endowment $x\in (0,\infty)$ and invests
$\pi_i(t)X^{x,\pi}(t)$ dollars in the $i$th risky asset for
$i=1,\dots,n$ and $[1-\sum_{i=1}^{n}\pi_i(t)]X^{x,\pi}(t)$ dollars in
the riskless asset at each time $t$, where $X^{x,\pi}(t)$ denotes the agent's
wealth at time $t$. The wealth process $X^{x,\pi}(t)$ is governed by the
stochastic differential equation
\begin{equation}
\frac{dX^{x,\pi}(t)}{X^{x,\pi}(t)}
=\left[1-\sum\nolimits_{i=1}^n\pi_i(t)\right]\frac{dS_0(t)}{S_0(t)}
+ \sum\nolimits_{i=1}^{n}\pi_i(t)\frac{dS_i(t)}{S_i(t)}, \quad
X^{x,\pi}(0)=x.
\label{eq:1.12}
\end{equation}
Here, we choose the self-financing strategy
$\pi(t)=(\pi_1(t),\dots,\pi_n(t))'$ from the admissible class
\[
\mathcal{A}_T :=\left\{\pi=(\pi(t))_{0\le t\le T}:
\begin{split}
&\mbox{$\pi$ is an $\mathbf{R}^n$-valued, progressively measurable} \\
&\mbox{process satisfying $\int_{0}^{T}\Vert\pi(t)\Vert^2dt<\infty$ a.s.}
\end{split}
\right\}
\]
for the finite time horizon of length $T\in (0,\infty)$,
where $\Vert\cdot\Vert$ denotes the Euclidean norm of $\mathbf{R}^n$.
If the time horizon is infinite, we choose $\pi(t)$ from the class
\[
\mathcal{A}:=\left\{(\pi(t))_{t\ge 0}: (\pi(t))_{0\le t\le T}
\in\mathcal{A}_T\mbox{ for every }T\in (0,\infty)\right\}.
\]
Let $\alpha\in (-\infty,1)\setminus\{0\}$ and $c\in\mathbf{R}$.
In this paper, we consider the following three optimal investment
problems for the model $\mathcal{M}$:
\begin{align}
&V(T,\alpha):=\sup_{\pi\in\mathcal{A}_T}
\frac{1}{\alpha}E\left[(X^{x,\pi}(T))^{\alpha}\right],
\tag{\textbf{P1}}\\
&J(\alpha):=\sup_{\pi\in\mathcal{A}} \limsup_{T\to\infty}
\frac{1}{\alpha T}\log E\left[(X^{x,\pi}(T))^{\alpha}\right],
\tag{\textbf{P2}}\\
&I(c):=\sup_{\pi\in\mathcal{A}} \limsup_{T\to\infty}\frac{1}{T}
\log P\left[X^{x,\pi}(T)\ge e^{cT}\right].
\tag{\textbf{P3}}
\end{align}
The goal of Problem P1 is to maximize the expected utility of
wealth at the end of the finite horizon.
This classical optimal investment problem dates back to Merton \cite{Me}.
We refer to Karatzas and
Shreve \cite{KS} and references therein for work on this and related problems.
In Hu et al.\ \cite{HOS}, this problem is solved
for a Black--Scholes type model driven by fractional Brownian motion.
In Section \ref{sec:2}, assuming $p_j\ge 0$ for $j=1,\dots,n$,
we explicitly solve this problem for the model $\mathcal{M}$.
Our approach is based on a Cameron--Martin
type formula which we prove in Appendix A. This formula holds under the
assumption that a relevant Riccati type equation has a solution, and the key
step of our arguments is to show the existence of such a solution
(Lemma \ref{lem:2.1}).
The aim of Problem P2 is to maximize the growth rate of
expected utility of wealth over the infinite horizon.
This problem is studied by Bielecki and Pliska
\cite{BP}, and subsequently by other authors under various settings,
including Fleming and Sheu \cite{FS1,FS2}, Kuroda and Nagai \cite{KN},
Pham \cite{P1,P2}, Nagai and Peng \cite{NP}, Hata and Iida \cite{HI},
and Hata and Sekine \cite{HS1,HS2}.
In Section \ref{sec:3}, we solve Problem P2 for the model $\mathcal{M}$ by
verifying that a candidate optimal strategy, suggested by the
solution to Problem P1, is actually optimal. In so doing,
existence results on solutions to
Riccati type equations (Lemmas \ref{lem:2.1} and \ref{lem:3.5})
play a key role as in Problem P1.
The result of Nagai and Peng \cite{NP} on the
asymptotic behavior of solutions to Riccati equations,
which we review in Appendix B, is also an essential ingredient in
our arguments.
The purpose of Problem P3 is to maximize the large deviation probability that
the wealth grows at a higher rate than
the given benchmark $c$.
This problem is studied by Pham
\cite{P1,P2}, where a significant result, namely, a duality relation
between Problems P2 and P3, is established.
Subsequently, this problem is studied by Hata and Iida
\cite{HI} and Hata and Sekine \cite{HS1,HS2} under different settings.
In Section \ref{sec:4}, we solve Problem P3 for the market
model $\mathcal{M}$. In the approach of \cite{P1,P2},
one needs an explicit expression of $J(\alpha)$.
Since our solution to Problem P2 is explicit, we can solve Problem
P3 for $\mathcal{M}$ using this approach.
As in \cite{P1,P2}, our solution to Problem P3 is given in the form of a
sequence of nearly optimal strategies.
For $c<\bar{c}$ with a certain constant $\bar{c}$,
an optimal strategy, rather than such a nearly optimal sequence,
is obtained by ergodic arguments.
\section{Optimal investment over the finite horizon}\label{sec:2}
In this section, we consider the finite horizon optimization
problem P1 for the market model $\mathcal{M}$.
Throughout this section, we assume
$\alpha\in (-\infty,1)\setminus\{0\}$ and
\begin{equation}
0<q_j<\infty,\quad 0\le p_j<\infty,\quad j=1,\dots,n.
\label{eq:2.1}
\end{equation}
Thus $p_j\ge 0$ rather than $p_j>-q_j$ (see Remark \ref{rem:2.6} below).
Let $Y(t)=(Y_1(t),\dots,Y_n(t))'$ and $B(t)=(B_1(t),\dots,B_n(t))'$ be
the driving noise and innovation processes,
respectively, described in Section \ref{sec:1}.
We define an $\mathbf{R}^n$-valued deterministic function
$\lambda(t)=(\lambda_1(t),\dots,\lambda_n(t))'$ by
\begin{equation}
\lambda(t)
:=\sigma^{-1}(t)\left[\mu(t)-r(t)\mathbf{1}\right],
\quad t\ge 0,
\label{eq:2.2}
\end{equation}
where $\mathbf{1}:=(1,\dots,1)'\in\mathbf{R}^n$.
For the kernels $k_j(t,s)$'s in \eqref{eq:1.8}, we put
\[
k(t,s):=\mathrm{diag}(k_1(t,s),\dots,k_n(t,s)), \quad 0\le s\le t.
\]
We denote by $\xi(t)=(\xi_1(t),\dots,\xi_n(t))'$ the $\mathbf{R}^{n}$-valued
process $\int_0^tk(t,s)dY(s)$,
i.e.,
\begin{equation}
\label{eq:2.3}
\xi_j(t):=\int_0^tk_j(t,s)dY_j(s), \quad t\ge 0,\quad j=1,\dots,n.
\end{equation}
By \eqref{eq:1.1}, \eqref{eq:1.2}, \eqref{eq:1.6}, and \eqref{eq:1.12},
the wealth process $X^{x,\pi}(t)$ evolves according to
\[
\frac{dX^{x,\pi}(t)}{X^{x,\pi}(t)}
=r(t)dt+\pi'(t)\sigma(t)\left[\lambda(t)-\xi(t)\right]dt
+\pi'(t)\sigma(t)dB(t),\quad t\ge 0,
\]
whence, by the It\^{o} formula, we have, for $t\ge 0$,
\begin{equation}
\begin{split}
X^{x,\pi}(t)
&=x\exp\left[\int_0^t \left\{r(s) + \pi'(s)\sigma(s)
\left(\lambda(s)-\xi(s)\right)-
\frac{1}{2}\Vert\sigma'(s)\pi(s)\Vert^2 \right\}ds\right. \\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+ \left.\int_0^t \pi'(s)\sigma(s)dB(s)\right].
\end{split}
\label{eq:2.4}
\end{equation}
We define an $\mathbf{R}$-valued process $Z(t)$ by
\[
Z(t):=\exp\left[-\int_0^t\left\{\lambda(s)-\xi(s)\right\}'dB(s)
-\frac{1}{2}\int_0^t\left\Vert\lambda(s)
-\xi(s)\right\Vert^2ds\right], \quad t\ge 0.
\]
Since $\lambda(t)-\xi(t)$ is a continuous Gaussian
process, the process $Z(t)$ is a $P$-martingale
(see, e.g., Example 3(a) in Liptser and Shiryayev \cite[Section 6.2]{LS}).
We define the $\mathbf{R}$-valued process $(\Gamma(t))_{0\le t\le T}$ by
\[
\Gamma(t):=E\left[\left.Z^{\beta}(T)\right|\mathcal{F}_t\right],
\quad 0\le t\le T,
\]
where $\beta$ is the
conjugate exponent of $\alpha$, i.e.,
\[
(1/\alpha)+(1/\beta)=1.
\]
Notice that $0<\beta<1$ (resp.\ $-\infty<\beta<0$) if
$-\infty<\alpha<0$ (resp.\ $0<\alpha<1$).
In view of Theorem 7.6 in Karatzas and Shreve \cite[Chapter 3]{KS},
to solve Problem P1, we only have to derive a stochastic integral
representation for $\Gamma(t)$.
We define an $\mathbf{R}$-valued $P$-martingale $K(t)$ by
\[
K(t):=\exp\left[-\beta\int_0^t \{\lambda(s)-\xi(s)\}'dB(s)
-\frac{\beta^2}{2}\int_0^t \Vert\lambda(s)-\xi(s)\Vert^2ds\right],
\quad t\ge 0.
\]
Then, by Bayes' rule, we have
\[
\begin{split}
\Gamma(t)&=E\left[\left. K(T)\exp\left\{-\frac{1}{2}\beta(1-\beta)
\int_0^T \Vert\lambda(s)-\xi(s)\Vert^2ds\right\}\right|
\mathcal{F}_t\right] \\
&= K(t)\bar{E}\left[\left.\exp\left\{-\frac{1}{2}\beta(1-\beta)
\int_0^T \Vert\lambda(s)-\xi(s)\Vert^2ds\right\}\right|
\mathcal{F}_t\right]
\end{split}
\]
for $t\in [0,T]$, where $\bar{E}$ stands for the expectation with respect to
the probability measure $\bar{P}$ on $(\Omega,\mathcal{F}_T)$ such that
$d\bar{P}/dP=K(T)$.
Thus
\begin{equation}
\begin{split}
\Gamma(t)&= Z^{\beta}(t)\exp\left\{-\frac{1}{2}\beta
(1-\beta)\int_t^T \Vert\lambda(s)\Vert^2ds\right\}\\
&\quad\times \bar{E}\left[\left.\exp\left\{-\frac{1}{2}\beta
(1-\beta)\int_t^T \left(\Vert\xi(s)\Vert^2-2\lambda'(s)\xi(s)\right)ds
\right\}\right|\mathcal{F}_t\right].
\end{split}
\label{eq:2.5}
\end{equation}
We are to apply Theorem \ref{thm:A1} in Appendix A to \eqref{eq:2.5}.
By \eqref{eq:1.11}, the dynamics of $\xi(t)$ are described by
the $n$-dimensional stochastic differential equation
\begin{equation}
d\xi(t)=-(p+q)\xi(t)dt+l(t)dB(t),\quad t\ge 0,
\label{eq:2.6}
\end{equation}
where $p:=\mathrm{diag}(p_1,\dots,p_n)$,
$q:=\mathrm{diag}(q_1,\dots,q_n)$, and
$l(t):=\mathrm{diag}(l_1(t),\dots,l_n(t))$
with $l_j(t)$'s as in (\ref{eq:1.10}).
Write $\bar{B}(t):=B(t)+\beta\int_0^t[\lambda(s)-\xi(s)]ds$ for
$t\in [0, T]$. Then $\bar{B}(t)$ is an $\mathbf{R}^n$-valued standard Brownian
motion under $\bar{P}$. By \eqref{eq:2.6}, the process $\xi(t)$ evolves
according to
\begin{equation}
d\xi(t)=\left[\rho(t)+b(t)\xi(t)\right]dt+l(t)d\bar{B}(t),
\quad t\ge 0,
\label{eq:2.7}
\end{equation}
where $\rho(t)=(\rho_1(t),\dots,\rho_n(t))'$,
$b(t)=\mathrm{diag}(b_1(t),\dots,b_n(t))$ with
\begin{align}
&\rho_j(t):=-\beta l_j(t)\lambda_j(t), \quad\quad t\ge 0,\quad j=1,\dots,n,
\label{eq:2.8}\\
&b_j(t):=-(p_j+q_j)+\beta l_j(t), \quad t\ge 0,\quad j=1,\dots,n.
\label{eq:2.9}
\end{align}
By Theorem \ref{thm:A1} in Appendix A, we are led to consider
the following one-dimensional backward Riccati equations:
for $j=1,\dots,n$
\begin{equation}
\dot{R}_j(t)-l_j^2(t)R_j^2(t)+2b_j(t)R_j(t)+\beta(1-\beta)=0,
\quad 0\le t\le T,\quad R_j(T)=0.
\label{eq:2.10}
\end{equation}
The following lemma, especially (iii), is crucial in our arguments.
\begin{lem}\label{lem:2.1}Let $j\in\{1,\dots,n\}$.
\begin{enumerate}
\item If $p_j=0$, then {\rm \eqref{eq:2.10}} has a unique
solution $R_j(t)\equiv R_j(t;T)$.
\item If $-\infty<\alpha<0$, then
{\rm \eqref{eq:2.10}} has a unique nonnegative solution $R_j(t)\equiv
R_j(t;T)$.
\item If $p_j>0$ and $0<\alpha<1$, then
{\rm (\ref{eq:2.10})} has a unique solution $R_j(t)\equiv R_j(t;T)$ such that
$R_j(t)\ge b_{j}(t)/l_{j}^2(t)$ for $t\in [0,T]$.
\end{enumerate}
\end{lem}
\begin{proof}(i)\ If $p_j=0$, then (\ref{eq:2.10}) is linear,
whence it has a unique solution.
(ii)\ If $-\infty<\alpha<0$, then $\beta(1-\beta)>0$, so that,
by the well-known result on Riccati
equations (see, e.g., Fleming and Rishel \cite[Theorem 5.2]{FR} and
Liptser and Shiryayev \cite[Theorem 10.2]{LS}),
(\ref{eq:2.10}) has a unique nonnegative solution.
(iii)\ When $p_j>0$ and $0<\alpha<1$, write
\begin{equation}
a_{1}(t):=l_j^2(t),\quad
a_{2}(t):=b_j(t),\quad
a_3:=\beta(1-\beta),\quad t\ge 0.
\label{eq:2.11}
\end{equation}
Then the equation for $P(t):=R_j(t)-[a_{2}(t)/a_{1}(t)]$ becomes
\begin{equation}
\dot{P}(t)-a_{1}(t)P^2(t)+a_{4}(t)=0,\quad 0\le t\le T,
\label{eq:2.12}
\end{equation}
where
\[
a_{4}(t):=\frac{a_{2}^2(t)+a_{1}(t)a_3}{a_{1}(t)}
+\frac{d}{dt}\left[\frac{a_{2}(t)}{a_{1}(t)}\right].
\]
Since $dl_j(t)/dt>0$ and $\beta<0$, we see that
\[
\frac{d}{dt}\left[\frac{a_{2}(t)}{a_{1}(t)}\right]
=\frac{2(p_j+q_j)-\beta l_j(t)}{l_j(t)^3}\cdot\frac{dl_j}{dt}(t)>0.
\]
We write $a_{2}^2(t)+a_{1}(t)a_3$ as
\[
(1-\beta)\left[(p_j+q_j)^2-\{(p_j+q_j)-l_j(t)\}^2\right]
+[(p_j+q_j)-l_j(t)]^2,
\]
which is positive since $0\le l_j(t)\le p_j$.
Thus $a_{4}(t)>0$, so that
(\ref{eq:2.12}) has a unique nonnegative solution $P(t)\equiv P(t;T)$.
The desired solution to (\ref{eq:2.10})
is given by $R_j(t)=P(t)+[a_{2}(t)/a_{1}(t)]$.
\end{proof}
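Lemma \ref{lem:2.1} can also be checked numerically. The following sketch (illustrative values $p_j=q_j=1$, $\alpha=-1$, $T=10$; not from the paper) integrates the backward Riccati equation \eqref{eq:2.10} with classical RK4; the computed solution is nonnegative, in accordance with case (ii), and away from both endpoints it is close to the larger root of the limiting quadratic equation (cf.\ \eqref{eq:3.8} and Proposition \ref{prop:3.1} below):

```python
import math

def solve_riccati(p, q, alpha, T, n_steps=20000):
    """Integrate the backward Riccati equation (2.10),
    dR/dt = l^2 R^2 - 2 b R - beta(1-beta), R(T) = 0,
    from t = T down to t = 0 with RK4; returns the grid of R values
    (index 0 corresponds to t = 0)."""
    beta = alpha / (alpha - 1.0)          # conjugate exponent of alpha
    A = 2.0 * q + p
    lf = lambda t: p * (1.0 - 2.0 * p * q / (A * A * math.exp(2.0 * q * t) - p * p))
    bf = lambda t: -(p + q) + beta * lf(t)
    c = beta * (1.0 - beta)
    f = lambda t, R: lf(t) ** 2 * R * R - 2.0 * bf(t) * R - c
    h = T / n_steps
    R, t, out = 0.0, T, [0.0]
    for _ in range(n_steps):
        k1 = f(t, R)
        k2 = f(t - h / 2, R - h / 2 * k1)
        k3 = f(t - h / 2, R - h / 2 * k2)
        k4 = f(t - h, R - h * k3)
        R -= h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t -= h
        out.append(R)
    out.reverse()
    return out
```

With these values, $\beta=1/2$, $\bar{b}_j=-3/2$, and the larger root of $x^2+3x-1/4=0$ is $(-3+\sqrt{10})/2\approx 0.0811$, which $R_j(t;T)$ approaches in the middle of the horizon.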
In what follows, we write $R_j(t)\equiv R_j(t;T)$ for the unique solution to
(\ref{eq:2.10}) in the sense of Lemma \ref{lem:2.1}.
Then $R(t):=\mathrm{diag}(R_1(t),\dots,R_n(t))$ satisfies
the backward matrix Riccati equation
\begin{equation}
\begin{split}
&\dot{R}(t)-R(t)l^2(t)R(t)+b(t)R(t)+R(t)b(t)+ \beta(1-\beta)I_n=0,\quad
0\le t\le T,\\
&R(T)=0,
\end{split}
\label{eq:2.13}
\end{equation}
where $I_n$ denotes the $n\times n$ unit matrix.
For $j=1,\dots,n$, let $v_j(t)\equiv v_j(t;T)$ be the solution to the
following one-dimensional linear equation:
\begin{equation}
\begin{split}
&\dot{v}_j(t)+[b_j(t)-l_j^2(t)R_j(t;T)]v_j(t)+
\beta(1-\beta)\lambda_j(t)-R_j(t;T)\rho_j(t)=0,\\
&\quad0\le t\le T,\quad v_j(T)=0.
\end{split}
\label{eq:2.14}
\end{equation}
Then $v(t)\equiv v(t;T):=(v_1(t;T),\dots,v_n(t;T))'$ satisfies
the matrix equation
\begin{equation}
\begin{split}
&\dot{v}(t)+[b(t)-l^2(t)R(t;T)]v(t)+
\beta(1-\beta)\lambda(t)-R(t;T)\rho(t)=0,\\
&\quad0\le t\le T,\quad v(T)=0.
\end{split}
\label{eq:2.15}
\end{equation}
We put, for $j=1,\dots,n$ and $(t,T)\in \Delta$,
\begin{equation}
g_j(t;T):= v_j^2(t;T)l_j^2(t)+2\rho_j(t)v_j(t;T)-l_j^2(t)R_j(t;T)
-\beta(1-\beta)\lambda_j^2(t),
\label{eq:2.16}
\end{equation}
where
\begin{equation}
\Delta:=\{(t,T): 0<T<\infty,\ 0\le t\le T\}.
\label{eq:2.17}
\end{equation}
We are now ready to give the desired representation for $\Gamma(t)$.
\begin{prop}\label{prop:2.2}
Write
\begin{equation}
\psi(t):=\Gamma(t)\left[-\beta\lambda(t)+
\{\beta-l(t)R(t;T)\}\xi(t)+l(t)v(t;T)\right],
\quad 0\le t\le T.
\label{eq:2.18}
\end{equation}
Then, for $t\in [0,T]$, we have
$\Gamma(t)=\Gamma(0)+\int_0^t\psi'(s)dB(s)$ with
\begin{equation}
\Gamma(0)
=\exp\left[\frac{1}{2}\int_0^T \sum\nolimits_{j=1}^{n} g_j(s;T)ds\right].
\label{eq:2.19}
\end{equation}
\end{prop}
\begin{proof}
It follows from \eqref{eq:2.5}, \eqref{eq:2.7}, \eqref{eq:2.13},
\eqref{eq:2.15} and Theorem \ref{thm:A1} that
\begin{equation}
\Gamma(t)=Z^{\beta}(t)\exp\left[\sum_{j=1}^n
\left\{v_j(t)\xi_j(t)-\frac{1}{2}\xi_j^2(t)R_j(t)
+\frac{1}{2}\int_t^T g_j(s;T)ds\right\}\right].
\label{eq:2.20}
\end{equation}
The equality (\ref{eq:2.19}) follows from this.
A straightforward calculation based on \eqref{eq:2.20}, \eqref{eq:2.6} and
the It\^{o} formula gives $d\Gamma(t)=\psi'(t)dB(t)$,
where $\psi(t)$ is as in \eqref{eq:2.18}.
Thus the proposition follows.
\end{proof}
Recall that we have assumed $\alpha\in (-\infty,1)\setminus\{0\}$ and
\eqref{eq:2.1}. Here is the solution to Problem P1.
\begin{thm}
\label{thm:2.3}
For $T\in (0,\infty)$, the strategy
$(\hat{\pi}_T(t))_{0\le t\le T}\in\mathcal{A}_T$ defined by
\begin{equation}
\hat{\pi}_T(t):=(\sigma')^{-1}(t)
\left[(1-\beta)\{\lambda(t)-\xi(t)\}-l(t)R(t;T)\xi(t)
+l(t)v(t;T)\right]
\label{eq:2.21}
\end{equation}
is the unique optimal strategy for Problem P1.
The value function $V(T)\equiv V(T,\alpha)$ in {\rm (P1)} is given by
\begin{equation}
V(T)=\frac{1}{\alpha}[xS_0(T)]^{\alpha}
\exp\left[\frac{(1-\alpha)}{2}
\sum\nolimits_{j=1}^{n}\int_0^T g_j(t;T)dt\right].
\label{eq:2.22}
\end{equation}
\end{thm}
\begin{proof}
By Theorem 7.6 in Karatzas and Shreve \cite[Chapter 3]{KS},
the unique optimal strategy $\pi_T(t)$ for Problem P1 is given by
\[
\pi_T(t):=(\sigma')^{-1}(t)
\left[\Gamma^{-1}(t)\psi(t)+\lambda(t)-\xi(t)\right],
\quad 0\le t\le T,
\]
which, by (\ref{eq:2.18}), is equal to $\hat{\pi}_T(t)$. Thus
the first assertion follows. By the same theorem in
\cite{KS},
$V(T)=\alpha^{-1}[xS_0(T)]^{\alpha}\Gamma^{1-\alpha}(0)$.
This and (\ref{eq:2.19}) give (\ref{eq:2.22}).
\end{proof}
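The two backward equations \eqref{eq:2.10} and \eqref{eq:2.14} entering the optimal strategy \eqref{eq:2.21} can be integrated jointly. The sketch below (scalar case $n=1$, constant $\lambda$; the parameter values are illustrative, not from the paper) does this with RK4; note that $\lambda\equiv 0$ forces $\rho_j\equiv 0$ and hence $v_j\equiv 0$ in \eqref{eq:2.14}:

```python
import math

def solve_R_v(p, q, alpha, lam, T, n_steps=20000):
    """Backward RK4 integration of the coupled equations (2.10) and (2.14)
    in the scalar case n = 1 with constant lambda; returns the grids of
    R and v (index 0 corresponds to t = 0)."""
    beta = alpha / (alpha - 1.0)
    A = 2.0 * q + p
    lf = lambda t: p * (1.0 - 2.0 * p * q / (A * A * math.exp(2.0 * q * t) - p * p))
    bf = lambda t: -(p + q) + beta * lf(t)
    def f(t, R, v):
        lt, bt = lf(t), bf(t)
        rho = -beta * lt * lam                                   # (2.8)
        dR = lt * lt * R * R - 2.0 * bt * R - beta * (1.0 - beta)
        dv = -(bt - lt * lt * R) * v - beta * (1.0 - beta) * lam + R * rho
        return dR, dv
    h = T / n_steps
    R, v, t = 0.0, 0.0, T
    Rs, vs = [0.0], [0.0]
    for _ in range(n_steps):
        kR1, kv1 = f(t, R, v)
        kR2, kv2 = f(t - h / 2, R - h / 2 * kR1, v - h / 2 * kv1)
        kR3, kv3 = f(t - h / 2, R - h / 2 * kR2, v - h / 2 * kv2)
        kR4, kv4 = f(t - h, R - h * kR3, v - h * kv3)
        R -= h / 6 * (kR1 + 2 * kR2 + 2 * kR3 + kR4)
        v -= h / 6 * (kv1 + 2 * kv2 + 2 * kv3 + kv4)
        t -= h
        Rs.append(R); vs.append(v)
    Rs.reverse(); vs.reverse()
    return Rs, vs
```

Away from both endpoints, $v_j(t;T)$ settles near the constant $\bar{v}_j$ of \eqref{eq:3.9} below, in line with Proposition \ref{prop:3.2}.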
\begin{rem}
\label{rem:2.4}
We can regard $\xi(t)=\int_0^tk(t,s)dY(s)$, which is the only
random term on the right-hand side of \eqref{eq:2.21},
as representing the memory effect.
To illustrate this point, suppose that $(\sigma_{ij}(t))$ is
a constant matrix.
Then, by \eqref{eq:C2} in Appendix C, we can express $Y(t)$, whence $\xi(t)$,
in terms of the past prices $S(u)$, $u\in [0,t]$, of the risky assets.
\end{rem}
\begin{rem}
\label{rem:2.5}
From \cite[Theorem 7.6]{KS}, we also find that
\[
X^{x,\hat{\pi}_T}(t)=x\frac{S_0(t)\Gamma(t)}{Z(t)\Gamma(0)},
\quad 0\le t\le T.
\]
\end{rem}
\begin{rem}
\label{rem:2.6}
We assume \eqref{eq:2.1} to ensure the existence of a
solution to \eqref{eq:2.10} for $j=1,\dots,n$.
Under the weaker assumption \eqref{eq:1.4},
we could show by a different argument that, for $j=1,\dots,n$,
\eqref{eq:2.10} has a solution
if $\alpha\in(-\infty, \bar{\alpha}_j)\setminus\{0\}$, where
$\bar{\alpha}_j\in (0,1]$ is defined by
\[
\bar{\alpha}_j:=\left\{
\begin{split}
&1 \quad\mbox{if}\quad 0\le p_j<\infty, \\
&\frac{(p_j+q_j)^2}{l_j^2(0)+q_j^2} \quad\mbox{if}\quad -q_j<p_j<0.
\end{split}
\right.
\]
From this, we see that
the same result as Theorem \ref{thm:2.3} holds under \eqref{eq:1.4} if
$-\infty<\alpha<\bar{\alpha}$, $\alpha\ne 0$, where
$\bar{\alpha}:=\min\{\bar{\alpha}_j: j=1,\dots,n\}$.
However, we did not succeed in extending the result to the
most general case $-\infty<\alpha<1$, $\alpha\ne 0$.
Such an extension, if possible, would lead us to the solution of
Problem P3 under \eqref{eq:1.4} (see Remark \ref{rem:3.8}).
\end{rem}
\section{Optimal investment over the infinite horizon}\label{sec:3}
In this section, we consider the infinite horizon optimization problem P2
for the financial market model $\mathcal{M}$.
Throughout this section, we assume \eqref{eq:2.1} and the following two
conditions:
\begin{align}
&\lim_{T\to\infty}\frac{1}{T}\int_0^T r(t)dt=\bar{r}\quad
\mbox{with}\;\; \bar{r}\in [0,\infty),
\label{eq:3.1}\\
&\lim_{t\to\infty}\lambda(t)=\bar{\lambda}\quad
\mbox{with}\;\; \bar{\lambda}=(\bar{\lambda}_1,\dots,\bar{\lambda}_n)'\in
\mathbf{R}^n.
\label{eq:3.2}
\end{align}
Here recall $\lambda(t)=(\lambda_1(t),\dots,\lambda_n(t))'$ from
(\ref{eq:2.2}). In the main result of this section (Theorem \ref{thm:3.4}),
we will also assume
$\alpha^*<\alpha<1$, $\alpha\ne 0$, where
\begin{equation}
\alpha^*:=\max(\alpha_1^*,\dots,\alpha_n^*)
\label{eq:3.3}
\end{equation}
with
\begin{equation}
\alpha^*_j:=\left\{
\begin{split}
&-\infty \quad\mbox{if}\quad 0\le p_j\le 2q_j, \\
&-3-\frac{8q_j}{p_j-2q_j} \quad\mbox{if}\quad 2q_j<p_j<\infty.
\end{split}
\right.
\label{eq:3.4}
\end{equation}
Notice that $\alpha^*\in [-\infty,-3)$.
To give the solution to Problem P2, we take the following steps:
\begin{enumerate}
\item For the value function $V(T)\equiv V(T,\alpha)$ in (P1), we calculate
the following limit explicitly:
\begin{equation}
\tilde{J}(\alpha):=\lim_{T\to\infty}\frac{1}{\alpha T}\log [\alpha V(T)].
\label{eq:3.5}
\end{equation}
\item For $\hat{\pi}\in\mathcal{A}$ in \eqref{eq:3.14} below,
we calculate the growth rate
\begin{equation}
J^*(\alpha)
:=\lim_{T\to\infty}
\frac{1}{\alpha T}\log E\left[(X^{x,\hat{\pi}}(T))^{\alpha}\right],
\label{eq:3.6}
\end{equation}
and verify that $J^*(\alpha)=\tilde{J}(\alpha)$.
\item Since the definition of $V(T)$ implies
\begin{equation}
\limsup_{T\to\infty}\frac{1}{\alpha T}\log E[(X^{x,\pi}(T))^{\alpha}]
\le \tilde{J}(\alpha),\quad \forall\pi\in\mathcal{A},
\label{eq:3.7}
\end{equation}
we conclude that $\hat{\pi}$ is an optimal strategy for Problem P2 and that
the optimal growth rate $J(\alpha)$ in (P2) is given by
$J(\alpha)=J^*(\alpha)=\tilde{J}(\alpha)$.
\end{enumerate}
Let $\alpha\in (-\infty,1)\setminus\{0\}$ and $\beta$ be its conjugate
exponent as in Section \ref{sec:2}.
For $j=1,\dots,n$, recall $b_j(t)$ from (\ref{eq:2.9}).
We have $\lim_{t\to\infty}b_j(t)=\bar{b}_j$, where
\[
\bar{b}_j:=-(1-\beta)p_j-q_j.
\]
Notice that $\bar{b}_j<0$. We consider the equation
\begin{equation}
p_j^2x^2-2\bar{b}_jx-\beta(1-\beta)=0.
\label{eq:3.8}
\end{equation}
When $p_j=0$, we write $\bar{R}_j$
for the unique solution $\beta(1-\beta)/(2q_j)$ of this linear
equation. If $p_j>0$, then
\[
\bar{b}_j^2+\beta(1-\beta)p_j^2=(1-\beta)[(p_j+q_j)^2-q_j^2]+q_j^2\ge q_j^2>0,
\]
so that we may write $\bar{R}_j$ for the larger solution to the quadratic
equation \eqref{eq:3.8}. Let
\[
K_j:=\sqrt{\bar{b}_j^2+\beta(1-\beta)p_j^2}.
\]
Then $\bar{b}_j-p_j^2\bar{R}_j=-K_j<0$.
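For concreteness (an illustrative numerical aside, not part of the paper), $\bar{R}_j$ and $K_j$ can be computed directly from the definitions; the larger root of \eqref{eq:3.8} is $\bar{R}_j=(\bar{b}_j+K_j)/p_j^2$ when $p_j>0$:

```python
import math

def rbar_K(p, q, alpha):
    """Rbar_j (the larger root of the quadratic (3.8), or the unique root
    of the linear equation when p = 0) and K_j = sqrt(bbar^2 + beta(1-beta) p^2)."""
    beta = alpha / (alpha - 1.0)
    bbar = -(1.0 - beta) * p - q
    K = math.sqrt(bbar * bbar + beta * (1.0 - beta) * p * p)
    if p == 0.0:
        return beta * (1.0 - beta) / (2.0 * q), K
    return (bbar + K) / (p * p), K
```

One can then verify numerically that $\bar{R}_j$ solves \eqref{eq:3.8} and that $\bar{b}_j-p_j^2\bar{R}_j=-K_j$.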
As in Section \ref{sec:2},
we write $R_j(t)\equiv R_j(t;T)$ for the unique solution to
(\ref{eq:2.10}) in the sense of Lemma \ref{lem:2.1}.
Recall $\Delta$ from (\ref{eq:2.17}).
The next proposition provides the necessary results on the asymptotic
behavior of $R_j(t;T)$.
\begin{prop}\label{prop:3.1}
Let $-\infty<\alpha<1$, $\alpha\ne 0$, and
$j\in\{1,\dots,n\}$. Then
\begin{enumerate}
\item $R_j(t;T)$ is bounded in $\Delta$.
\item $\lim_{T-t\to\infty,\ t\to\infty}R_j(t;T)=\bar{R}_j$.
\item For $\delta, \epsilon\in (0,\infty)$ such that $\delta + \epsilon<1$,
\[
\lim_{T\to\infty}\sup_{\delta T\le t\le (1-\epsilon)T}
|R_j(t;T)-\bar{R}_j|=0.
\]
\end{enumerate}
\end{prop}
\begin{proof}
If $p_j=0$, then $l_j(t)=0$ and $b_j(t)=-q_j<0$ for $t\ge 0$,
so that the assertions follow from Theorem \ref{thm:B3} in Appendix B.
We assume $p_j>0$.
Since
\[
\vert l_j(t)-p_j\vert\le \frac{p_j^2}{2(p_j+q_j)}e^{-2q_jt}, \quad t\ge 0,
\]
the function $l_j(t)$ converges to $p_j$ exponentially fast as $t\to\infty$.
Hence the coefficients of the equation (\ref{eq:2.10}) converge to
their counterparts in (\ref{eq:3.8}) exponentially fast, too.
If $-\infty<\alpha<0$, then the desired
assertions follow from Theorem \ref{thm:B1} in Appendix B
(due to Nagai and Peng \cite{NP}).
Suppose $0<\alpha<1$.
Let $a_1(t)$, $a_2(t)$ and $a_3$ be as in (\ref{eq:2.11}).
Since $R_j(t;T)\ge b_{j}(t)/l_{j}(t)^2$ and $b_{j}(t)/l_{j}(t)^2$ is bounded
from below in $\Delta$, so is
$R_j(t;T)$. To show that
$R_j(t;T)$ is bounded from above in $\Delta$,
we consider the solution $M_j(t)\equiv M_j(t;T)$ to the linear equation
\[
\dot{M}_j(t)+2[a_2(t)-\bar{R}_ja_1(t)]M_j(t)+a_3+a_1(t)\bar{R}^2_j=0,
\quad 0\le t\le T,\quad M_j(T)=0.
\]
Since $M_j(T)-R_j(T)=0$ and
\[
[\dot{M}_j(t)-\dot{R}_j(t)]+2[a_2(t)-\bar{R}_ja_1(t)][M_j(t)-R_j(t)]
=-a_1(t)\left[R_j(t)-\bar R_j\right]^2\le 0,
\]
we have $R_j(t;T)\le M_j(t;T)$ in $\Delta$. However,
$a_2(t)-\bar{R}_ja_1(t)\to
\bar{b}_j-p_j^2\bar{R}_j=-K_j<0$ as $t\to\infty$,
so that $M_j(t;T)$ is bounded from above in $\Delta$, whence
so is $R_j(t;T)$. The desired
assertions now follow from Theorem \ref{thm:B2} in Appendix B.
\end{proof}
Let $j\in\{1,\dots,n\}$. For $\rho_j(t)$ in (\ref{eq:2.8}),
we have $\lim_{t\to\infty}\rho_j(t)=\bar{\rho}_j$, where
\[
\bar{\rho}_j:=-\beta p_j\bar{\lambda}_j.
\]
Let $v_j(t)\equiv v_j(t;T)$ be the solution to \eqref{eq:2.14} as in
Section \ref{sec:2}. Define $\bar{v}_j$ by
\begin{equation}
\left(\bar{b}_j-p_j^2\bar{R}_j\right)\bar{v}_j+\beta(1-\beta)\bar{\lambda}_j
-\bar{R}_j\bar{\rho}_j=0.
\label{eq:3.9}
\end{equation}
\begin{prop}\label{prop:3.2}
Let $-\infty<\alpha<1$, $\alpha\ne 0$, and
$j\in\{1,\dots,n\}$. Then
\begin{enumerate}
\item $v_j(t;T)$ is bounded in $\Delta$.
\item $\lim_{T-t\to\infty,\ t\to\infty}v_j(t;T)=\bar{v}_j$.
\item For $\delta, \epsilon\in (0,\infty)$
such that $\delta + \epsilon<1$,
\[
\lim_{T\to\infty}\sup_{\delta T\le t\le (1-\epsilon)T}
\vert v_j(t;T)-\bar{v}_j\vert=0.
\]
\end{enumerate}
\end{prop}
\begin{proof}
The coefficients of (\ref{eq:2.14}) converge to their counterparts
in (\ref{eq:3.9}). Also,
\[
\lim_{T-t\to\infty, \; t\to\infty}[b_j(t)-l_j^2(t)R_j(t;T)]
= \bar{b}_j-p_j^2\bar{R}_j=-K_j<0.
\]
Thus the proposition follows from Theorem \ref{thm:B3} in Appendix B.
\end{proof}
For $j=1,\dots,n$ and $-\infty<\alpha<1$, $\alpha\ne 0$, we put
\begin{gather}
F_j(\alpha):=\frac{(p_j+q_j)^2\bar{\lambda}_j^2\alpha}
{\left[(1-\alpha)(p_j+q_j)^2+\alpha p_j(p_j+2q_j)\right]},
\label{eq:3.10}\\
\begin{split}
&G_j(\alpha)\\
&\ \ :=(p_j+q_j)-q_j\alpha-(1-\alpha)^{1/2}
\left[(1-\alpha)(p_j+q_j)^2+\alpha p_j(p_j+2q_j)\right]^{1/2}.
\end{split}
\label{eq:3.11}
\end{gather}
Recall the value function $V(T)\equiv V(T,\alpha)$ from (P1) and
its representation (\ref{eq:2.22}).
In the next proposition, we compute $\tilde{J}(\alpha)$ in \eqref{eq:3.5}.
\begin{prop}
\label{prop:3.3}
Let $-\infty<\alpha<1$, $\alpha\ne 0$. Then the limit
$\tilde{J}(\alpha)$ in \eqref{eq:3.5} exists and is given by
\begin{equation}
\tilde{J}(\alpha)=\bar{r}+\frac{(1-\alpha)}{2\alpha}
\sum_{j=1}^{n}\bar{g}_j,
\label{eq:3.12}
\end{equation}
where
\[
\bar{g}_j:=\bar{v}^2_jp_j^2+2\bar{\rho}_j\bar{v}_j-p_j^2\bar{R}_j
-\beta(1-\beta)\bar{\lambda}_j^2,\quad j=1,\dots,n.
\]
More explicitly,
\begin{equation}
\tilde{J}(\alpha)=\bar{r}+\frac{1}{2\alpha}\sum_{j=1}^nF_j(\alpha)
+\frac{1}{2\alpha}\sum_{j=1}^{n}G_j(\alpha).
\label{eq:3.13}
\end{equation}
\end{prop}
\begin{proof}
Recall $g_j(t;T)$ from (\ref{eq:2.16}).
By Propositions \ref{prop:3.1} and \ref{prop:3.2},
\[
\lim_{T\to\infty}\frac{1}{T}\int_0^T g_j(t;T)dt=\bar{g}_j,
\quad j=1,\dots,n.
\]
From this and (\ref{eq:2.22}),
\[
\begin{split}
\frac{1}{\alpha T}\log\left[\alpha V(T)\right]
&=\frac{\log x}{T} + \frac{1}{T}\int_0^T r(t)dt
+\frac{1-\alpha}{2\alpha}\sum_{j=1}^n\frac{1}{T}\int_0^T g_j(t;T)dt\\
&\to \bar{r}+\frac{1-\alpha}{2\alpha}\sum_{j=1}^{n}\bar{g}_j
\quad \mbox{as $T\to\infty$,}
\end{split}
\]
which implies \eqref{eq:3.12}.
We have $\bar{v}_j=\beta\bar{\lambda}_j(1-\beta+p_j\bar{R}_j)/K_j$.
Also,
\[
\begin{split}
\beta p_j^2(p_j\bar{R}_j)^2-2\beta p_j^2\bar{R}_jK_j
&=\beta p_j^2\bar{R}_j(p_j^2\bar{R}_j-2K_j)
=\beta p_j^2\bar{R}_j(\bar{b}_j-K_j)\\
&=\beta(\bar{b}_j^2-K_j^2)=-\beta^2(1-\beta)p_j^2,
\end{split}
\]
and $\beta(1-\beta)=-\alpha/(1-\alpha)^2$. Thus
\[
\begin{split}
\bar{v}^2_jp_j^2&+2\bar{\rho}_j\bar{v}_j
-\beta(1-\beta)\bar{\lambda}_j^2\\
&=\frac{\beta\bar{\lambda}_j^2}{K_j^2}
\left[\beta p_j^2(1-\beta+p_j\bar{R}_j)^2-2\beta p_j(1-\beta+p_j\bar{R}_j)K_j
-(1-\beta)K_j^2\right]\\
&=\frac{\beta\bar{\lambda}_j^2}{K_j^2}
\left[\{\beta p_j^2(p_j\bar{R}_j)^2-2\beta p_j^2\bar{R}_jK_j\}
+2\beta(1-\beta)p_j(p_j^2\bar{R}_j-K_j)\right.\\
&\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad
+\beta p_j^2(1-\beta)^2-(1-\beta)K_j^2\right]\\
&=\frac{\beta(1-\beta)\bar{\lambda}_j^2}{K_j^2}
\left[-\beta^2p_j^2+2\beta p_j\bar{b}_j+\beta(1-\beta)p_j^2
-\{\bar{b}_j^2+\beta(1-\beta)p_j^2\}\right]\\
&=-\frac{\beta(1-\beta)\bar{\lambda}_j^2}{K_j^2}[\bar{b}_j-\beta p_j]^2
=\frac{\alpha}{(1-\alpha)^2}\frac{(p_j+q_j)^2}{K_j^2}\bar{\lambda}_j^2.
\end{split}
\]
This and $p_j^2\bar{R}_j=\bar{b}_j+K_j$ imply
\[
\bar{g}_j=
\frac{\alpha}{(1-\alpha)^2}\frac{(p_j+q_j)^2}{K_j^2}\bar{\lambda}_j^2
-(\bar{b}_j+K_j).
\]
Since $(1-\alpha)(1-\beta)=1$, it follows that
\[
(1-\alpha)\bar{b}_j=(1-\alpha)[(\beta-1)p_j-q_j]=-p_j+(\alpha-1)q_j
=q_j\alpha-(p_j+q_j).
\]
Also,
\[
K_j^2=(p_j+q_j)^2-\beta p_j(p_j+2q_j)
=(p_j+q_j)^2+\frac{\alpha}{1-\alpha} p_j(p_j+2q_j).
\]
Combining, we obtain \eqref{eq:3.13}.
\end{proof}
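The algebraic reduction from \eqref{eq:3.12} to \eqref{eq:3.13}, i.e.\ the identity $(1-\alpha)\bar{g}_j=F_j(\alpha)+G_j(\alpha)$, can be spot-checked numerically (an illustrative aside, scalar case with $p_j>0$; the parameter values are arbitrary):

```python
import math

def gbar_direct(p, q, alpha, lam):
    """gbar_j computed from its definition via vbar, rhobar, Rbar, K
    (requires p > 0)."""
    beta = alpha / (alpha - 1.0)
    bbar = -(1.0 - beta) * p - q
    K = math.sqrt(bbar * bbar + beta * (1.0 - beta) * p * p)
    Rbar = (bbar + K) / (p * p)               # larger root of (3.8)
    rho = -beta * p * lam                     # rhobar_j
    vbar = (beta * (1.0 - beta) * lam - Rbar * rho) / K   # from (3.9)
    return (vbar * vbar * p * p + 2.0 * rho * vbar
            - p * p * Rbar - beta * (1.0 - beta) * lam * lam)

def gbar_explicit(p, q, alpha, lam):
    """(F_j(alpha) + G_j(alpha)) / (1 - alpha), using (3.10)-(3.11)."""
    D = (1.0 - alpha) * (p + q) ** 2 + alpha * p * (p + 2.0 * q)
    F = (p + q) ** 2 * lam * lam * alpha / D
    G = (p + q) - q * alpha - math.sqrt((1.0 - alpha) * D)
    return (F + G) / (1.0 - alpha)
```

The two computations agree to machine precision over ranges of $(p_j,q_j,\alpha,\bar{\lambda}_j)$, as the proof above predicts.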
Recall $\xi(t)$ from \eqref{eq:2.3}.
Taking into account (\ref{eq:2.21}),
we consider
$\hat{\pi}=(\hat{\pi}(t))_{t\ge 0}\in \mathcal{A}$ defined by
\begin{equation}
\hat{\pi}(t):=(\sigma')^{-1}(t)
\left[(1-\beta)\{\lambda(t)-\xi(t)\}
-p\bar{R}\xi(t)+p\bar{v}\right],
\quad t\ge 0,
\label{eq:3.14}
\end{equation}
where
$p:=\mathrm{diag}(p_1,\dots,p_n)$ as in Section \ref{sec:2},
and $\bar{R}:=\mathrm{diag}(\bar{R}_1,\dots,\bar{R}_n)$,
$\bar{v}:=(\bar{v}_1,\dots,\bar{v}_n)'$.
Recall that we have assumed \eqref{eq:2.1}, \eqref{eq:3.1}
and \eqref{eq:3.2}. Recall also $\alpha^*$ from \eqref{eq:3.3} with
\eqref{eq:3.4}.
Here is the solution to Problem P2.
\begin{thm}
\label{thm:3.4}
Let $\alpha^*<\alpha<1$, $\alpha\ne 0$.
Then $\hat{\pi}$ is an optimal strategy for Problem P2,
and the limit in {\rm \eqref{eq:3.6}} exists.
The optimal growth rate $J(\alpha)$ in {\rm (P2)} is given by
\begin{equation}
J(\alpha)=\bar{r}+\frac{1}{2\alpha}\sum_{j=1}^nF_j(\alpha)
+\frac{1}{2\alpha}\sum_{j=1}^{n}G_j(\alpha),
\label{eq:3.15}
\end{equation}
where $F_j$'s and $G_j$'s are as in {\rm \eqref{eq:3.10}} and
{\rm \eqref{eq:3.11}}, respectively.
\end{thm}
\begin{proof}
For simplicity, we put $X(t):=X^{x,\hat{\pi}}(t)$.
For $\tilde{J}(\alpha)$ in {\rm \eqref{eq:3.5}} and $J^*(\alpha)$
in {\rm \eqref{eq:3.6}},
we claim $J^*(\alpha)=\tilde{J}(\alpha)$, that is,
\begin{equation}
\lim_{T\to\infty}\frac{1}{\alpha T}\log E[X^{\alpha}(T)]=\tilde{J}(\alpha).
\label{eq:3.16}
\end{equation}
As mentioned before, \eqref{eq:3.7} and \eqref{eq:3.16} imply
that $\hat{\pi}$ is an optimizer
for Problem P2. The equality \eqref{eq:3.15} follows from
this and \eqref{eq:3.13}.
We complete the proof of the theorem by proving \eqref{eq:3.16}.
\noindent {\it Step}\/ 1.\ We calculate $E[X^{\alpha}(T)]$.
Define the $\mathbf{R}$-valued martingale $L(t)$ by
\[
L(t):=\exp\left[\alpha\int_0^t\{\sigma'(s)\hat{\pi}(s)\}'dB(s)
-\frac{\alpha^2}{2}\int_0^t\Vert \sigma'(s)\hat{\pi}(s)
\Vert^2ds\right], \quad t\ge 0.
\]
From (\ref{eq:2.4}), we have
$X^{\alpha}(t)=[xS_0(t)]^{\alpha}L(t)\exp[\int_{0}^{t}N(s)ds]$
for $t\ge 0$,
where
\[
\begin{split}
&N(t):=\alpha\{\sigma'(t)\hat{\pi}(t)\}'\left[\lambda(t)-\xi(t)+
\frac{1}{2}(\alpha-1)\sigma'(t)\hat{\pi}(t)\right]\\
&=\frac{\alpha(1-\alpha)}{2}\{\sigma'(t)\hat{\pi}(t)\}'
\left[
(1-\beta)\{\lambda(t)-\xi(t)\}+p\bar{R}\xi(t)-p\bar{v}
\right]\\
&=-\frac{\beta}{2}
\left[\lambda(t)-\xi(t)-(1-\alpha)\{p\bar{R}\xi(t)-p\bar{v}\}\right]'\\
&\qquad\qquad\qquad\qquad
\cdot \left[\lambda(t)-\xi(t)+(1-\alpha)\{p\bar{R}\xi(t)-p\bar{v}\}\right]\\
&=-\frac{\beta}{2}
\left[
\{\lambda(t)-\xi(t)\}'\{\lambda(t)-\xi(t)\}
-(1-\alpha)^2\{p\bar{R}\xi(t)-p\bar{v}\}'\{p\bar{R}\xi(t)-p\bar{v}\}
\right].
\end{split}
\]
Notice that we have used
$(1-\alpha)(1-\beta)=1$, $\alpha(1-\beta)=-\beta$.
We write
\[
N(t)= -\frac{1}{2}\xi'(t)Q\xi(t)-h'(t)\xi(t)+\frac{1}{2}
\sum\nolimits_{j=1}^nu_j(t),
\]
where
\[
u_j(t):=\alpha(\alpha-1)\left[p_j^2\bar{v}_j^2
-(1-\beta)^2\lambda_j^2(t)\right],\quad t\ge 0,\quad j=1,\dots,n,
\]
and $Q=\mathrm{diag}(Q_1,\dots,Q_n)$,
$h(t)=h(t;T)=(h_1(t;T),\dots,h_n(t;T))'$ with
\begin{align*}
&Q_j:=\beta\left[1-(1-\alpha)^2p_j^2\bar{R}_j^2\right],\quad
t\ge 0,\quad j=1,\dots,n,\\
&h_j(t):=\alpha(\alpha-1)p_j^2\bar{R}_j\bar{v}_j-\beta\lambda_j(t),
\quad t\ge 0,\quad j=1,\dots,n.
\end{align*}
Therefore,
\begin{equation}
\begin{split}
E[X^{\alpha}(T)]
&=[xS_0(T)]^{\alpha}
\exp\left[\frac{1}{2}\sum\nolimits_{j=1}^n \int_0^T u_j(t)dt\right]\\
&\quad\times\bar{E}\left[\exp\left\{-\int_0^T
\left(\frac{1}{2}\xi'(t)Q\xi(t)+h'(t)\xi(t)\right)dt
\right\}\right],
\end{split}
\label{eq:3.17}
\end{equation}
where $\bar{E}$ denotes the expectation with respect to
the probability measure $\bar{P}$ on $(\Omega,\mathcal{F}_T)$ such that
$d\bar{P}/dP=L(T)$.
\noindent {\it Step}\/ 2.\ We continue the calculation of $E[X^{\alpha}(T)]$.
We are about to apply Theorem \ref{thm:A1} in Appendix A to \eqref{eq:3.17}.
Write $\bar{B}(t):=B(t)-\alpha\int_0^t\sigma'(s)\hat{\pi}(s)ds$ for $t\ge 0$.
Then $\bar{B}(t)$ is an $\mathbf{R}^n$-valued standard Brownian motion under
$\bar{P}$. By \eqref{eq:2.6}, the process $\xi(t)$ evolves according to
the $n$-dimensional stochastic differential equation
\begin{equation}
d\xi(t)=[\gamma(t)+d(t)\xi(t)]dt + l(t)d\bar{B}(t),\quad t\ge 0,
\label{eq:3.18}
\end{equation}
where
$d(t)=\mathrm{diag}(d_1(t),\dots,d_n(t))$,
$\gamma(t)=(\gamma_1(t),\dots,\gamma_n(t))'$ with
\begin{align*}
&d_j(t):=b_j(t)-\alpha p_j\bar{R}_jl_j(t),\quad
t\ge 0,\quad j=1,\dots,n,\\
&\gamma_j(t):=\rho_j(t)+\alpha p_jl_j(t)\bar{v}_j,
\quad t\ge 0,\quad j=1,\dots,n.
\end{align*}
For $j=1,\dots,n$, let $U_j(t)\equiv U_j(t;T)$ be the unique solution
to the one-dimensional backward Riccati equation
\begin{equation}
\dot{U}_j(t)-l_j^2(t)U_j^2(t)+2d_j(t)U_j(t)+Q_j=0,
\quad 0\le t\le T,\quad U_j(T)=0
\label{eq:3.19}
\end{equation}
in the sense of Lemma \ref{lem:3.5} below, and
let $m_j(t)\equiv m_j(t;T)$ be the solution to the
one-dimensional linear equation
\begin{equation}
\begin{split}
&\dot{m}_j(t)+[d_j(t)-l_j^2(t)U_j(t;T)]m_j(t)
-h_j(t)-U_j(t;T)\gamma_j(t)=0\\
&\quad 0\le t\le T, \quad m_j(T)=0.
\end{split}
\label{eq:3.20}
\end{equation}
Then, from \eqref{eq:3.17}--\eqref{eq:3.20} and Theorem \ref{thm:A1}, we obtain
\begin{equation}
E[X^{\alpha}(T)]
=[xS_0(T)]^{\alpha}
\exp\left[\frac{1}{2}\sum\nolimits_{j=1}^{n}\int_0^T f_j(t;T)dt\right],
\label{eq:3.21}
\end{equation}
where, for $(t,T)\in\Delta$ and $j=1,\dots,n$,
\[
f_j(t;T):=l_j^2(t)m_j^2(t;T)+2\gamma_j(t)m_j(t;T)
-l_j^2(t)U_j(t;T)+u_j(t).
\]
\noindent {\it Step}\/ 3.\ We compute the limit
$J^*(\alpha)$ in \eqref{eq:3.6}. Let $j\in\{1,\dots,n\}$.
Write
\[
\bar{d}_j:=\bar{b}_j-\alpha p_j^2\bar{R}_j.
\]
Then $d_j(t)$ converges to $\bar{d}_j$, as $t\to\infty$, exponentially fast.
Now
\begin{align*}
\bar{d}_j^2 + p_j^2Q_j&= (\bar{b}_j-\alpha p_j^2 \bar{R}_j)^2
+ p_j^2\beta\left[1-(1-\alpha)^2p_j^2\bar{R}_j^2\right] \\
&=\bar{b}_j^2 -2\alpha\bar{b}_j(\bar{b}_j+K_j)+\alpha^2(\bar{b}_j+K_j)^2
+ p_j^2\beta-\alpha(\alpha-1)(\bar{b}_j+K_j)^2 \\
&=(1-\alpha)\bar{b}_j^2 +\alpha K_j^2 +p_j^2\beta
=\bar{b}_j^2 +p_j^2\beta(1-\beta),
\end{align*}
which implies
\begin{equation}
\sqrt{\bar{d}_j^2 + p_j^2Q_j}=K_j>0.
\label{eq:3.22}
\end{equation}
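The algebra leading to \eqref{eq:3.22} can be double-checked symbolically. The sketch below treats $\bar b_j$, $p_j$, $\alpha$ as free parameters (written \texttt{b}, \texttt{p}, \texttt{alpha}), sets $\beta=-\alpha/(1-\alpha)$, which is equivalent to $(1-\alpha)(1-\beta)=1$, and uses the relations $p_j^2\bar R_j=\bar b_j+K_j$ and $K_j^2=\bar b_j^2+p_j^2\beta(1-\beta)$ employed in the proof:

```python
import sympy as sp

a, b, p, K = sp.symbols('alpha b p K')
beta = -a/(1 - a)  # equivalent to (1 - alpha)*(1 - beta) = 1

# dbar_j = b - alpha*p^2*Rbar_j, with p^2*Rbar_j = b + K as in the text
dbar = b - a*(b + K)
# p^2*Q_j = p^2*beta*(1 - (1-alpha)^2 * p^2 * Rbar_j^2)
p2Q = p**2*beta - beta*(1 - a)**2*(b + K)**2

expr = sp.expand(dbar**2 + p2Q - (b**2 + p**2*beta*(1 - beta)))
# impose the defining relation K^2 = b^2 + p^2*beta*(1-beta)
expr = expr.subs(K**2, b**2 + p**2*beta*(1 - beta))
assert sp.simplify(expr) == 0  # confirms dbar^2 + p^2*Q = b^2 + p^2*beta*(1-beta)
```

The assertion confirms the chain of equalities displayed above, and hence \eqref{eq:3.22}.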
Thus we may write $\bar{U}_j$ for the larger (resp.\ unique) solution of
the following equation when $p_j>0$ (resp.\ $p_j=0$):
\begin{equation}
p_j^2x^2 -2\bar{d}_jx-Q_j=0.
\label{eq:3.23}
\end{equation}
From \eqref{eq:3.22}, we also see that $\bar{d}_j-p_j^2\bar{U}_j=-K_j$; indeed, for $p_j>0$ the quadratic formula gives the larger root as $\bar{U}_j=(\bar{d}_j+K_j)/p_j^2$.
Let $\bar{m}_j$ be the solution to
\begin{equation}
\left(\bar{d}_j-p_j^2\bar{U}_j\right)\bar{m}_j-\bar{h}_j
-\bar{U}_j\bar{\gamma}_j=0,
\label{eq:3.24}
\end{equation}
where
\[
\bar{h}_j:=\alpha(\alpha-1)p_j^2\bar{R}_j\bar{v}_j-\beta\bar{\lambda}_j,
\quad
\bar{\gamma}_j:=\bar{\rho}_j+\alpha p_j^2\bar{v}_j.
\]
By \eqref{eq:3.21}, we have
\[
\frac{1}{\alpha T}\log E[X^{\alpha}(T)]
= \frac{\log x}{T}+\frac{1}{T}\int_0^Tr(t)dt
+\frac{1}{2\alpha}\sum_{j=1}^n\frac{1}{T}\int_0^T f_j(t;T)dt.
\]
However, Propositions \ref{prop:3.6} and \ref{prop:3.7} below
imply
\[
\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}f_j(t;T)dt=\bar{f}_j
\]
with
\[
\bar{f}_j:=p_j^2\bar{m}_j^2+2\bar{\gamma}_j\bar{m}_j
-p_j^2\bar{U}_j+\alpha(\alpha-1)\left[p_j^2\bar{v}_j^2
-(1-\beta)^2\bar{\lambda}_j^2\right],
\]
so that
\begin{equation}
J^*(\alpha)=\bar{r}+\frac{1}{2\alpha}\sum_{j=1}^{n}\bar{f}_j.
\label{eq:3.25}
\end{equation}
\noindent {\it Step}\/ 4.\
Here we show that in fact \eqref{eq:3.16} holds.
First,
\[
p_j^2\bar{U}_j=\bar{d}_j+K_j=\bar{b}_j-\alpha p_j^2\bar{R}_j+K_j
=p_j^2\bar{R}_j-\alpha p_j^2\bar{R}_j=(1-\alpha)p_j^2\bar{R}_j,
\]
whence $\bar{U}_j=(1-\alpha)\bar{R}_j$
(which we can directly check when $p_j=0$).
Next,
\[
\begin{aligned}
\bar{h}_j+\bar{U}_j\bar{\gamma}_j
&=\alpha(\alpha-1)p_j^2\bar{R}_j\bar{v}_j-\beta\bar{\lambda}_j
+(1-\alpha)\bar{R}_j[-\beta p_j\bar{\lambda}_j+\alpha p_j^2\bar{v}_j]\\
&=-\beta\bar{\lambda}_j\left[1+(1-\alpha)p_j\bar{R}_j\right]
=-(1-\alpha)\beta \bar{\lambda}_j[(1-\beta)+p_j\bar{R}_j],
\end{aligned}
\]
so that
\[
\bar{m}_j=\frac{(1-\alpha)}{K_j}\beta \bar{\lambda}_j[(1-\beta)+p_j\bar{R}_j]
=(1-\alpha)\bar{v}_j.
\]
Therefore,
\[
\begin{split}
\bar{f}_j
&=(1-\alpha)^2p_j^2\bar{v}^2_j
+2(1-\alpha)(\bar{\rho}_j+\alpha p_j^2\bar{v}_j)\bar{v}_j
-(1-\alpha)p_j^2\bar{R}_j\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+\alpha(\alpha-1)\left[p_j^2\bar{v}_j^2
-(1-\beta)^2\bar{\lambda}_j^2\right]\\
&=(1-\alpha)
\left[
p_j^2\bar{v}^2_j+2\bar{\rho}_j\bar{v}_j-p_j^2\bar{R}_j
-\beta(1-\beta)\bar{\lambda}_j^2
\right]=(1-\alpha)\bar{g}_j.
\end{split}
\]
From \eqref{eq:3.12}, \eqref{eq:3.25} and this, we obtain
$J^*(\alpha)=\tilde{J}(\alpha)$ or (\ref{eq:3.16}), as desired.
\end{proof}
In the proof above, we needed the following results.
\begin{lem}\label{lem:3.5}Let $j\in\{1,\dots,n\}$.
\begin{enumerate}
\item If $p_j=0$, then {\rm \eqref{eq:3.19}} has a unique
solution $U_j(t)\equiv U_j(t;T)$.
\item If $p_j>0$ and $\alpha_j^*<\alpha<0$, then {\rm \eqref{eq:3.19}} has
a unique nonnegative solution $U_j(t)\equiv U_j(t;T)$.
\item If $p_j>0$ and $0<\alpha<1$, then
{\rm \eqref{eq:3.19}} has a unique solution $U_j(t)\equiv U_j(t;T)$ such that
$U_j(t;T)\ge (1-\alpha)R_j(t;T)$ for $t\in [0,T]$, where
$R_j(t)\equiv R_j(t;T)$ is the solution to {\rm \eqref{eq:2.10}} in the sense
of Lemma {\rm \ref{lem:2.1} (iii)}.
\end{enumerate}
\end{lem}
\begin{proof}(i)\ When $p_j=0$, \eqref{eq:3.19} is linear, whence it
has a unique solution.
(ii)\ For $p_j>0$ and $\alpha<0$, we put
$f(x)=p_j^2x^2-2\bar{b}_jx-\beta(1-\beta)$.
Since $\bar{b}_j<0$ and $\beta(1-\beta)>0$,
the larger solution $\bar{R}_j$ to $f(x)=0$ satisfies
$p_j^2\bar{R}_j^2 <(1-\beta)^2$ if and only if
$f((1-\beta)/p_j)>0$. However,
this is equivalent to $-3p_j-2q_j<(p_j-2q_j)\alpha$. Thus, if $p_j>0$
and $\alpha_j^*<\alpha<0$, then $p^2_j\bar{R}^2_j <(1-\beta)^2$ or
$Q_j>0$, so that the Riccati equation
\eqref{eq:3.19} has a unique nonnegative solution.
(iii)\ Suppose $p_j>0$ and $0<\alpha<1$.
For the solution $R_j(t)\equiv R_j(t;T)$ to \eqref{eq:2.10} in the sense of
Lemma \ref{lem:2.1} (iii), we consider
\[
P_j(t):=\frac{U_j(t)}{1-\alpha}-R_j(t).
\]
Let $d_j(t)$ be as above. Then, \eqref{eq:3.19} becomes
\[
\begin{split}
&\dot{P}_j(t)-(1-\alpha)l_j^2(t)P^2_j(t)
-2\left[(1-\alpha)l_j^2(t)R_j(t)-d_j(t)\right]P_j(t)\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+\alpha[l_j(t)R_j(t)-p_j\bar{R}_j]^2=0,\quad
0\le t\le T,
\end{split}
\]
with $P_j(T)=0$. Since $(1-\alpha)l_j^2(t)>0$ and
$\alpha[l_j(t)R_j(t)-p_j\bar{R}_j]^2>0$,
this Riccati equation has a unique nonnegative solution.
Thus the assertion follows.
\end{proof}
\begin{prop}\label{prop:3.6}Let $\alpha^*<\alpha<1$, $\alpha\ne 0$, and
$j\in\{1,\dots,n\}$. Let $U_j(t;T)$ be the unique solution to
{\rm \eqref{eq:3.19}} in the sense of Lemma {\rm \ref{lem:3.5}},
and let $\bar{U}_j$ be the larger (resp.\ unique) solution to
{\rm \eqref{eq:3.23}} when $p_j>0$ (resp.\ $p_j=0$).
Then
\begin{enumerate}
\item $U_j(t;T)$ is bounded in $\Delta$.
\item $\lim_{T-t\to\infty,\ t\to\infty}U_j(t;T)=\bar{U}_j$.
\item For $\delta, \epsilon\in (0,\infty)$ such that
$\delta+\epsilon<1$,
\[
\lim_{T\to\infty}\sup_{\delta T\le t\le (1-\epsilon)T}
|U_j(t;T)-\bar{U}_j|=0.
\]
\end{enumerate}
\end{prop}
\begin{proof}
We assume $0<\alpha<1$ and $p_j>0$. Since
$U_j(t;T)\ge (1-\alpha)R_j(t;T)$ in $\Delta$ and
$R_j(t;T)$ is bounded from below
by Proposition \ref{prop:3.1}, so is $U_j(t;T)$.
Let $N_j(t)\equiv N_j(t;T)$ be the solution to the linear equation
\[
\dot{N}_j(t)+2[d_j(t)-l_j^2(t)\bar{U}_j]N_j(t)+Q_j+l_j^2(t)\bar{U}_j^2=0,
\quad 0\le t\le T,\quad N_j(T)=0.
\]
By \eqref{eq:3.22},
$d_j(t)-l_j^2(t)\bar{U}_j\to \bar{d}_j-p_j^2\bar{U}_j=-K_j<0$
as $t\to\infty$,
so that $N_j(t;T)$ is bounded from above in $\Delta$.
Since $N_j(T)-U_j(T)=0$ and
\[
[\dot{N}_j(t)-\dot{U}_j(t)]+2[d_j(t)-l_j^2(t)\bar{U}_j][N_j(t)-U_j(t)]
=-l_j^2(t)\left[U_j(t)-\bar U_j\right]^2\le 0,
\]
we have, as in the proof of Proposition \ref{prop:3.1},
$U_j(t;T)\le N_j(t;T)$ in $\Delta$. Thus $U_j(t;T)$ is also bounded from
above in $\Delta$. Combining, $U_j(t;T)$ is bounded in $\Delta$.
The rest of the proof is similar to that of Proposition
\ref{prop:3.1}, whence we omit it.
\end{proof}
\begin{prop}\label{prop:3.7}
Let $\alpha^*<\alpha<1$, $\alpha\ne 0$, and
$j\in\{1,\dots,n\}$. Let $m_j(t;T)$ and $\bar{m}_j$ be the solutions to
{\rm \eqref{eq:3.20}} and {\rm \eqref{eq:3.24}}, respectively. Then
\begin{enumerate}
\item $m_j(t;T)$ is bounded in $\Delta$.
\item $\lim_{T-t\to\infty,\ t\to\infty}m_j(t;T)=\bar{m}_j$.
\item For $\delta, \epsilon\in (0,\infty)$
such that $\delta+\epsilon<1$,
\[
\lim_{T\to\infty}\sup_{\delta T\le t\le (1-\epsilon)T}
\vert m_j(t;T)-\bar{m}_j\vert=0.
\]
\end{enumerate}
\end{prop}
The proof of Proposition \ref{prop:3.7}
is similar to that of Proposition \ref{prop:3.2}; so
we omit it.
\begin{rem}
\label{rem:3.8}
We note that the proof of Lemma \ref{lem:3.5} (iii) remains valid
under \eqref{eq:1.4}, provided that a solution $R_j(t)\equiv R_j(t;T)$ to
\eqref{eq:2.10} exists. Hence, to prove an analogue of
Theorem \ref{thm:3.4} with $0<\alpha<1$, which is relevant to Problem P3,
under \eqref{eq:1.4}, it suffices to show
the existence of such an $R_j(t)$ when $0<\alpha<1$. We did not succeed in
extending Lemma \ref{lem:2.1} (iii) in this way (see Remark \ref{eq:2.6}).
\end{rem}
\section{Large deviations probability control}\label{sec:4}
In this section, we study the large deviations probability control problem
P3 for the market model $\mathcal{M}$.
Throughout this section, we assume (\ref{eq:2.1}), (\ref{eq:3.1}),
(\ref{eq:3.2}) and
\begin{equation}
\mbox{either $\bar{\lambda}\ne (0,\dots,0)'$ or
$(p_1,\dots,p_n)\ne (0,\dots,0)$.}
\label{eq:4.1}
\end{equation}
For $x\in (0,\infty)$ and $\pi\in\mathcal{A}$,
let $L^{x,\pi}(T)$ be the growth rate defined by
\[
L^{x,\pi}(T):=\frac{\log X^{x,\pi}(T)}{T},\quad T>0.
\]
We have $P\left(L^{x,\pi}(T)\ge c\right)
=P\left(X^{x,\pi}(T)\ge e^{cT}\right)$.
Following Pham \cite{P1,P2},
we consider the optimal logarithmic moment generating function
\[
\Lambda(\alpha):=\sup_{\pi\in\mathcal{A}}
\limsup_{T\to\infty} \frac{1}{T}\log E[\exp(\alpha TL^{x,\pi}(T))],\quad 0<\alpha<1.
\]
Since $\Lambda(\alpha)=\alpha J(\alpha)$ for $\alpha\in (0,1)$,
it follows from Theorem \ref{thm:3.4} that
\[
\Lambda(\alpha)=\bar{r} \alpha+\frac{1}{2}\sum_{j=1}^{n}F_{j}(\alpha)
+\frac{1}{2}\sum_{j=1}^{n}G_{j}(\alpha),\quad 0<\alpha<1,
\]
where $F_j$'s and $G_j$'s are as in {\rm \eqref{eq:3.10}} and
{\rm \eqref{eq:3.11}}, respectively.
\begin{prop}
\label{prop:4.1}
We have $(d\Lambda/d\alpha)(0+)=\bar{c}$ and
$\lim_{\alpha\uparrow 1}(d\Lambda/d\alpha)(\alpha)=\infty$, where
\[
\bar{c}
:=\bar{r}+\frac{1}{4}\sum_{j=1}^n\frac{p_j^2}{p_j+q_j}
+\frac{1}{2}\Vert \bar{\lambda}\Vert^2.
\]
\end{prop}
\begin{proof}
For $0<\alpha<1$, $\dot{F}_j(\alpha)$ is equal to
\[
\frac{(p_j+q_j)^2\bar{\lambda}_j^2}{\left[(1-\alpha)(p_j+q_j)^2
+\alpha p_j(p_j+2q_j)\right]}
+\frac{(p_j+q_j)^2\bar{\lambda}_j^2q_j^2\alpha}
{\left[(1-\alpha)(p_j+q_j)^2+\alpha p_j(p_j+2q_j)\right]^2}.
\]
From this, $\dot{F}_j(0+)=\bar{\lambda}_j^2$.
This also shows that
\[
\frac{dF_j}{d\alpha}(\alpha)\sim \bar{\lambda}_j^2(1-\alpha)^{-2},\quad
\alpha\uparrow 1
\]
if $p_j=0$ and $\bar{\lambda}_j\ne 0$.
On the other hand, for $0<\alpha<1$,
\[
\begin{split}
&\frac{dG_j}{d\alpha}(\alpha)=
-q_j+
\frac{(1-\alpha)^{-1/2}}{2}
\left[(1-\alpha)(p_j+q_j)^2+\alpha p_j(p_j+2q_j)\right]^{1/2}\\
&\qquad\qquad\qquad\qquad
+
\frac{q_j^2(1-\alpha)^{1/2}}{2\left[(1-\alpha)(p_j+q_j)^2
+\alpha p_j(p_j+2q_j)\right]^{1/2}}.
\end{split}
\]
This gives $(dG_j/d\alpha)(0+)=p_j^2/[2(p_j+q_j)]$. This
also yields
\[
\frac{dG_j}{d\alpha}(\alpha)\sim \frac{\sqrt{p_j(p_j+2q_j)}}{2}
(1-\alpha)^{-1/2},\quad
\alpha\uparrow 1
\]
if $p_j>0$. Thus the proposition follows.
\end{proof}
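As a sanity check, the two evaluations at $\alpha=0+$ can be confirmed symbolically from the displayed derivatives, treating $p_j$, $q_j$, $\bar\lambda_j$ as free positive parameters (written \texttt{p}, \texttt{q}, \texttt{lam}):

```python
import sympy as sp

a, p, q, lam = sp.symbols('alpha p q lambda', positive=True)

D = (1 - a)*(p + q)**2 + a*p*(p + 2*q)  # common bracket in both derivatives

# displayed derivative of F_j (lam stands for \bar{lambda}_j)
dF = (p + q)**2*lam**2/D + (p + q)**2*lam**2*q**2*a/D**2

# displayed derivative of G_j
dG = -q + sp.sqrt(D)/(2*sp.sqrt(1 - a)) + q**2*sp.sqrt(1 - a)/(2*sp.sqrt(D))

dF0 = sp.simplify(dF.subs(a, 0))  # equals lambda**2
dG0 = sp.simplify(dG.subs(a, 0))  # equals p**2/(2*(p + q))
```

These values, summed over $j$ together with $\bar r$, reproduce the constant $\bar c$ in the statement of the proposition.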
\begin{rem}
\label{rem:4.2}
From the proof of Proposition \ref{prop:4.1}, we see that
\[
\frac{d\Lambda}{d\alpha}(\alpha)\sim
\frac{(1-\alpha)^{-1/2}}{4}
\sum\nolimits_{j=1}^n\sqrt{p_j(p_j+2q_j)}
,\quad \alpha\uparrow 1
\]
if $p_j>0$ for all $j=1,\dots,n$, otherwise
\[
\frac{d\Lambda}{d\alpha}(\alpha)\sim
\frac{(1-\alpha)^{-2}}{2}\sum\nolimits_{1\le j\le n \atop p_j=0}
\bar{\lambda}_j^2,\quad \alpha\uparrow 1.
\]
\end{rem}
For $\alpha\in (0,1)$, we denote by $\hat{\pi}(t;\alpha)$ the optimal
strategy $\hat{\pi}(t)$ in (\ref{eq:3.14}).
Recall $I(c)$ from (P3).
From Theorem \ref{thm:3.4}, Proposition \ref{prop:4.1},
and Pham \cite[Theorem 3.1]{P1}, we immediately obtain
the following solution to Problem P3:
\begin{thm}
\label{thm:4.3}
We have
\[
I(c)=-\sup_{\alpha\in (0,1)}\left[\alpha c-\Lambda(\alpha)\right],
\quad c\in\mathbf{R}.
\]
Moreover, if $\alpha(d)\in (0,1)$ is such that
$\dot{\Lambda}(\alpha(d))=d\in (\bar{c},\infty)$, then,
for $c\ge \bar{c}$, the sequence of strategies
\[
\hat{\pi}^m(t):=\hat{\pi}(t;\alpha(c+\tfrac{1}{m}))
\]
is nearly optimal in the sense that
\[
\lim_{m\to\infty} \limsup_{T\to\infty}\frac{1}{T}\log
P\left(X^{x,\hat{\pi}^m}(T)\ge e^{cT}\right)=I(c),\quad c\ge\bar{c}.
\]
\end{thm}
\begin{rem}\label{rem:4.4}
Theorem 3.1 in Pham \cite{P1} is stated for a
model different from $\mathcal{M}$ but the arguments there are
so general that we can prove Theorem \ref{thm:4.3} in the same way.
\end{rem}
We turn to the problem of deriving an optimal strategy,
rather than a nearly optimal sequence, for the problem (P3)
when $c<\bar{c}$.
We define $\hat{\pi}_0\in\mathcal{A}$ by
\[
\hat{\pi}_{0}(t):= (\sigma')^{-1}(t)
\left[\lambda(t)-\xi(t)\right], \quad t\ge 0,
\]
where recall $\xi(t)$ from \eqref{eq:2.3}.
From (\ref{eq:2.4}),
\[
\begin{split}
L^{x,\hat{\pi}_0}(T)
&=\frac{\log x}{T} +\frac{1}{T}\int_0^T r(t)dt
+\frac{1}{2T}\int_0^T\left\Vert\lambda(t)-\xi(t)\right\Vert^2dt\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+\frac{1}{T}\int_0^T\left[\lambda(t) - \xi(t)\right]'dB(t).
\end{split}
\]
\begin{prop}
\label{prop:4.5}
The rate $L^{x,\hat{\pi}_0}(T)$ converges to $\bar{c}$,
as $T\to\infty$, in probability.
\end{prop}
\begin{proof}
In this proof, we denote by $C$ positive constants which are not
necessarily equal.
For $j=1,\dots,n$, we write
\[
\lambda_j(t)-\xi_j(t)
=[\lambda_j(t)-\bar{\lambda}_j]+[\bar{\lambda}_j-p_jK(t)]+N(t),
\]
where $K(t)=\int_0^te^{-(p_j+q_j)(t-s)}dB_j(s)$ and
$N(t)=\int_0^te^{-(p_j+q_j)(t-s)}f(s)dB_j(s)$
with
\[
f(s)=\frac{2p_j^2q_j}{(2q_j+p_j)^2e^{2q_js}-p_j^2}.
\]
The process $K(t)$, the dynamics of which are given by
\[
dK(t)=-(p_j+q_j)K(t)dt+dB_j(t),
\]
is a positively recurrent one-dimensional diffusion process with
speed measure $m(dx)=2e^{-(p_j+q_j)x^2}dx$. By the ergodic theorem
(cf.\ Rogers and Williams \cite[v.53]{RW}), we have
\[
\lim_{T\to\infty}\frac{1}{T}\int_0^T[\bar{\lambda}_j-p_jK(t)]^2dt
=\int_{-\infty}^{\infty}(\bar{\lambda}_j-p_jy)^2\nu(dy)
=\bar{\lambda}_j^2+\frac{p_j^2}{2(p_j+q_j)}\quad \mbox{a.s.,}
\]
where $\nu(dy)$ is the
Gaussian measure with mean $0$ and variance $1/[2(p_j+q_j)]$.
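Outside the formal proof, this Gaussian moment is elementary, $\int(\bar\lambda_j-p_jy)^2\nu(dy)=\bar\lambda_j^2+p_j^2\operatorname{Var}\nu$, and can be confirmed symbolically with $p_j$, $q_j$, $\bar\lambda_j$ as free positive parameters:

```python
import sympy as sp
from sympy.stats import Normal, E

p, q, lam = sp.symbols('p q lambda', positive=True)

var = 1/(2*(p + q))               # variance of the invariant measure nu
Y = Normal('Y', 0, sp.sqrt(var))  # nu is Gaussian with mean 0

moment = sp.simplify(E((lam - p*Y)**2))
# moment equals lambda**2 + p**2/(2*(p + q)), as in the display above
```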
Since $0\le f(s)\le Ce^{-2q_js}$, we have
\[
E\left[N^2(t)\right]\le C\int_0^te^{-2q_j(t+s)}ds
\le Ce^{-2q_jt}, \quad t\ge 0.
\]
Also, $E[K^2(t)]\le C$ for $t\ge 0$. Therefore,
\[
\frac{1}{T}\int_0^TE\left[\vert\{\bar{\lambda}_j-p_jK(t)\}N(t)\vert\right]
dt
\le \frac{C}{T}\int_0^TE[N^2(t)]^{1/2}dt \to 0,\quad T\to \infty.
\]
Similarly,
\[
\begin{split}
&\lim_{T\to\infty}\frac{1}{T}\int_0^T[\lambda_j(t)-\bar{\lambda}_j]^2dt
=\lim_{T\to\infty}\frac{1}{T}\int_0^TE\left[(\lambda_j(t)-\bar{\lambda}_j)
(\bar{\lambda}_j-p_jK(t))\right]dt\\
&\quad=\lim_{T\to\infty}\frac{1}{T}\int_0^TE\left[N^2(t)\right]dt
=\lim_{T\to\infty}\frac{1}{T}\int_0^TE\left[(\lambda_j(t)-\bar{\lambda}_j)N(t)
\right]dt
=0.
\end{split}
\]
Combining,
\[
\frac{1}{T}\int_0^T\left[\lambda_j(t)-\xi_j(t)\right]^2dt\to
\bar{\lambda}_j^2+\frac{p_j^2}{2(p_j+q_j)}, \quad T\to\infty,\quad
\mbox{\rm in probability.}
\]
Finally, for $j=1,\dots,n$ and $t\ge 0$,
\[
E\left[\left\{\lambda_j(t)-\xi_j(t)\right\}^2\right]
\le 2\lambda^2_j(t)+2E\left[\xi_j^2(t)\right]
\le C\left[1+\int_0^tl^2_j(t,s)ds\right]\le C,
\]
so that $(1/T)\int_0^T[\lambda_j(t)-\xi_j(t)]dB_j(t)\to 0$,
as $T\to\infty$, in $L^2(\Omega)$,
whence in probability.
Thus the proposition follows.
\end{proof}
\begin{thm}\label{thm:4.6}
For $c<\bar{c}$, $\hat{\pi}_0$ is optimal for Problem P3
with limit
\[
\lim_{T\to\infty}\frac{1}{T}\log
P\left(X^{x,\hat{\pi}_0}(T)\ge e^{cT}\right)=I(c),\quad c<\bar{c}.
\]
\end{thm}
\begin{proof}
Proposition \ref{prop:4.5} implies
$\lim_{T\to\infty}P\left(L^{x,\hat{\pi}_0}(T)\ge c\right)=1$
for $c<\bar{c}$, so that
\[
\lim_{T\to\infty}\frac{1}{T}\log
P\left(L^{x,\hat{\pi}_0}(T)\ge c\right)=0\ge
\sup_{\pi\in\mathcal{A}}\lim_{T\to\infty}\frac{1}{T}\log
P\left(L^{x,\pi}(T)\ge c\right),\quad
c<\bar{c}.
\]
Thus $\hat{\pi}_0$ is optimal if $c<\bar{c}$.
\end{proof}
\begin{rem}\label{rem:4.7}
From Theorem 10.1 in Karatzas and Shreve \cite[Chapter 3]{KS},
we see that
$\hat{\pi}_0$ is the log-optimal or growth optimal strategy in
the sense that
\[
\sup_{\pi\in\mathcal{A}}\limsup_{T\to\infty}\frac{1}{T}
\log X^{x,\pi}(T) = \limsup_{T\to\infty}\frac{1}{T}
\log X^{x,\hat{\pi}_{0}}(T) \quad\mbox{a.s.}
\]
We also find that $\lim_{\alpha\downarrow 0}\hat{\pi}(t;\alpha)
=\hat{\pi}_{0}(t)$ a.s.\ for
$t\ge 0$.
\end{rem}
\section{Introduction}
\label{sec:introduction}
Let $\Phi = (X,\phi,X^\vee,\phi^\vee)$ be a reduced root datum with Weyl group $W$. Let $V = X \otimes {\mathbb R}$ and $V^* \cong X^\vee \otimes {\mathbb R}$ such that the pairing $\langle \cdot, \cdot \rangle : V \times V^* \to {\mathbb R}$ is induced from the perfect pairing between $X$ and $X^\vee$, which will be denoted the same way. Let $Q^\vee$ be the coroot lattice. Choose simple roots $\Delta=\{\alpha_1, \ldots, \alpha_l\}$ and denote by $\phi^+$ the positive roots with respect to $\Delta$. Let $X^\vee_+ = \setdef{x \in X^\vee}{\langle \alpha,x \rangle \geq 0 \text{ for all } \alpha \in \phi^+}$ be the dominant cone. There is a natural action of $W$ on the group algebra ${\mathbb Z}[X^\vee]$. The algebra of symmetric polynomials $\Lambda$ is the algebra of invariants under this action. It is closely related to the representation theory of complex algebraic groups.
Let $G^\vee$ be a complex reductive linear algebraic group with Borel subgroup $B^\vee$ and maximal torus $T^\vee$ such that the associated root datum is the dual of $\Phi$. Assigning to a finite dimensional representation of $G^\vee$ its $T^\vee$-character yields an isomorphism from the representation ring of $G^\vee$ to~$\Lambda$. For $\lambda \in X^\vee_+$ the Schur polynomial $s_\lambda$ is the character of the irreducible highest weight module $V(\lambda)$ with highest weight $\lambda$. The Schur polynomials $\{s_\lambda\}_{\lambda \in X^\vee_+}$ are a basis of $\Lambda$. The Kostka number $k_{\lambda\mu}$ for $\lambda, \mu \in X^\vee_+$ is the weight multiplicity of $\mu$ in $V(\lambda)$, i.e. the dimension of the $\mu$-weight space $V(\lambda)_\mu$. The Littlewood-Richardson coefficient $c_{\lambda\mu}^\nu$ for $\lambda,\mu,\nu \in X^\vee_+$ is the multiplicity of $V(\nu)$ in $V(\lambda) \otimes V(\mu)$.
Combinatorially the $k_{\lambda\mu}$ and the $c_{\lambda\mu}^\nu$ can be defined as follows: For $\lambda \in X^\vee_+$ define the monomial symmetric function $m_\lambda = \sum_{\mu \in W\lambda} x^\mu$ where for $\mu \in X^\vee$ we denote the corresponding basis element of ${\mathbb Z}[X^\vee]$ by $x^\mu$. Then $\{m_\lambda\}_{\lambda \in X^\vee_+}$ is a basis of $\Lambda$ and the Kostka numbers are the entries of the transition matrix from the $m_\mu$ to the $s_\lambda$, i.e. we have $s_\lambda = \sum_{\mu \in X^\vee_+} k_{\lambda\mu} m_\mu$. The Littlewood-Richardson coefficients are the structure constants of $\Lambda$ with respect to the Schur polynomials, i.e. $s_\lambda s_\mu = \sum_{\nu \in X^\vee_+} c_{\lambda\mu}^\nu s_\nu$.
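For instance, in the usual $\mathrm{GL}_3$ coordinates (a choice made only for this illustration), the bialternant formula gives $s_{(2,1,0)}=m_{(2,1,0)}+2m_{(1,1,1)}$, so the weight $(1,1,1)$ occurs with multiplicity $k_{\lambda\mu}=2$. A short symbolic check:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = [x1, x2, x3]

def schur3(lam):
    """Schur polynomial in 3 variables via the bialternant (Weyl character) formula."""
    n = 3
    num = sp.Matrix(n, n, lambda i, j: xs[i]**(lam[j] + n - 1 - j)).det()
    den = sp.Matrix(n, n, lambda i, j: xs[i]**(n - 1 - j)).det()  # Vandermonde
    return sp.expand(sp.cancel(num/den))

s = schur3((2, 1, 0))
# monomial symmetric functions for mu = (2,1,0) and mu = (1,1,1)
m_210 = sum(a**2*b for a in xs for b in xs if a != b)
m_111 = x1*x2*x3
# s == m_210 + 2*m_111, i.e. k_{(2,1,0),(2,1,0)} = 1 and k_{(2,1,0),(1,1,1)} = 2
```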
One of the main problems of combinatorial representation theory was to give combinatorial formulas for the $k_{\lambda\mu}$ and the $c_{\lambda\mu}^\nu$. One of the solutions is Littelmann's path model in~\cite{littelmann:94}. In~\cite{gaussentlittelmann:03} Gaussent and Littelmann introduced the gallery model and showed that it is equivalent to the path model. They express the $k_{\lambda\mu}$ and the $c_{\lambda\mu}^\nu$ as the number of certain galleries.
In this paper we give analogs of these combinatorial formulas for some $q$-analog of Schur polynomials, the so called Hall-Littlewood polynomials. We describe the transition matrix from monomial symmetric functions to Hall-Littlewood polynomials and we calculate products of Hall-Littlewood polynomials. Specializing $q$ in these formulas we get a new proof for the above mentioned formulas for Schur polynomials in terms of galleries.
Extending the base ring to ${\cal L}^- \mathrel{\mathop :}= {\mathbb Z}[q^{-1}]$ one gets new interesting bases. The Hall-Littlewood polynomials $\{P_{\lambda}(q^{-1})\}$ are a basis for $\Lambda_q \mathrel{\mathop :}= {\cal L}^-[X^\vee]^W$ (as ${\cal L}^-$-module). For $\lambda \in X^\vee_+$ they are defined by
\begin{equation*}
P_\lambda(q^{-1}) = \frac{1}{W_\lambda(q^{-1})} \sum_{w \in W} w \Big( x^\lambda \prod_{\alpha \in \phi^+} \frac{1-q^{-1}x^{-\alpha^\vee}}{1-x^{-\alpha^\vee}}\Big).
\end{equation*}
Here $W_\lambda \subset W$ is the stabilizer of $\lambda$ and $W_\lambda(q^{-1}) = \sum_{w \in W_\lambda} q^{-l(w)}$ where $l : W \to {\mathbb N}$ is the usual length function. The Hall-Littlewood polynomials are $q$-analogs of the Schur polynomials in the sense that $P_\lambda(0) = s_\lambda$. Moreover, we have $P_\lambda(1) = m_\lambda$. For these and other properties of the $P_\lambda(q^{-1})$ see the article~\cite{nelsenram:03} of Nelsen and Ram.
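A minimal illustration of both specializations: for $\mathrm{GL}_2$ in two-variable coordinates (a convention used only here) and $\lambda=(m,0)$ with $m\ge 1$, the stabilizer $W_\lambda$ is trivial and the defining Weyl sum reduces to a single rational expression, with $t$ playing the role of $q^{-1}$:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')

def hall_littlewood_m0(m):
    """P_{(m,0)}(x1, x2; t) for GL_2, m >= 1, from the Weyl-sum definition.

    The sum runs over the two Weyl group elements (identity, transposition);
    the normalizing factor W_lambda(t) is 1 since the stabilizer is trivial.
    """
    expr = (x1**m*(x1 - t*x2) - x2**m*(x2 - t*x1))/(x1 - x2)
    return sp.expand(sp.cancel(expr))

P3 = hall_littlewood_m0(3)
# t = 0 recovers the Schur polynomial s_{(3,0)};
# t = 1 recovers the monomial symmetric function m_{(3,0)} = x1^3 + x2^3.
```

Evaluating \texttt{P3} at $t=0$ and $t=1$ reproduces $P_\lambda(0)=s_\lambda$ and $P_\lambda(1)=m_\lambda$ in this small case.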
Define Laurent polynomials $L_{\lambda\mu}$ for $\lambda,\mu \in X^\vee_+$ by
\begin{equation*}
P_\lambda(q^{-1}) = \sum_{\mu \in X^\vee_+} q^{-\langle \rho,\lambda + \mu \rangle} L_{\lambda\mu} m_\mu,
\end{equation*}
where $\rho \mathrel{\mathop :}= \frac{1}{2} \sum_{\alpha \in \phi^+} \alpha$. By definition we have $q^{-\langle \rho,\lambda + \mu \rangle} L_{\lambda\mu} \in {\cal L}^-$. Moreover, since $P_\lambda(0) = s_\lambda$, the constant term of $q^{-\langle \rho,\lambda + \mu \rangle} L_{\lambda\mu}$ is $k_{\lambda\mu}$. So a combinatorial description of the $L_{\lambda\mu}$ yields a combinatorial description of the $k_{\lambda\mu}$. For non-dominant $\mu \in X^\vee$ we define $L_{\lambda\mu} = q^{\langle \rho, \mu - \mu^+ \rangle} L_{\lambda\mu^+}$, where $\mu^+ \in X^\vee_+$ is the unique dominant element in the $W$-orbit of $\mu$.
The main combinatorial tool for the description of the $L_{\lambda\mu}$ are galleries of generalized alcoves of a fixed type. For details on this and other unexplained notation see section~\ref{sec:galleries}. Following~\cite{gaussentlittelmann:03} we introduce positively folded galleries and associate to each positively folded gallery $\sigma$ a combinatorially defined polynomial $L_\sigma$ in definition~\ref{de:lsigma}. In section~\ref{sec:satakecoefficients} we prove (see also theorem~\ref{th:satakecoefficients} and the paragraph before it)
\begin{theorem}\label{th:basechange}
Let $\lambda \in X^\vee_+$ and $\mu \in X^\vee$. Denote by $t^\lambda$ the type of a minimal gallery from 0 to $\lambda$. Then $L_{\lambda\mu} = q^{-l(w_\lambda)} \sum_{\sigma} q^{l(w_0\iota(\sigma))} L_\sigma$. Here $w_\lambda \in W_\lambda$ is the maximal element and the sum is over all positively folded galleries $\sigma$ of type $t^\lambda$ and weight $\mu$ such that the initial direction $\iota(\sigma)$ is in the set of minimal representatives $W^\lambda$ of $W/W_\lambda$.
\end{theorem}
If $\Phi$ is of type $A$ one gets a combinatorial description of the $L_{\lambda\mu}$ with Young diagrams as a corollary of the results of Haglund, Haiman and Loehr on the monomial expansion of Macdonald polynomials~\cite{haglundhaimanloehr:05}.
We get a description of the $k_{\lambda\mu}$ in terms of galleries by evaluation at $q^{-1} = 0$. We introduce LS-galleries in definition~\ref{def:lsgalleries} (roughly speaking these are the galleries which survive the specialization $q^{-1} = 0$) and get
\begin{corollary}\label{co:kostkanumbers}
For $\lambda, \mu \in X^\vee_+$ the Kostka number $k_{\lambda\mu}$ is the number of LS-galleries of type $t^\lambda$ and weight $\mu$.
\end{corollary}
In definition~\ref{de:lsigma} we also introduce a second monic polynomial $C_\sigma$ for each gallery $\sigma$ which is closely related to $L_\sigma$. We prove that with this statistic one can calculate the structure constants of $\Lambda_q$ with respect to the Hall-Littlewood polynomials. More precisely, define $C_{\lambda\mu}^\nu$ for $\lambda,\mu,\nu \in X^\vee_+$ by
\begin{equation*}
P_\lambda(q^{-1}) P_\mu(q^{-1}) = \sum_{\nu \in X^\vee_+} q^{-\langle \rho, \mu - \lambda + \nu \rangle} C_{\lambda\mu}^\nu P_\nu(q^{-1}).
\end{equation*}
Let ${\cal C} \mathrel{\mathop :}= \setdef{x \in V^*}{\langle \alpha,x \rangle \geq 0 \text{ for all } \alpha \in \phi^+}$ be the dominant Weyl chamber. We prove in section~\ref{sec:structureconstants} (see also theorem~\ref{th:structureconstants} and the paragraph before it)
\begin{theorem}\label{th:producthalllittlewood}
Let $\lambda, \mu, \nu \in X^\vee_+$. Then $C_{\lambda\mu}^\nu = q^{-l(w_\mu)} \sum_{\sigma} q^{l(w_0\iota(\sigma))} C_\sigma F_{\mu\nu}^{\varepsilon(\sigma)}$. Here the sum is over all positively folded galleries $\sigma$ of type $t^\mu$ and weight $\nu$ starting in $\lambda$ such that they are contained in ${\cal C}$ and the final direction $\varepsilon(\sigma)$ is in $W_\nu W^{w_0\mu}$. The correction factor $F_{\mu\nu}^{\varepsilon(\sigma)}$ is contained in ${\cal L}^-$.
\end{theorem}
In~\cite{kapovichmillson:04} a similar formula for $C_{\lambda\mu}^\nu$, with some restrictions on $\lambda,\mu,\nu$, was given by Kapovich and Millson using geodesic triangles in a Euclidean building associated to the situation.
For $q^{-1} = 0$ theorem~\ref{th:producthalllittlewood} yields a Littlewood-Richardson rule in terms of galleries.
\begin{corollary}\label{co:littlewoodrichardson}
For $\lambda, \mu, \nu$ in $X^\vee_+$ the Littlewood-Richardson coefficient $c_{\lambda\mu}^\nu$ is the number of LS-galleries $\sigma$ of type $t^\mu$ and weight $\nu - \lambda$ such that the translated gallery $\lambda + \sigma$ is contained in ${\cal C}$.
\end{corollary}
The combinatorial descriptions in the corollaries~\ref{co:kostkanumbers} and~\ref{co:littlewoodrichardson} are more or less the same as the above mentioned descriptions in~\cite{gaussentlittelmann:03}. But our proof is quite different and does not use the combinatorial results stated there. See remark~\ref{re:comparisongaussentlittelmann} for some further details.
For proving theorems~\ref{th:basechange} and~\ref{th:producthalllittlewood} we use the Satake isomorphism to identify ${\mathbb Z}[q^{\pm\sfrac{1}{2}}][X^\vee]^W$ with the spherical Hecke algebra with equal parameters associated to $\Phi$. Under this isomorphism, Hall-Littlewood polynomials correspond (up to some factor) to the Macdonald basis and the monomial symmetric functions correspond to the monomial basis of the spherical Hecke algebra (see remark~\ref{re:equalparameters}). So the above theorems can be proven in the spherical Hecke algebra.
This is done in the slightly more general setting of spherical Hecke algebras with arbitrary parameters. In section~\ref{sec:galleries} we state our combinatorial formulas regarding Satake coefficients, i.e. the entries of the transition matrix from the monomial basis to the Macdonald basis, and the structure constants of the spherical Hecke algebra with respect to the Macdonald basis. The formulas are then proven in sections~\ref{sec:satakecoefficients} and~\ref{sec:structureconstants}. We define a basis of the affine Hecke algebra indexed by the generalized alcoves introduced in section~\ref{sec:affineweylgroup} and show that right multiplication of this alcove basis by elements of the standard basis can be calculated using galleries.
It is well known that the Satake coefficients form a triangular matrix with respect to the usual partial order on $X^\vee$, i.e. $\mu \leq \lambda$ iff $ \lambda - \mu = \sum_{\alpha \in \Delta} n_\alpha \alpha^\vee$ for nonnegative integers $n_\alpha$. With our combinatorial description of the Satake coefficients we can show that all remaining entries are in fact nonzero.
Now take the spherical Hecke algebra with coefficients in ${\mathbb C}$ such that all parameters are powers of a fixed prime $p$. Our description of the $L_{\lambda\mu}$ implies that $L_{\lambda\mu} > 0$ for $\lambda, \mu \in X^\vee_+$ with $\mu \leq \lambda$. This gives a combinatorial proof of a positivity result of Rapoport in~\cite{rapoport:00}.
In section~\ref{sec:commutation} we compute the transition matrix between the alcove basis and the standard basis. This yields a $q$-analog of a commutation formula of Pittie and Ram~(\cite{pittieram:99}) in terms of galleries.
In section~\ref{sec:geometricinterpretation} we give a geometric interpretation of our combinatorics in the case $X^\vee = Q^\vee$. Fix a prime power $\mathbf{q}$ and let ${\mathbb F}_\mathbf{q}$ be the finite field with $\mathbf{q}$ elements. Let $K$ be the algebraic closure of ${\mathbb F}_\mathbf{q}$. Let $G$ be a semisimple, simply connected algebraic group over $K$ with root datum $\Phi$ corresponding to some choice of a Borel $B \subset G$ and a maximal torus $T \subset B$. Assume that all groups are defined and split over ${\mathbb F}_\mathbf{q}$.
From the definition of the geometric Satake isomorphism (see for example the survey article~\cite{haineskottwitzprassad:03} of Haines, Kottwitz and Prasad) it is known that the evaluation at $\mathbf{q}$ of the coefficients $L_{\lambda\mu}$ gives the number of points of certain intersections in the affine Grassmannian of $G$ over ${\mathbb F}_\mathbf{q}$. We show that our combinatorics reflects this interpretation. Using results of Billig and Dyer in~\cite{billigdyer:94} we show that the galleries occurring in theorem~\ref{th:basechange}, together with the associated coefficients, parameterize decompositions of these intersections. We also give an interpretation of the alcove basis in this context.
\emph{Acknowledgments.} The author would like to thank P.~Littelmann and A.~Ram for various helpful suggestions and discussions.
\section{Affine Weyl group and alcoves}
\label{sec:affineweylgroup}
In this section we recall some facts on the (extended) affine Weyl group and on alcoves as in~\cite{bourbaki:81}. Furthermore, we introduce generalized alcoves.
The group $Q^\vee$ acts on $V^*$ by translations. The affine Weyl group is defined as the semidirect product $\Waffin = W \ltimes Q^\vee$. It acts on $V^*$ by affine transformations. For $\lambda \in Q^\vee$ denote by $\tau_\lambda \in \Waffin$ the associated translation. The affine Weyl group is generated by its affine reflections. Let $H^\mathfrak{a}$ be the union of all reflection hyperplanes of reflections in $\Waffin$. Then $H^\mathfrak{a} = \bigcup_{\alpha \in \phi^+, m \in \mathbb{Z}} H_{\alpha,m}$, where $H_{\alpha,m} = \setdef{x \in V^*}{\langle \alpha, x \rangle = m}$. Let $H_{\alpha,m}^{\pm} = \setdef{x \in V^*}{\langle \alpha, x \rangle \gtrless m}$ be the associated affine half spaces.
The connected components of $V^* \setminus H^\mathfrak{a}$ are called open alcoves. Their closures are the alcoves in $V^*$. Denote by $\mathcal{A}$ the set of all alcoves. The action of $\Waffin$ on $\mathcal{A}$ is free and transitive. For $A \in \mathcal{A}$ and $\lambda \in Q^\vee$ we have $\tau_\lambda A = \lambda + A = \setdef{\lambda + x}{x \in A}$.
The fundamental alcove $A_f = \{x \in V^*| 0 \leq \langle \alpha, x \rangle \leq 1 \text{ for all } \alpha \in \phi^+\} \in \mathcal{A}$ is a fundamental domain for the $\Waffin$-action on $V^*$. We get a bijection $\Waffin \rightarrow \mathcal{A}, w \mapsto A_w \mathrel{\mathop :}= wA_f$.
A face $F$ of an alcove $A$ is an intersection $F = A \cap H$ such that $H \subset H^\mathfrak{a}$ is a reflection hyperplane and $\Aff{F} = H$. Here $\Aff{F}$ is the affine subspace spanned by $F$. A wall of $A$ is some hyperplane $H \subset H^\mathfrak{a}$ such that $H \cap A$ is a face of $A$. The group $\Waffin$ is generated by the reflections $S^\mathfrak{a}$ at the walls of $A_f$. One has $S^\mathfrak{a} = S \cup \{s_{01}, \ldots, s_{0c}\}$, where $S = \{s_1, \ldots, s_l\}$ is the set of simple reflections of $W$ and $s_{0k}$ is the affine reflection at $H_{\theta_k,1}$. Here $\setdef{\theta_k}{k = 1, \ldots, c} \subset \phi^+$ is the set of maximal roots with respect to the partial order on $X^\vee$, so $c$ is the number of irreducible components of the Dynkin diagram of $\Phi$. Moreover, $(\Waffin, S^\mathfrak{a})$ is a Coxeter system.
Let $F$ be a face of $A_f$. The type of $F$ is the reflection at $\Aff{F}$. Extend this definition to all faces by demanding that the $\Waffin$-action preserves types.
Right multiplication in $\Waffin$ induces a right action of $\Waffin$ on $\mathcal{A}$. For $A \in \mathcal{A}$ and $s \in S^\mathfrak{a}$ the alcove $As$ is the unique alcove not equal to $A$ having a common face of type $s$ with $A$. Let $F_s \subset A$ be the face of type $s$ and $\Aff{F_s} = H_{\alpha,m}$ for some $\alpha \in \phi^+$ and $m \in {\mathbb Z}$. The hyperplane $H_{\alpha,m}$ is called the separating hyperplane between $A$ and $As$. Call $A$ negative with respect to $s$ if $A$ is contained in $H_{\alpha,m}^-$ and denote this by $A \prec As$; accordingly, $A$ is called positive with respect to $s$ if $As$ is negative with respect to $s$. We have $A \prec As$ iff $\lambda + A \prec \lambda + As$ for all $\lambda \in Q^\vee$.
\begin{example}\label{ex:alcoveorder}
\begin{itemize}
\item For $A_w$ and $A_{ws}$ in the dominant chamber ${\cal C}$ we have $A_w \prec A_{ws}$ iff $w < ws$, where '$\leq$' is the usual Bruhat order on $\Waffin$.
\item Let $w \in W$ and $s \in S$. Then $A_w \prec A_{ws}$ iff $w > ws$.
\end{itemize}
\end{example}
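\begin{example}
For illustration let $\Phi$ be of type $A_1$ with $\phi^+ = \{\alpha\}$ and identify $V^* \cong {\mathbb R}$ via $x \mapsto \langle \alpha, x \rangle$. The reflection hyperplanes are the integers, the alcoves are the intervals $[m, m+1]$ for $m \in {\mathbb Z}$ and $A_f = [0,1]$. Here $S^\mathfrak{a} = \{s_1, s_{01}\}$, where $s_1$ is the reflection at $0$ and $s_{01}$ the affine reflection at $1$. The face of $A_f$ of type $s_1$ is $\{0\}$, so $A_f s_1 = [-1,0] = A_{s_1}$ with separating hyperplane $H_{\alpha,0}$. Since $A_{s_1} \subset H_{\alpha,0}^-$ we get $A_{s_1} \prec A_f$, in accordance with the second item of example~\ref{ex:alcoveorder}.
\end{example}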
There is also a natural action of $X^\vee$ on $V^*$ by translations. So we can extend the above definition and get the extended affine Weyl group $\tilde{W}^\mathfrak{a} \mathrel{\mathop :}= W \ltimes X^\vee$. Extending the above notation write $\tau_\mu$ for the translation by $\mu \in X^\vee$. The action of $\tilde{W}^\mathfrak{a}$ on $\mathcal{A}$ is no longer free, nor is it type preserving. The stabilizer $\Omega$ of $A_f$ is isomorphic to $X^\vee/Q^\vee$. The isomorphism is given by sending $g \in \Omega$ to the class of $g(0)$. So a set of representatives is given by $X^\vee \cap A_f$. We have $\tilde{W}^\mathfrak{a} \cong \Omega \ltimes \Waffin$ and every element $v \in \tilde{W}^\mathfrak{a}$ can be written as $v = wg$ for unique $w \in \Waffin$ and $g \in \Omega$. Although $\tilde{W}^\mathfrak{a}$ is no longer a Coxeter group, we can extend the definition of the length function by setting $l(v) = l(w)$. So multiplication by elements of $\Omega$ does not change the length. One can also extend the Bruhat order to $\tilde{W}^\mathfrak{a}$ as follows: Let $v = wg$ and $v' = w'g' \in \tilde{W}^\mathfrak{a}$ such that $w, w' \in \Waffin$ and $g,g' \in \Omega$. Then $v \leq v'$ iff $g = g'$ and $w \leq w'$ (in the usual Bruhat order on $\Waffin$).
As mentioned above, the action of $\tilde{W}^\mathfrak{a}$ on $\mathcal{A}$ is no longer free. So we introduce generalized alcoves $\tilde{\Alko}$ in order to work with the extended affine Weyl group as follows: Take an alcove $A \in \mathcal{A}$. Then some conjugate of $\Omega$ acts transitively on $A \cap X^\vee$ and this intersection is in natural bijection to $X^\vee / Q^\vee$. Define $\tilde{\Alko} \mathrel{\mathop :}= \setdef{(A,\mu) \in \mathcal{A} \times X^\vee}{\mu \in A}$. There is a natural embedding $\mathcal{A} \hookrightarrow \tilde{\Alko}$ sending an alcove $A \in \mathcal{A}$ to $(A,\mu)$ where $\mu$ is the unique element in $A \cap Q^\vee$. We identify $\mathcal{A}$ with its image in $\tilde{\Alko}$. We have a natural free $\tilde{W}^\mathfrak{a}$-action on $\tilde{\Alko}$ given by the natural action on the two components. In particular, $X^\vee$ acts on $\tilde{\Alko}$ by translations in both components. For $\lambda \in X^\vee$ and $A \in \tilde{\Alko}$ we also write $\lambda + A$ for $\tau_\lambda A$.
We get a bijection $\tilde{W}^\mathfrak{a} \rightarrow \tilde{\Alko}, w \mapsto A_w \mathrel{\mathop :}= w A_f$ extending the bijection $\Waffin \to \mathcal{A}$. In the same way as above we also get a right $\tilde{W}^\mathfrak{a}$-action on $\tilde{\Alko}$ where $\Omega$ acts only on the second factor. The definitions of face and type of a face carry over to this situation by demanding that $\tilde{W}^\mathfrak{a}$ acts type preserving.
Every generalized alcove $A$ is of the form $\mu + A_w$ for unique $\mu \in X^\vee$ and $w \in W$. Then $\mu$ is called the weight of $A$ and $w$ its direction. Denote this by $wt(A) \mathrel{\mathop :}= \mu$ and $\delta(A) \mathrel{\mathop :}= w$.
In various circumstances we will deal with stabilizer subgroups of $W$. We use the following notation for some notions related to them.
\begin{definition}\label{def:stabilizers}
Let $\mu \in X^\vee$ and $W_\mu \subset W$ its stabilizer. The maximal element of $W_\mu$ is denoted by $w_\mu$, the minimal representatives of $W/W_\mu$ by $W^\mu$ and the minimal element in the coset $\tau_\mu W$ by $n^\mu$.
\end{definition}
In particular, $W = W_0$ and $w_0$ is the longest element in $W$. We will frequently use some facts about the length function on $\tilde{W}^\mathfrak{a}$ summarized in
\begin{lemma}\label{le:lengthfunction}
Let $\lambda \in X^\vee_+$.
\begin{enumerate}
\item We have $l(\tau_\lambda) = 2 \langle \rho, \lambda \rangle$. In particular, $l$ is additive on $X^\vee_+$.
\item One has $\tau_\lambda w_\lambda = n^\lambda w_0$ and $l(\tau_\lambda) + l(w_\lambda) = l(n^\lambda) + l(w_0)$. Moreover, $n^\lambda \in W \tau_\lambda W$ is minimal.
\end{enumerate}
\end{lemma}
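\begin{example}
As a quick check in the smallest case, let $\Phi$ be of type $A_1$ with $\phi^+ = \{\alpha\}$ and take $\lambda = \alpha^\vee$. Writing $s_1 = s_\alpha$ and $s_{01}$ for the affine reflection at $H_{\alpha,1}$ one has $\tau_{\alpha^\vee} = s_{01} s_1$, so $l(\tau_{\alpha^\vee}) = 2 = 2 \langle \rho, \alpha^\vee \rangle$. Since $\lambda$ is regular we have $w_\lambda = id$, and indeed $n^\lambda = \tau_\lambda s_1 = s_{01}$ is the minimal element of $\tau_\lambda W = \{s_{01} s_1, s_{01}\}$ with $\tau_\lambda w_\lambda = s_{01} s_1 = n^\lambda w_0$ and $l(\tau_\lambda) + l(w_\lambda) = 2 = l(n^\lambda) + l(w_0)$.
\end{example}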
\section{Affine Hecke algebra}
\label{sec:affinehecke}
Details on the affine Hecke algebra of a root datum with unequal parameters can be found in Lusztig's article~\cite{lusztig:89}.
For defining the affine Hecke algebra we first have to fix parameters. Let $d : S^\mathfrak{a} \to {\mathbb N}$ be invariant under conjugation by elements of $\tilde{W}^\mathfrak{a}$. Let ${\cal L} = {\mathbb Z}\big[q^{\pm\sfrac{1}{2}}\big]$ and define $q_s = q^{d(s)}$ for $s \in S^\mathfrak{a}$. For $v \in \Waffin$ we set $q_v = \prod_{j = 1}^k q_{t_{j}}$ where $v = t_{1} \cdot \ldots \cdot t_{k}$ with $t_i \in S^\mathfrak{a}$ is a reduced decomposition of $v$. For arbitrary $v \in \tilde{W}^\mathfrak{a}$ let $q_v = q_{v'}$ where $v = v'g$ with $v' \in \Waffin$ and $g \in \Omega$.
For a subset $H \subset W$ define $H(q) = \sum_{w \in H} q_w$ and $H(q^{-1}) = \sum_{w \in H} q_w^{-1}$.
The affine Hecke algebra $\tilde{\kh}^\mathfrak{a}$ associated to the root datum $\Phi$ and the above choice of $d$ is an ${\cal L}$-algebra defined as follows: As an ${\cal L}$-module it is free with basis $\{T_w\}_{w \in \tilde{W}^\mathfrak{a}}$ and multiplication is given by
\begin{itemize}
\item $T_s^2 = q_s T_{id} + (q_s-1) T_s$ for all $s \in S^\mathfrak{a}$.
\item $T_v T_w = T_{vw}$ for all $v,w \in \tilde{W}^\mathfrak{a}$ such that $l(vw) = l(v) + l(w)$.
\end{itemize}
On $\tilde{\kh}^\mathfrak{a}$ there is a natural ${\mathbb Z}$-algebra involution $\overline{\cdot} : \tilde{\kh}^\mathfrak{a} \to \tilde{\kh}^\mathfrak{a}$. It is given by $\overline{T}_w = T_{w^{-1}}^{-1}$ for $w \in \tilde{W}^\mathfrak{a}$ and $\overline{q^{j}} = q^{-j}$.
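Note that the quadratic relations determine the inverses of the generators: from $T_s^2 = q_s T_{id} + (q_s-1) T_s$ one gets $T_s \big( T_s - (q_s-1) T_{id} \big) = q_s T_{id}$ and hence
\begin{equation*}
\overline{T}_s = T_s^{-1} = q_s^{-1} T_s + (q_s^{-1}-1) T_{id}
\end{equation*}
for $s \in S^\mathfrak{a}$. Together with $T_g^{-1} = T_{g^{-1}}$ for $g \in \Omega$ this determines $\overline{\cdot}$ on all of $\tilde{\kh}^\mathfrak{a}$.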
For $\lambda \in X^\vee_+$ define $q_\lambda = q^{\sfrac{1}{2}\sum_{j = 1}^k d(t_{j})}$ where $\tau_\lambda = t_1 \cdot \ldots \cdot t_k g$ is a reduced decomposition with $g \in \Omega$. So we have $q_\lambda^2 = q_{\tau_\lambda}$. For arbitrary $\mu \in X^\vee$ define $q_\mu \mathrel{\mathop :}= q_\lambda q_{\lambda'}^{-1}$ where $\lambda, \lambda' \in X^\vee_+$ such that $\mu = \lambda - \lambda'$. Then $q_\mu$ is independent of the particular choice of $\lambda, \lambda'$ because of the additivity of the length function on $X^\vee_+$ (see lemma~\ref{le:lengthfunction}).
For each $\mu \in X^\vee$ define an element $X_\mu \in \tilde{\kh}^\mathfrak{a}$ by $X_\mu \mathrel{\mathop :}= q_\mu^{-1} T_{\tau_\lambda} T_{\tau_{\lambda'}}^{-1}$ where as above $\mu = \lambda - \lambda'$ with $\lambda, \lambda' \in X^\vee_+$. For the same reason as above, $X_\mu$ does not depend on the choice of $\lambda$ and $\lambda'$. In particular we have $X_\lambda = q_\lambda^{-1} T_{\tau_\lambda}$ for all dominant $\lambda$. Thus one gets an inclusion of ${\cal L}$-algebras
\begin{align*}
{\cal L}[X^\vee] & \hookrightarrow \tilde{\kh}^\mathfrak{a} \\
x^\nu & \mapsto X_\nu
\end{align*}
and the image of ${\cal L}[X^\vee]^W$ is the center of $\tilde{\kh}^\mathfrak{a}$. We identify ${\cal L}[X^\vee]$ with its image.
Define $\mathbf{1}_0 = \sum_{w \in W} T_w \in \tilde{\kh}^\mathfrak{a}$. We have $T_w \mathbf{1}_0 = q_w \mathbf{1}_0$ for $w \in W$ and $\mathbf{1}_0^2 = \W{} \mathbf{1}_0$.
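Both identities follow directly from the defining relations: for $s \in S$ pair each $w \in W$ with $l(w) < l(sw)$ with $sw$ and compute
\begin{equation*}
T_s (T_w + T_{sw}) = T_{sw} + q_s T_w + (q_s-1) T_{sw} = q_s (T_w + T_{sw}).
\end{equation*}
Summing over all such pairs gives $T_s \mathbf{1}_0 = q_s \mathbf{1}_0$, hence $T_w \mathbf{1}_0 = q_w \mathbf{1}_0$ by induction on $l(w)$, and then $\mathbf{1}_0^2 = \sum_{w \in W} T_w \mathbf{1}_0 = \sum_{w \in W} q_w \mathbf{1}_0 = \W{} \mathbf{1}_0$.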
The spherical Hecke algebra ${\cal H}^{sph}$ is defined by
\begin{equation*}
{\cal H}^{sph} = \bigsetdef{h \in \frac{1}{\W{}} \tilde{\kh}^\mathfrak{a}}{T_w h = h T_w = q_w h \text{ for all } w \in W}.
\end{equation*}
The Macdonald basis of ${\cal H}^{sph}$ is given by $\{M_\lambda\}_{\lambda \in X^\vee_+}$ where
\begin{align*}
M_\lambda & \mathrel{\mathop :}= \frac{1}{\W{}} \sum_{w \in W \tau_\lambda W} T_w = \frac{1}{\W{} \W{\lambda}} \mathbf{1}_0 T_{n^\lambda} \mathbf{1}_0\\
& = \frac{q_\lambda q_{w_0}^{-1}}{\W{} \Winv{\lambda}} \mathbf{1}_0 X_\lambda \mathbf{1}_0.
\end{align*}
The second equality follows from lemma~\ref{le:lengthfunction} which yields $X_\lambda = q_{-\lambda} T_{n^\lambda} T_{w_0} \overline{T}_{w_\lambda}$. Moreover, $\Winv{\lambda} = q_{w_\lambda}^{-1} \W{\lambda}$. One obtains an isomorphism (Satake)
\begin{align*}
{\cal L}[X^\vee]^W & \xrightarrow[]{\cong} {\cal H}^{sph} \\
x & \mapsto \frac{1}{\W{}} x \mathbf{1}_0 \\
\intertext{In particular, we have}
m_\lambda & \mapsto Y_\lambda \mathrel{\mathop :}= \frac{1}{\W{}} \sum_{\mu \in W\lambda} X_\mu \mathbf{1}_0
\end{align*}
So we have two bases for ${\cal H}^{sph}$: The natural basis given by the Macdonald basis $\{M_\lambda\}_{\lambda \in X^\vee_+}$ and the monomial basis $\{Y_\lambda\}_{\lambda \in X^\vee_+}$ given by the images of the monomial symmetric functions under the Satake isomorphism. We are interested in the transition matrix from the monomial basis to the natural basis. (Re)define the family $\{L_{\lambda\mu}\}_{\lambda, \mu \in X^\vee_+}$ as modified entries of this transition matrix. More precisely, we have \begin{equation*}
M_\lambda = \sum_{\mu \in X^\vee_+} q_\mu^{-1} L_{\lambda\mu} Y_\mu.
\end{equation*}
For arbitrary $\mu \in X^\vee$ and dominant $\lambda \in X^\vee_+$ we set $L_{\lambda\mu} = q_{\mu-\mu^+} L_{\lambda\mu^+}$ where $\mu^+$ is the unique dominant element in the $W$-orbit of $\mu$.
We also calculate the structure constants of the spherical Hecke algebra with respect to the Macdonald basis. For this, (re)define $\{C_{\lambda\mu}^\nu\}_{\lambda,\mu,\nu \in X^\vee_+}$ as modified structure constants by
\begin{equation*}
M_\lambda M_\mu = \sum_{\nu \in X^\vee_+} q_{\lambda-\nu}^2 C_{\lambda\mu}^\nu M_\nu.
\end{equation*}
In the next chapter we give a description of the $L_{\lambda\mu}$ and the $C_{\lambda\mu}^\nu$ using galleries.
\begin{remark}\label{re:generalparameters}
This is not the most general choice of parameters for which the affine Hecke algebra is defined and where the theorems~\ref{th:satakecoefficients} and~\ref{th:structureconstants} are true. One important example is the following: Replace ${\cal L}$ by the image of the morphism ${\cal L} \to {\mathbb C}$ evaluating the variable $q$ at some fixed prime power. Hecke algebras of reductive groups over local fields are of this form (compare the end of section~\ref{sec:geometricinterpretation} for the case of equal parameters).
\end{remark}
\begin{remark}\label{re:equalparameters}
Now we want to clarify the relations of this section to symmetric polynomials and their $q$-analogs. In particular, we describe the relation between the coefficients defined above and the ones with the same names in section~\ref{sec:introduction}.
To this end, consider the case of equal parameters, i.e. $d(s) = 1$ for all $s \in S^\mathfrak{a}$. In this case we have $q_v = q^{l(v)}$ for $v \in \tilde{W}^\mathfrak{a}$ and $q_\mu = q^{\langle \rho, \mu \rangle}$ for $\mu \in X^\vee$. It is known (see for example~\cite[theorem 2.9]{nelsenram:03}) that the image of $P_\lambda(q^{-1})$ under the Satake isomorphism is $q_{-\lambda} M_\lambda$. So comparing the definitions of the $L_{\lambda\mu}$ and $C_{\lambda\mu}^\nu$ in section~\ref{sec:introduction} with the ones given here shows that the former are special cases of the latter. So the theorems stated there will follow from theorems~\ref{th:satakecoefficients} and~\ref{th:structureconstants} given in the next section.
\end{remark}
\section{Galleries}
\label{sec:galleries}
In this section we introduce galleries and some polynomials associated to them. We then give a precise meaning to the theorems stated in the introduction in the general setting of the last section. The galleries used here are a slight generalization of the usual galleries in a Coxeter complex since we regard generalized alcoves instead of alcoves.
\begin{definition}\label{de:galleries}
Let $t = (t_1, \ldots, t_k)$ with $t_i \in S^\mathfrak{a} \cup \Omega$. Let $s \in S^\mathfrak{a}$.
\begin{itemize}
\item
A gallery $\sigma$ of type $t$ connecting generalized alcoves $A$ and $B$ is a sequence $(A = A_0, \ldots, B = A_k)$ of generalized alcoves such that $A_{i+1} = A_i t_{i+1}$ if $t_{i+1} \in \Omega$ and $A_{i+1} \in \{A_i, A_i t_{i+1}\}$ if $t_{i+1} \in S^\mathfrak{a}$. In the case of $t_{i+1} \in S^\mathfrak{a}$ this means that $A_i$ and $A_{i+1}$ have a common face of type $t_{i+1}$.
\item
The initial direction $\iota(\sigma)$ is defined to be the direction $\delta(A_0)$ of the first generalized alcove. The weight $wt(\sigma)$ of $\sigma$ is $wt(A_k)$, the ending $e(\sigma)$ is $A_k$ and the final direction $\varepsilon(\sigma)$ is $\delta(A_k)$.
\item
The gallery $\sigma$ has a positive $s$-direction at $i$ if $t_{i+1} = s$, $A_{i+1} = A_i s$ and $A_i$ is negative with respect to $s$, i.e. $A_i \prec A_{i+1}$. The separating hyperplane is the wall of $A_i$ corresponding to the face of type $s$.
\item
The gallery $\sigma$ is $s$-folded at $i$ if $t_{i+1} = s$ and $A_{i+1} = A_i$. The folding hyperplane is the wall of $A_i$ corresponding to the face of type $s$. The folding is positive if $A_i \succ A_i s$.
\end{itemize}
We call $\sigma$ positively folded if all occurring foldings are positive. A gallery is said to be minimal if it is of minimal length among all galleries connecting the same generalized alcoves.
\end{definition}
For the precise statement on the $L_{\lambda\mu}$ and the $C_{\lambda\mu}^\nu$ we need some statistics on galleries.
\begin{definition}\label{de:lsigma}
Let $\sigma$ be a positively folded gallery of type $t$. For $s \in S^\mathfrak{a}$ define
\begin{itemize}
\item $m_s(\sigma)$ the number of positive $s$-directions.
\item $n_s(\sigma)$ the number of positive $s$-folds.
\item $r_s(\sigma)$ the number of positive $s$-folds such that the folding hyperplane is not a wall of the dominant chamber ${\cal C}$.
\item $p_s(\sigma)$ the number of positive $s$-folds such that the folding hyperplane is a wall of~${\cal C}$.
\end{itemize}
In particular, $r_s(\sigma) + p_s(\sigma) = n_s(\sigma)$. Now we can define
\begin{itemize}
\item $L_\sigma = \prod_{s \in S^\mathfrak{a}} q_s^{m_s(\sigma)} (q_s-1)^{n_s(\sigma)}$ and
\item $C_\sigma = \prod_{s \in S^\mathfrak{a}} q_s^{m_s(\sigma)+p_s(\sigma)} (q_s-1)^{r_s(\sigma)}$.
\end{itemize}
\end{definition}
For a gallery $\sigma$ such that no folding hyperplane is a wall of ${\cal C}$ one has $L_\sigma = C_\sigma$. In the case of equal parameters (see remark~\ref{re:equalparameters}) we have $\deg L_\sigma = \deg C_\sigma$ for any gallery $\sigma$.
Fix some type $t = (t_1, \ldots, t_k)$. For $A \in \tilde{\Alko}$ and $\mu \in X^\vee$ let $\Gamma^+_t(A,\mu)$ be the set of all positively folded galleries of type $t$ starting in $A$ with weight $\mu$. Further let $\Gamma^+_t(\mu) = \coprod_{w \in W} \Gamma_t^+(A_w,\mu)$ be the set of all positively folded galleries of weight $\mu$ starting in the origin and let $\Gamma^+_{t}$ be the set of all positively folded galleries starting in the origin. Define
\begin{equation*}
L_t (\mu) \mathrel{\mathop :}= \sum_{\sigma \in \Gamma^+_t (\mu)} q_{w_0 \iota(\sigma)} L_\sigma.
\end{equation*}
So there is an additional contribution measuring the distance from $-A_f$ to the initial alcove.
\begin{remark}\label{re:alternativedef}
There is an alternative way of defining $L_t(\mu)$: For any $w \in W$ choose a minimal gallery $\sigma_w$ of type $t_w$ which connects $-A_f$ and $A_w$. Then $\sigma_w$ is a nonfolded gallery of length $l(w_0w) = l(w_0) - l(w)$ and it has only positive directions. The positively folded galleries of type $t_w' = (t_w,t)$ beginning in $-A_f$ correspond to the positively folded galleries of type $t$ starting in $A_{w}$. We get
\begin{equation*}
L_t(\mu) = \sum_{w \in W} \Big( \sum_{\sigma \in \Gamma^+_{t_w'} (-A_f,\mu)} L_\sigma \Big).
\end{equation*}
\end{remark}
Now we can give the formula for the Satake coefficients. Let $\lambda \in X^\vee_+$ and recall the notation introduced in definition~\ref{def:stabilizers}. Let $\sigma^\lambda$ be a minimal gallery connecting $A_f$ and $A_{n^\lambda}$ and denote its type by $t^\lambda$. Using the last definition we get polynomials $L_{t^\lambda}(\mu)$ for all $\mu \in X^\vee$. Up to some factor these are the $L_{\lambda\mu}$. More precisely we prove in section~\ref{sec:satakecoefficients}:
\begin{theorem}\label{th:satakecoefficients}
For $\mu \in X^\vee$ we have
\begin{equation*}
L_{\lambda\mu} = \frac{1}{W_\lambda(q)} L_{t^\lambda} (\mu).
\end{equation*}
Furthermore,
\begin{equation*}
L_{\lambda\mu} = q_{w_\lambda}^{-1} \sum_{\substack{\sigma \in \Gamma^+_{t^\lambda} (\mu)\\ \iota(\sigma) \in W^\lambda}} q_{w_0 \iota(\sigma)} L_\sigma.
\end{equation*}
In particular the $L_{t^\lambda}(\mu)$ do not depend on the choice of the minimal gallery $\sigma^\lambda$ and $L_{t^\lambda} (\mu) = q_{\mu - w\mu} L_{t^\lambda} (w\mu)$ for all $w \in W$.
\end{theorem}
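\begin{example}
We illustrate the theorem in the smallest case. Let $\Phi$ be of type $A_1$ with equal parameters, $\phi^+ = \{\alpha\}$, $W = \{e, s_1\}$ and $\lambda = \alpha^\vee$. Then $\tau_\lambda = s_{01} s_1$ and $n^\lambda = s_{01}$, so we can take $t^\lambda = (s_{01})$. There are exactly three positively folded galleries of this type starting in the origin: crossing $H_{\alpha,1}$ starting in $A_e$ (a positive $s_{01}$-direction, weight $\alpha^\vee$, $L_\sigma = q$), crossing $H_{\alpha,-1}$ starting in $A_{s_1}$ (a negative direction, weight $-\alpha^\vee$, $L_\sigma = 1$) and a positive fold at $H_{\alpha,-1}$ starting in $A_{s_1}$ (weight $0$, $L_\sigma = q-1$). Taking into account the initial contributions $q_{w_0 \iota(\sigma)}$ we obtain
\begin{equation*}
L_{t^\lambda}(\alpha^\vee) = q^2, \qquad L_{t^\lambda}(0) = q-1, \qquad L_{t^\lambda}(-\alpha^\vee) = 1.
\end{equation*}
Since $\lambda$ is regular we have $W_\lambda(q) = 1$ and thus $L_{\lambda\mu} = L_{t^\lambda}(\mu)$. Moreover, $L_{t^\lambda}(\alpha^\vee) = q_{2\alpha^\vee} L_{t^\lambda}(-\alpha^\vee)$ in accordance with the last statement of the theorem.
\end{example}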
\begin{remark}
One of the surprising implications of the last theorem is the $W$-invariance of the $L_{t^\lambda} (\mu)$ up to some power of $q$. This is surprising because even the cardinality of the sets $\Gamma^+_{t^\lambda} (w\mu)$ depends on $w$.
\end{remark}
\begin{remark}\label{re:minimalgalleries}
Let $w \in \Waffin$. The choice of a minimal gallery $\sigma$ connecting $A_f$ and $A_w$ is equivalent to the choice of a reduced expression for $w$. Let $t = (t_1, \ldots, t_k)$ be the type of $\sigma$. Then we have the reduced expression $w = t_1 \cdot \ldots \cdot t_k$.
Let $v \in \tilde{W}^\mathfrak{a}$. Then $v$ can be written as $v = wg$ with $w \in \Waffin$ and $g \in \Omega$. A minimal gallery $\sigma$ from $A_f$ to $A_v$ is given by a minimal gallery from $A_f$ to $A_w$ extended by $A_v$. So one can always arrange that at most the last entry of the type of a minimal gallery is in $\Omega$.
\end{remark}
Now it is quite natural to ask when $\Gamma^+_{t^\lambda}(\mu) \neq \emptyset$. Although the definition of galleries is a combinatorial one, it seems hard to give a combinatorial proof for the existence (or non-existence) of a gallery of given type and weight. Let $\sigma$ be any gallery of type $t^\lambda$ starting in $0$, ending in $A_v$ and of weight $\mu$. Since the folding hyperplanes are root hyperplanes we always have $\lambda - \mu \in Q^\vee$. Moreover, $v \leq \iota(\sigma) n^\lambda$ by definition of the Bruhat order on $\tilde{W}^\mathfrak{a}$. This implies $\mu^+ \leq \lambda$. This also follows from the well-known fact that the transition matrix from the monomial basis to the Macdonald basis is triangular with respect to the dominance ordering on~$X^\vee_+$.
The question of the existence of a gallery in $\Gamma^+_{t^\lambda}(\mu)$ does not depend on the choice of parameters $d$. So we can take $d = 1$ as in remark~\ref{re:equalparameters}. Since $P_\lambda(q^{-1})$ and $m_\mu$ are contained in~$\Lambda_q$ we have $q^{-\langle \rho, \lambda + \mu \rangle} L_{\lambda\mu} \in {\cal L}^-$. Moreover, $q^{-l(w_\lambda)} \W{\lambda} = \Winv{\lambda} \in {\cal L}^-$ and thus $q^{-\langle \rho, \lambda + \mu \rangle - l(w_\lambda)} L_{t^\lambda}(\mu) \in {\cal L}^-$. So we get the upper bound
\begin{equation}\label{eq:dimensionestimate}
\deg(L_\sigma) + l(w_0 \iota(\sigma)) \leq \langle \rho, \mu + \lambda \rangle + l(w_\lambda)
\end{equation}
for all $\sigma \in \Gamma^+_{t^\lambda}(\mu)$. The galleries with maximal degree are of special interest. So define
\begin{definition}\label{def:lsgalleries}
A gallery $\sigma \in \Gamma^+_{t^\lambda}$ is an LS-gallery if we have equality in the above equation, i.e. $\deg(L_\sigma) + l(w_0 \iota(\sigma)) = \langle \rho, wt(\sigma) + \lambda \rangle + l(w_\lambda)$.
\end{definition}
Since $L_\sigma$ is monic we get corollary~\ref{co:kostkanumbers} by evaluating theorem~\ref{th:satakecoefficients} at $q^{-1} = 0$.
The number of LS-galleries in $\Gamma_{t^\lambda}(w\mu)$ is $W$-invariant. This follows from the $W$-invariance (up to a power of $q$) of $L_{t^\lambda}(w\mu)$ in theorem~\ref{th:satakecoefficients}. Now let $\mu \in X^\vee$ such that $\mu^+ \leq \lambda$. We know from representation theory that $k_{\lambda\mu^+} > 0$. So there exists an LS-gallery in $\Gamma^+_{t^\lambda}(\mu^+)$ and thus also in $\Gamma^+_{t^\lambda}(\mu)$ by the $W$-invariance.
Summarizing all this in the following corollary answers the above question on the existence of galleries with a given weight and sharpens the triangularity.
\begin{corollary}\label{co:lsgalleries}
The number of LS-galleries in $\Gamma^+_{t^\lambda} (\mu)$ is $k_{\lambda\mu^+}$. In particular we have $\Gamma^+_{t^\lambda} (\mu) \neq \emptyset$ iff $\mu$ occurs as a weight in $V(\lambda)$, i.e. $\mu^+ \leq \lambda$. Moreover, we have (for arbitrary parameters) $L_{\lambda\mu} \neq 0$ iff $\mu \leq \lambda$.
\end{corollary}
Specializing $q$ to some prime power we get $L_{\lambda\mu} > 0$ for all $\mu \leq \lambda$. This was shown by Rapoport in the case of spherical Hecke algebras of a reductive group over a local field~\cite{rapoport:00}.
\begin{remark}\label{re:comparisongaussentlittelmann}
For regular $\lambda$ the definition of galleries coincides with the one given in~\cite{gaussentlittelmann:03}. Instead of using generalized alcoves they regard galleries of alcoves together with an initial and a final weight in $X^\vee$ contained in the first and last alcove, respectively. This is equivalent to our definition since we can always arrange that at most the last component of $t^\lambda$ is in $\Omega$ (compare remark~\ref{re:minimalgalleries}). For nonregular $\lambda$ they regard degenerate alcoves. This is more or less the same as our choice of the initial direction. See also remark~\ref{re:minimalrepresentative} for a discussion of this choice.
The proof of corollary~\ref{co:kostkanumbers} in~\cite{gaussentlittelmann:03} is quite different from here. They define root operators on the set of all galleries of type~$t^\lambda$ starting in the origin. Then they show that the subset of LS-galleries is closed under these operators and defines the highest weight crystal with highest weight~$\lambda$.
\end{remark}
We now give the formula for the structure constants, replacing $L_\sigma$ by $C_\sigma$. So let $\lambda \in X^\vee_+$ and let $t$ be any type. Define $\Gamma^d_{t,\lambda}$ as the set of all positively folded galleries of type $t$ starting in $\lambda$ which are contained in the dominant chamber. Here we allow folding hyperplanes to be contained in the walls of $\mathcal{C}$. For $\nu \in X^\vee_+$ let $\Gamma^d_{t,\lambda}(\nu) \subset \Gamma^d_{t,\lambda}$ be the subset of galleries of weight $\nu$. Define
\begin{equation*}
C_{\lambda t}(\nu) = \sum_{\sigma \in \Gamma^d_{t,\lambda}(\nu)} q_{w_0 \iota(\sigma)} C_\sigma.
\end{equation*}
Now let $\lambda,\mu \in X^\vee_+$ and let $t^\mu$ be the type of a minimal gallery connecting $A_f$ and $A_{n^\mu}$ where $n^\mu \in \tau_\mu W$ is the minimal representative in $\tau_\mu W$. The above definition yields $C_{\lambda t^\mu}(\nu)$ for any $\nu \in X^\vee_+$. Define $F_{\mu\nu}^w \mathrel{\mathop :}= q_{w} \sum_{v \in W^{w_0\mu} \cap W_\nu w} q_v^{-1}$ for $\mu, \nu \in X^\vee_+$ and $w \in W$. In section~\ref{sec:structureconstants} we prove:
\begin{theorem}\label{th:structureconstants}
For $\lambda, \mu, \nu \in X^\vee_+$ we have
\begin{equation*}
C_{\lambda\mu}^\nu = \frac{W_\nu(q^{-1})}{W_\mu(q)} C_{\lambda t^\mu}(\nu).
\end{equation*}
Furthermore,
\begin{equation*}
C_{\lambda\mu}^\nu = q_{w_\mu}^{-1} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0 \iota(\sigma)} C_\sigma F_{\mu\nu}^{\varepsilon(\sigma)}.
\end{equation*}
In particular, the $C_{\lambda t^\mu}(\nu)$ do not depend on the choice of the minimal gallery.
\end{theorem}
So in contrast to theorem~\ref{th:satakecoefficients} we have a condition on the final direction $\varepsilon(\sigma)$ since $F_{\mu\nu}^w = 0$ iff $w \notin W_\nu W^{w_0\mu}$.
As above we can give an estimate for the degree of the $C_{\lambda t^\mu}(\nu)$ in the case of equal parameters and prove corollary~\ref{co:littlewoodrichardson}. From the last theorem we get $q^{-\langle \rho, \mu - \lambda + \nu \rangle - l(w_\mu)} C_{\lambda t^\mu}(\nu) \in {\cal L}^-$ and thus for any $\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)$ we have
\begin{equation*}
\deg C_\sigma + l(w_0 \iota(\sigma)) \leq \langle \rho, \mu - \lambda + \nu \rangle + l(w_\mu).
\end{equation*}
Since $\deg L_\sigma = \deg C_\sigma$ and translating a gallery by an element of $X^\vee$ changes neither $L_\sigma$ nor the initial direction, corollary~\ref{co:littlewoodrichardson} is proven and we get
\begin{corollary}
For $\lambda, \mu, \nu \in X^\vee_+$ we have $C_{\lambda\mu}^\nu \neq 0$ if $c_{\lambda\mu}^\nu \neq 0 $.
\end{corollary}
Specializing $q$ to a prime power one gets that $C_{\lambda\mu}^\nu > 0$ if $c_{\lambda\mu}^\nu > 0$. For equal parameters this is proven in~\cite{kapovichmillson:04} and also by Haines in~\cite{haines:03} by geometric arguments, using the affine Grassmannian of the Langlands dual $G$ of $G^\vee$ to calculate the degree and the leading coefficients of $C_{\lambda\mu}^\nu$.
Another interpretation of the structure constants was given by Parkinson~\cite{parkinson:05} using regular affine buildings.
\section{Satake coefficients}
\label{sec:satakecoefficients}
In this section we introduce the alcove basis of the extended affine Hecke algebra and show that right multiplication of this alcove basis by elements of the standard basis can be calculated using positively folded galleries. From this theorem~\ref{th:satakecoefficients} follows. We also show that one can replace positively folded galleries by negatively folded galleries.
\begin{definition}
Let $A \in \tilde{\Alko}$. Define $X_A = q_{-wt(A)} q_{\delta(A)} X_{wt(A)} \overline{T}_{\delta(A)}$.
\end{definition}
The set $\{X_A\}_{A \in \tilde{\Alko}}$ is a basis of $\tilde{\kh}^\mathfrak{a}$. Before we proceed, we need some properties of this basis. First let $\lambda \in X^\vee$ and $A \in \tilde{\Alko}$. One calculates
\begin{equation}\label{translationalcovebasis}
X_\lambda X_A = q_\lambda X_{\lambda + A}.
\end{equation}
Now assume that $A = A_v$ is dominant and that $\lambda \mathrel{\mathop :}= wt(A)$ is regular. Then $v = \tau_\lambda \delta(A)$. Moreover, $\tau_\lambda$ is of maximal length in $\tau_\lambda W$ by lemma~\ref{le:lengthfunction} and $l(v) = l(\tau_\lambda) - l(\delta(A))$. So we get $T_{\tau_\lambda} \overline{T}_{\delta(A)} = T_{\tau_\lambda \delta(A)} = T_v$ and thus
\begin{equation}\label{dominantalcovebasis}
X_A = q_{-\lambda} q_{\delta(A)} X_\lambda \overline{T}_{\delta(A)} = q_{\tau_\lambda}^{-1} q_{\delta(A)} T_{\tau_\lambda} \overline{T}_{\delta(A)} = q_v^{-1} T_v.
\end{equation}
Multiplying the elements of the alcove basis with $T_s$ from the right can be expressed in terms of the alcove order. It is a $q$-analog of the $\tilde{W}^\mathfrak{a}$-action on $\tilde{\Alko}$.
\begin{lemma}\label{le:qmultiplicationalcoves}
Let $A \in \tilde{\Alko}$. In $\tilde{\kh}^\mathfrak{a}$ we have
\begin{equation*}
X_A T_s =
\begin{cases}
q_s X_{As} & \text{ if } A \prec As\\
X_{As} + (q_s-1) X_A & \text{ if } A \succ As.
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
By~\eqref{translationalcovebasis} the assertion is invariant under translation, i.e. under left multiplication with some $X_\mu$. So it is enough to show the assertion for alcoves $A = A_v$ such that $wt(A)-\alpha^\vee$ is dominant and regular for all $\alpha \in \phi$. By~\eqref{dominantalcovebasis} we have $X_A = q_v^{-1} T_v$ and the multiplication law in $\tilde{\kh}^\mathfrak{a}$ yields
\begin{equation*}
T_v T_s =
\begin{cases}
T_{vs} & \text{ if } l(v) < l(vs) \\
q_s T_{vs} + (q_s-1) T_{v} & \text{ if } l(v) > l(vs).
\end{cases}
\end{equation*}
But for generalized alcoves in the dominant chamber increasing in the alcove order is equivalent to increasing the length of the corresponding elements of $\tilde{W}^\mathfrak{a}$ (see example~\ref{ex:alcoveorder}). Moreover, by the choice of $A$ we get $X_{As} = q_{vs}^{-1} T_{vs}$ as elements in $\tilde{\kh}^\mathfrak{a}$ again by~\eqref{dominantalcovebasis} and the assertion follows.
\end{proof}
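\begin{example}
In type $A_1$ with $S^\mathfrak{a} = \{s_1, s_{01}\}$ the lemma can be verified directly for $A = A_f$ and $s = s_1$: We have $X_{A_f} = T_{id}$ and $X_{A_f s_1} = X_{A_{s_1}} = q_{s_1} \overline{T}_{s_1} = T_{s_1} - (q_{s_1}-1) T_{id}$, and $A_f \succ A_f s_1$ by example~\ref{ex:alcoveorder}. Indeed,
\begin{equation*}
X_{A_f} T_{s_1} = T_{s_1} = X_{A_{s_1}} + (q_{s_1}-1) X_{A_f}.
\end{equation*}
\end{example}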
Using the same arguments and the fact that multiplying by $T_g$ for $g \in \Omega$ does not change the length we get
\begin{lemma}\label{le:qmultiplicationomega}
For $A \in \tilde{\Alko}$ we have $X_A T_g = X_{Ag}$ as elements in $\tilde{\kh}^\mathfrak{a}$.
\end{lemma}
Now we can connect the multiplication in $\tilde{\kh}^\mathfrak{a}$ to the $L$-polynomials. For generalized alcoves $A$ and $B$ and any type $t$ define $\Gamma^+_t(A,B)$ to be the set of all positively folded galleries of type $t$ connecting $A$ and $B$ and set $L_t(A,B) = \sum_{\sigma \in \Gamma_t^+(A,B)} L_\sigma$.
\begin{lemma}\label{le:recursiongalleries}
Let $t = (t_1, \ldots, t_k), s \in S^\mathfrak{a}$, $t' = (t_1, \ldots, t_k, s)$, and fix generalized alcoves $A$ and $B$. We have
\begin{equation*}
L_{t'}(A,Bs) =
\begin{cases}
L_{t}(A,B) & \text{ if } B \succ Bs\\
q_s L_{t}(A,B) + (q_s-1) L_{t}(A,Bs) & \text{ if } B \prec Bs.
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
Let $\sigma' = (A, \ldots, C, Bs) \in \Gamma_{t'}^+(A,Bs)$. Then $C \in \{B, Bs\}$. We have $C = Bs$ iff $\sigma'$ is $s$-folded at $k+1$. Let $\sigma = (A, \ldots, C)$ and distinguish two cases:\\
$B \succ Bs$: We then have $C = B$ and $\sigma'$ is negative at $k+1$. So $\sigma \in \Gamma_{t}^+(A,B)$ and $L_{\sigma'} = L_\sigma$. Moreover, all galleries in $\Gamma_{t}^+(A,B)$ are obtained this way.\\
$B \prec Bs$: If $C = B$ we have $\sigma \in \Gamma_{t}^+(A,B)$ and $\sigma'$ is positive at $k+1$. So $L_{\sigma'} = q_s L_\sigma$ and one gets all galleries in $\Gamma_{t}^+(A,B)$ this way. If $C = Bs$ we have $\sigma \in \Gamma_{t}^+(A,Bs)$, $L_{\sigma'} = (q_s-1) L_\sigma$ and one obtains all galleries in $\Gamma_t^+(A,Bs)$ this way.\\
The lemma follows.
\end{proof}
Let $v \in \tilde{W}^\mathfrak{a}$ and $\sigma$ be a minimal gallery of type $t$ connecting $A_f$ and~$A_v$.
\begin{theorem}\label{th:multiplicationgalleries}
Given $A \in \tilde{\Alko}$ one has $X_A T_v = \sum_{B \in \tilde{\Alko}} L_t (A,B) X_B$.
\end{theorem}
\begin{proof}
Because of lemma~\ref{le:qmultiplicationomega} and since the $L$-polynomials are not affected by elements of $\Omega$ in the type it is enough to show the theorem for $v \in \Waffin$. The proof is done by induction on $l(v)$.\\
Let first $v = s \in S^\mathfrak{a}$: Distinguish two cases.\\
$A \prec As$: In this case $L_{(s)}(A,A) = 0$, $L_{(s)}(A,As) = q_s$ and $L_{(s)} (A,B) = 0$ for all other $B$ and $X_A T_s = q_s X_{As}$.\\
$A \succ As$: In this case $L_{(s)} (A,A) = q_s-1$, $L_{(s)} (A,As) = 1$ and $L_{(s)} (A,B) = 0$ for all other $B$ and $X_A T_s = X_{As} + (q_s-1) X_A$.\\
Now let $v \in \Waffin$ and $s \in S^\mathfrak{a}$ be such that $l(v) < l(vs)$, and let $\sigma' = (A_f, \ldots, A_v, A_{vs})$ be a minimal gallery of type $t'$. Using the last lemma we get
\begin{align*}
X_A T_{vs} & = X_A T_v T_s
= \Big( \sum_{B \in \tilde{\Alko}} L_t(A,B) X_B \Big) T_s \\
& = \sum_{B \prec Bs} q_s L_{t} (A,B) X_{Bs}
+ \sum_{B \succ Bs} L_{t} (A,B) X_{Bs}
+ \sum_{B \succ Bs} (q_s-1) L_{t} (A,B) X_B\\
& = \sum_{B \prec Bs} \big( q_s L_{t} (A,B) + (q_s-1) L_{t} (A,Bs) \big) X_{Bs}
+ \sum_{B \succ Bs} L_{t} (A,B) X_{Bs}\\
& = \sum_{B \in \tilde{\Alko}} L_{t'} (A,Bs) X_{Bs}
= \sum_{B \in \tilde{\Alko}} L_{t'} (A,B) X_B
\end{align*}
\end{proof}
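As an illustration, let $s \neq s'$ in $S^\mathfrak{a}$, let $t = (s,s')$ be the type of a minimal gallery from $A_f$ to $A_{ss'}$, and suppose $A \prec As$ and $As \succ Ass'$. Lemma~\ref{le:qmultiplicationalcoves} gives
\begin{equation*}
X_A T_s T_{s'} = q_s X_{As} T_{s'} = q_s X_{Ass'} + q_s (q_{s'}-1) X_{As}.
\end{equation*}
On the gallery side, the nonfolded gallery $(A, As, Ass')$ has a positive crossing followed by a negative crossing and contributes $L_t(A,Ass') = q_s$, while the gallery $(A, As, As)$ folded at the second step has a positive crossing followed by a positive fold and contributes $L_t(A,As) = q_s(q_{s'}-1)$, in accordance with the theorem.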
In particular we see that $L_{t}(A,B)$ depends only on $v$, not on the choice of $\sigma$ or $t$. Thus the following definition makes sense.
\begin{definition}\label{de:lpolynomials}
For $v \in \tilde{W}^\mathfrak{a}$ define $L_v (A,B) \mathrel{\mathop :}= L_{t}(A,B)$ where $t$ is the type of a minimal gallery from $A_f$ to $A_v$.
\end{definition}
With these results we can now prove proposition~\ref{th:satakecoefficients}.
\begin{lemma}\label{le:macdonaldasmonomial}
For $\lambda \in X^\vee_+$ we have
\begin{equation*}
\mathbf{1}_0 T_{n^\lambda} \mathbf{1}_0 = \sum_{\mu \in X^\vee} q_{-\mu} L_{t^\lambda}(\mu) X_\mu \mathbf{1}_0.
\end{equation*}
\end{lemma}
\begin{proof}
We use the last theorem and the facts that $\overline{\mathbf{1}_0} = q_{w_0}^{-1} \mathbf{1}_0$ and $\overline{T}_w \mathbf{1}_0 = q_w^{-1} \mathbf{1}_0$ for all $w \in W$. So one calculates
\begin{align*}
\mathbf{1}_0 T_{n^\lambda} \mathbf{1}_0
& = q_{w_0} \sum_{w \in W} \overline{T}_w T_{n^\lambda} \mathbf{1}_0
= q_{w_0} \sum_{w \in W} q^{-1}_w X_{A_w} T_{n^\lambda} \mathbf{1}_0\\
& = q_{w_0} \sum_{w \in W} q_w^{-1} \sum_{\sigma \in \Gamma^+_{t^\lambda}, \iota(\sigma) = w} q_{-wt(\sigma)} q_{\varepsilon(\sigma)} L_\sigma X_{wt(\sigma)} \overline{T}_{\varepsilon(\sigma)} \mathbf{1}_0 \\
& = \sum_{w \in W} q_{w_0 w} \sum_{\sigma \in \Gamma^+_{t^\lambda}, \iota(\sigma) = w} q_{-wt(\sigma)} L_\sigma X_{wt(\sigma)} \mathbf{1}_0\\
& = \sum_{\sigma \in \Gamma^+_{t^\lambda}} q_{w_0 \iota(\sigma)} q_{-wt(\sigma)} L_\sigma X_{wt(\sigma)} \mathbf{1}_0
= \sum_{\mu \in X^\vee} q_{-\mu} L_{t^\lambda} (\mu) X_\mu \mathbf{1}_0
\end{align*}
where the last equality holds by the definition of $L_{t^\lambda}(\mu)$ in section~\ref{sec:galleries}.
\end{proof}
From this we get
\begin{equation*}
M_\lambda = \frac{1}{\W{}\W{\lambda}} \sum_{\mu \in X^\vee} q_{-\mu} L_{t^\lambda}(\mu) X_\mu \mathbf{1}_0.
\end{equation*}
But on the other hand $q_{-\mu} L_{\lambda\mu}$ for dominant $\mu$ is the coefficient of $M_\lambda$ with respect to $Y_\mu$. Moreover, for arbitrary $\nu \in X^\vee$ we defined $L_{\lambda\nu} = q_{\nu - \nu^+} L_{\lambda\nu^+}$. So we get
\begin{align*}
M_\lambda & = \sum_{\mu \in X^\vee_+} q_{-\mu} L_{\lambda\mu} Y_\mu
= \frac{1}{\W{}} \sum_{\mu \in X^\vee_+} \Big( \sum_{\nu \in W\mu} q_{-\nu} L_{\lambda\nu} X_\nu \mathbf{1}_0 \Big)\\
& = \frac{1}{\W{}} \sum_{\mu \in X^\vee} q_{-\mu} L_{\lambda\mu} X_\mu \mathbf{1}_0.
\end{align*}
Comparing coefficients of these two expansions we get
\begin{equation*}
L_{\lambda\mu} = \frac{1}{\W{\lambda}} L_{t^\lambda}(\mu)
\end{equation*}
which proves the first statement in~\ref{th:satakecoefficients}. The second statement can be obtained as follows: Every $w \in W$ can be written as $w = w_1 w_2$ for unique $w_1 \in W^\lambda$ and $w_2 \in W_\lambda$ such that $l(w) = l(w_1)+ l(w_2)$ (using the notation introduced in definition~\ref{def:stabilizers}). Define $\Sym{\lambda} = \sum_{w \in W_\lambda} T_w$. Since $\overline{T}_v \overline{T}_w = \overline{T}_{vw}$ for $v, w \in W$ with $l(v)+l(w) = l(vw)$ and $\overline{\Sym{\lambda}} = q_{w_\lambda}^{-1} \Sym{\lambda}$ we get
\begin{equation*}
\mathbf{1}_0 = q_{w_0} \sum_{w \in W^\lambda} \overline{T}_{w} \overline{\Sym{\lambda}}
= q_{w_0 w_\lambda} \sum_{w \in W^\lambda} \overline{T}_{w} \Sym{\lambda}.
\end{equation*}
If $v \in W_\lambda$ we have $l(v)+l(n^\lambda) = l(v n^\lambda)$. Moreover, $v n^\lambda = v \tau_\lambda w_\lambda w_0 = \tau_\lambda v w_\lambda w_0 = n^\lambda v'$ with $v' = w_0 w_\lambda v w_\lambda w_0$ by lemma~\ref{le:lengthfunction}. Then $l(v') = l(v)$ and $q_v = q_{v'}$. Thus $T_v T_{n^\lambda} \mathbf{1}_0 = T_{n^\lambda} T_{v'} \mathbf{1}_0 = q_v T_{n^\lambda} \mathbf{1}_0$ and we get
\begin{equation*}
\mathbf{1}_0 T_{n^\lambda} \mathbf{1}_0
= q_{w_0 w_\lambda} \W{\lambda} \sum_{w \in W^\lambda} \overline{T}_w \; T_{n^\lambda} \mathbf{1}_0.
\end{equation*}
Now the second statement follows in the same way as in the proof of lemma~\ref{le:macdonaldasmonomial} using $q_{w_0 w_\lambda} \sum_{w \in W^\lambda} \overline{T}_w = q_{w_\lambda}^{-1} \sum_{w \in W^\lambda} q_{w_0 w} X_{A_{w}}$.
\begin{remark}\label{re:minimalrepresentative}
In the above considerations there are various other choices for the condition on the initial alcove. Let $v \in W_\lambda$. We have
\begin{equation*}
\mathbf{1}_0 = q_{w_0 w_\lambda v} \sum_{w \in W^\lambda} \overline{T}_{w v} \Sym{\lambda}
\end{equation*}
since $\overline{T}_v \Sym{\lambda} = q_v^{-1} \Sym{\lambda}$ and the last equation becomes
\begin{equation*}
\mathbf{1}_0 T_{n^\lambda} \mathbf{1}_0 = q_{w_0 w_\lambda v} \W{\lambda} \sum_{w \in W^\lambda v} \overline{T}_w \; T_{n^\lambda} \mathbf{1}_0.
\end{equation*}
Thus one gets
\begin{equation*}
L_{\lambda\mu} = q_{w_\lambda v}^{-1} \sum_{\substack{\sigma \in \Gamma^+_{t^\lambda} (\mu)\\ \iota(\sigma) \in W^\lambda v}} q_{w_0 \iota(\sigma)} L_\sigma.
\end{equation*}
The case considered above was $v = id$. In the case of equal parameters we get for any gallery $\sigma \in \Gamma^+_{t^\lambda}$ such that $\iota(\sigma) \in W^\lambda v$ the upper bound
\begin{equation*}
\deg L_\sigma + l(w_0 \iota(\sigma)) \leq \langle \rho, \lambda + wt(\sigma) \rangle + l(w_\lambda v).
\end{equation*}
One could define LS-galleries to be those $\sigma$ with $\iota(\sigma) \in W^\lambda v$ for which equality holds in the last equation. But only for the choice $v = id$ is it enough to impose this equality alone; the condition on the initial direction then follows from it. In particular, for an LS-gallery $\sigma$ we have $\iota(\sigma) \in W^\lambda$.
For the definition of the $L_{t^\lambda}(\mu)$ we started with the minimal representative $n^\lambda$ and showed that $L_{t^\lambda}(\mu)$ is independent of the initially chosen minimal gallery. One can allow even more freedom in this initial choice. Let $v \in W \tau_\lambda W$ and let $w, w' \in W_\lambda$ be such that $v = w n^\lambda w'$ and $l(w) + l(n^\lambda) + l(w') = l(v)$. If instead of $t^\lambda$ we use the type $t$ of a minimal gallery from $A_f$ to $A_v$, we get from the proof of~\ref{le:macdonaldasmonomial} that $L_{t}(\mu) = q_w q_{w'} L_{t^\lambda}(\mu)$ for any $\mu \in X^\vee$. The number of LS-galleries in $\Gamma^+_{t}(\mu)$ (with the appropriate changes of the degree condition in the definition) is the same as in $\Gamma^+_{t^\lambda}(\mu)$ since they always encode $s_\lambda$, and one has a canonical bijection between these different sets of LS-galleries. But the total number of galleries in $\Gamma^+_{t}(\mu)$ genuinely depends on the choice of $v$, and this number is minimal if we choose $n^\lambda$. There is another fact that singles out~$n^\lambda$: all nonfolded galleries are LS-galleries.
\end{remark}
\begin{remark}\label{re:negative}
In definition~\ref{de:galleries} one can replace positive (respectively positively folded) by negative (respectively negatively folded), i.e. one gets $m_s^-(\sigma)$ and $n_s^-(\sigma)$ for each negatively folded gallery $\sigma$. With the obvious changes this yields polynomials $L^-_\sigma$ nonzero only for negatively folded galleries. Going further, one gets $\Gamma^-_t(A,B)$, $L^-_t(A,B)$ and recursions (using the same notations as in~\ref{le:recursiongalleries})
\begin{equation*}
L^-_{t'} (A,Bs) =
\begin{cases}
L^-_t (A,B) & \text{ if } B \prec Bs\\
q_s L^-_t (A,B) + (q_s-1) L^-_t (A,Bs) & \text{ if } B \succ Bs.
\end{cases}
\end{equation*}
Since $\overline{T}_s = q^{-1}_s (T_s + (1-q_s) T_{id})$ for $s \in S^\mathfrak{a}$ we get from lemma~\ref{le:qmultiplicationalcoves} that
\begin{equation*}
X_A \overline{T}_s =
\begin{cases}
X_{As} + (q^{-1}_s - 1) X_A & \text{ if } A \prec As\\
q^{-1}_s X_{As} & \text{ if } A \succ As
\end{cases}
\end{equation*}
for any $A \in \tilde{\Alko}$ and $s \in S^\mathfrak{a}$. Under the hypotheses of theorem~\ref{th:multiplicationgalleries} we get
\begin{equation*}
X_A \overline{T}_v = \sum_{B \in \tilde{\Alko}} \overline{L^-_t (A,B)} X_B.
\end{equation*}
If one defines
\begin{equation*}
L^-_t (\mu) = \sum_{\sigma \in \Gamma^{-}_t (\mu)} q_{\iota(\sigma)} L^-_\sigma
\end{equation*}
we can also express the $L_{\lambda\mu}$ with negatively folded galleries. For this, note that left multiplication by $w_0$ on $\tilde{\Alko}$ induces a type preserving bijection $\phi : \Gamma^+_t \to \Gamma^-_t$ for any type $t$. We have $L^-_{\phi(\sigma)} = L_\sigma$ and $\iota(\phi(\sigma)) = w_0 \iota(\sigma)$. In particular, we get the equality $L_t (\mu) = L^-_t (w_0\mu)$. Combining this with the semi-invariance of the $L_{\lambda\mu}$ with respect to $\mu$ we get
\begin{equation*}
L_{\lambda\mu} = q_{\mu - w_0\mu} L_{\lambda, w_0\mu} = \frac{q^2_\mu}{\W{\lambda}} L_{t^\lambda}(w_0\mu) = \frac{q^2_\mu}{\W{\lambda}} L^-_{t^\lambda}(\mu)
\end{equation*}
which gives an expression of $L_{\lambda\mu}$ in terms of negatively folded galleries by the definition of $L^-_{t^\lambda} (\mu)$.
\end{remark}
\section{Structure constants}
\label{sec:structureconstants}
In this section we calculate the structure constants of the spherical Hecke algebra with respect to the Macdonald basis and prove theorem~\ref{th:structureconstants} and thus theorem~\ref{th:producthalllittlewood} and its corollary.
\begin{lemma}
Let $A = \mu + A_w$ be a dominant generalized alcove such that $A s$ is no longer dominant. Let $H_{\alpha_i,0}$ be the hyperplane separating $A$ and $A s$. Then we have $X_A T_{s} = T_{s_i} X_A$.
\end{lemma}
\begin{proof}
We have $s_i A = A s$ and $A \succ As$. So $s_i$ and $s$ are conjugate in $\tilde{W}^\mathfrak{a}$ and thus $q_{s_i} = q_s$. Distinguish two cases:\\
If $s = s_{\theta,1}$ with $\theta \in \Theta$ we have $\langle \alpha_i, \mu \rangle = 1$ and thus $s_i(\mu) = \mu - \alpha_i^\vee$ and $s_i A = s_i(\mu) + A_{s_i w}$. But on the other hand we have $A s = (\mu + w\theta^\vee) + A_{w s_\theta}$ and so $w\theta^\vee = -\alpha_i^\vee$. In particular, $s_i w < w$. From~\cite[lemma 2.7(d) and proposition 3.6]{lusztig:89} we know that $q_{\alpha_i^\vee} = q_s$ in this case and
\begin{equation*}
T_{s_i} X_\mu = X_{\mu -\alpha_i^\vee} T_{s_i} + (q_{s_i} - 1) X_\mu.
\end{equation*}
Together with $s_i w < w$ this yields
\begin{equation*}
T_{s_i} X_\mu \overline{T}_w = X_{\mu-\alpha_i^\vee} \overline{T}_{s_i w} + (q_{s_i} - 1) X_\mu \overline{T}_w
\end{equation*}
and thus
\begin{equation*}
T_{s_i} X_A = X_{As} + (q_{s_i} -1) X_A = X_A T_s
\end{equation*}
where the last equality follows from $A \succ As$.\\
If $s = s_j \in S$ we have $s_i(\mu) = \mu$ and $w^{-1}(\alpha_i) = \alpha_j$. So here $s_i w > w$. Using $T_{s_i} X_\mu = X_\mu T_{s_i}$ one obtains the desired equality as above.
\end{proof}
We keep the notation of the last lemma and get $\mathbf{1}_0 X_A T_{s} = \mathbf{1}_0 T_{s_i} X_A = q_s \mathbf{1}_0 X_A$ (recall that $q_{s_i} = q_s$). For a generalized alcove $A$ and a type $t$ define $\Gamma^+_{t,A}$ to be the set of all positively folded galleries of type $t$ with initial alcove $A$.
Let $t = (t_1, \ldots, t_k)$ be a type and define $T_t = T_{t_1} \cdot \ldots \cdot T_{t_k}$. From theorem~\ref{th:multiplicationgalleries} we get
\begin{equation*}
X_A T_t = \sum_{\sigma \in \Gamma^+_{t,A}} L_\sigma X_{e(\sigma)}
\end{equation*}
where $e(\sigma)$ is the ending of $\sigma$ as introduced in definition~\ref{de:galleries}. This yields
\begin{equation*}
\mathbf{1}_0 X_A T_t = \sum_{\sigma \in \Gamma^+_{t,A}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
Setting $t' = (s, t)$ we obtain by the same arguments
\begin{equation*}
\mathbf{1}_0 X_A T_{s} T_t = \sum_{\sigma \in \Gamma^+_{t',A}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
Since $\mathbf{1}_0 X_A T_s T_t= q_s \mathbf{1}_0 X_A T_t$ we get the following
\begin{lemma}\label{le:latwalls1}
Let $t$ be any type and let $A$ be a dominant generalized alcove such that $As$ is no longer dominant. Setting $t' = (s,t)$ we have
\begin{equation*}
q_s \sum_{\sigma \in \Gamma^+_{t,A}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
= \sum_{\sigma \in \Gamma^+_{t',A}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
\end{lemma}
Now let $\lambda \in X^\vee_+$. Then the generalized alcove $A \mathrel{\mathop :}= \lambda + A_w$ is dominant iff $w^{-1} \in W^\lambda$. Let $w^{-1} \in W^\lambda$ and $v \in W_\lambda$. Since $\overline{T}_v X_\lambda = X_\lambda \overline{T}_v$ we get
\begin{equation*}
\overline{T}_v X_{A} = q^{-1}_{v} X_{\lambda+A_{vw}} = q^{-1}_{v} X_{vA}.
\end{equation*}
Since $v \in W_\lambda$ we get the equality (using the notation introduced before the last lemma)
\begin{equation*}
\mathbf{1}_0 X_{vA} T_t = q_v \mathbf{1}_0 \overline{T}_{v} X_A T_t = \mathbf{1}_0 X_{A} T_t.
\end{equation*}
For later use observe that $v A = \lambda + A_{vw}$ and thus, for $v \neq id$, the alcove $vA$ is no longer dominant. We get
\begin{lemma}\label{le:latwalls2}
Let $\lambda \in X^\vee_+$, $w^{-1} \in W^\lambda$ and $v \in W_\lambda$. Let $A = \lambda + A_w$. For any type $t$ we have
\begin{equation*}
\sum_{\sigma \in \Gamma^+_{t,A}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
= \sum_{\sigma \in \Gamma^+_{t,vA}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
\end{lemma}
Now let $\lambda, \mu \in X^\vee_+$. Let $w_\mu \in W_\mu$ and $n^\mu \in \tau_\mu W$ as in definition~\ref{def:stabilizers}. Let $t^\mu$ denote the type of a minimal gallery from $A_f$ to $A_{n^\mu}$. As in the proof of lemma~\ref{le:macdonaldasmonomial} we get
\begin{align}
\mathbf{1}_0 X_\lambda \; \mathbf{1}_0 T_{n^\mu}
& = \mathbf{1}_0 X_\lambda \sum_{\sigma \in \Gamma^+_{t^\mu}} q_{w_0 \iota(\sigma)} L_\sigma X_{e(\sigma)}\\
\label{eq:multiplication}
& = q_{\lambda} \sum_{\sigma \in \Gamma^+_{t^\mu,\lambda}} q_{w_0 \iota(\sigma)} L_{\sigma} \mathbf{1}_0 X_{e(\sigma)}.
\end{align}
Here $\Gamma^+_{t^\mu,\lambda}$ is the set of all galleries of type $t^\mu$ starting in $\lambda$, and the last equality holds since translating a gallery $\sigma$ by $\lambda$ does not change $L_\sigma$. So we have an expansion of the product in terms of $X_{A}$ for $A \in \tilde{\Alko}$. But to compute the structure constants we need the expansion in terms of $X_A$ for dominant $A$.
\begin{theorem}\label{th:reducingtodominant}
For $\lambda, \mu \in X^\vee_+$ we have
\begin{equation*}
\mathbf{1}_0 X_\lambda \mathbf{1}_0 T_{n^\mu}
= q_{\lambda} \Winv{\lambda} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}} q_{w_0\iota(\sigma)} C_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
\end{theorem}
\begin{proof}
For the proof of this theorem we use lemmas~\ref{le:latwalls1} and~\ref{le:latwalls2} to show that the contribution of the galleries with non-dominant weights in the formula~\eqref{eq:multiplication} is exactly the contribution of the $p_s$.\\
First assume $\lambda$ is regular. Then the first generalized alcove of every gallery starting in $\lambda$ is dominant. Let $\eta \in \Gamma^+_{t^\mu,\lambda}$ be a gallery leaving the dominant chamber. Let $\gamma$ be the maximal initial subgallery of $\eta$ contained in ${\cal C}$ and let $A$ be $e(\gamma)$. Then $\eta$ is not folded after $A$ and the next generalized alcove in $\eta$ is of the form $As$ for some $s \in S^\mathfrak{a}$. Denote by $\Gamma^+_{\gamma} \subset \Gamma^+_{t^\mu,\lambda}$ the set of galleries starting with $\gamma$. By lemma~\ref{le:latwalls1} we have that
\begin{equation*}
\frac{q_s}{q_s-1} \sum_{\sigma \in \Gamma^+_{\gamma}, \sigma \text{ folded at } A} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
= \sum_{\sigma \in \Gamma^+_{\gamma}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
So the contribution of all galleries starting with $\gamma$ is the same as the contribution of the galleries starting with $\gamma$ and staying in ${\cal C}$ at $A$, if the contribution of the folding at $A$ is $q_s$ instead of $q_s-1$. Iteration of this procedure eventually yields
\begin{equation*}
\sum_{\sigma \in \Gamma^+_{\gamma}} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
= \sum_{\sigma \in \Gamma^+_{\gamma}, \sigma \subset {\cal C}} C_\sigma \mathbf{1}_0 X_{e(\sigma)}
\end{equation*}
which proves the theorem for regular $\lambda$.\\
If $\lambda$ is non-regular we have to apply lemma~\ref{le:latwalls2} to obtain the theorem because in this case the first alcove of a gallery starting in $\lambda$ can be non-dominant. In this case its contribution has a part coming from the initial direction, which we did not need to consider in the regular case. But lemma~\ref{le:latwalls2} tells us that the contribution arising from these alcoves is the same as the contribution from the dominant ones. More precisely we have for $w^{-1} \in W^\lambda$ and $v \in W_\lambda$
\begin{equation*}
\sum_{\sigma \in \Gamma^+_{t^\mu,\lambda}, \iota(\sigma) = w} q_{w_0w} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
= q_v \sum_{\sigma \in \Gamma^+_{t^\mu,\lambda}, \iota(\sigma) = vw} q_{w_0vw} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
\end{equation*}
and thus
\begin{equation*}
\Winv{\lambda} \sum_{\sigma \in \Gamma^+_{t^\mu,\lambda}, \iota(\sigma) = w} q_{w_0w} L_\sigma \mathbf{1}_0 X_{e(\sigma)}
= \sum_{\sigma \in \Gamma^+_{t^\mu,\lambda}, \iota(\sigma) \in W_\lambda w} q_{w_0\iota(\sigma)} L_\sigma \mathbf{1}_0 X_{e(\sigma)}.
\end{equation*}
Since the sum over all $w^{-1} \in W^\lambda$ of the left hand side of the last equation is exactly the contribution of the galleries starting in ${\cal C}$, the theorem follows.
\end{proof}
\begin{remark}
The proofs for multiplying Schur polynomials using paths are similar in spirit (see for example~\cite[section~6]{littelmann:94}). First one obtains a formula that also involves Schur polynomials associated to paths leaving the dominant chamber; then one shows that the contributions of the leaving paths cancel each other. There this is done by combinatorial arguments, i.e. one can see explicitly which paths cancel. In contrast, we do not have any concrete information about this cancellation process.
\end{remark}
Now we can prove the first part of theorem~\ref{th:structureconstants} respectively theorem~\ref{th:producthalllittlewood}. We multiply the equation of the last theorem from the right by $\mathbf{1}_0$ and get by the definition of the Macdonald basis
\begin{align*}
M_\lambda M_\mu
&= \frac{q_{\lambda} q_{w_0}^{-1}}{W(q)\Winv{\lambda}} \frac{1}{\W{}\W{\mu}} \mathbf{1}_0 X_\lambda \mathbf{1}_0 \, \mathbf{1}_0 T_{n^\mu} \mathbf{1}_0\\
&= \frac{q_{\lambda}^2 q_{w_0}^{-1}}{\W{} \W{\mu}} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}} q_{w_0\iota(\sigma)} C_\sigma \mathbf{1}_0 X_{e(\sigma)} \mathbf{1}_0\\
&= \frac{q_{\lambda}^2 q_{w_0}^{-1}}{\W{} \W{\mu}} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}} q_{-wt(\sigma)} q_{w_0\iota(\sigma)} C_\sigma \mathbf{1}_0 X_{wt(\sigma)} \mathbf{1}_0\\
&= \frac{q_{\lambda}^2}{\W{\mu}} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}} q_{-wt(\sigma)}^2 q_{w_0\iota(\sigma)} C_\sigma \Winv{wt(\sigma)} M_{wt(\sigma)}\\
& = \frac{q_{\lambda}^2}{\W{\mu}} \sum_{\nu \in X^\vee_+} q_{-\nu}^2 \Winv{\nu} C_{t^\mu,\lambda}(\nu) M_\nu.
\end{align*}
To prove the second part of theorem~\ref{th:structureconstants} and thus theorem~\ref{th:producthalllittlewood} we need one more step. It is not possible to impose conditions on the initial direction as in theorem~\ref{th:satakecoefficients}. Instead we impose conditions on the final direction to get rid of the fraction $\frac{1}{\W{\mu}}$. To do this we need some preparation. The situation is more difficult than in the case of the Satake coefficients since now two stabilizers instead of one are involved, so we first need some information on the interplay between them.
We use the notation for stabilizer subgroups introduced in definition~\ref{def:stabilizers}. Moreover, for any $\nu \in X^\vee$ let $\Sym{\nu} = \sum_{w \in W_\nu} T_w$ be the corresponding symmetrizer. Note that $W_{w_0\mu} = w_0 W_{\mu} w_0$ and thus $q_{w_\mu} = q_{w_{w_0\mu}}$ and $\W{\mu} = \W{w_0\mu}$.
Let $Y = \sum_{w \in W} R_w \overline{T}_w \in \tilde{\kh}^\mathfrak{a}$ with $R_w \in {\cal L}$. Assume $Y \in \tilde{\kh}^\mathfrak{a} \Sym{w_0\mu}$. Then $R_{w} = R_{wv}$ for any $w \in W$ and $v \in W_{w_0\mu}$ and thus
\begin{equation}\label{eq:endingdirection}
Y = q_{w_\mu}^{-1} \sum_{w \in W^{w_0\mu}} R_w \overline{T}_{w} \Sym{w_0\mu}
\end{equation}
since for $w \in W^{w_0\mu}$ we have $\overline{T}_w \overline{\Sym{w_0\mu}} = \sum_{v \in W_{w_0\mu}} \overline{T}_{w v}$ and $\overline{\Sym{w_0 \mu}} = q_{w_\mu}^{-1} \Sym{w_0\mu}$.
Let $\nu \in X^\vee_+$ and take $Y$ of a special form, namely $Y = \sum_{w^{-1} \in W^\nu} R_w \Sym{\nu} \overline{T}_w$. For $w \in W$ denote by $w^\nu$ the minimal element of the coset $W_\nu w$. In particular $(w^\nu)^{-1} \in W^\nu$. Expanding $Y$ in terms of the $\overline{T}_w$ yields
\begin{equation*}
Y = q_{w_\nu} \sum_{w \in W} R_{w^\nu} \overline{T}_w.
\end{equation*}
So if in addition $Y \in \tilde{\kh}^\mathfrak{a} \Sym{w_0\mu}$ we get $Y = q_{w_\nu} q_{w_\mu}^{-1} \sum_{w \in W^{w_0\mu}} R_{w^\nu} \overline{T}_{w} \Sym{w_0\mu}$ by the considerations above.
We calculate $Y \mathbf{1}_0$ and get $Y \mathbf{1}_0 = q_{w_\nu} q_{w_\mu}^{-1} \W{\mu} \sum_{w \in W^{w_0\mu}} q_w^{-1} R_{w^\nu} \mathbf{1}_0$. Thus
\begin{align}\label{eq:doublecosetmultiplication}
Y \mathbf{1}_0 = q_{w_\nu} \Winv{\mu} \sum_{w^{-1} \in W^\nu} q_w^{-1} F_{\mu\nu}^{w} R_{w} \mathbf{1}_0
\end{align}
where $F_{\mu\nu}^{w} \mathrel{\mathop :}= q_w \sum_{v \in W^{w_0\mu} \cap W_\nu w} q_v^{-1}$. Observe that $W^{w_0\mu} \cap W_\nu w \neq \emptyset$ iff $w \in W_\nu W^{w_0\mu}$. In particular, for regular $\nu$ we get $F_{\mu\nu}^{w} = 1$ if $w \in W^{w_0\mu}$ and $0$ otherwise.
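For example, if $\mu$ is regular, then $W_{w_0\mu}$ is trivial and $W^{w_0\mu} = W$, so for $w^{-1} \in W^\nu$ (whence $q_{uw} = q_u q_w$ for all $u \in W_\nu$) one gets
\begin{equation*}
F_{\mu\nu}^{w} = q_w \sum_{u \in W_\nu} q_{uw}^{-1} = \sum_{u \in W_\nu} q_u^{-1} = \Winv{\nu}.
\end{equation*}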
Now we relate this to our problem. We have
\begin{equation*}
\W{\mu} \mathbf{1}_0 T_{n^\mu} = \mathbf{1}_0 \Sym{\mu} T_{\tau_\mu} T_{w_\mu} \overline{T}_{w_0}
= \mathbf{1}_0 T_{\tau_\mu} T_{w_\mu} \Sym{\mu} \overline{T}_{w_0} = \mathbf{1}_0 T_{n^\mu} T_{w_0} \Sym{\mu} \overline{T}_{w_0}.
\end{equation*}
But $T_{w_0} T_w \overline{T}_{w_0} = T_{w_0 w w_0}$ for all $w \in W$ and thus $T_{w_0} \Sym{\mu} \overline{T}_{w_0} = \Sym{w_0\mu}$. So $\mathbf{1}_0 T_{n^\mu} \in \tilde{\kh}^\mathfrak{a} \Sym{w_0\mu}$.
We have $\W{\nu} \mathbf{1}_0 X_\nu = \mathbf{1}_0 X_\nu \Sym{\nu}$ since $T_w X_\nu = X_\nu T_w$ for any $w \in W_\nu$. So
the contribution of $\mathbf{1}_0 X_\nu$ in theorem~\ref{th:reducingtodominant} is given by
\begin{equation*}
\sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0\iota(\sigma)} C_\sigma \mathbf{1}_0 X_{e(\sigma)}
= \frac{q_{-\nu}}{\W{\nu}} \mathbf{1}_0 X_\nu \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0\iota(\sigma)} q_{\varepsilon(\sigma)} C_\sigma \Sym{\nu} \overline{T}_{\varepsilon(\sigma)}.
\end{equation*}
As already observed before, $\nu + A_v \subset {\cal C}$ with $v \in W$ iff $v^{-1} \in W^\nu$. So the final directions of the galleries $\sigma$ occurring in the last equation satisfy $(\varepsilon(\sigma))^{-1} \in W^\nu$. If we define $Y \mathrel{\mathop :}= \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0\iota(\sigma)} q_{\varepsilon(\sigma)} C_\sigma \Sym{\nu} \overline{T}_{\varepsilon(\sigma)}$ then $Y$ is of the kind considered above and $Y \in \tilde{\kh}^\mathfrak{a} \Sym{w_0\mu}$. So we can apply~\eqref{eq:doublecosetmultiplication} and get
\begin{equation*}
\sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0\iota(\sigma)} q_{\varepsilon(\sigma)} C_\sigma \Sym{\nu} \overline{T}_{\varepsilon(\sigma)} \mathbf{1}_0
= q_{w_\nu} \Winv{\mu} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0 \iota(\sigma)} C_\sigma F_{\mu \nu}^{\varepsilon(\sigma)} \mathbf{1}_0.
\end{equation*}
Bringing all this together we can multiply the assertion of theorem~\ref{th:reducingtodominant} from the right by $\mathbf{1}_0$ and get
\begin{equation*}
\mathbf{1}_0 X_\lambda \mathbf{1}_0 T_{n^\mu} \mathbf{1}_0
= q_\lambda \Winv{\lambda} \Winv{\mu} \sum_{\nu \in X^\vee_+} \frac{q_{-\nu}}{\Winv{\nu}} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0 \iota(\sigma)} C_\sigma F_{\mu \nu}^{\varepsilon(\sigma)} \mathbf{1}_0 X_\nu \mathbf{1}_0.
\end{equation*}
Now we can calculate the coefficient of $M_\nu$ in the product $M_\lambda M_\mu$ as above. It is equal to
\begin{equation*}
q_{\lambda-\nu}^2 q_{w_\mu}^{-1} \sum_{\sigma \in \Gamma^d_{t^\mu,\lambda}(\nu)} q_{w_0 \iota(\sigma)} C_\sigma F_{\mu \nu}^{\varepsilon(\sigma)}
\end{equation*}
which proves the second part of theorem~\ref{th:structureconstants} and thus~\ref{th:producthalllittlewood}.
\begin{remark}
Consider the case of equal parameters and let $w^{-1} \in W^\nu$. Then we have $F_{\mu\nu}^w = q^{l(w)} \sum_{v \in W^{w_0\mu} \cap W_\nu w} q^{-l(v)}$. By definition of $W^\nu$ we have $l(v) \geq l(w)$ for all $v \in W_\nu w$ and thus $F_{\mu\nu}^w \in {\cal L}^-$. Moreover, the constant term of $F_{\mu\nu}^w$ is 1 iff $w \in W^{w_0\mu}$.
\end{remark}
\begin{remark}
One can proceed in the same way to obtain a formula for the Satake coefficients as in the second part of theorem~\ref{th:satakecoefficients} with a condition on the final direction. For stating the results we consider again the situation of section~\ref{sec:satakecoefficients}: $\lambda \in X^\vee_+$ and $t^\lambda$ is the type of a minimal gallery from $A_f$ to $A_{n^\lambda}$. Applying the above considerations (for $\lambda$ instead of $\mu$) yields $\mathbf{1}_0 T_{n^\lambda} \in \tilde{\kh}^\mathfrak{a} \Sym{w_0\lambda}$. A formula for $\mathbf{1}_0 T_{n^\lambda}$ is given by (see the proof of lemma~\ref{le:macdonaldasmonomial}) $\sum_{\sigma \in \Gamma_{t^\lambda}^+} q_{-wt(\sigma)} q_{\varepsilon(\sigma)} q_{w_0\iota(\sigma)} L_\sigma X_{wt(\sigma)} \overline{T}_{\varepsilon(\sigma)}$. So we get by~\eqref{eq:endingdirection}
\begin{equation*}
\mathbf{1}_0 T_{n^\lambda} = q_{w_\lambda}^{-1} \sum_{\substack{\sigma \in \Gamma_{t^\lambda}^+\\ \varepsilon(\sigma) \in W^{w_0\lambda}}} q_{-wt(\sigma)} q_{\varepsilon(\sigma)} q_{w_0\iota(\sigma)} L_\sigma X_{wt(\sigma)} \overline{T}_{\varepsilon(\sigma)} \Sym{w_0\lambda}.
\end{equation*}
Multiplying by $\mathbf{1}_0$ from the right then yields
\begin{equation*}
M_\lambda = \frac{q_{w_\lambda}^{-1}}{\W{}} \sum_{\mu \in X^\vee} q_{-\mu} \sum_{\substack{\sigma \in \Gamma_{t^\lambda}^+(\mu) \\ \varepsilon(\sigma) \in W^{w_0\lambda}}} q_{w_0\iota(\sigma)} L_\sigma X_{\mu} \mathbf{1}_0
\end{equation*}
and thus $L_{\lambda\mu} = q_{w_\lambda}^{-1} \sum_{\substack{\sigma \in \Gamma_{t^\lambda}^+(\mu) \\ \varepsilon(\sigma) \in W^{w_0\lambda}}} q_{w_0\iota(\sigma)} L_\sigma$. Moreover, we see that for an LS-gallery $\sigma$ we have $\varepsilon(\sigma) \in W^{w_0\lambda}$.
\end{remark}
\section{Commutation formula}
\label{sec:commutation}
Here we give two other applications of theorem~\ref{th:multiplicationgalleries}. The first is a nice combinatorial description of the base change matrix between the standard basis $\{T_w\}_{w \in \tilde{W}^\mathfrak{a}}$ and the basis $\{X_\mu \overline{T}_w\}_{\mu \in X^\vee, w \in W}$.
\begin{corollary}\label{co:standardasalcove}
Let $v \in \tilde{W}^\mathfrak{a}$ and fix some minimal gallery of type $t$ connecting $A_f$ and $A_v$. Then
\begin{equation*}
T_v = \sum_{\sigma \in \Gamma^+_{t}, \iota(\sigma)= id} L_\sigma X_{e(\sigma)}
= \sum_{\sigma \in \Gamma^+_{t}, \iota(\sigma) = id} q^{-1}_{wt(\sigma)} q_{\varepsilon(\sigma)} L_\sigma X_{wt(\sigma)} \overline{T}_{\varepsilon(\sigma)}.
\end{equation*}
\end{corollary}
Now let $w \in W^\lambda$ and recall the notation from definition~\ref{def:stabilizers}. Then $l(w w_\lambda) = l(w) + l(w_\lambda)$ and thus $T_{w w_\lambda} = T_w T_{w_\lambda}$. We also have $T_{\tau_\lambda} = T_{n^\lambda} T_{w_0} \overline{T}_{w_\lambda}$ by lemma~\ref{le:lengthfunction}. Since $T_{w_\lambda} X_\lambda = X_\lambda T_{w_\lambda}$ we get
\begin{equation*}
T_w T_{w_\lambda} X_\lambda = T_w X_\lambda T_{w_\lambda} = q_\lambda^{-1} T_w T_{n^\lambda} T_{w_0} \overline{T}_{w_\lambda} T_{w_\lambda} = q_\lambda^{-1} T_{w n^\lambda} T_{w_0}.
\end{equation*}
Applying corollary~\ref{co:standardasalcove} to $v = wn^\lambda$ and using $\overline{T}_y T_{w_0} = T_{y w_0}$ for $y \in W$ we get a $q$-analog of a commutation formula of Pittie and Ram in the nil-affine Hecke algebra~\cite{pittieram:99}.
\begin{corollary}
Let $\lambda \in X^\vee_+$ and $w \in W^\lambda$. Let $t$ be the type of a minimal gallery connecting $A_f$ and $A_{wn^\lambda}$. Then
\begin{equation*}
T_{ww_\lambda} X_\lambda = q^{-1}_\lambda T_{wn^\lambda} T_{w_0} = \sum_{\sigma} q^{-1}_{wt(\sigma) +\lambda} q_{\varepsilon(\sigma)} L_\sigma X_{wt(\sigma)} T_{\varepsilon(\sigma)w_0}
\end{equation*}
where the sum is over all $\sigma \in \Gamma^+_t$ starting in $A_f$.
\end{corollary}
\section{Geometric interpretation}
\label{sec:geometricinterpretation}
In this section we reformulate results of~\cite{billigdyer:94} using galleries to show some relations between our combinatorics and geometry. We show that this geometric interpretation is compatible with the one for the $L_{\lambda\mu}$ arising from the geometric definition of the Satake isomorphism (see~\cite{haineskottwitzprassad:03}). Since there are no new results in this section, we will be rather brief.
Assume that $X^\vee = Q^\vee$. So $\tilde{W}^\mathfrak{a} = \Waffin$ and generalized alcoves are alcoves. We use negatively folded galleries as in remark~\ref{re:negative}. We regard the affine Hecke algebra $\tilde{\kh}^\mathfrak{a}$ specialized at some prime power $\mathbf{q}$ and all polynomials evaluated at $\mathbf{q}$.
Details of the following constructions and their relation to affine Kac-Moody algebras can be found in Kumar's book~\cite{kumar:02}. We use the notation from the introduction, i.e. $G$ is a semisimple, simply connected algebraic group over $K$ with root datum $\Phi$ associated to a Borel $B \subset G$ and a maximal torus $T \subset B$ and all groups are defined and split over ${\mathbb F}_\mathbf{q}$. Let ${\cal K} = K((t))$ be the field of Laurent series and denote by ${\cal O} = K[[t]] \subset {\cal K}$ the ring of formal power series. The evaluation at $0$ induces a map $ev : {\cal O} \to K, t \mapsto 0$. This induces a morphism of groups $ev : G({\cal O}) \to G$. Define $\Kmg{B} = ev^{-1}(B)$. Further we set $\Kmg{G} = G({\cal K})$ and let $\Kmg{N} \subset \Kmg{G}$ be the normalizer of $T$ in $\Kmg{G}$. Then $(\Kmg{G}, \Kmg{B}, \Kmg{N}, T({\cal O}))$ is a Tits system with Weyl group $\Waffin$.
From the theory of Tits systems one has a Bruhat decomposition $\Kmg{G} = \bigsqcup_{w \in \Waffin} \Kmg{U}_w w \Kmg{B}$ with $\Kmg{U}_w$ isomorphic to ${\mathbb A}_K^{l(w)}$. On the other hand there is the Iwasawa decomposition $\Kmg{G} = \bigsqcup_{w \in \Waffin} U({\cal K}) w \Kmg{B}$ where $U \subset B$ is the unipotent radical. These decompositions are compared by Billig and Dyer in~\cite{billigdyer:94}.
\begin{theorem}[\cite{billigdyer:94}]\label{th:generalizedbruhat}
For $w \in \Waffin$ and $s = s_i \in S^\mathfrak{a}$ one has
\begin{equation*}
U({\cal K}) w \Kmg{B} s \Kmg{B} =
\begin{cases}
U({\cal K}) ws \Kmg{B} & \text{if } A_w \succ A_{ws}\\
U({\cal K}) w \Kmg{B} \sqcup U({\cal K}) ws \Kmg{B} & \text{if } A_w \prec A_{ws}.
\end{cases}
\end{equation*}
\end{theorem}
Let $w \in \Waffin$ and let $\sigma$ be a minimal gallery of type $t = (t_1, \ldots, t_k)$ which connects $A_f$ and $A_w$. Define a map $\eta :\Kmg{U}_w \to \Gamma^-_t$ by
\begin{equation*}
\eta(u) = (A_f, A_{w_1}, \ldots, A_{w_k}) \text{ iff } u t_1 \cdot \ldots \cdot t_j \in U({\cal K}) w_j \Kmg{B} \text{ for } j \in \{1, \ldots, k\}.
\end{equation*}
It follows from the last theorem that $\eta$ is well defined.
The affine flag variety is the quotient $\Kmg{G}/\Kmg{B}$. It is the set of closed points of an ind-variety defined over ${\mathbb F}_\mathbf{q}$ such that the Bruhat cells $\Kmg{B} w \cdot \Kmg{B}$ for $w \in \Waffin$ are isomorphic to affine spaces of dimension $l(w)$. For $Y \subset \Kmg{G}$ denote by $Y \cdot \Kmg{B}$ its image in $\Kmg{G}/\Kmg{B}$. For a negatively folded gallery $\sigma$ denote by $m^-(\sigma)$ the total number of negative directions and by $n^-(\sigma)$ the total number of negative folds. Then one has
\begin{theorem}[\cite{billigdyer:94}]\label{th:galleriesbruhat}
\begin{enumerate}
\item If $\sigma \in \Gamma^-_t$ then $\eta^{-1}(\sigma) \cong K^{m^-(\sigma)} \times (K^*)^{n^-(\sigma)}$.
\item If $v \in \Waffin$ then $\Kmg{B} w \cdot \Kmg{B} \cap U({\cal K}) v \cdot \Kmg{B} = \bigsqcup_{\sigma \in \Gamma^-_{t} (A_f,A_v)} \eta^{-1}(\sigma) w \cdot \Kmg{B}$.
\end{enumerate}
\end{theorem}
For $Y \subset \Kmg{G}/\Kmg{B}$ denote by $|Y|$ the number of ${\mathbb F}_\mathbf{q}$-rational points. Let $L^-_w (A_f,A_v)$ be the analog of definition~\ref{de:lpolynomials} for negatively folded galleries. From the last theorem we get
\begin{corollary}\label{co:geometricinterpretation}
If $v,w \in \Waffin$ then $|\Kmg{B} w \cdot \Kmg{B} \cap U({\cal K}) v \cdot \Kmg{B}| = L^-_w (A_f,A_v)$.
\end{corollary}
\begin{remark}~\label{re:positiveopposite}
Looking at positively folded galleries starting in $-A_f$ one can calculate intersections $\Kmg{B}^- w \cdot \Kmg{B} \cap U^-({\cal K}) v \cdot \Kmg{B}$ in the same way as in corollary~\ref{co:geometricinterpretation}. Here $\Kmg{B}^-$ is obtained from the opposite Borel $B^-$ in the same way as $\Kmg{B}$ from $B$, and $U^-$ is the unipotent radical of $B^-$.
\end{remark}
As already mentioned in the introduction the definition of the \emph{geometric} Satake isomorphism yields a geometric interpretation of the $L_{\lambda\mu}$. To state this we have to consider intersections in the affine Grassmannian $\Kmg{G}/G({\cal O})$. For $\lambda, \mu \in X^\vee$ let $X_{\lambda\mu}^- \mathrel{\mathop :}= G({\cal O}) \tau_\lambda \cdot G({\cal O}) \cap U^-({\cal K}) \tau_\mu \cdot G({\cal O})$. Then $L_{\lambda\mu} = |X_{\lambda\mu}^-|$. This interpretation can also be recovered using the last corollary as follows.
The group $G({\cal O})$ is the parabolic subgroup of $\Kmg{G}$ associated to the classical Weyl group $W \subset \Waffin$, i.e. $G({\cal O}) = \bigsqcup_{w \in W} \Kmg{B} w \Kmg{B}$. Let $\pi : \Kmg{G}/\Kmg{B} \to \Kmg{G}/G({\cal O})$ be the canonical projection. From the general theory of Tits systems one knows that
\begin{equation*}
\pi_{|\Kmg{B} w \cdot \Kmg{B}} : \Kmg{B} w \cdot \Kmg{B} \to \Kmg{B} w \cdot G({\cal O})
\end{equation*}
is an isomorphism iff $w$ is of minimal length in $wW$. So we get an isomorphism
\begin{equation*}
\pi_{|\sqcup_{w \in W^\lambda} \Kmg{B} w n^\lambda \cdot \Kmg{B}} :
\bigsqcup_{w \in W^\lambda} \Kmg{B} w n^\lambda \cdot \Kmg{B} \to
G({\cal O}) \tau_\lambda \cdot G({\cal O}).
\end{equation*}
If $\mu \in Q^\vee$ then
\begin{equation*}
\pi^{-1}(U({\cal K}) \tau_\mu \cdot G({\cal O})) = \bigsqcup_{w \in W} U({\cal K}) \tau_\mu w \cdot \Kmg{B}.
\end{equation*}
Bringing this together and setting $X_{\lambda\mu} = G({\cal O}) \tau_\lambda \cdot G({\cal O}) \cap U({\cal K}) \tau_\mu \cdot G({\cal O})$ we get
\begin{align*}
|X_{\lambda\mu}|
& = \sum_{\substack{w \in W^\lambda \\ v \in W}} | \Kmg{B} w n^\lambda \cdot \Kmg{B} \cap U({\cal K}) \tau_\mu v \cdot \Kmg{B}|\\
& = \sum_{\substack{w \in W^\lambda \\ v \in W}} L^-_{w n^\lambda} (A_f,\tau_\mu v)
= \sum_{w \in W^\lambda} L^-_{w n^\lambda} (A_f,\mu)\\
& = \frac{1}{W_\lambda(\mathbf{q})} L^-_{t^\lambda}(\mu) = \mathbf{q}^{-2\langle \rho, \mu \rangle} L_{\lambda\mu}.
\end{align*}
The last equalities follow from the analog of the second statement in lemma~\ref{le:macdonaldasmonomial} for negatively folded galleries. Using the fact that $X^-_{\lambda\mu} = w_0 X_{\lambda, w_0\mu}$ and $L_{\lambda,\mu} = \mathbf{q}^{\langle \rho, \mu-w_0\mu \rangle} L_{\lambda,w_0\mu} = \mathbf{q}^{2\langle \rho, \mu \rangle} L_{\lambda,w_0\mu}$ we get as in~\cite{gaussentlittelmann:03} that
\begin{equation*}
|X^-_{\lambda\mu}| = \sum_{\sigma \in \Gamma^+_{t^\lambda}(\mu), \iota(\sigma) \in W^\lambda} \mathbf{q}^{l(w_0\iota(\sigma))} L_\sigma.
\end{equation*}
It is well known that the affine Hecke algebra $\tilde{\kh}^\mathfrak{a}$ specialized at $\mathbf{q}$ can be interpreted as the convolution algebra of $\Kmg{B}_\mathbf{q}$-invariant functions with finite support on the affine flag variety $\Kmg{G}_\mathbf{q} / \Kmg{B}_\mathbf{q}$ (see for example~\cite{haineskottwitzprassad:03}). In this setting the generator $T_w$ corresponds to the characteristic function on $\Kmg{B}_\mathbf{q} w \cdot \Kmg{B}_\mathbf{q}$. Using theorem~\ref{th:generalizedbruhat} one can give a similar interpretation for the alcove basis.
Let $t_w \in F$ be the characteristic function on $U^-({\cal K}_\mathbf{q}) w \cdot \Kmg{B}_\mathbf{q}$ (which in general does not have finite support) and let $M$ be the subspace spanned by all $t_w$. Using theorem~\ref{th:generalizedbruhat} one can show that $M$ is closed under right convolution by $\tilde{\kh}^\mathfrak{a}$. Moreover, for $w \in \Waffin$ and $s \in S^\mathfrak{a}$ one gets
\begin{equation*}
t_w * T_s =
\begin{cases}
t_{ws} & \text{if } A_w \prec A_{ws}\\
\mathbf{q} t_{ws} + (\mathbf{q}-1) t_w & \text{if } A_w \succ A_{ws}.
\end{cases}
\end{equation*}
So by lemma~\ref{le:qmultiplicationalcoves} the map
\begin{align*}
M & \to \tilde{\kh}^\mathfrak{a}\\
t_v & \mapsto \mathbf{q}^{\langle \rho, \mu(A_v) \rangle} X_{wt(A_v)} \overline{T}_{\delta(A_v)} = \mathbf{q}^{2 \langle \rho, \mu(A_v) \rangle - l(\delta(A_v))} X_{A_v}
\end{align*}
is an isomorphism of right $\tilde{\kh}^\mathfrak{a}$-modules.
\begin{remark}
In~\cite{billigdyer:94} the cited results are shown for any Kac-Moody group and any generalized system of positive roots. Theorem~\ref{th:generalizedbruhat} is then formulated with distinguished subexpressions instead of folded galleries. All facts also follow quite immediately using the methods from~\cite{gaussentlittelmann:03} in this case.\\
The explicit formulas for the action of $\tilde{\kh}^\mathfrak{a}$ on $M$ are known (at least to experts). It is essentially the action of $\tilde{\kh}^\mathfrak{a}$ on the space of Iwahori fixed vectors of the universal unramified principal series. Corollary~\ref{co:geometricinterpretation} follows from these explicit formulas.
\end{remark}
\section{Introduction} \label{sec1}
Variable structure control methods and in particular sliding mode controls,
are by now recognised as classical tools for the regulation of systems
governed by ordinary differential equations in a finite dimensional
setting. For an overview of the finite-dimensional theory see \cite{Utk92}.
While being easy to design, they possess attractive properties of
robustness and insensitivity with respect to disturbances and unmodelled
dynamics. These characteristics are all the more important when dealing
with infinite-dimensional systems. In many control applications, such as
heat transfer processes, chemical processes, and flexible manipulators, the
state evolution is governed by a partial differential equation. The
complexity of these plants results in models having significant degrees
of uncertainty. Thus motivated, recent research has been devoted to the
extension of sliding mode control and therefore the use of discontinuous
feedback laws, to the infinite-dimensional setting.
While earlier works \cite{OrlUtkARC82,OrlUtkAUTO87,OrlUtkAMCS98} were
confined to some special classes of systems, at present both theory and
application of sliding mode control have been extended to a rather
general setting \cite{OrlTAC00,OrlDocTAC02,OrlLouChrIJC04,LevDIE02,
LevEJC02,LevCC04}. In particular in \cite{OrlTAC00} the key concept of
equivalent control is introduced in a general Hilbert space framework
for evolution equations governed by unbounded linear operators that
generate $C_0$-semigroups. Also it is shown that, under some stability
assumptions, the ideal sliding can be uniformly approximated by ``real''
motions evolving in a boundary layer of the sliding manifold, thus
ensuring the validity of the method for application purposes. The
relationship between the equivalent control method and generalised
solutions of infinite-dimensional systems with discontinuous
right-hand side is presented in \cite{LevDIE02,LevEJC02}.
All the results in the above cited literature only take into consideration
distributed control systems, i.e. they deal with bounded input operators.
In this paper we make a first attempt to consider the extension of
sliding modes to a class of boundary control problems in a general setting.
To the author's knowledge there exist only a few results in this direction
in the linear case \cite{DraUtkIJC92,DraOzgSV94}, where by application of
integral transformations the problem is reduced to the control of a
finite-dimensional differential-difference equation. Our approach goes
instead in the direction of \cite{ZolB89}. In Section \ref{sec2} we define
the general abstract variational framework in which we set up our control
problem. In particular, the main assumptions we make on the operator governing
the evolution are weak continuity and coercivity, so that both linear and
non-linear operators are encompassed by this setting. In Section \ref{sec3}
we present our main result: a Faedo-Galerkin method is used to construct
a sequence of finite-dimensional approximations of the given problem. On
each of these the standard variable structure control theory of \cite{Utk92}
can be applied. We then assume that for each approximation a control law is
chosen to constrain the evolution in a boundary layer of a given sliding
manifold and study the limit as the dimensions diverge. We show that, under
some growth assumption on the norm of these controls, a limit motion exists,
which satisfies the sliding condition. Then, in Section \ref{sec4} we
apply the obtained results to the Neumann boundary control of a heat equation.
\section{Abstract setting and problem statement} \label{sec2}
In this paper we are going to consider a class of parabolic partial
differential equations with controllers acting on the boundary. In
particular we will study the case of Neumann boundary conditions and
finite dimensional control space. Also, we suppose that a manifold
$S$ is given, on which we want to restrict the motion of our system.
We then analyse the problem of the existence of an admissible control
law for which this ideal sliding motion is possible.
\begin{ex} \label{ex1}
Before going into the details of the precise abstract setting of the
problem, we show an example of application to give an idea of the family
of systems we intend to study.
Let $\Omega$ be a bounded, open subset of $\relax\ifmmode I\!\!R\else$I\!\!R$\fi^n$
with smooth boundary $\Gamma$, $T>0$ and $\Delta$ be the Laplacian differential
operator on $\relax\ifmmode I\!\!R\else$I\!\!R$\fi^n$. Consider the following evolution equation
\begin{equation} \label{pdeinex}
\pde {\parder{}{Q}{t}(t,x)=\Delta Q(t,x)+q(x)Q(t,x)} {t\in (0,T),\; x\in \Omega}
{\parder{}{Q}{\nu}(t,\sigma)= u(t)g(\sigma)} {t\in (0,T),\; \sigma\in \Gamma}
{Q(0,x) = Q_0(x)} {x\in \Omega.}
\end{equation}
Here $Q:[0,T]\times \Omega\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi$ represents the evolution of the ``state
vector'', $u:[0,T]\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi$ is a scalar control law, $g:\Gamma\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi$
and $q:\relax\ifmmode I\!\!R\else$I\!\!R$\fi^n\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi$ is bounded. This equation represents a model of
heat conduction with both diffusion and heat generation (if $q$ is
nonnegative).
Now for $\gamma:\Omega\rightarrow\relax\ifmmode I\!\!R\else$I\!\!R$\fi$ we can define (informally) a sliding surface $S$
as the set of functions $f:\Omega\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi$ such that
\[
\int_\Omega f(x) \gamma(x)\, dx = 0
\]
In this case a sliding motion $Q(t,x)$ on $S$ would satisfy
\[
\int_\Omega Q(t,x) \gamma(x)\, dx = 0, \quad t>0
\]
\end{ex}
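As a purely illustrative numerical aside (not part of the formal development), the surface $S$ of the example is simply the $L^2(\Omega)$-hyperplane orthogonal to $\gamma$, so on a grid one can test membership in $S$ and project onto it. The grid, the particular choices of $\gamma$ and $f$, and the Riemann-sum quadrature below are all assumptions made for this sketch.

```python
import numpy as np

# Illustrative only: S = { f : (f, gamma)_{L^2(Omega)} = 0 } viewed on a
# uniform grid over Omega = (0, 1). gamma, f and the quadrature rule are
# arbitrary choices, not taken from the text.
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
gamma = np.sin(np.pi * x)           # example weight defining S
f = np.exp(-x)                      # example state profile

def inner(a, b):
    """Riemann-sum approximation of the L^2(Omega) inner product."""
    return float(np.sum(a * b) * dx)

s_val = inner(f, gamma)             # s(f) = (f, gamma)
# Orthogonal projection of f onto the hyperplane S:
f_on_S = f - (s_val / inner(gamma, gamma)) * gamma
```

A sliding motion $Q(t,\cdot)$ on $S$ would keep the quantity `s_val` identically zero along the trajectory.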
\subsection{Variational formulation} \label{subsec2}
The setting of the abstract problem follows \cite{Lio69,Lio71,LioMag72}:
let $V$ be a separable, reflexive Banach space, $H$ be a Hilbert space,
$V\subset H$ with continuous injection. The space $H$ is identified with
its dual, while we denote by $V'$ the dual space of $V$, so that we have
\[
V\subset H \subset V'.
\]
For $u_1$, $u_2\in H$ the scalar product in $H$ will be denoted by
$(u_1,u_2)$ and the derived norm by $|u_i|$. We will denote by $\|\cdot\|$
the norm in $V$ and by $\|\cdot\|_*$ that in $V'$. The dual pairing
between the two spaces will be written as $\langle \cdot,\cdot \rangle$.
Also, we will assume that a semi-norm $[\cdot]$ is defined on $V$ such
that
\begin{equation} \label{vnorm}
[v]+\lambda|v|\geq \beta\|v\|, \quad \forall v\in V,\quad\mbox{for some }
\lambda,\beta >0.
\end{equation}
It is assumed that all the above (infinite-dimensional) spaces are
real vector spaces; results can be extended to the complex case with
the necessary modifications. For any $T>0$ we can define the following
spaces of vector-valued functions:
\[
L^2(0,T;V) = \{f:[0,T]\rightarrow V:
\int_0^T\|f(t)\|^2 dt<+\infty\}
\]
\[
L^\infty(0,T;H) = \{f:[0,T]\rightarrow H:
\sup_{t\in [0,T]}|f(t)| <+\infty\}.
\]
The space $L^2(0,T;V')$ can be defined analogously. Also, it is possible
to define on these spaces a notion of derivative in a distributional
sense (see e.g. \cite{Lio71}, Chapter III). The following
result \cite{LioMag72} will be useful in the sequel.
\begin{teo}\label{teocont}
Let
\[
W(0,T) = \left\{ f\in L^2(0,T;V)\,:\, \frac {df}{dt} \in L^2(0,T;V')
\right \}.
\]
All functions in $W(0,T)$ are, after a possible modification on a set of
measure zero, continuous from $[0,T]$ into $H$, i.e. $W(0,T)\subset
C^0(0,T;H)$.
\end{teo}
\vskip .3cm \indent
For $t\in (0,T)$ let $A(t):V\rightarrow V'$ be an operator satisfying the
following assumptions:
\begin{itemize}
\item for all $v,w\in V$ the map
\begin{equation} \label{meas}
t\mapsto \langle A(t)v,w\rangle \mbox{ is measurable;}
\end{equation}
\item for all $t$ and any $u,v,w \in V$ the map
\begin{equation} \label{hemi}
\alpha\mapsto \langle A(t)(u+\alpha v),w\rangle
\mbox{ is continuous;}
\end{equation}
\item there exist constants $c_1>0$, $c_2\geq 0$ such that
\begin{equation} \label{bound}
\|A(t)v\|_*\leq c_1\|v\|+c_2, \quad \forall v\in V;
\end{equation}
\item there exist constants $\alpha>0$ and $\nu\in \relax\ifmmode I\!\!R\else$I\!\!R$\fi$ such that
\begin{equation} \label{coercive}
\langle A(t)v,v\rangle \geq \alpha [v]^2 + \nu\, |v|^2, \quad \forall v\in V.
\end{equation}
\item $A(\cdot)$ is $2$-weakly continuous, i.e.
\begin{multline} \label{2weak}
v_k\rightarrow v\mbox{ weakly in } W(0,T) \Longrightarrow \\
A(\cdot)v_k(\cdot)\rightarrow A(\cdot)v(\cdot) \mbox{ weakly in }
L^2(0,T;V').
\end{multline}
\end{itemize}
Let $U\subset \relax\ifmmode I\!\!R\else$I\!\!R$\fi^m$ be closed and convex and let $f:[0,T]\times U
\rightarrow V'$ satisfy the following condition: there exists a constant $C>0$
such that for any $u:[0,T]\rightarrow U$, $u\in L^2(0,T)$
\begin{equation} \label{contf}
\int_0^T\|f(t,u(t))\|^2_*\,dt\leq C \|u\|^2_2 ,
\end{equation}
where $\|u\|_2$ is the usual $L_2$-norm (it will always be understood
that control laws $u$ take values in $U$, so that we will write
$u\in L^2(0,T)$ instead of $L^2(0,T;U)$).
We are now ready to write the abstract evolution equation we are
going to study. The evolution of the system will be given by a
vector-valued function $y\in W(0,T)$ satisfying the following
abstract Cauchy problem
\begin{equation} \label{abspro}
\diffeq { \frac {dy}{dt} + A(t)y(t) = f(t,u(t))\quad {\rm a.e.}\,t }
{y(0)=y_0,}
\end{equation}
with $u\in L^2(0,T)$ and for some $y_0\in H$ (by Theorem \ref{teocont}
this makes sense). The differential equation above has to be understood
as an equality in the dual space $V'$, i.e. setting
\begin{equation} \label{defa}
a(t;v,w) = \langle A(t)v,w\rangle,\quad t>0,\;v,w \in V
\end{equation}
and in view of Theorem \ref{teocont}, the differential problem
(\ref{abspro}) is equivalent to the following variational formulation
\begin{equation} \label{varpro}
\diffeq { \frac{d}{dt}(y(t),v) + a(t;y(t),v) = \langle f(t,u(t)),v\rangle
\;\; \forall v\in V, }
{y(0)=y_0}
\end{equation}
Existence and uniqueness results of the solution of such equations,
under our assumptions, can be found in \cite{Lio69} under
monotonicity assumptions and in \cite{Lio71,LioMag72} for the linear
case.
\begin{ex} \label{ex1-1}
Let us see how Example \ref{ex1} fits into this framework. Let
$H=L^2(\Omega)$ and
\[
V=H^1(\Omega)=\left \{f\in H:\,\parder{}{f}{x_i} \in H\,\,i=1,\ldots,n
\right \}.
\]
On $V$ we set $[v]^2=|\nabla v|^2$ and $\|v\|^2= [v]^2+|v|^2$.
Let $v\in V$ be arbitrary; by scalar multiplication and using
Green's formula one finds that the solution $Q$ of (\ref{pdeinex})
has to satisfy
\begin{eqnarray*}
\frac d{dt}(Q(t,\cdot),v) & = & \int_{\Omega} \Delta Q(t,x)
v(x)\,dx +(q\,Q(t,\cdot),v)\\
& = & -\int_{\Omega} \nabla_x Q(t,x)\cdot \nabla v(x)\,dx \\
& & +\int_\Gamma u(t)g(\sigma) \, v(\sigma)\,d\sigma +(q\,Q(t,\cdot),v).
\end{eqnarray*}
Therefore setting $y_0=Q_0$ and $y(t)=Q(t,\cdot)$ we get the
(autonomous) variational formulation of our abstract setting in
the form (\ref{varpro}) with
\begin{equation} \label{ainex}
a(v,w) = (\nabla v,\nabla w)-(qv,w)
\end{equation}
and
\begin{equation} \label{finex}
\langle f(t,u),v\rangle = \int_\Gamma ug(\sigma) \, v(\sigma)\,d\sigma.
\end{equation}
Now (\ref{hemi}) and (\ref{bound}) are easily verified and
(\ref{coercive}) follows from
\[
a(v,v) = [v]^2 -(qv,v)\geq [v]^2-(\sup_{\Omega} q)\,|v|^2.
\]
Also, the operator $A$ defined as $\langle Av,w\rangle:=a(v,w)$ is
linear and bounded, therefore it is $2$-weakly continuous and
we have (\ref{2weak}).
Moreover, on $V$ the trace operator $\tau$ of restriction of a
function to the boundary of $\Omega$ is well defined \cite{LioMag72}.
The range of $\tau$ is the Banach space $Z=H^{1/2}(\Gamma)$ and $\tau$ is
continuous from $V$ onto $H^{1/2}(\Gamma)$. Therefore $f$ is well
defined for any $g$ in the dual of $H^{1/2}(\Gamma)$, hence for
example for all $g\in L^2(\Gamma)$ and obviously satisfies (\ref{contf})
with $C=\|g\|^2_{L^2(\Gamma)}\,\|\tau\|^2_{{\cal L}(V,Z)}$.
\end{ex}
\section{Main results} \label{sec3}
In this section we introduce the concept of sliding surface for the
control problem (\ref{varpro}) and show how sliding motions can be
defined in this context.
Assume we are working in the framework set up in Section \ref{sec2}.
Thanks to separability, there exists a countable basis for $V$, so
that it is possible to define a family $\{V_k\}_{k\in\en}$ of finite
dimensional subspaces of $V$
\[
V_k={\rm span\ } \{v_{1,k},\ldots,v_{N_k,k}\}
\]
such that
\[
V_k\subset V_{k+1},\quad \overline{\bigcup_{k\in\en} V_k}=V.
\]
Then it is possible to define approximate solutions of (\ref{varpro})
by projecting on the subspaces $V_k$, using the standard Faedo-Galerkin
method. We thus define the following family of variational problems:
find $y_k:[0,T]\rightarrow V_k$ such that
\begin{equation} \label{varprok}
\diffeq { \frac{d}{dt}(y_k(t),v) + a(t;y_k(t),v) = \langle f(t,u_k(t)),v\rangle
\; \forall v\in V_k, }
{y_k(0)=y_{0,k}}
\end{equation}
with $y_{0,k}\in V_k$ for all $k$ and a sequence $\{u_k\}$ in $L^2(0,T)$.
Note that, since $V_k$ has dimension $N_k$, the above problem can be
written as an ordinary differential equation. In fact, since
$y_k(t)\in V_k$, there exists a vector $\xi_k(t) \in \relax\ifmmode I\!\!R\else$I\!\!R$\fi^{N_k}$
such that
\[
y_k(t)=\sum_{i=1}^{N_k} (\xi_k(t))_i\,v_{i,k}.
\]
The differential equation in (\ref{varprok}) is satisfied for all
$v\in V_k$ iff it is valid for every element of the basis of $V_k$.
Therefore, if
\[
y_{0,k}=\sum_{i=1}^{N_k} (\xi_{0,k})_i \,v_{i,k} ,
\]
\[
f_k(t)=(\,\langle f(t,u_k(t)),v_{i,k}\rangle\,)_{i=1, \ldots,N_k},
\]
and
\begin{eqnarray*}
A_k(t)=(a^{(k)}_{ij}(t))_{i,j=1,\ldots,N_k}, & \; &
a^{(k)}_{ij}(t)=a(t;v_{i,k},v_{j,k}),\\
M_k=(m^{(k)}_{ij})_{i,j=1,\ldots,N_k}, & \; &
m^{(k)}_{ij}=(v_{i,k},v_{j,k}),
\end{eqnarray*}
the differential problem (\ref{varprok}) is equivalent to the
following ordinary Cauchy problem
\begin{equation} \label{odek}
\diffeq{M_k\,\dot{\xi}_k(t)+A_k(t)\,\xi_k(t)=f_k(t)}
{\xi_k(0)=\xi_{0,k}.}
\end{equation}
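For the heat equation of Example \ref{ex1} in one space dimension, the system (\ref{odek}) can be assembled explicitly. The sketch below is an illustration under assumptions of our own choosing (not the paper's construction): $\Omega=(0,1)$, control acting only at the endpoint $x=1$ (i.e. $g(0)=0$, $g(1)=1$), constant $q$, and the $L^2$-orthonormal Neumann cosine basis $\phi_0=1$, $\phi_i(x)=\sqrt{2}\cos(i\pi x)$, for which $M_k$ is the identity and $A_k$ is diagonal with entries $(i\pi)^2-q$.

```python
import numpy as np

# Hypothetical 1-D instance of Example 1 (all concrete choices are ours):
# Omega = (0,1), boundary control only at x = 1, constant q, and the
# L^2-orthonormal Neumann basis phi_0 = 1, phi_i(x) = sqrt(2) cos(i pi x),
# so that M_k = I and A_k = diag(i^2 pi^2 - q).
N, q = 8, 0.5

def phi_at_right_end(i):
    # phi_i(1) = sqrt(2) * (-1)^i for i >= 1, and phi_0(1) = 1
    return 1.0 if i == 0 else np.sqrt(2.0) * (-1.0) ** i

A = np.diag([(i * np.pi) ** 2 - q for i in range(N)])   # stiffness A_k
b = np.array([phi_at_right_end(i) for i in range(N)])   # boundary vector g_k

def euler_step(xi, u, dt):
    """One explicit Euler step of the Cauchy problem xi' = -A xi + u b."""
    return xi + dt * (-A @ xi + u * b)

# Uncontrolled (u = 0) evolution: high modes decay rapidly, while the
# q-driven constant mode grows slowly.
xi = np.ones(N)
for _ in range(2000):                                   # integrate to t = 0.2
    xi = euler_step(xi, u=0.0, dt=1e-4)
```

The time step is chosen so that explicit Euler is stable for the stiffest retained mode; any standard ODE integrator could be substituted.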
We now prove a convergence result for the approximations $y_k$, under
some conditions on the controls sequence $\{u_k\}$.
\begin{teo} \label{teoconv}
Let the assumptions in Section \ref{sec2} be satisfied and $\{u_k\}$
be a sequence in $L^2(0,T)$. Let $y_k$ be the solution of
(\ref{varprok}) and suppose that $y_{0,k}\rightarrow y_0$ in $H$ for
$k\rightarrow +\infty$. Suppose moreover that the following condition on the
growth of the control norms is satisfied
\begin{equation} \label{congrow}
\|u_k\|_{L^2(0,t)}^2\leq M\int_0^t\,|y_k(s)|^2\,ds+N,
\; t\leq T
\end{equation}
for some non-negative constants $M$ and $N$, and that $f$ satisfies
the following weak continuity assumption:
\begin{multline} \label{weakcontf}
u_k\rightarrow u^* \mbox{ weakly in } L^2(0,T) \Longrightarrow \\
f(\cdot,u_k(\cdot))\rightarrow f(\cdot,u^*(\cdot))\mbox{ weakly in }
L^2(0,T;V').
\end{multline}
Then there exist a
control law $u^*\in L^2(0,T)$ and a function $y^*\in W(0,T)$
verifying (\ref{varpro}), such that, for some subsequence,
\begin{eqnarray*}
& & y_k\rightarrow y^* \mbox{ weakly in } W(0,T) \\
& & y_k\rightarrow y^* \mbox{ weakly* in } L^\infty(0,T;H)\\
& & u_k\rightarrow u^* \mbox{ weakly in } L^2(0,T).
\end{eqnarray*}
\end{teo}
\vskip .3cm
\bdim
Writing (\ref{varprok}) for $v=y_k(t)$ we get
\[
(\dot{y}_k(t),y_k(t))+a(t;y_k(t),y_k(t))=\langle f(t,u_k(t)),y_k(t)\rangle.
\]
As the first term on the left is in fact the time derivative of
$|y_k(t)|^2/2$, integrating the above identity we have
\begin{multline*}
\frac 12 |y_k(t)|^2 + \int_0^t a(s;y_k(s),y_k(s))\,ds = \\
\frac 12 |y_k(0)|^2 +\int_0^t \langle f(s,u_k(s)),y_k(s)\rangle\,ds.
\end{multline*}
By (\ref{coercive}), (\ref{contf}) and (\ref{vnorm}) we obtain the
following inequality
\begin{multline*}
\frac 12 |y_k(t)|^2 + \alpha\int_0^t [y_k(s)]^2 ds \leq \\
\frac 12 |y_k(0)|^2 - \nu \int_0^t |y_k(s)|^2 ds \\
+c\,\|u_k\|_2 \,\left( \int_0^t [y_k(s)]^2 \,ds
+\int_0^t |y_k(s)|^2\,ds\right)^{1/2}
\end{multline*}
for some constant $c>0$. Consider now for $x\geq 0$ the function
$h(x)=(\alpha x)/2-c\sqrt{x}$. It is easy to show that it attains its
minimum at $x=(c/\alpha)^2$, so that $c\sqrt{x}\leq (\alpha x+c^2/\alpha)/2$; thus
\begin{multline*}
\frac 12 |y_k(t)|^2 + \frac \alpha 2\int_0^t [y_k(s)]^2 ds \leq \\
\frac 12 |y_k(0)|^2 + \left(\frac\alpha 2+|\nu|\right)
\int_0^t |y_k(s)|^2 ds +\frac {c^2}{2\alpha}\|u_k\|_2^2.
\end{multline*}
Now, since by hypothesis $|y_{0,k}-y_0|$ tends to zero, the
term $|y_k(0)|^2$ is bounded. Moreover by (\ref{congrow})
\begin{equation} \label{ineqen}
|y_k(t)|^2 +\alpha \int_0^t [y_k(s)]^2 ds \leq
c_1 + c_2 \int_0^t |y_k(s)|^2 \,ds
\end{equation}
for some constants $c_1,c_2>0$. Since $\alpha>0$ we get
\[
|y_k(t)|^2\leq c_1+c_2\int_0^t |y_k(s)|^2\,ds
\]
Therefore, by Gronwall's lemma we obtain for some constant
$K>0$
\begin{equation} \label{boundH}
\|y_k\|_{L^{\infty}(0,T;H)} = \sup_{t\in [0,T]}
|y_k(t)|\leq K
\end{equation}
therefore from (\ref{ineqen}) we also have
\[
\int_0^T[y_k(s)]^2\,ds \leq {\rm const}
\]
and lastly, using (\ref{vnorm}) and (\ref{bound})
\begin{eqnarray*}
\|y_k\|_{L^2(0,T;V)} =\left (\int_0^T\|y_k(s)\|^2\,ds \right )^{1/2}
\leq {\rm const}, \\
\int_0^T\|A(t)y_k(t)\|^2_*\,dt\leq {\rm const}.
\end{eqnarray*}
Since closed balls are weakly compact in both $L^2(0,T;V)$ and
$L^2(0,T;V')$ and weakly* compact in $L^{\infty}(0,T;H)$, we can
extract a
subsequence of $\{y_k\}$ (which for simplicity we still denote
by $\{y_k\}$) converging to some $y^*\in L^2(0,T;V)\cap
L^{\infty}(0,T;H)$ for both the weak topology of $L^2(0,T;V)$
and the weak* topology of $L^{\infty}(0,T;H)$ and such that
$Ay_k$ weakly converges to some $\eta$ in $L^2(0,T;V')$. By
(\ref{congrow}) we also have that $\|u_k\|_2$ is bounded, thus
eventually passing to a further subsequence, there exists
$u^*\in L^2(0,T)$ such that $u_k$ converges to $u^*$ weakly in
$L^2(0,T)$. Also, by (\ref{weakcontf}) we can proceed as in
the proof of Theorem 1.1, p. 159 of \cite{Lio69} to conclude
that
\[
\diffeq {\frac d{dt} y^*(t)+\eta(t) = f(t,u^*(t))}
{y(0)=y_0.}
\]
Also, by a standard argument (see e.g. \cite{ZolB89}, Theorem 3)
one can prove that $\dot{y}_k\rightarrow \dot{y}^*$ weakly in
$L^2(0,T;V')$, i.e. $y_k\rightarrow y^*$ weakly in $W(0,T)$. Thus,
by (\ref{2weak}) $\eta(t)=A(t)y^*(t)$ and the proof is complete.
\edim
Having achieved the above convergence result, we introduce, as in
\cite{ZolB89}, a set $D$, which can be either $V$ or a sufficiently
large open subset of $H$, and a mapping $s:D\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi^m$ continuously
Fr\'echet differentiable on $D$. The sliding
surface $S$ we consider is defined as $S=\{y\in D:\,s(y)=0\}$.
Proceeding as in \cite{ZolB89}, by slightly modifying proofs, it is
possible to prove the following
\begin{cor} \label{corsli}
Let the assumptions of Theorem \ref{teoconv} hold. Let $z_k(t)
=s(y_k(t))$ and assume that one of the following is satisfied:
\begin{itemize}
\item [(1)] $D=V$, $s$ is affine and $z_k\rightarrow 0$ uniformly
in $t$;
\item [(2)] ${\cal B}_H(0,K)\subset D$, $V$ is compactly embedded
in $H$ (here ${\cal B}_H$ denotes a ball in $H$, while
$K$ is defined in (\ref{boundH}) above) and
$z_k(t)\rightarrow 0$ for almost every $t\in [0,T]$.
\end{itemize}
Then the limit motion $y^*$ of Theorem \ref{teoconv} belongs to
the sliding manifold $S$.
\end{cor}
\begin{rem}
Note that by (\ref{odek}) every $y_k$ solves a finite-dimensional
problem, thus for the approximate solutions all results of the
classical theory of variable structure systems and sliding mode
control of \cite{Utk92} are valid. Therefore existence results
for system motions satisfying the requirements in Corollary
\ref{corsli} and design methods to achieve them are available.
See also the discussion of existence under relaxed hypotheses
developed in \cite{ZolB89}.
\end{rem}
\addtolength{\textheight}{-2.8cm}
\section{An application} \label{sec4}
In this Section we show an application of the obtained results on
the control problem introduced in Example \ref{ex1}. We have already
proved (see Example \ref{ex1-1}) that this partial differential
equation with Neumann control fits in the abstract setting of Section
\ref{sec2}. It is also easy to prove that for $f$ as in (\ref{finex})
the condition (\ref{weakcontf}) is satisfied. In fact, if $u_k\rightarrow u$
weakly in $L^2(0,T)$, for any $\varphi\in L^2(0,T;V)$ we have
\begin{multline*}
\int_0^T \langle\, [f(t,u_k(t))- f(t,u(t))]\,,\,\varphi(t)\,\rangle \,dt = \\
\int_0^T[u_k(t)-u(t)]\, \int_\Gamma g(\sigma) \, \varphi(t)(\sigma)\,d\sigma\;dt
\end{multline*}
which converges to zero since by H\"older's inequality and
continuity of the trace operator on $V$
\begin{multline*}
\int_0^T \left( \int_\Gamma |g(\sigma)| \, |\varphi(t)(\sigma)|\,d\sigma
\right)^2dt \leq \\
\|g\|_{L^2(\Gamma)}^2 \int_0^T\|\varphi(t)\|^2\,dt < +\infty.
\end{multline*}
We then set $s:H\rightarrow \relax\ifmmode I\!\!R\else$I\!\!R$\fi$, $s(x)=(x,\gamma)$ and $S=\ker s$.
For convenience we suppose that the chosen bases of the subspaces
$V_k$ are orthonormal, so that the matrix $M_k$ in (\ref{odek}) is
the identity (this is not restrictive since in the general case
$M_k$ is symmetric, positive definite and a linear change of
coordinates is sufficient to reduce this problem to the
orthonormal case). Then, setting $g_k=(\, (g,\tau v_{i,k})
_{L^2(\Gamma)}\,)_{i=1, \ldots,N_k}$, (\ref{odek}) can be rewritten
as
\[
\diffeq{\dot{\xi}_k(t)+A_k\,\xi_k(t)=u_k(t)g_k}
{\xi_k(0)=\xi_{0,k}.}
\]
Then $z_k(t)=s(y_k(t))=(y_k(t),\gamma)=\gamma_k^T\xi_k(t)$, with
$\gamma_k=(\, (v_{i,k},\gamma)\,)_{i=1, \ldots,N_k}$. Let $V(t)=
z_k^2(t)/2$; then
\[
\dot{V}(t) = z_k(t)\,\dot{z}_k(t) = z_k(t)\,[\gamma_k^T\,
(-A_k\xi_k(t) +u_k(t) g_k)\,].
\]
By standard finite dimensional theory \cite{Utk92}
a sliding mode exists on $S_k=\{x\in\relax\ifmmode I\!\!R\else$I\!\!R$\fi^{N_k}:\,\gamma_k^Tx=0\}$ if
$\gamma_k^Tg_k \neq 0$. Also, in this case, setting
\[
u_k(t) = -U(t)\; \frac{{\rm sign}\,(z_k(t))}{\gamma_k^Tg_k}
\]
with $U(t)> |\gamma_k^TA_k\xi_k(t)|$ the sliding
surface is globally attractive and reached in finite time.
Moreover, if $\delta_k>0$ and $|s(y_k(0))|<\delta_k$ the control
\[
u_k(t) = -\frac{U(t)}{\gamma_k^Tg_k}\; \frac{z_k(t)}{|z_k(t)|+\delta_k}
\]
constrains the motion of the system in a $\delta_k$-boundary layer
of $S_k$. Let us now consider the term $\gamma_k^TA_k\xi_k(t)$;
since we assumed that the basis of $V_k$ is orthonormal, we have
\[
\gamma_k^TA_k\xi_k(t) = a(y_k(t),P_k\,\gamma),
\]
where $P_k:V\rightarrow V_k$ is the projection on $V_k$. Likewise we
have
\[
\gamma_k^Tg_k = \int_\Gamma g(\sigma)\,(P_k\,\gamma)(\sigma)\,d\sigma.
\]
Thus, if for example $\gamma\in V$ and
\[
\int_\Gamma g(\sigma)\,\gamma(\sigma)\,d\sigma\neq 0,
\]
since $P_k\,\gamma\rightarrow \gamma$ in $V$, there exists $k_0$ such that
$\gamma_k^Tg_k\neq 0$ for all $k\geq k_0$. In order to apply Theorem
\ref{teoconv} we also have to show that (\ref{congrow}) holds.
Recalling that
\[
a(y_k(t),P_k\,\gamma) =(\nabla y_k(t),\nabla P_k\,\gamma)
-(qy_k(t),P_k\,\gamma)
\]
we just have to show that, at least for suitable choices of $\gamma$, the first
term can be estimated using $|y_k(t)|$. Proceeding formally, by
Green's formula we have
\begin{multline*}
(\nabla y_k(t),\nabla P_k\,\gamma)= -(y_k(t),\Delta(P_k\,\gamma)) \\
+ \int_\Gamma y_k(t)(\sigma)\,\parder{}{}{\nu}(P_k\,\gamma) (\sigma)\,d\sigma.
\end{multline*}
Thus (\ref{congrow}) can be satisfied if sufficiently regular
decompositions $\{V_k\}$ of $H^1(\Omega)$ are chosen and if the
function $\gamma$ satisfies $\parder{}{}{\nu}P_k\,\gamma=0$, at least
on some subsequence. For example this is true if $\gamma\in V_N$
for some $N$ and $\parder{}{}{\nu}\gamma=0$.
\begin{rem}
In this paper we have chosen a variational setting for our
problem, by which we can encompass also some non-linear partial
differential equations. For the linear case, another common
abstract setting involves semigroup theory. In the above
example our operator $A:V\rightarrow V'$ could be, in some sense,
substituted by ${\cal A}:{\cal D}({\cal A})\subset H\rightarrow H$, with
\[
{\cal D}({\cal A}) = \left \{ y\in H^2(\Omega):\, \parder{}{y}{\nu} =0
\right \}, \quad {\cal A} y = \Delta y+qy.
\]
Note that the last condition on $\gamma$ above is related to
``$\gamma\in{\cal D}({\cal A}^*)$", which is frequently encountered in the
literature on output control of infinite-dimensional systems.
\end{rem}
\begin{rem}
In many applications the function $z(t)$ of the example represents
the system's output. The modulus of the control law we have chosen
depends on the whole state norm, which could be unavailable for
measurement. In \cite{OrlLouChrIJC04} observers are designed to
overcome this difficulty in the case of distributed control. It
would be interesting to study their application to this case also.
\end{rem}
\section{Conclusions and future work}
In this paper we have analysed the convergence behaviour of finite
dimensional Faedo-Galerkin approximations of a class of variational
problems, when sliding motions are taken into consideration. We have
thus shown that, under some growth hypothesis on the norms of the
controls, a sliding motion exists.
This is a first attempt to extend variable structure control
to boundary control problems for infinite-dimensional systems
and much work has still to be done in this area. Apart from the need
to extend these results to different boundary control problems, it
would be interesting to study how these results are related to a
notion of equivalent control, which has already been introduced in
the infinite-dimensional setting, and to the approximability of ideal
sliding motions by real ones.
\bibliographystyle{plain}
\section*{Introduction}
A {\em polyhedron} is a
two-dimensional complex which is obtained from several oriented
$p$-gons by identification of corresponding sides.
We will consider euclidean and hyperbolic polyhedra. Take a point
of the polyhedron and a sphere (euclidean or hyperbolic) of small
radius centred at this point.
the polyhedron is a graph, which is called the {\em link} at this
point.
We consider polyhedra such that all links of all
vertices are generalized 4-gons.
We say that a polyhedron $P$ is an
$(m,n)$-polyhedron if the girth of any link of $P$ is at
least $m$ and each face of $P$ is a polygon with at least $n$
edges.
Let $P$ be an $(m,n)$-polyhedron such
that $m$ and $n$ satisfy the inequality $mn \geq 2(m+n)$.
This inequality appears, for example,
in small cancellation theory
\cite{LS}.
The universal covering of an $(m,n)$-polyhedron with the metric
introduced in \cite[p.~165]{BBr} is a complete metric space
of non-positive curvature in the sense of Alexandrov and
Busemann \cite{GH}.
In this note we construct examples of
$(4,3)$-polyhedra. It follows from \cite{BS}, that
the fundamental groups of our polyhedra satisfy
the property (T) of Kazhdan. (Another relevant reference
is \cite{Z})
\bigbreak
\noindent{\bf Definition.}
A {\em generalized $m$-gon} is a graph which is a union of subgraphs,
called apartments, such that:
\begin{itemize}
\item[1.] Every apartment is a cycle composed of $2m$ edges
for some fixed $m$.
\item[2.] For any two edges there is an apartment containing both of them.
\item[3.] If two apartments have a common edge, then there is
an isomorphism between them fixing their intersection pointwise.
\end{itemize}
\bigbreak
\noindent
{\bf Definition.} Let $\mathcal{P}(p,m)$ be a tessellation of the
hyperbolic plane by regular polygons with $p$ sides,
with angles $\pi/m$ in each vertex where $m$ is an integer.
A {\em hyperbolic building} is a polygonal complex $X$,
which can be expressed as the union of subcomplexes called apartments
such that:
\begin{itemize}
\item[1.] Every apartment is isomorphic to $\mathcal{P}(p,m)$.
\item[2.] For any two polygons of $X$, there is an apartment
containing both of them.
\item[3.] For any two apartments $A_1, A_2 \in X$ containing
the same polygon, there exists an isomorphism $ A_1 \to A_2$
fixing $A_1 \cap A_2$.
\end{itemize}
\bigbreak
Let $C_p$ be a polyhedron whose faces are $p$-gons
and whose links are generalized $m$-gons with $mp>2m+p$.
We equip every face of $C_p$ with the hyperbolic
metric such that all sides of the polygons are geodesics
and all angles are $\pi/m$. Then the universal covering of such a
polyhedron is a hyperbolic building; see \cite{Paulin}.
The polyhedra we construct in this note are from this class,
since $p=3$, $m=4$.
Our construction gives examples of hyperbolic triangular buildings
with regular triangles as chambers, which, to our knowledge, were
not known before. Examples of hyperbolic buildings with right-angled
triangles were constructed by M.~Bourdon \cite{Bourdon}.
In the case $p=3$, $m=3$, i.e.\ when the faces of $C_p$ are
triangles, we can give a euclidean metric to every face. In this
metric all sides of the triangles are geodesics. The universal
coverings of these polyhedra with the euclidean metric are
euclidean buildings; see \cite{BB}, \cite{Ba}.
Recall (cf. \cite{Ronan}) that
a {\em generalized m-gon} is a connected, bipartite graph of
diameter $m$ and girth $2m$, in which each vertex lies on at least
two edges. A graph is {\em bipartite} if its set of vertices can
be partitioned into two disjoint subsets such that no two vertices
in the same subset lie on a common edge. The vertices of
one subset we will call black vertices and the
vertices of the other subset the white ones. The {\em diameter} is
the maximum distance between two vertices and the {\em girth} is
the length of a shortest circuit.
Let $G$ be a connected bipartite graph on $q+r$ vertices,
$q$ black vertices and $r$ white ones. Let $\mathcal{A}$ and
$\mathcal{B}$ be two alphabets on $q$ and $r$ letters respectively,
$\mathcal{A}=\{x_1, x_2, \ldots, x_q\}$ and $\mathcal{B}=\{y_1, y_2, \ldots,
y_r\}$.
We mark every black vertex with an element from
$\mathcal{A}$ and every white vertex with an element from $\mathcal{B}$.
\bigbreak
\section{Polygonal presentation.}
We recall a definition of polygonal presentation
introduced in \cite{V}.
\medbreak
\noindent {\bf Definition.} Suppose we have $n$ disjoint connected
bipartite graphs\linebreak
$G_1, G_2, \ldots, G_n$.
Let $P_i$ and $Q_i$ be the sets of black and white vertices
respectively in $G_i$, $i=1,\dots,n$; let $P=\bigcup P_i$,
$Q=\bigcup Q_i$, $P_i \cap P_j = \emptyset$,
$Q_i \cap Q_j = \emptyset$
for $i \neq j$ and
let $\lambda$ be a bijection $\lambda: P\to Q$.
A set $\mathcal{K}$ of $k$-tuples $(x_1,x_2, \ldots, x_k)$, $x_i \in P$,
will be called a {\em polygonal presentation} over $P$ compatible
with $\lambda$ if
\begin{itemize}
\item[(1)] $(x_1,x_2,x_3, \ldots ,x_k) \in \mathcal{K}$ implies that
$(x_2,x_3,\ldots,x_k,x_1) \in \mathcal{K}$;
\item[(2)] given $x_1,x_2 \in P$, then $(x_1,x_2,x_3, \ldots,x_k) \in \mathcal{K}$
for some $x_3,\ldots,x_k$ if and only if $x_2$ and $\lambda(x_1)$
are incident in some $G_i$;
\item[(3)] given $x_1,x_2 \in P$, then $(x_1,x_2,x_3, \ldots ,x_k) \in \mathcal{K}$
for at most one $x_3 \in P$.
\end{itemize}
If there exists such $\mathcal{K}$, we will call $\lambda$ a {\em basic bijection}.
Polygonal presentations for $n=1$, $k=3$ were listed in \cite{Cart}
with the incidence graph of the finite projective plane of
order two or three as the graph $G_1$.
Some polygonal presentations for $k > 3$ were constructed in \cite{V}.
\medskip
Now we construct polygonal
presentations with $k=3$, $n=1$,
where the graph $G_1$ is a generalized 4-gon.
$T_1$:
$(x_1,x_2,x_7)$
$(x_1,x_8, x_{11})$
$(x_1,x_{14},x_5)$
$(x_2,x_4,x_{13})$
$(x_{12},x_4,x_2)$
$(x_4,x_9,x_3)$
$(x_6,x_8,x_3)$
$(x_{14},x_6,x_3)$
$(x_{12},x_{10},x_5)$
$(x_{13},x_{15},x_5)$
$(x_{12},x_9,x_6)$
$(x_{11},x_{10},x_7)$
$(x_{14},x_{13},x_7)$
$(x_9,x_{15},x_8)$
$(x_{11},x_{15},x_{10})$
$T_2$:
$(x_1,x_{10},x_1)$
$(x_1,x_{15},x_2)$
$(x_2,x_{11},x_9)$
$(x_2,x_{14},x_3)$
$(x_3,x_7,x_4)$
$(x_3,x_{15},x_{13})$
$(x_4,x_8,x_6)$
$(x_{12},x_{11},x_4)$
$(x_5,x_8,x_5)$
$(x_5,x_{10},x_{12})$
$(x_6,x_{14},x_6)$
$(x_7,x_{12},x_7)$
$(x_{13},x_9,x_8)$
$(x_{14},x_{15},x_9)$
$(x_{13},x_{11},x_{10})$
\medskip
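Before analysing these sets, part of the definition can be machine-checked. Conditions (1) and (2) involve the basic bijection $\lambda$ and the graph $G_1$, so the sketch below (with the triples of $T_1$ and $T_2$ transcribed as integer tuples) only verifies that every letter occurs exactly three times and that condition (3) holds: a cyclically consecutive pair determines at most one successor.

```python
from collections import Counter
from itertools import chain

T1 = [(1,2,7),(1,8,11),(1,14,5),(2,4,13),(12,4,2),(4,9,3),(6,8,3),
      (14,6,3),(12,10,5),(13,15,5),(12,9,6),(11,10,7),(14,13,7),
      (9,15,8),(11,15,10)]
T2 = [(1,10,1),(1,15,2),(2,11,9),(2,14,3),(3,7,4),(3,15,13),(4,8,6),
      (12,11,4),(5,8,5),(5,10,12),(6,14,6),(7,12,7),(13,9,8),
      (14,15,9),(13,11,10)]

def check(T):
    # each letter x_1, ..., x_15 occurs exactly three times
    assert Counter(chain.from_iterable(T)) == {i: 3 for i in range(1, 16)}
    # condition (3): a cyclic pair (x_1, x_2) has at most one successor x_3
    pairs = [(t[i], t[(i + 1) % 3]) for t in T for i in range(3)]
    assert len(set(pairs)) == len(pairs) == 45

check(T1)
check(T2)
all_ok = True
print("letter counts and condition (3) hold for T1 and T2")
```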
Let us show that these sets are the desired polygonal
presentations. Recall that the smallest generalized
4-gon can be presented in the following way:
its ``points'' are the pairs $(ij)$, where $i,j=1,\ldots,6$, $i \neq j$,
and its ``lines'' are the triples of such pairs in which all $i,j$ are
different. We mark the pairs $(ij)$, $i,j=1,\ldots,6$, $i \neq j$,
by the numbers from 1 to 15 in the natural order. One can then
check by direct examination that the graph $G_1$
is indeed the smallest generalized 4-gon.
The polygonal presentations $T_1$ and $T_2$ are not equivalent,
since no automorphism of the generalized
4-gon transforms one into the other.
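The direct examination can be delegated to a short script. The sketch below builds the incidence graph of the model just recalled (points are the 2-element subsets of $\{1,\ldots,6\}$, lines are the triples of pairwise disjoint subsets) and checks the defining numerics of a generalized 4-gon: a bipartite graph of diameter 4 and girth 8.

```python
from itertools import combinations
from collections import deque

points = list(combinations(range(1, 7), 2))          # 15 "points"
lines = [l for l in combinations(points, 3)
         if len({x for p in l for x in p}) == 6]     # 15 disjoint triples

verts = points + lines
adj = {v: [] for v in verts}
for l in lines:
    for p in l:
        adj[p].append(l)
        adj[l].append(p)

def bfs(src):
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

diameter = max(max(bfs(v).values()) for v in verts)

def girth():
    # min over BFS roots of dist[v] + dist[w] + 1 over non-tree edges
    best = float("inf")
    for root in verts:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w], parent[w] = dist[v] + 1, v
                    q.append(w)
                elif parent[v] != w:
                    best = min(best, dist[v] + dist[w] + 1)
    return best

print(len(points), len(lines), diameter, girth())   # 15 15 4 8
```

This also confirms that there are exactly 15 points and 15 lines, matching the 15 letters of the presentations.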
\section{Construction of polyhedra.}
We can associate a polyhedron $K$ on $n$ vertices with
each polygonal presentation $\mathcal{K}$ as follows:
for every cyclic $k$-tuple $(x_1,x_2,x_3,\ldots,x_k)$ from
the definition
we take an oriented $k$-gon on the boundary of which
the word $x_1 x_2 x_3\ldots x_k$ is written. To obtain
the polyhedron we identify the corresponding sides of our
polygons, respecting orientation.
We will say that the
polyhedron $K$ {\em corresponds} to the polygonal
presentation $\mathcal{K}$.
\medskip
\noindent {\bf Lemma \cite{V}} A polyhedron $K$ which
corresponds to a polygonal presentation $\mathcal{K}$ has
graphs $G_1, G_2, \ldots, G_n$ as the links.
\medskip
\noindent
{\bf Remark.} Consider a polygonal
presentation $\mathcal{K}$. Let $s_i$ be the number of vertices
of the graph $G_i$ and $t_i$ be the number of edges of $G_i$,
$i=1,\dots,n$.
If the polyhedron $K$ corresponds to the polygonal
presentation $\mathcal{K}$, then $K$ has $n$ vertices
(the number of vertices of $K$ is equal to the number of graphs),
$\frac{1}{2}\sum_{i=1}^n s_i$ edges and $\frac{1}{k}\sum_{i=1}^n t_i$
faces; all faces are polygons with $k$ sides.
\medskip
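The cell counts of the polyhedron built from $T_1$ can be read off directly from the data; the sketch below (a sanity check, with the triples transcribed from the presentation) confirms one vertex, one edge per letter, one triangular face per cyclic triple, and recovers the sizes $s_1=30$, $t_1=45$ of the link.

```python
T1 = [(1,2,7),(1,8,11),(1,14,5),(2,4,13),(12,4,2),(4,9,3),(6,8,3),
      (14,6,3),(12,10,5),(13,15,5),(12,9,6),(11,10,7),(14,13,7),
      (9,15,8),(11,15,10)]
k = 3
vertices = 1                                 # n = 1 graph, so one vertex
edges = len({x for t in T1 for x in t})      # one edge per letter
faces = len(T1)                              # one triangle per cyclic triple
s1, t1 = 2 * edges, k * faces                # link G_1: 30 vertices, 45 edges
print(vertices, edges, faces, s1, t1)        # 1 15 15 30 45
```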
For each polygonal presentation $T_i$, $i=1,2$,
take 15 oriented regular hyperbolic triangles
with angles $\pi/4$, write the words from the presentation
on their boundaries, and glue together, respecting
orientation, the sides carrying the same letters.
The result is a hyperbolic polyhedron with one vertex and
15 faces, and its universal covering is a triangular
hyperbolic building.
acts simply transitively on vertices of the building.
The group has 15 generators and 15 relations, which
come naturally from the polygonal presentation.
\section{Introduction}
The moonshine vertex operator algebra $V^{\natural}$ constructed
by Frenkel-Lepowsky-Meurman \cite{FLM1},\cite{FLM2} not only
proves a conjecture by McKay-Thompson but also plays a
fundamental role in shaping the theory of vertex operator algebra.
In the introduction of \cite{FLM2}, Frenkel-Lepowsky-Meurman
conjectured that the $V^{\natural}$ can be characterized by the
following three conditions:
(a) the VOA $V^{\natural}$ is the only irreducible ordinary module
for itself;
(b) the central charge of $V^{\natural}$ is 24;
(c) $V^{\natural}_1=0.$
We call their conjecture {\it the Frenkel-Lepowsky-Meurman conjecture}.
These conditions are natural analogues of conditions which
characterize the binary Golay code and the Leech lattice.
Conditions (b) and (c) are clear from the construction. Condition
(a) is proved in \cite{D} by using the 48 commuting Virasoro
elements of central charge $\frac 1 2$ discovered in \cite{DMZ}.
Furthermore, $V^{\natural}$ is rational \cite{DLM2},\cite{DGH}.
Although the theory of vertex operator algebra has developed a lot
since \cite{FLM2}, including some uniqueness results for certain
VOAs \cite{LX}, \cite{DM2}, \cite{DM3}, there has been no real
progress in proving their conjecture.
In this paper we prove two weak versions of the Frenkel-Lepowsky-Meurman conjecture:
\begin{thmm}\label{mt1} Let $V$ be a $C_2$-cofinite vertex operator algebra satisfying
(a)-(c). We also assume that $V_2$ is isomorphic to the Griess
algebra.
Then $V$ is isomorphic to $V^{\natural}.$
\end{thmm}
In the second main theorem, we replace condition (a) by the
assumption that $\dim V_n\leq \dim V^{\natural}_n$ for $n\geq 3.$
\begin{thmm}\label{mt2} Let $V$ be a simple vertex operator algebra satisfying
(b)-(c). We also assume that $V_2$ is isomorphic to the Griess
algebra and $\dim V_n\leq \dim V^{\natural}_n$ for $n\geq 3.$
Then $V$ is isomorphic to $V^{\natural}.$
\end{thmm}
We now discuss the theorems and background. The weight two
subspace $V^{\natural}_2$ of $V^{\natural}$ with the product which takes the pair $u, v$ to
$u_1v$, where $u_1$ is the component operator of the
vertex operator $Y(u,z)=\sum_{n\in\mathbb Z}u_nz^{-n-1}$ \cite{FLM2}, is
the Griess algebra \cite{G}, which is a commutative nonassociative
algebra of dimension 196884. Moreover, $V^{\natural}$ is generated
by $V^{\natural}_2$ and $V^{\natural}$ is an irreducible module
for the affinization of the Griess algebra \cite{FLM2}. So, in
order to understand the moonshine vertex operator algebra, one
must know the Griess algebra and its affinization very well. It
seems that a complete proof of FLM's uniqueness conjecture needs a
better understanding of the Griess algebra. Unfortunately, there
does not yet exist a characterization of the Griess algebra
(independent of its connection to the monster simple group).
Also, the affinization of the Griess algebra is not a Lie algebra
and lacks a highest weight module theory. From this point of view,
$V^{\natural}$ is a very difficult vertex operator algebra.
The study of the moonshine vertex operator algebra in terms of
minimal series of the Virasoro algebra was initiated in
\cite{DMZ}. This is equivalent to studying the maximal
associative subalgebras of the Griess algebra. In \cite{DMZ}, we
find 48 mutually commutative Virasoro algebras with central charge
$\frac 1 2.$ As a result, a tensor product $T_{48}$ of 48 vertex
operator algebras, associated to the highest weight unitary
representations of the Virasoro algebra with central charge $\frac
1 2$ is a subalgebra of $V^{\natural}$ and $V^{\natural}$
decomposes into a direct sum of finitely many irreducible modules
for $T_{48}$ as $T_{48}$ is rational and the homogeneous summands for $V$ are finite dimensional. A lot of progress on
the study of the moonshine vertex operator algebra has been made
by using the subalgebra $T_{48}$ and vertex operator subalgebras
associated to the other minimal unitary series for the Virasoro
algebras \cite{DLMN}, \cite{DGH}, \cite{KLY}, \cite{M3}. The
discovery of the $T_{48}$
inside $V^{\natural}$ also inspired
the study of code vertex operator algebras and framed vertex
operator algebras \cite{M2}, \cite{DGH}.
A {\it frame} in $V^{\natural}$ is a set of 48 mutually orthogonal Virasoro
elements with central charge $\frac 1 2.$ The subalgebra $T_{48}$
depends on a frame as studied in \cite{DGH}. It is proved in
\cite{DGH} that for any choice of 48 commuting Virasoro algebras
there are two codes $C$ and $D$ associated to the decomposition of
$V^{\natural}$ into irreducible $T_{48}$-modules. Each irreducible
$T_{48}$-module is a tensor product of 48 unitary highest weight
modules $L(\frac{1}{2},h)$ for the Virasoro algebra with central
charge $\frac 1 2$ where $h$ can take only three values
$0,\frac{1}{2}, \frac{1}{16}.$ The code $C$ tells us the
irreducible $T_{48}$-modules occurring in $V^{\natural}$ which are
a tensor product of $L(\frac{1}{2},h)$ for $h=0$ or $\frac{1}{2}.$
Similarly, the code $D$ indicates the appearance of irreducible
$T_{48}$-modules whose tensor factors have at least one
$L(\frac{1}{2},\frac{1}{16}).$ The fusion rules for the vertex
operator algebra $L(\frac{1}{2},0)$ indicate that we should
consider a frame so that $C$ has maximal possible dimension and
$D$ minimal possible dimension. These respective dimensions are 41
and 7. The reason for using our particular frame is that, for the
code VOA which arises, {\it all irreducible modules are simple
currents} (see Theorem \ref{fusion} in this paper). The uniqueness
of $V^\natural$ then follows from known uniqueness results for
certain smaller VOAs, those which are simple current extensions of
code VOAs.
The main strategy in proving the theorem is to use this particular
frame. Since we assume that the weight 2 subspace of the abstract
vertex operator algebra in the theorem is isomorphic to the Griess
algebra, we can use the theory of framed vertex operator algebra
developed in \cite{DGH} and \cite{M2} to investigate the structure
of such vertex operator algebras.
Although we assume that $V_2\cong V_2^\natural$ (as algebras), we
cannot claim automatically that any VF in $V^\natural$
corresponds to a VF in $V$. The difficult point is to prove that
a Virasoro vector in $V_2^\natural$ generates a subVOA which is
{\it simple}, i.e. an irreducible highest weight module. This is
where we make use of the other assumptions in our main theorems.
The proof involves both character theory for the Virasoro algebra
with central charge $\frac 1 2$ and an explicit expression for the
$J$-function.
It seems that there is still a long way to go to settle the FLM
conjecture. The main difficulty is that we do not have much theory
of finite dimensional commutative nonassociative algebras which
could be applicable to a 196884-dimensional degree 2 summand of a
VOA satisfying our conditions (a,b,c) (see \cite{G1}). In a
sense, this paper reduces the uniqueness of the moonshine vertex
operator algebra to the uniqueness of the Griess algebra.
\section{Notations}
Most of our notations are fairly standard in the VOA literature.
For the reader, we note a few below.
\bigskip
codes $C=C(F), D=D(F)$: see Section 4;
codes $\mathcal{C}, \mathcal{D}$: see Section 6;
$j(q), J(q)$ : the elliptic modular function and the elliptic
modular function with constant term set equal to 0, i.e.,
$J(q)=j(q)-744$;
$\langle \omega_i \rangle$ : the subVOA generated by $\omega_i$;
VF : Virasoro frame, see Section 4;
$Vir(\omega_i)$ : the Virasoro algebra spanned by the modes of
the Virasoro element $\omega_i$ and the scalars;
$V^I$ : see Section 4;
$V^0$ or $V^{\emptyset}$ : the case of $V^I$ for $I=0$ or
$\emptyset$;
$V^\natural$ : the moonshine VOA, constructed in \cite{FLM2};
$(V^\natural)^0$ : this is $V^0$ for $V=V^\natural$.
\section{Various modules for vertex operator algebras}
\setcounter{equation}{0}
Let $(V,Y,{\bf 1},\omega)$ be a vertex operator algebra. We recall
various notions of modules (cf. \cite{FLM2}, \cite{DLM1}).
\bigskip
A {\em weak} $V$ module is a vector space $M$ with a linear map
$Y_M:V \rightarrow End(M)[[z,z^{-1}]]$ where $v \mapsto
Y_M(v,z)=\sum_{n \in \mathbb Z}v_n z^{-n-1}$, $v_n \in End(M)$. In
addition $Y_M$ satisfies the following:
1) $v_nw=0$ for $n\gg 0$ where $v \in V$ and $w \in M$;
2) $Y_M( {\textbf 1},z)=Id_M$;
3) The Jacobi Identity
\begin{eqnarray*}
& &z^{-1}_0\delta\left(\frac{z_1-z_2}{z_0}\right)
Y_M(u,z_1)Y_M(v,z_2)-z^{-1}_0\delta\left(\frac{z_2-z_1}{-z_0}\right)
Y_M(v,z_2)Y_M(u,z_1)\\
& & \ \ \ =z_2^{-1}\delta\left(\frac{z_1-z_0}{z_2}\right)
Y_M(Y(u,z_0)v,z_2)
\end{eqnarray*}
holds.
\bigskip
An {\em admissible} $V$ module is a weak $V$ module which carries
a $\mathbb Z_+$-grading, $M=\bigoplus_{n \in \mathbb Z_+} M(n)$, such that
$v_m M(n) \subseteq M(n+{\rm wt}\, v-m-1)$ for homogeneous $v\in V$.
\bigskip
An {\em ordinary} $V$ module is a weak $V$ module which carries a
$\mathbb C$-grading, $M=\bigoplus_{\lambda \in \mathbb C} M_{\lambda}$, such that:
1) $dim(M_{\lambda})< \infty$ for all $\lambda \in \mathbb C$;
2) $M_{\lambda+n}=0$ for fixed $\lambda$ and $n\ll 0$ (depending on $\lambda$);
3) $L(0)w=\lambda w={\rm wt}(w) w$, for $w \in M_\lambda$.
\bigskip
It is easy to prove that an ordinary module is admissible.
A vertex operator algebra is called {\em rational} if every
admissible module is a direct sum of simple admissible modules.
That is, a VOA is rational if there is complete reducibility of
the category of admissible modules. It is proved in \cite{DLM2}
that if $V$ is rational there are only finitely many irreducible
admissible modules up to isomorphism and each irreducible
admissible module is ordinary.
A vertex operator algebra $V$ is called {\em holomorphic} if it is
rational and the only irreducible ordinary module is itself. In
this case $V$ is also the only irreducible admissible module.
A vertex operator algebra is called {\em regular} if every weak
module is a direct sum of simple ordinary modules. So, regularity
implies rationality.
A vertex operator algebra $V$ is called {\it $C_2$-cofinite} if
$V/C_2(V)$ is finite dimensional where $C_2(V)=\<u_{-2}v|u,v\in
V\>.$
\section{Framed vertex operator algebras}
\setcounter{equation}{0}
In this section we review the framed vertex operator algebras and
related results from \cite{DMZ} and \cite{DGH}.
Let $L(c,h)$ be the irreducible highest weight module for the
Virasoro algebra with central charge $c$ and highest weight $h.$
The $L(\frac 1 2 ,0)$-module $L(\frac{1}{2},h)$ is unitary if and
only if $h=0,\frac{1}{2},\frac{1}{16}$ \cite{FQS}, \cite{GKO}.
Moreover, $L(\frac{1}{2},0)$ is a rational vertex operator algebra
and $L(\frac{1}{2},h)$ for $h= 0,\frac{1}{2},\frac{1}{16}$ gives
a complete list of inequivalent irreducible
$L(\frac{1}{2},0)$-modules.
We first recall the notion of framed vertex operator algebra. Let
$r$ be a nonnegative integer. A {\it framed vertex operator
algebra} (FVOA) is a simple vertex operator
algebra $(V,Y,{\bf 1},\omega)$
satisfying the following conditions: there exist
$\omega_i\in V$ for $i=1$,~$\ldots$,~$r$ such that (a) each
$\omega_i$ generates a copy of the simple Virasoro vertex operator
algebra $L(\frac{1}{2},0)$ of central charge $\frac{1}{2}$ and the
component operators $L^i(n)$ of
$Y(\omega_i,z)=\sum_{n\in\mathbb Z}L^i(n)z^{-n-2}$ satisfy
$[L^i(m),L^i(n)]=(m-n)L^i(m+n)+\frac{m^3-m}{24}\delta_{m,-n};$ (b)
The $r$ Virasoro algebras $Vir(\omega_i )$, spanned by the modes
of $Y(\omega_i ,z)$ and the identity, are mutually commutative;
and (c) $\omega=\omega_1+\cdots+\omega_{r}$. The set
$\{\omega_1,\ldots,\omega_r\}$ is called a {\it Virasoro frame} (VF).
From now on we assume that $V$ is a FVOA of central charge
$\frac{r}{2}$ with frame $F:=\{\omega_1,\ldots,\omega_r\}$. Let $T_r$
be the vertex operator algebra generated by $\omega_i$ for
$i=1,...,r.$ Then $T_r$ is isomorphic to
$L(\frac{1}{2},0)^{\otimes r}$ and its irreducible modules are the
$L(h_1,...,h_r):=L(\frac{1}{2},h_1)\otimes\cdots \otimes
L(\frac{1}{2},h_r)$ for $h_i=0,\frac{1}{2},\frac{1}{16}.$ Since
$T_r$ is a rational vertex operator algebra, $V$ is a completely
reducible $T_r$-module. That is,
\begin{equation}\label{2.1}
V \cong \bigoplus_{h_i\in\{0,\frac{1}{2},\frac{1}{16}\}}
m_{h_1,\ldots, h_{r}}L(h_1,\ldots,h_{r})
\end{equation}
where the nonnegative integer $m_{h_1,\ldots,h_r}$ is the
multiplicity of $L(h_1,\ldots,h_r)$ in $V$. In particular, all the
multiplicities are finite and $m_{h_1,\ldots,h_r}$ is at most $1$
if all $h_i$ are different from $\frac{1}{16}$.
There are two binary codes $C=C(F)$ and $D=D(F)$ associated to the
decomposition (\ref{2.1}). In order to define the code $D$ we
identify a subset $I$ of $\{1,...,r\}$ with a codeword
$d=(d_1,...,d_r)\in \mathbb F_2^r$ where $d_i=1$ if $i\in I$ and $d_i=0$
otherwise. Let $I$ be a subset of $\{1,\ldots,r\}$. Define $V^I$
as the sum of all irreducible submodules isomorphic to one of the
irreducibles $L(h_1,\ldots , h_r)$ such that $h_i=\frac{1}{16}$ if
and only if $i \in I$. Then
$$V=\bigoplus_{I\subseteq \{1,\ldots,r\}}V^I.$$
Set
\begin{equation}\label{2.2}
D=D(F):=\{I\in \mathbb F_2^r \mid V^I \ne 0\}.
\end{equation}
For $c=(c_1,...,c_r)\in \mathbb F_2^r$, we define
$V(c)=m_{h_1,...,h_r}L(h_1,...,h_r)$ where $h_i=\frac{1}{2}$ if
$c_i=1$ and $h_i=0$ otherwise. Set
\begin{equation}\label{2.3}
C=C(F):=\{c\in \mathbb F_2^r \mid V(c) \ne 0\}.
\end{equation}
Then $V^{\emptyset}=V^0=\bigoplus_{c\in C}V(c)$.
Here we summarize the main results on FVOAs from \cite{DGH}.
\begin{thm}\label{DGH} Let $V$ be a FVOA. Then
(a) $V=\oplus_{n\geq 0}V_n$ with $V_0=\mathbb C {\bf 1}.$
(b) $V$ is rational.
(c) $C$ and $D$ are binary codes and
$$C\subset D^{\perp}=\{x=(x_1,...,x_r)\in \mathbb F_2^r|x\cdot d=0
\forall d\in D\}.$$ Moreover, $V$ is holomorphic if and only if
$C=D^{\perp}.$
(d) $V^{0}$ is a simple vertex operator algebra and the $V^I$ are
irreducible $V^{0}$-modules. Moreover $V^I$ and $V^J$ are
inequivalent if $I\ne J$.
(e) For any $I,J\in D$ and $0\ne v\in V^J$ we have
$V^{I+J}=span\{u_nv|u\in V^I,n\in\mathbb Z\}.$
(f) Let $I \subseteq \{1,\ldots,r\}$ be given and suppose that
$(h_1,\ldots,h_r)$ and $(h_1',\ldots,h'_r)$ are $r$-tuples with
$h_i$, $h_i'\in\{0,\frac{1}{2},\frac{1}{16}\}$ such that $h_i=\frac{1}{16}$
(resp.~$h_i'=\frac{1}{16}$) if and only if $i\in I$. If both
$m_{h_1,\ldots,h_r}$ and $m_{h_1',\ldots,h'_r}$ are nonzero then
$m_{h_1,\ldots,h_r}=m_{h_1',\ldots,h'_r}$. That is, all
irreducible modules inside $V^I$ for $T_r$ have the same
multiplicities.
(g) For any $c,d\in C$ and $0\ne v\in V(d)$ we have
$V(c+d)=span\{u_nv|u\in V(c),n\in\mathbb Z\}.$
\end{thm}
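Part (c) of the theorem can be illustrated on a toy pair of codes. The codes below are assumptions for illustration only, not the codes $C(F)$, $D(F)$ of an actual frame: for $r=4$, $D$ is the repetition code, whose dual is the even-weight code, and $C$ is a small even code contained in that dual.

```python
from itertools import product

r = 4
D = {(0, 0, 0, 0), (1, 1, 1, 1)}                  # toy code, not from a frame
C = {(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)}

def dot(x, y):
    """Standard inner product on F_2^r."""
    return sum(a * b for a, b in zip(x, y)) % 2

# D_perp = all words of F_2^r orthogonal to every d in D
D_perp = {x for x in product((0, 1), repeat=r)
          if all(dot(x, d) == 0 for d in D)}

assert C <= D_perp                                 # the containment C ⊂ D^⊥
print(len(D_perp))                                 # 8: the even-weight words
```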
\section{Code VOA $M_C$}
In this section we review and extend results on code VOAs and
their modules, following \cite{M1}-\cite{M3} and \cite{La}.
We shall sometimes consider an integer modulo 2 as its Euclidean
lift, i.e., its representative 0 or 1 in $\mathbb Z$, so that when $\alpha
\in \mathbb Z_2$, $\frac 1 2 \alpha$ makes sense as the rational number 0 or
$\frac 1 2$.
Let $C$ be an even binary code. For any $\alpha= (\alpha_1, \dots,
\alpha_n)\in C$, denote
\[
M_\alpha= L(\frac{1}2,\frac{\alpha_1}2) \otimes \cdots \otimes
L(\frac{1}2,\frac{\alpha_n}2) \quad \text{ and }\quad M_C=
\bigoplus_{\alpha\in C} M_\alpha.
\]
Note that $M_C$ is a simple current extension of
$T_n=L(\frac{1}2,0)^{\otimes {n}}$ and it has a unique VOA
structure over $\mathbb C$ (cf. \cite{DM2}, \cite{M2}). This will be
used to deduce the uniqueness of $V^\natural$.
\begin{rem} We use $M_C$ for a code VOA instead of $M_D$ given
in \cite{M2} in this paper. This is consistent with our code $C$
defined in Section 3. In fact, $M_C$ is a framed VOA with frame
$F$ satisfying $C(F)=C$ and $D(F)=0.$
\end{rem}
\begin{rem}
For any $\beta\in \mathbb{Z}_2^n$, one can define an automorphism
$\sigma_{\beta}:M_C \to M_C$ by
\[
\sigma_{\beta}(u)=(-1)^{\langle \alpha, \beta \rangle}u\qquad \text{ for
} u\in M_{\alpha}.
\]
This automorphism is called a coordinate automorphism.
Note that $\sigma_{\beta}=\sigma_{\beta'}$ if and only if
$\beta+\beta'\in C^{\bot}$ and the subgroup $P$ generated by $\{
\sigma_{\beta}|\ \beta \in \mathbb Z_2^n\}$ is isomorphic to $\mathbb Z_2^n/
C^\perp$. Moreover, the fixed subalgebra ${M_C}^P$ is $T_n$ (cf.
\cite{M1}).
\end{rem}
We first study the representations of the code VOA $M_C.$ Let $W$
be an irreducible $M_{C}$-module. Then $W$ can be written as a
direct sum of irreducible $T:=T_n$-modules,
\[
W\cong \bigoplus_{h_{i}\in \{0,\frac{1}2,\frac{1}{16}\}} m_{h_{1},
\dots,h_{n}} L(h_1,\cdots, h_n).
\]
\begin{defn}
Define $\tau(L(h_1,\cdots, h_n))=(a_{1},\cdots ,%
a_{n})$ $\in \mathbb{Z}_{2}^{n}$ such that
\[
a_i=\left\{
\begin{array}{lll}
0 & \text{\quad if }& h_{i}=0\text{ or }\frac{1}{2} \\
1 & \text{\quad if }&h_{i}=\frac{1}{16}
\end{array}
\right. .
\]
This binary word is called the $\tau $-word of $L(h_1,\cdots, h_n).$
\end{defn}
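The $\tau$-word is a straightforward function of the weight tuple; a minimal sketch, using exact rational arithmetic so that the comparison with $\frac{1}{16}$ is exact:

```python
from fractions import Fraction as F

def tau(hs):
    """tau-word of L(h_1, ..., h_n): entry 1 where h_i = 1/16, else 0."""
    return tuple(1 if h == F(1, 16) else 0 for h in hs)

word = tau((F(0), F(1, 16), F(1, 2), F(1, 16)))
print(word)   # (0, 1, 0, 1)
```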
By the fusion rules for $L\left( \frac{1}{2},0\right) $, the $\tau
$-words for all irreducible $T$-submodules of $W$ are the same.
Thus, we can also define the {\it $\tau $-word of $W$} by
\[
\tau\left( W\right) =\tau(L(h_1,\cdots, h_n)),
\]
where $L(h_1,\cdots, h_n)$ is any irreducible $T$-submodule of
$W$.
The following proposition is an easy consequence of the fusion
rules (cf. \cite{DGH} and \cite{M2}).
\begin{prop}
\label{even}Let $C$ be an even code and let $W$ be an irreducible module of $%
M_{C}$. Then $\tau\left( W\right) $ is orthogonal to $C$.
\end{prop}
Now we shall give more details about the structure of the
irreducible module $W.$ The details can be found in \cite{M2}. Let
$\beta \in C^{\perp }:=\{\alpha\in \mathbb{Z}_2^n|\ \langle
\alpha,\gamma\rangle=0 \ \text{ for all } \gamma\in C\}$ and
$C_{\beta }:=\{\alpha \in C|\ \mathrm{supp}\,\alpha \subseteq
\mathrm{supp}\,\beta \}$.
Let the group $\hat{C}=\left\{ \pm e^{k}|\ k\in C\right\} $ be a
central extension of $C$ by $\{\pm 1\}$ such that
$$e^he^k=(-1)^{\langle
h,k\rangle}e^ke^h$$
for any $h,k\in C$ and denote $\hat{C}_{\beta
}:=\left\{ \pm e^{k}|\ k\in C_{\beta}\right\}\subset \hat{C} $.
Let $H$ be a maximal self-orthogonal subcode of $C_{\beta }$. Then
$\hat{H}=\{\pm e^{\alpha }|\alpha \in H\}$ is a maximal abelian
subgroup of $\hat{C}_{\beta }$ (it is automatically normal since
it contains the commutator subgroup of $\hat{C}_{\beta }$). Take a linear character
$\chi : \hat{H}\rightarrow \{\pm 1\}$ with $\chi \left(
-e^{0}\right) =-1$ and define a 1-dimensional $\hat{H}$-module
$F_{\chi }$ by the action
\[
e^{\alpha }p=\chi \left( e^{\alpha }\right) p\qquad\text{ for }p\in F_{\chi },%
\text{ }\alpha \in H.
\]
We use ``$h_1\times h_2$'' to abbreviate a few of the well-known
fusion rules involving $L(\frac 1 2 , h_1)$ and $L(\frac 1 2,
h_2)$, i.e., $0\times h=h\times 0 =h$ for $h\in \{0,
\frac{1}2,\frac 1{16}\}$, $\frac{1}2\times \frac{1}2=0$ and
$\frac{1}2\times \frac{1}{16}=\frac{1}{16}\times
\frac{1}2=\frac{1}{16}$.
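These rules can be recorded as a small lookup table. The remaining Ising fusion rule $\frac{1}{16}\times\frac{1}{16}=0+\frac{1}{2}$, not needed in the text above, is included for completeness and is the reason the entries below are sets:

```python
# The three irreducible L(1/2,0)-modules, labelled by their weights.
ONE, EPS, SIGMA = "0", "1/2", "1/16"

FUSION = {
    (ONE, ONE): {ONE}, (ONE, EPS): {EPS}, (ONE, SIGMA): {SIGMA},
    (EPS, EPS): {ONE}, (EPS, SIGMA): {SIGMA},
    (SIGMA, SIGMA): {ONE, EPS},        # sigma x sigma = 1 + epsilon
}

def fuse(a, b):
    """Symmetrized lookup: h1 x h2 as in the text."""
    return FUSION.get((a, b)) or FUSION[(b, a)]

print(fuse(EPS, SIGMA))   # {'1/16'}: 1/2 x 1/16 = 1/16
```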
For any $h^{i}\in \{0,\frac{1}{2},\frac{1}{16}\},i=1,\cdots ,n$, with
${\tau}\left( \otimes_{i=1}^n L\left( \frac{1}{2},h^{i}\right)
\right) =\beta $, we define
\[
U=\left(\otimes _{i=1}^nL\left( \frac{1}{2},h^{i}\right)\right)
\otimes F_{\chi }.
\]
Then $U$ becomes an $M_{H}$-module with the vertex operator
defined by
\[
Y\left( \left( \otimes_{i=1}^n u^{i}\right) \otimes e^{\alpha
},z\right) =\left(\otimes
_{i=1}^{n}I^{\frac{a_{i}}{2},h^{i}}\left( u^{i},z\right)
\right)\otimes \chi \left( e^{\alpha }\right),
\]
where $u^{i}\in L(\frac{1}2,\frac{a_{i}}2)$, $\left( a_{1}, \dots,
a_{n}\right) \in H$, and $I^{\frac{a_{i}}{2},h^{i}}$ is
a nonzero
intertwining operator of type
\[
\left(
\begin{array}{ccc}
&L(\frac{1}2, \frac{a_i}2\times h^i)&\\ L(\frac{1}2,
\frac{a_i}2)&& L(\frac{1}2, h^i)
\end{array}
\right).
\]
We shall denote this $M_{H}$-module by $U\left( \left(
h^{i}\right) ,\chi \right) $ or $U\left( h^{i}\right) \otimes
F_{\chi }$.
Let $\{\beta _{j}=\left( b_{j}^{i}\right) \}_{j=1}^{s}$ be a transversal of $%
H$ in $C$ and
\[
X=\bigoplus _{\beta _{j}\in C/H}\left\{U\left( h^{i}\times \frac{b_{j}^{i}}{2}%
\right) \otimes \left( e^{\beta _{j}}\otimes _{\hat{H}}F_{\chi
}\right) \right\}.
\]
Note that $X$ does not
depend on the choice of the transversal of $H$ in $C$ and $X$ is an $M_{H}$%
-module.
The following results can be found in Miyamoto \cite{M2}.
\begin{thm}
$X$ is an $M_{C}$-module with
\[
Y\left( u^{\gamma }\otimes e^{\gamma },z\right) =\left(
\otimes_{i=1}^n I\left( u^{i },z\right) \right) \otimes e^{\gamma
}
\]
for any $\gamma \in C$ and $u^{\gamma}=\otimes_{i=1}^n u^i\in
M_\gamma$. We shall denote $X$ by $\mathrm{Ind}_{H}^{C}U(
(h^i),\chi)$.
\end{thm}
\medskip
\begin{thm}\label{miya}
For any irreducible $M_{C}$-module $W$, there is a pair $\left(
\left( h^{i}\right) ,\chi \right) $ such that
\[
W\cong \mathrm{Ind}_{H}^{C}\left( U\left( \left( h^{i}\right)
,\chi \right) \right),
\]
where $\tau\left( W\right) =\tau\left( L\left( h^{1}, \cdots,
h^n\right) \right) =\beta $, $H$ is a maximal self-orthogonal
subcode of $C_{\beta}=\{\alpha \in C|\,\mathrm{supp}\,\alpha
\subseteq \mathrm{supp}\,\beta \}$ and $\chi $ is a linear character
of $\hat{H}$. Moreover, the structure of the $M_{C}$-module $W$ is
uniquely determined by an irreducible $M_{H}$-submodule of $W$.
\end{thm}
\medskip
Next we shall give a description of all irreducible $M_C$-modules
by using some binary words. Let $C$ be an even code of length $n.$
For a given $\beta\in C^\perp$ and $\gamma\in \mathbb Z_2^n$, we define
\[
h_{\beta, \gamma}=(h_{\beta, \gamma}^1,\dots, h_{\beta,
\gamma}^n)\in \{0, \frac{1}2,\frac{1}{16}\}^n
\]
such that
\begin{equation*}
h_{\beta, \gamma}^i=
\begin{cases}
\frac{1}{16}& \text{ if }\quad \beta_i=1,\\
\frac{\gamma_i}2 & \text{ if } \quad\beta_i=0.
\end{cases}
\end{equation*}
Denote $U(h_{_{\beta, \gamma}})=U(h_{\beta, \gamma}^1,\dots,
h_{\beta, \gamma}^n)=L(h_{\beta, \gamma}^1,\cdots,h_{\beta,
\gamma}^n)$. Fix a maximal self-orthogonal subcode $H_\beta$ of
the code $C_\beta=\{ \alpha\in C|\, \mathrm{supp}\,\alpha \subset
\mathrm{supp}\,\beta\}$ and define a character
$\chi_{\gamma}:\hat{H_\beta}\to \mathbb{C}$ of the abelian group
$\hat{H}_\beta$ by
\[
\chi_{\gamma}(-e^0)=-1 \quad \text{ and }\quad
\chi_{\gamma}(e^{\alpha})=(-1)^{\langle \alpha, \gamma\rangle} \quad
\text{ for }\alpha\in H_\beta.
\]
Then $(\beta,\gamma)$ determines an irreducible $M_C$-module
\[
M_C(\beta,\gamma)=\mathrm{Ind}_{H_\beta}^C U(h_{\beta,
\gamma}^1,\dots, h_{\beta, \gamma}^n)\otimes F_{\chi_\gamma}.
\]
When there is no confusion, we shall simply denote
$M_C(\beta,\gamma)$ by $M(\beta, \gamma)$.
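The combinatorial data of the pair $(\beta,\gamma)$ is elementary, and the definitions above translate directly into a few lines of Python. The sketch below is ours (the names \texttt{h\_word} and \texttt{chi} are not from the text); it computes the word $h_{\beta,\gamma}$ and the character value $\chi_\gamma(e^\alpha)=(-1)^{\langle\alpha,\gamma\rangle}$.

```python
from fractions import Fraction

def h_word(beta, gamma):
    """The word h_{beta,gamma} in {0, 1/2, 1/16}^n: entry i is 1/16 if
    beta_i = 1, and gamma_i / 2 if beta_i = 0."""
    return tuple(Fraction(1, 16) if b == 1 else Fraction(g, 2)
                 for b, g in zip(beta, gamma))

def chi(gamma, alpha):
    """The character value chi_gamma(e^alpha) = (-1)^{<alpha, gamma>}."""
    return (-1) ** (sum(a * g for a, g in zip(alpha, gamma)) % 2)
```

For example, $\beta=(1,0,0,1)$ and $\gamma=(0,1,0,1)$ give $h_{\beta,\gamma}=(\frac{1}{16},\frac{1}{2},0,\frac{1}{16})$.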
\begin{lem}
The definition of $M(\beta, \gamma)$ is independent of the choice
of the maximal self-orthogonal subcode $H_\beta$ of $C_\beta$.
\end{lem}
\noindent {\bf Proof: \,} Let $H$ be another maximal self-orthogonal subcode of
$C_\beta$ and let $\psi_\gamma: \hat{H} \to \mathbb{C}$ be a
character of $\hat{H}$ such that $\psi_\gamma(e^\xi)=(-1)^{\langle
\xi, \gamma\rangle}$ and $\psi_\gamma(-e^0)=-1$. Then we can
construct another $M_C$-module
\[
\mathrm{Ind}_{H}^C U(h_{\beta, \gamma}^1,\dots, h_{\beta,
\gamma}^n)\otimes F_{\psi_\gamma}.
\]
By Miyamoto's Theorem\,(Theorem \ref{miya}), the structure of this
module is uniquely determined by the structure of the $M_H$
submodule $U(h_{\beta, \gamma}^1,\dots, h_{\beta,
\gamma}^n)\otimes F_{\psi_\gamma} $. Thus,
\[
\mathrm{Ind}_{H}^C U(h_{\beta, \gamma}^1,\dots, h_{\beta,
\gamma}^n)\otimes F_{\psi_\gamma}\cong \mathrm{Ind}_{H_\beta}^C
U(h_{\beta, \gamma}^1,\dots, h_{\beta, \gamma}^n)\otimes
F_{\chi_\gamma}
\]
if and only if $ \mathrm{Ind}_{H_\beta}^C U(h_{\beta,
\gamma}^1,\dots, h_{\beta, \gamma}^n)\otimes F_{\chi_\gamma}$
contains an $M_H$-submodule isomorphic to $U(h_{\beta,
\gamma}^1,\dots, h_{\beta, \gamma}^n)\otimes F_{\psi_\gamma}. $ It
is equivalent to the fact that $
\langle
\mathrm{Res}_{\hat{H}}\mathrm{Ind}_{\hat{H}_{\beta}}^{\hat{C_\beta}}{\chi_\gamma},
\psi_\gamma \rangle \neq 0,$ where
$\langle
\mathrm{Res}_{\hat{H}}\mathrm{Ind}_{\hat{H}_{\beta}}^{\hat{C_\beta}}{\chi_\gamma},
\psi_\gamma \rangle$ denotes the multiplicity of the character
$\psi_\gamma$ in
$\mathrm{Res}_{\hat{H}}\mathrm{Ind}_{\hat{H}_{\beta}}^{\hat{C_\beta}}{\chi_\gamma}$.
On the other hand,
\[
\mathrm{Res}_{\hat{H}}
\mathrm{Ind}_{\hat{H}_{\beta}}^{\widehat{H_{\beta}+H}}
{F_{\chi_\gamma}}\cong
\bigoplus_{\alpha\in (H+H_\beta)/H_{\beta}}
e^\alpha\otimes F_{\chi_\gamma}\cong \bigoplus_{\alpha\in H/(H\cap
H_{\beta})} e^\alpha\otimes F_{\chi_\gamma}.
\]
Let
\[
w_\psi= \frac{1}{\abs{H\cap H_\beta}}\sum_{\alpha \in H}
\psi_\gamma(e^\alpha)\, e^\alpha \otimes v,
\]
where $0\neq v\in F_{\chi_\gamma}$. Then $w_\psi \in
\mathrm{Ind}_{\hat{H}_{\beta}}^{\widehat{H_{\beta}+H}}
F_{\chi_\gamma}$ and for any $x\in H$,
\begin{align*}
e^x\cdot w_\psi &=\frac{1}{\abs{H\cap H_\beta}}\sum_{\alpha \in H}
(-1)^{\langle \gamma,\alpha\rangle} e^x e^\alpha \otimes v \\
&=(-1)^{\langle \gamma,x\rangle}\frac{1}{\abs{H\cap
H_\beta}}\sum_{\alpha \in H}
(-1)^{\langle \gamma,\alpha+x\rangle} e^{x+ \alpha} \otimes v\\
&=(-1)^{\langle \gamma,x\rangle} w_\psi=\psi_\gamma(e^x)w_\psi.
\end{align*}
Hence $\mathbb{C} w_\psi$ affords the $\hat{H}$-character
$\psi_\gamma$ inside
$\mathrm{Ind}_{\hat{H}_{\beta}}^{\widehat{H_{\beta}+H}} F_
{\chi_\gamma} \subset
\mathrm{Ind}_{\hat{H}_{\beta}}^{\hat{C_\beta}} F_{\chi_\gamma}$
and
\begin{align*}
\langle
\mathrm{Res}_{\hat{H}}\mathrm{Ind}_{\hat{H}_{\beta}}^{\hat{C_\beta}}{\chi_\gamma},
\psi_\gamma \rangle \neq 0
\end{align*}
as desired. \mbox{ $\square$}
\begin{lem}\label{uniq}
Let $\beta_1, \beta_2\in C^\perp$ and $\gamma_1,\gamma_2\in
\mathbb Z_2^n$. Let $H_\beta$ be a maximal self-orthogonal subcode of
$C_\beta$ and
let $$(H_\beta)^{\perp_{\beta}}:=\{\alpha\in \mathbb Z_2^n\,|\, \mathrm{supp}\,
\alpha \subset \mathrm{supp}\,\beta \text{ and } \langle \alpha,
\xi\rangle=0\ \text{ for all } \xi\in H_\beta\}.$$ Then the
irreducible $M_C$-modules $M(\beta_1,\gamma_1)$ and $
M(\beta_2,\gamma_2)$ are isomorphic if and only if
\[
\beta_1=\beta_2\qquad \text{ and }\qquad \gamma_1+\gamma_2\in
C+(H_\beta)^{\perp_{\beta}}.
\]
\end{lem}
\noindent {\bf Proof: \,} By the definition of $M(\beta,\gamma)$, it is easy to see that
$M(\beta_1,\gamma_1)\cong M(\beta_2,\gamma_2)$ if $
\beta_1=\beta_2$ and $\gamma_1+\gamma_2\in C$. Moreover, if
$\beta_1=\beta_2=\beta$ and $ \gamma_1+\gamma_2\in
(H_\beta)^{\perp_{\beta}}$, then
$h_{\beta_1,\gamma_1}=h_{\beta_2,\gamma_2}$ and
$\chi_{\gamma_1}=\chi_{\gamma_2}$ for any choice of $H_\beta$.
Thus, $M(\beta_1,\gamma_1)\cong M(\beta_2,\gamma_2)$ if
\[
\beta_1=\beta_2\qquad \text{ and }\qquad \gamma_1+\gamma_2\in
C+(H_\beta)^{\perp_{\beta}}.
\]
Now suppose that $M(\beta_1,\gamma_1)\cong M(\beta_2,\gamma_2)$.
Then they have the same $\tau$-word and $\beta_1=\beta_2$. Let
$\beta := \beta_1=\beta_2$. Let $H_\beta$ be a maximal
self-orthogonal subcode of $C_\beta$. Since
$M(\beta_1,\gamma_1)\cong M(\beta_2,\gamma_2)$,
$M(\beta_1,\gamma_1)$ contains the $M_{H_\beta}$-module
$U(h_{\beta,\gamma_2})\otimes F_{\chi_{\gamma_2}}$. Thus, there exists
an element $\delta \in C$ such that
\[
h_{\beta, \gamma_1} \times \frac{\delta}2 = h_{\beta, \gamma_2}
\qquad \text{ and } \qquad e^\delta \otimes F_{\chi_{\gamma_1}}
\cong F_{\chi_{\gamma_2}}.
\]
Since $h_{\beta, \gamma_1} \times \frac{\delta}2 = h_{\beta,
\gamma_2}$, $\delta+\gamma_1+\gamma_2 \in\mathbb Z_2^\beta$, where
$\mathbb Z_2^\beta=\{ \alpha\in {\mathbb Z_2}^n|\, \mathrm{supp}\,\alpha \subset
\mathrm{supp}\,\beta\}$. Moreover, $e^\delta \otimes
F_{\chi_{\gamma_1}} \cong F_{\chi_{\gamma_2}}$ implies that
\[
(-1)^{\langle \delta +\gamma_1,\alpha\rangle}=(-1)^{\langle
\gamma_2,\alpha\rangle}\qquad \text{ for all } \alpha\in H_\beta.
\]
Therefore, $\delta+\gamma_1+\gamma_2\in {H_\beta}^\perp$ and we
have $ \delta+\gamma_1+\gamma_2 \in {H_\beta}^\perp\cap
\mathbb Z_2^\beta=(H_\beta)^{\perp_{\beta}}$ and $\gamma_1+\gamma_2 \in
C+(H_\beta)^{\perp_{\beta}}. $ \mbox{ $\square$}
\begin{lem}
The code $C+H_\beta^{\bot_\beta}$ is independent of the choice of
the maximal self-orthogonal subcode $H_\beta$ of $C_\beta$.
\end{lem}
\proof
Let $H$ be another maximal self-orthogonal subcode of $C_\beta$.
Then we have $|H|=|H_\beta|$. First we will consider the
intersection $H\cap H_\beta$ of $H$ and $H_\beta$ and its orthogonal
complement in $\mathbb Z_2^\beta$.
\medskip
\noindent \textbf{Claim:} $(H\cap H_\beta)^{\perp_\beta}=
H_\beta^{\bot_\beta}+H$.
It is easy to see that $H_\beta^{\bot_\beta}$ and $H$ are both
contained in $(H\cap H_\beta)^{\perp_\beta}$. Hence we have $
H_\beta^{\bot_\beta}+ H \subset (H\cap H_\beta)^{\perp_\beta}$. Now note
that $H_\beta^{\bot_{\beta}}\cap H= H_\beta^{\bot_{\beta}}\cap (C_\beta \cap
H)= (H_\beta^{\bot_{\beta}}\cap C_\beta)\cap H=H_\beta\cap H$ and $\dim H=
\dim H_\beta$. By computing the dimensions, we have
\[
\begin{split}
\dim (H_\beta^{\bot_\beta}+ H)&= \dim H_\beta^{\bot_\beta}
+\dim H - \dim (H_\beta^{\bot_\beta}\cap H )\\
&= (|\beta| -\dim H_\beta) + \dim H - \dim
(H_\beta\cap H)\\
&=|\beta| - \dim (H_\beta\cap H)\\
& =\dim (H\cap H_\beta)^{\perp_\beta}.
\end{split}
\]
Hence we have $ (H\cap H_\beta)^{\perp_\beta}= H_\beta^{\bot_\beta}+
H$.
\medskip
By the claim, we have
$$(H\cap H_\beta)^{\perp_\beta}= H+H_\beta^{\perp_\beta}
\subset C+H_\beta^{\perp_\beta}. $$ Therefore, $ C+(H\cap
H_\beta)^{\perp_\beta} \subset C+H_\beta^{\perp_\beta}$. On the other
hand, $C+H_\beta^{\bot_\beta}$ is clearly contained in $C+(H\cap
H_\beta)^{\perp_\beta}$ and thus $
C+H_\beta^{\perp_\beta} = C+(H\cap
H_\beta)^{\perp_\beta}$. Similarly, we also have $ C+H^{\bot_\beta}=
C+(H\cap H_\beta)^{\perp_\beta}$ and hence $C+H_\beta^{\perp_\beta}=
C+H^{\bot_\beta}$ as desired. \mbox{ $\square$}
\medskip
Next we shall compute the fusion rules among some irreducible
$M_C$-modules. We recall a theorem proved by
Miyamoto\,\cite{M2,M3}. Let $C$ be an even linear code.
\begin{thm}
For any $\alpha\in \mathbb Z_2^n$, the $M_C$-module $\displaystyle M(0,
\alpha)= M_{\alpha+C}= \oplus_{\delta\in \alpha+C} M_\delta $ is a simple
current module. Moreover,
\[
M_{\alpha+C}\times M(\beta, \gamma) = M(\beta, \alpha+\gamma)
\]
for any irreducible $M_C$-module $M(\beta,\gamma)$.
\end{thm}
\medskip
Now by using the associativity and commutativity of the fusion
rules, we also have the following lemma.
\begin{lem}\label{trans}
Let $\beta_1,\beta_2\in C^\perp$ and $\gamma\in \mathbb Z_2^n$. Then $$ \dim
I_{M_C} \binom{ M(\beta_1+\beta_2, \gamma)}{ M(\beta_1,0)\qquad
M(\beta_2,0)}= \dim I_{M_C} \binom{ M(\beta_1+\beta_2,
\alpha_1+\alpha_2+\gamma)}{ M(\beta_1,\alpha_1)\qquad \qquad \qquad
M(\beta_2,\alpha_2)}$$ for any $\alpha_1,\alpha_2 \in \mathbb Z_2^n$.
\end{lem}
\noindent {\bf Proof: \,} For any $\gamma\in \mathbb Z_2^n$, let
\[
m_\gamma= \dim I_{M_C} \binom{ M(\beta_1+\beta_2, \gamma)}{
M(\beta_1,0)\qquad M(\beta_2,0)}.
\]
Then we have
\[
M(\beta_1, 0)\times M(\beta_2, 0)= \sum_{\gamma \in \mathbb Z_2^n/K}
m_\gamma\, M(\beta_1+\beta_2, \gamma),
\]
where $K={C+(H_{\beta_1+\beta_2})^{\perp_{\beta_1+\beta_2}}}$.
Since the fusion product is associative and commutative, we have
\[
\begin{split}
&M(\beta_1, \alpha_1)\times M(\beta_2, \alpha_2)\\
= &\left [M(
0, \alpha_1)\times M(\beta_1, 0)\right] \times \left [M(0,
\alpha_2)\times M(\beta_2, 0)\right]\\
=& \left [M(0,\alpha_1)\times M(0,\alpha_2)\right] \times \left
[M(\beta_1,
0)\times M(\beta_2, 0)\right]\\
=& M(0, \alpha_1+\alpha_2)\times \left [M(\beta_1, 0)\times M(\beta_2,
0)\right]\\
= & M(0, \alpha_1+\alpha_2)\times \big(\sum_{\gamma \in \mathbb Z_2^n/K}
m_\gamma\, M(\beta_1+\beta_2, \gamma)\big)\\
=& \sum_{\gamma \in \mathbb Z_2^n/K} m_\gamma\, M(\beta_1+\beta_2, \alpha_1+
\alpha_2+\gamma).
\end{split}
\]
Hence, we also have
\[
m_\gamma = \dim I_{M_C} \binom{ M(\beta_1+\beta_2,
\alpha_1+\alpha_2+\gamma)}{ M(\beta_1,\alpha_1)\qquad \qquad \qquad
M(\beta_2,\alpha_2)}
\]
as desired. \mbox{ $\square$}
\medskip
For the later purpose we also need some facts about the Hamming
code VOA $M_{H_8}$ from \cite{M2,M3} (see also \cite{La}).
Let $H_8$ be the Hamming $[8,4,4]$ code, i.e., the code
generated by the rows of
\[
\begin{pmatrix}
1111& 1111\\
1111& 0000\\
1100& 1100\\
1010& 1010
\end{pmatrix}.
\]
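As a quick sanity check (ours, not part of the original text), the generator matrix above can be verified by brute force to span a self-dual $[8,4,4]$ code whose nonzero codewords have weight $4$ or $8$, and which is generated by its weight $4$ codewords:

```python
from itertools import product

# Rows of the generator matrix of the Hamming [8,4,4] code given above.
GENS = [
    (1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 0, 0, 0, 0),
    (1, 1, 0, 0, 1, 1, 0, 0),
    (1, 0, 1, 0, 1, 0, 1, 0),
]

def span(gens, n):
    """All F_2-linear combinations of the given length-n generators."""
    return {tuple(sum(c * g[j] for c, g in zip(cs, gens)) % 2
                  for j in range(n))
            for cs in product((0, 1), repeat=len(gens))}

H8 = span(GENS, 8)
```

All $16$ codewords have weight $0$, $4$, or $8$, and any two codewords are orthogonal; these are the facts about $H_8$ used repeatedly below.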
Let $\{e^{1},\cdots ,e^{8}\}$ be the standard frame for $M_{H_8}$.
Let $q^0={\bf 1}$ be the vacuum element of $L(\frac 1 2,0)$ and let
$q^1$ be a highest weight vector of $L(\frac 1 2,\frac 1 2)$ such
that $q^1_0q^1={\bf 1}$. For any $\alpha=(\alpha_1, \dots, \alpha_8) \in H_8$,
let
\[
q^\alpha=q^{\alpha_1}\otimes \cdots \otimes q^{\alpha_8}\in M_\alpha,
\]
where $q^{\alpha_k}$ is a norm 1 highest weight vector for the $k$-th
tensor factor with respect to the action of our $T_8$. Then
$q^\alpha$ is a highest weight vector in $M_\alpha$. Moreover, we have
\[
{q^\alpha}_1 q^\beta=
\begin{cases}
2 \sum_{i=1}^8 \alpha_i e^i &\text{ if } \alpha=\beta,\\
q^{\alpha+\beta} & \text{ if } |\,\alpha\cap \beta|=2,\\
0& \text{ otherwise,}
\end{cases}
\]
for any
$\alpha,\beta\in H_8$ with $|\alpha| =|\beta|=4$.
The following results are obtained in \cite{M2}.
\begin{lem}
\label{h8} Let $\nu _{i}$ be the binary word whose $i$-th entry is
$1$ and all other entries are $0$. Define $\alpha _{i}:=\nu
_{1}+\nu _{i}$.
In the Hamming code VOA $ M_{H_{8}}$, there exist exactly three
Virasoro frames, namely,
\begin{equation*}
\{e^{1},\cdots ,e^{8}\},\left\{ d^{1},\cdots ,d^{8}\right\} ,\text{ and }%
\left\{ f^{1},\cdots ,f^{8}\right\}
\end{equation*}
where
$$d^{i}=S^{\alpha_{i}}=\frac{1}{8}(e^{1}+\cdots +e^{8})+\frac{1}{8}\sum_{\beta \in H_{8},\left| \beta \right| =4}(-1)^{\<\alpha
_{i},\beta \>}q^{\beta }\otimes e^{\beta },$$
$$f^{i}=S^{\nu _{i}}=\frac{1}{8}(e^{1}+\cdots
+e^{8})+\frac{1}{8}\sum_{\beta \in H_{8},\left| \beta \right|
=4}(-1)^{\<\nu _{i},\beta \>}q^{\beta }\otimes e^{\beta }.$$
\end{lem}
\begin{thm} Let $L$ be an irreducible
$M_{H_{8}}$-module with half-integral or integral weight. Then,
$L$ is isomorphic to one of the following:
\begin{enumerate}
\item $M_{\nu _{1}+\nu _{i}+H_{8}}$ with respect to $\{e^{1},\cdots ,e^{8}\}
$ for all $i=1,\cdots ,8.$
\item $M_{\nu _{i}+H_{8}}$ with respect to $\{e^{1},\cdots ,e^{8}\}$ for
all $i=1,\cdots ,8.$
\item $M_{\nu _{i}+H_{8}}$ with respect to $\left\{ d^{1},\cdots
,d^{8}\right\} $ for all $i=1,\cdots ,8.$
\item $M_{\nu _{i}+H_{8}}$ with respect to $\left\{ f^{1},\cdots
,f^{8}\right\} $ for all $i=1,\cdots ,8.$
\end{enumerate}
Moreover, all modules in (3) and (4) are isomorphic to $\otimes
_{i=1}^{8}L(\frac{1}{2},\frac{1}{16})$ as $T_8$-modules.
\end{thm}
As a corollary, we have the following theorem. The proof can be
found in \cite{La} (see also \cite{M2,M3}).
\begin{thm}\label{fh8}
For any $\beta_1= (0^8) $ or $(1^8)$ and $\beta_2\in H_8$, we have
\[
M(\beta_1, \alpha_1)\times_{_{M_{H_8}}} M(\beta_2, \alpha_2)= M(\beta_1+
\beta_2, \alpha_1+\alpha_2).
\]
Consequently, all irreducible $M_{H_{8}}$-modules with
half-integral or integral weight are simple current modules.
\end{thm}
\begin{rem}
For any $\alpha\in \mathbb Z_2^8/ H_8$, $\alpha$ uniquely determines a
character $\chi_{\alpha}\in \mathrm{Irr}\, H_8$ such that
$\chi_{\alpha}(\gamma) =(-1)^{\langle \alpha, \gamma\rangle}$ for any $\gamma
\in H_8$. By using this identification, our module $M(\beta, \alpha)$
actually corresponds to the class $[\beta, \chi_\alpha]$ defined in
Section 5 of \cite{La}.
\end{rem}
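Theorem \ref{fh8} says that the labels of these simple currents simply add. As an illustration (a toy model we add, not part of the original text), the following Python sketch restricts $\beta$ to $(0^8)$ and $(1^8)$, takes $\alpha$ modulo $H_8$ as in Lemma \ref{uniq}, and checks that fusing with any fixed label permutes the $32$ labels, which is exactly the simple-current property:

```python
from itertools import product

# Generators of the Hamming [8,4,4] code H_8.
GENS = [(1,) * 8, (1, 1, 1, 1, 0, 0, 0, 0),
        (1, 1, 0, 0, 1, 1, 0, 0), (1, 0, 1, 0, 1, 0, 1, 0)]

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

H8 = {(0,) * 8}
for g in GENS:
    H8 |= {add(w, g) for w in H8}

def rep(a):
    """Canonical (lexicographically least) representative of a + H8."""
    return min(add(a, h) for h in H8)

ZERO, ONES = (0,) * 8, (1,) * 8
labels = {(b, rep(a)) for b in (ZERO, ONES)
          for a in product((0, 1), repeat=8)}

def fuse(m1, m2):
    """Group-like fusion: M(b1,a1) x M(b2,a2) = M(b1+b2, a1+a2)."""
    (b1, a1), (b2, a2) = m1, m2
    return (add(b1, b2), rep(add(a1, a2)))
```

Since fusing with each fixed label is a bijection on the label set, every such module is invertible in the fusion ring.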
\section{The moonshine vertex operator algebra $V^{\natural}$}
\setcounter{equation}{0}
Let $V^{\natural}$ be the moonshine vertex operator algebra
\cite{FLM1}-\cite{FLM2}. The following theorem can be found in
\cite{DGH}.
\begin{thm}\label{t3.1}
There exists a VF $F:=\{\omega_1,...,\omega_{48}\}$ in
$V^{\natural}$ such that the code $\mathcal{C}:=C(F)$
associated to this VF has length $48$ and dimension $41$. The code
$\mathcal{D}:=D(F)=\mathcal{C}^{\perp}$ has generator matrix
$$\left(\begin{array}{ccc}
1111 1111 1111 1111 & 0000 0000 0000 0000 & 0000 0000 0000 0000 \\
0000 0000 0000 0000 & 1111 1111 1111 1111 & 0000 0000 0000 0000 \\
0000 0000 0000 0000 & 0000 0000 0000 0000 & 1111 1111 1111 1111 \\
0000 0000 1111 1111 & 0000 0000 1111 1111 & 0000 0000 1111 1111 \\
0000 1111 0000 1111 & 0000 1111 0000 1111 & 0000 1111 0000 1111 \\
0011 0011 0011 0011 & 0011 0011 0011 0011 & 0011 0011 0011 0011 \\
0101 0101 0101 0101 & 0101 0101 0101 0101 & 0101 0101 0101 0101
\end{array}\right)_{\textstyle .}$$
\end{thm}
\begin{rem} The weight enumerator of $\mathcal{D}$ is
given by $$X^{48}+3X^{32}+120X^{24}+3X^{16}+1$$ and the minimal
weight of $\mathcal{C}$ is $4$. Moreover, $\mathcal{D}$ is self-orthogonal and
hence $\mathcal{D} \subset \mathcal{C}.$ (The codes $\mathcal{D}$ and $\mathcal{C}$ are denoted
by $S^\natural$ and $D^\natural$, respectively, in \cite{M3}.)
\end{rem}
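The weight enumerator and the self-orthogonality of $\mathcal{D}$ stated in the remark can be confirmed by brute force over the $2^7=128$ codewords. The following Python sketch (ours) uses the generator matrix of Theorem \ref{t3.1}:

```python
from itertools import product
from collections import Counter

def word(s):
    return tuple(int(c) for c in s)

# Rows of the generator matrix of D from Theorem 3.1 (spaces removed).
GENS = [
    word("1" * 16 + "0" * 32),
    word("0" * 16 + "1" * 16 + "0" * 16),
    word("0" * 32 + "1" * 16),
    word("0000000011111111" * 3),
    word("0000111100001111" * 3),
    word("0011001100110011" * 3),
    word("0101010101010101" * 3),
]

# The full code D: all F_2-linear combinations of the 7 rows.
D = {tuple(sum(c * g[j] for c, g in zip(cs, GENS)) % 2 for j in range(48))
     for cs in product((0, 1), repeat=7)}

weights = Counter(sum(w) for w in D)
```

The weight counts reproduce $X^{48}+3X^{32}+120X^{24}+3X^{16}+1$, and any two codewords of $\mathcal{D}$ are orthogonal, so $\mathcal{D}\subset \mathcal{D}^\perp=\mathcal{C}$.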
\begin{lem}\label{codec} The code $\mathcal{C}$ in Theorem \ref{t3.1} is generated
by the weight 4 codewords.
\end{lem}
\noindent {\bf Proof: \,} First we note that the code $\mathcal{D}=D(F)$ is generated by the
elements of the form
\[
(1^{16},0^{16},0^{16}),\ (0^{16},1^{16},0^{16}), \
(0^{16},0^{16},1^{16}) \quad \text{ and }\quad (\alpha, \alpha, \alpha) ,
\quad \alpha \in \mathrm{RM}(1,4),
\]
where $\mathrm{RM}(r,m)$ denotes the $r$-th order Reed-Muller code
of length $2^m$ (cf. \cite{CS}).
Since $\mathrm{RM}(1,4)^\perp= \mathrm{RM}(2,4)$, we have
\[
\mathcal{C}=\mathcal{D}^\perp=\{ (\alpha, \beta, \gamma)|\ \alpha+\beta+\gamma\in
\mathrm{RM}(2,4),\ \alpha, \beta, \gamma \text{ even }\}.
\]
Hence the code $\mathcal{C}$ can be generated by the elements
\[
(\alpha, 0, 0),\ (0, \beta, 0), (0, 0, \gamma), \quad \alpha, \beta, \gamma
\text{ are generators of }\mathrm{RM}(2,4)
\]
and
\[
(\alpha, \beta, 0),\ (\alpha, 0, \beta), (0, \alpha, \beta), \quad \alpha, \beta
\text{ are even and } \alpha+\beta \text{ is a generator of }
\mathrm{RM}(2,4).
\]
Note that the Reed-Muller code $\mathrm{RM}(2,4)$ is of dimension
$11$ and is generated by the elements of the form
\[
(\alpha, 0),\quad (0,\alpha), \quad \alpha\in H_8,
\]
and
\[
(1100\, 0000\, 1100\, 0000), \ (1010\, 0000\, 1010\, 0000), \
(1000\, 1000\, 1000\, 1000).
\]
\medskip
Since the Hamming code $H_8$ is generated by its weight $4$
elements, the codes $\mathrm{RM}(2,4)$ and $\mathcal{C}$ are also generated
by their weight $4$ codewords. \mbox{ $\square$}
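The two facts about $\mathrm{RM}(2,4)$ used in this proof, namely that it has dimension $11$ with $\mathrm{RM}(1,4)^\perp=\mathrm{RM}(2,4)$ and that it is generated by its weight $4$ codewords, can also be checked directly. The following Python sketch (ours) builds the Reed-Muller codes from monomial evaluation vectors on $\mathbb{F}_2^4$:

```python
from itertools import combinations, product

PTS = list(product((0, 1), repeat=4))  # the 16 points of F_2^4

def monomial(S):
    """Evaluation vector of prod_{i in S} x_i over the 16 points."""
    return tuple(int(all(p[i] for i in S)) for p in PTS)

def rm_gens(r):
    """Monomial generators of RM(r,4): all monomials of degree <= r."""
    return [monomial(S) for d in range(r + 1)
            for S in combinations(range(4), d)]

def span(gens):
    """The F_2-span of a list of length-16 words."""
    words = {(0,) * 16}
    for g in gens:
        words |= {tuple((a + b) % 2 for a, b in zip(w, g)) for w in words}
    return words

RM14, RM24 = span(rm_gens(1)), span(rm_gens(2))
```

With $\dim \mathrm{RM}(1,4)=5$ and $\dim \mathrm{RM}(2,4)=11$ adding to $16$, pairwise orthogonality gives $\mathrm{RM}(1,4)^\perp=\mathrm{RM}(2,4)$, and the span of the weight $4$ words recovers all of $\mathrm{RM}(2,4)$.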
In the next lemma, see \ref{DGH}(d) for the meaning of
$(V^{\natural})^0$.
\begin{lem} The vertex operator subalgebra $(V^{\natural})^0$ is
isomorphic to $M_{\mathcal{C}}$ and is uniquely determined by the set of
weight 4 codewords of $\mathcal{C}.$
\end{lem}
\noindent {\bf Proof: \,} By the uniqueness of the code VOA, $(V^{\natural})^0$ and
$M_{\mathcal{C}}$ are isomorphic. Since $\mathcal{C}$ is generated by the weight 4
codewords of $\mathcal{C},$ the vertex operator algebra structure of
$(V^{\natural})^0$ is uniquely determined by the generators of the
group $\mathcal{C}.$ \mbox{ $\square$}
We now determine the irreducible modules and the fusion rules for
the code VOA $M_{\mathcal{C}}$.
\begin{rem}
In the next result,
$\mathrm{RM}(r,m)$ denotes the $r$-th order Reed-Muller code
of length $2^m$ (cf. \cite{CS}). Note that the Reed-Muller codes
are nested in the sense that $\mathrm{RM}(r+1, m)\supset
\mathrm{RM}(r,m)$ and $\mathrm{RM}(r+1, m+1)\supset
\mathrm{RM}(r,m)\oplus \mathrm{RM}(r,m)$, where the direct sum
corresponds to a partition of indices by an affine hyperplane and
its complement.
\end{rem}
The following properties of the code $\mathcal{C}$ can be derived easily
from the definition.
\begin{prop}
Let $\mathcal{D}$ and $\mathcal{C}$ be defined as above. For any $\beta\in \mathcal{D}$,
denote $$\mathcal{C}_\beta :=\{ \alpha\in \mathcal{C}|\ \mathrm{supp}\,\alpha
\subset \mathrm{supp}\,\beta\}.$$
\begin{enumerate}
\item If $|\beta|=16$, then $\mathcal{C}_\beta\cong \mathrm{RM}(2,4)$.
\item If $|\beta|=24$, then
$ \mathcal{C}_\beta\cong \{ (\alpha, \gamma, \delta)|\ \alpha+\gamma+\delta \in
H_8 \text{ and } \alpha,\gamma, \delta \text{ even}\}. $
\item If $|\beta|=32$, then $ \mathcal{C}_\beta\cong \mathrm{RM}(3,5).$
\item If $|\beta|=48$, then $ \mathcal{C}_\beta = \mathcal{C}$.
\end{enumerate}
Note that the Hamming code $H_8\cong \mathrm{RM}(1,3)$. Hence, for
$\beta \ne 0$, $\mathcal{C}_\beta$ contains a self-dual subcode which is
isomorphic to a direct sum of $|\beta|/8$ copies of the Hamming code
$H_8$.
\end{prop}
\noindent {\bf Proof: \,} Let $\beta\in \mathcal{D}$ and $n=|\beta|$, the weight of $\beta$. Let
$p_\beta: \mathbb Z_2^{48}\to \mathbb Z_2^n$ be the natural projection of
$\mathbb Z_2^{48}$ to the support of $\beta$. Since $\mathcal{C}= \mathcal{D}^\perp$, it
is easy to see that $\mathcal{C}_\beta\cong {p_\beta(\mathcal{D})}^\perp$.
\medskip
\noindent \textbf{Case 1.} $|\beta|=16$. In this case, $
p_\beta(\mathcal{D})$ is generated by the codewords $(1^{16})$, $(0^8\,1^8),
(0^4\ 1^4)^2$, $(0^2\ 1^2)^4$ and $(0\ 1)^8$ and is isomorphic to
the Reed-Muller code $\mathrm{RM}(1,4)$. Since
$$\mathrm{RM}(r,m)^\perp \cong \mathrm{RM}(m-r-1,m)$$
for any $0\leq r\leq m-1$, we have $\mathcal{C}_\beta\cong \mathrm{RM}(2,4)$ as
desired.
\bigskip
\noindent \textbf{Case 2.} $|\beta|=24$. In this case,
$p_\beta(\mathcal{D})$ is of dimension $6$ and is isomorphic to a code
generated by
\[
(1^8\, 0^8\, 0^8),\ (0^8\, 1^8\, 0^8),\ (0^8, 0^8, 1^8)\ \text{
and }\ (\alpha, \alpha,\alpha),\ \alpha\in H_8.
\]
Hence $\mathcal{C}_\beta\cong \{ (\alpha, \gamma, \delta)|\ \alpha+\gamma+\delta
\in H_8 \text{ and } \alpha,\gamma, \delta \text{ even}\}$.
\bigskip
\noindent \textbf{Case 3.} $|\beta|=32$. $p_\beta(\mathcal{D})\cong
\mathrm{RM}(1,5)$ and hence $\mathcal{C}_\beta\cong \mathrm{RM}(3,5)$.
\bigskip
\noindent \textbf{Case 4.} $|\beta|=48$. It is clear that
$\mathcal{C}_\beta=\mathcal{C}$ in this case. \mbox{ $\square$}
\medskip
Now by using Lemma \ref{uniq}, we have the following theorem.
\begin{thm}\label{indeed}
Let $\mathcal{D}$ and $\mathcal{C}$ be defined as above. Then
\[
\{ M_\mathcal{C}(\beta, \gamma)\, |\ \beta\in \mathcal{D} \text{ and } \gamma \in
\mathbb Z_2^{48}/ \mathcal{C}\}
\]
is the set of all inequivalent irreducible modules for $M_\mathcal{C}$.
\end{thm}
\noindent {\bf Proof: \,} By the previous proposition, we can choose $H_\beta$ such that
it is a direct sum of $|\beta|/8$ copies of the Hamming code $H_8$.
In this case, $(H_\beta)^{\perp_{\beta}}= H_\beta$ and we have $\mathcal{C}=
\mathcal{C}+(H_\beta)^{\perp_{\beta}}$. Hence $ \{ M_\mathcal{C}(\beta, \gamma)\, |\
\beta\in \mathcal{D} \text{ and } \gamma \in \mathbb Z_2^{48}/ \mathcal{C}\} $ is the set
of all inequivalent irreducible modules for $M_\mathcal{C}$ by Lemma
\ref{uniq}. \mbox{ $\square$}
\medskip
Next we shall compute the fusion rules among irreducible
$M_\mathcal{C}$-modules. The main tool is the representation theory of the
Hamming code VOA $M_{H_8}$ given in Section 4. First we recall the
following theorem from \cite{DL}.
\begin{thm}\label{dandl} Let $W^{1},W^{2}$ and $W^{3}$ be $V$-modules and let
$I$ be an intertwining operator of type
\[
\left(
\begin{array}{ccc}
& W^{3} & \\ W^{1} & & W^{2}
\end{array}
\right) .
\]
Assume that $W^{1}$ and $W^{2}$ have no proper submodules
containing $v^{1}$ and $v^{2}$, respectively. Then $I\left(
v^{1},z\right) v^{2}=0\text{ implies }I\left( \cdot ,z\right) =0.$
\end{thm}
\begin{lem}\label{ub} For any $\beta_1, \beta_2,
\beta_3\in \mathcal{D}$ and $\alpha_1, \alpha_2, \alpha_3\in \mathbb Z_2^{48}$, we have
\[
\dim I_{M_\mathcal{C}} \binom{ M(\beta_3, \alpha_3)}{M(\beta_1, \alpha_1)\qquad
M(\beta_2, \alpha_2)} \leq 1
\]
and
\[
\dim I_{M_\mathcal{C}} \binom{ M(\beta_3, \alpha_3)}{M(\beta_1, \alpha_1)\qquad
M(\beta_2, \alpha_2)}=0
\]
unless $\beta_3=\beta_1+\beta_2$ and $ \alpha_3 \equiv \alpha_1+\alpha_2 \ {\rm mod}
\ \mathcal{C}$.
\end{lem}
\noindent {\bf Proof: \,} Recall that $\dim \mathcal{D}=7$ and the weight enumerator of $\mathcal{D}$ is
$X^{48}+3X^{32}+120X^{24}+3X^{16}+1.$
Without loss of generality, we may assume that $\beta_3=\beta_1+\beta_2$; otherwise,
\[
\dim I_{M_\mathcal{C}} \binom{ M(\beta_3, \alpha_3)}{M(\beta_1, \alpha_1)\qquad
M(\beta_2, \alpha_2)}=0.
\]
Let $\bar{\beta_1}= (1^{48})+ \beta_1$. Then $\bar{\beta_1}$ is also in
$\mathcal{D}$. Thus, there exist self-orthogonal codes $H_{\beta_1}$ and
$H_{\bar{\beta_1}}$ of $\mathcal{C}$ such that both $H_{\beta_1}$ and
$H_{\bar{\beta_1}}$ are direct sums of Hamming $[8,4,4]$ codes. Let
$E= H_{\beta_1}\oplus H_{\bar{\beta_1}}\cong {H_8}^{\oplus 6}$. If the
weight of $\beta_2$ is a multiple of $16$ (i.e., $0,16, 32,$ or
$48$), then $|\mathrm{supp}\, \beta_1 \cap \mathrm{supp}\, \beta_2|$ is a multiple of
$8$. In this case, it is possible to find maximal self-orthogonal
subcodes $H_{\beta_2}$ of $\mathcal{C}_{\beta_2}$ and $H_{\beta_1+\beta_2}$ of
$\mathcal{C}_{\beta_1+\beta_2}$ such that $H_{\beta_2}$ and $H_{\beta_1+\beta_2}$
are isomorphic to direct sums of Hamming codes and are both
contained in $E$. Then as an $M_E$-module,
\[
M_{\mathcal{C}}(\beta_i, \alpha_i)= \bigoplus_{\delta\in \mathcal{C}/ E} M_{E}(\beta_i,
\alpha_i+\delta).
\]
Note that $H_{\beta_i}\subset E$ for any $i=1,2,3$ and hence
$M_{\mathcal{C}}(\beta_i, \alpha_i)$ is a direct sum of inequivalent
irreducible $M_E$-modules. Thus by Theorems \ref{fh8} and
\ref{dandl}, we have
\[
\dim I_{M_\mathcal{C}} \binom{ M(\beta_1+\beta_2, \alpha_3)}{M(\beta_1,
\alpha_1)\qquad M(\beta_2, \alpha_2)} \leq \dim I_{M_E} \binom{
M_E(\beta_1+\beta_2, \alpha_3)}{M_E(\beta_1, \alpha_1)\qquad M_E(\beta_2,
\alpha_2)}\leq 1
\]
and
\[
\dim I_{M_\mathcal{C}} \binom{ M(\beta_1+\beta_2, \alpha_3)}{M(\beta_1,
\alpha_1)\qquad M(\beta_2, \alpha_2)}=0
\]
unless $\alpha_3=\alpha_1+\alpha_2$.
\medskip
Finally, we shall treat the case for which all $\beta_1$, $\beta_2$
and $\beta_1+\beta_2$ are of weight $24$. For simplicity, we may
assume that $\beta_1=(1^8\, 0^8\, 1^8\, 0^8\,1^8\, 0^8)$ and $\beta_2=
(1^4\, 0^4\, \dots\, 1^4\, 0^4)$. Then $\beta_3=\beta_1+\beta_2= (0^4\,
1^8\, 0^8\, 1^8\, 0^8\,1^8\,0^4)$. In this case, we have $E=
H_{\beta_1}\oplus H_{\bar{\beta}_1}\cong {H_8}^{\oplus 6}$,
$H_{\beta_2}\cong H_8\oplus H_8 \oplus H_8$ and
$H_{\beta_1+\beta_2}\cong H_8\oplus H_8 \oplus H_8$.
\medskip
Note that $E+H_{\beta_1+ \beta_2}=E+H_{\beta_2}$ in this case. Moreover,
we have
\[
E_{\beta_2}=\{ \alpha\in E|\, \mathrm{supp}\,\alpha \subset
\mathrm{supp}\,\beta_2\}= E\cap H_{\beta_2}\quad \text{ and }
E_{\beta_3}=E_{\beta_1+\beta_2}= E\cap H_{\beta_1+ \beta_2}.
\]
Let $\mathcal{H} := E+ H_{\beta_2}= E+H_{\beta_1+\beta_2}$. Then the
$M_{\mathcal{C}}$-module $M_\mathcal{C}(\beta_i, \alpha_i), i=2,3,$ can be decomposed
as
\[
M_{\mathcal{C}}(\beta_i, \alpha_i)= \bigoplus_{\delta\in \mathcal{C}/ \mathcal{H}}
M_{\mathcal{H}}(\beta_i, \alpha_i+\delta).
\]
\medskip
\noindent \textbf{Claim: } $M_{\mathcal{H}}(\beta_i, \alpha_i+\delta)$
is irreducible as an $M_E$-module for any $\delta\in \mathcal{C}/\mathcal{H}$.
\medskip
Proof. Let $W=M_{\mathcal{H}}(\beta_i, \alpha_i+\delta)$ and
$\mathcal{H}_{\beta_i}= \{ \alpha\in \mathcal{H}|\, \mathrm{supp} \, \alpha \subset \mathrm{supp} \,
\beta_i\}$. Then $H_{\beta_i}$ is a maximal self-orthogonal subcode of
$\mathcal{H}_{\beta_i}$. Let $U( \mathbf{h})\otimes F_{\chi}$ be an
irreducible $M_{H_{\beta_i}}$-submodule of $W$. Then
$$W=\mathrm{Ind}_{H_{\beta_i}}^\mathcal{H} U( \mathbf{h})\otimes F_{\chi}=
\bigoplus_{\delta\in \mathcal{H}/ H_{\beta_i}} U( \mathbf{h}\times
\frac{\delta}2)\otimes( e^\delta\otimes F_{\chi}). $$
Since $E_{\beta_i}= E\cap H_{\beta_i}\subset H_{\beta_i}$,
$U(\mathbf{h})\otimes F_{\chi}$ is also an irreducible
$M_{E_{\beta_i}}$-module. Hence,
\[
W'= \mathrm{Ind}_{E_{\beta_i}}^E U( \mathbf{h})\otimes F_{\chi}=
\bigoplus_{\delta\in E/ E_{\beta_i}} U( \mathbf{h}\times
\frac{\delta}2)\otimes( e^\delta\otimes F_{\chi})
\]
is an irreducible $M_E$-submodule of $W$. Note that
\[
|\mathcal{H}/ H_{\beta_i}|= | (E+H_{\beta_i})/ H_{\beta_i}|=| E/ (E\cap
H_{\beta_i})|= |E/ E_{\beta_i}|.
\]
Therefore, we have $W=W'$ and $W$ is an irreducible $M_E$-module.
Now, by Theorems \ref{fh8} and \ref{dandl}, we have
\[
\begin{split}
& \dim I_{M_\mathcal{C}} \binom{ M(\beta_1+\beta_2, \alpha_3)}{M(\beta_1,
\alpha_1)\qquad M(\beta_2, \alpha_2)} \\
\leq & \dim I_{M_\mathcal{H}} \binom{ M_\mathcal{H}(\beta_1+\beta_2,
\alpha_3)}{M_\mathcal{H}(\beta_1, \alpha_1)\qquad M_\mathcal{H}(\beta_2, \alpha_2)}\\
\leq &\dim I_{M_E} \binom{ M_E(\beta_1+\beta_2, \alpha_3)}{M_E(\beta_1,
\alpha_1)\qquad M_E(\beta_2, \alpha_2)}\leq 1
\end{split}
\]
and
\[
\dim I_{M_\mathcal{C}} \binom{ M(\beta_1+\beta_2, \alpha_3)}{M(\beta_1,
\alpha_1)\qquad M(\beta_2, \alpha_2)}=0
\]
unless $\alpha_3=\alpha_1+\alpha_2$. Note that $M_{\mathcal{H}}(\beta_2,\alpha_2)=
M_E(\beta_2,\alpha_2)$ and $M_{\mathcal{H}}(\beta_1+\beta_2,\alpha_1+\alpha_2)=
M_E(\beta_1+\beta_2,\alpha_1+\alpha_2)$ as $M_E$-modules. \mbox{ $\square$}
\medskip
\begin{thm}\label{fusion}
The fusion rules among irreducible $M_\mathcal{C}$ modules are given by
\[
M(\beta_1, \alpha_1) \times M(\beta_2, \alpha_2) = M(\beta_1+\beta_2,
\alpha_1+\alpha_2),
\]
where $\beta_1, \beta_2\in \mathcal{D}$ and $\alpha_1, \alpha_2\in \mathbb Z_2^{48}/\mathcal{C}$.
In particular, each irreducible $M_{\mathcal{C}}$-module is a simple
current.
\end{thm}
\noindent {\bf Proof: \,} By Lemmas \ref{trans} and \ref{ub}, it remains to show that
\[
I_{M_\mathcal{C}} \binom{ M(\beta_1+\beta_2, \alpha_1+\alpha_2)}{M(\beta_1,
\alpha_1)\qquad M(\beta_2, \alpha_2)}\neq 0,
\]
for some $\alpha_1, \alpha_2\in \mathbb Z_2^{48}$. Such an
intertwining operator does exist and can be realized inside the
Leech lattice VOA $V_\Lambda$. In fact, there exists a Virasoro
frame of $V_\Lambda$ such that $V_\Lambda$ can be decomposed as
\[
V_{\Lambda}\cong \bigoplus_{\beta\in \mathcal{D}} M_\mathcal{C}(\beta, \gamma_{\beta}),
\quad \text{ for some } \gamma_{\beta}\in \mathbb Z_2^{48}/ \mathcal{C}.
\]
We refer to \cite{DGH} or \cite{M3} for details. \mbox{ $\square$}
\section{Proof of the main theorems}
\setcounter{equation}{0}
We first prove Theorem \ref{mt1}. So we assume that (1) $V$ is a
vertex operator algebra satisfying conditions (a)-(c), (2) $V_2$
is isomorphic to the Griess algebra, and (3) $V$ is $C_2$-cofinite.
\begin{lem}\label{l4.1} $V$ is truncated below 0 and $V_0=\mathbb C {\bf 1}.$
\end{lem}
\noindent {\bf Proof: \,} First we prove that $V_n=0$ if $n$ is negative. If this is
not true, take the smallest $n$ such that $V_n\ne 0.$ Then each
$0\ne v\in V_n$ generates a highest weight module for the Lie
algebra $\mathbb C L(1)\oplus \mathbb C L(0)\oplus \mathbb C L(-1)$ (which is
isomorphic to $sl(2,\mathbb C)$). According to the structure of the
highest weight modules for $sl(2,\mathbb C)$, we know that $L(-1)^iv\ne 0$
for $i=0,...,-2n.$ Since $n$ is less than or equal to $-1$, we
see that $L(-1)^{-n+1}v\ne 0.$ Since the weight of $L(-1)^{-n+1}v$
is $1$, we immediately have a contradiction, as $V_1=0$ by
assumption.
We now prove that $V_0$ is one dimensional. Note that
$L(-1)V_0=0.$ So each nonzero vector $v\in V_0$ is a vacuum-like
vector \cite{Li}.
As a result, we have a $V$-module
isomorphism $f_v: V \to V$ by sending $u$ to $u_{-1}v$ for $u\in
V$ \cite{Li}. By Schur's lemma, $f_v$ must be a multiple of the
identity map. As a result, $f_v({\bf 1})=v$ is a multiple of the
vacuum. This shows that $V_0$ is spanned by the vacuum. \mbox{ $\square$}
\bigskip
\begin{lem}\label{l4.2r} $V$ is a holomorphic vertex operator algebra.
\end{lem}
\noindent {\bf Proof: \,} It is proved in \cite{DLM2} that if $U$ is a vertex operator
algebra such that $U=\oplus_{n\geq 0}U_n$ with $U_0$ being
1-dimensional and $U_1=0$, and such that $U$ is the only irreducible
ordinary module for itself, then any ordinary $U$-module is completely
reducible. So any ordinary $V$-module is a direct sum of copies of
$V.$ Since $V$ is $C_2$-cofinite, any submodule generated by a
single vector in any admissible module is an ordinary module (see
\cite{ABD}). This shows that any admissible $V$-module is
completely reducible. That is, $V$ is rational. This together with
condition (a) gives the conclusion that $V$ is holomorphic. \mbox{ $\square$}
\bigskip
\begin{lem}\label{lj} The $q$-dimension $\mathrm{ch}_q\, V=q^{-1}\sum_{n\geq 0}(\dim
V_n)q^n$ of $V$ is $J(q).$
\end{lem}
\noindent {\bf Proof: \,} Since $V$ is holomorphic and $C_2$-cofinite, by the modular
invariance result in \cite{Z}, $\mathrm{ch}_q\, V$ is a modular function on
the full modular group, and thus equal to $J(q)$ by noting that
$V_0=\mathbb C {\bf 1}$ and $V_1=0.$ \mbox{ $\square$}
\bigskip
Since $V$ is irreducible and $V_0/L(1)V_1$ is one dimensional,
there is a unique nondegenerate symmetric invariant bilinear form
$(\cdot,\cdot)$ on $V$ such that $({\bf 1},{\bf 1})=1$ (see \cite{Li}). That
is,
$$(Y(u,z)v,w)=(-z^{-2})^{{\rm wt} u} (v,Y(e^{zL(1)}u,z^{-1})w)$$
for homogeneous $u\in V.$ In particular, the restriction of
$(\cdot,\cdot)$ to each $V_n$ is nondegenerate. As a result,
$(\cdot,\cdot)$ defines a nondegenerate symmetric invariant
bilinear form on the Griess algebra $V_2$ such that $(u,v)=u_3v$
for $u,v\in V_2.$
From now on we will fix the vectors $\{\omega_1,...,\omega_{48}\}$
of $V_2$ given in Theorem \ref{t3.1}. Since we only assume that
$V_2$ is isomorphic to the Griess algebra we do not know if the
bilinear form $(u,v)=u_3v$ defined on $V_2$ is the same as the
bilinear form defined on $V^{\natural}_2$ using the same formula.
So it is not clear that $\{\omega_1,...,\omega_{48}\}$ forms a VF
in $V.$
Since $V_2$ is a simple commutative nonassociative algebra, we
need a result on the bilinear forms over
a finite dimensional simple commutative nonassociative
algebra $B.$ A bilinear form $(\cdot,\cdot)$ on $B$ is called {\it
invariant} if $(ab,c)=(b,ac),$ for all $a, b, c \in B$. The next
result applies to any finite dimensional simple algebra.
\begin{lem}\label{lb} The space of nondegenerate symmetric invariant
bilinear forms on $B$ is at most one-dimensional.
\end{lem}
\noindent {\bf Proof: \,} Let $(\cdot,\cdot)$ and $\<\cdot,\cdot\>$ be two nondegenerate
symmetric invariant bilinear forms on $B.$ Then there is a linear
isomorphism $f: B\to B$ such that $(u,v)=\<f(u),v\>$ for all
$u,v\in B.$ For any $a\in B$ we have
$$\<f(au),v\>=(au,v)=(u,av)=\<f(u),av\>=\<af(u),v\>.$$
That is, $f(au)=af(u).$ Let $B_{\lambda}$ be the eigenspace of $f$
with eigenvalue $\lambda\ne 0.$ Then $B_{\lambda}$ is an ideal of $B.$
Since $B$ is simple, this shows that $B=B_{\lambda}.$ So $f=\lambda\,{\rm id}_B.$ As a
result, $(\cdot,\cdot)=\lambda\<\cdot,\cdot\>,$ as desired.
\mbox{ $\square$}
\begin{lem}\label{ladd} Each $\omega_i$ is a Virasoro vector with central
charge $\frac 1 2$ and, for $i\ne j$ and all $m, n$,
$$[L^i(m),L^j(n)]=0,$$
where $Y(\omega_i,z)=\sum_{n\in\mathbb Z}L^i(n)z^{-n-2}.$
\end{lem}
\noindent {\bf Proof: \,} We first prove that each $\omega_i$ is a Virasoro vector of
central charge $\frac{1}{2}.$ That is, the component operators
$L^i(n)$ of $Y(\omega_i,z)=\sum_{n\in\mathbb Z}L^i(n)z^{-n-2}$ satisfy
the Virasoro algebra relation with central charge $\frac{1}{2}.$
Clearly $\omega_i\cdot\omega_i=L^i(0)\omega_i=2\omega_i$ by the
product in $B.$ So, $\omega_i$ is a Virasoro vector with central
charge $c_i$ defined by $c_i{\bf 1} =2L^i(2)\omega_i.$ Note that
$L^i(0)$ is semisimple on $V_2$ and the eigenvalues of $L^i(0)$
are $2,0,\frac{1}{2}$ and $\frac{1}{16}$ (see \cite{DGH}). Since
the bilinear form is invariant, we see that the eigenspaces with
different eigenvalues are orthogonal. So the restriction of the
bilinear form to each eigenspace is nondegenerate. It is known
from \cite{DGH} that the eigenspace with eigenvalue $2$ is one
dimensional and is spanned by $\omega_i.$ As a result,
$(\omega_i,\omega_i)\ne 0,$ that is, $L^i(2)\omega_i\ne 0$ and $c_i\ne 0.$ We must prove that
$c_i=\frac{1}{2}.$
Recall from \cite{DM3} that the Griess algebra is a simple
commutative nonassociative algebra. Let $\<\cdot,\cdot\>$ be the
bilinear form defined on $V_2^{\natural}$ and $(\cdot,\cdot)$ be
the bilinear form defined on $V_2.$ By Lemma \ref{lb},
$(\cdot,\cdot)$ is a multiple of $\<\cdot,\cdot\>.$ Note that
$\<\omega,\omega\> =(\omega,\omega)=12.$ We conclude that these
two bilinear forms are exactly the same. So
$(\omega_i,\omega_i)=\<\omega_i,\omega_i\>=\frac{1}{4}.$ That is,
$c_i=\frac{1}{2}.$
Let $i\ne j.$ Since $(\omega_i,\omega_j)=0$ and
$L^i(0)\omega_j=0$, we see immediately that $[L^i(m),L^j(n)]=0$
for all $m,n\in \mathbb Z.$ \mbox{ $\square$}
\begin{thm}\label{t4.2} The set $\{\omega_1,...,\omega_{48}\}$ forms a VF in $V$
and $V$ is a FVOA.
\end{thm}
\noindent {\bf Proof: \,} We only need to prove that the vertex operator subalgebra
$\<\omega_i\>$ generated by $\omega_i$ is isomorphic to
$L(\frac{1}{2},0)$ for the Virasoro algebra $Vir_i$ generated by
$L^i(m)$ for $m\in \mathbb Z.$ It is clear that $\<\omega_i\>$ is a
highest weight module with highest weight $0$ for $Vir_i.$ Then
there are two possibilities. Either $\<\omega_i\>$ is the Verma
module modulo the submodule generated by $L^i(-1){\bf 1}$ or
$\<\omega_i\>$ is isomorphic to $L(\frac{1}{2},0)$, according to the structure theory of highest weight modules for the Virasoro algebra with central charge $\frac 1 2$ \cite{FF}. We now assume that
the first possibility happens.
In this case the $q$-character of $\<\omega_i\>$ is equal to
$$ch_q\<\omega_i\>=q^{-1/48}\frac{1}{\prod_{n\geq 2}(1-q^n)}.$$
Let $U$ be the vertex operator subalgebra of $V$ generated by
$\omega_j$ for $j=1,...,48.$ Then
$$U=\<\omega_1\>\otimes \cdots \otimes \<\omega_{48}\>$$
is a tensor product. Let $ f(q^{\frac 1 n }),g(q^{\frac 1 n })\in
\mathbb R[[q^{1/n},q^{-1/n}]]$ for some positive integer $n.$ We write $
f(q^{\frac 1 n })\leq g(q^{\frac 1 n })$ if the coefficient of
$q^m$ in $ f(q^{\frac 1 n })$ is less than or equal to that in
$g(q^{\frac 1 n })$ for all $m.$ It is well known that the
$q$-character of $L(\frac{1}{2},0)$ is equal to
$$\frac{1}{2}q^{-1/48}\left(\prod_{n\geq 0}(1+q^{n+\frac 1 2})+\prod_{n\geq 0}(1-q^{n+\frac 1 2})\right)$$
(cf. \cite{KR}). Thus we have
\begin{equation}
\label{ine} ch_qU\geq f(q^{\frac 1 2 })
\end{equation}
where
$$ f(q^{\frac 1 2 }):=q^{-1}\frac{1}{2^{47}}
\left(\prod_{n\geq 0}(1+q^{n+\frac 1 2})+\prod_{n\geq 0}
(1-q^{n+\frac 1 2})\right)^{47}\prod_{n\geq 2}\frac{1}{(1-q^n)}.$$
Clearly, both $ch_qU$ and $f(q^{\frac 1 2})$ are convergent for
$0<|q|<1$, when $q$ is regarded as a complex number. So, we can
and do treat both $ch_qU$ and $ f(q^{\frac 1 2 })$ as functions
for $0<q<1$ and the inequality (\ref{ine}) still holds as
functions.
We have already proved in Lemma \ref{lj} that the graded dimension
of $V$ is $J(q)$ which of course also converges for $0<|q|<1.$ In
the following we will take $q$ to be a real number in the domain
$(0,1).$ Since $U$ is a subspace of $V,$ we have
$$\frac{ch_q U}{J(q)}\leq 1.$$
Let $L$ be the Niemeier lattice of type $D_{24}.$ Then the lattice
vertex operator algebra $V_L$ is a module for the affine Lie
algebra $D_{24}^{(1)}.$ Denote the irreducible highest weight
module for $D_{24}^{(1)}$ of level $k$ by $L_k(\lambda)$ where
$\lambda$ is a dominant weight of the finite dimensional Lie
algebra of type $D_{24}.$ Let $\lambda_i$ be the fundamental
weights of Lie algebra of type $D_{24}$ for $i=1,...,24$ so that
$\lambda_{23}$ and $\lambda_{24}$ are the half spin weights. (We
are using the labelling of simple roots given in \cite{H}.) Then
as a module for $D_{24}^{(1)}$ $V_L$ is a direct sum
$$V_L=L_1(0)\oplus L_1(\lambda_{23})$$
following from the structure of the lattice $L.$ It is well-known that
$$ch_qV_L=\frac{\theta_L(q)}{\eta(q)^{24}}=J(q)+2\times (24)^2-24$$
where $2\times (24)^2-24=1128$ is the dimension of the Lie algebra
of type $D_{24},$
$$\theta_L(q)=\sum_{\alpha\in L}q^{(\alpha,\alpha )/2}$$
is the theta function of the lattice $L$ and
$$\eta(q)=q^{1/24}\prod_{n\geq 1}(1-q^n).$$ So we have
$$J(q)<ch_qV_L$$
as a function in $q\in (0,1).$
On the other hand, using the Boson-Fermion correspondence given in
\cite{F}, we see that the characters of the fermion realizations
of $L_1(0)$ and $L_1(\lambda_{23})$ satisfy the following
relations
$$ch_qL_1(0)\leq ch_qL_1(0)+ ch_qL_1(\lambda_1)=q^{-1}\prod_{n\geq 0}(1+q^{n+\frac 1 2})^{48}$$
$$ch_qL_1(\lambda_{23})=q^{-1}\prod_{n>0}(1+q^n)^{48}< 2q^{-1}\prod_{n\geq 0}(1+q^{n+\frac 1 2})^{48}$$
As a result we have
$$J(q)\leq ch_q V_L\leq 3q^{-1}\prod_{n\geq 0}(1+q^{n+\frac 1 2})^{48}.$$
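The bound $ch_qL_1(\lambda_{23})=q^{-1}\prod_{n>0}(1+q^n)^{48}< 2q^{-1}\prod_{n\geq 0}(1+q^{n+\frac 1 2})^{48}$ used above can be sanity-checked numerically (a Python sketch with truncated products, not part of the proof; comparing logarithms avoids overflow of the 48th powers):

```python
import math

def log_prod_int(q, N=4000):
    # log of prod_{n>=1} (1 + q^n), truncated at N factors
    return sum(math.log(1.0 + q**n) for n in range(1, N + 1))

def log_prod_half(q, N=4000):
    # log of prod_{n>=0} (1 + q^{n+1/2}), truncated at N factors
    return sum(math.log(1.0 + q**(n + 0.5)) for n in range(N + 1))

# check 48*log prod(1+q^n) < log 2 + 48*log prod(1+q^{n+1/2}) at sample points
for q in (0.3, 0.5, 0.9):
    lhs = 48.0 * log_prod_int(q)
    rhs = math.log(2.0) + 48.0 * log_prod_half(q)
    assert lhs < rhs, (q, lhs, rhs)
```

Termwise $q^{n+1}<q^{n+\frac 1 2}$ already forces the inequality, so the check passes with room to spare; the factor $2$ matters only in the $q\to 1$ asymptotics.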
Note that
$$ f(q^{\frac 1 2 })\geq q^{-1}\frac{1}{2^{47}}\prod_{n\geq 0}(1+q^{n+\frac 1 2})^{47}\prod_{n\geq 2}\frac{1}{(1-q^n)}$$
So finally we have
\begin{equation}\label{fi}
\frac{ch_qU}{ch_qV}\geq \frac{1}{2^{47}3}\prod_{n\geq
0}\frac{1}{(1+q^{n+\frac 1 2})} \prod_{n\geq 2}\frac{1}{(1-q^n)}.
\end{equation}
Clearly, the right hand side of (\ref{fi}) goes to infinity as $q$
goes to 1. This is a contradiction to $\frac{ch_qU}{ch_qV}\leq 1.$
\mbox{ $\square$}
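The divergence of the right-hand side of (\ref{fi}) as $q\to 1$ can also be observed numerically (a hedged Python sketch; the infinite products are truncated at a large finite order, an approximation that lies outside the proof):

```python
import math

def log_rhs(q, N=60000):
    # log of (1/(2^47 * 3)) * prod_{n>=0} 1/(1+q^{n+1/2})
    #                       * prod_{n>=2} 1/(1-q^n), truncated at N factors
    s = -47.0 * math.log(2.0) - math.log(3.0)
    s -= sum(math.log(1.0 + q**(n + 0.5)) for n in range(N + 1))
    s -= sum(math.log(1.0 - q**n) for n in range(2, N + 1))
    return s

vals = [log_rhs(q) for q in (0.9, 0.99, 0.999)]
assert vals[0] < vals[1] < vals[2]   # grows monotonically as q -> 1
assert vals[2] > 100.0               # already enormous at q = 0.999
```

Heuristically $\log$ of the right-hand side behaves like $\frac{\pi^2}{12(1-q)}$ as $q\to 1$, which the sampled values reflect.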
\begin{rem} From the proof of Theorem \ref{t4.2} we see that we have in fact proved
a stronger result: If $\{u_1,...,u_{48}\}$ are 48 mutually
commutative Virasoro elements of central charge $\frac 1 2$ then
$\{u_1,...,u_{48}\}$ is a VF.
\end{rem}
\begin{rem}\label{ra} In the proof of Theorem \ref{t4.2} we only use the fact
that $ch_q V=J(q).$ In fact, the proof goes through if we assume
that $\dim V_n\leq \dim V^{\natural}_n$ for $n\geq 3.$ So Theorem
\ref{t4.2} holds with the assumptions given in Theorem \ref{mt2}.
\end{rem}
{\bf Proof of Theorem \ref{mt1}:} By Theorem \ref{t4.2}, $V$ is
an FVOA with VF $F:=\{\omega_1,...,\omega_{48}\}.$ Let $U$ be the
vertex operator subalgebra generated by $V_2.$ Then $U$ is also a
FVOA with the same VF. Since $F$ is a VF in both $U$ and $V$, we
use a subscript $U$ to indicate dependence of the associated
binary codes on $U$. We have that $\mathcal{C}$ is a subcode of $C_U(F)$
and $\mathcal{D}$ is a subcode of $D_U(F).$ Since $D_U(F)\subset
C_U(F)^{\perp},$ and $\mathcal{D}=\mathcal{C}^{\perp}$ we immediately see that
$\mathcal{C}=C_U(F)$ and $\mathcal{D}=D_U(F).$
Note that $\mathcal{C}$ is a subgroup of $C(F)$ and $\mathcal{D}$ is a subgroup of
$D(F).$ Since $V$ is holomorphic by Lemma \ref{l4.2r},
$C(F)=D(F)^{\perp}$ (see Theorem \ref{DGH}). This implies that
$C_U(F)=C(F)=\mathcal{C}$ and $D_U(F)=D(F)=\mathcal{D}.$
Now $M_{\mathcal{C}}=U^{0}$ is a vertex operator subalgebra of $V.$ Then by
Theorem \ref{DGH}, $V$ is a direct sum of inequivalent irreducible
$M_{\mathcal{C}}$-modules. By Theorem \ref{indeed}, for each $\delta\in \mathcal{D}$
there exists a unique $\gamma_\delta\in \mathbb Z_2^{48}/\mathcal{C}$ such that
$M(\delta,\gamma_\delta)$ is isomorphic to a submodule of $V.$
Then
$$V \cong \bigoplus_{\delta\in \mathcal{D}}M(\delta,\gamma_\delta)$$
as $M_{\mathcal{C}}$-module. Similarly, $V^{\natural}$ has a decomposition
$$V^{\natural} \cong \bigoplus_{\delta\in \mathcal{D}}M(\delta,\beta_\delta)$$
where $\beta_\delta\in \mathbb Z_2^{48}/\mathcal{C}$. In the case that
the lowest
weight of $M(\delta,\beta_\delta)$ is 0 or 2, we have $\beta_\delta=\gamma_\delta.$ Since
every module for $M_{\mathcal{C}}$ is a simple current by Theorem
\ref{fusion}, by the uniqueness of simple current extension
theorem in \cite{DM2}, it is sufficient to show that
$M(\delta,\gamma_\delta)$ and $M(\delta,\beta_\delta)$ are isomorphic $M_{\mathcal{C}}$-modules.
For $\delta\in \mathcal{D}$ we denote the lowest weight of $M(\delta,\beta_\delta)$ by
$w(\delta).$ Set
$$X=\{(\delta,\beta_\delta)|\, \delta\in \mathcal{D},\ w(\delta)\in\{0, 2\}\}.$$
Since $V^{\natural}$ is
generated by $V^{\natural}_2$ (see \cite{FLM2}), the group
$G:=\{(\delta,\beta_\delta)|\delta\in \mathcal{D}\}$ is a subgroup of $\mathcal{D}\times
\mathbb Z_2^{48}/\mathcal{C}$ generated by $X.$ So, the group $H:=
\{(\delta,\gamma_\delta)|\delta\in \mathcal{D}\}$ is a subgroup of $\mathcal{D}\times
\mathbb Z_2^{48}/\mathcal{C}$ and contains $G$ as a subgroup. As a result,
$G=H.$ By Theorem \ref{indeed}, $M(\delta,\gamma_\delta)$ and $M(\delta,\beta_\delta)$
are indeed isomorphic $M_{\mathcal{C}}$-modules. \mbox{ $\square$}
\bigskip
{\bf Proof of Theorem \ref{mt2}:} In this case, the conclusions of Lemmas \ref{l4.1},
\ref{lb}, \ref{ladd} and Theorem \ref{t4.2} still hold (see Remark
\ref{ra}).
Let $U$ be as in the proof of Theorem \ref{mt1}. Since $U$ is
generated by the Griess algebra, and $\mathcal{D}\subset \mathcal{C},$ $C(U)=\mathcal{C}$
and $M_{\mathcal{C}}$ is a subalgebra of $U.$ From the proof of Theorem
\ref{mt1} we see that
$$U\cong \bigoplus_{\delta\in \mathcal{D}}M(\delta,\gamma_\delta)$$
as $M_{\mathcal{C}}$-modules. The same argument used in the proof of
Theorem \ref{mt1} shows that $U$ and $V^{\natural}$ are
isomorphic. So we have
$$J(q)=ch_q V^{\natural}\geq ch_q V\geq ch_q U=J(q).$$
As a result, $U=V.$ This completes the proof. \mbox{ $\square$}
\bigskip
We give an application of Theorem \ref{mt2}. Let $U$ be the $\mathbb Z_3$
orbifold construction given in \cite{DM1}. It has been expected
for a long time that $U$ and $V^{\natural}$ are isomorphic vertex
operator algebras. The isomorphism follows from Theorem \ref{mt2}
easily now.
\begin{cor} $V^{\natural}$ and $U$ are isomorphic.
\end{cor}
\noindent {\bf Proof: \,} $U$ satisfies the conditions in Theorem \ref{mt2}. In
particular, ${\rm ch}_q U=J(q).$ \mbox{ $\square$}
\section{Introduction}
During the past seven years there have appeared over two hundred
research papers on ${\cal PT}$-symmetric quantum systems. This was
initially triggered by the surprising observation of Bessis and
Zinn-Justin and its subsequent numerical verification by Bender
and his co-workers \cite{bender-PRL98-JMP99} that certain
non-Hermitian but ${\cal PT}$-symmetric Hamiltonians, such as
\begin{equation}
H=p^2+x^2+i\epsilon x^3~~~~{\rm with}~~~~~~\epsilon\in\mathbb{R}^+,
\label{cubic}
\end{equation}
have a purely real spectrum. This observation suggested the
possibility to use these Hamiltonians in the description of
certain quantum systems. Since the ${\cal PT}$-symmetry of a
non-Hermitian Hamiltonian $H$, i.e., the condition $[H,{\cal
PT}]=0$, did not ensure the reality of its spectrum, a crucial
task was to seek the necessary and sufficient conditions for
the reality of the spectrum of a given non-Hermitian Hamiltonian
$H$. This was achieved in \cite{p2p3} where it was shown, under
the assumptions of the diagonalizability of $H$ and discreteness
of its spectrum, that the reality of the spectrum was equivalent
to the existence of a positive-definite inner product
$\langle\cdot,\cdot\rangle_+$ that rendered the Hamiltonian self-adjoint,
i.e., for any pair ($\psi,\phi$) of state vectors
$\langle\psi,H\phi\rangle_+=\langle H\psi,\phi\rangle_+$.
Another condition that is equivalent to the reality of the
spectrum of $H$ is that it can be mapped to a Hermitian
Hamiltonian $h$ via a similarity transformation
\cite{p2p3,jpa-2003}; there is an invertible Hermitian operator
$\rho$ such that
\begin{equation}
H=\rho^{-1} h\, \rho.
\label{similar}
\end{equation}
The positive-definite inner product $\langle\cdot,\cdot\rangle_+$ and the
operator $\rho$ entering (\ref{similar}) are determined by a
positive-definite operator $\eta_+$ according to
\cite{p2p3,jpa-2003}
\begin{eqnarray}
&&\langle\cdot,\cdot\rangle_+:=\langle\cdot|\eta_+\cdot\rangle,
\label{inn=inn}\\
&&\rho=\sqrt{\eta_+},
\label{rho}
\end{eqnarray}
and the Hamiltonian satisfies the $\eta_+$-pseudo-Hermiticity
condition \cite{p1}:
\begin{equation}
H^\dagger=\eta_+H\eta_+^{-1}.
\label{ph}
\end{equation}
Here $\langle\cdot|\cdot\rangle$ stands for the standard $(L^2)$ inner
product that determines the (reference) Hilbert space ${\cal H}$
as well as the adjoint $H^\dagger$ of $H$,
\cite{jpa-2004c}.\footnote{The adjoint $A^\dagger$ of an operator
$A$ is the unique operator satisfying, for all $\psi,\phi\in{\cal
H}$, $\langle\psi|A^\dagger\phi\rangle=\langle A\psi|\phi\rangle$. $A$ is called
Hermitian if $A^\dagger=A$.}
It is this, so-called metric operator, $\eta_+$ that determines
the kinematic structure (the physical Hilbert space and the
observables) of the desired quantum system. Note however that
$\eta_+$ is not unique \cite{p4,jmp-2003,geyer-cjp}.\footnote{It
is only unique up to symmetries of the Hamiltonian,
\cite{jmp-2003}.} In \cite{p2p3} we have not only established the
existence of a positive definite metric operator $\eta_+$ and the
corresponding positive-definite inner product
$\langle\cdot,\cdot\rangle_+$ for a diagonalizable
Hamiltonian\footnote{For a treatment of non-diagonalizable
pseudo-Hermitian Hamiltonians see \cite{jmp-02d,ss,jmp-04}. Note
that diagonalizability of the Hamiltonian is a necessary condition
for applicability of the standard quantum measurement theory
\cite{jpa-2004c}. It is also necessary for the unitarity of the
time-evolution, for a non-diagonalizable Hamiltonian is never
Hermitian (its evolution operator is never unitary \cite{jmp-04})
with respect to a positive-definite inner product,
\cite{jmp-02d,ss}.} with a discrete real spectrum, but we have
also explained the role of antilinear symmetries such as ${\cal
PT}$ and offered a method for computing the most general $\eta_+$.
An alternative approach that yields a positive-definite inner
product for a class of ${\cal PT}$-symmetric models is that of
\cite{bender-PRL-2002}. As shown in \cite{jmp-2003,jpa-2005}, the
${\cal CPT}$-inner product proposed in \cite{bender-PRL-2002} is
identical with the inner product
$\langle\cdot,\cdot\rangle_+=\langle\cdot|\eta_+\cdot\rangle$ for a particular
choice of $\eta_+$.
Under the above mentioned conditions every Hamiltonian having a
real spectrum determines a set ${\cal U}_{H+}$ of
positive-definite metric operators. To formulate a consistent
unitary quantum theory having $H$ as its Hamiltonian, one needs to
choose an element $\eta_+$ of ${\cal
U}_{H+}$.\footnote{Alternatively one may choose sufficiently many
operators with real spectrum to construct a so-called irreducible
set of observables which subsequently fixes a metric operator
$\eta_+$, \cite{geyer}.} Each choice fixes a positive-definite
inner product $\langle\cdot,\cdot\rangle_+$ and defines the physical
Hilbert space ${\cal H}_{\rm phys}$ and the observables. The
latter are by definition \cite{critique} the operators $O$ that
are self-adjoint with respect to $\langle\cdot,\cdot\rangle_+$,
alternatively they are $\eta_+$-pseudo-Hermitian. These can be
constructed from Hermitian operators $o$ acting in ${\cal H}$
according to \cite{jpa-2004c}
\begin{equation}
O=\rho^{-1} o\, \rho.
\label{observable}
\end{equation}
In particular, one can define $\eta_+$-pseudo-Hermitian position
$X$ and momentum $P$ operators \cite{critique,jpa-2004c}, express
$H$ as a function of $X$ and $P$, and determine the underlying
classical Hamiltonian for the system by letting $\hbar\to 0$ in
the latter expression, \cite{jpa-2004c,p64}. Alternatively, one
may calculate the equivalent Hermitian Hamiltonian $h$ and obtain
its classical limit (again by letting $\hbar\to 0$).
Another application of the $\eta_+$-pseudo-Hermitian position
operator $X$ is in the construction of the physical localized
states:
\begin{equation}
|\xi^{(x)}\rangle:=\rho^{-1}|x\rangle.
\label{localized}
\end{equation}
These in turn define the physical position wave function,
$\Psi(x):=\langle\xi^{(x)},\psi\rangle_+=\langle x|\rho|\psi\rangle$, and the
invariant probability density,
\begin{equation}
\varrho(x):=\frac{|\Psi(x)|^2}{\int_{-\infty}^\infty |
\Psi(x)|^2dx}=\frac{|\langle x|\rho|\psi\rangle|^2}{
\langle\psi,\psi\rangle_+},
\label{density}
\end{equation}
for a given state vector $|\psi\rangle$, \cite{jpa-2004c,p64}.
The above prescription for treating ${\cal PT}$-symmetric and more
generally pseudo-Hermitian Hamiltonians with a real spectrum has
been successfully applied in the study of the ${\cal
PT}$-symmetric square well in \cite{jpa-2004c} and the cubic
anharmonic oscillator (\ref{cubic}) in \cite{p64}.\footnote{See
also \cite{banerjee}.} Both these systems have a discrete
nondegenerate energy spectrum, and the results of \cite{p1,p2p3}
are known to apply to them. The aim of the present paper is to
seek whether these results (in particular the construction method
for $\eta_+$) may be used for treating a system with a continuous
spectrum.\footnote{The question whether the theory of
pseudo-Hermitian operators as outlined in \cite{p1,p2p3} is
capable of treating a system having scattering states was posed to
the author by Zafar Ahmed during the 2nd International Workshop on
Pseudo-Hermitian Hamiltonians in Quantum Physics, held in Prague,
June 14-16, 2004.} This question is motivated by the desire to
understand field-theoretical analogues of ${\cal PT}$-symmetric
systems which should admit an $S$-matrix formulation. Furthermore,
there are some basic questions related to the nonlocal nature of
the Hermitian Hamiltonian $h$ and the pseudo-Hermitian observables
such as $X$ and $P$ especially for ${\cal PT}$-symmetric
potentials with a compact support (i.e., potentials vanishing
outside a compact region).
To achieve this aim we will focus our attention on a simple toy
model recently considered as an effective model arising in the
treatment of the electromagnetic waves travelling in a planar slab
waveguide that is half and half filed with gain and absorbing
media, \cite{ruschhaupt}. This model has a standard Hamiltonian,
\begin{equation}
H=\frac{p^2}{2m}+v(x),
\label{H}
\end{equation}
and a ${\cal PT}$-symmetric imaginary potential,
\begin{equation}
v(x):=i\zeta[\theta(x+\mbox{\small$\frac{L}{2}$})+
\theta(x-\mbox{\small$\frac{L}{2}$})-2\,\theta(x)]=
\left\{\begin{array}{ccc}
0&{\rm for}&|x|\geq \mbox{\small$\frac{L}{2}$}~~{\rm or}~~x=0\\
i\zeta &{\rm for}& x\in (-\mbox{\small$\frac{L}{2}$},0)\\
-i\zeta&{\rm for}& x\in (0,\mbox{\small$\frac{L}{2}$}),
\end{array}\right.
\label{v}
\end{equation}
where $L\in (0,\infty)$ is a length scale, $\zeta\in [0,\infty)$
determines the degree of non-Hermiticity of the system, and
$\theta$ is the step function:
\begin{equation}
\theta(x):=\left\{\begin{array}{ccc}
0&{\rm for}&x<0\\
\mbox{\small$\frac{1}{2}$}&{\rm for}&x=0\\
1&{\rm for}& x>0.
\end{array}\right.
\label{theta}
\end{equation}
The Hamiltonian~(\ref{H}) differs from a free particle Hamiltonian
only within
$(-\mbox{\small$\frac{L}{2}$},\mbox{\small$\frac{L}{2}$})$ where
it coincides with the Hamiltonian for the ${\cal PT}$-symmetric
square well \cite{sw-bmq,jpa-2004c}.
It is important to note that unlike in \cite{ruschhaupt} we will
consider the potential (\ref{v}) as defining a fundamental
(non-effective) quantum system having a unitary time-evolution
(and $S$-matrix). Therefore our approach will be completely
different from that pursued in \cite{ruschhaupt} and the earlier
studies of effective (optical) non-Hermitian Hamiltonians,
\cite{other}.
To the best of the author's knowledge, the only other non-Hermitian
Hamiltonian with a continuous (and doubly degenerate) spectrum
that is shown to admit a similar treatment is the one arising in
the two-component formulation of the free Klein-Gordon equation
\cite{cqg,kg}. Compared to (\ref{H}), this Hamiltonian defines a
technically much simpler system to handle, because it is
essentially a tensor product of an ordinary Hermitian Hamiltonian
and a $2\times 2$ matrix pseudo-Hermitian Hamiltonian.
\section{Metric Operator}
The essential ingredient of our approach is the metric operator
$\eta_+$. For a diagonalizable Hamiltonian with a discrete
spectrum it can be expressed as
\begin{equation}
\eta_+=\sum_n \sum_{a=1}^{d_n} |\phi_n,a\rangle\langle\phi_n,a|,
\label{eta}
\end{equation}
where $n$, $a$, and $d_n$ are a spectral label, a degeneracy
label, and the multiplicity (degree of degeneracy) for the
eigenvalue $E_n$ of $H$, respectively, and $\{|\phi_n,a\rangle\}$ is a
complete set of eigenvectors of $H^\dagger$ that together with the
eigenvectors $|\psi_n,a\rangle$ of $H$ form a biorthonormal system,
\cite{p1,p2p3}.
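To illustrate formula (\ref{eta}) in the simplest possible setting, consider a hypothetical $2\times 2$ toy Hamiltonian (chosen for illustration only, unrelated to the waveguide model below): the non-Hermitian matrix $H$ with rows $(1,1)$ and $(0,2)$ has real spectrum $\{1,2\}$, and summing $|\phi_n\rangle\langle\phi_n|$ over the eigenvectors of $H^\dagger$ from a biorthonormal system yields a positive-definite $\eta_+$ satisfying the pseudo-Hermiticity condition (\ref{ph}). A minimal Python sketch:

```python
# 2x2 toy model: H is non-Hermitian, upper triangular, spectrum {1, 2}
H = [[1, 1],
     [0, 2]]

def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):  # adjoint (all entries are real here)
    return [[A[j][i] for j in range(2)] for i in range(2)]

# Eigenvectors of H:        psi_1 = (1, 0),  psi_2 = (1, 1)
# Eigenvectors of H^dagger: phi_1 = (1, -1), phi_2 = (0, 1)
# These families are biorthonormal: <phi_m|psi_n> = delta_{mn}.
phi = [[1, -1], [0, 1]]

# eta_+ = sum_n |phi_n><phi_n|
eta = [[sum(p[i] * p[j] for p in phi) for j in range(2)] for i in range(2)]
assert eta == [[1, -1], [-1, 2]]      # positive definite: det = 1, tr = 3

eta_inv = [[2, 1], [1, 1]]            # inverse of eta (det eta = 1)
assert mul(eta, eta_inv) == [[1, 0], [0, 1]]

# pseudo-Hermiticity: H^dagger = eta_+ H eta_+^{-1}
assert mul(mul(eta, H), eta_inv) == transpose(H)
```

Exact integer arithmetic makes the pseudo-Hermiticity relation an equality of matrices here rather than a numerical approximation.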
Now, consider a diagonalizable Hamiltonian with a purely
continuous doubly degenerate real spectrum $\{E_k\}$, where $k\in
(0,\infty)$. We will extend the application of (\ref{eta}) to this
Hamiltonian by changing $\sum_n\cdots $ to $\int dk\cdots$. This
yields
\begin{equation}
\eta_+=\int_0^\infty dk\,(|\phi_k,+\rangle\langle\phi_k,+|+
|\phi_k,-\rangle\langle\phi_k,-|),
\label{eta=}
\end{equation}
where we have used $\pm$ as the values of the degeneracy label
$a$, \cite{kg}. The biorthonormal system
$\{|\psi_k,a\rangle,|\phi_k,a\rangle\}$ satisfies
\begin{eqnarray}
H|\psi_k,a\rangle&=&E_k|\psi_k,a\rangle,~~~~~~~
H^\dagger|\phi_k,a\rangle=E_k|\phi_k,a\rangle,
\label{eg-va}\\
\langle\phi_k,a|\psi_\ell,b\rangle&=&\delta_{ab}\delta(k-\ell),~~~~~~~
\int_0^\infty (|\psi_k,+\rangle\langle\phi_k,+|+
|\psi_k,-\rangle\langle\phi_k,-|)\,dk=1,
\label{complete}
\end{eqnarray}
where $\delta_{ab}$ and $\delta(k)$ stand for the Kronecker and
Dirac delta functions, respectively, $k\in(0,\infty)$, and
$a,b\in\{-,+\}.$
We define the eigenvalue problem for the Hamiltonian (\ref{H})
using the oscillating (plane wave) boundary conditions at
$x=\pm\infty$, similarly to the free-particle case, which
corresponds to $\zeta=0$. To simplify the calculation of the
eigenvectors we first introduce the following dimensionless
quantities.
\begin{eqnarray}
&&{\rm x}:=(\mbox{\small$\frac{2}{L}$})\,x,~~~~~~
{\rm p}:=(\mbox{\small$\frac{L}{2\hbar}$})\,p,~~~~~~
Z:=(\mbox{\small$\frac{mL^2}{2\hbar^2}$})\,\zeta,~~~~~~
{\rm H}:=(\mbox{\small$\frac{mL^2}{2\hbar^2}$})\,H={\rm p}^2+{\rm v}({\rm x}),
\label{scale1}\\
&&{\rm v}({\rm x}):=i Z[\theta({\rm x}+1)+
\theta({\rm x}-1)-2\,\theta({\rm x})]=
\left\{\begin{array}{ccc}
0&{\rm for}&|{\rm x}|\geq 1~~{\rm or}~~{\rm x}=0\\
i Z &{\rm for}& {\rm x}\in (-1,0)\\
-i Z&{\rm for}& {\rm x}\in (0,1).
\end{array}\right.
\label{r-v}
\end{eqnarray}
The eigenvalue problem for the scaled Hamiltonian ${\rm H}$
corresponds to the solution of the differential equation
\begin{equation}
\left[-\frac{d^2}{d{\rm x}^2}+{\rm v}({\rm x})-{\rm E}_k\right]\psi({\rm x})=0,
\label{DE}
\end{equation}
that is subject to the condition that $\psi$ is a differentiable
function at the discontinuities ${\rm x}=-1,0,1$ of ${\rm v}$. Introducing
$\psi_1:(-\infty,-1]\to\mathbb{C}$, $\psi_-:[-1,0]\to\mathbb{C}$,
$\psi_+:[0,1]\to\mathbb{C}$, and $\psi_2:[1,\infty)\to\mathbb{C}$ according to
\begin{equation}
\psi({\rm x})=:\left\{\begin{array}{ccc}
\psi_1({\rm x})&{\rm for}&{\rm x}\in (-\infty,-1]\\
\psi_-({\rm x})&{\rm for}&{\rm x}\in [-1,0]\\
\psi_+({\rm x})&{\rm for}&{\rm x}\in [0,1]\\
\psi_2({\rm x})&{\rm for}&{\rm x}\in [1,\infty),
\end{array}\right.
\label{psi}
\end{equation}
we have
\begin{eqnarray}
\psi_1(-1)&=&\psi_-(-1),~~~~~~~\psi'_1(-1)=\psi'_-(-1),
\label{c1}\\
\psi_-(0)&=&\psi_+(0),~~~~~~~\psi'_-(0)=\psi'_+(0),
\label{c2}\\
\psi_+(1)&=&\psi_2(1),~~~~~~~\psi'_+(1)=\psi'_2(1).
\label{c3}
\end{eqnarray}
Now, imposing the plane-wave boundary condition at ${\rm x}=\pm\infty$
and demanding that the eigenfunctions $\psi$ be ${\cal
PT}$-invariant, which implies
\begin{equation}
\psi_-(0)=\psi_+(0)^*,~~~~~~~~~~~~
\psi_-'(0)=-\psi_+'(0),
\label{c4}
\end{equation}
we find $E_k=k^2$, i.e., the spectrum is real positive and
continuous, and
\begin{equation}
\psi_1({\rm x})=A_1 e^{ik{\rm x}}+B_1e^{-ik{\rm x}},~~~~
\psi_2({\rm x})=A_2 e^{ik{\rm x}}+B_2e^{-ik{\rm x}},~~~~
\psi_\pm({\rm x})=A_\pm e^{ik_\pm{\rm x}}+B_\pm e^{-ik_\pm{\rm x}},
\label{psi=}
\end{equation}
where
\begin{eqnarray}
&&k_\pm:=\sqrt{k^2\pm iZ},
\label{k-pm}\\
&&A_1=A_2^*=\frac{e^{ik}}{\sqrt{2\pi}}\left[
L_-(k)u+K_-(k)v\right],~~~~~
B_1=B_2^*=\frac{e^{-ik}}{\sqrt{2\pi}}\left[
L_-(-k)u+K_-(-k)v\right],~~~~~~~~
\label{A1,B1}\\
&&L_-(k):=\frac{1}{2}\left(\cos k_--\frac{ik_-\sin
k_-}{k}\right),~~~~~
K_-(k):=\frac{1}{2}\sqrt{\frac{k_+}{k_-}}
\left(\frac{k_-\cos k_-}{k}-i\sin
k_-\right),
\label{L-K}\\
&&A_\pm=\frac{1}{\sqrt {8\pi}}
\left[u+\left(\frac{k_+}{k_-}\right)^{\pm 1/2}
v\right],~~~~~~~~~~
B_\pm=\frac{1}{\sqrt {8\pi}}
\left[u-\left(\frac{k_+}{k_-}\right)^{\pm 1/2}
v\right],
\label{AB-pm}
\end{eqnarray}
and $u,v\in\mathbb{R}$ are arbitrary constants (possibly depending on $k$
and/or $Z$ and not both vanishing).
The presence of the free parameters $u$ and $v$ is an indication
of a double degeneracy of the eigenvalues ${\rm E}_k=k^2$. We will
select $u$ and $v$ in such a way as to ensure that in the limit
$Z\to 0$ we recover the plane-wave solutions of the free particle
Hamiltonian, i.e., we demand $\lim_{Z\to 0}\psi({\rm x}) = e^{\pm
ik{\rm x}}/\sqrt{2\pi}$. This condition is satisfied if we set
\begin{equation}
u=1,~~~~~~~~~v=\pm 1.
\label{u-v}
\end{equation}
In the following we use the superscript $\pm$ to identify the
value of a quantity obtained by setting $u=1$ and $v=\pm 1$. In
this way we introduce $A_1^\pm,B_1^\pm,A_2^\pm,B_2^\pm, A_\pm^\pm,
B_\pm^\pm$, and $\psi^\pm$. The latter define the basis
(generalized \cite{bohm-qm}) eigenvectors $|\psi_k,\pm\rangle$ by
$\langle{\rm x}|\psi_k,\pm\rangle:=\psi^\pm({\rm x})$.
The next step is to obtain $|\phi_k,\pm\rangle$. In view of the
identity ${\rm H}^\dagger={\rm H}|_{Z\to -Z}$, we can easily obtain the
expression for the eigenfunctions $\phi$ of ${\rm H}^\dagger.$
Introducing
\begin{equation}
\phi({\rm x})=:\left\{\begin{array}{ccc}
\phi_1({\rm x})&{\rm for}&{\rm x}\in (-\infty,-1]\\
\phi_-({\rm x})&{\rm for}&{\rm x}\in [-1,0]\\
\phi_+({\rm x})&{\rm for}&{\rm x}\in [0,1]\\
\phi_2({\rm x})&{\rm for}&{\rm x}\in [1,\infty),
\end{array}\right.
\label{phi}
\end{equation}
we have
\begin{equation}
\phi_1({\rm x})=C_1 e^{ik{\rm x}}+D_1e^{-ik{\rm x}},~~~~
\phi_2({\rm x})=C_2 e^{ik{\rm x}}+D_2e^{-ik{\rm x}},~~~~
\phi_\pm({\rm x})=C_\pm e^{ik_\mp{\rm x}}+D_\pm e^{-ik_\mp{\rm x}},
\label{phi=}
\end{equation}
where
\begin{eqnarray}
&&C_1=C_2^*=\frac{e^{ik}}{\sqrt{2\pi}}\left[
L_+(k)r+K_+(k)s\right],
~~~~
D_1=D_2^*=\frac{e^{-ik}}{\sqrt{2\pi}}\left[
L_+(-k)r+K_+(-k)s\right],~~~~~~~~~~
\label{D1}\\
&&L_+(k):=L_-(-k)^*,~~~~~~~~~~~~~~~~~~~~~
K_+(k):=-K_-(-k)^*,
\label{LK-plus}\\
&&C_\pm=\frac{1}{\sqrt {8\pi}}
\left[r+\left(\frac{k_+}{k_-}\right)^{\mp 1/2}
s\right],~~~~~~~~~~
D_\pm=\frac{1}{\sqrt {8\pi}}
\left[r-\left(\frac{k_+}{k_-}\right)^{\mp 1/2}
s\right],
\label{CD-pm}
\end{eqnarray}
and $r,s\in\mathbb{R}$ are (possibly $k$- and/or $Z$-dependent) parameters
that are to be fixed by imposing the biorthonormality condition
(\ref{complete}). The latter is equivalent to a set of four
(complex) equations (corresponding to the four possible choices for
the pair of indices $(a,b)$ in the first equation in
(\ref{complete})) which are to be solved for the two real unknowns
$r$ and $s.$ This, together with the presence of the delta function
in two of these equations, makes the existence of a solution quite
nontrivial.
We checked these equations by expanding all the quantities in
powers of the non-Hermiticity parameter $Z$ up to (but not
including) terms of order two and found after a long and tedious
calculation (partly done using Mathematica) that indeed all four
of these equations are satisfied, if we set $r=u=1$ and $s=v=\pm
1$. Again we will refer to this choice using superscript $\pm$. In
particular, we have $\phi^\pm=\psi^\pm|_{Z\to -Z}$ and $
\langle{\rm x}|\phi_k,\pm\rangle:=\phi^\pm({\rm x})$.
Having obtained $|\phi_k,\pm\rangle$ we are in a position to calculate
the metric operator (\ref{eta=}). We carried out this calculation
using first order perturbation theory in $Z$. It involved
expanding the $\phi_1^\pm({\rm x}), \psi_2^\pm({\rm x})$, and
$\psi_\pm^\pm({\rm x})$ in powers of $Z$, substituting the result in
\begin{equation}
\langle {\rm x}|\eta_+|{\rm y}\rangle=\int_0^\infty
[\phi^+({\rm y})^*\,\phi^+({\rm x})+\phi^-({\rm y})^*\,\phi^-({\rm x})]dk
\label{eta-int}
\end{equation}
which follows from (\ref{eta=}), and using the identities:
\begin{equation}
\int_{-\infty}^\infty e^{i a k}dk=2\pi\delta(a),
~~~\int_{-\infty}^\infty \frac{e^{i a k}}{k}\,dk=
i\pi\,{\rm sign}(a),~~~
\int_{-\infty}^\infty \frac{e^{i a k}-e^{i b k}}{k^2}\,dk=
\pi(|b|-|a|)
\label{identities}
\end{equation}
(where $a,b\in\mathbb{R}$ and ${\rm sign}(a):=\theta(a)-\theta(-a)$) to
perform the integral over $k$ for all 16 possibilities for the
range of values of the pair of independent variables $({\rm x},{\rm y})$
in (\ref{eta-int}). To simplify the presentation of the result, we
introduce: $I_1:=(-\infty,-1)$, $I_-:=(-1,0)$, $I_+:=(0,1)$,
$I_2:=(1,\infty)$ and define the functions ${\cal
E}_{\mu,\nu}:I_\mu\times I_\nu\to\mathbb{C}$ by
\[{\cal E}_{\mu,\nu}({\rm x},{\rm y}):=\langle {\rm x}|\eta_+|{\rm y}\rangle~~~~
{\rm for~all}~~~~{\rm x}\in I_\mu,~{\rm y}\in
I_\nu,~~~\mu,\nu\in\{1,-,+,2\}.\]
Then after a very long calculation we find
\begin{eqnarray}
{\cal E}_{1,1}({\rm x},{\rm y})&=&\delta({\rm x}-{\rm y})+
{\mbox{\small$\frac{i}{2}$}}\,{\rm sign}({\rm x}-{\rm y})\,Z
+{\cal O}(Z^2),
\label{e11}\\
{\cal E}_{-,1}({\rm x},{\rm y})&=&
{\mbox{\small$\frac{i}{8}$}}\,
(2-{\rm x}-{\rm y}-|{\rm x}+{\rm y}+2|)\,Z+{\cal O}(Z^2),
\label{e12}\\
{\cal E}_{+,1}({\rm x},{\rm y})&=&
{\mbox{\small$\frac{i}{8}$}}\,
(2-{\rm x}-{\rm y}-|{\rm x}+{\rm y}+2|)\,Z+{\cal O}(Z^2),
\label{e13}\\
{\cal E}_{2,1}({\rm x},{\rm y})&=&{\mbox{\small$\frac{i}{8}$}}\,
(4+2|{\rm x}+{\rm y}|-|{\rm x}+{\rm y}+2|-|{\rm x}+{\rm y}-2|)\,Z+{\cal O}(Z^2),
\label{e14}\\
{\cal E}_{1,-}({\rm x},{\rm y})&=&-{\mbox{\small$\frac{i}{8}$}}\,
(2-{\rm x}-{\rm y}-|{\rm x}+{\rm y}+2|)\,Z+{\cal O}(Z^2),
\label{e21}\\
{\cal E}_{-,-}({\rm x},{\rm y})&=&\delta({\rm x}-{\rm y})-
{\mbox{\small$\frac{i}{4}$}}\,{\rm sign}({\rm x}-{\rm y})
({\rm x}+{\rm y})\,Z+{\cal O}(Z^2),
\label{e22}\\
{\cal E}_{+,-}({\rm x},{\rm y})&=&{\mbox{\small$\frac{i}{4}$}}\,
|{\rm x}+{\rm y}|\,Z+{\cal O}(Z^2),
\label{e23}\\
{\cal E}_{2,-}({\rm x},{\rm y})&=&{\mbox{\small$\frac{i}{8}$}}\,
(2+{\rm x}+{\rm y}-|{\rm x}+{\rm y}-2|)\,Z+{\cal O}(Z^2),
\label{e24}\\
{\cal E}_{1,+}({\rm x},{\rm y})&=&-{\mbox{\small$\frac{i}{8}$}}\,
(2-{\rm x}-{\rm y}-|{\rm x}+{\rm y}+2|)\,Z+{\cal O}(Z^2),
\label{e31}\\
{\cal E}_{-,+}({\rm x},{\rm y})&=&-{\mbox{\small$\frac{i}{4}$}}\,
|{\rm x}+{\rm y}|\,Z+{\cal O}(Z^2),
\label{e32}\\
{\cal E}_{+,+}({\rm x},{\rm y})&=&\delta({\rm x}-{\rm y})+
{\mbox{\small$\frac{i}{4}$}}\,{\rm sign}({\rm x}-{\rm y})
({\rm x}+{\rm y})\,Z+{\cal O}(Z^2),
\label{e33}\\
{\cal E}_{2,+}({\rm x},{\rm y})&=&{\mbox{\small$\frac{i}{8}$}}\,
(2+{\rm x}+{\rm y}-|{\rm x}+{\rm y}-2|)\,Z+{\cal O}(Z^2),
\label{e34}\\
{\cal E}_{1,2}({\rm x},{\rm y})&=&-{\mbox{\small$\frac{i}{8}$}}\,
(4+2|{\rm x}+{\rm y}|-|{\rm x}+{\rm y}+2|-|{\rm x}+{\rm y}-2|)\,Z+{\cal O}(Z^2),
\label{e41}\\
{\cal E}_{-,2}({\rm x},{\rm y})&=&-{\mbox{\small$\frac{i}{8}$}}\,
(2+{\rm x}+{\rm y}-|{\rm x}+{\rm y}-2|)\,Z+{\cal O}(Z^2),
\label{e42}\\
{\cal E}_{+,2}({\rm x},{\rm y})&=&-{\mbox{\small$\frac{i}{8}$}}\,
(2+{\rm x}+{\rm y}-|{\rm x}+{\rm y}-2|)\,Z+{\cal O}(Z^2),
\label{e43}\\
{\cal E}_{2,2}({\rm x},{\rm y})&=&\delta({\rm x}-{\rm y})+
{\mbox{\small$\frac{i}{2}$}}\,{\rm sign}({\rm x}-{\rm y})\,Z
+{\cal O}(Z^2),
\label{e44}
\end{eqnarray}
where ${\cal O}(Z^2)$ stands for terms of order two and higher in
powers of $Z$. It is quite remarkable that we can obtain from
(\ref{e11}) -- (\ref{e44}) a single formula for
$\langle{\rm x}|\eta_+|{\rm y}\rangle$ which is valid for all ${\rm x},{\rm y}\in\mathbb{R}$,
namely
\begin{equation}
\langle{\rm x}|\eta_+|{\rm y}\rangle=\delta({\rm x}-{\rm y})+
{\mbox{\small$\frac{i}{8}$}}\,
(4+2|{\rm x}+{\rm y}|-|{\rm x}+{\rm y}+2|-|{\rm x}+{\rm y}-2|)\,
{\rm sign}({\rm x}-{\rm y})\,Z+{\cal O}(Z^2).
\label{eta-y-x=}
\end{equation}
Note that $\langle{\rm x}|\eta_+|{\rm y}\rangle^*=\langle{\rm y}|\eta_+|{\rm x}\rangle$ which is
consistent with the Hermiticity of $\eta_+$.
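As a cross-check (illustrative only, not part of the derivation), one can verify numerically that the single formula (\ref{eta-y-x=}) reproduces the piecewise expressions (\ref{e11})--(\ref{e44}). A minimal Python sketch for three of the sixteen regions:

```python
import numpy as np

def eta1_coeff(x, y):
    """O(Z) coefficient of <x|eta_+|y> divided by i, from (eta-y-x=)."""
    return (4 + 2*abs(x + y) - abs(x + y + 2) - abs(x + y - 2)) \
        * np.sign(x - y) / 8

rng = np.random.default_rng(1)
for _ in range(200):
    x, y = -1 - 3 * rng.random(2)          # both in I_1 = (-inf,-1)
    assert np.isclose(eta1_coeff(x, y), 0.5 * np.sign(x - y))              # (e11)
    x, y = -rng.random(2)                  # both in I_- = (-1,0)
    assert np.isclose(eta1_coeff(x, y), -0.25 * np.sign(x - y) * (x + y))  # (e22)
    x, y = rng.random(), -rng.random()     # x in I_+, y in I_-
    assert np.isclose(eta1_coeff(x, y), 0.25 * abs(x + y))                 # (e23)
```

For instance, for ${\rm x},{\rm y}\in I_1$ one has ${\rm x}+{\rm y}<-2$, so the bracket collapses to $4$ and the coefficient reduces to $\frac{1}{2}\,{\rm sign}({\rm x}-{\rm y})$, in agreement with (\ref{e11}).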
\section{Physical Observables and Localized States}
The physical observables of the system described by the
Hamiltonian (\ref{H}) are obtained from the Hermitian operators
acting in ${\cal H}=L^2(\mathbb{R})$ by the similarity transformation
(\ref{observable}). This equation involves the positive square
root $\rho$ of $\eta_+$ which takes the form \cite{p64}
\begin{equation}
\rho^{\pm 1}=e^{\mp Q/2},
\label{rho=}
\end{equation}
if we express $\eta_+$ in the exponential form
\begin{equation}
\eta_+=e^{-Q}.
\label{exp}
\end{equation}
In view of (\ref{rho=}) and the Baker-Campbell-Hausdorff
identity,
\begin{equation}
e^{-A}B\,e^A=B+[B,A]+\frac{1}{2!}\,[[B,A],A]+\cdots
\label{cbh}
\end{equation}
(where $A$ and $B$ are linear operators), physical observables
(\ref{observable}) satisfy \cite{p64}:
\begin{equation}
O=o-\frac{1}{2}\,[o,Q]+\frac{1}{8}\,[[o,Q],Q]+\cdots.
\label{O-calc}
\end{equation}
If we expand $\eta_+$ and $Q$ in powers of $Z$,
\begin{equation}
\eta_+=1+\sum_{\ell=1}^\infty
\eta_{+_\ell}Z^\ell,~~~~~~~
Q=\sum_{\ell=1}^\infty
Q_\ell Z^\ell,
\label{expand}
\end{equation}
where $\eta_{+_\ell}$ and $Q_\ell$ are $Z$-independent Hermitian
operators, we find using (\ref{exp}) that
\begin{equation}
Q_1=-\eta_{+_1},~~~~~~Q_2=-\eta_{+_2}+\frac{1}{2}\,
\eta_{+_1}^2.
\label{Q12}
\end{equation}
Combining this relation with (\ref{O-calc}), we have
\begin{equation}
O=o-\frac{1}{2}[o,Q_1]\,Z+\frac{1}{8}(-4[o,Q_2]+
[[o,Q_1],Q_1])\,Z^2+
{\cal O}(Z^3).
\label{pert-1}
\end{equation}
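Relations such as (\ref{Q12}) are easy to check with finite-dimensional stand-ins: replacing $\eta_{+_1}$ and $\eta_{+_2}$ by random Hermitian matrices (a toy model for illustration, not the actual operators of the text), $e^{-Q}$ with $Q=Q_1Z+Q_2Z^2$ must reproduce $\eta_+=1+\eta_{+_1}Z+\eta_{+_2}Z^2$ up to ${\cal O}(Z^3)$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, Z = 5, 1e-3
eta1, eta2 = (rng.normal(size=(n, n)) for _ in range(2))
eta1, eta2 = eta1 + eta1.T, eta2 + eta2.T      # Hermitian, as required

# Q_1 = -eta1, Q_2 = -eta2 + eta1^2 / 2, as in (Q12):
Q = -eta1 * Z + (-eta2 + 0.5 * eta1 @ eta1) * Z**2
eta_plus = np.eye(n) + eta1 * Z + eta2 * Z**2

# exp(-Q) must reproduce eta_+ up to O(Z^3):
assert np.max(np.abs(expm(-Q) - eta_plus)) < 1e-6
```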
In the following we calculate the $\eta_+$-pseudo-Hermitian
position ($X$) and momentum ($P$) operators, \cite{p64}, up to
(but not including) terms of order $Z^2$. This is because so far we
have only calculated $\eta_{+_1}$ which in view of
(\ref{eta-y-x=}) satisfies
\begin{equation}
\langle{\rm x}|\eta_{+_1}|{\rm y}\rangle={\mbox{\small$\frac{i}{8}$}}\,
(4+2|{\rm x}+{\rm y}|-|{\rm x}+{\rm y}+2|-|{\rm x}+{\rm y}-2|)\,
{\rm sign}({\rm x}-{\rm y}),~~~~~~~~\forall {\rm x},{\rm y}\in\mathbb{R}.
\label{eta-1-xy}
\end{equation}
Substituting the scaled position (${\rm x}$) and momentum $({\rm p})$
operator for $o$ in (\ref{pert-1}), using (\ref{eta-1-xy}), and
doing the necessary algebra, we find
\begin{eqnarray}
\langle{\rm x}|{\rm X}|{\rm y}\rangle&=&{\rm x}\,\delta({\rm x}-{\rm y})+
{\mbox{\small$\frac{i}{16}$}}\,
(4+2|{\rm x}+{\rm y}|-|{\rm x}+{\rm y}+2|-|{\rm x}+{\rm y}-2|)|{\rm x}-{\rm y}|\,Z+{\cal
O}(Z^2),~~~~~
\label{X=}\\
\langle{\rm x}|{\rm P}|{\rm y}\rangle&=&-i\partial_{{\rm x}}\,\delta({\rm x}-{\rm y})+{\mbox{\small$\frac{1}{8}$}}\,
[2\,{\rm sign}({\rm x}+{\rm y})-{\rm sign}({\rm x}+{\rm y}+2)
\nonumber\\
&&\hspace{3.5cm}-
{\rm sign}({\rm x}+{\rm y}-2)]\:{\rm sign}({\rm x}-{\rm y})\,Z+{\cal
O}(Z^2),
\label{P=}
\end{eqnarray}
where ${\rm X}:=2 X/L$ and ${\rm P}:=L P/(2\hbar)$ are dimensionless
$\eta_+$-pseudo-Hermitian position and momentum operators,
respectively.
As seen from (\ref{X=}) and (\ref{P=}), both ${\rm X}$ and ${\rm P}$ are
manifestly nonlocal and non-Hermitian (but pseudo-Hermitian) operators.
Furthermore,
\[ \langle{\rm x}|{\rm P}|{\rm y}\rangle=\langle{\rm x}|{\rm p}|{\rm y}\rangle+{\cal O}(Z^2)~~~~
{\rm for}~~~~{\rm x}\notin [-1,1]~~{\rm and}~~{\rm y}\notin [-1,1].\]
If we scale back the relevant quantities in (\ref{X=}) and
(\ref{P=}) according to (\ref{scale1}), we find
\begin{eqnarray}
\langle x|X|y\rangle&=& x\,\delta(x-y)+
{\mbox{\small$\frac{im}{4\hbar^2}$}}\,
(2L+2|x+y|-|x+y+L|-|x+y-L|)|x-y|\,\zeta+{\cal
O}(\zeta^2),~~~~~
\label{X=scale}\\
\langle x|P|y\rangle&=&-i\hbar\partial_x
\delta(x-y)+{\mbox{\small$\frac{m}{4\hbar}$}}\,
[2\,{\rm sign}(x+y)-{\rm sign}(x+y+L)\nonumber\\
&&\hspace{4cm}-{\rm sign}(x+y-L)]\,
{\rm sign}(x-y)\,\zeta+{\cal O}(\zeta^2).
\label{P=scale}
\end{eqnarray}
Again note that the contributions of order $\zeta$ to $P$ vanish,
if both $x$ and $y$ take values outside
$[-\mbox{\small$\frac{L}{2}$},\mbox{\small$\frac{L}{2}$}]$.
Next, we compute the localized states $\xi^{(x)}$ of the system.
The corresponding state vectors are defined by (\ref{localized}).
Using this equation as well as (\ref{rho=}), (\ref{expand}),
(\ref{Q12}), (\ref{eta-1-xy}), and (\ref{scale1}) we have the
following expression for the $x$-representation of a localized
state $\xi^{(y)}$ centered at $y\in\mathbb{R}$.
\begin{equation}
\langle x|\xi^{(y)}\rangle=\delta(x-y)-
\frac{im\zeta}{8\hbar^2}\,
(2L+2|x+y|-|x+y+L|-|x+y-L|)\,
{\rm sign}(x-y)+{\cal O}(\zeta^2).
\label{localized-wf}
\end{equation}
Because the linear term in $\zeta$ is imaginary, the presence of a
weak non-Hermiticity only modifies the usual (Hermitian) localized
states by making them complex (non-real) while keeping their real
part intact. Note however that for a fixed $y$ the imaginary part
of $\langle x|\xi^{(y)}\rangle$ does not tend to zero as $|x-y|\to\infty$.
This observation which seems to be in conflict with the usual
notion of localizability has a simple explanation. Because the
usual $x$ operator is no longer an observable, it does not
describe the position of the particle. This is done by the
pseudo-Hermitian position operator $X$; it is the physical
position wave function $\Psi(x):=\langle\xi^{(x)},\psi\rangle_+$ that
defines the probability density of localization in space
(\ref{density}). The physical position wave function for the
localized state $\xi^{(y)}$ is given by
$\langle\xi^{(x)},\xi^{(y)}\rangle_+= \langle x|y\rangle=\delta(x-y)$ which is the
expected result.
In summary, the notion of localizability in space is directly
linked with the choice of the physical position operator. An
important by-product of the recent intensive investigation of
non-Hermitian ${\cal PT}$-symmetric systems is the realization of
the fact that one may formulate a unitary quantum system for which
the choice of the Hilbert space and observables, particularly the
position operator, is not a priori fixed.
\section{Equivalent Hermitian Hamiltonian and Classical
Limit}
The calculation of the equivalent Hermitian Hamiltonian $h$ for
the Hamiltonian (\ref{H}) is similar to that of the physical
observables. In view of (\ref{similar}), (\ref{rho=}),
(\ref{cbh}), (\ref{expand}), and the last equation in
(\ref{scale1}) which we express as
\begin{equation}
{\rm H}={\rm p}^2+i\nu({\rm x})Z~~~~{\rm with}~~~~\nu({\rm x}):=
\theta({\rm x}+1)+\theta({\rm x}-1)-2\theta({\rm x}),
\label{scale-H}
\end{equation}
we have
\begin{eqnarray}
{\rm h}&=&{\rm p}^2+h_1Z+h_2Z^2+{\cal O}(Z^3),
\label{h-expand}\\
{\rm h}_1&:=&i\nu({\rm x})+\frac{1}{2}[{\rm p}^2,Q_1],
\label{h1}\\
{\rm h}_2&:=&\frac{1}{8}\{
4[{\rm p}^2,Q_2]+4i[\nu({\rm x}),Q_1]+[[{\rm p}^2,Q_1],Q_1]\}.
\label{h2}
\end{eqnarray}
where
\begin{equation}
{\rm h}:=\rho\,{\rm H}\rho^{-1}=mL^2 h/(2\hbar^2)
\label{rh}
\end{equation}
is the dimensionless Hermitian Hamiltonian associated with ${\rm H}$.
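A quick numerical check of the step potential $\nu({\rm x})$ defined in (\ref{scale-H}) confirms its piecewise values, $\nu=+1$ on $(-1,0)$, $\nu=-1$ on $(0,1)$, $\nu=0$ for $|{\rm x}|>1$, and its antisymmetry $\nu(-{\rm x})=-\nu({\rm x})$, which encodes the ${\cal PT}$ symmetry of $i\nu({\rm x})$. A small Python sketch (illustrative only):

```python
import numpy as np

def nu(x):
    theta = lambda t: np.heaviside(t, 0.5)
    return theta(x + 1) + theta(x - 1) - 2 * theta(x)

rng = np.random.default_rng(3)
assert np.all(nu(rng.uniform(-1, 0, 100)) == 1)    # nu = +1 on (-1,0)
assert np.all(nu(rng.uniform(0, 1, 100)) == -1)    # nu = -1 on (0,1)
assert np.all(nu(rng.uniform(1, 4, 100)) == 0)     # nu = 0 for x > 1
assert np.all(nu(rng.uniform(-4, -1, 100)) == 0)   # nu = 0 for x < -1
x = rng.uniform(-4, 4, 100)
assert np.allclose(nu(-x), -nu(x))                 # PT antisymmetry
```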
Next, we substitute (\ref{Q12}) and (\ref{eta-1-xy}) in the
identity
\[
\langle{\rm x}|[{\rm p}^2,Q_1]|{\rm y}\rangle=
(\partial_{{\rm y}}^2-\partial_{{\rm x}}^2)\langle{\rm x}|Q_1|{\rm y}\rangle,\]
and perform the necessary algebra. We then find
$\langle{\rm x}|[{\rm p}^2,Q_1]|{\rm y}\rangle=-2i\nu({\rm x})\delta({\rm x}-{\rm y})$. Therefore,
\begin{equation}
[{\rm p}^2,Q_1]=-2i\nu({\rm x}),
\label{id1}
\end{equation}
and in view of (\ref{h1})
\begin{equation}
{\rm h}_1=0.
\label{h1=}
\end{equation}
This was actually to be expected, for both the operators appearing
on the right-hand side of (\ref{h1}) are anti-Hermitian, while its
left-hand side is Hermitian. The fact that an explicit calculation
of the right-hand side of (\ref{h1}) yields the desired result,
namely (\ref{h1=}), is an important check on the validity of our
calculation of $\eta_{+_1}$. It may also be viewed as an
indication of the consistency and general applicability of our
method, which was initially formulated for systems with a discrete
spectrum \cite{jpa-2004c,p64}.
According to (\ref{h1=}),
\begin{equation}
{\rm h}={\rm p}^2+{\rm h}_2Z^2+{\cal O}(Z^3).
\label{h=h2}
\end{equation}
Hence, in order to obtain a better understanding of the nature of
the system described by the Hamiltonian $H$, we need to calculate
${\rm h}_2$. Equations (\ref{h2}) and (\ref{Q12}) suggest that this
calculation demands a complete knowledge of $\eta_{+_2}$ which in
turn requires the calculation of $\langle{\rm x}|\eta_{+_2}|{\rm y}\rangle$ for
all 16 possibilities for the ranges of ${\rm x}$ and ${\rm y}$. This is an
extremely lengthy calculation in which one must deal with infinite
integrals of the form $\int_{-\infty}^\infty e^{iak}/k^n dk$ with
$n=2,3,4$.\footnote{These may be easily regularized as is well
known in typical field theory calculations.} We will not include
the result of this calculation here, not only because it is too
lengthy but most importantly because, as we will show in the
following, the knowledge of $\langle{\rm x}|\eta_{+_1}|{\rm y}\rangle$ turns out
to be sufficient for the calculation of ${\rm h}_2$. To see this we
first employ (\ref{id1}) to express ${\rm h}_2$ in the form
\begin{equation}
{\rm h}_2=\frac{1}{4}\,\left( 2[{\rm p}^2,Q_2]+i[\nu({\rm x}),Q_1]\right).
\label{id2}
\end{equation}
Now, we recall that ${\rm p}^2$, $Q_2$, $\nu({\rm x})$ and $Q_1$ are all
Hermitian operators. Therefore $[{\rm p}^2,Q_2]$ and $i[\nu({\rm x}),Q_1]$
are respectively anti-Hermitian and Hermitian. In view of
(\ref{id2}) and the Hermiticity of ${\rm h}_2$, this implies that
\begin{equation}
[{\rm p}^2,Q_2]=0.
\label{id2.5}
\end{equation}
Hence,
\begin{equation}
{\rm h}_2=\frac{i}{4}\,[\nu({\rm x}),Q_1]=\frac{i}{4}[\eta_{+_1},\nu({\rm x})],
\label{id3}
\end{equation}
where we have also made use of the first equation in (\ref{Q12}).
We should also mention that the identities~(\ref{id1}) and
(\ref{id2.5}) can be directly obtained from the pseudo-Hermiticity
condition (\ref{ph}) by substituting (\ref{exp}) in (\ref{ph}) and
using (\ref{cbh}) and (\ref{expand}).
We can easily use (\ref{eta-1-xy}) and (\ref{id3}) to obtain the
expression for the integral kernel of ${\rm h}_2$, namely
\begin{equation}
\langle{\rm x}|{\rm h}_2|{\rm y}\rangle={\mbox{\small$\frac{1}{32}$}}\,
(4+2|{\rm x}+{\rm y}|-|{\rm x}+{\rm y}+2|-|{\rm x}+{\rm y}-2|)\,
{\rm sign}({\rm x}-{\rm y})[\nu({\rm x})-\nu({\rm y})],
~~~~~~~~\forall {\rm x},{\rm y}\in\mathbb{R}.
\label{h2-xy}
\end{equation}
As seen from this equation,
\begin{equation}
\langle{\rm x}|{\rm h}_2|{\rm y}\rangle=0,~~~~{\rm if}~~~~
{\rm x}\notin [-1,1]~~{\rm and}~~{\rm y}\notin [-1,1].
\label{h2=zero}
\end{equation}
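Both (\ref{h2=zero}) and the Hermiticity of ${\rm h}_2$ can be seen directly from the kernel (\ref{h2-xy}): it is real and symmetric under ${\rm x}\leftrightarrow{\rm y}$, since ${\rm sign}({\rm x}-{\rm y})$ and $\nu({\rm x})-\nu({\rm y})$ flip sign together, and it vanishes whenever both arguments lie outside $[-1,1]$, where $\nu$ vanishes. A numerical confirmation in Python (illustrative only):

```python
import numpy as np

def nu(x):
    theta = lambda t: np.heaviside(t, 0.5)
    return theta(x + 1) + theta(x - 1) - 2 * theta(x)

def h2_kernel(x, y):
    """Kernel <x|h_2|y> from (h2-xy); real, so Hermiticity = symmetry."""
    return (4 + 2*abs(x + y) - abs(x + y + 2) - abs(x + y - 2)) \
        * np.sign(x - y) * (nu(x) - nu(y)) / 32

rng = np.random.default_rng(4)
x, y = (rng.uniform(-5, 5, 1000) for _ in range(2))
outside = (np.abs(x) > 1) & (np.abs(y) > 1)
assert np.all(h2_kernel(x, y)[outside] == 0)           # (h2=zero)
assert np.allclose(h2_kernel(x, y), h2_kernel(y, x))   # Hermitian
```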
We can express ${\rm h}_2$ as a function of the ${\rm x}$ and ${\rm p}$ by
performing a Fourier transformation on the ${\rm y}$ variable
appearing in (\ref{h2-xy}), i.e., computing
\begin{equation}
\langle{\rm x}|{\rm h}_2|{\rm p}\rangle:=(2\pi)^{-1/2}\int_{-\infty}^\infty
\langle{\rm x}|{\rm h}_2|{\rm y}\rangle \, e^{i{\rm p} {\rm y}}d{\rm y}.
\label{F-trans}
\end{equation}
This yields ${\rm h}_2$ as a function of ${\rm x}$ and ${\rm p}$, if we order
the factors by placing ${\rm x}$'s to the left of ${\rm p}$'s. We can
easily do this by expanding $\langle{\rm x}|{\rm h}_2|{\rm p}\rangle$ in powers of
${\rm p}$. Denoting the ${\rm x}$-dependent coefficients by $\omega_n$, we
then have
\begin{equation}
{\rm h}_2=\sum_{n=0}^\infty \omega_n({\rm x})\,{\rm p}^n,
\label{expand-h2}
\end{equation}
where we have made the implicit assumption that
$\langle{\rm x}|{\rm h}_2|{\rm p}\rangle$ is a real-analytic function of ${\rm p}$.
The Fourier transform of $\langle{\rm x}|{\rm h}_2|{\rm y}\rangle$ can be performed
explicitly.\footnote{One way of doing this is to use the integral
representations of the absolute value and sign function, as given
in (\ref{identities}), to perform the ${\rm y}$-integrations in
(\ref{F-trans}) and use the identities
\[ \int_{-\infty}^\infty \frac{e^{ia
u}du}{u(u-k)}=\frac{i\pi}{k}(e^{iak}-1)\,{\rm sign}(a),~~~~
\int_{-\infty}^\infty \frac{e^{ia
u}du}{u(u-k)^2}=\frac{i\pi}{k^2}[1+(iak-1)e^{iak}]\,{\rm sign}
(a),~~~~~~~\forall a,k\in\mathbb{R},\]
to evaluate the remaining two integrals. The resulting expression
is too lengthy and complicated to be presented here.} We have
instead used Mathematica to calculate $\langle{\rm x}|{\rm h}_2|{\rm p}\rangle$ and
found the coefficients $\omega_n$ for $n\leq 5$. It turns out that
indeed $\langle{\rm x}|{\rm h}_2|{\rm p}\rangle$ does not have a singularity at
${\rm p}=0$, and that $\omega_0, \omega_2,\omega_4$ are real and
vanish outside $(-3,3)$ while $\omega_1,\omega_3,\omega_5$ are
imaginary and proportional to $\theta({\rm x})-1/2$ outside $(-3,3)$.
As we will explain momentarily these properties are necessary to
ensure the Hermiticity of ${\rm h}$.
Figures~1, 2 and 3 show the plots of the real part of $\omega_n$ for
$n=0,2,4$ and the imaginary part of $\omega_n$ for $n=1,3,5$.
\begin{figure}[p]
\centerline{\epsffile{omega02.eps}}
\centerline{
\parbox{14cm}{\caption{Graph of the real part of $\omega_0$
(dashed curve) and $\omega_2$ (full curve).}\label{fig1}}}
\vspace{1cm}
\centerline{\epsffile{omega13.eps}}
\centerline{
\parbox{14cm}{\caption{Graph of the imaginary part of $\omega_1$
(dashed curve) and $\omega_3$ (full curve).}\label{fig2}}}
\end{figure}
\begin{figure}
\centerline{\epsffile{omega45.eps}}
\centerline{
\parbox{14cm}{\caption{Graph of the real part of $\omega_4$
(dashed curve) and the imaginary part of $\omega_5$
(full curve).}\label{fig3}}}
\end{figure}
As seen from these figures, (the absolute value of) $\omega_n$
sharply decreases with $n$, which suggests that a truncation of
(\ref{expand-h2}) yields a good approximation for the action of
${\rm h}_2$ on wave functions with bounded and sufficiently small
${\rm x}$-derivatives.
If we use $\langle{\rm p}|{\rm h}_2|{\rm x}\rangle=\langle{\rm x}|{\rm h}_2|{\rm p}\rangle^*$ to determine
the form of ${\rm h}_2$ and suppose that $\omega_{2n}({\rm x})$ are real
and $\omega_{2n+1}({\rm x})$ are imaginary for all $n=0,1,2,3,\cdots$,
we find
\[{\rm h}_2=\sum_{n=0}^\infty {\rm p}^n\omega_n({\rm x})^*=
\sum_{n=0}^\infty [{\rm p}^{2n}\omega_{2n}({\rm x})-
{\rm p}^{2n+1}\omega_{2n+1}({\rm x})].\]
Adding both sides of this relation to those of (\ref{expand-h2})
and dividing by two, we obtain
\begin{equation}
{\rm h}_2=\frac{1}{2}\,\sum_{n=0}^\infty
\{a_n({\rm x}),{\rm p}^{2n}\},~~~~~~~~~
a_n({\rm x}):=\omega_{2n}({\rm x})+i\omega'_{2n+1}({\rm x}),
\label{confine}
\end{equation}
where $\{\cdot,\cdot\}$ stands for the anticommutator, a prime
denotes a derivative, and we have made use of the identity:
$[f({\rm x}),{\rm p}^m]=\{if'({\rm x}),{\rm p}^{m-1}\}$. It is important to note
that because $\omega_{2n}({\rm x})$ are real and $\omega_{2n+1}({\rm x})$
are imaginary, $a_n({\rm x})$ are real. Moreover, outside $(-3,3)$,
$\omega_{2n}({\rm x})$, $\omega_{2n+1}'({\rm x})$, and consequently $a_n$
vanish. Therefore, we can express ${\rm h}_2$ in the manifestly
Hermitian form~(\ref{confine}) with all the ${\rm x}$-dependent
coefficient functions vanishing outside $(-3,3)$. Figure~4 shows
the plots of $a_n$ for $n=0,1,2$. They are all even functions of
${\rm x}$ with an amplitude of variations that decreases rapidly as
$n$ increases.
\begin{figure}[p]
\centerline{\epsffile{a0.eps}}
\vspace{1.5cm}
\centerline{\epsffile{a1.eps}}
\vspace{1.5cm}
\centerline{\epsffile{a2.eps}}
\centerline{
\parbox{14cm}{\caption{Graph of $a_0$, $a_1$, and $a_2$.}
\label{fig4}}}
\end{figure}
Next, we scale back the relevant quantities and use
(\ref{scale1}), (\ref{rh}), (\ref{h=h2}), and (\ref{confine}) to
obtain
\begin{equation}
h=\frac{p^2}{2m}+\frac{\zeta^2}{2}\sum_{n=0}^\infty
\{\alpha_n(x),p^{2n}\}+{\cal
O}(\zeta^3),~~~~~~~~~~~~~~~
\alpha_n(x):=2m \left(\mbox{$\frac{L}{2\hbar}$}\right)^{2(n+1)}
a_n(\mbox{\small $\frac{2x}{L}$ }).
\label{unscaled-h}
\end{equation}
In view of the fact that $a_n$ and $\alpha_n$ are real-valued even
functions, $h$ is a manifestly Hermitian ${\cal P}$- and ${\cal
T}$-symmetric Hamiltonian. We can also express it in the form
\begin{equation}
h=\frac{1}{4}\{m^{-1}_{\rm eff}(x),p^2\}+ w(x)+
\frac{\zeta^2}{2}\sum_{n=2}^\infty
\{\alpha_n(x),p^{2n}\}+{\cal
O}(\zeta^3),
\label{eff}
\end{equation}
where
\[ m_{\rm eff}(x):=\frac{m}{1+2m\zeta^2\alpha_1(x)},~~~~~~~~~~~~~
w(x):=\zeta^2\alpha_0(x).\]
Therefore, for low energy particles where one may neglect terms
involving 4th and higher powers of $p$, the Hamiltonian $h$ and
consequently $H$ describe motion of a particle with an effective
position dependent mass $m_{\rm eff}(x)$ that interacts with the
potential $w(x)$. Figure~5 shows a graph of $m_{\rm eff}(x)$ for
$m=1/2,\hbar=1,L=2$ and $\zeta=1/3$. For the same values of these
parameters, $w(x)=a_0(x)/9$. See Figure~4 for a graph of $a_0$.
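The rearrangement leading from (\ref{unscaled-h}) to (\ref{eff}) for the $n=0,1$ terms is a purely algebraic identity and can be confirmed with matrix stand-ins (random matrices replacing $p^2$, $\alpha_0$, $\alpha_1$; a toy check for illustration, not the actual operators):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, zeta = 6, 0.5, 1.0 / 3.0
anti = lambda A, B: A @ B + B @ A                       # anticommutator

P2 = rng.normal(size=(n, n)); P2 = P2 + P2.T            # stand-in for p^2
A0 = np.diag(rng.normal(size=n))                        # alpha_0(x)
A1 = np.diag(rng.normal(size=n))                        # alpha_1(x)

# n = 0, 1 terms of (unscaled-h):
lhs = P2 / (2 * m) + zeta**2 / 2 * (anti(A0, np.eye(n)) + anti(A1, P2))
# The same terms in the form (eff):
Minv = (np.eye(n) + 2 * m * zeta**2 * A1) / m           # 1 / m_eff(x)
rhs = anti(Minv, P2) / 4 + zeta**2 * A0                 # + w(x)
assert np.allclose(lhs, rhs)
```

The check works because $\frac{1}{4}\{m_{\rm eff}^{-1},p^2\}=\frac{p^2}{2m}+\frac{\zeta^2}{2}\{\alpha_1,p^2\}$ and $\frac{\zeta^2}{2}\{\alpha_0,1\}=\zeta^2\alpha_0=w$.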
\begin{figure}[ht]
\centerline{\epsffile{meffective.eps}}
\centerline{
\parbox{12cm}{\caption{Graph of the effective mass $m_{\rm eff}$
(full curve) for $m=\mbox{\small$\frac{1}{2}$},\hbar=1,L=2$ and
$\zeta=\mbox{\small$\frac{1}{3}$}$. The dashed curve represents
$m=\mbox{\small$\frac{1}{2}$}$.}\label{fig5}}}
\end{figure}
If we replace $(x,p)$ of (\ref{unscaled-h}) and (\ref{eff}) with
their classical counterparts $(x_c,p_c)$, we obtain the
`classical' Hamiltonian:
\begin{equation}
\tilde H_c=\frac{p_c^2}{2m}+\frac{\zeta^2}{2}\sum_{n=0}^\infty
\alpha_n(x_c)\:p_c^{2n}+{\cal O}(\zeta^3)=
\frac{p_c^2}{2m_{\rm eff}(x_c)}+w(x_c)+
\frac{\zeta^2}{2}\sum_{n=2}^\infty
\alpha_n(x_c)\:p_c^{2n}+{\cal O}(\zeta^3),
\label{class-H}
\end{equation}
which coincides with the free particle Hamiltonian outside the
{\em physical interaction region}, i.e., $(\mbox{\small
$-\frac{3L}{2}$},\mbox{\small $\frac{3L}{2}$})$. The fact that
this region is three times larger than the support $(\mbox{\small
$-\frac{L}{2}$},\mbox{\small $\frac{L}{2}$})$ of the potential
$v(x)$ is quite surprising. Note also that $\tilde H_c$ is an even
function of both the position $x_c$ and momentum $p_c$ variables.
Figure~6 shows the phase space trajectories associated with the
Hamiltonian $\tilde H_c$ for $L=2$, $\hbar=1$, $m=1/2$,
$\zeta=Z=1/3$.
\begin{figure}[ht]
\centerline{\epsffile{contour.eps}}
\centerline{
\parbox{12cm}{\caption{Phase space trajectories of the
Hamiltonian $\tilde H_c(x_c,p_c)$ for $m=\mbox{\small$\frac{1}{2}$},\hbar=1,L=2$ and
$\zeta=\mbox{\small$\frac{1}{3}$}$. The horizontal and vertical
axes are respectively those of $x_c$ and $p_c$.}\label{fig6}}}
\end{figure}
For large values of the momentum the trajectories are open curves
describing the scattering of a particle due to an interaction that
takes place within the physical interaction region, $(-3,3)$. For
sufficiently small values of the momentum closed trajectories are
generated. These describe a particle that is trapped inside the
physical interaction region. This is consistent with the fact that
for small $p_c$, $\tilde H_c$ is dominated by the potential term
$w(x_c)$ which in view of its relation to $a_0(x)$ and Figure~4
can trap the particle.
We wish to emphasize that because we have not yet taken the
$\hbar\to 0$ limit of $\tilde H_c$, we cannot identify it with the
true classical Hamiltonian $H_c$ for the quantum Hamiltonian
$h$ and consequently $H$. Given the limitations of our
perturbative calculation of $\tilde H_c$, we are unable to
determine this limit.\footnote{This is in contrast with both the
${\cal PT}$-symmetric square well and the ${\cal PT}$-symmetric
cubic anharmonic oscillator studied in \cite{jpa-2004c} and
\cite{p64}, respectively. In the former system the presence of an
exceptional spectral point imposes the condition that $\zeta$ must
be of order $\hbar^2$ or higher and consequently the classical
system is the same as that of the Hermitian infinite square well
\cite{jpa-2004c}. In the latter system, the $\hbar\to 0$ limit of
the associated Hermitian Hamiltonian can be easily evaluated and
classical Hamiltonian obtained \cite{p64}.} Therefore, we cannot
view the presence of closed phase space trajectories for $\tilde
H_c$ as evidence for the existence of bound states of $h$ and
$H$. This is especially because these trajectories are associated
with very low momentum values where the quantum effects are
expected to be dominant.
\section{Conclusion}
In this paper we explored for the first time the utility of the
methods of pseudo-Hermitian quantum mechanics in dealing with a
non-Hermitian ${\cal PT}$-symmetric potential $v(x)$ that has a
continuous spectrum. Using these methods we were able to obtain
the explicit form of the metric operator, the pseudo-Hermitian
position and momentum operators, the localized states, and the
equivalent Hermitian Hamiltonian perturbatively.
Our analysis revealed the surprising fact that the physical
interaction region for this model is three times larger than the
support of the potential, i.e., there is a region of the
configuration space in which $v(x)$ vanishes but the interaction
does not cease.
A simple interpretation for this peculiar property is that the
argument $x$ of the potential $v(x)$ is not a physical observable
and the support $(\mbox{\small $-\frac{L}{2}$},\mbox{\small
$\frac{L}{2}$})$ of $v(x)$ being a range of eigenvalues of $x$
does not have a direct physical meaning. This observation
underlines the importance of the Hermitian representation of
non-Hermitian (in particular ${\cal PT}$-symmetric) Hamiltonians
having a real spectrum.
The Hermitian representation involves a nonlocal Hamiltonian that
is not suitable for the computation of the energy spectrum or the
$S$-matrix of the theory. Yet it provides invaluable insight into
the physical meaning and potential applications of
pseudo-Hermitian and ${\cal PT}$-symmetric Hamiltonians and is
indispensable for the determination of the other observables of
the corresponding quantum systems.
\section{Introduction}
This is the third in a series of
articles \cite{DSZ1, DSZ2} (see also \cite{Z2}) by the authors on
statistics of critical points of
random holomorphic sections and their applications to
the vacuum selection problem in string/M theory. We recall that,
in these articles, a `vacuum' in string theory
is a Calabi-Yau manifold of complex dimension $d = 3$ which forms the $6$ `small
dimensions' of the $10$-dimensional universe, together with a choice of
orientifolding and flux. Mathematically,
vacua are critical points of a {\it superpotential} $W$, a
holomorphic section of a line bundle ${\mathcal L} \to \mathcal{C}$ over
the configuration space $\mathcal{C}$ which will be recalled in \S
\ref{BGR}. The `vacuum selection problem' is that there exists no
principle at present which selects a unique superpotential, nor a
unique critical point of a given superpotential, out of a large
ensemble of possible vacua. This motivates the program of
studying statistics
of vacua, whose basic problems are to
count the number of vacua satisfying physically natural
constraints and to determine how they are distributed in $\mathcal{C}$
(see
\cite{Doug, DD, AD, DGKT, KL, Sil}). In this article, we present
the first rigorous results on counting vacua with remainder
estimates. In particular, we justify and improve on the approximations made in
\cite{DD}.
Our previous articles \cite{DSZ1, DSZ2} were devoted to the
statistics of critical points of Gaussian random holomorphic
sections of line bundles over complex manifolds. The principal
issue we face in this article is that the physically relevant
ensembles of superpotentials are not Gaussian but rather are
discrete ensembles of `quantized flux' superpotentials which form
a set of lattice points in a hyperbolic shell in $H^3(X, {\mathbb C})$.
This hyperbolic shell is defined by the inequality (known as the
{\it tadpole constraint})
\begin{equation} \label{TC} 0 \leq Q[\phi] \leq L,\;\;\;
\end{equation} where
\begin{equation} \label{HRFORM} Q[\phi]= Q(\phi,\bar\phi) = -\sqrt{-1}\, \int_X \phi
\wedge
\bar{\phi} \end{equation} is the Hodge-Riemann bilinear form.
As will be recalled in \S \ref{IP}, $Q$ is an indefinite
quadratic form, whose `null cone' $\{G: Q[G] = 0 \}$ is a real
quadric hypersurface which separates $H^3(X, {\mathbb C})$ into the
interior $\{G : Q[G] > 0\}$ and the exterior where $Q[G] < 0$. As
will be seen below (Propositions \ref{POS} and \ref{HRPLUS}), only
flux superpotentials corresponding to lattice points in
$\{G : Q[G] > 0\}$ contribute vacua, and that is why we consider
the shell (\ref{TC}).
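Concretely, writing $G=F+iH$ with real $F,H$ and using $\phi\wedge\phi=0$ for $3$-forms, one finds $Q[G]=-\sqrt{-1}\int_X(F+iH)\wedge(F-iH)=\pm 2\int_X F\wedge H$, i.e. (up to sign and normalization conventions, which vary in the literature) twice the intersection pairing of $F$ and $H$. The following toy Python sketch, with a rank-$4$ stand-in for $H^3(X,{\mathbb Z})$ (an assumption for illustration; the actual lattice has rank $b_3$), exhibits the indefiniteness and the null directions:

```python
import numpy as np

# Intersection matrix in a symplectic basis of a rank-4 toy lattice:
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])

def Q(F, H):
    # Hedged convention: Q[F + iH] = 2 * int_X F ^ H = 2 F^T J H (up to sign)
    return 2 * F @ J @ H

F = np.array([1.0, 0.0, 0.0, 0.0])
H = np.array([0.0, 0.0, 1.0, 0.0])
assert Q(F, H) > 0          # interior of the null cone: contributes vacua
assert Q(H, F) < 0          # indefinite: the exterior is also nonempty
assert Q(F, F) == 0         # G = (1+i)F lies on the null cone
```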
Our main results show that as $L \to \infty$, the statistics of
critical points relative to the discrete lattice ensemble is well
approximated by the statistics of critical points relative to the
continuum ensemble in the shell, which is dual to the Gaussian
ensembles of \cite{DSZ1, DSZ2} and is therefore well understood.
Thus, the vacuum statistics problem in string/M theory is a
mixture of two kinds of equidistribution problems:
\begin{enumerate}
\item The distribution of radial projections of lattice points
onto a quadric hypersurface;
\item The distribution of critical points of a continuous ensemble
of random holomorphic sections (related to a Gaussian ensemble) of
a negative line bundle, and their interpretation in the special
geometry of Calabi-Yau moduli spaces.
\end{enumerate}
The equidistribution problem in (2) is analyzed in detail in
\cite{DSZ1, DD}, so the main purpose of this paper is to analyze
(1) and to combine it with the previous analysis of (2).
At the end of this article in \S \ref{FCFP} and in \cite{Z2}, we
compare the mathematical results of this article to discussions of
vacua in the string theory literature.
\subsection{\label{BGR}Background to the results}
To state our results, we will need some notation (see \S \ref{CY}
for more details). The models we consider in this article are
called type IIb flux compactifications \cite{GVW, GKP}. We fix a
complex $3$-dimensional Calabi-Yau manifold $X$, i.e. a complex
manifold with trivial canonical bundle $K_X \simeq \mathcal{O}$ and with
first Betti number $b_1(X)=0$. In some of the physics literature,
it is also assumed that $H^{2,0}(X)=0$, but our results hold
without this assumption. For each complex structure $z$ on $X$,
there is a corresponding Hodge decomposition
\begin{equation} \label{HD} H^3(X, {\mathbb C}) = H^{3,0}_z(X) \oplus H^{2,1}_z(X)\oplus
H^{1,2}_z(X)\oplus H^{0, 3}_z(X). \end{equation} The space
$H^{3,0}_z(X)$ of $(3, 0)$-forms relative to $z$ is
one-dimensional and is spanned by a nowhere vanishing holomorphic
volume form $\Omega_z.$ We also put $b_3 =b_3(X)= \dim H^3(X, {\mathbb R})$,
$h^{p,q} =h^{p,q}(X) = \dim_{{\mathbb C}} H^{p,q}(X)$. Thus, $b_3 = 2(h^{2,1} + 1)$.
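The relation $b_3=2(h^{2,1}+1)$ just combines $h^{3,0}=h^{0,3}=1$ with the Hodge symmetry $h^{2,1}=h^{1,2}$. For instance, for the quintic threefold one has $h^{2,1}=101$, giving $b_3=204$; trivially, in Python:

```python
def b3(h21):
    # b_3 = h^{3,0} + h^{2,1} + h^{1,2} + h^{0,3} = 1 + h21 + h21 + 1
    return 2 * (h21 + 1)

# The quintic threefold has h^{2,1} = 101 (a standard example):
assert b3(101) == 204
```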
When we speak of vacua of string theory compactified on the
Calabi-Yau space $X$, we refer to classical vacua of the effective
supergravity theory it determines. As discussed in \cite{St2}, the
effective supergravity Lagrangian is derived by `integrating out'
or neglecting the massive modes (positive
eigenvalues) of various operators. The data of effective supergravity consists
of $(\mathcal{C}, \mathcal{L}, W)$ where:
\begin{enumerate}
\item $\mathcal{C}$ is the configuration space;
\item $\mathcal{L} \to \mathcal{C}$ is a holomorphic line bundle.
\item the superpotential $W$ is a holomorphic section of $\mathcal{L}$.
\end{enumerate}
In type IIb flux compactifications the configuration space is the
moduli
space of Calabi-Yau (Ricci-flat K\"ahler) product metrics on $X \times T^2$. At the time of this writing,
the study of vacua in string theory is simplified by
replacing the moduli space of Calabi-Yau metrics by the moduli space of complex
structures on $X$ (see e.g. \cite{Doug, AD}). In the case where
$h^{2,0}(X)=0$, this is equivalent to fixing the K\"ahler class
$[\omega]\in H^2(X,{\mathbb R})$ of the Calabi-Yau metrics. Hence we define
the configuration space to be
\begin{equation} \label{CCAL} \mathcal{C} = \mathcal{M} \times \mathcal{E}, \end{equation}
where $\mathcal{M}$ is the moduli space of complex structures on $X$ and
where $\mathcal{E} = \mathcal{H}/ SL(2, {\mathbb Z})$ is the moduli space of elliptic
curves. Throughout this paper we identify $\mathcal{C}=\mathcal{M}\times
\mathcal{E}$ with a fundamental domain $\mathcal{D}$ for the modular group
$\Gamma$ in the Teichm\"uller space $\mathcal{T}\! eich(X)\times\mathcal{H}$
of complex structures (see \S\ref{GCY}). For simplicity of
exposition,
we refer to restrictions to $\mathcal{D}$ of holomorphic
objects on $\mathcal{T}\! eich(X)\times\mathcal{H}$ as holomorphic objects
over $\mathcal{C}$.
The line bundle $\mathcal{L}$ is defined to be the dual line bundle to
the Hodge bundle $H^{3,0}(X) \otimes H^{1,0}(T^2) \to \mathcal{C}$,
where $T^2 = {\mathbb R}^2/{\mathbb Z}^2$. We give $\mathcal{C}$ the {\it Weil-Petersson
K\"ahler form\/} $\omega_{WP}$ induced from the Weil-Petersson metric
on $\mathcal{L}$ (see \S \ref{SUSYCRIT}). To be precise, $\mathcal{L}$ is a
holomorphic line bundle over $\mathcal{T}\! eich(X)\times\mathcal{H}$, and
$W$ is a holomorphic section of $\mathcal{L}$ over $\mathcal{T}\! eich(X)\times\mathcal{H}$. But
as mentioned above, by holomorphic sections $W\in
H^0(\mathcal{C},\mathcal{L})$ we mean restrictions to $\mathcal{D}$ of holomorphic
sections of $H^0(\mathcal{T}\! eich(X)\times\mathcal{H}, \mathcal{L}).$
Type IIb flux compactifications
contain two non-zero harmonic $3$-forms $F, H \in H^3(X, {\mathbb Z})$
which are known respectively as the RR (Ramond-Ramond) and NS
(Neveu-Schwarz) $3$-form field strengths. We combine them into a
complex flux $G = F + i H \in H^3(X, {\mathbb Z} \oplus i {\mathbb Z})$. The
parameter $\tau \in \mathcal{E}$ is known as the dilaton-axion and may
be viewed as the period of $\omega_{\tau} = dx + \tau dy$ over
the one-cycle dual to $dy$ in $T^2$. Given $G \in H^3(X, {\mathbb Z}
\oplus \sqrt{-1} {\mathbb Z}),$ physicists define the corresponding flux
superpotential $W_G$ by:
\begin{equation} \label{WSUBG} W_{G}(z, \tau) = \int_X ( F + \tau H) \wedge
\Omega_{z}, \end{equation} where $\Omega_z \in H^{3, 0}(X)$. This
is not well-defined as a function on $\mathcal{C}$ since $\Omega_z$ and
$\tau$ depend on a choice of frame. To be more precise, $G \in
H^3(X, {\mathbb C})$ determines a section $W_{G}$ of the line bundle
$$\mathcal{L}=(H^{3,0}(X) \otimes H^{1,0}(T^2))^* \to \mathcal{T}\! eich(X)\times\mathcal{H}$$
by making $G$
into the following linear functional on $H^{3,0}_z (X) \otimes
H^{1,0}_{\tau}(T^2):$
\begin{equation} \label{WGG} \langle W_{G} (z, \tau), \Omega_{z} \otimes \omega_{\tau} \rangle
= \int_{X \times T^2 } (F \wedge dy - H \wedge dx) \wedge (
\Omega_{z} \wedge \omega_{\tau}).
\end{equation}
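To see that (\ref{WGG}) reproduces the physicists' formula (\ref{WSUBG}), substitute $\omega_\tau = dx + \tau\, dy$ and use $dx \wedge dx = dy \wedge dy = 0$ together with the fact that $dx$ and $dy$ anticommute with the odd-degree forms $F$, $H$ and $\Omega_z$:
\begin{eqnarray*}
(F \wedge dy - H \wedge dx) \wedge (\Omega_{z} \wedge \omega_{\tau})
&=& F \wedge dy \wedge \Omega_z \wedge dx - \tau\, H \wedge dx \wedge \Omega_z \wedge dy \\
&=& \big( (F + \tau H) \wedge \Omega_z \big) \wedge dx \wedge dy\,,
\end{eqnarray*}
so that integrating over the $T^2$ factor, where $\int_{T^2} dx \wedge dy = 1$, gives $\langle W_{G} (z, \tau), \Omega_{z} \otimes \omega_{\tau} \rangle = \int_X (F + \tau H) \wedge \Omega_z$, which is (\ref{WSUBG}) in the frame $(\Omega_z, \omega_\tau)$.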
The map $G\to W_{G}$ defines an injective real (but not complex) linear map
which embeds complex integral fluxes
\begin{equation} H^3(X, {\mathbb Z} \oplus \sqrt{-1} {\mathbb Z}) \to H^0(\mathcal{C},
\mathcal{L}) \end{equation} as a lattice of rank $2 b_3$ in $H^0(\mathcal{M}
\times \mathcal{E}, \mathcal{L})$ which we call the lattice
$\mathcal{S}^{{\mathbb Z}}$ of {\it integral flux
superpotentials}. The real span \begin{equation} \mathcal{S} = {\mathbb R}
\mathcal{S}^{{\mathbb Z}} \subset H^0(\mathcal{C}, \mathcal{L}) \end{equation} of
$\mathcal{S}^{{\mathbb Z}}$ is also important, and will be referred to as the space
of {\it flux superpotentials}. We emphasize here that $\mathcal{S}$ is
not a complex vector space, nor are any of the associated spaces
discussed below. We also use the (real-linear) map $G\mapsto W_G$ to
regard
$Q$ as a quadratic form on
$\mathcal{S}$, writing
\begin{equation} \label{QW} Q[W_G]:=Q[G]= -\sqrt{-1}\int_X G\wedge\overline G =
2\int_X F\wedge H\;,\qquad G=F+iH\in H^3(X,{\mathbb C})\;.\end{equation}
The bundles $H_{z}^{3,0} \to \mathcal{M}$ and $H_{\tau}^{1, 0} \to
\mathcal{E}$ carry Weil-Petersson Hermitian metrics $h_{WP}$ defined
by
\begin{equation}\label{HWP} h_{WP}(\Omega_{z}, \Omega_{z}) = e^{-K(z, \bar{z})} =
i\int_X
\Omega_{z} \wedge \overline{\Omega}_{z},\end{equation} and their
associated Chern connections $\nabla_{WP}$. They induce dual
metrics and connections on $\mathcal{L}$. We denote the connection
simply by $\nabla$.
\subsection{Statement of the problem}
Given a flux superpotential $W$, there is an associated potential
energy on $\mathcal{C}$ defined by
\begin{equation} \label{V} V_W(Z) = |\nabla W(Z)|^2 - 3 |W(Z)|^2.
\end{equation}
(See \cite{WB} for background on $V$).
By a vacuum we mean a critical point of $V_W$ on $\mathcal{C}$.
In this paper, we only study supersymmetric vacua,
namely $Z\in \mathcal{C}$ which are connection critical points in the
sense that $\nabla_{WP} W(Z) = 0.$ We denote the set of
supersymmetric vacua of $W$ by
\begin{equation}\label{CRITSET} Crit(W) = \{Z \in \mathcal{C}: \nabla_{WP} W(Z) = 0\}.
\end{equation}
Our goal is thus to count and find the distribution law of the
supersymmetric vacua
\begin{equation} \label{VACUAL} \{\mbox{SUSY vacua}\} =
\bigcup_{\textstyle G\in\mathcal{S}^{\mathbb Z}: Q[G]
\leq L} Crit (W_G)
\end{equation} as $W_{G}$ varies over the lattice $\mathcal{S}^{{\mathbb Z}}$
within the hyperbolic shell (\ref{TC}). To define the distribution
law, we introduce the {\it incidence relation}
\begin{equation} \label{ICAL} \mathcal{I} = \{(W_G, Z) \in \mathcal{S} \times \mathcal{C}: \nabla W_G (Z) = 0 \}.
\end{equation}
We shall view $\mathcal{C}$ as a fundamental domain for the modular
group $\Gamma$ in Teichm\"uller space (cf. \S \ref{CY}). The
incidence variety $\mathcal{I}$ is then a real $2m$-dimensional
subvariety of $\mathcal{C}\times \mathcal{S}$ with the following diagram
of projections:
\begin{equation}\label{DIAGRAM} \begin{array}{ccccc} &\hspace{-.4in}
\mathcal{I}& \hspace{-.3in}
\subset \mathcal{C} \times \mathcal{S} & \\
\rho \swarrow & \searrow \pi & \\ \mathcal{C} & \mathcal{S}
\end{array} \end{equation} The fiber $\pi^{-1}(W)$ is the set
$Crit(W)$ of critical points of $W$ in $\mathcal{C}$. Since $\mathcal{C}$ is
regarded as a fundamental domain in Teichm\"uller space, the map
$\pi$ is not surjective: there exist $W$ with no critical points
in $\mathcal{C}$; hence $\pi(\mathcal{I})$ is a domain with boundary in
$\mathcal{S}$ (see \S \ref{EXZERO}). Critical points can move out of
$\mathcal{C}$ as $W$ varies in $\mathcal{S}$. (There is a similar but more
complicated theory of non-supersymmetric vacua \cite{DD2}.)
The fibers of $\rho$ are the
subspaces
\begin{equation} \label{QZ} \mathcal{S}_Z : = \{W \in \mathcal{S}: \nabla_{WP} W(Z) = 0
\},
\end{equation}
which play a crucial role in this article. They have the
remarkable Hodge theoretic identifications,
\begin{equation}\label{FZHODGE} \mathcal{S}_{z,\tau} \equiv H^{2,1}_z(X)
\oplus H^{0, 3}_z(X)\quad (\mbox{Proposition}\;\; \ref{POS}).
\end{equation}
It then follows (see Proposition \ref{H3XR2}) that
$\mathcal{I}\buildrel{\rho} \over \to \mathcal{C}$ is a vector bundle (with
fiber $\approx{\mathbb C}^{b_3/2}$) over a manifold with boundary. Another
key point is that
the restrictions of $Q$ to the fibers are always positive
definite:
\begin{equation}\label{QZPOS} Q |_{ H^{2,1}_z(X) \oplus H^{0, 3}_z(X)}
\gg 0 \quad (\mbox{Proposition}\;\; \ref{HRPLUS}), \end{equation}
i.e. $\mathcal{S}_Z$ lies in the positive cone
$\{Q(\phi,\overline{\phi}) > 0\}$ of the indefinite quadratic
(Hodge-Riemann) form
(\ref{HRFORM}) (cf. \S \ref{IP}).
We now define the {\it discriminant locus} $$\widetilde\mathcal{D}=\{(Z,W)\in
\mathcal{I}:\det H^c W(Z)=0\} $$ of points $(Z,W)\in\mathcal{I}$ such that
$Z$ is a degenerate critical point of $W$, where $H^c W(Z)$ is the
{\it complex Hessian\/} of $W$ at the critical point $Z$ as
defined in (\ref{HmatriX})--(\ref{H''}). Equivalently, $\widetilde\mathcal{D}$
is the set of critical points of the second projection
$\mathcal{I}\buildrel{\pi} \over \to \mathcal{S}$ together with the singular
points of $\mathcal{I}$. Its image $\mathcal{D} = \pi(\widetilde \mathcal{D})$ under $\pi$
is the discriminant variety of superpotentials with degenerate
critical points.
For each $W \in \mathcal{S}\smallsetminus\{0\}$, we
define its distribution of (non-degenerate) critical points as the measure $C_W$ on
$\mathcal{I}\smallsetminus\widetilde\mathcal{D}$ defined by
\begin{equation} \langle C_W, \psi \rangle = \sum_{Z \in Crit(W)} \psi(Z,
W),\end{equation} for $\psi \in \mathcal{C}(\mathcal{I})$ such that
$\rho({\operatorname{Supp\,}}\psi)$ is relatively compact in $\mathcal{C}$ and ${\operatorname{Supp\,}}\psi$
is disjoint from $\widetilde\mathcal{D}$. A more general definition of $C_W$ is
\begin{equation}\label{CWintro} C_W = |\det H^c W(Z)| \;\; \nabla W^* \delta_0
\end{equation}
which will be discussed in \S \ref{HDCP}. We make these
assumptions on $\psi$ so that the sum on the right side is
finite and well-defined. Indeed, the pull-back is not
well-defined (without further work) on $\widetilde \mathcal{D}$. We will say
more about $\widetilde \mathcal{D}$ after the statement of Theorem \ref{MAIN}.
The basic sums we study are:
\begin{eqnarray}
\mathcal{N}_{\psi}(L) & = & \sum \big\{\langle C_N, \psi
\rangle :N \in \mathcal{S}^{\mathbb Z} ,\ Q[N] \leq L\big\} \nonumber \\
& = & \sum \big\{\psi(Z, N):{ (Z,N) \in \mathcal{I},\ N\in\mathcal{S}^{\mathbb Z},\ 0 \leq Q[N]
\leq L }\big\}\;.\label{Nsum}
\end{eqnarray} For instance, when $\psi \equiv \chi_{K}$ is the characteristic
function of a compact subset $K \subset \subset \mathcal{I} \smallsetminus \widetilde
\mathcal{D}$, $\mathcal{N}_{\psi}(L)$ counts the total number of non-degenerate
critical points lying over $\rho(K)$ coming from all integral
flux superpotentials with $Q[W] \leq L$. Physicists are
naturally interested in counting the number of vacua with close to
the observed values of the cosmological constant and other
physical quantities, and hence would study sums relevant to such
quantities. For instance, the {\it cosmological constant} of the
theory defined by a vacuum $Z$ is the value $V_W(Z)$ of the
potential there (see \cite{DD}, \S 3.3). Thus, we may state the
main problem of this paper:
\begin{prob} \label{CRITPROB} Find the asymptotics and remainder for
$\mathcal{N}_{\psi}(L)$ as $L \to \infty. $ \end{prob}
As indicated above, this problem is very closely related to the
pure lattice point problem of measuring the rate of uniform
distribution of radial projections of lattice points onto a
quadric hypersurface. More generally, one could
consider any smooth strictly convex set $Q\subset {\mathbb R}^n$ ($n\ge
2)$ with $0\in Q^\circ$. Associated to $Q$ is the norm $|X|_Q$ of
$X\in {\mathbb R}^n$ defined by
$$Q=\{X\in{\mathbb R}^n:|X|_Q<1 \}\,.$$ To measure the equidistribution
of radial projections of lattice points to $\d Q$, one considers
the sums \begin{equation} \label{SF} S_f (t) = \sum_{k \in
{\mathbb Z}^n\cap tQ\smallsetminus\{0\}} f\left(\frac{k}{|k|_Q}\right), \quad
\mbox{with } \ f \in C^{\infty}(\d Q),\ t>0. \end{equation}
The
parallel lattice point problem is then
\begin{prob} \label{LATTICEPROB} Find the asymptotics and remainder for
$S_f(t)$ as $t \to \infty. $ \end{prob}
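As an informal numerical illustration of Problem \ref{LATTICEPROB} (a sketch, not part of the proofs; the choice $n=2$ with $Q$ the unit disk and the particular test function are our own), one can tabulate $S_f(t)$ and compare it with the leading term $\frac{t^2}{2}\int_{S^1} f\, d\theta$:

```python
import numpy as np

def S_f(t, f):
    """S_f(t) = sum of f(k/|k|_Q) over nonzero k in Z^2 with |k|_Q <= t,
    where Q is the unit disk, so |.|_Q is the Euclidean norm."""
    r = int(t)
    xs, ys = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    k = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    norms = np.linalg.norm(k, axis=1)
    mask = (norms > 0) & (norms <= t)
    theta = np.arctan2(k[mask, 1], k[mask, 0])   # angular coordinate on S^1 = dQ
    return f(theta).sum()

f = lambda th: np.cos(th) ** 2                   # a smooth test function on S^1
for t in (50.0, 100.0, 200.0):
    leading = 0.5 * t**2 * np.pi                 # (t^2/2) * int_0^{2pi} cos^2 = pi t^2/2
    print(t, S_f(t, f) / leading)                # ratios approach 1 as t grows
```

The rate at which these ratios approach $1$ is precisely what the remainder estimates of van der Corput type quantify.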
\subsection{Statement of the results} In
Theorem \ref{localvdC}, we obtain a van der Corput type estimate for the
lattice point problem \ref{LATTICEPROB}. For the critical point
problem, we first give an elementary formula based on a trivial
lattice counting estimate (useful because it is sometimes sharp),
in which the remainder term is simply a count of the lattice cubes
that intersect the boundary. We denote by
$\chi_{Q_Z}$ the characteristic function of the shell $\{W\in\mathcal{S}_Z:0 <
Q_Z[W] < 1\}
$.
\begin{prop} \label{LNMO} Suppose that $\psi = \chi_K$
where $K\subset\mathcal{I}$ is such that $(Z,W)\in K \Leftrightarrow
(Z,rW) \in K$ for $r\in{\mathbb R}^+$. Assume further that $\rho(K)$ is
relatively compact in $\mathcal{C}$ and $\pi(\d K)$ is piecewise smooth.
Then
$$\mathcal{N}_{\psi}(L) = L^{b_3}\left[
\int_{\mathcal{C}}
\int_{\mathcal{S}_{Z}} \psi(Z,W)\,|\det H^c W(Z)|
\chi_{Q_{Z}}(W) \,dW\,d{\operatorname{Vol}}_{WP}(Z) + O\left(L^{-1/2} \right)\right].
$$
\end{prop}
Here and in Theorem \ref{MAIN} below, $dW$ means the multiple of
Lebesgue measure on $\mathcal{S}_Z$ which gives the volume form for the
positive-definite quadratic form $Q_Z=Q|_{\mathcal{S}_Z}$.
We note that the
integral converges, since by (\ref{QZPOS}), $\{Q_Z \leq 1\}$ is
an ellipsoid of finite volume.
It would be interesting to know if the remainder estimate is sharp
for any domain $K \subset \mathcal{I}$. In the pure lattice point
Problem \ref{LATTICEPROB}, the corresponding `trivial estimate' is
sharp. For instance, consider the domain $K = S^{n-1}_+ \subset
S^{n-1}$ formed by the northern hemisphere and put $\psi =
\chi_K$. Then the remainder term
$$\sum_{k \in {\mathbb Z}^n, |k|\leq \sqrt{L}} \chi_K\left(\frac{k}{|k|}\right)
- L^{\frac{n}{2}} \int_K dA
$$
reflects the concentration of projections of lattice points on the
boundary $\d S^{n-1}_+$, namely a great equatorial sphere. When
the equator is defined by $x_{n}= 0$, the lattice points
projecting over the equator are the lattice points in ${\mathbb Z}^{n-1}
\subset {\mathbb R}^{n-1}$ and the number with $|k| \leq \sqrt{L}$ is of
size $\sim L^{\frac{n-1}{2}}.$ Analogously one may ask if there
are domains $K \subset \mathcal{C}$ along which critical points
concentrate to the same maximal degree. Some evidence that the
answer is `no' will be presented in \S \ref{LNMOPROOF}.
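The equatorial concentration is easy to see numerically (again a toy sketch of our own, with $n = 3$): the lattice points of ${\mathbb Z}^3$ in the ball of radius $\sqrt{L}$ lying in the plane $k_3 = 0$ all project onto the equator, and their number grows like $\pi L = \pi L^{\frac{n-1}{2}}$:

```python
import numpy as np

def equatorial_count(L):
    """Number of nonzero k in Z^3 with |k| <= sqrt(L) and k_3 = 0: these are
    exactly the lattice points whose radial projections land on the equator
    of S^2, i.e. the points of the planar disk of radius sqrt(L) in Z^2."""
    R = int(np.sqrt(L))
    xs, ys = np.meshgrid(np.arange(-R, R + 1), np.arange(-R, R + 1))
    inside = (xs**2 + ys**2 <= L) & ((xs != 0) | (ys != 0))
    return int(inside.sum())

for L in (10**2, 10**3, 10**4):
    print(L, equatorial_count(L) / L)   # tends to pi, so the count is ~ pi * L^{(n-1)/2}
```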
Our main result stated below is a much sharper van der Corput
type asymptotic estimate of $\mathcal{N}_{\psi}(L)$ as $L \to \infty$
for homogeneous test functions which vanish near the discriminant
locus. Here, we say that a function $\psi\in\mathcal{C}(\mathcal{I})$ is
homogeneous of order $\alpha$ if
$$\psi(Z,rW) = r^\alpha\psi(Z,W),\qquad (Z,W)\in \mathcal{I},\ r\in{\mathbb R}^+\;.$$
We consider homogeneous functions since they include (smoothed)
characteristic functions as well as the cosmological constant
(which is homogeneous of degree $2$).
\begin{theo}\label{MAIN} Let $\psi\in\mathcal{C}^\infty(\mathcal{I})$ be homogeneous of order
$\alpha\ge 0$ and suppose that $\rho({\operatorname{Supp\,}}\psi)$ is a compact subset of
$\mathcal{C}$ and ${\operatorname{Supp\,}}\psi
\cap\widetilde\mathcal{D}=\emptyset$. Then
$$\mathcal{N}_{\psi}(L) = L^{b_3+\alpha/2}\left[\int_{\mathcal{C}}
\int_{\mathcal{S}_{Z}}\psi(Z,W)\, |\det H^c W(Z)|
\, \chi_{Q_{Z}}(W) \,dW\,d{\operatorname{Vol}}_{WP}(Z) +
O\left(L^{-\frac{2b_3}{2b_3+1}}\right)\right].
$$
\end{theo}
It is reasonable to make the assumption ${\operatorname{Supp\,}}\psi
\cap\widetilde\mathcal{D}=\emptyset$, because degenerate critical points
cannot be physically acceptable vacua in string/M theory. Indeed,
the Hessian of $W$ at a critical point defines the `fermionic
mass matrix' of the theory, and a degenerate critical point would
give rise to massless fermions which are not observed in physics.
(See \cite{WB} for definitions of the mass matrix.)
Let us note some key features of the geometry of $\widetilde \mathcal{D}$ which
play a role in the assumptions (and proofs) of Proposition
\ref{LNMO} and Theorem \ref{MAIN}.
First, as observed in \cite{DSZ1,DSZ2}, its defining
equation
\begin{equation}\label{detHc}\det H^c W(Z) = \det(H^*H-|W|^2I) =
0\end{equation} is real valued; here, $H$ is the holomorphic Hessian (see
\S
\ref{CRITHESS}). Hence, $\widetilde \mathcal{D} \subset \mathcal{I}$ is a real
analytic
hypersurface (with boundary). For test functions $\psi$ which do not vanish on $\widetilde \mathcal{D}$,
the expression $\langle C_W, \psi \rangle$ (when well-defined) can jump
as one passes from one component of $\mathcal{S}\smallsetminus \mathcal{D}$ to another
or across the boundary of $\mathcal{C}$.
It follows from \eqref{detHc} that
$\widetilde \mathcal{D} \cap (\{Z\} \times \mathcal{S}_Z)$ is a real conic hypersurface
for all
$Z\in\mathcal{C}$. Thus $\widetilde \mathcal{D} \to \mathcal{C}$ is a bundle of
conic hypersurfaces and $\rho(\widetilde \mathcal{D}) = \mathcal{C}$; i.e., every point
of moduli space is a degenerate critical point of some superpotential.
We further note that $\mathcal{S}
\smallsetminus \mathcal{D}$ consists of a finite number of connected components,
and that
$\pi:\mathcal{I}\smallsetminus\widetilde\mathcal{D}\to \pi(\mathcal{I})\smallsetminus \mathcal{D}$ is
a finite covering over each connected component of $\pi(\mathcal{I}) \smallsetminus
\mathcal{D}$.
\subsection{\label{SGCPD}Special geometry and critical point density}
In obtaining reliable order of magnitude results on numbers of
vacua in a given string/M
model, it is important to estimate the size of the leading coefficient
$$ \int_{\mathcal{C}} \psi(Z)
\int_{\mathcal{S}_{Z}} |\det H^c W(Z)|
\chi_{Q_{Z}}(W) \,dW\,d{\operatorname{Vol}}_{WP}(Z) $$
and of the remainder. Since little is known about the volume of
$\mathcal{C}$ at present (cf. \cite{LuS1}), we concentrate on estimating the integrand
\begin{equation} \label{LEADDEN} \mathcal{K}^{\operatorname {crit}}(Z): = \int_{\mathcal{S}_{Z}}
|\det H^c W(Z)|
\chi_{Q_{Z}} dW \end{equation}
in the $b_3$ aspect. It is also important to study the behavior
of the $\mathcal{K}^{\operatorname {crit}}(Z)$ as $Z$ tends to `infinity' in $\mathcal{C}$,
or to a singular point such as a conifold point (when one
exists).
A key feature of $\mathcal{K}^{\operatorname {crit}}(Z)$ is that it is the integral of
a homogeneous function of order $b_3$ over a space of dimension
$\dim_{\mathbb R}\mathcal{S}_Z= b_3 = 2 (h^{2,1} + 1) $. Among the known
Calabi-Yau $3$-folds it is common to have $300 < b_3 < 1000$,
hence the integral is often over a space of large dimension. The
$b_3$-dependence is sensitive since (e.g.) the ratio of the
$L^{\infty}$ norm to the $L^2$ norm of a homogeneous function of
degree $b_3$ in $b_3$ variables can be of order $b_3^{b_3}.$
It is useful to have alternative
formulas for the leading coefficient, and we now present a few.
We will use them to suggest conjectures on the order of magnitude
of $\mathcal{K}^{\operatorname {crit}}(Z)$ in the $b_3$ aspect in \S \ref{FCFP}.
First, using the homogeneity of the integrand, we may rewrite the
integral in terms of a Gaussian density
\begin{eqnarray}\label{Kcritgauss}
\mathcal{K}^{\operatorname {crit}}(Z) &=& \frac1{b_3!}
\int_{\mathcal{S}_Z} |\det H^c W(Z)| e^{- \langle Q_Z W, W \rangle}
dW\,. \end{eqnarray}
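To verify the factor $\frac1{b_3!}$, note that $|\det H^c W(Z)|$ is homogeneous of degree $2m = b_3$ in $W$, while $\dim_{\mathbb R}\mathcal{S}_Z = b_3$. Integrating in polar coordinates with respect to $Q_Z$, both sides reduce to the same integral over the unit sphere $\{Q_Z = 1\}$:
\begin{eqnarray*}
\int_{\{0 < Q_Z < 1\}} |\det H^c W(Z)|\, dW &=& \int_0^1 r^{b_3}\, r^{b_3 - 1}\, dr
\int_{\{Q_Z = 1\}} |\det H^c W(Z)|\, d\sigma \;=\; \frac{1}{2 b_3} \int_{\{Q_Z = 1\}} |\det H^c W(Z)|\, d\sigma\,, \\
\int_{\mathcal{S}_Z} |\det H^c W(Z)|\, e^{-\langle Q_Z W, W\rangle}\, dW &=& \int_0^\infty r^{2 b_3 - 1} e^{-r^2}\, dr
\int_{\{Q_Z = 1\}} |\det H^c W(Z)|\, d\sigma \;=\; \frac{\Gamma(b_3)}{2} \int_{\{Q_Z = 1\}} |\det H^c W(Z)|\, d\sigma\,,
\end{eqnarray*}
and the ratio of the two right-hand sides is $b_3\, \Gamma(b_3) = b_3!$, which gives \eqref{Kcritgauss}.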
This formula shows that
$\mathcal{K}^{\operatorname {crit}}$ is formally analogous to
the density of critical points of random holomorphic sections relative to a Gaussian
measure
studied in \cite{DSZ1}. For this reason, we call
(\ref{LEADDEN}) the {\it critical point density}. However, the
measure $e^{- Q[W]}\chi_{\{0<Q<1\}}(W)dW$
is of infinite volume, so the analogy should not be taken too literally. The density $\mathcal{K}^{\operatorname {crit}}(Z)$ is
well-defined despite the infinite volume of the underlying
measure on $\mathcal{S}$ because the shells $\{0 < Q_Z < 1\}$ in the fibers $\mathcal{S}_Z$ of $\rho$ are of
finite volume. Indeed, the conditional measures of $e^{- Q[W]}
dW$ are standard
(un-normalized) Gaussian measures $e^{- Q_Z(W)} dW$.
Next, we rewrite the integrals by the methods
in \cite{DSZ1, DSZ2}.
The first method is to change variables to the Hessian $H^c
W(Z)$, i.e. to
`push-forward' the
$\mathcal{S}_{Z}$ integral under the Hessian map \begin{equation}
\label{HSUBZ} H_Z: \mathcal{S}_{Z} \to {\operatorname{Sym}}(m, {\mathbb C}) \oplus {\mathbb C},\;\;\;
H_Z(W) = H^c W(Z), \end{equation} where $m = \dim
\mathcal{C}=h^{2,1}+1$. In \cite{DSZ1, DSZ2}, we used this change of
variables to simplify the formulas for the density of critical
points. There, however, the spaces of holomorphic sections of the
line bundles $L \to M$ were so large that the image of the Hessian
map was the entire space ${\operatorname{Sym}}(m, {\mathbb C}) \oplus {\mathbb C}$ of complex
Hessians of rank equal to the dimension $m = \dim M$. In the case
of type IIb flux compactifications, the dimension of the
configuration space $\mathcal{C}$ is as large as the dimension of the
space $\mathcal{S}$ of sections, and the Hessian map is by no means
surjective. Indeed, in Lemma \ref{RANGE}, we prove that the
Hessian map is an isomorphism onto a real $b_3$-dimensional space
$\mathcal{H}_Z\oplus{\mathbb C}$, where $\mathcal{H}_Z$ is spanned (over ${\mathbb R}$) by the
$2h^{2,1}$ Hermitian matrices
\begin{equation}\label{HCALZ}\xi^j:= \left(
\begin{array}{cc}
0& e_j \\
e_j^t & \mathcal{F}^j(z)
\end{array}
\right), \qquad \xi^{h^{2,1}+j}:= \left(
\begin{array}{cc}
0& i e_j \\
ie_j^t &-i \mathcal{F}^j(z)
\end{array}
\right),\qquad {j = 1, \dots, h^{2,1}}\ .
\end{equation}
Here, $e_j$ is the $j$-th standard basis element of ${\mathbb C}^{h^{2,1}}$
and
$\mathcal{F}^j(z) \in {\operatorname{Sym}}(h^{2,1}, {\mathbb C})$ is the matrix $ \left( \mathcal{F}^{\bar j}_{i
k} (z) \right)$ whose entries define the `Yukawa couplings' on
$\mathcal{M}$ (see (\ref{CANDY}), \S\ref{SGMS} or \cite{St1, Can1})
with respect to normal coordinates at the point $z\in\mathcal{M}$.
Since $\mathcal{H}_Z$ is not a complex subspace of ${\operatorname{Sym}}(m, {\mathbb C})$, we
regard ${\operatorname{Sym}}(m, {\mathbb C})$ as a real vector space with inner product
\begin{equation}\label{real}(A,B)_{\mathbb R}=\Re\langle A, B \rangle_{HS}
=\Re(\mbox{Trace}\, A B^*)\;.\end{equation}
To state our next result, we let $ \Lambda_Z$ be the operator
given by the distortion under the Hessian map (see \S
\ref{DISTORTION}):
\begin{equation}\label{defC}\big((\Lambda_Z\oplus I_{\mathbb C})^{-1} H_Z W,\, H_Z W \big)_{\mathbb R}
=Q[W]\qquad (W\in\mathcal{S}_Z),
\end{equation} where $Q[W]$ is given by \eqref{QW}. In terms of the basis
$\{\xi^a\}_{1\le a\le 2h^{2,1}}$,
$$\Lambda_Z\xi^a=\sum_{b = 1}^{2 h^{2,1}}\Lambda_{ab}\xi^b\;,
\quad \Lambda_{ab}= (\xi^a,\xi^b)_{\mathbb R} \;.$$ The $\Lambda$ matrix has
the block form
\begin{equation}\label{LAZY} (\Lambda_{ab}) = \begin{pmatrix} \Lambda' & \Lambda'' \\\Lambda'' &
\Lambda'\end{pmatrix}, \qquad \Lambda'_{jk}= 2\delta_{jk} + \Re \;
\mbox{Tr}\; \mathcal{F}^j \mathcal{F}^{k*},\ \ \Lambda''_{jk}= \Im \; \mbox{Tr}\;
\mathcal{F}^j \mathcal{F}^{k*}\;.\end{equation}
In Proposition \ref{LAMBDARICCI}, we show that the $(1,1)$ form
\begin{equation} \label{LAMBDAFORM} \omega_\Lambda:=\frac i2 \sum (\Lambda'_{jk}
+i\Lambda''_{jk}) dz^j \wedge d\bar z^k=\frac i2 \sum
\left[2\delta_{jk} + \mbox{Tr}\; \mathcal{F}^j(z_0) \mathcal{F}^{k*}(z_0)\right]
dz^j \wedge d\bar z^k \end{equation}
is the so-called Hodge metric $(m + 3) \omega_{WP} + Ric (\omega_{WP})$ of
the Weil-Petersson metric \cite{Lu, W2}.
By the injectivity of the Hessian map (stated in Lemma
\ref{RANGE}), we can make the change of variables $W{\buildrel
{H_Z}\over \mapsto}(H,x)$ in \eqref{LEADDEN}--\eqref{Kcritgauss}
to obtain the following alternate formulas for $\mathcal{K}^{\operatorname {crit}} (Z)$:
\begin{eqnarray}\mathcal{K}^{\operatorname {crit}} (Z)& = &\frac 1 {\sqrt{\det
\Lambda_Z}} \int_{\mathcal{H}_Z \oplus {\mathbb C}} \left|\det (H^*H - |x|^2 I)\right|
\chi_{\Lambda_Z} (H, x) dH dx,\nonumber \\ & = &\frac 1
{b_3!\sqrt{\det \Lambda_Z}} \int_{\mathcal{H}_Z \oplus {\mathbb C}} \left|\det (H^*H -
|x|^2 I)\right|\;\; e^{-(\Lambda^{-1}_Z H, H)_{\mathbb R} - |x|^2}\,dH\,dx
\label{PF}
\end{eqnarray}
where $\chi_{\Lambda_Z}$ is the characteristic function of the
ellipsoid $\{(\Lambda_Z^{-1} H, H)_{\mathbb R} + |x|^2 \leq 1\}.$ These formulas
are analogous to Theorem 1 and Corollary 2 of \cite{DSZ1}, the
key difference being that here we integrate over a moving subspace
$\mathcal{H}_Z$ of symmetric matrices.
We similarly have the following alternative formulations of
Proposition \ref{LNMO} and Theorem \ref{MAIN}:
\begin{cor}\label{MAIN2}
Let $\psi = \chi_{K}$, where $K \subset \mathcal{I} $ is as in Proposition \ref{LNMO},
and let $\tilde{\psi}(Z, H_Z W) = \psi(Z, W)$. Then,
{\small\begin{eqnarray*}\mathcal{N}_{\psi}(L) = \frac{L^{b_3}}{b_3!} \Big[\int_{\mathcal{C}}
\frac{1}{\sqrt{\det \Lambda_Z}} \int_{\mathcal{H}_Z
\oplus {\mathbb C}} \tilde{\psi}(Z; H, x) \left|\det (H^*H - |x|^2
I)\right|\;
e^{-(\Lambda^{-1}_Z H, H)_{\mathbb R} - |x|^2}\,dH\,dx \,d{\operatorname{Vol}}_{WP}(Z) \\ +O(L^{-1/2})\Big].
\end{eqnarray*}}\end{cor}
\begin{cor}\label{MAIN3} Let $\psi\in\mathcal{C}^\infty(\mathcal{I})$ be homogeneous of order
$\alpha\ge 0$ and suppose that $\rho({\operatorname{Supp\,}}\psi)$ is a compact subset of
$\mathcal{C}$ and ${\operatorname{Supp\,}}\psi
\cap\widetilde\mathcal{D}=\emptyset$.
Let $\tilde{\psi}(Z, H_Z W) = \psi(Z, W)$. Then,
\begin{eqnarray*}\mathcal{N}_{\psi}(L) &=& \frac{L^{b_3+\alpha/2}}{\Gamma(b_3+\alpha/2+1)}
\left[\int_{\mathcal{C}}
\frac{1}{\sqrt{\det \Lambda_Z}} \int_{\mathcal{H}_Z \oplus {\mathbb C}}
\tilde{\psi}(Z; H, x)\right.\\&&\quad\left. \times \left|\det (H^*H
- |x|^2 I)\right|\; e^{-(\Lambda^{-1}_Z H, H)_{\mathbb R} - |x|^2}\,dH\,dx
\,d{\operatorname{Vol}}_{WP}(Z) + O\left(L^{-\frac{2b_3}{2b_3+1}}\right)\right].
\end{eqnarray*}\end{cor}
It is not obvious how to estimate the dependence of the integral
for $\mathcal{K}^{\operatorname {crit}}(Z)$ on the subspace $\mathcal{H}_Z$. There are two
natural ways to parameterize this space. One (which is used in
\cite{DD}) is to use as a basis of $\mathcal{H}_Z$ the Hessians of a
$Q_Z$-orthonormal basis of $\mathcal{S}_Z$. A second method is to use
the orthonormal basis of eigenmatrices $\{H_j\}$ of $\Lambda_Z$ with
respect to the inner product \eqref{real}. We thus put $\Lambda_Z
H_j(Z) = \mu_j(Z) H_j(Z)$, and $H(y, Z) = \sum_j y_j H_j(Z)$. We
also let $D(\mu)$ denote the diagonal matrix with entries $\mu_j$.
Changing variables to $\mu_j^{1/2} y$ cancels $\frac 1 {\sqrt{\det
\Lambda_Z}}$ and we obtain:
\begin{cor} \label{CORLEAD1} We have:
$$\mathcal{K}^{\operatorname {crit}} (Z) = \int_{|y|^2 + |x|^2 \leq 1} \left|\det \big(H(D(\mu)y, Z)^*H(D(\mu) y, Z) - |x|^2
I\big)\right|dy\, dx. $$
\end{cor}
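To get a feel for integrals of the shape in Corollary \ref{CORLEAD1}, here is a Monte Carlo sketch in a toy model of our own devising: we integrate $|\det(H^*H - |x|^2 I)|$ over the unit ball of the full space ${\operatorname{Sym}}(m,{\mathbb C}) \oplus {\mathbb C}$ with $\mu_j \equiv 1$, ignoring the moving subspace $\mathcal{H}_Z$ and the distortion $\Lambda_Z$ on which the true density $\mathcal{K}^{\operatorname {crit}}(Z)$ depends.

```python
import numpy as np
from math import pi, gamma

rng = np.random.default_rng(0)

def toy_density(m=2, N=20000):
    """Monte Carlo estimate of the integral of |det(H*H - |x|^2 I)| over the
    unit ball of Sym(m,C) + C.  Toy model only: the actual K^crit(Z)
    integrates over the moving b_3-dimensional subspace H_Z + C against the
    ellipsoid determined by Lambda_Z."""
    d = m * (m + 1) + 2                         # real dimension of Sym(m,C) + C
    g = rng.standard_normal((N, d))
    u = rng.random(N) ** (1.0 / d)              # uniform sampling of the unit ball
    pts = (g / np.linalg.norm(g, axis=1, keepdims=True)) * u[:, None]
    iu = np.triu_indices(m)
    vals = np.empty(N)
    for a, p in enumerate(pts):
        nsym = m * (m + 1)
        c = p[:nsym:2] + 1j * p[1:nsym:2]       # m(m+1)/2 complex upper-triangular entries
        H = np.zeros((m, m), dtype=complex)
        H[iu] = c
        H = H + H.T - np.diag(np.diag(H))       # complex symmetric (not Hermitian) matrix
        x2 = p[-2] ** 2 + p[-1] ** 2            # |x|^2 from the last two real coordinates
        vals[a] = abs(np.linalg.det(H.conj().T @ H - x2 * np.eye(m)))
    ball_vol = pi ** (d / 2) / gamma(d / 2 + 1)
    return ball_vol * vals.mean()
```

Re-running with larger $m$ illustrates how rapidly such ball integrals shrink with dimension, a crude hint of the $b_3$-sensitivity discussed above.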
In \S \ref{FCFP} we will discuss some conjectural bounds on the
density of critical points based on the assumption that the
subspaces $\mathcal{H}_Z$ are sufficiently random subspaces of
${\operatorname{Sym}}(h^{2,1}, {\mathbb C})$.
\subsection{\label{INDEXDENSITY}Index density}
The absolute value in the expressions for the distribution of
critical points $C_W$ of a single section (\ref{CWintro}) and the
expected distribution of critical points of a random section
(e.g., \eqref{PF}) makes it very difficult to
estimate the order of magnitude of the density of critical
points. A simplifying `approximation'
is to drop the absolute value around the determinant. The
resulting density is the
{\it index density} for critical points. It was used
in \cite{AD} and \cite{DD} to give a lower bound for the
critical point density.
To be precise, we modify (\ref{CWintro}) by defining the signed
distribution of critical points of $W$ as the measure $Ind_W$ on
$\mathcal{I}\smallsetminus\widetilde\mathcal{D}$ given by
\begin{equation} \langle Ind_W, \psi \rangle = \sum_{Z \in Crit(W)} \left(\mbox{sign} \det D^2 W(Z)\right) \psi(Z,
W),\end{equation} where sign$\,a=1,0,-1$ if $a$ is positive, 0, or
negative, respectively. We then study the sums
\begin{eqnarray}
\mathcal{I} nd_{\psi}(L) & = & \sum \big\{\langle Ind_N, \psi \rangle :N
\in \mathcal{S}^{\mathbb Z} ,\ Q[N] \leq L\big\}.\label{Isum}
\end{eqnarray} For instance, if $\psi(Z,W)= \chi_{K}(Z)$ is the characteristic
function of a compact set $K \subset \mathcal{C}$, then $\mathcal{I}
nd_{\psi}(L)$ is the sum $ \sum_{Z \in Crit(W) \cap K}
\left(\mbox{sign} \det D^2 W(Z)\right)$ over all non-degenerate
critical points lying over $K$ of all integral flux
superpotentials with $Q[W] \leq L$.
Simultaneously with Proposition \ref{LNMO}, we
obtain formula (1.5) of Ashok-Douglas \cite{AD} with an estimate for the error
produced by passing from the sum to the integral (cf.\ \S \ref{COUNTING}):
\begin{theo}\label{MAININD} Let $K$ be a compact subset of $\mathcal{C}$ with piecewise
smooth boundary. Then
$$\mathcal{I} nd_{\chi_K}(L) = \frac{(\pi L)^{b_3}}{b_3!\, 2^{b_3/2}} \left[\int_K
c_m(T^{* (1,0)}(\mathcal{C})\otimes \mathcal{L},\omega_{WP}\otimes h^*_{WP}) +
O\left(L^{-1/2}\right)\right],
$$
where $m=\dim\mathcal{C}=b_3/2$ and $c_m(T^{* (1,0)}(\mathcal{C})\otimes
\mathcal{L},\,\omega_{WP}\otimes h^*_{WP}) =\frac 1{\pi^m}\det\left(-R-
\omega_{WP} \otimes I \right)$ is the $m$-th Chern form of $T^{*
(1,0)}(\mathcal{C})\otimes \mathcal{L}$ with respect to the Weil-Petersson
metric $\omega_{WP}\otimes h^*_{WP}$.
\end{theo}
Here, $R = \sum_{i j} R^{k}_{\ell i \bar{j} } dz^i \wedge
d\bar{z}^{\bar{j}}$ is the curvature $(1,1)$ form of $T^{*
(1,0)}(\mathcal{C})$ regarded as an $m \times m$ Hermitian-matrix-valued
$2$-form (with $m = \dim \mathcal{C} = b_3/2$) and $\omega_{WP} \otimes
I$ is a scalar 2-form times the $m \times m$ identity matrix. The
determinant is defined as in Chern-Weil theory. The only
additional step in the proof is the evaluation (given in Lemma
\ref{INDCURV}) of the analogue of (\ref{Kcritgauss}) in terms of
the curvature form:
\begin{equation}\label{Indcritgauss}
\int_{\mathcal{S}_Z} \det H^c W(Z) e^{- \langle Q_Z W, W \rangle} dW
=\left(\frac \pi 2\right)^m\;\frac{ \det\left(-R- \omega_{WP}
\otimes I \right)}{d{\operatorname{Vol}}_{WP}}\,.\end{equation}
Recall that the Chern-Gauss-Bonnet theorem tells us
that if $W$ is a holomorphic section of a complex line
bundle $L\to M_m$ over a compact complex manifold such that
$\nabla W$ has only non-degenerate zeros, then
$$\int_M c_m( T^{* (1,0)} M \otimes L) = Ind\, \nabla W:= \sum_{p: \nabla W (p) = 0}
\mbox{sign}\; \det\; H^c W(p). $$ However, the Chern-Gauss-Bonnet theorem does
not apply in our setting, and indeed $Ind\, \nabla W$ is not constant in $W$,
since
$\mathcal{C}$ is an incomplete K\"ahler manifold and critical points can occur on the boundary
or disappear. There exists a
Chern-Gauss-Bonnet theorem for manifolds with boundary which
expresses $Ind\, \nabla W$ as $\int_M c_m( T^{* (1,0)} M \otimes L)$ plus a boundary
correction depending on $W$, but the correction term involves
integrating a differential form over the boundary and that
becomes problematic when the boundary is highly irregular as in the case of $\mathcal{C}$.
Nevertheless, the theorem shows that asymptotically the average index
density equals the Chern-Gauss-Bonnet form.
\subsection{\label{RELATIONS}Relations to prior results in the physics and mathematics literature}
We now relate our results to the physics literature on the
number of vacua and the complexity of the string theory landscape
as well as to the mathematical literature on lattice points. A
more detailed discussion of the landscape aspects is given in \S
\ref{FCFP}.
First, the string/M aspects. Over the last five years or so, many
physics articles have been devoted to estimating the number of
candidate vacua $N_{vac}$ of string/M theory, in particular those
which are consistent with the standard model. The candidate vacua
are often pictured as valleys in a `string theory landscape', which
is the graph of the effective potential. The number of vacua is
often stated as being around $10^{500}$. In \cite{BP}
Bousso-Polchinski related the number of vacua to the number of
quantized fluxes $N$ satisfying a constraint $|N| \leq L$, which
implies $N_{vac}(L) \sim \frac{L^{b_3}}{b_3!}$ (see also \cite{AD,
Sil}). In the specific type IIb flux compactifications studied in
this paper, the constraint is hyperbolic rather than elliptic (as
imagined in \cite{BP}), and the more precise estimate $N_{vac}(L)
\sim \frac{L^{b_3}}{b_3!} f(b_3)$ was given in \cite{AD, DD} where
$f(b_3)$ is the integral over moduli space of the Gaussian integral in
\eqref{PF}; it will be discussed further in \S
\ref{FCFP}. There we will also review the heuristics and the
mathematics of the landscape in more detail.
What do our results imply about the number of vacua? Since
Proposition \ref{LNMO} and Theorem \ref{MAIN} are asymptotic
results as $L \to \infty$, they are most useful when $L^{b_3}$
is very large. But it is difficult to quantify `very large' due to
the complexity of the leading coefficient (\ref{LEADDEN}), of the
remainder and of the volume of $\mathcal{C}$. Hence, we cannot make
precise estimates on the number of vacua at this time.
However, to bridge our results with estimates in string theory,
we make a speculative attempt in \S \ref{HEURISTICS} to draw
order of magnitude conclusions from Theorem \ref{MAIN}. We will
use the symbol $\simeq$ in an informal sense of `same order of
magnitude' (factorial, exponential and so on). There we give a
heuristic estimate of $\mathcal{K}^{\operatorname {crit}}(Z) \simeq \frac{1}{b_3!}
(b_3/2)! \mu^{b_3}$ for a certain $\mu > 0$. More precisely, we give
heuristic upper and lower bounds with different values of $\mu$; the
difference is irrelevant when comparing factorial orders of magnitude. To obtain an order of
magnitude for $\frac{f(b_3)}{b_3!}$ one would need to integrate
$\mathcal{K}^{\operatorname {crit}}$ over $\mathcal{C}$. At this time, the order of magnitude
of the Weil-Petersson volume $Vol_{WP}(\mathcal{C}) $ of $\mathcal{C}$ is not
known, even approximately (Z. Lu). We can however make a plausible
estimate for the integral of $\mathcal{K}^{\operatorname {crit}}$ over the region where
the norm of $\Lambda_Z$ is bounded by a uniform constant
(independent of $b_3$). Since $\Lambda_Z$ is essentially the
Hodge metric, regions where $||\Lambda_Z|| \leq \mu$
are regions $K_{\mu}$ where the norm of the Ricci curvature of
$\omega_{WP}$ is bounded above by a uniform constant. It appears
likely that the volume of such regions is bounded above by the
volume of balls in ${\mathbb C}^{b_3/2}$ of fixed radius (Z. Lu). Since the
volume of balls in ${\mathbb C}^{b_3/2}$ decays like $\frac{1}{(b_3/2)!}$,
we would find that the number of vacua in $K_{\mu}$ would be
approximately $\frac{L^{b_3}}{b_3!} \mu^{b_3}$.
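The two factorials cancel in a way worth making explicit (this is our own bookkeeping, not a claim from \cite{AD, DD}): the ball of radius $r$ in ${\mathbb C}^{m}\cong{\mathbb R}^{2m}$ has volume $(\pi r^2)^m/m!$, so with $m = b_3/2$ one would have

```latex
N_{vac}(K_{\mu}) \;\sim\; L^{b_3}\, \mathcal{K}^{\operatorname{crit}}\,
\operatorname{Vol}(K_{\mu})
\;\simeq\; L^{b_3}\cdot \frac{(b_3/2)!\,\mu^{b_3}}{b_3!}\cdot
\frac{(\pi r^2)^{b_3/2}}{(b_3/2)!}
\;\simeq\; \frac{L^{b_3}}{b_3!}\,\mu^{b_3},
```

where the factor $(\pi r^2)^{b_3/2} = (r\sqrt{\pi})^{\,b_3}$ has been absorbed into $\mu^{b_3}$ by rescaling $\mu$.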
Now, in
the physical models, $L$ is not a free parameter but is determined
by $X$. In the case when there exists an involution $g$ of $X$
(an `orientifolding') and a Calabi-Yau $4$-fold $Z$ which is an
elliptic fibration over $X/g,$ the `tadpole' number is then given
by:
\begin{equation} \label{TADPOLE} \mbox{ tadpole number}: \;\;
L = \chi(Z)/24. \end{equation}
In many known examples \cite{KLRY}, one has $300 < b_3 < 1,000$
and $L \simeq C b_3$ where $1/3 \leq C \leq 3$. Hence the
number of vacua in $K_{\mu}$ (and possibly in all of $\mathcal{C}$) with
the tadpole constraint $L \sim C b_3$ would have exponential
growth $\frac{(C b_3)^{b_3}}{b_3!} \mu^{b_3}$.
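By Stirling's formula, $\frac{(C b_3)^{b_3}}{b_3!}\mu^{b_3} \sim \frac{(C e \mu)^{b_3}}{\sqrt{2\pi b_3}}$, so the growth is exponential precisely when $Ce\mu > 1$. The following numeric check of this simplification is an illustration of ours, not part of the paper; the parameter values are arbitrary.

```python
import math

def log_count(b3, C, mu):
    """log of the heuristic vacuum count (C*b3)^{b3} * mu^{b3} / b3!."""
    return b3 * math.log(C * b3 * mu) - math.lgamma(b3 + 1)

def log_stirling(b3, C, mu):
    """log of the Stirling simplification (C*e*mu)^{b3} / sqrt(2*pi*b3)."""
    return b3 * (math.log(C * mu) + 1.0) - 0.5 * math.log(2 * math.pi * b3)

# For b3 in the range of known examples (300 < b3 < 1000) the two
# expressions agree to within Stirling's O(1/b3) error in the logarithm.
for b3 in (300, 500, 1000):
    assert abs(log_count(b3, 1.0, 1.0) - log_stirling(b3, 1.0, 1.0)) < 1e-3
```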
Next we turn to the purely lattice point aspects of the problem.
From a mathematical point of view, this article combines
statistical algebraic geometry in the sense of \cite{BSZ1, DSZ1,
DSZ2} with the study of radial projections of lattice points. As
far as we know, the radial projection of lattice points problem
has not been studied systematically before in mathematics (we
thank B. Randol for helping to sort out the historical background
on this problem). The much harder problem of equidistribution of
lattice points of fixed height $R$, i.e. lying on a sphere or
hyperboloid of fixed radius $R$, has been studied by Yu.\ Linnik,
C. Pommerenke \cite{Pom}, W. Duke and others. But the remainders
obtained in this more delicate problem are not as accurate as ours
are for the bulk problem of projecting all lattice points of
height $< R$. Counting projections of lattice points in domains of
a hypersurface is equivalent to counting lattice points in certain
cones, and there are some additional studies of this by methods of
automorphic forms. In certain right circular cones with a flat
top, Duke and Imamoglu \cite{DO} use Dirichlet series and Shimura
lifts to obtain the leading order asymptotics. Radial projections
of lattice points additionally bear some resemblance to rational
points. Some results and references for this problem are contained
in \cite{DO}. In \cite{Z2}, the general problem of counting radial
projections of lattice points in smooth domains of non-degenerate
hypersurfaces is studied. In \cite{NR}, some further results are
given on radial projections of lattice points, in particular in
the case of hypersurfaces with flat spots or in the case of
polyhedra.
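As a toy illustration of the bulk problem in the simplest possible case (a two-dimensional sketch of ours, not a construction from \cite{Z2, NR}): the radial projections of the nonzero points of ${\mathbb Z}^2$ of height $< R$ are in bijection with the primitive lattice points in the same ball, which one can enumerate directly.

```python
from math import gcd, hypot

def radial_projections(R):
    """Distinct rays through nonzero points of Z^2 of norm < R.
    Each ray is recorded by the unique primitive lattice point on it."""
    rays = set()
    n = int(R)
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            if (x, y) != (0, 0) and hypot(x, y) < R:
                g = gcd(abs(x), abs(y))
                rays.add((x // g, y // g))  # primitive representative
    return rays

# Norm < 1.5 leaves the 8 nonzero points with coordinates in {-1,0,1},
# all primitive, hence 8 distinct directions.
assert len(radial_projections(1.5)) == 8
```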
\bigskip\noindent{\it Acknowledgement:\/}
We would like to thank Zhiqin Lu for many helpful comments
regarding the Weil-Petersson and Hodge metrics on the moduli space
of a Calabi-Yau 3-fold. In particular, our discussion of the
Weil-Petersson volume $V_{WP}(\mathcal{C})$ and estimates of the
eigenvalues of $\Lambda_Z$ are based on his remarks.
\section{\label{CY}Background on Calabi-Yau manifolds and complex
geometry}
As mentioned in the introduction, the supersymmetric vacua of type
IIb flux compactifications on a $CY_3$ are critical points of
holomorphic sections of the holomorphic line bundle $\mathcal{L} \to
\mathcal{C}$ dual to the Hodge bundle $H^{4,0}(X \times T^2)$, where the configuration
space $\mathcal{C}$ is the moduli space $\mathcal{M} \times \mathcal{E}$ of product
complex structures on $X \times T^2$. In this section, we give the
geometric background necessary for the analysis of critical points
and Hessians of the holomorphic sections $W_G$ of (\ref{WSUBG}).
The most significant aspects of Calabi-Yau geometry in the study
of critical points of flux superpotentials are the following:
\begin{itemize}
\item The space $\mathcal{S}_Z$ of flux superpotentials with $\nabla
W_G(Z) = 0$ may be identified with the space $H^3_{Z}(X)$ of
fluxes $G = F + i H$ with the special Hodge decomposition $F +
\tau H \in H^{2,1}_z (X) \oplus H^{0, 3}_z(X). $ See Proposition
\ref{POS}.
\item The space $H^{2,1}_z (X) \oplus H^{0, 3}_z(X) \subset H^3(X,
{\mathbb C})$ is a positive complex Lagrangian subspace. See Proposition
\ref{HRPLUS}. Hence, $\mathcal{S}_Z$ is endowed with an inner product.
\end{itemize}
In addition, we review the relation between holomorphic
derivatives, covariant derivatives and second fundamental forms
for holomorphic frames $\Omega_z$ of the Hodge bundle, and recall
the definition of the prepotential. These results are needed for
the calculations in Lemmas \ref{Lambda} and \ref{RANGE}. Much of
this material is essentially standard \cite{Can1, St1, DD}, but it
is not always stated precisely in the physics sources. We
therefore present it here for the sake of clarity and
completeness.
\subsection{Geometry of Calabi-Yau manifolds}\label{GCY}
We recall that a Calabi-Yau
$d$-fold $M$ is a complex $d$-dimensional manifold with trivial
canonical bundle $K_M$, i.e. $c_1(M) = 0$. By the well-known
theorem of Yau, there exists a unique Ricci flat K\"ahler metric in
each K\"ahler class on $M$. In this article, we fix the K\"ahler
class, and then the Calabi-Yau metrics correspond to the complex
structures on $M$ modulo diffeomorphisms. We denote the moduli
space of complex structures on $M$ by $\mathcal{M} $.
As mentioned in the introduction, the Calabi-Yau manifolds of
concern in this article are the $4$-folds $M = X \times T^2,$
where $T^2 = {\mathbb R}^2/{\mathbb Z}^2$. The $T^2$ factor plays a special role,
and the geometric aspects mainly concern $X$.
We only consider product Calabi-Yau metrics and complex structures on $M$.
Thus, the
configuration space $\mathcal{C} = \mathcal{M} \times \mathcal{E}$ where $\mathcal{M} $
is the moduli space of complex structures on $X$ and where
$\mathcal{E}$ is the moduli space of elliptic curves. We denote a point
of $\mathcal{C} $ by $Z = (z, \tau)$ where $z$ denotes a complex
structure on $X$ and where $\tau$ denotes the complex structure on
$T^2$ corresponding to the elliptic curve $ {\mathbb C}/{\mathbb Z} \oplus {\mathbb Z} \tau.$
It is often simplest to view the moduli space of complex
structures on $X$ as the quotient by the mapping class group
$\Gamma$ of the Teichm\"uller space $\mathcal{T} \!eich(X)$, where
$$\mathcal{T} \!eich(X) =\{\mbox{complex structures on }\; X\}/{\operatorname{Diff}}_0 $$
where $J \sim J'$ if there exists a diffeomorphism $\phi \in
{\operatorname{Diff}}_0$ isotopic to the identity satisfying $\phi^*J' = J.$ The
mapping class group is the group of connected components of the
diffeomorphism group,
$$\Gamma_X := {\operatorname{Diff}}(X)/{\operatorname{Diff}}_0(X). $$
We shall identify $\mathcal{M} $ with a
fundamental domain for $\Gamma_X$ in $\mathcal{T} \!eich(X)$, and $\mathcal{E}$
with the usual modular domain in $\mathcal{H}$.
The mapping class group for a Calabi-Yau
$d$-fold has a representation on
$H^d(M, {\mathbb R})$ which preserves the intersection form $Q$, which is symplectic in odd dimensions,
and indefinite symmetric in even dimensions. In odd dimensions, this representation gives a
homomorphism $\phi: \Gamma_M \to \mbox{Sp}(b_d(M), {\mathbb Z})$,
while in even dimensions it gives a homomorphism
to the corresponding orthogonal group.
It was proved by D. Sullivan \cite{Sul} that if $d
\geq 3$, then $\phi(\Gamma_M)$ is an (arithmetic) subgroup of
finite index (in $\mbox{Sp}(b_d(M), {\mathbb Z})$ if $d$ is odd), and the kernel of
$\phi$ is a finite subgroup.
On any CY manifold $M$ of dimension $d$, the space $H^{d,0}_z(M)$
of holomorphic $(d, 0)$ forms for a complex structure $Z$ is
one-dimensional. It depends holomorphically on $Z$ and hence
defines a complex holomorphic line bundle
$\mathcal{L}_\mathcal{M}^*=H^{d,0}\to {\mathcal M}$, which we refer to as the
`Hodge bundle.' The Hodge bundle is equipped with the
Weil-Petersson (WP) Hermitian metric of (\ref{HWP}), which we
repeat here:
\begin{equation}\label{hWP}
h_{WP}(\Omega_z, \Omega_z) = i^{d^2}\int_M \Omega \wedge \overline \Omega .
\end{equation}
For a holomorphic Hermitian line bundle
$(L, h) \to M$ and local holomorphic frame $e_L$ over an open set
$U \subset M$, we write
\begin{equation}\label{Kpot1}|e_L(z)|_h^2 =
e^{-K(z)}. \end{equation} The connection $1$-form in this frame is
the $(1,0)$-form $- \partial K(z)$, and the curvature $(1,1)$-form
is given by
$$\omega=\frac{i}{2}\Theta_h= \frac i2 \partial\dbar K, \qquad K =-\log|e_L|^2_h. $$
The Hermitian line bundle is said to be positive if $\omega$ is a positive
$(1,1)$ form, in which case $K$ is called the K\"ahler potential. The Hermitian
line bundle
$(L,h)$ is negative if
$\omega$ is a negative
$(1,1)$ form.
In particular, the curvature of the Weil-Petersson metric on $H^{d,0}\to {\mathcal
M}$ is a positive $(1,1)$ form on $\mathcal{M} $, and hence it
defines a K\"ahler form with potential (with respect to the frame $\Omega_z$)
\begin{equation} \label{KP} K_{WP} = - \log h_{WP}(\Omega_z, \Omega_z) =-
\log\; i\int_X \Omega \wedge \overline{\Omega}. \end{equation} For
instance, consider the Hodge bundle $H^{1,0}_{\tau} \to \mathcal{E}$.
It has a standard frame $dx + \tau dy$ for which $K = -\log \Im
\tau.$ Here, $\tau$ is the standard coordinate on the upper half
plane. Then $\partial K = - \frac{1}{\tau - \bar{\tau}} d \tau$
and the K\"ahler form is $- \frac{i}{2 (\tau - \bar{\tau})^2} d
\tau \wedge d \bar{\tau} >> 0.
$
In the product situation of $M = X \times T^2$, $H^{4, 0}_{z,
\tau} (X \times T^2) = H^{3,0}_z(X) \otimes H^{1,0}_{\tau}(T^2)$.
Thus, the line bundle $H^{4,0}(X \times T^2) \simeq H^{3,0}(X)
\otimes H^{1,0}(T^2) \to \mathcal{C}$ is an exterior tensor product and
the WP metric is a direct product. We denote an element of
$H^{3,0}_z(X)$ by $\Omega_z$, and an element of
$H^{1,0}_{\tau}(T^2)$ by $\omega_{\tau}$. We often assume that
$\omega_{\tau} = dx + \tau dy$.
\subsection{Variational derivatives and covariant derivatives}\label{derivatives}
The bundle $H^{3, 0}_z(X) \to \mathcal{M} $ is a holomorphic line
bundle. Since $H^{3, 0}_z(X) \subset H^3(X, {\mathbb C})$, one can view a
holomorphically varying family $\Omega_z \in H^{3, 0}_z(X)$ as a
holomorphic map $\mathcal{M} \to H^3(X, {\mathbb C})$ or as a holomorphic section
of $H^{3, 0}_z(X)$. As a holomorphic vector valued function,
$\Omega_z$ may be differentiated in $z$. If $z_1, \dots,
z_{h^{2,1}}$ are local holomorphic coordinates, and if
$\{\frac{\partial}{\partial z_j}\}$ are the coordinate vector
fields, then $\frac{\partial \Omega}{\partial z_j}$ is a
well-defined element of $H^3(X, {\mathbb C})$.
By the Griffiths transversality theorem (see \cite{GHJ},
\cite[(5.4)]{Can1}, or \cite{W1, W2}),
\begin{equation} \label{FV}
\frac{\partial \Omega_z }{\partial z^{j}} = k_{j}(z) \Omega_z+ \chi_{j}
,\end{equation} where $\chi_{j} \in H^{2,1}_{z}(X)$ and where $k \in
C^{\infty}(\mathcal{M} )$. Note that although $\frac{\partial \Omega_z }{\partial
z^{j}}$ is holomorphic, neither term on the right hand side is separately
holomorphic.
We define a Hermitian connection on the bundle $H^{3,0} \to
\mathcal{M}$ by orthogonally projecting the derivatives $\frac{\partial
\Omega_z }{\partial z^{j}}$ onto $H^{3,0}$. This defines the
Weil-Petersson connection $\nabla_{WP}$ on $H^{3,0} \to \mathcal{M}$,
$$\nabla_{WP} :
C^{\infty}(\mathcal{M}, \mathcal{L}) \to C^{\infty}(\mathcal{M}, \mathcal{L} \otimes
T^*).$$ It follows from \eqref{FV} that
\begin{equation} \frac{\partial}{\partial z^{j}} \int_X \Omega_z
\wedge \overline{\Omega}_z = k_{j} \int_X \Omega_z \wedge
\overline{\Omega}_z, \end{equation} which by (\ref{KP}) implies
\begin{equation} k_{j} = - \frac{\partial K}{\partial
z^{j}}.
\end{equation} Hence,
$$\nabla_{WP} \Omega_z = -\d K \otimes \Omega_z=
\sum k_j dz_j\otimes \Omega_z $$
is the Chern connection of the Weil-Petersson Hermitian metric.
We also define the forms
\begin{equation} \label{DALPHA} \left\{ \begin{array}{l} \mathcal{D}_j \Omega_z = \frac{\partial }{\partial
z^{j}}\Omega + \frac{\partial K}{\partial z^{j}}
\Omega\\ \\
\mathcal{D}_j \mathcal{D}_k \Omega_z = (\frac{\partial }{\partial z^{j}} +
\frac{\partial K}{\partial z^{j}})( \frac{\partial }{\partial
z^{k}} + \frac{\partial K}{\partial z^{k}} ) \Omega_z. \end{array}
\right.
\end{equation}
We then have
\begin{equation}\label{Dj} \mathcal{D}_j \Omega_{z}=
\frac{\partial \Omega_z }{\partial z^{j}}-k_j\Omega_z = \chi_j \in
H^{2,1}(X_z).
\end{equation}
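As a one-dimensional illustration of \eqref{FV} and \eqref{Dj} (the elliptic-curve analogue, with $H^{3,0}\to\mathcal{M}$ replaced by $H^{1,0}\to\mathcal{E}$ and frame $\omega_\tau = dx + \tau\, dy$):

```latex
\frac{\partial \omega_\tau}{\partial \tau} \;=\; dy
\;=\; \frac{1}{\tau - \bar\tau}\,\omega_\tau
\;-\; \frac{1}{\tau - \bar\tau}\,\bar\omega_\tau ,
```

so $k_\tau = \frac{1}{\tau-\bar\tau} = -\frac{\partial K}{\partial\tau}$ for $K = -\log \Im\tau$, and $\mathcal{D}_\tau\omega_\tau = -\frac{1}{\tau-\bar\tau}\,\bar\omega_\tau \in H^{0,1}_\tau$ plays the role of $\chi_j \in H^{2,1}_z$.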
The operator $\mathcal{D}_j$ is analogous to the second
fundamental form $II(X, Y) = (\tilde{\nabla}_X Y)^{\perp}$ of an
embedding, i.e. it extracts the `normal' component of the ambient
derivative. It is known that the first variational derivatives
span $H^{2,1}$ (see e.g. \cite{W1, W2}). (In the physics
literature, $D_{\alpha}$ is often described as a connection, and
is often identified with $\nabla_{WP}$, but this is not quite
correct as it is applied to $\Omega_z$.)
The Weil-Petersson Hermitian metric $\sum G_{i \bar{j}}dz_i\,d\bar z_{\bar j}$ on
$\mathcal{M}$ is the curvature
$(1,1)$-form of the Hodge bundle. From \eqref{KP} and \eqref{Dj}, we have:
\begin{equation}
\label{GIJ} G_{j \bar{k}} = \frac{\partial^2}{\partial z^j\partial \bar z^k}
K(z,
\bar{z}) = -
\frac{\int_{X} \mathcal{D}_j \Omega_z \wedge \overline{\mathcal{D}_k \Omega_z}
}{\int_{X} \Omega_z \wedge \overline{\Omega}_z}. \end{equation}
\subsection{Yukawa couplings and special geometry of the moduli space}\label{SGMS}
In formula \eqref{PF}, the density of critical points is
expressed as an integral over a space $\mathcal{H}_Z \oplus {\mathbb C}$, where
$\mathcal{H}_Z$
is a subspace of the complex symmetric matrices
${\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C})$ spanned by the special matrices $\xi^j$
given in \eqref{HCALZ}. Their components $\mathcal{F}^{\bar j}_{ik}(z)$
are known as Yukawa couplings and defined as follows: A priori,
$\mathcal{D}_k \mathcal{D}_j \Omega_z \in H^{2,1} \oplus H^{1,2}$, and
moreover its $H^{2,1}$ component vanishes (see e.g.\
\cite[(5.5)]{Can1}). Hence we may define $\mathcal{F}_{k j}^{\bar l}$ by
\begin{equation} \label{CANDY} \mathcal{D}_k \mathcal{D}_j \Omega_z = -\sqrt{-1}\, e^K
\mathcal{F}_{k j}^{\bar l} \overline{\mathcal{D}_l \Omega}\,\qquad (1\le j,k,l\le h^{2,1}).
\end{equation} See also \cite[(28)]{St1}. It is further shown in
\cite[(37)]{St1} (see also \cite[(4.8)]{AD},
\cite[Theorem~3.1]{LuS2}) that the Riemann tensor of the
Weil-Petersson metric on the moduli space $\mathcal{M}$ of Calabi-Yau
three-folds is related to the Yukawa couplings by
\begin{equation} \label{RIEMANN} R_{i \bar{j} k \bar{\ell}} = G_{i
\bar{j}} G_{k \bar{\ell}} + G_{i \bar{\ell}} G_{k \bar{j}} -
e^{2K} \sum_{p,q} G^{p \bar{q}} \mathcal{F}_{i k p} \overline{{\mathcal{F}}_{j
\ell q}}\ .
\end{equation}
The Yukawa couplings are related to the periods of $\Omega_z$ and
to the so-called prepotential of $\mathcal{M}$. We pause to recall the
basic relations and to direct the reader to the relevant
references.
First, we consider periods. As a basis of $H_3(X, {\mathbb R})$ we choose
a symplectic basis of $A$-cycles $A_a$ and $B$-cycles $B_a$,
spanning dually paired Lagrangian subspaces. The
periods of $\Omega_z \in H^{3,0}_z(X)$ over the $A$-cycles
$$\zeta^a= \int_{A_a} \Omega_z\qquad (1\le a\le h^{2,1}+1=b_3/2)$$
define holomorphic coordinates on $\mathcal{L}^*_\mathcal{M}=H^{3,0}\to
\mathcal{M}$. Alternately, we can view the $\zeta^a$ as `special'
projective coordinates on $\mathcal{M}$. The periods of $\Omega_z$ over
the $B$-cycles are then holomorphic functions of the $\zeta^a$.
The principal fact is that the image of $\mathcal{L}^*_\mathcal{M}$ under the
period map is a complex Lagrangian submanifold of $H^3(M,{\mathbb C})$, and
thus is determined by a single holomorphic function, the
``prepotential''
$\mathcal{F}=\mathcal{F}(\zeta^1,
\dots, \zeta^{b_3/2}):\mathcal{L}^*_\mathcal{M}\to{\mathbb C}$ such that
\begin{equation} \int_{B_a} \Omega_z = \frac{\partial
\mathcal{F}}{\partial \zeta^a}\; .\end{equation} Furthermore, $\mathcal{F}$ is
homogeneous of degree $2$ in the periods $\zeta^a$,
$$ \sum_{a = 1}^{b_3/2} \zeta^a\frac{\partial
\mathcal{F}}{\partial \zeta^a} = 2 \mathcal{F}, $$ and hence may be viewed
as a holomorphic section of $\mathcal{L}^{\otimes 2}_\mathcal{M}$.
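For orientation, a hypothetical quadratic prepotential $\mathcal{F}(\zeta) = \frac{1}{2}\sum_{a,b}\tau_{ab}\,\zeta^a\zeta^b$ with a constant symmetric matrix $\tau_{ab}$ (our example, not a Calabi-Yau prepotential) illustrates the homogeneity:

```latex
\int_{B_a}\Omega_z \;=\; \frac{\partial\mathcal{F}}{\partial\zeta^a}
\;=\; \sum_b \tau_{ab}\,\zeta^b,
\qquad
\sum_a \zeta^a\,\frac{\partial\mathcal{F}}{\partial\zeta^a}
\;=\; \sum_{a,b}\tau_{ab}\,\zeta^a\zeta^b \;=\; 2\,\mathcal{F}(\zeta).
```

In this case the $B$-periods depend linearly on the $A$-periods; for actual Calabi-Yau moduli $\mathcal{F}$ is of course not quadratic, since the third derivatives in \eqref{fcal} would then vanish.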
The local holomorphic $3$-form $\Omega_z$ may be expressed in
terms of the Poincar\'e duals of the symplectic basis by:
\begin{equation} \Omega_z = \sum_{a = 1}^{b_3/2} \left(\zeta^{a}
\widehat A_a - \frac{\partial
\mathcal{F}}{\partial \zeta^a}\, \widehat B_a\right)\;. \end{equation}
(See \cite[(3.8)]{Can1}.) Further, in these
coordinates, the K\"ahler potential (\ref{KP}) of the
Weil-Petersson metric may be written as
$$K(z, \bar{z}) = - \log i \left( \sum_{a = 1}^{b_3/2} \zeta^a\overline
{\frac{\partial \mathcal{F}}{\partial \zeta^a}} - \bar{\zeta^a} \frac{\partial
\mathcal{F}}{\partial \zeta^a} \right). $$
We also have: \begin{equation}\label{fcal}\mathcal{F}_{k j}^{\bar l} =
\sum_{r=1}^{h^{2,1}} G^{r\bar l}\frac{\partial^3 \mathcal{F}}{\partial
z^r \partial z^j
\partial z^k}\;.\end{equation}
See \cite[(4.5)]{Can1} and \cite[(64)]{St1}.
\bigskip
In summary, we reproduce the table from \cite{Can1}:
\begin{equation} \label{CANTABLE} \begin{array}{ll} \mbox{Derivatives of the Basis}
\quad & \mbox{spans}
\\ & \\
\Omega & H^{3,0} \\ & \\
\mathcal{D}_j \Omega & H^{2,1} \\ & \\
\mathcal{D}_k \mathcal{D}_j \Omega = - i e^K \mathcal{F}_{k j}^{\bar{\gamma}}
\overline{\mathcal{D}_{\gamma} \Omega} & H^{1,2} \\ & \\
\mathcal{D}_k \mathcal{D}_{\bar{j}} \Omega = G_{k \bar{j}} \overline{\Omega} &
H^{0,3}
\end{array} \end{equation}
\subsubsection{\label{CCALSTUFF}$\mathcal{C}$ as the moduli
space of complex structures on $X \times T^2$}
Above, we have reviewed the geometry of the moduli space of
complex structures on the Calabi-Yau three-fold. Our configuration
space $\mathcal{C} = \mathcal{M} \times \mathcal{E}$ may be viewed as (a component
of) the moduli space of complex structures on $X \times T^2$. This
point of view is used in \cite{DD}, but because the $T^2$ factor
plays a distinguished role we do not emphasize this identification
here. Further, formula (\ref{RIEMANN})
needs to be modified for the moduli space of complex structures on
a Calabi-Yau four-fold. In \cite[Theorem~3.1]{LuS2}, the Riemann
tensor of the Weil-Petersson metric on the moduli space of a
Calabi-Yau manifold of arbitrary dimension is shown to be
\begin{equation} \label{RIEMANN4} R_{i \bar{j} k \bar{\ell}} = G_{i
\bar{j}} G_{k \bar{\ell}} + G_{i \bar{\ell}} G_{k \bar{j}} -
\frac{\langle \mathcal{D}_k \mathcal{D}_i \Omega, \overline{\mathcal{D}_{\ell} \mathcal{D}_j \Omega}
\rangle }{\int_M \Omega \wedge \overline{\Omega}} .
\end{equation}
In the case of three-folds, the vectors $\mathcal{D}_j \Omega$ span
$H^{2,1}$ (and are orthonormal in suitable coordinates, by
Proposition \ref{ONB}), so one can write the inner
product in the form (\ref{RIEMANN}).
\subsection{Hodge-Riemann form and inner products}\label{IP}
The Hodge-Riemann bilinear form on $H^3(X, {\mathbb R})$ is the
intersection form $(\alpha ,\beta)\mapsto \int_X \alpha\wedge\beta$.
We consider the sesquilinear pairing:
\begin{equation} \label{ourQ}(\alpha,\beta)\mapsto Q(\alpha,\bar\beta) = -\sqrt{-1}\, \int_X
\alpha\wedge\bar\beta\;,\quad \alpha,\beta\in H^3(X,{\mathbb C})\;.\end{equation}
An important fact is that under the Hodge
decomposition (\ref{HD}) for a given complex structure, the
Hodge-Riemann form is definite in each summand:
\begin{equation} \label{INT} (-1)^p Q(\alpha,\bar\alpha)> 0,\quad \alpha\in H^{p,
3-p}(X,{\mathbb C}),
\end{equation} whose sign depends only on the parity of $p$. (See
\cite[\S7]{GH}. Note that our definition of $Q$ has the extra
factor $-\sqrt{-1}$. The inequality \eqref{INT} holds only for
primitive forms, but in our case all harmonic 3-forms are primitive,
since we are assuming that $H^1(M,{\mathbb C})=0$.)
To restate \eqref{INT}:
\begin{prop}\label{HRPLUS} Let $\dim X = 3$, and let $b_1(X)=0$. Then for each
$z
\in
\mathcal{M}
$, the Hodge-Riemann form is positive definite on
$H^{2,1}_z \oplus H^{0, 3}_z$ and negative definite on $H^{3,0}_z
\oplus H^{1,2}_z. $ \end{prop}
By Griffiths transversality (see \eqref{FV}), for any local
holomorphic frame
$\Omega_z$, $\mathcal{D}_j \Omega_z \in H^{2,1}_z$ and these elements span
$H^{2,1}_z$. Also, $\overline{\Omega}_z$ spans $H^{0, 3}$. These forms
provide us with an orthonormal basis for $ H^{2,1}_{z} \oplus H^{0, 3}_{z}$:
\begin{prop} \label{ONB} If $\{z_j\}$ are coordinates at
$z_0$ such that $\left\{\d/{\d z_j}|_{z_0}\right\}$ are orthonormal, and if
$h_{WP}(\Omega_{z_0},
\Omega_{z_0})= 1$, then the basis $\{\mathcal{D}_j \Omega_{z_0}, \overline{\Omega}(z_0)\}$ is
a complex orthonormal basis of $ H^{2,1}_{z_0} \oplus H^{0, 3}_{z_0}$ with
respect to the Hodge Riemann form $Q.$
\end{prop}
\begin{rem} Here and below, when we say that a basis of a complex vector space is complex
orthonormal we mean that it is a complex basis and is orthonormal
for the given inner product. By a real orthonormal basis of the
same vector space we mean an orthonormal basis of the underlying
real vector space.
\end{rem}
\begin{proof} It suffices to show that:
$$\begin{array}{rl}
\rm (i) & Q( \mathcal{D}_j \Omega_z, \overline{ \mathcal{D}_k \Omega_z}) = -i \int_X
\mathcal{D}_j \Omega_z
\wedge
\overline{\mathcal{D}_k \Omega_z } = G_{j \bar{k}} e^{- K} \\ & \\
\rm (ii) & Q( \mathcal{D}_j \Omega_z, {\Omega_z}) = -i\int_X
\mathcal{D}_{j} \Omega_z
\wedge {\Omega_z} = 0 \\ & \\
\rm (iii) & Q(\overline\Omega_z, \Omega_z) = -i\int_X \overline\Omega_z
\wedge {\Omega_z} = h_{WP}(\Omega_z, \Omega_z)
\end{array}
$$
\smallskip
Equation (i) follows from (\ref{GIJ}), (ii) is by type considerations, and
(iii) follows from (\ref{HWP}).
\end{proof}
\begin{rem}
In the language of complex symplectic geometry, Proposition \ref{HRPLUS} says
that
$H^{2,1}_z \oplus H^{0, 3}_z$ is a positive complex polarization
of $H^3(X, {\mathbb C})$. Let us recall the definitions. The space
$(H^3(X, {\mathbb R}), Q)$ of real $3$-cycles with its intersection form
$Q(\alpha,\beta) = -i\int_X\alpha\wedge\beta$ is a real symplectic
vector space. After complexifying, we obtain the complex
symplectic vector space $(H^3(X, {\mathbb C}), Q). $ In general, if $(V,
\omega)$ is a real symplectic vector space and if
$(V_{{\mathbb C}}, \omega_{{\mathbb C}})$ is its complexification, a complex
Lagrangian subspace $F \subset V_{{\mathbb C}}$ is called a polarization.
The polarization is called real if $F = \overline{F}$ and complex
if $F \cap \overline{F} = \{0\}$. The polarization $F$ is called
{\it positive} if $i \omega(v, \bar{w})$ is positive definite on
$F$.
In our setting, $(V, \omega) = (H^3(X, {\mathbb R}), Q) $. We observe
that for any complex structure $z$ on $X$ (as a complex manifold),
the Hodge decomposition may be written in the form
$$H^3(X, {\mathbb C}) = F \oplus \overline{F}, \qquad F = H^{2, 1} \oplus H^{0, 3}, \qquad
\overline F = H^{3,0} \oplus H^{1, 2}, $$ where
$F$ is complex Lagrangian. By Proposition \ref{HRPLUS}, this
polarization is positive, i.e.
$$Q(v, \bar v) >0\;, \;\; v \in F\smallsetminus\{0\}\;. $$
\end{rem}
\section{\label{BG}Critical points of superpotentials}
In this section, we assemble some basic facts about critical
points and Hessians of flux superpotentials.
\subsection{Flux superpotentials as holomorphic sections}
As discussed in the previous section, $\mathcal{L} \to \mathcal{C}$ is a
negative line bundle. On a compact complex manifold, a negative
line bundle has no holomorphic sections. However, $(\mathcal{C},
\omega_{WP})$ is a non-compact, incomplete K\"ahler manifold of
finite Weil-Petersson volume (see \cite{LuS1} for the latter
statement), and the line bundle $\mathcal{L} \to \mathcal{C}$
has many holomorphic
sections related to the periods of $X \times T^2$.
The sections relevant to this article are the flux superpotentials
$W_G$ of (\ref{WSUBG})-(\ref{WGG}). $W_G$ depends on two real
fluxes $F, H \in H^3(X, {\mathbb Z})$, which we combine into a complex
integral flux
$$G = F + i H \in H^3(X, {\mathbb Z} \oplus \sqrt{-1} {\mathbb Z}).$$
The main reason to form this complex combination is that it
relates the tadpole constraint (\ref{TC}) on the pair $(F, H)$ to
the Hodge-Riemann form (\ref{HRFORM}). However, none of the subsequent
identifications preserves this complex structure, and the reader
may prefer to view $G$ as just the pair $G = (F, H) \in H^3(X, {\mathbb Z})
\oplus H^3(X, {\mathbb Z})$. Alternately, we can identify $G=F+iH\in
H^3(X,{\mathbb C})$ with the real cohomology class
$$\widetilde G := F \wedge dy - H \wedge dx \in H^4(X
\times T^2, {\mathbb R})\approx H^3(X,{\mathbb C})\;.$$
We shall consider the (real-linear) embedding
$$\mathcal{W}: H^3(X, {\mathbb C}) \to H^0(\mathcal{C}, \mathcal{L}), \qquad G\mapsto W_G\;,$$
where $W_G$ is given by formula (\ref{WGG}); i.e.,
$$\big(W_G(z,\tau),\, \Omega_z\otimes\omega_\tau\big) = \int _{X\times T^2}
\widetilde G\wedge
\Omega_z\wedge\omega_\tau\;.$$ We denote
by
$\mathcal{S}=\operatorname{Image}(\mathcal{W})$ the range of this map, and by
$$\mathcal{S}^{{\mathbb Z}}=\mathcal{W}\big(H^3(X, {\mathbb Z} \oplus i{\mathbb Z})\big)$$ the lattice of
sections satisfying the integrality condition. The map $G
\to W_G$ is not complex linear, so $\mathcal{S}$ is not a complex
subspace of
$H^0(\mathcal{M} \times \mathcal{E}, \mathcal{L})$. Rather, it is a real subspace of
dimension $2 b_3$ (over ${\mathbb R}$) and $\mathcal{S}^{{\mathbb Z}}$ is a lattice of
rank $2b_3$ in it. In fact $\mathcal{S}\approx {\mathbb R}^{2b_3}$ is
totally real in $H^0(\mathcal{C},\mathcal{L})\approx {\mathbb C}^{2b_3}$.
We choose
local holomorphic frames $\Omega_{z}$ of the Hodge bundle
$H^{3,0} \to \mathcal{M}$ and $\omega_{\tau} = dx + \tau dy$ of $H^{1,0}
\to\mathcal{E}$ and let $\Omega_{z}^*\otimes\omega_{\tau}^*$ denote
the dual co-frame of $\mathcal{L}$. A holomorphic section of $\mathcal{L}$
can then be expressed as $W = f(z, \tau) \Omega_{z}^* \otimes
\omega_{\tau}^*$ where $f \in \mathcal{O}(\mathcal{C})$ is a local holomorphic
function. If $W = W_G$ is a flux superpotential, then the
corresponding function $f_G$ is given by:
\begin{equation} \label{FGG} f_{G} (z, \tau)
= \int_{X \times T^2 } (F \wedge dy - H \wedge dx) \wedge (
\Omega_{z} \wedge \omega_{\tau}).
\end{equation}
When $\omega_{\tau} = dx + \tau dy$ (on a fundamental domain in
Teichm\"uller space), we obtain the simpler form:
\begin{equation} \label{FSUBG} f_{G}(z, \tau) = \int_X ( F + \tau H) \wedge
\Omega_{z}. \end{equation}
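The reduction from \eqref{FGG} to \eqref{FSUBG} is a matter of keeping track of signs when moving the odd-degree form $\Omega_z$ past $dx$ and $dy$, and then integrating $dx \wedge dy$ over $T^2$:

```latex
(F\wedge dy - H\wedge dx)\wedge \Omega_{z} \wedge (dx + \tau\, dy)
\;=\; (F + \tau H)\wedge \Omega_{z} \wedge dx \wedge dy ,
\qquad \int_{T^2} dx\wedge dy = 1 .
```

Here the terms with a repeated $dx$ or $dy$ vanish, and each transposition of the $3$-form $\Omega_z$ past a $1$-form contributes a sign $-1$.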
\subsection{\label{CRITHESS}Critical points and Hessians of
holomorphic sections}
As preparation for critical points of superpotentials, we recall
some basic notations and facts concerning critical points and
Hessians of holomorphic sections of a general line bundle $L\to M$ (see \cite{DSZ1}).
Let $(L, h) \to M$ be a holomorphic Hermitian line bundle, let
$e_L$ denote a local frame over an open set $U$ and write a
general holomorphic section as $s = f e_L$ with $f \in \mathcal{O}(U)$.
Recall that the Chern connection $\nabla_h$ of $h$ is given locally
as $\nabla (f e_L) = (\partial f - f
\partial K) \otimes e_L$, where $K=-\log \|e_L\|^2_h$, i.e.
\begin{equation}\label{covar}
\nabla s=\sum_{j=1}^m\left( \frac{\d f}{\d z^j} -f \frac{\d K}{\d
z^j}\right)dz^j \otimes e_L=\sum_{j=1}^m e^{K}\frac{\d }{\d
z^j}\left(e^{-K}\, f\right)dz^j \otimes e_L\;.
\end{equation}
The critical point equation thus reads,
$$ \frac{\d f}{\d z^j} -f \frac{\d K}{\d
z^j} = 0. $$
The Hessian of a holomorphic section $s$ of $(L, h) \to M$ at a critical point $Z_0$
is the tensor
$$D \nabla s(Z_0) \in T^* \otimes T^* \otimes L$$
where $D$ is a connection on $T^* \otimes L$. At a critical
point $Z_0$, $D \nabla s(Z_0)$ is independent of the choice of
connection on $T^*$. In a local frame and in local coordinates we
have
\begin{equation}\label{Hjq}
D'\nabla' s(Z_0)=\sum_{j,q}H'_{jq} dz^q\otimes dz^j\otimes
e_L,\qquad D''\nabla' s(Z_0)=\sum_{j,q}H''_{jq} d\bar z^q\otimes
dz^j\otimes e_L\,.\end{equation} The Hessian $D \nabla s(Z_0)$ at a
critical point thus determines the complex symmetric matrix $H^c$
(which we call the `complex Hessian'):
\begin{equation}\label{HmatriX} H^c:=
\begin{pmatrix} H' &H''\\[6pt] \overline{H''} &\overline{H'}
\end{pmatrix} =\begin{pmatrix} H' &-f(Z_0)\Theta\\[8pt]
-\overline{f(Z_0)\Theta} &\overline{H'}
\end{pmatrix}\;,
\end{equation} whose components are given by
\begin{eqnarray}H'_{jq} &=& (\frac{\d}{\d
z^j} - \frac{\d K}{\d z^j}) (\frac{\d}{\d z^q} - \frac{\d K}{\d z^q}) f(Z_0)\;,\label{H'}\\
H''_{jq} &=& -\left.f \frac{\d^2 K}{\d z^j\d\bar
z^q}\right|_{Z_0}=-f(Z_0)\Theta_{jq}\,,\quad
\Theta_h(Z_0)=\sum_{j,q}\Theta_{jq}dz^j\wedge d\bar
z^q\;.\label{H''}\end{eqnarray}
\subsection{\label{SUSYCRIT}Supersymmetric critical points and the
Hodge decomposition}
We now specialize to the critical point equations for flux
superpotentials $W_G(z, \tau)$. An important observation that is
now standard in the physics literature is
that the complex moduli $(z, \tau)$ at which a flux superpotential
$W_G(z, \tau)$ satisfies $\nabla W_G = 0$ are characterized by the
following special Hodge decomposition of $H^3(X, {\mathbb C})$ at $z$ (see
\cite{AD}, (3.5)--(3.8)).
A local holomorphic frame for the
Hodge bundle $\mathcal{L} \to \mathcal{C}$ is $e_\mathcal{L}= \Omega_z^*\otimes
\omega_\tau^*$, where $\Omega_z^*$ is dual to the local frame
$\Omega_z$ of the Hodge line bundle $H^{3,0}\to\mathcal{M}$ and
$\omega_\tau^*$ is dual to the local frame $\omega_{\tau} = dx + \tau
dy $ of $H^{1,0}\to\mathcal{E}$. We let $K(Z)=K_X(z)+K_{T^2}(\tau)$ be
the K\"ahler potential for the local frame $\Omega_z \otimes
\omega_{\tau}$ of the (positive) Hodge bundle $\mathcal{L}^*$. We then
have
\begin{equation}\label{Kpot}|e_{\mathcal{L}}(Z)|_h^2 = |\Omega_z \otimes
\omega_{\tau}|_{h_{WP}}^{-2}= e^{K(Z)} = e^{ K_X(z)} e^{
K_{T^2}(\tau)} \;.
\end{equation}
Hence, the Weil-Petersson K\"ahler potential on $\mathcal{C}$ is
$$K(Z)=-\log \int_X \Omega_z \wedge \overline\Omega_z - \log
(\bar{\tau} - \tau). $$ In particular, the $\tau$-covariant
derivative on $\mathcal{L}$ is given in the local frame $e_\mathcal{L}$ by
\begin{equation} \label{DELTAU} \nabla_\tau = \frac{\partial}{\partial
\tau} + \frac{1}{\bar\tau -\tau}. \end{equation}
Hence with $W_G =
f_G \,e_\mathcal{L}$, we have
\begin{eqnarray} \label{NABLATAU} \nabla_{\tau} f_{G}&=&
\int_X \left[H +
\frac{1}{\bar\tau -
\tau} (F + \tau H)\right] \wedge \Omega_z \nonumber \\
&=&\frac{1}{\bar\tau - {\tau}} \int_X ( F + \bar{\tau} H) \wedge
\Omega_z .\end{eqnarray}
To compute the $z$-derivatives, we see from \S\ref{derivatives} and
\eqref{FSUBG}--\eqref{covar} that
\begin{eqnarray} \nabla_{z^j} f_G&=&
\left(\frac{\partial f_G}{\partial
z^{j}} + \frac{\partial K}{\partial z^{j}} f_G\right ) (z,
\tau) = \int_X (F + \tau H) \wedge \left( \frac{\partial
\Omega_z}{\partial z^{j}} + \frac{\partial K}{\partial
z^{j}} \Omega_z\right) \nonumber \\ & = & \int_X (F + \tau H)
\wedge
\chi_{j} = 0, \end{eqnarray}
for $1\le j\le h^{2,1}$.
Thus, the supersymmetric critical point equations for the flux
superpotential $W_G$ read: \begin{equation} \label{CPSYSTEM}
\left\{
\begin{array}{l} \int_X (F + \tau H) \wedge \mathcal{D}_j \Omega_z =
0 \qquad (1\le j \le h^{2,1})\\ \\
\int_X (F + {\tau}H) \wedge \overline\Omega_z = 0. \end{array}
\right. \end{equation}
As in (\ref{QZ}), we denote by $\mathcal{S}_Z$ $(Z = (z, \tau))$ the
space of superpotentials $W_G$ with $\nabla W_G(Z) =0 $. Although
the equation is complex linear on $H^0(\mathcal{C},\mathcal{L})$, $\mathcal{S}$ is
not a complex subspace of $H^0(\mathcal{C},\mathcal{L})$, so
$\mathcal{S}_Z$ is a real but not complex vector space. Put another way,
for each $Z= (z, \tau)$, the critical point equation determines a
real subspace
\begin{equation} \label{H3} H^3_{Z}(X, {\mathbb C})= \mathcal{W}^{-1}(\mathcal{S}_Z)
=
\{F + i H:
\;\; F, H \in H^3(X, {\mathbb R}), \;\; (\ref{CPSYSTEM}) \; \mbox{holds}
\}. \end{equation}
The critical point equations (\ref{CPSYSTEM}) put $b_3=2(h^{2,1} +
1)$ independent real linear conditions on $2 b_3$ real unknowns
$(F,H)$.
\begin{prop} \label{POS} \cite{AD, DD} Let $G = F + i H$ with $F, H \in
H^3(X, {\mathbb R})$, and let $\langle W_G(z, \tau), \Omega_z \wedge
\omega_{\tau}\rangle
= \int_X (F + \tau H) \wedge \Omega_z $ be the associated
superpotential.
If $\nabla_{z, \tau} W_G(z, \tau) = 0$, then $(F + \tau H) \in
H^{2, 1}_z \oplus H^{0, 3}_z$. Moreover, the map $$I_{\tau} :
H^3(X, {\mathbb C}) \to H^3(X, {\mathbb C}), \;\;\;\; I_{ \tau}(F + i H) = F +
\tau H$$ restricts to give real linear isomorphisms
$$I_{z, \tau}: H^3_{z, \tau} \to H^{2, 1}_z(X) \oplus
H^{0, 3}_z(X), \;\; $$ of real vector spaces.
\end{prop}
\begin{proof} We first prove that $(F + i H) \mapsto F + \tau H$ takes
$H^3_Z$ into $H^{2, 1}_z \oplus H^{0, 3}_z$. Suppose that $\nabla W_G=0$.
Since the $\chi_j(z)$ span $H^{2,1}_{z}$, we conclude from the first equation of
\eqref{CPSYSTEM} that $(F+\tau H)^{1,2}_z=0$; by the second equation, we also have
$(F+\tau H)^{3,0}_z=0$. Thus $F + \tau H \in H^{2,1}_z \oplus H^{0, 3}_z. $
Since $I_{z, \tau}$ is injective (if $F + \tau H = 0$ with $F, H$ real,
then taking imaginary parts gives $(\Im \tau) H = 0$, hence $H = 0$ and
then $F = 0$) and since $\dim_{{\mathbb R}} H^3_{z,
\tau} = \dim_{\mathbb R} H^{2,1}_z \oplus H^{0, 3}_z = b_3$, it is an
isomorphism.
\end{proof}
\subsection{The map $(z, \tau) \to H^3_{z, \tau}$}
As $(z, \tau)$ varies over $\mathcal{C}$, how do the spaces $H^{3}_{z,
\tau}$ move in $H^3(X, {\mathbb C})$? This question is important in
relating the pure lattice point problem in $H^3(X, {\mathbb C})$ to the
vacuum distribution problem in $\mathcal{C}$. It depends on the
geometry of the diagram
\begin{equation}\label{DIAGRAM2} \begin{array}{ccccc}
\mathcal{I}& \hspace{-.6in}\subset \mathcal{C} \times H^3(X, {\mathbb C}) \\
\rho \swarrow \quad \searrow \pi & \\ \qquad
\mathcal{C} \qquad H^3(X, {\mathbb C}), \end{array} \end{equation} where
$\mathcal{I} = \{(z, \tau, F, H): F
+ i H \in H^3_{(z, \tau)} (X)\},$ which is a replica of
(\ref{DIAGRAM}) in which $\mathcal{S}$ is replaced by $H^3(X, {\mathbb C})$.
To answer this question, we first note that for
each $(z, \tau) \in
\mathcal{C}$, the real-linear map $$H^3_{z,\tau} \to
H^3(X,{\mathbb R}),\qquad F+iH\mapsto H$$ is bijective. For injectivity,
suppose $H=0$; then
$$F\in H^3_{z,\tau} \implies F\in H^{2,1}_z\oplus H^{0,3}_z
\implies F=\bar F\in H^{1,2}_z\oplus H^{3,0}_z \implies F=0.$$
Since both spaces have dimension $b_3$, injectivity implies bijectivity. Thus
there is a real linear isomorphism $\iota_{z, \tau}: H^3(X, {\mathbb R})
\to H^3_{z, \tau}$ of the form $$\iota_{z, \tau}(H) = F(z, \tau,
H) + i H\;.$$
To describe $F(z, \tau,H)$, we form the $z$-dependent basis
\begin{equation} \label{basis} \{\Re D_1 \Omega_z,\dots\,, \Re
D_{h^{2,1}}\Omega_z\,, \Re\Omega_z\,, \Im D_1 \Omega_z\,,\dots,\Im
D_{h^{2,1}} \Omega_z\,,- \Im \Omega_z \} \end{equation} of $H^3(X,
{\mathbb R}).$ We then have \begin{equation}F(z, \tau, H)= J_\tau
H\,,\end{equation} where $J_\tau$ is given by the block matrix
\begin{equation}\label{J} J_{\tau} = \begin{pmatrix} \Re \tau\, I_m
& - \Im \tau\, I_m
\\ & \\
\Im \tau\, I_m &\Re\tau\, I_m \end{pmatrix} \;\;\; (m = h^{2,1} +
1)\,,\end{equation} with respect to the basis \eqref{basis}.
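As a consistency check, $J_\tau$ is invertible: its four blocks are scalar multiples of $I_m$ and hence commute, so
$$\det J_\tau = \det\big((\Re\tau)^2 I_m + (\Im\tau)^2 I_m\big) = |\tau|^{2m} \ne 0 \qquad (\Im\tau > 0),$$
in agreement with the fact that $\iota_{z,\tau}$ is a real linear isomorphism.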
This yields the following proposition:
\begin{prop} \label{H3XR2}
The mapping $(z, \tau, H) \mapsto (z, \tau, \iota_{z,
\tau}(H))$ gives an isomorphism\\ $ \mathcal{C} \times
H^3(X, {\mathbb R}) \simeq\mathcal{I}$.
\end{prop}
An important consequence is:
\begin{prop} For any open subset $U \subset \mathcal{C}$, the cone
$\bigcup_{(z, \tau) \in U} H^3_{(z, \tau)} (X)\smallsetminus\{0\}$ is open in
$H^3(X, {\mathbb C})\smallsetminus\{0\}$.
\end{prop}
\begin{proof} We must show that
$$\pi\big[\mathcal{I}\cap \{U\times H^3(X,{\mathbb C})\}\big]\smallsetminus\{0\}$$ is open. By
Proposition
\ref{H3XR2}, it suffices to show that the image of the map
$$\iota:
U
\times [H^3(X,
{\mathbb R}) \smallsetminus\{0\}]\to H^3(X, {\mathbb C}), \;\; \iota(z, \tau, H)=\iota_{z,
\tau}(H) = F(z, \tau, H) + i
H\;,$$ is open. We fix $(z_0, \tau_0, H_0)$ and consider the
derivative $D\iota|_{z_0, \tau_0, H_0}$ on $T_{z_0, \tau_0} \mathcal{C}
\times H^3(X, {\mathbb R})$. Since the linear map $\iota_{z,\tau}$ is
bijective, if we vary $H$, we get all of
$H^3_{z, \tau}$, so the issue is to prove that we obtain the
complementary space by taking variations in $\tau, z$.
First, $H^3_{z, \tau} = I_{\tau}^{-1} (H^{2,1}_z \oplus H^{0,
3}_z).$ The $z$ variations of $H^{2,1}_z \oplus H^{0, 3}_z$ span
this space plus $H^{1,2}_z$. By \eqref{basis}--\eqref{J},
variations in $\Re \tau$, resp. $\Im \tau$, produce $\Re \Omega_z,
\Im \Omega_z$, and hence $H^{3,0}_z=\operatorname{span}(\Omega_z)$ is also in the
image.
\end{proof}
\begin{rem}
We could also ask what kind of set is swept out in
$\bigcup_{z \in U} H^{2,1}_z \oplus H^{0, 3}_z $ as $z$ ranges
over an open set $U \subset \mathcal{M}$. Since $\dim_{{\mathbb C}} U = h^{2,1}$,
the image of this map is a real codimension two submanifold.
\end{rem}
\subsection{Inner product on $\mathcal{S}_Z$}
In Theorem \ref{MAIN}, we have expressed $\mathcal{N}_{\psi}(L)$ in
terms of a Gaussian type ensemble of holomorphic sections in
$\mathcal{S}_Z$. We now specify the inner product, Gaussian measure and
Szeg\"o kernel on this space.
\begin{prop}\label{CRITPOS}
The Hodge-Riemann Hermitian inner product on $H^3(X, {\mathbb C})$
restricts for each $Z = (z, \tau)$ to define a complex valued
inner product on $H^3_Z$ which satisfies $Q_Z[G] > 0$ for all $G
\not= 0$. Moreover, the map $I_\tau: H^3_Z \to
H^{2,1}_z \oplus H^{0, 3}_z$ satisfies
$Q[I_\tau G] = \Im\tau\; Q[G]. $
\end{prop}
\begin{proof} It follows by
Proposition \ref{HRPLUS} that the symmetric bilinear form
\begin{equation}\label{QQ}Q[F + \tau H] = i^3\int_X (F + \tau H) \wedge
\overline{(F +
\tau H)} = \Im\tau\; Q[F + i H] \end{equation} on
$H^3_{z,
\tau}(X, {\mathbb C})$ in (\ref{H3}) is positive definite.\end{proof}
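The identity \eqref{QQ} follows from a short wedge computation: since $F, H$ are $3$-forms, $F \wedge F = H \wedge H = 0$ and $H \wedge F = -F \wedge H$, so
$$(F + \tau H) \wedge \overline{(F + \tau H)} = (F + \tau H)\wedge(F + \bar\tau H) = (\bar\tau - \tau)\, F \wedge H = -2i\,\Im\tau\; F\wedge H,$$
while the same computation with $\tau = i$ gives $(F + iH)\wedge\overline{(F+iH)} = -2i\, F \wedge H$.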
Recall that we have the real-linear isomorphisms
\begin{eqnarray} &H^3(X,{\mathbb C}) &\buildrel {\mathcal{W}}\over
\longrightarrow \ \mathcal{S} \subset H^0(\mathcal{C},\mathcal{L})\nonumber\\
&{I_\tau}\!\downarrow &\hspace{1.5in}.\label{realiso}\\
&H^3(X,{\mathbb C})\nonumber\end{eqnarray} where $I_\tau (F+iH)=F+\tau H$.
Restricting \eqref{realiso} to fluxes with a critical point at
$Z=(z,\tau)$, we have isomorphisms
\begin{eqnarray} &H^3_Z &\hspace{-.3in}\buildrel {\mathcal{W}}\over
\longrightarrow \ \mathcal{S}_Z\nonumber\\
&I_\tau\!\downarrow\quad&\hspace{.5in}.\label{realisoF}\\
&H^{2,1}_z \oplus H^{0,
3}_z\nonumber\end{eqnarray}
We let $\widetilde Q$ denote the Hermitian inner product on $H^{2,1}_z
\oplus H^{0, 3}_z$ transported from $(H^3_Z,Q)$ by $I_\tau$; i.e.,
\begin{equation}\label{funnyQ} \widetilde Q [C] = Q\left[I_\tau^{-1}
(C)\right]\;, \quad C\in H^{2,1}_z \oplus H^{0,
3}_z\;. \end{equation} Hence by \eqref{QQ}, we have:
\begin{equation}\label{QfunnyQ} Q [C] =(\Im\tau)\, \widetilde Q[C]\;.
\end{equation}
\section{Counting critical points: proof of Proposition
\ref{LNMO}}\label{COUNTING}
We now prove the first result on counting critical points of flux
superpotentials $W_G$ where $G$ satisfies the tadpole constraint
(\ref{TC}). Before starting the proof, we review the geometry of
the lattice point problem and the critical point problem.
We wish to count vacua in a region of moduli space as $G$ varies
over fluxes satisfying the tadpole constraint. Equivalently, we
count inequivalent vacua in Teichm\"uller space. That is, $\Gamma$
acts on the pairs $(G, Z)$ of fluxes and moduli by
$$\gamma \cdot (G, Z) = (\phi(\gamma) \cdot G, \gamma \cdot Z). $$
Therefore $\Gamma$ acts on the incidence relation (\ref{ICAL}).
We only wish to count critical points modulo the action of
$\Gamma$. To do this, there are two choices: we could break the
symmetry by fixing a fundamental domain $\mathcal{D}_{\Gamma} \subset
\mathcal{C}$ for $\Gamma$ in $\mathcal{C}$, i.e. only count critical points in
a fundamental domain. Or we could fix a fundamental domain for
$\phi(\Gamma)$ in $H^3(X, {\mathbb C})$ and count all critical points of
these special flux superpotentials. When we do not know the group
$\phi(\Gamma)$ precisely, it seems simpler to take the first
option and that is what we do in Proposition \ref{LNMO} and
Theorem \ref{MAIN}. We note that the number of critical points of
$W_G$ in Teichm\"uller space equals the number of critical points
of the $\Gamma$-orbit of $W_G$ in $\mathcal{C}$.
The level sets $Q[G] = C$ for $C
> 0$ are hyperboloids contained in $\{G: Q[G]
> 0\} $ and thus the tadpole
constraint defines a hyperbolic shell in $\{G : Q[G]
> 0\} .$ The critical point
equation $\nabla W_G(Z) = 0$ is homogeneous of degree $1$ in $G$.
Hence, summing a homogeneous function over $G \in \{G : Q[G]
> 0\} $ with $Q[G]
\leq L$ may be viewed as summing a function on the hyperboloid
$Q[G] = 1$ over the radial projections of the lattice points $G$
in the shell $Q[G] \leq L.$ The number which project over a
compact subset of $Q[G] = 1$ is finite.
\subsection{Approximating the sum by an integral}\label{LNMOPROOF}
Our main argument in the proof of Proposition \ref{LNMO} is the following
lemma:
\begin{lem}\label{sumtoint} Let $\psi = \chi_K$ where $K \subset \mathcal{I}$ is as in
Proposition \ref{LNMO}. Then
$$\mathcal{N}_{\psi}(L) = L^{b_3}\left[
\int_{\mathcal{S}} \langle C_W,\psi\rangle
\chi_{Q}(W)\, dW + O\left( L^{-1/2}\right) \right]\;.
$$
\end{lem}
\begin{proof} We consider the integer-valued function
$$f(W)=\langle C_W,
\psi
\rangle =
\sum_{\{Z:
\nabla W(Z) = 0\}} \psi(Z,W)= \#\{Z\in\mathcal{C}:(Z,W)\in K\}.$$ We note that the
characteristic function of the set $\{0 \leq Q[W] \leq L\}$ is
$\chi_Q(W/\sqrt L)$.
Using our symplectic basis to identify $ H^3(X, {\mathbb Z} \oplus
\sqrt{-1} {\mathbb Z})$ with ${\mathbb Z}^{2b_3}$, we have
$$ \mathcal{N}_{\psi}(L)= \sum_{N\in{\mathbb Z}^{2b_3}} f(N) \chi_Q(N/\sqrt L) =
\sum_{N\in{\mathbb Z}^{2b_3}} f(N/\sqrt L) \chi_Q(N/\sqrt L)
=\sum_{N\in{\mathbb Z}^{2b_3}} g(N/\sqrt L)\;,$$ where $g=f\chi_Q$; the second
equality uses the homogeneity of the critical point equation, so that
$f(N)=f(N/\sqrt L)$.
We note that $f$ is constant on each connected
component of $\mathcal{S}\smallsetminus[\mathcal{D}\cup\pi(\d K)]$. Since the number of
these components is finite, $f$ is bounded. We let
$S(\mathcal{S}_Z)=\{N\in\mathcal{S}_Z:\|N\|=1\}$, where $\|N\|$ denotes the
norm in ${\mathbb R}^{2b_3}$. Since $Q_{Z}$ is positive definite, the
sphere $S(\mathcal{S}_Z)$ is contained in the interior of the cone
$\{W\in \mathcal{S}:Q[W]\ge 0\}$. Let
\begin{equation}\label{K}A_\psi=\sup_{Z\in\rho({\operatorname{Supp\,}}\psi)}
\|Q_Z^{-1}\|<+\infty.\end{equation} Then
\begin{equation}\label{Kinv}\inf \left\{Q[W]: W\in
\bigcup_{Z\in\rho({\operatorname{Supp\,}}\psi)} S(\mathcal{S}_Z)\right\}= 1/A_\psi
>0.\end{equation} Now let
\begin{equation}\label{Q0}Q_0:= \{W: Q[W]\le 1, |W|\le
A_\psi\}\supset {\operatorname{Supp\,}} g.\end{equation}
Approximating sums by integrals, we have
$$L^{-b_3} \mathcal{N}_{\psi}(L)= L^{-b_3}\sum_{N\in{\mathbb Z}^{2b_3}} g(N/\sqrt L) = \int_{{\mathbb R}^{2b_3}}
g(W)\,dW+\sum_{N\in{\mathbb Z}^{2b_3}} E_{N,L},$$ where \begin{eqnarray*}
E_{N,L}& =& \int_{\mathcal{R}_{N,L}} [g(N/\sqrt L)-g(W)]\,dW,\\&& \mathcal{R}_{N,L}
=\{W=(W_1,\dots,W_{2b_3})\in{\mathbb R}^{2b_3}: N_j/\sqrt L<W_j < (N_j+ 1)/\sqrt L\}.\end{eqnarray*}
Let $$B=Q_0\cap[\d Q\cup\mathcal{D}\cup\pi(\d K)]\;.$$
Since $g$ is locally constant on $\mathcal{S}\smallsetminus B$, the error $E_{N,L}$ vanishes whenever
$\mathcal{R}_{N,L}\cap B=\emptyset$.
Hence
$$\left|\sum_{N\in{\mathbb Z}^{2b_3}} E_{N,L}\right| \le (\sup f)
L^{-b_3}\big[ \#\{N: \mathcal{R}_{N,L}\cap B\neq \emptyset\}\big]= L^{-b_3}\,O\left
(\sqrt{L}^{2b_3 - 1}\right)=O(L^{-1/2}),$$
since $B$ is a compact hypersurface-type set and the number of cubes of side
$1/\sqrt L$ meeting it is $O\big(\sqrt{L}^{\,2b_3-1}\big)$.
\end{proof}
\subsubsection{The index density}
By applying precisely the same argument to $\mathcal{I} nd_{\psi}(L)$,
we obtain
\begin{lem}\label{ICALFIRST} Let $\psi = \chi_K$ where $K \subset \mathcal{I}$ is as in
Proposition \ref{LNMO}. Then
$$\mathcal{I} nd_{\psi}(L) =
L^{b_3}\left[
\int_{\{Q[W]\le 1\}}\langle Ind_W, \psi \rangle \,dW + O\left( L^{-1/2}\right) \right]\;.
$$
\end{lem}
\medskip
\subsubsection{Non-clustering of critical points} Before concluding
the proof of Proposition \ref{LNMO}, we briefly consider the
question of whether there exist real hypersurfaces $\Gamma \subset
\mathcal{C}$ with the property that $\sim \sqrt{L}^{2b_3 - 1}$ critical
points of norm $\leq L$ cluster within a $1/L$ tube around
$\Gamma$. A domain in $\mathcal{C}$ whose boundary contained a piece of
$\Gamma$ would attain the remainder estimate in Proposition
\ref{LNMO}.
Since the number of critical points corresponding to $G \in H^3(X,
{\mathbb Z} \oplus \sqrt{-1} {\mathbb Z})$ is bounded, such clustering of critical
points could only occur if a sublattice of rank $2b_3 - 1$
clustered around the hypersurface \begin{equation} \label{GAMMA}
\bigcup_{(z, \tau) \in \Gamma} H^3_{z, \tau} \subset H^3(X, {\mathbb C}).
\end{equation}
There do exist real hypersurfaces in $H^3(X, {\mathbb C})$ for which such
exceptional clustering occurs, namely hyperplanes containing a
sublattice of rank $2b_3 - 1$. We refer to such a hyperplane as a
rational hyperplane $L$. For instance, any pair of integral cycles
$\gamma_1, \gamma_2$ defines a rational hyperplane
$$L = L_{\gamma_1, \gamma_2} = \{G = F + i H \in H^3(X, {\mathbb C}): \ell(F + i H):= \int_{\gamma_1} F + \int_{\gamma_2} H = 0\}.$$
As mentioned in the introduction, projections of the lattice
points $H^3(X, {\mathbb Z} \oplus \sqrt{-1} {\mathbb Z})$ to $\d Q$ concentrate to
sub-leading order $\sqrt{L}^{2b_3 - 1}$ around the hypersurface of
$\d Q$ obtained by intersecting it with a rational hyperplane.
However, rational hyperplanes never have the form (\ref{GAMMA}).
Indeed, under the correspondence $\rho \circ \pi^*$ defined by
the diagram (\ref{DIAGRAM2}), the image of a hyperplane always
covers a region and not a hypersurface of $\mathcal{C}$. That is,
$$ \dim ( L \cap H^3_{z, \tau}) > 1 \;\; \forall (z, \tau) \in \mathcal{C}. $$
Indeed, under the identification $H^3_{z, \tau} \simeq H^3(X, {\mathbb R})$,
the defining functional $\ell$ of $L$ restricts to the real linear functional $\ell(H)
= \int_{\gamma_1} F(z, \tau, H) + \int_{\gamma_2} H$ on $H^3(X,
{\mathbb R})$. Here, we use that $F(z, \tau, H)$ is linear in $H$. Hence,
$\dim L \cap H^3_{z, \tau} \geq b_3 - 1$ for any $(z, \tau)$.
As will be studied in \cite{Z2}, clustering to order
$\sqrt{L}^{2b_3 - 1}$ can only occur if the second fundamental
form of (\ref{GAMMA}) is completely degenerate. Hence the fact
that rational hyperplanes never have this form is strong evidence
that there are no smooth hypersurfaces $\Gamma \subset \mathcal{C}$ for
which lattice points cluster to subleading order around
(\ref{GAMMA}).
\subsection{\label{HDCP}Hessians and density of critical points}
The final step in the proof of Proposition~\ref{LNMO} is to change
the order of integration over $\mathcal{C}$ and over $\mathcal{S}_Z$:
\begin{lem} \label{DSZFORM} We have:
$$\int_{\{Q[W]\le 1\}}\langle C_W, \psi \rangle \,dW= \int_{\mathcal{C}}
\int_{\mathcal{S}_Z}\psi(Z,W)\, |\det H^c W(Z)|\,
\chi_{Q_{Z}}(W) \,dW\,d{\operatorname{Vol}}_{WP}(Z). $$
\end{lem}
Combining the formulas in Lemmas \ref{sumtoint} and \ref{DSZFORM},
we obtain the formula of Proposition~\ref{LNMO}.
The proof of Lemma \ref{DSZFORM} is in two parts. The first is an elementary
exercise in changing variables in an integral, which we accomplish below by relating
both sides to pushforwards from the incidence variety in the diagram
(\ref{DIAGRAM}). The second part involves special geometry, and is given in the
next section.
We may
interpret the integral
$$\int_{\{Q[W]\le 1\}}\langle C_W, \psi \rangle \,dW$$
as an integral over $\mathcal{I}$ as follows. Implicitly, it defines a
measure
$d\mu_{\mathcal{I}}$ so that
\begin{equation} \label{CW} \int_{\mathcal{I}} \psi(Z,W)\, d\mu_{\mathcal{I}} =
\int_{\{Q[W]\le 1\}}\langle C_W, \psi \rangle \,dW.
\end{equation}
The measure $d\mu_{\mathcal{I}}$ may be expressed in terms of the Leray
measure $d \mathcal{L}_{\mathcal{I}}$ defined by a measure $d\nu$ on $\mathcal{S}$
and the `evaluation map'
$$\epsilon: (Z,W) \in \mathcal{C} \times \mathcal{S} \to \nabla W(Z). $$
The Leray form is the quotient $d \mathcal{L}_{\mathcal{I}}:= \frac{ dV_{WP}\times d\nu }{d
\epsilon}$, i.e. the unique form satisfying
$$d \mathcal{L}_{\mathcal{I}} \times d \epsilon = dV_{WP}\times d\nu. $$
This measure is often written $\delta(\nabla W(Z)) dW dV(Z)$ in the physics
literature.
As suggested by the physics formula, $d\mu_{\mathcal{I}} = \nabla s(Z)^*
\delta_0.$ However, this formula is somewhat ambiguous. If we
regard $s$ as fixed, then it is simply the pullback of $\delta_0$
under $Z \to \nabla s(Z).$ It is then well-known that
\begin{equation} \label{PB} \nabla s^* \delta_0 = \sum_{Z: \nabla s(Z) = 0}
\frac{\delta_Z}{|\det H^c s(Z)|}. \end{equation} However, when
interchanging the order of integration, we really wish to think of
it as a function of $s$ for fixed $Z$. So we now have a function
$\epsilon_Z(s) = \nabla s(Z)$ which may be viewed as $$
\epsilon_Z: \mathcal{S} \to {\mathbb C}^m \equiv {\mathbb R}^{b_3}, $$ where $m=h^{2,1}+1=
{\frac{1}{2}} b_3$. So now the zero set $\epsilon_Z^{-1}(0)$ is the
subspace $\mathcal{S}_Z$ rather than the discrete set $Crit(s)$.
To simplify the notation,
we now consider the general situation where we have a real
$n$-dimensional manifold
$M$ and a space $\mathcal{S}$ of functions $F : M \to {\mathbb R}^n$. In
our case,
$F=
\nabla s$ and
$M$ is a coordinate neighborhood in $\mathcal{C}$ where $M$ has local coordinates
$(x_1,\dots,x_n)$ and
$\mathcal{L}$ has a local frame. Suppose that
$ 0$ is a regular value of $F$, so that $F$ is
a local diffeomorphism in a neighborhood $U$ of any point $x_0$ of $F^{-1}(0)$. Let
$h=F_{|U}^{-1}$ in a neighborhood of $0$. Then for
$\phi$ supported in a neighborhood of $x_0$, put
$$\langle F^* \delta_0, \phi \rangle = \langle \delta_0,
\phi(h(y)) |\det d h (y)| \rangle. $$
Let $\dim_{\mathbb R}\mathcal{S} = d\ge n$. In our case, $d =2b_3>n=b_3$, so we introduce a
supplementary linear map: for a point $u\in U\subset M$,
$\mathcal{S}_u$ is the kernel of $\epsilon_u$, and we supplement $\epsilon_u$ with
the projection $\Pi_u: \mathcal{S} \to \mathcal{S}_u.$ Then,
$$(\epsilon_u, \Pi_u) : \mathcal{S} \to {\mathbb R}^n \oplus \mathcal{S}_u $$
is a linear isomorphism. Being linear, it equals its own derivative, so
$$\begin{array}{lll} \langle \epsilon_u^{*} \delta_0, \phi \rangle &
=& \langle \delta_0, \phi((\varepsilon_u, \Pi_u)^{-1}) |\det
(\varepsilon_u, \Pi_u)^{-1} | \rangle.
\end{array}$$
Now, $\mathcal{S}$ is equipped with an inner product, which induces an inner product
on ${\mathbb R}^n\oplus \mathcal{S}_u$. We choose an orthonormal basis $\{S_1,
\dots, S_n\}$ of $\mathcal{S}_u^{\perp}, $ and $\{S_{n + 1}, \dots, S_d \}$ for
$\mathcal{S}_u$. Since $\Pi_u : \mathcal{S}_u \to \mathcal{S}_u$ is the identity,
$(\varepsilon_u, \Pi_u)$ has a block diagonal matrix relative to the bases of
$\mathcal{S}=\mathcal{S}^\perp_u\oplus \mathcal{S}_u$ and ${\mathbb R}^n\oplus \mathcal{S}_u$, with the identity in
the
$\mathcal{S}_u $-$\mathcal{S}_u$ block. Hence,
$\det (\varepsilon_u, \Pi_u) = \det( \varepsilon_u|_{\mathcal{S}^\perp})$ where the determinant is with
respect to these bases.
The general case of formula (\ref{CW}) states
that
\begin{equation} d\mu_{\mathcal{I}} = |\det D W(u)| \times
\frac{\chi_Q du\times dW }{d \epsilon}. \end{equation}
We then compute the $\mathcal{I}$ integral as an iterated integral
using the other singular fibration $\pi$, i.e. by first
integrating over the fibers $\mathcal{S}_u$: \begin{equation}\label{Iint0}\int_{\mathcal{I}}
\psi(u) d\mu_{\mathcal{I}} =
\int_U \int_{\mathcal{S}_u}
\frac{\psi(u)}{|\det( \varepsilon_u|_{\mathcal{S}_u^\perp})|} \chi_{Q_u}(W) |\det
DW(u)|\,dW\,du\;.\end{equation}
Returning to our case where $F=\nabla s$, \eqref{Iint0} becomes
\begin{equation}\label{Iint} \int_{\mathcal{I}} \psi(Z) d\mu_{\mathcal{I}}=
\int_{\mathcal{C}} \int_{\mathcal{S}_Z}\frac{\psi(Z,W)}{|\det(
\varepsilon_Z|_{\mathcal{S}_Z^\perp})|}
|\det H^c W(Z)|
\, \chi_{Q_{Z}}(W) \,dW\,d{\operatorname{Vol}}_{WP}(Z). \end{equation}
\subsection{Completion of the proof of Lemma \ref{DSZFORM}} To complete
the proof of the lemma, we need to show that
$|\det( \varepsilon_Z|_{\mathcal{S}_Z^\perp})| = 1$ with respect to normal coordinates
and an adapted frame at $Z_0=(z_0,\tau_0)\in M$.
Recalling \eqref{realiso}--\eqref{realisoF}, we write
$$\widetilde\mathcal{S}_Z^\perp=
I_\tau\circ\mathcal{W}^{-1}(\mathcal{S}_Z^\perp)=H^{3,0}_z\oplus H^{1,2}_z\;.$$
A complex orthonormal basis for $\widetilde \mathcal{S}_{Z_0}^\perp$ relative
to $Q$ is $\{\bar\chi_0,\bar\chi_1,\dots,\bar\chi_{h^{2,1}}\}$,
where $\chi_0=
\overline\Omega_{z_0}$. A basis (over ${\mathbb R}$) for $\mathcal{S}_{Z_0}^\perp$
is $$
\overline{U}_j:= \mathcal{W}\circ I_\tau^{-1}(\bar\chi_j),\quad \
\overline{V}_j:= \mathcal{W}\circ
I_\tau^{-1}(\sqrt{-1}\,\bar\chi_j),\qquad 0\le j\le h^{2,1}\;.$$
The basis $\{\overline{U}_j,\ \overline{V}_j\}$ is orthogonal with
respect to $Q_{Z_0}$, but not orthonormal. By \eqref{QfunnyQ}
\begin{equation}\label{imtau1}Q[\overline{U}_j]=\widetilde Q[\bar\chi_j]
= \frac 1{\Im\tau }\, Q[\bar\chi_j]=\frac
1{\Im\tau },\qquad Q[\overline{V}_j]=\widetilde
Q\left[\sqrt{-1}\,\bar\chi_j\right] = \frac
1{\Im\tau }.\end{equation}
To compute $\det( \varepsilon_{Z_0}|_{\mathcal{S}_{Z_0}^\perp})$, we let
$(z_1,\dots,z_{h^{2,1}})$ be normal coordinates about
$z_0\in\mathcal{M}$, and we let $\nabla_j f$ be given by
$$\nabla_{\d/\d z^j}(fe_\mathcal{L}) = (\nabla_j f)\otimes e_\mathcal{L},$$ for
$1\le j\le h^{2,1}$. We find it convenient to use the coordinate
$\tau\in\mathcal{E}$, although it is not normal, and we use the
normalized covariant derivative
\begin{equation}\label{nabla0}\nabla_0:=
(\Im\tau)\,\nabla_\tau.\end{equation} Now we write
$$\overline{U}_j=f_j(z)\,\Omega_z^*\otimes\omega_\tau^*,\quad
\overline{V}_j=g_j(z)\,\Omega_z^*\otimes\omega_\tau^*,$$ where the local frame
$\Omega_z$ is normal at $z_0$, and $\omega_\tau = dx+\tau\,dy$. Note
that the Weil-Petersson norm $|\omega_\tau^*|$ is given by
\begin{equation}\label{imtau2}|\omega_\tau^*| = |dx+\tau\,dy|^{-1} = \frac
1{(\Im\tau)^{1/2}}\ .\end{equation}
Taking into account \eqref{imtau1}--\eqref{imtau2}, the $\Im\tau $
factors cancel out, and we obtain
$$\det( \varepsilon_{Z_0}|_{\mathcal{S}_{Z_0}^\perp}) =\det\left.
\begin{pmatrix}\Re\nabla_j f_k & \Re\nabla_jg_k\\ \\
\Im\nabla_j f_k& \Im\nabla_jg_k\end{pmatrix}\right|_{{Z_0}},\quad
\mbox{for }\ {0\le j,k\le h^{2,1}} \;.$$
We now evaluate the entries of the matrix. By Proposition \ref{ONB}, we
have
$$\nabla_{k} f_j(Z) =
\int_X
\overline{\mathcal{D}_j
\Omega_{z_0}}
\wedge \mathcal{D}_k
\Omega_{z}, \quad \nabla_{k} g_j(Z) = \int_X
i\,\overline{\mathcal{D}_j
\Omega_{z_0}}
\wedge \mathcal{D}_k
\Omega_{z}, $$ and hence
$$\nabla_j f_k(Z_0) =-i\, \delta_{jk}, \quad \nabla_j g_k(Z_0) = \delta_{jk},
\qquad\mbox{for }\ j,k\ge 1.$$
Also $$\nabla_kf_0=\int_X \Omega_{z_0}\wedge[\mathcal{D}_k\Omega_{z_0}-(\d K/\d
z_k)\Omega_{z_0}] =0,
\quad \nabla_kg_0=i\nabla_kf_0=0\qquad\mbox{for }\ k\ge 1.$$
By \eqref{NABLATAU}, we have $$\nabla_0(f_j)=(\Im \tau)\,\nabla_\tau(f_j) =\int_X
\mathcal{D}_j\Omega_{z_0}\wedge \Omega_{z_0} =0,
\quad
\nabla_0(g_j) = -i\int_X\Omega_{z_0}\wedge \Omega_{z_0} =0,\quad j\ge 1,$$
and $$\nabla_0(f_0) = \int_X\overline{\Omega_{z_0}}\wedge \Omega_{z_0} =i,\quad
\nabla_0(g_0) = \int_X\overline{i\Omega_{z_0}}\wedge \Omega_{z_0} =1.$$
Therefore,
$$|\det( \varepsilon_{Z_0}|_{\mathcal{S}_{Z_0}^\perp})| =
\left|\det
\begin{pmatrix}0&I\\ \\
D(1,-1,\dots,-1)&0\end{pmatrix}\right|=1\;.$$
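To verify the final determinant: writing $D = D(1,-1,\dots,-1)$ for the diagonal matrix of size $m = h^{2,1}+1$, we may factor
$$\begin{pmatrix} 0 & I \\ D & 0\end{pmatrix} = \begin{pmatrix} 0 & I \\ I & 0\end{pmatrix}\begin{pmatrix} D & 0 \\ 0 & I\end{pmatrix}, \qquad \det = (-1)^m \det D = \pm 1,$$
so the absolute value is indeed $1$.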
\qed
\section{\label{MAINPROOF}Proof of Theorem \ref{MAIN}}
In this section we prove Theorem \ref{MAIN}, which is a combination of an
equidistribution theorem for radial projections of lattice points and an
equidistribution theorem for critical points.
\subsection{\label{LVDC}A local van der Corput Theorem} We first illustrate the
method of proof of Theorem \ref{MAIN} by providing a van der
Corput type asymptotic estimate for the radial distribution of
lattice points (Theorem \ref{localvdC}). The estimate has much in
common with the classical van der Corput estimate of Hlawka,
Randol and others on lattice points in dilates of smooth convex
sets (see, for example, \cite{Ran, H}), and we adapt the proof of
the classical estimate to obtain our asymptotic equidistribution
theorem.
Let $Q\subset {\mathbb R}^n$ ($n\ge 2$) be a bounded, smooth, strictly
convex set with $0\in Q^\circ$. Let $|X|_Q$ denote the norm of
$X\in {\mathbb R}^n$ given by \begin{equation}\label{Q}Q=\{X\in{\mathbb R}^n:|X|_Q<1
\}\,.\end{equation} To measure the equidistribution of projections of
lattice points, we consider the sums
$$S_f (t) = \sum_{k \in {\mathbb Z}^n\cap tQ\smallsetminus\{0\}} f\left(\frac{k}{|k|_Q}\right),
\quad \mbox{with } \ f \in C^{\infty}(\d Q),\ t>0. $$ We extend
$f$ to ${\mathbb R}^n$ as a homogeneous function of degree $0$, so that
$f(k)=f\left(\frac{k}{|k|_Q}\right)$. Our purpose is to obtain the
following asymptotics of $S_f(t)$:
\begin{theo}\label{localvdC} $$S_f(t) =
t^n\int_Q f\,dX + O(t^{n - \frac{2n}{n+1}}), \;\; t \to \infty. $$
\end{theo}
From this it is simple to obtain asymptotics of $S_f(t)$ when $f
\in C^{\infty}(\d Q)$ is extended as a homogeneous function of any
degree $\alpha$ to ${\mathbb R}^n$:
\begin{cor}\label{homovdC} Let $f\in\mathcal{C}^\infty({\mathbb R}^n\smallsetminus\{0\})$ be
homogeneous of degree $\alpha> 0$, and let
$$S_f(t)= \sum_{k\in {\mathbb Z}^n\cap tQ} f(k)\;,\qquad t>0\,.$$
Then $$S_f(t) = t^{n + \alpha} \int_Q f\,dX +
O(t^{n - \frac{2n}{n+1} + \alpha}), \;\; t \to \infty.
$$
\end{cor}
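To make Theorem \ref{localvdC} concrete in the simplest case, take $n = 2$, $Q$ the unit disk and $f \equiv 1$; then $S_f(t)$ counts nonzero lattice points in the disk of radius $t$ (the Gauss circle problem), and the theorem predicts $S_f(t) = \pi t^2 + O(t^{2/3})$. The following brute-force check is an illustration only, not part of the argument; the implied constant in the remainder is not claimed to be $1$.

```python
import math

def count_in_disk(t):
    """Count nonzero integer points k in Z^2 with |k| <= t."""
    r = int(t)
    total = 0
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            if (i, j) != (0, 0) and i * i + j * j <= t * t:
                total += 1
    return total

for t in (10.0, 30.0, 50.0):
    S = count_in_disk(t)
    main = math.pi * t * t      # main term: t^n vol(Q)
    bound = t ** (2.0 / 3.0)    # remainder exponent: n - 2n/(n+1) = 2/3
    print(t, S, round(main, 1), round(abs(S - main), 1), round(bound, 1))
```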
\subsubsection{Littlewood-Paley}
To deal with the singularity of $f$ at $X = 0$ we use a dyadic
Littlewood-Paley decomposition in the radial direction. Let $\eta
\in C_0^{\infty}$ with $\eta(r) = 1$ for $r \leq 1$ and with
$\eta(r) = 0$ for $r \geq 2.$ We then define
$$\rho \in C_0^{\infty}({\mathbb R}),\;\; \rho (r ) = \eta(r) -
\eta(2 r). $$ Then $\rho(r)$ is supported in the shell $1/2 \leq
r \leq 2$, hence $\rho(2^j r)$ is supported in the shell $2^{- j -
1} \leq r \leq 2^{ -j + 1}$. We then have:
$$\eta(r) = \sum_{j = 0}^{\infty} \rho(2^{j} r),\;\;\; (r \not= 0). $$
Indeed,
$$\sum_{j = 0}^{J} \rho (2^j r) = \eta(r) - \eta(2^{J+1}
r) \to \eta(r),$$ since $\eta(2^{J+1} r) = 0$ as soon as $2^{J+1} r \geq 2$.
We then write
\begin{eqnarray} S_f(t)& = &
\sum_{k \in {\mathbb Z}^n} f(k) \chi_{[0,1]}(\frac{|k|_Q}{t})\ = \
S'_f(t)+S''_f(t), \nonumber \\&& S'_f(t) =
\sum_{k \in {\mathbb Z}^n} f(k) \eta(\frac{|k|_Q}{t})
,\\ && S_f''(t)\ = \ \sum_{k \in {\mathbb Z}^n} f(k) (\chi_{[0,1]} -
\eta)(\frac{|k|_Q}{t}).\label{twosums}
\end{eqnarray}
We can assume without loss of generality that $f\ge 0$. We begin
with the first sum $S'_f(t)$:
\begin{lem}\label{first}
$$ S'_f(t) = t^n \int_{{\mathbb R}^n}
f(X) \eta (|X|_Q) dX + O(\log t).
$$
\end{lem}
\begin{proof}
We write the sum as
$$ S'_f(t)=\sum_{j = 0}^{\infty} \sum_{k \in {\mathbb Z}^n} f(k)
\rho (\frac{2^j |k|_Q}{t}).$$
We further break up the dyadic sum into $\sum_{j = 0}^{J(t)} +
\sum_{j = J(t) + 1}^{\infty}$ with $J(t)$ to be determined later.
We first consider
$$S'_1:=\sum_{j = 0}^{J(t)} \sum_{k \in {\mathbb Z}^n} f(k) \rho
(\frac{2^j |k|_Q}{t}).$$ The function $f(X) \rho (2^j |X|_Q) \in
C_0^{\infty}({\mathbb R}^n)$ when $f$ is homogeneous of degree $0$ and
smooth on $\d Q$. Hence we may apply the Poisson summation formula
to the $k$ sum to obtain
$$ S'_1=\sum_{j = 0}^{J(t)} \sum_{N \in {\mathbb Z}^n} \int_{{\mathbb R}^n} e^{- i
\langle X, N \rangle} f(X) \rho (\frac{2^j |X|_Q}{t}) dX.$$
The terms with $N = 0$ sum up to \begin{eqnarray*}t^n \int_{{\mathbb R}^n}
f(X) \left\{ \sum_{j = 0}^{J(t)} \rho(2^j |X|_Q)\right\} dX &=&
t^n \int_{{\mathbb R}^n} f(X) \left\{
\eta(|X|_Q)-\eta(2^{J(t)+1}|X|_Q)\right\} dX\\ &=& t^n
\int_{{\mathbb R}^n} f(X) \eta(|X|_Q) dX +O(t^n2^{-nJ(t)}),\end{eqnarray*}
where the last estimate is a consequence of the fact that
$\eta(2^{J(t)+1}|X|_Q)$ is supported on $2^{-J(t)}Q$.
To estimate the remaining terms in the sum $S'_1$, we make the
change of variables $Y = 2^{j} X/ t$ in the integral to obtain
$$ 2^{- n j} t^n \int_{{\mathbb R}^n}
f(Y) \rho ( |Y|_Q) e^{- i 2^{-j} t \langle Y, N \rangle} dY.$$
Since the integrand is smooth, this term has the upper bound $$
c\, 2^{-n j} t^n (1 + 2^{-j} |N| t)^{-K},\;\;\; \forall K > 0.$$
(Again, we let $c$ denote a constant; $c$ depends on $f$ and $K$,
but is independent of $j,t,N$.) The sum over $N\neq 0$ is then
bounded by
\begin{eqnarray*} c\,t^n \sum_{j \leq J(t)} 2^{- n j} \sum_{N
\not= 0} (1 + 2^{-j} |N| t)^{-K} & \sim & t^n \sum_{j \leq
J(t)} 2^{- n j} \int_0^{\infty} (1 + 2^{-j} r t)^{-K} r^{n-1} dr
\\ & = &\sum_{j \leq J(t)}\int_0^{\infty} (1 + s)^{-K} s^{n-1} ds
\ = \ c\,J(t). \end{eqnarray*} Therefore
$$S'_1= t^n \int_{{\mathbb R}^n}
f(X) \eta(|X|_Q) dX +O(t^n2^{-nJ(t)}) + O(J(t)).$$
Recall that $S'_f(t)=S'_1+S'_2$, where
$$S'_2= \sum_{j = J(t) + 1}^{\infty} \sum_{k \in {\mathbb Z}^n}
f(\frac{k}{|k|_Q}) \rho (\frac{2^j |k|_Q}{t}).$$ Since $$\sum_{j =
J(t) + 1}^{\infty} \rho (\frac{2^j |k|_Q}{t}) =
\eta\left(\frac{2^{J(t)+1} |k|_Q}{t}\right) \le
\chi_{t2^{-J(t)}Q}\;,$$ the remainder $S'_2$ is bounded by $(\sup f)$
times the number of lattice points in the ball $|k|_Q \leq 2^{- J(t)}
t$, hence is of order $t^n 2^{- n J(t)}$. It follows that
\begin{equation}\label{firstterm}S'_f(t)= t^n \int_{{\mathbb R}^n}
f(X) \eta(|X|_Q) dX +O(t^n2^{-nJ(t)}) + O(J(t)).\end{equation} To
balance the terms, we choose $J(t) = \log_2 t$, and then the last
two terms of \eqref{firstterm} have the form
$$O(t^nt^{-n})+O(\log t) = O(\log t).$$
\end{proof}
\subsubsection{Stationary phase.}
Theorem \ref{localvdC} is an immediate consequence of Lemma
\ref{first} and the following asymptotics of the second sum
$S''_f(t)$ from \eqref{twosums}:
\begin{lem}\label{remainder}
$$S''_f(t)= t^n \int_{{\mathbb R}^n}
f(X) (\chi_{[0,1]} - \eta) (|X|_Q) dX + O(t^{n - \frac{2n}{n +
1}}).$$ \end{lem}
\begin{proof}
Let $$g(X)=f(X)(\chi_{[0,1]} -\eta)(|X|_Q)$$ and mollify $g$ by a
radial approximate identity $\phi_{\epsilon}$ to obtain a smooth
approximation $g_\varepsilon = g*\phi_\varepsilon$. We claim that
\begin{equation}\label{smoothing}
S_f''(t)=\sum_{k\in{\mathbb Z}^n}g\left(\frac kt\right) =
\sum_{k\in{\mathbb Z}^n}g_\varepsilon\left(\frac kt\right) +O(\varepsilon t^n)\;.
\end{equation} To see this, we break the sum into two parts. The
first part is over the lattice points $k$ with $k/t$ in an
$\epsilon$ tube $T_\varepsilon$ about $\{|X|_Q=1\}$. The number of such
$k$ is $O(\varepsilon t^n)$, so this part contributes the stated error.
For the remaining sum, the error is
$$\left|\sum_{k\in{\mathbb Z}^n\smallsetminus tT_\varepsilon}\left[g\left(\frac kt\right) -
g_\varepsilon\left(\frac kt\right)\right]\right| \le \sum_{k/t\in{\operatorname{Supp\,}}
g\smallsetminus T_\varepsilon}\varepsilon\sup_{|X|_Q>1}|dg(X)| =O(\varepsilon t^n),$$ which verifies
\eqref{smoothing}.
The Poisson summation formula then gives
$$\sum_{k\in{\mathbb Z}^n} g_{\epsilon} (k/t) = t^n \sum_{N\in{\mathbb Z}^n}
\hat{g}_{\epsilon}(2 \pi t N) = t^n \sum \hat{g}(2 \pi t N)
\hat{\phi}(2 \pi t \epsilon N). $$ The term $N = 0$ yields
$$t^n\int_{{\mathbb R}^n}g_\varepsilon(X)dX= t^n \int_{{\mathbb R}^n}
f(X) (\chi_{[0,1]} - \eta) (|X|_Q) dX +O(\varepsilon t^n),$$ where the
last estimate follows by breaking up the integral into two parts as
above.
As for the remainder terms $N\neq 0$, we now show that
\begin{equation}\label{SP} \hat{g}(2 \pi t N) =
O\left( (1+|tN|)^{-\frac{(n+1)}{2}}\right) \;.\end{equation} To
verify \eqref{SP}, we write \begin{eqnarray*} g &=& -f\rho h\ =\
-(f\rho) (h\eta_2)\,, \qquad \mbox{with} \quad \eta_2(X)=\eta
(\textstyle{\frac{1}{2}}|X|_Q)\,,\quad h=\theta\circ\lambda,\\&& \lambda(X) =|X|_Q
-1\,,\quad \theta(t)=
\mbox{Heaviside function}=\left\{\begin{array}{ll}0,& \ \mbox{if }\ t<0,\\
1,& \ \mbox{if }\ t\ge 0.\end{array}\right.\end{eqnarray*} Since
$\widehat g = - \widehat{f\rho} * \widehat {h\eta_2}$ and $\widehat {f\rho}$ is
rapidly decaying, it suffices to show that $\widehat {h\eta_2}$
satisfies \eqref{SP}. (Here, we use the elementary estimate
$\|\alpha*\beta\|_{(K)} \le c \|\alpha\|_{(K+n+1)} \|\beta\|_{(K)}$, where
$\|\alpha\|_{(K)} = \sup_{x\in{\mathbb R}^n}(1+|x|)^K |\alpha(x)|$.) Taking partial
derivatives,
$$\mathcal{D}_j(h\eta_2) = h\,\mathcal{D}_j\eta_2 +\eta_2\,(\delta_0\circ\lambda)\,\mathcal{D}_j\lambda.$$
Since the latter term is given by integration over $\d Q$, which
is strictly convex, the standard stationary phase method (see
H\"ormander \cite{Ho}) immediately gives
$(\delta_0\circ\lambda)\raisebox{2pt}{$\widehat{\ }$}(x) = O(|x|^{-\frac{(n-1)}{2}})$, and hence
$$x_j\,\widehat{h\eta_2}=[\mathcal{D}_j(h\eta_2)]\raisebox{2pt}{$\widehat{\ }$} =
O\left(|x|^{-\frac{(n-1)}{2}}\right),$$ which implies \eqref{SP}.
Hence the remainder is bounded above by
\begin{equation}\label{sum}c\, t^n \sum_{N \not= 0} (1+|t N|)^{-\frac{(n+1)}{2}}
(1 + | \epsilon tN|)^{-K}.\end{equation} The sum \eqref{sum} can
be replaced by the integral
\begin{eqnarray*} c\,t^n \int_{{\mathbb R}^n}(1+|t N|)^{-\frac{(n+1)}{2}} (1 + |
\epsilon tN|)^{-K}\,dN &=& c\,t^n \int_0^\infty
(1+t r)^{-\frac{(n+1)}{2}} (1 + \epsilon tr)^{-K}r^{n-1}\,dr\\
&=& c\,\varepsilon^{\frac {1-n}2}\int_0^\infty
(\varepsilon+s )^{-\frac{(n+1)}{2}} (1+ s)^{-K}s^{n-1}\,ds\\
&\le& c\,\varepsilon^{\frac {1-n}2}\int_0^\infty
(1+ s)^{-K}s^{\frac{n-3}2}\,ds \ =\ c\,\varepsilon^{\frac {1-n}2}.
\end{eqnarray*}
Hence $$S''_f(t)= t^n \int_{{\mathbb R}^n} f(X) (\chi_{[0,1]} - \eta)
(|X|_Q) dX + O(\varepsilon t^n) +O(\varepsilon^{-(n-1)/2}).$$ To optimize, we
choose $\varepsilon$ so that $\varepsilon t^n = \varepsilon^{-(n-1)/2}$, i.e.\ $
\varepsilon = t^{-2n/(n+1)}$, which gives the result. (To be
precise, it is the sum of the terms in \eqref{sum} with $|N|\ge
\sqrt n$ that is bounded by the above integral. But there are
only finitely many terms with $|N|<\sqrt n$, and each of these
terms is $<c\,t^{n-\frac{n+1}2}$, which is better than $O(t^{n -
\frac{2n}{n+1}})$ when $n\ge 2$.)\end{proof}
\subsubsection{Van der Corput for homogeneous weights $f$. Proof of
Corollary \ref{homovdC}:} This time, we have
$$S_f (t) = \sum_{k \in {\mathbb Z}^n\cap tQ\smallsetminus\{0\}}|k|_Q^{\alpha}\,
f\left(\frac{k}{|k|_Q}\right). $$
The set of norms of lattice points, $ \{t_j \in {\mathbb R}^+: \exists\, k \in
{\mathbb Z}^n \mbox{ with } |k|_{Q} = t_j\}$, is countable and has no accumulation
points. We order the $t_j$ so that $t_j < t_{j + 1}$.
We then define the monotone increasing step function on ${\mathbb R}$
$$\mu(T) = \sum_{j: t_j \leq T} \left\{\sum_{k: |k|_Q = t_j}
f\left(\frac{k}{|k|_Q}\right)\right\}. $$
It is clear that
$$\mu(T) = S_{f_0}(T),\;\;\; f_0(x) = \frac{f(x)}{|x|_Q}. $$
Hence, by Theorem \ref{localvdC},\begin{equation}
\label{SF0} S_{f_0}(t) = t^{n } \int_Q f_0\,dX + O(t^{n -
\frac{2n}{n+1}}), \;\; t \to \infty. \end{equation}
We further have
\begin{equation}\label{Sf}S_f(T) = \int_0^T t^{\alpha} d
\mu(t).\end{equation} Indeed,
$$d\mu(t) = \sum_{j} \left\{\sum_{k: |k|_Q = t_j}
f\left(\frac{k}{|k|_Q}\right)\right\} \delta(t-t_j)\,dt, $$ and
$$ \int_0^T t^{\alpha} d \mu(t) = \sum_{j: t_j \leq T}\left
\{\sum_{k: |k|_Q = t_j} f\left(\frac{k}{|k|_Q}\right)\right\}
t_j^{\alpha} = S_f(T). $$
Integrating \eqref{Sf} by parts and applying \eqref{SF0}, we get
\begin{eqnarray*}S_f(T)&=& T^{\alpha} \mu(T) - \alpha \int_0^T
t^{\alpha - 1}
\mu(t)dt\\&=&T^{\alpha}\left[ T^{n } \int_Q f_0\,dX + O(T^{n -
\frac{2n}{n+1}})\right] - \alpha \int_0^T t^{\alpha - 1}
\left[t^{n } \int_Q f_0\,dX + O(t^{n - \frac{2n}{n+1}})\right]
dt\\ &=& T^{n + \alpha} \left[\int_Q f_0\,dX\right]
\frac{n}{\alpha + n} + O(T^{n - \frac{2n}{n+1} +
\alpha})\\
&=&T^{n + \alpha} \int_Q f\,dX +
O(T^{n - \frac{2n}{n+1} + \alpha})\,.
\end{eqnarray*}
\qed
\subsection{Van der Corput for critical points}
We prove Theorem \ref{MAIN} by following the arguments of the
proofs of Theorem
\ref{localvdC} and Corollary \ref{homovdC} with hardly any changes.
We first assume that $\psi$ is homogeneous of order 0 in
$\mathcal{S}$. We let $K_\psi = \rho({\operatorname{Supp\,}}\psi)\subset\mathcal{C}$, a
compact set.
To begin, we recall that if $W$ has critical points, then $W$ is
in the `light cone' $Q[W]>0$. For $W$ in the light cone, we write
$$|W|_Q = Q[W]^{\frac{1}{2}},\qquad \mbox{for }\ Q[W]>0.$$
The main difference between this case and our previous one is
that now the set
$Q$ given by \eqref{Q}, in addition to not being convex, is not compact.
However, since only those $W$ with critical points in the
support of $\psi$ contribute to the sum, we consider
$$Q_\psi:=Q\cap \mathcal{S}_\psi,\qquad \mathcal{S}_\psi=\left(\bigcup_{\tau\in
K_\psi}\mathcal{S}_\tau\right),$$ which is a compact subset of $\mathcal{S}$.
We let $f(W)=\langle C_W, \psi \rangle$, which is a smooth
function supported in $\mathcal{S}_\psi$. Then
$$\mathcal{N}_\psi(L)= S_f(L)= \sum_{k \in {\mathbb Z}^n\cap \sqrt L\,Q\smallsetminus\{0\}}
f(k),$$ as before. Now we follow the previous proof, with
$t=\sqrt L$. Our first modification is that, to verify
\eqref{smoothing}, we instead let
$T_\varepsilon$ be the $\varepsilon$-tube over $\mathcal{S}_\psi\cap\d Q$. Finally,
the estimate $(\delta_0\circ\lambda)\raisebox{2pt}{$\widehat{\ }$}(t) =
O(t^{-\frac{(n-1)}{2}})$, which was based on the convexity of $Q$
in our previous argument, holds in this case, since the phase
$\psi(Y)=L\langle Y,N\rangle$ has (two) non-degenerate critical
points whenever $N$ is in the light cone. Thus we have
$$\mathcal{N}_{\psi}(L) = L^{b_3}\left[
\int_{\{Q[W]\le 1\}}\langle C_W, \psi \rangle \,dW
+ O\left(L^{-\frac{2b_3}{2b_3+1}}\right)\right].
$$
The case $\alpha=0$ now follows from Lemma \ref{DSZFORM}, and the
general case then follows exactly as in the proof of Corollary
\ref{homovdC}. \qed
\section{Special geometry and
density of critical points}\label{SG}
The aim of this section is to compute the critical point density
$\mathcal{K}^{\operatorname {crit}}(Z)$ and verify Corollaries \ref{MAIN2}--\ref{MAIN3}. At
the same time, we compute the index density and prove Theorem
\ref{MAININD}. As in \cite{DSZ1}, we do this by pushing forward
the integrand of (\ref{Kcritgauss}) under the Hessian map. The
Hessian map turns out to be an isomorphism, hence the discussion
is more elementary than in \cite{DSZ1}. To make the change of
variables, we first evaluate the image of the Hessian using the
special geometry of Calabi-Yau moduli spaces and then check how
the Hessian map distorts inner products. Our discussion gives an
alternate approach to the formulas in the article \cite{DD}, and
connects the special critical point density formula in this
article with the general ones in \cite{DSZ1, DSZ2}.
\subsection{The range of the Hessian map}
We now study the {\it complex Hessian map\/}:
\begin{equation} \label{COMPHES} H^c(Z) : \;\; W \to
\begin{pmatrix}
H' &-x\,\Theta(Z)\\
-\bar x\,\bar\Theta(Z)&\bar H'\end{pmatrix} . \end{equation} To
describe $H^c(Z)$ in local coordinates, we fix a point
$Z_0=(z_0,\tau_0)$ and choose normal coordinates
$\{z^1,\dots,z^{h^{2,1}}\}$ at $z_0\in\mathcal{M}$. We let $\Omega$ be a
local normal frame for $H^{3,0}\to\mathcal{M}$ at $z_0$, and we let
$\omega=dx+\tau\,dy$. Recall that $\omega$ is not a normal frame, since
$|\omega_\tau| = (\Im\tau)^{1/2}$. We let $\widetilde e_\mathcal{L}=
(\Im\tau_0)^{1/2}\,\Omega^*\otimes \omega^*$, so that $|\widetilde
e_\mathcal{L}(Z_0)|=1$.
As in \S\ref{CRITHESS}, the matrix $ (H_{jk})$
of the holomorphic Hessian is given by
\begin{equation}\label{HjqCY}
H'(Z_0)=\sum_{j,q}H'_{jq} dz^q\otimes dz^j\otimes \widetilde
e_\mathcal{L}|_{Z_0}\;, \quad 0\le j,q\le h^{2,1}\;,\end{equation} where
$$dz^0|_{Z_0}=\frac 1{\Im\tau_0}\,d\tau|_{Z_0}$$ is the unit
holomorphic cotangent vector (with respect to the Weil-Petersson,
or hyperbolic, metric on $\mathcal{E}$) at $\tau_0$.
We wish to express formulas \eqref{HmatriX}--\eqref{H'} for the complex
Hessian in terms of these coordinates and frames. We write
$$(\nabla_j f) \otimes e_\mathcal{L}=\nabla_{\d/\d z^j}(fe_\mathcal{L}),\quad 1\le
j\le h^{2,1},\qquad (\nabla_0 f)
\otimes e_\mathcal{L}=(\Im\tau_0)\nabla_{\d/\d \tau}(fe_\mathcal{L})
\;.$$ ($\nabla_0$ is the normalized covariant
$\tau$-derivative given by \eqref{nabla0}.)
The complex Hessian matrix is given by:
\begin{equation}\label{HmatriXCY} H^c(Z_0)
=\begin{pmatrix} H'(Z_0) &f(Z_0)\,I\\[8pt]
\overline{f(Z_0)}\,I &\overline{H'(Z_0)}
\end{pmatrix}\;,\qquad H' =\Big(\nabla_j\nabla_q
f\Big)_{0\le j,q\le h^{2,1}}\;.\end{equation}
Identifying the
off-diagonal components with $f(Z_0) \in {\mathbb C}$, we view the image space as a
subspace of $
{\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C}) \oplus {\mathbb C}$, so we can write the Hessian map in
the form
$$H_{Z_0} : \mathcal{S}_Z \to{\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C})\oplus{\mathbb C},\qquad W\mapsto
\big(H'(Z_0), f(Z_0)\big)\;.$$
\begin{lem} \label{RANGE} The range of the Hessian map $H_{Z_0}:
\mathcal{S}_{Z_0}
\to {\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C})\oplus{\mathbb C} $ is of the form
$\mathcal{H}_{Z_0}\oplus{\mathbb C}$, where $\mathcal{H}_{Z_0}$ is a real subspace
of $
{\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C})$ of real dimension $ 2h^{2,1}$
spanned over ${\mathbb R}$ by the matrices
$$ \xi^k=
\left(
\begin{array}{cc}
0& e_k \\ e_k^t & \mathcal{F}^k(z)
\end{array}
\right), \quad\xi^{h^{2,1}+k}= \left(
\begin{array}{cc}
0& \sqrt{-1}\, e_k \\\sqrt{-1} \, e_k^t &
-\sqrt{-1}\,\mathcal{F}^k(z)
\end{array}
\right)\,, \quad 1\le k\le h^{2,1}\,,$$ given by \eqref{HCALZ},
where $e_k$ is the $k$-th standard basis element of ${\mathbb C}^{h^{2,1}}$
and $\mathcal{F}^k(z) \in {\operatorname{Sym}}(h^{2,1}, {\mathbb C})$ is the matrix $ \left(
\mathcal{F}^{\bar k}_{jq} (z) \right)$ of \eqref{CANDY}.
\end{lem}
In other words, $\mathcal{H}_{Z_0}$ is the set of matrices of the form
\begin{equation}\label{HZ}\left(\begin{array}{cc}
0& (\bar v_1, \dots,\bar v_{h^{2,1}}) \\
(\bar v_1, \dots,\bar v_{h^{2,1}})^t &
\sum_{k=1}^{h^{2,1}} \mathcal{F}^k(z) v_k
\end{array} \right)\,,\qquad (v_1,\dots,v_{h^{2,1}})\in
{\mathbb C}^{h^{2,1}}\;.\end{equation} We emphasize that $\mathcal{H}_{Z} \subset
{\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C})$ is only a real and not a complex subspace.
We also note that $\dim_{\mathbb R} \mathcal{H}_Z = 2h^{2,1}$ and hence $\dim_{\mathbb R}
(\mathcal{H}_Z\oplus{\mathbb C}) = b_3=\dim_{\mathbb R}\mathcal{S}_Z\,$; i.e., $H_Z$ is an
isomorphism.
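For the dimension count here, recall that for a Calabi--Yau threefold
the Hodge decomposition of $H^3(X,{\mathbb C})$ gives
$$b_3 = h^{3,0}+h^{2,1}+h^{1,2}+h^{0,3} = 2h^{2,1}+2\,,$$
since $h^{3,0}=h^{0,3}=1$ and $h^{1,2}=h^{2,1}$; thus
$\dim_{\mathbb R}(\mathcal{H}_Z\oplus{\mathbb C}) = 2h^{2,1}+2 = b_3$.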
\medskip\noindent
{\it Proof of Lemma \ref{RANGE}:\/} We shall use the notation
$1\le j,k,l\le h^{2,1}$, $0\le \alpha,\beta,\gamma\le h^{2,1}$. By
\eqref{realisoF}, we have the (real-linear) isomorphism
$$\widetilde\mathcal{W}_{Z_0}=\mathcal{W}\circ I_\tau^{-1}:H^{2,1}_{z_0}\oplus H^{0,3}_{z_0}
\buildrel {\approx}\over \to\mathcal{S}_{Z_0}.$$ Recall that
$H^{2,1}_{z_0}\oplus H^{0,3}_{z_0}$ has a complex orthonormal
basis $\{\chi_\alpha\}$ of the form
$$\chi_j= \mathcal{D}_j\Omega_{Z_0},\quad 1\le j\le h^{2,1},\qquad \chi_0=
\overline{\Omega}_{Z_0}.$$ By \eqref{QfunnyQ}, a real orthonormal
basis of $\mathcal{S}_{Z_0}$ is
$$U_\alpha:=(\Im\tau )^{1/2}\, \widetilde\mathcal{W}_{Z_0}(\chi_\alpha),\quad V_\alpha:=
(\Im\tau )^{1/2}\, \widetilde\mathcal{W}_{Z_0}(\sqrt{-1}\,\chi_\alpha).$$ We write:
$$U_\alpha=f_\alpha\,\widetilde e_\mathcal{L}\;,
\quad
V_\alpha=g_\alpha\,\widetilde
e_\mathcal{L}\;;$$ equivalently
$$\widetilde\mathcal{W}_{Z_0}(\chi_\alpha)=f_\alpha\, e_\mathcal{L}\;,
\quad \widetilde\mathcal{W}_{Z_0}(\sqrt{-1}\,\chi_\alpha)=g_\alpha\, e_\mathcal{L}\;.$$
We must compute the matrices $$H'_{Z_0} (f_\alpha\widetilde e_\mathcal{L})=
\big(\nabla_\beta\nabla_\gamma f_\alpha\big)|_{Z_0},\quad H'_{Z_0}
(g_\alpha\widetilde e_\mathcal{L})= \big(\nabla_\beta\nabla_\gamma g_\alpha\big)|_{Z_0},
$$ where $H'_{Z_0}:\mathcal{S}_{Z_0} \to {\operatorname{Sym}}(h^{2,1} + 1, {\mathbb C})$ is the
holomorphic Hessian map.
We shall show that: \begin{equation} \label{BIGDISPLAY}
\left\{\begin{array}{rl}
\rm (i) & \nabla_0^2 f_G(Z_0) = 0, \;\; \forall G \in H^3_{Z_0}(X, {\mathbb C}) \quad
(\mbox{where }\ W_G=f_G\,e_\mathcal{L})\\ &\mbox{and thus }\
\nabla_0^2 f_\alpha(Z_0)= \nabla_0^2 g_\alpha(Z_0)=0,\\ & \\ \rm(ii) &
\nabla_j \nabla_0 f_0 (Z_0) =\nabla_j \nabla_0 g_0 (Z_0)= 0,\\ \\
\rm (iii) &
\nabla_k \nabla_j f_0(Z_0) = \nabla_k \nabla_j g_0(Z_0) = 0, \\ &
\\\rm (iv) &
\nabla_k \nabla_0 f_j(Z_0) = -\sqrt{-1}\,\delta_{jk},\quad \nabla_k \nabla_0
g_j(Z_0) = -\delta_{jk},
\\ & \\\rm (v) &
\nabla_k \nabla_l f_j (Z_0) = -\sqrt{-1}\, \mathcal{F}_{kl }^{\bar j} ,\quad
\nabla_k \nabla_l g_j (Z_0) = \mathcal{F}_{kl }^{\bar
j}.\end{array} \right.
\end{equation}
First,
\begin{equation} \label{FIRSTTAU} \nabla_0 f_G(z, \tau) =
\frac{|\Im\tau_0|}{\Im\tau } \int_X ( F +
\bar{\tau} H) \wedge \Omega_z.
\end{equation}
It follows that
$$\nabla_0^2 f_G(z_0, \tau_0) =
\frac{|\Im\tau_0|^2}{\Im\tau}\frac
\d{\d\tau} \int_X (F + \bar{\tau} H) \wedge \Omega_z = 0$$ by the
critical point equation $\nabla_0 f_G(z_0, \tau_0)=0$. This proves
(i).
Next, differentiating \eqref{FIRSTTAU} with $f_G=f_\alpha$, we get
$$\nabla_j \nabla_0 f_\alpha (Z_0) = \int \overline {\chi_\alpha} \wedge
\mathcal{D}_j\Omega_{Z_0}=\int \overline {\chi_\alpha} \wedge \chi_j =
-i\,\delta_{j\alpha},$$ and similarly, $$\nabla_j \nabla_0 g_\alpha
(Z_0) =\int \overline {i\,\chi_\alpha} \wedge \chi_j = -\delta_{j\alpha}.$$
This verifies (ii) and (iv).
Finally, we have by \eqref{CANDY},
$$
\nabla_k\nabla_j f_\alpha = \int \chi_\alpha\wedge \mathcal{D}_k\mathcal{D}_j\Omega=
-i\sum_l \mathcal{F}^{\bar l}_{kj}\int\chi_\alpha\wedge
\overline{\mathcal{D}_l\Omega},$$ and hence
$$\nabla_k\nabla_j f_\alpha(Z_0) = -i\sum_l
\mathcal{F}^{\bar l}_{kj}\int\chi_\alpha\wedge \overline\chi_l= -i\sum_l
\mathcal{F}^{\bar l}_{kj}\delta_{l\alpha} = \left\{ \begin{array}{ll} -
i\mathcal{F}^{\bar \alpha}_{kj} &\quad \mbox{for }\ \alpha\ge 1\\0&\quad
\mbox{for }\ \alpha=0\end{array}\right..$$ We also have
$\nabla_k\nabla_j g_\alpha(Z_0) =i\nabla_k\nabla_j f_\alpha(Z_0)$,
verifying (iii) and (v).
Thus, the holomorphic Hessian $H'(Z_0)$ maps the orthonormal fluxes
\begin{equation}\label{ortho}iU_1,\ \dots,\ iU_{ h^{2,1}},\ -iV_1,\ \dots,\
-iV_{ h^{2,1}}\end{equation} to the matrices
$\xi^1,\dots,\xi^{2h^{2,1}}$ given by \eqref{HCALZ}. Furthermore,
$$f_0(Z_0)=1,\ H'(U_0)=0,\ g_0(Z_0)=i, \ H'(V_0)=0,$$ while
$$f_j(Z_0)=g_j(Z_0)=0.$$ Thus $H^c(Z_0)$ maps the orthonormal fluxes
\eqref{ortho} to the elements $\xi^a\oplus 0\in {\operatorname{Sym}}(h^{2,1} + 1,
{\mathbb C})\oplus{\mathbb C}$, and maps $U_0$ to $0\oplus 1$ and $V_0$ to $0\oplus
i$. \qed
\subsection{Distortion of inner product under the Hessian map}\label{DISTORTION}
We recall that the space
${\operatorname{Sym}}(h^{2,1} + 1,
{\mathbb C})$ of complex symmetric matrices, regarded as a real vector space, has the
inner product \begin{equation} \label{IPH} (A,B)_{\mathbb R}=\Re\langle A,
B \rangle_{HS} =\Re(\mbox{Trace}\, A B^*)\;. \end{equation}
Recalling that $\mathcal{S}_Z=\widetilde\mathcal{W}_Z(H^{2,1}_z\oplus H^{0,3}_z)$, we consider
its codimension 1 subspace
$$\mathcal{S}'_Z=\widetilde\mathcal{W}_Z(H^{2,1}_z).$$ By the proof of Lemma \ref{RANGE},
the holomorphic Hessian map
\begin{equation} H_Z: \mathcal{S}'_Z \to \mathcal{H}_Z \end{equation}
is bijective, but as a map between inner product spaces, it is not an isometry.
The distortion is given by the positive
definite operator $\Lambda_Z$.
We write $$\Lambda_Z\xi^a=\sum_{b = 1}^{2
h^{2,1}}\Lambda_{ab}\xi^b,$$ so that
$$(\xi^a,\xi^b)_{\mathbb R} = (\Lambda_Z^{-1} \Lambda_Z\xi^a,\xi^b)_{\mathbb R} =
\sum_c\Lambda_{ac}(\Lambda_Z^{-1} \xi^c,\xi^b)_{\mathbb R}=\sum_c\Lambda_{ac}\delta_{cb} =
\Lambda_{ab}.$$
Tracing through the definitions, we obtain that $(\Lambda_{ab})$ is the
matrix
\begin{equation} \label{Lambda} \begin{pmatrix} \Lambda' & \Lambda'' \\\Lambda'' &
\Lambda'\end{pmatrix}, \qquad \Lambda'_{jk}= 2\delta_{jk} + \Re \;
\mbox{Tr}\; \mathcal{F}^j \mathcal{F}^{k*},\ \ \Lambda''_{jk}= \Im \; \mbox{Tr}\;
\mathcal{F}^j \mathcal{F}^{k*}\end{equation} of Hilbert-Schmidt inner
products of the matrices in Lemma \ref{RANGE}. Hence,
\begin{equation} \Lambda_{jk}' +\sqrt{-1}\, \Lambda_{jk}''=
2\delta_{jk} + \mbox{Tr}\; \mathcal{F}^j \mathcal{F}^{k*}\;.\end{equation}
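For instance, the entries $\Lambda'_{jk},\Lambda''_{jk}$ may be checked
directly from the block form of the $\xi^j$ in Lemma \ref{RANGE}: since
$\xi^{k*}=\overline{\xi^k}$ (the $\xi^k$ being symmetric, with real $e_k$),
$$\mbox{Tr}\;\xi^j\xi^{k*} = e_j\cdot e_k^{\,t} +
\mbox{Tr}\,\big(e_j^t\, e_k\big) + \mbox{Tr}\;\mathcal{F}^j\mathcal{F}^{k*}
= 2\delta_{jk} + \mbox{Tr}\;\mathcal{F}^j\mathcal{F}^{k*}\,,$$
whose real and imaginary parts are $\Lambda'_{jk}$ and $\Lambda''_{jk}$,
respectively.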
\medskip To tie this discussion
together with that in \cite{AD} and \cite[\S 2.1]{DSZ2}, we note
that we can consider $\mathcal{H}_Z$ as a complex vector space by
redefining complex multiplication in $\mathcal{H}_Z$:
$$c\odot \begin{pmatrix} 0 & u\\u^t & A\end{pmatrix}=
\begin{pmatrix} 0 & \bar c u\\\bar cu^t & cA\end{pmatrix}.$$ We
then define a Hermitian inner product on $\mathcal{H}_Z$:
$$\left( \begin{pmatrix} 0 & u\\u^t & A\end{pmatrix},
\overline{\begin{pmatrix} 0 & v\\v^t & B\end{pmatrix}}\right) =
2\bar u \cdot v + \mbox{Tr}(AB^*).$$
We recall from \eqref{defC} that \begin{equation}
\label{LAMBDADEF2} \Lambda_Z = \sum_{j = 1}^{ h^{2,1}} \xi^j
\otimes \xi^{j*},
\end{equation} where the
$\xi^j$ are $(h^{2,1}+1) \times
(h^{2,1}+1)$ matrices. Each term $\xi^j \otimes \xi^{j*}$ in
$\Lambda_Z$ may be expressed in matrix form as $\left( \xi^j_{ab}
\,\bar\xi^{j}_{cd} \right)$; i.e.,
\begin{equation}\label{LZ}(\Lambda_ZH)_{kl}=
\sum_{p,q}[\Lambda_Z]_{kl}^{pq}H_{pq}, \quad [\Lambda_Z]_{kl}^{pq}=
\sum_{j=1}^{h^{2,1}}\xi^j_{kl}\,\bar\xi^j_{pq}, \quad 0\le k,l,p,q
\le h^{2,1} .\end{equation} As in \cite[\S 2.1]{DSZ2}, the
result may be expressed in terms of the Szeg\"o kernel $\Pi_Z$,
i.e.\ the kernel of the
orthogonal projection onto $\mathcal{S}_Z$. By (\ref{BIGDISPLAY}) and
(\ref{LAMBDADEF2}), we have
\begin{equation}\label{lambdaszego}\left[\Lambda_Z\right]^{p q}_{kl} =
\nabla_{\zeta_k}\nabla_{\zeta_l}\nabla_{\bar\eta_{
p}}\nabla_{\bar\eta_{ q}}
F_Z(\zeta,\eta)|_{\zeta=\eta=Z},\end{equation} where $F_Z$ is the
local representative of $\Pi_Z$ in a frame (cf. \cite{DSZ2}).
In addition, $\Lambda_Z$ determines an operator
$\tilde{\Lambda}_Z$ on the space $\mathcal{H}^c$ of complex matrices of
the form
\begin{equation}\label{HmatriX2} H^c:=
\begin{pmatrix} H & x I\\ \\
\overline{x} I &\overline{H}
\end{pmatrix}\;, \;\; H \in {\operatorname{Sym}}(h^{2,1}, {\mathbb C}),
\end{equation}
defined by
\begin{equation}\label{HmatriX3} \tilde{\Lambda}_Z
\begin{pmatrix} H & x I\\ \\
\overline{x} I &\overline{H}
\end{pmatrix} = \begin{pmatrix} \Lambda_Z H & x I\\ \\
\overline{x} I &\overline{ \Lambda_Z H}
\end{pmatrix}\;.
\end{equation}
We now relate the $(1,1)$-form $\omega_\Lambda$ of
(\ref{LAMBDAFORM}) and the operator $\Lambda$ to the curvature of
the Weil-Petersson metric on $\mathcal{C}$.
\begin{prop}\label{LAMBDARICCI} We have:
\begin{enumerate}
\item[i)] $[\Lambda_Z]^{j q}_{j' q'} = - G^{q\bar p}R^{j}_{j' q'\bar p}
+\delta^j_{j'} \delta_{q'}^q
+ \delta_{ q'}^j \delta_{j'}^q $, where $R$ is the curvature
tensor of the Weil-Petersson metric on $\mathcal{C}$;
\item[ii)] $\omega_\Lambda = (m + 3) \omega_{WP} + Ric(\omega_{WP})$
where $Ric$ is the Ricci curvature $(1,1)$ form of the
Weil-Petersson metric of $\mathcal{M}$, i.e.
$$Ric(\omega_{WP}) = \frac i2\sum_{i \bar{j}} Ric_{i \bar{j}} dz^i \wedge
d\bar{z}^j, \;\;\; Ric_{i \bar{j}} := - G^{k \bar{\ell}} R_{i
\bar{j} k \bar{\ell}}. $$ Thus, $\omega_\Lambda$ is the Hodge metric
\cite{Lu, W2}.
\end{enumerate}
\end{prop}
\begin{proof}
To prove (i), it suffices to combine (\ref{LZ}) and
(\ref{RIEMANN4}), raising and lowering indices as appropriate. (In
\eqref{LZ}, a normal frame at $Z$ is assumed.)
For (ii), we note that the $(1,1)$-form $\omega_\Lambda$ is given by \begin{equation}
\omega_\Lambda= \frac i2 \sum_{i,j} \left[2\delta_{i j} + \mbox{Tr}\;
\mathcal{F}^i(Z) \mathcal{F}^{j*}(Z)\right] dz^i \wedge d\bar z^j\;.
\end{equation} On the other hand, by (\ref{RIEMANN}),
\begin{equation} \label{RICCI}\begin{array}{lll} Ric_{i \bar{j}} & = & - G^{k
\bar{\ell}} \left[ G_{i \bar{j}} G_{k \bar{\ell}} + G_{i
\bar{\ell}} G_{k \bar{j}} - \frac{1}{\int_\mathcal{M}\Omega\wedge \bar \Omega} \;
\sum_{p,q} G^{p \bar{q}} \mathcal{F}_{i k p} \overline{{\mathcal{F}}_{j \ell
q}} \right] \\ &&\\
& = & - (m + 1) G_{i \bar{j}} + Tr \mathcal{F}^i \mathcal{F}^{j *}\;.
\end{array}\end{equation} Comparing the two displays in normal coordinates, where $G_{i\bar j}=\delta_{ij}$, yields (ii).\end{proof}
\begin{rem} To facilitate comparison with \cite{AD, DSZ1}, we
note that our notational conventions are the same as in
\cite{DSZ1}. In \cite{AD}, the Szeg\"o kernel $\Pi_Z$ is denoted
$G_Z$. Formulas (4.8) in \cite{AD} are the same as
(\ref{LZ}), resp.\ Proposition \ref{LAMBDARICCI}(i). Also
$F_{ab|\bar{c} \bar{d}} = \Lambda_{ab}^{pq}G_{p\bar c}G_{q\bar
d}.$ The coefficients $F_{a\bar{b}| c \bar{d}}$ in \cite{AD}
correspond to the off-diagonal blocks of $\tilde{\Lambda}$.
\end{rem}
\subsection{Proof of Theorem \ref{MAININD}}
All but one of the ingredients of the proof are precisely the same
as in Theorem \ref{MAIN}. We first define the analogue of
\eqref{Kcritgauss} and \eqref{PF} for the signed sum:
\begin{eqnarray} \mathcal{I} nd(Z) &:=& \int_{\mathcal{S}_{Z}}
\det H^c W(Z)\,
\chi_{Q_{Z}} dW \nonumber\\& =
& \frac 1 {b_3!\,\sqrt{\det \Lambda_Z}} \int_{\mathcal{H}_Z \oplus {\mathbb C}} \det
\left( H^*H - |x|^2 I\right)\, e^{-(\Lambda^{-1}_Z H, H)_{\mathbb R} -
|x|^2}\,dH\,dx\;.\label{INDDEN}
\end{eqnarray}
By Lemma \ref{ICALFIRST} and the proof of Lemma \ref{DSZFORM}, we
conclude that
\begin{equation} \mathcal{I} nd_{\chi_K}(L)= L^{b_3}\left[\int_K \mathcal{I}
nd(Z)\,d{\operatorname{Vol}}_{WP} + O(L^{-1/2})\right]\;.\end{equation}
To complete the proof of Theorem \ref{MAININD}, we evaluate the
integral in \eqref{INDDEN}:
\begin{lem} \label{INDCURV} We have
$$ b_3!\, \mathcal{I} nd(Z)\,d{\operatorname{Vol}}_{WP}
=\frac {\pi^{2m}}{2^m}\, c_m(T^{* (1,0)}(\mathcal{C})\otimes
\mathcal{L},\omega_{WP}\otimes h^*_{WP})= \left(\frac \pi 2\right)^m
\det\left(-R - \omega \otimes I \right)\;.$$ \end{lem}
\begin{proof}
This follows by a supersymmetric formula for the determinant, used
in this context in \cite{AD} and also in \cite{BSZ2}. We briefly
review the fermionic formalism referring to \cite{BGV, BSZ2} for
further details in a similar setting.
Let
$M=\left(M^{j}_{j'}\right)$ be an $n \times n$ complex matrix.
Then,
\begin{equation} \det M= \int^{B^{2n}}
e^{-\langle
M\eta,\bar\eta\rangle} d\eta\,,\qquad \langle
M\eta,\bar\eta\rangle = \sum_{j,j'}\eta_j M^{j}_{j'} \bar\eta_{j'}
\,,\end{equation} where $\eta_j,\bar\eta_j$ ($1\le j\le n$) are
anti-commuting (or ``fermionic'') variables. The integral
$\int^B = \int^{B^{2n}}$ is the Berezin integral, a notation for the linear
functional $\int^B:\bigwedge^\bullet {\mathbb C}^{2n}\to {\mathbb C}$ defined by
$$\int^B|_{\bigwedge^t {\mathbb C}^{2n}}=0\quad \mbox{for \ } t<2n\,,\quad
\textstyle \int^B \left(\prod_{j}\bar\eta_j\eta_j\right)=1\,.$$
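To illustrate the formalism in the simplest case $n=1$: expanding the
exponential (the series terminates since $\eta^2=\bar\eta^2=0$),
$$\int^{B^2} e^{-\eta M \bar\eta} = \int^{B^2}\big(1-\eta M\bar\eta\big)
= \int^{B^2}\big(1+M\,\bar\eta\eta\big) = M = \det M\,.$$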
We now apply this formalism to $\det \left( H^*H - |x|^2 I\right)
= \det H^c$ where $H^c$ is defined as in (\ref{HmatriX2}) and
refer to the discussion in \S \ref{DISTORTION}. The matrix $H^c$ is
of size $b_3\times b_3$, and we write
\begin{equation}\label{susy-det} \det H^c = \int^{B^{2b_3}}
e^{-\langle H^c
(\eta, \bar\eta ),(\theta, \bar\theta )\rangle} d\eta d\theta
\,,\end{equation} where $\eta=(\eta_1,\dots,\eta_{b_3/2}),\
\theta=(\theta_1,\dots,\theta_{b_3/2})$, and
$$\langle H^c (\eta, \bar\eta ),(\theta, \bar\theta )\rangle= \sum
\left(H_{jk}\eta_j\theta_k+ x\delta_{jk}\eta_j\bar\theta_k+ \bar
x\delta_{jk}\bar\eta_j\theta_k + \bar
H_{jk}\bar\eta_j\bar\theta_k\right).$$ The quadratic form
$(\Lambda^{-1}_Z H, H)_{{\mathbb R}} + |x|^2$ in the exponent of the Gaussian
integral may be expressed in the form $\frac{1}{2}
(\tilde{\Lambda}_Z^{-1} H^c, H^c)$, where $\tilde{\Lambda}_Z$ is
the restriction of the operator defined in (\ref{HmatriX3}) to
$\mathcal{H}_Z^c$. Indeed, both quadratic forms are equivalent to
$Q_Z(W, W)$ under a linear change of variables ($W \to H_Z(W) $ in
the case of $\Lambda_Z$ and $W \to H^c(W)$ in the case of
$\tilde{\Lambda}_Z$).
Then
\begin{equation} \label{fourier} b_3! \;\mathcal{I} nd(Z) =
\frac{1}{\sqrt{\det\tilde \Lambda_Z}} \int_{\mathcal{H}^c_Z}
\int^{B^{2b_3}} e^{- \langle H^c (\eta, \bar\eta ),(\theta,
\bar\theta )\rangle- \langle \tilde{\Lambda}_Z^{-1} H^c, H^c
\rangle } dH^c d\eta d\theta. \end{equation} We let $$\Omega=
(\eta, \bar{\eta})\otimes (\theta, \bar{\theta})^t =
\begin{pmatrix} (\eta_j\theta_k) & (\eta_j\bar\theta_k)\\
(\bar\eta_j\theta_k) & (\bar\eta_j\bar\theta_k)\end{pmatrix},$$
so that $\langle H^c (\eta, \bar\eta ),(\theta, \bar\theta
)\rangle = \left(H^c, \Omega \right)= \mbox{Tr}\,H^c \Omega^t$.
Then the $dH^c$ integral in \eqref{fourier} becomes the Fourier
transform of the Gaussian function $e^{-\langle
\tilde{\Lambda}^{-1} H^c, H^c \rangle }$ evaluated at $i\Omega$.
Recalling that the Fourier transform of $e^{- \langle A x, x
\rangle/2}$ equals $(2 \pi)^{n/2} (\det A)^{-1/2} e^{- \langle
A^{-1} \xi, \xi \rangle/2}$, we have that the $dH^c$ integral
equals $(\det \tilde{\Lambda})^{{\frac{1}{2}}} e^{-\frac{1}{4} \langle
\tilde{\Lambda} \Omega, \Omega \rangle}$. After cancelling $(\det
\tilde{\Lambda})^{{\frac{1}{2}}}$, we obtain \begin{equation} b_3!\mathcal{I}
nd(Z) = \pi^m \int^{B^{2b_3}} e^{- \frac{1}{4} ( \tilde{\Lambda}
\Omega, \Omega )_{\mathbb R}} d\eta d\theta,
\end{equation}
where in normal
coordinates, we have (by (\ref{HmatriX3}) and Proposition
\ref{LAMBDARICCI}) \begin{eqnarray*} ( \tilde{\Lambda}_Z \Omega,
\Omega )_{\mathbb R} &= &
\mbox{Trace} \left[\begin{pmatrix} \Lambda_Z \eta \otimes \theta &
\eta \otimes \bar{\theta}\\
\bar{\eta} \otimes \theta &\bar\Lambda_Z \bar\eta \otimes
\bar\theta
\end{pmatrix} \begin{pmatrix} \eta \otimes \theta &
\eta \otimes \bar{\theta}\\ \bar{\eta} \otimes \theta &\bar\eta
\otimes \bar{\theta}
\end{pmatrix}^*\right] \\ &= &
\sum_{jq j'q'}\left (\Lambda^{jq}_{j'q'} \eta_j \theta_q
\bar{\eta}_{j'} \bar{\theta_{q'}} + \bar\Lambda^{jq}_{j'q'}
\bar\eta_j \bar\theta_q {\eta}_{j'} {\theta_{q'}}\right)
+\sum_{jq} \left(\eta_j\bar\theta_q \bar\eta_j\theta_q
+\bar\eta_j\theta_q \eta_j\bar\theta_q \right)
\\ &= &
2\sum_{jq j'q'}\left (\Lambda^{jq}_{j'q'} - \delta_{j j'}
\delta_{q q'} \right)\eta_j \theta_q
\bar{\eta}_{j'} \bar{\theta_{q'}}\\
&= &2 \sum_{jq j'q'} \left( R_{j \bar{j'}q \bar{q'} } +
\delta_{jq} \delta_{j'q'} \right) \eta_j \theta_q\bar{\eta}_{j'}
\bar{\theta_{q'}}.
\end{eqnarray*} (Here we used the fact that $\bar
\Lambda^{jq}_{j'q'}=\Lambda^{j'q'}_{jq}$; see \eqref{LZ}.) Thus
\begin{eqnarray*} b_3!\mathcal{I} nd(Z) &= &
\pi^m \int^{B^{2b_3}} e^{ - {\frac{1}{2}}\left(R_{j \bar{j'}q \bar{q'}
} + \delta_{jq} \delta_{j'q'} \right) \eta_j \bar{\eta}_{j'}
\theta_q \bar{\theta}_{q'} } d\eta d\theta \\
& = & \left(\frac{\pi}{2}\right)^m\; \frac{\det \left(- R -
\omega \otimes I \right) }{d{\operatorname{Vol}}_{WP}}\;.
\end{eqnarray*}\end{proof}
\begin{rem} The index density computation in special geometry is
closely related to the asymptotics in \cite[\S 5]{DSZ2} for
critical point densities for powers of a positive line bundle $L$
on a compact K\"ahler manifold $M$. The expansions in \S 5.1 of
\cite{DSZ2} can be used to show that the (first few) terms in the
asymptotic expansion of the index density equal those of the Chern
form corresponding to $ c_m(T^{*1,0}\otimes L^N)$.
\end{rem}
\subsection{\label{EXAMPLES}Examples}
We describe in this section the critical point distribution for
the cases where the dimension $h^{2,1}(X)$ of the moduli space is
0 and 1, i.e.\ when $\dim\mathcal{C}$ is 1 and 2, respectively.
\subsubsection{\label{EXZERO}$h^{2,1}(X) = 0$}
The simplest example is the case where the Calabi-Yau manifold $X$
is rigid, i.e. $\mathcal{M} = \{pt\} $. (See \cite{AD, DD} for further
details and computer graphics of critical points in this case.)
Then only the parameter $\tau \in \mathcal{H}$ varies. Let $G=F+iH$, and
consider the flux superpotential $W_G$. Its critical point
equation is $$F+\tau H\in H^{0,3}$$ (since in this case
$H^{2,1}(X,{\mathbb C})=0$). So we write $$F=A\Omega+\overline {A\Omega}\;,\quad
H=B\Omega+\overline {B\Omega}\;,\qquad A=a_1+ia_2,\
B=b_1+ib_2\in{\mathbb Z}+\sqrt{-1}\,{\mathbb Z}\;.$$ Then writing $W_G=W_{A,B}$, we
have
$$\nabla W_{A,B}=0 \iff F+\tau H\in H^{0,3} \iff A+\tau B = 0 \iff \tau=-\frac
AB.$$
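In more detail, with this notation,
$$F+\tau H = (A+\tau B)\,\Omega + (\bar A+\tau\bar B)\,\overline\Omega\,,$$
and $H^{0,3}={\mathbb C}\,\overline\Omega$, so the condition $F+\tau H\in
H^{0,3}$ is precisely the vanishing of the $\Omega$-coefficient,
i.e.\ $A+\tau B=0$.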
Each flux superpotential $W_{A,B}\in\mathcal{S}$ (with $A, B \in {\mathbb C}$)
has a unique critical point in $\mathcal{H}$, which may or may not lie
in the fundamental domain $\mathcal{C}$. In the notation of
(\ref{DIAGRAM}),
$$\pi(\mathcal{S}) = \{W_{A, B}: - { \frac{A}{B}} \in \mathcal{C} \}$$
is a domain with boundary in ${\mathbb C}^2$. Each $SL(2, {\mathbb Z})$-orbit of
fluxes (or superpotentials) contains a unique element whose
critical point lies in $\mathcal{C}$, so $\pi(\mathcal{S})$ is a fundamental
domain for the action of $\Gamma$ on $\mathcal{S}$.
Thus, counting critical points is equivalent to counting $SL(2,
{\mathbb Z})$ orbits of superpotentials satisfying the tadpole constraint.
The pair $(A, B)$ corresponds to the element $\left(
\begin{array}{ll} a_1 & b_1 \\ a_2 & b_2 \end{array} \right)
\in GL(2, {\mathbb Z})$ and the Hodge-Riemann quadratic form
may be identified with the indefinite
quadratic form
$$Q[(A, B)] = a_1 b_2 - a_2 b_1$$
on ${\mathbb R}^4 $.
The
modular group $SL(2, {\mathbb Z})$ acts by the standard diagonal action on
$(A, B) \in {\mathbb R}^2 \times {\mathbb R}^2$ preserving $Q[(A, B)]$ or
equivalently by left multiplication preserving $\det$. Thus, the
set of superpotentials satisfying the tadpole constraint is
parametrized by:
$$\left\{\left( \begin{array}{ll} a_1 & b_1 \\ a_2 & b_2 \end{array}
\right) \in GL(2, {\mathbb Z}): 0<\det \left( \begin{array}{ll} a_1 & b_1 \\
a_2 & b_2
\end{array} \right) \leq L \right\}, $$
and we want to count the number of
$SL(2, {\mathbb Z})$-orbits in this set.
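A standard reduction makes this count explicit: by the Hermite normal
form, each $SL(2,{\mathbb Z})$-orbit of integer matrices of determinant $m>0$
contains a unique representative
$$\left(\begin{array}{ll} a & b \\ 0 & d\end{array}\right),
\qquad ad=m,\quad a,d>0,\quad 0\le b<d\,,$$
so the number of orbits with determinant $m$ equals
$\sum_{d\mid m} d=\sigma(m)$.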
Counting the number of $SL(2, {\mathbb Z})$ orbits in $\mathcal{D}_L$ is
equivalent to determining the average order of the classical
divisor function $\sigma(m)$; see for instance Hardy-Wright
\cite[Theorem 324]{HW}:
\begin{equation} \label{ONEP} \mathcal{N}^{\operatorname {crit}}(L) = \sum_{m = 1}^L \sum_{k | m} k = \sum_{m = 1}^L \sigma(m) = \frac{\pi^2}{12} L^2
+ O(L \log L). \end{equation} As verified in \cite{DD} (and as
follows very simply from Theorem \ref{MAIN}), the critical points
are uniformly distributed relative to the hyperbolic area form.
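For the reader's convenience, the average order of $\sigma$ used in
\eqref{ONEP} follows by interchanging the order of summation:
$$\sum_{m = 1}^L \sigma(m) = \sum_{kd\le L} d
= \sum_{k=1}^L\ \sum_{d \le L/k} d
= \frac{L^2}{2}\sum_{k=1}^L \frac{1}{k^2} + O(L\log L)
= \frac{\pi^2}{12}\,L^2 + O(L\log L)\,,$$
using $\sum_{k\ge1} k^{-2} = \zeta(2) = \pi^2/6$ and
$\sum_{k>L}k^{-2}=O(1/L)$.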
\subsubsection{$h^{2,1}(X)=1$}
We now illustrate our notation and results with the case
where the moduli space of complex structures on $X$ is
one-dimensional over ${\mathbb C}$. (This case is also studied in \cite{DD}
from a slightly different point of view.) In this case, there is a
single Yukawa coupling $\mathcal{F}_{11}^{\bar 1} (z)$ defined by $D_z^2
\Omega_z = \mathcal{F}^{\bar 1}_{11} (z) \overline{D_z \Omega_z}. $
The space $\mathcal{S}_{z, \tau} \simeq
H^{2,1} \oplus H^{0, 3} \simeq {\mathbb C}^2$. The space is spanned as a
real vector space by four superpotentials $U_0, U_1, V_0,V_1$
corresponding to $\{\overline{\Omega_z}, \mathcal{D}_z \Omega_z,i\overline{\Omega_z},
i\mathcal{D}_z \Omega_z\}$. By the proof of Lemma \ref{RANGE}, the holomorphic
Hessians of $U_0$ and $V_0$ at a critical point equal zero, so we only need to
consider the holomorphic Hessian map on $U_1$ and $V_1$. The corresponding
space of Hessians is the real $2$-dimensional subspace $\mathcal{H}_Z$ of ${\operatorname{Sym}}(2,
{\mathbb C})$ spanned by
$$ \xi^1 = \begin{pmatrix} 0 & 1 \\
1& F(z) \end{pmatrix}, \;\;\;\;\xi^2 = \sqrt{-1}
\begin{pmatrix} 0 & 1
\\
1 & -F(z) \end{pmatrix} , $$
where we write $F=\mathcal{F}_{11}^{\bar 1}$.
Hence, we may parameterize the space $\mathcal{H}_Z$ of holomorphic Hessians by
$$w = y_1 + i y_2 \mapsto H(w) =
\left( \begin{array}{ll} 0 & w \\ & \\
w & F(z) \bar{w}\end{array} \right). $$
By \eqref{Kcritgauss}, we have:
$$\mathcal{K}^{\operatorname {crit}} (Z) = \frac 1{2!} \int_{{\mathbb C}\oplus {\mathbb C}} |\det(H(w)^*H(w) - |x|^2 I)
|\;\; e^{- |w|^2-|x|^2} dw\, dx . \;\;$$ We note that
$$\det(H(w)^*H(w) - |x|^2 I) = |w|^4 + |x|^4 - (2+|F(z)|^2) |x|^2
|w|^2. $$ Hence
$$\mathcal{K}^{\operatorname {crit}} (Z) = \frac 1{2!} \int_{{\mathbb C}\oplus {\mathbb C}} \left| |w|^4 + |x|^4 - (2+|F(z)|^2) |x|^2
|w|^2\right|\,e^{- |w|^2-|x|^2}\, dw\, dx , $$ agreeing with
(3.19) of \cite{DD}. There, the integral is evaluated as
$$\mathcal{K}^{\operatorname {crit}}(Z) = \frac{\pi^2}{2}\left(2 - |{F}|^2 + \frac{2
|{F}|^3}{\sqrt{4 + |F|^2}}\right). $$
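As a quick numerical sanity check on this closed form (our sketch, not part of the original derivation), one can pass to radial variables $s=|w|^2$, $t=|x|^2$, each complex integration contributing a factor of $\pi$, and compare the resulting two-dimensional integral against the formula:

```python
import numpy as np
from scipy.integrate import dblquad

def k_crit_numeric(F):
    # Radial reduction of (1/2!) int |det(H(w)*H(w) - |x|^2 I)| e^{-|w|^2-|x|^2} dw dx
    # with s = |w|^2, t = |x|^2; each integral over C contributes a factor of pi.
    f = abs(F) ** 2
    integrand = lambda t, s: abs(s**2 + t**2 - (2.0 + f) * s * t) * np.exp(-s - t)
    val, _ = dblquad(integrand, 0.0, 60.0, 0.0, 60.0)
    return 0.5 * np.pi**2 * val

def k_crit_closed(F):
    # Closed form as in (3.19) of [DD]
    f = abs(F) ** 2
    return 0.5 * np.pi**2 * (2.0 - f + 2.0 * f**1.5 / np.sqrt(4.0 + f))

for F in (0.0, 0.5, 1.3):
    assert abs(k_crit_numeric(F) - k_crit_closed(F)) < 1e-6 * (1.0 + k_crit_closed(F))
```

At $F = 0$ the integrand reduces to $(s-t)^2 e^{-s-t}$ and both sides equal $\pi^2$.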
\begin{rem} In this example, the discriminant variety is given by
$$ \widetilde\mathcal{D} = \{(Z,x\,W_0(Z)+w\,W_1(Z))\in\mathcal{I}:
|w|^2-|x|^2= \pm |w\,x\,F(z)|\},$$ where $W_\alpha = U_\alpha +iV_\alpha$.
The matrix $\Lambda$ is given by
$$\Lambda=\begin{pmatrix} 2+|F|^2 &0\\0& 2+|F|^2\end{pmatrix}.$$
\end{rem}
\section{\label{FCFP}Problems and heuristics on the string theory landscape}
In this section, we continue the discussion begun in \S
\ref{RELATIONS} on the bearing of our methods and results on the
physicists' picture of the string theory landscape. We briefly
review some of the heuristic estimates in the physics discussions,
and then discuss a number of mathematical pitfalls in the
heuristics. In \S \ref{PROBLEMS}, we state some mathematical
problems suggested by the heuristics and by rigorous vacuum
statistics. In \S \ref{HEURISTICS}, we give our own (tentative)
heuristic estimate of the dependence of the critical point density
$\mathcal{K}^{\operatorname {crit}}(Z)$ on the dimension $b_3/2$ of $\mathcal{C}$.
\subsection{Complexity of the string theory landscape}
As mentioned in \S \ref{RELATIONS}, the possible vacua in string/M
theory are often represented as valleys in a complex string theory
landscape, and the number of valleys is often estimated at
$10^{500}$.
L. Susskind and others have argued that such a large number of
possible vacua should essentially be a consequence of the large
number of variables in the potential. A common and general
argument to arrive at this number of vacua without specifying any
particular string theory model is to reason that the potential
energy is a function of roughly $1000$ variables.
A generic polynomial $f$ of degree $d$ on ${\mathbb C}^m$ has
$(d - 1)^m$ critical points since critical
points are solutions of the $m$ equations $\frac{\partial
f}{\partial z_j} (w) = 0$ of degree $d - 1$. Thus, the number of
critical points would seem to grow at an exponential rate in the
number of variables. Such an exponential growth rate of critical
points also appears in the physics of spin glasses, where the
growth in the number of metastable states (local minima of the
Hamiltonian) in terms of the number of variables is often used to
measure the complexity of the energy landscape. In special models
of random Hamiltonians on domains in ${\mathbb R}^N$, exponential growth of
the number of local minima in $N$
has recently been proved rigorously \cite{Fy}.
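To illustrate the $(d-1)^m$ count in the smallest nontrivial case (a hypothetical numerical example, not from the text): for a generic cubic on ${\mathbb C}^2$, eliminating one variable from the gradient system leaves a quartic in the other, giving $(3-1)^2 = 4$ critical points.

```python
import numpy as np

# Critical points of f(z1, z2) = z1^3 + z2^3 + z1*z2 + z1 on C^2.
# df/dz2 = 3 z2^2 + z1 = 0  =>  z1 = -3 z2^2; substituting into
# df/dz1 = 3 z1^2 + z2 + 1 = 0 gives 27 z2^4 + z2 + 1 = 0.
roots = np.roots([27.0, 0.0, 0.0, 1.0, 1.0])
crit = [(-3.0 * t**2, t) for t in roots]

# every candidate solves the full gradient system
for z1, z2 in crit:
    assert abs(3 * z1**2 + z2 + 1) < 1e-8 and abs(3 * z2**2 + z1) < 1e-8

assert len(crit) == (3 - 1) ** 2  # (d-1)^m with d = 3, m = 2
```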
In the specific models of type IIb flux compactifications on
a CY $3$-fold $X$, the number of variables is $b_3(X)$. As
mentioned above, for a typical CY $3$-fold, $b_3$ is often
around $300$ and sometimes as high as $1000$ (cf.\ \cite{ GHJ,
Can1}), and therefore the scalar potential $V_W$ in (\ref{V}) is a
function of this number of variables. By naive counting of
variables one would thus arrive at a figure like $10^{500}$ for
such models. The more sophisticated estimate $N_{vac} \simeq
\frac{L^{b_3}}{b_3!} f(b_3)$ in flux compactifications (see \S
\ref{RELATIONS} for the notation) does not supplant the naive
counting argument since the order of magnitude of $f(b_3) $
is unknown. We recall that it is the integral over $\mathcal{C}$ of the Gaussian
integral in \eqref{PF} (see (\ref{FB3})). The Gaussian integral for
$\mathcal{K}^{\operatorname {crit}}$ in that line resembles to some extent
the integral formula for the expected number of
critical points in spin glass theory, which has exponential growth
(see e.g. \cite{Fy}).
Although the naive counting of variables or the analogy to
complexity of energy landscapes bring some insight into vacuum
counting, we now point out some pitfalls in estimating numbers of
vacua or the coefficient $f(b_3)$ in flux compactifications on
this basis.
\begin{enumerate}
\item The critical point equation (\ref{CRITSET}) is $C^{\infty}$
but not holomorphic, so vacua are critical points of a real system
of equations, and it is not obvious how many critical points relative to
the connection one should expect even a polynomial of a given degree to have. This
number depends on the connection, and is studied in detail in
\cite{DSZ1, DSZ2} and in the present paper.
\item A flux superpotential $W$
is not a polynomial and it is not clear how to assign it a `degree' which
reflects its number of critical points on all of Teichm\"uller
space, or equivalently, the number of critical points in $\mathcal{C}$
corresponding to the $\Gamma$-orbit of $W$.
Examples (e.g. in \S \ref{EXZERO}) show that this number can be relatively small.
\item It seems reasonable to say that it is the number of fluxes, rather than the number of
critical points per flux, that dominates the number of vacua.
In flux compactifications, the landscape should therefore be
viewed as the graph of the scalar potential $V_W(Z)$ on $\mathcal{C}
\times \mathcal{S}$, i.e. as a function of both variables $W, Z$, and
the local minima should be viewed as pairs $(W_G, Z)$ with $G
\in H^3(X, {\mathbb Z} \oplus \sqrt{-1} {\mathbb Z})$ and with $Z \in Crit(W_G). $
\item However (see the problems below) it is not straightforward
to define `per flux',
since the tadpole constraint is
hyperbolic, and the total number of lattice points in the shell $0 < Q[G] <
L$ is infinite.
\item In estimating $\mathcal{K}^{\operatorname {crit}}(Z)$ we are fixing $Z$ in the
interior of $\mathcal{C}$. But there could exist singular points of
$\mathcal{C}$ at which $\mathcal{K}^{\operatorname {crit}}(Z)$ blows up (see \cite{DD} for
discussion of conifold points). It would also be interesting to
study $\mathcal{K}^{\operatorname {crit}}(Z)$ as $Z \to \d \mathcal{C}$.
\item As mentioned in \S \ref{RELATIONS} (see also \S \ref{HEURISTICS}), there may be a significant difference between the order of
magnitude of the density of critical points and of the number of
critical points, since
$\mathcal{C}$ is an incomplete K\"ahler manifold of possibly quite small
volume. See \cite{LuS1} for the current state of the art on the
volume. There is no analogue of the small volume of the
configuration space in spin glass complexity.
\item The tadpole constraint (\ref{TC}) becomes much more highly
constraining as the number $b_3$ of variables increases for fixed
$L$ and is responsible for the factor $\frac{1}{(b_3)!}$ in
Theorem \ref{MAIN}. Again, no such feature exists in complexity
estimates in spin glasses.
\end{enumerate}
\subsection{\label{PROBLEMS}Problems}
The issues mentioned above (and the detailed heuristics in \S
\ref{HEURISTICS}) suggest a number of problems. The ultimate goal
is:
\begin{prob} \label{BIGMAINPROB}
Does string theory contain a
vacuum consistent with the standard model, and if so, how many such vacua?
Find examples of Calabi-Yau manifolds, and any other postulated structures,
for which it is certain that such a vacuum exists.
\end{prob}
Now testing consistency with the standard model requires elucidating
far more structure of a candidate vacuum -- the gauge group, the matter
content, and so forth -- than we are considering here. To address this
ultimate problem, one would need many more statistical results, along
the lines set out in \cite{Doug}. However one can make arguments
(admittedly quite speculative at this point) that the dominant
multiplicity in vacuum counting arises from the multiplicity of flux vacua
we are discussing here.
An important problem in this context is
\begin{prob} \label{MAINPROB}
How large does $L$ need to be to ensure that there exists a
vacuum with
\begin{equation} \label{cosbound}
|W_G(Z)|^2 \le \lambda_*
\end{equation}
for a specified $\lambda_*$ ? In that case, how many such
vacua are there? Find examples of Calabi-Yau manifolds where it is
certain that such a vacuum exists.
\end{prob}
To solve this problem for type IIb flux compactifications, we
would need to sharpen Theorem \ref{MAIN} in many ways which lead
to the subsequent problems stated below.
The constraint (\ref{cosbound}) on $|W_G(Z)|^2$ is a simple
example of `consistency with the standard model.' If the real
world were (counter-factually) exactly supersymmetric, this would
be the constraint that the vacuum should have a cosmological
constant $V_W(Z) = -3|W_G(Z)|^2$ (as in (\ref{V})) consistent with
the known value. While the physical discussion requires taking
supersymmetry breaking into account, as discussed in \cite{DD2},
vacua can exist in which supersymmetry is broken by effects not
taken into account here, making additional contributions to the
vacuum energy which lift the exact vacuum energy to be consistent
with the known value (essentially, zero). For such a vacuum, the
quantity $3|W_G(Z)|^2$ would be the mass squared of the gravitino,
a quantity which could be constrained by physical observations.
An independent motivation for (\ref{cosbound}) is that some proposals
for stabilizing the moduli we did not discuss, such as that of \cite{KKLT}, are
believed only to work under such a constraint.
In any case, as discussed in \cite{DD} (\S 3.3), one can count such vacua by
choosing the test function to be $\theta ( \lambda_* - |W_G(Z)|^2)$
where $\theta (x) = 1$ for $x> 0$ and $= 0$ for $x
\leq 0.$ This test function is not homogeneous but can be handled
by the methods of this paper (loc. cit.).
Theorem \ref{MAIN} is asymptotic in $L$ and we have also analyzed
to some degree the $b_3$ dependence. But as mentioned in \S
\ref{RELATIONS}, $L$ depends on the topology of $X$. There, we
stated that in many examples $L \simeq C b_3$ with $1/3 \leq C
\leq 3$. To bridge one gap between Theorem \ref{MAIN} and Problem
\ref{MAINPROB}, we state:
\begin{prob} How are the orders of magnitude of $b_3(X)$ and $L$ of
(\ref{TADPOLE}) related as $X$ varies over topologically distinct
Calabi-Yau manifolds?
\end{prob}
We have already mentioned the importance of obtaining effective
estimates in $b_3$ of the coefficient (\ref{LEADDEN}) in Theorem
\ref{MAIN}:
\begin{prob} Obtain an effective estimate of $\mathcal{K}^{\operatorname {crit}}(Z)$ and of
its integral over $\mathcal{C}$ in $b_3$. Also, obtain such an estimate
of the remainder.
\end{prob}
Among the difficulties with this problem is that $\mathcal{K}^{\operatorname {crit}}(Z)$
depends on special features of the moduli space $\mathcal{C}$ which
depend on more than just the dimension $b_3$ and which may change
in an irregular way as the dimension increases. We consider this
problem below in \S \ref{HEURISTICS}.
To gain insight into the size of the leading coefficient
(\ref{LEADDEN}), one could write the principal term in Theorem
\ref{MAIN} in the form $\frac{L^{b_3}}{b_3!} \times f(b_3)$ that
is often used in string theory (cf. \S \ref{RELATIONS}), with
$f(b_3)$ the Gaussian integral in \eqref{PF}. As mentioned above,
it is natural to try to separate out the effects of the number of
fluxes and the number of vacua per flux, or more precisely:
\begin{enumerate}
\item the number of fluxes $G$ satisfying the tadpole constraint
with a critical point in a compact subset $\mathcal{K} \subset \mathcal{C}$;
\item the number of critical points `per flux', or more precisely per $\Gamma$-orbit of fluxes, in
$\mathcal{K}$ (see \S \ref{EXZERO} to clarify this distinction);
\item the total number of critical points in $\mathcal{K}$ of all fluxes satisfying the tadpole constraint.
\end{enumerate}
We can define the first quantity precisely as
the sum
$$\Theta_K(L) = \sum_{G \in H^3(X,
{\mathbb Z} \oplus i {\mathbb Z}): Q[G] \leq L} \theta\left( \sum_{Z \in \mathcal{C}:
\nabla W_G(Z) = 0} \chi_K(Z)\right). $$ Thus, the problem we pose
is:
\begin{prob} Determine the asymptotics of $\Theta_K(L)$ as $L \to
\infty$.
\end{prob}
The second quantity is the ratio $\mathcal{N}_{K}(L)/\Theta_K(L).$ A
possibly more tractable way to restate this problem is in terms of
the `average number of critical points' of a superpotential $W_G$
in $\mathcal{K}$. To define `average' we need to introduce a probability
measure on $\mathcal{F}$ which is compatible with $\chi_Q dW$. The most
natural probability measures seem to be the normalized Gaussian
measures $\gamma_{Z_0}$ on the spaces $\mathcal{S}_{Z_0}$ defined by the
inner product $Q_{Z_0}$. Thus, we ask for the average number of
critical points of $W \in \mathcal{S}_{Z_0}$ with respect to
$\gamma_{Z_0}$. It would be interesting to study the number of
critical points in a fixed $\mathcal{K} \subset \mathcal{C}$ or in all of
$\mathcal{C}$ or indeed in all of Teichm\"uller space (which corresponds
to counting critical points in $\mathcal{C}$ for a $\Gamma$-orbit of
fluxes).
We observe that $W \in \mathcal{S}_{Z_0}$ has a critical point at $Z$
if and only if $W \in \mathcal{S}_{Z_0} \cap \mathcal{S}_Z$. In the case of
flux superpotentials, $\dim \mathcal{S}_{Z_0} = \frac{1}{2} \dim \mathcal{F}$
so for generic pairs $Z, Z_0$, $\mathcal{S}_{Z_0} \cap \mathcal{S}_Z = \{0\}$.
Thus, ${\mathbf E}\,_{Z_0} (\# Crits(W))$ will be an integral over the
special variety $\Sigma_{Z_0} = \{Z \in \mathcal{C}: \dim \mathcal{S}_{Z_0} \cap
\mathcal{S}_Z
> 0\}$. This variety is obviously stratified by
$h^{2,1}$ strata $\Sigma_d$ on which the dimension $d$ takes the
values $d = 1, 2, \dots, h^{2,1}$, and ${\mathbf E}\,_{Z_0} (\# Crits(W))$ is
a sum of integrals over each stratum.
\begin{prob} Determine the asymptotics of
${\mathbf E}\,_{Z_0} (\chi_{Q_{Z_0} (G/ L)} \# Crits(W_G))$.
\end{prob}
We also recall that in Theorem \ref{MAIN} we ignored the effect of
the discriminant variety and the boundary of the region of
$\mathcal{C}$.
\begin{prob}
Estimate the remainder if $\psi$ does not vanish near the
discriminant variety $\mathcal{D}$, or if $\psi$ is a characteristic
function of a smooth region $K \subset \mathcal{C}.$ Investigate the
boundary behavior as $\mathcal{K}$ fills out to $\mathcal{C}$.
\end{prob}
An analogous problem, concerning the accumulation of lattice points
near the boundaries of domains on non-degenerate surfaces, is studied
in \cite{ZZ}.
\subsection{Heuristic estimate of the critical point
density}\label{HEURISTICS}
We now present a heuristic estimate on the $b_3$-dependence of
the critical point density (relative to the Weil-Petersson volume
form)
\begin{eqnarray}\label{Kcritgauss2}
\mathcal{K}^{\operatorname {crit}}(Z) &=& \frac 1 {b_3!\sqrt{\det \Lambda_Z}} \int_{\mathcal{H}_Z
\oplus {\mathbb C}} \left|\det H^*H - |x|^2 I\right|\;\; e^{-(\Lambda^{-1}_Z H,
H)_{\mathbb R} - |x|^2}\,dH\,dx \end{eqnarray} for $Z$ in regions of
moduli space where the norm of $\Lambda_Z$ satisfies bounds
independent of $b_3$.
We recall (cf. Proposition \ref{LAMBDARICCI}) that $\Lambda_Z$ is the
Hodge metric, hence we are studying the density of critical
points in regions $K \subset \mathcal{C}$ where the absolute values of
the eigenvalues of the Ricci curvature of the Weil-Petersson
metric $\omega_{WP}$ are bounded by a uniform constant. In the
notation $N_{vac}(L) \sim \frac{L^{b_3}}{b_3!} f(b_3)$, we have
\begin{equation} \label{FB3} f(b_3) = \int_{\mathcal{C}} \chi_K(Z)
\frac{1}{\sqrt{\det \Lambda_Z}} \int_{\mathcal{H}_Z \oplus {\mathbb C}} \left|\det
H^*H - |x|^2 I\right|\;\; e^{-(\Lambda^{-1}_Z H, H)_{\mathbb R} - |x|^2}\,dH\,dx
,\end{equation}
where $\mathcal{K}$ is the region in which we are counting the critical points.
Our heuristic estimate is that the Gaussian integral (i.e. $b_3!
\mathcal{K}^{\operatorname {crit}}(Z)$) has growth rate $(b_3/2)! N_{\mu}^{b_3}$ for $Z$
in a region $K = K_{\mu}$ of moduli space where $||\Lambda_Z||
\leq \mu$. Here, $N_{\mu}$ is a constant depending only on $\mu$.
It follows that $\mathcal{K}^{\operatorname {crit}}(Z)$ would have the decay rate
$b_3^{-b_3/2}$ for $Z$ in $K_{\mu}$. We note that this heuristic
estimate is consistent with the heuristic estimate given by
Ashok-Douglas \cite{AD} that $\mathcal{K}^{\operatorname {crit}}(Z)$ should have the
same order of magnitude as $\mathcal{I} nd (Z)$ (\ref{INDDEN}). By
Proposition \ref{INDCURV}, $b_3 ! \mathcal{I} nd(Z)$ is a differential
form depending polynomially on the curvature. The density of $b_3
! \mathcal{I} nd(Z)$ relative to $dVol_{WP} =
\frac{\omega_{WP}^{b_3/2}}{(b_3/2)!}$ thus has the growth
$(b_3/2)! N_{\mu}^{b_3}$ we predict. We present the new
heuristic to give evidence that the absolute value only changes
the coefficient and not the order of magnitude in vacuum
counting.
Before going into the heuristic estimate, we first discuss the
consequences for vacuum counting.
As mentioned in the
introduction, it has been tentatively conjectured (Z. Lu) at the time
of writing that the Weil-Petersson volume of $K_{\mu}$ is
bounded above by the volume of a ball of radius $r(\mu)$ in
${\mathbb C}^{b_3/2}$ depending only on $\mu$, and the latter volume decays
like $\frac{1}{(b_3/2)!}$. Thus it would appear that
$N_{vac, K_{\mu}}(L) \sim \frac{(C_1 L
N_{\mu})^{b_3}}{b_3!}$. We include a constant $C_1$ to take into
account the dependence on various parameters including $r(\mu)$,
factors of $\pi$ and so on. If we then take the (often) observed
value $L \sim C b_3$ with $C \in [\frac{1}{3}, 3]$, then the
number of vacua in $K_{\mu}$ satisfying the tadpole constraint
would grow at an exponential rate in $b_3$.
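The factorial decay invoked here is just the standard ball-volume formula $\mathrm{Vol}(B_r \subset {\mathbb R}^{d}) = \pi^{d/2} r^d / \Gamma(d/2+1)$; a minimal numerical illustration (ours, not the paper's):

```python
from math import pi, gamma

def ball_volume(dim_R, r=1.0):
    # Volume of a ball of radius r in R^d: pi^(d/2) r^d / Gamma(d/2 + 1).
    return pi ** (dim_R / 2) * r ** dim_R / gamma(dim_R / 2 + 1)

# A ball of fixed radius in C^(b3/2) = R^(b3) has volume (pi r^2)^(b3/2) / (b3/2)!,
# which decays factorially in b3/2 -- the conjectured behavior of Vol(K_mu).
for n in range(1, 10):
    assert abs(ball_volume(2 * n) - pi**n / gamma(n + 1)) < 1e-12

assert ball_volume(20) < ball_volume(10) < ball_volume(2)
```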
We now explain the heuristic estimate regarding the order of
magnitude of $\mathcal{K}^{\operatorname {crit}}(Z)$ (\ref{LEADDEN}): the latter depends
on two inputs, the subspace $\mathcal{H}_Z$ (or equivalently the
orthogonal projection $P_Z$ onto $\mathcal{H}_Z$) and the eigenvalues of
$\Lambda_Z$. To obtain upper and lower bounds on $\mathcal{K}^{\operatorname {crit}}(Z)$ we
note that
\begin{equation} \label{MUMIN} 2 P_Z \leq \Lambda_Z \leq \mu_{\max}(Z)
P_Z, \end{equation} where $\mu_{\max}(Z)$ is the maximum
eigenvalue of $\Lambda_Z$.
We recall here that $\Lambda_Z$ is the matrix of the
Hodge metric (see (\ref{LAZY})), and its eigenvalues can be
estimated in terms of the Weil-Petersson metric and its curvature
(cf. \cite{Lu}). In particular, its minimum eigenvalue satisfies
$\mu_{\min}(Z) \ge 2$, and that explains the lower bound $2 P_Z$
in (\ref{MUMIN}). For most CY
$3$-folds $X$, the Weil-Petersson metric on $\mathcal{C}$ is incomplete,
and $\mu_{\max} (Z)\to \infty$ as $Z$ tends to the boundary (Z.
Lu).
By (\ref{MUMIN}), we have
\begin{equation} \label{BOUNDS} J_{-} (\mu, P_Z) \leq (b_3!) \mathcal{K}^{\operatorname {crit}}(Z)
\leq J_+(\mu, P_Z), \;\;\;(\forall \mu \geq \mu_{\max}(Z))\end{equation} where \begin{eqnarray} J_+(\mu, P_Z) :
& = & \frac 1 {2^{b_3/2 - 1} } \int_{\mathcal{H}_Z \oplus {\mathbb C}} \left|\det
H^*H - |x|^2 I\right|\;\; e^{- \left( \mu^{-1} \mbox{Tr} H^*H +
|x|^2 \right)}\,dH\,dx,
\end{eqnarray}
and where
\begin{eqnarray} J_-(\mu, P_Z) : & = &
\frac 1 { \mu^{(b_3/2 - 1)} } \int_{\mathcal{H}_Z \oplus {\mathbb C}} \left|\det
H^*H - |x|^2 I\right|\;\; e^{- \left( 2^{-1} \mbox{Tr} H^*H +
|x|^2 \right)}\,dH\,dx.
\end{eqnarray}
Thus we obtain upper and lower bounds for the density in regions
$K_{\mu} \subset \mathcal{C}$ for which the absolute values of the
eigenvalues of the Hodge metric relative to the Weil-Petersson
metric satisfy $\mu_{\max}(Z) \leq \mu$. We have bounded the
determinant of $\Lambda$ by a power of an extremal eigenvalue, but
it could also be identified with the volume density of the Hodge
metric. We note that the lower bound tends to zero and the upper
bound tends to infinity in $\sim \pm b_3$ powers of
$\mu_{\max}(Z)$ as $Z \to \d \mathcal{C}$ when the Weil-Petersson
metric is incomplete and the norm of the Ricci curvature of
$\omega_{WP}$ tends to infinity.
We now estimate $J_{\pm} (\mu, P_Z)$ under the assumption that
$\mathcal{H}_Z$ is a `sufficiently random' subspace.
The subspace
$\mathcal{H}_Z$ is a real subspace of dimension $b_3 - 2$ of ${\operatorname{Sym}}(b_3/2
- 1, {\mathbb C}) $, but by modifying the definition of the complex
structure it becomes a complex $(b_3/2 - 1)$-dimensional one. Hence, we
may view $Z \to \mathcal{H}_Z$ as a map $\mathcal{C} \to Gr(b_3/2 - 1,
{\operatorname{Sym}}(b_3/2 - 1, {\mathbb C}))$ to the complex Grassmannian of $b_3/2 - 1$
dimensional complex subspaces. Lacking knowledge of the
distribution of the image of $Z \to \mathcal{H}_Z$, we make the
assumption that it is random, or more precisely we approximate
$J_{\pm}(\mu, P_Z)$ by the expected value of $J_{\pm}(\mu, P)$,
where $P$ is the projection corresponding to a random element
$\mathcal{H} \in Gr(b_3/2 - 1, {\operatorname{Sym}}(b_3/2 - 1, {\mathbb C}))$.
This approximation by the expected value seems to be reasonable
because
Grassmannians $Gr(k, N)$ are examples of Gromov-Milman `L\'evy
families' of Riemannian manifolds for which concentration of
measure phenomena hold as $N \to \infty$ \cite{GM, T}.
Concentration of measure refers to a metric space $(X, d)$ with a
probability measure $P$ and a concentration function $\alpha(P,
t)$, which is the smallest number such that the measure of a set
$A$ and the metric tube $A_t = \{x: d(x, A) < t\}$ around $A$ are
related by $P(A) \geq 1/2 \implies P(A_t) \geq 1 - \alpha(P, t).$
If $f$ is a Lipschitz function and if $M_f$ is a median for $f$,
we put $A = \{x: f(x) \leq M_f\}$, and then $P(|f - M_f| > t)
\leq 2 \alpha(P, \frac{t}{||f||_{Lip}}). $ Concentration of
measure occurs if $\alpha(P, t)$ decays rapidly in $t$, and thus
$f$ is highly concentrated around its median. In a L\'evy family
$(X_N, d_N)$, the functions $\alpha_N(P, t)$ decay at ever faster
rates depending on $N$. For instance on the unit $N$-sphere $S^N$,
the rate is (a universal constant times) $e^{- \frac{(N - 1)}{2}
t^2}$.
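As a minimal numerical illustration of this rate (our sketch, not from the text), one can sample uniform points on $S^N$ and watch the tail probabilities of the $1$-Lipschitz coordinate function collapse as $N$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_prob(N, t=0.3, n_samples=20_000):
    # Empirical P(|f| > t) for the 1-Lipschitz function f(x) = x_1 on the
    # unit sphere S^N in R^(N+1); by symmetry the median of f is 0.
    x = rng.standard_normal((n_samples, N + 1))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform samples on S^N
    return np.mean(np.abs(x[:, 0]) > t)

p10, p100, p1000 = tail_prob(10), tail_prob(100), tail_prob(1000)
# deviations from the median die off rapidly as the dimension N grows
assert p10 > 0.2 and p100 < 0.05 and p1000 < 1e-3
```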
In our setting, the family consists of Grassmannians $Gr(b_3/2 - 1,
{\operatorname{Sym}}(b_3/2 - 1, {\mathbb C}))$
equipped with the invariant probability measure $d\nu$ and with the
standard bi-invariant
metric. It is pointed out in \cite{GM} that $Gr(k, N)$ is a
L\'evy family for fixed $k$ (see section (3.3) of \cite{GM}), and the same argument should apply to
$k_N \sim N/2$. Moreover, $\{U(N)\}$ with its Haar probability
measure and bi-invariant metric is L\'evy, and by section (2.1)
of \cite{GM}
its quotients should be as well.
The function $f$ is $J_{\pm}(\mu, P)$ for fixed $\mu$. Since we are mainly interested in factorial
dependencies, we set $\mu = 1$ and change the exponent $2^{-1}$ to $1$
to make the Gaussian measure a probability measure. In general, the result would
be modified by a $\pm b_3$ power of $\mu$. In this heuristic
discussion, we will not attempt to determine $\alpha_N(P, t)$ or
$M_f$ but will assume that $ \alpha(P, \frac{t}{||f||_{Lip}})$
has rapid decrease in $t$ which improves with the dimension. We
also note that when $\alpha(P, t)$ is small, we can replace the
median of $J_{\pm}(\mu, P)$ (with $\mu = 1$) by its mean
$$ \int_{Gr(b_3/2 - 1, {\operatorname{Sym}}(b_3/2 - 1,
{\mathbb C}))} \left\{\int_{\mathcal{H} \oplus {\mathbb C}} |\det (H^*H - |x|^2 I)| e^{- Tr
H^* H - |x|^2} dH dx\right\} d\nu(\mathcal{H})$$ with a small error
(cf.\ \cite{T}). This mean equals \begin{equation} \label{MEAN}
\int_{{\operatorname{Sym}}(b_3/2 - 1, {\mathbb C}) \oplus {\mathbb C}} |\det (H^*H - |x|^2 I)| e^{-
Tr H^* H - |x|^2} dH dx \end{equation} since both measures are
invariant probability measures and are therefore equal. Here we
ignore factors of $(2\pi)$ (etc.) for the sake of simplicity,
since we are primarily interested in the factorially growing
quantities. Due to the concentration of measure, the spaces
$\mathcal{H}_Z$ would have to be very `rare events' if $J_{\pm}(\mu,
P_Z)$ differed appreciably from its mean. We note that since
$H^3_Z$ is a complex polarization, $P_Z$ has special features that
do not hold for random subspaces, but we have no reason to believe
that these special features
bias $J(\mu, P_Z)$ away from its mean.
We now observe that (\ref{MEAN}) (with any choice of $\mu$)
is
similar to the integral for the density of critical points for
holomorphic sections of $\mathcal{O}(N) \to \C\PP^m$ with $m = b_3/2 - 1$
with respect to the Fubini-Study connection for a fixed degree
$N$ \cite{DSZ2} (\S 4). There, the $\Lambda_Z$ matrix was (for
every $Z$) a two-block diagonal matrix with a large scalar block
and a $1 \times 1$ scalar block. When $\mu = 1$ (\ref{MEAN})
agrees with that $\mathcal{O}(N) \to \C\PP^m$ density in the case
$N = 1$. As noted in \cite{DSZ2}, the total number of critical points
of a given Morse index appears to grow at a rate $N^m$ times a
rational quantity in $m$ as $m \to \infty$. This growth rate may
also be easily verified for the Euler characteristic $\chi(T^{*
1,0} \otimes \mathcal{O}(N))$, i.e. the alternating sum over the Morse
indices, which is given by
\begin{eqnarray*}\chi(T^{* 1,0} \otimes \mathcal{O}(N))&=& \left(
\frac{c(\mathcal{O}(N-1))^{m+1}}{c(\mathcal{O}(N))}, [\C\PP^m]\right)\ =\ \frac
{(N-1)^{m+1}+(-1)^m}{N}\;.\end{eqnarray*} Since the volume of
$\C\PP^m$ is $\frac{1}{m!}$, this would imply that the density of
critical points grows like $m!$ with the dimension. On this basis,
we would expect that $J_{\pm}(\mu, P_Z)$ for $\mu \simeq 1$ grows
with the dimension at the rate $(b_3/2)! N_{\mu}^{b_3}$ for some
$N_{\mu} >0$.
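The Euler-characteristic formula quoted above is easy to check by machine; the following sketch (ours) extracts the $h^m$-coefficient of $(1+(N-1)h)^{m+1}/(1+Nh)$, which is the pairing with $[\C\PP^m]$, and compares it with the closed form:

```python
from math import comb

def chi(m, N):
    # h^m coefficient of (1 + (N-1)h)^(m+1) / (1 + N h), i.e. the pairing
    # ( c(O(N-1))^(m+1) / c(O(N)), [CP^m] ), computed by series expansion:
    # sum_k C(m+1, k) (N-1)^k * (-N)^(m-k).
    return sum(comb(m + 1, k) * (N - 1)**k * (-N)**(m - k) for k in range(m + 1))

# agrees with the closed form ((N-1)^(m+1) + (-1)^m) / N
for m in range(1, 8):
    for N in range(1, 6):
        assert chi(m, N) * N == (N - 1)**(m + 1) + (-1)**m
```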
We note that the Ashok-Douglas heuristic that the density of
critical points should have the same order of magnitude as the
index density is indeed correct in the setting of $\mathcal{O}(N)
\to \C\PP^m$. Further, the origin of the factorials $(b_3/2)!$ is
essentially the same in both the $\mathcal{C}$ and $\C\PP^m$ settings.
Thus our heuristics give $\mathcal{K}^{\operatorname {crit}}(Z) \sim \frac{(b_3/2)!
N_{\mu}^{b_3}}{b_3!}$. If we integrate over $K_{\mu}$ and apply
the conjectural volume bound $\frac{1}{(b_3/2)!}$ for $K_{\mu}$,
we would get roughly $\frac{L^{b_3} N_{\mu}^{b_3}}{b_3!}$.
Further applying the observed relation $L \sim C b_3$ with $C \in
[1/3, 3]$ gives an exponential growth rate for numbers of vacua in
$K_{\mu}$.
\section{Introduction}
This work aims at identifying efficient and accurate models for non-linear,
slender elastic structures. Although there exists a wide range of
one-dimensional beam and rod theories, understanding nonlinear effects arising
during the compression of wide
columns~\citep{lubbers2017nonlinear,chen2020snapping}, predicting the
emergence of shape due to heterogeneous pre-stress generated by growth or
thermal effects in slender
filaments~\citep{Liu-Huang-EtAl-Structural-Transition-from-2014,%
turcaud2020twisters}
and designing structures made of complex nonlinear materials such as nematic
elastomers or active materials
\citep{agostiniani2017shape,tomassetti2017capturing} remain challenging
tasks. Well-established, classical rod theories account for the stretching,
bending and twisting strains in a linear way and therefore do not account for
finite-strain or finite-thickness effects. Extensions have been proposed to
account for some of these effects, but their justification is often patchy or
relies on restrictive hypotheses on the kinematics or the constitutive
behavior; their range of applicability is thus often limited and sometimes
ill-defined. This leaves researchers, engineers and designers with two
alternatives: either rely on full three-dimensional finite
elasticity~\citep{scherzinger1998asymptotic,%
Goriely-Vandiver-EtAl-Nonlinear-Euler-buckling-2008,%
de2011nonlinear,%
chen2020snapping}
or build their own specific reduced model~\citep{lubbers2017nonlinear}.
Should they choose this latter option, however, a clear and rigorous
methodology for deriving such a model is lacking.
This work builds on a dimension reduction procedure introduced by the
authors in an abstract setting~\citep{LESTRINGANT2020103730} which is
applied here to the case of a hyper-elastic prismatic solid which can stretch,
bend and twist arbitrarily in three dimensions. The present work extends our
previous work on one-dimensional structures that can just
stretch~\citep{Audoly-Hutchinson-Analysis-of-necking-based-2016,%
LESTRINGANT2020103730}.
The proposed method yields one-dimensional models that account for
stretching, bending and twisting modes in a non-linear way. It is
asymptotically correct; a scaling estimate of the error in energy with
respect to the full three-dimensional theory is available in terms of
the slenderness parameter. The one-dimensional model is derived based
on an assumption of slow longitudinal variations, implemented by a
two-scale expansion. Effectively, this approach splits the original
three-dimensional problem into a set of relaxation problems formulated
in the two-dimensional cross-section, and a one-dimensional
variational problem at the scale of the structure, as noted in
previous work, {\tmem{e.g.}}, by~\citet{berdichevskii1981energy},
\citet{bermudez1984justification}, \citet{trabucho1989existence} and
\citet{sanchez1999statics}, among others.
We improve on existing approaches to asymptotic dimension reduction in three
key aspects.
\begin{itemize}
\item Our method is variational. While most of the existing work has started
from the three-dimensional equilibrium
equations~\citep{bermudez1984justification,trabucho1996mathematical}, we
base our reduction on the energy formulation of the three-dimensional
problem. This helps keeping the derivation as simple as possible, and makes
the variational structure of the one-dimensional model stand out without any
effort.
\item We start from finite elasticity. Most of the existing work has been
limited to linear
strains~\citep{trabucho1996mathematical,yu2004elasticity,hodges2006nonlinear}
but the one-dimensional models derived using the proposed method can retain
nonlinearities coming from both the geometry and from the constitutive
behavior.
\item Our one-dimensional model is high-order and asymptotically correct,
{\tmem{i.e.}}, it captures the energy cost arising from the longitudinal
gradients of the stretching, bending and twisting strains. Besides
increasing the accuracy and expanding the range of validity of the model,
gradient terms have been found to help capture localization phenomena very
accurately~\citep{Lestringant-Audoly-A-diffuse-interface-model-2018,%
lestringant2020one}.
\end{itemize}
Some of the models from the literature include one or two of these
ingredients. \citet{berdichevskii1981energy},
\citet{hodges2006nonlinear} and \citet{yu2012variational} use a
variational approach,
\citet{trabucho1989existence} and \citet{nolde2018asymptotic} introduce
higher-order terms, \citet{jiang2016nonlinear} and \citet{cimetiere1988asymptotic}
handle finite strains, \citet{moulton2020morphoelastic} work with
finite elasticity in a variational setting. Yet this paper is the
first attempt to combine these three aspects in a unified procedure.
The proposed approach has been designed to be as general as possible. It does
not make any specific assumptions regarding the symmetry of the constitutive
law, such as isotropy~\citep{cimetiere1988asymptotic}. It is not limited to
small rotations, or to specific shapes of the cross-section. It can readily be
applied to a variety of constitutive behaviors, and in particular it can
handle inhomogeneous pre-strain as well as inhomogeneous elastic properties
across the sections. For lack of space, we cannot provide detailed
illustrations for all these capabilities but we shortly discuss
in~{\textsection}\ref{sec:nonlinear-energy-formulation} how these cases can be
covered. Besides, the approach is systematic: it is carried out by applying a
sequence of steps, much like a cooking recipe, and it lends itself naturally
to a numerical implementation.
The manuscript is organized as follows. In section~\ref{sec:full-model}, we
introduce the center-line based representation of a prismatic hyper-elastic
solid and derive the energy functional governing its elastic equilibrium in a
non-linear, three-dimensional setting. In section~\ref{sec:ideal-model}, we
introduce a relaxation method which achieves the one-dimensional reduction
formally. In section~\ref{sec:asymptotic-1d-reduction}, we combine this
relaxation method with a two-scale expansion and derive a concrete recipe for
obtaining one-dimensional models. This method is applied to the linear
analysis of the twisting of a prismatic bar in section~\ref{sec:twisting}, and
to the weakly non-linear analysis of the Euler buckling of a circular cylinder
in section~\ref{sec:Euler-buckling}.
Our mathematical notations are as follows. We use boldface for vectors such as
$\tmmathbf{e}_1$ and tensors $\tmmathbf{F}$. The longitudinal and transverse
coordinates in the prismatic body are denoted as $S$ and $\tmmathbf{T}= (T_1,
T_2)$, respectively. Einstein's implicit summation rules are used throughout,
whereby repeated indices appearing on the same side of an equals sign are
implicitly summed; any index appearing once on each side of an equation is
considered to be a free index, {\tmem{i.e.}}, the equation holds implicitly
for any value of the index. In addition, the range of Greek indices such as
$\alpha$ is implicitly restricted to the cross-sectional directions, $\alpha
\in \{ 1, 2 \}$, while Latin indices such as $i$ run over the three
directions of the Cartesian space, $i \in \{ 1, 2, 3 \}$; as a result,
$T_{\alpha} \, \tmmathbf{d}_{\alpha} (S)$ stands for $\sum_{\alpha =
1}^2 T_{\alpha} \, \tmmathbf{d}_{\alpha} (S)$. The prime notation is
reserved for derivatives with respect to the longitudinal coordinate $S$ and
we use the notation $\partial_{\alpha}$ for partial derivatives along the
cross-sectional directions,
\[ \begin{array}{ll}
f' (S, \tmmathbf{T}) = \frac{\partial f}{\partial S} (S, \tmmathbf{T}) &
\partial_{\alpha} f (S, \tmmathbf{T}) = \frac{\partial f}{\partial
T_{\alpha}} (S, \tmmathbf{T}) .
\end{array} \]
The $\nabla$ notation is reserved for a differentiation with respect to the
macroscopic strain $\tmmathbf{h}$, see equation~(\ref{eq:nabla-notation}). The
notation $\tmmathbf{a} \odot \tmmathbf{b}= \frac{1}{2}\,(\tmmathbf{a} \otimes
\tmmathbf{b}+\tmmathbf{b} \otimes \tmmathbf{a})$ denotes the symmetrized
outer product of two vectors. The restriction of a function $f (S,
\tmmathbf{T})$ to a cross-section with coordinate $S$ is denoted as
$ f |_S$: this object is a function of the transverse coordinates
$\tmmathbf{T}$ only, such that $ f |_S (\tmmathbf{T}) = f (S,
\tmmathbf{T})$. Finally, functionals have their arguments inside square
brackets: the notation $\Psi [\tmmathbf{r}, \tmmathbf{d}_i]$ for the energy
functional implies that the arguments of $\Psi$ are the entire functions
$\tmmathbf{r}$ and $\tmmathbf{d}_i$ and not just their local values.
\section{3d model in center-line based representation}\label{sec:full-model}
In this section, the non-linear equilibrium of a finitely-strained prismatic
hyper-elastic solid is formulated without approximation. Attention is limited
to the formulation of the elasticity problem and no attempt to solve it is
made until the next sections. The formulation makes use of a center-line based
representation, which sets the stage for the forthcoming dimension reduction.
A similar parameterization was introduced in earlier work by
\citet{hodges2006nonlinear} in the framework of linear elasticity, and
extended to finite elasticity by
\citet{jiang2016nonlinear} and \citet{jiang2016nonlinear2}, where it is used as a basis
for a numerical approach to dimension reduction, without any account for the
gradient effect.
\subsection{Center-line based
representation}\label{ssec:centerline-based-parameterization}
\begin{figure}
\centerline{\includegraphics{rod-3d-geom-with-labels.pdf}}
\caption{Center-line based representation of a prismatic solid in
(a)~reference and (b)~actual configurations.\label{fig:geom}}
\end{figure}
We consider a prismatic solid in reference configuration, see
figure~\ref{fig:geom}a. We denote by $\ell$ its initial length, by $S$ the
arc-length coordinate along its axis, such that $0 \leqslant S \leqslant
\ell$, by $\tmmathbf{T}= (T_1, T_2)$ the transverse coordinates and by
$(\tmmathbf{e}_1, \tmmathbf{e}_2, \tmmathbf{e}_3)$ an orthonormal frame
initially aligned with the axes $T_1$, $T_2$ and $S$, respectively. The
cross-section domain is denoted as $\Omega \subset \mathbb{R}^2$. Let $\mathrm{d}
A = \mathrm{d} T_1 \, \mathrm{d} T_2$ be the area element in the domain
$\Omega$ and $| \Omega | = \iint_{\Omega} \mathrm{d} A$ the cross-section area.
The average of a function $f (\tmmathbf{T})$ over a cross-section is denoted
as
\[ \langle f (\tmmathbf{T}) \rangle = \frac{1}{| \Omega |} \,
\iint_{\Omega} f (\tmmathbf{T}) \, \mathrm{d} A. \]
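As a concrete illustration, the cross-sectional average can be approximated by midpoint quadrature; the square section $\Omega = (-1/2, 1/2)^2$ and the grid resolution in the sketch below are illustrative choices of our own, not taken from the text.

```python
import numpy as np

# Numerical version of the cross-sectional average <f>: midpoint quadrature on
# a square section Omega = (-1/2, 1/2)^2. Shape and resolution are assumptions
# of this sketch.

def section_grid(n=64, width=1.0):
    """Midpoint grid covering a square cross-section, with cell area dA."""
    t = (np.arange(n) + 0.5) / n * width - width / 2
    T1, T2 = np.meshgrid(t, t, indexing="ij")
    dA = (width / n) ** 2
    return T1, T2, dA

def average(f_vals, dA):
    """<f> = (1/|Omega|) * sum of f dA over all quadrature cells."""
    area = f_vals.size * dA
    return np.sum(f_vals) * dA / area

T1, T2, dA = section_grid()
print(average(np.ones_like(T1), dA))  # <1> = 1 exactly
print(average(T1, dA))                # <T1> vanishes by symmetry (up to rounding)
```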
The coordinates $(S, \tmmathbf{T}) \in (0, \ell) \times \Omega$ of a material
point in reference configuration are used as Lagrangian variables in the
elasticity problem. The position of this material point in the actual
configuration is denoted as $\tmmathbf{x} (S, \tmmathbf{T})$, see
figure~\ref{fig:geom}b. We do not assume that the internal stress is zero in
the reference configuration, i.e., pre-stress is allowed.
In terms of the mapping $\tmmathbf{x}$ from the reference to the actual
configuration, we define an apparent center-line $\tmmathbf{r} (S)$ passing
through the centroids of the cross-sections,
\begin{equation}
\tmmathbf{r} (S) = \langle \tmmathbf{x} (S, \tmmathbf{T}) \rangle,
\label{eq:ctr-of-mass-constraint-x}
\end{equation}
and a unit tangent to the center-line $\tmmathbf{d}_3 (S)$,
\begin{equation}
\tmmathbf{d}_3 (S) = \frac{\frac{\mathrm{d} \tmmathbf{r}}{\mathrm{d} S} (S)}{\left|
\frac{\mathrm{d} \tmmathbf{r}}{\mathrm{d} S} (S) \right|} .
\label{eq:d3-from-rPrime}
\end{equation}
The unit vector $\tmmathbf{d}_3 (S)$ can be complemented by two vectors
$\tmmathbf{d}_1 (S)$ and $\tmmathbf{d}_2 (S)$ forming an orthonormal frame,
the orientation of $\tmmathbf{d}_1 (S)$ and $\tmmathbf{d}_2 (S)$ in the plane
perpendicular to $\tmmathbf{d}_3 (S)$ being fixed by the condition
\begin{equation}
\forall S \quad \left\langle T_{\alpha} \, \tmmathbf{d}_{\alpha}
(S) \times (\tmmathbf{x} (S, \tmmathbf{T}) -\tmmathbf{r} (S)) \right\rangle
\cdot \tmmathbf{d}_3 (S) = 0. \label{eq:twist-condition-x}
\end{equation}
By Einstein's implicit summation rule and by our convention that Greek indices
are restricted to cross-sectional directions, the left-hand side in the
equation above is implicitly summed over $\alpha \in \{ 1, 2 \}$. By
equation~(\ref{eq:twist-condition-x}), the orthonormal frame $\tmmathbf{d}_i
(S)$ captures the average rotation of the cross-section about the tangent
$\tmmathbf{d}_3 (S)$ at any point $S$ along the center-line. The orthonormal
vectors $\tmmathbf{d}_i (S)$ are called the {\tmem{directors}} in the theory
of rods.
The condition that the directors are orthonormal reads
\begin{equation}
\tmmathbf{d}_i (S) \cdot \tmmathbf{d}_j (S) = \delta_{i j},
\label{eq:di-orthonormal-frame}
\end{equation}
for any $S$ and any integers $i$ and $j$, where $\delta_{i j}$ is
Kronecker's symbol, equal to 1 when $i = j$ and to 0 otherwise.
The original transformation can be recovered as
\begin{equation}
\tmmathbf{x} (S, \tmmathbf{T}) =\tmmathbf{r} (S) + y_i (S, \tmmathbf{T})
\, \tmmathbf{d}_i (S), \label{eq:x-centerline-based-crspondence}
\end{equation}
where $y_i (S, \tmmathbf{T}) = (\tmmathbf{x} (S, \tmmathbf{T}) -\tmmathbf{r}
(S)) \cdot \tmmathbf{d}_i (S)$ for $i = 1, 2, 3$ are the microscopic
displacement functions (`displacement' is an abuse of language, since this
quantity is non-zero in the reference configuration, but we will use it
anyway). In terms of the displacement $y_i$, the constraints in
equations~(\ref{eq:ctr-of-mass-constraint-x}) and~(\ref{eq:twist-condition-x})
read
\begin{equation}
\begin{array}{crll}
\forall (S, i) & \langle y_i (S, \tmmathbf{T}) \rangle & = & 0\\
\forall S & \quad \left\langle \eta_{\alpha \beta} \,
T_{\alpha} \, y_{\beta} (S, \tmmathbf{T}) \right\rangle & = & 0.
\end{array} \label{eq:yibar-kinematic-conditions}
\end{equation}
By Einstein's conventions, the first equation with a non-repeated Latin index
holds for $i = 1, 2, 3$, while the second equation with repeated Greek indices
contains an implicit sum over $\alpha, \beta \in \{ 1, 2 \}$. In the equation
above, $\eta_{\alpha \beta}$ is the skew-symmetric symbol, such that
$\eta_{1 1} = \eta_{2 2} = 0$, $\eta_{1 2} = 1$ and
$\eta_{2 1} = - 1$.
Equation~(\ref{eq:x-centerline-based-crspondence}) shows that $\tmmathbf{r}
(S)$, $\tmmathbf{d}_i (S)$ and $y_i (S, \tmmathbf{T})$ can be used to
parameterize the deformed configuration. Indeed, it can be checked easily that
there is a one-to-one correspondence between the unknown $\tmmathbf{x} (S,
\tmmathbf{T})$ on the one hand, and the unknowns $\tmmathbf{r} (S)$,
$\tmmathbf{d}_i (S)$ and $y_i (S, \tmmathbf{T})$ on the other hand, provided
$\tmmathbf{d}_i (S)$ satisfies the orthonormality
condition~(\ref{eq:di-orthonormal-frame}) and $y_i (S, \tmmathbf{T})$
satisfies the four scalar kinematic
constraints~(\ref{eq:yibar-kinematic-conditions}). We will use $\tmmathbf{r}
(S)$, $\tmmathbf{d}_i (S)$ and $y_i (S, \tmmathbf{T})$ as the main unknowns:
we refer to this as the {\tmem{center-line based parameterization}}. It is
natural to work with this parameterization in the context of dimension
reduction as it brings in the macroscopic variables of the one-dimensional rod
model, $\tmmathbf{r} (S)$ and $\tmmathbf{d}_i (S)$.
Note that the apparent center-line $\tmmathbf{r} (S)$ is not a material
line---in the case of a hollow cylinder for instance, the curve $\tmmathbf{r}
(S)$ does not even lie within the material domain. Similarly, the directors
$\tmmathbf{d}_i (S)$ do not provide a detailed description of the microscopic
displacement on their own: the only information conveyed by the directors
frame $\tmmathbf{d}_i (S)$ is the {\tmem{average}} rotation of the
cross-section about the center-line, see~(\ref{eq:twist-condition-x}). Neither
the fact that the material frame $\tmmathbf{d}_i (S)$ is orthonormal, nor the
fact that $\tmmathbf{d}_3 (S)$ is parallel to the center-line, see
equation~(\ref{eq:d3-from-rPrime}), implies any assumption or restriction on
the microscopic displacement field: as noted above, the center-line based
representation can represent {\tmem{any}} microscopic transformation
$\tmmathbf{x} (S, \tmmathbf{T})$.
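The one-to-one correspondence between $\tmmathbf{x}$ and the center-line based unknowns can be checked numerically; the sketch below is an illustrative round trip of equation~(\ref{eq:x-centerline-based-crspondence}) with an arbitrary orthonormal frame and arbitrary displacement values of our own choosing.

```python
import numpy as np

# Round-trip check of x = r + y_i d_i: since the directors are orthonormal,
# the displacements are recovered as y_i = (x - r) . d_i. All numerical
# values below are arbitrary stand-ins.

rng = np.random.default_rng(2)

D, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthonormal frame
d = [D[:, i] for i in range(3)]                   # directors d_1, d_2, d_3

r = np.array([0.3, -0.1, 2.0])    # center-line point r(S)
y = rng.standard_normal(3)        # microscopic displacement components y_i

x = r + sum(yi * di for yi, di in zip(y, d))      # reconstruct x(S, T)
y_back = np.array([(x - r) @ di for di in d])     # recover y_i
print(np.allclose(y_back, y))  # True
```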
\subsection{Apparent stretching, twisting and bending
strain}\label{ssec:apparent-strain}
Together, the center-line $\tmmathbf{r} (S)$ and the directors $\tmmathbf{d}_i
(S)$ define what is known as a framed curve. The standard kinematic analysis
of framed curves goes as follows. First, we define the axial strain
$\varepsilon (S)$ by the relation
\begin{equation}
\tmmathbf{r}' (S) = (1 + \varepsilon (S)) \, \tmmathbf{d}_3 (S),
\label{eq:rPrime-epsilon-d3}
\end{equation}
which implies the condition of adaptation~(\ref{eq:d3-from-rPrime}).
Second, we define the bending strain $\kappa_1 (S)$ and $\kappa_2 (S)$ and the
twisting strain $\kappa_3 (S)$ by the relation
\begin{equation}
\tmmathbf{d}_i' (S) = - \eta_{i j k} \, \kappa_j
(S) \, \tmmathbf{d}_k (S), \label{eq:kappa-i}
\end{equation}
where $\eta_{i j k}$ is the antisymmetric (Levi-Civita)
symbol of order 3. This equation defines the quantities $\kappa_j (S)$
uniquely since the frame of directors $\tmmathbf{d}_i (S)$ is orthonormal for
all $S$. The quantities $\kappa_i (S)$ defined in this way are the components
of the rotation gradient $\tmmathbf{\kappa} (S) = \kappa_i (S) \,
\tmmathbf{d}_i (S)$ as we have $\tmmathbf{d}_i' (S) = - \eta_{i j
k} \, \kappa_j (S) \, \tmmathbf{d}_k (S) = \kappa_j
(S) \, \eta_{j i k} \, \tmmathbf{d}_k (S)
= \kappa_j (S) \, \tmmathbf{d}_j (S) \times \tmmathbf{d}_i (S)
=\tmmathbf{\kappa} (S) \times \tmmathbf{d}_i (S)$ for any $S$ and any integer
$i$.
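Since $\tmmathbf{d}_i' = \tmmathbf{\kappa} \times \tmmathbf{d}_i$ and the frame is orthonormal, the rotation gradient can be recovered as $\tmmathbf{\kappa} = \frac{1}{2} \sum_i \tmmathbf{d}_i \times \tmmathbf{d}_i'$, a standard identity. The sketch below checks this on a uniformly twisted frame; the twist rate $\tau$ is an illustrative value of our own.

```python
import numpy as np

# Recover kappa from an orthonormal frame via kappa = (1/2) sum_i d_i x d_i'.
# The uniformly twisted frame is a test case chosen for this sketch.

tau = 0.3  # imposed twist rate (illustrative value)

def frame(S):
    """Directors of a straight configuration twisted at uniform rate tau."""
    c, s = np.cos(tau * S), np.sin(tau * S)
    return (np.array([c, s, 0.0]),
            np.array([-s, c, 0.0]),
            np.array([0.0, 0.0, 1.0]))

def frame_prime(S):
    """Analytic derivatives d_i'(S) of the frame above."""
    c, s = np.cos(tau * S), np.sin(tau * S)
    return (tau * np.array([-s, c, 0.0]),
            tau * np.array([-c, -s, 0.0]),
            np.zeros(3))

S = 0.7
d, dp = frame(S), frame_prime(S)
kappa = 0.5 * sum(np.cross(di, dpi) for di, dpi in zip(d, dp))
kappa_i = np.array([kappa @ di for di in d])  # components in the director basis
print(kappa_i)  # bending strains vanish; the twisting strain kappa_3 equals tau
```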
The strain measures are collected in a {\tmem{macroscopic strain}} vector
\[ \tmmathbf{h}= (\varepsilon, \kappa_1, \kappa_2, \kappa_3) . \label{eq:h} \]
They will be referred to as {\tmem{apparent}} strain measures as they depend
on the center-line and on the directors, which are immaterial in the following
sense. Consider for instance a thin cylindrical tube made up of a soft matrix
and inextensible fibers initially oriented parallel to the axis of the
cylinder: upon twisting, the cylinder will shorten due to the inextensibility
of the fibers, making the apparent axial strain negative, $\varepsilon < 0$,
even though the longitudinal strain along any of the material (helical) fibers
is actually zero.
\subsection{Microscopic strain}\label{ssec:full-microscopic-strain}
With a view to formulating an elasticity problem for the prismatic body, we
derive the microscopic strain based on the center-line
representation~(\ref{eq:x-centerline-based-crspondence}). The deformation
gradient $\tmmathbf{F}$ such that $\mathrm{d} \tmmathbf{x}=\tmmathbf{F} \cdot
(\mathrm{d} \tmmathbf{T}, \mathrm{d} S)$ is first introduced as
\begin{equation}
\tmmathbf{F}= \partial_{\alpha} \tmmathbf{x} \otimes \tmmathbf{e}_{\alpha}
+\tmmathbf{x}' \otimes \tmmathbf{e}_3 = \partial_{\alpha} y_i (S,
\tmmathbf{T}) \, \tmmathbf{d}_i (S) \otimes \tmmathbf{e}_{\alpha} +
t_i (S, \tmmathbf{T}) \, \tmmathbf{d}_i (S) \otimes \tmmathbf{e}_3,
\label{eq:transformation-gradient}
\end{equation}
where $t_i =\tmmathbf{x}' \cdot \tmmathbf{d}_i$ are the components of the
deformed material tangent that was initially oriented parallel to the axis,
\[ t_i = (1 + \varepsilon (S)) \, \delta_{i 3} + \eta_{i
j k} \, \kappa_j (S) \, y_k (S,
\tmmathbf{T}) + y_i' (S, \tmmathbf{T}) . \]
Next, we consider the microscopic Green--St-Venant deformation tensor
$\tmmathbf{E}= \frac{1}{2} \, (\tmmathbf{F}^T \cdot
\tmmathbf{F}-\tmmathbf{I})$ where $\tmmathbf{I}$ is the $3 \times 3$ identity
matrix,
\begin{equation}
\tmmathbf{E}= \frac{t_i^2 - 1}{2} \, \tmmathbf{e}_3 \otimes
\tmmathbf{e}_3 + t_i \, \partial_{\alpha} y_i \,
\tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_3 + \frac{\partial_{\alpha} y_i
\, \partial_{\beta} y_i - \delta_{\alpha \beta}}{2}
\, \tmmathbf{e}_{\alpha} \otimes \tmmathbf{e}_{\beta} .
\label{eq:E-tmp}
\end{equation}
We denote as $\tmmathbf{Y}= (Y_1, Y_2, Y_3)$ and $\tmmathbf{Y}^{\dag} =
(Y^{\dag}_1, Y^{\dag}_2, Y^{\dag}_3)$ the collections of functions $Y_i$ and
$Y_i^{\dag}$ obtained by {\tmem{restricting}} the microscopic displacement and
its longitudinal gradient {\tmem{to a cross-section}}, {\tmem{i.e.}},
\[ \begin{array}{ll}
Y_i = y_i |_S & Y_i^{\dag} = y_i' |_S .
\end{array} \]
By convention, the dagger in $\tmmathbf{Y}^{\dag}$ means that this
cross-sectional function evaluates to a longitudinal gradient of the
displacement, {\tmem{i.e.}}, $Y_i^{\dag} (\tmmathbf{T}) = y_i' (S, \tmmathbf{T})$; daggers
are roughly equivalent to primes but strictly speaking the quantity
$\tmmathbf{Y}$ cannot bear a prime $' = \frac{\mathrm{d}}{\mathrm{d} S}$ as it is a
function of $\tmmathbf{T}$ only and not of $S$.
With this notation, the strain $\tmmathbf{E}$ from equation~(\ref{eq:E-tmp})
can be written as $\tmmathbf{E}=\tmmathbf{E} (\tmmathbf{T}; \tmmathbf{h} (S) ;
\tmmathbf{y} |_S, \tmmathbf{y}' |_S)$ where
\begin{equation}
\begin{array}{r}
\tmmathbf{E} (\tmmathbf{T}; \tmmathbf{h}; \tmmathbf{Y},
\tmmathbf{Y}^{\dag}) = \frac{t_i^2 - 1}{2} \, \tmmathbf{e}_3
\otimes \tmmathbf{e}_3 + t_i \, \partial_{\alpha} Y_i
(\tmmathbf{T}) \, \tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_3 +
\frac{\partial_{\alpha} Y_i (\tmmathbf{T}) \, \partial_{\beta}
Y_i (\tmmathbf{T}) - \delta_{\alpha \beta}}{2} \,
\tmmathbf{e}_{\alpha} \otimes \tmmathbf{e}_{\beta}\\[.4em]
\text{where } t_i = (1 + \varepsilon) \, \delta_{i 3} +
\eta_{i j k} \, \kappa_j \, Y_k
(\tmmathbf{T}) + Y_i^{\dag} (\tmmathbf{T}) .
\end{array} \label{eq:E-function}
\end{equation}
The dependence of $\tmmathbf{E}$ on $\tmmathbf{h}= (\varepsilon, \kappa_1,
\kappa_2, \kappa_3)$ arises through the auxiliary quantity $t_i$.
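To keep the index bookkeeping straight, equation~(\ref{eq:E-function}) can be transcribed pointwise. The sketch below is an illustrative implementation, writing $\eta_{ijk} \kappa_j Y_k$ as the cross product $\tmmathbf{\kappa} \times \tmmathbf{Y}$; the transverse gradients $\partial_{\alpha} Y_i$ are supplied by the caller, and the test values are our own.

```python
import numpy as np

def green_strain(h, Y, Ydag, dY):
    """Pointwise strain E(T; h; Y, Y^dag).
    h = (eps, kappa1, kappa2, kappa3); Y, Ydag: 3-vectors at the point T;
    dY[a, i] = partial_a Y_i are the transverse gradients (reconstructed
    `internally' in the paper's notation, passed explicitly here)."""
    eps, kappa = h[0], np.asarray(h[1:])
    # t_i = (1 + eps) delta_{i3} + eta_{ijk} kappa_j Y_k + Ydag_i
    t = (1.0 + eps) * np.array([0.0, 0.0, 1.0]) + np.cross(kappa, Y) + Ydag
    E = np.zeros((3, 3))
    E[2, 2] = (t @ t - 1.0) / 2.0                 # axial part (sum over i)
    for a in range(2):
        E[a, 2] = E[2, a] = (dY[a] @ t) / 2.0     # symmetrized shear part
        for b in range(2):
            E[a, b] = (dY[a] @ dY[b] - (a == b)) / 2.0   # in-plane part
    return E

# sanity checks: a rigid cross-section (Y_alpha = T_alpha) at rest gives E = 0,
# and a pure axial stretch eps gives E_33 = eps + eps^2 / 2
Y = np.array([0.5, -0.2, 0.0]); Ydag = np.zeros(3)
dY = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
E_rest = green_strain(np.zeros(4), Y, Ydag, dY)
E_stretch = green_strain(np.array([0.1, 0.0, 0.0, 0.0]), Y, Ydag, dY)
```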
A couple of comments on the notation $\tmmathbf{E} (\tmmathbf{T}; \tmmathbf{h}
(S) ; \tmmathbf{Y}= \tmmathbf{y} |_S, \tmmathbf{Y}^{\dag} =
\tmmathbf{y}' |_S)$ used in equation~(\ref{eq:E-function}) are in
order. The notation implies that the strain at any point $\tmmathbf{T}$ of the
cross-section can be calculated in terms of the local macroscopic strain
$\tmmathbf{h} (S)$, and of the {\tmem{restrictions}} of the displacement
$ \tmmathbf{y} |_S$ and of its longitudinal gradient $
\tmmathbf{y}' |_S$ to the cross-section of interest. In particular, the
notation captures the fact that the strain does not depend on the higher-order
longitudinal gradients of displacement, such as $\tmmathbf{y}'' |_S$. Besides,
the gradients of the displacement along the cross-section directions, namely
$\partial_1 y_i = \frac{\partial y_i}{\partial T_1}$ and $\partial_2 y_i =
\frac{\partial y_i}{\partial T_2}$, are not listed as an argument to
$\tmmathbf{E} (\tmmathbf{T}; \tmmathbf{h}; \tmmathbf{Y}, \tmmathbf{Y}^{\dag})$
as they are reconstructed `internally' from the cross-sectional restriction
$\tmmathbf{Y}$ as $\partial_{\alpha} y_i (S, \tmmathbf{T}) = \partial_{\alpha}
Y_i (\tmmathbf{T})$. As a result, the dependence of the strain on longitudinal
gradients of the displacement is explicit in this notation, but that on
transverse gradients is not.
\subsection{Energy formulation}\label{sec:nonlinear-energy-formulation}
In the classical elasticity theory, the strain energy $\Phi$ is obtained by
integration of a strain energy density $w$,
\begin{equation}
\Phi [\tmmathbf{h}, \tmmathbf{y}] = \int_0^{\ell} \iint_{\Omega} w
(\tmmathbf{T}, \tmmathbf{E}) \, \mathrm{d} A \, \mathrm{d} S,
\label{eq:canonicalForm}
\end{equation}
where the microscopic strain $\tmmathbf{E}$ appearing as an argument to $w$ is
given by equation~(\ref{eq:E-function}) as
\begin{equation}
\tmmathbf{E}=\tmmathbf{E} (\tmmathbf{T}; \tmmathbf{h} (S) ;
\tmmathbf{y} |_S, \tmmathbf{y}' |_S) .
\label{eq:E-in-canonical-form}
\end{equation}
The bracket notation in $\Phi [\tmmathbf{h}, \tmmathbf{y}]$ indicates that
$\Phi$ is a functional of its arguments.
The form of the elastic potential in equation~(\ref{eq:canonicalForm}), which
serves as a starting point for our dimension reduction method, is completely
general. In particular, the following situations can be handled (not all of
which can be illustrated in this paper, for lack of space). The elastic
properties of the body can be inhomogeneous across the section, as indicated
by the explicit dependence of the density of strain energy $w (\tmmathbf{T},
\tmmathbf{E})$ on the transverse coordinate $\tmmathbf{T}$ in
equation~(\ref{eq:canonicalForm}). Arbitrary hyper-elastic constitutive laws
can be specified through the choice of the energy density $w$; in particular,
no assumption is made on the symmetries of the material. Arbitrary pre-stress
distributions can be taken into account by an appropriate choice of $w$, the
pre-stress being the quantity $\frac{\partial w}{\partial \tmmathbf{E}}
(\tmmathbf{T}, \tmmathbf{0})$. It is also possible to treat the case where the
elastic or geometric properties of the body vary slowly in the longitudinal
direction, as discussed in the conclusion.
We assume that the prismatic solid is subjected to conservative forces,
represented by a density of external potential $V (\tmmathbf{r},
\tmmathbf{d}_i)$. At equilibrium, the total potential energy
\begin{equation}
\Psi [\tmmathbf{r}, \tmmathbf{d}_i, \tmmathbf{y}] = \Phi [\tmmathbf{h},
\tmmathbf{y}] + \int_0^{\ell} V (\tmmathbf{r} (S), \tmmathbf{d}_i (S))
\, \mathrm{d} S, \label{eq:full-problem-total-potential-energy}
\end{equation}
is stationary with respect to the unknowns $\tmmathbf{r}$, $\tmmathbf{d}_i$
and $\tmmathbf{y}$. The macroscopic strain $\tmmathbf{h} (S) = (\varepsilon
(S), \kappa_1 (S), \ldots, \kappa_3 (S))$ is a dependent variable which can be
obtained in terms of the main unknowns $\tmmathbf{r}$ and $\tmmathbf{d}_i$
using equations~(\ref{eq:rPrime-epsilon-d3}) and~(\ref{eq:kappa-i}).
The stationarity of the total potential
energy~(\ref{eq:full-problem-total-potential-energy}) is subject to the
condition~(\ref{eq:di-orthonormal-frame}) that the directors are orthonormal,
to the constraint of adaptation $\tmmathbf{r}' - (1 + \varepsilon) \,
\tmmathbf{d}_3 =\tmmathbf{0}$ in equation~(\ref{eq:rPrime-epsilon-d3}), and to
the kinematic constraints~(\ref{eq:yibar-kinematic-conditions}) on the
displacement. We rewrite the latter as
\begin{equation}
\forall S \quad \tmmathbf{q} ( \tmmathbf{y} |_S) =\tmmathbf{0},
\label{eq:constraint-q}
\end{equation}
where $\tmmathbf{q} (\tmmathbf{Y})$ lists the constraints applicable to the
cross-sectional restriction of the displacement $\tmmathbf{Y}=
\tmmathbf{y} |_S$,
\begin{equation}
\tmmathbf{q} (\tmmathbf{Y}) = \left( \langle Y_1 (\tmmathbf{T}) \rangle,
\langle Y_2 (\tmmathbf{T}) \rangle, \langle Y_3 (\tmmathbf{T}) \rangle,
\left\langle \eta_{\alpha \beta} \, T_{\alpha} \,
Y_{\beta} (\tmmathbf{T}) \right\rangle \right) . \label{eq:q-vector}
\end{equation}
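The constraint vector $\tmmathbf{q} (\tmmathbf{Y})$ is straightforward to evaluate by quadrature; in the illustrative sketch below (square cross-section and grid resolution are choices of our own), an infinitesimal rigid rotation of the section violates the fourth constraint only, showing that this mode is carried by the directors rather than by the microscopic displacement.

```python
import numpy as np

# Sketch of the constraint vector q(Y) evaluated on a uniform grid over a
# square cross-section; shape and resolution are assumptions of this sketch.

n, width = 64, 1.0
t = (np.arange(n) + 0.5) / n * width - width / 2   # midpoint coordinates
T1, T2 = np.meshgrid(t, t, indexing="ij")

def avg(f):
    """Cross-sectional average <f>: a plain mean on this uniform grid."""
    return float(np.mean(f))

def q(Y1, Y2, Y3):
    """q(Y) = (<Y1>, <Y2>, <Y3>, <eta_ab T_a Y_b>), with eta_12 = -eta_21 = 1."""
    return np.array([avg(Y1), avg(Y2), avg(Y3), avg(T1 * Y2 - T2 * Y1)])

# An infinitesimal rigid rotation of the section about its center leaves the
# first three entries zero but makes q_4 = theta <T1^2 + T2^2> != 0.
theta = 0.01
qv = q(-theta * T2, theta * T1, np.zeros_like(T1))
print(qv)
```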
The first three constraints prevent the center-line from drifting away from
the real material cross-sections---we use a redundant formulation where only
the sum $\tmmathbf{r}+ y_i \, \tmmathbf{d}_i$ is physically
meaningful, see equation~(\ref{eq:x-centerline-based-crspondence}).
In equation~(\ref{eq:full-problem-total-potential-energy}), the potential $V
(\tmmathbf{r}, \tmmathbf{d}_i)$ of the external load (per unit length $\mathrm{d}
S$) depends on the macroscopic variables but not on the microscopic
displacement. This is an assumption in our model. It can typically be
justified by the scaling hypotheses that are introduced in the classical work
on dimension reduction---typically, if the load varies on a length-scale much
larger than the cross-section diameter, its potential can be derived by
assuming that cross-sections are rigid, which yields an expression of the
form~(\ref{eq:full-problem-total-potential-energy}). If, however, the external
load varies quickly or induces large strain, it might become necessary to
couple the potential $V$ with the microscopic displacement $\tmmathbf{y}$.
This requires an extension of our work, which entails appending the
microscopic variables coupled to the external load as additional entries
inside the vector $\tmmathbf{h}$. This is however beyond the scope of the
present paper, where attention is limited to an external loading of the
form~(\ref{eq:full-problem-total-potential-energy}).
\subsection{Summary}
We have completed the energy formulation of the elasticity problem. In the
center-line based parameterization, the unknowns are the center-line
$\tmmathbf{r} (S)$, the directors $\tmmathbf{d}_i (S)$ and the microscopic
displacement $y_i (S, \tmmathbf{T})$. The center-line $\tmmathbf{r} (S)$ and
the directors $\tmmathbf{d}_i (S)$ define a framed curve which is associated
with a macroscopic strain $\tmmathbf{h} (S)$, where $\tmmathbf{h}=
(\varepsilon, \kappa_1, \kappa_2, \kappa_3)$, $\varepsilon$ is the axial
strain, $\kappa_1$ and $\kappa_2$ are the curvature strains and $\kappa_3$ is
the twisting strain, see section~\ref{ssec:apparent-strain}. The microscopic
strain is then given as $\tmmathbf{E}=\tmmathbf{E} (\tmmathbf{T}; \tmmathbf{h}
(S) ; \tmmathbf{y} |_S, \tmmathbf{y}' |_S)$ in
equation~(\ref{eq:E-function}). The total potential energy $\Psi
[\tmmathbf{r}, \tmmathbf{d}_i, \tmmathbf{y}]$ governing the elasticity problem
is given in equation~(\ref{eq:full-problem-total-potential-energy}), and in
particular the elastic strain energy $\Phi [\tmmathbf{h}, \tmmathbf{y}] =
\iiint w (\tmmathbf{T}, \tmmathbf{E}) \, \mathrm{d} A \,
\mathrm{d} S$ is given in equation~(\ref{eq:canonicalForm}). The equilibrium
equations can be derived variationally, taking into account the kinematic
constraints~(\ref{eq:constraint-q}) for the microscopic displacement, as well
as the orthonormality and the adaptation conditions in
equations~(\ref{eq:di-orthonormal-frame}) and~(\ref{eq:rPrime-epsilon-d3}).
\section{Ideal one-dimensional model}\label{sec:ideal-model}
In this section we explore a formal method for reducing the equilibrium of the
prismatic solid, which is a problem in three-dimensional elasticity, to a
one-dimensional problem. The reduction is based on the relaxation of the
microscopic displacement $\tmmathbf{y}$. The relaxation problem will be
introduced in a formal way in this section; it will not be solved explicitly
until we introduce additional assumptions in the forthcoming sections.
\subsection{Condensing out the microscopic displacement by a thought
experiment}
What we refer to as a \emph{relaxation of the microscopic displacement
$\tmmathbf{y}$} is a minimization of the strain energy functional $\Phi
[\tmmathbf{h}, \tmmathbf{y}]$ for a prescribed distribution of macroscopic
strain $\tmmathbf{h} (S)$,
\begin{equation}
\Phi^{\star} [\tmmathbf{h}] = \min_{\tmmathbf{y} \text{ s.t. } (\forall S)
\tmmathbf{q} ( \tmmathbf{y} |_S) =\tmmathbf{0}} \Phi
[\tmmathbf{h}, \tmmathbf{y}] . \label{eq:relax-y}
\end{equation}
Note that the relaxation over $\tmmathbf{y}$ is subject to the kinematic
conditions $(\forall S)\; \tmmathbf{q} ( \tmmathbf{y} |_S)
=\tmmathbf{0}$ ensuring that the microscopic displacement is consistent with
the center-line deformation, prescribed through the macroscopic strain
$\tmmathbf{h}$.
We assume that the optimization problem for $\tmmathbf{y}$ is such that the
minimum is attained and denote as $\tmmathbf{y}=\tmmathbf{y}^{\star}
[\tmmathbf{h}]$ the optimum:
\begin{equation}
\Phi^{\star} [\tmmathbf{h}] = \Phi [\tmmathbf{h}, \tmmathbf{y}^{\star}
[\tmmathbf{h}]], \label{eq:phi-star-by-relaxation}
\end{equation}
where all quantities obtained by relaxing the microscopic displacement
$\tmmathbf{y}$ are marked with an asterisk.
We also assume that $\tmmathbf{y}^{\star} [\tmmathbf{h}]$ is the only
stationary point of $\Phi [\tmmathbf{h}, \tmmathbf{y}]$, so that
$\tmmathbf{y}^{\star} [\tmmathbf{h}]$ is characterized by the variational
problem
\begin{equation}
\left( \forall \hat{\tmmathbf{y}} \text{ such that $\forall S\, \tmmathbf{q}
( \hat{\tmmathbf{y}} |_S) =\tmmathbf{0}$} \right) \quad
\frac{\partial \Phi}{\partial \tmmathbf{y}} [\tmmathbf{h},
\tmmathbf{y}^{\star} [\tmmathbf{h}]] \cdot \hat{\tmmathbf{y}} = 0.
\label{eq:variational-eq-for-y-star}
\end{equation}
All these assumptions are typically satisfied under appropriate convexity and
compactness hypotheses.
In equation~(\ref{eq:variational-eq-for-y-star}), the notation $\frac{\partial
f}{\partial \tmmathbf{y}} [\tmmathbf{h}, \tmmathbf{y}] \cdot
\hat{\tmmathbf{y}}$ refers to the Fr{\'e}chet derivative of the functional $f
[\cdot]$ at point $(\tmmathbf{h}, \tmmathbf{y})$ in the direction
$\hat{\tmmathbf{y}}$. The problem for $\tmmathbf{y}^{\star} [\tmmathbf{h}]$
in~(\ref{eq:variational-eq-for-y-star}) is a non-linear elasticity problem
with pre-stress in three dimensions, and is typically impossible to solve in
closed form.
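In a discretized toy setting, however, the constrained relaxation can be carried out explicitly. The sketch below is a stand-in of our own (quadratic energy, random matrices, scalar macroscopic strain, none of which come from the paper) showing how the constrained minimizer $\tmmathbf{y}^{\star} [\tmmathbf{h}]$ and the relaxed energy $\Phi^{\star} [\tmmathbf{h}]$ arise from a KKT system.

```python
import numpy as np

# Toy, discretized stand-in for the relaxation: minimize a quadratic energy
# Phi(h, y) = 1/2 y.K y - h f.y over the "microscopic" vector y, subject to
# linear constraints Q y = 0. K, f, Q and the scalar h are illustrative.

rng = np.random.default_rng(1)
n, nc = 8, 2
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)        # symmetric positive-definite "stiffness"
f = rng.standard_normal(n)         # coupling between h and y
Q = rng.standard_normal((nc, n))   # discrete analogue of the constraints q = 0
h = 0.3                            # prescribed macroscopic strain (scalar toy)

# stationarity of the constrained problem yields the KKT system:
#   K y* + Q^T lam = h f,   Q y* = 0
kkt = np.block([[K, Q.T], [Q, np.zeros((nc, nc))]])
rhs = np.concatenate([h * f, np.zeros(nc)])
sol = np.linalg.solve(kkt, rhs)
y_star, lam = sol[:n], sol[n:]

# relaxed energy Phi*(h) = Phi(h, y*(h)): a function of h alone
phi_star = 0.5 * y_star @ K @ y_star - h * f @ y_star
print(np.linalg.norm(Q @ y_star))  # constraint residual (small)
```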
A key remark is as follows. If we were able to solve for the optimal
microscopic displacement $\tmmathbf{y}=\tmmathbf{y}^{\star} [\tmmathbf{h}]$,
we could define a one-dimensional strain energy potential $\Phi^{\star}$
simply by inserting $\tmmathbf{y}^{\star} [\tmmathbf{h}]$ into the
three-dimensional strain energy, $\Phi^{\star} [\tmmathbf{h}] = \Phi
[\tmmathbf{h}, \tmmathbf{y}^{\star} [\tmmathbf{h}]]$, see
equation~(\ref{eq:phi-star-by-relaxation}). Based on this strain energy
functional, one could then build a one-dimensional rod model governed by the
total potential energy functional
\begin{equation}
\Psi^{\star} [\tmmathbf{r}, \tmmathbf{d}_i] = \Phi^{\star} [\tmmathbf{h}] +
\int_0^{\ell} V (\tmmathbf{r} (S), \tmmathbf{d}_i (S)) \, \mathrm{d} S.
\label{eq:ideal-1d-total-potential-energy}
\end{equation}
In this one-dimensional model, $\tmmathbf{r} (S)$ and $\tmmathbf{d}_i (S)$ are
the unknowns, subjected to the same kinematic conditions as earlier in
section~\ref{sec:full-model}, and the macroscopic strain $\tmmathbf{h}$ is a
dependent variable that can be calculated as earlier
({\textsection}\ref{ssec:apparent-strain}).
We refer to this model as the {\tmem{ideal one-dimensional model}}. It is
{\tmem{one-dimensional}} in the sense that it exposes the macroscopic
variables only, the microscopic displacement
$\tmmathbf{y}=\tmmathbf{y}^{\star} [\tmmathbf{h}]$ being reconstructed `under
the hood'. It is {\tmem{ideal}} in the sense that it is rigorously equivalent
to the three-dimensional elasticity problem from section~\ref{sec:full-model},
as established in \ref{app-sec:original-ideal-same-equilibrium}. In this sense, dimension reduction is fundamentally a relaxation problem.
\subsection{Equilibrium and constitutive laws}\label{ssec:ideal-equilibrium}
We derive the equilibrium equations and the constitutive laws of the ideal
one-dimensional model variationally, starting from the total energy potential
$\Psi^{\star} [\tmmathbf{r}, \tmmathbf{d}_i]$ in
equation~(\ref{eq:ideal-1d-total-potential-energy}).
The densities of external force $\tmmathbf{p} (S)$ and external moment
$\tmmathbf{m} (S)$ are first identified from the variation $\hat{V}$ of the
external potential as follows,
\begin{equation}
\int_0^{\ell} \hat{V} \, \mathrm{d} S = - \int_0^{\ell} (\tmmathbf{p}
(S) \cdot \hat{\tmmathbf{r}} (S) +\tmmathbf{m} (S) \cdot
\hat{\tmmathbf{\theta}} (S)) \, \mathrm{d} S, \label{eq:V-peturb}
\end{equation}
where $\hat{\tmmathbf{r}}$ is the perturbation to the center-line and
$\hat{\tmmathbf{\theta}}$ the infinitesimal rotation of the directors
$\tmmathbf{d}_i$, such that $\hat{\tmmathbf{d}}_i (S) =
\hat{\tmmathbf{\theta}} (S) \times \tmmathbf{d}_i (S)$. As usual in the
principle of virtual work, we limit attention to perturbations
$\hat{\tmmathbf{r}}$ and $\hat{\tmmathbf{\theta}}$ such that the incremental
form of the kinematic constraint~(\ref{eq:rPrime-epsilon-d3}) is satisfied.
As shown in \ref{app-sec:equilibrium-ideal-model}, the
condition that the energy $\Psi^{\star} [\tmmathbf{r}, \tmmathbf{d}_i]$ is
stationary yields the extensible variant of the classical Kirchhoff equations
for the equilibrium of thin rods,
\begin{equation}
\begin{gathered}
N (S) =\tmmathbf{R} (S) \cdot \tmmathbf{d}_3 (S)\\[.2em]
\tmmathbf{R}' (S) +\tmmathbf{p} (S) =\tmmathbf{0}\\[.2em]
\tmmathbf{M}' (S) +\tmmathbf{r}' (S) \times \tmmathbf{R} (S) +\tmmathbf{m}
(S) =\tmmathbf{0}
\end{gathered} \label{eq:rod-strong-equilibrium}
\end{equation}
together with constitutive laws for the one-dimensional stress variables $N
(S)$ and $M_i (S)$. These constitutive laws can be identified from the first
variation of the strain energy (\ref{eq:phi-star-by-relaxation}) with respect
to the macroscopic strain as follows,
\begin{equation}
N (S) \, \hat{\varepsilon} + M_i (S) \, \hat{\kappa}_i
\equiv \iint_{\Omega} \Sigma_{i j} (\tmmathbf{T}, \tmmathbf{E}
(\tmmathbf{T}; \tmmathbf{h} (S) ; \tmmathbf{y}^{\star}
[\tmmathbf{h}] |_S, \tmmathbf{y}^{\star} [\tmmathbf{h}]' |_S))
\, \frac{\partial E_{i j}}{\partial \tmmathbf{h}}
(\tmmathbf{T}; \tmmathbf{h} (S) ; \tmmathbf{y}^{\star}
[\tmmathbf{h}] |_S, \tmmathbf{y}^{\star} [\tmmathbf{h}]' |_S)
\cdot \hat{\tmmathbf{h}} \, \mathrm{d} A.
\label{eq:internal-stress-full-model}
\end{equation}
Here $\hat{\tmmathbf{h}} = (\hat{\varepsilon}, \hat{\kappa}_1, \hat{\kappa}_2,
\hat{\kappa}_3)$ is a perturbation to the macroscopic strain and
$\tmmathbf{\Sigma}$ is the microscopic second Piola--Kirchhoff stress tensor,
\begin{equation}
\tmmathbf{\Sigma} (\tmmathbf{T}, \tmmathbf{E}) = \frac{\partial w}{\partial
\tmmathbf{E}} (\tmmathbf{T}, \tmmathbf{E}) . \label{eq:microscopic-stress}
\end{equation}
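For a concrete choice of density $w$, the stress follows by differentiation. The sketch below uses a St-Venant--Kirchhoff density as an example (the moduli are illustrative and not part of the paper) and verifies $\tmmathbf{\Sigma} = \partial w / \partial \tmmathbf{E}$ against a directional finite difference.

```python
import numpy as np

lam, mu = 1.0, 0.5  # illustrative Lame-type moduli (assumptions of this sketch)

def w(E):
    """Example St-Venant--Kirchhoff density: w = lam/2 (tr E)^2 + mu tr(E^2)."""
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)

def stress(E):
    """Sigma = dw/dE for the density above: lam (tr E) I + 2 mu E."""
    return lam * np.trace(E) * np.eye(3) + 2.0 * mu * E

# directional finite-difference check: (w(E + tH) - w(E - tH)) / 2t ~ Sigma : H
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); E = 0.1 * (A + A.T) / 2   # symmetric strain
B = rng.standard_normal((3, 3)); H = 0.05 * (B + B.T) / 2  # symmetric direction
t = 1e-6
fd = (w(E + t * H) - w(E - t * H)) / (2 * t)
exact = np.sum(stress(E) * H)
print(fd, exact)  # the two values agree
```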
In
equations~(\ref{eq:rod-strong-equilibrium}--\ref{eq:internal-stress-full-model}),
$\tmmathbf{R} (S)$ is the internal force, its component $N (S)$ along
$\tmmathbf{d}_3 (S)$ is called the normal force, $\tmmathbf{M} (S)$ is the
internal moment and $M_i (S) =\tmmathbf{M} (S) \cdot \tmmathbf{d}_i (S)$ are
its components in the directors basis. A microscopic interpretation of the
internal stress $N (S)$ and $M_i (S)$ based on
equation~(\ref{eq:internal-stress-full-model}) is given in
equation~(\ref{eq:app-NM-interpretation}) from~\ref{app:microscopic-interpretation-1d-internal-stress}. The last two
lines in equation~(\ref{eq:rod-strong-equilibrium}) are the Kirchhoff
equations for the equilibrium of rods; they are a balance of forces and
moments on an infinitesimal segment, respectively. The equilibrium
equations~(\ref{eq:rod-strong-equilibrium}) must be complemented by boundary
conditions which can be derived variationally and vary from one problem to
another.
As discussed in \ref{app-sec:original-ideal-same-equilibrium},
equations~(\ref{eq:rod-strong-equilibrium}--\ref{eq:internal-stress-full-model})
governing the equilibrium of the ideal one-dimensional model are
mathematically equivalent to those governing the original three-dimensional
model from section~\ref{sec:full-model}. The one-dimensional model involves no
approximation. It achieves the ultimate in dimension reduction: it hides the
microscopic variables while preserving the solutions of the original
three-dimensional problem. Incidentally, it also makes the connection with the
classical Kirchhoff equations~(\ref{eq:rod-strong-equilibrium}) for elastic
rods. Unfortunately, the constitutive
laws~(\ref{eq:internal-stress-full-model}) are in effect useless as they
depend on the optimal microscopic displacement $\tmmathbf{y}^{\star}
[\tmmathbf{h}]$, which is not available in closed form: the one-dimensional
potential $\Phi^{\star} [\tmmathbf{h}]$ is a mathematical object that hides a
daunting problem in non-linear three-dimensional elasticity.
In the following section, we construct approximations to the ideal
one-dimensional model that are mathematically tractable.
\section{Asymptotically exact one-dimensional
models}\label{sec:asymptotic-1d-reduction}
\subsection{Strategy}
Even though it cannot be used directly, the ideal one-dimensional model from
section~\ref{sec:ideal-model} offers a natural starting point for building
one-dimensional approximations to the original three-dimensional problem. A
key ingredient towards this goal is our previous
work~\citep{LESTRINGANT2020103730}, which established a method for calculating
the relaxed displacement $\tmmathbf{y}^{\star} [\tmmathbf{h}]$ in powers of
the successive gradients of $\tmmathbf{h} (S)$. In this section, we apply this
asymptotic method and obtain approximations to the ideal strain energy
$\Phi^{\star} [\tmmathbf{h}]$. This leads us to {\tmem{concrete}} rod models,
which are accurate approximations of the original three-dimensional problem
when the gradients of the macroscopic strain $\tmmathbf{h} (S)$ are small.
The reduction method from~\citet{LESTRINGANT2020103730} assumes that the
macroscopic strain varies on a length-scale $\sim \rho / \zeta$ much larger
than the typical dimension of the cross-section $\sim \rho$, where $\zeta \ll
1$ is a small scalar parameter that is used as an expansion parameter,
\[ \begin{array}{ll}
\tmmathbf{h} (S) =\mathcal{O} (1) & \frac{\mathrm{d}^i \tmmathbf{h}}{\mathrm{d}
S^i} (S) =\mathcal{O} (\zeta^i) .
\end{array} \]
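This scaling can be illustrated numerically. The sketch below (our own
illustration, not part of the derivation) uses a toy strain profile $h (S) =
\sin (\zeta S)$ varying on the slow scale $1 / \zeta$, and checks with finite
differences that the first and second derivatives are of order $\zeta$ and
$\zeta^2$ respectively; the helper `derivative` is ours.

```python
import math

def derivative(f, S, order, dS=1e-2):
    """Central finite differences for the first two derivatives of f at S."""
    if order == 1:
        return (f(S + dS) - f(S - dS)) / (2 * dS)
    return (f(S + dS) - 2 * f(S) + f(S - dS)) / dS ** 2

zeta = 1e-2
h = lambda S: math.sin(zeta * S)   # strain varying on the slow scale 1/zeta

# h = O(1), while h' = O(zeta) and h'' = O(zeta**2):
hp_max = abs(derivative(h, 0.0, 1))                      # maximum of |h'|
hpp_max = abs(derivative(h, math.pi / (2 * zeta), 2))    # maximum of |h''|
```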
We emphasize that $\tmmathbf{h} (S)$ is allowed to vary by a finite amount
across the length $\ell$ of the structure as long as $\tmmathbf{h}' (S)
=\mathcal{O} (\zeta)$ remains small everywhere: unlike most (if not all) of
the alternate methods from the literature, ours does not require the strain
$\tmmathbf{h} (S)$ to remain {\tmem{uniformly}} close to a specific value
$\tmmathbf{h}_0$, for all values of $S$---this property is particularly useful
for the analysis of localization, as discussed
by~\citet{Audoly-Hutchinson-Analysis-of-necking-based-2016}
and~\citet{lestringant2020one}. Besides, the expansion has been shown to
give extremely accurate results even beyond its strict conditions of
mathematical validity, when the gradient $\tmmathbf{h}' (S)$ is not small.
The reduction method uses as input the expressions of the strain $\tmmathbf{E}
(\tmmathbf{T}; \tmmathbf{h} (S) ; \tmmathbf{Y}, \tmmathbf{Y}^{\dag})$ and the
constraint $\tmmathbf{q} (\tmmathbf{Y})$ relevant to our particular problem
from equations~(\ref{eq:E-function}) and~(\ref{eq:q-vector}), and furnishes an
approximation to the one-dimensional strain energy functional
\begin{equation}
\Phi^{\star} [\tmmathbf{h}] \approx \Phi_{(2)}^{\star} [\tmmathbf{h}] + \ell
\, \mathcal{O} (\zeta^3) \label{eq:reduced-model-energy-sketch}
\end{equation}
of the form
\begin{equation}
\Phi_{(2)}^{\star} [\tmmathbf{h}] = \int_0^{\ell} \left[ W_{\text{hom}}
(\tilde{\tmmathbf{h}} (S)) +\tmmathbf{A} (\tmmathbf{h} (S)) \cdot
\tmmathbf{h}' (S) + \frac{1}{2} \, \tmmathbf{h}' (S) \cdot
\tmmathbf{D} (\tmmathbf{h} (S)) \cdot \tmmathbf{h}' (S) \right]
\, \mathrm{d} S, \label{eq:phi-gr}
\end{equation}
where $\tilde{h}_i = h_i (S) + \xi_i (\tmmathbf{h} (S)) \, h_i'' (S)$
(no implicit sum on $i$) is a modified strain measure, see
equation~(\ref{eq:hi-tilde}) below.
The reduction method of~\citet{LESTRINGANT2020103730} is summarized
in \ref{app:compendium}. Explicit expressions for the potential
$W_{\text{hom}} (\tmmathbf{h})$, the coefficients $\xi_i (\tmmathbf{h})$
entering in the alternate strain measure $\tilde{\tmmathbf{h}}$, and for the
elastic moduli $\tmmathbf{A} (\tmmathbf{h})$ and $\tmmathbf{D} (\tmmathbf{h})$
are available. Both geometric and material nonlinearities are accounted for,
as reflected by the fact that the quantities $\xi_i$, $\tmmathbf{A}$ and
$\tmmathbf{D}$ all depend on $\tmmathbf{h}$, typically in a non-linear way.
A lower-order approximation $\Phi^{\star} [\tmmathbf{h}] \approx
\Phi_{(0)}^{\star} [\tmmathbf{h}] + \ell \, \mathcal{O} (\zeta)$ can
also be obtained by discarding the gradient terms in $\Phi_{(2)}^{\star}
[\tmmathbf{h}]$, which are of order $\zeta$ or higher,
\begin{equation}
\Phi_{(0)}^{\star} [\tmmathbf{h}] = \int_0^{\ell} W_{\text{hom}}
(\tmmathbf{h} (S)) \, \mathrm{d} S. \label{eq:phi-no-gradient}
\end{equation}
Unlike $\Phi_{(2)}^{\star} [\tmmathbf{h}]$, the strain potential
$\Phi_{(0)}^{\star} [\tmmathbf{h}]$ does not capture the gradient effect: the
strain energy $W_{\text{hom}} (\tmmathbf{h})$ in $\Phi_{(0)}^{\star}
[\tmmathbf{h}]$ is a function of the local strain $\tmmathbf{h} (S)$ only.
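Once the catalog $W_{\text{hom}}$ is available, evaluating $\Phi_{(0)}^{\star}
[\tmmathbf{h}]$ amounts to a one-dimensional quadrature. The following sketch
(all function names are ours; the trapezoidal discretization is a convenience
choice, not the paper's) checks the result on a uniformly twisted bar with an
assumed quadratic torsional energy.

```python
def phi0(W_hom, h_of_S, length, n=2000):
    """Lowest-order strain energy Phi_(0)[h] = integral of W_hom(h(S)) dS,
    evaluated with the trapezoidal rule."""
    dS = length / n
    total = 0.0
    for k in range(n + 1):
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * W_hom(h_of_S(k * dS))
    return total * dS

# Toy catalog: twisting of a bar with torsional stiffness mu_J,
# W_hom(h) = mu_J * kappa3**2 / 2, where h = (eps, kappa1, kappa2, kappa3)
mu_J, tau, ell = 2.0, 0.3, 5.0
W_hom = lambda h: 0.5 * mu_J * h[3] ** 2
h_uniform = lambda S: (0.0, 0.0, 0.0, tau)
energy = phi0(W_hom, h_uniform, ell)
# for uniform twist the exact value is mu_J * tau**2 / 2 * ell
```

For a uniform strain the trapezoidal rule is exact, so `energy` matches
$\mu J \tau^2 \ell / 2$ to machine precision.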
Because the term $\tmmathbf{A} (\tmmathbf{h} (S)) \cdot \tmmathbf{h}' (S)$ is
incompatible with the most common material symmetries, see
section~\ref{sssec:beam-symmetries}, the modulus $\tmmathbf{A} (\tmmathbf{h}
(S))$ is often zero. In this case, $\Phi_{(0)}^{\star} [\tmmathbf{h}]$ is a
better approximation than announced above, {\tmem{i.e.}}, $\Phi^{\star}
[\tmmathbf{h}] \approx \Phi_{(0)}^{\star} [\tmmathbf{h}] + \ell \,
\mathcal{O} (\zeta^2)$; by a similar argument, the other estimate can be
improved as well in the presence of additional symmetries, {\tmem{i.e.}},
$\Phi^{\star} [\tmmathbf{h}] \approx \Phi_{(2)}^{\star} [\tmmathbf{h}] + \ell
\, \mathcal{O} (\zeta^4)$.
In the remainder of this section, we apply the reduction method from our
previous work to the prismatic solid. The potential $W_{\text{hom}}
(\tmmathbf{h})$ entering in the lower-order approximation $\Phi_{(0)}^{\star}
[\tmmathbf{h}]$ is derived in
section~\ref{ssec:non-regularized-model-outline}. The higher-order
approximation $\Phi_{(2)}^{\star} [\tmmathbf{h}]$ is derived in the subsequent
section~\ref{sec:gradient-effect}.
\subsection{Analysis of homogeneous
solutions}\label{ssec:non-regularized-model-outline}
As recalled in \ref{app:compendium-homogeneous}, the
elastic potential $W_{\text{hom}} (\tmmathbf{h})$ has to be constructed from a
catalog of homogeneous solutions. By homogeneous solutions, we refer to the
case where neither $\tmmathbf{h} (S)$ nor the microscopic displacement
$\tmmathbf{y} (S, \tmmathbf{T})$ depends on $S$. Homogeneous solutions are
analyzed in this section; accordingly, the macroscopic strain $\tmmathbf{h}=
(\varepsilon, \kappa_1, \kappa_2, \kappa_3)$ is treated as a constant. In
doing so, we temporarily limit attention to center-line configurations that
are a helix, an arc of a circle, or a straight line.
The optimal microscopic displacement $\tmmathbf{y}^{\star} [\tmmathbf{h}]$
being now independent of $S$, we denote by $\tmmathbf{Y}^{\tmmathbf{h}} =
(\tmmathbf{y}^{\star} [\tmmathbf{h}]) |_S$ its restriction to any
particular cross-section $S$. Then,
\[ \tmmathbf{y}^{\star} [\tmmathbf{h}] (S, \tmmathbf{T})
=\tmmathbf{Y}^{\tmmathbf{h}} (\tmmathbf{T}) \qquad \text{(homogeneous case,
$\tmmathbf{h}$ is constant)} . \]
Here, $\tmmathbf{Y}^{\tmmathbf{h}} = (Y_i^{\tmmathbf{h}})_{1 \leqslant i
\leqslant 3}$ denotes a triple of functions defined over the cross-section,
each function $Y_i^{\tmmathbf{h}} (\tmmathbf{T})$ being a component of the
local displacement in the basis of directors, see
equation~(\ref{eq:x-centerline-based-crspondence}). The superscript
$\tmmathbf{h}$ in the notation $\tmmathbf{Y}^{\tmmathbf{h}}$ serves both as an
abbreviation for `homogeneous', and as a container for the macroscopic strain
values, on which $\tmmathbf{Y}^{\tmmathbf{h}} =\tmmathbf{Y}^{(\varepsilon,
\kappa_1, \kappa_2, \kappa_3)}$ depends.
From equation~(\ref{eq:E-function}), the strain $\tilde{\tmmathbf{E}}
(\tmmathbf{T}, \tmmathbf{h}, \tmmathbf{Y}) =\tmmathbf{E} (\tmmathbf{T};
\tmmathbf{h}; \tmmathbf{Y}, \tmmathbf{0})$ relevant to homogeneous solutions
reads
\begin{equation}
\begin{array}{r}
\tilde{\tmmathbf{E}} (\tmmathbf{T}, \tmmathbf{h}, \tmmathbf{Y}) =
\frac{\tilde{t}_i^2 - 1}{2} \, \tmmathbf{e}_3 \otimes
\tmmathbf{e}_3 + \tilde{t}_i \, \partial_{\alpha} Y_i
(\tmmathbf{T}) \, \tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_3 +
\frac{\partial_{\alpha} Y_i (\tmmathbf{T}) \, \partial_{\beta}
Y_i (\tmmathbf{T}) - \delta_{\alpha \beta}}{2} \,
\tmmathbf{e}_{\alpha} \otimes \tmmathbf{e}_{\beta}\\[.3em]
\text{where $\tilde{t}_i = (1 + \varepsilon) \, \delta_{i
3} + \eta_{i j k} \, \kappa_j
\, Y_k (\tmmathbf{T})$} .
\end{array} \label{eq:homogeneous-strain}
\end{equation}
This specific expression of the strain is derived from the generic one in
equation~(\ref{eq:E-function}) with the gradient term $\tmmathbf{Y}^{\dag}$
set to zero.
For any value of the macroscopic strain $\tmmathbf{h}$, the relaxed
displacement $\tmmathbf{Y}^{\tmmathbf{h}}$ of the homogeneous solution must be
found by minimizing the strain energy potential $\Phi$ with respect to
$\tmmathbf{Y}$ among all the microscopic displacements satisfying the
kinematic conditions $\tmmathbf{q} (\tmmathbf{Y}) =\tmmathbf{0}$, see
equations~(\ref{eq:relax-y}) or~(\ref{eq:variational-eq-for-y-star}). This
leads to the following variational problem, as derived in
equation~(\ref{eq:Yh-variational-abstract}) from the appendix,
\begin{equation}
\begin{gathered}
\iint_{\Omega} Y_i^{\tmmathbf{h}} (\tmmathbf{T}) \, \mathrm{d} A
=\tmmathbf{0}\\
\iint_{\Omega} \left[ T_1 \, Y_2^{\tmmathbf{h}} (\tmmathbf{T})
- T_2 \, Y_1^{\tmmathbf{h}} (\tmmathbf{T}) \right]
\, \mathrm{d} A = 0\\
\forall \hat{\tmmathbf{Y}} \quad \iint_{\Omega} \left[
\tmmathbf{\Sigma} (\tmmathbf{T}, \tmmathbf{E}^{\tmmathbf{h}}
(\tmmathbf{T})) \mathbin:
\widehat{\tilde{\tmmathbf{E}}}^{\tmmathbf{h}} (\tmmathbf{T}) +
F_i^{\tmmathbf{h}} \, \hat{Y}_i (\tmmathbf{T}) + Q^{\tmmathbf{h}}
\, \left( T_1 \, \hat{Y}_2 (\tmmathbf{T}) - T_2
\, \hat{Y}_1 (\tmmathbf{T}) \right) \right]
\, \mathrm{d} A = 0,
\end{gathered}
\label{eq:app-red-variational-pb-homogeneous}
\end{equation}
where $\widehat{\tilde{\tmmathbf{E}}}^{\tmmathbf{h}} (\tmmathbf{T}) =
\frac{\mathrm{d} \tilde{\tmmathbf{E}}}{\mathrm{d} \tmmathbf{Y}} (\tmmathbf{T},
\tmmathbf{h}, \tmmathbf{Y}^{\tmmathbf{h}}) \cdot \hat{\tmmathbf{Y}}$ is the
virtual change of strain, and the four scalars $(F_1^{\tmmathbf{h}},
F_2^{\tmmathbf{h}}, F_3^{\tmmathbf{h}}, Q^{\tmmathbf{h}})$ are Lagrange
multipliers enforcing the constraints $\tmmathbf{q}
(\tmmathbf{Y}^{\tmmathbf{h}}) =\tmmathbf{0}$ that have been spelled out in the first two
lines of equation~(\ref{eq:app-red-variational-pb-homogeneous}). This
variational problem is a two-dimensional, non-linear problem of elasticity in
the cross-section $\Omega$ with pre-strain that depends on $\tmmathbf{h}$. For
the simple examples given at the end of this paper, the solution
$\tmmathbf{Y}^{\tmmathbf{h}}$ will be obtained analytically, but a numerical
solution might be required for more complex geometries.
Solving this variational problem repeatedly for all possible values of
$\tmmathbf{h}$, one obtains a catalog of homogeneous solutions
$\tmmathbf{Y}^{\tmmathbf{h}}$ indexed by the macroscopic
strain~$\tmmathbf{h}$. The elastic potential $W_{\text{hom}} (\tmmathbf{h})$
is then defined as the strain energy per unit length of the homogeneous
solution $\tmmathbf{Y}^{\tmmathbf{h}}$,
\begin{equation}
W_{\text{hom}} (\tmmathbf{h}) = \iint_{\Omega} w (\tmmathbf{T},
\tmmathbf{E}^{\tmmathbf{h}} (\tmmathbf{T})) \, \mathrm{d} A \text{,
\quad$\tmop{where} \tmmathbf{E}^{\tmmathbf{h}} (\tmmathbf{T}) =
\tilde{\tmmathbf{E}} (\tmmathbf{T}, \tmmathbf{h},
\tmmathbf{Y}^{\tmmathbf{h}})$} . \label{eq:Wh-def}
\end{equation}
The lower-order one-dimensional strain energy potential $\Phi_{(0)}^{\star}
[\tmmathbf{h}]$ can then be readily constructed from
equation~(\ref{eq:phi-no-gradient}): most engineering models for slender
structures make use of the energy potential $\Phi_{(0)}^{\star}
[\tmmathbf{h}]$ which we have just obtained, see
equation~(\ref{eq:twisting-Phi0}) for instance.
In terms of the catalog of homogeneous solutions
$\tmmathbf{Y}^{\tmmathbf{h}}$, we introduce the following auxiliary quantities
relevant to the homogeneous solution,
\begin{equation}
\begin{split}
F^{\tmmathbf{h}}_{i 3} (\tmmathbf{T}) & = (1 + \varepsilon)
\, \delta_{i 3} + \eta_{i j k}
\, \kappa_j \, Y_k^{\tmmathbf{h}} (\tmmathbf{T})\\
F^{\tmmathbf{h}}_{i \alpha} (\tmmathbf{T}) & =
\partial_{\alpha} Y_i^{\tmmathbf{h}} (\tmmathbf{T})\\
\tmmathbf{E}^{\tmmathbf{h}} (\tmmathbf{T}) & = \tmmathbf{E}
(\tmmathbf{T}; \tmmathbf{h}; \tmmathbf{Y}^{\tmmathbf{h}}, \tmmathbf{0})\\
\tmmathbf{\Sigma}^{\tmmathbf{h}} (\tmmathbf{T}) & = \tmmathbf{\Sigma}
(\tmmathbf{T}, \tmmathbf{E}^{\tmmathbf{h}} (\tmmathbf{T}))\\
\tmmathbf{K}^{\tmmathbf{h}} (\tmmathbf{T}) & = \frac{\partial^2
w}{\partial \tmmathbf{E}^2} (\tmmathbf{T}, \tmmathbf{E}^{\tmmathbf{h}}
(\tmmathbf{T}))
\end{split} \label{eq:gr-effect-homogeneous-qties}
\end{equation}
where $F^{\tmmathbf{h}}_{i j} (\tmmathbf{T})$ are the components of
the deformation gradient $\tmmathbf{F}^{\tmmathbf{h}} (S, \tmmathbf{T}) = F_{i
j}^{\tmmathbf{h}} (\tmmathbf{T}) \, \tmmathbf{d}_i (S)
\otimes \tmmathbf{e}_j$, $\tmmathbf{E}^{\tmmathbf{h}}$ is the microscopic
strain, $\tmmathbf{\Sigma}^{\tmmathbf{h}}$ the microscopic stress, and
$\tmmathbf{K}^{\tmmathbf{h}}$ is the matrix of tangent elastic moduli.
\subsection{Analysis of the gradient effect}\label{sec:gradient-effect}
This section aims at deriving the higher-order approximation
$\Phi_{(2)}^{\star} [\tmmathbf{h}]$ to the strain energy, that captures the
gradient effect. We do so by following the general method from
section~\ref{sec-app:general-method-gradient} in \ref{app:compendium}.
Given a distribution of macroscopic strain $\tmmathbf{h} (S)$, the idea of the
method is to seek the solution to the relaxation problem~(\ref{eq:relax-y}) in
the form
\begin{equation}
\tmmathbf{y}^{\star} [\tmmathbf{h}] (S, \tmmathbf{T})
=\tmmathbf{Y}^{\tmmathbf{h} (S)} (\tmmathbf{T})
+\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h} (S)} (\tmmathbf{h}' (S),
\tmmathbf{T}) +\mathcal{O} (\zeta^2), \label{eq:y-star-expansion}
\end{equation}
where $\tmmathbf{Y}^{\tmmathbf{h} (S)}$ is the displacement predicted by the
catalog of homogeneous solutions based on the local value $\tmmathbf{h} (S)$
of the macroscopic strain, $\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h} (S)}
(\tmmathbf{h}' (S), \tmmathbf{T})$ is a correction proportional to the local
strain gradient $\tmmathbf{h}' (S)$, to be determined, and $\mathcal{O}
(\zeta^2)$ denotes higher-order terms which do not enter in the determination
of $\Phi_{(2)}^{\star} [\tmmathbf{h}]$. We proceed to show how the correction
$\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h} (S)} (\tmmathbf{h}' (S),
\tmmathbf{T})$ can be obtained, which is a first step towards constructing the
functional $\Phi_{(2)}^{\star} [\tmmathbf{h}]$.
\subsubsection{Optimal displacement}
As shown in previous work and summarized in \ref{app:compendium}, the
optimal correction $\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h} (S)}
(\tmmathbf{h}' (S), \tmmathbf{T})$ can be found by solving a variational
problem on the cross-section that effectively enforces the optimality
condition~(\ref{eq:relax-y}). This variational problem makes use of an
operator $\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z})$ that
takes the strain $\tmmathbf{h}$, its gradient $\tmmathbf{h}^{\dag}$ and a
generic displacement field $\tmmathbf{Z}$ defined on the cross-section as
arguments. We follow the step-by-step recipe from the appendix to build the
operator $\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z})$,
based on the knowledge of the homogeneous solutions
$\tmmathbf{Y}^{\tmmathbf{h}}$. Auxiliary operators need to be introduced in
this process.
The first step is to identify the {\tmem{structure operators}}
$\tmmathbf{e}_{i j}^k$, which are the gradients with respect to
$\tmmathbf{h}$, $\tmmathbf{h}^{\dag}$, $\tmmathbf{Y}$ and
$\tmmathbf{Y}^{\dag}$ of the microscopic strain function $\tmmathbf{E}
(\tmmathbf{T}; \tmmathbf{h}; \tmmathbf{Y}, \tmmathbf{Y}^{\dag})$ in
equation~(\ref{eq:E-function}) about a homogeneous solution. These structure
operators are defined in \ref{app-sec:structure-operators} and are
calculated in \ref{app:structure-operators}. They are purely
geometric quantities.
Next, the linear increment of strain $\mathcal{E}^{\tmmathbf{h}}
(\tmmathbf{T}, \tmmathbf{h}^{\dag}, \tmmathbf{Z})$ associated with a small
strain gradient $\tmmathbf{h}^{\dag} = (\varepsilon^{\dag}, \kappa_1^{\dag},
\kappa_2^{\dag}, \kappa_3^{\dag})$ and with the corrective displacement
$\tmmathbf{Z}$ is obtained from equation~(\ref{eq:perturbed-strain-Ecal}) as
\begin{equation}
\mathcal{E}^{\tmmathbf{h}} (\tmmathbf{T}, \tmmathbf{h}^{\dag}, \tmmathbf{Z})
= (\tmmathbf{h}^{\dag} \cdot \nabla Y^{\tmmathbf{h}}_i (\tmmathbf{T}))
\, F^{\tmmathbf{h}}_{i j} (\tmmathbf{T}) \,
\tmmathbf{e}_j \odot \tmmathbf{e}_3 + \eta_{i j k}
\, \kappa_k \, F^{\tmmathbf{h}}_{j l}
(\tmmathbf{T}) \, Z_i (\tmmathbf{T}) \, \tmmathbf{e}_l
\odot \tmmathbf{e}_3 + F^{\tmmathbf{h}}_{i j} (\tmmathbf{T})
\, \partial_{\alpha} Z_i (\tmmathbf{T}) \tmmathbf{e}_j \odot
\tmmathbf{e}_{\alpha} . \label{eq:Lh}
\end{equation}
The last argument $\tmmathbf{Z}= (Z_1, Z_2, Z_3)$ is a triple of functions
defined on the cross-section, representing the corrective displacement;
$\mathcal{E}^{\tmmathbf{h}}$ is viewed as an operator acting on the
cross-sectional functions $(Z_1, Z_2, Z_3)$ that are not yet known.
The $\nabla$ notation is systematically used to denote a gradient with respect
to the macroscopic strain $\tmmathbf{h}$, with the convention that the
increment of $\tmmathbf{h}$ is applied by a left multiplication: in
equation~(\ref{eq:Lh}), the first term on the right-hand side must be
interpreted as
\begin{equation}
\begin{split}
\tmmathbf{h}^{\dag} \cdot \nabla Y^{\tmmathbf{h}}_i (\tmmathbf{T}) & =
\frac{\partial Y^{\tmmathbf{h}}_i (\tmmathbf{T})}{\partial \tmmathbf{h}}
\cdot \tmmathbf{h}^{\dag}\\
& = \frac{\partial Y^{(\varepsilon, \kappa_1, \kappa_2, \kappa_3)}_i
(\tmmathbf{T})}{\partial \varepsilon} \, \varepsilon^{\dag} +
\frac{\partial Y^{(\varepsilon, \kappa_1, \kappa_2, \kappa_3)}_i
(\tmmathbf{T})}{\partial \kappa_1} \, \kappa_1^{\dag} + \frac{\partial
Y^{(\varepsilon, \kappa_1, \kappa_2, \kappa_3)}_i
(\tmmathbf{T})}{\partial \kappa_2} \, \kappa_2^{\dag} + \frac{\partial
Y^{(\varepsilon, \kappa_1, \kappa_2, \kappa_3)}_i
(\tmmathbf{T})}{\partial \kappa_3} \, \kappa_3^{\dag} .
\end{split} \label{eq:nabla-notation}
\end{equation}
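When the catalog $Y^{\tmmathbf{h}}$ is only available pointwise, for instance
from a numerical cross-sectional solve, the directional derivative
$\tmmathbf{h}^{\dag} \cdot \nabla Y^{\tmmathbf{h}}_i$ can be approximated by
central differences in strain space. The sketch below is purely illustrative:
the helper and the toy catalog are hypothetical, not the paper's objects.

```python
def directional_derivative(Y, h, h_dag, step=1e-6):
    """Central-difference estimate of h_dag . nabla Y(h): the derivative of a
    catalog quantity Y with respect to the macroscopic strain
    h = (eps, kappa1, kappa2, kappa3), taken in the direction h_dag."""
    h_plus = tuple(hi + step * di for hi, di in zip(h, h_dag))
    h_minus = tuple(hi - step * di for hi, di in zip(h, h_dag))
    return (Y(h_plus) - Y(h_minus)) / (2 * step)

# Toy catalog Y(h) = eps**2 + kappa1*kappa3, whose gradient is known exactly
Y = lambda h: h[0] ** 2 + h[1] * h[3]
h = (0.5, 1.0, 0.0, 2.0)        # (eps, kappa1, kappa2, kappa3)
h_dag = (1.0, 0.0, 0.0, 1.0)    # strain-gradient direction
# exact value: (2*eps)*1 + kappa3*0 + kappa1*1 = 2.0
```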
Following the general method, we introduce yet another operator
$\tmmathbf{C}_{\tmmathbf{h}}^{(1)}$. For a given value of the macroscopic
strain $\tmmathbf{h}$ and for a triple $\tmmathbf{Z}^{\dag}$ of scalar
functions $Z_i^{\dag}$ defined over the cross-sections (representing the
components of the longitudinal gradient of the corrective displacement, hence
the dagger notation), $\tmmathbf{C}_{\tmmathbf{h}}^{(1)}$ is defined from
equation~(\ref{eq:ACB-operators}) as
\begin{equation}
\tmmathbf{C}_{\tmmathbf{h}}^{(1)} \cdot \tmmathbf{Z}^{\dag} =
\iint_{\Omega} F^{\tmmathbf{h}}_{i j} (\tmmathbf{T}) \,
\Sigma_{j 3}^{\tmmathbf{h}} (\tmmathbf{T}) \, Z_i^{\dag}
(\tmmathbf{T}) \, \mathrm{d} A. \label{eq:A-C1h}
\end{equation}
A related operator $\nabla \tmmathbf{C}^{(1)}_{\tmmathbf{h}}$ is defined by a
formal integration by parts with respect to the $\dag$ symbol (formally
representing the longitudinal derivative $\mathrm{d} / \mathrm{d} S$), see
equation~(\ref{eq:minus-grad-C1h-abstract}),
\begin{equation}
-\tmmathbf{h}^{\dag} \cdot \nabla \tmmathbf{C}^{(1)}_{\tmmathbf{h}} \cdot
\tmmathbf{Z}= - \iint_{\Omega} \left( \frac{\mathrm{d} \left(
F^{\tmmathbf{h}}_{i j} (\tmmathbf{T}) \, \Sigma_{j
3}^{\tmmathbf{h}} (\tmmathbf{T}) \right)}{\mathrm{d} \tmmathbf{h}}
\cdot \tmmathbf{h}^{\dag} \right) \, Z_i (\tmmathbf{T}) \,
\mathrm{d} A. \label{eq:minus-grad-C1h}
\end{equation}
In our previous work, we have shown that the perturbation to the strain energy
per unit length caused by a strain gradient $\tmmathbf{h}^{\dag}$ and by the
corrective displacement $\tmmathbf{Z}$ is given by the operator
$\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z})$ defined in
equation~(\ref{eq:ACB-operators}) as
\begin{multline}
\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z}) =
\iint_{\Omega} \frac{1}{2} \, \mathcal{E}^{\tmmathbf{h}}
(\tmmathbf{T}, \tmmathbf{h}^{\dag}, \tmmathbf{Z}) \mathbin:
\tmmathbf{K}^{\tmmathbf{h}} (\tmmathbf{T}) \mathbin:
\mathcal{E}^{\tmmathbf{h}} (\tmmathbf{T}, \tmmathbf{h}^{\dag},
\tmmathbf{Z}) \, \mathrm{d} A \ldots\\
%
\hspace{2em} + \iint_{\Omega} \left( \left(
\frac{1}{2} \, \sum_i (\tmmathbf{h}^{\dag} \cdot \nabla
Y^{\tmmathbf{h}}_i (\tmmathbf{T}))^2 \, \Sigma_{3
3}^{\tmmathbf{h}} (\tmmathbf{T}) \right) +\tmmathbf{h}^{\dag} \cdot \nabla
Y^{\tmmathbf{h}}_i (\tmmathbf{T}) \, \left( \eta_{i j
k} \, \kappa_j \, \Sigma_{3
3}^{\tmmathbf{h}} (\tmmathbf{T}) \, Z_k (\tmmathbf{T}) +
\Sigma_{\beta 3}^{\tmmathbf{h}} (\tmmathbf{T}) \,
\partial_{\beta} Z_i (\tmmathbf{T}) \right) \right. \ldots\\
%
\hspace{6em} + \frac{1}{2} \, \left( \delta_{i j}
\, \kappa_l^2 - \kappa_i \, \kappa_j \right) \,
\Sigma_{3 3}^{\tmmathbf{h}} (\tmmathbf{T}) \, Z_i
(\tmmathbf{T}) \, Z_j (\tmmathbf{T}) + \eta_{i j
k} \, \kappa_j \, \Sigma_{\alpha
3}^{\tmmathbf{h}} (\tmmathbf{T}) \, Z_k (\tmmathbf{T})
\, \partial_{\alpha} Z_i (\tmmathbf{T}) \ldots\\
%
\hspace{8em} \left. + \frac{1}{2} \, \Sigma_{\alpha
\beta}^{\tmmathbf{h}} (\tmmathbf{T}) \, \partial_{\alpha} Z_i
(\tmmathbf{T}) \, \partial_{\beta} Z_i (\tmmathbf{T}) \right)
\, \mathrm{d} A -\tmmathbf{h}^{\dag} \cdot \nabla
\tmmathbf{C}^{(1)}_{\tmmathbf{h}} \cdot \tmmathbf{Z}.
\label{eq:B-cal}
\end{multline}
This operator $\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z})$
is a quadratic form with respect to each one of its arguments
$\tmmathbf{h}^{\dag}$ and $\tmmathbf{Z}$.
The optimal corrective displacement $\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}}
(\tmmathbf{h}^{\dag})$ is characterized by the fact that it is the stationary
point of the quadratic functional $\mathcal{B}^{\tmmathbf{h}}
(\tmmathbf{h}^{\dag}, \tmmathbf{Z})$ over the set of cross-sectional functions
$\tmmathbf{Z}$'s satisfying the kinematic constraint $\tmmathbf{q}
(\tmmathbf{Z}) =\tmmathbf{0}$; this is fully in line with the interpretation
of $\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z})$ as the
increment of strain energy caused by the gradient effect. This leads to the
variational problem stated in equation~(\ref{eq:Z-variational-pb-abstract}):
given $\tmmathbf{h}$ and $\tmmathbf{h}^{\dag}$, find the corrective
cross-sectional displacement $\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}}
(\tmmathbf{h}^{\dag})$ and the Lagrange multipliers
$\tmmathbf{F}_{\text{opt}}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag})$ and
$Q_{\text{opt}}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag})$ such that
\begin{equation}
\begin{gathered}
\iint_{\Omega} Z_{\text{opt}, i}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag},
\tmmathbf{T}) \, \mathrm{d} A =\tmmathbf{0}\\
\iint_{\Omega} \left[ T_1 \, Z_{\text{opt}, 2}^{\tmmathbf{h}}
(\tmmathbf{h}^{\dag}, \tmmathbf{T}) - T_2 \,
Z_{\text{opt}, 1}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{T})
\right] \, \mathrm{d} A = 0\\
\forall \hat{\tmmathbf{Z}} \quad \frac{\partial
\mathcal{B}^{\tmmathbf{h}}}{\partial \tmmathbf{Z}} \left(
\tmmathbf{h}^{\dag}, \tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}}
(\tmmathbf{h}^{\dag}) \right) \cdot \hat{\tmmathbf{Z}} + \iint_{\Omega}
\left[ F_{\text{opt}, i}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}) \,
\hat{Z}_i (\tmmathbf{T}) + Q^{\tmmathbf{h}}_{\text{opt}}
(\tmmathbf{h}^{\dag}) \, \left( T_1 \, \hat{Z}_2
(\tmmathbf{T}) - T_2 \, \hat{Z}_1 (\tmmathbf{T})
\right) \right] \, \mathrm{d} A = 0.
\end{gathered} \label{eq:Z-variational-pb}
\end{equation}
As $\mathcal{B}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}, \tmmathbf{Z})$ is
quadratic, this is a two-dimensional problem of linear elasticity in the
cross-section, with residual strain proportional to $\tmmathbf{h}^{\dag}$: its
solution $\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag},
\tmmathbf{T})$ is linear with respect to the strain gradient
$\tmmathbf{h}^{\dag}$.
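Structurally, the variational problem above is the stationarity of a quadratic
functional under linear constraints; once discretized, it becomes a
saddle-point (KKT) linear system. The generic sketch below (our own tiny
solver and a made-up three-degree-of-freedom example, not the paper's
discretization) shows why the solution depends linearly on the loading, in
direct analogy with the linearity of $\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}}$
in $\tmmathbf{h}^{\dag}$.

```python
def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def stationary_point(B, f, A):
    """Stationary point of z -> z.B.z/2 - f.z subject to A z = 0, found from
    the saddle-point (KKT) system [[B, A^T], [A, 0]] [z; lam] = [f; 0].
    Here f plays the role of the loading proportional to the strain gradient."""
    n, m = len(B), len(A)
    K = [[0.0] * (n + m) for _ in range(n + m)]
    rhs = [0.0] * (n + m)
    for i in range(n):
        for j in range(n):
            K[i][j] = B[i][j]
        for k in range(m):
            K[i][n + k] = A[k][i]   # A^T block
            K[n + k][i] = A[k][i]   # A block
        rhs[i] = f[i]
    return solve(K, rhs)[:n]

# Three degrees of freedom, one mean-zero constraint (analogue of q(Z) = 0)
B = [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 4.0]]
A = [[1.0, 1.0, 1.0]]
z1 = stationary_point(B, [1.0, 0.0, 0.0], A)
z2 = stationary_point(B, [2.0, 0.0, 0.0], A)   # doubled loading
# z1 satisfies the constraint, and z2 = 2*z1: the solution is linear in f
```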
This completes the determination of the corrective displacement
$\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag})$ as a function
of the macroscopic strain $\tmmathbf{h}$ and its longitudinal gradient
$\tmmathbf{h}^{\dag}$.
\subsubsection{Definition of the one-dimensional model}
We have solved for the corrective displacement
$\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h} (S)} (\tmmathbf{h}' (S),
\tmmathbf{T})$. It remains to insert the relaxed displacement
$\tmmathbf{y}^{\star} [\tmmathbf{h}]$ from
equation~(\ref{eq:y-star-expansion}) into the original strain energy
$\Phi^{\star} [\tmmathbf{h}] = \Phi [\tmmathbf{h}, \tmmathbf{y}^{\star}
[\tmmathbf{h}]]$. This yields the reduced strain energy $\Phi_{(2)}^{\star}
[\tmmathbf{h}]$ announced earlier in equation~(\ref{eq:phi-gr}),
\[ \Phi_{(2)}^{\star} [\tmmathbf{h}] = \int_0^{\ell} \left[ W_{\text{hom}}
(\tilde{\tmmathbf{h}} (S)) +\tmmathbf{A} (\tmmathbf{h} (S)) \cdot
\tmmathbf{h}' (S) + \frac{1}{2} \, \tmmathbf{h}' (S) \cdot
\tmmathbf{D} (\tmmathbf{h} (S)) \cdot \tmmathbf{h}' (S) \right]
\, \mathrm{d} S, \]
together with explicit expressions for the elastic moduli $\tmmathbf{A}
(\tmmathbf{h})$, $\tmmathbf{D} (\tmmathbf{h})$ and for the modified strain
$\tilde{\tmmathbf{h}} (S)$, which are obtained as follows---the reader is
referred to \ref{app:compendium} for details.
The auxiliary one-dimensional moduli $\tmmathbf{B} (\tmmathbf{h})$ and
$\tmmathbf{C} (\tmmathbf{h})$ are obtained by inserting the optimal
displacement into the operators $\mathcal{B}^{\tmmathbf{h}}$ and
$\tmmathbf{C}_{\tmmathbf{h}}^{(1)}$ introduced earlier, and by identification
from the following equations,
\begin{equation}
\begin{aligned}
\frac{1}{2} \, \tmmathbf{h}^{\dag} \cdot \tmmathbf{B}
(\tmmathbf{h}) \cdot \tmmathbf{h}^{\dag} & = \mathcal{B}^{\tmmathbf{h}}
\left( \tmmathbf{h}^{\dag}, \tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}}
(\tmmathbf{h}^{\dag}) \right)\\
\tmmathbf{C} (\tmmathbf{h}) \cdot \tmmathbf{h}^{\dag} & =
\tmmathbf{C}_{\tmmathbf{h}}^{(1)} \cdot
\tmmathbf{Z}_{\text{opt}}^{\tmmathbf{h}} (\tmmathbf{h}^{\dag}) .
\end{aligned}
\label{eq:B-C}
\end{equation}
The one-dimensional moduli $\tmmathbf{A} (\tmmathbf{h})$ and $\tmmathbf{D}
(\tmmathbf{h})$ appearing in $\Phi_{(2)}^{\star} [\tmmathbf{h}]$ read, from
section~\ref{sec-app:elimination-of-boundary-terms},
\begin{equation}
\begin{aligned}
\tmmathbf{A} (\tmmathbf{h}) \cdot \tmmathbf{h}^{\dag} & =
\iint_{\Omega} \tmmathbf{\Sigma}^{\tmmathbf{h}} (\tmmathbf{T})
\mathbin: \mathcal{E}^{\tmmathbf{h}} (\tmmathbf{T},
\tmmathbf{h}^{\dag}, \tmmathbf{0}) \, \mathrm{d} A\\
\tmmathbf{D} (\tmmathbf{h}) & = \tmmathbf{B} (\tmmathbf{h}) + 2
\, \frac{\mathrm{d} \tmmathbf{C}}{\mathrm{d} \tmmathbf{h}} (\tmmathbf{h})
\end{aligned}
\label{eq:D-of-h}
\end{equation}
and the modified strain $\tilde{\tmmathbf{h}} (S)$ is defined by
\begin{equation}
\begin{aligned}
\tilde{h}_i (S) & = h_i (S) + \xi_i (\tmmathbf{h} (S)) \, h_i''
(S) \text{\quad (no sum on $i$)}\\
\xi_i (\tmmathbf{h}) & = \frac{C_i (\tmmathbf{h})}{\frac{\partial
W_{\text{hom}}}{\partial h_i} (\tmmathbf{h})},
\end{aligned} \label{eq:hi-tilde}
\end{equation}
where $C_i (\tmmathbf{h})$ is the $i$-th component of $\tmmathbf{C}
(\tmmathbf{h})$, {\tmem{i.e.}}, the coefficient of $h_i^{\dag}$ in
$\tmmathbf{C} (\tmmathbf{h}) \cdot \tmmathbf{h}^{\dag}$, see
equation~(\ref{eq:Ci-in-practice}).
This completes the construction of the functional $\Phi_{(2)}^{\star}
[\tmmathbf{h}]$. The process is long but straightforward. It can be turned
into a fully automated procedure using symbolic calculations, something which
we will explore in future work. In the remainder of this paper, we illustrate
the procedure by carrying out the calculations for two problems that are
tractable analytically; the first problem is linear and the other one is
non-linear.
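Once $W_{\text{hom}}$, the coefficients $\xi_i$ and the modulus $\tmmathbf{D}$
are known, evaluating $\Phi_{(2)}^{\star} [\tmmathbf{h}]$ reduces to a
one-dimensional quadrature with finite-difference estimates of
$\tmmathbf{h}'$ and $\tmmathbf{h}''$. The following sketch handles a single
strain component and assumes $\tmmathbf{A}=\tmmathbf{0}$, as for sections with
enough symmetry; all names and the discretization are ours.

```python
def phi2(W_hom, xi, D, h_of_S, length, n=400):
    """Sketch of the gradient model Phi_(2)[h] for a single strain component.
    Uses the modified strain h_tilde = h + xi(h)*h'' and the gradient term
    D(h)*h'**2/2, with central finite differences for h' and h''; the sum
    runs over interior quadrature points only."""
    dS = length / n
    total = 0.0
    for k in range(1, n):
        S = k * dS
        h = h_of_S(S)
        hp = (h_of_S(S + dS) - h_of_S(S - dS)) / (2 * dS)
        hpp = (h_of_S(S + dS) - 2 * h + h_of_S(S - dS)) / dS ** 2
        h_tilde = h + xi(h) * hpp
        total += (W_hom(h_tilde) + 0.5 * D(h) * hp ** 2) * dS
    return total

# Uniform strain: gradient terms vanish and Phi_(2) reduces to Phi_(0)
res_uniform = phi2(lambda h: 0.7, lambda h: 0.1, lambda h: 1.0,
                   lambda S: 0.3, 5.0)
# Linearly varying strain with W_hom = 0: only the D-term contributes,
# approximately D * slope**2 / 2 * length = 0.025
res_linear = phi2(lambda h: 0.0, lambda h: 0.1, lambda h: 1.0,
                  lambda S: 0.1 * S, 5.0)
```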
\section{Illustration in a linear setting: twisting of a thick
bar}\label{sec:twisting}
\begin{figure}
\centerline{\includegraphics{twisting-geom-with-labels.pdf}}
\caption{Pure twisting of a thick bar: actual
configuration.\label{eq:fig-twisting}}
\end{figure}
A number of authors have addressed higher-order effects in beam models for
prismatic solids in the limited context of linear elasticity
\citep{trabucho1989existence,nolde2018asymptotic,buannic2001higher,buannic2001higher2}.
In this section, we derive a higher-order model for the twisting of a
prismatic bar using our method; we show that its predictions are consistent
with prior work from the literature. This provides a first illustration of the
reduction procedure described in section~\ref{sec:asymptotic-1d-reduction}.
\subsection{Problem setting}
We consider the pure torsion of a linearly elastic bar, including
higher-order effects. To simplify the calculations, we make two simplifying
assumptions.
First, the elastic material is assumed to be linear and isotropic with
homogeneous properties. This corresponds to a microscopic strain energy
density
\begin{equation}
w (\tmmathbf{T}, \tmmathbf{E}) = \frac{1}{2} \, \left( \lambda
\, \tmop{tr}^2 \tmmathbf{E}+ 2 \, \mu \,
\tmmathbf{E} \mathbin: \tmmathbf{E} \right), \label{eq:linear-w}
\end{equation}
where $\lambda$ and $\mu$ are the Lam{\'e} elastic moduli. Second, we assume
that the cross-section $\Omega$ has two perpendicular axes of symmetry, and we
set up the cross-section coordinates $(T_1, T_2)$ along these axes; as a
result, the cross-section is invariant by the two mirror symmetries
$T_{\alpha} \longleftarrow (- T_{\alpha})$. This symmetry assumption decouples
the twisting mode from the bending and stretching
modes~\citep{trabucho1989existence}. It allows us to analyze twisting while
setting $\varepsilon = 0$ (no stretching) and $\kappa_{\alpha} = 0$ (no
bending). We will therefore have to deal with a single non-zero macroscopic
strain $\kappa_3$ (twisting), which we rename $\tau = \kappa_3$. Accordingly,
the macroscopic strain $\tmmathbf{h}$ shrinks to a vector of length 1,
\[ \tmmathbf{h}= (\tau) . \]
In the context of linear elasticity, the main unknown is the (true)
displacement $v_i (S, \tmmathbf{T})$ from the reference to the actual
configuration. It is connected to the microscopic positional unknown $y_i (S,
\tmmathbf{T})$ used so far by
\begin{equation}
\begin{array}{rcccc}
y_{\alpha} (S, \tmmathbf{T}) & = & T_{\alpha} & + & v_{\alpha} (S,
\tmmathbf{T})\\
y_3 (S, \tmmathbf{T}) & = & & & v_3 (S, \tmmathbf{T})
\end{array} \label{eq:y-to-v}
\end{equation}
Inserting the equation above into equation~(\ref{eq:E-function}), one obtains
the strain $\tmmathbf{E}$ as
\begin{equation}
\tmmathbf{E} (\tmmathbf{T}; \tau ; \tmmathbf{V}, \tmmathbf{V}^{\dag}) = -
\tau \, \eta_{\alpha \beta} \, T_{\beta}
\, \tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_3 + \partial_{\alpha}
V_i (\tmmathbf{T}) \, \tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_i +
V_i^{\dag} (\tmmathbf{T}) \, \tmmathbf{e}_i \odot \tmmathbf{e}_3 .
\label{eq:E-twist-linear}
\end{equation}
Here, $\tmmathbf{V}$ and $\tmmathbf{V}^{\dag}$ denote the restrictions of the
displacement and its longitudinal gradient to a particular cross-section,
\begin{equation}
\begin{array}{ll}
V_i = v_i |_S & V_i^{\dag} = v_i^{\dag} |_S
\end{array} \label{eq:V-restrictions}
\end{equation}
In linear elasticity, the cross-sectional {\tmem{displacements}}
$\tmmathbf{V}$ and $\tmmathbf{V}^{\dag}$ are used as the unknowns
parameterizing the microscopic displacement, instead of the cross-sectional
positions $\tmmathbf{Y}$ and $\tmmathbf{Y}^{\dag}$ relevant to the non-linear
setting.
As we are working in the context of linear elasticity, we linearize all
quantities with respect to the twisting strain $\tau$, to the displacement
$\tmmathbf{V}$ and to its longitudinal gradient $\tmmathbf{V}^{\dag}$. Such a
linearization has been carried out silently in the right-hand side of
equation~(\ref{eq:E-twist-linear}), in particular.
In the forthcoming sections, we apply the method from
section~\ref{sec:asymptotic-1d-reduction} and derive a one-dimensional model
describing the twisting of the linearly elastic bar that includes the gradient
effect.
\subsection{Analysis of homogeneous solutions}
We first focus on homogeneous solutions, obtained by discarding the gradient
of the displacement in the local basis, $\tmmathbf{V}^{\dag} =\tmmathbf{0}$,
in equation~(\ref{eq:E-twist-linear}). This yields the strain of homogeneous
solutions as
\[ \tilde{\tmmathbf{E}} (\tmmathbf{T}, \tau, \tmmathbf{V}) = - \tau
\, \eta_{\alpha \beta} \, T_{\beta} \,
\tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_3 + \partial_{\alpha} V_i
(\tmmathbf{T}) \, \tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_i . \]
Homogeneous solutions $\tmmathbf{V}^{(\tau)}$ are the stationary points
$\tmmathbf{V}=\tmmathbf{V}^{(\tau)}$ of the strain energy $\frac{1}{2}
\, \iint_{\Omega} \left( \lambda \, \tmop{tr}^2
\tilde{\tmmathbf{E}} + 2 \, \mu \, \tilde{\tmmathbf{E}}
\mathbin: \tilde{\tmmathbf{E}} \right) \, \mathrm{d} A$ where
$\tilde{\tmmathbf{E}} = \tilde{\tmmathbf{E}} (\tmmathbf{T}, \tau,
\tmmathbf{V})$, subject to the kinematic constraint $\tmmathbf{q}
(\tmmathbf{V}) =\tmmathbf{0}$ in equation~(\ref{eq:q-vector}). With the help
of equation~(\ref{eq:app-red-variational-pb-homogeneous}), the variational
problem satisfied by $\tmmathbf{V}^{(\tau)}$ reads
\begin{equation}
\begin{gathered}
\iint_{\Omega} V_i^{(\tau)} (\tmmathbf{T}) \, \mathrm{d} A
=\tmmathbf{0}\\
    \iint_{\Omega} \left[ T_1 \, V_2^{(\tau)} (\tmmathbf{T}) -
    T_2 \, V_1^{(\tau)} (\tmmathbf{T}) \right]
    \, \mathrm{d} A = 0\\
\forall \hat{\tmmathbf{V}} \quad \iint_{\Omega} \left[
\tmmathbf{\Sigma} (\tmmathbf{T}, \tmmathbf{E}^{(\tau)} (\tmmathbf{T}))
\mathbin: \widehat{\tilde{\tmmathbf{E}}}^{(\tau)} (\tmmathbf{T}) +
F_i^{(\tau)} \, \hat{V}_i (\tmmathbf{T}) + Q^{(\tau)} \,
    \left( T_1 \, \hat{V}_2 (\tmmathbf{T}) - T_2
\, \hat{V}_1 (\tmmathbf{T}) \right) \right] \, \mathrm{d} A
    = 0,
\end{gathered} \label{eq:twist-V-tau-pb}
\end{equation}
where $\tmmathbf{E}^{(\tau)} (\tmmathbf{T}) = \tilde{\tmmathbf{E}}
(\tmmathbf{T}, \tau, \tmmathbf{V}^{(\tau)})$ is the microscopic strain,
$\tmmathbf{\Sigma} (\tmmathbf{T}, \tmmathbf{E}^{(\tau)} (\tmmathbf{T})) =
\frac{\mathrm{d} w}{\mathrm{d} \tmmathbf{E}} (\tmmathbf{T}, \tmmathbf{E}^{(\tau)}
(\tmmathbf{T})) = \lambda \, \tmop{tr} \tmmathbf{E}^{(\tau)}
(\tmmathbf{T}) \, \tmmathbf{I}+ 2 \, \mu \,
\tmmathbf{E}^{(\tau)} (\tmmathbf{T}) = 2 \, \mu \, \left( -
\tau \, \eta_{\alpha \beta} \, T_{\beta} +
\partial_{\alpha} V_3^{(\tau)} (\tmmathbf{T}) \right) \, \tmmathbf{e}_{\alpha}
\odot \tmmathbf{e}_3 + \left( \lambda \, \partial_{\gamma} V_{\gamma}^{(\tau)}
(\tmmathbf{T}) \, \delta_{\alpha \beta} + 2 \, \mu
\, \partial_{\alpha} V_{\beta}^{(\tau)} (\tmmathbf{T}) \right) \,
\tmmathbf{e}_{\alpha} \odot \tmmathbf{e}_{\beta} + \lambda \,
\partial_{\gamma} V_{\gamma}^{(\tau)} (\tmmathbf{T}) \, \tmmathbf{e}_3 \otimes
\tmmathbf{e}_3$ is the microscopic stress, $\tmmathbf{I}$ is the $3 \times 3$
identity matrix, $\widehat{\tilde{\tmmathbf{E}}}^{(\tau)} (\tmmathbf{T}) =
\partial_{\alpha} \hat{V}_i (\tmmathbf{T}) \, \tmmathbf{e}_{\alpha}
\odot \tmmathbf{e}_i$ is the virtual increment of strain, and
$\tmmathbf{F}^{(\tau)}$ and $Q^{(\tau)}$ are Lagrange multipliers enforcing the
constraints written in the first two lines of
equation~(\ref{eq:twist-V-tau-pb}).
By inserting these expressions into~(\ref{eq:twist-V-tau-pb}), one obtains two
decoupled problems, namely one for the cross-sectional displacement
$V_{\alpha} (\tmmathbf{T})$ which has no source term, and one for the
longitudinal displacement $V_3 (\tmmathbf{T})$ having a source term
proportional to the kinematic strain $\tau$. The solution reads
\begin{equation}
V_{\alpha}^{(\tau)} (\tmmathbf{T}) = 0 \qquad V_3^{(\tau)} (\tmmathbf{T}) =
\tau \, \omega (\tmmathbf{T}) \label{eq:twisting-homogeneous-sol}
\end{equation}
where $\omega (\tmmathbf{T})$ is the classical warping
function~\citep{trabucho1989existence}, defined as the solution of the
variational problem
\begin{equation}
\iint_{\Omega} \omega \, \mathrm{d} A = 0 \text{\quad and\quad}
\forall \hat{\omega} \quad \iint_{\Omega} \partial_{\alpha} \omega
\, \partial_{\alpha} \hat{\omega} \, \mathrm{d} A = -
\iint_{\Omega} \eta_{\alpha \beta} \, T_{\alpha}
\, \partial_{\beta} \hat{\omega} \, \mathrm{d} A.
\label{eq:warping-function-variational}
\end{equation}
The function $\omega (\tmmathbf{T})$ depends on the geometry of the
cross-section only.
In terms of the solution~(\ref{eq:twisting-homogeneous-sol}), one can
reconstruct the microscopic strain and the microscopic stress as
\begin{equation}
\begin{array}{rllll}
    \tmmathbf{E}^{(\tau)} (\tmmathbf{T}) & = & \tilde{\tmmathbf{E}}
    (\tmmathbf{T}, \tau, \tmmathbf{V}^{(\tau)}) & = & \tau \, \left( -
\eta_{\alpha \beta} \, T_{\beta} + \partial_{\alpha}
\omega (\tmmathbf{T}) \right) \, \tmmathbf{e}_{\alpha} \odot
\tmmathbf{e}_3\\
\tmmathbf{\Sigma}^{(\tau)} (\tmmathbf{T}) & = & 2 \, \mu
\, \tmmathbf{E}^{(\tau)} (\tmmathbf{T}) & = & \mu \,
\tau \, \left( - \eta_{\alpha \beta} \,
T_{\beta} + \partial_{\alpha} \omega (\tmmathbf{T}) \right) \,
(\tmmathbf{e}_{\alpha} \otimes \tmmathbf{e}_3 +\tmmathbf{e}_3 \otimes
\tmmathbf{e}_{\alpha})
\end{array} \label{eq:torsion-homogeneousStress}
\end{equation}
Next, the strain energy density $W_{\text{hom}} (\tau)$ defined
in~(\ref{eq:Wh-def}) is found by inserting the strain $\tmmathbf{E}^{(\tau)}
(\tmmathbf{T})$ into~(\ref{eq:linear-w}), which yields
\begin{equation}
W_{\text{hom}} (\tau) = \frac{1}{2} \, \mu \, J
\, \tau^2, \label{eq:torsion-Whom}
\end{equation}
where $J$ is the torsional constant, classically defined as
\begin{equation}
\begin{split}
J & = \iint_{\Omega} \sum_{\alpha} \left( - \eta_{\alpha
\beta} \, T_{\beta} + \partial_{\alpha} \omega (\tmmathbf{T})
\right)^2 \, \mathrm{d} A\\
& = \iint_{\Omega} (T_1^2 + T_2^2) \, \mathrm{d} A -
\iint_{\Omega} ((\partial_1 \omega)^2 + (\partial_2 \omega)^2)
\, \mathrm{d} A.
\end{split} \label{eq:torsion-J}
\end{equation}
The last equality can be established by using an identity obtained by
setting $\hat{\omega} = \omega$ in~(\ref{eq:warping-function-variational}).
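This identity can be verified symbolically in the elliptical case, for which the warping function takes the classical closed form $\omega (\tmmathbf{T}) = \frac{b^2 - a^2}{a^2 + b^2} \, T_1 \, T_2$; the sketch below (ours, using sympy) evaluates both lines of equation~(\ref{eq:torsion-J}) and compares them with the closed-form torsional constant quoted later in equation~(\ref{eq:elliptical-X-section}).

```python
import sympy as sp

a, b, r, th = sp.symbols('a b r theta', positive=True)
T1, T2 = a*r*sp.cos(th), b*r*sp.sin(th)   # unit-disk parameterization
dA = a*b*r                                # Jacobian of the mapping

c = (b**2 - a**2)/(a**2 + b**2)           # amplitude of omega = c*T1*T2

def integrate_ellipse(f):
    # integrate f over the elliptical cross-section
    return sp.integrate(sp.integrate(f*dA, (r, 0, 1)), (th, 0, 2*sp.pi))

# First line of the definition of J; the partial derivatives of omega
# are d omega/d T1 = c*T2 and d omega/d T2 = c*T1, entered by hand.
g1 = -T2 + c*T2    # alpha = 1: -eta_{1 beta} T_beta + d_1 omega
g2 = T1 + c*T1     # alpha = 2: -eta_{2 beta} T_beta + d_2 omega
J_first = integrate_ellipse(g1**2 + g2**2)

# Second line: polar moment minus the Dirichlet energy of omega
J_second = (integrate_ellipse(T1**2 + T2**2)
            - integrate_ellipse((c*T2)**2 + (c*T1)**2))

J_closed = sp.pi*a**3*b**3/(a**2 + b**2)  # closed form for the ellipse
print(sp.simplify(J_first - J_closed))    # 0
print(sp.simplify(J_second - J_closed))   # 0
```

Both expressions of $J$ reduce to the same rational function of $a$ and $b$, confirming the identity above in this particular case.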
In view of equation~(\ref{eq:phi-no-gradient}), the one-dimensional strain
energy is
\begin{equation}
\Phi_{(0)}^{\star} [\tau] = \int_0^{\ell} \frac{1}{2} \, \mu
\, J \, \tau^2(S) \, \mathrm{d} S.
\label{eq:twisting-Phi0}
\end{equation}
We have recovered the classical linear model for the twisting of bars,
ignoring the gradient effect for the moment. Repeating a similar reduction but
for stretching and bending modes rather than for the twisting mode, one can
recover the strain energy potential governing the planar Euler-Bernoulli beam
model, $\Phi^{\star}_{(0)} [\varepsilon, \kappa] = \int \left[ \frac{Y
\, A}{2} \, \varepsilon^2 (S) + \frac{Y \, I}{2}
\, \kappa^2 (S) \right] \, \mathrm{d} S$, where $\varepsilon$
and $\kappa$ are the stretching and bending strain measures, respectively, and
$Y \, A$ and $Y \, I$ are the classical traction and bending
moduli, respectively. Here, we have considered rods made of a uniform,
linearly elastic, isotropic material; extensions accounting for
anisotropic or non-linear elastic materials \citep{cimetiere1988asymptotic},
for inhomogeneous elastic properties in the cross-section
\citep{hodges2006nonlinear}, or for a pre-strain distribution across the
cross-section
\citep{cicalese2017global,kohn2018bending,moulton2020morphoelastic} can be
obtained easily by following the same procedure.
\subsection{Gradient effect}
In \ref{app:twisting}, we derive the one-dimensional energy
functional capturing the gradient effect associated with a non-uniform
distribution of twist $\tau (S)$. We do so by applying the general recipe from
section~\ref{sec:gradient-effect} to the strain function $\tmmathbf{E}
(\tmmathbf{T}; \tau ; \tmmathbf{V}, \tmmathbf{V}^{\dag})$ in
equation~(\ref{eq:E-twist-linear}). The main results can be summarized as
follows.
A second torsional constant, classically called the warping constant, is
defined by
\begin{equation}
J_{\omega} = \iint_{\Omega} \omega^2 (\tmmathbf{T}) \, \mathrm{d} A.
\label{eq:Jw}
\end{equation}
The gradient of kinematic twist $\tau' (S)$ gives rise to a corrective
displacement in the plane of the cross-section,
\begin{equation}
Z_{\alpha}^{\text{opt}} (\tau^{\dag}, \tmmathbf{T}) = \tau^{\dag}
\, u_{\alpha} (\tmmathbf{T}) \qquad Z_3^{\text{opt}} (\tau^{\dag},
\tmmathbf{T}) = 0 \label{eq:twisting-corrective-displacement}
\end{equation}
where $u_{\alpha} (\tmmathbf{T})$ for $\alpha = 1$, 2 are two functions
satisfying the variational problem
\begin{equation}
  \forall \hat{u}_{\alpha} \quad \iint_{\Omega} \left( \left\{ \lambda
\, \partial_{\rho} u_{\rho} \, \delta_{\alpha
\beta} + 2 \, \mu \, \frac{\partial_{\alpha} u_{\beta} +
\partial_{\beta} u_{\alpha}}{2} \right\} \, \partial_{\beta}
\hat{u}_{\alpha} + \left( \lambda \, \omega \,
\partial_{\alpha} \hat{u}_{\alpha} - \mu \, \partial_{\alpha}
\omega \, \hat{u}_{\alpha} \right) - F_{\alpha} \,
\hat{u}_{\alpha} - Q \, \eta_{\alpha \beta} \,
T_{\alpha} \, \hat{u}_{\beta} \right) \, \mathrm{d} A = 0
\label{eq:twisting-psi-alpha-variational-pb}
\end{equation}
and the kinematic constraints
\begin{equation}
\iint_{\Omega} u_{\alpha} \, \mathrm{d} A = 0 \qquad
\iint_{\Omega} \eta_{\alpha \beta} \, T_{\alpha}
\, u_{\beta} \, \mathrm{d} A = 0.
\label{eq:twist-corrective-displ-cstr}
\end{equation}
In equation~(\ref{eq:twisting-psi-alpha-variational-pb}), $\hat{u}_{\alpha}
(\tmmathbf{T})$ for $\alpha = 1$, 2 are test functions defined on the
cross-section, and $(F_1, F_2, Q)$ are three scalar multipliers enforcing the
kinematic constraints. The solutions $u_{\alpha} (\tmmathbf{T})$ depend on the
cross-section geometry and on Poisson's ratio $\nu = \frac{\lambda}{2
\, (\lambda + \mu)}$.
In terms of the corrective displacement $u_{\alpha}$ and of the warping
function $\omega (\tmmathbf{T})$ found earlier, see
equation~(\ref{eq:warping-function-variational}), one can define three
additional constants,
\begin{equation}
\begin{split}
D_{\lambda} & = \lambda \, \iint_{\Omega} \omega
(\tmmathbf{T}) \, \partial_{\alpha} u_{\alpha} (\tmmathbf{T})
\, \mathrm{d} A\\
D_{\mu} & = \mu \, \iint_{\Omega} \partial_{\alpha} \omega
(\tmmathbf{T}) \, u_{\alpha} (\tmmathbf{T}) \, \mathrm{d}
A\\
D_{\omega} & = \left( \lambda + 2 \, \mu \right) \,
J_{\omega} + D_{\lambda}
\end{split} \label{eq:twist-D-sub-x}
\end{equation}
The final expression of the one-dimensional strain energy is
\begin{equation}
\Phi_{(2)}^{\star} [\tau] = \int_0^{\ell} \left( \frac{1}{2} \, \mu
\, J \, \left( \tau (S) + \frac{D_{\mu}}{\mu \,
J} \, \frac{\mathrm{d}^2 \tau}{\mathrm{d} S^2} \right)^2 + \frac{1}{2}
(D_{\omega} + D_{\mu}) \, \left( \frac{\mathrm{d} \tau}{\mathrm{d} S}
\right)^2 \right) \, \mathrm{d} S. \label{eq:twisting-final-phi}
\end{equation}
To the best of our knowledge, this simple one-dimensional strain energy for
the twisting of a linearly elastic bar has not appeared in the literature
before. It captures the gradient effect, is asymptotically correct, and
recovers some of the results of \citet{trabucho1989existence} in an
accessible form, as discussed below.
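As a sanity check of equation~(\ref{eq:twisting-final-phi}), the functional can be evaluated by finite differences for a prescribed twist profile; the profile and the numerical constants in the sketch below are illustrative choices of ours, not values taken from the text.

```python
# Finite-difference evaluation of the one-dimensional energy
# Phi_2[tau] for a sample twist profile tau(S) = sin(pi S / l).
import numpy as np

mu_J = 1.0                    # mu * J
D_mu, D_omega = 0.05, 0.2     # illustrative gradient constants
ell, n = 1.0, 2001

S = np.linspace(0.0, ell, n)
tau = np.sin(np.pi * S / ell)
d1 = np.gradient(tau, S)      # tau'
d2 = np.gradient(d1, S)       # tau''

integrand = (0.5 * mu_J * (tau + (D_mu / mu_J) * d2)**2
             + 0.5 * (D_omega + D_mu) * d1**2)
h = S[1] - S[0]
# trapezoidal rule over the grid
Phi2 = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
print(Phi2)   # about 0.681 for these values
```

For this sinusoidal profile the integrals are elementary, so the discrete value can be compared against the closed-form result as a consistency check of any implementation.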
By combining equations~(\ref{eq:x-centerline-based-crspondence}),
(\ref{eq:y-star-expansion}), (\ref{eq:y-to-v}),
(\ref{eq:twisting-homogeneous-sol})
and~(\ref{eq:twisting-corrective-displacement}), the solution in displacement
is found as $\tmmathbf{x} (S, \tmmathbf{T}) = S \, \tmmathbf{e}_3 +
\left( T_{\alpha} + \tau' (S) \, u_{\alpha} (\tmmathbf{T}) \right)
\, \tmmathbf{d}_{\alpha} (S) + \tau (S) \, \omega
(\tmmathbf{T}) \, \tmmathbf{e}_3 (S)$ where $\tmmathbf{d}_1 (S)
=\tmmathbf{e}_1 + \theta (S) \, \tmmathbf{e}_2$ and $\tmmathbf{d}_2
(S) =\tmmathbf{e}_2 - \theta (S) \, \tmmathbf{e}_1$ are the rotated
directors and $\theta (S)$ is the twisting angle, see
figure~\ref{eq:fig-twisting} (recall that we are working in the linear
setting). The usual relation defining the twisting strain $\tau (S) = \theta'
(S)$ as the gradient of the twisting angle $\theta (S)$ is recovered in the
process.
The particular case of an elliptical cross-section is worked out in
\ref{app-twist-elliptical}: with $a$ and $b$ denoting the
semi-major and semi-minor axes, in either order, the constants appearing
in the energy $\Phi_{(2)}^{\star}$ are calculated as
\begin{equation}
\begin{array}{llll}
J = \pi \, \frac{a^3 \, b^3}{a^2 + b^2}\quad & J_{\omega}
= \frac{1}{24} \, \frac{(b^2 - a^2)^2}{a^2 + b^2} \, J\quad &
D_{\mu} = 8 \, \mu \, J_{\omega} \, \left(
\frac{a \, b}{a^2 + b^2} \right)^2\quad & D_{\omega} = Y \,
J_{\omega}
\end{array} \text{{\hspace{1.2em}}(elliptical cross-section)}
\label{eq:elliptical-X-section}
\end{equation}
where $Y$ is the Young modulus,
\begin{equation}
Y = \mu \, \frac{3 \, \lambda + 2 \, \mu}{\lambda
+ \mu} . \label{eq:twisting-Young-modulus}
\end{equation}
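For reference, the constants in equation~(\ref{eq:elliptical-X-section}) are straightforward to evaluate numerically; the helper below (function name and arguments are ours) also illustrates the degenerate circular case $a = b$ discussed later, for which the gradient effect vanishes.

```python
# Evaluate the cross-sectional constants of an elliptical section.
import math

def elliptical_constants(a, b, lam, mu):
    """Return (J, J_omega, D_mu, D_omega) for an elliptical section."""
    J = math.pi * a**3 * b**3 / (a**2 + b**2)
    J_omega = (b**2 - a**2)**2 / (24 * (a**2 + b**2)) * J
    D_mu = 8 * mu * J_omega * (a * b / (a**2 + b**2))**2
    Y = mu * (3 * lam + 2 * mu) / (lam + mu)   # Young modulus
    D_omega = Y * J_omega
    return J, J_omega, D_mu, D_omega

# Circular section (a = b): warping and gradient constants vanish.
J, Jw, Dmu, Dw = elliptical_constants(1.0, 1.0, 2.0, 1.0)
print(Jw, Dmu, Dw)   # 0.0 0.0 0.0
```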
\subsection{Equilibrium}
In the presence of a distributed external twisting moment $m_3 (S)$ per unit
length $\mathrm{d} S$, the total potential energy of the bar is
\[ \Psi^{\star} [\theta] = \Phi_{(2)}^{\star} [\theta'] - \int_0^{\ell} m_3
(S) \, \theta (S) \, \mathrm{d} S, \]
see equation~(\ref{eq:ideal-1d-total-potential-energy}). The equations of
equilibrium of the bar can be found by making $\Psi^{\star} [\theta]$
stationary with respect to $\theta$. Upon integration by parts and after
several simplifications, one obtains the equilibrium equation in the interior
as
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d} S} \left( \mu \, J \, \tau -
(D_{\omega} - D_{\mu}) \, \tau'' \right) + m_3 (S) = 0,
\label{eq:twist-equil-eq-with-gradient}
\end{equation}
along with the applicable boundary conditions. Note the plus sign in front of
$D_{\mu}$ in equation~(\ref{eq:twisting-final-phi}) and the minus sign in
equation~(\ref{eq:twist-equil-eq-with-gradient}).
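Assuming $D_{\omega} > D_{\mu}$, the homogeneous solutions of equation~(\ref{eq:twist-equil-eq-with-gradient}) decay over the intrinsic length $\ell_{\mathrm{c}} = \sqrt{(D_{\omega} - D_{\mu}) / (\mu \, J)}$. The sketch below evaluates this length for an elliptical cross-section, with illustrative values of $a$, $b$, $\lambda$ and $\mu$ chosen by us.

```python
# Intrinsic length scale of the gradient term in the twisting
# equilibrium equation, for an elliptical cross-section.
import math

a, b = 2.0, 1.0       # semi-axes (illustrative)
lam, mu = 2.0, 1.0    # Lame moduli (illustrative)

J = math.pi * a**3 * b**3 / (a**2 + b**2)
J_omega = (b**2 - a**2)**2 / (24 * (a**2 + b**2)) * J
D_mu = 8 * mu * J_omega * (a * b / (a**2 + b**2))**2
Y = mu * (3 * lam + 2 * mu) / (lam + mu)
D_omega = Y * J_omega

# decay length of the homogeneous solutions exp(+/- S / l_c)
l_c = math.sqrt((D_omega - D_mu) / (mu * J))
print(l_c)   # comparable to the cross-sectional size, as expected
```

As expected for a higher-order gradient model, $\ell_{\mathrm{c}}$ comes out comparable to the cross-sectional dimensions, so the gradient term matters only where the twist varies over lengths of that order.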
\subsection{Comments}
The equilibrium equation~(\ref{eq:twist-equil-eq-with-gradient}) underpins the
analysis of the gradient effect in twisted prismatic bars carried out
by~\citet{trabucho1989existence}, as shown in \ref{app:Trabucho};
however, this simple and important equation did not appear explicitly in that
work.
In equation~(\ref{eq:twist-equil-eq-with-gradient}), the quantity inside the
derivative, $M_3 = \mu \, J \, \tau - (D_{\omega} - D_{\mu})
\, \tau''$, can be interpreted as the internal twisting moment in the
bar; it is made up of the prediction $M_3 = \mu \, J \,
\tau$ of the classical model without gradient effect, and of a correction
coming from the gradient effect; it is a hallmark of higher-order gradient
models that the stress depends not only on the local strain but also on its
gradients. The quantity $M_3 = \mu \, J \, \tau -
(D_{\omega} - D_{\mu}) \, \tau''$ is identical to that obtained by
the general constitutive law in
equation~(\ref{eq:internal-stress-full-model}), as can be checked.
In the particular case of a circular cross-section, $a = b$, the gradient
effect is absent: in equation~(\ref{eq:elliptical-X-section}), $J_{\omega} =
0$ and therefore $D_{\mu} = D_{\omega} = 0$.
The gradient model for a twisted bar has been derived here in the context of
linear elasticity. A non-linear extension of this model can be obtained along
the same lines; in order to build the catalog of solutions having homogeneous
twist, a non-linear one-dimensional boundary-value problem must be solved,
which requires some numerical solution in general. In the non-linear model,
the constitutive law is of the form $M_3 = H (\tau) + Q (\tau) \,
\tau''$. Consistency with the linear model is ensured by the approximations
$H (\tau) \approx \mu \, J \, \tau$ and $Q (\tau) \approx -
(D_{\omega} - D_{\mu})$ which hold when linear elasticity is applicable,
{\tmem{i.e.}}, when the microscopic strain is small, $| \tau | \ll 1 / \max
(a, b)$. This shows that the linear model derived in this section is
applicable as long as the absolute value of the twisting strain $\tau$ remains
small.
\section{Illustration in a weakly non-linear setting: buckling of a thick
beam}\label{sec:Euler-buckling}
Euler buckling ({\tmem{i.e.}}, the buckling of an elastic cylinder subjected
to an axial compressive force) can be analyzed using the classical theory of
rods: this yields a prediction for the buckling load which is accurate for
infinitely slender beams. With the aim of characterizing the buckling load of
thicker beams, several authors have derived corrections to the Euler buckling
load in powers of the aspect-ratio. This requires restarting from the
non-linear theory of elasticity in three dimensions, as both constitutive and
geometric nonlinearities affect these corrections.
In an early and remarkable work,
\citet{Fosdick-Shield-Small-bending-of-a-circular-1963} have carried out
what is essentially a linear bifurcation analysis of a hyper-elastic cylinder
having a finite length-to-radius ratio. They obtained a prediction of the
buckling load that connects with Euler's prediction in the limit of a slender
cylinder, thereby showing consistency of the buckling analyses based on
three-dimensional versus one-dimensional models. However, their solution
assumes that the internal moment in the cylinder is proportional to the local
value of the center-line curvature. This is questionable for thick beams: the
internal moment $M$ given by equation~(\ref{eq:internal-stress-full-model})
depends on higher derivatives of the curvature as well, as seen earlier in
equation~(\ref{eq:twist-equil-eq-with-gradient}). It is therefore unclear
whether their analysis is valid beyond the infinitely slender case.
In more recent work, \citet{scherzinger1998asymptotic} derived the first
buckling load of a stubby hyper-elastic cylinder in powers of its
aspect-ratio, starting from the full theory of three-dimensional elasticity
with finite strain---a similar analysis has been carried out independently
by~\citet{Goriely-Vandiver-EtAl-Nonlinear-Euler-buckling-2008} and \citet{de2011nonlinear}.
Here, we show that the results of \citet{scherzinger1998asymptotic} can be
recovered by (i)~deriving a non-linear {\tmem{one-dimensional}} model for the
stubby cylinder that captures the gradient effect, using our reduction method
and (ii)~by carrying out a linear bifurcation analysis of this one-dimensional
model.
By doing so, our goal is twofold: we provide another illustration of our
reduction method and we verify its predictions in a weakly non-linear setting.
\subsection{Problem setting}\label{sec:beam-problem-setting}
We revisit the buckling problem of~\citet{scherzinger1998asymptotic} as
follows. We consider a prismatic elastic body whose length is $\ell$ in the
undeformed configuration: in figure~\ref{fig:thick-Euler-buckling}a, the
particular case of a cylinder with initial radius $\rho$ is shown. We use
Cartesian coordinates such that the axis $\tmmathbf{e}_3$ is aligned with the
initial axis of the cylinder, and one of the terminal faces of the cylinder is
centered on the origin $O$ of the coordinate system. The two ends $S = 0$ and
$S = \ell$ are assumed to slide without friction on two planes perpendicular
to $\tmmathbf{e}_3$; in particular, the longitudinal displacement is
restrained on the terminal faces,
\begin{equation}
\tmmathbf{x} (0, \tmmathbf{T}) \cdot \tmmathbf{e}_3 = 0
\qquad
(\tmmathbf{x}
(\ell, \tmmathbf{T}) -\tmmathbf{r} (\ell)) \cdot \tmmathbf{e}_3 = 0.
\label{eq:Euler-buckling-sliding-condition}
\end{equation}
The distance between the planes is changed from $\ell$ in the natural
configuration, to $\ell \, (1 + \varepsilon)$ in the actual
configuration with $- 1 < \varepsilon < 0$. We seek the critical value of
$\varepsilon$ corresponding to the occurrence of the first buckling modes.
We assume that the prismatic body is made up of an isotropic material, having
uniform elastic properties. We also assume that the cross-section domain
$\Omega$ is mirror-symmetric about the axes $\tmmathbf{e}_1$ and
$\tmmathbf{e}_2$ in reference configuration, i.e., it is invariant by both
$(T_1, T_2) \leftarrow (T_1, - T_2)$ and $(T_1, T_2) \leftarrow (- T_1, T_2)$;
this ensures
\begin{equation}
  \left\langle (T_1)^i \, (T_2)^j \right\rangle = 0 \text{ whenever
  $i$ or $j$ is odd} .
\label{eq:Euler-buckling-cross-section-symmetry}
\end{equation}
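The vanishing of these odd moments for a doubly mirror-symmetric cross-section can be verified symbolically, for example on a rectangle $[-w, w] \times [-h, h]$; the sketch below (ours, using sympy) checks all moments up to order three.

```python
# Check that <T1^i T2^j> vanishes on a doubly mirror-symmetric
# rectangle whenever i or j is odd.
import sympy as sp

T1, T2, w, h = sp.symbols('T1 T2 w h', positive=True)

def moment(i, j):
    # integral of T1^i T2^j over the rectangle [-w, w] x [-h, h]
    return sp.integrate(sp.integrate(T1**i * T2**j,
                                     (T1, -w, w)), (T2, -h, h))

odd_moments = [moment(i, j) for i in range(4) for j in range(4)
               if i % 2 == 1 or j % 2 == 1]
print(all(m == 0 for m in odd_moments))   # True
```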
With these assumptions, the two bending modes and the stretching modes are
uncoupled, and we limit attention to buckling modes such that the center-line
remains in the plane perpendicular to $\tmmathbf{e}_1$. We denote by $\theta
(S)$ the rotation of the material frame about the constant vector
$\tmmathbf{d}_1 (S) =\tmmathbf{e}_1$ normal to the plane of deformation.
The analysis of less symmetric cross-sections, non-isotropic materials, or
elastic properties that vary in the cross-section is more involved but it does
not raise any fundamental difficulty.
The prismatic body is homogeneous and made of an isotropic hyper-elastic
material. We can for instance use the same constitutive model $w
(\tmmathbf{T}, \tmmathbf{E}) = w_{\text{ST}} (\tmmathbf{E})$ as used
by~{\citet{scherzinger1998asymptotic}}, which reads, after restoring a missing
coefficient $1 / 4$ in their equation~[55], $w_{\text{ST}} (\tmmathbf{E}) =
A_{\text{ST}} \, \Big( \frac{I_1}{I_3^{1 / 3}} - 3 \Big) +
B_{\text{ST}} \, \Big( \frac{I_2}{I_3^{2 / 3}} - 3 \Big) +
\frac{1}{4} \, \frac{A_{\text{ST}} + B_{\text{ST}}}{24} \,
\frac{1 + \nu_{\text{ST}}}{1 - 2 \, \nu_{\text{ST}}} \,
\Big( I_3^2 - \frac{1}{I_3^2} \Big)^2$, where $I_1 = \tmop{tr}
\tmmathbf{C}$, $I_2 = \frac{1}{2} \, (I_1^2 - \tmop{tr}
(\tmmathbf{C}^2))$, $I_3 = \det \tmmathbf{C}$ and $\tmmathbf{C}=\tmmathbf{I}+
2 \, \tmmathbf{E}$. However, we do not specify the isotropic
constitutive law for the moment. The only constitutive assumption which we
will use in the forthcoming analysis is that, in the unbuckled configuration,
the stress is uniaxial and the incremental constitutive law is transversely
isotropic: this holds for {\tmem{any}} isotropic constitutive law.
Specifically, our analysis makes use of the three constitutive functions
$w_{\text{tr}} (\varepsilon)$, $Y_{\text{t}} (\varepsilon)$, $p (\varepsilon)$
that characterize the non-linear material response in simple traction, where
$\varepsilon$ is the longitudinal engineering strain: $w_{\text{tr}}
(\varepsilon)$ is the strain energy density of the material in simple
traction, $Y_{\text{t}} (\varepsilon)$ is the tangent Young modulus and $p
(\varepsilon)$ is the transverse stretch resulting from Poisson's effect, see
section~\ref{app-sec:simple-stretching} in the appendix for details. In terms
of these material functions, we also define the initial Young modulus $Y_0$,
the initial Poisson's ratio $\nu_0$ and the initial curvature $Y_0'$ of the
load-displacement curve, see equation~(\ref{eq:beam-Y0-Y0p}) from the
appendix.
\begin{figure}
\centerline{\includegraphics{scherzinger-pb-with-labels.pdf}}
\caption{Buckling of a thick circular cylinder with initial radius $\rho$,
whose terminal faces slide onto two parallel planes, as analyzed by
{\citet{scherzinger1998asymptotic}}
and~{\citet{Goriely-Vandiver-EtAl-Nonlinear-Euler-buckling-2008}}. Our
analysis addresses the slightly more general case of a prismatic body, whose
  cross-section $\Omega$ is mirror-symmetric with respect to the axes
$\tmmathbf{e}_1$ and $\tmmathbf{e}_2$.\label{fig:thick-Euler-buckling}}
\end{figure}
\subsection{One-dimensional reduction}\label{ssec:beam-1d-model}
In this section, we apply our reduction method to obtain the one-dimensional
model for the planar beam capturing the gradient effect; it describes the
bending and stretching of a planar, higher-order Elastica.
For planar deformation, there are two relevant macroscopic strain measures,
namely the bending strain $\kappa = \kappa_1$ and the stretching strain
$\varepsilon$; they are grouped into a macroscopic strain vector
$\tmmathbf{h}= (\varepsilon, \kappa)$. In the planar twistless case, the
general form~(\ref{eq:phi-gr}) of the higher-order one-dimensional model
writes
\begin{multline}
\Phi_{(2)}^{\star} [\varepsilon, \kappa] = \int_0^{\ell} \left[
W_{\text{hom}} \left( \varepsilon + \xi_0 (\varepsilon, \kappa)
\, \varepsilon'', \kappa + \xi_1 (\varepsilon, \kappa)
\, \kappa'' \right) \right. \ldots\\
\left. {}+ A_0 (\varepsilon, \kappa) \,
\varepsilon' + A_1 (\varepsilon, \kappa) \, \kappa' + \frac{1}{2}
\, \left(\begin{array}{c}
\varepsilon'\\
\kappa'
\end{array}\right) \cdot \left(\begin{array}{cc}
D_{0 0} (\varepsilon, \kappa) & D_{1 0} (\varepsilon,
\kappa)\\
D_{1 0} (\varepsilon, \kappa) & D_{1 1} (\varepsilon,
\kappa)
\end{array}\right) \cdot \left(\begin{array}{c}
\varepsilon'\\
\kappa'
\end{array}\right) \right] \, \mathrm{d} S.
\label{eq:bending-phi2-anticipation}
\end{multline}
We now proceed to calculate the quantities $W_{\text{hom}}$, $\xi_0$, $\xi_1$,
$A_0$, $A_1$ and $D_{i j}$: we consider the case of a finite axial
strain $\varepsilon$ but limit attention to small values of the curvature
$\kappa$, anticipating that this is all that is needed for the forthcoming
buckling analysis.
\subsubsection{Symmetry properties}\label{sssec:beam-symmetries}
We first characterize the symmetry properties of the functions
$W_{\text{hom}}$, $\xi_0$, $\xi_1$, $A_0$, $A_1$ and $D_{i j}$ as
they will save us from calculating some quantities that are zero by symmetry.
The cylinder is invariant both by a mirror symmetry with respect to the axis
$\tmmathbf{e}_3$, and by a reversal of the parameterization $S \leftarrow (-
S)$. These symmetries correspond to changing $(\varepsilon, \kappa,
\varepsilon', \kappa', \varepsilon'', \kappa'')$ into $(+ \varepsilon, -
\kappa, + \varepsilon', - \kappa', + \varepsilon'', - \kappa'')$ and $(+
\varepsilon, - \kappa, - \varepsilon', + \kappa', + \varepsilon'', -
\kappa'')$, respectively. For $\Phi_{(2)}^{\star} [\varepsilon, \kappa]$ to
remain invariant by both these transformations, both $A_0$ and $A_1$ must be
zero identically, $W_{\text{hom}}$, $\xi_0$, $\xi_1$, $D_{0 0}$ and
$D_{1 1}$ must be even with respect to $\kappa$, and $D_{1
0}$ must be odd with respect to $\kappa$, see
equation~(\ref{eq:bending-phi2-anticipation}). Therefore, for any
$\varepsilon$ and $\kappa$, and for any set of non-negative integers $i$ and
$j$, we have
\begin{equation}
\begin{gathered}
\begin{array}{ll}
A_0 (\varepsilon, \kappa) = 0 & A_1 (\varepsilon, \kappa) = 0
\end{array}\\
\frac{\partial^{i + 2 \, j + 1} f}{\partial \varepsilon^i
\, \partial \kappa^{2 \, j + 1}} (\varepsilon, 0) = 0
\text{ for any choice of $f$ in $\left\{ W_{\text{hom}}, \xi_0, \xi_1,
D_{0 0}, D_{1 1} \right\}$}\\
    \frac{\partial^{i + 2 \, j} D_{1 0}}{\partial
    \varepsilon^i \, \partial \kappa^{2 \, j}} (\varepsilon,
    0) = 0
\end{gathered} \label{eq:bending-phi2-symmetries}
\end{equation}
In particular, the matrix $D_{i j} (\varepsilon, 0)$ is diagonal,
{\tmem{i.e.}}, $D_{1 0} (\varepsilon, 0) = 0$.
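The parity argument above can be illustrated on any function that is even in $\kappa$: all of its odd $\kappa$-derivatives vanish at $\kappa = 0$. A minimal sympy sketch (the particular function is an arbitrary choice of ours):

```python
# Odd derivatives of a function even in kappa vanish at kappa = 0.
import sympy as sp

eps, kap = sp.symbols('epsilon kappa')
f = sp.exp(eps) * sp.cos(kap)   # arbitrary function, even in kappa

for order in (1, 3, 5):
    assert sp.diff(f, kap, order).subs(kap, 0) == 0
print("odd kappa-derivatives vanish at kappa = 0")
```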
\subsubsection{Analysis of homogeneous solutions}
Homogeneous solutions are derived here by solving the equations from
\ref{app:compendium-homogeneous}, using the expression
of the strain from equation~(\ref{eq:homogeneous-strain}) relevant to the
non-linear and homogeneous setting, and by specializing to the planar
twistless case, $\kappa_2 = \kappa_3 = 0$. As we are ultimately interested in
weakly bent configurations of the cylinder, close to the bifurcation
threshold, we treat $\kappa$ as a small parameter. We refrain from setting
$\kappa = 0$ from the onset, however, as some derivatives of intermediate
quantities with respect to $\kappa$ will be needed in the course of the
derivation.
As shown in \ref{app-sec:beam-homogeneous-solutions},
the microscopic displacement corresponding to a homogeneous axial strain
$\varepsilon$ and curvature $\kappa$ is
\begin{equation}
\begin{array}{ll}
Y_{\alpha}^{(\varepsilon, \kappa)} = p (\varepsilon) \,
\left( T_{\alpha} + \kappa \, \frac{\mathrm{d} p}{\mathrm{d}
\varepsilon} (\varepsilon) \, \varphi_{\alpha} (\tmmathbf{T})
\right) +\mathcal{O} (\kappa^2)\qquad & Y_3^{(\varepsilon, \kappa)}
(\tmmathbf{T}) = 0
\end{array} \label{eq:beam-homogeneous-Y}
\end{equation}
where $\varphi_1 (\tmmathbf{T})$ and $\varphi_2 (\tmmathbf{T})$ are two
functions depending on the cross-section geometry, which are the solutions of
the differential problem on the cross-section
\begin{equation}
\begin{array}{lll}
\forall \tmmathbf{T} \in \Omega \quad \frac{\partial_{\alpha}
\varphi_{\beta} (\tmmathbf{T}) + \partial_{\beta} \varphi_{\alpha}
(\tmmathbf{T})}{2} = T_2 \, \delta_{\alpha \beta} &
\langle \varphi_{\alpha} \rangle = 0 & \left\langle \eta_{\alpha
\beta} \, T_{\alpha} \, \varphi_{\beta} (\tmmathbf{T})
\right\rangle = 0.
\end{array} \label{eq:phi-alpha-fundamental-def}
\end{equation}
With the symmetry assumptions
in~(\ref{eq:Euler-buckling-cross-section-symmetry}), the solution is
\begin{equation}
\varphi_1 (\tmmathbf{T}) = T_1 \, T_2 \qquad
\varphi_2 (\tmmathbf{T}) = \frac{T_2^2 - T_1^2}{2} - \frac{\langle T_2^2
\rangle - \langle T_1^2 \rangle}{2} .
\label{eq:phi-alpha-general-cross-section}
\end{equation}
Up to a rigid-body displacement, the functions $\varphi_1$ and $\varphi_2$
match the functions $\phi_{2 1}$ and $\phi_{2 2}$
classically used in the {\tmem{linear}} analysis of bending,
respectively---see for instance equations~[2.5,3.8,3.9] in the work
of~{\citet{trabucho1989existence}}. Our analysis shows that they are relevant
to the analysis of finite-stretching and infinitesimal-bending as well.
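The expressions above can be verified directly; the sketch below (ours, using sympy) checks that $\varphi_1$ and $\varphi_2$ satisfy the symmetric-gradient condition of equation~(\ref{eq:phi-alpha-fundamental-def}).

```python
# Verify (d_a phi_b + d_b phi_a)/2 = T2 * delta_ab for the candidate
# functions phi_1 = T1*T2 and phi_2 = (T2^2 - T1^2)/2 - c0, where c0
# stands for the constant (<T2^2> - <T1^2>)/2.
import sympy as sp

T1, T2, c0 = sp.symbols('T1 T2 c0')
phi = [T1*T2, (T2**2 - T1**2)/2 - c0]
T = [T1, T2]

for al in range(2):
    for be in range(2):
        sym_grad = (sp.diff(phi[be], T[al]) + sp.diff(phi[al], T[be]))/2
        target = T2 if al == be else 0
        assert sp.simplify(sym_grad - target) == 0
print("symmetric-gradient condition verified")
```

The kinematic constraints in equation~(\ref{eq:phi-alpha-fundamental-def}) then follow from the odd symmetry of the integrands on a doubly mirror-symmetric cross-section.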
As shown in the appendix, the displacement~(\ref{eq:beam-homogeneous-Y}) is
such that every point in the bar is in simple traction with a local
longitudinal strain $\varepsilon + \kappa \, p (\varepsilon)
\, T_2$ depending on the transverse coordinate $T_2$: the strain is
given by $\tmmathbf{E}^{(\varepsilon, \kappa)} (\tmmathbf{T})
=\tmmathbf{E}_{\text{tr}} \left( \varepsilon + \kappa \, p
(\varepsilon) \, T_2 \right) +\mathcal{O} (\kappa^2)$ and the stress
is uniaxial, $\tmmathbf{\Sigma}^{(\varepsilon, \kappa)} (\tmmathbf{T}) =
\Sigma_{\text{tr}} \left( \varepsilon + \kappa \, p
(\varepsilon) \, T_2 \right) \, \tmmathbf{e}_3 \otimes
\tmmathbf{e}_3 +\mathcal{O} (\kappa^2)$, where $\tmmathbf{E}_{\text{tr}}
(\varepsilon)$ and $\Sigma_{\text{tr}} (\varepsilon) \,
\tmmathbf{e}_3 \otimes \tmmathbf{e}_3$ are the strain and the stress in simple
traction for the particular material considered, see
equation~(\ref{eq:beam-simple-traction-E-Sigma}).
The strain energy per unit length associated with this homogeneous solution is
found in the appendix as
\begin{equation}
W_{\text{hom}} (\varepsilon, \kappa) = A \, w_{\text{tr}}
(\varepsilon) + \frac{1}{2} \, Y_{\text{t}} (\varepsilon)
\, p^2 (\varepsilon) \, I_1^0 \,
\kappa^2 +\mathcal{O} (\kappa^4) \label{eq:beam-Whom}
\end{equation}
where $A = \iint_{\Omega} \mathrm{d} A$ is the initial area and $I_1^0 =
\iint_{\Omega} T_2^2 \, \mathrm{d} A$ is
the initial geometric moment of inertia. In the small-strain limit, the
potential $W_{\text{hom}}$ is consistent with the classical Euler beam model
$W_{\text{hom}} (\varepsilon, \kappa) \approx C + \, \frac{Y_0 A}{2}
\, \varepsilon^2 + \frac{Y_0 \, I_1^0}{2} \,
\kappa^2$ (where $C$ is a constant) as can be shown by inserting the
equivalents of $w_{\text{tr}}$, $Y_{\text{t}}$ and $p$ for small $\varepsilon$
derived in \ref{app-sec:simple-stretching}.
\subsubsection{Gradient effect}\label{sssec:beam-gradient-sol}
The corrective displacement associated with longitudinal gradients of axial
strain $\varepsilon^{\dag}$ and curvature $\kappa^{\dag}$ is found in the
appendix as
\begin{equation}
\begin{array}{ll}
Z_{\text{opt}, \alpha}^{(\varepsilon, 0)} ((\varepsilon^{\dag},
\kappa^{\dag}), \tmmathbf{T}) = 0 \qquad& Z_{\text{opt}, 3}^{(\varepsilon, 0)}
((\varepsilon^{\dag}, \kappa^{\dag}), \tmmathbf{T}) = \varepsilon^{\dag}
\, (\ldots) + \kappa^{\dag} \, \frac{p^2
(\varepsilon) \, \frac{\mathrm{d} p}{\mathrm{d} \varepsilon}
(\varepsilon)}{1 + \varepsilon} \, \left( \Theta (\tmmathbf{T}) +
c_{\Gamma} (\varepsilon) \, \Gamma (\tmmathbf{T}) \right)
\end{array} \label{eq:beam-optima-corrective-displacement}
\end{equation}
where the contribution associated with $\varepsilon^{\dag}$ is denoted by an
ellipsis and does not need to be calculated, $c_{\Gamma} (\varepsilon)$ is a
material parameter depending on the strain $\varepsilon$,
\begin{equation}
c_{\Gamma} (\varepsilon) = \frac{Y_{\text{t}} (\varepsilon)}{2 \,
G_{\text{t}} (\varepsilon) \, p (\varepsilon) \,
\left( - \frac{\mathrm{d} p}{\mathrm{d} \varepsilon} (\varepsilon) \right)
\, (1 + \varepsilon)}, \label{eq:beam-1d-reduction-coefficients}
\end{equation}
and $\Theta$ and $\Gamma$ are the cross-sectional functions satisfying the
variational problems
\begin{equation}
\begin{array}{lll}
\forall \hat{Z}_3 \qquad \iint_{\Omega} \left[
\partial_{\alpha} \Theta (\tmmathbf{T}) \, \partial_{\alpha}
\hat{Z}_3 (\tmmathbf{T}) + \varphi_{\alpha} (\tmmathbf{T}) \,
\partial_{\alpha} \hat{Z}_3 (\tmmathbf{T}) \right] \, \mathrm{d} A =
0 & \quad & \langle \Theta \rangle = 0\\[.5em]
\forall \hat{Z}_3 \qquad \iint_{\Omega} \left[
\partial_{\alpha} \Gamma (\tmmathbf{T}) \, \partial_{\alpha}
\hat{Z}_3 (\tmmathbf{T}) + 2 \, T_2 \, \hat{Z}_3
(\tmmathbf{T}) \right] \, \mathrm{d} A = 0 & & \langle \Gamma
\rangle = 0
\end{array} \label{eq:beam-1d-reduc-pb-Theta-Gamma}
\end{equation}
The functions $\Theta$ and $\Gamma$ are denoted as $\theta_2$ and $\eta_2$ in
the work of~{\citet{trabucho1989existence}}, see their equations~[2.23]
and~[2.17].
Finally, we define four constants depending on the cross-section shape,
\begin{equation}
M = \sum_{\alpha} \iint_{\Omega} \varphi_{\alpha}^2 (\tmmathbf{T})
\, \mathrm{d} \Omega \qquad J_{\Theta
\Theta} = \iint_{\Omega} \partial_{\alpha} \Theta \,
\partial_{\alpha} \Theta \, \mathrm{d} \Omega \qquad J_{\Theta \Gamma} = \iint_{\Omega} \partial_{\alpha}
\Theta \, \partial_{\alpha} \Gamma \, \mathrm{d} \Omega
\qquad J_{\Gamma \Gamma} = \iint_{\Omega}
\partial_{\alpha} \Gamma \, \partial_{\alpha} \Gamma \,
\mathrm{d} \Omega . \label{eq:beam-1d-reduc-geom-constants-general-cross-sect}
\end{equation}
As shown in appendix~\ref{ssec:beam-app-gradient-effect}, the reduction method
from section~\ref{sec:asymptotic-1d-reduction} yields the following
expressions for the quantities entering the one-dimensional
model~(\ref{eq:bending-phi2-anticipation}):
\begin{equation}
\begin{array}{rll}
\xi_0 (\varepsilon, 0) & = & 0\\
\xi_1 (\varepsilon, 0) & = & A \, \frac{p (\varepsilon)
\, \left( - \frac{\mathrm{d} p}{\mathrm{d} \varepsilon}
(\varepsilon) \right)}{2 \, (1 + \varepsilon)} \,
\frac{1}{I_1^0 / A^2} \left( \frac{J_{\Theta \Gamma}}{A^3} +
\frac{J_{\Gamma \Gamma}}{A^3} \, c_{\Gamma}
(\varepsilon) \right)\\
D_{1 1} (\varepsilon, 0) & = & A^3 \, \left( p
(\varepsilon) \, \frac{\mathrm{d} p}{\mathrm{d} \varepsilon}
(\varepsilon) \right)^2 \, \left[ \frac{w_{\text{tr}}'
(\varepsilon)}{1 + \varepsilon} \, \frac{M}{A^3} + p^2
(\varepsilon) \, G_{\text{t}} (\varepsilon) \, \left(
\frac{M - J_{\Theta \Theta}}{A^3} + \frac{J_{\Gamma
\Gamma}}{A^3} \, c_{\Gamma}^2 (\varepsilon) \right) \right] .
\end{array} \label{eq:beam-1d-reduction-result-generic}
\end{equation}
These are the only properties of the one-dimensional model which are needed in
the linear buckling analysis, as we will show.
For reference, the microscopic solution in displacement is found by combining
equations~(\ref{eq:x-centerline-based-crspondence}),
(\ref{eq:y-star-expansion}), (\ref{eq:beam-homogeneous-Y})
and~(\ref{eq:beam-optima-corrective-displacement}) as
\begin{equation}
\begin{array}{l}
\tmmathbf{x} (S, \tmmathbf{T}) =\\
\quad \tmmathbf{r} (S) + \left( p
(\varepsilon) \, \left( T_{\alpha} + \kappa \,
\frac{\mathrm{d} p}{\mathrm{d} \varepsilon} (\varepsilon) \,
\varphi_{\alpha} (\tmmathbf{T}) \right) \right) \,
\tmmathbf{d}_{\alpha} (S) + \left( \varepsilon' \, (\ldots) +
\kappa' \, \frac{p^2 (\varepsilon) \,
\frac{\mathrm{d} p}{\mathrm{d} \varepsilon} (\varepsilon)}{1 +
\varepsilon} \, \left( \Theta (\tmmathbf{T}) + c_{\Gamma}
(\varepsilon) \, \Gamma (\tmmathbf{T}) \right) \right)
\, \tmmathbf{d}_3 (S) + \cdots
\end{array} \label{eq:Euler-buckling-microscopic-displacement}
\end{equation}
\subsubsection{Case of a circular cross-section}
When the cross-section is a disk with radius $\rho$, as shown in
figure~\ref{fig:thick-Euler-buckling}, the initial area is $A = \pi
\, \rho^2$, the initial moment of inertia is $I_1^0 = \iint T_2^2
\, \mathrm{d} A = \frac{\pi \, \rho^4}{4} = \frac{A^2}{4
\, \pi}$, and the functions $\varphi_{\alpha}$ in
equation~(\ref{eq:phi-alpha-general-cross-section}) take the slightly simpler
form
\begin{equation}
\begin{array}{ccc}
\varphi_1 (\tmmathbf{T}) = T_1 \, T_2 &
\qquad & \varphi_2 (\tmmathbf{T}) = \frac{T_2^2 - T_1^2}{2},
\end{array} \label{eq:beam-phi-i}
\end{equation}
and the solutions $\Theta$ and $\Gamma$ to
equation~(\ref{eq:beam-1d-reduc-pb-Theta-Gamma}) are
\begin{equation}
\begin{array}{rll}
\Theta (\tmmathbf{T}) & = & - \frac{1}{4} \, (T_1^2 + T_2^2 -
\rho^2) \, T_2\\
\Gamma (\tmmathbf{T}) & = & + \frac{1}{4} \, \left( T_1^2 + T_2^2
- 3 \, \rho^2 \right) \, T_2 .
\end{array} \label{eq:beam-1d-reduction-Theta-Gamma-disk}
\end{equation}
After factoring out the cube of the area, $A^3 = \left( \pi \, \rho^2
\right)^3$, the constants appearing in
equation~(\ref{eq:beam-1d-reduc-geom-constants-general-cross-sect}) can then
be expressed as
\[ M = \frac{A^3}{12 \, \pi^2} \qquad
(J_{\Theta \Theta}, J_{\Theta \Gamma}, J_{\Gamma
\Gamma}) = \frac{A^3}{24 \, \pi^2} \, (+ 1, - 1, + 7)
. \]
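These constants can be recovered by direct integration in polar coordinates, $T_1 = r \cos \theta$, $T_2 = r \sin \theta$. The following symbolic sketch is an illustrative check (not part of the derivation) that reproduces the stated values:

```python
import sympy as sp

T1, T2, r, th, rho = sp.symbols('T1 T2 r theta rho', positive=True)
A = sp.pi * rho**2  # area of the disk

phi1, phi2 = T1 * T2, (T2**2 - T1**2) / 2
Theta = -sp.Rational(1, 4) * (T1**2 + T2**2 - rho**2) * T2
Gamma = sp.Rational(1, 4) * (T1**2 + T2**2 - 3 * rho**2) * T2

def disk_integral(f):
    """Integrate f(T1, T2) over the disk of radius rho (polar coordinates)."""
    g = f.subs({T1: r * sp.cos(th), T2: r * sp.sin(th)}) * r
    return sp.integrate(g, (th, 0, 2 * sp.pi), (r, 0, rho))

def dot_grad(f, g):
    return sp.diff(f, T1) * sp.diff(g, T1) + sp.diff(f, T2) * sp.diff(g, T2)

M    = disk_integral(phi1**2 + phi2**2)
J_TT = disk_integral(dot_grad(Theta, Theta))
J_TG = disk_integral(dot_grad(Theta, Gamma))
J_GG = disk_integral(dot_grad(Gamma, Gamma))
# expected: M = A^3 / (12 pi^2), (J_TT, J_TG, J_GG) = A^3 / (24 pi^2) * (1, -1, 7)
```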
Finally, the quantities defining the one-dimensional model in
equation~(\ref{eq:beam-1d-reduction-result-generic}) are calculated as
\begin{equation}
\begin{array}{rll}
\xi_0 (\varepsilon, 0) & = & 0\\
\xi_1 (\varepsilon, 0) & = & \rho^2 \, p (\varepsilon)
\, \left( - \frac{\mathrm{d} p}{\mathrm{d} \varepsilon}
(\varepsilon) \right) \, \frac{- 1 + 7 \, c_{\Gamma}
(\varepsilon)}{12 \, (1 + \varepsilon)}\\
D_{1 1} (\varepsilon, 0) & = & \left( p (\varepsilon)
\, \frac{\mathrm{d} p}{\mathrm{d} \varepsilon} (\varepsilon)
\right)^2 \, \frac{\left( \pi \, \rho^2
\right)^3}{24 \, \pi^2} \left( \frac{2 \,
w_{\text{tr}}' (\varepsilon)}{1 + \varepsilon} + p^2
(\varepsilon) \, G_{\text{t}} (\varepsilon) \, \left( 1
+ 7 \, c_{\Gamma}^2 (\varepsilon) \right) \right).
\end{array} \label{eq:beam-1d-reduction-result}
\end{equation}
\subsection{Buckling analysis of the one-dimensional model}
We turn to the analysis of the buckling problem based on the one-dimensional
model just derived. We denote by $\varepsilon^{\star}$ the axial strain in the
unbuckled configuration, and by $u (S)$ and $v (S)$ the longitudinal and
transverse displacement associated with the buckling mode. The center-line
position is therefore written, in the buckled configuration of
figure~\ref{fig:thick-Euler-buckling}c, as $\tmmathbf{r} (S) = v (S) \,
\tmmathbf{e}_2 + \left( S \, (1 + \varepsilon^{\star}) + u (S)
\right) \, \tmmathbf{e}_3$.
The axial strain $\varepsilon (S)$ and the rotation $\theta (S)$ are defined
by $\tmmathbf{r}' (S) = (1 + \varepsilon (S)) \, \tmmathbf{d}_3
(\theta (S))$ where $(\tmmathbf{d}_2 (\theta), \tmmathbf{d}_3 (\theta)) =
\left( \cos \theta \, \tmmathbf{e}_2 + \sin \theta \,
\tmmathbf{e}_3, - \sin \theta \, \tmmathbf{e}_2 + \cos \theta
\, \tmmathbf{e}_3 \right)$ is the rotated basis, see
equation~(\ref{eq:rPrime-epsilon-d3}). The bending strain is $\kappa (S) =
\theta' (S)$ from equation~(\ref{eq:kappa-i}). This yields the strain in terms
of the displacement as
\begin{equation}
\begin{array}{rll}
\varepsilon (S) & = & - 1 + \sqrt{(1 + \varepsilon^{\star} + u' (S))^2 +
v^{\prime 2} (S)}\\
\kappa (S) & = & \frac{\mathrm{d}}{\mathrm{d} S} \left( \tan^{- 1} \frac{v'
(S)}{1 + \varepsilon^{\star} + u' (S)} \right) .
\end{array} \label{eq:bending-strain}
\end{equation}
The buckling problem is governed by the total potential energy
\begin{equation}
\Psi^{\star} [u, v] = \Phi_{(2)}^{\star} [\varepsilon, \kappa] - P
\, u (\ell), \label{eq:beam-Psi}
\end{equation}
where $\Phi_{(2)}^{\star}$ is the one-dimensional strain energy obtained in
section~\ref{ssec:beam-1d-model} and $P$ is the buckling load applied on the
plane in sliding contact with the endpoint of the cylinder. For the sake of
definiteness, we analyze buckling under force control rather than displacement
control; this makes no difference for the calculation of the critical loads.
We proceed to identify the boundary conditions applicable to the
one-dimensional model. By inserting the microscopic
displacement~(\ref{eq:Euler-buckling-microscopic-displacement}) into the
sliding conditions~(\ref{eq:Euler-buckling-sliding-condition}), we find that
the following boundary conditions must hold on both ends:
$\tmmathbf{d}_{\alpha} \cdot \tmmathbf{e}_3 = 0$ (which is equivalent to
$\theta = 0$), $\varepsilon' = 0$ and $\kappa' = 0$. In addition, the bottom
support is fixed, which yields $u (0) = 0$. The following kinematic boundary
conditions are therefore applicable,
\begin{equation}
\begin{array}{lllllll}
u (0) = 0 & \theta (0) = 0 & \theta (\ell) = 0 & \theta'' (0) = 0 &
\theta'' (\ell) = 0 & \varepsilon' (0) = 0 & \varepsilon' (\ell) = 0.
\end{array} \label{eq:stubby-cyl-boundary-conditions}
\end{equation}
The high-order boundary conditions on $\kappa' = \theta''$ are admissible in the
variational problem of equilibrium as the
energy~(\ref{eq:bending-phi2-anticipation}) depends on $\kappa'' = \theta'''$
when $\xi_1 \neq 0$. The high-order boundary conditions on $\varepsilon'$ are
normally not admissible since $\xi_0 = 0$ and the energy depends on $\varepsilon'$
but not on $\varepsilon''$; this points to the fact that boundary layers occur
generically near the boundaries, as has been known since the work of Saint-Venant. Such
layers are nevertheless absent for the particular choice of sliding boundaries
made here; this will enable us to ultimately satisfy all boundary conditions,
even if the problem looks ill-posed from a variational standpoint.
A principle of virtual work is obtained by inserting the
strain~(\ref{eq:bending-strain}) into the total potential energy
$\Psi^{\star}$, and by calculating the first variation with respect to the
unknowns $u$ and $v$.
To characterize the fundamental solution, we require that $\Psi^{\star} [u,
v]$ is stationary at $u \equiv 0$ and $v \equiv 0$: this yields the condition
$\int_0^{\ell} \left( \frac{\partial W_{\text{hom}}}{\partial \varepsilon}
(\varepsilon^{\star}, 0) - P \right) \, \hat{u}' (S) \,
\mathrm{d} S = 0$ for any $\hat{u} (S)$ such that $\hat{u} (0) = 0$, after taking
into account the identity $\xi_0 (\varepsilon^{\star}, 0) = 0$ in
equation~(\ref{eq:beam-1d-reduction-result}) and the symmetry
properties~(\ref{eq:bending-phi2-symmetries}). Therefore, the fundamental
solution selects the axial strain $\varepsilon^{\star}$ such that
\begin{equation}
P = \frac{\partial W_{\text{hom}}}{\partial \varepsilon}
(\varepsilon^{\star}, 0) . \label{eq:beam-fundamental-equilibrium}
\end{equation}
We have just recovered the force-displacement relation of our particular
material in simple traction.
The bifurcation condition is found by setting to zero the second variation of
$\Psi^{\star} [u, v]$ about the fundamental solution $u \equiv 0$ and $v
\equiv 0$. With the help of a symbolic calculation language, we obtain the
following variational problem for the critical strain $\varepsilon^{\star} =
\varepsilon_{\text{cr}}$ and the buckling mode $\overline{v} (S)$: for any
$\hat{v}$ such that $\hat{v}' (0) = \hat{v}''' (0) = \hat{v}'
(\ell) = \hat{v}''' (\ell) = 0$,
\begin{equation}
\int_0^{\ell} \left[ \left( 1 + \varepsilon_{\text{cr}} \right) \left(
\frac{\partial W_{\text{hom}}}{\partial \varepsilon} \right)_{\text{cr}}
\, \overline{v}' \, \hat{v}' + \left( \frac{\partial^2
W_{\text{hom}}}{\partial \kappa^2} \right)_{\text{cr}} \, \left(
\overline{v}'' + (\xi_1)_{\text{cr}} \, \overline{v}'''' \right)
\, \left( \hat{v}'' + (\xi_1)_{\text{cr}} \, \hat{v}''''
\right) + (D_{1 1})_{\text{cr}} \, \overline{v}'''
\, \hat{v}''' \right] \, \mathrm{d} S = 0.
\label{eq:beam-bifurcation-eq}
\end{equation}
A decoupled eigenvalue problem is obtained for the longitudinal displacement
$u (S)$ as well but it is not reported here as it characterizes necking
instabilities, which we ignore. In the equation above, all quantities bearing the
subscript `$\text{cr}$' are evaluated in the fundamental solution, i.e.,
$(f)_{\text{cr}} = f \left( \varepsilon_{\text{cr}}, 0 \right)$.
It is interesting to contrast equation~(\ref{eq:beam-bifurcation-eq}) with the
bifurcation equation predicted by a classical beam model, which ignores the
gradient effect. The latter can be recovered by setting $\Phi^{\star}_{(2)}
[\varepsilon, \kappa] = \int_0^{\ell} W_{\text{hom}} (\varepsilon, \kappa)
\, \mathrm{d} S$ in equation~(\ref{eq:bending-phi2-anticipation}) and
hence corresponds to $(\xi_1)_{\text{cr}} = 0$ and $(D_{1
1})_{\text{cr}} = 0$; this yields a different bifurcation equation, namely
\[ \int_0^{\ell} \left[ \left( 1 + \varepsilon_{\text{cr}} \right) \left(
\frac{\partial W_{\text{hom}}}{\partial \varepsilon} \right)_{\text{cr}}
\, \overline{v}' \, \hat{v}' + \left( \frac{\partial^2
W_{\text{hom}}}{\partial \kappa^2} \right)_{\text{cr}} \,
\overline{v}'' \, \hat{v}'' \right] \, \mathrm{d} S = 0
\text{\quad (classical beam model)} . \]
Here, $\left( \frac{\partial W_{\text{hom}}}{\partial \varepsilon}
\right)_{\text{cr}} = P$ is the applied load, see
equation~(\ref{eq:beam-fundamental-equilibrium}), and $\left( \frac{\partial^2
W_{\text{hom}}}{\partial \kappa^2} \right)_{\text{cr}}$ is the incremental
bending modulus. Comparison with equation~(\ref{eq:beam-bifurcation-eq}) shows
that our asymptotic one-dimensional model corrects the classical buckling
analysis of beams in two ways, which are important for thick cylinders: it
makes use of the modified bending strain, $\tilde{\kappa} = \overline{v}'' +
(\xi_1)_{\text{cr}} \, \overline{v}''''$ instead of the standard
bending strain $\overline{v}''$, and it takes into account the energy cost
associated with the {\tmem{gradient}} of curvature $\overline{v}'''$, through
the term proportional to $D_{11}$.
We return to the asymptotically correct model, and proceed to solve the
bifurcation equation~(\ref{eq:beam-bifurcation-eq}). An ordinary differential
equation with constant coefficients can be obtained by integration by parts
and elimination of the virtual quantity $\hat{v}$. In view of the kinematic
boundary conditions $v' (0) = v''' (0) = v' (\ell) = v''' (\ell) = 0$, a
simple calculation shows that the first buckling mode is $\overline{v} (S) =
\frac{1 - \cos \left( k \, S \right)}{2}$ where $k =
\frac{\pi}{\ell}$, and that the critical strain $\varepsilon_{\text{cr}}$
is selected by the dispersion equation
\begin{equation}
\left( 1 + \varepsilon_{\text{cr}} \right) \, \frac{\partial
W_{\text{hom}}}{\partial \varepsilon} \left( \varepsilon_{\text{cr}}, 0
\right) + \frac{\partial^2 W_{\text{hom}}}{\partial \kappa^2} \left(
\varepsilon_{\text{cr}}, 0 \right) \, \left( 1 - \xi_1 \left(
\varepsilon_{\text{cr}}, 0 \right) \, k^2 \right)^2 \, k^2
+ D_{1 1} \left( \varepsilon_{\text{cr}}, 0 \right) \, k^4
= 0. \label{eq:beam-dispersion}
\end{equation}
This implicit equation for the first buckling load $\varepsilon_{\text{cr}}$
is valid for finite $\varepsilon_{\text{cr}}$. For a long beam, {\tmem{i.e.}},
when $\ell / \rho$ is large and $\left(k \, \rho\right)$ is small, we
can seek an expansion of the critical strain $\varepsilon_{\text{cr}}$ in
powers of the aspect-ratio parameter $e = \frac{k \,
\rho}{\sqrt{\pi}} = \frac{\sqrt{\pi} \, \rho}{\ell}$,
{\tmem{i.e.}},
\begin{equation}
e^2 = \frac{\pi \, \rho^2}{\ell^2} . \label{eq:beam-e2}
\end{equation}
With the help of a symbolic calculation language, the series
$\varepsilon_{\text{cr}}$ satisfying the dispersion
equation~(\ref{eq:beam-dispersion}) is found as
\begin{equation}
\varepsilon_{\text{cr}} = - \frac{\pi \, \chi_0}{4} \,
e^2 - \pi^2 \, \left( \chi_1 - (\chi_2 + \chi_4) \,
\chi_0 + \frac{2 + \chi_3}{32} \, \chi_0^2 \right) \, e^4
+\mathcal{O} (e^6) \label{eq:beam-1d-model-epsilon-cr}
\end{equation}
where the $\chi_i$'s are dimensionless parameters from the one-dimensional
model,
\begin{equation}
\begin{array}{lllll}
\chi_0 = \frac{4}{\rho^2 \, \tilde{\chi}} \, \left(
\frac{\partial^2 W_{\text{hom}}}{\partial \kappa^2} \right)_0 & \chi_1 =
\frac{1}{\rho^4 \, \tilde{\chi}} \, (D_{1 1})_0
& \chi_2 = \frac{\xi_1 (0, 0)}{2 \, \rho^2} & \chi_3 =
\frac{1}{\tilde{\chi}} \, \left( \frac{\partial^3
W_{\text{hom}}}{\partial \varepsilon^3} \right)_0 & \chi_4 = \frac{1}{4
\, \rho^2 \, \tilde{\chi}} \, \left(
\frac{\partial^3 W_{\text{hom}}}{\partial \varepsilon \, \partial
\kappa^2} \right)_0
\end{array}, \label{eq:chi-i}
\end{equation}
where $\tilde{\chi} = \left( \frac{\partial^2 W_{\text{hom}}}{\partial
\varepsilon^2} \right)_0$. Here the subscript `$0$' means that the quantity
inside the corresponding parentheses must be evaluated in the undeformed
configuration, $(f)_0 = f (0, 0)$. To derive the
expansion~(\ref{eq:beam-1d-model-epsilon-cr}), we have used the fact that
there is no pre-stress in the reference configuration, $\frac{\partial
W_{\text{hom}}}{\partial \varepsilon} (0, 0) = 0$, as shown by combining
equations~(\ref{eq:beam-Whom}) and~(\ref{eq:beam-wtr-prime-0}): this ensures
that $\varepsilon_{\text{cr}} \rightarrow 0$ for $e \rightarrow 0$.
\subsection{Expansion of the critical load}
With the help of equations~(\ref{eq:beam-wtr-prime-0}--\ref{eq:beam-Y0-Y0p})
from the appendix, equations~(\ref{eq:beam-1d-reduction-coefficients})
and~(\ref{eq:beam-1d-reduction-result}) for a circular cross-section
yield, in the limit $\varepsilon \rightarrow 0$,
\[ \begin{array}{lll}
c_{\Gamma} (0) = \frac{1 + \nu_0}{\nu_0} \qquad & \xi_1 (0, 0) = \frac{7 + 6
\, \nu_0}{12} \, \rho^2 \qquad & D_{1 1} (0, 0) = Y_0
\, \frac{\left( \pi \, \rho^2
\right)^3}{48 \, \pi^2} \, \frac{7 + 14 \,
\nu_0 + 8 \, \nu_0^2}{1 + \nu_0} .
\end{array} \]
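These limit values follow from equations~(\ref{eq:beam-1d-reduction-coefficients}) and~(\ref{eq:beam-1d-reduction-result}) together with the small-strain values from the appendix, $p (0) = 1$, $\frac{\mathrm{d} p}{\mathrm{d} \varepsilon} (0) = - \nu_0$, $Y_{\text{t}} (0) = Y_0$, $G_{\text{t}} (0) = Y_0 / (2 \, (1 + \nu_0))$ and $w_{\text{tr}}' (0) = 0$. The symbolic sketch below verifies the substitution (an illustrative check; the small-strain values are taken from the appendix):

```python
import sympy as sp

nu0, Y0, rho = sp.symbols('nu0 Y0 rho', positive=True)
A = sp.pi * rho**2

# small-strain values (appendix): p(0) = 1, p'(0) = -nu0, Y_t(0) = Y0,
# G_t(0) = Y0 / (2 (1 + nu0)), w_tr'(0) = 0 (no pre-stress)
p0, dp0, Yt0 = 1, -nu0, Y0
Gt0 = Y0 / (2 * (1 + nu0))
wtr_p0 = 0

# c_Gamma(0), eq. (beam-1d-reduction-coefficients) at eps = 0
c0 = Yt0 / (2 * Gt0 * p0 * (-dp0) * 1)

# xi_1(0, 0) and D_11(0, 0), eq. (beam-1d-reduction-result) at eps = 0
xi1_0 = rho**2 * p0 * (-dp0) * (-1 + 7 * c0) / 12
D11_0 = ((p0 * dp0)**2 * A**3 / (24 * sp.pi**2)
         * (2 * wtr_p0 + p0**2 * Gt0 * (1 + 7 * c0**2)))
```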
Using this, the
relations~(\ref{eq:beam-simple-traction-E-Sigma}--\ref{eq:beam-Y0-Y0p}) and
the expression of $W_{\text{hom}}$ found in~(\ref{eq:beam-Whom}), we can
calculate the coefficients appearing in~(\ref{eq:chi-i}) as
\begin{equation}
\begin{array}{lllll}
\chi_0 = 1 \quad & \chi_1 = \, \, \, \frac{7 + 14
\, \nu_0 + 8 \, \nu_0^2}{48 \, (1 +
\nu_0)} \quad &
\chi_2 = \frac{7 + 6 \, \nu_0}{24} \quad & \chi_3 = \frac{Y_0'}{Y_0} &
\quad \chi_4 = \frac{1}{16} \, \left( \frac{Y_0'}{Y_0} - 2 \,
\nu_0 \right) .
\end{array}
\end{equation}
Inserting into equation~(\ref{eq:beam-1d-model-epsilon-cr}), we obtain our
final expression for the first critical load of the cylinder as a function of
the aspect-ratio parameter,
\begin{equation}
\varepsilon_{\text{cr}} = - \frac{\pi \, e^2}{4} +
\frac{\pi^2 \, e^4}{48} \, \left( \frac{3 \,
Y_0'}{2 \, Y_0} + 4 - \frac{\nu_0 \, \left( 1 + 2
\, \nu_0 \right)}{1 + \nu_0} \right) +\mathcal{O} (e^6) .
\label{eq:beam-epsilon-expansion}
\end{equation}
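As a cross-check, the $e^4$ coefficient can be recovered by inserting the values of the $\chi_i$'s above into the expansion~(\ref{eq:beam-1d-model-epsilon-cr}). The symbolic sketch below keeps the ratio $y = Y_0' / Y_0$ abstract (an illustrative check, not part of the derivation):

```python
import sympy as sp

nu0 = sp.symbols('nu0', positive=True)
y = sp.symbols('y', real=True)  # y stands for the ratio Y0'/Y0

chi0 = 1
chi1 = (7 + 14 * nu0 + 8 * nu0**2) / (48 * (1 + nu0))
chi2 = (7 + 6 * nu0) / 24
chi3 = y
chi4 = (y - 2 * nu0) / 16

# e^4 coefficient of eps_cr in eq. (beam-1d-model-epsilon-cr)
coef4 = -sp.pi**2 * (chi1 - (chi2 + chi4) * chi0 + (2 + chi3) / 32 * chi0**2)

# e^4 coefficient in eq. (beam-epsilon-expansion)
expected = sp.pi**2 / 48 * (3 * y / 2 + 4 - nu0 * (1 + 2 * nu0) / (1 + nu0))

residual = sp.simplify(coef4 - expected)  # expect 0
```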
This is identical to the result of~\citet{scherzinger1998asymptotic}; we
refer the reader to their paper for a comparison of this expansion with
finite-element simulations for a finite aspect-ratio $e$. Note that the
correction to the classical Euler prediction $\varepsilon_{\text{cr}} = -
\frac{\pi \, e^2}{4}$, {\tmem{i.e.}}, the term proportional to
$e^4$ in equation above, depends on both material nonlinearity (through the
non-linear elastic modulus $Y_0'$) and on geometric nonlinearity (through the
other terms in the parentheses).
\citet{scherzinger1998asymptotic} observed that classical models such as the
Euler beam model fail to capture the correction of order $e^4$ in
equation~(\ref{eq:beam-epsilon-expansion}) and are therefore inappropriate for
the analysis of stubby or thick structures; we concur with this statement. It
has apparently gone unnoticed that this difficulty can be overcome by using a
refined one-dimensional model capturing the gradient effect, as we just did:
when this is done in an asymptotically correct way, the expansion of the
critical buckling strain is correctly predicted in terms of the aspect-ratio.
Unlike in earlier work, we have split the analysis of this buckling problem
into two distinct tasks: deriving a one-dimensional model on the one hand, and
carrying out a bifurcation analysis on the other hand. Keeping these two tasks
separate is not only arguably more elegant, it also avoids the need to
reinvent the wheel for every buckling problem: if one were to study the
buckling of a stubby circular {\tmem{ring}} or the {\tmem{post-buckling}} of a
stubby Elastica in compression, for instance, one could reuse the
one-dimensional structural model from section~\ref{ssec:beam-1d-model} and
simply update the buckling analysis to reflect the geometry of interest.
\section{Discussion}
We have proposed an asymptotic method for constructing one-dimensional models,
see section~\ref{sec:asymptotic-1d-reduction}. The method achieves dimension
reduction by relaxing the microscopic displacement. Concretely, it is
implemented as a straightforward (albeit lengthy) series of steps, as
described in \ref{app:compendium}. It builds on the general recipe
for dimension reduction published in our previous
work~\citep{LESTRINGANT2020103730}. The method yields an asymptotically
exact, variational model that accounts for the gradient effect. It also
accounts for geometric and material nonlinearities, and thereby may help
broaden the range of applicability of rod theories significantly. With a view
to illustrating the method, we have treated the linear twisting of an elastic
cylinder, including higher-order effects; in
equation~(\ref{eq:twisting-final-phi}) we have derived a simple
one-dimensional strain energy potential that governs the equilibrium, which is
new to the best of our knowledge. We have also applied our reduction method to
the Euler buckling of a beam having a moderate aspect-ratio: the expansion of
the critical load in powers of the aspect-ratio from earlier work has been
recovered based on a high-order rod model. The capabilities of the method
extend well beyond these two simple examples; it can be readily applied to structures
involving finite strain, arbitrary pre-stress distributions or low material
symmetries, which will be the subject of future work.
For more complex geometries, the analytical approach adopted here may no
longer be tractable, and the quantities $W_{\text{hom}} (\tmmathbf{h})$,
$\tmmathbf{B} (\tmmathbf{h})$, etc.~defining the one-dimensional model may
have to be found by solving variational problems on the cross-section
numerically; the finite-element method is perfectly suited to this task. In
this approach, the 3D elasticity problem for the slender elastic body is split
into 2D+1D problems, where the 2D problems are microscopic and are formulated
in the cross-section while the 1D one is a macroscopic structural problem.
This splitting approach makes the solution considerably easier, which is
precisely the point of dimension reduction.
The rod models that are derived by our method include a kinematic constraint
which ensures that the tangent to the center-line $\tmmathbf{r}' (S)$ stays
aligned with the director $\tmmathbf{d}_3 (S)$, see
equation~(\ref{eq:rPrime-epsilon-d3}). One-dimensional model of this kind are
referred to as {\tmem{unshearable}} but this qualifier is misleading: shear
can take place at the microscopic scale in our approach, even if it is not
exposed in the one-dimensional model. As discussed at the very end
of~{\textsection}\ref{ssec:centerline-based-parameterization}, the directors
capture the deformed cross-sections in an average sense only: the
cross-sections are by no means constrained to remain aligned with the directors
$\tmmathbf{d}_1 (S)$ and $\tmmathbf{d}_2 (S)$, {\tmem{i.e.}}, to remain
perpendicular to the center-line, not even in an average sense. For example,
in a rod made up of an anisotropic material that is very stiff in a direction
making a 45$^{\circ}$ angle with the axis of the rod, the cross-sections
rotate about $\tmmathbf{d}_2$ (and therefore tilt along the axis) by an angle
$a \, \varepsilon$ proportional to the axial stretch $\varepsilon$. This
microscopic shear is accounted for in our approach but it is not reflected in
the directors $\tmmathbf{d}_i (S)$: their assigned role is to keep track of
the twisting of the cross-sections about the tangent, {\tmem{not}} to provide
a faithful representation of the microscopic solution. To some extent, our
approach therefore has the same capabilities as Timoshenko models, except that
the microscopic shear is dealt with {\tmem{internally}}. The benefit is that a
minimal set of degrees of freedom is presented to the user. The only minor
complication is that, in order to block the rotation at the endpoints, one has
to look up the average orientation of the terminal cross-sections from the
microscopic solution, as the vectors $\tmmathbf{d}_1 (S)$ and $\tmmathbf{d}_2
(S)$ cannot be used directly.
In this paper, we have carried out dimension reduction without making scaling
assumptions on the intensity of the loading. This is not a standard way of
proceeding. It is the special form of the external force potential in
equation~(\ref{eq:full-problem-total-potential-energy}) that plays the role of
the standard scaling assumptions, as discussed at the end of
{\textsection}\ref{sec:nonlinear-energy-formulation}. Let us briefly expose
how our method can be extended to handle non-standard scaling assumptions for
the loading. Consider for instance the case where the distributed applied
torque is so large that it can induce shear, by tilting the cross-sections
towards the center-line. Such a load cannot be represented by
equation~(\ref{eq:full-problem-total-potential-energy}) since by design the
directors $\tmmathbf{d}_1$ and $\tmmathbf{d}_2$ remain perpendicular to the
center-line. The solution is to introduce an additional kinematic variable,
similar to Timoshenko's shear angle, and to modify the procedure as follows.
The new internal degree of freedom is appended to the set of macroscopic
strain $\tmmathbf{h}$, meaning that it is fixed during the relaxation
procedure and is a variable of the one-dimensional model; it is coupled to the
large applied torque in the potential energy $\Psi$. The result of this
modified relaxation procedure is an {\tmem{asymptotic}} Timoshenko-like model,
in which the average microscopic shear is explicitly represented.
It is also possible to extend the method to the case where the geometric or
elastic properties of the body vary slowly in the longitudinal
direction---such as the case of rods having non-uniform cross-sectional
dimensions---with little additional work. This extension is discussed at the
very end of our previous paper~\citep{LESTRINGANT2020103730}: in this case
the operator $\tmmathbf{C}_{\tmmathbf{h}}^{(1)}$ gets an explicit dependence
on the axial variable $S$, and an additional term proportional to $\left.
\frac{\partial \tmmathbf{C}_{\tmmathbf{h}}^{(1)} (S)}{\partial S}
\right|_{\tmmathbf{h}=\tmmathbf{h} (S)}$ appears in the one-dimensional
potential $\Phi_{(2)}^{\star}$. This, along with other extensions, will be
described in follow-up work.
\paragraph{Acknowledgments}This paper was prepared using {T\kern-.1667em\lower.5ex\hbox{E}\kern-.125emX\kern-.1em\lower.5ex\hbox{\textsc{m\kern-.05ema\kern-.125emc\kern-.05ems}}}, an
outstanding and freely available scientific text
editor~\citep{Hoeven-The-jolly-writer.-Your-2020}.
\section{Introduction}
A major development in complex nonabelian Hodge theory was made by Hitchin~\cite{Hi} and Simpson~\cite{S1}. They constructed an equivalence between the category of irreducible representations of the fundamental group of a compact K\"ahler manifold and the category of stable Higgs bundles with vanishing Chern classes, which generalizes the Donaldson-Uhlenbeck-Yau correspondence for stable vector bundles. As a $p$-adic analogue, Faltings~\cite{Fal05} constructed a so-called Simpson functor between the category of Higgs bundles and that of generalized representations of the \'etale fundamental group for curves over a $p$-adic field.\\[.2cm]
In this note we investigate the relationship between Faltings' $p$-adic Simpson correspondence and Scholze's Riemann-Hilbert correspondence. Let $(\frakX,\frakD)$ be a log scheme over $Spec(\Ok)$ with toroidal singularities, and let $(V,\nabla, \Fil)$ be a filtered logarithmic de Rham bundle over $(X,D)_k$ and $(E,\theta):=(\Gr(V,\Fil),\Gr(\nabla)\cdot \xi^{-1})$ the graded Higgs bundle attached to $(V,\nabla, \Fil)$, where $\xi = p + \left[\left(-p,(-p)^{\frac1p},(-p)^{\frac1{p^2}}\cdots \right)\right] \in \BdRp$. The natural embedding $k\rightarrow \BdRp$ induces a lifting of $(X, D)_k$ over $\BdRp$, and under this lifting $(E,\theta)$ corresponds to a generalized geometric representation $\bV$ of the geometric fundamental group $\pi_1((X\setminus D)_{\bark})$ by Faltings' $p$-adic Simpson correspondence from the category of small Higgs bundles to the category of small generalized representations. One notes that $(E,\theta)$ is indeed small as the Higgs field is nilpotent. On the other hand, by Scholze's Riemann-Hilbert correspondence there is a $\bBdRp$-local system $\bM$ over $\Xproet$ defined by $\bM= \Fil^0(V\otimes\OBdR)^{\nabla=0}$. After a slight modification of Faltings' local construction (3.1) we show the following comparison theorem.
\begin{thm}[Theorem~\ref{main01}] $\bV \cong \bM/\xi\bM$.
\end{thm}
\begin{remark} Theorem 1.1 answers a question raised by Liu-Zhu~\cite{LZ} about the relation between their $p$-adic Simpson correspondence and Faltings' $p$-adic Simpson correspondence. It should be regarded as a general form of a theorem proven by Tan-Tong~\cite{TT19}. If a filtered de Rham bundle $(V,\nabla, \text{Fil})$ underlies a Fontaine-Faltings module, then the crystalline representation corresponding to $(V,\nabla, \text{Fil})$
restricted to the geometric fundamental group coincides with the generalized representation corresponding to the graded Higgs bundle $\text{Gr}_\text{Fil}(V,\nabla)$.
\end{remark}
Theorem 1.1 has the following consequence. Starting with an \'etale local system $\bL$ over $(X\setminus D)_K$ with unipotent local monodromy around $D$ we may assume $\bL$ to be very small after taking an \'etale base change on $(X\setminus D)$ and obtain the logarithmic Higgs bundle $(E,\theta)_\bL$ corresponding to $\bL$ as a very small generalized geometric local system by Faltings' functor. It is known by Faltings that
$(E,\theta)_\bL$ has trivial Chern classes and is slope semistable with respect to any ample line bundle. On the other hand, Scholze (for $D=\emptyset$) and Diao-Lan-Liu-Zhu (for the general case) defined a filtered $\OXet$-module with integrable logarithmic connection $(V,\nabla,\Fil)^{\OBdR}:=v_*(\OBdR\otimes\bL)$ attached to $\bL$. Subsequently, we introduce the associated $\OXet$-Higgs bundle $(E,\theta)^{\mO\bB_\text{dR}}:=
(Gr(V,\nabla)^{\mO\bB_\text{dR}}, Gr(\nabla^{\mO\bB_\text{dR}}))$. Indeed $(E,\theta)^{\mO\bB_\text{dR}}$ has trivial Chern classes, since $(V,\nabla, \text{Fil})$ is a filtered logarithmic de Rham bundle defined over a field of characteristic zero and the eigenvalues of the residue of the connection vanish.
\begin{thm}\label{thm:subRep}
$\mathrm{i)}$ There is an inclusion of Higgs bundles
\[(E,\theta)^{\mO\bB_\text{dR}}\subset (E,\theta)_\bL.\]
Consequently, $(E,\theta)^{\mO\bB_\text{dR}}$ is a semistable Higgs bundle with trivial Chern classes.\\[.2cm]
$\mathrm{ii)}$ Under Faltings' Simpson correspondence, the above sub-Higgs bundle corresponds to an $\mathcal O_{\bC_p}$-geometric subrepresentation
\[\bW^\text{geo}_{\mathcal O_{\bC_p}}\subset \bL^\text{geo}\otimes \mathcal O_{\bC_p}.\]
\end{thm}
\begin{remark}
Faltings conjectured that any semistable small Higgs bundle with trivial Chern classes corresponds to a
$\bC_p$-geometric representation.
Theorem~\ref{thm:subRep}(ii) is probably the easiest test case of his conjecture, as $(E,\theta)^{\mO\bB_\text{dR}}$ is contained in the Higgs bundle $(E,\theta)_\bL$ arising from the $\bC_p$-geometric representation $\bL^\text{geo}\otimes \bC_p$. The coefficient reduction from $\hOXp$ to $\mathcal O_{\bC_p}$ for the generalized representation corresponding to $(E,\theta)^{\mO\bB_\text{dR}}$ actually comes from $\bL^\text{geo}$, by using Faltings' functor of twisted pull-back of Higgs bundles and the fact that the twisted pull-back is trivializable.
\end{remark}
\begin{corollary}
If $\bL$ is geometrically absolutely irreducible and $(E,\theta)^{\mO\bB_\text{dR}}\not=0$,
then $\bL$ is a de Rham representation.
\end{corollary}
In general, we propose the following conjecture:
\begin{conjecture}[Descending]
The geometric sublocal system in Theorem 1.2 descends to a $\bZ_p$-\'etale subrepresentation
\[ \bW\subset \bL. \]
Consequently, $\bW$ is a de Rham representation.
\end{conjecture}
The descending conjecture divides into two parts:
\begin{itemize}
\item[i).] The $\mathcal{O}_{\bC_p}$-geometric subrepresentation in Theorem 1.2 descends to an $\mathcal{O}_{\bC_p}$-sub-\'etale representation
\[\bW_{\mathcal{O}_{\bC_p}}\subset \bL\otimes\mathcal{O}_{\bC_p}.\]
\item[ii).] The coefficients of $\bW_{\mathcal{O}_{\bC_p}}$ can be reduced to $\bZ_p$.
\end{itemize}
In Section~\ref{sec:subrep} we propose an approach to the first part, which relies on a $p$-adic analogue of Simpson's $\bC^*$-action on Higgs bundles, developed in Section~\ref{sec:C^*}. The action of the Galois group $\text{Gal}(\bar k/k)$ on $X_{\bar k}$ induces a natural action on the category of generalized representations. By carefully checking the construction of Faltings' Simpson correspondence, one finds that the corresponding action on the category of Higgs bundles can essentially be described as the usual Galois action on a Higgs bundle $(E,\theta\cdot\xi)$ over $\bar k$ composed with an extra action on $\xi^{-1}.$ Consequently, we show that the generalized representation corresponding to a graded Higgs bundle over $k$ is $\text{Gal}(\bar k/k)$-invariant and that, conversely, a Higgs bundle corresponding to a Galois-invariant generalized representation has nilpotent Higgs field.
\begin{theorem}
A rank-2 generalized representation is $\text{Gal}(\bar k/k)$-invariant if and only if the corresponding Higgs bundle is graded and defined over $k$ up to an isomorphism.
\end{theorem}
We believe that Theorem 1.7 holds for representations of arbitrary rank.
\begin{conjecture}
A generalized representation is $\text{Gal}(\bar k/k)$-invariant if and only if the corresponding Higgs bundle is graded and defined over $k$ up to an isomorphism.
\end{conjecture}
\section{Preliminaries}
\subsection{Faltings' p-adic Simpson Correspondence}
We recall Faltings' functor~\cite{Fal05} between the category of Higgs bundles and that of generalized representations of the \'etale fundamental group for schemes over a $p$-adic field.
Let $k$ be a complete discrete valuation field of mixed characteristic $(0,p)$ with perfect residue field $\kappa$ of characteristic $p>0$. Denote by $\Ok$ the ring of integers of $k$. Let $(\frakX,\frakD)$ be a proper log scheme over $\Ok$ with toroidal singularities. In particular, $\frakX_k$ is smooth and $\frakD_k$ is a divisor with normal crossings.
\subsubsection{The local correspondence}
Let $\frakU=\Spec(\frakR)$ be a small affine open subset of $\frakX$, which means that $\frakR$ is \'etale over a toroidal model. By adjoining roots of characters of the torus we obtain a sub-extension $\frakR_\infty$ of $\barR$ (the integral closure of $\frakR$ in the maximal \'etale cover of $\frakU_k\setminus\frakD_k$) which is Galois over $\RbarV=\frakR\otimes_{\Ok}\mO_{\overline{k}}$ with group $\Delta_\infty$. Denote $\Delta=\Gal(\barR/\RbarV)$ and let $\hR$, $\hR_\infty$, $\hbR$, $\hRbarV$ be the $p$-adic completions of the corresponding rings.\\[.2cm]
The notions of Higgs modules twisted by $\xi^{-1}$ and of generalized representations were introduced in~\cite{Fal05}:
\begin{defi} $\text{i)}$ A \emph{Higgs module} is a pair $(M,\theta)$:
\begin{itemize}
\item $M$ a finitely generated free $\hRbarV$-module;
\item $\theta\in \End_{\hRbarV}(M) \otimes_\frakR\Omega_{\frakR/{\Ok}}^1\cdot\xi^{-1}$ satisfying the integrability condition $\theta\wedge\theta=0$, where $\xi = p + \left[\left(-p,(-p)^{\frac1p},(-p)^{\frac1{p^2}},\cdots \right)\right] \in W(\varprojlim \mO_{\bC_p}) \subset \BdRp$.
\end{itemize}
The Higgs module $(M,\theta)$ is called \emph{small} if $p^\alpha \mid \theta$ for some $\alpha>\frac1{p-1}$.\\[.2cm]
$\text{ii)}$ A \emph{generalized representation} of $\Delta$ is a pair $(\bV,\rho)$:
\begin{itemize}
\item $\bV$ a finitely generated free $\hbR$-module;
\item $\rho$ a continuous semilinear action of $\Delta$ on $\bV$.
\end{itemize}
\end{defi}
Fix an $\hbR$-basis $e_1,\cdots,e_r$ of a generalized representation $(\bV,\rho)$ and, for each $\delta\in \Delta$, consider the unique linear transformation $c(\delta)\in \GL(\bV)$ satisfying
\[\delta(e_1,\cdots,e_r) = c(\delta)(e_1,\cdots,e_r).\]
Then $c$ defines a class $[c]\in H^1_{\rm cont}(\Delta,\GL(\bV))$ in continuous Galois cohomology. Conversely, giving a continuous semi-$\hbR$-linear action of $\Delta$ on $\bV$ is equivalent to giving a class $[c]\in H^1_{\rm cont}(\Delta,\GL(\bV))$.
The generalized representation $(\bV,\rho)$ is called \emph{small} if $c\equiv0\pmod{p^{2\alpha}}$ in $H^1_{\rm cont}(\Delta,\GL(\bV/p^{2\alpha}))$ for some $\alpha>\frac1{p-1}$. This is equivalent to the existence of a $\Delta$-invariant basis of the free $\hbR/p^{2\alpha}$-module $\bV/p^{2\alpha}$.
\begin{thm}[local version of Faltings' Simpson correspondence] Over the small affine $\frakU$, there exists an equivalence between the category of small Higgs modules and the category of small generalized representations.
\end{thm}
In the following we recall the construction of the equivalence functor from a Higgs module $(M,\theta)$ to the generalized representation $\bV(M,\theta)=(\bV,\rho)$. The underlying module of the generalized representation is defined to be
\[\bV := M\otimes_{\hRbarV} \hbR. \]
The semilinear action $\rho$ is a little more involved and depends on the choice of the following data $(\widetilde{\frakR},\tau,\TT,\widetilde{\TT})$: for a small toroidal affine $\frakU=\Spec(\frakR)$, there exists a log-smooth algebra $\widetilde{\frakR}$ over $A_2({\Ok}):= \Ainf/\xi^2\Ainf$ which lifts $\hRbarV$; any two such lifts are isomorphic. We also choose an $A_2({\Ok})$-linear lifting $\tau \colon \widetilde{\frakR}\rightarrow A_2(\frakR):=\bAinfRh/\xi^2\bAinfRh$ of the inclusion $\frakR\rightarrow \hbR$, a system of parameters $\TT=\{T_1,\cdots,T_d\}$ of $\frakR$ over $\Ok$, and a system of liftings $\widetilde{\TT}=\{\widetilde{T}_1,\cdots,\widetilde{T}_d\}$ of $\TT$ in $\widetilde{\frakR}$.
\begin{equation}
\xymatrix{
A_2({\Ok}) \ar[r] \ar@{->>}[d] & \widetilde{\frakR} \ar[r]^{\tau} \ar@{->>}[d] & A_2(\frakR) \ar@{->>}[d] \\
\hbV \ar[r] & \hRbarV \ar[r] & \hbR\\
}
\end{equation}
Using the fact that $\theta$ behaves like a connection, for any two liftings $\tau,\tau'\colon \widetilde{\frakR} \rightarrow A_2(\frakR)$, Faltings~\cite[page 851]{Fal05} constructs an $\hbR$-linear isomorphism $\beta_{\tau,\tau'} \colon \bV \rightarrow \bV$
\begin{equation}\label{equ:taylorHiggs}
\beta_{\tau,\tau'}(m\otimes1) = \sum_{I\in \bN^d} \theta(\xi\partial)^I(m)/I! \otimes \left(\frac{\tau(\widetilde{T}) - \tau'(\widetilde{T})}{\xi}\right)^I,
\end{equation}
where $m\in M$, $\partial_i=\partial/\partial T_i$, and the obvious divided power structure explains how to divide by the factorials. In particular, for any $\sigma\in \Delta$, the composition $\sigma\circ \tau\colon \widetilde{\frakR}\rightarrow A_2(\frakR)$ is another lifting. One has
\begin{equation}
\beta_{\sigma\circ\tau,\tau}(m\otimes1) = \sum_{I\in \bN^d} \theta(\xi\partial)^I(m)/I! \otimes \left(\frac{\tau(\widetilde{T})^{\sigma} - \tau(\widetilde{T})}{\xi}\right)^I.
\end{equation}
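For later use it is worth recording why smallness makes this series converge. The following estimate is only a sketch, with the normalization $p^\alpha \mid \theta$, $\alpha>\frac1{p-1}$, from the definition of smallness.

```latex
Smallness gives $\theta(\xi\partial)^I(m)\in p^{\alpha|I|}M$, while
$\bigl(\tau(\widetilde T)^{\sigma}-\tau(\widetilde T)\bigr)/\xi\in\hbR$.
By Legendre's formula, $v_p(I!)\le \frac{|I|}{p-1}$, so the $I$-th term of the
series has $p$-adic valuation at least
\[
  \alpha|I| - v_p(I!) \;\ge\; \Bigl(\alpha-\tfrac1{p-1}\Bigr)|I|
  \;\longrightarrow\; \infty \qquad (|I|\to\infty),
\]
and the series defining $\beta_{\sigma\circ\tau,\tau}$ converges $p$-adically.
```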
For any $\sigma\in \Delta$, the composition $\rho(\sigma):=\beta_{\sigma\circ\tau,\tau}\circ(1\otimes \sigma) \colon \bV\rightarrow \bV$
\begin{equation}
\xymatrix@C=3cm@R=1cm{
\bV \ar[r]^-{1\otimes\sigma} \ar[dr]_-{\rho(\sigma)} & \bV \ar[d]^-{\beta_{\sigma\circ\tau,\tau}}\\
& \bV \\
}
\end{equation}
defines a $\sigma$-$\hbR$-semilinear map. In this way, Faltings defined the semilinear action $\rho$ of $\Delta$ on the free $\hbR$-module $\bV$ by sending $\sigma$ to $\rho(\sigma)$.
We note that $\rho$ depends on the choice of the data $(\widetilde{\frakR},\tau,\TT,\widetilde{\TT})$, but there exists a canonical isomorphism between different choices
\[\beta_{\tau,\tau'} \colon (\bV,\rho_{(\widetilde{\frakR},\tau,\TT,\widetilde{\TT})}) \cong (\bV,\rho_{(\widetilde{\frakR}',\tau',\TT',\widetilde{\TT}')}).\]
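As a sanity check (our own illustration, not taken from \cite{Fal05}), consider the rank-one case, where the functor can be written in closed form.

```latex
Let $M=\hRbarV\cdot e$ with $\theta(e)=e\otimes p^{a}\,\frac{dT}{\xi}$ for some
$a>\frac1{p-1}$, so that $(M,\theta)$ is small. Then
$\theta(\xi\partial)^n(e)=p^{an}e$, and the series (\ref{equ:taylorHiggs}) sums
to an exponential:
\[
  \rho(\sigma)(e\otimes 1)
  \;=\; \sum_{n\ge0}\frac{p^{an}}{n!}
        \Bigl(\frac{\tau(\widetilde T)^{\sigma}-\tau(\widetilde T)}{\xi}\Bigr)^{n}
        (e\otimes1)
  \;=\; \exp\Bigl(p^{a}\,\frac{\tau(\widetilde T)^{\sigma}-\tau(\widetilde T)}{\xi}\Bigr)
        \cdot(e\otimes1).
\]
The resulting cocycle takes values in $1+p^{a}\hbR$; in particular the action is
trivial modulo $p^{a}$, reflecting the smallness of the associated generalized
representation.
```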
\subsubsection{The global correspondence}
Let $\frakX$ be proper over $\Spec({\Ok})$ with toroidal singularities. A \emph{small generalized representation} of its fundamental group is a compatible system of small generalized representations on a covering of $\frakX$ by small affines. A \emph{small Higgs bundle} over $\frakX_\Ohbk$ is a vector bundle $E$ on $\frakX_\Ohbk$ together with an endomorphism $\theta$ with values in $\Omega^1_{\frakX/{\Ok}}\cdot\xi^{-1}$ satisfying $\theta\wedge\theta=0$ and $p^\alpha\mid \theta$ for some $\alpha>\frac1{p-1}$.
By gluing the local equivalences, Faltings obtained a global equivalence of categories.
\begin{thm}[Theorem 5 in \cite{Fal05}]\label{thm:Flatings}
For a liftable scheme there exists an equivalence of categories between small generalized representations and small Higgs bundles.
\end{thm}
In the following, we briefly recall Faltings' construction of the global equivalence functor. First, one fixes a lifting $\widetilde{\frakX}$ of $\frakX$ to $A_2({\Ok})$. We note that Faltings' global equivalence genuinely depends on the choice of such a lifting.
Choose a covering $\frakU_i=\Spec(\frakR_i)$ of $\frakX$ by small affines and fix an embedding
\[\tau_i\colon \widetilde{\frakR}_i= \mO_{\widetilde{\frakX}} (\widetilde{\frakU}_i) \rightarrow A_2(\frakR_i)\]
for each $i$, together with local parameters. Over each $\frakU_i$ we have an equivalence of categories
\[(E_i,\theta_i) \mapsto (E_i\otimes_{\frakR_i} \hbR_i,\rho_{\tau_i}).\]
For a global small Higgs bundle $(E,\theta)$ over $\XbarV$, one gets a system of local generalized representations:
\[(E(\frakU_i)\otimes_{\frakR_i}\hbR_i,\rho_{\tau_i})\]
These local generalized representations are glued into a global small generalized representation via the isomorphisms $\beta_{\tau_i,\tau_j}$ over $\frakU_{ij}$.
\subsection{Scholze's Riemann-Hilbert Correspondence}
Let $X$ be a rigid analytic variety or, more generally, a locally noetherian adic space over $\SpaQp$; for example, $X$ could be the Raynaud generic fiber of a smooth formal scheme $\mX$ over $\Spf(\Ok)$. For the pro\'etale site $\Xproet$ and the period sheaves $\OXet$, $\OX$, $\OXp$, $\hOXp$, $\hOX$, $\hOXfp$, $\hOXf$, $\bAinf$, $\bBinf$, $\bBdR$, $\bBdRp$, $\OBinf$, $\OBdRp$, $\OBdR$ on $\Xproet$, we refer to~\cite{Sch13}.
Let $X$ be a smooth adic space over $\Spa(k,\OX_k)$.
We recall some definitions and results in \cite{Sch13}.
\begin{defi}\label{def:OBdR-l.s.}
A \emph{$\bBdRp$-local system} \addd{\cite[Definition 7.1]{Sch13}} is a sheaf of $\bBdRp$-modules that is locally on $\Xproet$ free of finite rank. An \emph{$\OBdRp$-module with integrable connection} \addd{\cite[Definition~7.1]{Sch13}} is a sheaf of $\OBdRp$-modules that is locally on $\Xproet$ free of finite rank, together with an integrable connection satisfying the Leibniz rule with respect to the derivation on $\OBdRp$. A \emph{lisse $\widehat{\bZ}_p$-sheaf} \addd{\cite[Definition~8.1]{Sch13}} $\bL$ on $\Xproet$ is a sheaf of $\widehat{\bZ}_p$-modules which is locally isomorphic to $\widehat{\bZ}_p\otimes_{\bZ_p}M$ for some finitely generated $\bZ_p$-module $M$.
\end{defi}
Similarly, one defines $\hOXp$-local systems, $\hOX$-local systems, $\OC$-local systems, and filtered $\OX$-modules with integrable connection.
Scholze also constructed a fully faithful functor from filtered $\OX$-modules with integrable connection to $\bBdRp$-local systems.
\begin{thm}[Theorem 7.6 in \cite{Sch13}] \label{thm:Scholze} Let $\mE$ be a filtered $\OX$-module with integrable connection. Then $\bM= \Fil^0(\mE\otimes_{\OX} \OBdR)^{\nabla=0}$ is a $\bBdRp$-local system, and the functor sending $\mE$ to $\bM$ is fully faithful from the category of filtered $\OX$-modules with integrable connection to the category of $\bBdRp$-local systems. Moreover, $\mE$ can be reconstructed from $\bM$ via $\mE_{\mathrm et} \cong v_*(\bM \otimes_{\bBdRp} \OBdR)$.
\end{thm}
We note that Diao-Lan-Liu-Zhu obtained an algebraic logarithmic version of this result.
\begin{theorem}[Theorem 1.1 in \cite{DLLZ}] Let $X$ be a smooth algebraic variety over a $p$-adic local field. Then there is a tensor functor from the category of de Rham $p$-adic \'etale local systems on $X$ to the category of algebraic vector bundles on $X$ with regular integrable connections and decreasing filtrations satisfying Griffiths transversality.
\end{theorem}
\subsection{Description of Faltings' generalized representations via period sheaves over $\Xproet$.}
\begin{lem}\label{lem: ls&galoismod} Let $\frakU=\Spf(\frakR)$ be a small affine formal scheme over $W$. Denote by $U$ the generic fiber of $\frakU$ and set $\overline{U}=\Spec\big(\overline{\frakR[\frac1p]}\big)$. Then the functor $\bM\mapsto\bM(\widehat{\overline{U}})$ is an equivalence between the category of locally free $\widehat{\mO}_U^+$-modules and the category of free $\hbR$-modules with a semilinear action of $\Delta$.
\end{lem}
\begin{proof} This follows from the fact that $\hbR = \widehat{\mO}_U^+(\widehat{\overline{U}})$.
\end{proof}
Let $\{\frakU_i\}_{i\in I}$ be a small affine covering of $\frakX$. Let $\bM=((\bM_i,\rho_i),\beta_{ij})$ be a generalized representation over $X$, where $(\bM_i,\rho_i)$ is an $\hbR_i$-module with a semilinear action of $\Delta_i$ and the $\beta_{ij}$ are the compatible gluing maps. Then by Lemma~\ref{lem: ls&galoismod}, one obtains locally free sheaves of $\hOXp\mid_{U_i}$-modules. By gluing these local data, one gets a locally free sheaf of $\hOXp$-modules of finite rank. In conclusion, one gets the following result.
\begin{lem}\label{another description}
There is a natural equivalence between the category of global generalized representations and the category of locally free sheaves of $\hOXp$-modules of finite rank.
\end{lem}
In the rest of this note, we will always identify these two categories via this equivalence. In particular, the global small generalized representation associated to a small Higgs bundle $(E,\theta)$ is the $\hOXp$-local system $\bV$ satisfying
\begin{equation}
\xymatrix{
\bV(\hbU_{ij}) \ar@{=}[d] \ar[r]^-{\cong} & E(\frakU_i)\otimes_{\frakR_i}\hbR_i \ar[d]^-{\beta_{\tau_i,\tau_j}}\\
\bV(\hbU_{ij}) \ar[r]^-{\cong} & E(\frakU_j)\otimes_{\frakR_j}\hbR_j \\
}
\end{equation}
\begin{question} Let $\mE_0=(V_0,\nabla_0,\Fil_0)$ be a filtered de Rham bundle over $\frakX$. Denote $\mE = \mE_0\otimes_{\mO_\frakX} \OX$ and $(E,\theta) = (\Gr(V_0)\otimes_{\mO_\frakX} \widehat{\mO}_{\XhbV}, \Gr(\nabla_0)\otimes_{\mO_\frakX} \widehat{\mO}_{\XhbV}\cdot \xi^{-1})$. By Theorem~\ref{thm:Scholze}, $\mE$ corresponds to a $\bBdRp$-local system $\bM$. Taking the quotient $\bM/\xi\bM$, one gets an $\hOXp$-local system. By Theorem~\ref{thm:Flatings} and Lemma~\ref{another description}, one gets another $\hOXp$-local system. Do these two local systems coincide?
\end{question}
In general, the answer is no. But one can modify Faltings' construction slightly, and under this modification the two local systems do coincide. This is one of the main results of this note.
\newpage
\section{Very small generalized representations }
\subsection{Modification of Faltings construction (local).}
If $k$ is unramified, that is $k=W(\kappa)[\frac1p]$, then $\Ok$ can be naturally embedded into $A_2({\Ok})$. In this case, $\frakR\otimes_{\Ok} A_2({\Ok})$ is an algebra over $A_2(\Ok)$ lifting $\hRbarV$. In the general case, one still has a natural embedding of $k$ into $A_2(\Ok)[\frac1p]$. Let $\alpha_0$ denote the smallest non-negative real number such that $\pi \in p^{-\alpha_0} A_2(\Ok)$, where $\pi$ is a uniformizer of $k$. Then one has $A_2(\Ok)[\pi] = A_2(\Ok) + \Ohbk\cdot\frac{\xi}{p^{\alpha_0}}$. Denote
\[\widetilde{\frakR}:= \frakR\otimes_{\Ok} A_2({\Ok})[\pi]\]
which is an algebra over $A_2(\Ok)[\pi]$ lifting $\hRbarV$. We choose a local coordinate system $\TT=\{T_1,\cdots,T_d\}$ of $\frakR$. Furthermore, we choose an $A_2(\Ok)[\pi]$-linear lifting $\tau\colon \widetilde{\frakR} \rightarrow A_2(\frakR)[\pi]$ of the inclusion $\frakR\rightarrow \hbR$. It is clear that $\tau(\frakR) \subset A_2(\frakR) +\hbR\cdot\frac{\xi}{p^{\alpha_0}}$ and that $\tau$ is uniquely determined by its restriction to $\frakR$ (by abuse of notation, we still denote this restriction by $\tau$)
\begin{equation}\label{equ:lifting_tau}
\tau\colon \frakR \rightarrow A_2(\frakR)[\pi]
\end{equation}
and for any other lifting $\tau'\colon \frakR \rightarrow A_2(\frakR)[\pi]$, one has
\[\tau(T_i) - \tau'(T_i) \in \hbR\cdot\frac{\xi}{p^{\alpha_0}}.\]
For a small Higgs bundle $(E,\theta)$, the series in (\ref{equ:taylorHiggs}) is no longer convergent in general, so there is an obstruction to defining the semilinear action $\rho$ on $E\otimes \hbR$. But if we require the Higgs field to be sufficiently small, then the series in (\ref{equ:taylorHiggs}) still converges. This leads to the following definition.
\begin{defi}
A Higgs bundle $(E,\theta)$ is called \emph{very small} if $p^{\alpha_0+\frac{1}{p-1}}$ divides $\theta$.
\end{defi}
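The exponent $\alpha_0+\frac1{p-1}$ is explained by redoing the convergence estimate for the series (\ref{equ:taylorHiggs}) in the ramified setting; the following is only a sketch of the bookkeeping.

```latex
Write $p^{\alpha}\mid\theta$. Since now
$\bigl(\tau(T)-\tau'(T)\bigr)/\xi \in p^{-\alpha_0}\hbR$, the $I$-th term of the
series has $p$-adic valuation at least
\[
  \alpha|I| \;-\; \alpha_0|I| \;-\; v_p(I!)
  \;\ge\; \Bigl(\alpha-\alpha_0-\tfrac1{p-1}\Bigr)|I|,
\]
so the series converges as soon as $\alpha-\alpha_0>\frac1{p-1}$: the ramified
lifting costs an extra $\alpha_0$ per factor, and ``very small'' is exactly the
condition that absorbs this loss.
```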
Similarly to (\ref{equ:taylorHiggs}), we define
\begin{equation}
\beta_{\tau,\tau'}(m\otimes1) = \sum_{I\in \bN^d} \theta(\xi\partial)^I(m)/I! \otimes \left(\frac{\tau(T) - \tau'(T)}{\xi}\right)^I.
\end{equation}
For any $\sigma\in \Delta$, we denote
\begin{equation}\label{equ:galoisAction}
\rho(\sigma):=\beta_{\sigma\circ\tau,\tau}\circ(1\otimes \sigma).
\end{equation}
Note that $\rho$ depends on the choice of $\tau$, but there exists a canonical isomorphism between different choices
\[\beta_{\tau,\tau'} \colon (\bV,\rho_\tau) \cong (\bV,\rho_{\tau'}).\]
By gluing the local functors, one gets the following result.
\begin{thm} \label{thm:modifiedFunctor}
The functor $(E,\theta)\mapsto (\bV,\rho_\tau)$ is fully faithful from the category of very small Higgs bundles to the category of generalized representations.
\end{thm}
\begin{proof}
Let $\tau'$ be an integral lifting as in \cite{Fal05}. The difference between $\tau$ and $\tau'$ is a \v{C}ech $1$-cocycle. Twisting Higgs bundles by this cocycle, one constructs an auto-equivalence of the category of very small Higgs bundles as in \cite[page 855]{Fal05}, such that Faltings' functor with respect to $\tau'$ is isomorphic to the modified functor defined here; i.e., the following diagram commutes:
\begin{equation*}
\xymatrix@C=2cm{
\{\text{very small Higgs bundles}\} \ar[d]^{\text{twisted by cocycle}} \ar[r]^-{\text{modified functor}}& \{\text{generalized representations}\} \ar@{=}[d] \\
\{\text{very small Higgs bundles}\} \ar[r]^-{\text{Faltings functor}} & \{\text{generalized representations}\} \\
}
\end{equation*}
Since Faltings' functor is an equivalence between small Higgs bundles and small generalized representations, the modified functor is fully faithful.
\end{proof}
\subsection{The $\xi^{-1}$-connection and the Higgs bundle}
Let $\frakX$ be a proper smooth variety over $\Ok$. Let $(V,\nabla,\Fil)$ be a filtered de Rham bundle over $\frakX$. We consider the pair $(\widetilde{V},\widetilde{\nabla})$, which consists of a bundle $\widetilde{V}$ and a graded Higgs field $\widetilde{\nabla}\colon \widetilde{V}\rightarrow \widetilde{V}\otimes \Omega^1_{\frakX/\Ok}(\xi^{-1})$.
Consider the filtered $\Ok$-subalgebra $\Ok[\xi^\pm]\subset \BdR$. Then the locally free $\mO_\frakX\otimes_\Ok\Ok[\xi^\pm]$-sheaf $V\otimes_\Ok\Ok[\xi^\pm]$ carries a natural connection (still denoted by $\nabla$) and a natural decreasing filtration
\[\Fil^n(V\otimes_\Ok\Ok[\xi^\pm]):= \sum_{i+j\geq n} \Fil^i V\otimes_{\Ok} \xi^j \subset V\otimes_\Ok\Ok[\xi^\pm].\]
We denote
\[\widetilde{V}:=\Gr^0(V\otimes_\Ok\Ok[\xi^\pm]) = \Fil^0(V\otimes_\Ok\Ok[\xi^\pm])/\Fil^1(V\otimes_\Ok\Ok[\xi^\pm]).\]
Since $\nabla$ satisfies Griffiths transversality, the $\xi^{-1}$-connection $\xi^{-1}\nabla$ restricts to both $\Fil^1(V\otimes_\Ok\Ok[\xi^\pm])$ and $\Fil^0(V\otimes_\Ok\Ok[\xi^\pm])$. Thus the induced graded map $\widetilde{\nabla}$ is a Higgs field on $\widetilde{V}$ with values in $\Omega_{\frakX/\Ok}^1(\xi^{-1})$.
\begin{equation}
\xymatrix@C=2cm{ 0 \ar[d] & 0 \ar[d]\\
\Fil^1(V\otimes_\Ok\Ok[\xi^\pm]) \ar[d] \ar[r]^-{\xi^{-1}\nabla}
&
\Fil^1(V\otimes_\Ok\Ok[\xi^\pm]) \otimes \Omega_{\frakX/\Ok}^1(\xi^{-1}) \ar[d]
\\
\Fil^0(V\otimes_\Ok\Ok[\xi^\pm]) \ar[d] \ar[r]^-{\xi^{-1}\nabla}
&
\Fil^0(V\otimes_\Ok\Ok[\xi^\pm]) \otimes \Omega_{\frakX/\Ok}^1(\xi^{-1}) \ar[d]
\\
\widetilde{V} \ar[d] \ar[r]^-{\widetilde{\nabla}}
&
\widetilde{V} \ar[d] \otimes \Omega_{\frakX/\Ok}^1(\xi^{-1})
\\
0&0\\}
\end{equation}
The following lemma sums up basic properties of the Higgs bundle $(\widetilde{V},\widetilde{\nabla})$.
\begin{lem} \label{lem: properties of tilde V}
(i) The natural surjective map $\OBdRp \rightarrow\OC$ induces a canonical isomorphism of locally free $\OC$-sheaves with Higgs fields
\[\Gr^0\left(V\otimes_{\OfrakX} \OBdR\right) = \widetilde{V}\otimes_{\OfrakX} \OC;\]\\
(ii) Denote $\bM=\Fil^0(V\otimes_{\OX} \OBdR)^{\nabla=0}$. Then $\bM/\xi\bM = (\widetilde{V}\otimes_{\OfrakX} \OC)^{\theta=0}$;\\
(iii) There is a canonical isomorphism $\widetilde{V}\cong \Gr(V,\Fil)$, which sends $[v_i\otimes \xi^{-i}]\in\widetilde{V}$ to $[v_i]\in\Gr^iV\subset E$ for any $i\in \bZ$ and any $v_i\in \Fil^iV$. Moreover, $\widetilde{\nabla}$ coincides with the Higgs field $\frac{\Gr(\nabla)}{\xi}$.
\end{lem}
\begin{proof}
This can be checked directly.
\end{proof}
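To make assertion (iii) concrete, here is the rank-two case written out (our own illustration; the general case is the same computation in each graded degree).

```latex
Suppose $\Fil^0V=V\supset \Fil^1V=L\supset \Fil^2V=0$ with $L$ a line subbundle.
Then
\[
  \Fil^0(V\otimes_\Ok\Ok[\xi^\pm]) = L\cdot\xi^{-1}\Ok[\xi] + V\cdot\Ok[\xi],
  \qquad
  \Fil^1(V\otimes_\Ok\Ok[\xi^\pm]) = L\cdot\Ok[\xi] + V\cdot\xi\,\Ok[\xi],
\]
so that
\[
  \widetilde{V} \;\cong\; (V/L)\,\oplus\, L\cdot\xi^{-1}
  \;\cong\; \Gr^0V \oplus \Gr^1V.
\]
For $l\in L$, Griffiths transversality gives
$\nabla(l)\in V\otimes\Omega^1_{\frakX/\Ok}$, hence
\[
  \widetilde{\nabla}\bigl([l\otimes\xi^{-1}]\bigr)
  = \bigl[\nabla(l)\bmod L\bigr]\otimes\xi^{-1}
  \in \Gr^0V\otimes\Omega^1_{\frakX/\Ok}(\xi^{-1}),
\]
which is $\Gr(\nabla)\cdot\xi^{-1}$ on the graded pieces, as claimed.
```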
\subsection{The projection maps $\pr_{\tau}$.}
Let $\frakR$ be a small smooth $W$-algebra of relative dimension $d$. Let $\TT=\{T_1,\cdots,T_d\}$ be a system of coordinates of $\frakR$. Let $\tau\colon \frakR \rightarrow A_2(\frakR)[\pi]$ be a lifting as in (\ref{equ:lifting_tau}). Denote
\begin{equation}\label{equ:u_i}
u_i =- T_i\otimes 1 + 1\otimes \tau(T_i)\in \OBinfRh.
\end{equation}
As in \cite[Corollary 6.15]{Sch13}, one has an isomorphism of $\hbR$-algebras
\begin{equation}\label{equ:localseries}
\OCRh\cong \hbRK[\frac{u_1}{\xi},\cdots,\frac{u_d}{\xi}].
\end{equation}
In particular, there exists an $\hbRK$-linear projection $\OCRh \twoheadrightarrow \hbRK$ sending the $\frac{u_i}{\xi}$ to zero. For any Higgs bundle $(E,\theta)$ over $\hRbarV$ with $\theta \in \End(E)\otimes\Omega^1_{\frakR/V}\cdot\xi^{-1}$, there exists an $\hbRK$-linear morphism
\begin{equation} \label{proj: Higgs}
\pr_{\tau} \colon E\otimes \OCRh \rightarrow E\otimes \hbRK.
\end{equation}
We note that this morphism depends on $\tau$ but not on $\TT$. The following diagram is not commutative
\begin{equation*}
\xymatrix{
E\otimes \OCRh \ar@{=}[d]\ar[r]^-{\pr_{\tau}} & E\otimes \hbRK \ar[d]^{\beta_{\tau,\tau'}}\\
E\otimes \OCRh \ar[r]^-{\pr_{\tau'}} & E\otimes \hbRK\\
}
\end{equation*}
but its restriction to $\ker(\theta)=\left(E\otimes \OCRh\right)^{\theta=0}$ is. More explicitly, we have the following result.
\begin{lem}\label{lem: main-02} Fix a lifting $\tau\colon \frakR \rightarrow A_2(\frakR)[\pi]$. Let $(E,\theta)$ be a Higgs bundle over $\hRbarV$. Then
\begin{itemize}
\item[(i)] the submodule $\left(E\otimes \OCRh\right)^{\theta=0}$ has an $\hbRK$-module structure and contains all elements of the form
\[\widetilde{e}=\sum_{I} \frac{\theta^I(\xi\partial)(e)}{I!} \otimes \left(\frac{u}{\xi}\right)^I.\]
\item[(ii)] The restriction of the map in (\ref{proj: Higgs}) on $\ker(\theta)$
\[\pr_{\tau} \colon \left(E\otimes_{\frakR} \OCRh\right)^{\theta=0} \longrightarrow E\otimes_{\frakR} \hbRK\]
is an $\hbRK$-linear isomorphism.
\item[(iii)] If $\tau'$ is another lifting, then there is a commutative diagram:
\begin{equation}
\xymatrix{
\left(E\otimes_{\frakR} \OCRh\right)^{\theta=0} \ar[rr]^-{\pr_{\tau}} \ar@{=}[d]
&&
E\otimes_{\frakR}\hbRK \ar[d]^-{\beta_{\tau,\tau'}}\\
\left(E\otimes_{\frakR} \OCRh\right)^{\theta=0} \ar[rr]^-{\pr_{\tau'}} &&
E\otimes_{\frakR} \hbRK\\
}
\end{equation}
\end{itemize}
\end{lem}
\begin{proof} (i) Since $\OCRh^{\theta=0} = \hbRK$, the module $\left(E\otimes_{\frakR} \OCRh\right)^{\theta=0}$ is an $\hbRK$-module. Since $\theta\Big(\left(\frac{u}{\xi}\right)^I\Big) = -\sum_{\ell=1}^{d} i_\ell \left(\frac{u}{\xi}\right)^{I-I_\ell}\cdot \frac{\mathrm{d}T_\ell}{\xi}$ and $\theta (\theta^I(\xi\partial)(e)) = \sum_{\ell=1}^{d}\theta^{I+I_\ell}(\xi\partial)(e) \cdot \frac{\mathrm{d} T_\ell}{\xi}$, one checks directly that $\theta(\widetilde{e})=0$.
(ii) First, $\pr_\tau$ is surjective. For any $e \in E(\frakU)$,
\[\theta\left(\tilde{e}\right)=0 \quad \text{ and } \quad \pr_\tau\left(\tilde{e}\right) = e\otimes 1.\]
Since $\pr_\tau$ is $\hbRK$-linear and $E(\frakU)\otimes \hbRK$ is generated by elements of the form $e\otimes1$, it follows that $\pr_\tau$ is surjective.
Second, $\pr_\tau$ is injective. By~(\ref{equ:localseries}), any element of $E\otimes \OCRh$ can be written uniquely in the form $\sum\limits_{I} \frac{e_{I}}{I!} \cdot \left(\frac{u}{\xi}\right)^I$ with $e_I\in E(\frakU) \otimes \hbRK$. Suppose
\[\theta\left(\sum_{I} \frac{e_{I}}{I!} \cdot \left(\frac{u}{\xi}\right)^I\right)=0\quad \text{ and } \quad \pr_\tau\left(\sum_{I} \frac{e_{I}}{I!} \cdot \left(\frac{u}{\xi}\right)^I\right) = 0.\]
On the one hand, the first equality implies that
\[e_{I+I_k} = \theta(\xi\partial_k)(e_I)\]
for all $I\in \bN^d$ and $k\in\{1,2,\cdots,d\}$, where $I_k=(0,\cdots,\underbrace{1}_{k\rm{-th}},\cdots,0)$.
On the other hand, the second equality implies that $e_0=0$. Thus $e_I=0$ for all $I$, so $\pr_\tau$ is injective.
(iii) Since $\frac{u}{\xi}=\frac{-T\otimes1+\tau(T)}{\xi} =\frac{-T\otimes1+\tau'(T)}{\xi} + \frac{\tau(T)-\tau'(T)}{\xi}$, one has $\pr_{\tau'}(\frac{u}{\xi}) = \frac{\tau(T)-\tau'(T)}{\xi}$. Thus
\begin{equation*}
\begin{split}
\pr_{\tau'}(\widetilde{e}) & = \sum_{I} \frac{\theta^I(\xi\partial)(e)}{I!} \otimes \left(\frac{\tau(T) -\tau'(T)}{\xi}\right)^I \\
& = \beta_{\tau,\tau'}(e\otimes1) = \beta_{\tau,\tau'} \circ \pr_{\tau}(\widetilde{e}).
\end{split}
\end{equation*}
Since the elements $\widetilde{e}$ generate $\ker(\theta)$, the diagram commutes.
\end{proof}
\section{Comparison of Faltings' Simpson Correspondence and Scholze's Riemann-Hilbert Correspondence}
Let $(V,\nabla,\Fil)$ be a filtered de Rham bundle over $\frakX$. Denote by $E=\Gr(V,\Fil)$ the graded bundle. Multiplying $\Gr(\nabla)$ by $\xi^{-1}$, one gets a Higgs field
\[\theta=\frac{\Gr(\nabla)}{\xi} \colon E \rightarrow E\otimes\Omega^1_X(\xi^{-1}).\]
The embedding $k\rightarrow \BdRp$ induces a lifting of $X_k$ over $\BdRp$. Under this lifting, there exists a fully faithful functor from the category of very small Higgs bundles to the category of generalized representations. Since $(E,\theta)$ is graded, the Higgs field is nilpotent, hence very small. Thus one gets a generalized representation associated to $(E,\theta)$ by Lemma~\ref{another description}, that is, an $\hOXp$-local system $\bV$ on $\Xproet$. On the other hand, by Theorem~\ref{thm:Scholze} there is a $\bBdRp$-local system $\bM$ attached to $(V,\nabla,\Fil)$.
In this section, we will show that these two local systems satisfy the following relation:
\begin{thm} \label{main01}
$\bV\otimes_{\bZ_p}\bQ_p \cong \bM/\xi\bM$.
\end{thm}
\begin{proof}
By Lemma~\ref{lem: properties of tilde V},
\[\bM/\xi\bM = (\widetilde{V}\otimes_{\OfrakX} \OC)^{\theta=0}.\]
Now we need to find a $\Delta$-equivariant isomorphism from $(\widetilde{V}\otimes_{\OfrakX} \OC)^{\theta=0}$ to $\bV\otimes_{\bZ_p}\bQ_p$.
By Lemma~\ref{lem: properties of tilde V}(iii), there is a canonical isomorphism of Higgs bundles from $\widetilde{V}$ to $E$. It therefore suffices to construct compatible $\Delta$-isomorphisms
\[\left(E(\frakU)\otimes_{\frakR} \OCRh\right)^{\theta=0} \longrightarrow E(\frakU)\otimes_{\frakR} \hbRK\]
for each small open affine subset $\frakU=\Spec(\frakR)\subset\frakX$, because $\bV$ is glued from the local representations $E(\frakU_i)\otimes_{\frakR_i} \hbR_i$ via the Taylor formulas $\beta_{\tau_i,\tau_j}$~(\ref{equ:taylorHiggs}).
Fix a local lifting $\tau$ on each $\frakU$. By Lemma~\ref{lem: main-02}, there exists an $\hbRK$-linear isomorphism
\[\pr_{\tau}\colon \left(E(\frakU)\otimes_{\frakR} \OCRh\right)^{\theta=0} \longrightarrow E(\frakU)\otimes_{\frakR} \hbRK\]
for each $\frakU$, and these isomorphisms can be glued into a global isomorphism
\[\pr \colon \left(E\otimes \OC\right)^{\theta=0} \rightarrow \bV.\]
It remains to show that $\pr_{\tau}$ preserves the $\Delta$-actions on both sides. This is equivalent to checking that the following diagram commutes for all $\sigma\in \Delta$:
\begin{equation*}
\xymatrix{
\left(E(\frakU)\otimes_{\RK} \OCRh\right)^{\theta=0} \ar[r]^-{\pr_{\tau}} \ar[d]^-{\sigma}
& E(\frakU) \otimes_{\RK} \hbRK \ar[d]^-{\sigma} \\
\left(E(\frakU)\otimes_{\RK} \OCRh\right)^{\theta=0} \ar[r]^-{\pr_{\tau}}
& E(\frakU)\otimes_{\RK} \hbRK \\
}
\end{equation*}
By Lemma~\ref{lem: main-02}, $\left(E(\frakU) \otimes_{\RK} \OCRh\right)^{\theta=0}$ is generated by elements of the form
\[\widetilde{e}=\sum_{I} \frac{\theta^I(\xi\partial)(e)}{I!} \otimes \left(\frac{u}{\xi}\right)^I.\]
We only need to check that $\pr_{\tau}\circ\sigma (\widetilde{e}) = \sigma\circ \pr_{\tau}(\widetilde{e})$ for all $e\in E$. By the definition of $\pr_{\tau}$, one has $\pr_{\tau}(\widetilde{e})=e\otimes1$. By equation~(\ref{equ:galoisAction}),
\[\sigma\circ\pr_{\tau}(\widetilde{e}) = \sum_{I} \frac{\theta^I(\xi\partial)(e)}{I!} \left(\frac{\tau(T)^\sigma-\tau(T)}{\xi}\right)^I.\]
On the other hand, since $\pr_\tau\circ\sigma(\frac{u}{\xi})=\frac{\tau(T)^\sigma-\tau(T)}{\xi}$, one has
\[ \pr_\tau\circ\sigma \left(\sum_{I} \frac{\theta^I(\xi\partial)(e)}{I!} \otimes \left(\frac{u}{\xi}\right)^I \right) = \sum_{I} \frac{\theta^I(\xi\partial)(e)}{I!} \left(\frac{\tau(T)^\sigma-\tau(T)}{\xi}\right)^I. \]
This shows that the morphism preserves the $\Delta$-actions.
\end{proof}
\section{ An analogue of $\bC^*$-action} \label{sec:C^*}
In this section, we describe a $p$-adic analogue of Simpson's $\bC^*$-action on Higgs bundles. There is a natural action of the Galois group on the category of generalized representations. Let $\delta\in \Gal(\bark/k)$ be an element of the absolute Galois group of $k$, and fix an element $\widehat{\delta}\in \pi_1(X)$ lifting $\delta$. The action on a generalized representation $(\bV,\rho)$ is given by
\[\bV^{\delta}:= \bV\otimes_{\widehat{\delta}}\OXp\]
and
\[\rho^{\delta}(\sigma):=\rho(\widehat{\delta}^{-1}\circ \sigma \circ \widehat{\delta}) \otimes_{\widehat{\delta}}\mathrm{id}.\]
We note that the isomorphism class of $(\bV^\delta, \rho^{\delta})$ does not depend on the choice of $\widehat{\delta}$. The action on a Higgs bundle, $(E,\theta)^{\delta}$, is given by
\[E^{\delta}:= E\otimes_{\delta} \mO_{\XbarV}\]
and
\[\theta^{\delta}(e\otimes_\delta 1)= \theta(e)\otimes_{\delta}1.\]
In the following, we show that these two actions are compatible under the functor in Theorem~\ref{thm:modifiedFunctor}.
\begin{prop}\label{prop:actionsCompatible}
Let $(\bV,\rho)$ be the generalized representation corresponding to a very small Higgs bundle $(E,\theta)$. Then $(\bV,\rho)^{\delta}$ is the generalized representation corresponding to $(E,\theta)^{\delta}$.
\end{prop}
\begin{proof} We only need to check this locally over $\frakX=\Spec(\frakR)$. Due to the following commutative diagram
\begin{equation}
\xymatrix{
\RbarV \ar[r] \ar[d]^{\delta} & \hbR \ar[d]^{\widehat{\delta}}\\
\RbarV \ar[r] & \hbR\\
}
\end{equation}
there exists a canonical isomorphism
\[E^\delta\otimes \hbR \rightarrow \bV \otimes_{\widehat{\delta}} \hbR\]
sending $(e\otimes_{\delta}1)\otimes1$ to $(e\otimes1)\otimes_{\widehat{\delta}} 1$. In the following, we identify these two modules via this isomorphism. Let us choose a parameter $\TT$ of $\frakR$ over $\Ok$ and fix a local lifting $\tau$. Denote by $(\bV',\rho')$ the generalized representation attached to $(E^{\delta},\theta^\delta)$ with respect to the lifting $\widehat{\delta}\circ\tau$. Then
\begin{equation*}
\begin{split}
\rho'(\sigma)((e\otimes_{\delta}1)\otimes 1) :=
& \sum_{I} \frac{\big(\theta^{\delta}(\xi\partial)\big)^I(e\otimes_\delta 1)}{I!}\otimes \left(\frac{\tau(T)^{\widehat{\delta}}-(\tau(T)^{\widehat{\delta}})^{\sigma}}{\xi}\right)^I\\
&= \sum_{I} \frac{\big(\theta(\xi\partial)\big)^I(e)}{I!}\otimes_\delta \left(\frac{\xi}{\xi^\delta}\right)^I \otimes \left(\frac{\tau(T)^{\widehat{\delta}}-(\tau(T)^{\widehat{\delta}})^{\sigma}}{\xi}\right)^I\\
&= \sum_{I} \frac{\big(\theta(\xi\partial)\big)^I(e)}{I!} \otimes \left(\frac{\tau(T)-\tau(T)^{\widehat{\delta}^{-1}\circ\sigma \circ \widehat{\delta}}}{\xi}\right)^I \otimes_{\widehat{\delta}} 1\\
&= \rho(\widehat{\delta}^{-1}\circ\sigma \circ \widehat{\delta})(e\otimes1) \otimes_{\widehat{\delta}} 1\\
& = \rho^\delta(\sigma)((e\otimes1)\otimes_{\widehat{\delta}} 1)\\
\end{split}
\end{equation*}
Thus $\rho'$ coincides with $\rho^{\delta}$ and the proposition follows.
\end{proof}
\begin{example}\label{exm:basechangehiggs}
Let $(E_0,\theta_0)$ be a very small usual Higgs bundle over $\frakX$, i.e. $\theta_0\in \End(E_0)\otimes\Omega_\frakX^1$ with $\theta_0$ divisible by a sufficiently large power of $p$. Let $(\bV,\rho)$ be the generalized representation corresponding to the very small Higgs bundle $(E,\theta)$, where $E=E_0\otimes \mO_{\XbarV}$ and $\theta=\theta_0/t$, with $t=\log([(\zeta_{p^n})_n])\in \BdRp$. Then for any unit $c\in \bZ_p^\times$ sufficiently close to $1$, there exists $\delta\in \Gal(\bark/k)$ such that $\delta(t) = c^{-1}t$. Hence
\[(E,\theta)^{\delta} \cong (E,c\theta).\]
By Proposition~\ref{prop:actionsCompatible}, $(E,c\theta)$ corresponds to the generalized representation $(\bV,\rho)^{\delta}$.
\end{example}
\begin{prop}
Let $(E_0,\theta_0)$, $(E,\theta)$ and $(\bV,\rho)$ be given as in Example~\ref{exm:basechangehiggs}. Then $(E,\theta)$ is graded if and only if the isomorphism class of $(\bV,\rho)$ is invariant under the action of $\Gal(\bark/k)$.
\end{prop}
\begin{proof} Suppose the isomorphism class of $(\bV,\rho)$ is invariant under the $\Gal(\bark/k)$-action. Then there exists an isomorphism $(E,c\theta)\cong (E,\theta)$ for some $c\in\bZ_p^\times$ sufficiently close to $1$. By Simpson's method, $(E,\theta)$ is graded. Conversely, since $(E,\theta)$ is graded, $(E,c\theta)\cong (E,\theta)$ for any $c\in \bZ_p^\times$. In particular, $(E^\delta,\theta^\delta)\cong (E,\theta)$. Thus by Proposition~\ref{prop:actionsCompatible}, the isomorphism class of $(\bV,\rho)$ is invariant under the action of $\Gal(\bark/k)$.
\end{proof}
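For the reader's convenience, here is a heuristic sketch of Simpson's argument invoked in the proof above. It is stated loosely, over a base where the automorphism below has eigenvalues; the precise form needed in the $p$-adic setting may differ.

```latex
% Suppose f\colon (E,\theta)\xrightarrow{\ \sim\ }(E,c\theta) with c not a
% root of unity, i.e. f\circ\theta = c\,(\theta\circ f). If e lies in the
% generalized eigenspace E_\lambda of f, then
\[
  f(\theta(e)) \;=\; c\,\theta(f(e)) \;=\; c\lambda\,\theta(e) + (\text{nilpotent part}),
\]
% so \theta(E_\lambda)\subset E_{c\lambda}\otimes\Omega^1. Since f has only
% finitely many eigenvalues, each chain \lambda, c\lambda, c^2\lambda, \dots
% terminates, and regrouping the E_\lambda yields a grading
% E=\bigoplus_i E_i with \theta(E_i)\subset E_{i+1}\otimes\Omega^1.
% Conversely, for graded (E,\theta), the automorphism acting by c^{i} on E_i
% gives (E,\theta)\cong(E,c\theta) for every c.
```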
\begin{prop}
Let $(E,\theta)$ be a Higgs bundle over $\XhbV$ such that the isomorphism class of its associated generalized representation is fixed by the action of $\Gal(\bar k/k)$. Then it is nilpotent.
\end{prop}
\begin{proof} Since the underlying vector bundle $E$ is invariant under the Galois group action, it is defined over $\frakX$, i.e. there exists a vector bundle $E_0$ over $\frakX$ such that $E = E_0\otimes \mO_\XbarV$. Under this identification, $\theta$ is an element of
\[ \End_{\mO_\frakX}(E_0)\otimes \Omega_{\frakX}^1\otimes \mO_{\bC_p}\cdot \xi^{-1}.\]
Since $(E,\theta)$ is invariant under the Galois action, the coefficients of $\det(T\cdot \mathrm{id}-\theta)$ are all invariant under the Galois action. But the coefficients are contained in $\Gamma(\frakX,\Omega^{\otimes i}_{\frakX/\Ok}) \otimes \widehat{\overline{k}}(-i)$ where $i=0,\cdots,r$. Since $\left(\Gamma(\frakX,\Omega^{\otimes i}_{\frakX/\Ok}) \otimes \widehat{\overline{k}}(-i)\right)^{\Gal(\bar k/k)} = 0$ for $i=1,\cdots,r$, one has $\det(T\cdot \mathrm{id}-\theta) = T^r$. Thus $\theta$ is nilpotent.
\end{proof}
\begin{prop}Let $(E,\theta)$ be a rank $2$ Higgs bundle. If its isomorphism class is invariant under the action of $\Gal(\bark/k)$, then $(E,\theta)$ is graded.
\end{prop}
\add{Does the proof only work over the generic fiber?}
\begin{proof}
By the Lefschetz hyperplane theorem, we may assume that the relative dimension of $\frakX/\Ok$ is $1$. Denote $L=\ker\theta$. If $\rank L=2$, there is nothing to prove. Now we assume $\rank L=1$ and denote $L'=E/L$. Then the Higgs field is equivalent to a map
\[\theta\colon L' \rightarrow L \otimes \Omega^1_{\frakX/\Ok}\cdot t^{-1}.\]
Since $(E^\delta,\theta^\delta) \cong (E,\theta)$, there exists a commutative diagram as follows:
\[\xymatrix{
L' \ar[r]^-{\theta} \ar[d]^{\cong}_{f_{\delta}} & L \otimes \Omega^1_{\frakX/\Ok}\cdot t^{-1} \ar[d]^{\cong}_{g_\delta}\\
L'^{\delta} \ar[r]^-{\theta^\delta} & L^\delta \otimes \Omega^1_{\frakX/\Ok} \cdot t^{-1} \\}\]
Hence the line bundles $L$ and $L'$ are defined over $\frakX$, i.e. there exist line bundles $L_0$ and $L_0'$ over $\frakX$ such that $L' = L'_0\otimes \mO_{\frakX_{\mO_{\bC_p}}}$ and $L = L_0\otimes \mO_{\frakX_{\mO_{\bC_p}}}$. The Higgs field can be rewritten as
\[\theta = \sum_{i=1}^m\omega_i \otimes c_i \cdot t^{-1} \in \Hom(L_0',L_0\otimes\Omega^1_{\frakX/\Ok}) \otimes \mO_{\bC_p}\cdot t^{-1}\]
where $\omega_1,\cdots,\omega_m$ is a basis of $\Hom(L_0',L_0\otimes\Omega^1_{\frakX/\Ok})$ and $c_i\in \mO_{\bC_p}\setminus\{0\}$. An isomorphism $(E^\delta,\theta^\delta) \cong (E,\theta)$ implies that there exists $\lambda=\lambda_\delta\in \mO_{\bC_p}^\times$ such that
\[\delta(c_i) = \lambda c_i ,\quad \text{for all } i=1,2,\cdots,m.\]
Denote $c:=c_{i_0}\neq 0$ for some $1\leq i_0\leq m$. Then each $\frac{c_i}{c}$ is invariant under the action of $\Gal(\bark/k)$ and hence contained in $k$. Denote $\omega= \sum_{i=1}^m \frac{c_i}{c}\,\omega_i$. Then one has
\[\theta = \omega \otimes c \cdot t^{-1}.\]
So one has $\theta^\delta= \frac{\delta(ct^{-1})}{ct^{-1}}\cdot\theta$ and
\[(E^\delta,\theta^\delta) = (E,\frac{\delta(ct^{-1})}{ct^{-1}}\cdot\theta).\]
Since $\left(\bC_p\cdot t^{-1}\right)^{\Gal(\bark/k)}=0$, there exists $\delta\in \Gal(\bark/k)$ such that $\mu:=\frac{\delta(ct^{-1})}{ct^{-1}}\neq 1$. Then
\[(E,\mu\theta)\cong (E,\theta).\]
By the argument due to Simpson, $(E,\theta)$ is graded.
\end{proof}
\begin{conj}
Let $(E,\theta)$ be a Higgs bundle over $\XhbV$ such that the isomorphism class of its associated generalized representation is fixed by the action of $\Gal(\bar k/k)$. Then it is graded.
\end{conj}
\section{Sub-de Rham local systems}\label{sec:subrep}
Let $\bL$ be a lisse $\hat{\bZ}_p$-sheaf on $\Xproet$\addd{Definition 8.1 in \cite{Sch13}}. By extending the coefficients, one gets a $\bBdRp$-local system $\bM=\bL\otimes_{\hat{\bZ}_p} \bBdRp$. Here a natural question arises: how can we reconstruct $\bL$ from this $\bBdRp$-local system? The answer to this question is not yet known, but we would like to discuss it in some special cases.
Denote
\[\mE_{\text{\'et}} := \nu_*(\bL\otimes_{\hat{\bZ}_p} \OBdR).\]
This is a filtered $\mO_\Xet$-module with an induced integrable connection on $\Xet$. \add {It is algebraic by \cite{DLLZ}.} Extending the coefficients and taking the flat sections in the $0$-th filtration, one gets a sub $\bBdRp$-local system of $\bL\otimes_{\hat{\bZ}_p} \bBdRp$:
\[\bM^{\OBdR}:= \Fil^0(\mE_{\text{\'et}} \otimes_{\mO_{\Xet}} \OBdR)^{\nabla=0} \subseteq \bM=\bL\otimes_{\hat{\bZ}_p} \bBdRp.\]
By Faltings' Simpson correspondence one gets a semistable Higgs bundle with trivial Chern classes, $(E,\theta)_\bL$, corresponding to $\bL\otimes\hOX$. Taking the first graded piece, one gets a generalized sub-representation
\[\bM^{\OBdR}/\xi\bM^{\OBdR} \subset \bM/\xi\bM=\bL\otimes \hOX.\]
By Lemma~\ref{another description}, we get an embedding
\[(E,\theta)^{\OBdR}:=\Gr(\mE_{\text{\'et}},\nabla,\Fil) \subset (E,\theta)_\bL.\]
\begin{prop}\label{prop:main}
The generalized representation $\bM^{\OBdR}/\xi\bM^{\OBdR}$ comes from a $\bC_p$-geometrical representation.
\end{prop}
\begin{lem}\label{lem:subsheaf1}
Let $\frakX$ be a projective curve over $\Ok$ with semistable reduction. Assume $\mF$ is a sub vector bundle of $\mG=\mO_{\frakX_\kappa}^{\oplus s}$ of degree $0$. Then $\mF\cong \mO_{\frakX_\kappa}^{\oplus r}$ where $r=\rank(\mF)\leq s$.
\end{lem}
\begin{proof} Let $e_1,\cdots,e_s$ be a basis of $H^0(\frakX_\kappa,\mG)$ and let $Y_1,\cdots,Y_m$ be the irreducible components of $\frakX_\kappa$. Denote $e_{ij}:= e_i\mid_{Y_j}$.\\[-2mm]
Firstly, we show that $\mG\mid_{Y_j}$ is semistable, i.e. $\deg(\mE)\leq 0$ for any subsheaf $\mE$ of $\mG\mid_{Y_j}$. Suppose $\deg(\mE)>0$, and consider the sub-bundle $\mE'$ of $\mE$ with maximal slope (this slope is then positive). Since $0 \neq \mE'\subset \mG\mid_{Y_j} = \bigoplus_{i=1}^{s} \mO_{Y_j} \cdot e_{ij}$, there exists a nonzero projection map $\pi_i \colon \mE' \rightarrow \mO_{Y_j}\cdot e_{ij}$ for some $i$, giving an exact sequence
\[0\rightarrow \ker{\pi_i} \longrightarrow \mE' \longrightarrow \mathrm{im}(\pi_i)\rightarrow 0.\]
As the slope of $\mathrm{im}(\pi_i)$ is non-positive and the slope of $\mE'$ is positive, the slope of $\ker(\pi_i)$ is bigger than that of $\mE'$. This contradicts the choice of $\mE'$. Thus $\deg(\mE)\leq 0$. \\[-2mm]
Secondly, one has $\deg(\mF\mid_{Y_j})=0$ for all $j=1,\cdots,m$. Indeed, $0=\deg(\mF) = \sum_{j=1}^m \deg(\mF\mid_{Y_j})$ and each $\deg(\mF\mid_{Y_j})\leq 0$ by the first step. \\[-2mm]
Thirdly, every degree $0$, rank $r$ sub-bundle $\mE\subset\mG\mid_{Y_i}$ is isomorphic to $\mO_{Y_i}^{\oplus r}$. Choose a nonzero projection $\pi_j\colon \mE\rightarrow \mO_{Y_i}\cdot e_{ji}$ for some $j$. Since $\mG\mid_{Y_i}$ is semistable of slope zero, $\mE$ is also semistable of slope zero, hence $\deg(\mathrm{im}(\pi_j))\geq0$ and $\pi_j$ is surjective. By induction on the rank, we may assume $\ker(\pi_j)\cong\mO_{Y_i}^{\oplus (r-1)}$.
Then the subsheaf $\ker(\pi_j)+\mO_{Y_i}\cdot e_{ji}\subset \mG\mid_{Y_i}$ is a direct summand of rank $r$, and the projection from $\mE$ to this summand is an isomorphism.
\\[-2mm]
Finally, we show that $\mF\cong \mO_{\frakX_\kappa}^{\oplus r}$. For any point $P_{ij}\in Y_i\cap Y_j$ with $Y_i\cap Y_j\neq\emptyset$, consider the diagram
\begin{equation*}
\xymatrix{
H^0(\frakX_\kappa,\mF) \ar@{^(->}[r] \ar[d]^{\rm res} & H^0(\frakX_\kappa,\mG)\ar[d]_{\cong}^{\rm res}\\
H^0(Y_i,\mF\mid_{Y_i}) \ar@{^(->}[r] \ar[d]_{\cong}^{\rm res} & H^0(Y_i,\mG\mid_{Y_i}) \ar[d]_{\cong}^{\rm res} \\
\mF\mid_{P_{ij}} \ar@{^(->}[r] & \mG\mid_{P_{ij}}\\
}
\end{equation*}
Since $\mG=\mO_{\frakX_\kappa}^{\oplus s}$, the two vertical maps on the right hand side are isomorphisms. We identify the three vector spaces on the right hand side via these two isomorphisms. Then $H^0(Y_i,\mF\mid_{Y_i})$ is an $r$-dimensional subspace of $H^0(\frakX_\kappa,\mG)$, which does not depend on the choice of the index $i$. This subspace generates a sub-bundle $\mF'\subset \mG$, which is clearly isomorphic to $\mO_{\frakX_\kappa}^{\oplus r}$. By the construction of $\mF'$, one has $\mF'\mid_{Y_i}=\mF\mid_{Y_i}$ for each $i$. Thus $\mF'=\mF$.
\end{proof}
\begin{lem}
Let $\frakX$ be a projective curve over $\Ok$ with semistable reduction. Assume $\mF$ is a sub vector bundle of $\mG=(\mO_{\frakX}/p^n)^{\oplus s}$ of degree $0$ over $\frakX\otimes_W W_n$. Then $\mF\cong(\mO_{\frakX}/p^n)^{\oplus r}$ where $r=\rank(\mF)\leq s$.
\end{lem}
\begin{proof}
By Lemma~\ref{lem:subsheaf1}, there exists a basis $e_1,\cdots,e_s$ of $\mG$ such that $\mF/p = \mF'/p$, where $\mF'$ is the direct summand of $\mG$ generated by $e_1,\cdots,e_r$. Consider the composition
\[f\colon \mF \rightarrow \mG \twoheadrightarrow \mF'\cong(\mO_{\frakX}/p^n)^{\oplus r}.\]
Since $\mF/p = \mF'/p$, $f$ is the identity modulo $p$. Hence $f$ is an isomorphism.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:main}] By the Lefschetz hyperplane theorem, we may assume that $(X,D)_{\bC_p}$ is a log pair of curves, and after a base change on $X$ \'etale over $X-D$, we may assume that $\bL^\text{geo}$ is trivial modulo $p^2$. Since $(E,\theta)^{\OBdR}$ over $(X,D)_{\bC_p}$ is the graded Higgs bundle of the logarithmic de Rham bundle
$(\mE_{\text{\'et}},\nabla,\Fil)$ in characteristic $0$, with the eigenvalues of the residue of the connection along $D$ equal to zero, it is of degree $0$. Since
$(E,\theta)^{\OBdR}\subset (E,\theta)_\bL$ and $(E,\theta)_\bL$ is of degree zero and semistable, $(E,\theta)^{\OBdR}$ is also semistable. \\
On the other hand, the small representation $\bL^\text{geo}$ corresponds to an extension $(\mE,\theta)$ over $\mO_{\bC_p}$ of $(E,\theta)$ under Faltings' integral Simpson correspondence.
Furthermore, for any $n\in \bN$, we may take a morphism $\pi\colon Y\to X$, \'etale over $X-D$, such that $\pi^*\bL^\text{geo}$ is trivial modulo $p^{n+1}$. We may extend $\pi$ to a morphism $\pi\colon (\frakY,\frakD)\to (\frakX,\frakD)$ by choosing a suitable semistable model $(\frakY, \frakD)$ of $(Y,D)$. Fixing $A_2(\Ok)$-liftings of $(\frakX,\frakD)$ and $(\frakY,\frakD)$, under Faltings' integral correspondence $\pi^*\bL^\text{geo}$ corresponds to the twisted pullback $\pi^\circ(E,\theta)_{\bL^\text{geo}}$, which is the trivial Higgs bundle modulo $p^{n}$. The sub-Higgs bundle $(E,\theta)^{\OBdR}\subset (E,\theta)_\bL$ is also small and corresponds to a small generalized sub-representation $\bW\subset \bL^\text{geo}$. The pullback $\pi^*(\bW)\subset \pi^*\bL^\text{geo}$ corresponds to the twisted pullback $\pi^\circ(E,\theta)^{\OBdR}\subset \pi^\circ(E,\theta)$.
As $\pi^\circ(E,\theta)$ over $(Y, D)_{\mO_{\bC_p}/p^{n}\mO_{\bC_p}}$ is the trivial Higgs bundle and $\pi^\circ(E,\theta)^{\OBdR}$ over $(Y, D)_{\mO_{\bC_p}/p^{n}\mO_{\bC_p}}$ is of degree $0$, $\pi^\circ(E,\theta)^{\OBdR}$ over $(Y, D)_{\mO_{\bC_p}/p^{n}\mO_{\bC_p}}$ is a trivial sub-Higgs bundle of rank $r$.
It follows that $\pi^*\bW \pmod{p^n}$ is an $\mO_{\bC_p}/p^n$-geometrical representation.
As the category of $\mO_{\bC_p}$-geometrical representations is a full subcategory of the category of generalized representations, $\pi^*(\bW)$ descends back to the original $\bW$, whose modulo $p^n$ reduction is an $\mO_{\bC_p}/p^n\mO_{\bC_p}$-geometrical representation.
Taking inverse limits, one gets that $\bW$ is an actual $\mO_{\bC_p}$-representation of the geometric \'etale fundamental group; see a similar argument in Deninger-Werner~\cite{DW} for descending representations arising from strongly semistable vector bundles.
\end{proof}
\section{Introduction}
Galaxy clusters are the most massive gravitationally bound systems of our universe, and as such they contain a wealth of cosmological and astrophysical information.
They can be used either as powerful cosmological probes (\cite{1993Natur.366..429W}, \cite{2011ARA&A..49..409A}) or as astrophysical objects of study, to better characterise the physics of the intra-cluster medium (ICM, \cite{1988xrec.book.....S}) and how these structures are connected to the rest of the cosmic web.
Constraints on the matter density $\Omega_m$ or the amplitude of the matter power spectrum $\sigma_8$ can be inferred from several galaxy cluster observables.
One can for instance use cluster number counts, their clustering, or the properties of their gas content to constrain cosmological parameters (see e.g. \cite{2012ARA&A..50..353K} or \cite{2011ARA&A..49..409A} for reviews). Among the more recent probes, one can also cite the cluster sparsity \citep{10.1093/mnras/stt2050}.
The baryon budget of these objects is also of interest when considering galaxy clusters as cosmological probes, especially the hot gas of the ICM, which makes up the major part of the baryonic matter inside clusters.
Indeed the gas mass fraction of galaxy clusters, $f_{gas}$, is considered to be a good proxy for the universal baryon fraction \citep{2011ASL.....4..204B}, and can be used to constrain cosmological parameters including the matter density $\Omega_m$, the Hubble parameter $h$, the Dark Energy density $\Omega_{DE}$ or the Equation of State of Dark Energy $w$ (see e.g. \cite{2008MNRAS.383..879A}, \cite{2020JCAP...09..053H}, \cite{2022MNRAS.510..131M} and references therein).
The gas content inside galaxy clusters is however also affected by baryonic physics.
Such baryonic effects need to be taken into account while performing the cosmological analysis, as they introduce systematic uncertainties in the final constraints (see e.g. the discussions from \cite{2003ApJ...591..515M}, \cite{2003ApJ...591..526M}, \cite{2007MNRAS.380..437P}, \cite{2012MNRAS.427.1298H}, \cite{2013ApJ...767..116M}, \cite{2013MNRAS.432.3508R}, \cite{2018A&A...620A..78S}).
For instance, feedback mechanisms inside clusters, like Active Galactic Nuclei (AGN) heating, can drive gas out of the potential wells, resulting in slightly gas-depleted clusters.
This depletion of galaxy clusters' gas with respect to the universal baryon fraction is accounted for by the depletion factor $\Upsilon$ (\cite{1998ApJ...503..569E}, \cite{2007MNRAS.377...41C}).
This depletion factor has been thoroughly studied in hydrodynamical simulations throughout the years, in clusters (\cite{2005ApJ...625..588K}, \cite{2013MNRAS.431.1487P}, \cite{2016MNRAS.457.4063S}, \cite{2020MNRAS.498.2114H}, and references therein) as well as in filaments \citep{2021arXiv210906198G}.
As a result, this parameter can be very well constrained and robustly predicted in numerical simulations.
Secondly, galaxy clusters are often assumed to be in hydrostatic equilibrium (HE hereafter). Nevertheless, non-thermal processes such as turbulence, bulk motions, magnetic fields or cosmic rays (\cite{2009ApJ...705.1129L}, \cite{2009A&A...504...33V}, \cite{2012ApJ...758...74B}, \cite{2014ApJ...792...25N}, \cite{2015MNRAS.448.1020S}, \cite{2016ApJ...827..112B}), might cause a departure from the equilibrium condition in the ICM.
Therefore, the HE assumption leads to cluster mass estimations biased towards lower values with respect to the total cluster mass.
The impact of non-thermal processes, and therefore an evaluation for the bias in the cluster mass estimation, was first considered in hydrodynamical simulations \citep{2006MNRAS.369.2013R}.
Since then, a parametrization of this mass bias has also been introduced in observations, for instance when detecting clusters at X-ray or millimeter wavelengths, the latter exploiting the thermal Sunyaev-Zeldovich effect \citep{1972CoASP...4..173S} (tSZ hereafter); see \cite{2019SSRv..215...25P} for a review. We stress that this bias affects all observables that assume hydrostatic equilibrium when evaluating cluster masses, and thus the gas mass fraction. Throughout the paper we define this mass bias as $B=M_{HE}/M_{tot}$.
While the depletion factor is well constrained and understood, this is not the case for the hydrostatic mass bias, whose value is still debated.
In a number of analyses including weak lensing works (WtG \citep{2014MNRAS.443.1973V}, CCCP \citep{2015MNRAS.449..685H}, \cite{2016MNRAS.461.3794O}, \cite{2017MNRAS.472.1946S}), tSZ number counts \citep{2019A&A...626A..27S}, X-ray observations \citep{2019A&A...621A..40E} or hydrodynamical simulations (\cite{2017MNRAS.465.2936M}, \cite{2021arXiv211007326B}), this mass bias is estimated to be around $B \sim 0.8-0.85$.
The works from \cite{2014A&A...571A..20P}, \cite{2016A&A...594A..24P} however show that to alleviate the tension on the amplitude of the matter power spectrum $\sigma_8$ between local and cosmic microwave background (CMB hereafter) measurements, a much lower $B$ is needed.
Indeed when combining CMB primary anisotropies and tSZ cluster counts, CMB measurements drive the constraining power on the cosmological parameters and thus on the bias, favoring a bias $B \sim 0.6-0.65$.
This result was confirmed later on by \cite{2018A&A...614A..13S} and \cite{2020A&A...641A...6P}.
In addition, the study from \cite{2018A&A...614A..13S} shows that when forcing $B = 0.8$ while assuming a {\it Planck} cosmology, the observed cluster number counts are well below the counts predicted using the CMB best-fit cosmological parameters.
In other words, when assuming $B = 0.8$ with a CMB cosmology, one predicts approximately three times as many clusters as are actually observed.
As the precise value of the mass bias is still an open matter and has a direct impact on the accuracy and precision of the cosmological constraints deduced from galaxy clusters, we propose a new and independent measurement of this quantity.
In this paper we use the gas mass fractions of 120 galaxy clusters from the {\it Planck}-ESZ sample \citep{2011A&A...536A...8P} to bring robust constraints on the value of the hydrostatic bias.
More importantly, we aim at studying possible variations of the bias with mass and redshift.
Such studies of mass and redshift trends of $B$ have already been carried out in weak lensing works (\cite{2014MNRAS.443.1973V}, \cite{2015MNRAS.449..685H}, \cite{2016MNRAS.456L..74S}, \cite{2017MNRAS.468.3322S}) or using tSZ number counts \citep{2019A&A...626A..27S}, sometimes giving contradictory results.
In this work we measure and use gas mass fractions from a large sample to get new independent constraints on $B$ and its mass and redshift evolution.
We also aim at investigating the role that an evolution of the bias would have on the cosmological constraints we obtain from $f_{gas}$ data.
After describing the theoretical modelling and our cluster sample in Section \ref{sect:theoretical_model_data}, we detail our methods for the data analysis in Section \ref{sect:methods}.
We show our results in Section \ref{sect:results}, first focusing on the effect of assuming a varying bias on our derived cosmological constraints, then looking at the sample dependence of our results.
We finally discuss our results in Section \ref{sect:discussion} and draw our conclusions in Section \ref{sect:conclusions}.
Throughout the paper we assume a reference cosmology with $H_0 = 70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$.
\section{Theoretical modelling and data}
\label{sect:theoretical_model_data}
\subsection{Gas fraction sample}
The gas mass fraction is defined as the ratio of the gas mass over the total mass of the cluster,
\begin{equation}
\label{eq:fgas_def}
f_{gas} = \frac{M_{gas}}{M_{tot}}.
\end{equation}
Using X-ray observations, the gas mass is obtained by integrating the density profile $\rho(r)$ inside a certain radius $r$ as shown in equation \ref{eq:gas_mass} below
\begin{equation}
\label{eq:gas_mass}
M_{gas}(<r) = \int^r_0 4\pi r'^2 \rho(r')dr'.
\end{equation}
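As a concrete illustration, the integral in equation \ref{eq:gas_mass} can be evaluated numerically. The sketch below assumes a simple beta-model electron density profile with made-up (but ICM-like) parameters \texttt{n0}, \texttt{r\_c} and \texttt{beta}; it is not the profile used in this work.

```python
import numpy as np

# Physical constants (SI units)
m_p = 1.6726e-27      # proton mass [kg]
mu = 0.6              # mean molecular weight (assumed typical ICM value)
Mpc = 3.0857e22       # metres per megaparsec
M_sun = 1.989e30      # solar mass [kg]

def n_e(r_mpc, n0=3e3, r_c=0.2, beta=2.0 / 3.0):
    """Illustrative beta-model electron density [m^-3]; r in Mpc."""
    return n0 * (1.0 + (r_mpc / r_c) ** 2) ** (-1.5 * beta)

def gas_mass(r_max_mpc, n_steps=200_000):
    """M_gas(<r) = int_0^r 4 pi r'^2 rho(r') dr',
    with rho = mu m_p (n_e + n_p) and n_e = 1.17 n_p."""
    r = np.linspace(1e-6, r_max_mpc, n_steps) * Mpc   # radii [m]
    rho = mu * m_p * n_e(r / Mpc) * (1.0 + 1.0 / 1.17)
    integrand = 4.0 * np.pi * r ** 2 * rho
    dr = r[1] - r[0]
    # trapezoidal rule, returned in solar masses
    return np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dr / M_sun

print(f"M_gas(<1.3 Mpc) ~ {gas_mass(1.3):.2e} M_sun")  # a few 1e13 M_sun
```

For these assumed parameters the enclosed gas mass out to an $R_{500}$-like radius of 1.3 Mpc comes out at a few $10^{13}\,M_\odot$, in the range typical of massive clusters.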
Here, the density profile $\rho(r)$ is obtained from the electron density profile $n_e(r)$, measured from X-ray observations. We have
\begin{equation}
\rho(r) = \mu m_p (n_e(r) + n_p(r)),
\end{equation}
where $\mu$ is the mean molecular weight, $m_p$ the proton mass, $n_e$ the electron number density and $n_p$ the proton number density, with $n_e = 1.17\, n_p$ in a fully ionised gas.
Using the density profile and a temperature profile $T(r)$, the total hydrostatic mass $M_{HE}$ can be computed by solving the hydrostatic equilibrium equation shown below,
\begin{equation}
\label{eq:hydro_mass}
M_{HE}(<r) = - \frac{rk_B T(r)}{G \mu m_p} \left( \frac{\mathrm{d \ln} \rho(r)}{\mathrm{d \ln} r} + \frac{\mathrm{d \ln} T(r)}{\mathrm{d \ln} r} \right),
\end{equation}
where $k_B$ is the Boltzmann constant and $G$ the gravitational constant.
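The hydrostatic mass of equation \ref{eq:hydro_mass} can likewise be sketched numerically. The profile shapes below (a beta-model density and a mildly declining power-law temperature) and all their parameter values are invented for illustration, so the logarithmic slopes can be written analytically.

```python
k_B = 1.3807e-23   # Boltzmann constant [J/K]
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
m_p = 1.6726e-27   # proton mass [kg]
mu = 0.6           # mean molecular weight (assumed)
Mpc = 3.0857e22    # metres per megaparsec
M_sun = 1.989e30   # solar mass [kg]
keV = 1.602e-16    # joules per keV

def hydrostatic_mass(r_mpc, r_c=0.2, beta=2.0 / 3.0, T0_keV=8.0, a=0.1):
    """M_HE(<r) from the HE equation, assuming rho ~ (1+(r/r_c)^2)^(-3 beta/2)
    and T = T0 (r/Mpc)^(-a); all profile parameters are illustrative."""
    r = r_mpc * Mpc
    T = (T0_keV * keV / k_B) * r_mpc ** (-a)   # temperature [K]
    # analytic logarithmic slopes of the assumed profiles
    dlnrho_dlnr = -3.0 * beta * (r_mpc / r_c) ** 2 / (1.0 + (r_mpc / r_c) ** 2)
    dlnT_dlnr = -a
    M = -(r * k_B * T) / (G * mu * m_p) * (dlnrho_dlnr + dlnT_dlnr)
    return M / M_sun

print(f"M_HE(<1.3 Mpc) ~ {hydrostatic_mass(1.3):.2e} M_sun")
```

With these assumed profiles the result is of order $10^{15}\,M_\odot$, i.e. a massive cluster, consistent with the mass range of the sample described below.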
Similarly to the gas mass and the hydrostatic mass, the cluster gas mass fraction is evaluated within a characteristic radius, determined by the radius of the mass measurements.
We define this radius $R_{\Delta}$ as the radius within which the mean density is $\Delta$ times the critical density of the Universe, $\rho_c = 3 H^2/8 \pi G$.
The gas content inside galaxy clusters is affected by baryonic physics, and the impact of the different astrophysical processes might depend on the considered cluster radius $R_{\Delta}$.
In this work, we focus on gas fractions which were taken at $R_{500}$, and from now on all the quantities we consider are taken at $R_{500}$.
Note that this radius is larger than the one adopted in most studies using the gas fraction of galaxy clusters as a cosmological probe, which are generally carried out at $R_{2500}$ (\cite{2002MNRAS.334L..11A}, \cite{2008MNRAS.383..879A}, \cite{2011ARA&A..49..409A}, \cite{2014MNRAS.440.2077M}, \cite{2020JCAP...09..053H}, \cite{2022MNRAS.510..131M}).
We briefly discuss this choice of radius in Section \ref{sect:comparison_R2500}.
We computed the gas fraction for the clusters in the {\it Planck}-ESZ survey \citep{2011A&A...536A...8P}. In particular, we consider 120 out of 189 clusters of the {\it Planck}-ESZ sample, for which we have follow-up X-ray observations by XMM-{\it Newton} up to $R_{500}$, which we still call the ESZ sample for simplicity. Our sample therefore spans a total mass range from $2.22 \times 10^{14} \mathrm{M_\odot}$ to $1.75 \times 10^{15} \mathrm{M_\odot}$ and redshift range from 0.059 to 0.546.
We start from the gas and total masses of the clusters which were derived in \cite{2020ApJ...892..102L}. We refer the reader to their work for the detailed analysis of the mass evaluation. We just stress here that the total cluster masses were obtained assuming hydrostatic equilibrium, as shown in equation \ref{eq:hydro_mass}, therefore inducing the presence of the hydrostatic mass
bias. From these gas masses and hydrostatic masses we computed the gas fraction following equation \ref{eq:fgas_def}, to find that they are within the range $[0.06, 0.20]$.
When evaluating the uncertainty on $f_{gas}$, we adopt a conservative approach, using the maximum error between the lower and upper errors of the mass measurements.
We give the redshifts, gas masses, hydrostatic masses and gas fractions of the clusters from our sample in Table \ref{tab:cluster_sample} in appendix.
We show in Figure \ref{fig:fgas_sample} the observed gas fraction in this sample, with respect to redshift and mass.
We also compare these $f_{gas}$ values to the universal baryon fraction $\Omega_b/\Omega_m = 0.156\pm 0.003$ from {\it Planck} 2018 results \citep{2020A&A...641A...6P}.
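To make the conservative error choice concrete, the sketch below propagates the mass uncertainties into an $f_{gas}$ uncertainty, taking for each mass the larger of its lower and upper errors and combining the relative errors in quadrature. This is our reading of the procedure, not necessarily the exact propagation used for the sample; the numbers are illustrative (masses in units of $10^{14}\,M_\odot$).

```python
def fgas_with_err(M_gas, Mg_lo, Mg_hi, M_HE, MH_lo, MH_hi):
    """Gas fraction f = M_gas/M_HE with a conservative uncertainty:
    for each mass, keep the larger of the lower/upper errors and
    propagate the two relative errors in quadrature."""
    f = M_gas / M_HE
    rel_g = max(Mg_lo, Mg_hi) / M_gas
    rel_h = max(MH_lo, MH_hi) / M_HE
    return f, f * (rel_g ** 2 + rel_h ** 2) ** 0.5

# Illustrative cluster: M_gas = 1.0 (+0.10/-0.05), M_HE = 8.0 (+0.8/-0.4)
f, df = fgas_with_err(1.0, 0.05, 0.10, 8.0, 0.4, 0.8)
print(round(f, 4), round(df, 4))  # 0.125 0.0177
```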
\subsection{Modelling of the observed gas fraction}
The hydrostatic mass is used to evaluate the total cluster mass, yet is biased low with respect to the true total mass by a factor $B=M_{HE}/M_{true}$.
The measured gas mass fraction is thus:
\begin{equation}
\label{eq:hydro_fgas}
f_{gas, mes} = \frac{M_{gas}}{M_{HE}} = \frac{M_{gas}}{B \times M_{true}} = \frac{1}{B} \times f_{gas, true}.
\end{equation}
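In numbers, equation \ref{eq:hydro_fgas} says that a hydrostatic bias $B<1$ inflates the measured gas fraction by the factor $1/B$. The values below are purely illustrative, not measurements from the sample.

```python
# Equation (eq:hydro_fgas) in numbers: B = 0.8 inflates the measured
# gas fraction by 1/B = 1.25, i.e. a 25% overestimate.
f_true = 0.12   # assumed true gas fraction
B = 0.8         # assumed hydrostatic mass bias
f_mes = f_true / B
print(round(f_mes, 4))  # 0.15
```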
Besides the hydrostatic mass bias, the measured gas fraction depends on a variety of instrumental, astrophysical and cosmological effects.
A way of quantifying these effects is to compare the gas fraction we obtain from gas mass and hydrostatic mass measurements to the theoretical (hydrostatic) gas fraction expected from equation \ref{eq:fgas_model} below, from \cite{2008MNRAS.383..879A}. In this equation and in the rest of the paper, a quantity noted $X^{ref}$ is the quantity $X$ in our reference cosmology:
\begin{equation}
f_{gas, Th}(M, z) = K \frac{\Upsilon(M,z)}{B(M,z)} A(z) \left( \frac{\Omega_b}{\Omega_m}\right) \left( \frac{D_A^{ref}(z)}{D_A(z)}\right)^{3/2} - f_*.
\label{eq:fgas_model}
\end{equation}
Here $K$ is an instrumental calibration correction.
We take $ K = 1 \pm 0.1$ from \cite{2008MNRAS.383..879A} and discuss the soundness of this assumption in Section \ref{sect:instrumental_calibration}.
Regarding the astrophysical contributions, $\Upsilon(M,z)$ is the baryon depletion factor, describing how baryons in clusters are depleted with respect to the universal baryon fraction and $B(M,z)$ is the hydrostatic mass bias we have discussed above.
Finally $\Omega_b/\Omega_m$ is the universal baryon fraction, $D_A$ is the angular diameter distance, $f_*$ is the stellar fraction, and $A(z)$ is an angular correction which we show in equation \ref{eq:A_z}
\begin{equation}
\label{eq:A_z}
A(z) = \left (\frac{\theta_{500}^{ref}}{\theta_{500}} \right)^\eta \simeq \left( \frac{H(z)D_A(z)}{\left [H(z)D_A(z) \right]^{ref}} \right)^\eta.
\end{equation}
The parameter $\eta$ accounts for the slope of the $f_{gas}$ profiles enclosed in a spherical shell. Here we take $\eta = 0.442$ from \cite{2014MNRAS.440.2077M}.
However, since $A(z)$ is very close to one for realistic models and within our range of parameters, the value of $\eta$ has a negligible impact on our results.
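Putting the pieces of equation \ref{eq:fgas_model} together, the model can be evaluated at fiducial parameter values. All the numbers below are illustrative choices (they are not the paper's fitted values), with the distance ratio and the angular correction $A(z)$ set to 1, as appropriate in the reference cosmology.

```python
def fgas_model(K=1.0, upsilon=0.79, B=0.8, fb=0.156,
               dA_ratio=1.0, A=1.0, f_star=0.015):
    """Equation (eq:fgas_model) at fiducial parameter values.
    dA_ratio stands for D_A^ref(z)/D_A(z), equal to 1 in the
    reference cosmology; fb is Omega_b/Omega_m."""
    return K * (upsilon / B) * A * fb * dA_ratio ** 1.5 - f_star

f_th = fgas_model()
print(round(f_th, 5))  # 0.13905
```

The resulting $f_{gas,Th}\simeq 0.139$ falls inside the observed range $[0.06, 0.20]$ of the sample, as expected for reasonable parameter values.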
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, trim = {0.5cm 3.5cm 0.25cm 4.5cm}, clip]{ESZ_sample.png}
\caption{{\it Top:} Gas mass fraction of {\it Planck}-ESZ clusters with respect to redshift. {\it Bottom:} Gas mass fraction of {\it Planck}-ESZ clusters with respect to cluster mass. The yellow bands in both plots mark the \cite{2020A&A...641A...6P} value of $\Omega_b/\Omega_m = 0.156 \pm 0.003$.}
\label{fig:fgas_sample}
\end{figure}
\section{Methods}
\label{sect:methods}
\begin{table*}[h!]
\centering
\begin{tabular}{cccc}
\hline \hline
& {\bf Bias evolution study} & {\bf Sample dependence of the results} & {\bf Reference}\\
\hline
Parameter & Prior & Prior & \\
\hline
$B_0$ & -- & $\mathcal{U}(0.3, 1.7)$ & -- \\
$B(z_{CCCP}, M_{CCCP})$ & $\mathcal{N}(0.780, 0.092)$ & -- & 1 \\
$f_*$ & $\mathcal{N}(0.015, 0.005)$ & $\mathcal{N}(0.015, 0.005)$ & 2\\
$\Upsilon_0 $ & $\mathcal{N}(0.79, 0.03)$ & $\mathcal{N}(0.79, 0.03)$ & 3\\
$K$ & $\mathcal{N}(1, 0.1)$ & $\mathcal{N}(1, 0.1)$ & 4 \\
$\sigma_f$ & $\mathcal{U}(0, 1)$ & $\mathcal{U}(0, 1)$ & -- \\
$h$ & $\mathcal{N}(0.674, 0.005)$ & $\mathcal{N}(0.674, 0.005)$ & 5\\
$\Omega_b/\Omega_m$ & $\mathcal{U}(0.05, 0.3)$ & $\mathcal{N}(0.156, 0.003)$ & 5\\
$\Omega_m$ & $\mathcal{U}(0.01, 1)$ {\bf (CB, VB)} or $\mathcal{N}(0.315, 0.007)$ {\bf (VB + $\Omega_m$)} & $\mathcal{N}(0.315, 0.007)$ & 5\\
\hline
$\alpha$ & Fixed at 0 {\bf (CB)} or $\mathcal{U}(-2, 2)$ {\bf (VB, VB + $\Omega_m$)} & $\mathcal{U}(-2, 2)$ & -- \\
$\beta$ & Fixed at 0 {\bf (CB)} or $\mathcal{U}(-2, 2)$ {\bf (VB, VB + $\Omega_m$)} & $\mathcal{U}(-2, 2)$ & -- \\
\hline \hline
\end{tabular}
\tablebib{(1)~\citet{2015MNRAS.449..685H}; (2)\citet{2019A&A...621A..40E}; (3)\citet{2013MNRAS.431.1487P}; (4)\citet{2008MNRAS.383..879A}; (5)\citet{2020A&A...641A...6P}.}
\caption{Set of priors used in our analysis. A prior noted $\mathcal{U}(l, u)$ is a uniform prior of lower bound $l$ and upper bound $u$, while a prior noted $\mathcal{N}(\mu, \sigma)$ is a Gaussian prior of mean $\mu$ and standard deviation $\sigma$.}
\label{tab:priors}
\end{table*}
Our purpose in this work is to use the gas mass fraction of galaxy clusters to constrain the value of the hydrostatic mass bias, and particularly its evolution with mass and redshift.
We also want to study the role of such an evolution of the bias on the subsequent cosmological constraints derived from $f_{gas}$ data.
We recall here that all constraints derived from gas mass fraction data are obtained by comparing the measured gas fraction to the theoretical gas fraction expected from equation \ref{eq:fgas_model}, which is proportional to the universal baryon fraction.
Besides this proportionality, all the other constraints are deduced as corrections needed to match the observed $f_{gas}$ in clusters to the constant $\Omega_b/\Omega_m$.
As discussed in Section \ref{sect:theoretical_model_data}, one of these correction terms accounts for the baryonic effects taking place in the ICM.
These baryonic effects are accounted for by the depletion factor $\Upsilon(M,z)$ and the hydrostatic bias $B(M, z)$, which we are interested in.
As shown in equation \ref{eq:fgas_model}, we cannot constrain $B(M, z)$ and $\Upsilon(M, z)$ independently, as the two parameters are strongly degenerate.
What we have access to instead is the ratio of the two quantities, $\Upsilon(M, z)/B(M, z)$.
In order to break this degeneracy and properly constrain the bias, strong constraints on the depletion factor and its evolution with mass and redshift are required.
Obtaining such results is however beyond the scope of this paper, so we take these constraints from hydrodynamical simulation works.
The depletion factor is known to vary with mass; however, this evolution is mostly significant for groups and low-mass clusters ($<2\times 10^{14} M_\odot$), while it becomes negligible at the high masses we consider (see the discussion in Section 3.1.1 of \cite{2019A&A...621A..40E} based on results from The Three Hundred Project simulations \citep{2018MNRAS.480.2898C}).
The works of \cite{2013MNRAS.431.1487P} and \cite{2013ApJ...777..123B} also show that the depletion factor is constant with redshift when working at $R_{500}$.
Throughout the paper we thus assume a depletion factor constant with mass and redshift, $\Upsilon_0 = 0.79 \pm 0.03$, based on hydrodynamical simulations from \cite{2013MNRAS.431.1487P}.
\subsection{Bias evolution modelling}
In order to analyse a possible mass and redshift evolution of the mass bias, we consider a power-law evolution for the hydrostatic bias, with pivot mass and redshift set at the mean values of the considered cluster sample:
\begin{equation}
B(z, M) = B_0 \left( \frac{M}{\left< M \right>}\right)^\alpha \left( \frac{1+z}{\left< 1+z \right>}\right)^\beta
\label{eq:b_powerlaw}
\end{equation}
where $B_0$ is the amplitude.
We chose this model for the sake of simplicity following what is done in \cite{2019A&A...626A..27S}, as the exact dependence of $B$ on mass and redshift is not known.
In Section \ref{sect:parametrization} we discuss the role of this parametrization by comparing our results with a linear evolution of $B$, and find results similar to those of the power-law description.
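As a minimal numerical sketch of this model (the function name, parameter values, and units are our own illustrative choices):

```python
def bias(M, z, B0, alpha, beta, M_piv, onepz_piv):
    """Power-law hydrostatic bias B(z, M) of eq. (b_powerlaw).
    M_piv and onepz_piv are the sample means <M> and <1+z>."""
    return B0 * (M / M_piv)**alpha * ((1.0 + z) / onepz_piv)**beta
```

At the pivots the model reduces to $B = B_0$, and a negative $\beta$ yields a bias factor $B$ decreasing with redshift.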
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth, trim = {3cm 3cm 1cm 7cm}, clip]{ESZ_sample_wrt_M-zmap_truncated_lovisari_None.png}
\caption{Binned mass-redshift plane of the {\it Planck}-ESZ sample. Inside each bin we compute the mean value of $f_{gas}$. We show the number of clusters included in each bin and mark the delimitation of each subsample.}
\label{fig:binned_Mz}
\end{figure}
The complete likelihood function thus reads:
\begin{strip}
\large
\begin{equation}
\label{eq:likelihood}
- 2 \ln \mathcal{L} = \sum_i \left (\ln{2\pi s_i^2} + \frac{\left(f_{gas, i} - K \frac{\Upsilon_0}{B_0} \left( \frac{M_i}{\left< M \right>}\right)^{-\alpha} \left( \frac{1+z_i}{\left< 1+z \right>}\right)^{-\beta} A(z_i, h, \Omega_m) \frac{\Omega_b}{\Omega_m} \left( \frac{D_A^{ref}(z_i)}{D_A(z_i, h, \Omega_m)}\right)^{3/2} + f_* \right)^2}{s_i^2}\right),
\end{equation}
\end{strip}
where
\begin{equation}
s_i^2 = \sigma_i^2 + f_{gas, Th}^2 \sigma_f^2,
\end{equation}
with $\sigma_i$ the uncertainties on the gas fraction data, and $f_{gas, Th}$ the gas fraction expected from equation \ref{eq:fgas_model}.
The parameter $\sigma_f$ is the intrinsic scatter of the data, which we treat as a free parameter.
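Term by term, equation \ref{eq:likelihood} can be sketched as follows (a minimal numpy illustration with our own variable names; the cosmology-dependent factors $A(z_i, h, \Omega_m)$ and $(D_A^{ref}/D_A)^{3/2}$ are assumed to be precomputed and passed in):

```python
import numpy as np

def minus_two_ln_like(theta, fgas, sigma, M, z, M_piv, onepz_piv, A, dA_ratio):
    """-2 ln L of eq. (likelihood); A and dA_ratio hold the precomputed
    A(z_i, h, Om) and D_A^ref(z_i)/D_A(z_i) factors (assumed inputs)."""
    B0, alpha, beta, fb, K, ups0, fstar, sigma_f = theta
    B = B0 * (M / M_piv)**alpha * ((1.0 + z) / onepz_piv)**beta
    fgas_th = K * ups0 / B * A * fb * dA_ratio**1.5 - fstar
    s2 = sigma**2 + (fgas_th * sigma_f)**2   # eq. for s_i^2
    return np.sum(np.log(2 * np.pi * s2) + (fgas - fgas_th)**2 / s2)
```

Minimising this quantity over the parameters at fixed data is equivalent to maximising the likelihood sampled in our MCMC.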
\subsection{Free cosmology study}
In the first part of our analysis we assess the need for an evolving bias, by looking at the impact of such a variation on the subsequent cosmological constraints.
To do so, we compare the posterior distributions in three different cases:
\begin{itemize}
\item In the first case we leave free the baryon fraction $\Omega_b/\Omega_m$, the matter density $\Omega_m$, and the parameters accounting for the variation of the bias, $\alpha$, $\beta$ and $B_0$.
We will refer to this scenario as "VB".
\item In the second case we leave free only the cosmological parameters and fix the bias parameters to $\alpha=0$ and $\beta=0$, resulting in a bias constant at the value $B_0$.
This scenario will be noted in the rest of the paper as "CB".
\item Due to a degeneracy between $\beta$ and $\Omega_m$ which we discuss later on, we also look at our results when leaving the set of parameters ($B_0, \alpha, \beta, \Omega_b/\Omega_m$) free but assuming a prior on $\Omega_m$, in order to break this degeneracy and constrain $\beta$ more accurately.
We call this scenario "VB + $\Omega_m$".
\end{itemize}
The set of parameters for which we assume flat priors in this part of the study is thus ($B_0, \alpha, \beta, \Omega_b/\Omega_m, \Omega_m, \sigma_f$), as we show in the first column of Table \ref{tab:priors}.
This work is performed on the entirety of our cluster sample, for which the mean mass and redshift are:
\begin{equation}
(M_{full}, z_{full}) = (6.42 \times 10^{14} M_\odot, 0.218)
\label{eq:pivots_full_ESZ}
\end{equation}
Throughout this whole part of the analysis, we consider a prior on the total value of the bias for a given cluster mass and redshift, taken from the CCCP analysis \citep{2015MNRAS.449..685H}:
\begin{equation*}
B(z_{CCCP}, M_{CCCP}) = 0.780 \pm 0.092,
\end{equation*}
where $z_{CCCP} = 0.246$ and $M_{CCCP} = 14.83 \times 10^{14}h^{-1}M_\odot$ are the mean redshift and mass of the CCCP sample.
\subsection{Sample dependence tests}
In a second part, we look into possible sample dependencies of our results regarding the value of the bias parameters $B_0$, $\alpha$ and $\beta$.
To that end we focus on different subsamples inside the main sample, based on mass and redshift selections.
Matching the selection from weak lensing studies like the {\sc CoMaLit} \citep{2017MNRAS.468.3322S} or {\sc LoCuSS} \citep{2016MNRAS.456L..74S} studies, we apply a redshift cut at $z = 0.2$, separating clusters above and below this threshold.
This choice was also motivated by the results from \cite{2019A&A...626A..27S}, which investigated the hydrostatic mass bias from the perspective of tSZ number counts. Their study showed that the trends in the mass bias depended on the considered redshift range, with results changing when considering only clusters with $z>0.2$.
We also perform a mass selection, differentiating between the clusters that are above or below the median mass of the sample $M_{med} = 5.89\times 10^{14}M_\odot$.
In summary, the samples we are considering in this study are the following:
\begin{itemize}
\item The full sample of 120 clusters, with the mean mass and redshift given previously in equation \ref{eq:pivots_full_ESZ}.
\item Clusters with $z < 0.2$ and $M < 5.89\times 10^{14}M_\odot$. We consider them in the "{\it LowMz}" subsample, which contains 47 clusters. The mean mass and redshift are
\begin{equation}
(M_{lowMz}, z_{lowMz}) = (4.26 \times 10^{14} M_\odot, 0.126)
\label{eq:pivots_lowMz}
\end{equation}
\item Clusters with $z > 0.2$ and $M > 5.89\times 10^{14}M_\odot$. We consider them in the "{\it HighMz}" subsample, which contains 45 clusters. The mean mass and redshift are
\begin{equation}
(M_{highMz}, z_{highMz}) = (8.86 \times 10^{14} M_\odot, 0.333)
\label{eq:pivots_highMz}
\end{equation}
\end{itemize}
We do not consider the "Low z - high M" and "High z - low M" subsamples as by construction they do not contain enough clusters (respectively 15 and 13) to obtain meaningful results.
For illustration purposes we show in Figure \ref{fig:binned_Mz} the binned mass-redshift plane of our sample, with the delimitation of the different subsamples.
Inside each bin we compute the mean value of $f_{gas}$ if at least one cluster is inside the bin.
To carry out this study of the sample dependence we compare the posterior distributions obtained when running our MCMC on the three aforementioned samples independently.
Note that we keep the prior on $\Omega_m$ for this part of the study, and that we add a prior on $\Omega_b/\Omega_m$.
This choice is motivated by the presence of a degeneracy between the baryon fraction and the amplitude of the bias $B_0$.
This degeneracy is broken in the first part of the analysis by assuming a prior on the total value of the bias.
However, when trying to compare all the bias parameters between samples (including the amplitude), we do not wish to be dependent on such a prior.
The universal baryon fraction being well known and constrained, such a prior is a convenient way to break the degeneracy between $\Omega_b/\Omega_m$ and $B_0$ and still obtain meaningful results on the value of all the bias parameters.
The set of parameters following flat priors in this part of the study is thus ($B_0, \alpha, \beta, \sigma_f$), as we show in the second column of Table \ref{tab:priors}.
In brief, we adopt the list of priors given in Table \ref{tab:priors} to constrain the parameters used to describe our $f_{gas}$ data.
We fit our model in equation \ref{eq:fgas_model} to the $f_{gas}$ data with an MCMC approach using the sampler {\tt emcee} \citep{2013PASP..125..306F}.
Note that the prior we consider on $f_* = 0.015\pm 0.005$ coming from \cite{2019A&A...621A..40E} has close to no effect on our final results, as this term in equation \ref{eq:fgas_model} is almost negligible.
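As an illustration of how these priors enter the sampling, the first column of Table \ref{tab:priors} (bias evolution study, VB scenario) can be sketched as a log-prior function (a minimal illustration with our own parameter ordering; in the actual analysis the Gaussian CCCP prior applies to the derived quantity $B(z_{CCCP}, M_{CCCP})$ rather than to $B_0$ directly):

```python
import numpy as np

def ln_gauss(x, mu, sig):
    return -0.5 * ((x - mu) / sig)**2

def ln_uniform(x, lo, hi):
    return 0.0 if lo < x < hi else -np.inf

def ln_prior_vb(theta):
    """Log-prior for the VB scenario (first column of Table priors)."""
    alpha, beta, fb, Om, fstar, ups0, K, h, sigma_f = theta
    lp = ln_uniform(alpha, -2, 2) + ln_uniform(beta, -2, 2)
    lp += ln_uniform(fb, 0.05, 0.3) + ln_uniform(Om, 0.01, 1)
    lp += ln_uniform(sigma_f, 0, 1)
    lp += ln_gauss(fstar, 0.015, 0.005) + ln_gauss(ups0, 0.79, 0.03)
    lp += ln_gauss(K, 1.0, 0.1) + ln_gauss(h, 0.674, 0.005)
    return lp
```

Added to the log-likelihood, this defines the log-posterior explored by the {\tt emcee} ensemble sampler.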
\section{Results}
\label{sect:results}
\subsection{Bias evolution study}
\begin{figure*}
\centering
\includegraphics[width=\textwidth, trim = {0.5cm 0.5cm 0.5cm 0.5cm}, clip]{ESZ_contours_power_law_free_cosmo_varying_v_constantB.png}
\caption{1D and 2D posterior distributions for the CB, VB, and VB + $\Omega_m$ scenarios. The contours mark the 68\% and 95\% confidence level (c.l.). The grey dashed lines highlight reference values for $(\alpha, \beta, \Omega_b/\Omega_m, \Omega_m) = (0, 0, 0.156, 0.315)$. The orange bands mark the \cite{2020A&A...641A...6P} values for $\Omega_b/\Omega_m$ and $\Omega_m$ at 2$\sigma$ c.l.}
\label{fig:varying_v_constant}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, trim = {0.5cm 0.5cm 0.5cm 0.5cm}, clip]{ESZ_contours_power_law_free_cosmo_varying_v_constantB_degenzoom.png}
\caption{Posterior distributions showing the degeneracy between $\beta$ and $\Omega_m$. The contours mark the 68\% and 95\% c.l. The grey dashed lines highlight reference values for $(\beta, \Omega_m) = (0, 0.315)$. The orange band marks the \cite{2020A&A...641A...6P} value for $\Omega_m$ at 2$\sigma$ c.l.}
\label{fig:varying_v_constant_degenzoom}
\end{figure}
In the first section of this analysis we study the possibility of an evolution of the hydrostatic mass bias with cluster mass and redshift.
To that purpose we compare the cosmological constraints obtained on the full sample when considering a constant bias to those obtained when leaving the bias free to vary. Our results are summed up in Figure \ref{fig:varying_v_constant} and in Table \ref{tab:results_varying_v_constant}.
\begin{table*}
\centering
\begin{tabular}{cccc}
\hline \hline
{\bf Parameter} & CB & VB & VB + $\Omega_m$ \\
\hline
$\mathbf{B_0}$ & $0.791 \pm 0.091$ & $0.817 \pm 0.095$ & $0.821 \pm 0.096$ \\
$\boldsymbol{\alpha}$ & 0 & $-0.047\pm 0.037$ & $-0.047\pm 0.037$ \\
$\boldsymbol{\beta}$ & 0 & $-0.45^{+0.60}_{-0.38}$ & $-0.64 \pm 0.17$ \\
$\boldsymbol{\Omega_b/\Omega_m}$ & $0.142^{+0.020}_{-0.026}$ & $0.163^{+0.025}_{-0.036}$ & $0.171^{+0.023}_{-0.031}$ \\
$\boldsymbol{\Omega_m}$ & $> 0.849$ & -- & $0.315\pm 0.007$ \\
\hline
\end{tabular}
\caption{Constraints obtained on bias and cosmological parameters in the CB, VB and VB + $\Omega_m$ scenarios. The uncertainties are given at 68\% c.l.}
\label{tab:results_varying_v_constant}
\end{table*}
In the VB case, i.e. when leaving the set of parameters $(B_0, \alpha, \beta, \Omega_b/\Omega_m, \Omega_m, \sigma_f)$ free with a prior on the total value of $B$, we show that a mass-independent bias seems to be favoured, with $\alpha = -0.047\pm 0.037$ compatible with 0, albeit peaking slightly lower.
We also show that our derived $\Omega_b/\Omega_m$ is fully compatible with the \cite{2020A&A...641A...6P} value, as we obtain $\Omega_b/\Omega_m = 0.163^{+0.025}_{-0.036}$.
We cannot, however, place such constraints on $\beta$ and $\Omega_m$, as these two parameters are strongly degenerate.
We zoom in on this degeneracy in Figure \ref{fig:varying_v_constant_degenzoom} and show that higher values of $\beta$ call for higher values of $\Omega_m$.
This degeneracy can be explained by the fact that both parameters encode a redshift dependence of the gas fraction.
Indeed, $\Omega_m$ enters the computation of $D_A(z)$ and $H(z)$, which entirely drive the sensitivity of $f_{gas}$ to cosmology.
We thus argue that the degeneracy between $\beta$ and $\Omega_m$ that we show here in a simple flat $\Lambda \mathrm{CDM}$ model could also be observed between $\beta$ and any other cosmological parameter entering the computation of $D_A(z)$ or $H(z)$.
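This sensitivity can be made explicit: in flat $\Lambda$CDM, $\Omega_m$ enters only through $E(z) = \sqrt{\Omega_m(1+z)^3 + 1 - \Omega_m}$ in $H(z) = H_0 E(z)$ and in the distance integral. A minimal numerical sketch (standard textbook formulae, our own illustrative implementation):

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def D_A(z, Om, H0=67.4, n=2048):
    """Angular diameter distance [Mpc] in flat LambdaCDM,
    via a trapezoidal rule on the comoving distance integral."""
    zs = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1.0 + zs)**3 + 1.0 - Om)
    f = 1.0 / E
    dz = zs[1] - zs[0]
    integral = dz * (f.sum() - 0.5 * (f[0] + f[-1]))
    return C_KMS / H0 * integral / (1.0 + z)
```

At fixed redshift, a larger $\Omega_m$ lowers $D_A$, tilting the predicted $f_{gas}(z)$ in a way that a non-zero $\beta$ can absorb.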
On a side note, we show a slight degeneracy between the baryon fraction $\Omega_b/\Omega_m$ and $\beta$, with a lower $\Omega_b/\Omega_m$ implying a higher $\beta$, closer to 0.
This degeneracy is caused by the combined effect of the degeneracy between $\beta$ and $\Omega_m$, and the degeneracy between $\Omega_b/\Omega_m$ and $\Omega_m$, which is expected and which we also show in Figure \ref{fig:varying_v_constant}.
As the matter density $\Omega_m$ has been strongly constrained in a number of works, we chose to assume the \cite{2020A&A...641A...6P} prior shown in Table \ref{tab:priors} on this parameter to break its degeneracy with $\beta$, in the VB + $\Omega_m$ scenario.
The effect of this prior on the constraints on $\alpha$ and $\Omega_b/\Omega_m$ is negligible, as we obtain $\alpha = -0.047\pm 0.037$ and $\Omega_b/\Omega_m = 0.171^{+0.023}_{-0.031}$, fully compatible with the results obtained without this prior.
On the other hand the use of this prior allows us to constrain $\beta$.
We show that the hydrostatic bias seems to show a strong redshift dependence, with $\beta = -0.64\pm 0.17$.
This value of $\beta < 0$ means that $B$ decreases with redshift, that is, increasingly biased masses towards higher redshifts.
We also note a slight degeneracy between $\alpha$ and $\beta$, but this is most probably due to selection effects of the sample.
As the higher redshift clusters tend to mostly have higher masses (see Figure \ref{fig:binned_Mz}), a redshift trend of the bias could then be interpreted as a slight mass trend, explaining this degeneracy.
The effect of assuming a constant bias ($\alpha = \beta = 0$) in the CB case does not have a strong impact on our constraints on $\Omega_b/\Omega_m$, which peaks just slightly below the {\it Planck} value but remains compatible, with $\Omega_b/\Omega_m = 0.142^{+0.020}_{-0.026}$.
$B_0$ is slightly below yet compatible with the value found in the varying bias case, with $B_0 = 0.791 \pm 0.091$. This is simply caused by the fact that for a constant bias the amplitude $B_0$ is now completely determined by the total value of the bias $B(z_{CCCP}, M_{CCCP})$.
On the other hand, due to the degeneracy between $\beta$ and $\Omega_m$, imposing no redshift evolution of the bias requires a very high matter density, resulting in $\Omega_m > 0.849$, fully incompatible with the {\it Planck} value.
Such a high matter density is not expected in current standard cosmology and is clearly unphysical.
As such, we show that we need to assume an evolution of the hydrostatic mass bias, at least in redshift, to properly describe our $f_{gas}$ data.
In the rest of our study we focus on exploring the bias evolution.
We thus consider only the VB + $\Omega_m$ scenario, to be able to constrain $\beta$ despite its degeneracy with the matter density.
We also assume a \cite{2020A&A...641A...6P} prior on $\Omega_b/\Omega_m$, as the universal baryon fraction is degenerated with the total value of the bias (see equation \ref{eq:fgas_model}).
The bias parameters $(B_0, \alpha, \beta)$ are thus left free, and we chose not to use a prior on the total value of $B$ anymore.
\subsection{Sample dependence of the results}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, trim = {1.25cm 1.25cm 1cm 1cm}, clip]{ESZ_contours_power_law_free_cosmo_lovisari_multithresh_varyingB_cosmopriors.png}
\caption{1D and 2D posterior distributions for the bias parameters in the three mass and redshift selected samples. The levels of the contours mark the 68\% and 95\% confidence levels. The grey dashed lines mark the reference values $(\alpha, \beta) = (0, 0)$.}
\label{fig:contours_sample_dependence}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width = \textwidth, trim = {0.5cm 2cm 0.25cm 0.25cm}, clip]{ESZ_fit_power_law_free_cosmo_lovisari_multithresh_varyingB_cosmopriors.png}
\caption{Fits obtained from our analysis, for the different mass and redshift selected samples. The results in the top two panels are represented with respect to redshift at constant mass (respectively minimal and maximal masses of the full sample), while the bottom two panels are the results presented with respect to mass, at fixed redshift (respectively minimal and maximal redshifts of the full sample). The shaded areas around the curves mark the 68\% and 95\% confidence levels. The blue dashed line marks the reference value $\Omega_b/\Omega_m = 0.156$.}
\label{fig:fits_sample_dependence}
\end{figure*}
While an evolution of the bias seems necessary to properly constrain cosmological parameters using $f_{gas}$ data at $R_{500}$, the strength of this evolution might differ depending on the masses and redshifts of the clusters we consider.
We thus repeat the previous study, this time focusing only on the bias parameters, on the previously defined {\it LowMz} and {\it HighMz} subsamples in addition to the full sample.
We recall that in this section, we are considering the VB + $\Omega_m$ scenario, with an additional prior on $\Omega_b/\Omega_m$.
The summary of our results is given in Table \ref{tab:results_sample_dependence} and Figures \ref{fig:contours_sample_dependence} and \ref{fig:fits_sample_dependence}.
\begin{table*}[h!]
\centering
\begin{tabular}{cccc}
\hline \hline
{\bf Parameter} & LowMz subsample & HighMz subsample & Full sample \\
\hline
$\mathbf{B_0}$ & $0.846^{+0.091}_{-0.10}$ & $0.716\pm 0.081$ & $0.779\pm 0.089$ \\
$\boldsymbol{\alpha}$ & $0.08\pm 0.10$ & $-0.150\pm 0.061$ & $-0.047\pm 0.037$ \\
$\boldsymbol{\beta}$ & $-1.14^{+0.33}_{-0.73}$ & $-0.14\pm 0.23$ & $-0.64 \pm 0.17$ \\
\hline
\end{tabular}
\caption{Constraints obtained on the bias parameters depending on the considered sample. The uncertainties are given at 68\% c.l.}
\label{tab:results_sample_dependence}
\end{table*}
Our results when considering the full sample are, as expected, the same as when adopting flat priors on the cosmological parameters. The only slight exception is $B_0$, peaking slightly lower due to the absence of a prior on the total bias. We find an amplitude $B_{0, full} = 0.779\pm 0.089$.
The parameters accounting for the bias evolution remain unchanged, with $\alpha_{full} = -0.047\pm 0.037$ and $\beta_{full} = -0.64 \pm 0.17$.
We thus remain compatible with no mass evolution of the bias but still find a strong hint for a redshift dependence.
While all subsamples provide compatible values for the amplitude, with $B_{0, lowMz} = 0.846^{+0.091}_{-0.10}$, $B_{0, HighMz} = 0.716\pm 0.081$ and $B_{0, full} = 0.779\pm 0.089$, the same cannot be said of $\alpha$ and $\beta$.
Indeed, while the study of the full sample suggests no mass evolution and a redshift trend of the bias, we observe the reverse behaviour in the {\it HighMz} subsample.
With $\alpha_{highMz} = -0.150\pm 0.061$ and $\beta_{HighMz} = -0.14\pm 0.23$, the preferred scenario is one where $B$ is constant with redshift yet decreases with cluster mass.
On the other end of the mass-redshift plane, the results show exactly the opposite evolution, in agreement with the constraints from the full sample but with amplified trends.
With $\alpha_{LowMz} = 0.08\pm0.10$ and $\beta_{LowMz} = -1.14^{+0.33}_{-0.73}$, we find full compatibility with no mass evolution of the bias, even though the posterior of $\alpha$ peaks slightly above 0, contrary to the other samples.
More importantly, we show a strong decreasing trend of $B$ with redshift, as we obtain $\beta$ peaking below -1.
We note that the posterior distribution for this subsample is significantly wider than for the full sample or the {\it HighMz} one.
This is due to the smaller mass and redshift range of this sample (see Figure \ref{fig:binned_Mz}), diminishing its constraining power with respect to the other two selections.
This is however sufficient to highlight a redshift dependence of the bias when considering the least massive clusters of our sample at the lowest redshifts.
In Figure \ref{fig:fits_sample_dependence} we show what these values of the bias parameters translate to in terms of gas fraction with respect to redshift and mass.
We show these fits computed at $[M_{min}, M_{max}]$ with $z$ free, and at $[z_{min}, z_{max}]$ with $M$ free, taking into account the uncertainties at 68\% and 95\% c.l.
We first notice what was highlighted in the contours of Figure \ref{fig:contours_sample_dependence}, which is that the results obtained for the full sample mainly fall in between the two extreme cases of the {\it LowMz} and {\it HighMz} subsamples.
Secondly, we show that the incompatibility between the two smaller subsamples is clearly visible in the fits.
In the bottom two panels showing the $f_{gas}(M)$ relation, the {\it LowMz} and {\it HighMz} subsamples even seem to exhibit opposite trends, similarly to what was seen in Figure \ref{fig:contours_sample_dependence}.
Finally we note an offset in the relative positions of the curves for the two subsamples, depending on mass and redshift.
This is simply due to the fact that our model describes a simultaneous evolution of the bias both with mass and redshift, which happens to be non-zero in our case.
In summary we thus claim to find strong evidence for a sample dependence of our results regarding the mass and redshift evolution of the hydrostatic mass bias.
Such sample dependence had already been noted in other works studying the evolution of the bias using tSZ cluster counts, but not when using $f_{gas}$ data.
\section{Discussion}
\label{sect:discussion}
As we have shown in the previous section, considering an evolution of the bias seems to be necessary to infer sensible cosmological constraints.
On the other hand, the variation of the bias that we measure is very dependent on the sample we consider, as we show different trends of the bias depending on mass and redshift selections inside our main sample.
We discuss these results here, trying to take into account all the systematic effects that could appear and bias our results.
We also compare our findings to previous studies which focused on the bias and its evolution, from different probes.
\subsection{Possible sources of systematic effects}
\subsubsection{Instrumental calibration effects}
\label{sect:instrumental_calibration}
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth, trim = {0.25cm 1cm 0cm 0.1cm}, clip]{ESZ_samples_XMM_v_Chandra.png}
\caption{Comparison of the total masses inside the {\it Planck}-ESZ sample between XMM-{\it Newton} and {\it Chandra}.}
\label{fig:XMM_v_Chandra}
\end{figure}
All of the masses used in this work were taken from \cite{2020ApJ...892..102L}, where the authors used XMM-{\it Newton} observations.
Possible calibration effects may have impacted the mass measurements in their work and induced biases (see e.g. \cite{2013ApJ...767..116M} or \cite{2015A&A...575A..30S}).
This effect is taken into account in our analysis through the parameter $K=1.0 \pm 0.1$, but we check here whether this assumption is sound.
The clusters of the {\it Planck}-ESZ sample have also been observed with {\it Chandra} \citep{2021ApJ...914...58A}.
From their work we retrieve their total masses and compare them to the XMM-{\it Newton} masses.
The result of the comparison is shown in Figure \ref{fig:XMM_v_Chandra}.
We perform a fit of the point cloud using a simple orthogonal distance regression method with a linear model.
Similarly to the model we defined in equation \ref{eq:b_powerlaw} when studying the evolution of the bias, we set a pivot to the mean value of the {\it Chandra} masses, $\left <M_{Chandra} \right > = 6.32 \times 10^{14}M_\odot$.
Note that the results we present below do not change when putting the pivot at the median mass instead of the mean.
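As a minimal sketch of such a linear fit (here an unweighted total-least-squares solution via SVD, which minimises orthogonal distances as ODR does but ignores the per-cluster measurement uncertainties used in the actual fit):

```python
import numpy as np

def tls_line(x, y):
    """Unweighted orthogonal (total least squares) straight-line fit:
    returns (slope, intercept) minimising orthogonal distances."""
    xm, ym = x.mean(), y.mean()
    X = np.column_stack([x - xm, y - ym])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    a, b = vt[0]                 # principal direction of the point cloud
    slope = b / a
    return slope, ym - slope * xm
```

For exactly collinear points this recovers the slope and intercept of the underlying line; with scattered data it weights both coordinates symmetrically, unlike an ordinary least-squares regression of $y$ on $x$.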
By doing this we obtain the following relation:
\begin{equation}
M_{\mathrm{tot}, XMM} = (1.06 \pm 0.04) M_{\mathrm{tot}, Chandra} - (0.75 \pm 0.10) \times 10^{14}M_\odot
\end{equation}
This relation might imply not only that the masses from XMM-{\it Newton} could be biased low with respect to {\it Chandra}, with a non-zero offset, but more importantly that the calibration bias could exhibit a mass-dependent behaviour.
This could have an effect regarding the mass trends of the hydrostatic mass bias.
We however note that this mass dependence seems to be driven by the most massive cluster of the sample, {\sc PLCKESZ G006.78+30.46}, or A2163.
Indeed, with its small error bars and its isolated position at the high-mass end of the sample, this cluster might artificially drive the mass dependence of the calibration bias towards non-zero values.
In addition, this cluster is currently undergoing merger activity and is the hottest cluster of the Abell catalogue \citep{2008A&A...481..593M}.
As the difference between {\it Chandra} and XMM-{\it Newton} temperatures increases with cluster temperature \citep{2015A&A...575A..30S}, and the cluster masses are measured using temperature profiles, this cluster in particular is not well suited to measure a global difference in the masses from the two instruments.
We therefore perform the exact same fitting method but after removing {\sc PLCKESZ G006.78+30.46} from our sample, and show the results as the green line in Figure \ref{fig:XMM_v_Chandra}.
We obtain the following relation:
\begin{equation}
M_{\mathrm{tot}, XMM} = (1.00 \pm 0.05) M_{\mathrm{tot}, Chandra} - (0.47 \pm 0.10) \times 10^{14}M_\odot
\end{equation}
We show this time that we are fully compatible with a mass-independent mass calibration bias.
We still however observe an offset, with the masses from XMM-{\it Newton} being globally lower than the {\it Chandra} masses.
This result had already been shown previously (see Appendix D of \cite{2015A&A...575A..30S}).
For the majority of the sample, this offset is however well accounted for by the 10\% uncertainty we allow in the prior on $K$.
34 clusters (roughly 28\% of the main sample) lie outside the 10\% tolerance allowed by the prior; they however remain within a 20\% tolerance.
We however point out to the reader that this comparison has been carried out only on the total masses, while our study focuses on the gas mass fraction.
To be able to fully rule out possible biases from instrumental effects we would need to compare the gas mass fractions obtained from both observations rather than the total masses.
Unfortunately as we do not have access to the gas masses from the {\it Chandra} observations, we can only assume that the compatibility we see for $M_{tot}$ stays true for $f_{gas}$.
We therefore assume that if calibration biases are affecting our study, they have only minor effects on our results.
\subsubsection{Role of the parametrization}
\label{sect:parametrization}
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth, trim = {1.2cm 1.2cm 1cm 1cm}, clip]{ESZ_contours_free_cosmo_varying_v_constantB.png}
\caption{1D and 2D posteriors obtained when comparing the CB, VB and VB + $\Omega_m$ scenarios with a linear parametrization. The contours mark the 68\% and 95\% c.l. and the grey dashed lines highlight reference values for $(B_1, \Omega_b/\Omega_m, \Omega_m) = (0, 0.156, 0.315)$. The orange bands mark the \cite{2020A&A...641A...6P} values for $\Omega_b/\Omega_m$ and $\Omega_m$ at 2$\sigma$ c.l.}
\label{fig:constant_v_varying_linear}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth, trim = {0.75cm 0.75cm 0.5cm 0.75cm}, clip]{ESZ_contours_Planck18_lovisari_multithresh_None.png}
\caption{1D and 2D posteriors obtained when comparing the bias parameters derived for the three samples, when considering a linear evolution of the bias. The contours mark the 68\% and 95\% c.l.}
\label{fig:sample_dependence_linear}
\end{figure}
\begin{table*} [h!]
\centering
\begin{tabular}{ccccc}
\hline \hline
& $B_0$ & $B_1$ & $\Omega_b/\Omega_m$ & $\Omega_m$ \\
\hline
{\bf CB} & $0.795\pm 0.088$ & 0 & $0.140\pm0.018$ & $> 0.843$ \\
{\bf VB} & $0.807\pm 0.092$ & $-0.22^{+0.28}_{-0.13}$ & $0.151^{+0.019}_{-0.025}$ & -- \\
{\bf VB + $\Omega_m$} & $0.816\pm 0.092$ & $-0.47^{+0.11}_{-0.09}$ & $0.165^{+0.019}_{-0.022}$ & $0.315\pm 0.007$ {\it (prior)}\\
\hline
{\bf LowMz subsample} & $0.857^{+0.041}_{-0.056}$ & $-0.79^{+0.31}_{-0.53}$ & $0.156\pm 0.003$ {\it (prior)}& $0.315\pm 0.007$ {\it (prior)}\\
{\bf HighMz subsample} & $0.745_{-0.059}^{+0.018}$ & $-0.12^{+0.11}_{-0.15}$ & $0.156\pm 0.003$ {\it (prior)}& $0.315\pm 0.007$ {\it (prior)}\\
{\bf Full sample} & $0.799^{+0.033}_{-0.050}$ & $-0.456^{+0.08}_{-0.09}$ & $0.156\pm 0.003$ {\it (prior)}& $0.315\pm 0.007$ {\it (prior)}\\
\hline \hline
\end{tabular}
\caption{Constraints obtained on bias and cosmological parameters, when assuming a linear parametrization. The uncertainties are given at 68\% c.l. We recall that when investigating the sample dependence (second part of the table), we are considering the VB + $\Omega_m$ scenario with a prior on $\Omega_b/\Omega_m$.}
\label{tab:results_linear}
\end{table*}
In a previous work from \cite{2022EPJWC.25700046W} we investigated the evolution of $B$ with redshift, using a linear parametrization for the evolution of the bias
\begin{equation}
B(z) = B_0 + B_1(z-\left < z \right>)
\end{equation}
instead of a power-law.
We compare here the results given by this choice of parametrization to the results we got for a power-law model.
In this comparison of the linear case with the power law case we focus on the redshift dependence, as we did not find strong evidence for a mass evolution of the bias in the majority of our samples.
We show in Figures \ref{fig:constant_v_varying_linear}, \ref{fig:sample_dependence_linear} and in Table \ref{tab:results_linear} that the results are qualitatively consistent between the power-law and linear descriptions.
Indeed in both cases when considering the VB + $\Omega_m$ scenario we observe a strong sample dependence of the results.
As a matter of fact, we show in Figure \ref{fig:sample_dependence_linear} that the low redshift and low mass clusters strongly favour a non-zero slope $B_1 = -0.79^{+0.31}_{-0.53}$.
On the high end of the mass-redshift plane we however find results consistent with the power-law case, compatible with no redshift evolution of $B$, as we find $B_1 = -0.12_{-0.15}^{+0.11}$.
Our estimates of $B_0$ are also consistent between the power-law case and the linear case for each subsample respectively, as we find $B_{0, lowMz} = 0.857_{-0.056}^{+0.041}$, $B_{0, highMz} = 0.745_{-0.059}^{+0.018}$ and $B_{0, full} = 0.799_{-0.050}^{+0.033}$.
When studying the VB scenario, we still observe the strong degeneracy between the term accounting for the redshift evolution of $B$ (here, $B_1$) and $\Omega_m$.
This explains why we once again obtain aberrant values of $\Omega_m$ when considering the CB scenario.
This compatibility between the qualitative results given by both parametrizations hints that our results are not dominated by our choice of model for the bias evolution.
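As a minimal numerical sketch of the linear parametrization above, we can evaluate $B(z)$ with the full-sample values of Table~\ref{tab:results_linear}. The function name and the pivot redshift $\left<z\right>$ used below are our own illustrative assumptions, not values quoted in this work:

```python
# Evaluating the linear bias parametrization B(z) = B_0 + B_1 (z - <z>) with the
# full-sample values of Table (tab:results_linear). The pivot <z> below is a
# placeholder assumption, not the actual mean redshift of the sample.

def bias_linear(z, b0, b1, z_mean):
    return b0 + b1 * (z - z_mean)

b0, b1 = 0.799, -0.456      # full-sample values from the table
z_mean = 0.25               # assumed pivot, for illustration only

for z in (0.1, 0.25, 0.4):
    print(f"B({z}) = {bias_linear(z, b0, b1, z_mean):.3f}")
```

At the pivot the bias equals $B_0$; the negative $B_1$ then drives $B$ downwards with redshift, as discussed above.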
\subsection{Comparison with other studies}
\subsubsection{Mass and redshift trends of the bias}
\begin{figure}
\centering
\includegraphics[width= 0.5\textwidth, trim={1cm 1cm 3cm 3cm}, clip]{PSZ2Cosmo_v_ESZ.png}
\caption{Comparison between the PSZ2Cosmo sample used in \cite{2019A&A...626A..27S} and the {\it Planck}-ESZ sample.}
\label{fig:PSZ2C_v_ESZ}
\end{figure}
We show in this work a strong sample dependence of our results, with different mass and redshift variations of the bias depending on the mass and redshift range of the subsample.
We show that for low redshift and low mass clusters, the bias tends to stay constant with mass, while it increases towards higher redshift.
The reverse behaviour is observed in the case of high masses and high redshift clusters, with a bias constant with redshift but slightly increasing with the mass of the objects.
This sample dependence had already been noted in \cite{2019A&A...626A..27S}, where the authors used tSZ number counts on the cosmology sample of the PSZ2 catalog.
The authors indeed noted a non-zero trend of the bias with redshift when considering their complete sample, that disappeared when considering only the clusters above $z = 0.2$.
We however note that the compatibility of our results with theirs regarding the mass and redshift trends depends on the considered subsample of clusters.
We observe the same behaviour regarding the value of the constant term of the bias $B_0$, with compatible results for the high redshift clusters but incompatible ones when considering the full sample.
One might argue that these differences come from different choices of priors in the two studies.
Indeed when investigating sample dependencies we do not assume any prior on the bias parameters and consider priors on cosmology, while \cite{2019A&A...626A..27S} assume a prior on the total value of the bias, and let their cosmological parameters free.
This however might not have such a strong impact, as in the first part of the analysis we let cosmological parameters free and put a prior on the total value of the bias, and we do not see significant deviations in the value of $B_0$ between the two cases.
These differences are actually most probably due to differences between the clusters considered in the two studies, as the clusters from the ESZ sample have globally higher mass at equivalent redshift than the PSZ2Cosmo sample considered in their study, as we show in Figure \ref{fig:PSZ2C_v_ESZ}.
If anything, this could be yet another indication of the strong sample dependence of the results we show.
Our results are however consistent with other weak-lensing studies, among which Weighing the Giants (WtG; \cite{2014MNRAS.443.1973V}) and CCCP \citep{2015MNRAS.449..685H}, which show a mild decreasing trend in mass for $B$ when considering high-redshift ($z_{WtG} = 0.31$, $z_{CCCP} = 0.246$) and high-mass ($M_{500, WtG} = 13.58\times10^{14}M_\odot$, $M_{500, CCCP} =10.38\times10^{14}M_\odot$) clusters.
More recent works like the X-ray study X-COP \citep{2019A&A...621A..40E} also seem to show a possible mass dependent bias, with a decreasing trend of $B$.
The weak-lensing studies {\sc LoCuSS} \citep{2016MNRAS.456L..74S} and {\sc CoMaLit} \citep{2017MNRAS.468.3322S} both find decreasing trends of $B$ with redshift, in agreement with this work, for clusters in the mass and redshift range of our sample ($z_{\rm LoCuSS} = 0.22$, $M_{\rm LoCuSS} = 6.8\times10^{14}M_\odot$).
\subsubsection{Discussing the choice of a sample at $R_{500}$}
\label{sect:comparison_R2500}
In this work we focus on gas fractions measured at $R_{500}$, a larger radius than the $R_{2500}$ adopted by most works using $f_{gas}$ as a cosmological probe.
The choice of $R_{2500}$ is generally motivated by the low scatter of the gas fraction data around those radii (see \cite{2014MNRAS.440.2077M} and references therein), allowing for more precise cosmological constraints.
The scatter of the $f_{gas}$ data is larger at $R_{500}$; however, this inconvenience is balanced by the stability of the depletion factor $\Upsilon$ at this radius.
Indeed as shown by the hydrodynamical simulations from \cite{2013MNRAS.431.1487P}, the value of $\Upsilon$ does not vary much with the radius when one measures it around $R_{500}$.
This reduces the possibility of a biased estimation of the depletion due to incorrect determinations of $R_{500}$.
On the other hand $\Upsilon$ starts to decrease in the vicinity of $R_{2500}$.
As a consequence if the radius is not properly measured, due for instance to erroneous estimations of the density contrast, the estimation of the depletion will be biased.
Since our purpose in this study is to constrain the hydrostatic mass bias and its evolution, we need a robust prior on the depletion factor, due to the degeneracy between $\Upsilon$ and $B$.
A bias in the value of the depletion due to an incorrectly measured radius would then impact our results on the mean value and evolution of $B$.
\subsection{Discussing the tension on the bias value}
\begin{figure}
\centering
\includegraphics[width= 0.5\textwidth, trim={0.45cm 0cm 0cm 0.35cm}, clip]{ESZ_BMz.png}
\caption{Comparison of our value of the mass bias depending on mass and redshift with other X-ray, weak lensing and hydrodynamical simulation works.
In both panels, the shaded areas mark the 1 and 2$\sigma$ confidence levels and
the grey band marks the value preferred by CMB observations of $B = 0.62 \pm 0.03$.
The half-filled circles are hydrodynamical simulation works, other points are works from observations.\\
{\it Top :} Value of the bias depending on redshift, which we computed at the mean mass of our sample. \\
{\it Bottom : } Value of the bias depending on mass, which we computed at the mean redshift of our sample.\\
{\bf References :} \cite{2012NJPh...14e5018R}, \cite{2013ApJ...767..116M}, \cite{2014MNRAS.443.1973V}, \cite{2014MNRAS.442.1507G}, \cite{2015MNRAS.449..685H}, \cite{2015MNRAS.448..814I}, \cite{2016ApJ...827..112B}, \cite{2016MNRAS.461.3794O}, \cite{2016MNRAS.456L..74S}, \cite{2017MNRAS.472.1946S}, \cite{2017MNRAS.469.3069G}, \cite{2018MNRAS.478..638J}, \cite{2018PASJ...70S..28M}, \cite{2019A&A...621A..40E}, \cite{2019A&A...626A..27S}, \cite{2020MNRAS.497.4684H}, \cite{2020A&A...641A...6P}, \cite{2021MNRAS.502.5115G}}
\label{fig:B(z, M)}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth, trim = {0.75cm 0.65cm 2cm 1.75cm}, clip]{fbaresz1.png}
\caption{Comparison of the expected baryon fraction $\Omega_b/\Omega_m$ between the bias we derived in the VB + $\Omega_m$ scenario and $B = 0.62$ from \citep{2020A&A...641A...6P}.}
\label{fig:ObOm_predictions_B0.6}
\end{figure}
Finally, we highlight that the value of $B_0 = 0.779 \pm 0.089$ we found for the full sample is in agreement with a collection of other studies including the aforementioned weak-lensing and X-ray studies, and with works from hydrodynamical simulations, as we show in Figure \ref{fig:B(z, M)}.
The works shown in Figure \ref{fig:B(z, M)} are those for which the bias is known at $R_{500}$ with a central value and error bars, where the mean mass and redshift of the samples are available.
We show however that this value is incompatible with the values $B_0 = 0.62 \pm 0.03$ from \cite{2020A&A...641A...6P}, or $B_0 \lesssim 0.67$ from \cite{2018A&A...614A..13S}, needed to reconcile local measurements of the bias with the combination of CMB and tSZ number counts measurements.
Indeed we show in Figure \ref{fig:B(z, M)}, that the tension is alleviated only for the highest redshifts, and more particularly for the highest masses, for clusters with $M \gtrsim 10^{15}M_\odot$.
Similarly the X-COP study \citep{2019A&A...621A..40E} shows that assuming a bias $B = 0.58 \pm 0.04$ from \cite{2016A&A...594A..24P} results in gas fractions significantly lower than expected, as they find a median gas fraction of their sample $f_{gas, B=0.58} = 0.108\pm 0.006$.
This would imply that the galaxy clusters from their sample are missing about a third of their gas.
A low value of the bias thus seems highly unlikely seeing its implications on the gas content of galaxy clusters.
We show a similar result in this work.
Indeed, from equation \ref{eq:fgas_model}, it is possible to compute the expected universal baryon fraction for a given value of the bias:
\begin{equation}
\label{eq:Ob_Om_wrt_B}
\frac{\Omega_b}{\Omega_m} = \left(f_{gas} + f_* \right) \frac{1}{KA(z)}\frac{B(M,z)}{\Upsilon(M,z)}\left( \frac{D_A(z)}{D_A^{ref}(z)}\right)^{3/2},
\end{equation}
meaning $\Omega_b/\Omega_m \propto B$.
When assuming a low value of the bias $B = 0.62 \pm 0.03$ from \cite{2020A&A...641A...6P}, we show that the derived baryon fraction becomes $\Omega_b/\Omega_m = 0.124 \pm 0.015$, which is incompatible with the value of the baryon fraction derived from the CMB, $\Omega_b/\Omega_m = 0.156 \pm 0.003$ \citep{2020A&A...641A...6P}, as we show in Figure \ref{fig:ObOm_predictions_B0.6}.
Assuming a low value of the bias would then imply that the universe hosts roughly 20\% less baryons than expected.
We thus argue that a value of the bias $B = 0.62 \pm 0.03$ seems highly unlikely provided our gas fraction data.
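As a sanity check on the numbers above, the proportionality $\Omega_b/\Omega_m \propto B$ from Eq.~(\ref{eq:Ob_Om_wrt_B}) can be illustrated with a short computation (the function name is ours; all other factors of the equation are held fixed):

```python
# Since every other factor in the gas fraction model is held fixed, the implied
# universal baryon fraction scales linearly with the adopted bias B.

def rescale_baryon_fraction(fb_ref, b_new, b_ref):
    """Baryon fraction implied by bias b_new, given that bias b_ref
    reproduces the reference fraction fb_ref."""
    return fb_ref * b_new / b_ref

fb_cmb = 0.156    # Omega_b / Omega_m from the CMB (Planck 2020)
b_full = 0.779    # bias found for the full sample in this work
b_low = 0.62      # low bias needed to reconcile CMB and tSZ counts

print(round(rescale_baryon_fraction(fb_cmb, b_low, b_full), 3))   # 0.124
```

This reproduces the $\Omega_b/\Omega_m = 0.124$ quoted above, roughly 20\% below the CMB value.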
\section{Conclusion}
\label{sect:conclusions}
We measured the gas mass fraction of galaxy clusters at $R_{500}$, and used it to constrain a possible mass and redshift evolution of the hydrostatic mass bias.
To do so, we compared the cosmological constraints we obtained when assuming a varying bias to those we obtained when assuming a constant $B$.
We show that assuming a redshift evolution seems necessary when performing a cosmological analysis using $f_{gas}$ data at $R_{500}$.
Indeed we show a significant degeneracy between the redshift dependence of the bias $\beta$ and the matter density $\Omega_m$.
This degeneracy implies that values of $\beta$ close to zero push the matter density to higher values.
As a consequence, assuming a constant bias implies $\Omega_m > 0.849$, which is aberrant.
Forcing $\Omega_m$ to take sensible values by imposing a \cite{2020A&A...641A...6P} prior in turn yields $\beta = -0.64 \pm 0.17$, in a $3.8 \sigma$ tension with 0.
We show however that these results are strongly dependent on the considered sample.
Indeed for the least massive clusters of our sample at the lowest redshifts, we show an important decreasing trend of $B$ with redshift and no trend with mass, with the set $(\alpha, \beta) = (0.08 \pm 0.10, -1.14_{-0.73}^{+0.33})$.
On the other hand, for the most massive clusters at highest redshifts we observe no variation with redshift but are favoring a decreasing evolution with mass, with $(\alpha, \beta) = (-0.150 \pm 0.061, -0.14 \pm 0.23)$.
When we consider the full sample, the results we obtain lie in between these two extremes, largely favouring a decreasing trend of $B$ with redshift yet remaining compatible with no mass trend of the bias, as we obtain $(\alpha, \beta) = (-0.047 \pm 0.037, -0.64 \pm 0.17)$.
We remind however that other selection effects might affect our results, as the clusters at the highest redshifts are also generally the most massive.
In order to identify and rule out different sources of systematic effects in our study, we looked at the possibility of being subject to instrumental calibration effects.
Using mass measurements of the galaxy clusters in our sample both from {\it Chandra} and XMM-{\it Newton}, we find no evidence of a bias which would significantly change our results.
In this study we assumed a power-law description for the evolution of the bias.
We look at the effect of this choice of parametrization by comparing our results to those we obtain when assuming a linear evolution of $B$ with redshift.
We find no major difference with the power-law case, as we still observe a strong degeneracy between the redshift evolution of $B$ and $\Omega_m$, favoring an evolution of the bias with redshift when considering a {\it Planck} prior on $\Omega_m$.
Furthermore the sample dependence we observe for the power-law case holds true when assuming a linear evolution of the bias.
Despite these results, we stress that we remain compatible with a collection of X-ray, weak lensing, and hydrodynamical simulation works regarding the mean value of the bias, as we find $B = 0.779 \pm 0.089$.
This value however remains in tension with the value required by the combination of CMB observations and tSZ cluster counts to alleviate the tension on $\sigma_8$.
Finally, we remind that this work focused on gas fractions taken at $R_{500}$, and tried to obtain constraints on two cosmological parameters, the universal baryon fraction $\Omega_b/\Omega_m$ and the matter density of the universe $\Omega_m$, which are the two parameters mainly probed by $f_{gas}$.
We thus argue that the gas fraction can be used to put constraints on the cosmological model, albeit provided that one properly models the gas effects taking place inside clusters, and provided that $f_{gas}$ is used in combination with other probes.
\begin{acknowledgements}
The authors acknowledge the fruitful discussions and comments from Hideki Tanimura, Daniela Galárraga-Espinosa, Edouard Lecoq, Joseph Kuruvilla and Celine Gouin.
The authors also thank the organisers and participants of the 2021 edition of the "Observing the mm Universe with the NIKA2 Camera" conference for their useful comments and discussions. RW acknowledges financial support from the Ecole Doctorale d'Astronomie et d'Astrophysique d'Ile-de-France (ED AAIF).
This research has made use of the SZ-Cluster Database operated by the Integrated Data and Operation Center (IDOC) at the Institut d'Astrophysique Spatiale (IAS) under contract with CNES and CNRS.
This project was carried out using the Python libraries {\tt matplotlib} \citep{2007CSE.....9...90H}, {\tt numpy} \citep{harris2020array}, {\tt astropy} (\cite{astropy:2013}, \cite{astropy:2018}) and {\tt pandas} \citep{jeff_reback_2021_4681666}. It also made use of the Python library for MCMC sampling {\tt emcee} \citep{2013PASP..125..306F}, and of the {\tt getdist} \citep{2019arXiv191013970L} package to read posterior distributions.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The radio-frequency portion of the electromagnetic spectrum, which ranges from about 3~kHz to 300~GHz, is inevitably a limited resource, and a huge share of the part considered exploitable has already been allocated to licensed owners.
Today's ever-increasing demand for radio resources, which is likely to outstrip the available spectrum, calls for novel ideas and techniques to overcome the traditional barriers in spectrum exploitation.
One notable fact is that most of the licensed radio spectrum is severely underutilized \cite{Mishra04,Chiang2007,fcc,ofcom,Haykin05}.
As reported by the Federal Communications Commission (FCC), the shortage of spectrum
resources can mainly
be attributed to the
\textit{spectrum access
problem}, rather than the \textit{spectrum crowdedness} \cite{fcc}.
In response to the
spectrum scarcity
challenge, the idea of using cognitive radios
(CRs), which
was first proposed by Mitola and Maguire in
1999~\cite{Mitola99},
has been found to be a promising approach.
In the hierarchical access model for dynamic spectrum access,
the spectrum users are classified into
two types: licensees and
non-licensees~\cite{survey.dsa}.
A licensee
has the exclusive right to access its
licensed band.
In the jargon of CR, it is commonly referred to as a \textit{primary user} (PU),
and accordingly, a non-licensee
is called a \textit{secondary user} (SU).
CR is, in fact, a paradigm
for enabling the spectral coexistence
between the PUs and the SUs.
The PUs have to be
protected from harmful interference,
i.e., the activities of the SUs
have to be made transparent
to them.
Over the time, different approaches
have been suggested
to establish coexistence between
PUs and SUs.
%
In all of these approaches,
the ultimate goal is to provide
spectrum access to
SUs without
introducing any disturbance to the
normal operation of PUs.
In the
most
widely adopted approach,
known as the
\textit{interweave} paradigm,
the aim is to
provide SUs with an
\textit{opportunistic} spectrum
access.
Therefore, SUs
utilize the temporarily
vacant portions of
the spectrum, referred to as \textit{spectrum holes} (or white spaces).
%
CR technology has its own intricacies and challenges.
Efforts
are needed to address these
challenging problems.
%
One of these challenges is
to perform spectrum allocation to
SUs in an effective manner.
%
%
{Several variants of
the problem of spectrum allocation,
also referred to as spectrum
assignment
and frequency assignment,
have already been
investigated in the
literature
(see \cite{Tragos13,Hu2018,survey3} and references therein).
%
%
In this contribution,
we consider a variant of
the problem first
introduced by
Martinovic~\textit{et~al.}~in \cite{ilp}.
%
%
The distinguishing feature of this
version of the problem is that, for each
SU,
it takes into account the
requirement of
closeness of the
to-be-aggregated spectrum holes.}
In fact,
since a single
spectrum hole may be too small to
satisfy the spectrum demand of an
SU,
we have to resort to technologies
such as software defined radio
to glue these unoccupied intervals
together.
However, only a few
research works
have taken users' hardware limitations
into account when aggregating spectrum holes~\cite{ilp}.
In fact, it may be the case that
the spectrum holes are spread across a wide frequency range,
but due to
certain technological restrictions,
only those
that lie in a certain
specified distance of each other
can be considered to be aggregated.
This practical concern can be taken into account by considering per-user
upper-bound constraints in the
problem formulation.
As in~\cite{ilp},
we will
refer to
this per-user upper-bound
as the
\textit{maximal aggregation range~(MAR)}.
The MAR constraints
were taken into account
for the first time in~\cite{Martinovic16},
but for the special case of SUs
of the same spectrum requirement.
In~\cite{ilp},
the authors have considered
a more general setting in which
the SUs are heterogeneous
in the sense that
they
may have different
spectrum demands and MARs.
They have proposed
two different 0-1
integer linear
programming
(ILP) formulations
for this more general version of the
problem.
The resulting models can then
be fed to a
general-purpose ILP solver (e.g., CPLEX and GUROBI).
Typically, CR resource management
problems are computationally hard.
This is also the case for the problem
considered in this paper.
Methods for solving
computationally hard
problems can
generally
be categorized
into two main groups:
exact and inexact.
An exact approach
is guaranteed to return an optimal
solution if
one exists (see, e.g.,
\cite{ilp,Martinovic16}).
%
An inexact approach, on the other hand,
can return a
satisfactory%
---hopefully optimal or near-to-optimal---%
solution
in a reasonable (polynomial) time
(see, e.g., \cite{mathprog-2,mathprog-5}).
%
%
The branch-and-price~(B\&P) routine described
in the present paper is
an exact approach.
Tools and techniques originating from the discipline of mathematical programming have proven to be very useful in solving computationally hard CR resource management problems, in both exact and inexact manners~(see, e.g.,~\cite{ilp,Martinovic16,mathprog-2,mathprog-5}).
Specifically, many computationally hard combinatorial (discrete) optimization problems are naturally expressible as integer linear programs~\cite{ilp,Martinovic16}.
%
%
Generally, such a problem can be reduced (formulated, modeled) as an integer linear program in various ways~\cite{ilp,Martinovic16,applied}.
The main accomplishment of this paper is, firstly, the introduction of a novel 0-1~ILP formulation for the above-mentioned variant of the cognitive radio resource allocation problem, which is hereinafter referred to as the \textit{MAR-Constrained Hole Assignment Problem (MC-HAP)}.
The aim is to maximize the spectrum utilization subject to the constraints imposed by hardware limitations.
Because of the (potentially) huge number
of decision variables,
the
associated
0-1 integer linear programs
cannot be described explicitly,
and therefore
cannot be fed to an ILP solver.
Hence,
for solving these programs,
we resort to
the well-known framework of
B\&P~\cite{dual-2,branching,dual-4,applied}.
This is, in fact,
a linear-programming-relaxation-based branch-and-bound
(B\&B)
framework within which
the linear programming (LP)
instances are solved using
the so-called (delayed)
column generation
method.
%
Our numerical results show that
the formulation yields a
very tight (strong)
LP relaxation
(but, as stated earlier,
at the expense of
a potentially huge number of binary
decision variables).
%
%
This leads to a very effective
pruning of the search space.
As evidenced by the
simulation results,
the
proposed B\&P approach
exhibits a superior performance
in terms of the
required CPU time
compared
to the best currently available
0-1~ILP formulation of the problem,
which is presented
in~\cite{ilp}.
It is worth noting that
the MC-HAP{}
has a lot in common
with the Generalized Assignment Problem (GAP). The GAP
can be described as
finding a maximum profit assignment of
$ n $ jobs to $ m $ ($ n\geq m $)
agents such that
each job is assigned to exactly one agent,
and that each agent is permitted to be assigned to more than one job, subject to its capacity limitation~\cite{applied,gap-survay,gap-bnp}.
From the perspective
of the GAP,
each SU can be
seen as an agent,
and each hole can be
seen as a job.
The distinguishing point between the two problems is that in the MC-HAP{}, each SU has its own MAR, whereas in the GAP, an agent does not impose such a restriction.
In fact,
in the GAP,
assigning a job
to an agent
does not prohibit
the assignment of
another job,
as long as enough capacity
is available.
On the other hand, in
the MC-HAP{},
the assignment of
a hole $ h $ to
an SU, forbids us
from assigning
the holes whose
distances
from $ h $
are greater than
what MAR dictates.
Therefore,
the existing
approaches for solving the
GAP, in
an exact or inexact manner,
although very inspiring,
are not directly
applicable
for the MC-HAP{}.
Moreover,
in the MC-HAP{},
there is a kind
of \textit{conflict} between the holes.
Holes that are far apart are in conflict with each other. Therefore, the approaches
available to solve problems
such as the \textit{bin packing with conflicts problem},
can be inspiring
in the design of algorithms for
the MC-HAP{}.
The B\&P
approach has been employed for both of the
above-mentioned problems \cite{gap-bnp,bpack-bnp2,bpack-bnp1}.
The organization of this
contribution is as follows. In Section~2, we
introduce
some preliminaries and notation,
and provide a rigorous formulation
of the MC-HAP{}. Section~3
is devoted to
the presentation of
our novel 0-1~ILP formulation
of the problem, along with
a brief overview of the
B\&P framework.
In Section~4,
we describe
the column generation procedure, discuss the pricing problems, and describe their corresponding pricing oracles.
Section~5
is dedicated to
experimental evaluations, and to numerical
comparisons with the best currently available ILP formulation
of the problem. We extensively compared our B\&P
approach with a
formulation presented in~\cite{ilp}. Finally, some concluding remarks
are offered in Section~6.
\section{Notations and Problem Statement}
In this section,
we provide a formal definition of
the MC-HAP{}, and
introduce some definitions and notations.
A summary of notations used in this paper
is shown in Table~\ref{tab.notat}.
%
In an instance
of the MC-HAP{}, we are given two sets
$\mathcal{H}$ and
$\mathcal{U}$, where
$\mathcal{H}=\{h_1,h_2,\ldots,h_M\}$ is the set of all available
spectrum holes, and
$\mathcal{U}=\{u_1,u_2,\ldots,u_N\}$
is the set of all SUs.
%
%
Each spectrum hole $h_i$,
$1\leq i \leq M$,
is specified by its
left endpoint $\alpha_i$
and right endpoint $\beta_i$,
i.e., the hole can be represented
by the interval $[\alpha_i,\beta_i]$.
We assume that the spectrum holes are pairwise disjoint and appear in $\mathcal{H}$ sorted by their left endpoints, i.e., we have $\alpha_1 < \alpha_2 < \cdots < \alpha_M$.
We denote the length of the hole
$h_i$, i.e., $\beta_i-\alpha_i$, by
$\mathrm{len}(h_i)$.
Each SU has
its own required bandwidth and its own MAR.
We denote the
required bandwidth of the $j$th user $u_j$,
$1\leq j \leq N$,
by $R_j$, and its MAR by $\delta_j$.
%
The objective is to maximize the total spectrum utilization by assigning a subset of
the
available spectrum holes to each SU.
This assignment has to be carried out
subject to the following conditions:
\begin{itemize}
\item Each spectrum hole can be assigned to at most one SU; in particular, it may also be left unutilized.
\item The total length of the spectrum holes
assigned to an SU has to be greater than
or equal to its required bandwidth.
\item If $h_s=[\alpha_s,\beta_s]$
is the leftmost spectrum hole and
$h_e=[\alpha_e,\beta_e]$ is the rightmost
spectrum hole assigned to
the SU $u_j$, $1\leq s\leq e\leq M$ and
$1\leq j\leq N$, then
$\beta_e-\alpha_s$ cannot be greater than
$\delta_j$.
\end{itemize}
Therefore, a \textit{feasible hole assignment scheme
(pattern)}
for the SU $u_j$, $1\leq j\leq N$,
is a subset of spectrum holes
whose total length is greater
than or equal to
$u_j$'s
required bandwidth $R_j$,
and satisfies
$u_j$'s
MAR constraint.
Mathematically speaking,
if
$1\leq i_1 < i_2 < \cdots < i_\ell \leq M$
is some (increasing) integer sequence (of indices),
then the subset $\pi=\{h_{i_1},h_{i_2},\ldots,h_{i_\ell}\}$ of $\mathcal{H}$
is a feasible hole assignment scheme for
the SU $u_j$
if, firstly, $\sum_{h\in \pi} \mathrm{len}(h)\geq R_j$
and, secondly, $\beta_{i_\ell}-\alpha_{i_1} \leq \delta_j$.
For the SU $u_j$,
we denote
by $\Pi_j$
the set of all of its feasible hole assignment schemes.
Finally,
by
$\mathbf{I}_\pi(\cdot)$
we denote the indicator function of the
hole assignment pattern $\pi$;
i.e., $\mathbf{I}_\pi(h)=1$, if $h\in \pi$, and
$\mathbf{I}_\pi(h)=0$, otherwise.
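The two feasibility conditions above translate directly into code. The following sketch (function and variable names are ours, purely illustrative) checks whether a candidate subset of holes is a feasible hole assignment scheme for an SU with demand $R_j$ and aggregation range $\delta_j$:

```python
# Feasibility test for a hole assignment scheme pi = {h_{i1}, ..., h_{il}}:
# the total length must reach the user's demand R_j, and the overall span
# beta_{i_l} - alpha_{i_1} must not exceed its MAR delta_j.

def is_feasible_scheme(holes, demand, mar):
    """holes: list of (alpha, beta) intervals, assumed pairwise disjoint
    and sorted by left endpoint; demand: required bandwidth R_j;
    mar: maximal aggregation range delta_j."""
    if not holes:
        return False
    total_len = sum(beta - alpha for alpha, beta in holes)
    span = holes[-1][1] - holes[0][0]   # beta_{i_l} - alpha_{i_1}
    return total_len >= demand and span <= mar

# Two holes of lengths 2 and 3 spanning [0, 8]:
scheme = [(0.0, 2.0), (5.0, 8.0)]
print(is_feasible_scheme(scheme, demand=4.0, mar=10.0))  # True
print(is_feasible_scheme(scheme, demand=4.0, mar=6.0))   # False: span 8 > MAR
```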
\begin{table}[tbh]
\centering
\caption{ A summary of notations used in this paper.}
\label{tab.notat}
\begin{tabular}{|l|p{5in}|}
\hline
$ a_{ij} $ & The binary
indicator variable for the assignment of
the hole $ h_i $ to the SU $ u_j $\\
$ {\cal F} $ & A set
of
forbidden
patterns\\
$ {\cal H} $ & The set of all available
spectrum holes\\
$ H $ & A subset of $ \cal H $\\
$ h_i $ & The $ i $th available spectrum hole\\
$ \mathbf{I}_{\pi}(\cdot) $ & The indicator function of the
hole assignment pattern $\pi$\\
$ \mathrm{len}(h) $ &The length of the hole $ h $\\
$ M $ & The size of the set $ \cal H $\\
$ N $ & The size of the set $ \cal U $\\
$ R_j $ & The
required bandwidth of the SU $u_j$\\
$ {\cal U} $ & The set of all SUs\\
$ u_j $ & The $ j $th user \\
$ x_{j,\pi} $ & A binary decision variable in the formulation (BILP), and a real-valued decision variable in
its LP relaxation
(PLP)\\
$ y_i $ & A non-negative real-valued decision variable in (DLP)\\
$ z_j $ & A non-negative real-valued decision variable in (DLP)\\
\hline
$ \alpha_i $ & The left endpoint of the spectrum hole $ h_i $\\
$ \beta_i $ & The right endpoint of the spectrum hole $ h_i $\\
$ \delta $ & The MAR of a user $ u $\\
$ \delta_j $ & The
MAR of the SU $u_j$\\
$ \Pi_j $ & The set of all feasible hole assignment schemes for the SU $ u_j $ (with respect to $ \cal H $)\\
$ \Pi_{u,H} $ & The
set
of all the feasible hole assignment patterns of a user $u$
with respect to a subset $H$ of $ \cal H $\\
$ \varrho $ & The required bandwidth of a user $ u $\\
\hline
\end{tabular}
\end{table}
\section{A Novel 0-1 ILP Formulation of the MC-HAP{} and
an Overview of the
B\&P Framework}
The following proposition presents a
novel
0-1 ILP formulation of the problem.
\begin{proposition}\label{prop1}
The MAR-constrained hole assignment problem{} can be formulated as the following binary integer linear program:
\begin{IEEEeqnarray}{l}
\text{(BILP) Maximize} \quad \sum_{j=1}^{N} \sum_{\pi \in \Pi_j}R_j x_{j,\pi}, \label{obj}\\
\text{subject to } \nonumber\\
\sum_{\pi \in \Pi_j} x_{j,\pi} \leq 1, \quad \text{for}\ j=1,2,\ldots,N, \label{constr1}\\
\sum_{j=1}^{N} \sum_{\pi \in \Pi_j} \mathbf{I}_{\pi}(h_i)\ x_{j,\pi} \leq 1,\quad \text{for}\ i =1,2,\ldots, M,\label{constr2}\\
x_{j,\pi}\in\{0,1\},\quad\text{for}\ j=1,2,\ldots,N\ \text{and}\ \pi \in \Pi_j.\label{binary.constr}
\end{IEEEeqnarray}
\end{proposition}
\begin{proof}
For every
$j$, $j=1,2,\ldots,N$, and every
$\pi \in \Pi_j$, we associate a binary
indicator variable $x_{j,\pi}$ with
the pair $(j,\pi)$.
This decision variable has the following
interpretation: $x_{j,\pi}$ is $1$ if
the hole assignment scheme $\pi$
takes part in the solution, and is $0$ otherwise.
In fact, $x_{j,\pi}$ indicates whether or not the spectrum holes in $\pi$ are allocated to the SU $u_j$.
The inequalities (\ref{constr1}) ensure that
at most one hole assignment scheme can be selected
for each SU.
(In a solution to the formulation,
the left-hand-side of one such constraint being zero
indicates that
none of the spectrum holes are assigned to
the corresponding SU.)
The inequalities (\ref{constr2}) guarantee
that each spectrum hole can be occupied by at most one SU.
(In a solution to the formulation,
the left-hand-side of one such constraint being zero
implies that
the corresponding hole is unoccupied.)
It is now clear that,
in the presence of constraints (\ref{constr1}) and
(\ref{constr2}),
the objective function
that has to be maximized is the one
given in (\ref{obj}).
\end{proof}
The above-described formulation contains $\sum_{j=1}^N|\Pi_j|$ binary decision variables and $N+M$ linear constraints.
The number of decision variables can grow exponentially in the worst-case scenario.
As mentioned earlier, this exponential growth of the number of decision variables,
which renders the general-purpose ILP solvers useless,
calls for the application of the branch-and-price technique.
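To make the combinatorial structure of (BILP) concrete, the following brute-force solver (our own illustration, not the B\&P routine of this paper) enumerates every feasible pattern $\pi \in \Pi_j$ explicitly and tries all disjoint combinations; with $M$ holes there can be up to $2^M$ candidate patterns per SU, which is precisely the growth that motivates column generation:

```python
# Brute-force reference solver for a tiny MC-HAP instance: enumerate Pi_j
# explicitly for each SU, then search over disjoint pattern selections.
from itertools import combinations

def feasible_patterns(holes, demand, mar):
    """All index subsets of `holes` forming a feasible scheme for (demand, mar).
    holes: list of (alpha, beta) intervals sorted by left endpoint."""
    pats = []
    for r in range(1, len(holes) + 1):
        for idx in combinations(range(len(holes)), r):
            total = sum(holes[i][1] - holes[i][0] for i in idx)
            span = holes[idx[-1]][1] - holes[idx[0]][0]
            if total >= demand and span <= mar:
                pats.append(frozenset(idx))
    return pats

def solve_mc_hap(holes, users):
    """users: list of (R_j, delta_j) pairs. Returns the maximal total
    utilization, i.e. the sum of R_j over the served SUs."""
    pats = [feasible_patterns(holes, r, d) for r, d in users]

    def rec(j, used):
        if j == len(users):
            return 0
        best = rec(j + 1, used)          # no scheme selected for user j
        for pi in pats[j]:
            if not (pi & used):          # each hole assigned to at most one SU
                best = max(best, users[j][0] + rec(j + 1, used | pi))
        return best

    return rec(0, frozenset())

holes = [(0, 2), (3, 4), (6, 9)]         # lengths 2, 1 and 3
users = [(3, 6), (3, 4)]                 # (R_j, delta_j) per SU
print(solve_mc_hap(holes, users))        # 6: both SUs can be served
```

Such exhaustive enumeration is only viable for very small instances; the B\&P approach generates the needed patterns (columns) on demand instead.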
Roughly speaking, B\&P is essentially an LP-relaxation-based B\&B framework in which the LP instances are solved using the column generation technique.
\subsection{Basics of LP-relaxation-based B\&B procedures}
In an LP-relaxation-based B\&B procedure,
the search space of the problem is represented as a tree of live
(a.k.a., open or active) nodes.
The root of this tree
corresponds to the original 0-1 ILP instance and each node
corresponds to a subproblem.
In fact, in each node, the search is restricted
to those solutions consistent (compatible) with it.
In each node,
we have to solve an LP instance,
which is obtained by
relaxing (dropping) the integrality constraints
from the
associated 0-1 ILP instance.
When the aim is to maximize an objective function,
which is the case in our formulation,
the optimal solution
to this LP instance
provides an upper bound to the
optimal solution to
the 0-1 ILP instance corresponding to the node
(because it has less restrictive constraints).
At the root node,
if we were lucky and the optimal solution
to the corresponding LP instance is integral,
then the node is fathomed and this integral
optimal
solution is returned.
Otherwise, the algorithm proceeds by creating two
(or more)
children
nodes (subproblems) for the root.
To create the children nodes,
one can select
a decision variable
whose
value in the optimal solution for the LP relaxation
is non-integer.
This variable is called the branching variable.
The children inherit all of the constraints of their parent.
Furthermore, in one child the value of this branching
variable is set to zero, and in the other, it
is set to one.
There are various \emph{branching variable selection strategies} recommended in the literature~\cite{bnb,applied}.
The division process is repeated
according
to a pre-specified \emph{node
selection
strategy}
until all of the subproblems are
fathomed
(pruned, conquered)~\cite{bnb,applied}.
%
%
By pruning a node, we mean
the exclusion of
all the nodes in the subtree rooted at it,
from further consideration.
In an LP-relaxation-based B\&B{} procedure,
we fathom a node
whenever one of the following cases occurs (see, e.g., \cite{applied}):
\begin{itemize}
\item \textit{Pruning by integrality},
which occurs when the optimal solution
to the corresponding LP instance is integral. In this case, the value of this integral solution is compared to the value of the best integral solution found so far, which is usually called the \textit{incumbent solution}.
(We have to keep track of the best solution found so far.)
If this new-found integral solution is
better than the incumbent, then the incumbent needs to be updated.
%
%
\item \textit{Pruning by bound},
which occurs when
the value of an optimal solution to the LP instance corresponding
to the node is not better than the value of the incumbent solution. Such a node can be pruned
because it
cannot lead to a better solution.
\item \textit{Pruning by infeasibility},
which occurs when
the LP instance corresponding to the node is infeasible.
\end{itemize}
In the second case, the node is said to be nonpromising
because of its bound, and in the third case,
the node is said to be nonpromising because of its infeasibility.
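The three pruning rules can be illustrated on a toy problem. The following sketch (the function names and the instance are ours, not part of the formulation) runs an LP-relaxation-based B\&B{} procedure on a small 0-1 knapsack instance; the LP relaxation of a knapsack is solvable greedily and leaves at most one fractional variable, so it serves as the per-node bound:

```python
# Illustrative sketch: LP-relaxation-based branch-and-bound on a toy
# 0-1 knapsack instance.  The LP relaxation of a knapsack is solved
# greedily (items sorted by value/weight; at most one item fractional),
# which plays the role of the per-node LP bound.

def lp_bound(values, weights, cap, fixed):
    """Greedy LP-relaxation optimum; `fixed` maps variable index -> 0/1.
    Returns (bound, index of the fractional variable or None),
    or None when the node is infeasible."""
    used = sum(weights[i] for i, v in fixed.items() if v == 1)
    if used > cap:
        return None                      # infeasible node
    total = float(sum(values[i] for i, v in fixed.items() if v == 1))
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    room = cap - used
    for i in free:
        if weights[i] <= room:
            room -= weights[i]
            total += values[i]
        else:                            # a fractional variable appears
            return total + values[i] * room / weights[i], i
    return total, None                   # integral LP optimum

def branch_and_bound(values, weights, cap):
    best = 0.0
    live = [{}]                          # live nodes = partial assignments
    while live:
        fixed = live.pop()
        res = lp_bound(values, weights, cap, fixed)
        if res is None:                  # pruning by infeasibility
            continue
        bound, frac = res
        if bound <= best:                # pruning by bound
            continue
        if frac is None:                 # pruning by integrality
            best = bound                 # update the incumbent
            continue
        for val in (0, 1):               # branch on the fractional variable
            live.append({**fixed, frac: val})
    return best
```

For example, on values $(10,13,7)$, weights $(3,4,2)$, and capacity $5$, the procedure returns the integral optimum $17$.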
\subsection{The B\&P{} Framework}
Since the bounding strategy
in the above-described procedure is
based on solving an LP instance
within each node of the search tree,
the efficient solution of these LP instances is
of crucial importance.
As such a linear program
can involve a huge number of decision variables, we need to embed a column generation
subroutine into the B\&B{} framework.
%
In fact,
the fundamental difficulty we encounter
in this B\&B{} framework is that
the LP instances may have an exponentially large
number of decision variables (columns); therefore,
we have to resort to the column
generation
approach
to generate
columns in an on-the-fly fashion.
In the terminology of the B\&P{} approach,
the LP instance corresponding to
a node in the search tree is commonly called
the master problem.
At each node,
in order to solve the LP instance,
we start with a small number of
decision
variables. This confined LP instance is
called the
restricted master problem. After solving this
problem, we have to determine
whether the current solution is optimal.
If it is not,
the decision variables (columns)
to be added to the model
must be identified.
The problem of generating such
improving columns, if at least one exists,
or otherwise declaring the optimality of the solution at hand,
is often called the
pricing (sub-)problem.
This
procedure continues until
it is confirmed that
no further improvement can be made.
\section{The Column Generation Procedure and the Pricing Oracles}
Now we are in a position to describe our
column generation approach for solving
the LP instances.
It can easily be seen that
the employment of
the column generation technique
for solving a primal linear program
can be viewed as
the use of
the row generation technique
for solving
its dual linear program~\cite{dual-2,dual-1,dual-3,dual-4}.
%
%
In fact,
in the column generation technique for solving
a linear program,
for verifying the optimality of the
solution at hand,
we have to solve a subproblem, called the pricing problem.
This is exactly equivalent to solving
the
so-called separation problem for the dual linear program.
%
%
It seems to us that
describing the
row generation
scheme
for solving the dual linear program
may be more accessible to the reader.
Therefore,
in what follows,
we consider the dual linear program,
and describe
two \textit{separation oracles}.
These are in fact
\textit{pricing oracles}
for the primal linear program.
We have to choose the
suitable pricing oracle
according to
our branching strategy.
%
%
The LP relaxation of the 0-1 ILP formulation
given in Proposition~\ref{prop1}
and its associated dual linear program are as follows:
$$\begin{array}{l}
\hline
\mbox{Primal Linear Program} \\
\hline
\text{\textbf{(PLP)} Maximize} \quad \sum_{j=1}^{N} \sum_{\pi \in \Pi_j}R_j x_{j,\pi}, \\
\text{subject to } \quad \nonumber\\
\sum_{\pi \in \Pi_j} x_{j,\pi} \leq 1, \quad \text{for}\ j=1,2,\ldots,N, \\
\sum_{j=1}^{N} \sum_{\pi \in \Pi_j} \mathbf{I}_{\pi}(h_i)\ x_{j,\pi} \leq 1, \quad \text{for}\ i =1,2,\ldots, M,\\
0 \leq x_{j,\pi}\leq 1,\quad\text{for}\ j=1,2,\ldots,N\ \text{and}\ \pi \in \Pi_j.
\\
\hline
\end{array}$$
$$
\begin{array}{l}
\hline
\mbox{Dual Linear Program}\\
\hline
\text{\textbf{(DLP)} Minimize}\quad\sum_{i=1}^{M} y_i + \sum_{j=1}^{N} z_j, \\
\text{subject to}\\
z_j + \sum_{i=1}^M \mathbf{I}_{\pi}(h_i)\ y_i \geq R_j,\\
\hspace{1.2in}\text{for}\ j=1,2,\ldots,N\ \text{and}\ \pi \in \Pi_j, \\
y_i \geq 0, \quad \text{for}\ i =1,2,\ldots, M,\\
z_j \geq 0, \quad \text{for}\ j=1,2,\ldots,N.\\
\hline
\end{array}
$$
In the dual linear program, the number of constraints (rows) is prohibitively large.
This difficulty can be overcome
by incorporating the
constraints into the LP problem instance in an ad hoc
(i.e., as-needed) fashion.
%
%
In the dual linear program,
we start with an
initial (small or even empty) set of constraints.
In order to determine whether
the solution at hand is \textit{feasible}, an instance of
the separation problem
has to be solved.
We try
to identify the
\textit{violated}
linear constraints
to be included in the current set of
constraints.
If such rows are found,
they are appended to
the current set of constraints,
and
the resulting linear
program
is reoptimized.
%
%
This procedure is
repeated iteratively until
no further violated
linear constraints can
be found.
%
%
Therefore,
starting from the initial
set of linear inequalities, the constraint
set is progressively expanded by
incorporating the violated
constraints.
%
%
%
Let $\mathbf{y}^*=(y^*_i)_{i=1,2,\ldots,M}$
and
$\mathbf{z}^*=(z^*_j)_{j=1,2,\ldots,N}$
be two vectors of nonnegative real components.
In order to determine whether
$(\mathbf{y}^*,\mathbf{z}^*)$
is a feasible solution to
(DLP), it suffices to decide
whether there exists some
$j\in\{1,2,\ldots,N\}$
and some
$\pi \in \Pi_j$ such that
$ \sum_{i=1}^M \mathbf{I}_{\pi}(h_i)\ y^*_i < R_j - z^*_j$.
Therefore,
the separation problem
can be stated as follows:
\begin{quote}
\textbf{Instance:} A set
of spectrum holes $H$, a
nonnegative real vector
$\mathbf{y}^*$ of the same size, and
a user $u$ whose required bandwidth
is $\varrho$ and whose MAR
is $\delta$.\\
\textbf{Task: }
Let $\Pi_{u,H}$ be the
set
of all the feasible hole assignment patterns of $u$
\textit{with respect to the set $H$}.
Find a pattern $\pi\in\Pi_{u,H}$ such that
$\sum_{h\in H} \mathbf{I}_{\pi}(h)\ y^*_h$
is minimum, where
$ y^*_h $
is the element of $ \mathbf{y}^* $
corresponding to the hole $ h\in H $.
\end{quote}
We call
the above-stated separation
problem
{SEP1}.
Indeed,
$(\mathbf{y}^*,\mathbf{z}^*)$
is a feasible
solution to
(DLP) if and only if,
for every $ u_j $,
$j\in \{1,2,\ldots,N\}$,
the value of
an optimal solution
to SEP1
over the set $ \Pi_j = \Pi_{u_j,{\cal H}} $
is greater than
or equal to
$R_j - z^*_j $.
%
%
However, there is still
an issue here that must be
addressed.
As stated earlier,
in the branch-and-bound tree,
in the linear
program associated with a non-root node,
we have additional constraints
that enforce that
some of the decision variables $x_{j,\pi}$
take the value zero, and some of them
take the value one.
A constraint that assigns the value
one to a decision variable
$x_{j,\pi}$ is straightforward to deal
with. All we need to do is to
exclude the SU $u_j$
from the set $\cal U$ and
treat the holes in $\pi$
as occupied
(the gain earned by this assignment is $R_j$).
More accurately speaking,
if the subproblem
corresponding to a
(non-root) node
of the tree
is constrained by
the
equalities
$x_{j_1,\pi_1}=x_{j_2,\pi_2}=\cdots=
x_{j_t,\pi_t}=1$,
where
$1\leq j_1 < j_2 < \cdots < j_t \leq N$
and,
for each $\ell\in\{1,2,\ldots,t\}$,
$\pi_{\ell}\in \Pi_{j_\ell}$,
then in this subproblem,
the set of
spectrum holes
is
\begin{equation}\label{hset}
\mathcal{H}\setminus\bigcup_{\ell=1}^{t}\pi_{\ell}
\end{equation}
%
and the set of SUs is
$\mathcal{U}\setminus \{u_{j_1},u_{j_2},\ldots,u_{j_t}\}$.
Notice that the patterns $\pi_1,\pi_2,\ldots,\pi_t$ are pairwise disjoint. Moreover, the gain earned by the
selection of
these SUs is $\sum_{\ell=1}^{t}R_{j_\ell}$.
On the other hand,
a
constraint that requires
the value of
a decision variable $x_{j,\pi}$
to be zero is not as convenient to deal with.
This variable cannot be selected
to enter the basis
(i.e., become a basic variable).
Therefore,
corresponding to each
(non-root)
node in the
tree, we may have a set
of decision variables
that are not allowed to be selected
to enter the basis.
Accordingly,
our separation oracle needs
to be able to solve the following
more general problem, which we
call SEP2:
%
%
\begin{quote}
\textbf{Instance:} A set
of spectrum holes $H$, a
nonnegative real vector
$\mathbf{y}^*$ of the same size,
a user $u$ whose required bandwidth
is $\varrho$ and whose MAR
is $\delta$,
and a subset ${\cal F} \subseteq \Pi_{u,H}$
of forbidden patterns, where $\Pi_{u,H}$ is the
set
of all the feasible hole assignment patterns of $u$
with respect to the set $H$.\\
\textbf{Task: }
Find a pattern ${\pi\in\Pi_{u,H} \setminus {\cal F}}$ such that
$\sum_{h\in H}\mathbf{I}_{\pi}(h)\ y^*_h$
is minimum, where
$ y^*_h $
is the element of $ \mathbf{y}^* $
corresponding to the hole $ h\in H $.
\end{quote}
\subsection{Solving the SEP1 problem}\label{subsect.sep1}
As we will see in the next subsection,
the SEP2 problem can be solved by the use of
the dynamic programming paradigm~\cite[Chapter~15]{clrs}.
However, instead of the usual branching on the binary decision variables
$ x_{j,\pi} $, another branching
strategy can be used
so that
the SEP1 problem, which is easier to deal with,
is solved, instead of SEP2.
As stated above,
fixing a decision
variable
$ x_{j,\pi} $
equal to zero
forbids
a
hole assignment
pattern for the SU
$ u_j $.
As we go deep
in the search tree,
the size of the set of
forbidden (excluded)
patterns $ \cal F $
gets larger, and
solving the problem
SEP2 can
become more challenging.
Hence,
instead of the usual branching on the variables $ x_{j,\pi} $, we try to branch on the
variables
$ a_{ij} $ ($ 1\leq i \leq M $ and $ 1 \leq j \leq N $), which are
in fact
\emph{hidden and implicit}.
Setting variable $ a_{ij} $ to 1 means that
the hole $ h_i $ is assigned to the user $ u_j $, while
setting $ a_{ij} $ to 0 means that $ h_i $ is not assigned to $ u_j$.
This is exactly analogous
to what has been described in
\cite[Subsec.~13.4.1]{applied}
for the GAP.
In a node of the search tree,
for solving the problem SEP1
for the SU $ u_{j_1} $,
$ 1\leq j_1 \leq N $,
we
firstly have to exclude
all the already occupied holes
(i.e., the holes for which we have
$ a_{ij} = 1$)
from the set of available
holes.
If a hole is occupied by
the SU $ u_{j_1} $ itself,
then the hole contributes
in providing the
required bandwidth $ R_{j_1} $.
Therefore,
$ \varrho $ has to
be set
equal to $ R_{j_1} $
minus the sum of the lengths of
the holes occupied by $ u_{j_1} $
itself.
Furthermore,
every hole $ h_i $
for which $ a_{i{j_1}} = 0$
has to be excluded from the
set of available holes.
Now, if we are sure that
the set $ H $ of remaining holes
satisfies the
MAR constraint
for $ u_{j_1} $,
then
SEP1 reduces to an
instance of the
Minimization
Knapsack Problem (MinKP)~\cite[Subsec.~13.3.3]{knapsack}:
$$
\begin{array}{ll}
\text{Minimize}& \sum_{h\in H}y_h^*\nu_h,\\
\text{Subject to} & \sum_{h\in H}\mathrm{len}(h)\nu_h\geq \varrho,\\
&\nu_{h}\in\{0,1\},\quad h\in H.
\end{array}
$$
In the above
description of the MinKP, the
binary
decision
variable $ \nu_h = 1 $
if the hole $ h\in H $ is assigned to the SU
$ u_{j_1} $ and $ \nu_h = 0 $ otherwise.
Moreover, an optimal solution
for a given instance of the
MinKP
can readily be obtained
from an optimal solution for the following instance of
the traditional 0-1 knapsack problem
(KP)~\cite[Subsec.~13.3.3]{knapsack}:
$$
\begin{array}{ll}
\text{Maximize}& \sum_{h\in H}y_h^*\xi_h,\\
\text{Subject to} & \sum_{h\in H}\mathrm{len}(h)\xi_h\leq \sum_{h\in H}\mathrm{len}(h)-\varrho,\\
&\xi_{h}\in\{0,1\},\quad h\in H.
\end{array}
$$
Finally,
this resulting
instance of the 0-1 knapsack problem
can be solved efficiently,
e.g.,
using the dynamic programming
approach \cite[Sec.~2.3]{knapsack}.
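To make the MinKP-to-KP transformation above concrete, the following sketch (the function names are ours, and integer hole lengths are assumed) solves the KP instance by the classic dynamic program over capacities and returns the complement of its solution as the MinKP solution:

```python
# Illustrative sketch of the MinKP-to-KP reduction described above:
# the holes EXCLUDED from a MinKP solution form a KP solution with
# capacity sum(lengths) - rho, and vice versa.  Integer lengths assumed.

def solve_kp(items, cap):
    """Classic 0-1 knapsack DP over capacities.
    `items` is a list of (weight, value); returns (best value, chosen indices)."""
    n = len(items)
    dp = [0.0] * (cap + 1)
    keep = [[False] * (cap + 1) for _ in range(n)]
    for i, (w, v) in enumerate(items):
        for c in range(cap, w - 1, -1):
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                keep[i][c] = True
    chosen, c = [], cap
    for i in range(n - 1, -1, -1):       # trace the choices back
        if keep[i][c]:
            chosen.append(i)
            c -= items[i][0]
    return dp[cap], chosen

def solve_minkp(lengths, costs, rho):
    """Minimize the total cost of the selected holes subject to
    their total length being at least rho."""
    total_len = sum(lengths)
    if rho > total_len:
        return None                      # infeasible instance
    excluded = solve_kp(list(zip(lengths, costs)), total_len - rho)[1]
    selected = [i for i in range(len(lengths)) if i not in excluded]
    return sum(costs[i] for i in selected), selected
```

For instance, with lengths $(5,5,4,5)$, costs $(2,1,3,1)$, and $\varrho=9$, the cheapest feasible selection consists of the second and fourth holes, at total cost $2$.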
However,
we do not know whether the set $ H $
satisfies the MAR constraint
for the SU $ u_{j_1} $.
Hence, before every call to the
knapsack oracle,
we must make sure that
the set of holes we
consider satisfies
the MAR constraint for
the SU $ u_{j_1} $.
There are two cases to consider:
\begin{itemize}
\item If the set of holes
that have already been assigned to
$ u_{j_1} $
is empty,
then for finding
a pattern
in $ \Pi_{u_{j_1},H} $
that minimizes
the objective function
$\sum_{h\in H} \mathbf{I}_{\pi}(h)\ y^*_h$,
we proceed as follows.
For every $ h\in H $,
we consider
$ h $ as the
first contributing hole,
and consider all the holes
in $ H $
that are not \textit{too far} from $ h $
as the other available holes.
These holes together do not violate the MAR constraint for $ u_{j_1} $.
The knapsack oracle has to be called
for each such set.
A
pattern corresponding
to the
minimum value returned by these calls is a
pattern in
$ \Pi_{u_{j_1},H} $
that minimizes
the objective function
$\sum_{h\in H} \mathbf{I}_{\pi}(h)\ y^*_h$.
\item
If the set of holes
that
have already been
assigned to
$ u_{j_1} $ is not empty,
then
for every $ h\in H $ that
does not appear after the first
already assigned hole,
we consider
$ h $ as the
first contributing hole.
If
$ h $ and
the already
assigned
holes
are
too far apart
from each other to
satisfy the MAR
constraint for $ u_{j_1} $,
then they cannot
provide
a pattern
in $ \Pi_{u_{j_1},H} $.
Hence,
we do not call
the knapsack oracle
for them.
Otherwise,
we consider
$ h $ as the
first contributing hole,
and consider all the holes
in $ H $
that are not too far from $ h $
as the other available holes.
The knapsack oracle has to be called
for each such set.
A
pattern corresponding
to the
minimum value returned by these calls is a
pattern in
$ \Pi_{u_{j_1},H} $
that minimizes
the objective function
$\sum_{h\in H} \mathbf{I}_{\pi}(h)\ y^*_h$.
\end{itemize}
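The window enumeration described in the two cases above can be sketched as follows. This is illustrative only: we assume the MAR constraint bounds the span from the start of the first assigned hole to the end of the last one by $\delta$, and integer bandwidths; the helper \texttt{min\_cover\_cost} is a hypothetical stand-in for the knapsack oracle.

```python
# Illustrative sketch of the SEP1 window enumeration: every feasible
# pattern appears in the window anchored at its first hole, and every
# subset of a window automatically satisfies the MAR constraint.

def min_cover_cost(items, rho):
    """Min total cost of a subset of (length, cost) items covering >= rho."""
    INF = float('inf')
    dp = [INF] * (rho + 1)
    dp[0] = 0.0
    for w, c in items:
        for l in range(rho, 0, -1):      # 0/1: capacities in decreasing order
            prev = dp[max(0, l - w)]
            if prev + c < dp[l]:
                dp[l] = prev + c
    return dp[rho]

def sep1(holes, y, rho, delta):
    """holes: sorted (alpha, beta) intervals; y: dual costs.
    Returns the minimum dual cost over feasible patterns (inf if none)."""
    best = float('inf')
    for i, (a_i, _) in enumerate(holes):
        # the window anchored at hole i: holes ending within delta of a_i
        window = [(b - a, y[j]) for j, (a, b) in enumerate(holes)
                  if j >= i and b - a_i <= delta]
        best = min(best, min_cover_cost(window, rho))
    return best
```

The minimum over all windows is then the SEP1 optimum under the stated assumption, since no feasible pattern is missed and no infeasible one is produced.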
The B\&P{}
procedure
for solving the
(BILP) instances
that branches
on the variables
$ a_{ij} $, and
employs the
above-described
separation oracle,
will be called
\textsf{B\&P-SEP1}{} hereinafter.
\subsection{Solving the SEP2 problem }\label{subsect.sep2}
%
%
We can also employ the
standard branching scheme.
At each node of the search tree,
we branch on
a variable $x_{j,\pi}$
whose value is fractional.
Therefore,
the separation oracle
has to
solve the more
challenging problem SEP2.
Algorithm~\ref{oraclealg}
presents
the pseudocode of
the recursive procedure
$\textsc{SEP2-DP-Oracle}$
that can effectively solve
the instances of
the SEP2 problem.
As the simulation results show,
employing the standard branching
scheme and
$\textsc{SEP2-DP-Oracle}$
within the B\&P{} framework
can deliver
very good performance.
The B\&P{}
procedure
for solving the
(BILP) instances
that branches
on the variables
$ x_{j,\pi} $, and
employs
$ \textsc{SEP2-DP-Oracle} $
as the
separation oracle,
will be called
\textsf{B\&P-SEP2}{} hereinafter.
In our implementation of
the \textsf{B\&P-SEP2}{}, we employed the
\textit{most infeasible branching strategy},
in which the variable whose
(fractional) value
is closest to $0.5$ is chosen
to be branched on~\cite{bnb}.
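The most infeasible branching rule itself is a one-liner; the following sketch (a hypothetical helper, not the paper's code) picks the branching variable from a vector of LP values:

```python
def most_infeasible_branching(x_values):
    """Return the index of the fractional variable closest to 0.5,
    or None when the LP solution is integral (up to a tolerance)."""
    frac = {i: v for i, v in enumerate(x_values)
            if 1e-9 < v % 1 < 1 - 1e-9}
    if not frac:
        return None
    return min(frac, key=lambda i: abs(frac[i] - 0.5))
```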
The inputs to the
procedure $ \textsc{SEP2-DP-Oracle} $,
which is in fact a
memoized top-down DP
algorithm%
\footnote{For a thorough discussion of
dynamic programming
and memoization, the reader is referred to~\cite[Chapter~15]{clrs}.},
are as follows:
\textbf{1)}~A set of spectrum holes
$H=\{\widetilde{h}_1,\widetilde{h}_2,\ldots,\widetilde{h}_m\}$.
(We use tilded symbols to distinguish the members of $ H $ from the holes in
$ \cal H $.)
\textbf{2)}~A nonnegative
real vector $\mathbf{y}^*$
whose length is equal to the size of
$H$.
\textbf{3)}~An index $\sigma\in\{1,2,\ldots,|H|\}$
that specifies the
first available hole (i.e.,
the first $\sigma-1$ holes are not allowed to
be used).
In
the top-level call we always have
$\sigma=1$.
\textbf{4)}~A required bandwidth $\varrho$.
\textbf{5)}~An MAR $\delta$.
\textbf{6)}~A boolean flag that indicates whether
a hole has already been
selected to be included in the output pattern.
In
the top-level call we always have
$\mathit{active}=\mathsf{False}$.
This input is used to ensure that the MAR constraint is not violated. If $ \mathit{active}=\mathsf{True} $, then at least one hole has already been assigned to the user, and we must make sure that we do not move too far from the hole(s) assigned so far: if we move too far away from the first assigned hole, the MAR constraint becomes violated and feasibility is lost. If $\mathit{active}=\mathsf{False}$, we need not worry about violating the MAR constraint. In fact, as soon as $ \mathit{active}$ becomes $\mathsf{True}$, we have to be concerned about the violation of the MAR constraint.
\textbf{7)}~A set $\mathcal{F}$ of forbidden patterns.
(Each element of $\mathcal{F}$ is a non-empty subset
of $H$.)
The procedure
$\textsc{SEP2-DP-Oracle}$
returns
a feasible hole assignment scheme
$\pi\notin \mathcal{F}$
corresponding to
the given
bandwidth requirement $\varrho$
and MAR $\delta$
(with respect to the set $H$)
for which the summation
$\sum_{h\in H} \mathbf{I}_{\pi}(h)\ y^*_h$
is minimized.
%
If no
feasible hole assignment
scheme exists for
the given instance, it returns an empty
set.
%
In a non-root node of the
search tree,
for the SU $u_j$, $1\leq j \leq N$,
all the patterns $\pi \in \Pi_j$
for which we have $x_{j,\pi}=0$
should be included in a set $\mathcal{F}$.
If none of the patterns in $\Pi_j$
are forbidden, i.e., none of the variables $x_{j,\pi}$,
$\pi\in \Pi_j$, are set to be zero,
then we set $\mathcal{F}$ to be $\varnothing$.
This is the case, for example, in the root node.
Then, for obtaining the
violated
constraints corresponding to $u_j$,
or to conclude that no such constraint exists,
we have to call
$$\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,1,R_j,\delta_j,\mathsf{False},\mathcal{F}),$$
where $H$
is as defined
in~(\ref{hset}).
(In the root node
we have $H=\mathcal{H}$.)
%
%
We now argue the
correctness of the procedure $ \textsc{SEP2-DP-Oracle} $.
Depending on the values of the arguments
$\sigma$, $\varrho$, and $\delta$,
we have to distinguish three cases:
\textbf{Case~I:} $\sigma = |H|$,
i.e.,
the first available
hole is indeed the last member of $ H $;
\textbf{Case~II:} $\sigma < |H|$ and
$\varrho \leq \mathrm{len}(\widetilde{h}_\sigma)$,
i.e., the first available hole
is not the last hole in $ H $ and it \textit{alone}
can provide the
required bandwidth $ \varrho $;
and, finally,
\textbf{Case~III:}
$\sigma < |H|$ and
$\varrho > \mathrm{len}(\widetilde{h}_\sigma)$,
i.e., the first available hole
is not the last hole in $ H $ and it alone
cannot provide the
required bandwidth $ \varrho $.
The procedure $\textsc{SEP2-DP-Oracle}$
is made up of six parts;
three of them correspond to
the above three cases (Parts~IV--VI),
two of them
are responsible for detecting infeasibility
(Parts~I and II),
and one (Part~III) is responsible for the
retrieval of the
previously stored
(cached)
results.
These six parts are as follows:
%
%
\begin{itemize}
\item \textbf{Part~I.} (Lines~\ref{d<r.b}--\ref{d<r.e})
If the MAR is negative or is strictly less
than the bandwidth requirement,
then, obviously,
the problem instance is infeasible.
Since
there is no
feasible hole assignment for
such values of $\varrho$ and $\delta$,
the procedure returns an empty set.
%
%
\item \textbf{Part~II.} (Lines~\ref{infeas.b}--\ref{infeas.e})
If,
for the given values of $\varrho$ and $\delta$,
for every $\sigma\leq s_1 \leq s_2 \leq m$,
the subset $\{\widetilde{h}_{s_1}, \widetilde{h}_{s_1+1},\ldots, \widetilde{h}_{s_2}\}$
of $H$ is not a feasible hole assignment
scheme,
then the problem instance is infeasible.
In fact,
for the given values of $\varrho$ and $\delta$,
if no set of
\textit{consecutive} spectrum holes of
$H$
can be utilized,
because either $\sum_{i=s_1}^{s_2}\mathrm{len}(\widetilde{h}_i)<\varrho$
or $\widetilde{\beta}_{s_2} - \widetilde{\alpha}_{s_1} > \delta$,
then certainly no subset of
$H$,
whose holes are not necessarily consecutive,
is
utilizable.
\item \textbf{Part~III.} (Lines~\ref{save.b}--\ref{save.e})
The procedure
\textsc{SEP2-DP-Oracle}{}
employs a globally accessible
lookup
table $ M $
in which it stores
all the subproblem solutions.
Whenever we want to solve a subproblem, we first check
whether
$ M $ contains a solution.
If a solution
has already been stored in
$ M $, then
we simply retrieve and return the
stored result.
Otherwise,
unless the subproblem is sufficiently small,
we solve
it by recursive call(s) to
$ \textsc{SEP2-DP-Oracle} $ itself.
It is important to
note that
$ \textsc{SEP2-DP-Oracle} $
is in fact
a
recursive
divide-and-conquer
(top-down)
procedure.
However, we have to
avoid solving a
subproblem more than once.
Therefore, we
maintain the table $ M $,
which is in fact
a container of key-value
pairs,
one pair for each subproblem.
In each key-value pair,
subproblem parameters are
used as
the key, and the value is a
solution to this
subproblem.%
\footnote{In our C++
implementation of
\textsc{SEP2-DP-Oracle}{}, we utilized
\texttt{std::map} for this purpose.}
\item \textbf{Part~IV.} (Lines~\ref{s=m.b}--\ref{s=m.e})
If the first available hole is
the $m$th
(i.e., the last) one, then the problem instance
is feasible only if
the following three conditions are
satisfied:
$\varrho \leq \widetilde{\beta}_m-\widetilde{\alpha}_m$, $\delta\geq \widetilde{\beta}_m-\widetilde{\alpha}_m$, and
$\{\widetilde{h}_m\}\not\in\mathcal{F}$. In this case,
the procedure returns the
only feasible solution, which is $\{m\}$.
Otherwise, no feasible
hole assignment scheme exists
for the given values of $\sigma$,
$\varrho$, $\delta$, and $\mathcal{F}$. Therefore,
the procedure returns an empty set.
\item \textbf{Part~V.} (Lines~\ref{r<first.b}--\ref{r<first.e})
If $\sigma < m$ and $\varrho \leq \mathrm{len}(\widetilde{h}_\sigma)$,
then
we have to consider four subcases.
If
$\delta < \mathrm{len}(\widetilde{h}_\sigma)$
and $\mathit{active}=\mathsf{True}$,
then the instance is
simply infeasible, so the algorithm
returns an empty set.
If $\delta < \mathrm{len}(\widetilde{h}_\sigma)$
and $\mathit{active}=\mathsf{False}$,
then
because of the
violation of the
MAR constraint,
$\{\widetilde{h}_\sigma\}$
cannot be a feasible
hole assignment pattern.
Therefore,
the procedure should
call
$$\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,\sigma+1,\varrho,\delta,\mathit{active},\mathcal{F}')$$ and return the resulting set, where
$\mathcal{F}'=\{F\in\mathcal{F}\,\vert\,
F\subseteq \{\widetilde{h}_{\sigma+1},\widetilde{h}_{\sigma+2},\ldots,\widetilde{h}_{m}\}\}$.
%
%
%
If $\delta \geq \mathrm{len}(\widetilde{h}_\sigma)$
and $\mathit{active}=\mathsf{True}$,
then either we utilize
the spectrum hole
$\widetilde{h}_\sigma$ or we don't utilize it.
In the
former case,
the
hole assignment
pattern
is $\{\widetilde{h}_\sigma\}$,
and in the latter
case,
the hole assignment pattern
is
the one
corresponding
to the set, call it $U$,
returned by the call
%
\begin{equation*}
\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,\sigma+1,
\varrho,\delta-(\widetilde{\alpha}_{\sigma+1}-\widetilde{\alpha}_\sigma),\mathit{active},\mathcal{F}').
\end{equation*}
%
%
Therefore,
if $\{\widetilde{h}_\sigma\}\in \mathcal{F}$
or
$\sum_{u\in U}y^*_u<y^*_\sigma$,
the procedure should return the
set $U$,
and otherwise it should return
the set
$\{\sigma\}$.
It should be remarked that,
throughout the pseudocode,
our convention is that a summation over an empty index set is considered to be $+\infty$.
%
%
%
Finally, if $\delta \geq \mathrm{len}(\widetilde{h}_\sigma)$
and $\mathit{active}=\mathsf{False}$,
then,
again, either we utilize
the spectrum hole
$\widetilde{h}_{\sigma}$ or we don't
utilize it.
In the former case,
the hole assignment pattern is
$\{\widetilde{h}_\sigma\}$, and
in the latter case,
the hole assignment pattern is
the one corresponding
to the set, call it $U$,
returned by the call
$$\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,\sigma+1,\varrho,\delta,\mathit{active},\mathcal{F}').$$
%
%
%
Note that because $\mathit{active}=\mathsf{False}$,
i.e., no hole has been assigned to the user so far, $ \delta $ remains unchanged, in contrast to the case $ \mathit{active} = \mathsf{True} $.
%
%
%
Again,
if $\{\widetilde{h}_\sigma\}\in \mathcal{F}$
or
$\sum_{u\in U}y^*_u<y^*_\sigma$,
the procedure should return the
set $U$,
and otherwise it should return
the set
$\{\sigma\}$.
\item \textbf{Part~VI.} (Lines~\ref{fin.b}--\ref{fin.e})
If $\sigma < m$ and $\varrho > \mathrm{len}(\widetilde{h}_\sigma)$,
then either the spectrum hole
$\widetilde{h}_\sigma$ takes part in an optimal solution
to the problem instance or does not.
In the case that $\widetilde{h}_\sigma$ does not contribute,
if $\mathit{active}=\mathsf{False}$,
then the
optimal
solution is
the one corresponding to the set returned by the call
$$\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,\sigma+1,\varrho,\delta,\mathit{active},\mathcal{F}'),$$
and if $\mathit{active}=\mathsf{True}$,
then the
optimal solution
is
the one corresponding to the set returned by the call
\begin{equation*}
\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,\sigma+1,\\
\varrho,\delta-(\widetilde{\alpha}_{\sigma+1}-\widetilde{\alpha}_\sigma),\mathit{active},\mathcal{F}').
\end{equation*}
In either case, we call the returned set $U$.
In the case that $\widetilde{h}_\sigma$ contributes,
then the call
\begin{equation*}
\textsc{SEP2-DP-Oracle}(H,\mathbf{y}^*,\sigma+1,\\
\varrho - (\widetilde{\beta}_\sigma - \widetilde{\alpha}_\sigma),\delta-(\widetilde{\alpha}_{\sigma+1}-\widetilde{\alpha}_\sigma),\mathsf{True}, \mathcal{F}'')
\end{equation*}
needs to be made, where
\begin{equation*}\mathcal{F}''=\{F\setminus\{\widetilde{h}_\sigma\}\,\vert\,F\in\mathcal{F} \ \mbox{and}\
F\subseteq \\ \{\widetilde{h}_{\sigma},\widetilde{h}_{\sigma+1},\ldots,\widetilde{h}_{m}\}\
\mbox{and}\ \widetilde{h}_\sigma\in F\
\mbox{and}\ |F|>1\}.
\end{equation*}
Let us call the returned set $V$.
If
$\sum_{u\in U}y^*_u<
\sum_{v\in V}y^*_v+
y^*_\sigma$,
the procedure should return the
set $U$,
and otherwise it should return
the set
$V\cup\{\sigma\}$.
\end{itemize}%
A worst-case
time complexity bound
for the procedure
\textsc{SEP2-DP-Oracle}{}
can be obtained
by
multiplying together
the numbers of
possible
values of the
parameters
$ \sigma $,
$ \varrho $,
$ \delta $,
$\mathit{active}$,
and
$ \mathcal{F} $.
This is in fact
a straightforward application of the rule of product for counting the number of
all possible subproblems.
For instance, it can readily be
seen that
if $ {\cal F}=\varnothing $, then
the
worst-case time complexity of this procedure is in
$\mathcal{O}\left(\frac{|H|\,\cdot\, \varrho \,\cdot\, \delta}%
{\min_{h\in H}{\mathrm{len}(h)}\,\cdot\,
\min_{1\leq i \leq |H|-1}%
(\widetilde{\alpha}_{i+1}-\widetilde{\alpha}_{i})}\right)$.
Under the assumption that
all $\widetilde{\alpha}_i$'s
and all $\widetilde{\beta}_i$'s are integers,
$1\leq i \leq |H|$,
this is indeed $\mathcal{O}(|H|\cdot\varrho\cdot \delta)$.
However,
the number of
subproblems
actually needed to
be solved
is far less than the
above-mentioned upper bound,
because the procedure
only solves
the definitely required
subproblems~\cite[Sec.~15.3]{clrs}.
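To make the recursion concrete, the following simplified sketch mirrors the state $(\sigma,\varrho,\delta,\mathit{active})$ of \textsc{SEP2-DP-Oracle}{} but omits the forbidden set $\mathcal{F}$ and memoizes via \texttt{functools.lru\_cache}; it is illustrative only, not the paper's implementation, and again assumes the MAR constraint bounds the span from the first to the last assigned hole by $\delta$.

```python
# Illustrative, simplified sketch of the SEP2 recursion (no forbidden set).
# delta tracks the remaining MAR window; it starts shrinking only once a
# first hole has been committed (active == True), as in the text.
from functools import lru_cache

def sep2_oracle(holes, y, rho0, delta0):
    """holes: sorted (alpha, beta) intervals; y: dual costs.
    Returns (min dual cost, chosen hole indices), or (inf, ()) if infeasible."""
    m = len(holes)
    INF = float('inf')

    @lru_cache(maxsize=None)
    def rec(s, rho, delta, active):
        if rho <= 0:
            return (0.0, ())            # bandwidth requirement already met
        if delta < 0 or delta < rho:
            return (INF, ())            # the MAR window cannot supply rho
        if s == m:
            return (INF, ())            # no holes left
        a, b = holes[s]
        length = b - a
        shift = holes[s + 1][0] - a if s + 1 < m else 0
        # Skip hole s; the window shrinks only once a first hole is committed.
        skip = rec(s + 1, rho, delta - shift if active else delta, active)
        # Take hole s, if it fits inside the remaining MAR window.
        take = (INF, ())
        if delta >= length:
            cost, pat = rec(s + 1, rho - length, delta - shift, True)
            if cost < INF:
                take = (cost + y[s], (s,) + pat)
        return min(take, skip)

    return rec(0, rho0, delta0, False)
```

On the four holes of Example~\ref{example1} with dual costs $(2,1,3,1)$, $\varrho=8$, and $\delta=20$, the sketch selects the second and fourth holes at cost $2$ (span $33-14=19\leq 20$, total length $10\geq 8$).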
We conclude this section with a small example
that may shed some light on the
whole process of \textsf{B\&P-SEP2}{}.
\begin{algorithm*}
%
\singlespacing\footnotesize
\caption{The memoized top-down dynamic programming
algorithm $\textsc{SEP2-DP-Oracle}$ for
solving the separation problem SEP2.}\label{memoized}
\begin{algorithmic}[1]
\Statex \textbf{Input:}
\textbf{1)}~A set of spectrum holes $H$;
\textbf{2)}~A nonnegative
real vector $\mathbf{y}^*$
whose length is equal to the size of
$H$;
\textbf{3)}~An index $\sigma\in\{1,2,\ldots,|H|\}$
that specifies the
first available hole;
\textbf{4)}~A required bandwidth $\varrho$;
\textbf{5)}~An MAR $\delta$;
\textbf{6)}~A boolean flag that indicates whether
a hole has already been
selected to be included in the output pattern;
\textbf{7)}~A set $\mathcal{F}$ of forbidden patterns.
\Statex \textbf{Output:}
A subset $I$ of
$\{\sigma,\sigma+1,\ldots,|H|\}$\label{key}
for which $\sum_{i\in I}y^*_i$ is minimized.
(Throughout the pseudocode,
our convention is that a summation over an empty index set is considered to be $+\infty$.)
\Statex
\textbf{Note:}
The procedure uses a globally accessible
lookup
table $ M $
that maintains
all the subproblem solutions.
Initially,
$ M $ is empty.
\Procedure{\textsc{SEP2-DP-Oracle}}{$H,\mathbf{y}^*,\sigma,\varrho,\delta,\mathit{active},\mathcal{F}$}
%
\If{$\delta < 0$ {\bf or} $\delta < \varrho$} \label{d<r.b} \Comment{Infeasible instance}
\State \textbf{return}{} $\varnothing$;
\EndIf \label{d<r.e}
\State Let $m=|H|$; \Comment{$m$ is the number of spectrum holes in $H$, which is equal to the length of $\mathbf{y}^*$}
%
\If{for every $\sigma\leq s_1 \leq s_2\leq m$, we have
$\sum_{i=s_1}^{s_2}(\widetilde{\beta}_i-\widetilde{\alpha}_i) < \varrho$ or
$\widetilde{\beta}_{s_2}-\widetilde{\alpha}_{s_1}>\delta$} \Comment{Infeasible instance} \label{infeas.b}
\State \textbf{return}{} $\varnothing$;
\EndIf \label{infeas.e}
%
\If{$ M $
contains
a solution to the input instance} \label{save.b}
\State retrieve and return the stored result;
\EndIf \label{save.e}
%
\If{$\sigma = m$} \Comment{A base condition} \label{s=m.b}
\If{$\varrho \leq \widetilde{\beta}_m-\widetilde{\alpha}_m$ \textbf{and}{} $\delta\geq \widetilde{\beta}_m-\widetilde{\alpha}_m$ \textbf{and}{}
$\{\widetilde{h}_m\}\not\in\mathcal{F}$}
\State \textbf{return}{} $\{m\}$;
\Else \State{\textbf{return}{} $\varnothing$};
\EndIf
\EndIf \Comment{In the remaining part
of the algorithm, we have $\sigma < m$} \label{s=m.e}
%
%
%
\State $\mathcal{F}'=\{F\in\mathcal{F}\,\vert\,
F\subseteq \{\widetilde{h}_{\sigma+1},\widetilde{h}_{\sigma+2},\ldots,\widetilde{h}_{m}\}\}$;
\If{$\varrho\leq \widetilde{\beta}_\sigma-\widetilde{\alpha}_\sigma$} \label{r<first.b}
%
\If{$\delta<\widetilde{\beta}_\sigma-\widetilde{\alpha}_\sigma$ \textbf{and}{} $\mathit{active} = \mathsf{True}$} \Comment{A base condition}
\State \textbf{return}{} $\varnothing$;
\ElsIf{$\delta<\widetilde{\beta}_\sigma-\widetilde{\alpha}_\sigma$ \textbf{and}{} $\mathit{active} = \mathsf{False}$}
\State \textbf{store and return}{} \textsc{SEP2-DP-Oracle}($H,\mathbf{y}^*,\sigma+1,\varrho,\delta,\mathit{active},\mathcal{F}'$);\Comment{Stores the solution in $ M $}
\ElsIf{$\delta\geq\widetilde{\beta}_\sigma-\widetilde{\alpha}_\sigma$ \textbf{and}{} $\mathit{active} = \mathsf{True}$}
\State $U=$ \textsc{SEP2-DP-Oracle}($H,\mathbf{y}^*,\sigma+1,\varrho,\delta-(\widetilde{\alpha}_{\sigma+1}-\widetilde{\alpha}_\sigma),\mathit{active},\mathcal{F}'$);
\ElsIf{$\delta\geq\widetilde{\beta}_\sigma-\widetilde{\alpha}_\sigma$ \textbf{and}{} $\mathit{active} = \mathsf{False}$}
\State $U=$ \textsc{SEP2-DP-Oracle}($H,\mathbf{y}^*,\sigma+1,\varrho,\delta,\mathit{active},\mathcal{F}'$);
\EndIf
%
%
\If{$\{\widetilde{h}_\sigma\}\in\mathcal{F}$ \textbf{or} $y_\sigma^*> \sum_{u\in U}y^*_u $}
\State \textbf{store and return}{} $U$;\Comment{Stores the solution in $ M $}
\Else
\State \textbf{store and return}{} $\{\sigma\}$;\Comment{Stores the solution in $ M $}
\EndIf
%
%
%
\EndIf \label{r<first.e} \Comment{In the remaining part
of the algorithm, we have $\sigma < m$ and $\varrho> \widetilde{\beta}_\sigma-\widetilde{\alpha}_\sigma$}
%
\If{$\mathit{active} = \mathsf{True}$} \label{fin.b}
\State $U = $ \textsc{SEP2-DP-Oracle}($H,\mathbf{y}^*,\sigma+1,\varrho,\delta-(\widetilde{\alpha}_{\sigma+1}-\widetilde{\alpha}_\sigma),\mathit{active},\mathcal{F}'$);
\Else
\State $U = $ \textsc{SEP2-DP-Oracle}($H,\mathbf{y}^*,\sigma+1,\varrho,\delta,\mathit{active},\mathcal{F}'$);
\EndIf
\State $\mathcal{F}''=\{F\setminus\{\widetilde{h}_\sigma\}\,\vert\,F\in\mathcal{F} \ \mbox{and}\
F\subseteq \{\widetilde{h}_{\sigma},\widetilde{h}_{\sigma+1},\ldots,\widetilde{h}_{m}\}\
\mbox{and}\ \widetilde{h}_\sigma\in F\
\mbox{and}\ |F|>1\}$;
\State $V=$ \textsc{SEP2-DP-Oracle}($H,\mathbf{y}^*,\sigma+1,\varrho - (\widetilde{\beta}_\sigma - \widetilde{\alpha}_\sigma),\delta-(\widetilde{\alpha}_{\sigma+1}-\widetilde{\alpha}_\sigma),\mathsf{True}, \mathcal{F}''$);
\If{$\sum_{u\in U}y^*_u\leq \sum_{v\in V}y^*_v + y^*_\sigma$}
\State \textbf{store and return}{} $U$;\Comment{Stores the solution in $ M $}
\Else
\State \textbf{store and return}{} $V\cup \{\sigma\}$;\Comment{Stores the solution in $ M $}
\EndIf \label{fin.e}
\EndProcedure
\end{algorithmic}
\label{oraclealg}
\end{algorithm*}
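The \textbf{store and return} / \emph{retrieve and return} steps above implement memoization: every solved subproblem is cached in the table $M$, keyed by the parameters that define it, so each distinct subproblem is solved at most once. The following toy sketch illustrates this pattern (it is a hypothetical stand-in for the recursion structure, not an implementation of \textsc{SEP2-DP-Oracle} itself): can a subset of \texttt{lengths[sigma:]} sum to exactly \texttt{target}?

```python
# Memo table keyed by the subproblem parameters, mirroring the role of M.
M = {}

def toy_oracle(lengths, sigma, target):
    """Can a subset of lengths[sigma:] sum to exactly target?"""
    if target == 0:
        return True                       # base condition
    if sigma == len(lengths) or target < 0:
        return False                      # infeasible instance
    key = (sigma, target)
    if key in M:                          # "retrieve and return the stored result"
        return M[key]
    result = (toy_oracle(lengths, sigma + 1, target) or
              toy_oracle(lengths, sigma + 1, target - lengths[sigma]))
    M[key] = result                       # "store and return"
    return result

print(toy_oracle([5, 5, 4, 5], 0, 9))     # True: 4 + 5
```

As in the algorithm above, the cache bounds the work by the number of distinct subproblem keys rather than by the exponential number of recursion paths.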
%
\begin{example}\label{example1}
Consider an instance of the problem in which
the set $\mathcal{H}$ of available spectrum holes
consists of the following 4 holes
$$h_1= [5,10],
h_2= [14,19],
h_3= [21,25],
h_4= [28,33],$$
and the set $\mathcal{U}$ of SUs includes the following 6 users:
$$
\begin{array}{ll}
u_1: R_1=3 ,\, \delta_1= 5,&
u_2: R_2=12 ,\, \delta_2= 28,\\
u_3: R_3=6 ,\, \delta_3= 11,&
u_4: R_4=4 ,\, \delta_4= 6,\\
u_5: R_5=2 ,\, \delta_5= 4,&
u_6: R_6=9 ,\, \delta_6= 12.
\end{array}
$$
In the root node of the search tree,
the column generation procedure
has to solve a linear program
in which the number of
(functional) constraints is 10.
(See the linear program (PLP).)
In the first iteration of the procedure,
the objective function coefficients
are all zero.
The optimal value of this
objective function is obviously zero.
In the second iteration,
we call the pricing algorithm
once
for each SU.
Each of these calls provides an improving
decision variable.
The objective function
(to be maximized)
will be therefore
\begin{equation*}
\varphi_1=3\,x_{1 , \{h_4\}} +
12\,x_{2 , \{h_2 , h_3 , h_4\}} +
6\,x_{3 , \{h_2 , h_3\}} +
4\,x_{4 , \{h_4\}} +
2\,x_{5 , \{h_3\}} +
9\,x_{6 , \{h_3 , h_4\}}.
\end{equation*}
In an
optimal solution to this linear program,
$x_{2 , \{h_2 , h_3 , h_4\}}=1$
and all other decision variables are zero.
The value of this solution is 12.
%
%
%
In the third iteration,
by calling the pricing algorithm once for each SU,
we find that there exist 4 improving decision
variables.
The objective function will be
\begin{equation*}
\varphi_2=\varphi_1+3\,x_{1 , \{h_2\}} +
12\,x_{2 , \{h_1, h_2, h_4\}}+
4\,x_{4 , \{h_2\}} +
9\,x_{6 , \{h_2 , h_3\}}.
\end{equation*}
In an optimal solution to this linear program,
$
x_{6 , \{h_3 , h_4\}}=
x_{2 , \{h_1, h_2, h_4\}}=
x_{6 , \{h_2 , h_3\}}=\frac{1}{2}$
and all other decision variables are zero.
The value of this solution is 15.
In the fourth iteration,
we again run the pricing algorithm
once for each SU.
This indicates that there exist
3 improving decision variables.
The objective function will be
$$\varphi_3=\varphi_2+3\,x_{1 , \{h_1\}} +
12\,x_{2 , \{h_1 , h_3 , h_4\}} +
4\,x_{4 , \{h_1\}}.$$
In an optimal solution to this
linear program,
$
x_{6 , \{h_3 , h_4\}}=
x_{2 , \{h_1, h_2, h_4\}}=
x_{6 , \{h_2 , h_3\}}=
x_{4 , \{h_1\}}=\frac{1}{2}
$
and all other decision variables are zero.
The value of this solution is 17.
In the fifth iteration,
the pricing algorithm
implies that there exists no improving
decision variable.
Therefore,
no further improvement can be made, and
the last obtained solution is
optimal for the relaxed problem.
%
However, this optimal solution is non-integral, and
a variable (with a non-integer value)
has to be selected to be branched on.
(We will shortly see that the absolute integrality gap is in fact 1.)
We select the variable
$x_{6 , \{h_3 , h_4\}}$
(whose value is $0.5$ in the optimal solution)
as the branching variable.
The root node is split into two child nodes.
In one of them, we set the value of $x_{6 , \{h_3 , h_4\}}$
to be 1, and in the other, 0.
In the node in which
$x_{6 , \{h_3 , h_4\}}=1$,
the column generation procedure
returns an integral solution whose objective
value is 16. In this solution, we have
$x_{6 , \{h_3 , h_4\}} = 1$, $x_{1,\{h_1\}}=1$,
and $x_{4,\{h_2\}}=1$.
The value of all other decision variables is zero.
Since the solution is integral, we can fathom the
node.
On the other hand, in the node in
which
$x_{6 , \{h_3 , h_4\}}=0$,
the column generation procedure
converges to a non-integral solution
for which the value of the objective function
is
$16.75$.
In the considered instance of the problem,
all the bandwidth requirements are integral.
Hence,
the value of an optimal solution to it has
to be integral as well.
Therefore, the current node cannot lead us
to a solution whose objective value is better than
16, and can safely be pruned.
This means that the solution found in the
first considered child node,
whose value was 16, is in fact
the optimal solution to the instance.
In this solution,
which is depicted in
Figure~\ref{fig},
$h_1$ is assigned to $u_1$,
$h_2$ is assigned to $u_4$, and
$h_3$ and $h_4$ are assigned to $u_6$.
\end{example}
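As a sanity check, the optimal value of this tiny instance can be verified by brute force. The sketch below (with the instance data copied from above) assumes, as in the formulations of this paper, that a set of holes serves a user $u_j$ exactly when its total length is at least $R_j$ and its overall span does not exceed $\delta_j$:

```python
from itertools import product

# Instance data copied from Example 1.
holes = {1: (5, 10), 2: (14, 19), 3: (21, 25), 4: (28, 33)}      # h_i = [alpha_i, beta_i]
users = {1: (3, 5), 2: (12, 28), 3: (6, 11),
         4: (4, 6), 5: (2, 4), 6: (9, 12)}                       # u_j: (R_j, delta_j)

def serves(hset, R, delta):
    # A hole set serves a user iff its total length is >= R and its overall
    # span (rightmost beta minus leftmost alpha) is <= delta.
    if not hset:
        return False
    total = sum(b - a for a, b in (holes[i] for i in hset))
    span = max(holes[i][1] for i in hset) - min(holes[i][0] for i in hset)
    return total >= R and span <= delta

best = 0
# Give each hole a label in 0..6 (0 = unassigned); 7^4 = 2401 cases.
for labels in product(range(len(users) + 1), repeat=len(holes)):
    groups = {}
    for h, j in zip(holes, labels):
        groups.setdefault(j, set()).add(h)
    value = sum(R for j, (R, d) in users.items() if serves(groups.get(j, set()), R, d))
    best = max(best, value)

print(best)  # 16, matching the value found by the branch-and-price search
```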
%
%
%
\begin{figure}[htb]\centering
\includegraphics[width=0.45\textwidth]{fig1.eps}
\caption{A depiction of an optimal solution
to the tiny instance
described in Example~\ref{example1}.}
\label{fig}
\end{figure}
\section{Computational Results}\label{simulsec}
This section is devoted to evaluating the performance of
the proposed
B\&P{} procedure
against the best currently available
ILP formulation
of the problem, which is
presented
in~\cite{ilp},
with respect to the CPU time.
%
We implemented the B\&P{} procedure
in C++, and
carried out
the simulations
on an Intel$^{\text{\textregistered}}$ Core\texttrademark{}~i7-9750H 2.60~GHz laptop with
16.00~GB~of~RAM,
running Microsoft$^{\text{\textregistered}}$ Windows$^{\text{\textregistered}}$~10 (64-bit) operating system.
%
In our implementation of the
B\&P{} procedure, for solving the LP instances
in the nodes of the search tree,
we employed the IBM$^{\text{\textregistered}}$ ILOG$^{\text{\textregistered}}$ Concert CPLEX$^{\text{\textregistered}}$ API.
As pointed out in the introduction,
two 0-1 ILP formulations
have been proposed in~\cite{ilp}
for the problem.
The
(binary) integer
linear programs
obtained
by using these formulations
can be solved
using the off-the-shelf ILP solvers like
CPLEX and GUROBI.
%
%
The second formulation, entitled
``Linear Assignment Model (Type 2),'' outperforms
the first one in terms of running
time; therefore, comparisons
will be conducted against this formulation.
Hereinafter, we use the acronym \textsf{LAM-T2}{} to refer to it.
For the sake of
self-containedness,
we present this formulation here.
For every $j\in\{1,2,\ldots,N\}$
and every
$ i \in \{1,2,\ldots,M\} $, let
the set $I_j(i)$ be defined by
$
I_j(i)=\{i'\in\{i,i+1,\ldots,M\}\,|\, \beta_{i'}-\alpha_{i}\leq \delta_j \}
$.
Moreover,
for every $j\in\{1,2,\ldots,N\}$, let
the set $T_j$
be defined as
$T_j=\{i\in\{1,2,\ldots,M\}\,|\, \sum_{i'\in I_j(i)} \mathrm{len}(h_{i'})\geq R_j \}$.
Finally, let the set $ \Gamma $ be defined as
$
\Gamma =
\{(j,i,i')\,|\, j\in\{1,2,\ldots,N\},\, i \in T_j ,\, i' \in I_j(i)\}
$.
Corresponding to each triple $(j,i,i')\in \Gamma $,
the formulation \textsf{LAM-T2}{}
contains a binary
decision variable $ \gamma^{j}_{ii'} $.
The formulation is as follows:
\begin{IEEEeqnarray*}{l}
\text{(\textsf{LAM-T2}{}) Maximize} \quad \sum_{j=1}^{N} R_j
\sum_{i\in T_j}\gamma^{j}_{ii}\ , \\
\text{subject to }\\
\sum_{(j,i,i')\in \Gamma} \gamma_{ii'}^j\leq 1,\quad\text{for}\ i'=1,2,\ldots,M,\\
\sum_{i\in T_j} \gamma_{ii}^j\leq 1, \quad \text{for}\ j=1,2,\ldots,N,\\
\sum_{i'\in I_j(i)}\mathrm{len}(h_{i'})\cdot\gamma^j_{ii'}\geq R_j \cdot \gamma^j_{ii}\ ,\quad \text{for}\ j= 1,2,\ldots,N
\ \text{and}\
i\in T_j\ , \\
\gamma_{ii'}^{j}\in\{0,1\},\quad\text{for every}\ (j,i,i')\in\Gamma.
\end{IEEEeqnarray*}
\textsf{LAM-T2}{}
consists of at most $NM^2$ binary decision variables
and at most $M + N + NM$ linear constraints.
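The index sets $I_j(i)$, $T_j$, and $\Gamma$ can be built directly from their definitions. The following sketch (using 0-based indices and the tiny instance of Example~\ref{example1}) does so, and shows how far the actual number of variables can be from the $NM^2$ worst case:

```python
# Holes as (alpha_i, beta_i) and users as (R_j, delta_j),
# taken from Example 1; indices are 0-based here.
holes = [(5, 10), (14, 19), (21, 25), (28, 33)]
users = [(3, 5), (12, 28), (6, 11), (4, 6), (2, 4), (9, 12)]

M, N = len(holes), len(users)
length = [b - a for a, b in holes]

I = {}   # I[(j, i)] = {i' >= i : beta_{i'} - alpha_i <= delta_j}
T = {}   # T[j] = {i : total length over I_j(i) >= R_j}
for j, (R, delta) in enumerate(users):
    T[j] = []
    for i in range(M):
        I[(j, i)] = [ip for ip in range(i, M) if holes[ip][1] - holes[i][0] <= delta]
        if sum(length[ip] for ip in I[(j, i)]) >= R:
            T[j].append(i)

# One binary variable gamma^j_{i i'} per triple in Gamma.
Gamma = [(j, i, ip) for j in range(N) for i in T[j] for ip in I[(j, i)]]
print(len(Gamma))  # prints 22; the worst-case bound N*M^2 is 96 here
```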
In contrast to our formulation (BILP) which,
due to its huge number
of decision variables,
cannot be fed directly to an ILP solver,
the integer linear programs corresponding to \textsf{LAM-T2}{},
due to their compactness,
can be solved by using an ILP solver.
In our experiments, the CPLEX solver
(Ver.~12.10.0.0)
has been employed,
both as an LP solver within the B\&P{} routine and
as an ILP solver for solving the integer linear programs
corresponding to
\textsf{LAM-T2}{}.
There are two remarks that should be made at this point.
Firstly,
the
performance of the
B\&P{} search procedure
can be improved
by embedding a
heuristic method
for finding
good
integer-feasible
solutions during
the search process~\cite{bnb}.
We did not employ
such a
heuristic
method in the B\&P{} process.
However, by means of a
powerful heuristic
method
for generating good
integer-feasible solutions,
the active
nodes
can more effectively be fathomed,
which in turn makes the search
process faster.
%
%
Secondly,
a decision
has to be made as to
which active node to visit
first.
Various node selection strategies
have
been described in the literature~\cite{bnb}.
In our implementation of
the B\&P{} procedure,
we employed the
\textit{best-bound-first} node
selection rule.
The unfathomed subproblems are maintained in a priority queue,
and
the one with the largest
LP relaxation bound
has the highest priority.
For the case of two nodes
with equal LP relaxation bounds,
the one that is more likely
to yield an integer-feasible
solution is chosen.
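A minimal sketch of this best-bound-first rule follows (our actual implementation is in C++ on top of the CPLEX API; the per-node data and the tie-break key shown here, fewer fractional variables first, are illustrative assumptions):

```python
import heapq

class NodeQueue:
    """Best-bound-first selection: always expand the active node with the
    largest LP relaxation bound.  heapq is a min-heap, so bounds are negated.
    The secondary key (fewer fractional variables first) is a hypothetical
    stand-in for the 'more likely integer-feasible' tie-break."""

    def __init__(self):
        self._heap = []
        self._tick = 0  # insertion counter, keeps tuple comparisons well-defined

    def push(self, node, lp_bound, num_fractional):
        heapq.heappush(self._heap, (-lp_bound, num_fractional, self._tick, node))
        self._tick += 1

    def pop(self):
        return heapq.heappop(self._heap)[-1]

    def __len__(self):
        return len(self._heap)
```

For instance, pushing nodes with LP bounds 16.75, 17.0, and 16.75 pops the node with bound 17.0 first; among the tied nodes, the one with fewer fractional variables wins.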
To our knowledge,
presently no benchmark
dataset is available for the MC-HAP{}.
We therefore created our own, which contains
more than 1000
randomly generated
instances of the problem.
The
comparison has been made
using these
instances.
%
All the problem instances, and also their optimal solutions,
are available online.%
\footnote{\url{https://github.com/hfalsafain/crrap}}%
\footnote{At this web address, we have also provided a number of large instances of the problem that \textsf{B\&P-SEP1}{}
and \textsf{B\&P-SEP2}{}
were unable to solve in less than 15 minutes.
These instances may be
useful for future research on the problem.}
As in \cite{ilp},
the TV band
(470~MHz--862~MHz)
has been considered for the numerical simulations.
In all the generated
instances,
the required bandwidth
of a user is a random number
(not necessarily integer)
drawn
uniformly
from the
range $ [10,25] $ (in MHz).
This is similar to that of
\cite{ilp},
except that
in our instances,
the numbers are not necessarily integers.
In the tables presented
in this section,
the parameter $ q\in [0, 1] $
determines the
fraction
of available
spectrum
in the frequency
band, which is mainly
determined by the
geographical location.
It has been shown that in urban areas,
we have
$ q \approx 0.2 $, whereas in suburban areas,
$ q \approx 0.8 $~\cite{ilp}.
A time limit of 600 seconds has been set for each run.
If the execution time for
a run
reaches this limit,
the run halts and
the best-so-far
solution is
reported.
A comparison of the average running times (in seconds)
of
\textsf{B\&P-SEP1}{}
and \textsf{B\&P-SEP2}{}
against
those of the formulation \textsf{LAM-T2}{},
is tabulated in Table~\ref{tab1-h25}.
The results
(optimal objective values)
of
the three methods
were entirely consistent with each other.
For each row, we
have considered 20 randomly generated instances of the problem.
In the
upper part of the table
(first eight rows), as in \cite[Table~1]{ilp},
for a user $ u_j $,
the parameter $ \delta_j $
is drawn uniformly
from the
interval $ [2R_j,3R_j] $.
Again, unlike
the scenario
described in \cite{ilp},
$ \delta_j $s are
not necessarily integers.
In this part, $ q $ is set to $ 0.5 $. Moreover, the number of
available
holes for all
the considered
instances is equal to $ 25 $, but for the number of users, we have
$ N\in\{25,50,\ldots,200\} $.
In the lower part of the
table (last six rows),
we have
$ M=30 $,
$ N\in\{30,60,\ldots,180\} $,
and
$ q=0.25 $.
All $ \delta_j $s are
set equal to 45
$(1\leq j \leq N)$.
As can be observed from the table, when the number of users is small, there is no major difference between the running times,
but when the number of users is large, our approach has a much better
performance.
\begin{table}[tbh]
\centering
\caption{The effect of the ratio of the number of users to the number of holes on the
time
performance
(in seconds)
of the approaches. For each row, 20 random instances are used.
The column labeled with ``TL'' reports the number of
CPLEX
runs
stopped due to the time limit
(for solving \textsf{LAM-T2}{} models). None of the \textsf{B\&P-SEP1}{} and \textsf{B\&P-SEP2}{} runs reached the 600 seconds time limit.}
\label{tab1-h25}
\small
\begin{tabular}{|l|c|c|c|cc|cc|}
\hline
$ M $ & $ N $ & $ q $ & $ \delta $ & \textsf{B\&P-SEP1}{} & \textsf{B\&P-SEP2}{} & \textsf{LAM-T2}{} & TL \\
\hline
25 & 25 & 0.5 & As in \cite[Table~1]{ilp} &1.34 &0.705 & \textbf{0.212} & 0\\
& 50 & & &1.30 &\textbf{0.670} & 0.795 & 0\\
& 75 & & &2.95 &\textbf{1.67} & 3.61 & 0\\
& 100 & & &1.22 &\textbf{1.06} & 12.0 & 0\\
& 125 & & &2.74 &\textbf{1.92} & 10.4 & 0\\
& 150 & & &2.22 &\textbf{1.70} & 47.7 & 1\\
& 175 & & &\textbf{2.40} &7.45 & 127 & 3\\
& 200 & & &\textbf{3.04} &3.58& 162& 4\\
\hline
30 & 30 & 0.25 & 45 &1.50 &0.525 & \textbf{0.240} & 0 \\
& 60 & & &1.64 &\textbf{0.920} & 2.21 & 0 \\
& 90 & & &\textbf{1.01} &1.14 & 4.26 & 0 \\
& 120 & & &1.28 &\textbf{1.03} & 15.8 & 0 \\
& 150 & & &0.910 &\textbf{0.730} & 6.30 & 0 \\
& 180 & & &2.92 &\textbf{1.98} & 74.0 & 0 \\
\hline
\end{tabular}
\end{table}
We empirically observed that when the ratio of the number of users to the number of holes is large, our method performs much more effectively than its counterpart \textsf{LAM-T2}{}.
Table~\ref{tab-ratio}, in which a total of 600 instances of the problem are solved (60 instances per row), also provides
strong evidence of this.
In this table,
the parameter
$ \delta $
is set as
in \cite[Tables~1 and 2]{ilp}.
In the first five
rows, we have $ M\approx 30 $,
$ N\in\{30,60,\ldots,150\} $,
and
$ \delta_j\in[2R_j,3R_j] $
($ 1\leq j \leq N $).
In the next
five rows,
we have
$ M\approx 50 $,
$ N\in\{50,100,\ldots,250\} $,
and $ \delta_j\in[
\max\{R_j, 15\}, \min\{2R_j, 40\}] $ ($ 1\leq j \leq N $).
For each row, 60 random instances are used.
\begin{table}[tbh]
\caption{The effect of the ratio of the number of users to the number of holes on the time performance
(in seconds)
of the two approaches. For each row, 60 random instances are used.
The column labeled with ``TL'' reports the number of
CPLEX
runs
stopped due to the time limit
(for solving \textsf{LAM-T2}{} models). None of the \textsf{B\&P-SEP1}{} and \textsf{B\&P-SEP2}{} runs reached the 600 seconds time limit.}
\label{tab-ratio}
\centering
\begin{tabular}{|ll|c|cc|cc|}
\hline
$ M $& $ N $&$ \delta $ & \textsf{B\&P-SEP1}{} & \textsf{B\&P-SEP2}{} & \textsf{LAM-T2}{} & TL \\%& integral\\
\hline
$ \approx 30 $&$ 30 $&As in \cite[Tab.~1]{ilp} & 1.17 &2.67 &\textbf{0.293} & 0 \\%& 36/60\\
&$ 60 $ & & 3.63 & \textbf{2.33} & 11.0 & 0 \\%& 31/60\\
&$ 90 $& & 3.98 & \textbf{1.49} &32.2 & 2 \\%&30/60 \\
&$ 120 $& & 21.3 &\textbf{7.38} & 87.8 & 6\\% & 35/60\\
&$ 150 $& &32.7 &\textbf{11.8} & 90.8 & 8 \\%& 30/60\\
\hline
$ \approx 50 $&$ 50 $ & As in \cite[Tab.~2]{ilp}& 0.338 & 0.577& \textbf{0.150} & 0 \\%& 34/60\\
&$ 100 $& & 0.773& \textbf{0.493} & 1.04 & 0 \\%& 38/60 &\\
&$ 150 $ & &1.27 & \textbf{0.482} & 7.48 & 0 \\%& 38 / 60\\
&$ 200 $& & 0.802 & \textbf{0.715}& 27.8 & 1 \\%& 43 /60 \\
&$ 250 $& & \textbf{4.18} &5.27 &101 & 6 \\%& 35/60 \\
\hline
\end{tabular}
\end{table}
The superiority of the proposed B\&P{} procedure
stems from the strength
(tightness) of the LP relaxation
of the
formulation (BILP).
The LP relaxation
of the formulation (BILP)
can provide much tighter upper bounds
than
the LP relaxation of
\textsf{LAM-T2}{}.
To see this more clearly,
we report the
optimal objective values of the
LP relaxations of
(BILP) and \textsf{LAM-T2}{} for the 20 instances
corresponding to
the 8th row of Table~\ref{tab1-h25},
in which we have
$ M=25 $, $ N=200 $, $ q=0.5 $,
and $ \delta_j\in[2R_j,3R_j] $.
Table \ref{tab-relax} reports these values.
The numbers in parentheses are
the
absolute integrality gaps. As can be seen, for a given instance,
the LP relaxation of (BILP)
can
provide a
much sharper
upper-bound than the LP relaxation
of \textsf{LAM-T2}{}.
For example, for
the 6th considered
instance of the problem, the absolute
integrality gap
for (BILP) is 0.05, and for \textsf{LAM-T2}{} is 24.435.
These
sharp upper-bounds
allow for a much more
effective
pruning of the search tree,
which is apparent in the
performance
of
\textsf{B\&P-SEP1}{} and \textsf{B\&P-SEP2}{}.
\begin{table}[tbh]
\centering
\caption{%
The
optimal objective values of the
LP relaxations of
(BILP) and \textsf{LAM-T2}{} for the 20 instances
corresponding to
the 8th row of Table~\ref{tab1-h25}.
For these instances, we have
$ M=25 $, $ N=200 $, $ q=0.5 $,
and $ \delta_j\in[2R_j,3R_j] $.
The numbers in parentheses are
the
absolute integrality gaps.}
\label{tab-relax}
\footnotesize
\begin{tabular}{|llll|llll|}
\hline
& (BILP) & \textsf{LAM-T2}{} & Opt. & & (BILP) & \textsf{LAM-T2}{} & Opt. \\
\hline
1& 190.917 (0.117)& 196.46 (5.66)& 190.8 &
2&164.64 (0.04) &165.04 (0.44)&164.6 \\
3& 182.7 (0)& 194.77 (12.07)& 182.7 &
4&194.68 (0.08) & 195.16 (0.56)& 194.6\\
5&196.15 (0.05) &197.41 (1.31)&196.1 &
6&159.45 (0.05) &183.835 (24.435)& 159.4\\
7&195.456 (0.056) &195.71 (0.31) &195.4 &
8&191.467 (0.067) &196.83 (5.43)& 191.4\\
9&184.567 (0.067) &194.78 (10.28)& 184.5 &
10& 189.1 (0.1)& 194.78 (5.78)& 189\\
11&170.3 (0) & 180.083 (9.783)& 170.3 &
12&181.233 (0.33) & 195.88 (14.68)& 181.2\\
13&197.05 (0.05) & 197.4 (0.4)& 197 &
14&182.3 (0) & 190.45 (8.15)& 182.3\\
15&171.75 (0.05) & 181.77 (10.07)& 171.7 &
16&189.333 (0.033) & 194.74 (5.44)& 189.3\\
17&184.083 (0.083) & 197.37 (13.37)&184 &
18&193.85 (0.05) & 195.06 (1.26) & 193.8\\
19&174.85 (0.05) &187.414 (12.614)& 174.8 &
20&197.433 (0.033) & 197.98 (0.58)& 197.4\\
\hline
\end{tabular}
\end{table}
The parameter $ \delta $ has a major impact on the behavior of \textsf{LAM-T2}{}.
As has been stated in
\cite{ilp},
as the value
of $ \delta $ grows, more variables and constraints
need to be employed in
\textsf{LAM-T2}{}, which
increases the
solution time.
We observed that the execution time of our method also grows with $ \delta $; however, for larger values of $ \delta $, it maintains a
much better performance than its counterpart.
In Table~\ref{tab-delta}, we examine the effect of the value of
$ \delta $
on the performance of
\textsf{B\&P-SEP1}{},
\textsf{B\&P-SEP2}{}, and \textsf{LAM-T2}{}. In this table, $ \delta $ takes
the values
30, 35, 40, and 45.
In the first four
rows, we have $ M\approx 30 $ and
$ N=120 $, and in the next
four rows
$ M\approx 30 $ and $ N=150 $.
For each row, 60 random instances are used.
As can be observed from the table, as the value of $ \delta $ increases,
the
performance difference becomes more
pronounced. For example, for
$ M\approx 30 $,
$ N = 120 $,
and
$ \delta = 45 $, the average running time for \textsf{B\&P-SEP1}{} is 30, for
\textsf{B\&P-SEP2}{}
is 7.6, and for \textsf{LAM-T2}{} is 302.
Moreover,
for 25 of the 60 runs corresponding to this row, CPLEX reached the time limit of 600 seconds when solving
\textsf{LAM-T2}{},
and returned a feasible, and not necessarily optimal, solution.
\begin{table}[tbh]
\caption{The effect of the value of $ \delta $ on the time performance
(in seconds)
of the two approaches. For each row, 60 random instances are used.
The column labeled with ``TL'' reports the number of
CPLEX
runs
stopped due to the time limit
(for solving \textsf{LAM-T2}{} models). None of the \textsf{B\&P-SEP1}{} and \textsf{B\&P-SEP2}{} runs reached the 600 seconds time limit.}
\label{tab-delta}
\centering
\begin{tabular}{|l|c|c|cc|cc|}
\hline
$M $ & $ N $ & $ \delta $ & \textsf{B\&P-SEP1}{} & \textsf{B\&P-SEP2}{} & \textsf{LAM-T2}{} & TL\\% & integral\\
\hline
$ \approx 30 $ & 120 & 30 &1.08&\textbf{0.380} & 1.62 & 0\\% & 45 / 60\\
& & 35 & \textbf{2.43} &2.60 & 27.2 & 2 \\%& 33 /60\\
& & 40 & 9.77 & \textbf{3.13} & 110 & 7 \\%& 20/60\\
& & 45 & 30 & \textbf{7.60} & 302 & 25 \\%& 18/60 \\
\hline
$ \approx 30 $ & 150 & 30 & 1.64 & \textbf{0.972} & 1.34 & 0 \\%& 44 / 60\\
& & 35 & 6.03 & \textbf{1.78}& 20.2 & 0 \\%& 39/60\\
& & 40 & 15.6 &\textbf{9.03} & 111 & 9 \\%& 25 / 60\\
& & 45 &21.5 &\textbf{8.23} &210 & 18 \\%& 22/60 \\
\hline
\end{tabular}
\end{table}
The integer-feasible
but not
necessarily
optimal solutions
that can be obtained
during the B\&B{} process
are also valuable.
When solving an instance of
the ILP problem,
sometimes we are
satisfied with
a solution that
is integer-feasible,
but not necessarily optimal.
In these cases, we
can set a time limit for the solver and accept the best solution it has encountered within this period.
This
practice
is more common for
large instances for which obtaining the optimal solution may be very time consuming.
%
Table~\ref{tab-feas} shows
that \textsf{B\&P-SEP1}{}
and \textsf{B\&P-SEP2}{} can
obtain better
feasible solutions
than \textsf{LAM-T2}{}.
In this table, each routine is given only 90 seconds, and the best solution obtained is reported.
We have considered 20 instances of the problem.
For these instances, we have
$ M=70 $, $ N=70 $, $ q=0.25 $,
and $ \delta_j\in[2R_j,3R_j] $.
As can be seen, the solutions
obtained by
\textsf{B\&P-SEP1}{} and \textsf{B\&P-SEP2}{}
are of better quality than the
solutions of \textsf{LAM-T2}{}.
We conclude this section with a brief comparison of the performance of
\textsf{B\&P-SEP1}{} and \textsf{B\&P-SEP2}{}.
As can be observed from
Tables \ref{tab1-h25}--\ref{tab-feas},
both of these B\&P}\def\bp{\bnp{} procedures
can provide a much better performance
than \textsf{LAM-T2}{}.
Generally,
\textsf{B\&P-SEP2}{} performs better
than \textsf{B\&P-SEP1}{}.
As can be seen from Table~\ref{tab-feas},
the two routines were almost equally good at quickly finding a good feasible solution.
However, for some of the considered problem instances, \textsf{B\&P-SEP2}{} performed much better at proving optimality:
during the B\&P{} process, it \textit{lowered} the upper bound faster,
so that the current incumbent could be confirmed to be optimal sooner.
This can be regarded as the reason for its better overall performance.
\begin{table}[tbh]
\centering
\caption{
A comparison of the capability of the approaches to find
feasible,
and not necessarily
optimal, solutions
in 90 seconds.
20 instances of the problem
have been considered.
For these instances, we have
$ M=70 $, $ N=70 $, $ q=0.25 $,
and $ \delta_j\in[2R_j,3R_j] $.}
\label{tab-feas}
\footnotesize
\begin{tabular}{|lllll|lllll|}
\hline
&\textsf{B\&P-SEP1}{} & \textsf{B\&P-SEP2}{} & \textsf{LAM-T2}{} & Opt. &
&\textsf{B\&P-SEP1}{} & \textsf{B\&P-SEP2}{} & \textsf{LAM-T2}{} & Opt. \\
\hline
1&\textbf{71.5}&\textbf{71.5} & \textbf{71.5} &71.5 &
2&\textbf{73.6}&\textbf{73.6} & 70.2 & 73.6 \\
3&\textbf{72.1}&\textbf{72.1} & 69.5 & 72.1 &
4&71.6&\textbf{71.7} & 70.6 & 71.7 \\
5&\textbf{69.7}&\textbf{69.7} & \textbf{69.7} & 69.7 &
6&\textbf{69.9}&\textbf{69.9} &67.9 & 69.9 \\
7&\textbf{75.4}&\textbf{75.4} & \textbf{75.4} & 75.4 &
8&\textbf{72.5}&69.2 & 67.1 & 72.6 \\
9&\textbf{74.4}&72.4 & \textbf{74.4} & 74.4 &
10&\textbf{62.8}&\textbf{62.8} & 62.6 & 62.8 \\
11&\textbf{71.8}&71.7 & 70.4 & 71.9 &
12&\textbf{70.1}&\textbf{70.1} & 67.7 & 70.1 \\
13&\textbf{75.4}&\textbf{75.4} & 74.9 & 75.4 &
14&\textbf{75.5}&\textbf{75.5} & 74.5 & 75.5 \\
15&70.5&\textbf{70.7} & 69.7 & 70.7 &
16&\textbf{67.2}&61.9 & 65.5 & 67.2 \\
17&\textbf{71.9}&71.8 & 71 & 72.9 &
18&74.7&\textbf{74.8} & 66.3 & 74.8 \\
19&\textbf{72.4}&70.3 & 72.1 & 72.4 &
20&\textbf{72.3}&\textbf{72.3} & 70.4 & 72.3 \\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
In this paper,
we revisited the problem of
spectrum hole assignment to cognitive secondary
users with different hardware limitations.
We proposed a novel 0-1 ILP formulation that
provides very tight LP relaxation bounds. Moreover, we devised two different B\&P{}
procedures
to overcome the
huge number of decision variables in
the proposed formulation. Exhaustive numerical experiments demonstrate that the
proposed approach can
achieve a much better performance
than the best currently
available ILP formulation of the problem. Specifically, we reported
some scenarios in which the proposed
approach was able to solve the problem instances
in about 2\% of the
time required by its counterpart.
The work presented in this paper may
be extended in different directions. Other user-specific parameters, besides the MAR and the required bandwidth, may be
taken into account in the problem formulation. For instance, the users may have different preferences among the spectrum holes, based on
how much data rate they can gain from each hole (due to their locations and/or technologies).
Another direction is to take into consideration other metrics (objective functions), such as \textit{fairness}, instead of considering only the spectrum utilization. Finally, incorporating learning mechanisms
to predict the behavior of the PUs
is a promising direction for future work.
Based on an analysis of the behavior of the PUs,
especially in scenarios where they may frequently
change their spectrum usage pattern, we can
seek a more \textit{stable} assignment.
We may first rank the holes based on
their
availability probabilities, and then
provide an acceptable
hole assignment from the viewpoint of stability.
\section{Acknowledgements}
This work was supported by Iran's National Elites Foundation (INEF).
\bibliographystyle{elsarticle-num}
\section{Introduction}
Theoretical, experimental, and computational tools are employed for studying turbulence. \citet{Kolmogorov:DANS1941Dissipation,Kolmogorov:DANS1941Structure} constructed one of the most popular models of turbulence and showed that for homogeneous and isotropic turbulence, the kinetic energy spectrum in wavenumber space is
\begin{equation}
E_u(k) = K_\mathrm{Ko} \epsilon_u^{2/3} k^{-5/3},
\label{eq:Kolm_spectrum}
\end{equation}
where $ k $ is the wavenumber, $ \epsilon_u $ is the dissipation rate of kinetic energy, and $ K_\mathrm{Ko} $ is Kolmogorov's constant. Computation of $ E_u(k) $ requires the three-dimensional velocity field, which is difficult to measure in experiments. Instead, the velocity field is measured at selected real-space points. \citet{Taylor:PRS1938} proposed an important conjecture that helps connect $ E_u(k) $ to the frequency spectrum, $ E_u(f) $, of the measured time series. \citet{Taylor:PRS1938} hypothesized that turbulent fluctuations are advected by the mean flow (velocity $ {\bf U}_0 $) as if they were frozen in the flow. Under this assumption, using Kolmogorov's spectrum of Eq.~(\ref{eq:Kolm_spectrum}), we can easily derive the frequency spectrum as
\begin{equation}
E_u(f) = A (\epsilon_u U_0)^{2/3} f^{-5/3},
\label{eq:Taylor_Ef}
\end{equation}
where $ A $ is a constant. Thus, Taylor's hypothesis plays an important role in turbulence experiments and data analysis.
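The derivation of Eq.~(\ref{eq:Taylor_Ef}) from Eq.~(\ref{eq:Kolm_spectrum}) can be sketched as follows (the explicit prefactor shown depends on the chosen Fourier normalization and is given here only for illustration):

```latex
% A structure of wavenumber k, frozen in a flow of speed U_0, sweeps past
% a stationary probe at frequency f = k U_0/(2\pi). Conservation of
% spectral energy, E_u(f)\,df = E_u(k)\,dk, then gives
\begin{equation*}
E_u(f) = E_u(k)\,\frac{dk}{df}
= K_\mathrm{Ko}\, \epsilon_u^{2/3}
\left( \frac{2\pi f}{U_0} \right)^{-5/3} \frac{2\pi}{U_0}
= K_\mathrm{Ko}\,(2\pi)^{-2/3}\, (\epsilon_u U_0)^{2/3} f^{-5/3},
\end{equation*}
```

that is, Eq.~(\ref{eq:Taylor_Ef}) with $ A = K_\mathrm{Ko}\,(2\pi)^{-2/3} $ in this normalization.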
\citet{Kraichnan:PF1964Eulerian} argued that the large-scale eddies too sweep the small-scale fluctuations, a phenomenon called \textit{sweeping effect}. \citet{Wilczek:PRE2012} and \citet{He:ARFM2017} generalized Eq.~(\ref{eq:Taylor_Ef}) by taking into account the sweeping effect. Recently, \citet{Verma:INAE2020_sweeping} performed a detailed calculation of temporal correlation function that includes sweeping effect and turbulent diffusion; they showed that $ E_u(f) \propto f^{-5/3} $ in the presence of a mean flow, but $ E_u(f) \propto f^{-2} $ when $ U_0 $ is small compared to the fluctuations.
In magnetohydrodynamic (MHD) turbulence, the velocity and magnetic fields have their mean values ($ {\bf U}_0 $ and ${\bf B}_0 $, respectively, in velocity units) and fluctuations. Hence, Taylor's hypothesis and Eq.~(\ref{eq:Taylor_Ef}) need generalization, especially because the solar wind and solar corona are important MHD turbulence laboratories where spacecraft make in situ measurements of the velocity and magnetic fields. Taylor's hypothesis would help us derive the wavenumber spectrum from the frequency spectrum of the time series. Note that the typical speed of a spacecraft is much smaller than the solar wind speed (250 km/s to 750 km/s). Hence, the spacecraft can be assumed to be stationary, consistent with the assumptions of Taylor's hypothesis. Another important factor is that $ B_0 \ll U_0$ at distances of 0.3 AU or higher, where most spacecraft are located. As a result, we can ignore the effects of the mean magnetic field and employ Taylor's hypothesis. In one of the first applications of Taylor's hypothesis to the solar wind, \citet{Matthaeus:JGR1982rugged} deduced a $ k^{-5/3} $ spectrum for the solar wind by employing Eq.~(\ref{eq:Taylor_Ef}). There are many more works on solar wind observations, e.g.,~\citet{Podesta:ApJ2007}.
The recent spacecraft \textit{Parker Solar Probe} (PSP) has come quite close to the Sun (as close as 8.86 solar radii). At this distance, $ B_0 $ is comparable to $ U_0 $, and hence, we need to examine whether Taylor's hypothesis can be applied to PSP data. In addition, the solar wind is anisotropic, which makes an application of Taylor's hypothesis problematic. \citet{Bourouaine:ApJ2018} modelled the Fourier-transformed temporal correlation function of MHD turbulence using a Gaussian model rather than a pure exponential model, and argued that the decorrelation frequency is linearly related to the perpendicular wavenumber. In a related work, \citet{Perez:AA2021} investigated the validity of Taylor's hypothesis for the PSP data, and showed that the frequency spectrum accurately represents the spectral indices associated with the underlying spatial spectrum of turbulent fluctuations in the plasma frame. In another recent work, \citet{Kasper:PRL2021} computed the frequency spectrum of the PSP data recorded at 13 million km above the photosphere, and observed the spectral index to be closer to $ -3/2 $ than $ -5/3 $. Note that \citet{Kasper:PRL2021} assume Taylor's hypothesis for the solar wind even though $ B_0 $ and $ U_0 $ are comparable in this regime.
In this paper, we extend the derivation of \citet{Verma:INAE2020_sweeping} for hydrodynamic turbulence to MHD turbulence. We express the ``dressed'' or ``renormalized'' temporal correlation functions for the Els\"{a}sser variables in Fourier space. Then, an inverse Fourier transform of the above correlation function yields the two-point two-time correlation function. By setting the distance between the two points to zero, we deduce one-point two-time correlation functions, whose Fourier transform yields the frequency spectra for MHD turbulence. For Kolmogorov's and Iroshnikov-Kraichnan models~\cite{Iroshnikov:SA1964,Kraichnan:PF1965MHD}, we obtain respectively $ f^{-5/3} $ and $ f^{-3/2} $ frequency spectra with prefactors that are functions of $| {\bf U}_0 \mp {\bf B}_0| $, which are the wave speeds of Alfv\'{e}n waves. Thus, we generalize Taylor's hypothesis to MHD turbulence; our derivation is mathematically more rigorous than earlier ones.
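Schematically, the result stated above can be anticipated from the frozen-flow argument (this is only a heuristic sketch, not the detailed calculation): the $ {\bf z}^\pm $ fluctuations are advected at the Alfv\'{e}n wave speeds $ |{\bf U}_0 \mp {\bf B}_0| $, so replacing $ U_0 $ by $ |{\bf U}_0 \mp {\bf B}_0| $ in the sweeping substitution $ k = 2\pi f/|{\bf U}_0 \mp {\bf B}_0| $ converts $ k^{-5/3} $ and $ k^{-3/2} $ wavenumber spectra into

```latex
\begin{equation*}
E^\pm(f) \propto \left( |{\bf U}_0 \mp {\bf B}_0| \right)^{2/3} f^{-5/3}
\qquad \text{and} \qquad
E^\pm(f) \propto \left( |{\bf U}_0 \mp {\bf B}_0| \right)^{1/2} f^{-3/2},
\end{equation*}
```

respectively, with prefactors that depend on $ |{\bf U}_0 \mp {\bf B}_0| $ as stated.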
In the next two sections, we derive the correlation functions for linear and turbulent MHD.
\section{Correlation functions for linear MHD}
The equations of a magnetofluid moving with a mean velocity of ${\mathbf U}_0$ in a background of a mean magnetic field $ {\bf B}_0 $ are
\begin{eqnarray}
\frac{ \partial{\bf u}}{\partial t} + ({\mathbf U}_0 \cdot \nabla) {\bf u} - ({\mathbf B}_0 \cdot \nabla) {\bf b} + ({\bf u} \cdot \nabla) {\bf u} - ({\bf b} \cdot \nabla) {\bf b} & = & -\nabla p + \nu \nabla^2 {\bf u} + {\bf f},
\label{eq:u} \\
\frac{ \partial{\bf b}}{\partial t} + ({\mathbf U}_0 \cdot \nabla) {\bf b} - ({\mathbf B}_0 \cdot \nabla) {\bf u} + ({\bf u} \cdot \nabla) {\bf b} - ({\bf b} \cdot \nabla) {\bf u} & = & \eta \nabla^2 {\bf b},
\label{eq:b} \\
\nabla \cdot {\bf u} = 0,
~~~\nabla \cdot {\bf b} = 0,
\label{eq:divfree}
\end{eqnarray}
where ${\bf u,b}$ are the velocity and magnetic field fluctuations, ${\bf f}$ is the external force, $p$ is the pressure, $\nu$ is the kinematic viscosity, and $ \eta $ is the magnetic diffusivity. In the above equations, the magnetic field is in velocity units, which is obtained by the transformation $ {\bf B}_\mathrm{CGS} \rightarrow {\bf B}_\mathrm{CGS}/\sqrt{4\pi\rho}$, where $ \rho $ is the material density of the flow. In this paper, we assume that the flow is incompressible.
Alfv\'{e}n waves are the basic modes of linearized MHD equations, and they
are conveniently expressed in terms of Els\"{a}sser variables, $ {\bf z^\pm = u \pm b} $. Using Eqs.~(\ref{eq:u}, \ref{eq:b}, \ref{eq:divfree}), we can derive the following equations for $ {\bf z^\pm}$:
\begin{eqnarray}
\frac{ \partial{\bf z}^\pm}{\partial t} + {\mathbf Z}_0^\mp \cdot \nabla {\bf z}^\pm + ({\bf z}^\mp \cdot \nabla) {\bf z}^\pm & = & -\nabla p + \nu_\pm \nabla^2 {\bf z}^\pm + \nu_\mp \nabla^2 {\bf z}^\mp + {\bf f},
\label{eq:z} \\
\nabla \cdot {\bf z}^\pm = 0,
\end{eqnarray}
where ${\mathbf Z}_0^\mp = ({\mathbf U}_0 \mp {\mathbf B}_0) $, and $ \nu_\pm = (\nu \pm \eta)/2 $. For simplification, in this paper, we take $ \nu = \eta$, which leads to $\nu_+ = \nu = \eta$ and $\nu_- = 0$. In Fourier space, the equations for $ {\bf z}^\pm $ are
\begin{eqnarray}
\left[\frac{ \partial}{\partial t} + i {\mathbf Z}_0^\mp \cdot {\bf k} + \nu k^2 \right] {\bf z}^\pm ({\bf k})
& = & -i {\bf k} p({\bf k}) - i \sum_{\bf p} [{\bf k \cdot z^{\mp} ({\bf q})}] \, {\bf z}^\pm ({\bf p}) + {\bf f} ({\bf k}),
\label{eq:zk} \\
{\bf k} \cdot {\bf z}^\pm ({\bf k}) = 0,
\end{eqnarray}
with $ {\bf q = k-p} $. A linearized version of the above equations indicates that the waves $ {\bf z}^+ ({\bf k}) $ and $ {\bf z}^- ({\bf k}) $ oscillate with the frequencies $ ({\mathbf U}_0 - {\mathbf B}_0) \cdot {\bf k} $ and $({\mathbf U}_0 + {\mathbf B}_0) \cdot {\bf k} $ respectively, i.e., they propagate with the velocities $ {\mathbf Z}_0^- $ and $ {\mathbf Z}_0^+ $.
Using the linearised versions of Eq.~(\ref{eq:zk}), we derive the equations for the Green's functions
\begin{eqnarray}
\left[\frac{ \partial}{\partial t} + i {\mathbf Z}_0^\mp \cdot {\bf k} + \nu k^2 \right] G^\pm ({\bf k},t,t') & = & \delta(t-t'),
\label{eq:Green_linear}
\end{eqnarray}
whose solutions are
\begin{equation}
G^\pm({\bf k},\tau) =\theta(\tau) \exp{[-i {\mathbf Z}_0^\mp \cdot {\bf k} \tau]} \exp{(- \nu k^2 \tau)},
\label{eq:Gkt_nu0}
\ee
where $\tau = t-t'$, and $\theta(\tau)$ is the step function.
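As a quick sanity check on Eq.~(\ref{eq:Gkt_nu0}), the sketch below integrates the linearised equation numerically and compares the result with the analytic Green's function. The parameter values for $ {\mathbf Z}_0^\mp \cdot {\bf k} $ and $ \nu k^2 $ are arbitrary test values, not taken from any data.

```python
import numpy as np

# Arbitrary test values standing in for Z0^(-/+).k and nu*k^2 (assumptions)
Zk, nuk2 = 2.5, 0.3
lam = 1j * Zk + nuk2

def rhs(G):
    # linearised equation in Fourier space: dG/dtau = -(i Z0.k + nu k^2) G
    return -lam * G

# Fourth-order Runge-Kutta integration of G from tau = 0 to tau = 2
G, dt, nsteps = 1.0 + 0.0j, 1e-4, 20000
for _ in range(nsteps):
    k1 = rhs(G)
    k2 = rhs(G + 0.5 * dt * k1)
    k3 = rhs(G + 0.5 * dt * k2)
    k4 = rhs(G + dt * k3)
    G += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
tau = nsteps * dt

# analytic solution: damped oscillation exp(-i Zk tau) exp(-nu k^2 tau)
G_exact = np.exp(-1j * Zk * tau) * np.exp(-nuk2 * tau)
assert abs(G - G_exact) < 1e-10
```

The numerical solution reproduces the damped oscillatory form of the Green's function to machine-level accuracy.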
The equal-time correlation functions, $C^\pm({\bf k},0)$, and two-time correlation functions, $C^\pm({\bf k},\tau)$, for $ {\bf z^\pm (k)} $ are defined as
\begin{eqnarray}
C^\pm({\bf k},0) & = & \langle |\mathbf z^\pm(\mathbf k, t)|^2 \rangle, \\
C^\pm({\bf k},\tau) & = & \langle \mathbf z^\pm(\mathbf k, t)\cdot \mathbf z^{\pm*}(\mathbf k, t+\tau) \rangle.
\label{eq:C_k_tau_def}
\end{eqnarray}
Note that $C^\pm({\bf k},\tau) $ is a complex function. We define the normalised correlation function as
\begin{equation}
R^\pm(\mathbf k, \tau) = \frac{C^\pm(\mathbf k, \tau)}{C^\pm(\mathbf k, 0)}.
\label{eq:R}
\end{equation}
A generalisation of fluctuation-dissipation theorem to MHD yields~\cite{Kiyani:PRE2004}
\begin{equation}
R^\pm(\mathbf k, \tau) = G^\pm({\bf k},\tau) =\theta(\tau) \exp{(-i {\mathbf Z}_0^\mp \cdot { \bf k} \tau)} \exp{(- \nu k^2 \tau)}.
\label{eq:Rkt_linear}
\ee
The above equation indicates that the normalised correlation functions exhibit damped oscillations.
\section{Correlation functions for turbulent MHD}
A magnetofluid becomes turbulent when $ UL/\nu \gg 1 $ and $ UL/\eta \gg 1 $, where $ U, L $ are the large-scale velocity and length respectively. There is no definitive theory of MHD turbulence; rather, it has many models~\cite{Verma:PR2004,Verma:book:ET,Beresnyak:LR2019}. In this paper, we consider two leading models and work with the shell spectra $ E^\pm(k) $, $ E_u(k) $, and $ E_b(k) $, which are defined via
\begin{eqnarray}
E^\pm & = & \frac{1}{2} \langle |{\bf z}^\pm|^2 \rangle = \int E^\pm(k) dk, \\
E_u & = & \frac{1}{2} \langle |{\bf u}|^2 \rangle = \int E_u(k) dk, \\
E_b & = & \frac{1}{2} \langle |{\bf b}|^2 \rangle = \int E_b(k) dk,
\end{eqnarray}
where $ E^\pm, E_u , E_b $ are the total energies per unit volume of $ {\bf z}^\pm $, $ {\bf u} $, and $ {\bf b} $ respectively.
\begin{enumerate}
\item {\em Kolmogorov-like MHD turbulence phenomenology}: In this framework, the energy spectra $ E^\pm(k) $ are modelled as~\cite{Marsch:RMA1991,Verma:JGR1996DNS,Verma:PR2004}
\begin{equation}
E^\pm(k) = K^\pm (\epsilon^\pm)^{4/3} (\epsilon^\mp)^{-2/3} k^{-5/3},
\ee
where $ \epsilon^\pm $ are the inertial-range energy fluxes or dissipation rates of $ {\bf z}^\pm $, and $ K^\pm $ are constants, similar to Kolmogorov's constant for hydrodynamic turbulence. This phenomenology is also referred to as \textit{imbalanced MHD}.
In addition, \citet{Goldreich:ApJ1995} constructed a phenomenology for anisotropic MHD turbulence. Using \textit{critical balance} between the time scales for the nonlinear interactions and Alfv\'{e}n wave propagation, they showed that the modal energy is
\begin{equation}
\tilde{E}(k_\perp,k_\parallel) = K \epsilon^{2/3} k_\perp^{-10/3} g(k_\parallel/k_\perp^{2/3}),
\label{eq:Ek_GS}
\ee
where $ K $ is a constant, $ \epsilon $ is the total dissipation rate, and $k_\parallel$ and $k_\perp$ are respectively the wavenumber components parallel and perpendicular to the mean magnetic field. Note that
\begin{equation}
\int k_\perp dk_\perp dk_\parallel \tilde{E}(k_\perp,k_\parallel) = E,
\ee
where $ E$ is the total energy.
\item {\em Iroshnikov-Kraichnan phenomenology\cite{Iroshnikov:SA1964,Kraichnan:PF1965MHD}}: In this framework, the Alfv\'{e}n time scale, $ (k B_0)^{-1} $, is the relevant time scale, leading to the energy spectrum as
\begin{equation}
E_u(k) \approx E_b(k) \approx K_\mathrm{IK} (\epsilon B_0)^{1/2} k^{-3/2},
\ee
where $ K_\mathrm{IK} $ is a constant, and $ B_0 $ is the amplitude of the mean magnetic field or that of the large-scale magnetic field. In this phenomenology, the kinetic and magnetic energies are equipartitioned.
\citet{Dobrowolny:PRL1980} showed that
\begin{equation}
\epsilon^+ = \epsilon^- = \frac{1}{B_0} E^+(k) E^-(k) k^3.
\ee
For a special case when $ E^+(k) = E^-(k) =E(k) $ ($ E$ is the total energy), we obtain
\begin{equation}
E^+(k) = E^-(k) = E(k) = K' (\epsilon B_0)^{1/2} k^{-3/2}.
\label{eq:E(k)_MHD_IK}
\ee
\end{enumerate}
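The balanced-case algebra leading to Eq.~(\ref{eq:E(k)_MHD_IK}) is a one-line inversion of the Dobrowolny et al. relation; a sympy sketch of this step:

```python
import sympy as sp

# Balanced case E+(k) = E-(k) = E(k): invert  eps = E(k)^2 k^3 / B0.
# All symbols are declared positive, so only the physical root survives.
eps, B0, k, E = sp.symbols('epsilon B_0 k E', positive=True)
roots = sp.solve(sp.Eq(eps, E**2 * k**3 / B0), E)
assert len(roots) == 1         # positivity selects a single root
Ek = roots[0]                  # equals sqrt(eps*B0) * k**(-3/2)

# verify that the root satisfies the original relation exactly
assert sp.simplify(Ek**2 * k**3 / B0 - eps) == 0
```

The positive root reproduces the Iroshnikov-Kraichnan spectrum $ E(k) = (\epsilon B_0)^{1/2} k^{-3/2} $ up to the constant $ K' $, which is suppressed here.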
Now, we model the correlation function for MHD turbulence following the strategies adopted for hydrodynamic turbulence. The most critical part is the convective component. Using Eq.~(\ref{eq:zk}), we deduce the convective component to be
$ \exp{(-i {\mathbf Z}_0^\mp \cdot { \bf k} \tau)} $ for $ {\bf z}^\pm $. In the following discussion we show that the convective part contributes most significantly to the frequency spectra.
The other two parts of the correlation function are the effective diffusion parameters and the sweeping effect.
In hydrodynamic turbulence, field-theoretic treatment shows that the \textit{renormalized viscosity} or \textit{effective viscosity} ($ \nu(k) $) is
\begin{equation}
\nu(k) = \nu_* \epsilon^{1/3} k^{-4/3},
\label{eq:nu(k)_HD}
\ee
where $ \nu_* $ is a nondimensional constant\cite{Yakhot:JSC1986, McComb:book:Turbulence,Verma:PR2004}. However, MHD turbulence has two diffusion parameters, viscosity and magnetic diffusivity, that depend on the cross helicity, Alfv\'{e}n ratio, and mean magnetic field. We do not yet have general formulas for these renormalized parameters, although they have been computed for special cases~(see \citet{Verma:PRE2001,Verma:Pramana2003Nonhelical,Verma:Pramana2003Helical}). In this paper, we simplify the calculation by assuming that both renormalized parameters are equal (i.e., $ \nu(k) = \eta(k) $), and that for the Kolmogorov-like phenomenology,
\begin{equation}
\nu^\pm(k) = \nu^\pm_* (\epsilon^\pm)^{1/3} k^{-4/3},
\label{eq:nu(k)_MHD}
\ee
and for Iroshnikov-Kraichnan phenomenology,
\begin{equation}
\nu^\pm(k) = \nu'_* (\epsilon B_0)^{1/4} k^{-5/4}.
\label{eq:nu(k)_IK}
\ee
Here, $\nu^\pm_* $ and $ \nu'_* $ are constants. As we show in the next section, the terms with $ \nu^\pm(k) $ get integrated in $ E^\pm(f) $. Hence, a precise form of $ \nu^\pm(k) $ may not be critical for the derivation of $ E^\pm(f) $.
In addition, according to the \textit{sweeping effect}, large-scale flow structures sweep the inertial-range fluctuations. For hydrodynamic turbulence, \citet{Kraichnan:PF1964Eulerian}, \citet{Wilczek:PRE2012}, \citet{Verma:INAE2020_sweeping} and others have constructed models for the sweeping effect. For MHD turbulence, we follow the prescription of \citet{Verma:INAE2020_sweeping} who added a random large-scale velocity field, $ \tilde{\bf U}_0 $, to the mean velocity field $ {\bf U}_0 $. These corrections are added in the correlation function for the linear equation [Eq.~(\ref{eq:Rkt_linear})].
Under the above assumptions, we arrive at the following expressions for the correlation functions of MHD turbulence~\cite{Verma:INAE2020_sweeping}:
\begin{eqnarray}
R^\pm (\mathbf k, \tau) = \frac{C^\pm (\mathbf k, \tau)}{C^\pm (\mathbf k)} & = & \exp{(-i {\mathbf Z}_0^\mp \cdot { \bf k} \tau)}\exp(-i {\bf \tilde{U}_0 \cdot k} \tau) \exp[-\nu^\pm(k) k^2 \tau] .
\label{eq:Rk_MHD}
\end{eqnarray}
Note that the correlation functions depend on both ${\bf U}_0$ and ${\bf B}_0$. In the next section, we will relate the above functions to Taylor's hypothesis for MHD turbulence.
\section{Taylor's hypothesis for MHD turbulence}
\label{sec:Taylor_hypo}
Using Eq.~(\ref{eq:Rk_MHD}), we first derive the two-point two-time correlation functions, and then the one-point two-time correlation functions, whose Fourier transform yields the frequency spectra of the real-space time series of $ {\bf z}^\pm $.
Using Eq.~(\ref{eq:Rk_MHD}), we derive the following two-point two-time correlation functions for $ {\bf z}^\pm $:
\begin{eqnarray}
C^\pm({\bf r}, \tau) & = & \int d{\bf k} C^\pm({\bf k}) \exp[-\nu^\pm(k) k^2 \tau-i{\bf Z^\mp_0 \cdot k} \tau] \exp[-i {\bf k} \cdot \tilde{\bf U}_0({\bf k}) \tau] \exp(i {\bf k} \cdot {\bf r}),
\label{eq:corr_U0}
\end{eqnarray}
where $ {\bf Z}^\mp_0 = {\bf U}_0 \mp {\bf B}_0$. We ensemble average $C^\pm({\bf r}, \tau)$ for random $ \tilde{\bf U}_0 $ (assuming isotropic, as in \citet{Kraichnan:PF1964Eulerian}) that yields~\cite{Kraichnan:PF1964Eulerian, Wilczek:PRE2012,Verma:INAE2020_sweeping}
\begin{eqnarray}
C^\pm({\bf r}, \tau) & = & \int d{\bf k} C^\pm({\bf k}) \exp[-\nu^\pm(k) k^2 \tau - i{\bf Z^\mp_0 \cdot k} \tau] \langle \exp[-i c k \tilde{U}_0( k)\tau] \rangle \exp(i {\bf k} \cdot {\bf r}) \nonumber \\
& = & \int d{\bf k} C^\pm({\bf k})\exp[-\nu^\pm(k) k^2 \tau - i{\bf Z^\mp_0 \cdot k} \tau] \exp[- c^2 k^2 \{\tilde{U}_0( k) \}^2 \tau^2] \exp(i {\bf k} \cdot {\bf r}).
\label{eq:Cpm_r_t}
\end{eqnarray}
For simplicity, we assume that the constant $ c \approx 1 $, and set ${\bf r}=0$ to compute one-point two-time correlation functions $ C^\pm({\bf r}=0, \tau) = C^\pm(\tau)$. The Gaussian model for the sweeping effect has been reported earlier by \citet{Kraichnan:PF1964Eulerian,Wilczek:PRE2012}, and \citet{Bourouaine:ApJ2018}.
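The Gaussian average used above follows from the characteristic function of a zero-mean Gaussian variable, $ \langle e^{-iX} \rangle = e^{-\langle X^2 \rangle /2} $, with the factor of $1/2$ absorbed into the constant $ c $. A minimal Monte Carlo check of this identity (arbitrary test variance):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.8                       # std of X = c k U0~ tau (arbitrary test value)
X = rng.normal(0.0, sigma, size=2_000_000)

mc = np.mean(np.exp(-1j * X))     # Monte Carlo estimate of <exp(-iX)>
exact = np.exp(-0.5 * sigma**2)   # Gaussian characteristic function

# the imaginary part vanishes by symmetry; the real part matches exp(-sigma^2/2)
assert abs(mc.imag) < 5e-3
assert abs(mc.real - exact) < 5e-3
```

This is why averaging the oscillatory sweeping factor over a Gaussian $ \tilde{U}_0 $ turns it into the Gaussian decay $ \exp[-c^2 k^2 \tilde{U}_0^2 \tau^2] $ of Eq.~(\ref{eq:Cpm_r_t}).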
Now we derive $ C^\pm(\tau) $ for the Kolmogorov-like phenomenology. Following Pope~\cite{Pope:book}, we model $C^\pm({\bf k})$ as
\begin{equation}
C^\pm({\bf k}) = \frac{E^\pm(k)}{2\pi k^2} = \frac{1}{2\pi k^2} f_L(kL) f_\eta(k\eta) K^\pm k^{-5/3} \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}}
\label{eq:Ek_Pope}
\ee
where
\begin{eqnarray}
f_L(kL) & = & \left( \frac{kL}{[(kL)^2 + c_L]^{1/2}} \right)^{5/3+p_0},
\label{eq:fL} \\
f_\eta(k\eta) & = & \exp \left[ -\beta \left\{ [ (k \eta)^4 + c_\eta^4 ]^{1/4} - c_\eta \right\} \right]
\label{eq:feta}
\end{eqnarray}
are respectively the forcing and dissipative components of the energy spectra, and $c_L, c_\eta, p_0, \beta$ are constants. We employ Eq.~(\ref{eq:nu(k)_MHD}) for $ \nu^\pm(k) $, and $\tilde{U}_0(k) = \epsilon^{1/3}k^{-1/3}$ with $ \epsilon $ as the total dissipation rate (see \citet{Verma:INAE2020_sweeping}). In addition, we ignore the constants for brevity. After the above substitutions in Eq.~(\ref{eq:Cpm_r_t}) with $ {\bf r}=0 $, we obtain
\begin{eqnarray}
C^\pm(\tau) & = & K^\pm \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}} \int dk k^{-5/3} f_L(kL) f_\eta(k\eta) \exp(-i{\bf Z^\mp_0 \cdot k} \tau) \times \nonumber \\
&& \exp[-(\epsilon^\pm)^{1/3}k^{2/3}\tau] \exp[-\epsilon^{2/3}k^{4/3}\tau^2].
\label{eq:C_tau_appendix}
\end{eqnarray}
The above integral is quite complex, but it can be simplified in the asymptotic case.
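The asymptotic simplification rests on the behaviour of the cutoff functions $ f_L $ and $ f_\eta $. The sketch below evaluates them numerically using representative values of the constants $ c_L, c_\eta, p_0, \beta $ (these values are assumptions in the spirit of Pope's model, not taken from the present derivation) and confirms that both factors are close to unity for inertial-range wavenumbers.

```python
import math

# Representative (assumed) model constants
c_L, c_eta, p0, beta = 6.78, 0.40, 2.0, 5.2

def f_L(kL):
    """Large-scale (forcing-range) cutoff function, Eq. (fL)."""
    return (kL / math.sqrt(kL**2 + c_L)) ** (5.0 / 3.0 + p0)

def f_eta(keta):
    """Dissipation-range cutoff function, Eq. (feta)."""
    return math.exp(-beta * ((keta**4 + c_eta**4) ** 0.25 - c_eta))

# Inertial range (kL >> 1, k*eta << 1): both factors are close to 1,
# which is what justifies dropping them in the asymptotic evaluation.
assert abs(f_L(1e3) - 1.0) < 1e-2
assert abs(f_eta(1e-3) - 1.0) < 1e-2

# Outside the inertial range they strongly suppress the spectrum.
assert f_L(0.1) < 0.1
assert f_eta(10.0) < 1e-10
```
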
For Taylor's frozen-in hypothesis to work, we assume that
$\mathbf Z^\mp_0 \cdot \mathbf k \gg \nu^\pm(k) k^2$ and $\mathbf Z^\mp_0 \cdot \mathbf k \gg k \tilde{U}_0(k)$. In addition, for $ C^+(\tau) $ and $ C^-(\tau) $, we choose the $z$ axis to be along the direction of ${\bf Z}^-_0$ and ${\bf Z}^+_0$ respectively. For simplification, we make a change of variable, $\tilde{k}_\pm = Z^\mp_0 k \tau$, and use $ \epsilon = U^2/T $ ($ U $ is the rms speed). As a result, we obtain
\begin{eqnarray}
C^\pm(\tau) & \approx & K^\pm ( Z^\mp_0 \tau)^{2/3} \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}} \int d{\tilde{k}_\pm} \tilde{k}_\pm^{-5/3} f_L[\tilde{k}_\pm L/(Z^\mp_0\tau)] f_\eta[\tilde{k}_\pm \eta / (Z^\mp_0 \tau)] \frac{\sin \tilde{k}_\pm}{\tilde{k}_\pm} \times \nonumber \\
&& \exp[- \tilde{k}_\pm^{2/3} (U/Z^\mp_0)^{2/3} (\alpha^\pm \tau/T)^{1/3} - \tilde{k}_\pm^{4/3} (U/Z^\mp_0)^{4/3} (\tau/T)^{2/3} ],
\end{eqnarray}
where $\epsilon^\pm = \epsilon \alpha^\pm $. For $\tau$ in the inertial range, $L/(Z^\mp_0\tau) \gg 1$ and $\eta/ (Z^\mp_0 \tau) \ll 1$. Consequently, $f_L[\tilde{k}_\pm L/(Z^\mp_0\tau)] \approx 1$ and $f_\eta[\tilde{k}_\pm \eta / (Z^\mp_0 \tau)] \approx 1$. Therefore,
\begin{eqnarray}
C^\pm(\tau) & \approx & K^\pm ( Z^\mp_0 \tau)^{2/3} \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}} \int d{\tilde{k}_\pm} \tilde{k}_\pm^{-5/3} \frac{\sin \tilde{k}_\pm}{\tilde{k}_\pm } \times \nonumber \\
&& \exp[- \tilde{k}_\pm^{2/3} (U/Z^\mp_0)^{2/3} (\alpha^\pm \tau/T)^{1/3} - \tilde{k}_\pm^{4/3} (U/Z^\mp_0)^{4/3} (\tau/T)^{2/3} ] \nonumber \\
& = & A^\pm K^\pm ( Z^\mp_0 \tau)^{2/3} \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}},
\end{eqnarray}
where $A^\pm$ are the values of the nondimensional integrals. The Fourier transform of the above $ C^\pm(\tau)$ yields the following frequency spectra:
\begin{eqnarray}
E^\pm(f) & \approx & \int C^\pm(\tau) \exp(-i 2\pi f \tau) d\tau = \int A^\pm K^\pm ( Z^\mp_0 \tau)^{2/3} \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}} \exp(-i 2\pi f \tau) d\tau \nonumber \\
& = & A'^\pm ( |{\bf U}_0 \mp {\bf B}_0|)^{2/3} \frac{(\epsilon^\pm)^{4/3}}{(\epsilon^\mp)^{2/3}} f^{-5/3},
\label{eq:MHD_Taylor_Kolm}
\end{eqnarray}
where $ A'^\pm $ are constants. Thus, we obtain $ -5/3 $ frequency spectra for Kolmogorov-like MHD turbulence phenomenology. Note that $E^+(f)$ and $E^-(f)$ are functions of $ |{\bf U}_0 - {\bf B}_0| $ and $ |{\bf U}_0 + {\bf B}_0 |$, respectively, which are the respective speeds of $ {\bf z^+(k)}$ and $ {\bf z^-(k)}$ in the linear approximation. In comparison to Eq.~(\ref{eq:Taylor_Ef}), $ E^\pm(f) $ has $ U_0 \rightarrow |{\bf U}_0 \mp {\bf B}_0| $ and
$ \epsilon_u \rightarrow (\epsilon^\pm)^{4/3} (\epsilon^\mp)^{-2/3}$.
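The step from $ C^\pm(\tau) \propto \tau^{2/3} $ to $ E^\pm(f) \propto f^{-5/3} $ can be checked against the one-sided Fourier identity $\int_0^\infty \tau^{2/3} e^{-\lambda\tau} \cos(\omega\tau)\, d\tau = \mathrm{Re}[\Gamma(5/3)/(\lambda + i\omega)^{5/3}]$, whose magnitude scales as $ \omega^{-5/3} $ for $ \omega \gg \lambda $. The regularizer $ \lambda $ is an assumption here, standing in for the large-scale cutoff that makes the integral convergent.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

lam, w = 1.0, 3.0      # regularizer and angular frequency (test values)

# numerical left-hand side (integrand has decayed to ~1e-26 by t = 60)
num, _ = quad(lambda t: t**(2.0/3.0) * np.exp(-lam*t) * np.cos(w*t),
              0.0, 60.0, limit=400)

# analytic right-hand side: Re[Gamma(5/3) / (lam + i w)^{5/3}]
exact = (gamma(5.0/3.0) / (lam + 1j*w) ** (5.0/3.0)).real
assert abs(num - exact) < 1e-6

# magnitude scaling: |I(w)| ~ w^{-5/3} for w >> lam
I = lambda w_: abs(gamma(5.0/3.0) / (lam + 1j*w_) ** (5.0/3.0))
slope = np.log(I(200.0) / I(100.0)) / np.log(2.0)
assert abs(slope + 5.0/3.0) < 0.02
```

More generally, the same identity with exponent $ \alpha $ gives $ f^{-(\alpha+1)} $, which is also the step used below for the Iroshnikov-Kraichnan case ($ \alpha = 1/2 $, giving $ f^{-3/2} $).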
The calculation for anisotropic MHD turbulence is more complex. However, the complexity lies mainly in the computation of the integral, which will be reflected in the constants $ A^\pm $. For anisotropic MHD turbulence, we therefore expect the form of $ E^\pm(f) $ to be the same as in Eq.~(\ref{eq:MHD_Taylor_Kolm}).
The above analysis can be extended to Iroshnikov-Kraichnan phenomenology, where the correlation functions are
\begin{equation}
C^\pm({\bf k}) = \frac{E^\pm(k)}{2\pi k^2} \sim
( \epsilon B_0)^{1/2} k^{-7/2}.
\label{eq:Ck_Kraichnan}
\ee
We substitute Eq.~(\ref{eq:Ck_Kraichnan}) into Eq.~(\ref{eq:Cpm_r_t}), employ $ \nu^\pm(k) $ of Eq.~(\ref{eq:nu(k)_IK}), and set $ {\bf r} = 0$. Following the same steps as above, we obtain
\begin{eqnarray}
C^\pm(\tau) \approx (\epsilon B_0 Z^\mp_0 \tau)^{1/2}.
\end{eqnarray}
Fourier transform of the above $ C^\pm(\tau) $ yields the following frequency spectra:
\begin{eqnarray}
E^\pm(f) = A_\mathrm{IK} (\epsilon B_0 |{\bf U}_0 \mp {\bf B}_0| )^{1/2} f^{-3/2},
\label{eq:MHD_Taylor_Kr}
\end{eqnarray}
where $ A_\mathrm{IK} $ is a constant. Thus, we obtain $ f^{-3/2} $ frequency spectrum for the Iroshnikov-Kraichnan phenomenology.
The prefactors for $ E^\pm(f) $ are functions of $ Z^\mp_0 $. However, the prefactors for $ E_u(f) $ and $ E_b(f) $ would be more complex because
\begin{equation}
E^\pm(f) = E_u(f) + E_b(f) \pm 2 H_c(f),
\ee
where $ H_c = (1/2) \langle {\bf u \cdot b} \rangle$ is the cross helicity. Clearly, derivation of $ E_u(f)$, $ E_b(f)$, and $H_c(f) $ requires further inputs, e.g., relationships among these functions. Given these complexities, we defer the derivation of these spectra to a future work.
\section{Discussion and Conclusions}
In this paper, we extend Taylor's frozen-in hypothesis to MHD turbulence. From first principles, we derive the one-point two-time correlation functions for MHD turbulence, whose Fourier transform yields the corresponding frequency spectra. The main predictions of our quantitative calculations are as follows:
\begin{enumerate}
\item The spectral indices for $ E^\pm(k) $ and $ E^\pm(f) $ are the same.
\item The prefactors of $E^\pm(f) $ are proportional to $ |{\bf U_0 \mp B_0}|^{2/3 }$ in Kolmogorov-like phenomenology, but proportional to $ B_0^{1/2} | {\bf U_0 \mp B_0}|^{1/2 }$ in Iroshnikov-Kraichnan phenomenology. In contrast to $ E_u(f) $ for hydrodynamic turbulence, $ U_0 \rightarrow| {\bf U_0 \mp B_0}| $ in $E^\pm(f) $ of MHD turbulence.
\item The kinetic and magnetic energy spectra, $ E_u(f) $ and $ E_b(f) $, are expected to have more complex prefactors.
\item When $ B_0 \ll U_0 $, the frequency spectrum of Eq.~(\ref{eq:Taylor_Ef}) can be employed for all the fields.
\end{enumerate}
The above predictions are important for the time series analysis of the solar wind and solar corona when $ U_0 $ and $ B_0 $ are comparable, e.g., for the Parker Solar Probe (PSP) when it is close to the Sun. Hence, PSP's data provide a unique opportunity for testing the above predictions. We hope to validate these predictions in the near future.
Solar wind data reveal another interesting property. Many authors~\cite{Matthaeus:JGR1982rugged,Podesta:ApJ2007,Kasper:PRL2021} observe that the kinetic energy spectrum ($ E_u(k) $) is steeper than the magnetic energy spectrum ($ E_b(k) $). This relative steepening of $ E_u(k) $ with respect to $ E_b(k) $ is attributed to the energy transfers from the kinetic energy to the magnetic energy~\cite{Verma:Fluid:2021,Verma:JPA2022}. Note that these energy transfers are critical for magnetic field generation, or dynamo. Interestingly, $ E^\pm(k) $ do not suffer from such steepening due to an absence of cross transfer between $ {\bf z}^+ $ and $ {\bf z}^-$ (see \citet{Verma:PR2004,Verma:book:ET}). This is another reason why $ E^\pm(k) $ are more reliable energy spectra than $ E_u(k) $ and $ E_b(k) $. It will be interesting to quantitatively compare the solar wind observations with the theoretical predictions made in this paper.
\acknowledgements
The author thanks Rupak Mukherjee for useful suggestions and discussions. This work is partially supported by the project CRG/2021/00l097 of the Science and Engineering Research Board (SERB), India.
\section{Introduction}
Chandrasekhar pioneered the following areas of astrophysics: white dwarfs, neutron stars, black holes, stellar structures, radiative transfer, random processes, stability of ellipsoidal figures of equilibrium, instabilities, and turbulence. His work on turbulence is not as well known as his other works, even though his papers on the quantification of structure functions and the energy spectrum of magnetohydrodynamic (MHD) turbulence are among the first in the field. Recently, \citet{Sreenivasan:ARFM2019} wrote a very interesting review of Chandrasekhar's contributions to fluid mechanics, which include hydrodynamic instabilities and turbulence. While Sreenivasan's article is focussed on hydrodynamics, in this paper, I provide a brief review of Chandrasekhar's work on MHD turbulence.
In the years 1948 to 1960, Chandrasekhar worked intensely on turbulence. In 1954, Chandrasekhar gave a set of lectures on turbulence in Yerkes Observatory. These lectures, published by \citet{Spiegel:book_edited:Turbulence}, illustrate Chandrasekhar's line of approach to understand turbulence. To quote \citet{Spiegel:book_edited:Turbulence}, ``Still, Chandra pulled things together and published two papers on his approach (in 1955 and 1956). The initial reception of the theory was positive. Indeed, Stanley Corrsin once told me that, back in the mid-fifties, he was so sure that the `turbulence problem' would soon be solved that he bet George Uhlenbeck five dollars that he was right. Afterwards, when Corrsin and Uhlenbeck heard Chandra lecture on his theory, Uhlenbeck came over and handed Corrsin a fiver. It soon appeared that Uhlenbeck should have waited before parting with his money." Refer to \cite{Sreenivasan:ARFM2019} for a more detailed account of Chandrasekhar's work on turbulence and instabilities. The books by \cite{Wali:book:Chandra_bio} and \cite{Miller:book:Chandra} are excellent biographies of Chandrasekhar.
In this paper, I provide a brief overview of the leading works in MHD turbulence, starting from those of Chandrasekhar. These works are related to the inertial range of homogeneous MHD turbulence. The works beyond Chandrasekhar's contributions are divided into two periods: (a) 1965-1990, during which the field was essentially dominated by the belief that the Kraichnan-Iroshnikov model works for MHD turbulence; (b) 1991-2010, during which many models and theories came up that support a Kolmogorov-like spectrum for MHD turbulence. I also remark that the present paper is my personal perspective, which may differ from those of others.
The outline of the paper is as follows: In Section 2, I will briefly introduce the theories of hydrodynamic turbulence by Kolmogorov and Heisenberg.
Section 3 contains a brief summary of Chandrasekhar's work on MHD turbulence, carried out between 1948 and 1955. In Sections 4 and 5, I briefly review the works on MHD turbulence during the periods 1965-1990 and 1991-2010 respectively. Section 6 contains a short discussion on possible approaches for resolving the present impasse in MHD turbulence.
I conclude in Section 7.
\section{Leading turbulence models before Chandrasekhar's work}
\label{sec:HD_turb}
Chandrasekhar worked on hydrodynamic (HD) and magnetohydrodynamics (MHD) turbulence during the years 1948 to 1960. Some of the papers written by him during this period are~\citet{Chandrasekhar:PTRS1950,Chandrasekhar:PRSA1951,Chandrasekhar:PRSA1951_II,Chandrasekhar:PTRS1952,Chandrasekhar:PRSA1955_I,Chandrasekhar:PRSA1955_II,Chandrasekhar:PR1956}. In addition, \citet{Chandrasekhar:book:Instability} wrote a famous treatise on hydrodynamic and magnetohydrodynamic instabilities. Chandrasekhar's lectures on turbulence (delivered in 1954) have been published by \citet{Spiegel:book_edited:Turbulence}.
Refer to \cite{Sreenivasan:ARFM2019} for commentary on these works.
During or before Chandrasekhar's work, there were important results by Taylor, Batchelor, Kolmogorov, Heisenberg, among others. Here, we briefly describe the turbulence theories of Kolmogorov and Heisenberg, primarily because Chandrasekhar's works on MHD turbulence are related to these theories. We start with Kolmogorov's theory of turbulence.
\subsection{Kolmogorov's theory of turbulence}
\label{sec:Kolm}
Starting from Navier-Stokes equation, under the assumptions of homogeneity and isotropy, \citet{Karman:PRSA1938} [also see \cite{Monin:book:v2}] derived the following evolution equation for $\left\langle u_i u'_i \right\rangle$:
\begin{eqnarray}
\frac{\partial}{\partial t} \frac{1}{2} \left\langle u_i u_i' \right\rangle & = &
\frac{1}{4} \nabla_l \cdot \left\langle |{\bf u'-u}|^2 ({\bf u'-u}) \right\rangle + \left\langle F_{\mathrm{LS},i} u_i' \right\rangle \nonumber \\
&& + \nu \nabla^2 \left\langle u_i u_i' \right\rangle \nonumber \\
& = & T_u({\bf l}) + \mathcal{F}_\mathrm{LS}({\bf l}) - D_u({\bf l}),
\label{eq:Karman}
\end{eqnarray}
where \textbf{u} and \textbf{u'} are the velocities at the locations \textbf{r} and \textbf{r+l} respectively, and $ \nu $ is the kinematic viscosity (see Figure 1). The terms $ T_u({\bf l}) $ and $ D_u({\bf l}) $ represent respectively the nonlinear energy transfer and the dissipation rates at scale \textbf{l}, while $ \mathcal{F}_\mathrm{LS}({\bf l}) $ is the energy injection rate by the external force $ \mathbf{F}_\mathrm{LS} $, which is active at large scales. For a steady turbulence, under the limit $ \nu \rightarrow 0$, \citet{Kolmogorov:DANS1941Structure,Kolmogorov:DANS1941Dissipation} showed that in the inertial range (intermediate scales between the forcing and dissipation scales),
\begin{eqnarray}
\left\langle [{\bf (u'-u) } \cdot \hat{\bf l}]^3 \right\rangle = -\frac{4}{5} \epsilon_u l,
\label{eq:K41}
\end{eqnarray}
where $ \epsilon_u $ is the viscous dissipation rate per unit mass, and $ \hat{\bf l} $ is the unit vector along \textbf{l}. Kolmogorov's theory is commonly referred to as \textit{K41} theory.
\begin{figure}
\centering\includegraphics[scale=1]{fig1.pdf}
\caption{ The velocity fields at two points \textbf{r} and \textbf{r+l} are \textbf{u(r)} and \textbf{u(r+l)} respectively. We denote them using \textbf{u} and \textbf{u'} respectively.
}
\end{figure}
A simple-minded extrapolation of Eq.~(\ref{eq:K41}) leads to
\begin{eqnarray}
\left\langle [{\bf (u'-u) } \cdot \hat{\bf l}]^2 \right\rangle \approx \epsilon_u^{2/3} l^{2/3}
\end{eqnarray}
whose Fourier transform leads to the following formula for the energy spectrum:
\begin{eqnarray}
E(k) = K_\mathrm{Ko} \epsilon_u^{2/3} k^{-5/3},
\end{eqnarray}
where $ K_\mathrm{Ko} $ is a nondimensional constant [\cite{Frisch:book}].
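The $ -5/3 $ exponent also follows from dimensional analysis alone: writing $ E(k) = K_\mathrm{Ko}\, \epsilon_u^a k^b $ with $ [E(k)] = L^3 T^{-2} $, $ [\epsilon_u] = L^2 T^{-3} $, and $ [k] = L^{-1} $, the exponents are fixed uniquely. A sympy sketch of this bookkeeping:

```python
import sympy as sp

a, b = sp.symbols('a b')
# E(k) = K * eps^a * k^b; match powers of length and time separately:
#   length: 2a - b = 3,   time: -3a = -2
sol = sp.solve([sp.Eq(2*a - b, 3),
                sp.Eq(-3*a, -2)],
               [a, b])
assert sol[a] == sp.Rational(2, 3)
assert sol[b] == sp.Rational(-5, 3)
```

This uniqueness is what makes the Kolmogorov spectrum the natural candidate whenever $ \epsilon_u $ is the only relevant parameter in the inertial range.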
In Fourier space, Eq.~(\ref{eq:Karman}) transforms to the following energy transfer relation~(\cite{Verma:book:ET,Verma:JPA2022}):
\begin{eqnarray}
\frac{\partial}{\partial t} E_u({\bf k},t) = T_u({\bf k},t) + \mathcal{F}_\mathrm{LS}({\bf k},t) - D_u({\bf k},t),
\label{eq:energy_k_dot}
\end{eqnarray}
where $ E_u({\bf k}) = |{\bf u(k)}|^2/2$ is the modal energy, and
\begin{eqnarray}
T_u({\bf k},t) = \sum_{\bf p} \Im[ \{ {\bf k \cdot u(q)}\} \{ {\bf u(p) \cdot u^*(k)}\} ],
\label{eq:Tk}
\end{eqnarray}
with $ {\bf q = k-p} $, represents the total energy gained by $ {\bf u(k)} $ via nonlinear energy transfers. For isotropic turbulence,
\begin{eqnarray}
T_u({\bf k}) = T_u(k) = - \frac{d}{dk} \Pi_u(k),
\end{eqnarray}
where $ \Pi_u(k) $ is the energy flux emanating from a wavenumber sphere of radius $ k $. In the inertial range, the energy injection by the external force vanishes and viscous dissipation rate is negligible, hence $ \Pi_u(k) \approx \mathrm{const} $. Refer to \cite{Frisch:book}, \cite{Verma:book:ET}, and \citet{Verma:JPA2022} for more details.
\subsection{Heisenberg's theory of turbulence}
In this subsection, we describe Heisenberg's theory of turbulence because Chandrasekhar employed this theory to derive the energy spectrum for MHD turbulence. \cite{Heisenberg:PRSA1948} derived an integral equation for the temporal evolution of kinetic energy spectrum $ E_u(k) $ under the assumption of homogeneity and isotropy. In particular, he derived that
\begin{eqnarray}
\frac{\partial}{\partial t} \int_0^k dk' E_u(k',t) & = & -2 \left(\nu+\alpha \int_k^\infty \sqrt{\frac{E_u(k')}{k'^3}} dk' \right) \nonumber \\
&& \times \int_0^k k'^2 E_u(k') dk'.
\label{eq:Heisenberg}
\end{eqnarray}
In the above equation, the second term on the right-hand side is a model for the diffusion of kinetic energy to smaller scales by the eddy viscosity (induced by the nonlinear term). Many authors, including Chandrasekhar, have employed Heisenberg's model for modelling turbulent flows.
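For instance, inserting a Kolmogorov spectrum $ E_u(k') \propto k'^{-5/3} $ into Heisenberg's eddy-viscosity integral (prefactors suppressed; an illustrative sketch, not Heisenberg's full calculation) yields an eddy viscosity scaling as $ k^{-4/3} $, the familiar scaling of turbulent eddy viscosity:

```python
import sympy as sp

# Heisenberg's eddy-viscosity integral on a Kolmogorov spectrum:
#   nu_eddy(k) ~ int_k^oo sqrt(E_u(k')/k'^3) dk' = int_k^oo k'^{-7/3} dk'
k, kp = sp.symbols('k k_prime', positive=True)
nu_eddy = sp.integrate(kp**sp.Rational(-7, 3), (kp, k, sp.oo))

# the integral converges and equals (3/4) k^{-4/3}
assert sp.simplify(nu_eddy - sp.Rational(3, 4) * k**sp.Rational(-4, 3)) == 0
```
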
\section{Chandrasekhar's contributions to MHD turbulence}
\label{sec:Chandra}
Chandrasekhar wrote around a dozen papers on turbulence, four of which are on MHD. He focussed on closing the hierarchy of equations of turbulence. In the following, I provide a brief overview of Chandrasekhar's work on MHD turbulence.
In turbulence, the nonlinear interactions induce energy transfers among the Fourier modes. Hydrodynamic interactions involve triadic interactions, e.g., in Eqs.~(\ref{eq:energy_k_dot},\ref{eq:Tk}), the Fourier mode $ {\bf u(k)} $ receives energy from the Fourier modes $ {\bf u(p)} $ and $ {\bf u(q)} $.
In 1954, Chandrasekhar gave a set of lectures on turbulence in which he showed that the energy transfer from $ {\bf u(p)} $ to $ {\bf u(k)} $ with the mediation of $ {\bf u(q)} $ is
\begin{eqnarray}
Q({\bf k, p}) = \Im [({\bf u(q) \cdot k}) ({\bf u(p) \cdot u^*(k)})].
\label{eq:Qkp}
\end{eqnarray}
As far as we know, the above formula first appeared in~\cite{Onsagar:Nouvo1949_SH}, but not in any paper of Chandrasekhar. Incidentally, Onsager is not cited for this formula in \citet{Spiegel:book_edited:Turbulence}. Hence, it is not apparent whether Chandrasekhar derived Eq.~(\ref{eq:Qkp}) independently, or whether he was aware of Onsager's work. Around 2000, we were working on the energy fluxes of MHD turbulence, and we~(\cite{Dar:PD2001}) arrived at the same formula independently. Note that in MHD turbulence, energy transfers occur between the velocity and magnetic fields as well.
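The mode-to-mode formula can be verified numerically for a single triad. The sketch below (random solenoidal amplitudes, with the reality condition $ {\bf u(-q)} = {\bf u^*(q)} $) checks the giver-receiver antisymmetry that makes these transfers conservative: the transfer from $ {\bf u(p)} $ to $ {\bf u(k)} $ equals minus the transfer from $ {\bf u(k)} $ to $ {\bf u(p)} $, provided the mediating mode is incompressible.

```python
import numpy as np

rng = np.random.default_rng(1)

def solenoidal(vec_k, u):
    """Project a complex amplitude u onto the plane perpendicular to vec_k."""
    khat = vec_k / np.linalg.norm(vec_k)
    return u - np.dot(khat, u) * khat

# A triad with q = k - p, so that u(p) and u(q) drive u(k)
k = np.array([1.0, 2.0, 2.0])
p = np.array([0.5, -1.0, 1.5])
q = k - p

uk = solenoidal(k, rng.normal(size=3) + 1j * rng.normal(size=3))
up = solenoidal(p, rng.normal(size=3) + 1j * rng.normal(size=3))
uq = solenoidal(q, rng.normal(size=3) + 1j * rng.normal(size=3))

def S(k_rec, u_rec, u_giv, u_med):
    """Transfer to u(k_rec) from u_giv, mediated by u_med (Eq. Qkp form)."""
    return np.imag(np.dot(k_rec, u_med) * np.dot(u_giv, np.conj(u_rec)))

S_kp = S(k, uk, up, uq)             # p -> k, mediated by u(q)
S_pk = S(p, up, uk, np.conj(uq))    # k -> p, mediated by u(-q) = u*(q)
assert abs(S_kp + S_pk) < 1e-12     # giver-receiver antisymmetry
```

The antisymmetry follows from $ {\bf q \cdot u(q)} = 0 $ together with $ {\bf k = p + q} $, so the nonlinear term redistributes energy without creating or destroying it within the triad.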
Let us get back to MHD turbulence. Chandrasekhar's four papers on MHD turbulence are as follows:
\begin{enumerate}
\item \citet{Chandrasekhar:PRSA1951}: The invariant theory of isotropic turbulence in magneto-hydrodynamics
\item \citet{Chandrasekhar:PRSA1951_II}: The Invariant Theory of Isotropic Turbulence in Magneto-Hydrodynamics. II
\item \citet{Chandrasekhar:PRSA1955_I}: Hydromagnetic turbulence. I. A deductive theory
\item \citet{Chandrasekhar:PRSA1955_II}: Hydromagnetic turbulence II. An elementary theory
\end{enumerate}
The first three papers are in real space, and they are generalizations of the hydrodynamic equations of \citet{Karman:PRSA1938} and \citet{Kolmogorov:DANS1941Dissipation,Kolmogorov:DANS1941Structure} to MHD turbulence.
The fourth paper attempts to employ Heisenberg's theory of turbulence to MHD turbulence (in spectral space). In the following discussion, we briefly sketch the results of these papers.
\subsection{Summary of the results of \citet{Chandrasekhar:PRSA1951,Chandrasekhar:PRSA1951_II} and \citet{Chandrasekhar:PRSA1955_I}}
For isotropic and homogeneous MHD turbulence, \citet{Chandrasekhar:PRSA1951} derived equations for the second-order correlations of the velocity and magnetic fields. The derivation here is along the lines followed by \citet{Karman:PRSA1938}. Note that the equations for MHD turbulence are much more complex than those for HD turbulence due to a larger number of fields and nonlinear terms. As in his other papers, Chandrasekhar follows a rigorous and formal approach. We skip the details due to their length and complexity, and provide only the leading equations of the papers.
The second-order correlation functions for the velocity and magnetic fields are given below:
\begin{eqnarray}
\left\langle u_i u_j' \right\rangle = \frac{Q'}{l} l_i l_j - (l Q' + 2Q) \delta_{ij} ,
\label{eq:Q} \\
\left\langle b_i b_j' \right\rangle = \frac{H'}{l} l_i l_j - (l H' + 2H) \delta_{ij} . \label{eq:H}
\end{eqnarray}
Here, \textbf{b} is the magnetic field, and $ u_j, b'_j $ represent the $ j $th components of the velocity and magnetic fields at the locations \textbf{r} and \textbf{r+l} respectively. Throughout the paper, the magnetic field is in velocity units, obtained by dividing \textbf{b} in CGS units by $ \sqrt{4\pi\rho} $, where $ \rho $ is the density of the fluid. Note that the above correlation functions satisfy the incompressibility relations $\partial'_j \left\langle u_i u_j' \right\rangle = 0$ and $ \partial'_j \left\langle b_i b_j' \right\rangle =0$.
As a sample, we present one of the equations derived by~\cite{Chandrasekhar:PRSA1951_II}:
\begin{eqnarray}
\frac{\partial}{\partial l_m} \left\langle (b_i u_m - b_m u_i) b_j' \right\rangle
& = & \frac{\partial}{\partial l_m} P(l_i \delta_{jm} - l_m \delta_{ij}) \nonumber \\
& = & \frac{P'}{l} l_i l_j - (lP'+2P) \delta_{ij}
\end{eqnarray}
where $P$ is a scalar function, similar to $Q$ and $H$ of Eqs.~(\ref{eq:Q}, \ref{eq:H}). Using the above equations and others, one of the inertial-range relations derived by Chandrasekhar is
\begin{eqnarray}
\left\langle (u_1^2 + 2 b_2^2) u_1' \right\rangle = -\frac{2}{15} \epsilon r,
\label{eq:Chandra_real1}
\end{eqnarray}
where $ \epsilon $ is the total dissipation rate, $ u_1,b_1 $ are the longitudinal components along $ \hat{\bf l} $, and $ u_2, b_2 $ are the components perpendicular to $ \hat{\bf l}$. {The above equation is a generalization of the K41 relation to MHD turbulence.}
{For hydrodynamic turbulence, \citet{Loitsiansky} derived the following relation:
\begin{eqnarray}
\int_0^\infty Q(l) l^4 dl = \mathrm{const.}
\end{eqnarray}
where $ Q(l) $ is the correlation function defined in Eq.~(\ref{eq:Q}). Using the dynamical equations of MHD, Chandrasekhar showed that Loitsiansky's integral remains constant in MHD turbulence as well.} In the second paper (\cite{Chandrasekhar:PRSA1951_II}), Chandrasekhar derived relations for the third-order correlation functions $ \left\langle p u_i' u_j' \right\rangle$ and $ \left\langle p b_i' b_j' \right\rangle$, where $ p $ is the pressure field.
In \citet{Chandrasekhar:PRSA1955_I}, Chandrasekhar derived a pair of differential equations, in terms of scalars, for the velocity and magnetic fields at two different points and two different times. The derivation is quite mathematical and detailed, and is skipped here.
\subsection{Summary of the results of \citet{Chandrasekhar:PRSA1955_II}}
\citet{Chandrasekhar:PRSA1955_II} generalized Heisenberg's theory for hydrodynamic turbulence to MHD turbulence. In this paper, the equations are in spectral space. One of the leading equations of the paper is
\begin{eqnarray}
&& -\frac{\partial}{\partial t} \int_0^k dk' [E_u(k',t)+E_b(k',t) ] \nonumber \\
& = & 2 [ \nu \int_0^k k'^2 E_u(k') dk'+\eta \int_0^k k'^2 E_b(k') dk' ] \nonumber \\
&&+ \kappa \int_k^\infty \left[ \sqrt{\frac{E_u(k'')}{k''^3}}
+ \sqrt{\frac{E_b(k'')}{k''^3}} \right] dk'' \times \nonumber \\
&& \int_0^k k'^2 [E_u(k') + E_b(k') ]dk',
\label{eq:Chandra_Heisenberg}
\end{eqnarray}
where $ E_u(k), E_b(k) $ are the energy spectra of the velocity and magnetic fields respectively, $ \eta $ is the magnetic diffusivity, and $ \kappa $ is a constant. {The physical interpretation of Eq.~(\ref{eq:Chandra_Heisenberg}) is as follows. In the absence of an external force, the energy lost by the modes of a wavenumber sphere of radius $k$ is due to (a) viscous and Joule dissipation inside the sphere (the first term on the right-hand side of Eq.~(\ref{eq:Chandra_Heisenberg})), and (b) nonlinear energy transfer from the modes inside the sphere to those outside the sphere (the second term on the right-hand side). The latter term is the total energy flux~(\cite{Verma:PR2004,Verma:book:ET}).}
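The flux interpretation can be illustrated with a toy calculation. The following plain-Python sketch is ours, not Chandrasekhar's: it uses a single-field version of the Heisenberg transfer term with the model spectrum $E(k) = k^{-5/3}$ and $\kappa = 1$, and shows that the transfer term is then independent of $k$, as a constant inertial-range flux requires:

```python
def heisenberg_flux(k, kappa=1.0):
    """Heisenberg-type transfer term, single-field sketch:
    Pi(k) = kappa * int_k^inf sqrt(E(k'')/k''^3) dk'' * int_0^k k'^2 E(k') dk',
    evaluated in closed form for E(k) = k**(-5/3)."""
    outer = 0.75 * k ** (-4.0 / 3.0)  # int_k^inf k''^(-7/3) dk'' = (3/4) k^(-4/3)
    inner = 0.75 * k ** (4.0 / 3.0)   # int_0^k  k'^(1/3)   dk'  = (3/4) k^(4/3)
    return kappa * outer * inner

# The k-dependencies cancel: the flux equals 9/16 at every wavenumber.
assert abs(heisenberg_flux(1.0) - heisenberg_flux(1000.0)) < 1e-9
assert abs(heisenberg_flux(10.0) - 0.5625) < 1e-9
```

Any other power law would leave a residual $k$-dependence in this product, which is one way to see how $k^{-5/3}$ emerges from such closures in the inertial range.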
Using the above equation, Chandrasekhar derived several results for asymptotic cases, e.g., $ \nu \rightarrow 0 $ and $ \eta \rightarrow 0 $. For example, Chandrasekhar observed that for small wavenumbers ($ k \rightarrow 0 $), the velocity and magnetic fields are nearly equipartitioned, and they exhibit Kolmogorov's energy spectrum ($ k^{-5/3} $). However, at large wavenumbers, the magnetic and kinetic energies are not equipartitioned. Quoting from his paper: ``in the velocity mode (kinetic-energy dominated case), the ratio of the magnetic energy to the kinetic energy tends to zero among the smallest eddies present (i.e., as $ k \rightarrow \infty $), while in the magnetic mode (magnetic-energy dominated case), the same ratio tends to about 2.6 as $ k \rightarrow \infty$.''
\citet{Chandrasekhar:PRSA1951,Chandrasekhar:PRSA1951_II} and \citet{Chandrasekhar:PRSA1955_I,Chandrasekhar:PRSA1955_II} are the first set of papers on MHD turbulence; thus, Chandrasekhar pioneered the field. However, after these pioneering works, Chandrasekhar left the field somewhat abruptly. \cite{Sreenivasan:ARFM2019} ponders over this question in his review article.
A decade later, \citet{Kraichnan:PF1965MHD} and \citet{Iroshnikov:SA1964} brought the next breakthroughs in MHD turbulence. We find that Chandrasekhar's results have not been tested rigorously using numerical simulations and solar wind observations, and that they have received less attention than his other papers. In the following discussion, we briefly discuss some of the important papers that followed Chandrasekhar's work on MHD turbulence.
\section{Works in MHD turbulence between 1965 and 1990}
\label{sec:1965_1990}
\subsection{The energy spectrum $ k^{-3/2} $: Kraichnan and Iroshnikov}
In the presence of a mean magnetic field ($ {\bf B}_0 $), MHD has two kinds of Alfv\'{e}n waves that travel parallel and antiparallel to the mean magnetic field. \citet{Kraichnan:PF1965MHD} and \citet{Iroshnikov:SA1964} exploited this observation and argued that the Alfv\'{e}n time scale is the relevant time scale for MHD turbulence. Consequently, the interaction time for an Alfv\'{e}n wave of wavenumber $ k $ is proportional to $ (kB_0)^{-1} $. Note that the magnetic field including ${\bf B}_0 $ is in velocity units.
Using these inputs and dimensional analysis, \citet{Kraichnan:PF1965MHD} and \citet{Iroshnikov:SA1964} argued that the kinetic and magnetic energies are equipartitioned, and that the magnetic energy spectrum is
\begin{eqnarray}
E_b(k) = A (\epsilon B_0)^{1/2} k^{-3/2},
\label{eq:Kraichan_3by2}
\end{eqnarray}
where $ A $ is a dimensionless constant. This phenomenology predicts a $ k^{-3/2} $ energy spectrum, which differs from Kolmogorov's $ k^{-5/3} $ spectrum, for which the relevant time scale is $ (k u_k)^{-1} $. Note that solar wind turbulence tends to exhibit a $ k^{-5/3} $ spectrum~[e.g., \cite{Matthaeus:JGR1982rugged}], although some authors report a $ k^{-3/2} $ spectrum for the solar wind.
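The $ k^{-3/2} $ exponent follows from a one-line balance. In the textbook version of this phenomenology (our paraphrase, not a quote from the original papers), the Alfv\'{e}n-time reduction of the transfer rate gives $\epsilon \sim k u_k^4/B_0$; the exponents of Eq.~(\ref{eq:Kraichan_3by2}) then follow. A quick check in plain Python with exact rationals:

```python
from fractions import Fraction as F

# IK balance: eps ~ k * u_k^4 / B0  =>  u_k ~ (eps * B0)^(1/4) * k^(-1/4).
u_eps, u_B0, u_k = F(1, 4), F(1, 4), F(-1, 4)  # exponents of eps, B0, k in u_k
# Energy spectrum E(k) ~ u_k^2 / k: exponents of eps, B0, k respectively.
E_eps, E_B0, E_k = 2 * u_eps, 2 * u_B0, 2 * u_k - 1
assert (E_eps, E_B0, E_k) == (F(1, 2), F(1, 2), F(-3, 2))  # E ~ (eps B0)^(1/2) k^(-3/2)
```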
\subsection{Generalization by \cite{Dobrowolny:PRL1980}}
The MHD equations can be written in terms of the Els\"{a}sser variables $ \bf{z}^\pm ={ \bf u} \pm {\bf b}$. These variables represent the amplitudes of the Alfv\'{e}n waves travelling in opposite directions. The nonlinear interactions between the Alfv\'{e}n waves yield energy cascades. The fluxes of $ \bf{z}^+ $ and $ \bf{z}^- $ are $ \epsilon_{z^+} $ and $ \epsilon_{z^-} $ respectively; these are also their respective dissipation rates.
\citet{Dobrowolny:PRL1980} modelled the random scattering of Alfv\'{e}n waves. They showed that the two fluxes are equal irrespective of the ratio $ z^+/z^- $, i.e.,
\begin{eqnarray}
\epsilon_{z^+} = \epsilon_{z^-}.
\label{eq:equal_flux}
\end{eqnarray}
\citet{Dobrowolny:PRL1980} used these observations to explain the depletion of cross helicity in the solar wind as it moves away from the Sun. They also derived a $ k^{-3/2} $ energy spectrum for $ {\bf z}^\pm $, as in Eq.~(\ref{eq:Kraichan_3by2}).
\subsection{Field-theoretic calculation }
\citet{Fournier:JPA1982} employed field-theoretic methods to derive the energy spectra $ E_u(k) $ and $ E_b(k) $, and the cross helicity spectrum $ H_c(k) $. They employed the renormalization group procedure of \citet{Yakhot:JSC1986}. The authors attempted to compute the renormalized viscosity and magnetic diffusivity, as well as the vertex corrections. However, they could not achieve closure due to the complex nonlinear couplings of MHD turbulence. There are more field-theoretic works from before 1990, but I do not describe them here due to lack of space.
Kraichnan's and Iroshnikov's models dominated till 1990. During this period, numerical simulations tended to support the $ k^{-3/2} $ spectrum [e.g., see \cite{Biskamp:PFB1989}], but they were not conclusive due to their low resolutions. On the contrary, several solar wind observations [e.g., \cite{Matthaeus:JGR1982rugged}] supported Kolmogorov's spectrum. In the 1990s, new models and theories were constructed that support Kolmogorov's spectrum for MHD turbulence. We describe these theories in the next section.
\section{Works between 1991 and 2010}
\label{sec:1991_2010}
As discussed earlier, \citet{Chandrasekhar:PRSA1955_II} argued that the kinetic and magnetic energies follow the $ k^{-5/3} $ spectrum as $ k \rightarrow 0 $. More detailed works on Kolmogorov's spectrum for MHD turbulence followed this work.
\subsection{Emergence of $ k^{-5/3} $ in MHD turbulence: \cite{Marsch:RMA1991}}
\cite{Marsch:RMA1991} considered a situation where the Alfv\'{e}nic fluctuations are much larger than the mean magnetic field. In this case, the nonlinear term ($ {\bf z}^\mp \cdot \nabla {\bf z}^\pm $) dominates the linear term ($ {\bf B}_0 \cdot \nabla {\bf z}^\pm $). Here, the usual dimensional arguments yield
\begin{eqnarray}
\frac{E_{z^+}(k)}{E_{z^-}(k)} = \frac{K_+}{K_-} \left( \frac{\epsilon_{z^+}}{\epsilon_{z^-}} \right)^2,
\label{eq:Marsch_5by3}
\end{eqnarray}
where $ K_\pm $ are dimensionless constants. Note that the inertial-range fluxes $ \epsilon_{z^+} $ and $ \epsilon_{z^-} $ are unequal, unlike the prediction of \citet{Dobrowolny:PRL1980} (see Eq.~(\ref{eq:equal_flux})). The inequality grows with the ratio $ E_{z^+}(k)/E_{z^-}(k)$.
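Equation~(\ref{eq:Marsch_5by3}) is consistent with Kolmogorov-like spectra of the form $E_{z^\pm}(k) \sim K_\pm\, \epsilon_{z^\pm}^{4/3} \epsilon_{z^\mp}^{-2/3} k^{-5/3}$ (the form usually attributed to Marsch; we state it here for illustration, as it is not written out above). The exponent arithmetic can be checked mechanically:

```python
from fractions import Fraction as F

# Marsch-type spectra: E_{z+} ~ eps_+^(4/3) eps_-^(-2/3) k^(-5/3),
#                      E_{z-} ~ eps_-^(4/3) eps_+^(-2/3) k^(-5/3).
# Exponents of eps_+ and eps_- in the ratio E_{z+}/E_{z-}:
p_plus = F(4, 3) - F(-2, 3)
p_minus = F(-2, 3) - F(4, 3)
assert (p_plus, p_minus) == (2, -2)  # ratio = (eps_+/eps_-)^2
```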
Interestingly, the formulation of \citet{Dobrowolny:PRL1980} also yields the $ k^{-5/3} $ spectrum when the Alfv\'{e}n time is replaced by the nonlinear time scale $ (k u_k)^{-1} $~(\cite{Verma:PR2004}). \citet{Matthaeus:PF1989} attempted to combine the $ k^{-3/2} $ and $ k^{-5/3} $ models by proposing the harmonic mean of the Alfv\'{e}n and nonlinear time scales as the relevant time scale. In their framework, $ E(k) \sim k^{-5/3} $ for small wavenumbers and $ E(k) \sim k^{-3/2} $ for larger wavenumbers. It turns out that the predictions of \citet{Matthaeus:PF1989} run counter to weak turbulence theories, in which $ E(k) \sim k^{-3/2} $ should be active at small wavenumbers.
\subsection{Energy fluxes: Verma et al. [1994, 1996]}
For my Ph.D. thesis (\cite{Verma:thesis}), I wanted to verify which of the two spectra, $ k^{-5/3} $ or $ k^{-3/2} $, is valid for MHD turbulence. We simulated several two-dimensional (2D) MHD flows on $ 512^2 $ grids, and a single 3D flow on a $ 128^3 $ grid. These runs had different $ B_0 $ and $ z^+/z^- $. We observed that the energy fluxes $ \epsilon_{z^\pm} $ satisfy Eq.~(\ref{eq:Marsch_5by3}) even when $ B_0 $ is five times larger than the fluctuations, and that the fluxes deviate significantly from Eq.~(\ref{eq:equal_flux}). Based on these observations, we concluded that Kolmogorov's model is better suited to MHD turbulence than the Iroshnikov-Kraichnan model~(\cite{Verma:thesis,Verma:JGR1996DNS}).
\subsection{\citet{Politano:PRE1998} on structure functions}
Following an approach similar to K41, \citet{Politano:PRE1998} showed that for MHD turbulence, the third-order structure function satisfies
\begin{eqnarray}
\left\langle ({\bf z}'^{\pm} - {\bf z}^\pm)^2 \, [({\bf z}'^{\mp} -{\bf z}^\mp)\cdot \hat{\bf l}] \right\rangle = -\frac{4}{3} \epsilon_{z^\pm} l.
\end{eqnarray}
The above equations have a simple form because of the absence of cross transfers between $ {\bf z}^+ $ and $ {\bf z}^- $. Note that the energy fluxes $ \Pi_{z^\pm} $ are constant in the inertial range~(\cite{Verma:book:ET}). The above relations translate to Kolmogorov's spectrum in Fourier space.
\citet{Politano:PRE1998} also derived the third-order structure functions for the velocity and magnetic fields. These relations are more complex due to the coupling between the velocity and magnetic fields. Also refer to the complex relations in \cite{Chandrasekhar:PRSA1951}, which differ from those of \citet{Politano:PRE1998}.
\subsection{Anisotropic MHD turbulence}
{Kolmogorov's $ k^{-5/3}$ theory and the Iroshnikov-Kraichnan $ k^{-3/2} $ theory assume the flow to be isotropic. However, this is not the case in MHD turbulence when a mean magnetic field is present. There are several interesting results for this case; we discuss them below.
\subsubsection{ \cite{Goldreich:ApJ1995}:}
For anisotropic MHD turbulence, \cite{Goldreich:ApJ1995} argued that a critical balance is established between the Alfv\'{e}n and nonlinear time scales, that is, $ k_\parallel B_0 \approx k_\perp z^\pm_{k_\perp} $.
Using this assumption, \citet{Goldreich:ApJ1995} derived
\begin{eqnarray}
E(k_\perp) \sim \epsilon^{2/3} k_\perp^{-5/3},
\end{eqnarray}
which is Kolmogorov's spectrum.
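The critical-balance argument is, again, exponent bookkeeping. The following plain-Python sketch (ours) combines a constant perpendicular flux $\epsilon \sim k_\perp u_{k_\perp}^3$ with the balance $k_\parallel B_0 \sim k_\perp u_{k_\perp}$, recovering the spectrum above together with the anisotropy relation $k_\parallel \sim \epsilon^{1/3} B_0^{-1} k_\perp^{2/3}$:

```python
from fractions import Fraction as F

# Constant perpendicular flux: eps ~ k_perp * u^3  =>  u ~ eps^(1/3) k_perp^(-1/3).
u_eps, u_kperp = F(1, 3), F(-1, 3)  # exponents of eps and k_perp in u
# Spectrum E(k_perp) ~ u^2 / k_perp gives eps^(2/3) k_perp^(-5/3):
assert (2 * u_eps, 2 * u_kperp - 1) == (F(2, 3), F(-5, 3))
# Critical balance k_par * B0 ~ k_perp * u fixes the anisotropy:
# k_par ~ eps^(1/3) B0^(-1) k_perp^(2/3).
assert (u_eps, 1 + u_kperp) == (F(1, 3), F(2, 3))
```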
\subsubsection{Weak turbulence formalism:}
For MHD turbulence with strong $ {\bf B}_0 $,
\cite{Galtier:JPP2000} constructed a weak turbulence theory and obtained
\begin{eqnarray}
\epsilon \sim \frac{1}{k_\parallel B_0} E_{z^+}(k_\perp)E_{z^-}(k_\perp) k_\perp^4.
\label{eq:Weak_turb}
\end{eqnarray}
When $ {\bf z}^+ $ and $ {\bf z}^- $ have the same energy spectra, Eq.~(\ref{eq:Weak_turb}) reduces to
\begin{eqnarray}
E(k_\perp, k_\parallel) \sim (\epsilon B_0 k_\parallel)^{1/2} k_\perp^{-2}.
\end{eqnarray}
Several numerical simulations support this prediction. Note, however, that solar wind turbulence exhibits a nearly $ k^{-5/3} $ energy spectrum even though its fluctuations are five times weaker than the Parker field. This aspect needs a careful look.
\subsubsection{Anisotropic energy spectrum and fluxes:}
In the presence of strong $ {\bf B}_0 $, the energy spectrum and energy transfers become anisotropic. \citet{Teaca:PRE2009} quantified the angular dependence of the energy spectrum using the {\em ring spectrum}. They showed that for strong $ {\bf B}_0 $, the energy tends to concentrate near the equator, the region perpendicular to $ {\bf B}_0 $. \citet{Teaca:PRE2009} and \citet{Sundar:PP2017} also studied anisotropic energy transfers using ring-to-ring energy transfers.
In addition, \citet{Sundar:PP2017} showed that a strong magnetic field yields an inverse cascade of kinetic energy, which may invalidate some of the assumptions made in \citet{Goldreich:ApJ1995} and \citet{Galtier:JPP2000}.
}
\subsection{Mean magnetic field renormalization}
Given that several solar wind observations, numerical simulations, and the work of \citet{Politano:PRE1998} support the $ k^{-5/3} $ spectrum, it is quite puzzling what goes wrong with Kraichnan's and Iroshnikov's arguments on the scattering of Alfv\'{e}n waves. This led me to think about the effects of magnetic fluctuations on the propagation of Alfv\'{e}n waves.
In the presence of a mean magnetic field, the MHD equations are nearly linear at large length scales. Alfv\'{e}n waves are the basic modes of the linearized MHD equations. However, the nonlinear term becomes significant at intermediate and small scales (large wavenumbers). Using a renormalization group (RG) procedure, I showed that an Alfv\'{e}n wave with wavenumber \textbf{k} is affected by an \textit{``effective" mean magnetic field}, the \textit{renormalized mean magnetic field}~(\cite{Verma:PP1999,Verma:PR2004}):
\begin{eqnarray}
B_0(k) = C \epsilon^{1/3} k^{-1/3},
\label{eq:RGB0}
\end{eqnarray}
where $ C $ is a constant. Hence, an Alfv\'{e}n wave is affected not only by the mean magnetic field, but also by the waves with wavenumbers near \textit{k}; this feature is called \textit{local interaction}. See Figure 2 for an illustration. Note that \cite{Kraichnan:PF1965MHD} and \cite{Iroshnikov:SA1964} considered time scales based only on the mean magnetic field.
\begin{figure}
\centering\includegraphics[height=.15\textheight]{fig2.pdf}
\caption{A schematic diagram of multiscale Alfv\'{e}n waves. A fluctuation of wavelength $ \lambda_1 $ is affected by the ``effective" or renormalized magnetic field $ B_0(k=1/\lambda_1) $ that scales as $ k^{-1/3} $.
Reproduced with permission from \cite{Verma:book:ET}. }
\end{figure}
Substitution of $B_0(k) $ of Eq.~(\ref{eq:RGB0}) in Eq.~(\ref{eq:Kraichan_3by2}) yields
\begin{eqnarray}
E_u(k) \approx [\epsilon B_0(k)]^{1/2} k^{-3/2} \approx \epsilon^{2/3} k^{-5/3}.
\end{eqnarray}
Thus, we recover Kolmogorov's spectrum within the framework of Kraichnan and Iroshnikov. Hence, there is consistency among the various models. This argument is complementary to those of \citet{Goldreich:ApJ1995}.
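The exponent bookkeeping behind this substitution can be verified mechanically; the following plain-Python check (ours, using exact rational arithmetic) tracks the powers of $\epsilon$ and $k$:

```python
from fractions import Fraction as F

# E(k) ~ [eps * B0(k)]^(1/2) * k^(-3/2),  with  B0(k) ~ eps^(1/3) * k^(-1/3).
eps_exp = F(1, 2) * (1 + F(1, 3))      # from [eps * eps^(1/3)]^(1/2)
k_exp = F(1, 2) * F(-1, 3) + F(-3, 2)  # from [k^(-1/3)]^(1/2) * k^(-3/2)
assert (eps_exp, k_exp) == (F(2, 3), F(-5, 3))  # Kolmogorov: eps^(2/3) k^(-5/3)
```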
In the RG procedure of \citet{Verma:PP1999}, I went from large scales to small scales because the nonlinear interaction in MHD turbulence is weak at large scales. This is akin to quantum electrodynamics (QED), where particles (e.g., electrons) are free when separated by large distances.
\subsection{Renormalization of viscosity and magnetic diffusivity}
In the usual RG procedure for turbulence, we coarse-grain the small-scale fluctuations (\cite{Yakhot:JSC1986}; \citet{McComb:book:Turbulence}). That is, we average over the small-scale fluctuations and go to larger scales. At small scales, the linearized MHD equations have viscous and magnetic-diffusive terms. As we go to larger scales, the nonlinear terms enhance diffusion, which is referred to as \textit{turbulent diffusion}. The effective diffusive constants in MHD turbulence are the \textit{renormalized kinematic viscosity} and the \textit{renormalized magnetic diffusivity}.
In \citet{Verma:PRE2001}, \citet{Verma:Pramana2003Nonhelical}, \citet{Verma:Pramana2003Helical}, and \citet{Verma:PR2004}, I implemented the above scheme using the self-consistent procedure of \cite{McComb:book:Turbulence,McComb:book:HIT}, and computed the renormalized viscosity and magnetic diffusivity. This self-consistent procedure was useful in circumventing the difficulties faced by \citet{Fournier:JPA1982} and others. Compared to the procedure of \cite{Yakhot:JSC1986}, McComb's scheme has fewer parameters to renormalize. For tractability, I focussed on the following two limiting cases:
\subsubsection{Cross helicity $ H_c = 0 $:} This assumption leads to major simplification of the calculation. I could show that
\begin{eqnarray}
\nu(k) & = & \sqrt{K} \nu_* \epsilon^{1/3} k^{-4/3}, \\
\eta(k) & = & \sqrt{K}\eta_* \epsilon^{1/3} k^{-4/3}, \\
E(k) &= & K \epsilon^{2/3} k^{-5/3}
\end{eqnarray}
are consistent solutions of the RG equations. Thus, the kinetic and magnetic energies exhibit $ k^{-5/3} $ energy spectra.
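As a quick consistency check (an order-of-magnitude balance of ours, not a result of the paper), the flux estimate $\Pi(k) \sim \nu(k)\, k^3 E(k)$ built from these solutions is independent of $k$, as an inertial range requires:

```python
from fractions import Fraction as F

# nu(k) ~ eps^(1/3) k^(-4/3)  and  E(k) ~ eps^(2/3) k^(-5/3).
# Order-of-magnitude flux: Pi(k) ~ nu(k) * k^3 * E(k).
pi_eps = F(1, 3) + F(2, 3)      # power of eps in Pi(k)
pi_k = F(-4, 3) + 3 + F(-5, 3)  # power of k in Pi(k)
assert (pi_eps, pi_k) == (1, 0)  # Pi ~ eps: constant in the inertial range
```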
\subsubsection{Non-Alfv\'{e}nic case, $ z^+ \gg z^- $:} This limiting case corresponds to large cross helicity. Again, a self-consistent RG procedure yields $ k^{-5/3} $ spectrum for the Els\"{a}sser variables.
\subsection{\citet{Boldyrev:PRL2006} revives $ k^{-3/2} $ spectrum}
\citet{Boldyrev:PRL2006} hypothesized that the inertial-range fluctuations of MHD turbulence have a dynamic alignment that yields the interaction time scale
\begin{eqnarray}
T_k \sim (k u_k \sin \theta_k)^{-1} \sim (k u_k \theta_k)^{-1},
\end{eqnarray}
where $ \theta_k $ is the angle between the velocity and magnetic fluctuations at the scale $ k^{-1} $. \citet{Boldyrev:PRL2006} argued that $ \theta_k \sim k^{-1/4} $; dimensional analysis then yields
\begin{eqnarray}
\theta_k \sim (\epsilon/B_0^3)^{1/4} k^{-1/4} ,
\end{eqnarray}
substitution of which in the flux equation yields
\begin{eqnarray}
\Pi \sim \epsilon \sim \frac{u_k^2}{T_k} \sim k u_k^3 k^{-1/4} (\epsilon/B_0^3)^{1/4}.
\end{eqnarray}
The above equation was inverted to obtain the following energy spectrum:
\begin{eqnarray}
E_u(k) \sim (\epsilon B_0)^{1/2} k^{-3/2},
\label{eq:Boldyrev}
\end{eqnarray}
which is the same as that predicted by \cite{Kraichnan:PF1965MHD} and \cite{Iroshnikov:SA1964}. Boldyrev and coworkers performed numerical simulations and observed consistency with the above predictions. Thus, the $ k^{-3/2} $ spectrum has come back with a vengeance.
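The inversion leading to Eq.~(\ref{eq:Boldyrev}) is again pure exponent bookkeeping; a minimal plain-Python check:

```python
from fractions import Fraction as F

# Invert eps ~ k^(3/4) * u_k^3 * (eps/B0^3)^(1/4) for u_k:
# u_k^3 ~ eps^(3/4) B0^(3/4) k^(-3/4)  =>  u_k ~ (eps*B0)^(1/4) k^(-1/4).
u_eps, u_B0, u_k = F(1, 4), F(1, 4), F(-1, 4)
# Spectrum E(k) ~ u_k^2 / k: recovers (eps*B0)^(1/2) k^(-3/2).
assert (2 * u_eps, 2 * u_B0, 2 * u_k - 1) == (F(1, 2), F(1, 2), F(-3, 2))
```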
\subsection{Energy fluxes of MHD turbulence}
MHD turbulence has six energy fluxes, in contrast to the single flux of hydrodynamic turbulence~(\cite{Dar:PD2001,Verma:PR2004,Debliquy:PP2005}). The energy fluxes from the velocity field to the magnetic field are responsible for dynamo action, i.e., the amplification of the magnetic field in astrophysical objects~(\cite{Brandenburg:PR2005,Kumar:JOT2015,Verma:JOT2016}). Energy fluxes can also help us decipher the physics of MHD turbulence, as, e.g., in \citet{Verma:JGR1996DNS}. We cannot describe the details of the energy fluxes in this short paper; we refer the reader to \cite{Verma:PR2004,Brandenburg:PR2005} and \cite{Verma:book:ET} for details.
{
\section{Possible approaches to reach the final theory of MHD turbulence}
As discussed above, we are far from the final theory of MHD turbulence. Future high-resolution simulations and data from space missions may help resolve this long-standing problem. I believe that the following explorations would provide important clues for MHD turbulence:
\begin{enumerate}
\item Measurements of the time series of the inertial-range Alfv\'{e}n waves would help us explore the wavenumber dependence of $ B_0 $~(\cite{Verma:PP1999}).
\item The energy fluxes of $ {\bf z^\pm} $, $ \epsilon_{z^\pm} $, are approximately equal in the Iroshnikov-Kraichnan phenomenology, but not in the Kolmogorov-like phenomenology for MHD turbulence. \citet{Verma:JGR1996DNS} showed that $ \epsilon_{z^\pm} $ for 2D MHD turbulence follow the Kolmogorov-like theory. However, we need to extend this study to three dimensions and to high resolutions. The findings of these studies will also help estimate the turbulent heating in the solar wind and the solar corona.
\item Recent spacecraft are providing high-resolution solar wind and corona data, which can be used to investigate MHD turbulence. These studies would complement numerical studies.
\end{enumerate}
We hope that the above studies will be carried out in the near future, and that we will soon have a definitive theory of MHD turbulence.
}
\section{Summary}
\label{sec:summary}
In this paper, I surveyed the journey of MHD turbulence, starting from the pioneering works of Chandrasekhar. Chandrasekhar attempted to model the structure functions and energy spectra of MHD turbulence. Unfortunately, his papers on hydrodynamic and hydromagnetic turbulence did not attract significant attention in the community. \citet{Sreenivasan:ARFM2019}, who studied this issue in detail, points out the following possible reasons. Chandrasekhar's papers are more mathematical than a typical paper on turbulence. As written in \citet{Sreenivasan:ARFM2019}, ``what mattered to Chandra was what the equations revealed; everything else was superstition and complacency.'' Thus, Chandrasekhar did not make a significant effort to extract physics from the mathematical equations, unlike the other stalwarts of the field (e.g., Batchelor, Taylor, Kolmogorov).
\citet{Sreenivasan:ARFM2019} points out another factor that drew Chandrasekhar away from the turbulence community. Chandrasekhar sent one of his important manuscripts on turbulence to the Proceedings of the Royal Society, but the paper was rejected. It was eventually published in Physical Review (\cite{Chandrasekhar:PR1956}), but it contained several incorrect assumptions~(\cite{Sreenivasan:ARFM2019}). When these assumptions were criticised by Kraichnan and others, Chandrasekhar did not take the criticism kindly and left the field of turbulence abruptly. Refer to \citet{Sreenivasan:ARFM2019} for details on this topic.
More work on MHD turbulence followed 10 years after Chandrasekhar left the field. I divided these works into two temporal regimes: from 1965 to 1990, and from 1991 to 2010. The first period was dominated by Kraichnan's and Iroshnikov's $ k^{-3/2} $ model, which is based on the scattering of Alfv\'{e}n waves. Till 1990, the community appeared to believe in the validity of this theory, even though several astrophysical observations supported the $ k^{-5/3} $ spectrum. From 1991 onwards, there was a flurry of models and calculations that support a Kolmogorov-like spectrum ($ k^{-5/3} $) for MHD turbulence. However, in 2006, Boldyrev and coworkers argued in favour of the $ k^{-3/2} $ spectrum. Hence, the jury is still out. More detailed diagnostics have to be performed to arrive at the final theory of MHD turbulence.
At present, there is a lull in this field. We hope that in the near future we will be able to completely understand the underlying physics of MHD turbulence, a journey that started with Chandrasekhar's pioneering work.
\section*{Acknowledgements}
I enjoyed participating in the conference ``Chandra's Contribution in Plasma Astrophysics". I thank the organizers, especially Ram Prasad Prajapati, for the invitation. {I am grateful to Katepalli Sreenivasan (Sreeni) for insightful discussions on the contributions and work style of Chandrasekhar. In fact, the present paper is inspired by Sreeni's ARFM article, {\em Chandra's Fluid Dynamics}. I also thank Sreeni for numerous useful suggestions on this paper. }In addition, I thank all my collaborators---Melvyn Goldstein, Aaron Roberts, Gaurav Dar, Rodion Stepanov, Franck Plunian, Daniele Carati, Olivier Debliquy, Riddhi Bandyopadhyay, Stephan Fauve, and Vinayak Eswaran---for wonderful discussions on MHD turbulence, and to Anurag Gupta for useful comments.
\vspace{-1em}
\section{Introduction}
The recent progress in terrestrial communication technologies has unlocked unprecedented data rates, enabling various new applications, and with the upcoming 6G era, capacity is expected to grow by 10 to 100 times \cite{DavidIEEEVTechMag18, DangNatElec20}. Despite the recent advancements in wireless communications on land, offering reliable and high-speed data rates for marine communication remains a challenge. Maritime communication is crucial due to the dramatic increase in oceanic activities, including naval shipping and logistics, offshore oil exploration, wind farming, fishing, and tourism. Maritime communication is equally needed for Internet of things (IoT) applications, such as environmental monitoring and climate change control. Nevertheless, maritime communication is challenging because 4G and 5G base stations cannot be installed in the seas and oceans, limiting their use to onshore scenarios. Hence, non-conventional wireless networks, such as satellite and aerial networks, are used for maritime communications. Satellites that use radio signals play an essential role in providing connectivity in unconnected oceans, at the expense of high terminal costs and limited available bandwidth \cite{xu2017quality, jiang2015possible}. Aerial solutions such as unmanned aerial vehicles (UAVs) and helikites are also used to provide coverage for maritime applications \cite{BlueComPlus}.
Presently, most technologies for ship-to-ship and ship-to-shore communication use the medium frequency (MF), high frequency (HF), very high frequency (VHF), and ultra-high frequency (UHF) bands. Although these bands ensure relatively long propagation distances, they support only basic applications such as email, text messaging, and automatic identification systems (AIS). Therefore, improving the quality of life on board when sailing vast distances requires more advanced communication systems.
Radio maritime communication is subject to various impairments, including:
\begin{itemize}
\item weather conditions: mainly rain, which causes significant signal scattering,
\item sea waves: which cause vibrations that affect the antenna height and orientation,
\item water surface reflections: which cause a number of rays to reflect and interfere with each other.
\end{itemize}
Besides the RF band, optical wireless communication-based solutions in the infrared band, also known as free space optics (FSO), also provide connectivity for maritime networks. Due to the collimated nature of laser beams, optical-based maritime communication systems are not affected by water surface reflections. However, FSO signals are affected by sea waves and weather conditions. Sea waves lead to pointing errors between FSO terminals, while weather conditions such as fog scatter the optical signals. Turbulence caused by random variations of the refractive index of the atmospheric channel is also a major concern for FSO links, causing scintillation and random light-beam movements at the detector plane.
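To give a sense of the scale of the fog problem, a first-order estimate can be made with the empirical Kim model, which relates the optical attenuation coefficient to visibility. This model and its coefficients are standard in the FSO literature and are not taken from the surveyed works; the sketch below is ours:

```python
import math

def kim_q(V_km):
    """Size-distribution exponent q of the Kim model as a function of
    visibility V (km)."""
    if V_km > 50:
        return 1.6
    if V_km > 6:
        return 1.3
    if V_km > 1:
        return 0.16 * V_km + 0.34
    if V_km > 0.5:
        return V_km - 0.5
    return 0.0

def attenuation_db_per_km(V_km, lam_nm=1550.0):
    """Kim-model specific attenuation (dB/km) at wavelength lam_nm (nm):
    beta = (3.91 / V) * (lam / 550)^(-q), converted from 1/km to dB/km."""
    beta = (3.91 / V_km) * (lam_nm / 550.0) ** (-kim_q(V_km))
    return 10.0 * math.log10(math.e) * beta

# Dense maritime fog (V = 0.2 km) attenuates far more than light haze (V = 10 km):
assert attenuation_db_per_km(0.2) > attenuation_db_per_km(10.0)
```

For dense fog with a visibility of about 200 m, this gives roughly 85 dB/km at 1550 nm, which is why fog, rather than clear-air turbulence, usually sets the range limit of maritime FSO links.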
There are also hybrid solutions proposed for maritime communication that install FSO on top of the RF infrastructure; these aim to provide communication that is more robust to FSO and RF propagation effects while offering higher data rates. Various models have been proposed in the literature for RF and optical wave propagation through oceanic environments.
Motivated by the emerging research in maritime communication networks, this survey presents the state-of-the-art literature on maritime communication technologies. The article focuses on the latest advances in the different technologies used to connect ships, ships to shore, and onboard communication. Furthermore, we discuss several issues related to maritime networks, including radio resource management, coverage and capacity, and modulation and coding schemes. Moreover, we also present several emerging applications of maritime networks, such as IoT for marine environments.
\subsection{Related Surveys}
In recent years, there has been a growing interest in developing maritime communication networks, and multiple surveys related to this topic have been published \cite{YangBook14,zolich2019survey,jo2019lte,guan2021magicnet,aslam2020internet, WeiIoT21, BalkeesICACCI15, WangAccess18}. These articles covered various aspects of maritime communication, such as different network architectures, RF channel models, communication and networking of autonomous marine systems, and the Internet of things (IoT) in maritime environments. For instance, a comprehensive survey on video transmission scheduling for wideband maritime communication is presented in \cite{YangBook14}. Then, Zolich et al. reviewed the major advancements in autonomous maritime systems and applications and also provided an overview of maritime communication and networking technologies \cite{zolich2019survey}. Furthermore, the authors in \cite{jo2019lte} discussed the progress of a long-term evolution maritime (LTE-maritime) Korean project aiming to provide high data rates on the order of Mbps with 100 km coverage. The authors of \cite{guan2021magicnet} briefly reviewed current maritime communication and networking projects and introduced the key technologies and applications of a novel maritime giant cellular network (MagicNet) architecture based on seaborne floating towers acting as base stations to provide wide coverage.
The authors of \cite{aslam2020internet} provided a comprehensive survey on the applications and challenges of maritime IoT technologies, also known as the Internet-of-Ships (IoS). Maritime communications and IoTs enabled by hybrid satellite-terrestrial networks were surveyed in \cite{WeiIoT21}. Moreover, there are also a few surveys on maritime communications focusing on RF channel models \cite{BalkeesICACCI15, WangAccess18}. A summary of the areas of focus of the related surveys is given in Table~\ref{Tab:SurveysComp}.
\begin{table}[htbp]
\centering
\caption{Summary of related surveys}
\begin{tabularx}{\linewidth}{|l|l|X|}
\hline
\multicolumn{1}{|c|}{{\textbf{Ref.}}}& \multicolumn{1}{c|}{{\textbf{Year}}} & \multicolumn{1}{c|}{{\textbf{Area of focus}}} \\
\hline
\cite{YangBook14}&2014& Presents video transmission scheduling in maritime
wideband communication networks.\\
\hline
\cite{zolich2019survey}&2019& Surveys advancements in autonomous maritime systems with an overview of current and future communication technologies.\\
\hline
\cite{jo2019lte} & 2019& Provides an overview of existing maritime communication systems and introduces the LTE-maritime network, a project conducted in South Korea aiming to provide 100 km marine coverage. \\
\hline
\cite{guan2021magicnet} & 2021& Presents the pros and cons of existing maritime communication technologies and proposes a maritime giant cellular network architecture (MagicNet).\\
\hline
\cite{aslam2020internet}& 2021& Provides an overview of the key elements and main characteristics of the IoS paradigm.\\
\hline
\cite{WeiIoT21}& 2021& Discusses hybrid satellite-terrestrial maritime networks.\\
\hline
\cite{BalkeesICACCI15,WangAccess18}& 2018 & Surveys different RF channel models for maritime communications.\\
\hline
\end{tabularx}
\label{Tab:SurveysComp}
\end{table}
\begin{figure*}[h]
\centering
\includegraphics[width=0.95\linewidth] {img/MaritimeCommunication.png}
\caption {Maritime communication use cases.}
\label{Fig:MaritimeCom}
\end{figure*}
None of the aforementioned references presents a complete survey of the various aspects of maritime communication. Given the prominent role of communication in marine life and industry, it is essential to study the different building blocks of maritime communication and the emerging related IoT. In this survey, we cover these building blocks and also discuss future research directions, including the use of visible light communication (VLC) and the THz band for on-board communication. Data-driven modeling is equally highlighted as a key aspect of future maritime channel modeling.
\subsection{Survey Organization}
The organization of this paper is as follows. Section \ref{Sec:Overview} provides an overview of the various forms of maritime communication, including RF, optical wireless, and hybrid RF/optical solutions for ship-to-ship, ship-to-shore, satellite, and on-board communication. Section \ref{Sec:BBlocks} presents the building blocks of maritime communication, covering the different propagation effects and channel models as well as modulation schemes and resource management. The Internet-of-Ships paradigm and other applications are discussed in Section \ref{Sec:IoSParadigm}. In Section \ref{Sec:Challenges}, we discuss the challenges and open problems of maritime communication and identify future research directions. Section \ref{Sec:Conclusion} concludes the paper with a few remarks.
\section{Overview of Maritime Communications}
\label{Sec:Overview}
Speaking trumpets that intensify and direct the human voice were used on ancient Greek ships as a means of marine communication in the 5th century BC. Homing pigeons and small boats were equally used to convey physical messages from ship to shore and ship to ship. Semaphore flag signaling, where each flag represents a letter or signal, became the principal means of maritime communication by the 18th century, with light torches replacing flags at nighttime. To this day, semaphore signaling is still recognized as a means of maritime communication. The development of electromagnetic theory by Maxwell and the invention of radio transmission by Marconi in the 19th century allowed for the wireless transfer of messages in the form of Morse code. Sending Morse code over radio waves, also known as wireless telegraphy, was ensured by radio operators transferring and receiving messages at rates of up to 200 words per minute. In the early 1900s, multiple naval warships were equipped with radiotelephones or ``voice radio''. The idea was to convert sound waves into radio signals at the transmitter using amplitude modulation and then convert the received radio signals back to sound waves at the destination, using frequencies ranging from 2 to 23 MHz. In the 1950s, a VHF band was allocated for marine use. In 1962, the Communications Satellite Act was put into effect, allowing the launching of satellites into outer space for telecommunication purposes, including maritime communication \cite{SatelliteAct1962}. Many international organizations, including the International Association of Lighthouse Authorities, the International Telecommunication Union (ITU), and the International Maritime Organization (IMO), recognize the benefits of seamless data exchange for maritime communities. Nowadays, various forms of technologies are used to ensure maritime coverage, as seen in Fig.~\ref{Fig:MaritimeCom}. 
In the following, we will present the various forms of maritime communications and highlight the latest progress in each of these technologies.\newline
\begin{figure*}[h]
\centering
\includegraphics[width=1\linewidth] {img/Spectrum.png}
\caption {Electromagnetic spectrum in frequency and wavelength. ELF: extremely low frequency; VLF: very low frequency; LF: low frequency; MF: medium frequency; HF: high frequency; VHF: very high frequency; UHF: ultra high frequency; SHF: super high frequency; EHF: extremely high frequency; IR: infrared; UV: ultraviolet. Frequency bands in the region between 1 and 40 GHz are designated by letters by the IEEE. }
\label{fig:EMSpectrum}
\end{figure*}
\subsection{RF Technologies for Maritime Communications}
\label{Subsec:RFTech}
Maritime radio provides commercial and recreational communication services and allows search and rescue assistance to ships in distress. The ITU-designated band for marine radio is the VHF maritime mobile band from 156 to 174 MHz. Marine VHF transceivers are installed in all large vessels and most seagoing craft, and a particular VHF frequency, known as Channel 16 (156.8 MHz), is designated as an international distress frequency. A VHF terminal can be portable or permanently installed on a vessel, the latter allowing for higher transmission power. Various VHF systems support the following functionalities:
\begin{itemize}
\item Voice-only: relies on the human voice for calling and communicating.
\item Digital selective calling (DSC): In addition to the voice calling functionality, the DSC allows the user to communicate with another vessel using a unique identifier known as the Maritime Mobile Service Identity (MMSI). The MMSI information is transmitted digitally, and once detected by a receiver, the operator of the receiving vessel will be alerted of the incoming call. DSC allows the automatic transfer of the caller's coordinates when sending a distress call if connected to a global positioning system (GPS).
\item Automatic identification system (AIS): AIS allows the digital transfer of MMSI together with other information, including the vessel specification, real-time coordinates, speed, and course to avoid collisions and boost maritime safety. AIS operates as a mesh network, which enables extending the communication ranges and enables access to maritime traffic.
\item Text messaging: It is also possible to send and receive text messages between VHF terminals using the Radio Technical Commission for Maritime Services (RTCM) 12301.1 standard.
\end{itemize}
According to the IMO, fitting an AIS transceiver is required on any cargo ship over 300 gross tonnage and on all passenger-carrying vessels. AIS transceivers use two ITU-designated VHF frequencies, known as marine band channel 87B at 161.975 MHz and channel 88B at 162.025 MHz. Each AIS transmits and receives over the two channels to avoid interference. AIS transmission is based on Gaussian minimum shift keying (GMSK) frequency modulation (FM) at a data rate of 9.6 kbps. Each time frame lasts 60 seconds and is divided into 2250 time slots, where each slot is 26.67 ms long and contains 256 bits of data. AIS equipment uses the self-organized time-division multiple access (SOTDMA) datalink scheme, which handles data transmission by using a reference time derived from GPS signaling to synchronize the numerous data streams sent from many AIS transponders on a single band channel \cite{AISRadioScience}.
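As a quick sanity check on these figures, the SOTDMA frame arithmetic can be reproduced in a few lines (a sketch of the quoted numbers, not AIS decoder code):

```python
# Sketch: AIS/SOTDMA frame arithmetic from the figures quoted above --
# 60 s frames, 2250 slots per frame, 9.6 kbps GMSK transmission.
FRAME_DURATION_S = 60.0   # one SOTDMA frame
SLOTS_PER_FRAME = 2250    # slots per frame
BIT_RATE_BPS = 9600       # AIS data rate

slot_duration_ms = FRAME_DURATION_S / SLOTS_PER_FRAME * 1000.0
bits_per_slot = BIT_RATE_BPS * FRAME_DURATION_S / SLOTS_PER_FRAME

print(f"slot duration: {slot_duration_ms:.2f} ms")  # ~26.67 ms
print(f"bits per slot: {bits_per_slot:.0f}")        # 256 bits
```

The 26.67 ms slot and 256-bit payload quoted above are thus mutually consistent with the 9.6 kbps rate.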
The VHF Data Exchange System (VDES) is another VHF-enabled technology, seen as \textit{the successor of AIS}, offering the same functions as an AIS with the ability to connect to satellites using the same antenna while providing higher data rates.
Radio maritime communication is also possible in the MF and HF bands. For example, navigational data (NAVDAT) is a safety and security maritime digital broadcasting system that operates in the MF/HF bands. In the MF/HF bands, radio waves are reflected at the ionospheric layers, enabling longer propagation distances, reaching several hundreds of kilometers. NAVDAT is set to complement and possibly replace the navigational telex (NAVTEX) direct-printing system operating in the MF band. NAVDAT also provides extended coverage compared to NAVTEX, enabling a maximum off-shore range of 200 nautical miles ($\sim$370 km). In the following, we discuss several wireless access technologies in the RF band for maritime communications.\newline
\subsubsection{Standard Wireless Access Technologies}
\label{StdWirelessAccessNet}
Efforts have been made to provide high-speed data connectivity using standard wireless access techniques beyond voice calls, text messaging, and safety information exchange.
For instance, the BLUECOM+ project aims to provide broadband internet connectivity at distances beyond 100 km from shore with Mbps data rates \cite{BlueComPlus}. BLUECOM+ leverages the following to provide such connectivity:
\begin{itemize}
\item Helikites, a combination of a helium balloon and a kite, which carry radio relays even in extreme conditions (wind speeds of 100 km/h) without being severely affected by sea conditions. Helikites can be tethered either on land or on sea platforms.
\item Using the unused TV channels in the VHF and UHF bands for long-range LoS transmission.
\item Multihop relaying for radio range extension of standard wireless communication systems such as the universal mobile telecommunications system (UMTS) and LTE.
\end{itemize}
The BLUECOM+ trial deployments demonstrated single-hop and two-hop land-to-sea communications. The two-hop testing involved two helikites tethered to two vessels at an altitude of 120 m.\newline
LTE-maritime is another notable project aiming to provide broadband connectivity at sea. Reports from the test-bed implementation of LTE-maritime showed Mbps-level communication at shore-to-ship distances of up to 100 km using base stations located at high-altitude sites on land.
\subsubsection{Satellites-Based Maritime Communication System}
In addition to VDES, maritime communication systems primarily use satellites to provide wider coverage than standard techniques, utilizing microwave frequency bands in the 1-40 GHz range (see Fig.~\ref{fig:EMSpectrum} for possible satellite bands). Among the various satellite constellations, some popular ones are Inmarsat, Iridium, and Thuraya. Inmarsat relies on a constellation of 14 GEO satellites operating in the L-band to provide near-global connectivity with relatively high data rates reaching up to 50 Mbps. Tapping on 66 L-band satellites in a low-earth orbit (LEO) constellation, Iridium provides global voice and messaging connectivity, including the polar regions. With coverage in more than 160 countries, Thuraya provides voice and data services using 2 GEO satellites in the L-band. Given that a single GEO satellite can provide coverage of more than $35\%$ of the Earth's surface, Thuraya achieves beyond 70\% global coverage. The data rates of Thuraya using the ThurayaIP device are limited to 444 kbps. Among the different maritime satellite-based systems, GEO satellite-based solutions provide broader coverage with fewer satellites at the expense of delay due to the significant round-trip propagation distance ($\sim$78,000 km) compared to LEO satellites.
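The GEO coverage and delay figures above can be roughly reproduced from spherical geometry. The following back-of-envelope sketch is our own illustration (not taken from the cited systems' specifications): it estimates the fraction of the Earth's surface visible to one GEO satellite above a given elevation angle, and the nadir round-trip propagation delay.

```python
import math

# Back-of-envelope sketch: Earth-coverage fraction of a single GEO satellite
# and the round-trip propagation delay. Values are standard constants, not
# figures from any cited maritime system.
R_E = 6371.0      # mean Earth radius, km
H_GEO = 35786.0   # GEO altitude, km
C = 299792.458    # speed of light, km/s

def coverage_fraction(min_elev_deg: float) -> float:
    """Fraction of Earth's surface seen above a minimum elevation angle."""
    eps = math.radians(min_elev_deg)
    # Earth-central angle from the sub-satellite point to the coverage edge
    psi = math.acos(R_E * math.cos(eps) / (R_E + H_GEO)) - eps
    return (1.0 - math.cos(psi)) / 2.0  # spherical-cap area fraction

print(f"coverage at 0 deg elevation:  {coverage_fraction(0):.0%}")   # ~42%
print(f"coverage at 10 deg elevation: {coverage_fraction(10):.0%}")  # ~34%
print(f"round-trip delay (nadir): {2 * H_GEO / C * 1000:.0f} ms")    # ~239 ms
```

With a practical elevation mask the usable fraction drops toward the $\sim$35\% quoted above, consistent with two GEO satellites covering roughly 70\% of the globe.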
A summary of terrestrial and satellite RF maritime communication systems and the various projects aiming to provide maritime broadband coverage is given in Table \ref{tab:MaritimeProjects}.\newline
\begin{table*}[t!]
\centering
\caption{\label{tab:MaritimeProjects} Summary of RF-based maritime communication systems and projects.}
\label{Table}
\begin{tabular}{|m{55pt}|m{92pt}|m{90pt}|m{55pt}|m{150pt}|}
\hline
System&Technology and Band&Coverage&Max Data Rates&Use Cases\\
\hline
DSC&VHF, Maritime band&64 km&9.6 kbps&Maritime voice calling\\
\hline
AIS&VHF, Maritime band&64 km&9.6 kbps&Track and monitor vessels movements\\
\hline
NAVDAT&MF/HF&$\sim$ 500 km&18 kbps&Broadcasting of security and safety information from shore to ships\\
\hline
VDES&VHF&500 km&300 kbps& Establishing digital two-way communication between ships, satellites, and shore\\
\hline
BLUECOM+ &Air-air: IEEE 802.11g&$>$100 km&$\sim$3 Mbps&Providing broadband internet access\\
\hline
LTE-maritime&LTE&100 km&10 Mbps&Ship-to-shore communication\\
\hline
Inmarsat&Uplink: 1626.5-1660.5 MHz \newline Downlink: 1525-1559 MHz &Global, except polar regions&50 Mbps&Mobile and data services\\
\hline
Iridium&L band&Global, including polar regions&46 Mbps&Providing on-ship voice calling and internet access\\
\hline
Thuraya&L band&161 countries&444 kbps&Providing on-ship voice calling and internet access\\
\hline
VSAT&Ku band&Global, except polar regions&46 Mbps&Providing on-ship internet access\\
\hline
\end{tabular}
\end{table*}
\subsubsection{Aerial Networks for Maritime Communication}
Aerial networks involve the use of UAVs, flying up to a few hundred meters above the sea surface, and high-altitude platform stations (HAPS), flying in the stratosphere at least 20 km above the ground. UAVs have recently been used to improve search and rescue missions by providing quick on-demand network deployment after disasters while also supporting mobility \cite{XianlingTCOM20}. In maritime networks, UAVs can relay the information sent from a ground station to mobile vessels beyond the LoS limit or when an LoS path is unavailable.
Moreover, UAVs can also help retrieve information from IoT devices located in the oceans and relay information to/from unmanned surface vehicles (USVs). The main restriction on the use of UAVs is the limited flying time, constrained by the carried load and the battery or fuel cell \cite{PANAppEng19}. Using a tethered UAV connected by an electrical cable to a power source can help relieve such limitations. Recent studies have shown the feasibility of using tethered UAVs fixed on buoys \cite{TetheredUAV21}. Tethered UAVs can hover in place tens of meters above the seawater with limited coverage and mobility. Tethered UAVs can also be connected to optical fibers in addition to the power cable to enable high transmission rates.
Due to their higher flying altitude compared to UAVs, HAPS flying at about 20 km from sea level offer an extended coverage radius that could reach hundreds of kilometers \cite{HAPSComst21}. The autonomy of HAPS can be as long as several months and, with less load weight restrictions, can carry large antennas \cite{BelmekkiAccess22}. A recent demonstration by Airbus and NTT DOCOMO, INC. using their solar-powered Zephyr HAPS aiming to extend connectivity in the air and sea has reported connectivity in a range of up to 140 km \cite{Zephyr}. The HAPS-enabled connectivity to mobile terminals could allow users to use their mobile devices without the need for a dedicated antenna.
\subsection{On-board Communications}
Another major aspect of maritime communications is the connectivity among the different entities on a ship. Radio wave propagation inside ships, which are principally constructed of steel, can differ markedly from conventional indoor radio propagation. For instance, radios are used for communication between ship crew members, and a wireless sensor network may be used to monitor the movement of perishable and dangerous goods in shipping containers \cite{yingjun2010shipping}. Therefore, it is essential that the wireless channel for on-board applications be carefully modeled considering the various scenarios.
Balboni et al. reported a series of seminal investigations on radio channel characterization inside navy ships at a frequency range in the microwave band between 800 MHz and 2.6 GHz \cite{balboni2000empirical}.
The authors reported root-mean-square (RMS) delay spreads ranging between 70 and 90 ns and path loss gradients ranging from 1/2 to unity \cite{balboni2000empirical}. Both the path loss and the RMS delay spread were found to be independent of frequency over the considered range. For on-board communications, channel measurements have been reported in different types of ships in various studies \cite{NoblesMILCOM03, MariscottiIMTCP10, MariscottiMeasurement11, MAOTCOM12}. The channel impulse responses inside compartments and within a passageway of a ship were obtained using a vector network analyzer at 2 GHz and 5 GHz \cite{NoblesMILCOM03}. Channel sounding measurements were conducted in the restaurant hall and corridors of a cruise ship at 2.4 GHz \cite{MariscottiIMTCP10, MariscottiMeasurement11}. LoS and NLoS channel measurements and 3D ray-tracing simulations were performed in the UHF band (from 225 to 450 MHz) inside a cargo hold of a merchant ship \cite{MAOTCOM12}, deriving path loss models for both scenarios. In \cite{mao2010study}, the channel characteristics and temporal fluctuations related to the propagation of VHF waves between the engine control room (ECR) and the bridge room of a vessel were examined. Due to the dense multipath environment formed by the metallic structures, broadband propagation may be impossible inside such channels.
Furthermore, \cite{de2021radio} characterized the channel at three separate locations within a ship in a broader investigation covering multiple bands (868 MHz, 2.4 GHz, 5.25 GHz, and 60 GHz). The path loss exponents for sub-6 GHz were 1.21, 1.14, and 1.36 for 868 MHz, 2.4 GHz, and 5.25 GHz, respectively. However, the path loss exponent for mmWave communications at 60 GHz was higher, i.e., 1.9.
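These exponents plug directly into the standard log-distance path-loss model, $PL(d) = PL(d_0) + 10\,n\,\log_{10}(d/d_0)$. The sketch below is illustrative only: the reference loss $PL(d_0)$ and reference distance are placeholders of our own, not measurements from \cite{de2021radio}.

```python
import math

# Sketch of the log-distance path-loss model using the on-board exponents
# quoted above. PL(d0) and d0 are illustrative placeholders, not measured
# values from the cited work.
EXPONENTS = {"868 MHz": 1.21, "2.4 GHz": 1.14, "5.25 GHz": 1.36, "60 GHz": 1.9}

def path_loss_db(d_m: float, n: float, pl_d0_db: float, d0_m: float = 1.0) -> float:
    """Log-distance path loss in dB at distance d_m."""
    return pl_d0_db + 10.0 * n * math.log10(d_m / d0_m)

# The extra loss over one decade of distance is 10*n dB regardless of PL(d0):
for band, n in EXPONENTS.items():
    decade_loss = path_loss_db(10.0, n, 0.0) - path_loss_db(1.0, n, 0.0)
    print(f"{band}: +{decade_loss:.1f} dB per decade of distance")
```

The small exponents ($n<2$) reflect the waveguiding effect of the metallic hull, which concentrates energy instead of letting it spread as in free space.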
\subsection{FSO and Hybrid RF/FSO for Maritime Communications}
\label{Subsec:FSO}
A particular advantage of FSO in maritime communication is the difficulty of interception and its immunity to jamming, contrary to RF signals, opening many opportunities for military and civil applications. Various demonstrations, mostly for military applications, investigated the potential of FSO deployment in maritime military communication. Initial laser-based maritime communication demonstrations date back to the 1970s \cite{GiannarisSPIE77}. A full-duplex heterodyne laser transmission was demonstrated using two 10.6-$\mu$m CO$_{2}$ lasers over an 18.2 km maritime link within the optical covert communications using laser transceivers (OCCULT) experimental research initiative \cite{GiannarisSPIE77}. An automatic acquisition mechanism, together with reciprocal pointing and tracking, was employed to ensure stable coherent two-way communication.
In 2006, within the yearly Trident Warrior exercise, FSO systems were installed on two naval vessels \cite{RabinovichSPIE10}. A high-quality 300-Mbps uncompressed video transmission was reported over a maximum distance of 17.5 km in the Pacific, and the data link was maintained with no disruptions or delay over 10 hours \cite{RabinovichSPIE10}. A bidirectional multi-Gbps FSO transmission was conducted off the mid-Atlantic coast between a tower on Cedar Island (Virginia, US) and a JHU/APL research vessel at varying distances between 2 and 22 km \cite{JuarezLSC10}. In 2017, a team from the Johns Hopkins University Applied Physics Laboratory (APL) demonstrated up to 7.5 Gbps FSO communication between two moving ships \cite{TW17Demonstration}. During the 14-hour up-time of the FSO terminal in a ship-to-shore configuration, data rates between 1 and 2 Gbps were reported for ranges exceeding 25 km. The FSO link ensured voice communications at distances of more than 35 km and chat messaging up to the maximum available LoS distance of 45 km.
Moreover, modulating retro-reflector (MRR)-based maritime links were demonstrated in \cite{MooreSPIE02,RabinovichSPIE05,BurrisSPIE09}.
Fig.~\ref{Fig:MRRModulation} shows the diagram of an MRR, which combines an optical retro-reflector with a modulator to reflect modulated optical signals (initially emitted by a laser interrogator) directly back to an optical receiver, allowing the MRR to function passively as an optical communication device without emitting its own optical power. MRRs are mainly used in maritime settings for FSO link characterization.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth] {img/MRRIllustration.png}
\caption {Diagram of an MRR-based FSO. The MRR couples a corner-cube retro-reflector and a modulator. }
\label{Fig:MRRModulation}
\end{figure}
FSO transmissions involving an array of MRRs with data rates ranging from 100 to 500 Mbps were conducted over 32.4 km folded round-trip distance \cite{MooreSPIE02}.
Using an array of 5 quantum-well-based MRRs, a series of shore-to-boat FSO communication tests were conducted with a 1550-nm laser over a distance of 2 km in the Chesapeake Bay, achieving data rates up to 5 Mbps \cite{RabinovichSPIE05}.
A 32-km round-trip FSO communication using a 1550-nm laser modulated by an analog RF signal was reported through a maritime link using a retro-reflector at the Tilghman Island \cite{BurrisSPIE09}.\newline
\indent Beyond high-data-rate transmission and the use of MRRs, other experimental efforts involved detailed investigations of the various propagation effects on FSO in maritime environments. For instance, the effects of turbulence and extinction on a 7.2 km maritime FSO path were investigated in \cite{GadwalSPIE06}.
A summary of FSO trials and demonstrations is given in Table \ref{tab:FSODemonstrations}.
\begin{table*}[t!]
\centering
\caption{\label{tab:FSODemonstrations} Summary of FSO field trials and demonstrations.}
\begin{tabular}{|m{20pt}|m{190pt}|m{190pt}|}
\hline
Ref.&Demonstration description&Major outcomes\\
\hline
\cite{GiannarisSPIE77}&- CO$_{2}$ lasers operating at 10.6 $\mu$m were used to build heterodyne systems for full-duplex ship-to-ship communication at close to 5 MHz.\newline - Automatic acquisition and reciprocal pointing and tracking mechanisms were involved.&- Reciprocal pointing and tracking ensured stable communication.\newline - High frequency stability was achieved. \\
\hline
\cite{RabinovichSPIE10}&- A high-speed FSO transmission, in the form of a pre-taped video and live video feed between two US navy vessels, was established over 10 hours.
& - Up to 17.5 km transmission range and 300 Mbps were reported.\newline - No artifacts or delays in the videos were reported in clear-weather transmission.\newline- During rain, slight video artifacts were observed at ranges less than 10 km.\\
\hline
\cite{JuarezLSC10}&- Two bidirectional ship-to-shore 1550-nm FSO field trials conducted off the mid-Atlantic coast near Wallops Island in July and September 2009.\newline - The propagation distance was varied from 2 to 22 km, up to the visual horizon.\newline - Two adaptive optics (AO) units were used to compensate for beam distortions and pointing errors. &- Up to 10 Gbit/s data rate was achieved.\newline - Daytime atmospheric turbulence is stronger than at nighttime.\\
\hline
\cite{TW17Demonstration}&- Demonstration of high-speed FSO transmission using terminals developed by APL Engineers in the 2017 Trident Warrior exercise.\newline - Demonstrations involved ship-to-ship and ship-to-shore communications. &- Up to a 7.5 Gbps transmission rate between two moving vessels was reported.\newline
- For 14 hours total up-time in ship-to-shore testing:\newline
* Error-free transmissions with data rates between 1 and 2 Gbps at more than 25 km ranges.\newline
* Voice communications for ranges up to 35 km.\newline
* Operational chat messaging at maximum LoS of 45 km.\newline
- Sea spray and fog were the major challenges.\\
\hline
\cite{MooreSPIE02}&- MRR-based 32.4 km roundtrip transmission using an array of 22 MRRs.\newline - The height of the laser interrogator was 30 m above the water surface, while the height of the MRR array was set to 15 m to accentuate the propagation effects on the laser beams.
&- Data rates between 100 and 500 Mbps were demonstrated with BER below $10^{-5}$. \\
\hline
\cite{RabinovichSPIE05}&- A series of FSO communication tests using a 1550-nm laser with data rates up to 5 Mbps over a 2 km distance from a ship to a boat in the Chesapeake Bay (Maryland, Virginia, US). \newline- Various weather conditions were covered over a one-year period.&- Scintillation is the major challenge.\newline - At low and medium turbulence regimes, results are consistent with the modeling proposed in \cite{AndrewsBook}.\newline - Experimental data are not in good agreement with theoretical models in the high turbulence regime.\\
\hline
\cite{BurrisSPIE09}&- 32 km roundtrip MRR-based FSO communication using a 1550-nm laser modulated by an analog RF signal through a maritime link.\newline
- Retro-reflector fixed at the Tilghman Island.&- The analog modulation link was subject to turbulence, which could have been compensated if a digitizer was used. \\
\hline
\cite{GadwalSPIE06}&- Investigated the effects of turbulence and attenuation using a 1060-nm laser across a 7.6-km maritime path.& - The near-surface marine environment relevant to ship-to-ship and ship-to-shore communications is an especially stressing propagation environment.\\
\hline
\end{tabular}
\end{table*}
FSO is not restricted to horizontal links; it can also be used for vertical links, such as HAPS-to-vessel links. FSO is also a core technology for providing fiber-like data rates in satellite crosslinks in large LEO constellations aiming to provide wide maritime connectivity, such as the Telesat and SpaceX constellations \cite{Starlink,Telsat}.
Besides FSO-only links, various terrestrial hybrid RF/FSO schemes have also been investigated recently in the literature \cite{TrichiliOJCS21, NadeemJSAC09, Gregorycharacterization11}. Such hybrid links can be well suited for ship-to-ship and ship-to-shore maritime communication. Nevertheless, research on hybrid RF/FSO links for maritime networks is still in its early stages and requires further investigation.
\section{Building Blocks of Maritime Communications}
\label{Sec:BBlocks}
In this section, we discuss the fundamental physical layer aspects of maritime communications, including channel modeling, modulation and coding, and other key performance parameters, such as coverage and capacity and radio resource management.
\subsection{Channel Models (RF/Optical/Hybrid)}
In the literature, various works discussed the channel modeling and system implementations of maritime networks using RF, FSO, and hybrid technologies \cite{NadeemJSAC09, Gregorycharacterization11, lionis2020experimental, arienzo2019green, bajwa2009sparse, saleh1987statistical, garroppo2008wimax, durgin2002new, romero2016fluctuating, romero2017fluctuating, zhao2014radar, reddy2016analysis, hubert2012impact,huang2015maritime, dinc2015channel, dinc2014beyond, lionis2021using, grant2012maritime}. In the following, we present these different channel models in detail, particularly for the shore-to-ship, ship-to-ship, and satellite/air-to-ship links. Fig. \ref{ducting} shows the possible propagation paths in the shore-to-ship scenario. More generally, Fig. \ref{Shore/Satellite-to-ship} shows the possible propagation paths for the three cases: (a) shore-to-ship, (b) ship-to-ship, and (c) satellite/air-to-ship.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth] {img/PropagationPaths.jpg}
\caption {Illustration of the various possible RF maritime propagation paths.}
\label{ducting}
\end{figure}
\subsubsection{RF-based channel models}
The characteristics of maritime communication channels are different from conventional terrestrial wireless channels. The difference is mainly due to sparsity, wave-induced instability, and ducting phenomena.
\newline
\textbf{Sparsity}: Since RF-based maritime channels do not exhibit rich scattering, the assumption of Rayleigh fading is no longer valid. Therefore, finite-scattering models are mainly used in the statistical modeling of maritime channels \cite{bajwa2009sparse, saleh1987statistical}. \newline
\textbf{Wave-induced instability:} The movement of the waves causes periodic changes in the height and orientation of the on-board antennas, leading to a reduction in the received signal power. The wave movement can be considered as linear motion and/or rotational motion \cite{zhao2014radar, reddy2016analysis}. The linear motion occurs along one specific axis (x-only, y-only, or z-only), whereas the rotational motion involves movement about all three axes (see Fig. \ref{seamotion}). Aside from the link mismatch effect due to sea wave movement, water motion also leads to scattering of the radio transmission, particularly at the air/water interface. Three metrics are often employed to characterize sea wave movement: the significant sea wave height, the average sea wavelength, and the average sea wave period \cite{hubert2012impact, huang2015maritime}.
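To illustrate the rotational-motion effect described above, the following sketch (with illustrative roll/pitch angles of our own choosing, not measured sea-state data) computes how far a nominally vertical on-board antenna boresight tilts under combined roll and pitch:

```python
import math

# Sketch (illustrative angles): boresight tilt of an on-board antenna caused
# by vessel roll and pitch, following the rotational-motion picture above.
def rotate_roll_pitch(v, roll_deg, pitch_deg):
    """Rotate vector v about the x-axis (roll), then the y-axis (pitch)."""
    r, p = math.radians(roll_deg), math.radians(pitch_deg)
    x, y, z = v
    # roll about x-axis
    y, z = y * math.cos(r) - z * math.sin(r), y * math.sin(r) + z * math.cos(r)
    # pitch about y-axis
    x, z = x * math.cos(p) + z * math.sin(p), -x * math.sin(p) + z * math.cos(p)
    return (x, y, z)

def pointing_offset_deg(roll_deg, pitch_deg):
    """Angle between the nominal (vertical) boresight and the tilted one."""
    _, _, bz = rotate_roll_pitch((0.0, 0.0, 1.0), roll_deg, pitch_deg)
    return math.degrees(math.acos(max(-1.0, min(1.0, bz))))

print(f"{pointing_offset_deg(5, 0):.2f} deg")  # roll alone: 5.00 deg
print(f"{pointing_offset_deg(5, 5):.2f} deg")  # combined roll and pitch
```

For a directional antenna, this offset translates directly into pointing loss, which is why wave-induced motion degrades the received signal power periodically with the wave period.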
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth] {img/WaveMotion.jpg}
\caption {Sea surface wave motion}
\label{seamotion}
\end{figure}
\newline
\textbf{Evaporation ducting phenomenon:} The atmospheric ducting effect has been well studied, particularly in radar systems. The ducting effect is mainly introduced by the variation of refractivity with altitude due to changes in air pressure, temperature, and humidity \cite{dinc2015channel, dinc2014beyond}. As illustrated in Fig. \ref{ducting}, the waves may be ``trapped'' inside the ducting layer within a height of 10-20 m, with a maximum of 40 m.\newline
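A common way to quantify this trapping layer is through a modified-refractivity profile; the Paulus-Jeske evaporation-duct profile used in the sketch below is our own illustrative assumption (the discussion above does not prescribe a specific profile), with placeholder surface refractivity and duct height. The trapping condition is that $M(z)$ decreases up to the duct height and increases above it:

```python
import math

# Sketch (assumed model): Paulus-Jeske-style evaporation-duct profile of
# modified refractivity, M(z) = M0 + 0.125 * (z - delta * ln((z + z0) / z0)).
# M0, z0, and delta below are illustrative placeholders.
M0 = 330.0     # surface modified refractivity, M-units
Z0 = 1.5e-4    # aerodynamic roughness length, m
DELTA = 15.0   # evaporation-duct height, m (typical 10-20 m per the text)

def modified_refractivity(z_m: float) -> float:
    return M0 + 0.125 * (z_m - DELTA * math.log((z_m + Z0) / Z0))

# M(z) decreases up to the duct height and increases above it: the
# negative-gradient region is the trapping layer.
assert modified_refractivity(DELTA / 2) < modified_refractivity(0.01)
assert modified_refractivity(2 * DELTA) > modified_refractivity(DELTA)
print(f"M at duct height: {modified_refractivity(DELTA):.1f} M-units")
```

The duct height $\delta$ is exactly where $dM/dz = 0$, so rays launched at shallow angles below this height are bent back toward the sea surface and guided within the duct.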
The sparsity, wave-induced instability, and ducting can occur for different wireless links, i.e., ship-to-ship, air-to-ship, shore-to-ship, and satellite-to-ship. We will examine these RF-based wireless links one by one in the following.\newline
\indent \textbf{Shore/Ship-to-Ship Links}: shore-to-ship or ship-to-ship links are primarily distance-dependent, as shown in Fig. \ref{Shore/Satellite-to-ship}. In the case of short-range, the wireless channel acts as a two-ray model by taking into account the earth curvature, and the channel response can be formulated as follows:
\begin{equation}
h_{2}(t, \tau)=\delta\left(\tau-\tau_{0}(t)\right) +\alpha_{s}(t) \exp \left(j \varphi_{s}(t)\right) \delta\left(\tau-\tau_{s}(t)\right),
\end{equation}
where $\alpha_{s}(t)$ is the amplitude of the surface reflection wave, $\varphi_{s}(t)$ is the phase difference between the direct route and the reflection wave, and $\tau$ is the propagation delay. $\alpha_{s}(t)$ may be modified by characteristics such as the reflection coefficient, shadowing factor, divergence factor, and surface roughness factor, whereas $\varphi_{s}(t)$ can be estimated geometrically using the curved-earth approximation \cite{matolak2016air}. For short distances between the transmitter and receiver, if the antenna is mounted at a sufficient height, the maritime channel has LoS and NLoS reflected-wave components, leading to the two-ray channel model assumption \cite{garroppo2008wimax}. On the other hand, the local scattering around the user on the ship cannot be neglected for low-height antennas. More paths then need to be taken into consideration, which leads to the so-called two-wave with diffuse power (TWDP) model introduced in \cite{durgin2002new} (Eq. (4)):
\begin{equation}
\tilde{V}=\sum_{i=1}^{N} V_{i} \exp \left(j \Phi_{i}\right)+X+j Y,
\end{equation}
where $N=2$ and $\tilde{V}$ is the total voltage induced at the receiver antenna, which is composed of two components: the specular component $\sum_{i=1}^{N} V_{i} \exp \left(j \Phi_{i}\right)$ and the diffuse component $X+jY$, which has a complex Gaussian distribution and represents the sum of numerous individually weak waves. This model fits mmWave channels very well \cite{romero2016fluctuating, romero2017fluctuating}.
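A quick Monte-Carlo sketch of this two-wave with diffuse power model (with illustrative, unfitted amplitudes of our own choosing) generates envelope samples from the two specular components plus the complex Gaussian diffuse term:

```python
import numpy as np

# Monte-Carlo sketch of the two-wave with diffuse power model quoted above:
# two specular voltages with independent random phases plus a complex
# Gaussian diffuse term. V1, V2, and the diffuse power are illustrative.
rng = np.random.default_rng(0)

def twdp_envelope(n_samples, v1=1.0, v2=0.7, diffuse_power=0.1):
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, 2))
    specular = v1 * np.exp(1j * phases[:, 0]) + v2 * np.exp(1j * phases[:, 1])
    sigma = np.sqrt(diffuse_power / 2.0)  # per-quadrature std of X + jY
    diffuse = rng.normal(0, sigma, n_samples) + 1j * rng.normal(0, sigma, n_samples)
    return np.abs(specular + diffuse)

env = twdp_envelope(100_000)
# Mean envelope power should be close to V1^2 + V2^2 + diffuse power = 1.59
print(f"mean envelope power: {np.mean(env**2):.2f}")
```

Deep fades occur when the two specular phases are nearly opposed, which is why the TWDP envelope distribution can be harsher than Rayleigh even with little diffuse power.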
\begin{figure*}[h]
\centering
\includegraphics[width=1\linewidth] {img/SignalPropagation.jpg}
\caption {Illustration of signal propagation in (a) shore-to-ship, (b) ship-to-ship, and (c) satellite/air-to-ship communication scenarios}
\label{Shore/Satellite-to-ship}
\end{figure*}
In the case of medium-range communication, a three-ray model can be used to characterize the channel as follows:
\begin{equation}\label{threeray}
h_{3}(t,\tau)=h_{2}(t, \tau) +z_{3}(t) a_{3}(t) \exp \left(j \varphi_{3}(t)\right) \delta\left(\tau-\tau_{3}(t)\right),
\end{equation}
where $a_{3}(t)$, $\tau_{3}(t)$, and $\varphi_{3}(t)$ denote the third multipath component's time-varying amplitude, propagation delay, and phase shift, respectively \cite{matolak2016air}. In \eqref{threeray}, $z_{3}(t)$ is a random process that governs the presence of the third multipath component.
In the case of long-range links, the only possible reflected wave arises from the ducting layer. Evaporation ducts may then act as a propagation medium for signals that travel beyond LOS (B-LOS), as shown in Fig. \ref{Shore/Satellite-to-ship} (a). B-LOS propagation is very common in maritime communication and consists of signals being trapped between the ducting layer and the sea surface. Multiple efforts have been conducted to understand B-LOS propagation and study its behavior, starting with studies of wave refractivity and propagation in the ducting layer. The authors of \cite{dinc2015channel} then developed a statistical large-scale path-loss model that helps to evaluate the path-loss exponent and the propagation range using parabolic equation (PE) simulation, the PE being an approximation of the Helmholtz wave equation \cite{sirkova2012brief}. The PE is often solved numerically using one of three methods: the split-step Fourier (SSF), finite difference (FD), or finite element (FE) method \cite{sirkova2012brief}; the most appropriate choice depends strongly on the scenario at hand. The SSF approach relies on the fast Fourier transform (FFT), so PE with SSF is more computationally efficient than the other approaches and can provide precise and stable solutions. The FD method gives the highest resolution in simulating the boundary conditions by applying the Crank-Nicolson finite difference technique. The FE method enables more accurate modeling of rapid shifts in atmospheric conditions and offers more flexibility for complicated boundary conditions. A comprehensive examination of these strategies can be found in \cite{sirkova2012brief}, and there are wave propagation tools that use these numerical models to solve the PE \cite{dinc2015channel}.
\textbf{Satellite/Air-to-Ship Links}:
The sea surface is the main cause of multipath, creating two possible paths: an LoS path and a path reflected from the duct, the sea surface, or both, as in the case of air-to-ship links illustrated in Fig. \ref{Shore/Satellite-to-ship}(c). In terms of channel modeling, various experiments have demonstrated that the Rician model is the most suitable statistical model for satellite/air-to-ship communication links \cite{li2020maritime,hagenauer1987maritime,wang2019doppler}. Nevertheless, recent works have tried to improve the communication performance between UAVs and ships, which requires more complex channel modeling with either a 2D or a 3D formulation \cite{shi2014modeling,liu2021novel}. For example, in \cite{shi2014modeling}, 3D and 2D sea-surface simulations are used to understand signal propagation for UAV-to-ship links. For 2D, the finite-difference time-domain (FDTD) method was used with a maximum 10 m variation, while an alternating-direction implicit FDTD (ADI-FDTD) scheme was used for the 3D modeling. Although the 3D formulation requires a high computational load, it is more realistic. Similarly, the authors of \cite{liu2021novel} considered 3D channel modeling by taking into account the multi-mobility of UAVs and the ship's motion at arbitrary speeds and directions. This approach helps to study the statistical properties of maritime UAV-to-ship channels, considering the mobility, speed, and clusters between the transmitter and the receiver.
Due to the relative motion of satellites, the mobility of UAVs, and the movement of ships, the Doppler shift is a common issue for these links. For example, consider a typical LEO satellite at an altitude of 650 km above the earth's surface. The Doppler shift may range between -4 and 4 kHz depending on the relative velocity, the angle, and the carrier frequency \cite{wang2019doppler}. Hence, numerous approaches have been proposed in the literature to characterize the Doppler effect \cite{ma2016accurate, meng2018frequency, mengX2018frequency, choi2013ship}.\newline
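The order of magnitude of such shifts follows from the standard Doppler relation $f_d = (v_{rel}/c)\, f_c \cos\theta$. The speed, carrier frequency, and angle in the sketch below are illustrative assumptions rather than values from the cited measurements:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def doppler_shift_hz(v_rel_mps, f_c_hz, theta_rad):
    """Doppler shift for relative speed v_rel, carrier frequency f_c, and
    angle theta between the velocity vector and the line of sight."""
    return (v_rel_mps / C) * f_c_hz * math.cos(theta_rad)

# Illustrative LEO pass: ~7.5 km/s relative speed, 162 MHz carrier, 30 deg angle
fd = doppler_shift_hz(7_500.0, 162e6, math.radians(30.0))
print(f"Doppler shift: {fd:.0f} Hz")  # a few kHz, consistent with the +/- 4 kHz range
```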
\subsubsection{FSO Channel Modeling}
Like terrestrial propagation, optical beams propagating through maritime environments are subject to various weather conditions and turbulence. Weather conditions (including haze, humidity, fog, and rain) cause attenuation, with effects lasting from a few minutes to several hours. Turbulence, however, causes effects on a timescale of a few seconds, known as scintillation and beam wandering. Scintillation refers to rapid random intensity fluctuations quantified by the so-called scintillation index, the variance of the irradiance fluctuations normalized by the square of the mean irradiance. Beam wandering refers to the random movement of the incoming laser beam on the receiver plane.\newline
FSO power attenuation as a function of the propagation distance $z$ can be described by Beer's law as follows:
\begin{equation}
P(z)=P_{0}\exp(-\gamma(\lambda)z)
\end{equation}
where $P_{0}$ is the initial power, $\gamma(\lambda)$ is a wavelength-dependent attenuation coefficient, and $\lambda$ is the operating wavelength. $\gamma(\lambda)=\alpha+\beta$ combines two phenomena: absorption ($\alpha$) and scattering ($\beta$). Absorption is caused by gaseous molecules and aerosol particles in the atmosphere; the coefficient $\alpha$ can be neglected for maritime FSO since FSO wavelengths lie in non-absorbing atmospheric windows. There are three scattering types, Rayleigh, Mie, and non-selective scattering, and $\beta$ can include contributions from all three. Rayleigh scattering is an all-direction scattering caused by particles smaller than the optical wavelength; its effect is negligible for wavelengths beyond 800 nm and can therefore be neglected for FSO maritime links incorporating IR laser sources. Mie scattering is mainly caused by particles with sizes comparable to the optical wavelength and originates from fog. There are various empirical models for Mie scattering in the literature, with the Kim and Kruse models the most widely accepted. The scattering coefficient $\beta_{Mie}$ [km$^{-1}$] in both models can be written in the following form:
\begin{equation}
\beta_{Mie}(\lambda)=\frac{3.91}{V}\left(\frac{\lambda}{\lambda_{0}}\right)^{-q}
\end{equation}
where $V$ is the visibility, $\lambda_{0}$ is a reference wavelength (commonly fixed at 550 nm), and $q$ characterizes the size distribution of the scattering particles. For the Kruse model, the parameter $q$ is given as follows:
\begin{equation}
q_{Kruse} =
\begin{cases}
1.6, & V>50~\text{km} \\
1.3, & 6~\text{km}\leq V<50~\text{km} \\
0.58V^{1/3}, & V<6~\text{km}
\end{cases}
\end{equation}
At low visibility, the Kim model provides higher accuracy:
\begin{equation}
q_{Kim} =
\begin{cases}
1.6, & V>50~\text{km} \\
1.3, & 6~\text{km}\leq V<50~\text{km} \\
0.16V+0.34, & 1~\text{km}<V\leq6~\text{km} \\
V-0.5, & 0.5~\text{km}<V\leq1~\text{km} \\
0, & V\leq0.5~\text{km}
\end{cases}
\end{equation}
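The visibility-driven scattering coefficient translates directly into a link budget via Beer's law. The sketch below implements the Kim and Kruse $q$ parameters and $\beta_{Mie}$ exactly as written above; the visibility, wavelength, and link-distance values chosen for the example are illustrative assumptions:

```python
import math

def q_kruse(V_km):
    """Kruse size-distribution parameter versus visibility V [km]."""
    if V_km > 50.0:
        return 1.6
    if V_km >= 6.0:
        return 1.3
    return 0.58 * V_km ** (1.0 / 3.0)

def q_kim(V_km):
    """Kim size-distribution parameter versus visibility V [km]."""
    if V_km > 50.0:
        return 1.6
    if V_km >= 6.0:
        return 1.3
    if V_km > 1.0:
        return 0.16 * V_km + 0.34
    if V_km > 0.5:
        return V_km - 0.5
    return 0.0

def beta_mie_per_km(V_km, lam_nm, q, lam0_nm=550.0):
    """Mie scattering coefficient [1/km]: (3.91/V) * (lambda/lambda0)^(-q)."""
    return (3.91 / V_km) * (lam_nm / lam0_nm) ** (-q)

def received_power(P0, gamma_per_km, z_km):
    """Beer's law: P(z) = P0 * exp(-gamma * z)."""
    return P0 * math.exp(-gamma_per_km * z_km)

V, lam = 2.0, 1550.0                    # 2 km visibility (haze), 1550 nm source
beta = beta_mie_per_km(V, lam, q_kim(V))
P_rx = received_power(1.0, beta, 3.0)   # 1 W launched over a 3 km link
print(beta, P_rx)
```

Below 6 km visibility the two models diverge, which is why the Kim model is preferred in fog.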
Non-selective scattering is caused by particles larger than the optical wavelength, including rain and snow. Empirical rain and snow models determined for terrestrial FSO links are also applicable in maritime environments. For rain, the scattering coefficient can be expressed as $\beta_{rain}=KR^{\alpha}$ [dB/km], where $R$ is the precipitation intensity in [mm/hr] and $(K,\alpha)$ are model parameters \cite{WirelessComBook}. For snow, the scattering coefficient can be expressed as $\beta_{snow}=a_s S^{b_s}$ [dB/km], where $S$ is the snowfall rate in [mm/hr] and ($a_s$,$b_s$) are snow parameters that take different values for wet and dry snow \cite{WirelessComBook}. \newline
\indent There are various numerical and stochastic models for the effect of turbulence in terrestrial FSO channels \cite{TrichiliJOSAB20}. Numerical models derived from the Kolmogorov turbulence theory can be used to model the random variations of the refractive index of the atmosphere that cause turbulence \cite{AndrewsBook}. A key parameter in the different numerical models is the refractive index structure parameter, $C_{n}^{2}$, a measure of the strength of the fluctuations, which typically takes values ranging from 10$^{-17}$~m$^{-2/3}$ for weak turbulence to 10$^{-13}$~m$^{-2/3}$ for strong turbulence.\newline
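A common yardstick connecting $C_n^2$ to the fluctuation strength (standard in the turbulence literature rather than specific to the maritime references above) is the plane-wave Rytov variance $\sigma_R^2 = 1.23\, C_n^2 k^{7/6} L^{11/6}$ with $k = 2\pi/\lambda$, where $\sigma_R^2 < 1$ marks the weak-fluctuation regime. The wavelength and path length below are illustrative assumptions:

```python
import math

def rytov_variance(Cn2, wavelength_m, path_m):
    """Plane-wave Rytov variance 1.23 * Cn2 * k^(7/6) * L^(11/6), k = 2*pi/lambda."""
    k = 2.0 * math.pi / wavelength_m
    return 1.23 * Cn2 * k ** (7.0 / 6.0) * path_m ** (11.0 / 6.0)

# 1550 nm beam over a 3 km horizontal path
weak = rytov_variance(1e-17, 1550e-9, 3000.0)    # weak-turbulence Cn2
strong = rytov_variance(1e-13, 1550e-9, 3000.0)  # strong-turbulence Cn2
print(weak, strong)
```

The four-decade spread in $C_n^2$ carries over directly to $\sigma_R^2$, moving the same link from the weak- into the strong-fluctuation regime.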
\indent Stochastic models for terrestrial atmospheric turbulence include the lognormal distribution for weak turbulence, the negative exponential distribution for strong turbulence, the Gamma-Gamma distribution for weak to medium turbulence, and the generalized M\'alaga model covering a wide range of turbulence strengths \cite{Comst14}. The rich literature on terrestrial FSO channel modeling \cite{AndrewsBook,Comst14} cannot be used directly to describe propagation in a maritime channel, mainly because turbulence behaves differently in terrestrial and marine environments \cite{GrayshanMarineModelling,ToselliMarineModelling}. \newline
An early study by Friehe \textit{et al.} revealed that $C_{n}^2$ fluctuations over water differ from the inland case, mainly due to significant humidity variations \cite{FrieheJOSA75}.
Several experimental and theoretical efforts on marine FSO channel modeling have nonetheless been conducted. The authors of \cite{GrayshanMarineModelling} introduced a novel marine turbulence spectrum and derived a theoretical expression for the irradiance fluctuation under a weak turbulence regime.\newline
An experimental study conducted at the Piraeus Port (Greece) proposed a novel empirical attenuation model dependent on three parameters: the relative humidity, the atmospheric temperature, and the wind speed \cite{PiraeusModel}. The experimental validation was based on a $\sim$3 km FSO link operating at the 850 nm wavelength with transceivers fixed 35 m above sea level. The experiment reported in \cite{JuarezLSC10} revealed that daytime atmospheric turbulence is stronger than nighttime turbulence. \newline
Li \textit{et al.} investigated the BER performance of a coherent FSO system employing quadrature phase shift keying (QPSK) modulation subject to maritime atmospheric turbulence and considered compensating the turbulence distortions using an AO unit \cite{CvijeticAO15}. The authors showed that using AO to compensate for beam distortions can improve the BER performance by several orders of magnitude \cite{CvijeticAO15}. \newline
Further studies investigated the propagation of light beams with complex structures through oceanic environments \cite{ZhuJOSAA16}. Note that spatially structured light beams are used to increase the transmission capacity by multiplexing multiple orthogonal light modes in the same beam \cite{TrichiliComst19}. The authors of \cite{ZhuJOSAA16} studied the propagation dynamics of partially coherent modified Bessel-Gaussian beams (which carry OAM) in an anisotropic non-Kolmogorov maritime atmosphere and derived an analytical formula for the evolution of the powers of the received beams.\newline
\subsubsection{Hybrid Models}
Experimental investigations revealed that RF and FSO are affected differently by weather conditions \cite{NadeemJSAC09,KimSpie11}. For example, FSO links are highly sensitive to fog and snow but resilient to rain. In contrast, RF links are resilient to fog and snow but severely degraded by rain. Nadeem \textit{et al.} studied the impact of dense maritime fog (with measurements conducted in a coastal city in France) on an FSO link operating at 850 nm installed as the primary link, with an RF backup link operating at a frequency of 40 GHz \cite{NadeemJSAC09}. The authors reported a significant deterioration of the FSO link, with an attenuation of 480 dB/km. However, $100\%$ availability of the hybrid system was reached thanks to the low attenuation of the RF signals at 40 GHz. Gregory and Badri-Hoehe conducted a 6-month measurement campaign on a 14-km long RF/FSO hybrid link with the FSO system operating at a 1550-nm wavelength and the RF system operating at 38 GHz \cite{Gregorycharacterization11}. The main motivation of the work was to correlate the hybrid link results with the weather conditions measured simultaneously. The RF link exhibited more than $99\%$ availability over the measurement period and was mainly degraded by rain. The FSO link was affected primarily by fog, leading to severe attenuation, especially in the daytime. The authors also provided the maximum and average values of the Fried parameter, a measure of the coherence length, collected over the measurement period to study the effect of scintillation; such values are equally helpful in applying turbulence mitigation strategies such as MIMO FSO. In a MIMO FSO configuration, transceivers separated by the Fried parameter $r_{0}$ can ensure diversity, and each path can experience different turbulence effects.
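A minimal sketch of this primary/backup behavior: FSO serves as the primary link, and the RF link takes over once weather-induced attenuation erases the FSO margin. All link-budget numbers (powers, sensitivities, attenuations) are illustrative assumptions, not values from the cited campaigns:

```python
def link_margin_db(tx_power_dbm, sensitivity_dbm, path_loss_db):
    """Simple link margin: transmit power minus loss minus receiver sensitivity."""
    return tx_power_dbm - path_loss_db - sensitivity_dbm

def select_link(fog_att_db_per_km, rain_att_db_per_km, distance_km=1.0):
    """Prefer FSO; fall back to RF when the FSO margin goes negative."""
    fso_margin = link_margin_db(20.0, -30.0, fog_att_db_per_km * distance_km)
    rf_margin = link_margin_db(20.0, -70.0, rain_att_db_per_km * distance_km)
    if fso_margin > 0.0:
        return "FSO"
    return "RF" if rf_margin > 0.0 else "outage"

print(select_link(fog_att_db_per_km=10.0, rain_att_db_per_km=2.0))    # clear air
print(select_link(fog_att_db_per_km=480.0, rain_att_db_per_km=2.0))   # dense fog
```

With the 480 dB/km fog attenuation reported in \cite{NadeemJSAC09}, even a short FSO hop is wiped out while the 40 GHz RF margin survives, which is the mechanism behind the hybrid system's availability.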
\subsection{Modulation and Coding Schemes}
Besides channel modeling, modulation and coding schemes are other essential physical-layer issues for maritime networks. This section first discusses the modulation and coding techniques used in RF- and FSO-based maritime communication systems. We then discuss coverage and capacity issues, radio resource management, and energy efficiency.
\subsubsection{RF-based schemes}
Multiple modulation and coding techniques have been proposed in theoretical and experimental reports in the literature \cite{lazaro2019vhf,gamache1999oceanographic,hagenauer1982data}. For instance, the authors of \cite{lazaro2019vhf} proposed adaptive coding and modulation (ACM) for the VHF-band-based VDES (see Section \ref{Subsec:RFTech}). ACM consists of dynamically changing the modulation format and the coding rate according to the experienced signal-to-noise ratio (SNR). When a ship is far from shore, the received signal may be weak and only slightly above the background thermal noise, leading to a low SNR \cite{lazaro2019vhf}. In such a case, the communication should involve a robust modulation such as $\pi/4$-QPSK and a channel code of rate $1/2$. When the ship is close to shore, leading to a higher SNR, 16-quadrature amplitude modulation (QAM) with a rate-$3/4$ channel code may be used \cite{lazaro2019vhf}. The author of \cite{gamache1999oceanographic} uses the oceanographic data link (ODL) system, a two-way connection consisting of a forward link (from the hub to the terminal) and a return link (from the terminal to the hub). This bidirectional feature enables dynamic experimentation and remote sensor-system monitoring and control. The ODL architecture offers several multi-access methods, including direct-sequence spreading to avoid interference from neighboring satellites, TDMA, FDMA, and CDMA. The ODL system's access protocols may be customized to a particular network and can handle thousands of users per MHz for oceanographic applications. In an earlier paper \cite{hagenauer1982data}, the authors utilized a stored-channel method, simulating three different links, i.e., satellite-to-ship, buoy-to-satellite, and base station-to-land mobile.
The satellite-to-ship link uses a binary phase shift keying modulation scheme, whereas the base station-to-land mobile link utilizes a variety of modulation schemes such as PSK, differential phase shift keying (DPSK)/FM, PSK/FM, and digital FM.\newline
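The SNR-driven mode switching described above for the VDES ACM scheme can be sketched as a simple lookup. The SNR threshold below is an illustrative assumption; the cited work defines its own switching points:

```python
def select_acm_mode(snr_db, threshold_db=12.0):
    """Return (modulation, code_rate): a robust pi/4-QPSK rate-1/2 mode at
    low SNR (ship far from shore) and 16-QAM rate-3/4 at high SNR (ship
    near shore). The threshold is an assumed value for illustration."""
    if snr_db < threshold_db:
        return ("pi/4-QPSK", 0.5)
    return ("16-QAM", 0.75)

print(select_acm_mode(5.0))   # far from shore, low SNR
print(select_acm_mode(18.0))  # near shore, high SNR
```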
\subsubsection{Optical Related Schemes}
There are two types of FSO systems: intensity modulation/direct detection (IM/DD) and coherent systems. IM/DD systems are relatively simple and consist of modulating the intensity of a laser and directly detecting the light signal at the receiver with a photodetector. IM/DD can only support unipolar modulation, including on-off keying (OOK), pulse amplitude modulation (PAM), and pulse position modulation (PPM). Coherent systems provide phase tracking through a so-called local oscillator (LO), enabling information to be encoded with complex multilevel modulation formats like QPSK and M-ary QAM. At the receiver, the incoming information-carrying signal is mixed with the LO. In general, coherent systems offer better background- and shot-noise resilience than IM/DD and support higher data rates, but at the expense of cost and complexity.\newline
There are two types of coherent detection depending on the frequency of the LO: homodyne and heterodyne. In homodyne detection, the LO frequency matches the frequency of the laser, while in heterodyne detection, mixing the LO light with the information signal yields a signal in the microwave region. The demonstration reported in 1977 is an example of coherent heterodyne maritime optical communication with FM modulation \cite{GiannarisSPIE77}. In 2005, a 5.62 Gbps homodyne FSO transmission incorporating BPSK modulation was conducted between two Canary Islands (ground stations on La Palma and Tenerife) separated by 142 km, mostly above the sea \cite{CanaryTransmission}. The use of BPSK modulation exhibited robustness to atmospheric conditions \cite{CanaryTransmission}. The authors of \cite{MingLiAO15} evaluated the effect of maritime turbulence on a QPSK coherent system and found that a maritime FSO system has a higher BER than a terrestrial FSO system experiencing the same turbulence strength.\newline
The authors of \cite{qiao2021performance} proposed the use of DPSK modulation with repetition coding, which consists of sending the same message several times to benefit from time diversity. When compared with other modulation formats (PPM, PAM, OOK, and QPSK) through BER simulation analyses, the authors found that DPSK provides a good compromise between long distance and system capacity. DPSK can equally resolve the phase-ambiguity condition of BPSK modulation \cite{qiao2021performance}. It was also found that increasing the number of repetitions can significantly suppress the BER without the aperture averaging commonly used at the receiver to undo the effect of atmospheric turbulence \cite{qiao2021performance}.
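The time-diversity gain of repetition coding can be seen with a toy Monte-Carlo model. The sketch below abstracts the DPSK/turbulence channel of the cited work into a binary symmetric channel with independent crossovers, a simplifying assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_with_repetition(p, r, n_bits=200_000, rng=rng):
    """Empirical BER when each bit is sent r times over independent channel
    uses with crossover probability p and the receiver majority-votes."""
    bits = rng.integers(0, 2, size=n_bits)
    flips = rng.random((r, n_bits)) < p                   # independent error events
    received = bits[None, :] ^ flips
    decided = (received.sum(axis=0) * 2 > r).astype(int)  # majority vote (r odd)
    return float(np.mean(decided != bits))

b1 = ber_with_repetition(0.1, 1)   # ~0.1 (no repetition)
b3 = ber_with_repetition(0.1, 3)   # ~3p^2 - 2p^3 = 0.028
b5 = ber_with_repetition(0.1, 5)   # smaller still
print(b1, b3, b5)
```

The steep drop from `b1` to `b5` illustrates why repetition can substitute for aperture averaging, at the cost of throughput.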
Another way of modulation in FSO links is through the use of MRRs as reported in \cite{MooreSPIE02,RabinovichSPIE05,BurrisSPIE09} and discussed in Section \ref{Subsec:FSO}.\newline
\indent In the case of an MRR-based FSO system, the modulation of the beam emitted by the laser is conducted at the MRR through a modulator, and the modulated light beam is reflected back to a receiver through a retro-reflector \cite{RabinovichSPIE05} (see Fig.~\ref{Fig:MRRModulation}). We stress that the MRR is a passive component and does not emit any light. The use of MRRs is not restricted to maritime FSO experiments; they have been widely used in terrestrial and Earth-satellite lasercom links for link characterization and ranging purposes (see \cite{RabinovichSPIE05} and references therein). The modulation at the MRR can be ensured using different technologies such as liquid crystals \cite{FLC} and quantum wells \cite{RabinovichSPIE05}. \newline
\subsection{Coverage and Capacity}
Coverage and capacity are two further physical-layer performance metrics for maritime communication networks on which numerous studies exist. A recent research effort in Korea, LTE-Maritime, confirms that LTE can provide around 100 km of coverage with high data rates and throughputs \cite{jo2018validation}. A new maritime system called MarCom, presented in \cite{kim2009application, bekkadal2009emerging}, also tries to extend maritime communications coverage. Another maritime communications system, called PACTOR-III, uses the MF/HF band with an average coverage of a 4,000 to 40,000 square-kilometer region. In addition, as part of the harmonization of marine D-VHF services, a considerably more spectrally efficient solution has been developed, with second-generation D-VHF boosting the capacity by a factor of 3-10 \cite{ITU-RM.1842, chang2004development}. Recently, the authors of \cite{wei2020environment} proposed a non-convex optimization solution to maximize the ergodic sum capacity of the mobile terminals (MTs) while providing satellite terminals (STs) with guaranteed QoS in maritime networks.
In the case of FSO, there are several approaches to increasing the coverage and capacity of the optical band. The FSO LOS is restricted by the curvature of the Earth but may reach distances of more than 30 km for typical ships. For instance, as reported in Table \ref{tab:FSODemonstrations}, voice communications can range up to 35 km, while operational chat messaging can reach a maximum LOS of 45 km \cite{TW17Demonstration}. In terms of capacity, a maximum data rate of 10 Gbit/s was reported in \cite{JuarezLSC10}, and 7.5 Gbps was achieved between two moving vessels \cite{TW17Demonstration}.
\subsection{Radio Resource Management}
Like in terrestrial wireless communication systems, RRM is essential for maritime communication networks to manage the radio resources and other transmission characteristics at the system level. In \cite{mroueh2015radio}, the authors proposed a maritime mobile ad-hoc network (MANET) with LTE nodes, where the nodes represent ships forming a naval fleet headed by a shipmaster. This naval fleet is treated as a cluster in the MANET, where a ship crew is referred to as a cluster node (CN) and the shipmaster as the cluster head (CH) \cite{yu2005survey}. The allocated bandwidth is then optimized at the CH to serve all CNs while reducing the chance of the CH running out of radio resources \cite{mroueh2015radio}. In summary, this study aims to calculate the bandwidth that must be provisioned at the CH to accommodate the traffic of all active CNs. The authors considered two transmission schemes: (a) a single-input single-output (SISO) configuration and (b) a 2x2 MIMO setup with properly spaced antennas. They demonstrated that, compared with the SISO scenario, 2x2 MIMO spatial multiplexing with a full-diversity transmission mode for long-distance marine communication enhances the average network spectral efficiency and the resource outage probability. Because of the complexity and high computational cost of the calculation in \cite{mroueh2015radio}, the authors also investigated the same problem taking the ITU marine path-loss model \cite{itu2013method} at 10\% of time as a reference \cite{kessab2016impact}. In \cite{duan2019joint}, the authors proposed using MIMO antennas with a coastal two-hop relaying system to manage and communicate with users and boats in the offshore region. Although the boats in the offshore region are few, they may be grouped into many clusters consisting of multiple vessels close together. The numerical results show that the algorithm is power-efficient, which fits the challenges faced in maritime communication.
Additionally, it is worth noting that some recent research works use USVs in their formulations as a means of increasing power efficiency and coverage \cite{zeng2021joint, zeng2021joint1}.
\subsection{Energy Efficiency}
Aside from boosting the maximum attainable throughput, researchers also focus on energy saving to provide sustainable solutions for maritime networks. For instance, a software-defined networking (SDN) solution may reduce network latency and energy consumption while ensuring network flexibility and stability. In this context, delay-tolerant networking (DTN) can be utilized as a scheduling mechanism in maritime communication with SDN as the controller, together offering a cost-effective solution \cite{yang2021efficient}. Altman \textit{et al.} presented an energy allocation strategy to regulate node sleeping and message forwarding to maximize the energy allocation and utility of the DTN, with an optimization approach similar to duty-cycle optimization \cite{altman2010optimal, altman2012combined}. Yang \textit{et al.} modeled DTN and SDN as a trade-off optimization problem and showed that modifying various factors could achieve a delay-energy trade-off \cite{yang2021efficient}. \newline
Besides, self-powered devices can play a crucial role in developing energy-efficient maritime communication networks. Maritime devices may harvest energy from many resources accessible at sea, such as wind, sun, and sea waves. Self-powered maritime networks can enable many applications, such as temperature sensing, water quality monitoring, and position tracking \cite{de2020toward, wang2017new, bai2019high, chandrasekhar2020fully, yang2012nanowire}.
\section{Use Cases of Maritime Communication Networks}
\label{IoT section}
This section presents some of the emerging use cases of maritime networks, such as the Internet of Ships (IoS), maritime IoT, and ship-to-underwater IoT.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth] {img/maritimeIoT.jpg}
\caption {IoT categories and uses in maritime communications}
\label{Fig:IoTuses}
\end{figure}
\subsection{Internet-of-Ships Paradigm}
\label{Sec:IoSParadigm}
Given the anticipated economic and social advantages of IoT networks, the autonomous control of marine services can also bring new services. In a maritime network, the nodes participating in the IoT setup are the devices of the network, such as ships and buoys, leading to the Internet of Ships (IoS) paradigm \cite{martelli2020internet}. The IoS enables the coordination of node computation through high-level virtualization of the core network, where machine learning and artificial intelligence approaches are used to perform computational jobs linked to forecasting analysis.
According to \cite{liu2016internet}, the concept of IoS in shipbuilding might have a significant influence on ship construction and operation, with a wide range of future uses.
A more comprehensive study of the IoS is performed in \cite{aslam2020internet}, which examines three significant IoS subject areas: intelligent vessels, smart ports, and transportation. Reference \cite{aslam2020internet} also discusses the design, cores, and qualities that make the IoS unique and set it apart from other conventional IoT-based solutions. The IoS paradigm employs the AIS to establish marine operational capabilities and analyze maritime mobility. Nevertheless, effectively extracting salient data from AIS datasets is usually a time-consuming and challenging task for maritime authorities and analysts. Hence, the authors of \cite{he2017internet} proposed a new method to extract essential data from the AIS dataset using a rule-based reasoning technique. They also performed experiments on the Yangtze River (China), indicating that the suggested approach can extract data with great precision.
\subsection{Maritime IoT}
Over the last few years, numerous research groups have conducted maritime IoT projects. A wireless sensor network (WSN) for marine environment monitoring based on ZigBee technology was designed and deployed in the Mar Menor coastal lagoon (Spain) \cite{perez2011system,PerezJOE17}. The network is composed of four sensor nodes, each consisting of a solar-powered buoy equipped with air temperature and pressure sensors. The collected information is transmitted to a base station 30 km away \cite{PerezJOE17}.\newline
Within the BLUECOM+ project (previously presented in Section \ref{StdWirelessAccessNet}), the authors of \cite{ferreira2017autonomous} reported a series of oceanic-life monitoring trials using autonomous robotic systems and sensors fixed on a drifter buoy. The data gathered during the tests, conducted in the Atlantic Ocean, was transmitted over several kilometers using the Helikite-based network. \newline
Collecting information from IoT maritime sensor nodes can also be done through UAVs \cite{HuICCT18}. However, this method can be constrained by the UAVs' battery lifetime, especially when the collection head node gathering the data from the sensor nodes is mobile. The authors of \cite{HuICCT18} proposed a routing maintenance approach for mobile maritime sensor networks based on a ring broadcast mechanism, which finds the optimal path from the sensor nodes to the collection head nodes and from the collection node to the nearest UAV.
\newline
In addition to oceanic data collection and observation, installing IoT devices inside shipping containers can enable the monitoring of shipments sensitive to temperature or humidity, for example. The authors of \cite{salah2020iot} designed and implemented a smart-container prototype for shipping sensitive items with remote monitoring and location-tracking capabilities. \newline
Nonetheless, IoT-based solutions have a broad range of applications in marine environments, as seen in Fig. \ref{Fig:IoTuses}. For instance, consider a smart port, which enables authorities to provide their customers with more reliable information and innovative services \cite{yang2018internet}. Also, maritime IoT applications can serve in other situations like weather prediction, pollution control, and oil platform monitoring \cite{adiono2021internet, sai2020oil}. A summary of some recent IoT projects and system deployments for maritime environments monitoring is given in Table \ref{Tab:IoTbasedProjects}.
\subsection{Ship-to-Underwater IoT}
Providing Internet access underwater is essential for oil exploration and monitoring sub-aquatic environments, among other industrial applications. Ships and buoys can act as base stations, providing Internet access and gathering data from underwater sensors. The structure and functions of the Internet of Underwater Things (IoUT) are similar to those of the IoT on land. However, IoT-based technology cannot be directly deployed underwater, because sea waves and the harsh environment limit the coverage and durability of state-of-the-art IoT devices. An overview of the IoUT and its challenges is given in \cite{kao2017comprehensive, qiu2019underwater}. Many communication technologies with varying degrees of capability are available to establish IoUT networks, including the acoustic band \cite{akyildiz2004challenges, sendra2015underwater}, optical wireless communication \cite{jamali2016performance, zeng2016survey}, magnetic induction communication \cite{domingo2012magnetic, akyildiz2015realizing, Khalil2021}, and hybrid technologies \cite{Celik2022, saeed2017energy}. Each of these technologies has its pros and cons. For instance, acoustic communication can transmit signals over long distances but is constrained by limited bandwidth and a lack of stealth. The use of the optical band for IoUT networks benefits from the broad bandwidth, particularly around the blue-green region of the visible light spectrum; however, underwater optical communication can be strongly affected by turbulence. The transmission stability of magnetic induction communication is better than that of optical and acoustic communications.
\begin{table*}
\caption{IoT-based projects for maritime environments monitoring}
\centering
\begin{tabular}{|p{1cm}|p{3cm}|p{3.4cm}|p{7.5cm}|}
\hline
Ref. & Location & Communication Technology & Specifications \\
\hline
{\cite{perez2011system}} & Mar Menor Golf (Spain) & ZigBee/GPRS &A WSN composed of four buoys deployed to record data on water temperature, pressure, and salinity, among other parameters. \\
\hline
{\cite{ferreira2017autonomous}} & Atlantic Ocean (Portugal) & IEEE 802.11a/b/g/n and GPRS/UMTS/LTE & Sea trials on connecting autonomous surface vehicles and autonomous underwater vehicles using a helikite-based BLUECOM+ network. \\
\hline
{\cite{al2018building}}&North Sea (UK)& VHF & An IoT network architecture is proposed for collecting data from marine sensors installed on ships. The gathered data is forwarded to onshore base stations. \\
\hline
{\cite{mourya2018ocean}}&(Not Deployed)& Acoustic &A framework for oceanic spatio-temporal monitoring using acoustic sensor networks collecting underwater physical parameters (temperature, salinity, oxygen level, etc.) is proposed. \\
\hline
{\cite{morozs2018robust}}& Fort William (UK)&Acoustic/TDA-MAC &A low-cost underwater acoustic sensor network deployment incorporating the TDA-MAC protocol for data collection. Several modifications were made to the TDA-MAC protocol to make it more robust in real-world deployments.\\
\hline
\cite{adamo2014smart}&Adriatic sea (Italy) & GPRS & Deployment of an acoustic sensor network for water quality monitoring by measuring chlorophyll concentration together with other water physical parameters (temperature, turbidity, and salinity). \\
\hline
\cite{regan2009demonstration}&River Lee, Cork (Ireland)& ZigBee &A multi-sensor system was deployed for real-time water quality monitoring (providing readings of water temperature, pH, oxygen level, and turbidity). \\
\hline
\cite{seders2007lakenet}& Notre Dame (US) & Unlicensed UHF (433 MHz)&A sensing network, known as LakeNet, composed of 8 sensor pods was deployed in a lake to monitor the water parameters.\\
\hline
\cite{jin2010novel}& Designed in China but not deployed & ZigBee/GPRS &A multi-sensor architecture to measure water parameters was proposed. \\
\hline
\end{tabular}
\label{Tab:IoTbasedProjects}
\end{table*}
\section{Challenges and Future Research Directions}
\label{Sec:Challenges}
Data acquired in the maritime sector could be inadequate, imprecise, or untrustworthy at specific periods or places due to the continuous mobility of ships. Naval vessels, for instance, are sometimes not linked to offer real-time data, and data may be dropped or interrupted as a result of a bad connection. Such realities often obstruct the marine industry's ability to make timely and informed decisions. The shipping sector will need to adopt new communication and data collection technologies to meet these critical problems \cite{aslam2020internet}. The majority of ship-to-ship and ship-to-shore communication is carried out through satellite links \cite{hu2010applications}. However, satellite connections are costly and incur significant communication delays due to the large communication distance. Nevertheless, owing to the growing nature of marine applications, maritime network infrastructure is necessary to enable worldwide connectivity for ships, primarily across open seas and in the most distant parts of the planet, such as the northern latitudes \cite{xia2019satellite}. Given the importance of maritime networks, technological advancements have been made recently, as explained in the preceding sections; however, many research areas remain open. In this section, we present the major future research directions as follows.
\subsection{Safety and Security}
Maritime transportation is a safety-critical activity, but there is no standardized strategy in terms of cyber security in place \cite{jensen2015challenges}. It is also challenging to set cybersecurity standards in a short time in the maritime industry due to the lack of technical expertise in maritime IT departments and because a single shipping line could involve multiple entities in different locations. Cybersecurity attacks in shipping lines might result in severe outcomes, including maritime accidents and paralyzing supply chains. With the emergence of autonomous vessels, the impact of cyber attacks could lead to the worst consequences \cite{SilverajanAsiaJCIS}. For all these reasons, safety-critical network standards must be incorporated to improve the security for maritime networks \cite{wang2015big}.
\subsection{Resource Allocation}
Communication throughput is not the only important metric in maritime networks; other important performance factors include power consumption, delay, and security. All these performance metrics must be managed and allocated efficiently in different situations in the open sea. In the same context, the resource allocation challenges will grow as sea and ocean activities increase. One possible approach is to use artificial intelligence for resource allocation. For example, researchers in \cite{yang2020ai} discussed artificial intelligence-based algorithms and techniques in service-oriented maritime networks with real-time case studies. They also covered the functions of parallel networking and showed its importance for resource allocation and heterogeneous network preservation.
Moreover, 6G technology is expected to aid in advancing resource allocation in marine communication. For example, in \cite{xia2020maritime}, the author discusses the IoT use case in maritime communication and how both AIS and ASM can perform distributed resource allocation.
\subsection{Bringing Broadband Cellular Connectivity to Deep Sea}
Regular phones cannot connect to terrestrial cellular broadband networks when far offshore. User mobile devices operating on terrestrial networks cannot directly connect to satellites; to connect to a non-terrestrial network (NTN), proprietary user equipment (UE) or a satellite receiving terminal is needed. Connecting UEs, such as 5G ones, to an NTN is possible, but only after coping with a wide range of challenges \cite{SatelliteUE}, such as the requirement for low latency. Connecting to a GEO satellite with a fixed coverage leads to at least 240 ms of round-trip latency. For this reason, relying on LEO satellites orbiting a few hundred kilometers above Earth can satisfy the low-latency requirement.
In contrast to GEO satellites, LEO satellites exhibit substantial coverage variations in time and space (\cite{SatelliteUE}, and references therein). Each UE needs to be handed over to another satellite every few seconds. LEO satellites also travel at much higher speeds than vessels, creating a significant Doppler effect.
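The latency figures above can be reproduced with a propagation-only estimate. The sketch below assumes a nadir pass and an illustrative LEO altitude of 550 km; processing and queuing delays are ignored.

```python
# Round-trip propagation latency to GEO vs LEO satellites
# (free-space geometry only; processing/queuing delays ignored).
C = 3.0e8              # speed of light, m/s
GEO_ALT_M = 35_786e3   # geostationary altitude
LEO_ALT_M = 550e3      # illustrative low-LEO shell (assumption)

def round_trip_ms(altitude_m):
    """UE -> satellite -> UE round trip at nadir, in milliseconds."""
    return 2 * altitude_m / C * 1e3

print(f"GEO: {round_trip_ms(GEO_ALT_M):.0f} ms")  # ~239 ms
print(f"LEO: {round_trip_ms(LEO_ALT_M):.1f} ms")  # ~3.7 ms
```

Slant paths and inter-satellite routing add to these floors, but the two-orders-of-magnitude gap between GEO and LEO remains.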
\subsection{On-board VLC Communication}
VLC is maturing rapidly and can potentially be part of future sixth-generation (6G) technology and beyond. VLC is an unlicensed technology that can co-exist with the lighting infrastructure. The coverage of light-emitting diodes (LEDs) used as VLC sources is restricted to the illuminated users, making this technology secure from eavesdroppers in neighboring rooms. VLC is also immune to electromagnetic interference with RF terminals. VLC, or data transfer through illumination, can find potential use in on-board maritime communication. Given the progress in underwater wireless optical communication (UWOC) that operates with wavelengths in the visible spectrum \cite{NasirUWOCSurvey19}, using VLC in maritime communication can allow vessel communication with divers and remotely operated underwater vehicles (particularly when lasers are used as light sources). The main issue in this scenario will be fulfilling the pointing, acquisition, and tracking (PAT) requirements, particularly given the random movements of the air-water interface. Recently developed solutions based on scintillating fibers to relieve the PAT requirements of UWOC can be adopted for air-water (vessel-underwater) links \cite{SaitOPEX21}. Receivers with an enlarged field of view can be equally useful in similar situations where alignment is an issue \cite{AlkhazragiOL21}. Another feature that on-board VLC can enable is simultaneous lightwave information and power transfer (SLIPT) \cite{DiamantoulakisIEEETGCN18}. The illuminated user (i.e., under the LED coverage) can be charged by the information-carrying light signals, extending the working time of battery-powered on-board IoT devices.
\subsection{A Room for THz Communication?}
The use of the THz band is envisioned as one of the enabling technologies of the upcoming 6G era \cite{SaadIEEENet20}. THz signals are strongly absorbed by water vapor in the atmosphere, making this band unsuitable for relatively long-range ship-to-ship and shore-to-ship applications. However, THz can benefit on-board applications requiring data rates higher than what microwave and lower RF frequency technologies can provide. More importantly, THz can open the door to sensing on top of communication, as it can be used for metal and gas sensing applications (\cite{THzCommag20}, and references therein). Potential THz maritime sensing use cases include detecting leaks of chemical or biological materials on-board or around a vessel or an offshore oil platform. In addition to sensing, the THz band can enable imaging and localization capabilities using small-footprint antennas.
The use of intelligent reflecting surfaces (IRS) can facilitate the integration of these applications on top of communication. IRS can also help improve the NLoS coverage of THz links, which are otherwise dominated by a strong LoS component.
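Even before molecular absorption enters, free-space spreading loss alone suggests why THz fits short-range on-board links better than long-range ship-to-shore ones. A minimal sketch of the Friis free-space path loss, with illustrative frequencies (300 GHz versus 3 GHz) over an assumed 10 m on-board link:

```python
import math

def fspl_db(distance_m, freq_hz, c=3.0e8):
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

thz = fspl_db(10, 300e9)  # ~102 dB at 300 GHz over 10 m
uwv = fspl_db(10, 3e9)    # ~62 dB at 3 GHz over the same distance
print(f"300 GHz: {thz:.1f} dB, 3 GHz: {uwv:.1f} dB")  # 40 dB apart
```

The 40 dB gap (a factor of 100 in frequency) comes on top of the water-vapor absorption peaks specific to the THz band.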
\subsection{Data-Driven Channel Modeling}
There has been tremendous progress in learning-based modeling in RF and optical wireless communication. A generative adversarial network (GAN)-based approach can be helpful in RF and optical maritime channel modeling. GANs are machine learning frameworks that learn to generate new data with the same statistics as their training sets \cite{GANCommag}. Involving GANs in maritime RF and FSO channel modeling can ease complex theoretical modeling and cover simulation scenarios beyond those studied in the literature through theoretical modeling and experimental measurements. Data-driven modeling can also be useful for THz communication.
\subsection{Inter-Medium Communications}
In maritime communication, links across two different mediums (e.g., from water to air or vice versa) are equally important to model. A few recent studies investigate inter-medium communication using relays or direct links in maritime networks. In the case of relays, the relay gives access to the other medium in a decode-and-forward (DF) fashion. For example, in \cite{ji2021photoacoustic} the authors use a photoacoustic device, which receives information from the air via a laser beam and forwards the signal acoustically to the underwater receiver. In \cite{rhodes2011underwater}, buoys serve as the point of contact between the air and underwater mediums, with electrically insulated magnetically coupled antennas and RF transceivers mounted on the bottom and the top of the buoys, respectively. In the case of direct optical links, the diffused light in low-turbidity seawater can be detected by a photo-detector from underwater \cite{sun2019realization}; the same method can also be used for underwater-to-air communication \cite{chen2021underwater}. A second way of establishing direct links is vibration detection, where acoustic waves from the underwater environment vibrate the sea surface, which is detected via radar or a laser Doppler vibrometer \cite{tonolini2018networking}.
Since inter-medium communication involves two different mediums with different fading effects (through air and underwater), the combined channel impairments may adversely impact data rates and increase the outage probability. Hence, inter-medium communication for maritime networks is still not a well-explored area and needs further investigation for different technologies, including optical, acoustic, and magnetic induction.
\section{Conclusions}
This article provides a state-of-the-art survey on maritime communications. We first provided an overview of maritime communication technologies based on multiple radio bands and the optical spectrum. Different channel models for radio and optical wireless maritime links were studied. We also categorized the channel models depending on radio link communication scenarios and on the weather conditions in free-space optics. We further covered different aspects of maritime networks, including modulation and coding schemes, radio resource management, coverage and capacity, and energy efficiency. Moreover, we presented some major use cases of IoT-related maritime networks, such as IoS and ship-to-underwater IoT. Compared to terrestrial communication, maritime communication networks still lack high-speed links and are, most of the time, limited to the exchange of navigational information and critical data. Bringing broadband connectivity to sea is identified as an open challenge that requires further efforts. We finally discussed exciting research problems, such as incorporating the optical and THz spectra in on-board applications, data-driven maritime channel modeling, safety and security, and inter-medium communications. We believe that this article provides valuable insights for maritime communications researchers in academia and industry and contributes to UN Sustainable Development Goal 14 (``To conserve and sustainably use the oceans, seas and marine resources for sustainable development'').
\label{Sec:Conclusion}
\section*{List of Acronyms}
\noindent 5G: 5th Generation\\
\noindent 6G: 6th Generation\\
\noindent AIS: Automatic Identification System\\
\noindent AO: Adaptive Optics\\
\noindent ASM: Application Specific Messages\\
\noindent BER: Bit Error Rate\\
\noindent BS: Base Station\\
\noindent CH: Cluster Head\\
\noindent CN: Cluster Node\\
\noindent DPSK: Differential Phase Shift Keying\\
\noindent DSC: Digital Selective Calling\\
\noindent DTN: Delay-Tolerant Networking\\
\noindent FDTD: Finite Difference Time Domain\\
\noindent FM: Frequency Modulation\\
\noindent FSO: Free Space Optics\\
\noindent GAN: Generative Adversarial Network\\
\noindent GEO: Geostationary Earth Orbit\\
\noindent GMSK: Gaussian Minimum Shift Keying\\
\noindent GNSS: Global Navigation Satellite System\\
\noindent GPRS: General Packet Radio Service\\
\noindent GPS: Global Positioning System\\
\noindent HAPS: High-Altitude Platform Station\\
\noindent HF: High Frequency\\
\noindent IM/DD: Intensity Modulation/Direct Detection\\
\noindent IMO: International Maritime Organization\\
\noindent IoS: Internet-of-Ships\\
\noindent IoT: Internet of Things\\
\noindent IoUT: Internet of Underwater Things\\
\noindent IRS: Intelligent Reflecting Surface\\
\noindent ITU: International Telecommunication Union\\
\noindent LED: Light Emitting Diode\\
\noindent LEO: Low Earth Orbit\\
\noindent LO: Local Oscillator\\
\noindent LoS: Line-of-Sight\\
\noindent LTE: Long-Term Evolution\\
\noindent MagicNet: Maritime Giant Cellular Network\\
\noindent MF: Medium Frequency\\
\noindent MIMO: Multi-Input-Multi-Output\\
\noindent MMR: Modulating Retro-Reflector\\
\noindent MMSI: Maritime Mobile Service Identity\\
\noindent mmWave: millimeter wave\\
\noindent NAVDAT: Navigation Data\\
\noindent NAVTEX: Navigational TEleX\\
\noindent NLoS: Non Line-of-Sight\\
\noindent OAM: Orbital Angular Momentum\\
\noindent OOK: On-Off Keying\\
\noindent PE: Parabolic Equation\\
\noindent PPM: Pulse Position Modulation\\
\noindent QAM: Quadrature Amplitude Modulation\\
\noindent QoS: Quality of Service\\
\noindent QPSK: Quadrature Phase Shift Keying\\
\noindent PAM: Pulse Amplitude Modulation\\
\noindent PAT: Pointing, Acquisition, and Tracking\\
\noindent RF: Radio Frequency\\
\noindent RRH: Remote Radio Head\\
\noindent RRM: Radio Resource Management\\
\noindent SDN: Software-Defined Networking\\
\noindent SISO: Single-Input-Single-Output\\
\noindent SLIPT: Simultaneous Lightwave Information and Power Transfer\\
\noindent SOTDMA: Self-Organized Time-Division Multiple Access\\
\noindent SNR: Signal-to-Noise Ratio\\
\noindent TDA-MAC: Transmit Delay Allocation MAC\\
\noindent UAV: Unmanned Aerial Vehicle\\
\noindent UE: User Equipment\\
\noindent UHF: Ultra High Frequency\\
\noindent UMTS: Universal Mobile Telecommunications Systems\\
\noindent USV: Unmanned Surface Vehicle\\
\noindent UWOC: Underwater Wireless Optical Communication\\
\noindent VHF: Very High Frequency\\
\noindent VLC: Visible Light Communication\\
\noindent VSAT: Very Small Aperture Terminal\\
\noindent WSN: Wireless Sensor Network\\
\bibliographystyle{IEEEtran}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{conceptual_illustration8.pdf}
\caption{A high-level overview of our proposed method: the CA-UReID method disentangles the camera-agnostic feature from the original feature in an explicit way. Jointly optimized with camera-aware contrastive learning, our method can effectively reduce the intra-class distance and increase the inter-class distance, achieving high matching accuracy.}
\label{fig_conception}
\vspace{-10pt}
\end{figure}
Person re-identification (ReID) aims at retrieving images of the same person across cameras with non-overlapping views, which has important applications in the field of intelligent surveillance and public security.
In the past few years, researchers have paid increasing attention to this task and made great research progress.
However, existing high-performance methods mainly use the supervised learning framework with identity annotations, and the process of building such a dataset is time-consuming and laborious.
Besides, supervised models need to be retrained in each new scenario, which gives them poor scalability and is not conducive to practical deployment.
Therefore, some researchers have recently turned their attention to unsupervised person ReID (UReID)~\cite{DBLP:conf/aaai/HuangPJLX20, DBLP:conf/nips/Ge0C0L20,DBLP:journals/corr/abs-2103-16364, DBLP:conf/aaai/0001LHG021}, which has great potential to remove the dependence on manual annotation and has become an emerging research hotspot.
Existing unsupervised person ReID methods are mainly divided into two categories: unsupervised domain adaptive (UDA) methods and purely unsupervised learning (USL) methods.
UDA methods~\cite{DBLP:conf/aaai/HuangPJLX20,DBLP:conf/iclr/GeCL20,DBLP:conf/nips/Ge0C0L20,DBLP:conf/aaai/ZhengLZZZ21} first train the model on a supervised source-domain dataset to make the model have a certain discriminant ability for identities.
Then the training is carried out in the target domain without annotation to transfer the ability.
However, the performance of UDA methods is greatly affected by the quality and size of the annotated source domain datasets, which constrains their scalability.
In contrast, the purely unsupervised methods~\cite{DBLP:conf/aaai/0001LHG021,DBLP:journals/corr/abs-2103-16364}, which require only the unlabeled target dataset, are more flexible.
USL methods mainly rely on clustering to generate pseudo labels for supervised learning in the target dataset with the strategy of contrastive learning.
However, most USL methods~\cite{DBLP:conf/eccv/LiZG18, DBLP:conf/bmvc/ChenZG18, DBLP:conf/iccv/WuLYLLL19} only focus on enlarging the differences between classes and ignore the intra-class differences caused by camera style variance, including camera properties and environmental information.
Several methods~\cite{DBLP:journals/tist/TianTLTZF21,DBLP:conf/cvpr/XuanZ21, DBLP:conf/aaai/0001LHG021} that consider the influence of cameras are complicated and indirect.
Based on the above observations, we propose a novel purely unsupervised learning method, the camera-aware style separation and contrastive learning network (CA-UReID), which directly disentangles the camera style at the feature-map level and utilizes camera-aware contrastive learning to learn more discriminative features for each identity (Fig.\ref{fig_conception}).
Specifically, we first design a camera style separation (CSS) module that divides the extracted features into camera-specific and camera-agnostic parts with attention mechanism, so as to alleviate the impact of camera style variance on performance.
In the camera-specific branch, the disentangled attention module captures the camera style information under the guidance of the proposed camera separation loss.
Then, by using complementary attention maps, the camera-agnostic branch can enforce the network to focus on the persons, extracting more camera-invariant features with a compact intra-class distribution.
After the CSS module, we introduce a camera-aware contrastive center (CACC) loss into the optimization constraints to further narrow the gap between cameras within a class and increase the feature distance between different classes.
Moreover, our CACC loss constrains the sample set under the same camera rather than individual samples, which we verify to be a more stable and efficient strategy that is less easily affected by noisy samples.
Through joint optimization with the designed camera style separation module and camera-aware contrastive center loss, our method can learn more robust features of identities for the final retrieval.
\vspace{2pt}
Our main contributions are summarized as follows:
\vspace{-5pt}
\begin{itemize}
\item We propose a novel camera style separation module to explicitly disentangle camera-specific information from feature maps and reduce intra-class variance.
\vspace{-5pt}
\item We design a more robust camera-aware contrastive center loss, which further enhances camera-agnostic features at center level to get discriminative embeddings.
\vspace{-5pt}
\item Extensive experiments on mainstream datasets (Market1501~\cite{DBLP:conf/iccv/ZhengSTWWT15} and DukeMTMC-reID~\cite{DBLP:conf/eccv/RistaniSZCT16}) show superior performance of our proposed CA-UReID method.
\end{itemize}
\section{Related Work}
\textbf{Unsupervised Person Re-identification.}
Unsupervised person ReID methods can be categorized into the unsupervised domain adaptive (UDA) methods and the purely unsupervised learning (USL) methods.
The main difference between them is whether to use labeled source domain datasets.
As for the UDA methods, some of them use GAN~\cite{DBLP:conf/cvpr/Deng0YK0J18,DBLP:conf/cvpr/WeiZ0018} for style transfer to narrow the gap between the source domain and target domain.
Some other methods are designed to generate more reliable pseudo labels:
SpCL~\cite{DBLP:conf/nips/Ge0C0L20} introduces a cluster reliability criterion to retain reliable clusters and disassemble unreliable clusters to outlier instances.
DAAM~\cite{DBLP:conf/aaai/HuangPJLX20} assigns different weights to samples based on their distance from the class centers.
UNRN~\cite{DBLP:conf/aaai/ZhengLZZZ21} measures the reliability of pseudo labels by inconsistencies between the teacher network and the student network.
Compared with the UDA method, the USL method is more flexible and promising without manual annotations.
Some approaches explore special clustering methods, such as k-NN~\cite{DBLP:conf/eccv/LiZG18,DBLP:conf/bmvc/ChenZG18} and graphs~\cite{DBLP:conf/iccv/WuLYLLL19}, to generate pseudo-labels.
MMCL~\cite{DBLP:conf/cvpr/WangZ20a} uses a memory-based multi-label classification loss to update the model.
CycAs~\cite{DBLP:conf/eccv/WangZZLSLW20} learns pedestrian embedding from original videos to mine temporal information.
Most USL methods focus on the distance between classes but often ignore the intra-class variance caused by inconsistent camera styles.
In contrast, our USL-based CA-UReID method considers the effects of camera style, which effectively improves the consistency within the class.
\vspace{4pt}
\noindent\textbf{Research of Camera Style in UReID.}
Eliminating the effects of camera style has recently been highlighted by a few unsupervised ReID methods.
CIDC~\cite{DBLP:journals/tist/TianTLTZF21} method transfers samples to each camera via StarGAN~\cite{DBLP:conf/cvpr/ChoiCKH0C18}.
Some other methods use intra-camera and inter-camera combined training.
In the intra-camera training stage, a classifier is set under each camera, and all the classifiers share a backbone network.
In the inter-camera training stage, existing methods design different strategies to unify the knowledge learned from different cameras:
IICS~\cite{DBLP:conf/cvpr/XuanZ21} uses a combination of Instance Normalization (IN) and Batch Normalization (BN) to improve the generalization ability of classifiers.
CAP~\cite{DBLP:conf/aaai/0001LHG021} designs camera-based clusters to pull positive clusters and push nearest negative proxies.
MetaCam~\cite{DBLP:conf/cvpr/YangZLCLLS21} uses meta-learning to learn robust models for cameras.
Most of these methods have complex training processes, and their way of eliminating the influence of camera style is indirect.
Different from them, our proposed CA-UReID method separates camera style in an explicit way by disentangling camera-specific information in the feature space.
Moreover, compared with the existing CAP~\cite{DBLP:conf/aaai/0001LHG021} and ICE~\cite{DBLP:journals/corr/abs-2103-16364} methods,
our designed camera-aware contrastive loss considers the overall distribution in each camera-aware cluster, which is more robust with better performance.
\section{Method}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{overall_structure9.pdf}
\vspace{-18pt}
\caption{
The framework of our proposed method.
(a) The two-branch camera style separation module, including the camera-specific branch and the camera-agnostic branch.
(b) The pipeline of calculating the camera-aware contrastive center loss.
}
\vspace{-10pt}
\label{fig_overall_structure}
\end{figure*}
In this section, we introduce our camera-aware style separation and contrastive learning (CA-UReID) method in detail.
In Sec.~\ref{subsec_overview}, we first describe the overall architecture of CA-UReID, including the structure of network and the overall objective function during the training.
Then we introduce our proposed camera style separation module and camera-aware contrastive center loss in the next two subsections.
\subsection{Overall Structure} \label{subsec_overview}
As shown in Fig.~\ref{fig_overall_structure}, our method follows the memory bank-based framework, constructing an unsupervised learning network.
The overall process of CA-UReID mainly includes three stages: identities' feature extraction, two-branch camera style separation, and self-supervised contrastive learning.
In the feature extraction stage, let $\mathcal{X}=\{x_1,x_2,\dots,x_N\}$ denote the person ReID dataset, where $N$ is the number of samples.
Specifically, given an input sample $x_i$, we use ResNet-50 as the backbone to extract the corresponding feature map $F_{x_i}\in \mathbb{R}^{H \times W \times C}$,
where $H$, $W$, $C$ denote height, width and channel number respectively.
In the stage of two-branch camera style separation, the designed camera separation (Cam-Sep) attention module is used to disentangle the camera-specific feature map $F_{x_i}^{sp}$ and camera-agnostic feature map $F_{x_i}^{ag}$ from $F_{x_i}$.
The $F_{x_i}^{sp}$ is fed into the camera-specific branch under the constraints of the proposed camera separation loss $\mathcal{L}_{sep}$, and we use the $F_{x_i}^{ag}$ to generate the camera-agnostic 2048-dim feature embedding $f_i$ with the global average pooling (GAP) and batch normalization (BN) operations.
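The GAP operation that reduces $F_{x_i}^{ag}$ to the embedding $f_i$ amounts to a per-channel mean over spatial positions. A toy pure-Python sketch (the actual pipeline works on $H \times W \times 2048$ tensors, and the BN step is omitted here):

```python
def global_avg_pool(feature_map):
    """feature_map: nested list [H][W][C] -> C-dim list of channel means."""
    H, W, C = len(feature_map), len(feature_map[0]), len(feature_map[0][0])
    pooled = [0.0] * C
    for row in feature_map:
        for pixel in row:
            for ch, value in enumerate(pixel):
                pooled[ch] += value
    return [s / (H * W) for s in pooled]

fmap = [[[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]]]    # H=2, W=2, C=2
print(global_avg_pool(fmap))         # [4.0, 5.0]
```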
During the last stage of self-supervised contrastive learning, as shown in the upper part of Fig.~\ref{fig_overall_structure}, our method follows the framework based on memory bank like some previous methods~\cite{DBLP:conf/nips/Ge0C0L20}.
Before training, the memory bank is initialized with the embeddings extracted from the whole training set by using ImageNet-pretrained ResNet-50.
At the beginning of each epoch, the k-reciprocal-encoded Jaccard distance matrix of all pedestrian features in the bank is used for DBSCAN clustering to obtain pseudo labels $\mathcal{Y}=\{y_1,y_2,\dots,y_N\}$.
Based on the pseudo labels, our method can calculate the common InfoNCE loss~\cite{DBLP:journals/corr/abs-1807-03748} $\mathcal{L}_{base}$ and our proposed camera-aware contrastive center loss $\mathcal{L}_{cacc}$.
Let $\mathcal{M}$ ($\mathcal{M} \in \mathbb{R}^{d\times N}$) denotes the memory bank, where $d$ is the dimension of the embeddings.
At the end of each training iteration, extracted embeddings $f_i$ are used to update the memory bank $\mathcal{M}$ at the instance level with the momentum mechanism:
\vspace{-20pt}
\begin{eqnarray}
\mathcal{M}[i]\leftarrow \mu \cdot \mathcal{M}[i]+\left(1- \mu \right) \cdot f_{i}~,
\end{eqnarray}
\vspace{-19pt}
\noindent where $\mathcal{M}[i]$ is the $i$-th item in the memory bank, storing the updated feature of sample $x_i$, and $\mu$ is a momentum coefficient that controls the updating speed.
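The momentum update above can be sketched in a few lines; the value of $\mu$ used here is an illustrative assumption, not the paper's actual setting:

```python
def momentum_update(bank_entry, embedding, mu=0.2):
    """M[i] <- mu * M[i] + (1 - mu) * f_i, element-wise."""
    return [mu * m + (1 - mu) * f for m, f in zip(bank_entry, embedding)]

M_i = [1.0, 0.0]   # stored feature for sample i
f_i = [0.0, 1.0]   # freshly extracted embedding
print(momentum_update(M_i, f_i, mu=0.2))  # approximately [0.2, 0.8]
```

A larger $\mu$ makes the bank drift more slowly, trading responsiveness for stability of the cluster structure.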
In the training mode, the overall objective function of this CA-UReID method can be defined as follows:
\vspace{-20pt}
\begin{eqnarray}
\mathcal{L}_{total}=\mathcal{L}_{base}+\lambda_{1} \cdot \mathcal{L}_{sep}+\lambda_{2} \cdot \mathcal{L}_{cacc}~,
\label{eq_total_loss}
\end{eqnarray}
\vspace{-19pt}
\noindent where $\lambda_{1}$ and $\lambda_{2}$ are hyper-parameters to balance these different components.
In the test mode, the $f_{i}$ extracted from image is the embedding used for final matching.
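As a reference for the contrastive term, the sketch below computes an InfoNCE-style loss against memory-bank entries using cosine similarity; the temperature value is an illustrative assumption, and the real $\mathcal{L}_{base}$ operates on cluster representatives rather than this toy bank:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def info_nce(query, bank, pos_index, tau=0.05):
    """-log softmax of the positive's similarity, at temperature tau."""
    logits = [cosine(query, m) / tau for m in bank]
    log_denom = math.log(sum(math.exp(l) for l in logits))
    return -(logits[pos_index] - log_denom)

bank = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
good = info_nce([1.0, 0.0], bank, pos_index=0)  # query matches entry 0
bad = info_nce([1.0, 0.0], bank, pos_index=1)
assert good < bad  # the correct positive yields a lower loss
```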
\vspace{-10pt}
\subsection{Camera Style Separation Module} \label{subsec_CSS}
\vspace{-5pt}
Camera style variance, including camera properties and environmental information, leads to a gap in intra-class features.
To solve this problem, we propose the camera style separation (CSS) module to eliminate the camera style directly from the extracted feature maps.
Inspired by the way DAAM~\cite{DBLP:conf/aaai/HuangPJLX20} separates domain-related information in UDA framework, we design a novel strategy to disentangle camera-style information in our USL pipeline.
As shown in Fig.~\ref{fig_overall_structure}a, the designed CSS module has two branches: the camera-specific branch and the camera-agnostic branch.
In the camera-specific branch, to separate camera-style features unrelated to pedestrians (e.g. background and camera property, etc.) from feature map, we reframe the learning of this branch as a camera classification problem.
Note that, although we do not have ID annotations in the USL mode, we can use the camera annotations, which are easy to obtain during the actual collection of datasets.
Specifically, $F_{x_i}$ is fed into a Cam-Sep attention module to obtain a weight mask $Attn(F_{x_i})$ to focus on camera information.
Here, we adopt the HA structure of HA-CNN~\cite{DBLP:conf/cvpr/LiZG18} as the attention module to obtain both spatial and channel attention weights.
Through element wise product with the feature map $F_{x_i}$, camera-specific feature map $F_{x_i}^{sp}$ is calculated by the following formula:
\vspace{-18pt}
\begin{eqnarray}
F_{x_i}^{sp}=Attn(F_{x_i})\otimes F_{x_i}~.
\end{eqnarray}
\vspace{-15pt}
\noindent The camera-specific embedding $f_i^{sp}$ is generated by GAP and BN operation on $F_{x_i}^{sp}$.
To guide the attention of this branch, we design the camera separation loss $\mathcal{L}_{sep}$:
\vspace{-18pt}
\begin{eqnarray}
\mathcal{L}_{sep}=\mathbb{E}[-y_i^{cam}log(\sigma(FC(f_i^{sp})))]~,
\end{eqnarray}
\vspace{-15pt}
\noindent where $y_i^{cam}$ is the camera label, $\sigma$ is the Softmax function, and $FC$ is the fully connected layer.
$f_i^{sp}$ is input into a classifier whose number of categories equals the number of cameras to predict the probability of belonging to each camera, which enforces this branch to extract camera-related information.
In the camera-agnostic branch, we take the element-wise product of the complementary mask $1-Attn(F_{x_i})$ and the original feature map $F_{x_i}$ to obtain the camera-agnostic feature map $F_{x_i}^{ag}$, which is defined as follows:
\vspace{-18pt}
\begin{eqnarray}
F_{x_i}^{ag}=(1-Attn(F_{x_i}))\otimes F_{x_i}~.
\end{eqnarray}
\vspace{-15pt}
\noindent Then the camera-agnostic feature vector $f_i$ is obtained from $F_{x_i}^{ag}$ and used to calculate $\mathcal{L}_{base}$ and $\mathcal{L}_{cacc}$.
By optimizing $\mathcal{L}_{sep}$, the influence of camera styles is reduced and the model can learn more robust pedestrian features.
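The two-branch split above can be sketched as follows (a toy illustration with plain lists standing in for tensors; the function name is ours, not the paper's):

```python
def css_split(feature_map, attn):
    # Camera-specific part: element-wise product with the attention mask.
    specific = [[f * a for f, a in zip(f_row, a_row)]
                for f_row, a_row in zip(feature_map, attn)]
    # Camera-agnostic part: product with the complementary mask (1 - attn).
    agnostic = [[f * (1.0 - a) for f, a in zip(f_row, a_row)]
                for f_row, a_row in zip(feature_map, attn)]
    return specific, agnostic
```

By construction the two parts sum back to the original feature map, which is exactly the complementarity enforced by the module.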
\vspace{-10pt}
\subsection{Camera-aware Contrastive Center Loss} \label{subsec_CACC}
\vspace{-5pt}
To further encourage the network to extract camera-invariant embeddings $f_i$, we propose a novel camera-aware contrastive center (CACC) loss $\mathcal{L}_{cacc}$ to narrow the feature distance between different cameras within the same class and increase the feature distance between different classes.
As shown in Fig.~\ref{fig_overall_structure}b, different from the common contrastive learning loss $\mathcal{L}_{base}$ in UReID, the $\mathcal{L}_{cacc}$ considers the camera in the clustering process.
Besides, unlike other camera-aware contrastive losses (ICE~\cite{DBLP:journals/corr/abs-2103-16364}, CAP~\cite{DBLP:conf/aaai/0001LHG021}), our proposed $\mathcal{L}_{cacc}$ constrains the embeddings at the set level by using a camera center rather than a single sample as the anchor (Fig.~\ref{fig_cacc_loss}), which mitigates the impact of noisy samples.
Specifically, given the camera number $c$ and identity class $k$ of each embedding stored in the memory bank $\mathcal{M}$ and in a batch, the camera centers $g_k^{c}$ (over the memory bank) and $p_k^{c}$ (over the batch) are respectively formulated as:
\vspace{-13pt}
\begin{eqnarray}
g^{c}_k=\frac{1}{N^c_{k}}\sum_{\mathcal{M}[i] \in k \cap c} \mathcal{M}[i],~~~~p^{c}_k=\frac{1}{\hat{N}^c_{k}}\sum_{f_{i} \in k \cap c} f_{i}~,
\end{eqnarray}
\vspace{-10pt}
\noindent where $N^c_{k}$ is the number of all embeddings of identity class $k$ with camera number $c$ in $\mathcal{M}$, and $\hat{N}^c_{k}$ is the number of embeddings with class $k$ and camera number $c$ in a training batch.
In each batch, using $p^{c}_k$ as the anchor and
combining all positive camera centers $g_i$ with the $J$ nearest negative camera centers $g_j$,
our proposed $\mathcal{L}_{cacc}$ is formulated as:
\vspace{-13pt}
\begin{eqnarray}
\mathcal{L}_{cacc}=-\frac{1}{G}\sum_{i\in k}\log \frac{S(p^{c}_k, g_i)}{S(p_k^{c} , g_i)+\sum_{j=1}^{J}S(p_k^{c} , g_j)}~,
\end{eqnarray}
\vspace{-10pt}
\noindent where $G$ is the number of positive centers $g_i$, $J$ is the number of negative centers $g_j$, $S(p_k^{c} , g_i)=\exp(p_k^{c} \cdot g_i/\tau_{cacc})$, and $\tau_{cacc}$ is a temperature hyper-parameter.
By optimizing the $\mathcal{L}_{cacc}$ constrained on the center within each camera (shown in Fig.~\ref{fig_cacc_loss}), our method can effectively pull the intra-class features and push the inter-class features to obtain more discriminative embeddings.
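A hedged sketch of the computation (centers as plain mean vectors, similarity as a raw dot product for brevity; all names are ours, not the paper's):

```python
import math

def camera_centers(feats, ids, cams):
    # Mean embedding per (identity, camera) pair: the centers g_k^c.
    groups = {}
    for f, k, c in zip(feats, ids, cams):
        groups.setdefault((k, c), []).append(f)
    return {key: [sum(col) / len(vecs) for col in zip(*vecs)]
            for key, vecs in groups.items()}

def cacc_loss(anchor, positives, negatives, tau=0.07):
    # InfoNCE-style loss over centers, following the displayed formula:
    # each positive center is contrasted against the pooled negatives.
    def s(a, b):
        return math.exp(sum(x * y for x, y in zip(a, b)) / tau)
    neg_sum = sum(s(anchor, g) for g in negatives)
    terms = [math.log(s(anchor, g) / (s(anchor, g) + neg_sum)) for g in positives]
    return -sum(terms) / len(positives)
```

The loss is small when the anchor lies close to all positive centers and far from the negative ones.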
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{camera-aware_contrastive_center_loss2.pdf}
\vspace{-18pt}
\caption{The camera-aware contrastive center loss. Geometrical patterns with different colors represent the embeddings of different IDs, and different shapes represent different cameras.}
\vspace{-10pt}
\label{fig_cacc_loss}
\end{figure}
\vspace{-5pt}
\section{Experiments}
\begin{table}[htbp]
\centering
\caption{Comparison with state-of-the-art methods.}
\fontsize{8}{9.5}\selectfont
\setlength{\tabcolsep}{0.4mm}
\begin{tabular}{c|c|c|cc|c|cc}
\hline
\hline
\multirow{2}[0]{*}{Method} & \multirow{2}[0]{*}{Venue} & \multirow{2}[0]{*}{Source} & \multicolumn{2}{c|}{Market} & \multirow{2}[0]{*}{Source} & \multicolumn{2}{c}{Duke} \\\cline{4-5}\cline{7-8}
& & & mAP & Rank1 & & mAP & Rank1 \\
\hline
\multicolumn{8}{l}{\textbf{Unsupervised Domain Adaptation}} \\
\hline
MMCL~\cite{DBLP:conf/cvpr/WangZ20a} & CVPR2020 & Duke & 60.4 & 84.4 & Market & 51.4 & 72.4 \\
DAAM~\cite{DBLP:conf/aaai/HuangPJLX20} & AAAI2020 & Duke & 67.8 & 86.4 & Market & 63.9 & 77.6 \\
HGA~\cite{DBLP:conf/aaai/ZhangLLGDLJ21} & AAAI2021 & Duke & 70.3 & 89.5 & Market & 67.1 & 80.4 \\
MMT~\cite{DBLP:conf/iclr/GeCL20} & ICLR2020 & Duke & 71.2 & 87.7 & Market & 65.1 & 78.0 \\
JVTC+*~\cite{DBLP:conf/cvpr/ChenWLDB21} & CVPR2021 & Duke & 75.4 & 90.5 & Market & 67.6 & 81.9 \\
MetaCam~\cite{DBLP:conf/cvpr/YangZLCLLS21}&CVPR2021 & Duke & 76.5 & 90.1 & Market & 65.0 & 79.5 \\
SpCL~\cite{DBLP:conf/nips/Ge0C0L20} &NeurIPS2020& Duke & 76.7 & 90.3 & Market & 68.8 & 82.9 \\
GCMT~\cite{DBLP:conf/ijcai/LiuZ21} & IJCAI2021& Duke & 77.1 & 90.6 & Market & 67.8 & 81.1 \\
UNRN~\cite{DBLP:conf/aaai/ZhengLZZZ21} & AAAI2021 & Duke & 78.1 & 91.9 & Market & 69.1 & 82.0 \\
GLT~\cite{DBLP:conf/cvpr/ZhengLH0LZ21} & CVPR2021 & Duke & 79.5 & 92.2 & Market & 69.2 & 82.0 \\
OPLG~\cite{Zheng_2021_ICCV} & ICCV2021 & Duke & 80.0 & 91.5 & Market & 70.1 & 82.2 \\
\hline
\multicolumn{8}{l}{\textbf{Purely Unsupervised Learning}} \\
\hline
BUC~\cite{DBLP:conf/aaai/LinD00019} & AAAI2019 & None & 38.3 & 66.2 & None & 27.5 & 47.4 \\
SSL~\cite{DBLP:conf/cvpr/LinXWY020} & CVPR2020 & None & 37.8 & 71.7 & None & 28.6 & 52.5 \\
MMCL~\cite{DBLP:conf/cvpr/WangZ20a} & CVPR2020 & None & 45.5 & 80.3 & None & 40.2 & 65.2 \\
HCT~\cite{DBLP:conf/cvpr/ZengNW020} & CVPR2020 & None & 56.4 & 80.0 & None & 50.7 & 69.6 \\
SpCL~\cite{DBLP:conf/nips/Ge0C0L20} & NeurIPS2020 &None& 73.1 & 88.1 & None & 65.3 & 81.2 \\
IICS~\cite{DBLP:conf/cvpr/XuanZ21} & CVPR2021 & None & 72.1 & 88.8 & None & 59.1 & 76.9 \\
OPLG~\cite{Zheng_2021_ICCV} & ICCV2021 & None & 78.1 & 91.1 & None & 65.6 & 79.8 \\
CAP~\cite{DBLP:conf/aaai/0001LHG021} & AAAI2021 & None & 79.2 & 91.4 & None & 67.3 & 81.1 \\
MGH~\cite{DBLP:conf/mm/WuWLT21} & MM2021 & None & 81.7 & 93.2 & None & 70.2 & 83.7 \\
ICE~\cite{DBLP:journals/corr/abs-2103-16364} & ICCV2021 & None & 82.3 & 93.8 & None & 69.9 & 83.3 \\
\hline
\textbf{CA-UReID (Ours)} & - & \textbf{None} & \textbf{84.5} & \textbf{94.1} & \textbf{None} & \textbf{73.2} & \textbf{85.2} \\
\hline
\hline
\end{tabular}
\label{tab_sota_comparison}
\vspace{-10pt}
\end{table}
\vspace{-5pt}
\subsection{Datasets and Implementation Details}
\vspace{-5pt}
\textbf{Datasets.} We use the mainstream ReID datasets to evaluate our proposed method:
Market1501~\cite{DBLP:conf/iccv/ZhengSTWWT15} contains 32,668 images of 1,501 identities captured by 6 cameras, while DukeMTMC-reID~\cite{DBLP:conf/eccv/RistaniSZCT16} contains 36,411 images of 1,404 identities captured by 8 cameras.
Following common practice, the Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP) are used to measure performance.
\noindent \textbf{Implementation Details.}
We use ResNet-50 pre-trained on ImageNet as the backbone.
All images are resized to $256\times128$.
The data augmentation strategies used in training include random flipping, random cropping, and random erasing.
In each epoch, our method uses DBSCAN clustering with k-reciprocal Jaccard distance to generate pseudo-labels.
The $eps$ in DBSCAN is set to 0.5/0.6 for Market1501/DukeMTMC-reID.
Momentum factor $\mu$ is set to 0.2/0.1 on Market1501/DukeMTMC-reID.
In $\mathcal{L}_{cacc}$, the temperature factor $\tau_{cacc}$ is set to 0.07, and $J$ is set to 50.
The $\lambda_{1}$ and $\lambda_{2}$ in $\mathcal{L}_{total}$ are empirically set to 0.4 and 1.0, respectively.
The batch size is set to 64 with Adam optimizer.
The learning rate is set to 0.00035 and multiplied by 0.1 every 20 epochs during training.
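The schedule and memory update described above can be sketched as follows (the momentum-style memory update with L2 normalisation is a common choice we assume here, not a quoted implementation):

```python
def lr_at_epoch(epoch, base_lr=0.00035, gamma=0.1, step=20):
    # Step decay: multiply the learning rate by 0.1 every 20 epochs.
    return base_lr * gamma ** (epoch // step)

def momentum_update(center, feat, mu=0.2):
    # Hypothetical memory-bank update with momentum factor mu,
    # followed by L2 normalisation (the normalisation is an assumption of this sketch).
    mixed = [mu * c + (1.0 - mu) * f for c, f in zip(center, feat)]
    norm = sum(v * v for v in mixed) ** 0.5
    return [v / norm for v in mixed]
```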
\vspace{-10pt}
\subsection{Comparison with the State-of-the-art}
\vspace{-5pt}
In experiments, we compare our method with state-of-the-art unsupervised ReID methods, including the UDA and USL methods.
The experimental results are shown in Tab.~\ref{tab_sota_comparison}.
We list state-of-the-art UDA methods in the top half of Tab.~\ref{tab_sota_comparison}.
UDA methods leverage the annotation information of the source domain, which makes it easier to achieve strong performance on the target dataset.
Nevertheless, without any identity annotation, our method still clearly surpasses them, outperforming the latest OPLG method by +4.5\% mAP and +2.6\% Rank-1 on Market1501 and by +3.1\% mAP and +3.0\% Rank-1 on DukeMTMC-reID.
As shown in the bottom half of Tab.~\ref{tab_sota_comparison}, our method also achieves better performance than SOTA methods under the purely unsupervised setting.
Compared with IICS~\cite{DBLP:conf/cvpr/XuanZ21}, which also considers the influence of camera style, our method has obvious advantages.
IICS divides the training process into the intra-camera training stage and the inter-camera training stage, requiring additional AIBN to integrate the knowledge learned under different cameras and reduce the intra-camera variance.
Our proposed camera style separation structure disentangles the feature map itself, which is more direct than IICS.
Compared with CAP~\cite{DBLP:conf/aaai/0001LHG021} and ICE~\cite{DBLP:journals/corr/abs-2103-16364}, which use camera-aware proxies, our method still achieves better performance on both Market1501 and DukeMTMC-reID.
This also indicates that our designed center-based cross-camera loss is superior, being less sensitive to noisy samples.
\vspace{-10pt}
\subsection{Ablation Study}
\vspace{-5pt}
\begin{table}[t]
\centering
\caption{Ablation studies of the designed components.}
\fontsize{9}{9.5}\selectfont
\setlength{\tabcolsep}{1.5mm}
\begin{tabular}{c|l|cc|cc}
\hline
\hline
\multicolumn{1}{c|}{\multirow{2}[0]{*}{Index}} & \multicolumn{1}{c|}{\multirow{2}[0]{*}{Methods}} & \multicolumn{2}{c|}{Market1501} & \multicolumn{2}{c}{Duke-reID} \\\cline{3-6}
& & mAP & Rank1 & mAP & Rank1 \\
\hline
1 & Baseline & 78.7 & 91.3 & 68.6 & 82.2 \\
2 & Baseline~+~CSS & 80.7 & 91.9 & 69.5 & 82.4 \\
3 & Baseline~+~$\mathcal{L}_{cacc}$ & 82.8 & 93.3 & 71.9 & 85.0 \\
4 & Baseline~+~CSS~+~$\mathcal{L}_{cacc}$ & \textbf{84.5} & \textbf{94.1} & \textbf{73.2} & \textbf{85.2} \\
\hline
5 & $\mathcal{L}_{casc}$ (with sample) & 83.5 & 93.4 & 72.0 & 84.0 \\
6 & $\mathcal{L}_{cacc}$ (with center) & \textbf{84.5} & \textbf{94.1} & \textbf{73.2} & \textbf{85.2} \\
\hline
\hline
\end{tabular}
\label{tab_ablation}
\vspace{-5pt}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{ccs_attention_map6.pdf}
\vspace{-22pt}
\caption{Visualization of attention map in the CSS module.}
\vspace{-15pt}
\label{fig_css_attention}
\end{figure}
In this section, we further discuss and analyze the effectiveness of key components, including the camera style separation module and the camera-aware contrastive center loss.
As shown in Tab.~\ref{tab_ablation}, the Baseline (Index-1) is the method without our proposed CSS branch and CACC loss.
Compared with the Baseline, the CSS module brings improvements on both metrics (mAP, Rank-1) of 2.0\% and 0.6\% on Market1501 and 0.9\% and 0.2\% on DukeMTMC-reID.
The improvements of $\mathcal{L}_{cacc}$ (Index-3) over the Baseline are 4.1\% and 2.0\% on Market and 3.3\% and 2.8\% on Duke.
The best performance is obtained by using both CSS and $\mathcal{L}_{cacc}$ (Index-4), with improvements of 5.8\% and 2.8\% on Market and 4.6\% and 3.0\% on Duke.
This shows the good complementarity of CSS and $\mathcal{L}_{cacc}$: they jointly optimize the feature learning.
To observe the function of the CSS module, we visualize the learned attention maps in the HA structure.
As shown in Fig.~\ref{fig_css_attention}, the camera-specific maps mainly focus on the background, which is strongly related to the cameras because of their fixed positions.
In contrast, the complementary camera-agnostic attention maps focus on the main body of the pedestrian, which narrows the gap between cameras within a class.
In experiments, we also compare two different constraints: the camera-aware contrastive sample loss $\mathcal{L}_{casc}$ and camera-aware contrastive center loss $\mathcal{L}_{cacc}$, which act on samples and centers, respectively.
From Tab.~\ref{tab_ablation}, we notice that the center-based version of the loss achieves higher performance and stronger retrieval ability.
In general, by using the two components together, our method can generate more discriminative embeddings for the person ReID task under the USL setting.
\vspace{-10pt}
\section{Conclusion}
\vspace{-5pt}
In this paper, we propose a camera-aware style separation and contrastive learning method for the purely unsupervised person ReID task, which effectively separates camera style from the feature map via the proposed camera style separation module and the camera-aware contrastive center loss.
Experimental results demonstrate that our method outperforms existing state-of-the-art methods.
\clearpage
\bibliographystyle{IEEEbib}
\begin{spacing}{0.9}
\section{Introduction}
\par
In this paper, we consider the following Cauchy problem for the Camassa-Holm (CH) equation \cite{ch}
\begin{equation}\label{ch}
\left\{\begin{array}{l}
u_t-u_{xxt}+3uu_{x}=2u_{x}u_{xx}+uu_{xxx},\qquad t>0,\ x\in\mathbb{R},\\
u(0,x)=u_0(x),\qquad x\in\mathbb{R}.
\end{array}\right.
\end{equation}
The CH equation is completely integrable \cite{c3,cgi} and admits infinitely many conserved quantities, an infinite hierarchy of quasi-local symmetries, a Lax pair and a bi-Hamiltonian structure \cite{c1,or}.
A large body of literature is devoted to the CH equation. Li and Olver \cite{lio} (see also \cite{rb}) established the local well-posedness of the Cauchy problem \eqref{ch} in the Sobolev spaces $H^{s}(\mathbb{R})$, $s>\frac 3 2$. Byers \cite{b} showed that the Cauchy problem \eqref{ch} is ill-posed in $H^{s}(\mathbb{R})$ for $s<\frac{3}{2}$ in the sense of norm inflation. Then, by the Littlewood-Paley decomposition and the theory of transport equations, Danchin \cite{d1,d2} and Li and Yin \cite{liy} proved that the Cauchy problem \eqref{ch} is well-posed in the Besov spaces $B^{s}_{p,r}(\mathbb{R})$ with $s>\max\{\frac{3}{2},1+\frac{1}{p}\},\ r<+\infty$ or $s=1+\frac{1}{p},\ p\in[1,2],\ r=1$ ($1+\frac{1}{p}\geq\frac{3}{2}$). Li et al. \cite{lyz1} demonstrated the non-continuity of the CH equation in $B^{\sigma}_{p,\infty}(\mathbb{R})$ with $\sigma>2+\max\{\frac32,1+\frac1p\}$ by constructing an initial datum $u_{0}$ such that the corresponding solution to the CH equation starting from $u_{0}$ does not converge back to $u_{0}$ in the norm of $B^{\sigma}_{p,\infty}(\mathbb{R})$ as time goes to zero. Recently, Guo et al. \cite{glmy} established the ill-posedness of the Camassa-Holm type equations in the Besov spaces $B^{1+\frac 1 p}_{p,r}(\mathbb{R})$ with $p\in[1,+\infty],\ r\in(1,+\infty]$, which implies that $B^{1+\frac 1 p}_{p,1}(\mathbb{R})$ is the critical Besov space for the CH equation. However, the case of $B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})$ with $p\in(2,+\infty]$ ($1+\frac{1}{p}<\frac{3}{2}$) remained open. The main difficulty is that the CH equation induces a loss of one order of derivative in the stability estimates. To overcome this difficulty, in our upcoming article \cite{yyg} we adopt a compactness argument and a Lagrangian coordinate transformation, rather than the usual techniques of \cite{liy}, to obtain the local well-posedness of the Cauchy problem for the Camassa-Holm type equations in $B^{1+\frac 1 p}_{p,1}(\mathbb{R})$ with $p\in[1,+\infty)$.
This implies that $B^{1+\frac 1 p}_{p,1}(\mathbb{R})$ is the critical Besov space and that the index $\frac 3 2$ is not essential for the Camassa-Holm type equations.
Furthermore, in contrast to the KdV equation, which cannot describe the wave-breaking phenomena observed in nature, the CH equation not only possesses global strong solutions \cite{ce2,ce4,ce5} but also exhibits wave breaking \cite{c2,ce3,lio}. When this happens, the solution remains H\"{o}lder continuous and uniformly bounded, but develops an unbounded slope in finite time \cite{c2}. The question of how to continue the solution beyond wave breaking can be studied nicely in the case of multipeakons. Multipeakons are given by (see \cite{ch} and references therein)
\begin{align}
u(t,x)=\sum\limits_{i=1}^{N}p_i(t)e^{-|x-q_i(t)|}.\label{mult}
\end{align}
Observe that the solution \eqref{mult} is not smooth even for continuous functions $(p_{i}(t),q_{i}(t))$; one possible way to interpret \eqref{mult} as a weak solution of \eqref{ch} is to rewrite \eqref{ch} as
\begin{equation}\label{nonch}
\left\{\begin{array}{l}
u_{t}+uu_{x}+\partial_{x}(1-\partial_{xx})^{-1}\Big(u^{2}+\frac{u^{2}_{x}}{2}\Big)=0,\qquad t>0,\ x\in\mathbb{R},\\
u(0,x)=u_0(x),\qquad x\in\mathbb{R}.
\end{array}\right.
\end{equation}
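For completeness, we note how \eqref{nonch} follows from \eqref{ch}: since $\partial_{xx}(uu_x)=3u_xu_{xx}+uu_{xxx}$, equation \eqref{ch} can be rewritten as
\begin{align*}
(1-\partial_{xx})(u_t+uu_x)=-2uu_x-u_xu_{xx}=-\partial_x\Big(u^2+\frac{u_x^2}{2}\Big),
\end{align*}
and applying $(1-\partial_{xx})^{-1}$, which acts as convolution with the Green's function $G(x)=\frac12 e^{-|x|}$ of the operator $1-\partial_{xx}$, yields \eqref{nonch}.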
First, we recall some recent results for the CH equation in critical Besov spaces.
\begin{theo}[See \cite{yyg}]\label{wellch}
Let $u_0\in B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})$ with $1\leq p<\infty$ (when $p=\infty$, we take $u_0\in B^{1+\epsilon}_{\infty,1}(\mathbb{R})$ for some $\epsilon>0$). Then there exists a time $T>0$ such that the CH equation with the initial data $u_{0}$ is locally well-posed in the sense of Hadamard.
\end{theo}
\begin{theo}[See \cite{glmy}]\label{illch}
Let $1\leq p\leq\infty,\ 1<r\leq\infty$. For any $\varepsilon>0$, there exists $u_0\in H^{\infty}(\mathbb{R})$ such that the following hold:
\begin{itemize}
\item [{\rm (1)}] $\|u_{0}\|_{B^{1+\frac{1}{p}}_{p,r}}\leq\varepsilon;$
\item [{\rm (2)}] There is a unique solution $u\in \mathcal{C}_{T}\big(H^{\infty}(\mathbb{R})\big)$ to the Cauchy problem \eqref{ch} with $T<\varepsilon;$
\item [{\rm (3)}] $\limsup\limits_{t\rightarrow T^{-}}\|u(t)\|_{B^{1+\frac{1}{p}}_{p,r}}\geq\limsup\limits_{t\rightarrow T^{-}}\|u(t)\|_{B^{1}_{\infty,\infty}}=\infty.$
\end{itemize}
\end{theo}
From Theorems \ref{wellch} and \ref{illch}, we know that the local well-posedness and ill-posedness of the Cauchy problem \eqref{ch} for the CH equation have been settled in all critical Besov spaces except $B^1_{\infty,1}(\mathbb{R})$. Note that the CH equation is the high-frequency limit model of the boundary problem of the 2D incompressible Euler equation. The Cauchy problem for the incompressible Euler equation is locally well-posed in $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}^{2}))$ \cite{gly} and locally ill-posed in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}^{2}))$ (norm inflation) \cite{boli}. Then, the fact
\begin{align}\label{c01}
B^{1+\epsilon}_{\infty,1}\hookrightarrow B^{1}_{\infty,1}\cap B^{1}_{\infty,\infty,1}\hookrightarrow B^{1}_{\infty,1}\hookrightarrow \mathcal{C}^{0,1}
\end{align}
implies that $B^{1}_{\infty,1}(\mathbb{R}^{2})$ is the critical Besov space for the incompressible Euler equation, where $B^{1}_{\infty,\infty,1}$ is the Banach space equipped with the norm $\|f\|_{B^{1}_{\infty,\infty,1}}=\sup\limits_{j}j\cdot2^{j}\|\Delta_{j}f\|_{L^{\infty}}$. For the CH equation, we see from Theorem \ref{wellch} that it is locally well-posed in $\mathcal{C}_T(B^{1+\epsilon}_{\infty,1}(\mathbb{R})).$ Analogously, a natural problem is
\quad\\
\textbf{H}:{\it ~~Whether the problem \eqref{ch} is locally well-posed or not in $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R}))$, $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}))$ or $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$?}
\quad\\
In this paper, we aim to solve this problem. The main difficulty is the force term $-\partial_x(1-\partial_{xx})^{-1}(u^2+\frac{u_x^2}{2})$ (below we focus on $-\partial_x(1-\partial_{xx})^{-1}(\frac{u_x^2}{2})$, since $-\partial_x(1-\partial_{xx})^{-1}(u^2)$ is a lower-order term). In $B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R})$, we can easily get the following estimate
$$\|-\partial_x(1-\partial_{xx})^{-1}(u^2+\frac{u_x^2}{2})\|_{B^{1}_{\infty,1}\cap B^{1}_{\infty,\infty,1}}\leq C \|u\|^2_{B^{1}_{\infty,1}\cap B^{1}_{\infty,\infty,1}}.$$
So the local well-posedness of the Cauchy problem for the CH equation in $B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R})$ is straightforward. But for $B^{1}_{\infty,1}(\mathbb{R})$ alone, since we do not know whether $B^{0}_{\infty,1}(\mathbb{R})$ is a Banach algebra, we cannot obtain an a priori estimate of $\|-\partial_x(1-\partial_{xx})^{-1}(u^2+\frac{u_x^2}{2})\|_{B^{1}_{\infty,1}}$ (this problem does not arise for the incompressible Euler equation, thanks to the divergence-free condition). In fact, we construct a counterexample showing that $B^{0}_{\infty,1}(\mathbb{R})$ is not a Banach algebra; indeed, $\|f^2\|_{B^{0}_{\infty,1}}$ need not even be finite for $f\in B^{0}_{\infty,1}(\mathbb{R})$. Note that
for any $u_0\in B^1_{\infty,1}(\mathbb{R})$, we have
$$E_0:=-\partial_x(1-\partial_{xx})^{-1}(\frac{u_{0x}^2}{2})\in B^1_{\infty,1}(\mathbb{R}) \Longleftrightarrow u_{0x}^2\in B^0_{\infty,1}(\mathbb{R}),\quad
-\partial_{xx}(1-\partial_{xx})^{-1}={\rm Id}-(1-\partial_{xx})^{-1}.$$
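Indeed, the displayed identity gives
\begin{align*}
\partial_x E_0=-\partial_{xx}(1-\partial_{xx})^{-1}\Big(\frac{u_{0x}^2}{2}\Big)=\frac{u_{0x}^2}{2}-(1-\partial_{xx})^{-1}\Big(\frac{u_{0x}^2}{2}\Big),
\end{align*}
and since $u_{0x}^2\in L^{\infty}\hookrightarrow B^{0}_{\infty,\infty}$, the last term belongs to $B^{2}_{\infty,\infty}\hookrightarrow B^{1}_{\infty,1}$; hence $E_0\in B^1_{\infty,1}(\mathbb{R})$ exactly when $u_{0x}^2\in B^0_{\infty,1}(\mathbb{R})$.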
So we conclude that the CH equation is ill-posed in $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}))$. Then, by iterating over time, we obtain norm inflation and hence the ill-posedness of the CH equation in this space. Finally, for $u_0\in \mathcal{C}^{0,1}(\mathbb{R})$, we obtain the local existence and uniqueness of the solution to \eqref{ch}, but we prove, by means of a counterexample, that continuous dependence does not hold. It is worth mentioning that norm inflation cannot appear in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$, yet continuous dependence still fails there, which shows that norm inflation is a sufficient but not necessary condition for local ill-posedness.\\
Now we state our two theorems, which answer problem \textbf{H}:
\begin{theo}\label{well}
\begin{itemize}
\item [{\rm(1)}] Let $u_0\in B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R})\big(B^{1+\epsilon}_{\infty,1}(\mathbb{R})\hookrightarrow B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R}),\ \forall \epsilon>0\big)$. Then there exists a time $T>0$ such that the Cauchy problem \eqref{ch} is locally well-posed in $\mathcal{C}_T(B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R}))\cap \mathcal{C}^1_T(B^{0}_{\infty,1}(\mathbb{R})\cap B^{0}_{\infty,\infty,1}(\mathbb{R}))$ in the sense of Hadamard;
\item [{\rm(2)}] Let $u_0\in \mathcal{C}^{0,1}(\mathbb{R})$. Then there exists a time $T>0$ such that the Cauchy problem \eqref{ch} has a unique solution in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$, but the continuous dependence does not hold.
\end{itemize}
\end{theo}
For the uniqueness in the periodic case in Theorem \ref{well} ${\rm(2)}$, one can refer to \cite{leii} for more details.
\begin{theo}\label{ill}
For any $10<N\in\mathbb{N}^{+}$ large enough, there exists a $u_{0}\in\mathcal{C}^{\infty}(\mathbb{R})$ such that the following hold:
\begin{itemize}
\item [{\rm(1)}] $\|u_{0}\|_{B^{1}_{\infty,1}}\leq CN^{-\frac{1}{10}};$
\item [{\rm(2)}] There is a unique solution $u\in \mathcal{C}_{T}\big(\mathcal{C}^{\infty}(\mathbb{R})\big)$ to the Cauchy problem \eqref{ch} with a time $T\leq\frac{2}{N^\frac{1}{2}};$
\item [{\rm(3)}]
There exists a time $t_{0}\in[0,T]$ such that $\|u(t_{0})\|_{B^{1}_{\infty,1}}\geq \ln N$.
\end{itemize}
\end{theo}
\begin{rema}
From Theorem \ref{well} and Theorem \ref{ill}, we know that the Cauchy problem \eqref{ch} is ill-posed in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$ and $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}))$, respectively. The difference is the following: in $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}))$, we exhibit norm inflation (which implies discontinuous dependence) and hence ill-posedness of the Cauchy problem \eqref{ch}, so the Cauchy problem \eqref{ch} is ill-posed in $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}))$ due to norm inflation; in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$, we prove that there is no norm inflation but the continuous dependence is still broken, so the Cauchy problem \eqref{ch} is ill-posed in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$ because of the discontinuous dependence alone.
\end{rema}
\begin{rema}
The incompressible Euler equation is locally well-posed in $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}^{2}))$ \cite{gly} and locally ill-posed in $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}^{2}))$ \cite{boli} (norm inflation), whereas for the CH equation we obtain local ill-posedness in both $\mathcal{C}_T(B^1_{\infty,1}(\mathbb{R}))$ (norm inflation) and $L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$ (only the continuous dependence is broken). This interesting fact illustrates a genuine difference between the two equations.
\end{rema}
In sum, from Theorems \ref{wellch}--\ref{ill} we know that $B^{1}_{\infty,1}(\mathbb{R})$ is the critical Besov space for the CH equation, and the local well-posedness and ill-posedness of the CH equation in all critical Besov spaces have now been settled, as summarized below:
\begin{align*}
\underset{}{\overset{\text{well-posed}}{B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})}~\text{with}~p<\infty}~\hookrightarrow\underset{\text{norm inflation}}{\overset{\text{ill-posed}}{B^{1}_{\infty,1}(\mathbb{R})}}\hookrightarrow\underset{\text{no norm inflation}}{\overset{\text{ill-posed}}{\mathcal{C}^{0,1}(\mathbb{R})}}
\end{align*}
Finally, in the proof of Theorem \ref{ill}, we establish norm inflation by constructing a special initial datum $u_{0}$ satisfying $u_0\in B^1_{\infty,1}(\mathbb{R})$ but $u^2_{0x}\notin B^{0}_{\infty,1}(\mathbb{R})$; we now show that this condition is necessary.
\begin{theo}\label{non}
Let $u_{0}\in B^{1}_{\infty,1}(\mathbb{R})$. If $u^2_{0x}\in B^{0}_{\infty,1}(\mathbb{R})$, then there exists a lifespan $T>0$ such that the Cauchy problem \eqref{ch} has a unique solution $u(t,x)\in \mathcal{C}_T(B^{1}_{\infty,1}(\mathbb{R}))\cap \mathcal{C}^{1}_T(B^{0}_{\infty,1}(\mathbb{R}))$. In this case, norm inflation does not occur.
\end{theo}
The rest of the paper is organized as follows. In Section 2, we introduce some preliminaries which will be used in the sequel. In Section 3, we establish the well-posedness and ill-posedness of the Cauchy problem \eqref{ch} in $B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R})$ and $\mathcal{C}^{0,1}(\mathbb{R})$, respectively.
In Section 4, we first establish norm inflation, and hence ill-posedness, of the Cauchy problem \eqref{ch} in $B^{1}_{\infty,1}(\mathbb{R})$ by choosing a special initial datum $u_0\in B^{1}_{\infty,1}(\mathbb{R})$ with $u^2_{0x}\notin B^{0}_{\infty,1}(\mathbb{R})$ (an example which shows that $B^{0}_{\infty,1}(\mathbb{R})$ is not a Banach algebra). Then, we prove that this condition is necessary: if $u^2_{0x}\in B^{0}_{\infty,1}(\mathbb{R})$, the Camassa-Holm equation has a unique solution $u(t,x)\in \mathcal{C}_T(B^{1}_{\infty,1}(\mathbb{R}))\cap \mathcal{C}^{1}_T(B^{0}_{\infty,1}(\mathbb{R}))$ and norm inflation does not occur.
\section{Preliminaries}
\par
In this section, we recall some basic properties about the Littlewood-Paley theory and Besov spaces, which can be found in \cite{book}.
Let $\chi$ and $\varphi$ be two radial, smooth functions valued in the interval $[0,1]$, belonging respectively to $\mathcal{D}(\mathcal{B})$ and $\mathcal{D}(\mathcal{C})$, where $\mathcal{B}=\{\xi\in\mathbb{R}^d:|\xi|\leq\frac 4 3\}$ and $\mathcal{C}=\{\xi\in\mathbb{R}^d:\frac 3 4\leq|\xi|\leq\frac 8 3\}$.
Denote by $\mathcal{F}$ the Fourier transform and by $\mathcal{F}^{-1}$ its inverse.
The nonhomogeneous dyadic blocks $\Delta_j$ and low-frequency cut-off operators $S_j$ are defined as
\begin{equation*}
\left\{\begin{array}{ll}
\Delta_j u=0,\ \text{for $j\leq -2;$}\
\Delta_{-1} u=\mathcal{F}^{-1}(\chi\mathcal{F}u);\
\Delta_j u=\mathcal{F}^{-1}(\varphi(2^{-j}\cdot)\mathcal{F}u),\ \text{for $j\geq0,$}\\
S_j u=\sum\limits_{j'<j}\Delta_{j'}u.
\end{array}\right.
\end{equation*}
Let $s\in\mathbb{R},\ 1\leq p,\ r\leq\infty.$ The nonhomogeneous Besov spaces $B^s_{p,r}(\mathbb{R}^d)$ are defined by
\begin{align*}
B^s_{p,r}=B^s_{p,r}(\mathbb{R}^d)=\Big\{u\in S'(\mathbb{R}^d):\|u\|_{B^s_{p,r}}=\big\|(2^{js}\|\Delta_j u\|_{L^p})_j \big\|_{l^r(\mathbb{Z})}<\infty\Big\}.
\end{align*}
The nonhomogeneous Bony's decomposition is defined by
$uv=T_{u}v+T_{v}u+R(u,v)$ with
$$T_{u}v=\sum_{j}S_{j-1}u\Delta_{j}v,\ \ R(u,v)=\sum_{j}\sum_{|j'-j|\leq 1}\Delta_{j}u\Delta_{j'}v.$$
\begin{prop}[See \cite{book}]\label{Besov} Let $s,\ s_{1},\ s_{2}\in\mathbb{R}$ and $1\leq p,\ p_{1},\ p_{2},\ r,\ r_{1},\ r_{2}\leq+\infty$.
\begin{itemize}
\item [{\rm(1)}] If $p_1\leq p_2$ and $r_1\leq r_2$, then $ B^s_{p_1,r_1}\hookrightarrow B^{s-d(\frac 1 {p_1}-\frac 1 {p_2})}_{p_2,r_2}$.
\item [{\rm(2)}] If $s_1<s_2$ and $r_1\leq r_2$, then the embedding $B^{s_2}_{p,r_2}\hookrightarrow B^{s_1}_{p,r_1}$ is locally compact.
\item [{\rm(3)}] Let $m\in\mathbb{R}$ and let $f$ be an $S^m$-multiplier $($i.e., $f$ is a smooth function satisfying: $\forall\ \alpha\in\mathbb{N}^d$,
$\exists\ C=C(\alpha)$ such that $|\partial^{\alpha}f(\xi)|\leq C(1+|\xi|)^{m-|\alpha|},\ \forall\ \xi\in\mathbb{R}^d)$.
Then the operator $f(D)=\mathcal{F}^{-1}(f\mathcal{F})$ is continuous from $B^s_{p,r}$ to $B^{s-m}_{p,r}$.
\end{itemize}
\end{prop}
\begin{prop}[See \cite{book}]\label{bony}
Let $1\leq p,\ r,\ p_{1},\ p_{2},\ r_{1},\ r_{2}\leq+\infty$.
\begin{itemize}
\item [{\rm(1)}] For $s\in\mathbb{R},\ t<0$, there exists a constant $C$ satisfying the following inequalities:
\begin{align*}
&\|T_{f}g\|_{B^{s}_{p,r}}\leq C\|f\|_{L^{\infty}}\|g\|_{B^{s}_{p,r}};\\
&\|T_{f}g\|_{B^{s+t}_{p,r}}\leq C\|f\|_{B^{t}_{p,r_{1}}}\|g\|_{B^{s}_{p,r_{2}}}\quad\text{with $\frac1r\stackrel{\rm{def}}{=}\min\Big\{1,\frac{1}{r_{1}}+\frac{1}{r_{2}}\Big\}$}.
\end{align*}
\item [{\rm(2)}] For $s_{1},\ s_{2}\in\mathbb{R}$ satisfying $s_{1}+s_{2}>0$ and $\frac{1}{p}\stackrel{\rm{def}}{=}\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq1,\ \frac{1}{r}\stackrel{\rm{def}}{=}\frac{1}{r_{1}}+\frac{1}{r_{2}}\leq1$, then
\begin{align*}
\|R(f,g)\|_{B^{s_{1}+s_{2}}_{p,r}}\leq C\|f\|_{B^{s_{1}}_{p_{1},r_{1}}}\|g\|_{B^{s_{2}}_{p_{2},r_{2}}}
\end{align*}
\end{itemize}
\end{prop}
\begin{prop}[See \cite{book}]\label{s0}
Let $s>0$ and $1\leq p,\ r\leq\infty$, then $B^{s}_{p,r}=L^{p}\cap\dot{B}^{s}_{p,r}$.
\end{prop}
For the ill-posedness of the Cauchy problem \eqref{ch} in $B^{1}_{\infty,1}(\mathbb{R})$, we need some estimates in the space $B^{0}_{\infty,\infty,1}(\mathbb{R})$ with the norm $\|f\|_{B^{0}_{\infty,\infty,1}}=\sup\limits_{j}\|\Delta_jf\|_{L^{\infty}}\cdot j$.
\begin{lemm}\label{b01}
For any $f\in B^{0}_{\infty,1}(\mathbb{R})\cap B^{0}_{\infty,\infty,1}(\mathbb{R})$, we have
\begin{align*}
&\|f^{2}\|_{B^{0}_{\infty,\infty,1}}\leq C\|f\|_{B^{0}_{\infty,1}}\|f\|_{B^{0}_{\infty,\infty,1}},\\
&\|f^{2}\|_{B^{0}_{\infty,1}}\leq C\|f\|_{B^{0}_{\infty,1}}\|f\|_{B^{0}_{\infty,\infty,1}}.
\end{align*}
\end{lemm}
\begin{lemm}\label{rj}
Define $R_{j}=\Delta_j(fg_{x})-f\Delta_jg_{x}$. Then we have
\begin{align*}
&\sup\limits_{j}\big(\|R_{j}\|_{L^{\infty}}\cdot j\big)\leq
C\|f_{x}\|_{B^{0}_{\infty,1}}\|g\|_{B^{0}_{\infty,\infty,1}},\\
&\sum\limits_{j}\|R_{j}\|_{L^{\infty}}\leq C\|f_{x}\|_{B^{0}_{\infty,1}}\|g\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}},\\
&\sum\limits_{j}2^{j}\|R_{j}\|_{L^{\infty}}\leq C\|f_{x}\|_{B^{0}_{\infty,1}}\|g\|_{B^{1}_{\infty,1}}.
\end{align*}
\end{lemm}
Lemma \ref{b01} and Lemma \ref{rj} can be proved by Bony's decomposition and calculations similar to those of Lemma 2.100 in \cite{book}; we omit the details here.
In this paper, we also need some estimates for the following Cauchy problem of the 1-D transport equation:
\begin{equation}\label{transport}
\left\{\begin{array}{l}
f_t+v\partial_{x}f=g,\ x\in\mathbb{R},\ t>0, \\
f(0,x)=f_0(x),\ x\in\mathbb{R}.
\end{array}\right.
\end{equation}
\begin{lemm}[See \cite{book,liy}]\label{existence}
Let $1\leq p,\ r\leq\infty$ and $\theta> -\min(\frac 1 {p}, \frac 1 {p'}).$ Suppose $f_0\in B^{\theta}_{p,r},$ $g\in L^1(0,T;B^{\theta}_{p,r}),$ and $v\in L^\rho(0,T;B^{-M}_{\infty,\infty})$ for some $\rho>1$ and $M>0,$ and
\begin{align*}
\begin{array}{ll}
\partial_{x}v\in L^1(0,T;B^{\frac 1 {p}}_{p,\infty}\cap L^{\infty}), &\ \text{if}\ \theta<1+\frac 1 {p}, \\
\partial_{x}v\in L^1(0,T;B^{\theta}_{p,r}),\ &\text{if}\ \theta=1+\frac{1}{p},\ r>1, \\
\partial_{x}v\in L^1(0,T;B^{\theta-1}_{p,r}), &\ \text{if}\ \theta>1+\frac{1}{p}\ (\text{or}\ \theta=1+\frac 1 {p},\ r=1).
\end{array}
\end{align*}
Then the problem \eqref{transport} has a unique solution $f$ in
\begin{itemize}
\item [-] the space $C([0,T];B^{\theta}_{p,r}),$ if $r<\infty,$
\item [-] the space $\big(\bigcap_{{\theta}'<\theta}C([0,T];B^{{\theta}'}_{p,\infty})\big)\bigcap C_w([0,T];B^{\theta}_{p,\infty}),$ if $r=\infty.$
\end{itemize}
\end{lemm}
\begin{lemm}[See \cite{book,liy}]\label{priori estimate}
Let $1\leq p,\ r\leq\infty$ and $\theta>-\min(\frac{1}{p},\frac{1}{p'}).$ There exists a constant $C$ such that for all solutions $f\in L^{\infty}(0,T;B^{\theta}_{p,r})$ of \eqref{transport} with initial data $f_0$ in $B^{\theta}_{p,r}$ and $g$ in $L^1(0,T;B^{\theta}_{p,r}),$ we have, for a.e. $t\in[0,T],$
$$ \|f(t)\|_{B^{\theta}_{p,r}}\leq \|f_0\|_{B^{\theta}_{p,r}}+\int_0^t\|g(t')\|_{B^{\theta}_{p,r}}{\ud}t'+\int_0^t V'(t')\|f(t')\|_{B^{\theta}_{p,r}}{\ud}t' $$
or
$$ \|f(t)\|_{B^{\theta}_{p,r}}\leq e^{CV(t)}\Big(\|f_0\|_{B^{\theta}_{p,r}}+\int_0^t e^{-CV(t')}\|g(t')\|_{B^{\theta}_{p,r}}{\ud}t'\Big) $$
with
\begin{equation*}
V'(t)=\left\{\begin{array}{ll}
\|\partial_{x}v(t)\|_{B^{\frac 1 p}_{p,\infty}\cap L^{\infty}},\ &\text{if}\ \theta<1+\frac{1}{p}, \\
\|\partial_{x}v(t)\|_{B^{\theta}_{p,r}},\ &\text{if}\ \theta=1+\frac{1}{p},\ r>1, \\
\|\partial_{x}v(t)\|_{B^{\theta-1}_{p,r}},\ &\text{if}\ \theta>1+\frac{1}{p}\ (\text{or}\ \theta=1+\frac{1}{p},\ r=1).
\end{array}\right.
\end{equation*}
If $\theta>0$, then there exists a constant $C=C(p,r,\theta)$ such that the following statement holds
\begin{align*}
\|f(t)\|_{B^{\theta}_{p,r}}\leq \|f_0\|_{B^{\theta}_{p,r}}+\int_0^t\|g(\tau)\|_{B^{\theta}_{p,r}}{\ud}\tau+C\int_0^t \Big(\|f(\tau)\|_{B^{\theta}_{p,r}}\|\partial_{x}v(\tau)\|_{L^{\infty}}+\|\partial_{x}v(\tau)\|_{B^{\theta-1}_{p,r}}\|\partial_{x}f(\tau)\|_{L^{\infty}}\Big){\ud}\tau.
\end{align*}
In particular, if $f=av+b,\ a,\ b\in\mathbb{R},$ then for all $\theta>0,$ $V'(t)=\|\partial_{x}v(t)\|_{L^{\infty}}.$
\end{lemm}
\section{Well-posedness}
\par
In this section, we mainly study the local well-posedness for the CH equation in the subcritical spaces (see Theorem \ref{well}):
$$B^{1+\epsilon}_{\infty,1}\hookrightarrow B^{1}_{\infty,1}\cap B^{1}_{\infty,\infty,1}\hookrightarrow B^{1}_{\infty,1}\hookrightarrow \mathcal{C}^{0,1}.$$
To prove Theorem \ref{well}, we must recall a useful lemma.
First we consider the following Cauchy problem for a general abstract equation
\begin{equation}\label{abstract}
\left\{\begin{array}{ll}
\partial_{t}u+A(u)\partial_{x}u=F(u),&\quad t>0,\quad x\in\mathbb{R},\\
u(t,x)|_{t=0}=u_0(x),&\quad x\in\mathbb{R},
\end{array}\right.
\end{equation}
where $A(u)$ is a polynomial in $u$, and $F$ is called a `good operator' if, for any $\varphi\in \mathcal{C}^{\infty}_0(\mathbb{R})$ and any $\epsilon>0$ small enough,
$$u_n\varphi\rightarrow u\varphi\quad \text{in}\quad B^{1+\frac{1}{p}-\epsilon}_{p,1}\quad \text{implies}\quad \langle F(u_n),\varphi\rangle\longrightarrow\langle F(u),\varphi\rangle.$$
This definition is reasonable for Camassa-Holm type equations. For example, it is easy to prove that $F(u)=-\partial_{x}(1-\partial_{xx})^{-1}\Big(u^2+\frac{u^2_x}{2}\Big)$ is a `good operator' by an approximation argument, since $\mathcal{C}^{\infty}_0(\mathbb{R})$ is dense in $\mathcal{S}(\mathbb{R})$.
The associated Lagrangian scale of \eqref{abstract} is the following initial value problem
\begin{equation}\label{ODE}
\left\{\begin{array}{ll}
\frac{{\ud}}{{\ud}t}y(t,\xi)=A(u)\big(t,y(t,\xi)\big),&\quad t>0,\quad \xi\in\mathbb{R},\\
y(0,\xi)=\xi,&\quad \xi\in\mathbb{R}.
\end{array}\right.
\end{equation}
Introduce the new variable $U(t,\xi)=u\big(t,y(t,\xi)\big)$. Then, \eqref{abstract} becomes
\begin{equation}\label{lagrange}
\left\{\begin{array}{ll}
U_t=\Big(F(u)\Big)(t,y(t,\xi)):=\widetilde{F}(U,y),&\quad t>0,\quad \xi\in\mathbb{R},\\
U(t,\xi)|_{t=0}=U_0(\xi)=u_0(\xi),&\quad \xi\in\mathbb{R}.
\end{array}\right.
\end{equation}
Here is the useful lemma:
\begin{lemm}[See \cite{yyg}]\label{lagrabstr}
Let $u_0\in B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})$ with $1\leq p<\infty$ and $k\in\mathbb{N}^{+}$. Suppose $F$ is a `good operator' and $F,\ \widetilde{F}$ satisfy the following conditions:
\begin{align}
&\|F(u)\|_{B^{1+\frac{1}{p}}_{p,1}}\leq C\Big(\|u\|^{k+1}_{B^{1+\frac{1}{p}}_{p,1}}+1\Big);\label{fkl}\\
&\|\widetilde{F}(U,y)-\widetilde{F}(\bar{U},\bar{y})\|_{W^{1,\infty}\cap W^{1,p}}\leq C\Big(\|U-\bar{U}\|_{W^{1,\infty}\cap W^{1,p}}+\|y-\bar{y}\|_{W^{1,\infty}\cap W^{1,p}}\Big);\label{wideFkl}\\
&\|F(u)-F(\bar{u})\|_{B^{1+\frac{1}{p}}_{p,1}}\leq C\|u-\bar{u}\|_{B^{1+\frac{1}{p}}_{p,1}}\Big(\|u\|^{k}_{B^{1+\frac{1}{p}}_{p,1}}+\|\bar{u}\|^{k}_{B^{1+\frac{1}{p}}_{p,1}}+1\Big).\label{Fkl}
\end{align}
Then, there exists a time $T>0$ such that
\begin{itemize}
\item [\rm{(1)}] Existence: If \eqref{fkl} holds, then \eqref{abstract} has a solution $u\in E^p_T:=\mathcal{C}_{T}\big(B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})\big)\cap \mathcal{C}^{1}_{T}\big(B^{\frac{1}{p}}_{p,1}(\mathbb{R})\big);$
\item [\rm{(2)}] Uniqueness: If \eqref{fkl} and \eqref{wideFkl} hold, then the solution of \eqref{abstract} is unique;
\item [\rm{(3)}] Continuous dependence: If \eqref{fkl}--\eqref{Fkl} hold, then the solution map is continuous from any bounded subset of $ B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})$ to $ \mathcal{C}_{T}\big(B^{1+\frac{1}{p}}_{p,1}(\mathbb{R})\big)$.
\end{itemize}
That is, the problem \eqref{abstract} is locally well-posed in the sense of Hadamard.
\end{lemm}
\begin{proof}[\rm\textbf{The proof of Theorem \ref{well}:}]
(1) For $u_0\in B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R})$, since
$$\|u^2_x\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}\leq C\|u_x\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}^2,$$
following arguments similar to those in the proof of Lemma \ref{lagrabstr}, one can easily verify \eqref{fkl}, \eqref{wideFkl} and \eqref{Fkl}. Therefore, the local well-posedness of the Cauchy problem \eqref{ch} for the CH equation follows.
(2) However, for $u_0\in \mathcal{C}^{0,1}(\mathbb{R})$, one can get
\begin{align}
&\|F(u)\|_{C^{0,1}}\leq C\|u\|^{2}_{\mathcal{C}^{0,1}};\\
&\|\widetilde{F}(U,y)-\widetilde{F}(\bar{U},\bar{y})\|_{\mathcal{C}^{0,1}}\leq C\Big(\|U-\bar{U}\|_{\mathcal{C}^{0,1}}+\|y-\bar{y}\|_{\mathcal{C}^{0,1}}\Big);\\
&\|F(u)-F(\bar{u})\|_{C^{0,1}}\leq C\|u,\bar{u}\|_{C^{0,1}}\|u-\bar{u}\|_{C^{0,1}},
\end{align}
but we cannot obtain the local well-posedness.
In fact, the local existence and uniqueness of the solution to the CH equation in $L^{\infty}_{T}(\mathcal{C}^{0,1}(\mathbb{R}))$ can be obtained by the method of \cite{yyg}. In this case, we find a lifespan $T\approx \frac{1}{\|u_0\|_{\mathcal{C}^{0,1}}}$ and a unique local solution $u(t,x)$ such that $\|u\|_{L^{\infty}_{T}(\mathcal{C}^{0,1})}\leq C\|u_0\|_{\mathcal{C}^{0,1}}$, which means that norm inflation cannot appear in $L^{\infty}_{T}(\mathcal{C}^{0,1}(\mathbb{R}))$. However, although norm inflation does not occur, we claim that the continuous dependence for \eqref{ch} still fails. The reason is that the space $\mathcal{S}(\mathbb{R})$ is not dense in $\mathcal{C}^{0,1}(\mathbb{R})$. The well-known peakon solution $ce^{-|x-ct|}$ provides a counterexample.
Indeed, let $c_1-c_2=\frac{1}{N}$ with $c_2>1, N\in\mathbb{N}^+$. On the one hand, when $t=0$, one has
$$\|c_1e^{-|x|}-c_2e^{-|x|}\|_{\mathcal{C}^{0,1}}\leq \frac{C}{N}.$$
On the other hand, since $\partial_x(ce^{-|x-ct|})=-c e^{-|x-ct|}{\rm sgn}(x-ct)$, when $x=c_1t-\frac{t}{2N}$ for any $t\in (0,\frac{1}{N}]$, we deduce that
\begin{align*}
\Big|c_1 e^{-|x-c_1t|}{\rm sgn}(x-c_1t)-c_2 e^{-|x-c_2t|}{\rm sgn}(x-c_2t)\Big|_{x=c_1t-\frac{t}{2N}}
=&\big|-c_1 e^{-\frac{t}{2N}}-c_2e^{-\frac{t}{2N}}\big|\\
=&(c_1+c_2)e^{-\frac{t}{2N}}\\
\geq&2\times\frac{1}{2}=1,
\end{align*}
where the last inequality holds for $t\leq\frac{1}{N}$ and $N$ sufficiently large such that $\frac{t}{2N}\leq \frac{1}{2N^2}\leq\ln2$. That is,
$$
\|c_1e^{-|x|}-c_2e^{-|x|}\|_{\mathcal{C}^{0,1}}\leq \frac{C}{N}\quad\text{but}\quad\|c_1 e^{-|x-c_1t|}-c_2 e^{-|x-c_2t|}\|_{\mathcal{C}^{0,1}}\geq 1,\quad\forall~t\in (0,\frac{1}{N}]$$
which implies the solution map is not continuous from $\mathcal{C}^{0,1}(\mathbb{R})$ to $L^{\infty}_{T}(\mathcal{C}^{0,1}(\mathbb{R}))$.
\end{proof}
\section{Ill-posedness and existence}
\par
\subsection{Ill-posedness}
In this subsection, we mainly investigate the ill-posedness of the Cauchy problem \eqref{ch} for the CH equation in $B^{1}_{\infty,1}(\mathbb{R})$. To begin with, we construct a crucial counterexample which implies that the space $B^{0}_{\infty,1}(\mathbb{R})$ is not a Banach algebra.
\begin{lemm}
The space $B^{0}_{\infty,1}(\mathbb{R})$ is not a Banach algebra. That is, there exist $f,\ g\in B^{0}_{\infty,1}(\mathbb{R})$ such that
$\|fg\|_{B^{0}_{\infty,1}} \nleq C\|f \|_{B^{0}_{\infty,1}}\|g\|_{B^{0}_{\infty,1}}$.
\end{lemm}
\begin{proof}
This lemma may be a well-known result in classical books on functional analysis. However, in order to apply it to the CH equation, we must prove a stronger statement with $f=g$: there exists $f \in B^{0}_{\infty,1}(\mathbb{R})$ such that
$\|f^2\|_{B^{0}_{\infty,1}} \nleq \|f \|^2_{B^{0}_{\infty,1}}$, which also implies the lemma.
Now, choose
\begin{align}
u_{0}(x)=-(1-\partial_{xx})^{-1}\partial_{x}\Big[\cos2^{N+5}x\cdot \big(1+N^{-\frac{1}{10}}S_{N}h(x)\big)\Big]N^{-\frac{1}{10}}\label{u0}
\end{align}
where $S_{N}f=\sum\limits_{-1\leq j<N}\Delta_{j}f$ and $h(x)=1_{x\geq0}(x)$.\\
Since
\begin{align}
\mathcal{F}\big(\cos2^{N+5}x\big)\approx\delta(\xi+2^{N+5})+\delta(\xi-2^{N+5}),\label{cos}
\end{align}
we know that
\begin{align*}
\|\cos2^{N+5}x\|_{B^{0}_{\infty,1}}\leq C.
\end{align*}
On the one hand, note that the Fourier transform of $S_{N}h$ is supported in the ball $2^{N}B$, where $B$ is a ball centered at the origin, so the Fourier transform of $\cos2^{N+5}x\cdot S_{N}h$ is supported in the annulus $\pm[2^{N+5}-2^{N},2^{N+5}+2^{N}]$. It follows that
\begin{align*}
\Delta_{j}\big(\cos2^{N+5}x\cdot S_{N}h\big)=\cos2^{j}x\cdot S_{j-5}h,\quad j\approx N+5.
\end{align*}
Therefore,
\begin{align*}
\|\cos2^{N+5}x\cdot S_{N}h\|_{B^{0}_{\infty,1}}\leq C
\end{align*}
which infers that
\begin{align*}
\|u_{0}\|_{B^{1}_{\infty,1}}\big(\approx\|u_{0}\|_{L^{\infty}}+\|u_{0x}\|_{B^{0}_{\infty,1}}\big)\leq CN^{-\frac{1}{10}}.
\end{align*}
It then turns out that
\begin{align}
\|u_{0}\|_{L^{\infty}},\ \|u_{0x}\|_{B^{0}_{\infty,1}}\leq CN^{-\frac{1}{10}}.\label{u0xb0}
\end{align}
Thanks to \eqref{u0xb0} and the definition of the space $B^{0}_{\infty,\infty,1}(\mathbb{R})$, we also see
\begin{align}
\|u_{0}\|_{B^{1}_{\infty,\infty,1}}=\sup\limits_{j}j2^{j}\|\Delta_{j}u_{0}\|_{L^{\infty}}\approx\sup\limits_{j}j\|\Delta_{j}u_{0x}\|_{L^{\infty}}\leq C N^{\frac{9}{10}},\qquad j\approx N+5.\label{b1ww}
\end{align}
On the other hand, since
\begin{align*}
u_{0x}\approx\Big[\cos2^{N+5}x\cdot (1+N^{-\frac{1}{10}}S_{N}h)+\mathcal{R}_{L}^{1}\Big]N^{-\frac{1}{10}}
\end{align*}
where $\mathcal{R}_{L}^{1}=-(1-\partial_{xx})^{-1}\big[\cos2^{N+5}x\cdot (1+N^{-\frac{1}{10}}S_{N}h)\big]$ is a lower order term (here and below, $\mathcal{R}_{L}^{i}$ denotes lower order terms, without causing any confusion). Noting that $(1-\partial_{xx})^{-1}$ is an $S^{-2}$ operator, we discover
\begin{align}
\|\mathcal{R}_{L}^{1}\|_{L^{\infty}}\leq C\|\mathcal{R}_{L}^{1}\|_{B^{0}_{\infty,1}}\leq C\|\mathcal{R}_{L}^{1}\|_{B^{1}_{\infty,1}}\leq C\Big\|\cos2^{N+5}x\cdot \big(1+N^{-\frac{1}{10}}S_{N}h\big)\Big\|_{B^{0}_{\infty,1}}\leq C.\label{rj1}
\end{align}
Then, using the fact that $\cos^{2}x=\frac{1+\cos2x}{2}$, we find
\begin{align}
u_{0x}^{2}\approx&\Big[\cos^{2}2^{N+5}x\cdot (1+2N^{-\frac{1}{10}}S_{N}h+N^{-\frac{1}{5}}(S_{N}h)^{2})+\mathcal{R}_{L}^{2}\Big]N^{-\frac{2}{10}}\notag\\
\approx&\Big[N^{-\frac{1}{5}}(S_{N}h)^{2}+N^{-\frac{1}{10}}S_{N}h+\mathcal{R}_{L}^{3}\Big]N^{-\frac{1}{5}}\label{u0x2}
\end{align}
where
\begin{align*}
&\mathcal{R}_{L}^{2}=(\mathcal{R}_{L}^{1})^{2}+\cos2^{N+5}x\cdot \big(1+N^{-\frac{1}{10}}S_{N}h\big)\cdot\mathcal{R}_{L}^{1},\\
&\mathcal{R}_{L}^{3}=\cos^{2}2^{N+5}x+\cos2^{N+6}x\cdot \big(N^{-\frac{1}{10}}S_{N}h+N^{-\frac{1}{5}}(S_{N}h)^{2}\big)+\mathcal{R}_{L}^{2}.
\end{align*}
Since the Fourier transform of $S_{N}h$ is supported in $2^{N}B$, the Fourier transform of $(S_{N}h)^{2}$ is supported in $2^{2N}B$ and the Fourier transform of $\cos2^{N+6}x\cdot(S_{N}h)^{2}$ is supported in the annulus $\pm[2^{2N}-2^{N+6},2^{2N}+2^{N+6}]$. We thus have
\begin{align*}
&\Delta_{j}\big(\cos2^{N+6}x\cdot(S_{N}h)^{2}\big)=\cos2^{j}x\cdot(S_{j}h)^{2},\qquad j\approx 2N,\\
&\|\cos2^{N+6}x\cdot(S_{N}h)^{2}\|_{B^{0}_{\infty,1}}\leq C.
\end{align*}
It follows from
\eqref{rj1} that
\begin{align*}
&\|(\mathcal{R}_{L}^{1})^{2}\|_{B^{0}_{\infty,1}}\leq C\|(\mathcal{R}_{L}^{1})^{2}\|_{B^{1}_{\infty,1}}
\leq C\|\mathcal{R}_{L}^{1}\|^{2}_{B^{1}_{\infty,1}}\leq C,\\
&\|\cos2^{N+5}x\cdot \big(1+N^{-\frac{1}{10}}S_{N}h(x)\big)\cdot\mathcal{R}_{L}^{1}\|_{B^{0}_{\infty,1}}\leq C.
\end{align*}
Hence,
\begin{align}
&\|\mathcal{R}_{L}^{2}\|_{B^{0}_{\infty,1}},\quad\|\mathcal{R}_{L}^{3}\|_{B^{0}_{\infty,1}}\leq C,\label{rj3}\\
&\|(S_{N}h)^{2}\|_{B^{0}_{\infty,1}}\leq\sum\limits_{j<2N+1}\|\Delta_{j}(S_{N}h)^{2}\|_{L^{\infty}}\leq C\sum\limits_{j<2N+1}\|(S_{N}h)^{2}\|_{L^{\infty}}\leq CN.\label{snh2}
\end{align}
Observing that
\begin{align*}
\Delta_{j}h=&\int_{\mathbb{R}}2^{j}\check{\varphi}(2^{j}(x-y))1_{y\geq0}(y){\ud}y\\
=&\int_{0}^{+\infty}2^{j}\check{\varphi}(2^{j}(x-y)){\ud}y\\ =&\int_{0}^{+\infty}\check{\varphi}(2^{j}x-\eta){\ud}\eta:=f(2^{j}x),
\end{align*}
we get
\begin{align*}
\|\Delta_{j}h\|_{L^{\infty}}=\|f(2^{j}\cdot)\|_{L^{\infty}}=\|f(\cdot)\|_{L^{\infty}}=\|\Delta_{0}h\|_{L^{\infty}}.
\end{align*}
It therefore turns out that
\begin{align}
\|S_{N}h\|_{B^{0}_{\infty,1}}=&\sum\limits_{j\leq N}\|\Delta_{j}\sum\limits_{k< N,|k-j|\leq1}\Delta_{k}h\|_{L^{\infty}}\notag\\
=&\sum\limits_{j\leq N}\|\Delta_{j}h\|_{L^{\infty}}=\sum\limits_{j\leq N}\|\Delta_{0}h\|_{L^{\infty}}\approx N\|\Delta_{0}h\|_{L^{\infty}}\approx C_{1}N.\label{snh}
\end{align}
Estimates \eqref{u0x2}, \eqref{rj3}, \eqref{snh2} and \eqref{snh} together yield
\begin{align}
\|u_{0x}^{2}\|_{B^{0}_{\infty,1}}\geq&\Big(\|N^{-\frac{1}{10}}S_{N}h\|_{B^{0}_{\infty,1}}-\|N^{-\frac{1}{5}}(S_{N}h)^{2}\|_{B^{0}_{\infty,1}}-C\Big)N^{-\frac{1}{5}}\notag\\
\geq& C_{1}NN^{-\frac{1}{10}}N^{-\frac{1}{5}}-CNN^{-\frac{1}{5}}N^{-\frac{1}{5}}-CN^{-\frac{1}{5}}\notag\\
\approx&\frac{C_{1}}{2}N^{\frac{7}{10}}\notag\\
\geq&CN^{\frac{3}{5}}.\label{fanli}
\end{align}
That is, we finally deduce
\begin{align}
\|u_{0x}\|_{B^{0}_{\infty,1}}\leq CN^{-\frac{1}{10}},\quad \|u_{0x}^{2}\|_{B^{0}_{\infty,1}}\geq CN^{\frac{3}{5}}\label{u0x2in}
\end{align}
which implies that the space $B^{0}_{\infty,1}(\mathbb{R})$ is not a Banach algebra by setting $f=u_{0x}$.
\end{proof}
\begin{proof}[\rm{\textbf{The proof of Theorem \ref{ill}:}}] Let $u$ be the solution to \eqref{ch} with the initial data $u_{0}$ defined in \eqref{u0}. Owing to Theorem \ref{well}, the CH equation has a solution $u(t,x)$ with the initial data $u_{0}$ in $\mathcal{C}^{0,1}(\mathbb{R})$ such that
\begin{align}
\|u(t)\|_{\mathcal{C}^{0,1}}\leq C\|u_0\|_{\mathcal{C}^{0,1}}\leq CN^{-\frac{1}{10}},\quad \forall~ t\in[0,T_{0}]\label{uxinfty}
\end{align}
where $T_{0}<\frac{1}{4C\|u_{0}\|_{\mathcal{C}^{0,1}}},\ \|u_{0}\|_{\mathcal{C}^{0,1}}\leq C \|u_{0}\|_{B^1_{\infty,1}}\leq CN^{-\frac{1}{10}}$ and $C$ is a constant independent of $N$.
Set
\begin{align}
\frac{\ud}{\ud t}y(t,\xi)=u(t,y(t,\xi)),\quad y_{0}(\xi)=\xi. \label{char}
\end{align}
Therefore, according to \eqref{uxinfty} and \eqref{char}, we can find a sufficiently small $T_1>0$ such that $\frac{1}{2}\leq y_{\xi}(t)\leq2$ for any $t\in[0,\min\{T_0,T_1\}]$. Let $\bar{T}=\frac{2}{N^{\frac{1}{2}}}\leq \min\{T_0,T_1\}$ for $N>10$ large enough. To prove the norm inflation, it suffices to show that there exists a time $t_{0}\in[0,\frac{2}{N^{\frac{1}{2}}}]$ such that $\|u_{x}(t_{0})\|_{B^{0}_{\infty,1}}\geq \ln N$ for $N>10$ large enough. Let us assume the opposite; namely, we suppose that
\begin{align}
\sup\limits_{t\in[0,\frac{2}{N^{\frac{1}{2}}}]}\|u_{x}(t)\|_{B^{0}_{\infty,1}}<\ln N.\label{uxln}
\end{align}
Applying $\Delta_{j}$ and the Lagrange coordinates to Eq. \eqref{nonch}, and then integrating with respect to $t$, we get
\begin{align}
(\Delta_{j}u)\circ y=\Delta_{j}u_{0}+\int_{0}^{t}\underbrace{-R_{j}\circ y}_{I_{1}}+\underbrace{(\Delta_{j}\mathcal{R}_{L}^{4})\circ y}_{I_{2}}+\underbrace{\Delta_{j}E\circ y-\Delta_{j}E_{0}}_{I_{3}}{\ud}s+t\Delta_{j}E_{0}\label{delta}
\end{align}
where
\begin{align*}
&R_{j}=\Delta_{j}(uu_{x})-u\Delta_{j}u_{x},\\
&\mathcal{R}_{L}^{4}=-\big[(1-\partial_{xx})^{-1}\partial_{x}(u^{2})\big],\\
&E(t,x)=-(1-\partial_{xx})^{-1}\partial_{x}(\frac{u^2_x}{2}).
\end{align*}
Let $T=\frac{1}{N^{\frac{1}{2}}}$ (indeed, any $T\in [\frac{1}{2N^{\frac{1}{2}}},\frac{3}{2N^{\frac{1}{2}}}]$ also works). We will obtain the norm inflation by the following estimates:\\
${\rm(i)}$ Following a proof similar to that of Lemma 2.100 in \cite{book}, we see
\begin{align*}
\sum\limits_{j}2^{j}\|I_{1}\|_{L^{\infty}}\leq\sum\limits_{j}2^{j}\|R_{j}\|_{L^{\infty}}\leq \|u_{x}\|_{L^{\infty}}\|u\|_{B^{1}_{\infty,1}}\leq\|u\|_{\mathcal{C}^{0,1}}\cdot\ln N\leq \frac{C\ln N}{N^{\frac{1}{10}}}.
\end{align*}
${\rm(ii)}$ According to Bony's decomposition, we find
\begin{align*}
\sum\limits_{j}2^{j}\|I_{2}\|_{L^{\infty}}\leq\sum\limits_{j}2^{j}\|\Delta_{j}\mathcal{R}_{L}^{4}\|_{L^{\infty}}\leq C\|u\|_{L^{\infty}}\|u\|_{B^{1}_{\infty,1}}\leq\|u\|_{\mathcal{C}^{0,1}}\cdot\ln N\leq \frac{C\ln N}{N^{\frac{1}{10}}}.
\end{align*}
${\rm(iii)}$ Now we estimate $I_{3}$. Noting that $u(t,x)\in L^{\infty}_{T}(\mathcal{C}^{0,1}(\mathbb{R}))$ is a solution to the CH equation, we have
\begin{equation}\label{E}
\left\{\begin{array}{l}
\frac{d}{dt}E+u\partial_{x}E=G(t,x),\quad t\in (0,T],\\
E(0,x)=E_{0}(x)=-(1-\partial_{xx})^{-1}\partial_{x}(\frac{u^2_{0x}}{2})
\end{array}\right.
\end{equation}
where $G(t,x)=\frac{u^3}{3}-u(1-\partial_{xx})^{-1}\big(\frac{u_x^2}{2}\big)-(1-\partial_{xx})^{-1}
\Big(\frac{u^3}{3}-\frac{1}{2}uu^2_x
-\partial_{x}\big[u_x(1-\partial_{xx})^{-1}(u^2+\frac{u_x^2}{2})\big]\Big)$. Since $(1-\partial_{xx})^{-1}$ is an $S^{-2}$ operator in nonhomogeneous Besov spaces, one can easily get
\begin{align}
\|G(t)\|_{B^{1}_{\infty,1}}\leq C\|u(t)\|^2_{\mathcal{C}^{0,1}}\|u(t)\|_{B^{1}_{\infty,1}}\leq CN^{-\frac{1}{5}}\ln N,\quad\forall~t\in(0,T].\label{nonlin}
\end{align}
Applying $\Delta_{j}$ and the Lagrange coordinates to \eqref{E} yields
\begin{align}\label{22}
(\Delta_{j}E)\circ y-\Delta_{j}E_0=\int_0^t\tilde{R}_{j}\circ y+ (\Delta_{j}G)\circ y{\ud}s
\end{align}
where $\tilde{R}_{j}=u\partial_x\Delta_{j} E-\Delta_{j}\big(u\partial_xE\big)$. By Lemmas \ref{b01} and \ref{rj}, we discover
\begin{align}
\sum2^j
\|\tilde{R}_{j}\circ y\|_{L^{\infty}}=&\sum2^j
\|\tilde{R}_{j}\|_{L^{\infty}}\leq C\|u\|_{B^{1}_{\infty,1}}\|E\|_{B^{1}_{\infty,1}}\notag\\
\leq& C\|u\|_{B^{1}_{\infty,1}}\|u_{x}\|_{B^{0}_{\infty,1}}\|u_{x}\|_{B^{0}_{\infty,\infty,1}}\leq C(\ln N)^{2}\|u_{x}\|_{B^{0}_{\infty,\infty,1}}.\label{comm}
\end{align}
Thereby, we deduce that
\begin{align}
\sum\limits_{j}2^{j}\big\|\Delta_{j}E\circ y-\Delta_{j}E_0\big\|_{L^{\infty}}\leq&C\int_{0}^{t}\sum\limits_{j}2^{j}\|\tilde{R}_{j}\circ y\|_{L^{\infty}}+\sum\limits_{j}2^{j}\|\Delta_{j}G\circ y\|_{L^{\infty}}{\ud}s\notag\\
\leq&CT\cdot(\ln N)^{2}\cdot\|u_{x}\|_{L^{\infty}_{T}(B^{0}_{\infty,\infty,1})}+CT\cdot N^{-\frac{1}{5}}\cdot\ln N.\label{bern}
\end{align}
Moreover, note that $u_{x}$ solves
\begin{align*}
u_{xt}+uu_{xx}=u^{2}-\frac{1}{2}u_{x}^{2}-(1-\partial_{xx})^{-1}\big(u^{2}+\frac{1}{2}u_{x}^{2}\big).
\end{align*}
Then, following proofs similar to those of Lemma \ref{existence} and Lemma \ref{priori estimate}, we find
\begin{align}
\|u_{x}\|_{L^{\infty}_{T}(B^{0}_{\infty,\infty,1})}\leq&\|u_{x}\|_{L^{\infty}_{T}(B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1})}\notag\\
\leq&\|u_{0x}\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}+C\int_{0}^{T}\|u_{x}^{2}\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}+\|u^{2}\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}\ud\tau\notag\\
\leq&\|u_{0x}\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}+C\int_{0}^{T}\|u_{x}\|_{B^{0}_{\infty,1}}\|u_{x}\|_{B^{0}_{\infty,\infty,1}}+\|u^{2}\|_{B^{1}_{\infty,\infty}}\ud\tau\notag\\
\leq&\|u_{0x}\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}+C\int_{0}^{T}\|u_{x}\|_{B^{0}_{\infty,1}}\|u_{x}\|_{B^{0}_{\infty,\infty,1}}+\|u^{2}\|_{\mathcal{C}^{0,1}}\ud\tau\notag\\
\leq&CN^{\frac{9}{10}}+\underbrace{CN^{-\frac{1}{2}}\cdot\ln N}_{\text{which is a small quantity}}\|u_{x}\|_{L^{\infty}_{T}(B^{0}_{\infty,\infty,1})}+C\notag\\
\leq&CN^{\frac{9}{10}}.\label{uxinin1}
\end{align}
Plugging \eqref{uxinin1} into \eqref{bern}, we discover
\begin{align}\label{eb1}
\sum\limits_{j}2^{j}\big\|I_{3}\big\|_{L^{\infty}}\leq CN^{\frac{9}{10}-\frac{1}{2}}(\ln N)^{2}+CN^{-\frac{1}{2}-\frac{1}{5}}\ln N.
\end{align}
Multiplying both sides of \eqref{delta} by $2^{j}$ and performing the $l^{1}$ summation, by ${\rm(i)}$--${\rm(iii)}$ we obtain, for any $t\in[0,T]$,
\begin{align*}
\|u(t)\|_{B^{1}_{\infty,1}}=&\sum\limits_{j}2^{j}\big\|\Delta_{j}u\big\|_{L^{\infty}}=\sum\limits_{j}2^{j}\big\|\Delta_{j}u\circ y\big\|_{L^{\infty}}\\
\geq&t\|E_{0}\|_{B^{1}_{\infty,1}}-Ct\big(N^{-\frac{1}{10}}\ln N+N^{\frac{9}{10}-\frac{1}{2}}(\ln N)^{2}+N^{-\frac{1}{2}-\frac{1}{5}}\ln N\big)-\|u_{0}\|_{B^{1}_{\infty,1}}\\
\geq&Ct\Big(\frac{1}{4}N^{\frac{3}{5}}-N^{-\frac{1}{10}}\ln N-N^{\frac{9}{10}-\frac{1}{2}}(\ln N)^{2}-N^{-\frac{1}{2}-\frac{1}{5}}\ln N\Big)-C\\
\geq&\frac{1}{8}tN^{\frac{3}{5}}-C
\end{align*}
where the second inequality holds by \eqref{fanli}. That is,
$$\|u(t)\|_{B^{1}_{\infty,1}}\geq\frac{1}{16}N^{\frac{3}{5}-\frac{1}{2}}-C,\quad \forall t\in [\frac{1}{2N^{\frac{1}{2}}},\frac{1}{N^{\frac{1}{2}}}].$$
Hence,
\begin{align}
\sup\limits_{t\in[0,\frac{1}{N^{\frac{1}{2}}}]}\|u(t)\|_{B^{1}_{\infty,1}}\geq\frac{1}{16}N^{\frac{3}{5}-\frac{1}{2}}-C>\ln N\qquad\qquad\text{for $N>10$ large enough}
\end{align}
which contradicts the hypothesis \eqref{uxln}.
In conclusion, we obtain for $N>10$ large enough
\begin{align*}
&\|u\|_{L^{\infty}_{\bar{T}}(B^{1}_{\infty,1})}\geq\|u_{x}\|_{L^{\infty}_{\bar{T}}(B^{0}_{\infty,1})}\geq\ln N,\quad\quad \bar{T}=\frac{2}{N^{\frac{1}{2}}},\\
&\|u_{0}\|_{B^{1}_{\infty,1}}\lesssim N^{-\frac{1}{10}},
\end{align*}
that is, we obtain the norm inflation and hence the ill-posedness of the CH equation. Thus, Theorem \ref{ill} is proved.
\end{proof}
\begin{rema}
In the proof, we have constructed an initial data $u_{0}$ such that
$$\|u_{0x}\|_{B^{0}_{\infty,1}}\lesssim N^{-\frac{1}{10}},~~\|u^2_{0x}\|_{B^{0}_{\infty,1}}\approx N^{\frac{7}{10}}.$$
Since $\|f^2 \|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}\leq \|f\|_{B^{0}_{\infty,1}}\|f \|_{B^{0}_{\infty,\infty,1}}$, to obtain the norm inflation in $B^{1}_{\infty,1}(\mathbb{R})$, one needs to construct an initial data $u_{0}$ such that
\begin{align}\label{guanjian}
c_2N^{\frac{7}{10}}\approx\|u^2_{0x}\|_{B^{0}_{\infty,1}}\leq\|u^2_{0x}\|_{B^{0}_{\infty,1}\cap B^{0}_{\infty,\infty,1}}\approx c_1N^{\frac{7}{10}},~~c_1\geq c_2.
\end{align}
Note that the CH equation is locally well-posed in $B^{1}_{\infty,1}(\mathbb{R})\cap B^{1}_{\infty,\infty,1}(\mathbb{R})$ (see Theorem \ref{well}), so if we want to prove the ill-posedness in $B^{1}_{\infty,1}(\mathbb{R})$, the estimate \eqref{guanjian} is the key point.
\end{rema}
\subsection{The existence of solutions to the CH equation with $u_{0x}^{2}\in B^{0}_{\infty,1}(\mathbb{R})$}
\par
In this subsection, we give the existence of the solutions to the Cauchy problem \eqref{ch} with $u_{0x}^{2}\in B^{0}_{\infty,1}(\mathbb{R})$.
\begin{proof}[\rm\textbf{The proof of Theorem \ref{non}}:] Thanks to $u_0\in{B^1_{\infty,1}}(\mathbb{R})\hookrightarrow \mathcal{C}^{0,1}(\mathbb{R})$, one can obtain a unique solution $u\in L^{\infty}_T(\mathcal{C}^{0,1}(\mathbb{R}))$ ($\|u\|_{L^{\infty}_T(\mathcal{C}^{0,1})}\leq C\|u_0\|_{\mathcal{C}^{0,1}}$) by using {\rm(2)} in Theorem \ref{well}. Moreover, one can choose a sufficiently small time $0< T_{0}<1$ such that $\frac{1}{2}\leq y_{\xi}(t,\xi)\leq 2$, where $y(t,\xi)$ satisfies \eqref{char}.
We rewrite the CH equation as follows:
\begin{equation*}
\left\{\begin{array}{l}
\frac{\ud}{\ud t}u+u\partial_{x}u=E-(1-\partial_{xx})^{-1}\partial_{x}(u^2),\quad t\in (0,T_{0}],\\
u(0,x)=u_0(x).
\end{array}\right.
\end{equation*}
Using again the embedding $\mathcal{C}^{0,1}(\mathbb{R})\hookrightarrow B^{1}_{\infty,\infty}(\mathbb{R})$, we see
\begin{align}
\|(1-\partial_{xx})^{-1}\partial_{x}(u^2)\|_{B^{1}_{\infty,1}}\leq C\|u^2\|_{\mathcal{C}^{0,1}}\leq C\|u_0\|^2_{B^1_{\infty,1}}.\label{u2}
\end{align}
Therefore, according to Lemma \ref{priori estimate} and the Gronwall inequality, we obtain
\begin{align}
\|u(t)\|_{B^1_{\infty,1}}\leq C_{u_{0}}\Big(\|u_0\|_{B^1_{\infty,1}}+\|u_0\|^2_{B^1_{\infty,1}}+ \int_0^t\|E(s)\|_{B^1_{\infty,1}}\ud s \Big),\qquad\ \forall~t\in(0,T_{0}].\label{ub1}
\end{align}
On the other hand, \eqref{nonlin}, \eqref{22} and \eqref{comm} together infer that
\begin{align}
\|E(t)\|_{B^{1}_{\infty,1}}\leq
\|E_0\|_{B^{1}_{\infty,1}}+C_{u_{0}}\int_0^t \|u(s)\|_{B^{1}_{\infty,1}}\|E(s)\|_{B^{1}_{\infty,1}}+ \|u(s)\|_{B^{1}_{\infty,1}}{\ud}s,\quad\ \forall~t\in(0,T_{0}].\label{ee}
\end{align}
Finally, combining \eqref{ub1} and \eqref{ee}, we deduce the following a priori estimate
\begin{align}
\|u(t)\|_{B^{1}_{\infty,1}}+\|E(t)\|_{B^{1}_{\infty,1}}
\leq & C_{u_{0}}\Big(\|u_{0}\|_{B^{1}_{\infty,1}}+\|u_{0}\|^2_{B^{1}_{\infty,1}}+\|E_{0}\|_{B^{1}_{\infty,1}}
\notag\\
&+\int_0^t \|u(s)\|_{B^{1}_{\infty,1}}\|E(s)\|_{B^{1}_{\infty,1}}+ \|u(s)\|_{B^{1}_{\infty,1}}{\ud}s\Big),\quad \forall~t\in[0,T_0].
\end{align}
Therefore, performing calculations similar to those in \cite{yyg}, we can choose a lifespan $T\approx \frac{1}{\|u_{0}\|_{B^{1}_{\infty,1}}+\|u_{0}\|^2_{B^{1}_{\infty,1}}+\|E_{0}\|_{B^{1}_{\infty,1}}+1}$ such that $\|u\|_{{L}^{\infty}_T(B^{1}_{\infty,1})}\leq C\|u_0\|_{B^{1}_{\infty,1}}$. Then, one can obtain a unique solution $u(t,x)\in \mathcal{C}_T(B^{1}_{\infty,1}(\mathbb{R}))\cap \mathcal{C}^{1}_T(B^{0}_{\infty,1}(\mathbb{R}))$, and the norm inflation does not occur in this case. This proves Theorem \ref{non}.
\end{proof}
\noindent\textbf{Acknowledgements.}
The authors are grateful to Professor Pierre-Gilles Lemari\'{e}-Rieusset and Professor Changxing Miao for helpful discussions on constructing a counterexample showing that $B^{0}_{\infty,1}$ is not a Banach algebra. Guo was partially supported by the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515111092) and Research Fund of Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology (No. 2020B1212030010). Ye and Yin were partially supported by NNSFC (No. 11671407), FDCT (No. 0091/2013/A3), Guangdong Special Support Program (No. 8-2015) and the key project of NSF of Guangdong Province (No. 2016A030311004).
\addcontentsline{toc}{section}{\refname}
\section{Introduction}
The high-performance computing power of GPUs has fostered a strong software ecosystem based on GPU programs.
Although there are other choices for GPU programming, such as HIP~\cite{hip}, OpenMP and DPC~\cite{dpct}, CUDA has remained dominant in recent years. In the realm of deep learning, both of the two most popular frameworks, PyTorch and TensorFlow, support only CUDA as the GPU backend \cite{asaduzzaman2021impact}. In the multimedia realm, among the video editing applications listed in \cite{GPU-roundup}, CUDA is compatible with 14 of the 17 applications, while OpenCL is supported by only 9 of them.
Unfortunately, despite the popularity of CUDA programs, NVIDIA GPUs are the main hardware platforms that run them. Although there have been several efforts to run CUDA on non-NVIDIA GPUs, they lack support for newer CUDA features.
There are two main challenges in running CUDA on other platforms. The first is converting Single Program Multiple Data (SPMD) programs to programs for non-SPMD-friendly architectures. In the SPMD programming model, the same kernel is executed by many threads at runtime, and GPUs are built as throughput-oriented architectures that support many threads (or warps). However, other architectures often do not have that many threads, so CUDA programs need to be converted efficiently to use fewer threads. The second challenge is continuous support for still-evolving programming models like CUDA. To address the second challenge, we utilize the open-source compiler framework LLVM as much as possible. In this paper, we tackle the first challenge: supporting CUDA with a smaller number of threads, which is an essential component for running CUDA on X86, ARM, or Intel GPUs, which have fewer hardware threads than NVIDIA GPUs.
Several projects aim to support this transformation: running GPU programs on CPUs~\cite{stratton2008mcuda, diamos2010ocelot,jaaskelainen2015pocl,stratton2010efficient,hipify,dpct,intel-opencl,PGI,karrenberg2012improving,diamos2010dynamic,CudaEmulator,chen2018enabling,gummaraju2010twin}. While a few projects focus on narrowing the gap between SPMD and MPMD by adding a hardware extension~\cite{chen2018enabling} or system-level support for faster context switching~\cite{gummaraju2010twin}, most projects apply compiler-level transformations to translate GPU functions into a form suitable for CPUs. These projects use the same granularity of transformation: a CPU thread is responsible for executing all the threads in a CUDA block (or OpenCL work-group). The CUDA-block-to-CPU-thread mapping is preferred based on three observations: 1) it requires fewer CPU threads than a CUDA-thread-to-CPU-thread mapping, which leads to lower context-switching overhead; 2) memory accesses within a CUDA block utilize GPU caches, both through shared memory in the CUDA programming model (local memory in OpenCL) and through global memory accesses with spatial/temporal locality; since all memory accesses from a CUDA block are mapped to one CPU thread, they also utilize CPU caches \cite{stratton2013performance}; 3) threads within a block have similar computation patterns, which makes them amenable to the SIMD instructions \cite{jeong2012performance,karrenberg2012improving} common in current CPU architectures, enabling further optimizations~\cite{nuzman2006auto,nuzman2006multi,kar:hac11,nuzman2006autovectorization,maleki2011evaluation,porpodas2017supergraph,rosen2007loop}. The transformation is shown in Figure \ref{fig:SPMD}(b): for an original GPU kernel, the translator first splits it into regions according to synchronization instructions and then wraps each region with a loop whose size equals the GPU block size.
However, this transformation was proposed for early GPU programming models and cannot support several important features introduced in recent ones. One of the significant changes is the warp-level programming model in CUDA\footnote{Warp is now officially a part of the programming model instead of being a microarchitecture concept.}, and this new feature is critical for achieving high performance. We propose hierarchical collapsing to support these warp-level features on CPUs. Although this might sound like a trivial extension, it is critical to identify new types of barriers at the warp level; warp- and block-level barriers form a hierarchical relationship that complicates translating loops or branches into a CPU-friendly version. Throughout the paper, the focus is on the CUDA programming model, but the same techniques are also applicable to other SPMD programming models (e.g., OpenCL\cite{munshi2009opencl}, HIP~\cite{hip}, DPC~\cite{dpct}). Based on hierarchical collapsing, we implement COX, a framework that efficiently executes CUDA programs with the latest features on X86 devices. COX also uses SIMD instructions explicitly to take advantage of the hardware features of the latest CPUs.
The main contributions of this paper are as follows:
\begin{itemize}
\item propose hierarchical collapsing, which enables correct mapping of GPU programs that use warp-level functions to CPU programming models, and implement it with LLVM passes.
\item extend the Parallel Region concept to the Hierarchical Parallel Region to enable correct translation when GPU programs contain warp-level functions.
\item implement the COX framework, which executes CUDA source code on CPUs. The framework includes a new LLVM pass that implements hierarchical collapsing and a lightweight runtime system.\footnote{The framework will be released as open source once the paper is accepted.}
\end{itemize}
\begin{figure*}[ht]
\centering
\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/SPMD.pdf}
\caption{GPU SPMD}
\end{subfigure}
\begin{subfigure}[t]{0.27\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/SPMD_CPU.pdf}
\caption{Output of flat collapsing}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/SPMD_CPU_WARP.pdf}
\caption{Output of hierarchical collapsing}
\end{subfigure}
\caption{The programming models of the input CUDA SPMD kernel and of the CPU programs produced by flat collapsing and hierarchical collapsing}
\label{fig:SPMD}
\end{figure*}
\section{Background and Motivation\label{sec:background}}
\subsection{Running SPMD Programming Models on CPUs \label{sec:previous_concept}}
The basic mechanism to support SPMD programming models on CPUs is to map a thread block/work-group to a CPU thread and iterate over the threads of the block/work-group using a loop~\cite{kar:hac11, stratton2008mcuda,diamos2010ocelot,jaaskelainen2015pocl,karrenberg2012improving}. Figure~\ref{fig:SPMD}(a) and (b) show the input and output of this basic process. This transformation has different names in different projects: microthreading~\cite{stratton2010efficient}, thread aggregation~\cite{zhang2013improving}, thread-fusion~\cite{diamos2010ocelot}, region-based serialization~\cite{stratton2013performance}, loop chunking~\cite{shirako2009chunking}, and kernel serialization~\cite{blomkvist2021cumulus}. In this paper, this transformation is called flat collapsing, as it uses a single loop to represent all threads within a block. The loop can be vectorized by the compiler, and multiple CPU threads (essentially multiple CUDA blocks) are executed in parallel on multiple cores using a runtime system such as pthreads or OpenMP. When a GPU program contains synchronization primitives (e.g., {\tt \_\_syncthreads()}), the loop needs to be split (loop fission).
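As an illustration of flat collapsing, consider a hypothetical two-region kernel matching Figure~\ref{fig:SPMD}: {\tt statements-A} writes a temporary array, a barrier follows, and {\tt statements-B} reads it in reversed order. The sketch below shows the result of loop fission at the barrier; the names {\tt kernel\_flat} and {\tt B\_SIZE} and the statement bodies are illustrative, not COX output.

\begin{lstlisting}[caption={Sketch of flat collapsing with loop fission (hypothetical kernel)},language=C]
#define B_SIZE 8 /* hypothetical block size */

/* Original GPU kernel (sketch):
 *   tmp[tx] = tx;                    // statements-A
 *   __syncthreads();
 *   out[tx] = tmp[B_SIZE - 1 - tx];  // statements-B
 */
void kernel_flat(int tmp[B_SIZE], int out[B_SIZE]) {
  for (int tx = 0; tx < B_SIZE; tx++) /* PR for statements-A */
    tmp[tx] = tx;
  /* the barrier becomes the boundary between the two loops */
  for (int tx = 0; tx < B_SIZE; tx++) /* PR for statements-B */
    out[tx] = tmp[B_SIZE - 1 - tx];
}
\end{lstlisting}

Without the barrier, a single loop over {\tt tx} would suffice; the barrier forces the fission so that every thread finishes {\tt statements-A} before any thread starts {\tt statements-B}.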
Below is some of the terminology used in~\cite{jaaskelainen2015pocl, stratton2008mcuda, karrenberg2012improving} to support this transformation in general cases:
\begin{itemize}
\item \textbf{Parallel Region (PR)}: These are the regions between barriers that must be executed by all the threads within a block before proceeding to the next region. Typically, loops are generated to wrap each PR. In Figure \ref{fig:SPMD}, {\tt statements-A} and {\tt statements-B} form two PRs.
\item \textbf{Extra barrier}: Unlike explicit barriers inserted by programmers (e.g., {\tt \_\_syncthreads()}), flat collapsing inserts extra barriers that are necessary to define the Parallel Regions. Flat collapsing groups the instructions between barriers into PRs and wraps these PRs with loops. However, when barriers are present in conditional statements, the situation becomes more complex. For example, to transform a CUDA kernel that has a barrier within an if–then construct, flat collapsing has to insert extra barriers into the original CFG so that it can identify correct PRs in a later step.
\end{itemize}
The definition of Parallel Region used in previous projects for flat collapsing cannot support warp-level features: flat collapsing generates a single loop for each PR to simulate all threads within a block, and this coarse-grain simulation cannot distinguish threads in different warps. In this paper, an extended definition is proposed and used to support warp-level features. (See Section~\ref{sec:hierarchical_PR} for details.)
\subsection{Warp-level Programming Features \label{sec:cuda_feature}}
\subsubsection{Warp-Level Collectives\label{sec:warp_collectives_intro}}
CUDA provides a list of warp-level collective functions,\footnote{\url{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html##warp-shuffle-functions}} which are essential for implementing high-performance reductions. This section introduces two of them that are commonly used in existing benchmarks.
\begin{itemize}
\item \textbf{Warp shuffle}: In early CUDA versions, although most GPUs have local memory and global memory to support data exchange between threads, there was no efficient way to exchange data among threads within a warp. To efficiently exchange data stored in registers, CUDA provides a series of warp shuffle instructions: when a warp shuffle instruction is invoked, a thread can send its local data to another thread in the same warp. Warp shuffle can be combined with warp vote to write more flexible programs.
\item \textbf{Warp Vote}: Beyond exchanging data among threads within a warp, warp vote instructions can directly perform logical reductions (e.g., all, any) over local variables, controlled by the {\tt mask} argument. These features, used together with warp shuffle and cooperative groups, are necessary to implement high-performance reduction kernels.
\end{itemize}
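To make the warp shuffle semantics concrete, the following sketch emulates a full-warp {\tt \_\_shfl\_down\_sync} on a CPU with a staging array. The names {\tt shfl\_down\_warp} and {\tt staging} are illustrative; lanes whose source lane falls outside the warp keep their own value, matching the CUDA semantics for a full mask.

\begin{lstlisting}[caption={CPU emulation sketch of a full-warp shuffle-down},language=C]
#define WARP_SIZE 32

/* Lane tx receives the value held by lane tx+offset; out-of-range
 * lanes keep their own value. The two loops are separated by an
 * implicit warp-level barrier in the real translation. */
void shfl_down_warp(const int val[WARP_SIZE], int offset,
                    int out[WARP_SIZE]) {
  int staging[WARP_SIZE];
  for (int tx = 0; tx < WARP_SIZE; tx++) /* write phase */
    staging[tx] = val[tx];
  for (int tx = 0; tx < WARP_SIZE; tx++) /* read phase */
    out[tx] = (tx + offset < WARP_SIZE) ? staging[tx + offset]
                                        : staging[tx];
}
\end{lstlisting}

Iterating with offsets 16, 8, 4, 2, 1 and accumulating {\tt out} into {\tt val} leaves the warp-wide sum in lane 0, which is exactly the reduction pattern used later in this paper.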
\subsubsection{Cooperative group\label{sec:cooperative_group_intro}}
In the early CUDA programming models, there are only two explicit groups of threads: block and grid; users cannot easily organize a sub-group of threads within a block. NVIDIA introduced a new concept in CUDA 9.0 called the {\tt cooperative group}. The corresponding instructions allow users to group threads within a block, and such a group can further be used for data exchange. There are two kinds of grouping strategies: static and dynamic. For static grouping, whether a thread belongs to a group is known at compile time (e.g., group the threads with index 0 and 1), while for dynamic grouping, it is known only at runtime (e.g., group all activated threads).
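The reason static grouping is compiler-friendly can be sketched as follows: for a static partition (e.g., fixed tiles of 8 threads within a warp), a thread's group id and rank within the group are pure functions of its thread index, so membership can be resolved at compile time. The helpers below are hypothetical illustrations, not part of the CUDA API.

\begin{lstlisting}[caption={Sketch of static group membership (hypothetical helpers)},language=C]
#define TILE_SIZE 8 /* hypothetical static tile size */

/* For a static tiled partition, group membership depends only on
 * the thread index; no runtime information is needed. */
int tile_id(int tx)   { return tx / TILE_SIZE; }
int tile_rank(int tx) { return tx % TILE_SIZE; }
\end{lstlisting}

For dynamic grouping (e.g., grouping all activated threads), no such closed form exists, which is why only static grouping is addressed by a purely compile-time transformation.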
\subsubsection{Limitation of COX}
This project focuses mainly on compiler-level transformation; thus, only the static features can be addressed. The latest CUDA supports several dynamic features. For example, for warp-level collective operations, users can assign only a sub-group of threads within the warp to perform the collective operation; the sub-group is specified by a mask argument at runtime. For cooperative groups, users can also group all activated threads into a group at runtime. Although these dynamic features provide more flexibility, they can be harmful for performance, as they may incur warp divergence. Thus, most high-performance implementations~\cite{CUB,nai2015graphbig} use warp-level collectives and cooperative groups without warp divergence. In the following sections, only the non-warp-divergent use cases of these features are considered. For the same reason, only aligned barriers\footnote{\url{https://docs.nvidia.com/cuda/parallel-thread-execution/index.html##parallel-synchronization-and-communication-instructions-bar}} are considered; in other words, for a block/warp barrier, it is assumed that either all or none of the threads within the block/warp reach it.
\subsection{Motivation}
\label{subsec:motiv}
Section~\ref{sec:previous_concept} introduced the concepts of Parallel Region (PR) and extra barrier. This section discusses, with examples, the limitations of these concepts and how to extend them.
Consider the input kernel shown in Code~\ref{code:gpu_shfl},\footnote{This example is a simplified version of {\tt reduction\_kernel.cu} in the CUDA 10.1 SDK.} whose block size is $b\_size$. The code accumulates the variable $val$ within the first warp and stores the accumulated value in the first thread of this warp. Figure~\ref{fig:shfl_explanation} illustrates the last two iterations.
\begin{lstlisting}[caption={Input GPU reduction kernel},label={code:gpu_shfl},language=C]
int val = 1;
if (threadIdx.x < 32) {
for (int offset = 16; offset > 0; offset /= 2)
val += __shfl_down_sync(-1, val, offset);
}
\end{lstlisting}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\columnwidth,trim=4 12 4 4,clip]{figures/shfl_down.png}
\caption{Explanation of using {\tt shfl\_down\_sync} to implement reduction.}
\label{fig:shfl_explanation}
\end{figure}
Although none of the existing projects can support this kernel, flat collapsing would presumably generate code with the following steps:
\begin{itemize}
\item group consecutive instructions between two barriers, and wrap each group with a for-loop whose trip count equals the block size. In Code~\ref{code:cpu_shfl_v1}, there are three groups, separately wrapped by the three for-loops in lines 3, 6, and 9. Note that the loop in line 5 comes from the source input;
\item replace the use of {\tt threadIdx.x} by the loop iteration variable {\tt tx};
\item replicate variables (e.g., {\tt val}) used by more than one group into an array form;
\end{itemize}
Unfortunately, these steps are insufficient to generate a correct CPU program for this GPU kernel. The key reason is that there are implicit warp-level barriers derived from {\tt shfl\_down()}: each thread within a warp has to first calculate its {\tt val} and then invoke {\tt shfl\_down()} at the same time (see Section~\ref{subsec:warp} for details). These warp-level barriers are inside the branch of an if–then construct, which creates a more complex situation.
In previous projects, there are only block-level barriers, so it can safely be assumed that for each barrier instruction, either all or no threads within the block reach it. Thus, flat collapsing simply wraps the barrier instructions with for-loops of trip count {\tt b\_size} and places these for-loops inside the branch of an if–then construct: if it is known at runtime that the barrier will be reached, the control flow runs into this branch and executes the for-loop; otherwise, it takes the other branch. However, in the example, these warp-level barriers are reachable only by the threads in the first warp. To get a correct result, one needs not only to wrap the barrier instructions with for-loops, but also to replicate the control-flow instruction of the if–then construct (line 2 in Code~\ref{code:gpu_shfl}) and insert it at lines 7 and 10 of Code~\ref{code:cpu_shfl_v1}. With these modifications, Code~\ref{code:gpu_shfl} becomes Code~\ref{code:cpu_shfl_v1}.
These transformations are quite complex even for this small example, let alone implementing them in a compiler for all possible CFGs. Based on the above analysis, hierarchical collapsing is proposed, which produces Code~\ref{code:cpu_shfl_v3}. The concept is also illustrated in Figure~\ref{fig:SPMD}(c).
Compared with Code~\ref{code:cpu_shfl_v1}, Code~\ref{code:cpu_shfl_v3} has two types of generated loops: the loop with induction variable {\tt wid} (line 4) works at the block level and is called the {\em inter-warp loop}, while the inner loops with induction variable {\tt tx} (lines 5, 7, 12, 14) work at the warp level and are called {\em intra-warp loops}. With inter/intra-warp loops, compared with a single-level loop over all threads within a block, the complexity of the generated code is reduced: there is no longer a need to replicate and insert the if-then construct; instead, it is handled by a simple loop peeling (line 10 in Code~\ref{code:cpu_shfl_v3}). With this lower complexity, hierarchical collapsing can easily be implemented and integrated into compilers. In COX, hierarchical collapsing is implemented as a new LLVM pass that automatically transforms GPU kernels.
\begin{lstlisting}[caption={CPU warp shuffle program generated by flat collapsing},label={code:cpu_shfl_v1},language=C]
int shfl_arr[32];
int val[b_size];
for (int tx = 0; tx < b_size; tx++)
val[tx] = 1;
for (int offset = 16; offset > 0; offset /= 2) {
for (int tx = 0; tx < b_size; tx++)
if (tx < 32)
shfl_arr[tx] = val[tx];
for (int tx = 0; tx < b_size; tx++)
if (tx < 32)
if (tx + offset < 32)
val[tx] += shfl_arr[tx + offset];
}
\end{lstlisting}
\begin{lstlisting}[caption={CPU warp shuffle program by hierarchical collapsing},label={code:cpu_shfl_v3},language=C]
int shfl_arr[32];
int val[b_size];
bool flag[32];
for (int wid = 0; wid < b_size / 32; wid++) {
for (int tx = 0; tx < 32; tx++)
val[wid * 32 + tx] = 1;
for (int tx = 0; tx < 32; tx++)
flag[tx] = (wid * 32 + tx) < 32;
// loop peeling
if (flag[0]) {
for (int offset = 16; offset > 0; offset /= 2) {
for (int tx = 0; tx < 32; tx++)
shfl_arr[tx] = val[wid * 32 + tx];
for (int tx = 0; tx < 32; tx++)
if (tx + offset < 32)
val[wid * 32 + tx] += shfl_arr[tx + offset];
}
}
}
\end{lstlisting}
Below are several details worth mentioning:
\begin{itemize}
\item As the input CUDA kernel has a {\tt shfl\_down} inside an if-then construct, this is a complex situation: not all warps can reach the implicit warp-level barriers derived from {\tt shfl\_down}. According to the CUDA documentation,\footnote{\url{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html##synchronization-functions}} for a given warp barrier, either none or all threads within a warp reach it. Thus, loop peeling (line 10 in Code \ref{code:cpu_shfl_v3}) is used to evaluate the condition of the if-then construct only for the first thread in the warp, and all other threads in the warp then follow the same path (see Section \ref{subsec:implicit} for details);
\item Although only {\tt flag[0]} is needed, the instructions that calculate the other elements of {\tt flag} are also executed: because these instructions may have side effects, they have to be executed to guarantee correctness even when their outputs are not needed;
\end{itemize}
The rest of this paper is organized as follows: Section \ref{sec:IR_transformation} introduces the key part of hierarchical collapsing. The runtime system is introduced in Section \ref{sec:runtime_system}. Section \ref{sec:experiment} evaluates COX on the CUDA SDK, Hetero-Mark, and GraphBig benchmarks and compares its performance with POCL and DPC, the state-of-the-art open-source frameworks. Section \ref{sec:related_work} surveys various attempts to migrate GPU programs to CPU devices. Finally, concluding thoughts are presented in Section \ref{sec:conclusion}.
\section{IR Transformation\label{sec:IR_transformation}}
\subsection{Overview of COX}
Figure~\ref{fig:execute_pipeline} shows an overview of the COX framework. At a high level, a CUDA kernel is compiled with Clang, which produces NVVM IR~\cite{NVVM} for NVIDIA GPUs. Then, COX transforms the NVVM IR into CPU-friendly LLVM IR; hierarchical collapsing is implemented in this transformer. After that, the LLVM IR is linked with host programs and runtime libraries to generate a CPU-executable file.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth,trim=4 12 4 4,clip]{figures/Overview.pdf}
\caption{COX pipeline for generating CPU executable files from CUDA kernel codes. }
\label{fig:execute_pipeline}
\end{figure}
\begin{lstlisting}[caption={CUDA Warp Vote example},label={code:cuda_warpvote},language=C]
__global__ void VoteAll(int *result) {
int tx = threadIdx.x;
result[tx] = __all_sync(-1, tx < 16);
}
\end{lstlisting}
\begin{figure*}[ht]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/Step1.pdf}
\caption{Original Code.}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/Step2.pdf}
\caption{Step 1: Replace warp function.}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/Step3.pdf}
\caption{Step 2: Insert extra barriers.}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/Step4.pdf}
\caption{Step 3: Split blocks by barriers.}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/Step5.pdf}
\caption{Step 4: Wrap the current CFG with intra-warp loop.}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}[t]{0.23\textwidth}
\includegraphics[width=\textwidth,trim=4 4 4 4,clip]{figures/Step6.pdf}
\caption{Step 5: Wrap the current CFG with inter-warp loop.}
\end{subfigure}
\caption{Steps of NVVM to LLVM-IR transformer in Figure~\ref{fig:execute_pipeline}. }
\label{fig:pipeline_demo}
\end{figure*}
Figure~\ref{fig:pipeline_demo} shows an example of transforming the CUDA kernel shown in Code~\ref{code:cuda_warpvote} into LLVM IR for the CPU. First, warp-level functions are replaced with built-in functions defined in the runtime library, as shown in Step 1 (Section~\ref{subsec:warp}). Second, in Step 2, the extra barriers are identified and inserted (Section~\ref{subsec:implicit}). Last, through Steps 3 to 5, hierarchical parallel regions are identified, and intra/inter-warp loops are generated accordingly to create a CPU-friendly version (Section~\ref{subsec:split}). After Step 5, the generated LLVM IR is compiled and linked with host programs and runtime libraries to generate a CPU-executable file.
\subsection{Support Warp-level Functions}
\label{subsec:warp}
In a GPU architecture, when threads within a warp invoke warp functions, the GPU performs internal communication to accumulate and/or exchange the local variables within the warp. To support these features on CPUs, the corresponding accumulation and communication need to be performed explicitly. This section describes how warp-level collectives are supported. \\
During initialization, COX allocates an array {\tt warp\_vote} of length 32. The {\tt warp\_vote} array should be stored in CPU thread-local memory, as a CPU thread is used to simulate a GPU block; if {\tt warp\_vote} were instead stored in global memory accessible to all CPU threads, a data race could occur when multiple CPU threads read/write {\tt warp\_vote} at the same time.
A GPU warp vote instruction is translated into the following CPU instructions: each thread within the warp first stores its local flag into a distinct element of {\tt warp\_vote}; after all elements are set, the result of the warp vote can easily be computed. The function {\tt warp\_all} is defined in a runtime library that is linked at the final compilation. To utilize the computation resources of X86, {\tt warp\_all} is implemented with AVX instructions; the benefits brought by AVX are evaluated in Section~\ref{sec:simd}. Warp shuffle is supported in a similar way; see Code~\ref{code:gpu_shfl} and Code~\ref{code:cpu_shfl_v3} for an example. \\
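A minimal scalar sketch of this translation is shown below. The name {\tt warp\_all\_scalar} is illustrative: COX's actual runtime implements the reduction with AVX and may expose a different interface.

\begin{lstlisting}[caption={Scalar sketch of the warp-vote runtime helper},language=C]
#include <stdbool.h>

#define WARP_SIZE 32

/* Per-CPU-thread staging array: _Thread_local (C11) avoids data
 * races when several CPU threads, each simulating a CUDA block,
 * run concurrently. */
_Thread_local int warp_vote[WARP_SIZE];

/* Sketch of warp_all: true iff every lane stored a non-zero flag
 * into warp_vote. */
bool warp_all_scalar(void) {
  for (int tx = 0; tx < WARP_SIZE; tx++)
    if (!warp_vote[tx])
      return false;
  return true;
}
\end{lstlisting}

The AVX version replaces the scalar loop with a vector compare and movemask over the 32 flags, but the observable behavior is the same.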
COX also needs to insert the implicit warp-level barriers when supporting these warp-level functions. As discussed in \cite{patel2021virtual}, two warp-level barriers are required: barriers for the Read-after-Write (RAW) hazard and barriers for the Write-after-Read (WAR) hazard.
The use of these two barriers is shown in Code~\ref{code:consecutive_warp}, where there are two consecutive warp vote instructions with the inserted barriers: 1) without the barrier for the RAW hazard, a thread could invoke {\tt warp\_all} before the other threads set {\tt warp\_vote[tx]} to 1 (the first vote) or 2 (the second vote); 2) without the barrier for the WAR hazard, a thread could set {\tt warp\_vote[tx]} to 2 before the other threads invoke the first vote function. Consecutive warp-level collective functions are common when implementing reductions.
\begin{lstlisting}[float,floatplacement=H,caption={Insert implicit Barriers to avoid RAW/WAR hazards},label={code:consecutive_warp},language=LLVM]
; the first warp vote instruction
@warp_vote[tx] = 1;
call @warp.sync() ; for RAW hazard
call @warp.sync() ; for WAR hazard
; the second warp vote instruction
@warp_vote[tx] = 2;
call @warp.sync() ; for RAW hazard
call @warp.sync() ; for WAR hazard
\end{lstlisting}
\subsection{Insert extra Barriers}
\label{subsec:implicit}
In Steps 3, 4, and 5, hierarchical collapsing needs barrier information to identify the Parallel Regions and generate intra/inter-warp loops accordingly. Thus, it is important to insert extra barriers that do not appear in the input GPU code but are necessary for identifying the Parallel Regions. Previous work~\cite{karrenberg2012improving} proposes a similar concept and a corresponding algorithm, but it cannot support warp-level functions.
The extra barriers originate from barriers inside conditional statements. An example is shown in Figure~\ref{fig:explicit_implicit_barrier}.
\begin{figure}[H]
\centering
\begin{subfigure}{.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/for-sync.png}
\caption{Input CUDA kernel}
\end{subfigure}
\begin{subfigure}{.22\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/for-sync-implicit.png}
\caption{After extra barrier insertion}
\end{subfigure}
\begin{subfigure}{.2\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/for-sync-res.png}
\caption{After loop generation}
\end{subfigure}
\caption{An example of the extra barriers needed for identifying PRs: a) the input CUDA kernel, which has a barrier in a for-loop construct; b) as there is a barrier in the conditional statement, extra barriers are inserted to guide the generation of intra/inter-warp loops in later steps; c) according to the barriers, two PRs are identified and two for-loops are generated separately. Note that all transformations in COX are done at the LLVM IR level; this source-level example is only for explanation.}
\label{fig:explicit_implicit_barrier}
\end{figure}
To make hierarchical collapsing work, extra block-level barriers are inserted at the beginning of the entry block and at the end of the exit block, as POCL does~\cite{jaaskelainen2015pocl}. \\
The two most common conditional statements are the {\tt If-Then} construct and the {\tt For-loop} construct.
\subsubsection{Barriers in if–then construct \label{sec:barrier_in_if}}
The CFG of a classical if–then construct is shown on the left side of Figure \ref{fig:barrier_insert}(a).
In the block {\tt if.body}, there is a barrier. According to \cite{CUDA-SYNC}, for a block/warp barrier, either none or all threads within the block/warp reach it.\footnote{This rule does not hold for non-aligned barriers in CUDA, which are beyond the scope of this paper.} Thus, COX can safely apply loop peeling on the CFG: COX peels the first thread to evaluate the branch direction, and the rest of the threads within the warp/block simply follow the same direction (see Code \ref{code:cpu_shfl_v3} for a loop peeling example). The result after inserting extra barriers and splitting blocks is shown on the right side of Figure~\ref{fig:barrier_insert}(a). Several details are worth mentioning:
\begin{itemize}
\item extra barriers are inserted with the same type as the barrier in {\tt if.body}; in the example, there is a warp barrier in {\tt if.body}, so hierarchical collapsing also inserts warp barriers as extra barriers;
\item after transformation, all blocks will be wrapped by intra-warp loops, except {\tt if.cond}, which is used for loop peeling;
\item {\tt if.cond} should contain only a single conditional-branch instruction and should not have any side effects. In Figure \ref{fig:barrier_insert}(a), all computation instructions are put into {\tt if.head} so that they are executed {\tt b\_size} times, as in the original GPU program.
\end{itemize}
\begin{figure}[H]
\centering
\begin{subfigure}{.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/if_then_case.png}
\caption{Barriers in if-then construct.}
\end{subfigure}
\\
\begin{subfigure}{.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/for_case.png}
\caption{Barriers in for-loop construct}
\end{subfigure}
\caption{After the transformation, the inserted barriers are shown in bold. The PRs identified in a later step are also shown.}
\label{fig:barrier_insert}
\end{figure}
The detailed algorithm for inserting the extra barriers derived from barriers in an if-then construct is described in Algorithm \ref{alg:insert_if_barrier}. COX has to perform additional checks to avoid an infinite loop caused by a for-loop construct in the CFG; for simplicity, these checks are not shown in Algorithm \ref{alg:insert_if_barrier}.
\begin{algorithm}[htbp]
\caption{Inserting extra barriers for barriers in if-then constructs}
\label{alg:insert_if_barrier}
\begin{algorithmic}[1]
\Require $K$: The CFG for the input kernel
\Require $PDT$: The Post Dominator Tree for the input CFG
\Require $DT$: The Dominator Tree for the input CFG
\State $conditional\_block \gets []$
\LeftComment {Find all barriers in if-body construct}
\ForAll{$block \in K$}
\If{$has\_barrier(block)$}
\If{$!PDT.dominates(block, K.entry)$}
\State $conditional\_block.insert(block)$
\EndIf
\EndIf
\EndFor
\LeftComment {Insert extra barriers}
\ForAll{$block \in conditional\_block$}
\State $NearestEntry \gets block.predecessor$
\While{$PDT.dominates(block, NearestEntry)$}
\State $NearestEntry \gets NearestEntry.predecessor$
\EndWhile
\LeftComment {Insert barrier at the end of if-head}
\State $insert\_barrier\_before(NearestEntry.terminator)$
\State $pre\_successor \gets block$
\State $successor \gets block.successor$
\While{$DT.dominates(block, successor)$}
\State $pre\_successor \gets successor$
\State $successor \gets successor.successor$
\EndWhile
\LeftComment {Insert barrier at the beginning of if-exit}
\State $insert\_barrier\_before(successor.begin)$
\LeftComment {Insert barrier at the end of if-body}
\State $insert\_barrier\_before(pre\_successor.terminator)$
\LeftComment {Inserted extra barriers may generate another if-then construct that contains barriers}
\If{$!PDT.dominates(NearestEntry, K.entry)$}
\State $conditional\_block.insert(NearestEntry)$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\subsubsection{Barriers in for-loop construct}
\label{subsubsec:barrier_for}
Although CUDA supports several loop styles (e.g., for-loop, while-loop, do-while-loop), after LLVM's transformations all loops are simplified to the canonical format, which 1) has a single latch and 2) has a loop header that dominates all exit blocks.\footnote{A latch is a node in the loop that has an edge to the loop header. An exiting edge is an edge from inside the loop to a node outside of the loop; the source of such an edge is called an exiting block, and its target is an exit block~\cite{llvm-loop}.} Thus, COX only needs to consider these canonical loops.
COX inserts extra barriers before/after the branch instruction (the back edge of the loop). Figure~\ref{fig:barrier_insert}(b) shows an example of inserting extra barriers for a for-loop construct that contains a block barrier. As with the if–then construct, all the inserted extra barriers (shown in bold in the figure) have the same type as the barrier in {\tt for.header} (a block barrier in the example).
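The effect of this transformation can be sketched at the source level (hypothetical kernel and names; COX performs it on LLVM IR). The original loop body, which contains a block barrier, is split into two Parallel Regions, each wrapped by its own thread loop inside the surviving loop over {\tt i}:

\begin{lstlisting}[caption={Sketch of splitting a loop body at a block barrier (hypothetical kernel)},language=C]
#define B_SIZE 8 /* hypothetical block size */

/* Hypothetical GPU loop with a block barrier in its body:
 *   for (int i = 0; i < n; i++) {
 *     buf[tx] = tx + i;                  // before the barrier
 *     __syncthreads();
 *     out[tx] += buf[(tx + 1) % B_SIZE]; // after the barrier
 *   }
 * After extra barriers are inserted around the back edge, each
 * half of the body becomes a PR wrapped by a thread loop. */
void kernel_loop_cpu(int n, int buf[B_SIZE], int out[B_SIZE]) {
  for (int i = 0; i < n; i++) {
    for (int tx = 0; tx < B_SIZE; tx++) /* PR before barrier */
      buf[tx] = tx + i;
    for (int tx = 0; tx < B_SIZE; tx++) /* PR after barrier */
      out[tx] += buf[(tx + 1) % B_SIZE];
  }
}
\end{lstlisting}

The extra barriers around the back edge are what guarantee that every thread finishes the first half of an iteration before any thread starts the second half, or the next iteration.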
\subsubsection{Supporting other conditional statements}
CUDA is a high-level, flexible language; even a single concept can generate quite different CFGs. For example, the loop concept can be implemented by different CFGs, such as do-while-loop, while-loop, and for-loop. However, with existing LLVM transformations, COX can automatically convert input CFGs to canonical formats, and only these canonical formats are considered in the above discussion. Below are some important features of the canonical format:
\begin{itemize}
\item Each branch instruction has only two successors. Most input CFGs already have this feature, except CFGs that use the switch-case construct; for these exceptions, COX uses LLVM's $lowerswitch$ transformation to convert the switch-case construct into if–then constructs.
\item All loops are in canonical format: 1) they all have pre-headers; 2) each loop has only a single latch, in other words, a single back edge; and 3) the loop header dominates all exit blocks. COX calls LLVM's $loop-simplify$ transformation to translate the input CFG's loops to the canonical format.
\end{itemize}
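The effect of $lowerswitch$ can be illustrated at the source level (hypothetical example; the pass operates on LLVM IR): a multi-way switch such as {\tt classify\_switch} is rewritten into the chain of two-way branches in {\tt classify\_lowered}, so that every branch has exactly two successors.

\begin{lstlisting}[caption={Source-level illustration of switch lowering (hypothetical functions)},language=C]
/* Multi-way branch (three successors at the IR level). */
int classify_switch(int x) {
  switch (x) {
  case 0:  return 10;
  case 1:  return 20;
  default: return -1;
  }
}

/* Equivalent chain of two-way branches, as produced conceptually
 * by LLVM's lowerswitch pass. */
int classify_lowered(int x) {
  if (x == 0)
    return 10;
  if (x == 1)
    return 20;
  return -1;
}
\end{lstlisting}

Both functions compute the same result; only the branch structure of the CFG differs, which is what the later loop-generation steps rely on.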
\subsection{Split Blocks Before/After Each Barrier}
\label{subsec:split}
As the instructions before/after a barrier in a block need to be wrapped by different intra/inter-warp loops, in this step, COX splits the blocks that have barriers inside. See Step 3 in Figure \ref{fig:pipeline_demo} for an example.
\subsection{Hierarchical Parallel Region \label{sec:hierarchical_PR}}
As discussed in Section~\ref{subsec:motiv}, each parallel region (PR) becomes a for-loop. Due to warp-level functions, COX has to generate two kinds of for-loops: inter-warp loops and intra-warp loops. Thus, COX needs two kinds of Parallel Regions: 1) warp-level Parallel Regions, which are wrapped by intra-warp loops, and 2) block-level Parallel Regions, which are wrapped by inter-warp loops. A warp-level PR is always a subset of a block-level PR (a GPU warp is always within a GPU block); thus, the new concept is called the Hierarchical Parallel Region. An example is shown in Figure \ref{fig:PR_example}.
\ignore{
\textbf{Total order of Barrier:}
Up to now, we have only discussed two types of barrier: warp-level barrier and block-level barrier. However, according to the CUDA programming model, there are also other types of barrier; CUDA supports further splitting a warp to generate finer groups. For example, we can generate 16 groups within a warp while each group has two threads. These finer groups will involve new barriers. The easy way is to use block-level barriers to replace all of these finer groups' barriers, and we name this solution as flat collapsing. However, the programs generates by flat collapsing has poor locality and thus will slow down the execution time, as shown in Code~\ref{code:cpu_shfl_v2}. To further speedup, we have to use hierarchical collapsing. Before introduction of hierarchical collapsing, we have to define some concepts. \\
We can define a total order for different kinds barrier: If barrier $a$ can be replaced by barrier $b$, then we can define ${\displaystyle a\leq b}$. For example, as we can use block-level barrier to replace warp-level barrier in our CUDA programs, we have ${\displaystyle warp\ barrier \leq block\ barrier}$ \\
}
\ignore{
After we define the total order of barriers, now we can define another concept. We use the similar concept of \\ $Parallel\ Region (PR)$ that proposed in \cite{jaaskelainen2015pocl}, but with some extensions: \\
Instead of the informal definition of PR in previous project \cite{jaaskelainen2015pocl}, we propose a mathematical definition for Hierarchical Parallel Region: \\
\textbf{Hierarchical Parallel Region:}
Parallel Region is a set of blocks, in the later step, each parallel region becomes a for-loop. Multiple PRs will be converted as multiple sequential for-loops. However, in our project, we use two kinds of loop (intra/inter-warp loop). Thus, we also need two kinds of Parallel Region: 1) Parallel Region of warp-barrier, which will be further wrapped by intra-warp loop, and 2) Parallel Region of block-barrier, which will be wrapped by inter-warp loop. It is obvious that the a PR of warp-barrier will always be a subset for a PR of block-barrier (intra-warp loop is always inside an inter-warp loop). Thus we call our new concept as Hierarchical Parallel Region.
}
\ignore{
We provide a more formal definition for the concept of PR. First, we define the {\tt scope} of a barrier: a barrier's scope is the set that contains all threads controlled by this barrier. For example, the scope of a warp-level barrier is a warp. \\
Then we define a {\tt Parallel Region for group G} (G is a set of threads, e.g., a block or a warp) as follows: {\em A Parallel Region for group {\tt G} is a set of blocks. There is one and only one block in the set that has a barrier whose scope contains {\tt G}, and this block post-dominates all other blocks in the set. We call this block the {\tt tail} of this PR. Any two blocks in the set are connected regardless of the direction of edges.}
In Figure~\ref{fig:pipeline_demo}, after Step 4 and Step 5, we can get three PRs for warp (\{block1\}, \{block2\}, \{block3\}) and a single PR for block (\{block1, block2, block3, intra\_warp\_init, intra\_warp\_cond, intra\_warp\_inc...\}).
}
\begin{figure}[htbp]
\centering
\includegraphics[width=90mm]{figures/PR_example.png}
\caption{As there is a warp barrier in {\tt block1}, after transformation, there are two warp-level PRs (\{block1\}, \{block2\}) and a single block-level PR (\{block1, block2\}).}
\label{fig:PR_example}
\end{figure}
Thus, the remaining steps find the block/warp-level PRs and wrap them with inter/intra-warp loops.
Algorithm~\ref{alg:intra_warp_PR} finds the set of warp-level PRs; the algorithm for finding block-level PRs is nearly identical, except that it considers only block barriers.
COX cannot find the warp-level and block-level PRs simultaneously: COX first finds all warp-level PRs and generates intra-warp loops to wrap them. Then, COX finds the block-level PRs in the new CFG and wraps them with inter-warp loops.
\begin{algorithm}[htbp]
\caption{Find all warp-level PRs}
\label{alg:intra_warp_PR}
\begin{algorithmic}[1]
\Require $K$: The CFG after Step3
\Ensure $PR\_set$: The set of PRs.
\State $PR\_set \gets \{\}$
\State $end\_block \gets []$
\ForAll{$block \in K$}
\If{$block$ contains warp/block barrier}
\State $end\_block.insert(block)$
\EndIf
\EndFor
\LeftComment {Find the PR that $block$ belongs to}
\ForAll{$block \in end\_block$}
\If{$block$ has more than one predecessor}
\State $continue$ \Comment{This is the exit of an if-then construct}
\EndIf
\State $PR \gets \{block\}$
\State $pending\_block \gets block.predecessors$
\While{$!pending\_block.empty()$}
\State $current \gets pending\_block.front()$
\State $pending\_block.pop()$
\If{has visited $current$}
\State $continue$
\EndIf
\If{$current$ has warp/block barriers}
\State $continue$
\EndIf
\State $PR.insert(current)$
\State $pending\_block.insert(current.predecessors)$
\EndWhile
\LeftComment {Blocks for loop peeling do not belong to any PR}
\If{$PR$ contains only a single block that holds only a conditional branch}
\State $continue$
\EndIf
\State $PR\_set.insert(PR)$
\EndFor
\end{algorithmic}
\end{algorithm}
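The backward walk in Alg.~\ref{alg:intra_warp_PR} can be sketched in C over a toy predecessor-list CFG. The data structures and names below are illustrative, not COX's actual IR: starting from a block that ends at a barrier, the walk collects predecessors until it reaches another barrier block (or a block it has already visited).

```c
#include <assert.h>

/* Toy CFG in predecessor-list form (illustrative, not COX's IR). */
#define NBLK 8
static int preds[NBLK][NBLK];   /* preds[b][i] = i-th predecessor of b */
static int npreds[NBLK];
static int has_barrier[NBLK];   /* 1 if the block contains a barrier  */

/* Collect the PR whose tail is `end`; returns the number of blocks. */
static int find_pr(int end, int *pr) {
    int visited[NBLK] = {0};
    int stack[NBLK * NBLK], top = 0, n = 0;
    pr[n++] = end;                       /* the tail block with the barrier */
    visited[end] = 1;
    for (int i = 0; i < npreds[end]; i++)
        stack[top++] = preds[end][i];
    while (top > 0) {
        int cur = stack[--top];
        if (visited[cur]) continue;      /* already handled               */
        visited[cur] = 1;
        if (has_barrier[cur]) continue;  /* stop at the previous barrier  */
        pr[n++] = cur;
        for (int i = 0; i < npreds[cur]; i++)
            stack[top++] = preds[cur][i];
    }
    return n;
}
```

For a chain block0 (barrier) $\to$ block1 $\to$ block2 (barrier), `find_pr(2, pr)` returns the PR \{block2, block1\}: block0 is reached but skipped because it holds the previous barrier.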
\ignore{
In the following paragraphs, we prove that the Alg. \ref{alg:intra_warp_PR} can return a valid set of PRs for group {\tt G} that satisfy:
\begin{itemize}
\item \textbf{There is one and only one block in the set that has a barrier whose scope contains {\tt G}}: We first insert a block BB that has a barrier whose scope contains {\tt G} (line 12 in Alg. \ref{alg:intra_warp_PR}). Then we continue visiting BB's predecessors and only insert those blocks that do not have a barrier whose scope contains {\tt G} into the set.
\item \textbf{The block that has a barrier whose scope contains {\tt G} post-dominates all other blocks in the set}: Following the above definition, BB is the first block inserted into the PR, and all other blocks are BB's ancestors. If there is a block PB that cannot be post-dominated by BB, there must be a for-loop; BB is within the for-body and PB is inserted into the PR after we visit the backedge of that for-loop. However, we have already inserted an extra barrier at the beginning of each for-body, so we stop before reaching the backedge.
\item \textbf{We cannot insert any other block not contained in the PR and get a new PR}: If there is a block {\tt BB} not contained in the current PR that can be inserted into the current PR and still a valid new PR, we know {\tt BB} is post-dominated by {\tt tail}. \textcolor{red}{(Maybe we should just remove these proof due to page limitation)}
\end{itemize}
}
\subsection{Wrap PR with For-Loop \label{subsec:warp_with_for}}
In this step, COX wraps warp/block-level PRs by intra/inter-warp loop. Please see Figure \ref{fig:pipeline_demo}(e)(f) for an example.
Although this step is quite straightforward, its correctness must be proven~\footnote{Due to the page limitation, the proof is moved to the Appendix.}: after inserting intra/inter-warp loops, each instruction from the input GPU kernel is executed {\tt b\_size} times ({\tt b\_size} is the block size), except for the instructions used for loop peeling.
Finally, after adding intra/inter-warp loops, some local variables need to be replicated: local variables used in several warp-level PRs but in only a single block-level PR are replicated with an array of length 32, while local variables used across different block-level PRs are replicated with an array whose length equals the block size.
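The resulting loop nest and the length-32 replication can be illustrated with a small C sketch. The kernel body and names below are hypothetical, not COX's actual output: a local variable live across two warp-level PRs (but within one block-level PR) becomes an array of length 32, while a variable live across block-level PRs would need an array of block-size length instead.

```c
#include <assert.h>

#define WARP_SIZE 32
#define BLOCK_SIZE 128

/* Hypothetical CPU translation of a kernel with one warp-level barrier. */
static void kernel_cpu(int *out) {
    for (int w = 0; w < BLOCK_SIZE / WARP_SIZE; w++) { /* inter-warp loop      */
        int v[WARP_SIZE];                              /* replicated local     */
        for (int t = 0; t < WARP_SIZE; t++)            /* intra-warp loop: PR1 */
            v[t] = w * WARP_SIZE + t;                  /* thread id            */
        /* the warp-level barrier sat here in the GPU kernel */
        for (int t = 0; t < WARP_SIZE; t++)            /* intra-warp loop: PR2 */
            out[w * WARP_SIZE + t] = v[t] * 2;
    }
}
```

Because `v` is written in the first warp-level PR and read in the second, a scalar would be overwritten between iterations; the length-32 array keeps one copy per simulated lane.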
\section{Runtime System\label{sec:runtime_system}}
The above section describes only the CUDA device part. The CUDA host part, which involves memory allocation, memory transfer, and kernel launch, has to be migrated manually; automatic translation of CUDA host code to the CPU is left for future work. The runtime system uses pthreads for multi-threading. In this paper, both host and device are x86; thus, CUDA malloc and memcpy are replaced by C malloc and memcpy. Figure \ref{fig:vecCopy_example}(a) presents a CUDA host example for vector copy, and Figure \ref{fig:vecCopy_example}(b) shows the migrated COX host code, with the corresponding CUDA operations recorded in the comments. Compared with the CUDA host program, the COX host program differs in three ways: 1) COX uses the thread-local variable {\tt block\_index} to store the block index, which is explicitly set during invocation; 2) COX replaces the CUDA memory operations with the corresponding CPU operations; 3) COX uses pthread fork/join to replace the CUDA kernel launch. There are several potential optimizations for the runtime system, such as using a thread pool instead of fork/join for kernel launching and using regular expressions or LLVM transformations to automatically generate COX host programs from the CUDA source code. These optimizations are beyond the scope of this paper and are open for future research.
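The launch pattern of Figure~\ref{fig:vecCopy_example}(b) can be distilled into a self-contained sketch: one pthread per GPU block, with the block index placed in a thread-local variable before the translated kernel body runs. The kernel body here is a stand-in, not COX's generated code.

```c
#include <pthread.h>
#include <stddef.h>
#include <assert.h>

#define GRID_SIZE 4
__thread int block_index;        /* set per thread before the kernel runs */
static int out[GRID_SIZE];

static void *wrap(void *arg) {
    block_index = *(int *)arg;             /* set block index            */
    out[block_index] = block_index * 10;   /* stand-in kernel body       */
    return NULL;
}

static void launch(void) {                 /* replaces <<<GRID_SIZE, ...>>> */
    pthread_t threads[GRID_SIZE];
    int bids[GRID_SIZE];
    for (int bid = 0; bid < GRID_SIZE; bid++) {
        bids[bid] = bid;
        pthread_create(&threads[bid], NULL, wrap, &bids[bid]);
    }
    for (int bid = 0; bid < GRID_SIZE; bid++)
        pthread_join(threads[bid], NULL);
}
```

With glibc $\geq$ 2.34 no separate {\tt -lpthread} flag is needed; older toolchains require it.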
\begin{figure}[htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cuda_code.png}
\caption{CUDA host code.}
\end{subfigure}
\\
\begin{subfigure}{.9\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/x86_code.png}
\caption{Migrated COX host code.}
\end{subfigure}
\caption{The host code in COX is similar with the original CUDA host code. The automatic migration from CUDA host code to COX host code is open for future work.}
\label{fig:vecCopy_example}
\end{figure}
\ignore{
\begin{lstlisting}[caption={vector copy CUDA example},label={code:vecCopy_cuda},language=C]
int main()
{
int n = 1024;
int grid_size = n/1024;
// Host input vectors
int *h_a,*h_b;
// Device input vectors
int *d_a,*d_b;
// Size, in bytes, of each vector
size_t bytes = n*sizeof(int);
h_a = (int*)malloc(bytes);
h_b = (int*)malloc(bytes);
cudaMalloc(&d_a, bytes);
cudaMalloc(&d_b, bytes);
cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);
vecCopy<<<grid_size, 1024>>>(d_a, d_b);
cudaMemcpy(h_b, d_b, bytes, cudaMemcpyDeviceToHost);
cudaFree(d_a);
cudaFree(d_b);
return 0;
}
\end{lstlisting}
\begin{lstlisting}[caption={vector copy COX host program},label={code:vecCopy_COX},language=C]
// store block index in TLS, this variable is used
// by functions generated by hierarchical collapsing
__thread int block_index;
// functions generated by hierarchical collapsing
extern void *vecCopy_wrapper(void *);
// this function is a wrapper of the translated kernel,
// which sets the block index and invoke the kernel
void *wrap(void *p) {
int **res = (int **)p;
// set block index
block_index = (*(int *)res[2]);
// execute the translated kernel
vecCopy_wrapper(p);
return NULL;
}
// as pthread only accept a single void* as input,
// we have to compress all arguments into a void*
void *gen_input(int bid, int *a, int *b) {
int **ret = new int *[3];
int **p0 = new int *;
*p0 = a;
ret[0] = (int *)(p0);
int **p1 = new int *;
*p1 = b;
ret[1] = (int *)(p1);
int *p2 = new int;
*p2 = bid;
ret[2] = (int *)p2;
return (void *)ret;
}
int main() {
int n = 1024;
int grid_size = n / 1024;
// Host input vectors
int *h_a, *h_b;
// Device input vectors
int *d_a, *d_b;
// Size, in bytes, of each vector
size_t bytes = n * sizeof(int);
h_a = (int *)malloc(bytes);
h_b = (int *)malloc(bytes);
d_a = (int *)malloc(bytes); // cudaMalloc(&d_a, bytes);
d_b = (int *)malloc(bytes); // cudaMalloc(&d_b, bytes);
memcpy(d_a, h_a, bytes); // cudaMemcpy(d_a...
memcpy(d_b, h_b, bytes); // cudaMemcpy(d_b...
// vecCopy<<<n/1024, 1024>>>(d_a, d_b);
pthread_t *threads = new pthread_t[grid_size];
for (int bid = 0; bid < grid_size; bid++) {
void *inp = gen_input(bid, d_a, d_b);
pthread_create(&threads[bid], NULL, wrap, inp);
}
for (int bid = 0; bid < grid_size; bid++)
pthread_join(threads[bid], NULL);
memcpy(h_b, d_b, bytes); // cudaMemcpy(h_b...
free(d_a); // cudaFree(d_a);
free(d_b); // cudaFree(d_b);
}
\end{lstlisting}
}
The workflow of COX consists of the following steps: it 1) compiles the input CUDA source code with Clang to obtain the NVVM IR of the kernel functions; 2) transforms the NVVM IR with hierarchical collapsing; and 3) links the transformed kernel with the COX host program (manually migrated from the CUDA host program) to generate the CPU-executable file. COX has two modes: 1) normal mode, which keeps the runtime configuration (e.g., grid size, block size) as variables, and 2) JIT mode, which compiles the program for a given runtime configuration. In normal mode, COX compiles a program only once, and the binary can be used with different runtime configurations. In JIT mode, a program has to be recompiled whenever it is executed with a different configuration. Although JIT mode requires recompiling, it can generate higher-performance programs in some cases, as it provides more opportunities for optimization. For more details, see Section~\ref{sec:jit_mode}.
\section{Experimental Results\label{sec:experiment}}
To verify correctness and performance, this section describes experiments that run the CUDA kernels of several benchmarks on X86 and ARM architectures. Although several other frameworks also support executing CUDA on CPUs, most of them were developed long ago and cannot support programs that use new CUDA features. The hardware and software environments for the experiments are listed below:
\begin{itemize}
\item Software: Ubuntu 18.04, LLVM 10.0, gcc 7.5.0, CUDA 10.1, POCL 1.4, DPCT~\cite{dpct} ver. 2021.3.0.
\item X86-Hardware: 8 x Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
\item ARM-hardware: 48 x ARM(R) A64FX CPU @ BogoMIPS 200.00
\item Benchmarks: CUDA SDK 10.1, Hetero-mark~\cite{sun2016hetero}, GraphBig~\cite{nai2015graphbig};
\item Time: the average over more than 1000 runs; thread fork/join time is included.
\end{itemize}
\subsection{Coverage}
Table~\ref{table:cuda_sdk} analyzes the examples in CUDA SDK 10.1 that do not require special hardware support (e.g., tensor cores, unified memory).
POCL and DPCT are chosen for the coverage comparison since they are the currently active projects that support executing CUDA programs on CPUs. As POCL is designed for OpenCL and cannot directly execute CUDA programs, a third-party translator~\cite{han2021supporting} is used to execute CUDA with POCL. Besides, although POCL has both compilation and runtime parts, only the compilation part is used in this experiment: for the POCL evaluation, the GPU programs are compiled by POCL and then executed on COX. Thus, comparing execution times fairly reflects the compilation results and avoids effects of the runtime system. \\
As shown in the table, the existing frameworks can automatically support at most 21 kernels (coverage = 68\%); the failed kernels use new CUDA features. In contrast, COX supports 28 kernels (coverage = 90\%).
\begin{table}[htp]
\tiny
\begin{tabular}{L{0.27\columnwidth}L{0.27\columnwidth}C{0.05\columnwidth}C{0.05\columnwidth}C{0.05\columnwidth}}
\hline
kernel name & features & POCL & DPCT & COX \\ \hline
initVectors & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
gpuDotProduct & warp cooperative group & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} \\
gpuSpMV & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
r1\_div\_x & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
a\_minus & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
gpuConjugateGradient & grid sync & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} \\
multigpuConjugateGradient & multi grid sync & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} \\
MatrixMulCUDA & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
matrixMul & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
copyp2p & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
reduce0 & block cooperative group & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
reduce1 & block cooperative group & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
reduce2 & block cooperative group & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
reduce3 & block cooperative group & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
reduce4 & warp cooperative group & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} \\
reduce5 & warp cooperative group & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} \\
reduce6 & warp cooperative group & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} \\
shfl\_intimage\_rows & warp shuffle & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
shfl\_vertical\_shfl & warp shuffle & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
shfl\_scan\_test & warp shuffle & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark}* & \textcolor{codegreen}{\cmark} \\
uniform\_add & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
reduce & warp cooperative group & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} \\
reduceFinal & warp cooperative group & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} \\
simpleKernel & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
VoteAnyKernel1 & warp vote & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
VoteAllKernel2 & warp vote & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
VoteAnyKernel3 & warp vote & \textcolor{codered}{\xmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
spinWhileLessThanone & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
matrixMultiplyKernel & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
vectorAdd & & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} & \textcolor{codegreen}{\cmark} \\
filter\_arr & activated thread sync & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} & \textcolor{codered}{\xmark} \\ \hline
Coverage & & 39\% & 68\% & 90\% \\ \hline
\end{tabular}
\caption{Coverage of COX compared with other frameworks. *Enabled by manual code migration~\cite{tsai2021porting}.}
\label{table:cuda_sdk}
\end{table}
The CUDA features supported by POCL, DPCT, and COX are also shown in Figure \ref{fig:venn_diagram}.
\begin{figure}[htbp]
\centering
\includegraphics[width=75mm]{figures/venn_diagram.png}
\caption{The CUDA features supported by POCL, DPCT and COX.}
\label{fig:venn_diagram}
\end{figure}
Although COX significantly improves the coverage, three kernels still cannot be supported. $gpuConjugateGradient$ and $multiGpuConjugateGradient$ rely on synchronization across grids and devices, which use the grid cooperative group and the multi-grid cooperative group, respectively. $filter\_arr$ uses a dynamic cooperative group: it dynamically groups all activated threads. As discussed in Section~\ref{sec:cooperative_group_intro}, all of these features must be supported at the runtime level: frameworks should schedule threads accordingly at runtime, and a thread can only know whether it is activated during runtime. Supporting these runtime features is left for future work.
\ignore{
To verify the availability, we also run and record the execution times for these kernels can not be supported by other frameworks (Table. \ref{table:execute_time}). To prevent large amount of context switch time that may impact the results, in this table, we only fork/join threads once, and each thread will execute kernels for ~2000 times and get the average execute time.
}
\ignore{
\begin{table*}[]
\begin{tabular}{|c|c|c|}
\hline
Application & kernel name & execute time (µs) \\ \hline
conjugateGradientCudaGraphs & gpuDotProduct & 1.414 \\ \hline
\multirow{7}{*}{reduction} & reduce0 & 20.15 \\ \cline{2-3}
& reduce1 & 9.869 \\ \cline{2-3}
& reduce2 & 10.148 \\ \cline{2-3}
& reduce3 & 10.053 \\ \cline{2-3}
& reduce4 & 5.442 \\ \cline{2-3}
& reduce5 & 2.837 \\ \cline{2-3}
& reduce6 & 3.151 \\ \hline
\multirow{3}{*}{shfl\_scan} & shfl\_intimage\_rows & 2.802 \\ \cline{2-3}
& shfl\_vertical\_shfl & 372.296 \\ \cline{2-3}
& shfl\_scan\_test & 13.253 \\ \hline
\multirow{2}{*}{simpleCudaGraphs} & reduce & 6.378 \\ \cline{2-3}
& reduceFinal & 5.627 \\ \hline
\multirow{3}{*}{simpleVoteIntrinsics} & VoteAnyKernel1 & 0.241 \\ \cline{2-3}
& VoteAllKernel2 & 0.236 \\ \cline{2-3}
& VoteAnyKernel3 & 1.426 \\ \hline
GraphColoring & kernel & 51.127 \\ \hline
\end{tabular}
\label{table:execute_time}
\caption{To verify the availability of executing these kernels that are NOT supported in other framework, we run and record the execution time.}
\end{table*}
}
\subsection{Performance \label{sec:perf}}
Figure~\ref{fig:perf} shows a performance comparison of POCL, DPC, and COX on the CUDA SDK, Hetero-Mark, and GraphBig benchmarks on the X86 architecture.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{figures/DPC_POCL_result.png}
\caption{The normalized execution time of POCL and DPC on the X86 architecture: the POCL/DPC execution time divided by COX's execution time. Thus, COX's normalized execution time is always 1.}
\label{fig:perf}
\end{figure}
In most cases, COX and POCL have close execution times; thus, POCL's normalized execution times are always close to 1. However, DPC's normalized execution time has a large variance, for two reasons: 1) most of DPC's optimizations target multi-block cases and are runtime optimizations, while the evaluation uses a single block per application in order to show compile-level optimization; 2) DPC has optimizations for new Intel CPUs, while POCL and COX have no special optimizations for new Intel architectures.
The evaluation results for an ARM CPU with the AArch64 architecture are shown in Figure \ref{fig:perf_arm}. As DPC does not support ARM CPUs, only POCL and COX are evaluated. The performance of COX and POCL is close in all experiments.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/aarch64.png}
\caption{The normalized execution time of COX (normalized by the POCL execution time) on an ARM CPU. In all cases, COX and POCL have close performance. DPC does not support ARM CPUs.}
\label{fig:perf_arm}
\end{figure}
\subsubsection{Performance effects of flat collapsing vs. hierarchical collapsing}
Although hierarchical collapsing can support warp-level features, it generates nested loops instead of the single loop that flat collapsing generates. The complex nested loops incur more instructions and also hinder some optimizations. Figure~\ref{fig:single_nested} shows the overhead of hierarchical collapsing over flat collapsing on the X64 architecture for three micro-benchmarks with varying vector/matrix sizes; none of these benchmarks uses warp-level functions. As the results show, hierarchical collapsing degrades performance by 13\% on average due to the additional instructions. Hence, COX uses a hybrid mode: for each input kernel, it first checks whether there are warp-level functions or other features that cannot be supported by flat collapsing; if not, flat collapsing is used by default.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\columnwidth]{figures/flat_hier.png}
\caption{Performance comparisons of flat collapsing and hierarchical collapsing}
\label{fig:single_nested}
\end{figure}
\subsubsection{Normal mode vs JIT mode \label{sec:jit_mode}}
Loop optimization is important for high-performance programs. Although the intra-warp loop's trip count is always 32, the inter-warp loop's trip count depends on the block size, a runtime configuration. COX supports two compile modes: normal mode and JIT mode. Although the programs generated by both modes are forwarded to LLVM's optimizer (with the $-O3$ flag), they differ noticeably, especially for complex kernels. Figure~\ref{fig:two_mode} shows the difference in execution time between the two modes. The two modes show a relatively small difference for the VectorAdd kernel, which is simple enough to be vectorized by compiler optimization even when the block size is not provided at compile time. However, for more complex kernels, JIT mode generates higher-performance programs.
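The benefit of a compile-time block size can be seen in a minimal sketch (the kernels below are illustrative stand-ins, not COX output): with the block size baked in, the inter-warp loop has a constant trip count, so the optimizer can fully unroll or vectorize it, whereas in normal mode the bound stays a runtime variable.

```c
#include <assert.h>

#define BLOCK_SIZE_JIT 256                     /* fixed by JIT mode      */

/* JIT mode: constant trip count, easy to unroll/vectorize.             */
static void scale_jit(float *a) {
    for (int t = 0; t < BLOCK_SIZE_JIT; t++)   /* constant bound         */
        a[t] *= 2.0f;
}

/* Normal mode: the bound is a runtime variable (the block size).       */
static void scale_normal(float *a, int block_size) {
    for (int t = 0; t < block_size; t++)       /* runtime bound          */
        a[t] *= 2.0f;
}
```

Both versions compute the same result; only the information available to the optimizer differs.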
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\columnwidth]{figures/normal-jit.png}
\caption{JIT mode can generate faster programs, especially for complex kernels.}
\label{fig:two_mode}
\end{figure}
\subsubsection{SIMD instructions \label{sec:simd}}
For CPU programs, SIMD instructions are necessary to achieve high performance \cite{jeong2012performance,agulleiro2015tomo3d,bramas2017fast,pharr2012spmd}. Table~\ref{table:simd_warp_vote} shows the warp vote function execution time with and without AVX.
With AVX instructions, around 10x speedup is achieved for both functions, owing to fewer instructions and branches.
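To illustrate the semantics being vectorized, the scalar C sketch below packs the 32 per-lane predicates into one 32-bit mask; "any" is then mask $\neq$ 0 and "all" is mask $=$ 0xFFFFFFFF. This snippet is illustrative, not COX's library code: the AVX version builds the same mask in a few instructions (vector compares plus movemask) instead of a 32-iteration branchy loop, which is where the speedup in Table~\ref{table:simd_warp_vote} comes from.

```c
#include <stdint.h>
#include <assert.h>

/* Scalar sketch of warp-vote semantics over one 32-lane warp. */
static uint32_t warp_ballot(const int *pred) {
    uint32_t mask = 0;
    for (int lane = 0; lane < 32; lane++)
        mask |= (uint32_t)(pred[lane] != 0) << lane;  /* one bit per lane */
    return mask;
}

static int warp_any(const int *pred) { return warp_ballot(pred) != 0; }
static int warp_all(const int *pred) { return warp_ballot(pred) == 0xFFFFFFFFu; }
```

The AVX analogue replaces the loop with vector compares whose sign bits are collected by a movemask instruction.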
\begin{table}[]
\small
\centering
\begin{tabular}{|c|c|c|c|}
\hline
function & & w/ AVX & w/o AVX \\ \hline
\multirow{5}{*}{vote any} & time (µs) & 0.241 & 2.542 \\ \cline{2-4}
& instructions & 1,447,901,852 & 23,472,339,251 \\ \cline{2-4}
& branches & 100,110,593 & 4,260,452,162 \\ \hline
\multirow{5}{*}{vote all} & time (µs) & 0.236 & 2.992 \\ \cline{2-4}
& instructions & 1,384,476,021 & 29,552,745,486 \\ \cline{2-4}
& branches & 100,108,177 & 5,220,517,219 \\
\hline
\end{tabular}
\caption{Both functions gain around 10x speedup when using AVX instructions.}
\label{table:simd_warp_vote}
\end{table}
\subsubsection{Scalability}
Besides single-block execution time, the execution time for multi-block cases is also measured; the result is shown in Figure~\ref{fig:scale}. In the Hetero-mark benchmark, the kernels have fixed block sizes; thus, to enlarge the grid size, the workload size is also enlarged. As the X86 platform has eight CPU cores, the speedup degrades significantly when the grid size exceeds eight; up to eight blocks, COX shows good scalability.
\begin{figure}[htbp]
\centering
\includegraphics[width=90mm]{figures/scale.png}
\caption{Multi-core execution time with COX.}
\label{fig:scale}
\end{figure}
\section{Related Work \label{sec:related_work}}
The CPU architecture is MPMD, while the GPU architecture is SPMD. Although users can naively execute each GPU thread with a CPU thread, a CPU can only run around 100 threads simultaneously, far fewer than the GPU architecture. Thus, to reach the same thread count as a GPU, the CPU has to create far more threads than it can execute simultaneously, which incurs heavy thread context-switching overhead. Two methods address this issue. The first is to accelerate context switching on the CPU. Some researchers extend the CPU architecture to accelerate context switching~\cite{chen2018enabling}; these hardware-level extensions are beyond the scope of this paper. At the software level, \cite{gummaraju2010twin} proposes lightweight threading to accelerate context switching: a context switch only stores and reloads a few registers while maintaining the stack memory. Most modifications are at the runtime level, and users can directly use the original GPU source code. As reported in \cite{stratton2013performance}, the AMD CPU OpenCL implementation is based on this technology. However, even with these optimizations, there is still significant context-switching overhead, around 10ns per switch. \\
Thus, another direction is being explored: increasing the workload of each CPU thread. Instead of executing a single GPU thread, each CPU thread executes all GPU threads within a block. This mechanism brings two benefits: 1) it increases the per-thread CPU execution time so much that the context-switching overhead becomes negligible; 2) with more workload in a single thread, there are more opportunities for optimization (e.g., vectorization, loop transformation). This mechanism has several names: microthreading~\cite{stratton2010efficient}, thread aggregation~\cite{zhang2013improving}, thread-fusion~\cite{diamos2010ocelot}, region-based serialization~\cite{stratton2013performance}, loop chunking~\cite{shirako2009chunking}, and kernel serialization~\cite{blomkvist2021cumulus}; in this paper, we call it flat collapsing. In \cite{stratton2008mcuda, stratton2010efficient}, the authors propose wrapping an SPMD kernel with a loop whose trip count equals the block size, so that each loop iteration simulates a GPU thread within a block and a CPU thread is mapped to a GPU block. An important technical detail is supporting synchronization instructions: to maintain correctness, compilers must wrap the instructions before and after a synchronization instruction into separate loops. A similar technique is discussed in \cite{shirako2009chunking}, which uses loop transformations (e.g., loop strip-mining, interchange, distribution, unswitching) to transform SPMD execution models with synchronization into task-parallel execution models. The authors of \cite{zhang2013improving} propose improved static analysis to vectorize the generated loop programs, as well as another algorithm that wraps the original kernels with loops while avoiding the additional synchronization points of previous works.
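The split-at-barrier idea can be sketched in C for a hypothetical kernel `s[tid] = a[tid]; __syncthreads(); b[tid] = s[BLOCK-1-tid];`. The barrier splits the body into two sequential loops over the block, so every simulated thread finishes the first region before any thread enters the second.

```c
#include <assert.h>

#define BLOCK 64

/* Flat-collapsed CPU version of the hypothetical kernel above. */
static void kernel_cpu(const int *a, int *b) {
    int s[BLOCK];                           /* shared memory            */
    for (int tid = 0; tid < BLOCK; tid++)   /* region before the barrier */
        s[tid] = a[tid];
    /* __syncthreads() becomes the boundary between the two loops */
    for (int tid = 0; tid < BLOCK; tid++)   /* region after the barrier  */
        b[tid] = s[BLOCK - 1 - tid];
}
```

Merging both statements into a single loop would read `s[BLOCK-1-tid]` before it is written; the loop fission at the barrier preserves the GPU semantics.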
In some GPU architectures, such as NVIDIA GPUs, there is implicit lock-step execution within a group of threads. \cite{guo2011correctly} proposed transformations that detect these implicit warp-level synchronizations and maintain them during transformation. The authors of \cite{stratton2013performance} propose using C Extensions for Array Notation to further accelerate the generated CPU programs with SIMD execution and better spatial locality. \\
Several projects execute CUDA on non-NVIDIA devices. In the early days, NVIDIA provided an emulation framework~\cite{CudaEmulator} to execute CUDA on a CPU, where each thread within a GPU block is executed by a CPU thread. Horus~\cite{elhelw2020horus} is another emulator; it parses and executes NVIDIA PTX instructions on CPU devices. These emulators are for debugging rather than performance. MCUDA~\cite{stratton2008mcuda} uses source-to-source translation to translate CUDA to C with the flat collapsing mechanism. Ocelot~\cite{diamos2010ocelot} uses the same mechanism but converts at the PTX level to avoid recompiling. MapCG~\cite{hong2010mapcg} is a hybrid computing framework that uses source-to-source translation to translate CUDA kernels to C programs executable on a CPU. \cite{lee2014boosting} proposes another hybrid-computing framework, based on Ocelot, that translates GPU programs at the PTX level. Cumulus~\cite{blomkvist2021cumulus} uses Clang to parse CUDA programs and modifies them at the AST level; it is mainly concerned with CUDA runtime support and reuses the transformation in MCUDA for compilation. Instead of directly translating CUDA/PTX to CPU executables, other projects utilize the portability of other front-end languages. The authors of \cite{harvey2011swan, sathre2019portability} propose source-to-source translation from CUDA to OpenCL, while \cite{perkins2017cuda,han2021supporting} implement the translation at the LLVM IR level. The DPC++ Compatibility Tool~\cite{dpct} and HIPIFY~\cite{hipify} translate CUDA to source languages for Intel and AMD devices, respectively. \\
Most related works focus only on supporting old CUDA features. However, the rapid evolution of GPU hardware and software stacks brings many new features that are important for achieving high performance, such as warp-level collectives, unified memory, and CUDA Graphs. Achieving high coverage of these new features is still an ongoing effort. The researchers in \cite{patel2021virtual} propose using explicit barriers and memory exchanges to support warp shuffle on OpenMP, which shares the same insight as COX. \\
OpenCL is another framework that supports executing SPMD programs on MPMD architectures. POCL~\cite{jaaskelainen2015pocl} is an open-source OpenCL implementation with a CPU backend; to support SPMD programs on CPUs, it implements flat collapsing at the LLVM IR level. The authors of \cite{karrenberg2012improving} also propose flat collapsing for OpenCL, but with a different method of inserting extra synchronization and finding Parallel Regions that results in fewer extra synchronization barriers. However, this method does not extend to Hierarchical Parallel Regions and thus cannot support warp-level features. \cite{kim2012snucl} proposes another OpenCL implementation that mainly focuses on supporting OpenCL programs on multi-device clusters with heterogeneous devices.
\section{Conclusion\label{sec:conclusion}}
This paper proposes and builds COX, a framework that supports executing CUDA kernels on CPU devices. It also proposes hierarchical collapsing, which transforms SPMD programs into MPMD-friendly programs and supports the latest warp-level functions in CUDA. Using the CUDA 10.1 samples as a benchmark, previous projects can only support 21 of the 31 kernels (coverage 68\%), while COX can support 28 (coverage 90\%). Kernel performance is also compared on x86 and AArch64 architectures and shown to be comparable. COX is based on compile-time transformation; future work will provide a runtime system to support other CUDA features.
\newpage
\bibliographystyle{unsrt}
\section{Introduction}
If $A$ is a $C^{*}$-algebra with an action of the group $\Z$, then the Pimsner-Voiculescu sequence \cite{pv}, \cite[10.2.1]{blackadar} is the long exact sequence of $K$-theory groups \begin{equation}\label{asdfasdfadfdffs}
K_{*+1}(A\rtimes \Z)\to K_{*}(\mathrm{Res}^{\Z}(A))\stackrel{1-\sigma_{*}}{\to}
K_{*}(\mathrm{Res}^{\Z}(A))\to K_{*}(A\rtimes \Z)\ ,
\end{equation}
where $\mathrm{Res}^{\Z}$ denotes the operation of forgetting the $\Z$-action, and
$\sigma_{*}$ is induced by the $\Z$-action on $A$ via functoriality.
It is the long exact sequence of homotopy groups
associated to a fibre sequence of $K$-theory spectra \begin{equation}\label{adfasdfadsfdasf}
\xymatrix{K(\mathrm{Res}^{\Z}(A))\ar[r]\ar[d]&K(A\rtimes \Z)\ar[d]\\0\ar[r]& \Sigma K(\mathrm{Res}^{\Z}(A))}\ .
\end{equation}
The classical arguments derive such a fibre sequence of spectra from a suitably designed short exact sequence of $C^{*}$-algebras, and different proofs work with different sequences.
They all give the long exact sequence \eqref{asdfasdfadfdffs} of homotopy groups, but the question of whether the resulting fibre sequences of spectra are equivalent has not been discussed in the literature so far.
The Baum-Connes conjecture with coefficients for the amenable group $\Z$ is known to hold and asserts that
the assembly map \begin{equation} \label{eqwfwefwdwe} \mu^{Kasp}_{\mathbf{Fin},A}:\mathrm{KK}^{\Z}(C_{0}(\R),A)\to K(A\rtimes \Z)\end{equation}
introduced by Kasparov \cite{kasparovinvent} (see \cite[13.9]{bel-paschke} for the spectrum level version)
is an equivalence.
Accepting the use of the Baum-Connes conjecture with coefficients for $\Z$,
a fibre sequence of the form \eqref{adfasdfadsfdasf} can be derived in the following two ways.
Observe that $\mathrm{KK}^{\Z}(C_{0}(-),A)$ is a locally finite $\Z$-equivariant homology theory. Using the
$\Z$-CW-structure of $\R$ with precisely two $\Z$-cells in dimension $0$ and $1$
and the identification $\mathrm{KK}^{\Z}(C_{0}(\Z),A)\simeq K(\mathrm{Res}^{\Z}(A))$ (use \cite[Cor. 1.23]{KKG})
we get a fibre sequence of the form \eqref{adfasdfadsfdasf}.
Alternatively,
to $A$ one can associate a Davis-L\"uck type functor
$K^{\Z}_{A}:\Z\mathbf{Orb}\to \Sp$ \cite{kranz} (see also \cite[Def. 16.15]{bel-paschke}) whose values on the $\Z$-orbits $\Z$ and $*$ are given by \begin{equation}\label{ewfewfawefwead}
K(\mathrm{Res}^{\Z}(A))\simeq K^{\Z}_{A}(\Z)\ , \quad K(A\rtimes \Z)\simeq K_{A}^{\Z}(*)\ .
\end{equation}
If we equip $K(\mathrm{Res}^{\Z}(A))$ with the $\Z$-action induced by functoriality from the $\Z$-action on $A$, and $ K^{\Z}_{A}(\Z)$ with the $\Z$-action induced by functoriality from the $\Z$-action on the orbit $\Z$, then the first equivalence in \eqref{ewfewfawefwead} is $\Z$-equivariant. The Davis-L\"uck assembly map (see \eqref{wefqwedqwdeewdeqwd} below) turns out to be equivalent to the map
\begin{equation}\label{qfqwefewdqwed}
\colim_{B\Z} K(A) \to K(A\rtimes \Z)\ .
\end{equation} Since Kasparov's assembly map \eqref{eqwfwefwdwe} and the Davis-L\"uck assembly map are isomorphic on the level of homotopy groups \cite{kranz}, if one of them is an equivalence, then so is the other. Therefore the Baum-Connes conjecture with coefficients for $\Z$, stating that Kasparov's assembly map is an equivalence, also implies that \eqref{qfqwefewdqwed} is an equivalence. The usual cofibre sequence calculating the coinvariants of a $\Z$-object in a stable $\infty$-category specializes to
$$K(\mathrm{Res}^{\Z}(A))\stackrel{1-\sigma}{\to} K(\mathrm{Res}^{\Z}(A)) \to K(A\rtimes \Z)\ ,$$ (we write the square \eqref{adfasdfadsfdasf} in this form in order to be able to highlight the map $1-\sigma$),
where $\sigma$ denotes the action of the generator of $\Z$ on $K(\mathrm{Res}^{\Z}(A))$. On the level of homotopy groups this cofibre sequence
induces the PV-sequence \eqref{asdfasdfadfdffs}
including the calculation of the map in the middle.
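The cofibre sequence invoked above is an instance of the standard presentation of coinvariants, recorded here for convenience: for any object $X$ with $\Z$-action in a stable $\infty$-category one has the cofibre sequence

```latex
% Coinvariants of a \Z-object X in a stable infinity-category
% are the cofibre of 1-\sigma, where \sigma is the generator:
X \xrightarrow{\;1-\sigma\;} X \longrightarrow \colim_{B\Z} X\ .
% For X = K(\mathrm{Res}^{\Z}(A)) the equivalence
% \colim_{B\Z} K(\mathrm{Res}^{\Z}(A)) \simeq K(A\rtimes \Z)
% identifies this with the displayed cofibre sequence, and its
% long exact sequence of homotopy groups is the PV-sequence.
```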
One outcome of this note is an alternative construction of a square of the form \eqref{adfasdfadsfdasf} in terms of $\Z$-equivariant coarse $K$-homology theory. In particular, in our construction the maps involved in this square acquire a clear geometric interpretation.
The symmetric monoidal category $(\Z\mathbf{BC},\otimes)$ of $\Z$-bornological coarse spaces and the notion of a $\Z$-equivariant coarse homology theory have been
introduced in \cite{equicoarse}. Examples of $\Z$-bornological coarse spaces are $\Z_{min,min}$ and $\Z_{can,min}$, given by the group $\Z$ with the minimal bornology and the minimal or canonical coarse structure, respectively.
The additional structure of transfers was introduced in \cite{coarsetrans}.
For every $\Z$-equivariant coarse homology theory $E^{\Z}:\Z\mathbf{BC}\to \mathbf{M}$ with transfers and every $\Z$-bornological coarse space $X$, in Section \ref{wkgopgsgfdg}
we will construct a commutative square
\begin{equation}\label{qwdqwewqwdqweddwdwdwd1eee} \xymatrix{ E^{\Z}( X\otimes \Z_{min,min})\ar[r]^-{\iota}\ar[d]& E^{ \Z}(X\otimes \Z_{can,min})\ar[d]^{\tr}\\0\ar[r]& E^{\Z}(X\otimes \Z_{can,min}\otimes \Z_{min,min})}
\end{equation}
which we call the coarse PV-square associated to $E^{\Z}$ and $X$.
The morphism $\iota$ in \eqref{qwdqwewqwdqweddwdwdwd1eee} is induced by the morphism
of bornological coarse spaces $\Z_{min,min}\to \Z_{can,min}$ given by the identity of the underlying sets, and the morphism
$\tr$ is the transfer morphism induced by the coarse covering
$\Z_{min,min}\to *$. The filler of the square is given by a simple
argument using the properties of coarse homology theories.
We then study conditions on $E^{\Z}$ and $X$ which imply that the square
\eqref{qwdqwewqwdqweddwdwdwd1eee} is cartesian.
In order to formulate one such condition, for any $\Z$-equivariant coarse homology theory $F^{\Z}:\Z\mathbf{BC}\to \mathbf{M}$
we form the functor $$HF^{\Z} :\Z\mathbf{Orb}\to \mathbf{M}\ , \quad S\mapsto F^{\Z}(S_{min,max}\otimes \Z_{can,min})$$
and consider
the Davis-L\"uck assembly map \begin{equation}\label{wefqwedqwdeewdeqwd}
\mu^{DL}_{HF^{\Z} }:\colim_{\Z_{\mathbf{Fin}}\mathbf{Orb}}HF^{\Z}\to HF^{\Z}(*)
\end{equation}
for the family $\mathbf{Fin}$ of finite subgroups.
We
form the $\Z$-equivariant coarse homology theory
$E_{X,c}^{\Z}:\Z\mathbf{BC}\to \mathbf{M}$ given by the continuous approximation of the twist $E^{\Z}(X\otimes -):\Z\mathbf{BC}\to \mathbf{M}$ of $E^{\Z}$ by $X$. \begin{theorem}[{Theorem \ref{weijtgowegferwerwgwergre}}]\label{gwjreogrgrfwerefwerf}
Assume:
\begin{enumerate}
\item\label{sdvsdfvfewcd} $X$ has the minimal bornology and is discrete.
\item\label{wetkogierfrewfwef} $E^{\Z}$ is strong and strongly additive.
\end{enumerate}
Then the PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian if and only if $\mu_{HE^{\Z}_{X,c}}^{DL}$ is an equivalence.
\end{theorem}
We refer to Section \ref{wkgopgsgfdg} for an explanation and references concerning the additional properties of $\Z$-equivariant coarse homology theories appearing in the above statement.
The Condition \ref{gwjreogrgrfwerefwerf}.\ref{wetkogierfrewfwef} on $E^{\Z}$ is satisfied for many examples of equivariant coarse homology theories, e.g. the coarse topological $K$-theory \eqref{qwedqwedqwedqwd} with coefficients in a $C^{*}$-category with $\Z$-action, or the coarse algebraic $K$-homology associated to an additive category or a left-exact $\infty$-category with $\Z$-action. The first is the main example for the present paper, and we refer to Example \ref{wrthokwegfwerwrf} for detailed references for the second case and to \cite{unik} for the third.
The condition on $\mu_{HE^{\Z}_{X,c}}^{DL}$
is complicated and not always satisfied, see Example \ref{wrthokwegfwerwrf}.
At first glance the Condition \ref{gwjreogrgrfwerefwerf}.\ref{sdvsdfvfewcd} on $X$ is very restrictive, but using that the corners of
the square \eqref{qwdqwewqwdqweddwdwdwd1eee} are $\Z$-equivariant coarse homology theories in the variable $X$,
one can considerably extend the range of $\Z$-bornological coarse spaces $X$ for which this square is known to be cartesian.
Let $\mathrm{Yo}^{s}:\Z\mathbf{BC}\to \Z\Sp\cX$ be the universal $\Z$-equivariant coarse homology theory.
Then the property that \eqref{qwdqwewqwdqweddwdwdwd1eee} is cartesian only depends on the image
$\mathrm{Yo}^{s}_{\loc}(X):=\ell(\mathrm{Yo}^{s}(X))$ of $\mathrm{Yo}^{s}(X)$ under the localization $\ell:\Z\Sp\cX\to \Z\Sp\cX_{\loc}$ of $\Z\Sp\cX$ at the three
$\Z$-equivariant coarse homology theories $E^{\Z}( -\otimes \Z_{min,min})$, $E^{\Z}(- \otimes \Z_{can,min})$ and
$E^{\Z}( -\otimes \Z_{can,min}\otimes \Z_{min,min})$ appearing at the corners of \eqref{qwdqwewqwdqweddwdwdwd1eee}.
In Definition \ref{eiojgwergrefwerf} we introduce $\Z\Sp\cX_{\loc}\langle DL\rangle$ as the localizing subcategory of $\Z\Sp\cX_{\loc}$
generated by $\mathrm{Yo}^{s}_{\loc}(X)$ for all $X$ in $\Z\mathbf{BC}$ which are discrete, have the minimal bornology, and are such that $\mu^{DL}_{HE^{\Z}_{X,c}}$ is an equivalence.
Then for $X$ in $\Z\mathbf{BC}$ we have the following consequence of Theorem \ref{gwjreogrgrfwerefwerf}.
\begin{kor}[{Corollary \ref{wrthiojwergwergwerrwfg}}] \label{erthokeorpthertgergregretg}Assume that $E^{\Z}$ is strong and strongly additive.
If $\mathrm{Yo}^{s}_{\loc}(X)\in \Z\Sp\cX_{\loc}\langle DL\rangle$, then the PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian.
\end{kor}
There are many $\Z$-bornological coarse spaces $X$
which are not necessarily discrete or have the minimal bornology
but still satisfy $\mathrm{Yo}^{s}_{\loc}(X) \in \Z\Sp\cX_{\loc}\langle DL\rangle$.
It is not even clear that for such spaces
the Davis-L\"uck assembly map $\mu^{DL}_{HE^{\Z}_{X,c}}$ is an equivalence, but nevertheless the PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian.
In order to connect the square \eqref{qwdqwewqwdqweddwdwdwd1eee} with the classical PV-sequence \eqref{adfasdfadsfdasf} for a $\Z$-$C^{*}$-algebra $A$
we take $E^{\Z}=K\cX_{\mathbf{Hilb}_{c}(A)}^{\Z}$, the coarse algebraic $K$-homology \eqref{qwedqwedqwedqwd} with coefficients in the $C^{*}$-category $\mathbf{Hilb}_{c}(A)$ of Hilbert $A$-modules and compact operators \cite{coarsek}, and $X=*$.
Then $E^{\Z}$ and $X$ satisfy the assumptions of Theorem \ref{gwjreogrgrfwerefwerf}.
We further employ the fact that the group $\Z$ satisfies the Baum-Connes conjecture with coefficients in order to verify that $\mu^{DL}_{HK\cX_{\bC}^{\Z}} $ is an equivalence. In view of the obvious equivalence
$\mu^{DL}_{HK\cX_{\bC}^{\Z}} \simeq \mu^{DL}_{HK\cX_{\bC,*,c}^{\Z}} $,
by Theorem \ref{gwjreogrgrfwerefwerf}
the coarse PV-square
\begin{equation}\label{qwdqwewqrwdqweddwdwdwd1eee} \xymatrix{ K\cX^{\Z}_{\mathbf{Hilb}_{c}(A)}( \Z_{min,min})\ar[r]^-{\iota}\ar[d]& K\cX_{\mathbf{Hilb}_{c}(A)}^{ \Z}( \Z_{can,min})\ar[d]^{\tr}\\0\ar[r]& K\cX_{\mathbf{Hilb}_{c}(A)}^{\Z}( \Z_{can,min}\otimes \Z_{min,min})}
\end{equation}
is cartesian. In Proposition
\ref{eorkjgwegreegrwegr9} we explain that this square is
equivalent to a square of the form \eqref{adfasdfadsfdasf}.
We further determine the boundary map
explicitly.
We thus get a new construction of a Pimsner-Voiculescu sequence.
Note that $\mathbf{Hilb}_{c}(A)$ for a $\Z$-$C^{*}$-algebra $A$ is just a particular example of a $C^{*}$-category with a strict $\Z$-action. More generally,
if $\bC$ is any $C^{*}$-category with a strict $\Z$-action which admits small orthogonal AV-sums, then we have the $\Z$-equivariant coarse homology theory with transfers \begin{equation}\label{qwedqwedqwedqwd}K\cX_{\bC}^{\Z}:\Z\mathbf{BC}\to \Sp\end{equation}
constructed in \cite{coarsek}. For every
$X$ in $\Z\mathbf{BC}$, by specializing \eqref{qwdqwewqwdqweddwdwdwd1eee}, we obtain the
coarse PV-square
\begin{equation}\label{qwdqwerrwqwdqweddwdwdwd1eee} \xymatrix{ K\cX_{\bC}^{\Z}( X\otimes \Z_{min,min})\ar[r]^-{\iota}\ar[d]& K\cX_{\bC}^{\Z} (X\otimes \Z_{can,min})\ar[d]^{\tr}\\0\ar[r]& K\cX_{\bC}^{\Z}(X\otimes \Z_{can,min}\otimes \Z_{min,min})}\ .
\end{equation}
If $X $ is discrete and has the minimal bornology, i.e., $X\cong Y_{min,min}$ for some $\Z$-set $Y$, then
by Proposition \ref{wiothgerththerh} we have an equivalence of $\Z$-equivariant coarse homology theories
$$K\cX_{\bC,X,c}^{\Z }\simeq K\cX_{\bC_{Y}}^{\Z} \ ,$$
where $ \bC_{Y}$ is a suitably defined $C^{*}$-category with $\Z$-action depending on $Y$. We can therefore reduce the problem of showing that \eqref{qwdqwerrwqwdqweddwdwdwd1eee} is cartesian to the case of $X=*$ at the cost of modifying the coefficient $C^{*}$-category. Using the Baum-Connes conjecture with coefficients for $\Z$ we can then check that
$\mu^{DL}_{HK\cX^{\Z}_{\bC_{Y}}}$ and hence
$\mu^{DL}_{HK\cX^{\Z}_{\bC,X,c}}$ are equivalences so that $\mathrm{Yo}^{s}_{\loc}(X)\in \Z\Sp\cX_{\loc}\langle DL\rangle$ by Theorem \ref{gwjreogrgrfwerefwerf}.
Let $\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ be the localizing subcategory of $\Z\Sp\cX_{\loc}$ generated by $\mathrm{Yo}^{s}_{\loc}(X)$ for all $X$ in $\Z\mathbf{BC}$ which are discrete and have the minimal bornology.
\begin{theorem}[{Theorem \ref{weitgowergerfrwferfw}}]
Assume that $E^{\Z}=K\cX_{\bC}^{\Z}$. Then
$$\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle\subseteq \Z\Sp\cX_{\loc}\langle DL\rangle\ .$$
\end{theorem}
For a general $X$ in $\Z\mathbf{BC}$ we then apply Corollary \ref{erthokeorpthertgergregretg}
in order to conclude that \eqref{qwdqwerrwqwdqweddwdwdwd1eee} is cartesian provided $\mathrm{Yo}^{s}_{\loc}(X)\in \Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$, see Corollary \ref{qerigojoqrfewfqewfqf}.
We now restrict to bornological coarse spaces with trivial $\Z$-action
and provide very general conditions on $X$ ensuring that $\mathrm{Yo}^{s}_{\loc}(X)\in \Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$.
In order to state the result we employ the coarse assembly map
\begin{equation}\label{fqwefwqedwqedqwedqewd}
\mu_{F,X}:F{\mathcal{O}}^{\infty}\mathbf{P}(X)\to F(X)
\end{equation} for a strong coarse homology theory $F:\mathbf{BC}\to \mathbf{M}$ and a bornological coarse space $X$ which has been introduced in \cite[Def. 9.7]{ass}. Recall the notion of weakly finite asymptotic dimension \cite[Def. 10.3]{ass} and bounded geometry \cite[Def. 7.77]{buen}. In the following we consider the functors $K\cX_{\bC}^{\Z}( -\otimes \Z_{min,min})$, $K\cX_{\bC}^{\Z} (-\otimes \Z_{can,min})$ and $K\cX_{\bC}^{\Z} (-\otimes \Z_{can,min}\otimes \Z_{min,min})$ from $ \mathbf{BC}$ to $\Sp$ as strong non-equivariant coarse homology theories.
\begin{theorem}[{Theorem \ref{werigosetrrgertgerwgw}}]\label{werigosetrgertgerwgw}Assume one of the following:
\begin{enumerate}
\item\label{ijtgoertgegertgwee} $X$ has weakly finite asymptotic dimension.
\item \label{ijtgoertgegertgwee1} $X$ has bounded geometry and the three coarse assembly maps $\mu_{K\cX^{\Z}_{\bC}(-\otimes \Z_{min,min}),X}$,
$\mu_{K\cX^{\Z}_{\bC}(-\otimes \Z_{can,min}),X}$ and
$\mu_{K\cX^{\Z}_{\bC}(-\otimes \Z_{can,min}\otimes \Z_{min,min}),X}$
are equivalences.
\end{enumerate}
Then $\mathrm{Yo}^{s}_{\loc}(X)\in \Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ and hence
\eqref{qwdqwerrwqwdqweddwdwdwd1eee} is cartesian.
\end{theorem}
It could be true that the coarse PV-square \eqref{qwdqwerrwqwdqweddwdwdwd1eee}
is cartesian for all $X$ in $\mathbf{BC}$.
The Condition \ref{werigosetrgertgerwgw}.\ref{ijtgoertgegertgwee} on $X$
implies that the coarse assembly map $\mu_{F,X}$ is an equivalence for any strong coarse homology theory. But for the coarse topological $K$-theory
$K\cX_{\mathbf{Hilb}_{c}(\C)}$ the coarse assembly map is an equivalence for
a much bigger class of bornological coarse spaces, e.g. discrete metric spaces of bounded geometry which admit a coarse embedding into a Hilbert space \cite[Thm. 1.1]{yu_embedding_Hilbert_space}.
We therefore expect that the Assumption \ref{werigosetrgertgerwgw}.\ref{ijtgoertgegertgwee1} is satisfied for many spaces not having finite asymptotic dimension.
We consider this note as an opportunity to demonstrate the use of results of coarse homotopy theory as developed in \cite{buen}, \cite{ass}, \cite{equicoarse}, \cite{coarsek} and \cite{bel-paschke}.
{\em Acknowledgements: The author thanks M. Ludewig
for fruitful discussions, in particular for insisting on a detailed argument that \eqref{qwdqwewqrwdqweddwdwdwd1eee} is equivalent to the classical PV-sequence.
This work was supported by the CRC 1085 {\em Higher structures} funded by the DFG.}
\section{The coarse PV-square}
\label{wkgopgsgfdg}
A $\Z$-equivariant $\mathbf{M}$-valued coarse homology theory is a functor
$$E^{\Z}:\Z\mathbf{BC}\to \mathbf{M}$$ from the category $\Z\mathbf{BC}$ of $\Z$-bornological coarse spaces to a cocomplete stable $\infty$-category $\mathbf{M}$ such that the functor $E^{\Z}$ is coarsely invariant, excisive, $u$-continuous and vanishes on flasques \cite[Def. 3.10]{equicoarse}. There exists a universal $\Z$-equivariant coarse homology theory \cite[Def. 4.9]{equicoarse}
$$\mathrm{Yo}^{s}:\Z\mathbf{BC}\to \Z\Sp\cX\ ,$$ where $ \Z\Sp\cX$ is a presentable stable $\infty$-category called the $\infty$-category of coarse motivic spectra.
The category $\Z\mathbf{BC}$ admits a symmetric monoidal structure $\otimes$ described in \cite[Ex. 2.17]{equicoarse}. It induces a presentably symmetric monoidal structure on $\Z\Sp\cX$ such that the functor $\mathrm{Yo}^{s}$
refines essentially uniquely to a symmetric monoidal functor \cite[Sec. 4.3]{equicoarse}.
If $E^{\Z}:\Z\mathbf{BC}\to \mathbf{M}$ is a $\Z$-equivariant coarse homology theory, then by \cite[Cor. 4.10]{equicoarse} it essentially
uniquely factorizes as the composition of $\mathrm{Yo}^{s}$ and a colimit-preserving functor $E^{\Z}: \Z\Sp\cX\to \mathbf{M}$ denoted by the same symbol.
If $X$ is an object of $\Z\mathbf{BC}$, then the functor
$E^{\Z}(X\otimes -):\Z\mathbf{BC}\to \mathbf{M}$ is again a $\Z$-equivariant coarse homology theory
called the twist of $E^{\Z}$ by $X$.
We will encounter additional conditions and structures on $E^{\Z}$:
\begin{enumerate}
\item continuity \cite[Def. 5.19]{equicoarse}
\item strongness \cite[Def. 4.19]{equicoarse}
\item strong additivity \cite[Def. 3.12]{equicoarse}
\item transfers \cite[Def. 2.53]{coarsetrans}\ .
\end{enumerate}
For any $\Z$-set $Y$ we can form the objects $Y_{min,max}$ and $Y_{min,min}$ in $\Z\mathbf{BC}$ obtained by equipping $Y$ with the minimal coarse structure (whose maximal entourage is $\diag(Y)$) and the maximal bornology (consisting of all subsets) or the minimal bornology (consisting of the finite subsets), respectively.
The group $\Z$ has a canonical $\Z$-coarse structure generated by the entourages
$U_{r}:=\{(n,m)\in \Z\times \Z\mid |n-m|\le r\}$ for all $r$ in $\nat$. We let $\Z_{can,min}$ in $\Z\mathbf{BC}$ denote the corresponding object.
Let $E^{\Z}:\Z\mathbf{BC}\to \mathbf{M}$ be an equivariant coarse homology theory
with transfers.
We start with
describing a commutative square
\begin{equation}\label{qwdqwewqwdqwed1eee} \xymatrix{ E^{\Z}( -\otimes \Z_{min,min})\ar[r]^-{\iota}\ar[d]& E^{ \Z}(-\otimes \Z_{can,min})\ar[d]^{\tr}\\0\ar[r]& E^{\Z}(-\otimes \Z_{can,min}\otimes \Z_{min,min})}
\end{equation}
of $\mathbf{M}$-valued $\Z$-equivariant coarse homology theories.
The map $\iota$ is induced by the morphism $\Z_{min,min}\to \Z_{can,min}$ in $\Z\mathbf{BC}$
given by the identity of the underlying sets.
The map
$\tr$ is the transfer along the coarse covering $ \Z_{min,min}\to *$. \begin{lem}\label{wigowegreewff}
The square \eqref{qwdqwewqwdqwed1eee} commutes.
\end{lem}
\begin{proof}
We have the following commutative diagram
\begin{equation}\label{qewfqwefdewqdewdqwerereeredewqdeeeee}
\xymatrix{ E^{ \Z}(-\otimes \Z_{min,min})\ar[r]^{\iota}\ar[d]^{\tr}&E^{ \Z}(-\otimes \Z_{can,min})\ar[d]^{\tr}\\ E^{\Z}( -\otimes \Z_{min,min}\otimes \Z_{min,min}) \ar[r] \ar[d]_{!}^{\simeq}&E^{\Z}( -\otimes \Z_{can,min}\otimes \Z_{min,min})\ar[d]_{!}^{\simeq}\\ E^{\Z}( -\otimes \mathrm{Res}^{\Z}(\Z_{min,min})\otimes \Z_{min,min}) \ar[r] \ar[d]_{\partial^{MV}}^{0}&E^{\Z}( -\otimes \mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min})\ar[d]_{\partial^{MV}}^{\simeq}\\ \Sigma E^{ \Z} ( - \otimes \Z_{min,min})\ar@{=}[r] & \Sigma E^{ \Z} ( - \otimes \Z_{min,min})}\ .
\end{equation}
The upper three horizontal maps are all induced from $\Z_{min,min}\to \Z_{can,min}$.
The maps marked by $!$ are induced by the equivariant map of $\Z$-sets
$$X\times \Z\times \Z\to X\times \mathrm{Res}^{\Z}(\Z)\times \Z\ , \quad (x,n,m)\mapsto (x,n-m,m)\ .$$ The morphism $\partial^{MV}$ is the Mayer-Vietoris boundary map associated to the decomposition of $\mathrm{Res}^{\Z}(\Z)$ into $(\nat,-\nat)$.
Its left instance vanishes since this decomposition of $\mathrm{Res}^{\Z}(\Z_{min,min})$ is coarsely disjoint.
Using the Mayer-Vietoris sequence we see that the right instance of $\partial^{MV}$ is an equivalence
since the subsets $\pm \nat$ of $\mathrm{Res}^{\Z}(\Z_{can,min})$ are flasque.
The commutativity of the upper square encodes the naturality of the transfer, and the commutativity of the lower square reflects the naturality of the Mayer-Vietoris boundary.
The middle square arises from applying $E^{\Z}$ to a commutative square in $\Z\mathbf{BC}$ and therefore commutes, too.
The commutative diagram \eqref{qewfqwefdewqdewdqwerereeredewqdeeeee}
yields a filler of \eqref{qwdqwewqwdqwed1eee}. \end{proof}
We now fix $X$ in $\Z\mathbf{BC}$ and consider the $\Z$-equivariant coarse homology theory \begin{equation}\label{qw3erqw3asded}
E_{X}^{\Z}(-):=E^{\Z}(X\otimes -):\Z\mathbf{BC}\to \mathbf{M}
\end{equation} obtained from $E^{\Z}$ by twisting with $X$.
\begin{ddd}
The commutative square \begin{equation}\label{qwdqwewqwdqwedefe1eee}
\xymatrix{E_{X}^{\Z}(\Z_{min,min})
\ar[r]^{\iota}\ar[d]&E_{X}^{\Z}(\Z_{can,min})\ar[d]^{\tr}\\ 0\ar[r]&E_{X}^{\Z}(\Z_{can,min}\otimes \Z_{min,min})}
\end{equation}
is called the coarse PV-square associated to $E^{\Z}$ and $X$.
\end{ddd}
\section{The coarse PV-sequence}
In this section we are interested in conditions on $E^{\Z}$ and $X$ ensuring that the coarse PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian, i.e. that \begin{equation}\label{weqfoijoiwqjdoieewdqwedqewd}
E_{X}^{\Z}(\Z_{min,min})\stackrel{\iota}{\to}E_{X}^{\Z}(\Z_{can,min})\stackrel{\tr}{\to} E_{X}^{\Z}(\Z_{can,min}\otimes \Z_{min,min})
\end{equation} is a part of
a fibre sequence. In this case it will be called the coarse PV-sequence.
Let
$i: \Z\mathbf{BC}_{min}\to \Z\mathbf{BC}$ denote the inclusion of the full subcategory of $\Z$-bornological coarse spaces with the minimal bornology. We let $i^{*}$ and $i_{!}$ denote the operations of restriction and left Kan extension along $i$. Recall from \cite[Sec. 5.4]{equicoarse} that a coarse homology theory $F^{\Z}:
\Z\mathbf{BC}\to \Sp$ is continuous if the canonical transformation \begin{equation}\label{afdasdfqwefq}
i_{!}i^{*}F^{\Z}\to F^{\Z}
\end{equation} is an equivalence.
\begin{ddd}\label{qrigfjqorfqew}We call $ F^{\Z}_{c}:=i_{!}i^{*}F^{\Z}$ the continuous approximation of $F^{\Z}$.\end{ddd}
We apply this construction to the functor $E_{X}^{\Z}$ from \eqref{qw3erqw3asded} and get a continuous equivariant
coarse homology theory $E_{X,c}^{\Z}$.
The functor $E_{X,c}^{\Z}$ gives rise to a functor
$$HE^{\Z}_{X,c}:\Z\mathbf{Orb}\to \mathbf{M}\ , \quad S\mapsto E^{\Z}_{X,c}(S_{min,max}\otimes \Z_{can,min})$$
from the orbit category of $\Z$ to $\mathbf{M}$ and an associated Davis-L\"uck assembly map $\mu^{DL}_{HE^{\Z}_{X,c}}$ given in \eqref{wefqwedqwdeewdeqwd}. Unfolding the definition,
the Davis-L\"uck assembly map is equivalent to the map
\begin{equation}\label{arfefewfqfewqdqweded}
\colim_{B\Z'} E_{X,c}^{\Z}( \Z_{min,max}\otimes \Z_{can,min}) \to
E^{\Z}_{X,c}( \Z_{can,min})
\end{equation} induced by the projection $ \Z_{min,max}\to *$, where $\Z'$ acts on $\Z_{min,max}$ by translations.
The main theorem of the present section is:
\begin{theorem}\label{weijtgowegferwerwgwergre}
Assume:
\begin{enumerate}
\item $X$ has the minimal bornology and is discrete.
\item $E^{\Z}$ is strong and strongly additive.
\end{enumerate}
Then the PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian if and only if $\mu_{HE^{\Z}_{X,c}}^{DL}$ is an equivalence.
\end{theorem}
\begin{proof}
We let $\Z'$ be a second copy of $\Z$ which acts on $\Z_{min,min}$ by translations. Then $\Z'$ acts by functoriality on the coarse homology theory $E^{\Z}_{X}( -\otimes \Z_{min,min})$. The coarse covering
$\Z_{min,min}\to *$ is $\Z'$-equivariant, where $\Z'$ acts trivially on $*$. As a consequence, the transfer map $\tr$ for $E^{\Z}_{X}$ along the coarse coverings
$ -\otimes \Z_{min,min}\to -$ has a factorization
$$\tr:E_{X}^{\Z}(-)\stackrel{\mathrm{coass}_{X}}{\to} \lim_{B\Z'}E_{X}^{\Z}(-\otimes \Z_{min,min})\stackrel{\ev}{\to} E_{X}^{\Z}(-\otimes \Z_{min,min})\ .$$
\begin{ddd} For $Z$ in $\Z\mathbf{BC}$
we call $$\mathrm{coass}_{X,Z}:E^{\Z}_{X}(Z)\to \lim_{B\Z'}E^{\Z}_{X}(Z\otimes \Z_{min,min})$$ the coassembly map for the object $Z$.
\end{ddd}
In a stable $\infty$-category like $\mathbf{M}$ finite limits commute with colimits. Since
$\lim_{B\Z'}$ is a finite limit
the functor
$\lim_{B\Z'}E^{\Z}(- \otimes \Z_{min,min})$
is again a $\Z$-equivariant coarse homology theory.
We will keep $X$ in the notation since later we will study properties of the coassembly map which may depend on the choice of $X$.
\begin{prop}\label{weriojtgowtgerf}
Assume that $E^{\Z}$ is strong and the coassembly map $\mathrm{coass}_{X,\Z_{min,min}}$ is an equivalence. Then the following assertions are equivalent:
\begin{enumerate}
\item The coassembly map $\mathrm{coass}_{X,\Z_{can,min}}$ is an equivalence.
\item The coarse $PV$-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian.
\end{enumerate}
\end{prop}
\begin{proof}
We consider the commutative diagram
\begin{equation}\label{qewfqwefdewqdewdrerqwerereeredewqdeeeee}
\xymatrix{ E_{X}^{ \Z}( \Z_{min,min})\ar[r]^{\iota}\ar[d]_{\simeq}^{\mathrm{coass}_{X,\Z_{min,min}}}&E_{X}^{ \Z}( \Z_{can,min})\ar[d]^{\mathrm{coass}_{X,\Z_{can,min}}}\ar@/^3.5cm/[dd]^{\tr}\\ \lim_{B\Z'} E_{X}^{\Z}( \Z_{min,min}\otimes \Z_{min,min}) \ar[r]^{\lim_{B\Z'}\iota} \ar@{..>}[dr]^{0} &\lim_{B\Z'}E_{X}^{\Z}( \Z_{can,min}\otimes \Z_{min,min})\ar[d]^{\ev} \\ & E_{X}^{\Z}( \Z_{can,min}\otimes \Z_{min,min}) }\ .
\end{equation}
The vanishing of the composition $\ev\circ \lim_{B\Z'}\iota$ is a consequence of Lemma \ref{wigowegreewff}. We claim that $\ev$ presents
the cofibre of $ \lim_{B\Z'}\iota$.
For the moment let us assume the claim.
If $\mathrm{coass}_{X,\Z_{can,min}}$ is an equivalence, then $\tr$ represents the cofibre of $\iota$
and the coarse PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian.
Conversely, if the coarse PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian, then
$\mathrm{coass}_{X,\Z_{can,min}}$ is an equivalence by an application of the Five Lemma.
In order to show the claim we form the commutative diagram \begin{equation}\label{ewqfqwefwedqwedqwdewd}
\xymatrix{\lim_{B\Z'}E^{\Z}_{X}(\Z_{min,min}\otimes \Z_{min,min})\ar[r]^{\ev}\ar[d]^{\lim_{B\Z'}\iota}& E^{\Z}_{X}(\Z_{min,min}\otimes \Z_{min,min})\ar[d]_{0}^{\iota}\ar[r]^{1-\sigma}&E^{\Z}_{X}(\Z_{min,min}\otimes \Z_{min,min})\ar[d]^{\iota}\\\lim_{B\Z'}E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})\ar[r]^{\ev}\ar[d]& E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})\ar[d]\ar[r]_{0}^{1-\sigma}&E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})\ar[d]\\\lim_{B\Z'} Q\ar[r]\ar[d]^{0}& Q\ar[r]^{1-\sigma}& Q\\&&}
\end{equation}
in $\mathbf{M}$.
Here $Q$ is the object of $\mathbf{M}$ defined as the cofibre of the middle map denoted by $\iota$ with the induced action of $\Z'$. The symbol $\sigma$ stands for the action of the generator of $\Z'$.
The horizontal sequences are fibre sequences reflecting the usual presentation of the fixed points of a $\Z'$-object in a stable $\infty$-category.
The vertical sequences are fibre sequences by construction.
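For reference, the horizontal rows use the standard presentation of fixed points, dual to the coinvariants sequence: for a $\Z'$-object $X$ in a stable $\infty$-category there is a fibre sequence

```latex
% Fixed points (limit over B\Z') of a \Z'-object X:
\lim_{B\Z'} X \xrightarrow{\;\ev\;} X \xrightarrow{\;1-\sigma\;} X\ ,
% where \sigma is the action of the generator of \Z' and
% \ev is the evaluation map.
```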
We now have the following assertions:
\begin{lem}\label{tgot0grefwefefewf}\mbox{}\begin{enumerate}
\item \label{sfdgsfgdsfds} The map $E^{\Z}_{X}(\Z_{min,min}\otimes \Z_{min,min})\to E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})$ vanishes.
\item \label{sfdgsfgdsfds1}The group $\Z'$ acts trivially on $E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})$.
\item\label{sfdgsfgdsfds2} The map $\lim_{B\Z'}\iota$ is split injective.
\end{enumerate}\end{lem}
These facts imply that the maps marked by $0$ in the diagram \eqref{ewqfqwefwedqwedqwdewd} vanish.
From Lemma \ref{afafadsadsf} below applied to the web \eqref{ewqfqwefwedqwedqwdewd}
we then get a commutative triangle
\begin{equation}\label{}
\xymatrix{&\lim_{B\Z'}E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})\ar[dr]^{\ev}\ar[dl]&\\\lim_{B\Z'}Q\ar[rr]^{\simeq}&&Q}
\end{equation}
which solves our task.
We consider a web of fibre sequences
\begin{equation}\label{regwergefefefewe}
\xymatrix{A\ar[r]\ar[d]^{a}&B\ar[d]^{0}\ar[r]&C\ar[d]\\E\ar[d]^{l} \ar[r]^{j}&F\ar[d]^{i}\ar[r]^{0}&G\ar[d]\\H\ar[r]^{m} \ar[d]^{0}&I\ar[r]^{d}\ar[d]&J\ar[d]\\\Sigma A\ar[r]&\Sigma B\ar[r]&\Sigma C}
\end{equation}
in some stable $\infty$-category where the maps marked by $0$ vanish.
\begin{lem}\label{afafadsadsf}
There exists a commutative triangle
\begin{equation}\label{}
\xymatrix{&E\ar[dr]^{j}\ar[dl]_{l}&\\ H\ar[rr]^{u}_{\simeq}&&F}
\end{equation}
\end{lem}
\begin{proof}
The map $u$ is obtained from the universal property of $H$ as the cofibre of $a$ together with the fact that $ j\circ a\simeq 0$ witnessed by the upper left square.
We have $j\simeq u\circ l$.
The potential inverse $v:F\to H$ of $u$ is obtained from the universal property of $H$ as the fibre of $I\to J$ together
with the fact that $d\circ i\simeq 0$ witnessed by the middle right square.
We have $i\simeq m\circ v$.
We claim that $u$ and $v$ are mutually inverse equivalences in the homotopy category of $\mathbf{M}$.
To this end we must consider equivalences between maps (which we call homotopies)
as $2$-categorical data.
We first observe that $i\circ u= m$. The map
$u$ is the homotopy class of the pair consisting of the map $j$ and the homotopy
$\alpha:j\circ a\Rightarrow 0$.
Similarly the map $m$ is the homotopy class of the map $i\circ j$ together with the homotopy $\beta:i\circ j\circ a\Rightarrow 0$. The commutativity of the
left middle square expresses the fact that $i_{*}\alpha=\beta$.
We conclude that $i\circ u= m$.
We now calculate
$i\circ u\circ v\circ j= m\circ v\circ j= i\circ j$. Since $i$ is a monomorphism and $j$ is an epimorphism
we conclude that $u\circ v=\id_{F}$.
We now show that $v\circ u= \id_{H}$. The map
$v$ is the homotopy class of the pair consisting of the map $i$ and the zero homotopy $\sigma$ of $d\circ i$. The map $l$ is similarly the homotopy class of the pair consisting of the map $i\circ j$ and the zero homotopy $\kappa$ of $d\circ i\circ j$.
In this picture the commutativity of the left middle square expresses the fact that
$j^{*}\sigma=\kappa$. Alltogether we conclude that
$j^{*}v=l$. We now calculate that
$v \circ u\circ l= v\circ j= l$. Since $l$ is is an epimorphism we conclude that
$v\circ u=\id_{H}$.\end{proof}
\begin{proof}[Proof of Lemma \ref{tgot0grefwefefewf}]
Assertion \ref{sfdgsfgdsfds} is provided by the two lower squares of \eqref{qewfqwefdewqdewdqwerereeredewqdeeeee}.
We now show Assertion \ref{sfdgsfgdsfds1}.
We use the isomorphism
\begin{equation}\label{rwewfwefewfrwf}
\Z_{can,min}\otimes \Z_{min,min}\to \mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min}\ , \quad (m,n)\mapsto (m-n,n)
\end{equation}
in $\Z \mathbf{BC}$. The $\Z'$-action on $ \mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min}$ is given by
$(k,(m,n))\mapsto (m-k,n+k)$. For given $k$ in $\Z'$ we can decompose this into the map
$c_{k}:(m,n)\mapsto (m-k,n)$ and $a_{k}:(m,n)\mapsto (m,n+k)$.
The map $c_{k}$ is close to the identity. By the coarse invariance of coarse homology theories the induced action of $k$ on
$E^{\Z}_{X}( \mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min})$ is therefore equivalent to the map induced by $a_{k}$. The morphism
$$\xymatrix{\mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min}\ar[rr]^{a_{k}}\ar[dr]&&\mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min}\ar[dl]\\&\mathrm{Res}^{\Z}(\Z_{can,min})&}$$
between coarse coverings
induces
the commutative diagram
\begin{equation}\label{}
\xymatrix{&E^{\Z}_{X}( \mathrm{Res}^{\Z}(\Z_{can,min} ))\ar[dr]^{\simeq}_{\tr}\ar[dl]_{\simeq}^{\tr}&\\ E^{\Z }_{X}( \mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min})\ar[rr]^{a_{k}}&&E^{\Z }_{X}( \mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min})}
\end{equation}
This shows that the map induced by $a_{k}$ is also equivalent to the identity.
We finally show Assertion \ref{sfdgsfgdsfds2}. We use the geometric cone functor
${\mathcal{O}}^{\infty}:\Z\mathbf{UBC}\to \Z\mathbf{BC}$ which sends a $\Z$-uniform bornological coarse space $Y$ to the $\Z$-bornological coarse space given by
the $\Z$-set $\R\times Y$ with the bornology generated by the subsets
$[-n,n]\times B$ for all bounded subsets $B$ of $Y$ and $n$ in $\nat$, and the hybrid coarse structure (this is ${\mathcal{O}}(Y)_{-}$ in the notation from \cite[Sec. 9]{equicoarse}).
By \cite[Prop. 9.31]{equicoarse} we have a natural cone fibre sequence in $\Z\Sp\cX$ involving the cone boundary $\partial^{\mathrm{cone}}:\mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(-))\to \Sigma \mathrm{Yo}^{s}(\cF(-))$ which is induced from the transformation
$d^{\mathrm{cone}}:{\mathcal{O}}^{\infty}(-)\to \R\otimes \cF(-)$ between $\Z\mathbf{BC}$-valued functors and the suspension equivalence $\mathrm{Yo}^{s}(\R\otimes-)\simeq \Sigma \mathrm{Yo}^{s}(-)$, where $\cF:\Z\mathbf{UBC}\to \Z\mathbf{BC}$ is the forgetful functor.
We consider $\R$ as an object in $\Z\mathbf{UBC}$ with the action of $\Z$ by translations and the metric structures. We furthermore consider the invariant subset $\Z$ of $\R$ with the induced structures. Note that $\cF(\Z)\cong \Z_{can,min}$ and that the inclusion $\Z_{can,min}\to \cF(\R)$ is a coarse equivalence. We let $\Z_{\mathrm{disc}}$ in $ \Z\mathbf{UBC}$ denote $\Z$ with the coarse structure replaced by the discrete one. Then the identity map of underlying sets is a coarsification morphism $\Z_{\mathrm{disc}}\to \Z$ in $\Z\mathbf{UBC}$. We have $\cF(\Z_{\mathrm{disc}})\cong \Z_{min,min}$, and by \cite[Prop. 9.33]{equicoarse}
the induced map
$\mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}}))\to\mathrm{Yo}^{s} ({\mathcal{O}}^{\infty}(\Z))$ is an equivalence.
We consider the following diagram:
\begin{equation}\label{}
\xymatrix{E^{\Z}_{X}({\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}}))\ar[r]^{i}\ar[d]^{\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}})}}&E^{\Z}_{X}({\mathcal{O}}^{\infty}(\R))\ar[d]^{\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\R)}}\\
\lim_{B\Z'} E^{\Z}_{X}({\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}})\otimes \Z_{min,min})\ar[r]\ar[d]^{\lim_{B\Z'}\partial^{\mathrm{cone}}_{\Z_{\mathrm{disc}}}}&\lim_{B\Z'} E^{\Z}_{X}({\mathcal{O}}^{\infty}(\R)\otimes \Z_{min,min})\ar[d]^{\lim_{B\Z'} \partial^{\mathrm{cone}}_{\R}}\\
\lim_{B\Z'} \Sigma E^{\Z}_{X}(\Z_{min,min}\otimes \Z_{min,min}) \ar[r]^{\lim_{B\Z'}\iota}&
\lim_{B\Z'} \Sigma E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})
}\ .
\end{equation}
The upper two horizontal maps are induced by $\Z_{\mathrm{disc}}\to \Z\to \R$.
At the lower right corner we used the identification
$ E^{\Z}_{X}(\Z_{can,min}\otimes \Z_{min,min})\stackrel{\simeq}{\to}
E^{\Z}_{X}(\cF(\R)\otimes \Z_{min,min})$
induced by the coarse equivalence $\Z_{can,min}\to \cF(\R)$.
\end{proof}
It is clear that the following assertions imply Assertion \ref{tgot0grefwefefewf}.\ref{sfdgsfgdsfds2}.
\begin{lem} \label{qirogfqrfewfewfqwef}
\mbox{}
\begin{enumerate}
\item \label{wtihgowgergwefw}$\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}})}$ and $\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\R)}$
are equivalences.
\item \label{wtihgowgergwefw1} $\lim_{B\Z'}\partial^{\mathrm{cone}}_{\Z_{\mathrm{disc}}}$ and $\lim_{B\Z'}\partial^{\mathrm{cone}}_{\R}$ are equivalences.
\item\label{wtihgowgergwefw2} The map $i$ is split injective.
\end{enumerate}
\end{lem}
\begin{proof}
We start with Assertion \ref{wtihgowgergwefw}.
By \cite[Prop. 9.35]{equicoarse} we have an equivalence
$\partial^{\mathrm{cone}}_{\Z_{\mathrm{disc}}}:\mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}}))\simeq \Sigma \mathrm{Yo}^{s}(\Z_{min,min})$.
Hence
$\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}})}$ is equivalent to the suspension of
$\mathrm{coass}_{X, \Z_{min,min}}$ which is an equivalence by assumption.
Since the coassembly map $\mathrm{coass}_{X,-}$ is a transformation between $\Z$-equivariant coarse homology theories
the fact that $\mathrm{coass}_{X,Z}$ is an equivalence only depends on the motive $\mathrm{Yo}^{s}(Z)$ in $\Z\Sp\cX$.
In order to deal with ${\mathcal{O}}^{\infty}(\R)$ we use that
the composition
$ \mathrm{Yo}^{s}\circ {\mathcal{O}}^{\infty}:\Z\mathbf{UBC}\to \Z\Sp\cX$ is excisive
and homotopy invariant by \cite[Prop. 9.36 \& 9.38]{equicoarse}. In particular, if we decompose
$\R$ into the $\Z$-invariant subsets $\bigcup_{n\in \Z}[n,1/2+n]$ and
$\bigcup_{n\in \Z}[n+1/2,n +1]$ and use the
homotopy equivalences $\Z\to \bigcup_{n\in \Z}[n,1/2+n]$, $n\mapsto n$
and
$\Z\to \bigcup_{n\in \Z}[n+1/2,n +1]$, $n\mapsto n+1$ in $\Z\mathbf{UBC}$,
we get a Mayer-Vietoris sequence \begin{equation}\label{qwefqwedqewdqdqwed}
\mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z))\oplus \mathrm{Yo}^{s}( {\mathcal{O}}^{\infty}(\Z) ) \to \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z))\oplus \mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z)) \to \mathrm{Yo}^{s} ({\mathcal{O}}^{\infty}(\R))\ .
\end{equation}
Since $\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\Z)}$ is already known to be an equivalence we can use the Five Lemma in order to conclude that $\mathrm{coass}_{X, {\mathcal{O}}^{\infty}(\R)}$
is an equivalence, too.
We now show Assertion \ref{wtihgowgergwefw1}.
It suffices to show that the underlying maps $$\partial^{\mathrm{cone}}_{\Z_{\mathrm{disc}}}:
E^{\Z}_{X}({\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}})\otimes \Z_{min,min})\to \Sigma E^{\Z}_{X}(\Z_{min,min}\otimes \Z_{min,min})
$$ and $$\partial^{\mathrm{cone}}_{\R}:
E^{\Z}_{X}({\mathcal{O}}^{\infty}(\R)\otimes \Z_{min,min})\to \Sigma E^{\Z}_{X}(\cF(\R)\otimes \Z_{min,min})$$ are equivalences.
As already observed in the previous step, $\partial^{\mathrm{cone}}_{\Z_{\mathrm{disc}}}$ is an equivalence. We now discuss the case of $\partial^{\mathrm{cone}}_{\R}$.
We use the isomorphisms
$${\mathcal{O}}^{\infty}(\R)\otimes \Z_{min,min}\to {\mathcal{O}}^{\infty}( \mathrm{Res}^{\Z}(\R))\otimes \Z_{min,min} \ , \quad (t,x,n)\mapsto (t,x-n,n) $$ and
$$\cF(\R)\otimes \Z_{min,min}\to \mathrm{Res}^{\Z}(\cF(\R))\otimes \Z_{min,min}
\ , \quad (x,n)\mapsto (x-n,n)$$
in $\Z\mathbf{BC}$.
It therefore suffices to show that
$$\partial^{\mathrm{cone}}_{\mathrm{Res}^{\Z}(\R)}:
E^{\Z}_{X}( {\mathcal{O}}^{\infty}( \mathrm{Res}^{\Z}(\R))\otimes \Z_{min,min})\to \Sigma E^{\Z}_{X}(\cF(\mathrm{Res}^{\Z}(\R))\otimes \Z_{min,min})$$
is an equivalence. By \cite[Prop. 7.12]{ass} the uniform bornological coarse space $\mathrm{Res}^{\Z}(\R)$ is coarsifying. Since $E^{\Z}$ is assumed to be strong, $E^{\Z}_{X}(-\otimes \Z_{min,min})$ is also a strong non-equivariant coarse homology theory. We can now conclude that
$\partial^{\mathrm{cone}}_{\mathrm{Res}^{\Z}(\R)}$ is equivalent to the coarse assembly map $\mu_{E^{\Z}_{X}(-\otimes \Z_{min,min}),\cF(\mathrm{Res}^{\Z}(\R))}$
from \cite[Def. 9.7]{ass}. Since $\cF(\mathrm{Res}^{\Z}(\R))$ (i.e. the bornological coarse space $\R$ with the metric structures) has weakly finite asymptotic dimension the coarse assembly map $\mu_{E^{\Z}_{X}(-\otimes \Z_{min,min}),\cF(\mathrm{Res}^{\Z}(\R))}$ is an equivalence by \cite[Thm. 10.4]{ass}.
We finally show Assertion \ref{wtihgowgergwefw2}.
We consider the Mayer-Vietoris sequence \eqref{qwefqwedqewdqdqwed}.
Using the intersection with $\Z$ of the decomposition of $\R$ into the subsets $\bigcup_{n\in \Z}[n,1/2+n]$ and
$\bigcup_{n\in \Z}[n+1/2,n +1]$ we get an analogous
Mayer-Vietoris sequence for $\mathrm{Yo}^{s} ({\mathcal{O}}^{\infty}(\Z_{\mathrm{disc}}))$.
The inclusion $\Z_{\mathrm{disc}}\to \R$ induces the map of Mayer-Vietoris sequences (since ${\mathcal{O}}^{\infty}$ is invariant under coarsification we can omit the subscript $\mathrm{disc}$)
$$\xymatrix{ \mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z ) \ar[r]^{x\mapsto x\oplus \sigma (x)}\ar[d]^{x\mapsto x\oplus 0}&\ar@{-->}@/^-1cm/[l]_{x\oplus y\mapsto x} \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z))\oplus \mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z))\ar@{=}[d]\ar[r]^-{e}& \mathrm{Yo}^{s} ({\mathcal{O}}^{\infty}(\Z ))\ar[d]^{\alpha}\\\ar@/^1cm/@{..>}[u]^{(a,b)\mapsto a+b} \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z))\oplus \mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z))\ar[r]^{x\oplus y\mapsto x+y \oplus \sigma(x)+\sigma(y)}& \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z))\oplus \mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z))\ar[r]& \mathrm{Yo}^{s} ({\mathcal{O}}^{\infty}(\R))\ar@/^-1cm/@{..>}[u]_{\beta}}\ ,$$
where $\sigma$ indicates the map induced by action of the generator of $\Z$.
The existence of the split indicated by the dashed arrow implies that $e$ is
an epimorphism.
The left vertical arrow has a left-inverse (indicated by the left dotted arrow) such that the corresponding left square commutes. It induces a map $\beta$ as indicated. We have $\beta\circ \alpha \circ e\simeq e$ and hence $\beta\circ \alpha\simeq \id_{\mathrm{Yo}^{s} ( {\mathcal{O}}^{\infty}(\Z))}$. Applying $E^{\Z}_{X}$ we obtain the desired left-inverse $E^{\Z}_{X}(\beta)$ of $i\simeq E^{\Z}_{X}(\alpha)$.
\end{proof}
This completes the proof of Proposition \ref{weriojtgowtgerf}. \end{proof}
\begin{prop} \label{wtiohgwtrwerrefwre}If $E^{\Z}$ is strongly additive and
$X$ is discrete, then $\mathrm{coass}_{X,\Z_{min,min}}$ is an equivalence.
\end{prop}
\begin{proof}
We first calculate the target of the coassembly map explicitly.
\begin{eqnarray}
\lim_{B\Z'}E^{\Z}(X\otimes \Z_{min,min}\otimes \Z_{min,min})&\stackrel{\eqref{rwewfwefewfrwf}}{\simeq}
&\lim_{B\Z'}E^{\Z}(X\otimes \mathrm{Res}^{\Z}(\Z_{min,min})\otimes \Z_{min,min})\nonumber\\
&\stackrel{!}{\simeq}&\lim_{B\Z'}\prod_{ \mathrm{Res}^{\Z}(\Z )}E^{\Z}(X\otimes \Z_{min,min})\nonumber\\&\stackrel{\ev_{0}}{\simeq} &
E^{\Z}(X\otimes \Z_{min,min})\label{wtrko0wergergrefwferf}
\end{eqnarray}
For the equivalence marked by $!$
we use the isomorphism $$X\otimes \mathrm{Res}^{\Z}(\Z_{min,min})\otimes \Z_{min,min} \cong \bigsqcup^{\mathrm{free}}_{\mathrm{Res}^{\Z}(\Z )} X\otimes \Z_{min,min}$$ (see \cite[Ex. 2.16]{equicoarse} for the free union) in $\Z\mathbf{BC}$.
At this point it is important that $X$ is discrete.
The equivalence $!$ then follows from the assumption that $E^{\Z}$ is strongly additive. The group $\Z'$ acts freely and transitively on the index set $ \mathrm{Res}^{\Z}(\Z )$ by translations, and also on $\Z_{min,min}$.
The equivalence $\ev_{0}$ is the projection onto the factor with index $0$ in $\mathrm{Res}^{\Z}(\Z )$. Using \cite[Lem. 2.59]{coarsetrans}, or more concretely \cite[(2.22)]{coarsetrans}, we see that the composition of the transfer with \eqref{wtrko0wergergrefwferf} is equivalent to the identity.
Hence \eqref{wtrko0wergergrefwferf} is an inverse equivalence for $\mathrm{coass}_{X,\Z_{min,min}}$.
\end{proof}
We recall the fibre sequence
of functors \begin{equation}\label{qewfqwedqewdewedq}
\Sigma^{-1}F^{\infty}\stackrel{\beta}{\to} F^{0}\to F
\end{equation}
from $\Z\mathbf{BC}$ to $\Z\Sp\cX$
introduced in \cite[Def. 11.9 \& (11.2)]{equicoarse}, where $\beta$ is called the motivic forget-control map and $F^{0}\simeq \mathrm{Yo}^{s}$.
The motivic forget-control map induces the forget-control map
\begin{equation}\label{fwerfwerferwfrefw}
\gamma_{E_{X}^{\Z}}: E_{X}^{\Z}( F^{\infty}(\Z_{can,min}))\to \Sigma E_{X}^{\Z}( F^{0}(\Z_{can,min}))\simeq \Sigma E_{X}^{\Z}( \Z_{can,min})\ .
\end{equation}
\begin{prop}\label{wreigjwerijogweferfwefr}
Assume:
\begin{enumerate}
\item $X$ has the minimal bornology and is discrete.
\item $E^{\Z}$ is strong and strongly additive.
\end{enumerate}
Then $\mathrm{coass}_{X,\Z_{can,min}}$ is an equivalence if and only if $\gamma_{E^{\Z}_{X}}$ is an equivalence.
\end{prop}
\begin{proof}
The following commutative square \begin{equation}\label{adfasdfdssad}
\xymatrix{ \Sigma^{-1}E^{\Z}_{X}( F^{\infty}(\Z_{can,min}))\ar[r]^{\gamma_{E^{\Z}_{X}}}\ar[d]^{\Sigma^{-1}\mathrm{coass}_{ X,F^{\infty}(\Z_{can,min})}}&E_{X}^{\Z}( \Z_{can,min})\ar[d]^{\mathrm{coass}_{X, \Z_{can,min}}}\\ \lim_{B\Z'}\Sigma^{-1}E_{X}^{\Z}( F^{\infty}( \Z_{can,min})\otimes \Z_{min,min})
\ar[r]^{\lim_{B\Z'}\delta}& \lim_{B\Z'}\Sigma^{-1}E_{X}^{\Z}( \Z_{can,min}\otimes \Z_{min,min})}
\end{equation}
is at the heart of the proof of the split injectivity of the Davis-L\"uck assembly map for CP-functors given in \cite{desc}. In the present paper we reverse the flow of information and use it in order to deduce properties of the coassembly map
$\mathrm{coass}_{X, \Z_{can,min}}$.
The map $\delta$ in \eqref{adfasdfdssad} is also induced by the forget control map $\beta$ in \eqref{qewfqwedqewdewedq}.
Let $P_{U}(\Z_{can,min})$ in $\Z\mathbf{UBC}$ be the Rips complex of $\Z_{can,min}$ of size $U$, where $U$ is any invariant entourage of $\Z_{can,min}$.
By definition we have $$F^{\infty}(\Z_{can,min})\simeq \colim_{U\in \cC_{\Z_{can,min}}} \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(P_{U}(\Z_{can,min})))\ .$$
If
$U_{1}=\{(n,m)\mid |n-m|\le 1 \}$, then we have a natural identification
$\R\cong P_{U_{1}}(\Z_{can,min})$. We then observe that the maps
$\R\to P_{U_{r}}(\Z_{can,min})$ are homotopy equivalences in $\Z\mathbf{UBC}$ for all $r$ in $[1,\infty)$, where $U_{r}:=\{(n,m)\mid |n-m|\le r \}$. This implies the equivalence $\mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\R))\stackrel{\simeq}{\to} F^{\infty}(\Z_{can,min})$.
The proof of Lemma \ref{qirogfqrfewfewfqwef}.\ref{wtihgowgergwefw} actually shows that $\mathrm{coass}_{X,{\mathcal{O}}^{\infty}(\R)}$ is an equivalence
provided $\mathrm{coass}_{X,\Z_{min,min}}$ is an equivalence.
But this is the case by Proposition \ref{wtiohgwtrwerrefwre}.
We conclude that $\mathrm{coass}_{X,F^{\infty}(\Z_{can,min})}$ is an equivalence.
Note that we used here the assumptions that $E^{\Z}$ is strongly additive and that $X$ is discrete.
In order to show that $\lim_{B\Z'}\delta$ is an equivalence it suffices to show that the underlying map $\delta$ is one. The isomorphism \eqref{rwewfwefewfrwf} induces the vertical equivalences in the commutative square \begin{equation}\label{fqwefqwefwedqdqde}
\xymatrix{ \Sigma^{-1}E_{X}^{\Z}( F^{\infty}( \Z_{can,min})\otimes \Z_{min,min})\ar[r]^{\delta}\ar[d]^{\simeq}& \Sigma^{-1}E_{X}^{\Z}( \Z_{can,min}\otimes \Z_{min,min})\ar[d]^{\simeq}\\
\Sigma^{-1}E_{X}^{\Z}( F^{\infty}(\mathrm{Res}^{\Z}( \Z_{can,min}))\otimes \Z_{min,min})
\ar[r]^{\delta'}& \Sigma^{-1}E_{X}^{\Z}( \mathrm{Res}^{\Z}( \Z_{can,min})\otimes \Z_{min,min})}\ .
\end{equation}
The map $\delta'$ is the coarse Baum-Connes assembly map from \cite[Def. 9.7]{ass}
for the non-equivariant coarse homology theory $E^{\Z}_{X}(-\otimes \Z_{min,min})$ which is strong since $E^{\Z}$ was assumed to be strong.
Since $\mathrm{Res}^{\Z}(\Z_{can,min})$ has finite asymptotic dimension this coarse Baum-Connes assembly map is an equivalence by \cite[Thm. 10.4]{ass}. We conclude that the morphism $\lim_{B\Z'}\delta$ in \eqref{adfasdfdssad}
is an equivalence.
We now have shown that the morphisms $\lim_{B\Z'}\delta $ and $
\mathrm{coass}_{X,F^{\infty}(\Z_{can,min})} $ in \eqref{adfasdfdssad} are equivalences.
Hence $\mathrm{coass}_{X, \Z_{can,min}}$ is an equivalence if and only if
$\gamma_{E^{\Z}_{X}}$ is an equivalence.
\end{proof}
\begin{prop}\label{werijgoerwgrewferfwf}
The Davis-L\"uck assembly map
$\mu^{DL}_{HE_{X,c}^{\Z}}$ is equivalent to the forget control map
$\gamma_{E_{X}^{\Z}}$ in \eqref{fwerfwerferwfrefw}.
\end{prop}
\begin{proof}
Recall that $E^{\Z}_{X,c}$ is the continuous approximation of $E^{\Z}_{X}$, see \eqref{afdasdfqwefq}.
\begin{lem}\label{weiorgwergerfwf}
The forget control map $\gamma_{E_{X}^{\Z}}$ is equivalent to $\gamma_{E_{X,c}^{\Z}}$.
\end{lem}
\begin{proof}
The full subcategory $\cC{\mathcal{E}}$ of $\Z\Sp\cX$ of objects $W$ such that
$E^{\Z}_{X,c}(W)\to E^{\Z}_{X}(W)$ is an equivalence is localizing. It contains the objects $\mathrm{Yo}^{s}(Y)$ for all $Y$ in $\Z\mathbf{BC}$ with the minimal bornology, so in particular $\mathrm{Yo}^{s}(\Z_{can,min})$
and $\mathrm{Yo}^{s}(\Z_{min,min})$.
The Mayer-Vietoris sequence \eqref{qwefqwedqewdqdqwed} and the equivalence
$\mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\Z))\simeq \Sigma \mathrm{Yo}^{s}(\Z_{min,min})$
\cite[Prop. 9.33 \& 9.35]{equicoarse}
imply that
$\mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\R))$ belongs to $\cC{\mathcal{E}}$. Finally, in the proof of Proposition \ref{wreigjwerijogweferfwefr} we have seen that
$F^{\infty}(\Z_{can,min})\simeq \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(\R))$.
Therefore
$F^{\infty}(\Z_{can,min})$ belongs to $\cC{\mathcal{E}}$.
The natural transformation
$E^{\Z}_{X,c}\to E^{\Z}_{X}$ induces a commutative square
$$\xymatrix{E^{\Z}_{X,c}(F^{\infty}(\Z_{can,min}))\ar[r]^{\gamma_{E_{X,c}^{\Z}}}\ar[d]&\Sigma E^{\Z}_{X,c}(\Z_{can,min})\ar[d]\\
E^{\Z}_{X}(F^{\infty}(\Z_{can,min}))
\ar[r]^{\gamma_{E_{X}^{\Z}}}&
\Sigma E^{\Z}_{X}(\Z_{can,min})}\ .$$
The observations above imply that the vertical morphisms are equivalences.
\end{proof}
By \cite[Cor. 8.25]{desc} applied to the continuous coarse homology theory $E^{\Z}_{X,c}$
the Davis-L\"uck assembly map
$\mu^{DL}_{HE_{X,c}^{\Z}}$ is equivalent to
the forget control map \begin{equation}\label{rfwreffewrferf}
E_{X,c}^{\Z}( F^{\infty}(\Z_{can,min})\otimes \Z_{max,max})\to \Sigma E_{X,c}^{\Z}( \Z_{can,min}\otimes \Z_{max,max})
\end{equation}
induced by the map $\beta$ in \eqref{qewfqwedqewdewedq}. In the following we explain that
the additional factor $\Z_{max,max}$ can be dropped. First of all, since $\Z$ is torsion free, the projection
$F^{\infty}(\Z_{can,min})\otimes \Z_{max,max}\to F^{\infty}(\Z_{can,min})$ is an equivalence. Furthermore, the projection
$\Z_{can,min}\otimes \Z_{max,max}\to \Z_{can,min}$ is even a coarse equivalence.
Thus the map in \eqref{rfwreffewrferf} is equivalent to $\gamma_{E_{X,c}^{\Z}}$. By Lemma \ref{weiorgwergerfwf}
we conclude that
$\mu^{DL}_{HE_{X,c}^{\Z}}$ is equivalent to $\gamma_{E_{X}^{\Z}}$.
\end{proof}
Theorem \ref{weijtgowegferwerwgwergre} now follows by combining Propositions \ref{qewfqwefdewqdewdrerqwerereeredewqdeeeee}, \ref{wtiohgwtrwerrefwre}, \ref{wreigjwerijogweferfwefr} and \ref{werijgoerwgrewferfwf}.
\end{proof}
We call a morphism in $\Z\Sp\cX$ a local equivalence if it is sent to an
equivalence by the coarse homology theories
$E^{\Z}(-\otimes \Z_{min,min})$, $E^{\Z}(-\otimes \Z_{can,min})$, and
$E^{\Z}(-\otimes \Z_{can,min}\otimes \Z_{min,min} )$ appearing in the coarse PV-square. Note that in view of Remark \ref{wrgijowergwregrwef} below we could drop the last entry of this list without changing the notion of a local equivalence. \begin{ddd}\label{wgkoowegkorpwfr} We let $$\ell:\Z\Sp\cX\to \Z\Sp\cX_{\loc}$$ be the localization at
the local equivalences and set $\mathrm{Yo}^{s}_{\loc}:=\ell \circ \mathrm{Yo}^{s}$. \end{ddd}
Note that the notion of a local equivalence and $\ell:\Z\Sp\cX\to \Z\Sp\cX_{\loc}$
depend on the choice of $E^{\Z}$ though this is not indicated in the notation.
The three coarse homology theories listed above have colimit-preserving factorizations $\Z \Sp\cX_{\loc}\to \mathbf{M}$ which will be denoted by the same symbols. The property that the coarse PV-square \eqref{qwdqwewqwdqwedefe1eee} associated to $E^{\Z}$ and $X$ is cartesian only depends on the class $\mathrm{Yo}^{s}_{\loc}(X)$.
\begin{ddd} We let $\mathbf{PV}_{E^{\Z}}$ denote the full subcategory of $ \Z\Sp\cX_{\loc}$
of objects $X$ for which the coarse PV-square \eqref{qwdqwewqwdqwedefe1eee} is cartesian.
\end{ddd}
By construction $\mathbf{PV}_{E^{\Z}}$ is a localizing subcategory of $\Z\Sp\cX_{\loc}$.
\begin{ddd} \label{eiojgwergrefwerf}We define
$\Z\Sp\cX_{\loc}\langle DL \rangle$ as the localizing subcategory
generated by $\mathrm{Yo}^{s}_{\loc}(Y_{min,min})$ for all $\Z$-sets $Y$ such that
$\mu^{DL}_{HE^{\Z}_{Y_{min,min},c}}$ is an equivalence.
\end{ddd}
Theorem \ref{weijtgowegferwerwgwergre} now has the following immediate consequence.
\begin{kor}\label{wrthiojwergwergwerrwfg}
If $E^{\Z}$ is strong and strongly additive, then
$$\Z\Sp\cX_{\loc}\langle DL \rangle\subseteq \mathbf{PV}_{E^{\Z}}\ .$$
\end{kor}
\begin{prop} If $E^{\Z}$ is strong and strongly additive, then
$\mathrm{Yo}^{s}_{\loc}(\Z_{min,min})\in \Z\Sp\cX_{\loc}\langle DL \rangle$.
\end{prop}
\begin{proof}
The map $\delta$ in \eqref{fqwefqwefwedqdqde} is the forget-control map
$\gamma_{E^{\Z}_{\Z_{min,min}}}$, and it has been shown in the proof of Proposition \ref{wreigjwerijogweferfwefr} that it is an equivalence.
We now apply Proposition \ref{werijgoerwgrewferfwf}.
\end{proof}
\begin{ex}\label{wrthokwegfwerwrf}
In this example we show that it can happen that $\mathrm{Yo}^{s}_{\loc}(*)\not\in \mathbf{PV}_{E^{\Z}}$.
Let ${\mathbf{A}}$ be an additive category with strict $\Z$-action and let $E^{\Z}:=K{\mathbf{A}}\cX^{\Z}$ denote the coarse algebraic $K$-homology \cite[Def. 8.8]{equicoarse}.
This functor is strong \cite[Prop. 8.18]{equicoarse}, continuous \cite[Prop. 8.17]{equicoarse}, strongly additive \cite[Prop. 8.19]{equicoarse} and has transfers \cite[Thm. 1.4]{coarsetrans}.
We have an obvious equivalence $\mu^{DL}_{HK{\mathbf{A}}^{\Z}_{*,c}}\simeq \mu^{DL}_{HK{\mathbf{A}}^{\Z}}$.
The Davis-L\"uck assembly map $\mu^{DL}_{HK{\mathbf{A}}^{\Z}}$
is known to be split-injective (see e.g. \cite{desc}, though this is not the
original reference), and its cofibre can be expressed in terms of so-called nil-terms, see e.g. \cite{L_ck_2016}.
So $\mathrm{Yo}^{s}_{\loc}(*)\in \mathbf{PV}_{K{\mathbf{A}}\cX^{\Z}}$ if and only if these nil-terms vanish.
If $R$ is a non-regular ring and ${\mathbf{A}}=\Mod(R)$ with the
trivial $\Z$-action, then the nil-terms can be non-trivial and hence
$\mathrm{Yo}^{s}_{\loc}(*)\not\in \mathbf{PV}_{K{\mathbf{A}}\cX^{\Z}}$. \hspace*{\fill}$\qed$
\end{ex}
We finally calculate the boundary map of the coarse PV-sequence \eqref{weqfoijoiwqjdoieewdqwedqewd} explicitly.
Let $X$ be in $\Z\mathbf{BC}$.
Recall that $\Z'$ acts on $ E_{X}^{\Z}( \Z_{min,min})$ by functoriality via its action on $\Z_{min,min}$ by translations. \begin{prop}\label{werkgjowergwerfwerferfewrfwerf} If the coassembly maps $\mathrm{coass}_{X, \Z_{min,min}}$ and $\mathrm{coass}_{X,\Z_{can,min}}$ are equivalences, then the coarse $PV$-sequence is equivalent to a fibre sequence
$$ \Sigma^{-1}E_{X}^{\Z}(\Z_{can,min} )\to E_{X}^{\Z}( \Z_{min,min}) \stackrel{1-\sigma}{\to} E_{X}^{\Z}( \Z_{min,min})$$
\end{prop}
\begin{proof}
If $\mathrm{coass}_{X, \Z_{min,min}}$ and $\mathrm{coass}_{X,\Z_{can,min}}$ are equivalences, then in view of Proposition \ref{weriojtgowtgerf}, \eqref{qewfqwefdewqdewdrerqwerereeredewqdeeeee} and \eqref{ewqfqwefwedqwedqwdewd} the coarse PV-sequence \eqref{weqfoijoiwqjdoieewdqwedqewd} is equivalent
to a fibre sequence
$$ E_{X}^{\Z}(\Z_{can,min} )\to E_{X}^{\Z}(\Z_{can,min}\otimes \Z_{min,min}) \stackrel{1-\sigma}{\to} E_{X}^{\Z}(\Z_{can,min}\otimes \Z_{min,min}) \ .$$ The isomorphism \eqref{rwewfwefewfrwf} yields the first equivalence in the chain of equivalences
\begin{equation}\label{gwegwergrefrew}E_{X}^{\Z}(\Z_{can,min}\otimes \Z_{min,min}) \simeq E_{X}^{\Z}(\mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min}) \simeq \Sigma E_{X}^{\Z}( \Z_{min,min}) \end{equation}
of $\Z'$-objects in $\mathbf{M}$.
In order to see the second equivalence note that after applying the isomorphism \eqref{rwewfwefewfrwf}
the group $\Z'$ acts diagonally on $\mathrm{Res}^{\Z}(\Z_{can,min})\otimes \Z_{min,min}$. But arguing as in the proof of Lemma \ref{tgot0grefwefefewf} we can replace the $\Z'$-action on the factor $\mathrm{Res}^{\Z}(\Z_{can,min})$ by the trivial action. \end{proof}
If $E^{\Z}$ is strong and strongly additive, $\mathrm{Yo}^{s}_{\loc}(X)$ is in $\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$, and $\mu^{DL}_{HE^{\Z}_{X,c}}$ is an equivalence, then the assumptions of Proposition \ref{werkgjowergwerfwerferfewrfwerf} are satisfied.
\begin{rem}\label{wrgijowergwregrwef}
Using the equivalence \eqref{gwegwergrefrew} we observe that a morphism in $\Z\Sp\cX$ is a local equivalence if and only if it is sent to an equivalence by $E^{\Z}(-\otimes \Z_{min,min})$ and $E^{\Z}(-\otimes \Z_{can,min})$. \hspace*{\fill}$\qed$
\end{rem}
\section{Topological coarse $K$-homology}
Let $\bC$ be a $C^{*}$-category with a strict $ \Z$-action which admits all orthogonal AV-sums \cite[Def. 7.1]{cank}. Then we can consider the spectrum-valued strong coarse homology theory
$$ K\cX^{\Z}_{\bC}:\Z\mathbf{BC}\to \Sp$$
from \cite[Def. 6.1.2]{coarsek} for the homological functor $\mathrm{K}^{C^{*}\mathrm{Cat}}:C^{*}\mathbf{Cat}^{\mathrm{nu}}\to \Sp$. In order to simplify the notation, in the present paper we omit the subscript $c$ appearing in this reference which indicates continuity.
The coarse homology theory $K\cX^{\Z}_{\bC}$ is continuous \cite[Thm. 6.3]{coarsek}, strong \cite[Prop. 6.5]{coarsek}, strongly additive
\cite[Thm. 11.1]{coarsek} and admits transfers by \cite[Thm. 9.7]{coarsek}.
So we can take $ K\cX^{\Z}_{\bC}$ as an example for $E^{\Z}$ in the preceding sections and define the associated stable $\infty$-category $\Z\Sp\cX_{\loc}$ as in Definition \ref{wgkoowegkorpwfr}.
\begin{ddd}We let $\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ denote the localizing subcategory of $\Z\Sp\cX_{\loc}$ generated by the objects $\mathrm{Yo}^{s}_{\loc}(Y_{min,min})$ for all $Y$ in $\Z\Set$. \end{ddd}Recall Definition \ref{eiojgwergrefwerf} of $\Z\Sp\cX_{\loc}\langle DL \rangle$.
\begin{theorem}\label{weitgowergerfrwferfw}
We have
$\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle\subseteq \Z\Sp\cX_{\loc}\langle DL \rangle$.
\end{theorem}
\begin{proof}
The main input is that the amenable group $\Z$ satisfies the Baum-Connes conjecture with coefficients.
Using the main result of \cite{kranz}, see also \cite[Thm. 1.7]{bel-paschke}, on the level of homotopy groups the Davis-L\"uck assembly map
$\mu^{DL}_{HK\cX^{\Z}_{\bC}}$ from \eqref{wefqwedqwdeewdeqwd} is isomorphic to the Baum-Connes assembly map \eqref{eqwfwefwdwe}
with coefficients in the free $\Z$-$C^{*}$-algebra $A^{f}(\bC)$ generated by $\bC$. The latter was introduced in \cite[Def. 3.7]{joachimcat}. Consequently, we know that $\mu^{DL}_{HK\cX_{\bC}^{\Z}}$ is an equivalence.
Let $Y$ be a $\Z$-set.
We form the $C^{*}$-category with $\Z$-action $\bV_{\bC}^{ \Z}(Y_{min,min}\otimes \Z_{min,min})$ by specializing \cite[Def. 4.19.2]{coarsek} (note that in the present paper we use a different notation). The
$\Z$-action is induced by functoriality from the right action on $\Z_{min,min}$.
As explained in \cite[Sec. 4]{coarsek} the objects of $ \bV_{\bC}^{ \Z}(Y_{min,min}\otimes \Z_{min,min})$
are triples $(C,\rho,\mu)$, where $(C,\rho)$ is a $\Z$-object in the multiplier category $ \mathbf{M}\bC$ of $\bC$ (see \cite[Def. 3.1]{coarsek}), and $\mu$ is an invariant finitely additive measure on $Y_{min,min}\otimes \Z_{min,min} $ with values in multiplier projections on $C$
such that $ \mu(\{(y,n)\})\in \bC$ for every point $(y,n)\in Y\times \Z$.
The morphisms of $\bV_{\bC}^{ \Z}(Y_{min,min}\otimes \Z_{min,min})$ are the invariant and $\diag(Y\times \Z)$-controlled multiplier morphisms.
We need an AV-sum completion $\bC_{Y}$ of $\bV_{\bC}^{ \Z}(Y_{min,min}\otimes \Z_{min,min})$, see \cite[Def. 7.1]{cank} for the notion of an AV-sum.
It turns out to be useful to work with an explicit model for $\bC_{Y}$ given as follows.
The objects of $\bC_{Y}$ are again triples $(C,\rho,\mu)$ as above, but we replace the condition $\mu(\{(y,n)\})\in \bC$ (which prevents the existence of infinite AV-sums in $\bV_{\bC}^{ \Z}(Y_{min,min}\otimes \Z_{min,min})$) by the more general condition that the images of $\mu(\{(y,n)\})$ for all $(y,n)$ in $Y\times \Z$ are
isomorphic to AV-sums of families of unital objects of $\bC$ (see \cite[Def. 2.14]{cank}).
Morphisms $A: (C,\rho,\mu)\to (C',\rho',\mu')$ in this category are invariant $\diag(Y\times \Z)$-controlled
morphisms $A:C\to C'$ in $\mathbf{M}\bC$ such that $A\mu(\{(y,n)\})\in \bC$ for all $(y,n)$ in $Y\times \Z$.
Note that $\bC_{Y}$ contains $\bV_{\bC}^{ \Z}(Y_{min,min}\otimes \Z_{min,min})$
as the full subcategory of unital objects.
We let $K\cX^{\Z}_{\bC,Y_{min,min},c}$ be the continuous approximation (see Definition \ref{wefqwedqwdeewdeqwd}) of the coarse homology theory $K\cX^{\Z}_{\bC}(-\otimes Y_{min,min})$.
\begin{prop}\label{wiothgerththerh}
We have an equivalence
$ K\cX_{\bC,Y_{min,min},c }^{\Z}\simeq K\cX_{\bC_{Y}}^{\Z}$ of $\Z$-equivariant coarse homology theories.
\end{prop}
\begin{proof}
We consider the inclusion of groups $\Z\to \Z\times \Z$ given by $n\mapsto (n,0)$.
We let $\Z\times \Z$ act on $\bC$ via the projection onto the first factor.
Identifying $\Ind_{\Z}^{\Z\times \Z}(-)\simeq - \otimes \Z_{min,min}$
we have the induction equivalence \cite[10.5.1]{coarsek}
\begin{equation}\label{qefqwedewdqwedqwd}
K\cX^{\Z}_{\bC,Y_{min,min}}( -) \stackrel{\simeq}{\to} K\cX^{ \Z\times \Z}_{\bC}(Y_{min,min}\otimes (-)\otimes \Z_{min,min})\ .
\end{equation}
On the r.h.s. the first copy of $\Z$ acts diagonally on $Y\times \Z\times (-)$, while the second copy only acts on $\Z_{min,min}$.
We have a natural isomorphism
$$Y\times (-)\times \Z\stackrel{\cong}{\to}\ Y\times \Z\times (-) \ , \quad (x,- ,n)\mapsto (x,n,n^{-1}-)$$
of functors from $\Z\mathbf{BC}$ to $(\Z\times \Z)\mathbf{BC}$. On the target the first factor of $\Z\times \Z$ only acts on $ Y\times \Z$, while the second factor now acts diagonally on $ \Z\times (-)$. We get an equivalence
\begin{equation}\label{eqwfwedwqedewdqewd}
K\cX^{ \Z\times \Z}_{\bC}(Y_{min,min}\otimes (-)\otimes \Z_{min,min})\simeq K\cX^{ \Z\times \Z}_{\bC}(Y_{min,min}\otimes \Z_{min,min}\otimes (-))\ .
\end{equation} By definition \cite[Def. 6.1]{coarsek} of the coarse $K$-homology functor we have an equivalence
$$K\cX^{ \Z\times \Z}_{\bC}(Y_{min,min}\otimes \Z_{min,min}\otimes (-))\simeq \mathrm{K}^{C^{*}\mathrm{Cat}} (\bV_{\bC}^{ \Z\times \Z}(Y_{min,min}\otimes \Z_{min,min}\otimes (-)))\ .$$
We now construct a natural transformation
\begin{equation}\label{dafadfdsfadfafafasdfadfadsf}
\bV_{\bC}^{ \Z\times \Z}(Y_{min,min}\otimes \Z_{min,min}\otimes Z)\to \bV^{\Z}_{\bC_{Y}}(Z)\ ,
\end{equation}
where
$Z$ runs over $\Z\mathbf{BC}_{min}$. Let
$(C,\rho,\mu)$ be an object of $\bV_{\bC}^{ \Z\times \Z}(Y_{min,min}\otimes \Z_{min,min}\otimes Z)$. The functor \eqref{dafadfdsfadfafafasdfadfadsf} sends this object to an
object $((C,\rho_{|\Z\times 1},\pr_{|Y\times \Z,*}\mu),\sigma,\kappa)$. We first observe that $(C,\rho_{|\Z\times 1},\pr_{|Y\times \Z,*}\mu)$ is an object of $\bC_{Y}$. We define the measure $\kappa$ by $\kappa(\{z\})=\mu(Y\times \Z\times \{z\})$. Note that $\kappa(\{z\})$ is an endomorphism of
$(C,\rho_{|\Z\times 1},\pr_{|Y\times \Z,*}\mu)$ in $\bC_{Y}$.
We further set $\sigma:=\rho_{|1\times \Z}$.
Using that $Z$ has the minimal bornology we then observe that
$((C,\rho_{|\Z\times 1},\pr_{|Y\times \Z,*}\mu),\sigma,\kappa)$ indeed belongs to $\bV^{\Z}_{\bC_{Y}}(Z)$.
On morphisms the functor \eqref{dafadfdsfadfafafasdfadfadsf} is given by the identity.
Naturality in $Z$ is obvious. The transformation
identifies invariant and controlled morphisms on both sides.
We conclude that the transformation is fully faithful.
We finally observe that it is essentially surjective.
Let $((C,\tilde \rho,\tilde \mu),\sigma,\kappa)$ be an object of $\bV^{\Z}_{\bC_{Y}}(Z)$.
We define
$\mu$ such that $ \mu(\{(y,n,z)\})=\tilde \mu(\{(y,n)\})\kappa(\{z\})$ and
$\rho$ such that $\rho_{(n,m)}=\tilde \rho_{n}\sigma_{m}$.
Then $(C,\rho,\mu)$ is a preimage in $\bV_{\bC}^{ \Z\times \Z}(Y_{min,min}\otimes \Z_{min,min}\otimes Z)$.
Applying $\mathrm{K}^{C^{*}\mathrm{Cat}}$ to the natural equivalence \eqref{dafadfdsfadfafafasdfadfadsf}
we get a natural equivalence \begin{equation}\label{adfasq2fasdfdsf}
\mathrm{K}^{C^{*}\mathrm{Cat}} (\bV_{\bC}^{ \Z\times \Z}(Y_{min,min}\otimes \Z_{min,min}\otimes Z))\stackrel{\simeq}{\to} K\cX_{\bC_{Y}}^{\Z}(Z)
\end{equation}
for $Z$ in $\Z\mathbf{BC}_{min}$. Composing
\eqref{qefqwedewdqwedqwd}, \eqref{eqwfwedwqedewdqewd} and \eqref{adfasq2fasdfdsf} we get the equivalence \begin{equation}\label{qewfewdqdewdqewd}
K\cX^{\Z}_{\bC,Y_{min,min}} \simeq K\cX_{\bC_{Y}}^{\Z}
\end{equation}
of functors on $\Z\mathbf{BC}_{min}$. The desired equivalence is now given by
$$K\cX^{\Z}_{\bC,Y_{min,min},c} \stackrel{\eqref{afdasdfqwefq}}{\simeq} i_{!}i^{*}K\cX^{\Z}_{\bC,Y_{min,min}} \stackrel{\eqref{qewfewdqdewdqewd}}{\simeq}
i_{!}i^{*} K\cX_{\bC_{Y}}^{\Z} \stackrel{\simeq}{\to} K\cX_{\bC_{Y}}^{\Z} $$
where the last morphism is an equivalence since $ K\cX_{\bC_{Y}}^{\Z} $ is already continuous.
\end{proof}
We can now finish the proof of Theorem \ref{weitgowergerfrwferfw}.
We have equivalences of morphisms
$$\mu^{DL}_{ HK\cX^{\Z}_{\bC,Y_{min,min},c}} \stackrel{ {\scriptsize \cite[Cor. 8.25]{desc} }}{\simeq} \gamma_{K\cX^{\Z}_{\bC,Y_{min,min},c}} \stackrel{Prop. \ref{wiothgerththerh}}{\simeq}
\gamma_{K\cX^{\Z}_{\bC_{Y} }} \stackrel{{\scriptsize \cite[Cor. 8.25]{desc} } }{\simeq} \mu^{DL}_{HK\cX^{\Z}_{\bC_{Y}}}\ .$$
At the beginning of this proof we have seen that $ \mu^{DL}_{HK\cX^{\Z}_{\bC_{Y} }}$ is an equivalence.
We conclude that
$\mu^{DL}_{ HK\cX^{\Z}_{\bC,Y_{min,min},c}}$ is an equivalence, too.
It follows that $\Z\Sp\cX_{\loc}\langle DL\rangle$ contains $\mathrm{Yo}^{s}_{\loc}(Y_{min,min})$ for all $\Z$-sets $Y$, and this implies the assertion of the theorem.
\end{proof}
Corollary \ref{wrthiojwergwergwerrwfg} now implies:
\begin{kor}\label{qerigojoqrfewfqewfqf}
We have
$\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle \subseteq \mathbf{PV}_{K\mathbf{X}_{\bC}^{\Z}}$.
\end{kor}
Let $A$ be a unital $C^{*}$-algebra with an action of $\Z$. We consider the $C^{*}$-category $\mathbf{Hilb}_{c}(A)$ of Hilbert $A$-modules and compact operators which has the induced $\Z$-action \cite[Ex. 2.9]{cank}.
Then the square \eqref{qwdqwerrwqwdqweddwdwdwd1eee} is cartesian by
Corollary \ref{qerigojoqrfewfqewfqf}. We let $\Z'$ denote the group which acts by automorphisms on $\mathrm{Res}^{\Z}(A)$ (via the identification of $\Z'$ with the original group $\Z$)
and on $\Z_{min,min}$ by translations.
We let $\sigma$ denote the action by functoriality of the generator of $\Z'$ on various derived objects.
\begin{prop}\label{eorkjgwegreegrwegr9}
We have equivalences
\begin{enumerate}
\item \label{werjkngwerge}$K\cX^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{min,min})\simeq K(\mathrm{Res}^{\Z}(A))$
\item \label{werjkngwerge1} $K\cX^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{can,min})\simeq K(A\rtimes \Z)$
\item \label{werjkngwerge2}$K\cX^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{can,min}\otimes \Z_{min,min}) \simeq \Sigma K(\mathrm{Res}^{\Z}(A))$.
\end{enumerate}
Furthermore, the coarse PV-sequence
is equivalent to a fibre sequence
$$ \Sigma^{-1}K(A\rtimes \Z) \to K(\mathrm{Res}^{\Z}(A))\stackrel{1-\sigma}{\to} K(\mathrm{Res}^{\Z}(A))\ .$$
\end{prop}
\begin{proof}
In order to prepare the argument for the second assertion we will actually show that
there is an equivalence $$K\cX^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{min,min})\simeq K(\mathrm{Res}^{\Z}(A))$$ of spectra with an action of $\Z'$.
Using that $\Z_{min,min}\cong \Ind_{1}^{\Z}(*)$, by \cite[Cor. 10.5.2]{coarsek} we have an equivalence \begin{equation}\label{qwfqqewfeqwfewddq}
K\cX^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{min,min})\stackrel{\simeq}{\to} K\cX_{\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A))}(*)\ .
\end{equation}
Unfolding \cite[Def. 4.19]{coarsek} we get
an isomorphism $ \bV_{\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A))}(*)\cong\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A))^{u} $, where $(-)^{u}$ denotes the operation of taking the full subcategory of unital objects. We therefore get an equivalence
\begin{equation}\label{fewqwfqewd}
K\cX_{\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A))}(*)\simeq \mathrm{K}^{C^{*}\mathrm{Cat}}(\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A)^{u}))\ .
\end{equation}
We consider $A$ as a $C^{*}$-category ${\mathbf{A}}$ with a single object. We then have a fully faithful functor
$\psi:{\mathbf{A}}\to \mathrm{Res}^{\Z}( \mathbf{Hilb}_{c}(A)^{u})$ which sends the unique object of ${\mathbf{A}}$ to the Hilbert $A$-module given by $A$ with the right $A$-multiplication and the scalar product $\langle a,a'\rangle:=a^{*}a'$.
Since $A$ is assumed to be unital the latter object of $\mathbf{Hilb}_{c}(A)$ is indeed unital. The functor
${\mathbf{A}}\to \mathrm{Res}^{\Z}( \mathbf{Hilb}_{c}(A)^{u})$
is a Morita equivalence \cite[18.15]{cank}.
Since $\mathrm{K}^{C^{*}\mathrm{Cat}}$ is Morita invariant and extends the $K$-theory functor for $C^{*}$-algebras
we get the equivalence \begin{equation}\label{adfadsfdfasff}
K(\mathrm{Res}^{\Z}(A))\simeq \mathrm{K}^{C^{*}\mathrm{Cat}}({\mathbf{A}}) \stackrel{\simeq}{\to}\mathrm{K}^{C^{*}\mathrm{Cat}}(\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A)^{u}))\ .
\end{equation}
The equivalence in Assertion \ref{werjkngwerge} is the composition of the equivalences \eqref{qwfqqewfeqwfewddq}, \eqref{fewqwfqewd} and \eqref{adfadsfdfasff}.
We let $\sigma$ denote the generator of $\Z'$. Then for $n$ in $\Z_{min,min}$ we have $\sigma(n)=n+1$. We will show that the composition
$\eqref{fewqwfqewd}\circ \eqref{qwfqqewfeqwfewddq}$ and the equivalence
\eqref{adfadsfdfasff} preserve the action of $\sigma$ up to equivalence.
Using the explicit description of
the equivalence \eqref{qwfqqewfeqwfewddq} given in the proof of \cite[Prop. 10.1]{coarsek}
the equivalence $\eqref{fewqwfqewd}\circ \eqref{qwfqqewfeqwfewddq}$ is induced by the functor
$$\phi:\bV^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{min,min})\to \bV_{\mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A))}(*)\cong \mathrm{Res}^{\Z}(\mathbf{Hilb}_{c}(A))^{u}\ .$$
The functor $\phi$
sends the object $(C,\rho,\mu)$ of $\bV^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{min,min})$ to the submodule $C(0):=\mu(\{0\})C$ in $\mathbf{Hilb}_{c}(A)^{u}$.
In contrast to the general case considered in \cite[Prop. 10.1]{coarsek}
here we can take this preferred image of the projection $\mu(\{0\})$.
The functor $\phi$ furthermore sends a morphism $B:(C,\rho,\mu)\to (C',\rho',\mu')$ in $\bV^{\Z}_{\mathbf{Hilb}_{c}(A)}(\Z_{min,min})$ to
$\mu'(\{0\})B\mu(\{0\}):C(0)\to C'(0)$.
Note that $$\phi(\sigma(C,\rho,\mu))=\phi( (C,\rho,\sigma_{*}\mu)) =(\sigma_{*}\mu)(\{0\})C=\mu(\{-1\})C=:C(-1)\ .$$
Using the invariance of $\mu$ we see that the unitary multiplier $\rho_{\sigma}$ restricts to a unitary multiplier
$\rho_{\sigma,-1,0}:C(-1) \to \sigma C(0)$.
We can therefore define a natural unitary isomorphism
$v:\phi\circ \sigma\to \sigma\circ \phi$ such that its evaluation at $(C,\rho,\mu)$ is given by $\rho_{\sigma,-1,0}$. Since $\mathrm{K}^{C^{*}\mathrm{Cat}}$ sends unitarily equivalent functors to equivalent maps \cite[Lem. 17.11]{cank} we conclude that $\eqref{fewqwfqewd}\circ \eqref{qwfqqewfeqwfewddq}$ preserves the action of $\sigma$ up to equivalence.
In order to show that \eqref{adfadsfdfasff} commutes with $\sigma$ up to equivalence we construct a natural unitary isomorphism
$w:\psi\circ \sigma\to \sigma \circ \psi$. The evaluation of $w$ at the unique object of ${\mathbf{A}}$ is the unitary isomorphism
${}^{\sigma}(-):A\to \sigma A$ of Hilbert $A$-modules, where
$ \sigma A$ is the vector space $A$ with the new Hilbert $A$-module structure described in \cite[Ex. 2.10]{cank}, and $a\mapsto {}^{\sigma}a$
is the action of $\Z'$ on $ A $.
Assertion
\ref{werjkngwerge1} follows from \cite[Prop. 2.8.3]{coarsek} applied to $X=*$ and $G=\Z$.
Assertion \ref{werjkngwerge2} follows from Assertion \ref{werjkngwerge}
and the equivalence in
\eqref{gwegwergrefrew}.
Finally, the last assertion is a consequence of Proposition \ref{werkgjowergwerfwerferfewrfwerf}
and the observation that the equivalence in Assertion
\ref{werjkngwerge} preserves the $\sigma$-actions up to equivalence.
\end{proof}
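Unwinding the fibre sequence in Proposition \ref{eorkjgwegreegrwegr9} on homotopy groups recovers the classical Pimsner--Voiculescu long exact sequence. The following display is a sketch of this standard consequence; the identification of the maps is ours and is not spelled out above:
$$\cdots\to K_{n+1}(A\rtimes \Z)\to K_{n}(\mathrm{Res}^{\Z}(A))\stackrel{1-\sigma_{*}}{\to} K_{n}(\mathrm{Res}^{\Z}(A))\to K_{n}(A\rtimes \Z)\to K_{n-1}(\mathrm{Res}^{\Z}(A))\to\cdots\ .$$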
\section{What is in $\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ }
In this section we again consider the example $E^{\Z}=K\cX^{\Z}_{\bC}$.
By Corollary \ref{qerigojoqrfewfqewfqf}
we have
$\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle \subseteq \mathbf{PV}_{K\mathbf{X}_{\bC}^{\Z}}$. Hence we know that the coarse PV-square \eqref{qwdqwerrwqwdqweddwdwdwd1eee} is cartesian for discrete $X$. In the present section we show that
$\Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ contains the motives of many non-discrete $\Z$-bornological coarse spaces. The main result is the following theorem.
\begin{theorem}\label{werigosetrrgertgerwgw}Assume one of the following:
\begin{enumerate}
\item\label{ijtgoertgegertgwee} $X$ has weakly finite asymptotic dimension.
\item \label{ijtgoertgegertgwee1} $X$ has bounded geometry and the coarse Baum-Connes assembly maps $\mu_{K\cX^{\Z}_{ \bC}(-\otimes \Z_{min,min}),X}$ and
$\mu_{K\cX^{\Z}_{\bC}(-\otimes \Z_{can,min}),X}$ are equivalences.
\end{enumerate}
Then $\mathrm{Yo}^{s}_{\loc}(X)\in \Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ and
the coarse PV-square \eqref{qwdqwerrwqwdqweddwdwdwd1eee} is cartesian.\end{theorem}
\begin{proof}
By Corollary \ref{qerigojoqrfewfqewfqf} the first part of the assertion implies the second.
Considering a bornological coarse space as a $\Z$-bornological coarse space we get a functor $\mathrm{Res}_{\Z}:\mathbf{BC}\to \Z\mathbf{BC}$. A morphism in $ \Sp\cX$ is called a local equivalence
if it is sent to an equivalence by the functors $K\cX^{\Z}_{ \bC}(-\otimes \Z_{min,min})$ and $K\cX_{ \bC}(-\otimes \Z_{min,min})$
which are considered as non-equivariant homology theories.
As in the $\Z$-equivariant case we let $\ell:\Sp\cX\to \Sp\cX_{\loc}$ denote the localization at the local equivalences.
We get a commutative diagram
$$\xymatrix{\ar@/^-1cm/[dd]_{\mathrm{Yo}^{s}_{\loc}}\mathbf{BC}\ar[r]^{\mathrm{Res}_{\Z}}\ar[d]^{\mathrm{Yo}^{s}}&\ar@/^1cm/[dd]^{\mathrm{Yo}^{s}_{\loc}}\Z\mathbf{BC}\ar[d]^{\mathrm{Yo}^{s}}\\\Sp\cX\ar[r]^{\mathrm{Res}_{\Z}}\ar[d]^{\ell}&\Z\Sp\cX\ar[d]^{\ell}\\\Sp\cX_{\loc}\ar[r]^{\mathrm{Res}_{\Z}}&\Z\Sp\cX_{\loc}}$$
If $X$ in $\mathbf{BC}$ satisfies Assumption \ref{werigosetrrgertgerwgw}.\ref{ijtgoertgegertgwee}, then $\mathrm{Yo}^{s}(X)\in \Sp\cX\langle \mathrm{disc}\rangle$ by \cite[Thm 5.59]{buen}. This immediately implies that
$\mathrm{Res}_{\Z}(\mathrm{Yo}^{s}_{\loc}(X))\in \Z\Sp\cX_{\loc}\langle \mathrm{disc}\rangle$.
We now assume that $X$ satisfies Assumption \ref{werigosetrrgertgerwgw}.\ref{ijtgoertgegertgwee1}. Since the assertion of the theorem only depends on the coarse equivalence class of $X$ we can assume that $X$
has strongly bounded geometry \cite[Def. 7.75]{buen}. Then for every coarse entourage $U$ of $X$
the Rips complex $P_{U}(X)$ \cite[Ex. 2.6]{ass} is a finite-dimensional simplicial complex. The spherical path metric induces a bornological coarse structure on the Rips complex such that $X\to P_{U}(X)$ is an equivalence of bornological coarse spaces.
Recall from \cite[Def. 9.7]{ass} that the universal coarse assembly map is induced by the morphism \begin{equation}\label{adfqewfqewfe}
\mu_{\mathrm{Yo}^{s},X}: \colim_{U\in \cC_{X}} \mathrm{Yo}^{s}({\mathcal{O}}^{\infty}(P_{U}(X)))\to \Sigma \mathrm{Yo}^{s}(X)
\end{equation}
derived from the cone sequence.
For any non-equivariant strong homology theory $F:\mathbf{BC}\to \mathbf{M}$ the coarse assembly map $\mu_{F,X}$ from \eqref{fqwefwqedwqedqwedqewd} is then given by
$\mu_{F,X}\simeq F( \mu_{\mathrm{Yo}^{s},X})$.
Assumption \ref{werigosetrrgertgerwgw}.\ref{ijtgoertgegertgwee1} together with Remark \ref{wrgijowergwregrwef}
imply that $ \mu_{\mathrm{Yo}^{s},X}$ is a local equivalence. Hence applying $\ell$ to \eqref{adfqewfqewfe} we get the equivalence
\begin{equation}\label{adfqewfqewfe1}
\mu_{\mathrm{Yo}^{s}_{\loc},X}: \colim_{U\in \cC_{X}} \mathrm{Yo}^{s}_{\loc}({\mathcal{O}}^{\infty}(P_{U}(X)))\stackrel{\simeq}{\to} \Sigma \mathrm{Yo}^{s}_{\loc}(X)
\end{equation}
in $\Sp\cX_{\loc}$.
The functor $ \mathrm{Yo}^{s}_{\loc}\circ {\mathcal{O}}^{\infty}$ is homotopy invariant and excisive for closed decompositions of uniform bornological coarse spaces. Furthermore
if $Y$ is a set, then $$\mathrm{Yo}^{s}_{\loc}( {\mathcal{O}}^{\infty}(Y_{min,min,disc}))\simeq \Sigma \mathrm{Yo}^{s}_{\loc}(Y_{min,min})\in \Sp\cX_{\loc}\langle \mathrm{disc}\rangle\ .$$
Using that $ \Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ is a thick subcategory of $\Sp\cX_{\loc}$ and $P_{U}(X)$ is finite-dimensional we
can now use a finite induction over the skeleta of $P_{U}(X)$
in order to conclude that $$\mathrm{Yo}^{s}_{\loc}({\mathcal{O}}^{\infty}(P_{U}(X)))\in \Sp\cX_{\loc}\langle \mathrm{disc}\rangle\ .$$
Since $ \Sp\cX_{\loc}\langle \mathrm{disc}\rangle$ is even localizing, the equivalence \eqref{adfqewfqewfe1} finally implies that $$ \mathrm{Yo}^{s}_{\loc}(X)\in \Sp\cX_{\loc}\langle \mathrm{disc}\rangle\ .$$
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Neural network models have significantly pushed forward performance on natural language processing benchmarks with the development of large-scale language model pre-training~\cite{Peters_ELMO@2018, radford2018gpt_improving, devlin2019bert, radford2019gpt2_language, liu2019roberta}. For example, on two semantically challenging tasks, Natural Language Inference (NLI) and Reading Comprehension (RC), the state-of-the-art results have reached or even surpassed the estimated human performance on certain benchmark datasets~\cite{wang2019glue, rajpurkar-etal-2016-squad, rajpurkar2018know}.
These astounding improvements, in turn, motivate a new trend of research to analyze what language understanding and reasoning skills are actually achieved, versus what is still missing within these current models.
Following this trend, numerous analysis approaches have been proposed to examine models' ability to capture different linguistic phenomena (e.g., named entities, syntax, lexical inference, etc.). Those studies are often conducted in 3 steps: (1) proposing assumptions about a certain ability of the model; (2) building analysis datasets by automatic generation or crowd-sourcing; (3) concluding models' ability using results on these analysis datasets.
Past analysis studies have led to many key discoveries about NLP models, such as over-stability~\cite{jia2017advsquad} and surface pattern overfitting~\cite{gururangan2018annotation}. Recently, however, \citet{mccoy2019berts} found that the results of different runs of BERT NLI models have large, non-negligible variances on the HANS~\cite{mccoy2019right} analysis dataset, contrasting sharply with their stable results on the standard validation set across multiple seeds. This finding raises concerns regarding the reliability of individual results reported on those datasets, the conclusions drawn from them, and the lack of reproducibility~\cite{makel2012replications}.
Thus, to help consolidate further developments, we conduct a deep investigation on model instability, showing how unstable the results are, and how such instability compromises the feedback loop between model analysis and model development.
We start our investigation from a thorough empirical study of several representative models on both NLI and RC. Overall, we make four worrisome observations in our experiments: (1) The final results of the same model with different random seeds on several analysis sets are of significantly \textbf{high variance}. The largest variance is more than 27 times that on the standard development set; (2) The large instability on certain datasets is \textbf{model-agnostic}: certain datasets have unstable results across different models; (3) The instability not only occurs at the final performance but exists \textbf{all along the training trajectory}, as shown in Fig.~\ref{fig:bert_training_trajectory}; (4) The results of the same model on analysis sets and on the standard development set have \textbf{low correlation}, making it hard to draw any constructive conclusion and questioning the effectiveness of the standard model-selection routine.
Next, in order to gain a better understanding of this instability issue, we explore theoretical explanations behind it. Through our theoretical analysis and empirical demonstration, we show that inter-example correlation within the dataset is the dominating factor causing this performance instability.
Specifically, the variance of model accuracy on the entire analysis set can be decomposed into two terms: (1) the sum of single-data variance (the variance caused by individual prediction randomness on each example), and (2) the sum of inter-data covariance (caused by the correlation between different predictions).
To understand the latter term better, consider the following case: if there are many examples correlated with each other in the evaluation set, then the change of prediction on one example will influence predictions on all the correlated examples, causing high variances in final accuracy.
We estimate these two terms with multiple runs of experiments and show that inter-data covariance contributes significantly more than single-data variance to final accuracy variance, indicating its major role in the cause of instability.
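As a toy illustration of this decomposition (all numbers below are invented, chosen only to make the effect visible), the following sketch simulates correctness indicators for many training runs in which a shared per-run latent factor correlates the predictions, and checks that the variance of the accuracy splits exactly into the two terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_examples = 2000, 50

# Correctness indicators for n_runs simulated training runs: a shared
# per-run latent factor induces correlation between examples, mimicking
# groups of examples whose predictions flip together.
latent = rng.normal(size=(n_runs, 1))
noise = rng.normal(size=(n_runs, n_examples))
correct = (0.8 * latent + 0.6 * noise > 0).astype(float)

acc = correct.mean(axis=1)           # per-run accuracy on the toy "dataset"
cov = np.cov(correct, rowvar=False)  # n_examples x n_examples sample covariance

sum_single_var = np.trace(cov)               # sum of single-example variances
sum_inter_cov = cov.sum() - sum_single_var   # sum of inter-example covariances

# Exact identity: Var(acc) = (sum of variances + sum of covariances) / N^2
lhs = acc.var(ddof=1)
rhs = (sum_single_var + sum_inter_cov) / n_examples**2
assert np.isclose(lhs, rhs)
```

In this toy setup the inter-example covariance term dominates the single-example variance term, mirroring the empirical finding described above for real analysis sets.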
Finally, in order for the continuous progress of the community to be built upon trustworthy and interpretable results, we provide initial suggestions on how to perceive the implication of this instability issue and how we should potentially handle it. For this, we encourage future research to: (1) when reporting means and variance over multiple runs, also report two decomposed variance terms (i.e., sum of single data variance and sum of inter-data covariance) for more interpretable results and fair comparison across models; (2) focus on designing models with better inductive and structural biases, and datasets with higher linguistic diversity.
Our contribution is 3-fold. First, we provide a thorough empirical study of the instability issue in models' performance on analysis datasets. Second, we demonstrate theoretically and empirically that the performance variance is attributed mostly to inter-example correlations. Finally, we provide suggestions on how to deal with instability, including reporting the decomposed variance for more interpretable evaluation and better comparison.
\section{Related Work}
\paragraph{NLI and RC Analysis.}
Many analysis works have been conducted to study what the models are actually capturing alongside recent improvements on NLI and RC benchmark scores. In NLI, some analyses target word/phrase level lexical/semantic inference \cite{glockner2018breaking, shwartz2018paraphrase, carmona2018behavior}, some are more syntactic-related \cite{mccoy2019right, nie2019analyzing, geiger2019posing}, some also involved logical-related study \cite{minervini2018adversarially, wang2019glue} and some involved pragmatic inference~\cite{jeretic-etal-2020-natural}. \newcite{naik2018stress} proposed a suite of analysis sets covering different linguistic phenomena. \newcite{ynie2020chaosnli} studied the factor of collective human opinion distributions on model performance which might also have connections with performance instability. In RC, adversarial style analysis is used to test the robustness of the models~\cite{jia2017advsquad}.
Most of the work follows the style of \citet{carmona2018behavior} to diagnose/analyze models' behavior on pre-designed analysis sets.
In this paper, we analyze NLI and RC models from a broader perspective by inspecting models' performance across different analysis sets, and their inter-dataset and intra-dataset relationships.
\paragraph{Dataset-Related Analysis.}
Another line of works study the meta-issues of the dataset. The most well-known one is the analysis of undesirable bias. In VQA datasets, unimodal biases were found, compromising their authority on multi-modality evaluation~\cite{jabri2016revisiting, goyal2017making}. In RC, \newcite{kaushik2018much} found that passage-only models can achieve decent accuracy.
In NLI, hypothesis bias was also found in SNLI and MultiNLI~\cite{tsuchiya2018performance, gururangan2018annotation}.
These findings revealed the spurious shortcuts in the dataset and their harmful effects on trained models.
To mitigate these problems,
\newcite{liu2019inoculation} introduced a systematic task-agnostic method to analyze datasets. \newcite{rozen2019diversify} further explain how to improve challenging datasets and why diversity matters. \newcite{geva2019we} suggest that the training and test data should be from exclusive annotators to avoid annotator bias. Our work is complementary to those analyses.
\paragraph{Robustifying NLI and RC Models.}
Recently, a number of works have been proposed to directly improve the performance on the analysis datasets both for NLI through model ensemble~\cite{clark2019don, he2019unlearn}, novel training mechanisms~\cite{pang2019improving, yaghoobzadeh2019robust}, adversarial data augmentation~\cite{nie2019adversarial}, enhancing word representations~\cite{moosavi2019improving}, and for RC through different training objectives ~\cite{yeh2019qainfomax, Lewis2019GenerativeQA}. While improvements have been made on certain analysis datasets, the stability of the results is not examined. As explained in this paper, we highly recommend those result variances be scrutinized in future work for fidelity considerations.
\paragraph{Instability in Performance.}
Performance instability has already been recognized as an important issue in deep reinforcement learning~\cite{rlblogpost} and active learning~\cite{bloodgood2013analysis}. However, supervised learning is presumably stable especially with fixed datasets and labels. This assumption is challenged by some analyses recently. \newcite{mccoy2019berts} show high variances in NLI-models performance on the analysis dataset. \newcite{phang2018sentence} found high variances in fine-tuning pre-trained models in several NLP tasks on the GLUE Benchmark. \newcite{reimers2017reporting, reimers2018comparing} state that conclusions based on single run performance may not be reliable for machine learning approaches. \newcite{weber2018fine} found that the model's ability to generalize beyond the training distribution depends greatly on the random seed. \newcite{dodge2020fine} showed weight initialization and training data order both contribute to the randomness in BERT performance. \newcite{nie-etal-2020-simple} found that combining training data from different tasks in multi-tasking training setting also induces instability in the training trajectory. In our work, we present a comprehensive explanation and analysis of the instability of neural models on analysis datasets and give general guidance for future work.
\section{The Curse of Instability}
\subsection{Tasks and Datasets}
In this work, we target our experiments on NLI and RC for two reasons: 1) their straightforwardness for both automatic evaluation and human understanding, and 2) their wide acceptance as benchmarks for evaluating natural language understanding.
For NLI, we use {SNLI}~\cite{bowman2015large} and {MNLI}~\cite{williams2018broad} as the main standard datasets and use {HANS}~\cite{mccoy2019right}, {SNLI-hard}~\cite{gururangan2018annotation}, {BREAK-NLI}~\cite{glockner2018breaking}, {Stress Test}~\cite{naik2018stress}, {SICK}~\cite{marelli2014sick}, {EQUATE}~\cite{ravichander2019equate} as our auxiliary analysis sets. Note that the {Stress Test} contains 6 subsets (denoted as `STR-X') targeting different linguistic categories. We also split the {EQUATE} dataset into two subsets (denoted as `EQU-NAT/SYN') based on whether the examples come from natural real-world sources or from controlled synthetic tests.
For RC, we use {SQuAD1.1}~\cite{rajpurkar2016squad} as the main standard dataset and use {AdvSQuAD}~\cite{jia2017advsquad} as the analysis set. All the datasets we use in this paper are English. Detailed descriptions of the datasets are in Appendix.
\subsection{Models}
Since BERT~\cite{devlin2019bert} achieves state-of-the-art results on several NLP tasks, the pretraining-then-finetuning framework has been widely used. To keep our analysis aligned with recent progress, we focus our experiments on this framework. Specifically, in our study, we use the two most typical choices: BERT~\cite{devlin2019bert} and XLNet~\cite{yang2019xlnet}.\footnote{For all the transformer models, we use the implementation in \url{https://github.com/huggingface/transformers}. BERT-B, BERT-L stands for BERT-base and BERT-large, respectively. The same naming rule applies to other transformer models.} Moreover, for NLI, we additionally use RoBERTa~\cite{liu2019roberta} and ESIM~\cite{chen2017enhanced} in our experiments. RoBERTa is almost the same as BERT except that it has been trained on 10 times more data during the pre-training phase to be more robust. ESIM is the most representative pre-BERT model for sequence matching problems, and we use an ELMo-enhanced version~\cite{Peters_ELMO@2018}.\footnote{For ESIM, we use the implementation in AllenNLP~\cite{Gardner2017AllenNLP}.}
All the models and training details are in Appendix.
\subsection{What are the Concerns?}
\paragraph{Instability in Final Performance.}
Models' final results often serve as a vital measurement for comparative study. Thus, we start with the question: ``How unstable are the final results?''
To measure the instability, we train every model $10$ times with different random seeds. Then, we evaluate the performances of all the final checkpoints on each NLI dataset and compute their standard deviations.
As shown in Fig.~\ref{fig:bert_results}, the results of different runs for BERT, RoBERTa, and XLNet are highly stable on MNLI-m, MNLI-mm, and SNLI, indicating that model performance on standard validation datasets, regardless of domain consistency,\footnote{Here SNLI and MNLI-m share the same domain as the training set while MNLI-mm is from different domains.} is fairly stable.
This stability also holds on some analysis sets, especially on SNLI-hard, which is a strict subset of the SNLI validation set. On the contrary, there are noticeably high variances on some analysis sets. The most significant ones are on STR-NU and HANS where points are sparsely scattered, with a 10-point gap between the highest and the lowest number for STR-NU and a 4-point gap for HANS.
\paragraph{Model-Agnostic Instability.}
Next, we check if the instability issue is model-agnostic. For a fair comparison, as the different sizes of the datasets will influence the magnitude of the instability, we normalize the standard deviation on different datasets by multiplying the square root of the size of the dataset\footnote{This normalization factor assumes that every prediction is independent of each other.} and focus on the relative scale compared to the results on the MNLI-m development set, i.e., $\frac{STD(dataset)}{STD(MNLI-m)}\sqrt{\frac{SIZE(dataset)}{SIZE(MNLI-m)}}$. The results for all the models are shown in Table~\ref{tab:all_instability} (the original means and standard deviations are in Appendix).
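The normalization above can be written as a small helper; the numbers in the usage line are purely illustrative (std of 2.0 over 1000 examples for a hypothetical analysis set, against std 0.2 over 9815 examples for the reference set):

```python
import math

def normalized_std(std_d, size_d, std_ref, size_ref):
    """Deviation on a dataset relative to a reference set (MNLI-m in the
    paper), rescaled by sqrt(size) so that datasets of different sizes are
    comparable under the independent-predictions assumption."""
    return (std_d / std_ref) * math.sqrt(size_d / size_ref)

# Hypothetical analysis set vs. the reference development set
ratio = normalized_std(2.0, 1000, 0.2, 9815)  # ~3.19
```

A ratio well above 1 indicates that the analysis set is substantially less stable than the reference set even after accounting for its smaller size.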
From Table~\ref{tab:all_instability}, we can see that the instability phenomenon is consistent across all the models. Regardless of the model choice, some of the analysis datasets (e.g., HANS, STR-O, STR-N) are significantly more unstable (with standard deviation 27 times larger in the extreme case) than the standard evaluation datasets.
Similarly, for RC, the normalized deviation of model F1 results on SQuAD almost doubled when evaluated on AddSent, as shown in Table~\ref{tab:squad_instability} (the original means and standard deviations are in Appendix).
\begin{table*}[t]
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{Standard Datasets} & \multicolumn{12}{c}{Analysis Sets} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-16}
& \bf MNLI-m & \bf MNLI-mm & \bf SNLI & \bf BREAK-NLI & \bf HANS & \bf SNLI-hard & \bf STR-L & \bf STR-S & \bf STR-NE & \bf STR-O & \bf STR-A & \bf STR-NU &\bf SICK &\bf EQU-NAT &\bf EQU-SYN \\
\midrule
\textbf{ESIM} & 1.00 & 0.57 & 0.73 & \underline{3.84} & 0.82 & 0.73 & 0.77 & 0.73 & 3.57 & \bf 4.63 & 2.58 & 2.79 & 1.47 & 1.19 & 2.70 \\
\textbf{ESIM+ELMo} & 1.00 & 2.00 & 1.50 & \underline{11.5} & 4.55 & 2.48 & 3.10 & 2.20 & 7.50 & \bf 15.5 & 6.38 & 8.36 & 2.28 & 2.36 & 8.45
\\
\textbf{BERT-B} & 1.00 & 0.83 & 0.48 & 1.43 & 10.95 & 0.95 & 1.39 & 1.04 & 2.70 & 3.70 & 1.46 & \bf 13.65 & 1.48 & 1.03 & \underline{13.17} \\
\textbf{RoBERTa-B} & 1.00 & 1.46 & 0.64 & 2.82 & 15.42 & 1.47 & 1.27 & 2.17 & 5.45 & 8.45 & 5.55 & \bf 25.75 & 2.91 & 2.29 & \underline{22.68} \\
\textbf{XLNet-B} & 1.00 & 0.48 & 0.37 & 2.03 & 6.60 & 0.75 & 0.59 & 0.92 & 1.96 & 7.19 & 2.07 & \underline{13.33} & 0.82 & 1.15 & \bf{13.33} \\
\textbf{BERT-L} & 1.00 & 1.13 & 0.56 & 2.86 & 18.47 & 1.37 & 1.31 & 2.63 & 9.19 & 10.13 & 2.39 & \bf 21.88 & 1.71 & 1.41 & \underline{20.36} \\
\textbf{RoBERTa-L} & 1.00 & 0.88 & 0.69 & 1.03 & 10.27 & 1.01 & 1.12 & 1.20 & 12.13 & 10.13 & 4.51 & \bf 27.38 & 1.71 & 1.21 & \underline{22.36} \\
\textbf{XLNet-L} & 1.00 & 0.90 & 0.69 & 1.06 & 10.67 & 0.85 & 0.89 & 1.45 & \underline{16.21} & 11.84 & 4.26 & 15.93 & 1.50 & 1.31 &\bf 19.93 \\
\bottomrule
\end{tabular}%
}
\caption{Normalized deviations of the results, relative to MNLI-m, for all models. The highest deviations are in bold and the second highest deviations are underlined for each individual model.}
\vspace{-5pt}
\label{tab:all_instability}
\end{table*}
\begin{table}[t]
\resizebox{0.47\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{\bf Model} & \multicolumn{1}{c}{Standard Dataset} & \multicolumn{2}{c}{Analysis Sets} \\
\cmidrule(lr){2-2} \cmidrule(lr){3-4}
& \bf SQuAD & \bf AddSent & \bf AddOneSent \\
\midrule
\textbf{BERT-B} & 1.00 & \bf 2.61 & 1.58 \\
\textbf{XLNet-B} & 1.00 & \bf 1.78 & 1.00 \\
\bottomrule
\end{tabular}%
}
\caption{Normalized deviations of the results, relative to the SQuAD dev set, for both BERT-B and XLNet-B.}
\vspace{-5pt}
\label{tab:squad_instability}
\end{table}
\paragraph{Fluctuation in Training Trajectory.}
\label{sec:instable_train}
Intuitively, the inconsistency and instability in the final performance of different runs can be caused by the randomness in initialization and stochasticity in training dynamics. To see how much these factors can contribute to the inconsistency in the final performance, we keep track of the results on different evaluation sets along the training process and compare their training trajectories. We choose HANS and STR-NU as our example unstable analysis datasets because their variances in final performance are the largest, and we choose SNLI and MNLI-m for standard validation set comparison.
As shown in Fig.~\ref{fig:bert_training_trajectory}, the training curve on MNLI and SNLI (the top two lines) is highly stable, while there are significant fluctuations in the HANS and STR-NU trajectories (bottom two lines).
Besides the mean and standard deviation over multiple runs, we also show the accuracy of one run as the bottom dashed line in Fig.~\ref{fig:bert_training_trajectory}. We find that two adjacent checkpoints can have a dramatically large performance gap on STR-NU. Such fluctuation is very likely to be one of the reasons for the instability in the final performance and might give rise to untrustworthy conclusions drawn from the final results.
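The size of such adjacent-checkpoint gaps can be quantified directly from a recorded accuracy trajectory. Below is a minimal sketch; the trajectory values are hypothetical and only illustrate the contrast between a stable and an unstable evaluation set:

```python
import numpy as np

def max_adjacent_gap(trajectory):
    """Largest absolute accuracy change between consecutive checkpoints."""
    traj = np.asarray(trajectory, dtype=float)
    return float(np.max(np.abs(np.diff(traj))))

# Hypothetical per-checkpoint accuracies (not actual experimental numbers).
stable_traj = [84.5, 84.7, 84.8, 84.9, 85.0, 85.1]    # MNLI-like
unstable_traj = [38.0, 45.0, 33.0, 47.0, 36.0, 41.0]  # STR-NU-like

print(max_adjacent_gap(stable_traj))    # small gap on the stable set
print(max_adjacent_gap(unstable_traj))  # large gap on the unstable set
```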
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{model_selection_1.pdf}
\caption{Spearman's correlations for different datasets showing the low correlation between standard datasets (i.e., MNLI-m, MNLI-mm, and SNLI) and all the other analysis datasets.}
\vspace{-6pt}
\label{fig:heatmap_modelselection}
\end{figure}
\paragraph{Low Correlation between Datasets.}
The typical routine for neural network model selection requires practitioners to choose a model or checkpoint based on its performance on the validation set. This routine was followed in all previous NLI analysis studies, where models were chosen by their performance on the standard validation set and then tested on analysis sets. An important assumption behind this routine is that performance on the validation set correlates with the model's general ability. However, as shown in Fig.~\ref{fig:bert_training_trajectory}, the striking difference between the wildly fluctuating training curves on the analysis sets and the smooth curves on the standard validation set calls this assumption into question.
Therefore, to check the effectiveness of model selection under these instabilities, we examine the correlations between performance on different datasets during training.
For dataset $\mathcal{D}^i$, we use $a^i_{t,s}$ to denote the accuracy of the checkpoint at $t$-th time step and trained with the seed $s\in S$, where $S$ is the set of all seeds.
We calculate the correlation $\mathrm{Corr}_{i,j}$ between datasets $\mathcal{D}^i$ and $\mathcal{D}^j$ by:
\begin{align*}
\mathrm{Corr}_{i,j} \!=\! \frac{1}{\vert S \vert}\sum_{s\in S}\mathrm{Spearman}\left[ (a^i_{t,s})_{t=1}^{T},
(a^j_{t,s})_{t=1}^{T} \right]
\end{align*}
where $T$ is the number of checkpoints.
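A sketch of this computation in code (we assume tie-free trajectories for the simple rank computation below; a midrank-based implementation would be needed in general):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free sequences."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

def dataset_corr(acc_i, acc_j):
    """Corr_{i,j}: Spearman correlation between the per-checkpoint accuracy
    trajectories of two datasets, averaged over seeds.
    acc_i, acc_j: arrays of shape (num_seeds, num_checkpoints)."""
    return float(np.mean([spearman(a, b) for a, b in zip(acc_i, acc_j)]))

# Two toy seeds: on both datasets the trajectories rise monotonically,
# so each seed contributes a Spearman correlation of 1.
acc_a = np.array([[70.0, 72.0, 74.0], [71.0, 73.0, 75.0]])
acc_b = np.array([[50.0, 55.0, 60.0], [52.0, 56.0, 61.0]])
print(dataset_corr(acc_a, acc_b))  # → 1.0
```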
The correlations between different NLI datasets are shown in Fig.~\ref{fig:heatmap_modelselection}.
We can observe high correlations ($>0.95$) among the standard validation datasets (e.g., MNLI-m, MNLI-mm, SNLI) but low correlations between most other dataset pairs, especially when pairing STR-O or STR-NU with MNLI or SNLI.
While these low correlations between standard evaluation sets and analysis sets can bring useful insights for analysis,
they also indicate that: 1) performance on the standard validation set is not representative of performance on certain analysis sets; 2) conclusions drawn from analysis-set results after model selection on standard evaluation sets may be unreliable.
\section{Tracking Instability}
\label{sec:why}
Before answering the question of how to handle these instabilities, we first seek the source of the instability to better understand the issue.
We start from the intuition that high variance could result from high inter-example correlation within the dataset, and provide hints from experimental observations. Next, we present theoretical evidence formalizing our claim. Finally, based on empirical results, we conclude that inter-example correlation is the major source of the variance.
\subsection{Inter-Example Correlations}
Presumably, the wild fluctuation in the training trajectory on different datasets might come from two potential sources. Firstly, the individual prediction of each example may be highly unstable so that the prediction is constantly changing. Secondly, there might be strong inter-example correlations in the datasets such that a large proportion of predictions are more likely to change simultaneously, thus causing large instability.
Here we show that the second reason, i.e., strong inter-example prediction correlation, is the major factor.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{interexample.pdf}
\caption{Heatmaps of the inter-example correlation matrices for MNLI and HANS. Each point in a heatmap represents the Spearman's correlation between the predictions of an example pair.}
\vspace{-5pt}
\label{fig:interexample_map}
\vspace{-5pt}
\end{figure}
We examine the correlations between the predictions of different example pairs during training.
In Fig.~\ref{fig:interexample_map}, we show the inter-example Spearman's correlations on MNLI and HANS.
Fig.~\ref{fig:interexample_map} shows a clear difference between the inter-example correlation in stable (MNLI) datasets versus unstable (HANS) datasets.
For stable datasets (MNLI), the correlations between the predictions of examples are uniformly low, while for unstable datasets (HANS), there exist clear groups of examples with very strong inter-correlation between their predictions.
This observation suggests that those groups could be a major source of instability if they contain samples with frequently changing predictions.
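Such a correlation matrix can be sketched from per-checkpoint correctness records as follows (for 0/1 correctness indicators, Pearson correlation on the columns coincides with Spearman's with midranks, so `np.corrcoef` suffices):

```python
import numpy as np

def inter_example_corr(correct):
    """Pairwise correlation between example-wise correctness trajectories.

    correct: (num_checkpoints, num_examples) array of 0/1 values; column k
    records whether example k is predicted correctly at each checkpoint.
    Constant columns (always right or always wrong) yield NaN entries.
    """
    return np.corrcoef(correct, rowvar=False)

# Toy records: examples 0 and 1 always flip together (one correlated
# "group"), while example 2 varies independently of them.
correct = np.array([[1, 1, 0],
                    [0, 0, 1],
                    [1, 1, 1],
                    [0, 0, 0]])
M = inter_example_corr(correct)
print(M[0, 1])  # the perfectly-correlated pair
print(M[0, 2])  # the uncorrelated pair
```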
\begin{table*}[t]
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccccccccccccccc}
\toprule
\multirow{2}{*}{Statistics} & \multicolumn{3}{c}{Standard Dataset} & \multicolumn{12}{c}{Analysis Dataset} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-16}
&\bf MNLI-m &\bf MNLI-mm &\bf SNLI &\bf BREAK &\bf HANS &\bf SNLI-hard &\bf STR-L &\bf STR-S &\bf STR-NE &\bf STR-O &\bf STR-A &\bf STR-NU &\bf SICK &\bf EQU-NAT &\bf EQU-SYN \\
\midrule
$\sqrt{\text{Total Var}}$ & 0.24 & 0.20 & 0.11 & 0.38 & 1.51 & 0.40 & 0.34 & 0.28 & 0.65 & 0.90 & 0.89 & 3.76 & 0.35 & 0.66 & 3.47 \\
$\sqrt{\text{Idp Var}}$ & 0.18 & 0.18 & 0.13 & 0.12 & 0.10 & 0.30 & 0.17 & 0.22 & 0.17 & 0.19 & 0.56 & 0.33 & 0.17 & 0.59 & 0.31 \\
$\sqrt{\vert\text{Cov}\vert}$ & 0.16 & 0.09 & 0.06 & 0.36 & 1.51 & 0.27 & 0.28 & 0.15 & 0.63 & 0.88 & 0.69 & 3.74 & 0.31 & 0.31 & 3.45 \\
\bottomrule
\end{tabular}%
}
\caption{
The square roots of total variance (Total Var), independent variance (Idp Var), and the absolute covariance ($\vert\text{Cov}\vert$) of BERT model on different NLI datasets.
Square roots are applied to map variances and covariances to a comparable range.
Analysis datasets have much higher covariance than standard datasets.
}
\vspace{-6pt}
\label{tab:idp_variance_new}
\end{table*}
\begin{table}[t]
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{Statistics} & \multicolumn{1}{c}{Standard Dataset} & \multicolumn{2}{c}{Analysis Dataset} \\
\cmidrule(lr){2-2} \cmidrule(lr){3-4}
&\bf SQuAD &\bf AddSent & \bf AddOneSent \\
\midrule
$\sqrt{\text{Total Var}}$ & 0.13 & 0.57 & 0.48 \\
$\sqrt{\text{Idp Var}}$ & 0.15 & 0.33 & 0.44 \\
$\sqrt{\vert\text{Cov}\vert}$ & 0.09 & 0.43 & 0.13 \\
\bottomrule
\end{tabular}%
}
\caption{
The square roots of total variance (Total Var), independent variance (Idp Var), and absolute covariance ($\vert\text{Cov}\vert$) of BERT model on different RC datasets.
}
\label{tab:idp_variance_mc}
\end{table}
\subsection{Variance Decomposition}
\label{sec:v_dec}
Next, we provide theoretical support showing how high inter-example correlation contributes to the large variance in final accuracy. Later, we also demonstrate that it is the major source of that variance.
Suppose dataset $\mathcal{D}$ contains examples $\{x_i, y_i\}_{i=1}^N$, where $N$ is the number of data points in the dataset, $x_i$ and $y_i$ are the inputs and labels, respectively.
We use a random variable $C_{i}$ to denote whether model $M$ predicts the $i$-th example correctly:
$C_i\!=\!\mathbbm{1}[y_i = M(x_i)]$.
We omit the model symbol $M$ in later notation for simplicity.
The accuracy $\mathit{Acc}$ of model $M$ is another random variable, equal to the average over $\{C_i\}$, where the randomness is over different model weights (i.e., caused by different random seeds in our experiments):
$\mathit{Acc} = \frac{1}{N} \sum_i C_i$.
We then decompose the variance of the accuracy $\mathrm{Var}(\mathit{Acc})$ into the sum of data variances $\mathrm{Var}(C_i)$, and the sum of inter-data covariances $\mathrm{Cov}(C_i, C_j)$:
\begin{align}
\label{eq:var_dec}
\mathrm{Var}(\mathit{Acc})
&\!=\! \frac{1}{N^2}\mathrm{Cov}\left(\sum_{i=1}^N C_i,\,\, \sum_{j=1}^N C_j\right) \nonumber \\
&\!=\! \frac{1}{N^2}\sum_{i=1}^N \sum_{j=1}^N \mathrm{Cov}\left( C_i,\,\, C_j\right) \nonumber \\
&\!=\!\frac{1}{N^2}\!\sum_{i=1}^N\! \mathrm{Var}(C_i) \!+\! \frac{2}{N^2}\!\sum_{i< j}\!\mathrm{Cov}( C_i\mbox{,}C_j)
\end{align}
Here, the first term $\frac{1}{N^2}\sum \mathrm{Var}(C_{i})$ captures the instability caused by randomness in individual example predictions, and the second term $\frac{2}{N^2}\sum_{i<j}\mathrm{Cov}(C_{i},C_{j})$ captures the instability caused by the covariance between the predictions of different examples. The latter covariance term is closely related to the inter-example correlation.
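As a sanity check, the decomposition in Eq.~\ref{eq:var_dec} can be verified numerically on synthetic correctness data (the 0/1 matrix below is randomly generated and purely illustrative; it is not taken from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows are runs (seeds), columns are examples; entries are 0/1 correctness.
C = rng.integers(0, 2, size=(200, 10)).astype(float)
C[:, 1] = C[:, 0]  # make two examples perfectly correlated

N = C.shape[1]
acc = C.mean(axis=1)                       # Acc for each run
total_var = acc.var()                      # Var(Acc), biased estimator
cov = np.cov(C, rowvar=False, bias=True)   # Cov(C_i, C_j), same normalization
idp_var = cov.diagonal().sum() / N**2      # first (independent) term
cov_term = (cov.sum() - cov.diagonal().sum()) / N**2  # second term

print(np.isclose(total_var, idp_var + cov_term))  # → True
```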
Finally, to demonstrate that the inter-example correlation is the major source of high variance, we calculate the total variance, the independent variance (the 1st term in Eq. \ref{eq:var_dec}), and the covariance (the 2nd term in Eq. \ref{eq:var_dec}) on every dataset in Table \ref{tab:idp_variance_new}.
While the average independent variance is similar on standard and analysis datasets, we find a large gap between the average covariances.
This contrast between total variance and independent variance shows that inter-example correlation is the major cause of the higher variance on the analysis datasets.
\begin{table}[t]
\centering
\small
\begin{tabular}{rl}
\toprule
Premise: & Though the author encouraged the lawyer, \\
& the tourist waited.\\
Hypothesis: & The author encouraged the lawyer. \\
Label: & entailment\\
\midrule
Premise: & The lawyer thought that the senators \\
&supported the manager. \\
Hypothesis: & The senators supported the manager. \\
Label: & non-entailment\\
\bottomrule
\end{tabular}
\caption{A highly-correlated example pair in the HANS dataset with the BERT model. This example pair has the largest covariance (0.278) among all pairs.}
\vspace{-3pt}
\label{tab:nli_example}
\end{table}
\begin{table*}[t]
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccccccccccccc}
\toprule
\bf Target Eval Set & \bf MNLI-m & \bf BREAK & \bf HANS & \bf SNLI-hard & \bf STR-L & \bf STR-S & \bf STR-NE & \bf STR-O & \bf STR-A & \bf STR-NU & \bf SICK & \bf EQU-NAT & \bf EQU-SYN\\
\midrule
\multicolumn{14}{c}{\textit{Accuracy Mean}}\\
\midrule
MNLI-m & 85.1 & 95.3 & 61.6 & 80.9 & \textbf{81.9 } & 77.3 & 55.5 & 59.9 & 62.9 & 41.1 & 57.3 & 60.1 & 41.3 \\
Re-Split Dev & - & \textbf{96.2} & \textbf{ 64.3} & \textbf{81.0 } & 81.7 & \textbf{77.4} & \textbf{56.5} & \textbf{66.0} & \textbf{ 67.2} & \textbf{ 48.2} & \textbf{59.3} & \textbf{61.2} & \textbf{47.6} \\
\midrule
\multicolumn{14}{c}{\textit{Accuracy Standard Deviation}}\\
\midrule
MNLI-m & 0.22 & 0.37 & 1.57 & \textbf{0.33 } & 0.36 & \textbf{0.35} & \textbf{0.65 } & \textbf{0.88} & \textbf{1.60} & 3.49 & \textbf{0.55} & \textbf{1.06} & 3.19 \\
Re-Split Dev & - & \textbf{0.32} & \textbf{1.51} & 0.52 & \textbf{0.34} & 0.47 & 0.83 & 2.70 & 1.83 & \textbf{2.64} & 1.26 & 1.18 & \textbf{1.86} \\
\bottomrule
\end{tabular}%
}
\caption{The comparison of means and standard deviations of the accuracies when model selection is conducted based on different development sets. `MNLI-m' chooses the best checkpoint based on the MNLI-m validation set. `Re-Split Dev' chooses the best checkpoint based on the corresponding re-split analysis-dev set.}
\vspace{-5pt}
\label{tab:resplit}
\end{table*}
\subsection{Highly-Correlated Cases}
\label{sec:cases}
From these analyses, we can see that one major reason behind the high variance on certain analysis datasets is high inter-example correlation. Following this direction, the next question is why highly-correlated example pairs are more likely to appear in analysis datasets. From Table~\ref{tab:all_instability}, we can see that the largest variances occur on HANS, several subsets of STR, and EQU-SYN. On the other hand, while SNLI-hard and EQU-NAT are also analysis datasets, their variance is much smaller. One crucial difference is that the high-variance datasets are usually created with the help of synthetic rules.
Such well-controlled, synthetic-rule-based construction can effectively target certain linguistic phenomena, but it may also cause many examples to share similar lexical usage. One example from the HANS dataset is shown in Table \ref{tab:nli_example}, and a similar example for RC is shown in the Appendix. These similarities in syntax and lexicon are very likely to cause the predictions of these two examples to be highly correlated.
Further evidence can be seen in Figure~\ref{fig:interexample_map}, where clear boundaries between blocks of highly-correlated examples appear in the right sub-figure (the HANS dataset). Since the examples in HANS are ordered by their templates, examples in the same block are created using the same template. Hence, the block patterns in the figure also show how synthetic rules can make predictions more correlated with each other.
In conclusion, since analysis datasets are sometimes created with pre-specified linguistic patterns/properties and investigation phenomena in mind, their distributions are less diverse than those of standard datasets. The difficulty of the dataset and the lack of diversity can lead to highly-correlated predictions and high instability in models' final performance.
\section{Implications, Suggestions, and Discussion}
So far, we have demonstrated how severe this instability issue is and how the instability can be traced back to the high correlation between predictions of certain example clusters. Now based on all the previous analysis results, we discuss potential ways of how to deal with this instability issue.
We first want to point out that this instability issue is not a simple problem that can be solved by trivial modifications of the dataset, model, or training algorithm. Below, we present one initial attempt that illustrates the difficulty of solving this issue via dataset re-splitting.
\paragraph{Limitation of Model Selection.}
In this experiment, we test whether an oracle model selection process can help reduce instability.
Unlike benchmark datasets such as SNLI, MNLI, and SQuAD, analysis sets are often released as a single set without dev/test splits. In Sec.~\ref{sec:why}, we observed that models' performance on analysis sets has little correlation with their performance on standard validation sets, making the usual model selection routine useless for reducing performance instability on analysis sets.
Therefore, we perform oracle model selection by dividing each original analysis set into an 80\% analysis-dev set and a 20\% analysis-test set. Since model selection relies on a high correlation between dev and test performance, and these two splits come from the same distribution, this setup gives model selection the best possible conditions.
In Table~\ref{tab:resplit}, we compare the results of BERT-B on the new analysis-test sets with model selection based on the results on either MNLI-m or the corresponding analysis-dev set. Model selection on analysis-dev increases the mean performance on several datasets\footnote{Although the new selection increases the mean performance, we suggest not using the results on analysis sets as benchmark scores, but only as tools to probe model/architecture changes, since analysis datasets are easy to overfit.}, especially on HANS, STR-O, and STR-NU, indicating the expected high correlation inside each analysis set. However, the variances of the final results are not consistently reduced. Hence, beyond the performance instability caused by noisy model selection, different random seeds indeed lead to models with different performance on analysis datasets. This observation indicates that performance instability is relatively independent of mean performance and hints that current models carry intrinsic randomness from random seeds that is unlikely to be removed through simple dataset/model fixes.
\subsection{Implications of Result Instability}
If the intrinsic randomness in the model prevents a quick fix, what does this instability issue imply?
At first glance, one may view the instability as a problem caused by careless dataset design or deficiencies in model architectures/training algorithms. While both parts are indeed imperfect, we suggest it is more useful to view this instability as an inevitable consequence of the current datasets and models. On the data side, since these analysis datasets usually leverage specific rules or linguistic patterns to generate examples targeting specific linguistic phenomena and properties, they contain highly similar examples (see Sec.~\ref{sec:cases}). Hence, the model's predictions on these examples will inevitably be highly correlated. On the model side, since current models cannot yet stably capture these hard linguistic/logical properties through learning, they exhibit instability on some examples, which is amplified by the high correlation between examples' predictions. These datasets can still serve as good evaluation tools as long as we are aware of the instability issue and report results over multiple runs. To better handle the instability, we also propose some long- and short-term suggestions below, based on variance reporting and analysis-dataset diversification.
\subsection{Short/Long Term Suggestions}
\paragraph{Better Analysis Reporting (Short Term).}
Even if we cannot get a quick fix to remove the instability in the results, it is still important to keep making progress using currently available resources, and more importantly, to accurately evaluate this progress.
Therefore, in the short run, we encourage researchers to report the decomposed variance (Idp Var and Cov) for a more accurate understanding of the models and datasets as in Sec \ref{sec:v_dec}, Table \ref{tab:idp_variance_new} and Table \ref{tab:idp_variance_mc}.
The first number (independent variance, i.e., Idp Var) can be viewed as a metric of how stably the model makes a single prediction, and it can be compared across different models.
Models with a lower score can be interpreted as more stable on individual predictions. The value of Cov also helps us better understand both the model and the dataset. A high Cov indicates that many examples look similar to the model, which may be exploiting common artifacts in this group of examples. A lower Cov usually means that the dataset is diverse, which is preferable for evaluation.
By comparing models with both total variance and the Idp Var, we can have a better understanding of where the instability of the models comes from.
A more stable model should aim to reduce the total variance, with more focus on Idp Var. If the target is to learn the targeted property of the dataset better, then more attention should be paid to the covariance term when analysing the results.
\paragraph{Model and Dataset Suggestions (Long Term).}
In the long run, we should be focusing on improving models (including better inductive biases, large-scale pre-training with tasks concerning structure/compositionality) so that they can get high accuracy stably.
Dataset-wise, we encourage the construction of more diverse datasets (in terms of syntax and lexicon). From our results and analysis in Section~\ref{sec:why}, analysis datasets drawn from natural real-life sources usually lead to lower covariance between predictions and better stability. Manual verification of synthetic examples also helps reduce the instability of analysis datasets. While controlled synthetic datasets are more precise and effective at evaluating specific linguistic phenomena, their lack of diversity makes it easier for a model to guess answers correctly by solving only a single pattern/property, instead of mastering the systematic capability for those linguistic properties in different contexts (as reflected by the poor correlation between different analysis datasets). Therefore, a valuable direction in constructing these datasets is to maintain their specificity while increasing their diversity.
\section{Conclusions}
Auxiliary analysis datasets are meant to be important resources for debugging and understanding models. However, the large instability of current models on some of these analysis sets undermines such benefits and poses non-negligible obstacles for future research. In this paper, we examine this instability issue in detail and provide theoretical and empirical evidence that high inter-example correlation causes it. Finally, we give suggestions on future research directions and on better variance reporting for analysis. We hope this paper will guide researchers in handling instability and inspire future work in this direction.
\section{Details of Models}
For models, we mainly focus on current state-of-the-art pre-trained Transformer models.
In addition, we also select several traditional models to see how different structures and the use of pre-trained representations influence the results.
\subsection{Transformer Models}
\paragraph{BERT~\cite{devlin2019bert}.} BERT is a Transformer model pre-trained with a masked language modeling objective on a large unlabeled corpus to obtain deep bi-directional representations \cite{vaswani2017attention}. For NLI, the premise and the hypothesis are concatenated as the input, and a simple classifier is added on top of the pre-trained representations to predict the label. Similarly, for RC, the question and the passage are concatenated as a single input, and the start/end location of the answer span is predicted by computing a dot product between the start/end vector and all the words in the document. The whole model is fine-tuned on the NLI/RC datasets before evaluation.
\paragraph{RoBERTa~\cite{liu2019roberta}.} RoBERTa uses the same structure as BERT, but carefully tunes the pre-training hyper-parameters and is pre-trained on 10 times more data. The fine-tuning architecture and process are the same as for BERT.
\paragraph{XLNet~\cite{yang2019xlnet}.} XLNet also adopts the Transformer structure, but its pre-training objective is generalized auto-regressive language modeling. It can also take in arbitrarily long inputs by using the Transformer-XL~\cite{dai2019transformer} architecture. The fine-tuning architecture and process are the same as for BERT.
\subsection{Traditional Models}
\paragraph{ESIM~\cite{chen2017enhanced}.} ESIM first uses a BiLSTM to encode both the premise and the hypothesis, then performs cross-attention before making the prediction with a classifier. It is a representative model from before the adoption of pre-trained Transformers.
\begin{table}[t]
\resizebox{0.47\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
\bf Name & \bf Standard/Analysis & \bf \#Examples & \bf \#Classes \\
\midrule
\textbf{MNLI-m} & Standard & 9815 & 3 \\
\textbf{MNLI-mm} & Standard & 9832 & 3 \\
\textbf{SNLI} & Standard & 9842 & 3 \\
\textbf{BREAK-NLI} & Analysis & 8193 & 3 \\
\textbf{HANS} & Analysis & 30000 & 2 \\
\textbf{SNLI-hard} & Analysis & 3261 & 3 \\
\textbf{STR-L} & Analysis & 9815 & 3 \\
\textbf{STR-S} & Analysis & 8243 & 3 \\
\textbf{STR-NE} & Analysis & 9815 & 3 \\
\textbf{STR-O} & Analysis & 9815 & 3 \\
\textbf{STR-A} & Analysis & 1561 & 3 \\
\textbf{STR-NU} & Analysis & 7596 & 3 \\
\textbf{SICK} & Analysis & 9841 & 3 \\
\textbf{EQU-NAT} & Analysis & 1384 & 3 \\
\textbf{EQU-SYN} & Analysis & 8318 & 3 \\
\bottomrule
\end{tabular}%
}
\caption{Dataset statistics and categories for all the NLI dev/analysis datasets.}
\label{tab:dataset_stats}
\end{table}
\begin{table}[t]
\resizebox{0.47\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
\bf Name & \bf Standard/Analysis & \bf \#Paragraphs & \bf \#Questions \\
\midrule
\textbf{SQuAD} & Standard & 48 & 10570 \\
\textbf{AddSent} & Analysis & 48 & 3560 \\
\textbf{AddOneSent} & Analysis & 48 & 1787 \\
\bottomrule
\end{tabular}%
}
\caption{Dataset statistics and categories for all the RC dev/analysis datasets.}
\label{tab:dataset_stats_rc}
\end{table}
\begin{table*}[t]
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{Standard Datasets} & \multicolumn{12}{c}{Analysis Sets} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-16}
& \bf MNLI-m & \bf MNLI-mm & \bf SNLI & \bf BREAK-NLI & \bf HANS & \bf SNLI-hard & \bf STR-L & \bf STR-S & \bf STR-NE & \bf STR-O & \bf STR-A & \bf STR-NU &\bf SICK &\bf EQU-NAT &\bf EQU-SYN \\
\midrule
\textbf{ESIM} & 77.38$\pm$0.32 & 77.03$\pm$0.18 & 88.34$\pm$0.24 & 78.49$\pm$1.00 & 49.89$\pm$0.15 & 75.03$\pm$0.40 & 74.21$\pm$0.24 & 69.30$\pm$2.38 & 51.61$\pm$1.13 & 57.95$\pm$1.47 & 53.21$\pm$2.04 & 21.02$\pm$1.00 &55.55$\pm$0.47 &55.87$\pm$1.01 &22.89$\pm$0.94 \\
\textbf{ESIM+ELMo} & 79.83$\pm$0.11 & 79.85$\pm$0.21 & 88.81$\pm$0.17 & 83.24$\pm$1.33 & 50.07$\pm$0.27 & 76.30$\pm$0.45 & 76.29$\pm$0.33 & 74.03$\pm$0.25 & 52.80$\pm$0.79 & 58.42$\pm$1.63 & 54.41$\pm$1.69 & 20.95$\pm$1.00 &57.21$\pm$0.25 &59.19$\pm$0.69 &22.70$\pm$1.01 \\
\textbf{BERT-B} & 84.72 $\pm$0.24 & 84.89 $\pm$0.20 & 91.24 $\pm$0.11 & 95.53$\pm$0.38 & 62.31$\pm$1.51 & 81.30$\pm$0.40 & 81.79$\pm$0.34 & 76.91$\pm$0.28 & 55.37$\pm$0.65 & 59.57$\pm$0.90 & 64.96$\pm$0.89 & 39.02$\pm$3.76 &57.17$\pm$0.34 &60.33$\pm$0.63 &39.44$\pm$3.29 \\
\textbf{RoBERTa-B} & 87.64$\pm$0.12 & 87.66$\pm$0.17 & 91.94$\pm$0.07 & 97.04$\pm$0.36 & 72.45$\pm$1.02 & 82.44$\pm$0.30 & 85.13$\pm$0.15 & 81.97$\pm$0.27 & 57.39$\pm$0.63 & 63.38$\pm$0.98 & 73.84$\pm$1.61 & 52.80$\pm$3.39 &57.14$\pm$0.32 &63.92$\pm$0.67 &51.85$\pm$2.71 \\
\textbf{XLNet-B} & 86.78$\pm$0.28 & 86.42$\pm$0.14 & 91.54$\pm$0.11 & 95.95$\pm$0.63 & 66.29$\pm$1.08 & 81.35$\pm$0.37 & 84.40$\pm$0.17 & 80.33$\pm$0.28 & 57.18$\pm$0.56 & 63.70$\pm$2.04 & 75.70$\pm$1.48 & 40.32$\pm$4.31 &56.66$\pm$0.22 &61.79$\pm$0.83 &39.93$\pm$3.91 \\
\textbf{BERT-L} & 86.62$\pm$0.17 & 86.75$\pm$0.19 & 92.09$\pm$0.09 & 95.71$\pm$0.53 & 72.42$\pm$1.78 & 82.26$\pm$0.40 & 84.20$\pm$0.22 & 79.32$\pm$0.48 & 62.25$\pm$1.55 & 64.48$\pm$1.71 & 72.28$\pm$1.01 & 49.56$\pm$4.20 &57.19$\pm$0.29 &62.66$\pm$0.64 &49.38$\pm$3.76 \\
\textbf{RoBERTa-L} & 90.04$\pm$0.17 & 89.99$\pm$0.15 & 93.09$\pm$0.12 & 97.50$\pm$0.19 & 75.90$\pm$0.99 & 84.42$\pm$0.30 & 87.68$\pm$0.19 & 85.67$\pm$0.22 & 60.03$\pm$2.04 & 63.10$\pm$1.71 & 78.96$\pm$1.91 & 61.27$\pm$5.25 &57.77$\pm$0.29 &66.11$\pm$0.55 &58.34$\pm$4.13 \\
\textbf{XLNet-L} & 89.48$\pm$0.20 & 89.31$\pm$0.18 & 92.90$\pm$0.14 & 97.57$\pm$0.23 & 75.75$\pm$1.22 & 83.55$\pm$0.30 & 87.33$\pm$0.18 & 84.30$\pm$0.32 & 60.46$\pm$3.25 & 67.47$\pm$2.37 & 84.26$\pm$2.14 & 62.14$\pm$3.63 &57.33$\pm$0.30 &63.56$\pm$0.70 &60.45$\pm$4.33 \\
\bottomrule
\end{tabular}%
}
\caption{Means and standard deviations of final performance on NLI datasets for all models.}
\label{tab:all_result_nli}
\end{table*}
\begin{table}[t]
\resizebox{0.47\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{\bf Model} & \multicolumn{1}{c}{Standard Dataset} & \multicolumn{2}{c}{Analysis Sets} \\
\cmidrule(lr){2-2} \cmidrule(lr){3-4}
& \bf SQuAD & \bf AddSent & \bf AddOneSent \\
\midrule
\textbf{BERT-B} & 87.16$\pm$0.13 & 63.70$\pm$0.57 & 72.33$\pm$0.48 \\
\textbf{XLNet-B} & 89.33$\pm$0.39 & 69.19$\pm$1.18 & 77.20$\pm$0.94 \\
\bottomrule
\end{tabular}%
}
\caption{Means and standard deviations of final F1 on SQuAD dev set for both BERT-B and XLNet-B.}
\label{tab:all_result_rc}
\end{table}
\begin{table*}[t]
\centering
\small
\begin{tabularx}{\textwidth}{rX}
\toprule
Original Context: & In February 2010, in response to controversies regarding claims in the Fourth Assessment Report, five climate scientists--all contributing or lead IPCC report authors--wrote in the journal Nature calling for changes to the IPCC. They suggested a range of new organizational options, from tightening the selection of lead authors and contributors to dumping it in favor of a small permanent body or even turning the whole climate science assessment process into a moderated ``living'' Wikipedia-IPCC. Other recommendations included that the panel employs full-time staff and remove government oversight from its processes to avoid political interference. \\
Question: & How was it suggested that the IPCC avoid political problems? \\
Answer: & remove government oversight from its processes \\
\midrule
Distractor Sentence 1: & It was suggested that the PANEL avoid nonpolitical problems. \\
\midrule
Distractor Sentence 2: & It was suggested that the panel could avoid nonpolitical problems by learning. \\
\bottomrule
\end{tabularx}
\caption{A highly-correlated example pair in the SQuAD-AddSent dataset with the BERT model. This example pair has the largest covariance (0.278) among all pairs.
}
\label{tab:squad_example}
\end{table*}
\section{Details of Analysis Datasets}
We used the following NLI analysis datasets in our experiments: \textbf{Break NLI}~\cite{glockner2018breaking}, \textbf{SNLI-hard}~\cite{gururangan2018annotation}, \textbf{NLI Stress Test}~\cite{naik-etal-2018-stress} and \textbf{HANS}~\cite{mccoy2019right}. We use \textbf{AdvSQuAD}~\cite{jia2017advsquad} as the RC analysis dataset.
\paragraph{Break NLI.\footnote{\url{github.com/BIU-NLP/Breaking_NLI}}}
The examples in Break NLI resemble the examples in SNLI. The hypothesis is generated by swapping words in the premise so that lexical or world knowledge is required to make the correct prediction.
\paragraph{SNLI-Hard.\footnote{\url{nlp.stanford.edu/projects/snli/snli_1.0_test_hard.jsonl}}}
The SNLI-hard dataset is a subset of the SNLI test set. Examples that can be predicted correctly by looking only at annotation artifacts in the hypothesis sentence are removed.
\paragraph{NLI Stress.\footnote{\url{abhilasharavichander.github.io/NLI_StressTest/}}}
The NLI Stress datasets are a collection of datasets modified from MNLI. Each dataset targets one specific linguistic phenomenon, including word overlap (STR-O), negation (STR-NE), antonyms (STR-A), numerical reasoning (STR-NU), length mismatch (STR-L), and spelling errors (STR-S). Models with a corresponding weakness will score poorly on that dataset. In our experiments, we use the mismatched set when both matched and mismatched versions exist. For STR-S, we follow the official evaluation script\footnote{\url{github.com/AbhilashaRavichander/NLI_StressTest/blob/master/eval.py}} and use the gram\_content\_word\_swap subset.
\paragraph{HANS.\footnote{\url{github.com/tommccoy1/hans}}}
The examples in HANS are created to reveal three heuristics used by models: the lexical overlap heuristic, the sub-sequence heuristic, and the constituent heuristic. For each heuristic, examples are generated using 5 different templates.
\paragraph{SICK.\footnote{\url{marcobaroni.org/composes/sick.html}}}
SICK is a dataset created for evaluating the compositional distributional semantic models. The sentences in this dataset come from the 8K ImageFlickr dataset and the SemEval 2012 STS MSR-Video Description dataset. The sentences are first normalized and then paired with an expanded version so that the pair can test certain lexical, syntactic, and semantic phenomena.
\paragraph{EQUATE.\footnote{\url{github.com/AbhilashaRavichander/EQUATE}}}
EQUATE is a benchmark evaluation framework for evaluating quantitative reasoning in textual entailment. It consists of five test sets: three contain real-world examples (RTE-Quant, NewsNLI, RedditNLI) and two are controlled synthetic tests (AWPNLI, Stress Test). In this work, we use EQU-NAT to denote the real-world subsets and EQU-SYN to denote the synthetic tests.
\paragraph{AdvSQuAD.\footnote{Both AddSent and AddOneSent can be downloaded from \url{worksheets.codalab.org/worksheets/0xc86d3ebe69a3427d91f9aaa63f7d1e7d/}.}} AdvSQuAD is a dataset created by inserting a distracting sentence into the original paragraph. This sentence is designed to be similar to the question but to contain a wrong answer, in order to fool models.
\section{Dataset Statistics}
Dataset statistics and categories for all the NLI datasets can be seen in Table \ref{tab:dataset_stats}.
Dataset statistics and categories for all the RC datasets can be seen in Table \ref{tab:dataset_stats_rc}.
\section{Training Details}
For all pre-trained transformer models, namely BERT, RoBERTa, and XLNet, we use the same set of hyper-parameters so that results are comparable across models.
For NLI, we use the hyper-parameters suggested in \newcite{devlin2019bert}. The batch size is set to 32 and the peak learning rate is set to 2e-5. We save checkpoints every 500 iterations, resulting in 117 intermediate checkpoints. In our preliminary experiments, we find that tuning these hyper-parameters does not significantly influence the results. The training set for NLI is the union of the SNLI~\cite{bowman2015large} and MNLI~\cite{williams2018broad}\footnote{Both SNLI and MNLI can be downloaded from \url{gluebenchmark.com}.} training sets and is fixed across all the experiments.
This gives a good estimate of state-of-the-art performance on NLI that is fairly comparable to other analysis studies.
For RC, we use a batch size of 12 and set the peak learning rate to 3e-5. RC models are trained on SQuAD1.1\footnote{\url{rajpurkar.github.io/SQuAD-explorer/}}~\cite{rajpurkar2016squad} for 2 epochs. All our experiments are run on Tesla V100 GPUs.
\section{Means and Standard Deviations of Final Results on NLI/RC datasets}
Here we provide the means and standard deviations of the final performance over 10 different seeds on the NLI and RC datasets in Table \ref{tab:all_result_nli} and Table \ref{tab:all_result_rc}, respectively.
\section{Highly Correlated Cases for SQuAD}
In this section, we show an example illustrating that, for RC datasets, highly correlated cases resemble each other in the same way as for NLI datasets.
Since adversarial RC datasets such as AddSent are created by appending a distractor sentence at the end of the original passage, different examples can look very similar.
In Table~\ref{tab:squad_example}, we see that two examples are created by appending two similar distractor sentences to the same context, making the predictions on these two examples highly correlated.
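The notion of "highly correlated" predictions can be made operational as the correlation between per-checkpoint correctness trajectories. The following sketch is ours, with purely hypothetical correctness records standing in for two AddSent examples built from the same context; it is not data from the paper.

```python
import numpy as np

# Hypothetical per-checkpoint correctness records (1 = predicted correctly,
# 0 = predicted wrongly) for two AddSent examples built from the same context.
example_a = np.array([0, 0, 1, 1, 0, 1, 1, 1, 0, 1], dtype=float)
example_b = np.array([0, 0, 1, 1, 0, 1, 1, 1, 1, 1], dtype=float)

def trajectory_correlation(a, b):
    """Pearson correlation between two correctness trajectories."""
    return float(np.corrcoef(a, b)[0, 1])

r = trajectory_correlation(example_a, example_b)
```

Trajectories that agree on almost every checkpoint, as in this toy pair, give a correlation close to 1.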
\section*{Acknowledgments}
We thank the reviewers for their helpful comments. This work was supported by ONR Grant N00014-18-1-2871, DARPA YFA17-D17AP00022, and NSF-CAREER Award 1846185. The views contained in this article are those of the authors and not of the funding agency.
\section{Introduction}
The nonrelativistic gravity theory of Newton is invariant only under the Galilei group. Corrections to the
non-relativistic Newtonian theory at higher order in $1/c$ are famously important for the original experimental evidence for general relativity
\cite{Dautcourt64,Dautcourt90}.
Higher-order (parametrized) post-Newtonian
corrections to the Keplerian two-body motion are also
of central importance in current investigations of binary
gravitational wave sources \cite{Blanchet:1995ez,Buonanno:1998gg,Blanchet:2013haa,Will:2014kxa,Damour:2015isa}. The effective one-body approach \cite{Buonanno:1998gg,Maheshwari:2016edp} was
inspired in part by developments in the classical mechanical relativistic two-body problem \cite{Todorov:1976,PhysRevD.18.1881,PhysRevD.19.702,Giachetti:1981gr}.

Starting from the relativistic Poincar\'e algebra, one can perform an algebra contraction to obtain the non-relativistic Galilei algebra. Alternatively, starting from the Galilei transformations, one can include relativistic corrections at every order in $1/c$. With corrections up to a finite order, this system has neither Galilei nor Poincar\'e symmetry; only when an infinite set of corrections with specific coefficients is included does one regain the Poincar\'e symmetry. However, it has recently been argued that the finite-order case does have a symmetry algebra
\cite{Gomis:2019sqv}, although this requires enlarging the space on which the transformations act.
It is well known that the most general point symmetry group of the Schr\"odinger equation and of the free non-relativistic particle is the Bargmann group~\cite{Bargmann:1954gh}
extended by two extra generators:
dilatations and the special conformal transformations in one dimension, also called expansions~\cite{Niederer:1972zz,Hagen:1972pd}.
Recently, the action and symmetries of a post-Galilean particle,
which includes corrections of arbitrarily high order in $1/c$ to the non-relativistic particle in an infinitely extended Minkowski space, have been constructed \cite{Gomis:2019sqv}. This study is motivated by the importance of
higher-order post-Galilean corrections to the Keplerian two-body motion
in current investigations of binary gravitational wave sources~\cite{Blanchet:1995ez,Buonanno:1998gg,Blanchet:2013haa,Will:2014kxa,Damour:2015isa}.
The action of a post-Galilean particle in an infinite dimensional Minkowski space
with coordinates $(t_{(m)}, x^a_{(m)})$, $a=1,\ldots,d$,
is obtained expanding the action for a massive free relativistic particle $S=-mc \int\mbox{d}\tau \sqrt{-\dot{X}^\mu \dot{X}_\mu}$ in powers of $1/c^2$ where the coordinates
${X}^\mu $ are\footnote{The expansion is in powers of $1/c^2$ because we are only interested in symmetries of a free theory in flat space-time, and this is how $c$ appears in a post-Galilean expansion of the boosts.
In theories that also include
gravity, an expansion in $1/c$ could take into account
strong gravity effects; see \cite{Dautcourt:1996pm,Ergen:2020yop,VandenBleeken:2017rij}.}
\begin{align}
\frac{1}{c} X^0 &= t_{(0)} + \frac{1}{c^2} t_{(1)} + \frac{1}{c^4} t_{(2)}+\ldots,\\
X^a &= x_{(0)}^a + \frac{1}{c^2} x_{(1)}^a + \frac{1}{c^4} x_{(2)}^a + \ldots.
\end{align}
One gets a series $S_{(0)}+ S_{(1)}+S_{(2)}+\ldots$, with $S_{(n)}$ corresponding to power $c^{2-2n}$. The first contributions are \cite{Gomis:2019sqv}
\begin{align}
S_{(0)} &= -mc^2 \int\mbox{d}\tau\ \dot{t}_{(0)},\\
S_{(1)} & = m \int\mbox{d}\tau \left(
-\dot{t}_{(1)} + \frac{1}{2} \frac{\dot{x}_{(0)}^2}{\dot{t}_{(0)}}
\right),\\
S_{(2)} &= \frac{m}{c^2} \int\mbox{d}\tau \left(
-\dot{t}_{(2)}+ \frac{\dot{x}_{(0)}^a\dot{x}_{(1)a}}{\dot{t}_{(0)}} - \frac{\dot{t}_{(1)} \dot{x}_{(0)}^2}{2\dot{t}_{(0)}^2}+ \frac{\dot{x}_{(0)}^4}{8\dot{t}_{(0)}^3}
\right),
\label{S2}
\end{align}
where $\dot{x}_{(0)}^4=(\dot{x}_{(0)}^2)^2$.
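These first terms can be checked symbolically. The following sketch is ours, not part of the paper: it expands the square-root Lagrangian in $\varepsilon = 1/c^2$ with sympy, written in one spatial dimension so that $\dot{x}_{(k)}$ stands for the full scalar products.

```python
import sympy as sp

m, eps = sp.symbols('m epsilon', positive=True)          # eps = 1/c^2
td0, td1, td2 = sp.symbols('tdot0 tdot1 tdot2', positive=True)
xd0, xd1, xd2 = sp.symbols('xdot0 xdot1 xdot2', real=True)

# L = -mc*sqrt((dX^0/dtau)^2 - (dX/dtau)^2) with X^0/c = t0 + eps*t1 + eps^2*t2
Tdot = td0 + eps*td1 + eps**2*td2
Xdot = xd0 + eps*xd1 + eps**2*xd2

# Laurent expansion of the Lagrangian: leading term -m*td0/eps = -mc^2 tdot_(0)
Lser = sp.expand(-(m/eps)*sp.sqrt(Tdot**2 - eps*Xdot**2).series(eps, 0, 4).removeO())

L1 = Lser.coeff(eps, 0)   # integrand of S_(1)
L2 = Lser.coeff(eps, 1)   # integrand of c^2 S_(2)
```

The coefficients of $\varepsilon^0$ and $\varepsilon^1$ reproduce the integrands of $S_{(1)}$ and $c^2 S_{(2)}$ above.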
As shown in \cite{Gomis:2019sqv}, the symmetries of action $S_{(M+1)}$, $M\geq 0$, realize
the algebra~\cite{Khasanov:2011jr,Hansen:2019vqf,Gomis:2019fdh}
\begin{align}
\left[
J^{(n)}_{ab}, J^{(m)}_{cd}
\right] &= \delta_{cb}J^{(n+m)}_{ad}- \delta_{ac}J^{(n+m)}_{bd}- \delta_{bd}J^{(n+m)}_{ac}+ \delta_{ad}J^{(n+m)}_{bc},\nonumber\\
\left[
J^{(n)}_{ab}, B^{(m)}_{c}
\right] &= \delta_{bc} B^{(n+m)}_a - \delta_{ac} B^{(n+m)}_b,\nonumber\\
\left[
J^{(n)}_{ab}, P^{(m)}_{c}
\right] &= \delta_{bc} P^{(n+m)}_a - \delta_{ac} P^{(n+m)}_b,\nonumber\\
\left[
B^{(n)}_a, P^{(m)}_b
\right] &= \delta_{ab}H^{(n+m+1)},\nonumber\\
\left[
H^{(n)},B^{(m)}_a
\right] &= -P^{(n+m)}_a,\nonumber\\
\left[
B^{(n)}_a, B^{(m)}_b
\right] &= J^{(n+m+1)}_{ab},
\label{KMGal}
\end{align}
truncated at level $M$, that is, for $n,m=0,1,\ldots,M$, with any generator with index higher than $M$ set to zero, except for $H^{(M+1)}$ which plays the role of a central charge of the truncated algebra.\footnote{From the abstract algebra point of view, one could also have the non-central charges $J^{(M+1)}_{ab}$, but they are not realized in the actions $S_{(M+1)}$.}
The generators $B^{(0)}_a$ and $J^{(0)}_{ab}$ generate, respectively, Galilean boosts and ordinary rotations, while $B^{(n)}$ and $J^{(n)}_{ab}$ for $n\geq 1$ are generalized boosts and rotations which transform coordinates between different levels of the expansion in $1/c^2$.
Disregarding the total derivative term $-m/c^{2M} \dot{t}_{(M+1)}$ which appears in $S_{(M+1)}$, $M\geq 0$, the canonical analysis of the above actions suggests
the presence of $M+1$ first class primary constraints \cite{Gomis:2019sqv}
and therefore the existence of $M+1$ gauge transformations.
For instance, for $M=0$ one has the single constraint
\begin{align}
\phi^{(0)} &= \frac{1}{2} p_a^{(0)} p^{(0)a}- m E^{(0)},
\label{constraints0}
\end{align}
while for $M=1$ there are two constraints\footnote{Here and later on, the constraints and the generators that we will define should be understood as having an additional subscript denoting the value of $M$. We prefer not to write it in order not to clutter the notation too much. Since all the computations are done for fixed $M$, this should not cause any confusion.}
\begin{align}
\phi^{(0)} &= p_a^{(0)}p^{(1)a} - \frac{1}{2} E^{(1) 2} - \frac{m}{c^2}E^{(0)},\nonumber\\
\phi^{(1)} &= \frac{1}{2} p_a^{(1)} p^{(1)a}- \frac{m}{c^2} E^{(1)}.
\label{constraints1}
\end{align}
Here $p^{(0)}_a$ and $p^{(1)}_a$ are the canonical momenta associated to $x_{(0)a}$ and $x_{(1)a}$, respectively, while
$E^{(0)}$ and $E^{(1)}$ are minus the canonical momenta conjugate to $t_{(0)}$ and $t_{(1)}$.
The presence of the above $M+1$ first class constraints means that there are $M+1$ gauge symmetries. Fixing the gauge does not destroy the space-time invariances of the theory; it just introduces a different realization.
It should be noticed \cite{Gomis:2019sqv,Gomis:2020GC} that, if in addition to fixing the gauge, one performs an adequate projection in the space of all the $x$, each of the actions in this sequence yields \textit{all} the terms of the action obtained by expanding the relativistic action up to the given order.
As is well known~\cite{Niederer:1972zz,Hagen:1972pd}, for $M=0$ the Galilean algebra can be extended to the Schr\"odinger algebra adding two new generators, $D$, the generator of space-time dilatations, and $C$, which generates special conformal transformations. Together with the Hamiltonian $H$, they form a $sl(2,\mathbb R)$ subalgebra which, in the normalization for $D$ that we will use, reads
$\{D,C\} =-2C,\ \{D,H\}=2H,\ \{C,H\}=D$.
In this paper we first construct the explicit form of the $M+1$ mass-shell
constraints associated to the action $S_{(M+1)}$, $M\geq 0$. These constraints
are first class and allow
us to construct the canonical action $S_{M+1}^c$.
We then write the most general canonical generator $G$ linear in the momenta.
The requirement that $G$ be a constant of motion implies a set of partial differential equations
for the unknown functions of the generator $G$, which we solve\footnote{
For the use of this method in non-relativistic systems see for example
\cite{Cariglia:2014dwa,Batlle:2016iel} and references therein.}.
This allows us to construct the most general point symmetry transformation of the
canonical action $S_{M+1}^c$.
The algebra of these transformations is a generalization of the Schr\"odinger algebra.
Except for the case $M=0$, it does not contain an $sl(2,\mathbb R)$ subalgebra;
therefore these algebras are not truncations of the Schr\"odinger-Virasoro algebra \cite{Roger:2006rz}, and they also differ from the conformal Galilean algebras (CGA) with
dynamical exponent $z=2/N$, $N$ a positive integer, since the latter contain an $sl(2,\mathbb R)$
subalgebra \cite{henkel1997local,negro1997nonrelativistic,Duval:2009vt,Duval:2011mi}.
The algebras that we obtain in this paper contain the generators $H^{(n)}, B^{(n)}, J^{(n)}_{ab}$ that close
under the algebra (\ref{KMGal}) and the new generators $D^{(n)}$, which generalize the dilatations,
and a single generator $C$ of expansions.
We consider the $M+1$ Schr\"odinger equations associated to the quantization
of the post-Galilean particle described by the action $S_{M+1}^c$. We also study the
projective character of the wave function.
The paper is organized as follows. In Section \ref{can_act} we write the canonical action $S^c_{M+1}$ for arbitrary $M$. In Section \ref{postN2} we deal with the $M=1$ case, for which we compute the most general point transformation by means of the canonical version of Noether's theorem and obtain the first extended Schr\"odinger algebra. Section \ref{genSA} contains our main results, and we present the algebra for arbitrary $M$.
The Schr\"odinger equations associated to the new algebras and their projective invariance are discussed in Section \ref{GS_PI}.
Our results are summarized in Section \ref{conclusions}, and some open problems are discussed. Appendix \ref{inv_constraints} contains the proof of the invariance of the constraints under generalized boosts and rotations proposed in Section \ref{can_act}, Appendix \ref{TVC} lists the transformations of all the canonical variables, and finally Appendix \ref{IVC} discusses the invariance of the constraints under the full set of transformations.
\section{The canonical action of a post-Galilean particle}
\label{can_act}
In this section we construct the canonical action $S_{M+1}^c$ associated to the
post-Galilean action $S_{(M+1)}$ for a generic $M$, written in terms of the mass-shell constraints
$\phi^{(k)}$
\begin{equation}\label{canonicalaction}
S_{M+1}^c=\int d\tau\sum_{k=0}^{M}\big(- E^{(k)} \dot{t}_{(k)}+ p^{(k)}_a \dot{x}_{(k)}^a
-e_{(k)} \phi^{(k)}\big).
\end{equation}
The constraints $\phi^{(k)}$ are the generalization of those for $M=0$ and $M=1$
\cite{Gomis:2019sqv,Gomis:2020GC}.
Explicitly,
the $M+1$ constraints corresponding to level $M$ are
\begin{equation}
\phi^{(k)} = -\frac{m}{c^{2M}} E^{(k)} + \frac{1}{2} \sum_{l=k}^M p_a^{(l)} p^{(M+k-l)a}
- \frac{1}{2} \sum_{l=k}^{M-1} E^{(l+1)} E^{(M+k-l)},
\label{constraintsM}
\end{equation}
for $k=0,1,\ldots,M$. They reproduce (\ref{constraints0}) for $M=0$ and (\ref{constraints1}) for $M=1$, and for the next few $M$ they yield
\begin{align}
\phi^{(0)} &= p_a^{(0)} p^{(2)a} + \frac{1}{2} p_a^{(1)}p^{(1)a}- E^{(1)}E^{(2)} - \frac{m}{c^4} E^{(0)},\\
\phi^{(1)} &= p_a^{(1)}p^{(2)a} - \frac{1}{2} E^{(2) 2} - \frac{m}{c^4}E^{(1)},\\
\phi^{(2)} &= \frac{1}{2} p_a^{(2)} p^{(2)a}- \frac{m}{c^4} E^{(2)}.
\end{align}
for $M=2$ and
\begin{align}
\phi^{(0)} &= p_a^{(0)}p^{(3)a} + p_a^{(1)}p^{(2)a} - \frac{1}{2} E^{(2)2} - E^{(1)}E^{(3)} - \frac{m}{c^6} E^{(0)},\\
\phi^{(1)} &= p_a^{(1)} p^{(3)a} + \frac{1}{2} p_a^{(2)}p^{(2)a}- E^{(2)}E^{(3)} - \frac{m}{c^6} E^{(1)},\\
\phi^{(2)} &= p_a^{(2)}p^{(3)a} - \frac{1}{2} E^{(3) 2} - \frac{m}{c^6}E^{(2)},\\
\phi^{(3)} &= \frac{1}{2} p_a^{(3)} p^{(3)a}- \frac{m}{c^6} E^{(3)}.
\end{align}
for $M=3$.
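The pattern of these constraints can be verified directly against the closed formula (\ref{constraintsM}). The following sketch is ours, written in one spatial dimension so that $p^{(l)}$ stands for the full contraction $p^{(l)}_a p^{(j)a}$.

```python
import sympy as sp

def phi(M, k, p, E, m, c):
    # mass-shell constraint phi^(k) at level M, one spatial dimension
    return (-m/c**(2*M)*E[k]
            + sp.Rational(1, 2)*sum(p[l]*p[M + k - l] for l in range(k, M + 1))
            - sp.Rational(1, 2)*sum(E[l + 1]*E[M + k - l] for l in range(k, M)))

m, c = sp.symbols('m c', positive=True)
p = sp.symbols('p0:4')   # p^(0), ..., p^(3)
E = sp.symbols('E0:4')   # E^(0), ..., E^(3)
```

Evaluating `phi(M, k, ...)` for $M=1,2,3$ reproduces the expressions listed above term by term.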
Notice that the last $M$ constraints for level $M$ coincide in form with the $M$ constraints from level $M-1$, with different variables, and that the new constraint at each level is $\phi^{(0)}$.
We can see that the constraints are first class if we use the Poisson brackets
\begin{align}\label{PB}
\{E^{(k)},t_{(j)}\} = \delta^k_j,\quad &
\{x_{(k)}^a,p_b^{(j)}\} = \delta^a_b \delta_k^j, \ \ k,j=0,1,\ldots,M.
\end{align}
For each $M$, eliminating the $M+1$ momenta $p^{(k)}_a$, energies
$E^{(k)}$ and multipliers $e_{(k)}$ one can recover the corresponding action in configuration space \cite{Gomis:2019sqv}.
Prompted by the standard Galilean boost and rotation transformations on momenta and energy
\begin{equation}
\delta p_a^{(0)} = m v_{(0)a}, \quad \delta E^{(0)}= v_{(0)}^a p^{(0)}_a,
\end{equation}
we propose the following generalized transformations for momenta and energies
\begin{align}
\delta p_a^{(k)} &= \sum_{j=0}^{M-k-1} v_{(j)a}\ E^{(k+j+1)} + \frac{m}{c^{2M}} v_{(M-k)a} + \sum_{j=0}^{M-k} \omega_{(j)ab}\ p^{(k+j)b},\nonumber\\
\delta E^{(k)} &= \sum_{j=0}^{M-k} v_{(j)}^a p_a^{(k+j)} .
\label{Ep_trans}
\end{align}
It can be seen that they realize the algebra (\ref{KMGal}) truncated at level $M$, with central extension $H^{(M+1)}=-m/c^{2M}$.
Furthermore, as shown in Appendix \ref{inv_constraints}, for given $M$ all the constraints $\phi^{(k)}$, $k=0,1,\ldots,M$, are invariant under these transformations of momenta and energies, and this provides a further justification for the form of the constraints (\ref{constraintsM}) for general $M$.
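For $M=1$ this invariance can be verified with a few lines of computer algebra. The following sketch is ours (not the paper's appendix computation); it works in $d=2$ and checks that the first-order variations of $\phi^{(0)}$ and $\phi^{(1)}$ under (\ref{Ep_trans}) vanish identically.

```python
import sympy as sp

m, c = sp.symbols('m c', positive=True)
d = 2
p0 = sp.symbols('p0_1 p0_2')
p1 = sp.symbols('p1_1 p1_2')
E0, E1 = sp.symbols('E0 E1')
v0 = sp.symbols('v0_1 v0_2')
v1 = sp.symbols('v1_1 v1_2')
w0, w1 = sp.symbols('w0 w1')       # independent entries of omega_(0), omega_(1)
W0 = ((0, w0), (-w0, 0))           # antisymmetric 2x2 matrices
W1 = ((0, w1), (-w1, 0))
dot = lambda u, v: sum(a*b for a, b in zip(u, v))

# transformations (Ep_trans) for M = 1
dp0 = [v0[a]*E1 + m/c**2*v1[a]
       + sum(W0[a][b]*p0[b] + W1[a][b]*p1[b] for b in range(d)) for a in range(d)]
dp1 = [m/c**2*v0[a] + sum(W0[a][b]*p1[b] for b in range(d)) for a in range(d)]
dE0 = dot(v0, p0) + dot(v1, p1)
dE1 = dot(v0, p1)

# first-order variations of the constraints (constraints1)
dphi0 = dot(dp0, p1) + dot(p0, dp1) - E1*dE1 - m/c**2*dE0
dphi1 = dot(p1, dp1) - m/c**2*dE1
```

Both variations cancel through the antisymmetry of $\omega_{(0)}$, $\omega_{(1)}$ and the pairing of boost terms.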
The kinetic term in the action,
$- E^{(k)} \dot{t}_{(k)}+ p^{(k)}_a \dot{x}_{(k)}^a$, is quasi-invariant under the transformations
(\ref{Ep_trans}) and the corresponding transformations of ${t}_{(k)},{x}_{(k)}^a$ (see \cite{Gomis:2019sqv} and Appendix \ref{TVC}).
\section{Symmetries of post-Galilean particle for M=1}
\label{postN2}
The action (\ref{S2}) for the second order expansion of a post-Galilean particle in $(d+1)$-dimensional space-time \cite{Gomis:2019sqv,Gomis:2020GC}, without the total derivative $-\frac{m}{c^2}\dot{t}_{(2)}$, becomes
\begin{equation}
S_2= \frac{m}{c^2} \int\mbox{d}\tau \left(
\frac{\dot{x}_{(0)}^a\dot{x}_{(1)a}}{\dot{t}_{(0)}} - \frac{\dot{t}_{(1)} \dot{x}_{(0)}^2}{2\dot{t}_{(0)}^2}+ \frac{\dot{x}_{(0)}^4}{8\dot{t}_{(0)}^3}
\right) = \int\mbox{d}\tau L_2.
\label{S2b}
\end{equation}
The canonical momenta are given by
\begin{align}
E^{(0)} &= -\frac{\partial L_2}{\partial \dot{t}_{(0)}} = - \frac{m}{c^2} \left(
- \frac{\dot{x}_{(0)}^a\dot{x}_{(1)a}}{\dot{t}_{(0)}^2} + \frac{\dot{t}_{(1)}\dot{x}_{(0)}^2}{\dot{t}_{(0)}^3}- \frac{3}{8} \frac{\dot{x}_{(0)}^4}{\dot{t}_{(0)}^4}
\right),\\
E^{(1)} &= - \frac{\partial L_2}{\partial \dot{t}_{(1)} } = \frac{m}{2c^2} \frac{\dot{x}_{(0)}^2}{\dot{t}_{(0)}^2},\\
p^{(0)}_a &= \frac{\partial L_2}{\partial \dot{x}_{(0)}^a} = \frac{m}{c^2} \left(
\frac{\dot{x}_{(0)a}}{\dot{t}_{(0)}} -\frac{\dot{t}_{(1)}\dot{x}_{(0)a}}{\dot{t}_{(0)}^2} + \frac{1}{2}\frac{\dot{x}_{(0)}^2 \dot{x}_{(0)a}}{\dot{t}_{(0)}^3}
\right),\\
p^{(1)}_a &= \frac{\partial L_2}{\partial \dot{x}_{(1)}^a} = \frac{m}{c^2} \frac{\dot{x}_{(0)a}}{\dot{t}_{(0)}},
\end{align}
and they obey the primary first-class constraints
\begin{align}
\phi^{(0)} &= p_a^{(0)} p^{(1)a} -\frac{1}{2} E^{(1)2} - \frac{m}{c^2}E^{(0)},\\
\phi^{(1)} &= \frac{1}{2} p^{(1)2}- \frac{m}{c^2} E^{(1)},
\end{align}
which agree with (\ref{constraintsM}) for $M=1$.
The Dirac Hamiltonian is given by
\begin{align}
H_D &= e_{(0)} \phi^{(0)} + e_{(1)} \phi^{(1)}\nonumber\\
&= e_{(0)} \left(
p_a^{(0)} p^{(1)a} -\frac{1}{2} E^{(1)2} - \frac{m}{c^2}E^{(0)}
\right) +
e_{(1)} \left(
\frac{1}{2} p^{(1)2}- \frac{m}{c^2} E^{(1)}
\right),
\label{HD}
\end{align}
and yields the equations of motion
\begin{align}
& \dot{t}_{(0)} = -\frac{\partial H_D}{\partial E^{(0)}} = \frac{m}{c^2} e_{(0)},\quad
\dot{t}_{(1)} = -\frac{\partial H_D}{\partial E^{(1)}} = e_{(0)} E^{(1)} +\frac{m}{c^2} e_{(1)},\\
& \dot{x}_{(0)}^a = \frac{\partial H_D}{\partial p^{(0)}_a} = e_{(0)} p^{(1)a},\quad
\dot{x}_{(1)}^a = \frac{\partial H_D}{\partial p^{(1)}_a} = e_{(0)} p^{(0)a} + e_{(1)} p^{(1)a},\\
&\dot{E}^{(0)} = \frac{\partial H_D}{\partial t_{(0)}} =0,\quad
\dot{E}^{(1)} = \frac{\partial H_D}{\partial t_{(1)}} =0,\\
&\dot{p}^{(0)}_a = -\frac{\partial H_D}{\partial x_{(0)}^a} =0,\quad
\dot{p}^{(1)}_a = -\frac{\partial H_D}{\partial x_{(1)}^a} =0.
\end{align}
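These equations can be reproduced mechanically as $\dot f = \{f, H_D\}$ with the bracket conventions (\ref{PB}). The following sketch is ours, written in one spatial dimension; the sign conventions encode that the momentum conjugate to $t_{(k)}$ is $-E^{(k)}$.

```python
import sympy as sp

m, c = sp.symbols('m c', positive=True)
e0, e1 = sp.symbols('e0 e1')
t0, t1, E0, E1 = sp.symbols('t0 t1 E0 E1')
x0, x1, p0, p1 = sp.symbols('x0 x1 p0 p1')

coords = [(x0, p0), (x1, p1)]
times = [(t0, E0), (t1, E1)]   # {E^(k), t_(k)} = 1, momentum of t_(k) is -E^(k)

def pb(f, g):
    """Poisson bracket with {x, p} = 1 and {E, t} = 1."""
    s = sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q) for q, p in coords)
    s += sum(sp.diff(f, E)*sp.diff(g, t) - sp.diff(f, t)*sp.diff(g, E) for t, E in times)
    return sp.expand(s)

phi0 = p0*p1 - E1**2/2 - m/c**2*E0
phi1 = p1**2/2 - m/c**2*E1
HD = e0*phi0 + e1*phi1   # Dirac Hamiltonian (\ref{HD})
```

Evaluating `pb(f, HD)` for each canonical variable reproduces the equations of motion, and `pb(phi0, phi1)` vanishes, confirming that the constraints are first class.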
\subsection*{Space-time symmetries}
The canonical generator of space-time symmetries is
given by
\begin{equation}
G=-E^{(0)} \eta_{(0)} - E^{(1)} \eta_{(1)} + p^{(0)}_a \xi_{(0)}^a +p^{(1)}_a \xi_{(1)}^a - \delta F,
\label{Gd}
\end{equation}
where $\eta$, $\xi$ and $\delta F$ are unknown functions of $t_{(0)}$, $t_{(1)}$, $x_{(0)}$ and $x_{(1)}$, so that the space-time symmetries are obtained as $\delta t_{(0)} = \eta_{(0)}$, $\delta t_{(1)}=\eta_{(1)}$, $\delta x_{(0)}^a = \xi_{(0)}^a$, $\delta x_{(1)}^a = \xi_{(1)}^a$, and then $\delta L_2 = \frac{\mbox{d}}{\mbox{d}\tau}\delta F$.
The equation $\dot G=0$ allows one to write the following Killing equations \cite{Cariglia:2014dwa,Batlle:2016iel}
for the space-time symmetries of $S_2$,
\begin{align}
& \partial_a^{0} \eta_{(0)} =0,\ \ \partial^{1} \eta_{(0)} = 0, \ \ \partial_a^{1}\eta_{(0)}=0, \label{eq1}
\\
& \partial_a^{1}\eta_{(1)} = 0,\label{eq2}
\\
& \partial^{1} \xi_{(0)}^a = 0, \ \ \partial_a^{1} \xi_{(0)}^b =0, \label{eq3}
\\
& \partial^{0} \delta F = 0, \ \ \partial^{1}\delta F =0, \label{eq4}
\\
& \frac{m}{c^2} \partial^{1} \xi_{(1)a} = \partial_a^{1}\delta F, \label{eq5}
\\
& \partial_a^{1} \xi_{(1)b} + \partial_b^{1} \xi_{(1)a} = \delta_{ab} \partial^{1}\eta_{(1)}, \label{eq6}
\\
& \frac{m}{c^2} \partial^{0} \xi_{(0)a} = \partial_a^{1}\delta F, \label{eq7}
\\
& \frac{m}{c^2} \partial^{0} \xi_{(1)a} = \partial_a^{0} \delta F, \label{eq8}
\\
& \partial_a^{0} \xi_{(0)b} + \partial_b^{1} \xi_{(1)a} = \delta_{ab} \partial^{0}\eta_{(0)}, \label{eq9}
\\
& \partial_a^{0} \xi_{(1)b} + \partial_b^{0} \xi_{(1)a} = \delta_{ab} \partial^{0}\eta_{(1)}, \label{eq10}
\\
& \partial^{0}\eta_{(0)} = 2 \partial^{1} \eta_{(1)}, \label{eq11}
\\
& \partial_a^{0} \eta_{(1)} = \partial^{1} \xi_{(1)a}, \label{eq12}
\end{align}
for $a, b=1,\ldots, d$, and with
$
\partial^{0} = \frac{\partial}{\partial t_{(0)}}, \ \
\partial^{1} = \frac{\partial}{\partial t_{(1)}}, \ \
\partial_a^{0} = \frac{\partial}{\partial x_{(0)}^a}, \ \
\partial_a^{1} = \frac{\partial}{\partial x_{(1)}^a}.
$
These PDEs can be integrated starting from the trivial ones (\ref{eq1})--(\ref{eq4}), and one gets the unique solution
given by
\begin{align}
\eta_{(0)}(t_{(0)}) &= \frac{4}{3}\lambda t_{(0)} + \delta_{(0)},\label{eta0}\\
\eta_{(1)}(t_{(0)},t_{(1)},x_{(0)}) &= \frac{2}{3} \lambda t_{(1)} + v_{(0)}^a x_{(0)a} + \mu t_{(0)}^2 + \lambda_{(1)} t_{(0)} + \delta_{(1)},\label{eta1}\\
\xi_{(0)a}(t_{(0)},x_{(0)}) &= \lambda x_{(0)a} + v_{(0)a} t_{(0)} + \omega_{(0)ab} x_{(0)}^b + \varepsilon_{(0)a}, \label{xii}\\
\xi_{(1)a}(t_{(0)},t_{(1)},x_{(0)},x_{(1)}) &= \frac{1}{3} \lambda x_{(1)a} + v_{(0)a} t_{(1)} + \mu t_{(0)} x_{(0)a} + v_{(1)a} t_{(0)}\nonumber\\
& + \frac{1}{2} \lambda_{(1)} x_{(0)a} + \omega_{(0)ab} x_{(1)}^b + \omega_{(1)ab} x_{(0)}^b + \varepsilon_{(1)a}, \label{zetai}\\
\delta F(x_{(0)},x_{(1)}) &= \frac{m}{2c^2} \left( \mu x_{(0)}^ax_{(0)a} + 2\ v_{(0)a} x_{(1)}^a + 2\ v_{(1)a} x_{(0)}^a \right), \label{F}
\end{align}
with $\lambda$, $\lambda_{(1)}$, $\mu$, $\delta_{(0)}$, $\delta_{(1)}$, $\varepsilon_{(0)}^a$, $\varepsilon_{(1)}^a$,
$v_{(0)}^a$, $v_{(1)}^a$, $\omega_{(0)}^{ab}$, and $\omega_{(1)}^{ab}$ arbitrary constants, where
$\omega_{(0)ab}=-\omega_{(0)ba}$, $\omega_{(1)ab}=-\omega_{(1)ba}$.
The non-zero (or non-constant) value of $\delta F$ is associated to the fact that we dropped the total derivative $-\frac{m}{c^2} \dot{t}_{(2)}$ from the action $S_{(2)}$. Had we kept that term, we would have obtained $\delta t_{(2)} = -\frac{c^2}{m} \delta F$ (plus an arbitrary constant corresponding to shifts in $t_{(2)}$).
The generator of the point transformations is
\begin{align}
G =& - E^{(0)} \left(
\frac{4}{3}\lambda t_{(0)} + \delta_{(0)}
\right)
-E^{(1)} \left(
\frac{2}{3} \lambda t_{(1)} + v_{(0)}^a x_{(0)a} + \mu t_{(0)}^2 + \lambda_{(1)} t_{(0)} + \delta_{(1)}
\right)
\nonumber\\
& + p^{(0)a} \left(
\lambda x_{(0)a} + v_{(0)a} t_{(0)} + \omega_{(0)ab} x_{(0)}^b + \varepsilon_{(0)a}
\right)
\nonumber\\
& + p^{(1)a} \left(
\frac{1}{3} \lambda x_{(1)a} + v_{(0)a} t_{(1)} + \mu t_{(0)} x_{(0)a} + v_{(1)a} t_{(0)}
\right.\nonumber\\
&\hspace{1.2cm}
\left. + \frac{1}{2} \lambda_{(1)} x_{(0)a} + \omega_{(0)ab} x_{(1)}^b + \omega_{(1)ab} x_{(0)}^b + \varepsilon_{(1)a}
\right)
\nonumber\\
& - \frac{m}{2c^2} \left(
\mu x_{(0)}^ax_{(0)a} + 2\ v_{(0)a} x_{(1)}^a + 2\ v_{(1)a} x_{(0)}^a
\right),
\label{gen2d}
\end{align}
from which the individual generators can be defined,
\begin{align}
\lambda \to\ & D = - \frac{4}{3}E^{(0)} t_{(0)} -\frac{2}{3} E^{(1)} t_{(1)} + p_a^{(0)} x_{(0)}^a + \frac{1}{3} p^{(1)}_a x_{(1)}^a,\label{D}\\
\delta_{(0)} \to\ & H^{(0)} = -E^{(0)}, \label{H0}\\
\delta_{(1)} \to\ & H^{(1)} = -E^{(1)}, \label{H1}\\
\varepsilon_{(0)a} \to\ & P^{(0)}_a = p^{(0)}_a,\label{Pi}\\
\varepsilon_{(1)a} \to\ & P^{(1)}_a = p^{(1)}_a, \label{Qi}\\
\mu \to\ & C = -E^{(1)} t_{(0)} ^2 + t_{(0)} p^{(1)}_a x_{(0)}^a - \frac{1}{2} \frac{m}{c^2} x_{(0)}^ax_{(0)a}, \label{C0}\\
\lambda_{(1)} \to\ & D^{(1)} = - E^{(1)} t_{(0)} + \frac{1}{2} p^{(1)}_a x_{(0)}^a, \label{C1}\\
v_{(0)}^a \to\ &B^{(0)}_a = - E^{(1)} x_{(0)a} + t_{(0)} p^{(0)}_a + t_{(1)} p^{(1)}_a - \frac{m}{c^2} x_{(1)a},\label{Bi}\\
v_{(1)}^a \to\ &B^{(1)}_a= t_{(0)}p^{(1)}_a - \frac{m}{c^2} x_{(0)a}, \label{Ki}\\
\omega_{(0)}^{ab} \to\ & J^{(0)}_{ab} = p^{(0)}_a x_{(0)b} - p^{(0)}_b x_{(0)a}+ p^{(1)}_a x_{(1)b} - p^{(1)}_b x_{(1)a}, \label{Jij}\\
\omega_{(1)}^{ab} \to\ & J^{(1)}_{ab} = p^{(1)}_a x_{(0)b} - p^{(1)}_b x_{(0)a}. \label{Sij}
\end{align}
$H^{(0)}$, $H^{(1)}$, $P^{(0)}$ and $P^{(1)}$ generate the translations in $t_{(0)}$, $t_{(1)}$, $x_{(0)}$ and $x_{(1)}$, respectively.
$D$ is the generator of dilatations, $C$ is a generator of mixing-level special conformal transformations, $D^{(1)}$ generates mixing-level dilatations, and $B^{(0)}$ and $B^{(1)}$ are generators of mixing-level boosts. Finally, $J^{(0)}$ generates standard rotations at all levels, while $J^{(1)}$ rotates the $x_{(0)}$ but puts the result in the space of $x_{(1)}$.
\subsection*{Extended space-time algebra}
Using the Poisson brackets (\ref{PB}) for $M=1$
one can compute the dilatation weights $\Delta_X$ of the generators, defined by $\{X,D\} = \Delta_X X$ and given in Table \ref{tableDM1}. The standard dynamical exponent, given by the quotient of the weights of $H^{(0)}$ and $P^{(0)}$, is $z=4/3$, in contrast to the value $z=2$ of the $M=0$ case.
\begin{table}[tb]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
$X$ & $C$ & $H^{(0)}$ & $H^{(1)}$ & $P^{(0)}$ & $P^{(1)}$ & $B^{(0)}$ & $B^{(1)}$ & $D^{(1)}$ & $J^{(0)}$ & $J^{(1)}$ \\
\hline
$\Delta_X$ & $2$ & $-\frac{4}{3}$ & $-\frac{2}{3}$ & $-1$ & $-\frac{1}{3}$ & $\frac{1}{3}$ & $1$ & $\frac{2}{3}$ & $0$ & $\frac{2}{3}$ \\
\hline
\end{tabular}
\caption{Dilatation weights of the symmetry generators for $L_2$. Weights of $t_{(0)}$, $t_{(1)}$, $x_{(0)}$ and $x_{(1)}$ are the opposites of $H^{(0)}$, $H^{(1)}$, $P^{(0)}$ and $P^{(1)}$, respectively.}
\label{tableDM1}
\end{table}
The remaining non-zero brackets among generators are
\begin{align}
&
\{ H^{(0)}, C \} = - 2 D^{(1)},\quad
\{ H^{(0)}, D^{(1)} \} = -H^{(1)},\nonumber\\
&
\{ H^{(0)}, B^{(0)}_a \} = -P^{(0)}_a,\quad
\{ H^{(0)}, B^{(1)}_a \} = -P^{(1)}_a,\\
&
\{ H^{(1)}, B^{(0)}_a \} = -P^{(1)}_a,\\
&
\{ P^{(0)}_a, C \} = -B^{(1)}_a,\quad
\{ P^{(0)}_a, D^{(1)} \} = -\frac{1}{2}P^{(1)}_a,\quad
\{ P^{(0)}_a, B^{(0)}_b \} = -\delta_{ab} H^{(1)},\nonumber\\
&
\{ P^{(0)}_a, B^{(1)}_b \} = \frac{m}{c^2}\delta_{ab},\quad
\{ P^{(0)}_a, J^{(1)}_{bc} \} = \delta_{ab}P^{(1)}_c-\delta_{ac}P^{(1)}_b,\\
&
\{ P^{(1)}_a,B^{(0)}_b \} = \frac{m}{c^2}\delta_{ab},\quad
\{ D^{(1)}, B^{(0)}_a \} = -\frac{1}{2} B^{(1)}_a,\quad
\{ B^{(0)}_a,B^{(0)}_b \} = J^{(1)}_{ab},\\
&
\{ B^{(0)}_a, J^{(1)}_{bc} \} = \delta_{ab} B^{(1)}_c - \delta_{ac} B^{(1)}_b,
\end{align}
plus the rotation algebra of $J^{(0)}$ with itself and with all the generators carrying vector indices. The central extension $H^{(2)}=-m/c^2$ appears in both mixing-level translation-boost brackets $\{P^{(0)},B^{(1)}\}$ and $\{P^{(1)},B^{(0)}\}$, instead of in $\{P^{(0)},B^{(0)}\}$ as in the Schr\"odinger algebra.
Notice that $D^{(1)}$ transforms $B$ into $B$, $P$ into $P$ and $H$ into $H$, but changes the level from $^{(0)}$ to $^{(1)}$ in each case. In this sense it acts like a higher order dilatation, and we will refer to it as a generalized dilatation.
The generators $H^{(0)}$, $H^{(1)}$, $D$, $C$ and $D^{(1)}$ form a solvable, indecomposable 5-dimensional subalgebra. This is in contrast with the case of the Galilean particle, where $H=H^{(0)}$, $C$ and $D$ are a realization of the semisimple algebra $sl(2,\mathbb R)$.
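The brackets above follow from the conventions (\ref{PB}) and are straightforward to verify by computer algebra. The following sketch is ours, for $d=2$, checking a representative subset including the dilatation weight of $H^{(0)}$ and the central extension.

```python
import sympy as sp

m, c = sp.symbols('m c', positive=True)
d = 2
t0, t1, E0, E1 = sp.symbols('t0 t1 E0 E1')
x0 = sp.symbols('x0_1 x0_2'); p0 = sp.symbols('p0_1 p0_2')
x1 = sp.symbols('x1_1 x1_2'); p1 = sp.symbols('p1_1 p1_2')
coords = list(zip(x0, p0)) + list(zip(x1, p1))
times = [(t0, E0), (t1, E1)]   # momentum conjugate to t_(k) is -E^(k)

def pb(f, g):
    """Poisson bracket with {x, p} = 1 and {E, t} = 1."""
    s = sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q) for q, p in coords)
    s += sum(sp.diff(f, E)*sp.diff(g, t) - sp.diff(f, t)*sp.diff(g, E) for t, E in times)
    return sp.expand(s)

dot = lambda u, v: sum(a*b for a, b in zip(u, v))
D  = (-sp.Rational(4, 3)*E0*t0 - sp.Rational(2, 3)*E1*t1
      + dot(p0, x0) + sp.Rational(1, 3)*dot(p1, x1))
C  = -E1*t0**2 + t0*dot(p1, x0) - m/(2*c**2)*dot(x0, x0)
D1 = -E1*t0 + sp.Rational(1, 2)*dot(p1, x0)
H0, H1 = -E0, -E1
B1 = [t0*p1[a] - m/c**2*x0[a] for a in range(d)]
```

The checks confirm $\{D,C\}=-2C$, $\{H^{(0)},C\}=-2D^{(1)}$, $\{H^{(0)},D^{(1)}\}=-H^{(1)}$, the weight $\Delta_{H^{(0)}}=-4/3$, and the central term in $\{P^{(0)}_a,B^{(1)}_b\}$.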
Non-relativistic systems with higher order derivatives and with an extended phase space
$x_{(n)}$, $p^{(n)}$ with a single dilatation and Hamiltonian
have been proposed, see for example
\cite{Lukierski:2002ew,Stichel:2009sz,Stichel:2013kj,Gomis:2011dw}.
\section{Generalized Schr\"odinger algebras}
\label{genSA}
We could now proceed to the $M=2$ action and follow the same procedure as before. However, for higher $M$ the computations rapidly become very involved and, in particular, solving the resulting system of PDEs associated to the conservation of $G$ requires the use of computer algebra packages.
Instead, in order to obtain results for arbitrary $M$, we rely on the knowledge of the $M+1$ constraints presented in Section \ref{can_act} and propose a generalization of the extended generators found for $M=0$ and $M=1$. Invariance of the constraints under the extended generators,
together with the quasi-invariance of the kinetic terms in the canonical action, justifies this approach.
The form of the generators is further validated by the closure of the Poisson bracket algebra of the generators.
The generators of symmetries of $L_{M+1}$, $M=0,1,\ldots$, that we propose are
\begin{align}
D&= \frac{1}{2M+1}\sum_{l=0}^M (2M+1-2l) p_a^{(l)}x_{(l)}^a - \frac{1}{2M+1}\sum_{l=0}^M (2M+2-2l)E^{(l)}t_{(l)},\\
D^{(k)} &= \frac{1}{2(M+1-k)} \sum_{l=k}^M (2(M-l)+1) p_a^{(l)} x_{(l-k)}^a
\nonumber\\
&\quad\quad -\frac{1}{M+1-k} \sum_{l=k}^M (M+1-l) E^{(l)}t_{(l-k)},\\
C &= - E^{(M)} t_{(0)}^2 + t_{(0)} p_a^{(M)} x_{(0)}^a - \frac{m}{2c^{2M}} x_{(0)}^a x_{(0)a},\\
B_a^{(k)} &= \sum_{l=k}^M p_a^{(l)} t_{(l-k)} - \sum_{l=k}^{M-1} E^{(l+1)}x_{(l-k)a} - \frac{m}{c^{2M}} x_{(M-k)a},\\
H^{(k)} &= -E^{(k)},\\
P_a^{(k)} &= p_a^{(k)},\\
J_{ab}^{(k)} &= \sum_{l=k}^M \left(
p_a^{(l)}x_{(l-k)b} - x_{(l-k)a}p_b^{(l)}
\right),
\label{fullgen}
\end{align}
where $k=0,\ldots,M$ for all the families of generators, except for the $D^{(k)}$, for which $k=1,\ldots,M$. One thus has $2M+3$ scalar generators $D$, $D^{(k)}$, $C$ and $H^{(k)}$, plus $2(M+1)d$ generators $P^{(k)}_a$, $B^{(k)}_a$ and $(M+1)\frac{d(d-1)}{2}$
generators $J^{(k)}_{ab}$, yielding a grand total of
\begin{equation}
2M+3+(M+1)\frac{d(d+3)}{2}
\end{equation}
generators.
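The count can be cross-checked with a couple of lines (a sketch of ours):

```python
def generator_count(M, d):
    scalars = 1 + M + 1 + (M + 1)        # D; D^(k), k=1..M; C; H^(k), k=0..M
    vectors = 2*(M + 1)*d                # P^(k)_a and B^(k)_a
    rotations = (M + 1)*d*(d - 1)//2     # J^(k)_{ab}, a < b
    return scalars + vectors + rotations
```

For $M=0$, $d=3$ this gives the 12 generators of the Schr\"odinger group (without the central mass).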
Using the Poisson brackets (\ref{PB}) one can compute the action of these generators on the canonical variables $x_{(k)}$, $p^{(k)}$, $t_{(k)}$, $E^{(k)}$, $k=0,1,\ldots,M$, and the results are given in Appendix \ref{TVC}. For boosts and rotations the transformations are, after multiplying by the corresponding parameters, those given in (\ref{Ep_trans}).
That the above generators correspond to symmetries of $S_{M+1}^c$ is proved as follows.
First one can check (see Appendix \ref{IVC}) that the constraints $\phi^{(k)}$, $k=0,1,\ldots,M$ are invariant under the above transformations, that is,
\begin{equation}
\delta^X \phi^{(k)} = \{\phi^{(k)},X\} =0
\end{equation}
for $X$ in the set $\{
D, D^{(j)}, C, H^{(j)},P_a^{(j)}, B_a^{(j)},J_{ab}^{(j)}
\}
$.
Furthermore, using the results in Appendix \ref{TVC} and the commutation of the transformations with the derivation with respect to $\tau$, one can prove that
$$
K = \sum_{k=0}^M\left(- E^{(k)}\dot{t}_{(k)}+ p^{(k)}_a \dot{x}_{(k)}^a\right)
$$
is also invariant, except for the transformations corresponding to the boosts and the special conformal transformation, for which\footnote{We use $\delta_a^{B,j}$ to denote the transformation under $B^{(j)}_a$. This kind of notation will also be employed for the other generators later on.}
\begin{align}
\delta_a^{B,j} K &= \frac{\mbox{d}}{\mbox{d}\tau}\left( \frac{m}{c^{2M}} x_{(M-j)a} \right),\\
\delta^C K &= \frac{\mbox{d}}{\mbox{d}\tau}\left( \frac{m}{2c^{2M}}x_{(0)}^a x_{(0)a} \right).
\end{align}
This means that the canonical Lagrangian in $S_{M+1}^c$ is not invariant under the full set of transformations, but it is quasi-invariant,
\begin{equation}
\delta^G L_{M+1} = \frac{\mbox{d}}{\mbox{d}\tau} \delta F_{(M)},
\end{equation}
with
\begin{equation}
\delta F_{(M)} = \frac{m}{2c^{2M}} \left(
\mu x_{(0)}^a x_{(0)a} + 2 \sum_{k=0}^M v_{(k)}^a x_{(M-k)a}
\right),
\end{equation}
where $\mu$ is the parameter of the special conformal transformation and the $v_{(k)}^a$ are the boosts parameters. This agrees with the known result for $M=0$ and the result for $M=1$ obtained in this paper and, as in those cases, $\delta F_{(M)}$ is associated with the dropping of the total derivative
$-m/c^{2M}\dot{t}_{(M+1)}$ in $S_{(M+1)}$.
This concludes the proof that our generators yield symmetry transformations of the canonical action. Furthermore, they form a closed algebra.
Indeed, the brackets of the $D$ and $D^{(k)}$ with all the generators are
\begin{align}
\{D, D^{(k)}\} &= -\frac{2k}{2M+1} D^{(k)},\ k=1,\ldots,M,\\
\{D,C\} &= -2 C,\\
\{D,B_a^{(k)}\} &= - \frac{2k+1}{2M+1} B_a^{(k)}, \ k=0,\ldots,M,\\
\{D,H^{(k)}\} &= \frac{2M+2-2k}{2M+1} H^{(k)}, \ k=0,\ldots,M,\label{DHk}\\
\{D,P_a^{(k)}\} &= \frac{2M+1-2k}{2M+1} P_a^{(k)},\ k=0,\ldots,M,\label{DPk}\\
\{D,J_{ab}^{(k)}\} &= -\frac{2k}{2M+1} J_{ab}^{(k)},\ k=0,\ldots,M,\\
\{D^{(k)},D^{(j)}\} &= (k-j) \frac{M+1-k-j}{(M+1-k)(M+1-j)} D^{(k+j)}, \ k,j=1,\ldots,M,\nonumber\\
&\text{provided that $k+j\leq M$, zero otherwise},\\
\{D^{(k)},C\} &= 0,\ k=1,\ldots,M,
\end{align}
\begin{align}
\{D^{(k)},B_a^{(j)}\} &= -\frac{2j+1}{2(M+1-k)} B_a^{(k+j)},\ k=1,\ldots,M,\ j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},\\
\{D^{(k)},H^{(j)}\} &= \frac{M+1-k-j}{M+1-k} H^{(k+j)},\ k=1,\ldots,M,\ j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},\\
\{D^{(k)},P_a^{(j)}\} &= \frac{2M+1-2(k+j)}{2(M+1-k)} P_a^{(k+j)},\ k=1,\ldots,M,\ j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},\\
\{D^{(k)},J_{ab}^{(j)}\} &= -\frac{j}{M+1-k} J_{ab}^{(k+j)},\ k=1,\ldots,M,\ j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise}.
\end{align}
From (\ref{DHk}) and (\ref{DPk}) for $k=0$ one can see that the standard dynamical exponent associated with the Galilean variables $t_{(0)}, x_{(0)}$ depends on $M$ and is given by
\begin{equation}
z_M = \frac{2M+2}{2M+1},\ \ M=0,1,2,\ldots
\label{ZM}
\end{equation}
One has $\lim_{M\to\infty} z_M=1$, but (\ref{ZM}) takes into account only a small part of the algebra relations, and it is not obvious what this limit implies at the level of the algebra itself.
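For concreteness, the first few values of (\ref{ZM}) can be tabulated with a few lines of exact arithmetic (a sketch; the helper name is ours):

```python
from fractions import Fraction

def z(M):
    return Fraction(2 * M + 2, 2 * M + 1)   # the dynamical exponent z_M

values = [z(M) for M in range(4)]
assert values == [Fraction(2), Fraction(4, 3), Fraction(6, 5), Fraction(8, 7)]
# z_0 = 2 is the ordinary Schroedinger exponent; z_M decreases towards 1
assert all(z(M) > z(M + 1) > 1 for M in range(100))
```

So $z_0=2$ reproduces the ordinary Schr\"odinger exponent, and $z_M$ decreases monotonically towards $1$.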
As we already noticed for $M=1$, the above relations show that $D^{(k)}$ are a higher order version of the dilatation $D$, increasing by $k$ the levels of the generators. Furthermore, if we redefine
\begin{align}
L_k &= (M+1-k) D^{(k)}, \ k=1,\ldots,M,\\
L_0 &= \frac{2M+1}{2} D,
\end{align}
one has the non-negative part
of a truncated Witt algebra,
\begin{equation}
\{L_n, L_m\} = (n-m) L_{n+m},\quad n,m=0,\ldots,M,
\label{witt}
\end{equation}
provided that $n+m\leq M$, and zero otherwise. The remaining brackets are
\begin{align}
\{B_a^{(k)},B_b^{(j)}\} &= J_{ab}^{(k+j+1)},\ k,j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j+1\leq M$, zero otherwise},\\
\{J_{ab}^{(k)},J_{cd}^{(j)}\} &= \delta_{ad}J_{bc}^{(k+j)} +
\delta_{bc}J_{ad}^{(k+j)} - \delta_{ac}J_{bd}^{(k+j)} - \delta_{bd}J_{ac}^{(k+j)},\
k,j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},\\
\{B_a^{(k)},J_{bc}^{(j)}\} &= \delta_{ab} B_c^{(k+j)} - \delta_{ac} B_b^{(k+j)},
k,j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},\\
\{P_a^{(k)},J_{bc}^{(j)}\} &= \delta_{ab} P_c^{(k+j)} - \delta_{ac} P_b^{(k+j)},
k,j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},
\end{align}
\begin{align}
\{H^{(k)},B_a^{(j)}\} &= - P_a^{(k+j)},
k,j=0,\ldots,M,
\nonumber\\ &\text{provided that $k+j\leq M$, zero otherwise},\\
\{P_a^{(k)},B_b^{(j)}\} &= -\delta_{ab} H^{(k+j+1)} + \frac{m}{c^{2M}}\delta_{ab} \delta^{k+j}_M,\ k,j=0,\ldots,M,
\nonumber\\ &\text{where the $H^{(k+j+1)}$ term is present only for $k+j+1\leq M$},\\
\{C,B_a^{(k)}\} &= 0, \ k=0,\ldots,M,\\
\{C,J_{ab}^{(k)}\} &=0, \ k=0,\ldots,M,\\
\{C,P_a^{(k)}\} &= \delta^k_0 B_a^{(M)}, \ k=0,\ldots,M,\\
\{C, H^{(k)}\} &= \begin{cases}
D & \text{if $M=0$ (so that $k=0$ is the only possible value)},\\
2 \delta^k_0 D^{(M)} & \text{if $M\geq 1$}, \ k=0,\ldots,M.
\end{cases} \label{CH}
\end{align}
The special behaviour of the last relation for $M=0$ is what makes the $sl(2,\mathbb R)$ algebra of $D,C,H^{(0)}$ appear for $M=0$, while for $M>0$ the $2M+3$ generators $C,D,D^{(k)},H^{(k)}$ form a solvable algebra (this is because $D$ does not appear on the right-hand sides for $M>0$, $C$ disappears after the first derived algebra, and the $D^{(k)}$ and $H^{(k)}$ drop out in successive derived algebras). The central extension $H^{(M+1)}=m/c^{2M}$ appears in the brackets between translations and boosts whose indices add up to $M$.
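The truncated Witt relations (\ref{witt}) can be verified mechanically from the $\{D,D^{(k)}\}$ and $\{D^{(k)},D^{(j)}\}$ brackets listed above. The following sketch (exact rational arithmetic; all helper names are ours) encodes each bracket as a coefficient times $D^{(k+j)}$ and checks $\{L_n,L_m\}=(n-m)L_{n+m}$ for a sample level:

```python
from fractions import Fraction as F

M = 4  # sample level; any M >= 1 gives the same conclusion

def bracket_D(k, j):
    """{D^(k), D^(j)} as (coefficient, level of D on the RHS); k = 0 denotes D."""
    if k == 0 and j == 0:
        return F(0), None
    if k == 0:                 # {D, D^(j)} = -(2j/(2M+1)) D^(j)
        return F(-2 * j, 2 * M + 1), j
    if j == 0:                 # antisymmetry of the bracket
        c, t = bracket_D(j, k)
        return -c, t
    if k + j > M:              # truncation
        return F(0), None
    return F(k - j) * F(M + 1 - k - j, (M + 1 - k) * (M + 1 - j)), k + j

def L_coeff(n):                # L_n = L_coeff(n) * D^(n), with D^(0) := D
    return F(2 * M + 1, 2) if n == 0 else F(M + 1 - n)

# verify {L_n, L_m} = (n - m) L_(n+m), zero when n + m > M
for n in range(M + 1):
    for m in range(M + 1):
        coeff, target = bracket_D(n, m)
        lhs = L_coeff(n) * L_coeff(m) * coeff
        if n + m > M or n == m:
            assert lhs == 0
        else:
            assert target == n + m and lhs == (n - m) * L_coeff(n + m)
```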
Although some of the generators can be redefined so that some of the structure constants become independent of the level $M$, as was done for $D$ and the $D^{(k)}$, there are structure constants for which the dependence on $M$ cannot be removed; that is, increasing $M$ not only brings in new generators but also changes the brackets of some of the old ones.
The structure constants of the brackets among $H^{(k)}$, $P^{(k)}$, $B^{(k)}$ and $J^{(k)}$, that is, those of the original algebra (\ref{KMGal}), as well as those of $C$ with
$P^{(k)}$, $B^{(k)}$ and $J^{(k)}$, do not depend on $M$ (the $1/c^{2M}$ can be absorbed into $m$). Using the $L_n$ instead of $D$, $D^{(k)}$, the remaining brackets, besides (\ref{witt}), are
\begin{align}
\{L_n, B_a^{(k)} \} &= - \frac{2k+1}{2}B_a^{(k+n)},\label{LB}\\
\{L_n, P_a^{(k)} \} &= \frac{2M+1-2(n+k)}{2}P_a^{(k+n)},\label{LP}\\
\{L_n, J_{ab}^{(k)} \} &= - k J_{ab}^{(k+n)},\label{LJ}\\
\{L_n, H^{(k)} \} &= (M+1-(n+k))H^{(k+n)},\label{LH}\\
\{L_n, C \} &= - \delta_{n0}(2M+1)C,\label{LC}\\
\{C, H^{(k)} \} &= 2\delta_0^k L_M.\label{CHb}
\end{align}
From this it is clear that, for instance, one cannot redefine the $P^{(k)}$ so that the dependence on $M$ disappears from (\ref{LP}), and the same happens for (\ref{LH}) and (\ref{LC}), while (\ref{CHb}) has the additional problem that the generator on its right-hand side depends itself on $M$.
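As a consistency check, the coefficient in (\ref{LP}) follows from the $\{D,P_a^{(k)}\}$ and $\{D^{(k)},P_a^{(j)}\}$ brackets together with the definition of the $L_n$; this can be confirmed mechanically (a sketch with exact arithmetic; helper names ours):

```python
from fractions import Fraction as F

M = 3   # sample level

def DP(n, k):
    """Coefficient of P^(n+k) in {D^(n), P^(k)}; n = 0 denotes D itself."""
    if n == 0:
        return F(2 * M + 1 - 2 * k, 2 * M + 1)
    return F(2 * M + 1 - 2 * (n + k), 2 * (M + 1 - n))

def L(n):   # L_n = L(n) * D^(n), with D^(0) := D
    return F(2 * M + 1, 2) if n == 0 else F(M + 1 - n)

# (LP): {L_n, P^(k)} = (2M + 1 - 2(n+k))/2 * P^(k+n), for n + k <= M
for n in range(M + 1):
    for k in range(M + 1):
        if n + k <= M:
            assert L(n) * DP(n, k) == F(2 * M + 1 - 2 * (n + k), 2)
```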
\section{Generalized Schr\"odinger equation and projective invariance}
\label{GS_PI}
The quantization of the systems that we have considered can be performed by imposing the canonical constraints on the physical states of the corresponding Hilbert space. For $M=1$ we have two constraints and we obtain a set of two generalized Schr\"odinger equations,
\begin{align}
-\frac{\partial^2 \Psi}{\partial x_{(0)a}\partial x_{(1)}^a} + \frac{1}{2} \frac{\partial^2 \Psi}{\partial t_{(1)}^2}
-i \frac{m}{c^2}\frac{\partial\Psi}{\partial t_{(0)}} & =0,\label{Sch1}\\
-\frac{1}{2} \frac{\partial^2 \Psi}{\partial x_{(1)a}\partial x_{(1)}^a}-i \frac{m}{c^2}\frac{\partial\Psi}{\partial t_{(1)}} & =0,\label{Sch2}
\end{align}
where $\Psi(t_{(0)},t_{(1)},x_{(0)},x_{(1)})$ is the wave function of the physical state in coordinate representation. Working in $d=1$ for simplicity and looking for solutions of the form
$$
\Psi(t_{(0)},t_{(1)},x_{(0)},x_{(1)})=\Psi_0(t_{(0)},x_{(0)})\Psi_1(t_{(1)},x_{(1)}),
$$
equation (\ref{Sch2}) can be solved by separation of variables with separation constant $\varepsilon$ to obtain
\begin{align}
\Psi_1(t_{(1)},x_{(1)}) &= A_+ e^{-i\left(\varepsilon t_{(1)} - \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} \right)}
+A_- e^{-i\left(\varepsilon t_{(1)} +\frac{\sqrt{2m\varepsilon}}{c} x_{(1)} \right)}\nonumber\\
&\equiv \Psi_+(t_{(1)},x_{(1)})+ \Psi_-(t_{(1)},x_{(1)}),
\label{psi1}
\end{align}
where $A_{\pm}$ are arbitrary constants.
The separation constant $\varepsilon$ characterizes the dependence of the wave function on $t_{(1)}$, $x_{(1)}$, and is the common eigenvalue of the operators corresponding to $E^{(1)}$ and $\frac{c^2}{2m}P^{(1)2}$, that is,
$i\partial_{t_{(1)}}$ and $-\frac{c^2}{2m}\partial^2_{x_{(1)}}$, respectively.
Each of the $\Psi_\pm(t_{(1)},x_{(1)})$ can be substituted into (\ref{Sch1}) and one obtains a first-order PDE for $\Psi_0$,
\begin{equation}
-i \left( \pm \frac{\sqrt{2m\varepsilon}}{c} \right)
\frac{\partial \Psi_0}{\partial x_{(0)}} - \frac{1}{2}\varepsilon^2 \Psi_0 - i \frac{m}{c^2} \frac{\partial \Psi_0}{\partial t_{(0)}}=0.
\label{Sch3}
\end{equation}
Equation (\ref{Sch3}) can be solved by the method of characteristics. Imposing the initial condition $\Psi_0(0,x_{(0)}) =F(x_{(0)})$ at $t_{(0)}=0$, with $F$ an arbitrary smooth function, one obtains,
\begin{equation}
\Psi_0(t_{(0)},x_{(0)}) = F_\pm\left(
x_{(0)} - \frac{c}{m} \left( \pm \sqrt{2m\varepsilon} \right) t_{(0)}
\right)
e^{i \frac{c^2}{2m}\varepsilon^2 t_{(0)}},
\end{equation}
where $F_+$ and $F_-$ are the arbitrary functions corresponding to the $\pm$ signs in (\ref{Sch3}). Finally, the total wave function solution to the system of Schr\"odinger equations is
\begin{align}
\Psi_\varepsilon(t_{(0)},t_{(1)},x_{(0)},x_{(1)}) &= F_+\left( x_{(0)} - \frac{c}{m}\sqrt{2m\varepsilon}\ t_{(0)} \right)
e^{-i\left( \varepsilon t_{(1)} - \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} - \frac{c^2}{2m} \varepsilon^2 t_{(0)}
\right)}
\nonumber\\
&+
F_-\left( x_{(0)} +\frac{c}{m}\sqrt{2m\varepsilon}\ t_{(0)} \right)
e^{-i\left( \varepsilon t_{(1)} + \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} - \frac{c^2}{2m}\varepsilon^2 t_{(0)}
\right)},
\label{solM1}
\end{align}
where we have added an $\varepsilon$ subscript to indicate the dependence on the parameter $\varepsilon$. Alternatively, we can identify the solution using ${p}$ to refer to the two eigenvalues ${\pm p}$ of the momentum operator $-i\partial_{x_{(1)}}$ for the two components, with $\varepsilon=c^2/(2m){p}^2$, and write
\begin{align}
\Psi_{p}(t_{(0)},t_{(1)},x_{(0)},x_{(1)}) &=
F_+\left( x_{(0)} - \frac{c^2}{m}{p} t_{(0)} \right)
e^{-i\left( \frac{c^2}{2m}{p}^2 t_{(1)} - {p} x_{(1)}
- \frac{c^6}{8m^3} {p}^4 t_{(0)}
\right)}
\nonumber\\
&+
F_-\left( x_{(0)} + \frac{c^2}{m}{p} t_{(0)} \right)
e^{-i\left( \frac{c^2}{2m}{p}^2 t_{(1)} + {p} x_{(1)}
- \frac{c^6}{8m^3} {p}^4 t_{(0)}
\right)}.
\label{solM1b}
\end{align}
This method can be repeated to solve the system of Schr\"odinger equations for any $M>1$. The constraint for $k=M$ yields an equation for the dependence on $t_{(M)}, x_{(M)}$ which is of second order in $x_{(M)}$ and can be solved by separation of variables, yielding left- and right-travelling waves in $t_{(M)}, x_{(M)}$. Each of the two solutions can then be substituted into the equation for $k=M-1$, and one gets a first-order PDE in $t_{(M-1)}, x_{(M-1)}$, which can be solved by the method of characteristics. The solutions obtained can be substituted into the equation for $k=M-2$, which is again of first order in $t_{(M-2)}, x_{(M-2)}$, and the procedure can be iterated until the equation for $k=0$ is reached.
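For $M=1$, the explicit solution (\ref{solM1b}) can be spot-checked numerically. Taking $F_\pm(u)=e^{iku}$ turns $\Psi_\pm$ into pure exponentials, so every derivative in (\ref{Sch1})--(\ref{Sch2}) becomes multiplication by an $i$-factor; the following plain-Python sketch (sample values; variable names ours) confirms that both residuals vanish:

```python
# Plane-wave check of (Sch1)-(Sch2) in d=1 with F_pm(u) = exp(i k u).
m, c, p, k = 1.3, 2.0, 0.7, 0.4     # arbitrary sample values

v    = c**2 * p / m                 # velocity in the argument of F_pm
eps  = c**2 * p**2 / (2 * m)        # separation constant
beta = c**6 * p**4 / (8 * m**3)     # coefficient of t_(0) in the phase

for s in (+1, -1):                  # s = +1 -> Psi_+, s = -1 -> Psi_-
    dx0 = 1j * k                    # action of d/dx_(0) on exp(i S)
    dx1 = 1j * s * p                # action of d/dx_(1)
    dt1 = -1j * eps                 # action of d/dt_(1)
    dt0 = 1j * (beta - s * k * v)   # action of d/dt_(0)
    sch1 = -dx0 * dx1 + 0.5 * dt1**2 - 1j * (m / c**2) * dt0
    sch2 = -0.5 * dx1**2 - 1j * (m / c**2) * dt1
    assert abs(sch1) < 1e-12 and abs(sch2) < 1e-12
```

The cancellations rest on the exact identities $(m/c^2)\beta=\varepsilon^2/2$ and $(m/c^2)\varepsilon=p^2/2$, independent of the sample values.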
\subsection*{Projective phase}
We will discuss here the transformation properties of the above wave functions and
Schr\"odinger equations and the associated projective phases
under the post-Galilean transformations.
For simplicity we will work again in $d=1$. We will discuss the case of expansions in detail and summarize the results for the other transformations. Since we have $d=1$ we do not consider rotations and generalized rotations. Also, it should be taken into account that the finite transformations presented below are those of the individual generators.
\begin{enumerate}
\item\textbf{Expansions $C$.}
The expansions for $M=1$ define a vector field in the cotangent manifold,
with coordinates $(t_{(0)},x_{(0)}, t_{(1)},x_{(1)},E^{(0)},E^{(1)},p^{(0)},p^{(1)})$,
given by
\begin{align}
X^C &= t_{(0)}x_{(0)} \frac{\partial}{\partial x_{(1)}} + t_{(0)}^2 \frac{\partial}{\partial t_{(1)}} +
\left(-t_{(0)} p^{(1)} + \frac{m}{c^2} x_{(0)} \right) \frac{\partial}{\partial p^{(0)}}
\nonumber\\
&+\left(
-2E^{(1)} t_{(0)} + p^{(1)} x_{(0)}
\right)\frac{\partial}{\partial E^{(0)}},
\label{XC}
\end{align}
which, after a trivial integration, yields the finite expansions
\begin{align}
\hat{x}_{(0)} = x_{(0)}, &\quad \hat{p}^{(0)} =p^{(0)} + \mu \left(-t_{(0)} p^{(1)} + \frac{m}{c^2} x_{(0)} \right),\\
\hat{x}_{(1)} = x_{(1)} + \mu t_{(0)}x_{(0)} , &\quad \hat{p}^{(1)} = p^{(1)},\\
\hat{t}_{(0)} = t_{(0)}, &\quad \hat{E}^{(0)} = E^{(0)} + \mu \left(
-2E^{(1)} t_{(0)} + p^{(1)} x_{(0)}
\right),\\
\hat{t}_{(1)} = t_{(1)} + \mu t_{(0)}^2, &\quad \hat{E}^{(1)} = E^{(1)}.
\end{align}
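A simple check of the ``trivial integration'' claim is the one-parameter group law: acting with $\mu_1$ and then $\mu_2$ must coincide with a single expansion of parameter $\mu_1+\mu_2$, since all the shifts depend only on invariant variables (a numerical sketch with sample values; names ours):

```python
# Group-law check of the finite M=1 expansions.
m, c = 1.7, 3.0   # sample constants

def expansion(mu, z):
    t0, x0, t1, x1, E0, E1, p0, p1 = z
    return (t0, x0,
            t1 + mu * t0**2,
            x1 + mu * t0 * x0,
            E0 + mu * (-2 * E1 * t0 + p1 * x0), E1,
            p0 + mu * (-t0 * p1 + (m / c**2) * x0), p1)

z = (0.3, -1.1, 0.8, 0.25, 1.4, -0.6, 0.9, 0.5)
lhs = expansion(0.37, expansion(-0.81, z))
rhs = expansion(0.37 - 0.81, z)
assert all(abs(u - w) < 1e-12 for u, w in zip(lhs, rhs))
```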
Notice that $t_{(0)}$ and $x_{(0)}$ do not transform under $C$, while for the standard
Schr\"odinger expansions ($M=0$) one has
\begin{equation}
\hat{t}_{(0)} = \frac{t_{(0)}}{1-\mu t_{(0)}},\quad
\hat{x}_{(0)} = \frac{x_{(0)}}{1-\mu t_{(0)}}.
\end{equation}
Let $\Psi$ be a solution to (\ref{Sch1}) and (\ref{Sch2}), and let $\hat\Psi$ be the transformed solution under a finite $C$ transformation.
Assume that $\hat\Psi$ and $\Psi$ are related by
\begin{equation}
\hat\Psi(\hat{t}_{(0)},\hat{t}_{(1)},\hat{x}_{(0)},\hat{x}_{(1)}) =
e^{i\varphi(t_{(0)},t_{(1)},x_{(0)},x_{(1)})} \Psi(t_{(0)},t_{(1)},x_{(0)},x_{(1)}).
\end{equation}
If we demand that
$\hat\Psi$ satisfy (\ref{Sch1}) and (\ref{Sch2}) in the transformed coordinates, one gets,
after some algebra, the following set of PDEs for $\varphi$
\begin{equation}
\frac{\partial\varphi}{\partial t_{(1)}} =\frac{\partial\varphi}{\partial x_{(1)}}
=\frac{\partial\varphi}{\partial t_{(0)}} = 0,\quad
\frac{\partial\varphi}{\partial x_{(0)}} - \mu\frac{m}{c^2} x_{(0)}=0,
\end{equation}
whose solution is
\begin{equation}
\varphi(t_{(0)},t_{(1)},x_{(0)},x_{(1)}) = \mu \frac{m}{2c^2} x_{(0)}^2 + \text{constant}.
\end{equation}
Since the Jacobian
\begin{equation}
\frac{\partial(\hat{x}_{(1)},\hat{x}_{(0)})}{\partial({x}_{(1)},{x}_{(0)})}=
\left|
\begin{array}{cc}
1 & \mu t_{(0)} \\ 0 & 1
\end{array}
\right|=1
\end{equation}
is trivial, an imaginary part for the constant is not needed to compensate for a change in the measure in $x_{(0)},x_{(1)}$ space, and we can take it equal to zero.
Again, this is different from what happens in the $M=0$ case, for which the projective phase acquires an imaginary part,
\begin{equation}
\varphi(t_{(0)},x_{(0)}) = \frac{1}{2}\frac{m\mu}{1-\mu t_{(0)}} x_{(0)}^2
-\frac{i}{2} \log|1-\mu t_{(0)}|.
\end{equation}
One has then
\begin{equation}
|\hat\Psi|^2 = |1-\mu t_{(0)}||\Psi|^2,
\end{equation}
so that $|\hat\Psi|^2 \mbox{d}\hat{x}_{(0)} = |\Psi|^2 \mbox{d} x_{(0)}$, as desired.
Returning to the $M=1$ expansions, we have
\begin{equation}
\hat\Psi(\hat{t}_{(0)},\hat{t}_{(1)},\hat{x}_{(0)},\hat{x}_{(1)}) =
e^{i \mu \frac{m}{2c^2} x_{(0)}^2} \Psi(t_{(0)},t_{(1)},x_{(0)},x_{(1)}).
\label{PFCM1}
\end{equation}
Notice that the projective phase can also be obtained by iteration of the infinitesimal $\delta F$ corresponding to expansions,
\begin{equation}
\varphi = \sum_{n=1}^\infty \frac{1}{n!}\delta^nF,
\label{phiF}
\end{equation}
with $\delta^n F = \delta (\delta^{n-1}F)$. Since $\delta F=m/(2c^2)\mu x_{(0)}^2$ and $\delta x_{(0)}=0$ for expansions in the $M=1$ case, all contributions beyond the linear one vanish. For a general discussion of projective phases, central extensions and invariance up to a total derivative of the Lagrangian see \cite{levy1969group,marmo1988quasi,Silagadze:2011yj}.
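For the $M=0$ expansions, by contrast, the series (\ref{phiF}) does not terminate: with $\delta t_{(0)}=\mu t_{(0)}^2$ and $\delta x_{(0)}=\mu t_{(0)} x_{(0)}$ one finds $\delta^n F/n! = \frac{m}{2}\mu^n t_{(0)}^{n-1} x_{(0)}^2$, which resums to the real part of the $M=0$ phase given above. A small sketch with exact arithmetic (setting $m=1$; names ours) confirms the pattern:

```python
from fractions import Fraction as F
from math import factorial

# delta t = mu t^2 and delta x = mu t x act as a derivation on monomials:
#   delta(mu^a t^b x^c) = (b + c) mu^(a+1) t^(b+1) x^c.
# Start from delta F = (1/2) mu x^2 (units with m = 1) and iterate.
coeff, a, b, c = F(1, 2), 1, 0, 2
for n in range(1, 9):
    term = coeff / factorial(n)            # delta^n F / n!
    # the geometric series (1/2) mu x^2 / (1 - mu t) has coefficient 1/2
    # at every order mu^n t^(n-1) x^2
    assert term == F(1, 2) and (a, b, c) == (n, n - 1, 2)
    coeff, a, b = coeff * (b + c), a + 1, b + 1
```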
As a check, let us assume that $\Psi$ is a solution of (\ref{Sch1}), (\ref{Sch2}) given by (\ref{solM1}), and let us prove that then $\hat\Psi$ has the same form. Using (\ref{solM1}), the right-hand side of (\ref{PFCM1}) is
\begin{align}
&
F_+\left( x_{(0)} - \frac{c}{m}\sqrt{2m\varepsilon}\ t_{(0)} \right)
e^{-i\left( \varepsilon t_{(1)} - \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} - \frac{c^2}{2m} \varepsilon^2 t_{(0)}
\right)}e^{i \mu \frac{m}{2c^2} x_{(0)}^2}
\nonumber\\
&+
F_-\left( x_{(0)} +\frac{c}{m}\sqrt{2m\varepsilon}\ t_{(0)} \right)
e^{-i\left( \varepsilon t_{(1)} + \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} - \frac{c^2}{2m}\varepsilon^2 {t}_{(0)}
\right)}e^{i \mu \frac{m}{2c^2} x_{(0)}^2},\nonumber\\
&=
F_+\left( \hat{x}_{(0)} - \frac{c}{m}\sqrt{2m\varepsilon}\ \hat{t}_{(0)} \right)
e^{-i\left( \varepsilon t_{(1)} - \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} - \frac{c^2}{2m}\varepsilon^2 \hat{t}_{(0)}
\right)}e^{i \mu \frac{m}{2c^2} \hat{x}_{(0)}^2}
\nonumber\\
&+
F_-\left( \hat{x}_{(0)} +\frac{c}{m}\sqrt{2m\varepsilon}\ \hat{t}_{(0)} \right)
e^{-i\left( \varepsilon t_{(1)} + \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} - \frac{c^2}{2m} \varepsilon^2 \hat{t}_{(0)}
\right)}e^{i \mu \frac{m}{2c^2} \hat{x}_{(0)}^2},
\end{align}
where we have used that $\hat{t}_{(0)}=t_{(0)}$, $\hat{x}_{(0)} = x_{(0)}$. Expressing now $t_{(1)}$ and $x_{(1)}$ in terms of the transformed variables one gets
\begin{align}
\varepsilon t_{(1)} \mp \frac{\sqrt{2m\varepsilon}}{c} x_{(1)} &=
\varepsilon \hat{t}_{(1)} \mp \frac{\sqrt{2m\varepsilon}}{c} \hat{x}_{(1)}
-\mu \left(
\varepsilon \hat{t}_{(0)}^2 \mp \frac{\sqrt{2m\varepsilon}}{c} \hat{t}_{(0)}
\hat{x}_{(0)}
\right).
\end{align}
The extra terms can be combined with the one with $\hat{x}_{(0)}^2$ and one gets the perfect square
\begin{equation}
i\mu \frac{m}{2c^2} \left(
\hat{x}_{(0)} \mp \frac{c}{m}\sqrt{2m\varepsilon} \hat{t}_{(0)}
\right)^2,
\end{equation}
whose exponential can then be absorbed into $F_{\pm}$ to yield the corresponding $\hat{F}_\pm$ for $\hat\Psi$, giving it the same form as in (\ref{solM1}) but in transformed variables. Notice that the parameter $\varepsilon$ does not change when going to the transformed coordinates, since for a given solution it corresponds to the value of $E^{(1)}$, and under the expansions one has $\hat{E}^{(1)}=E^{(1)}$. This is not, however, true of some of the other transformations and the change of $E^{(1)}$, or of $\frac{c^2}{2m}p^{(1)2}$, must be taken into account.
\item\textbf{Boosts $B^{(0)}$.}
The finite transformations for parameter $v_{(0)}=v$ are in this case
\begin{align}
\hat{x}_{(0)} &= x_{(0)}+ v t_{(0)}, \\
\hat{p}^{(0)} &=p^{(0)} + v E^{(1)} +\frac{1}{2}v^2 p^{(1)} + \frac{1}{6}\frac{m}{c^2}v^3,\\
\hat{x}_{(1)} &= x_{(1)} + v t_{(1)}+\frac{1}{2}v^2 x_{(0)}+\frac{1}{6}v^3 t_{(0)},
\\
\hat{p}^{(1)} &= p^{(1)} + \frac{m}{c^2}v,\\
\hat{t}_{(0)} &= t_{(0)},\\
\hat{E}^{(0)} &= E^{(0)} + v p^{(0)} + \frac{1}{2}v^2 E^{(1)} + \frac{1}{6} v^3 p^{(1)} + \frac{1}{24}\frac{m}{c^2}v^4,\\
\hat{t}_{(1)} &= t_{(1)}+v x_{(0)} + \frac{1}{2}v^2 t_{(0)}, \\
\hat{E}^{(1)} &= E^{(1)}+ v p^{(1)} + \frac{1}{2}\frac{m}{c^2}v^2,
\end{align}
and the projective phase is
\begin{equation}
\varphi=\frac{m}{c^2}\left(
v x_{(1)} + \frac{1}{2} v^2 t_{(1)} + \frac{1}{6} v^3 x_{(0)} + \frac{1}{24}v^4 t_{(0)}
\right).
\end{equation}
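Since the $B^{(0)}$ transformations come from exponentiating a single generator, acting with $v_1$ and then $v_2$ must equal a single boost with $v_1+v_2$; this one-parameter group law is easy to check numerically (a sketch with sample values; names ours):

```python
# Group-law check of the finite B^(0) boosts for M=1.
m, c = 1.7, 3.0   # sample constants

def boost(v, z):
    t0, x0, t1, x1, E0, E1, p0, p1 = z
    return (t0,
            x0 + v * t0,
            t1 + v * x0 + v**2 * t0 / 2,
            x1 + v * t1 + v**2 * x0 / 2 + v**3 * t0 / 6,
            E0 + v * p0 + v**2 * E1 / 2 + v**3 * p1 / 6 + (m / c**2) * v**4 / 24,
            E1 + v * p1 + (m / c**2) * v**2 / 2,
            p0 + v * E1 + v**2 * p1 / 2 + (m / c**2) * v**3 / 6,
            p1 + (m / c**2) * v)

z = (0.3, -1.1, 0.8, 0.25, 1.4, -0.6, 0.9, 0.5)
a, b = 0.37, -0.81
once  = boost(a + b, z)
twice = boost(a, boost(b, z))
assert all(abs(u - w) < 1e-10 for u, w in zip(once, twice))
```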
\item\textbf{Generalized boosts $B^{(1)}$.}
The finite transformations corresponding to parameter $v_{(1)}=v$ are
\begin{align}
\hat{x}_{(0)} = x_{(0)}, &\quad \hat{p}^{(0)} =p^{(0)} + \frac{m}{c^2}v,\\
\hat{x}_{(1)} = x_{(1)} + v t_{(0)}, &\quad \hat{p}^{(1)} = p^{(1)},\\
\hat{t}_{(0)} = t_{(0)}, &\quad \hat{E}^{(0)} = E^{(0)} + v p^{(1)},\\
\hat{t}_{(1)} = t_{(1)}, &\quad \hat{E}^{(1)} = E^{(1)}.
\end{align}
The projective phase is in this case
\begin{equation}
\varphi=\frac{m}{c^2}v x_{(0)}.
\end{equation}
\item\textbf{Dilatations $D$.}
The finite transformations for parameter $\lambda$ are
\begin{align}
\hat{x}_{(0)} = e^{\lambda} x_{(0)}, &\quad \hat{p}^{(0)} =e^{-\lambda}p^{(0)} ,\\
\hat{x}_{(1)} = e^{\lambda/3}x_{(1)}, &\quad \hat{p}^{(1)} = e^{-\lambda/3} p^{(1)},\\
\hat{t}_{(0)} = e^{4\lambda/3}t_{(0)}, &\quad \hat{E}^{(0)} = e^{-4\lambda/3}E^{(0)} ,\\
\hat{t}_{(1)} = e^{2\lambda/3}t_{(1)}, &\quad \hat{E}^{(1)} =e^{-2\lambda/3} E^{(1)}.
\end{align}
The projective phase is a constant which, however, cannot be taken as zero and must in fact be imaginary,
$\varphi=i\frac{2}{3}\lambda$,
to compensate for the change in the measure in $x_{(0)}, x_{(1)}$ space,
$\mbox{d}\hat{x}_{(0)}\mbox{d}\hat{x}_{(1)} = e^{4\lambda/3}\mbox{d} x_{(0)}\mbox{d} x_{(1)}$.
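These exponents are consistent with the algebra: (\ref{DHk}) and (\ref{DPk}) give weights $(2M+2-2k)/(2M+1)$ for $t_{(k)}$ and $(2M+1-2k)/(2M+1)$ for $x_{(k)}$, which for $M=1$ reproduce the factors above (a quick exact-arithmetic check; names ours):

```python
from fractions import Fraction as F

M = 1
w_t = [F(2 * M + 2 - 2 * k, 2 * M + 1) for k in range(M + 1)]  # t_(k) weights
w_x = [F(2 * M + 1 - 2 * k, 2 * M + 1) for k in range(M + 1)]  # x_(k) weights

# e^(4*lambda/3), e^(2*lambda/3) for t_(0), t_(1); e^lambda, e^(lambda/3) for x
assert w_t == [F(4, 3), F(2, 3)] and w_x == [F(1), F(1, 3)]
```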
\item\textbf{Generalized dilatations $D^{(1)}$.}
The finite transformations corresponding to parameter $\lambda_{(1)}=\lambda$ are
\begin{align}
\hat{x}_{(0)} = x_{(0)}, &\quad \hat{p}^{(0)} =p^{(0)} -\frac{1}{2}\lambda p^{(1)},\\
\hat{x}_{(1)} = x_{(1)} + \frac{1}{2}\lambda x_{(0)}, &\quad \hat{p}^{(1)} = p^{(1)},\\
\hat{t}_{(0)} = t_{(0)}, &\quad \hat{E}^{(0)} = E^{(0)} - \lambda E^{(1)},\\
\hat{t}_{(1)} = t_{(1)}+ \lambda t_{(0)}, &\quad \hat{E}^{(1)} = E^{(1)}.
\end{align}
The projective phase is trivial, and no constant imaginary part for $\varphi$ is needed, since
$\mbox{d}\hat{x}_{(0)}\mbox{d}\hat{x}_{(1)} = \mbox{d} x_{(0)}\mbox{d} x_{(1)}$ in this case.
\item\textbf{Time shifts and space translations $H^{(0)}$, $H^{(1)}$, $P^{(0)}$, $P^{(1)}$.}
The finite transformations are just shifts of the corresponding variables; the momenta do not transform, and the projective phase can be chosen as zero in all cases.
\end{enumerate}
\subsection*{Schr\"odinger equation with higher derivatives}
The way that we have constructed the Schr\"odinger equation of our system corresponds to what is known as weak quantization, where the constraints are imposed as operators on the states of the system. One can also consider a reduced space quantization, in which the gauge invariance is broken and a Hamiltonian is then computed. We can do this for our action (\ref{S2}), disregarding the total derivative, by imposing the two gauge conditions \cite{Gomis:2019sqv}
\begin{equation}
t_{(0)} = c^{-2}t_{(1)}=\tau,
\label{GF}
\end{equation}
whereby one obtains the gauge fixed action
\begin{equation}
S^*_{(2)} = \frac{m}{c^2}\int\mbox{d} t \left(
\dot{x}_{(0)}^a \dot{x}_{(1)a} - \frac{c^2}{2} \dot{x}_{(0)}^a \dot{x}_{(0)a} + \frac{1}{8} (\dot{x}_{(0)}^a \dot{x}_{(0)a})^2
\right) .
\end{equation}
From this one can compute the Hamiltonian
\begin{equation}
H_{(2)}^* = \frac{c^2}{m} p_{(0)a} p_{(1)}^a + \frac{1}{2} \frac{c^4}{m} p_{(1)a}p_{(1)}^a - \frac{1}{8} \frac{c^6}{m^3} ( p_{(1)a}p_{(1)}^a)^2,
\end{equation}
and write down the corresponding time-dependent Schr\"odinger equation for $\Psi(t,x_{(0)},x_{(1)})$
\begin{equation}
i \frac{\partial\Psi}{\partial t} = - \frac{c^2}{m} \frac{\partial^2\Psi}{\partial{x_{(0)a}} \partial{x_{(1)}^a}}
- \frac{1}{2}\frac{c^4}{m} \frac{\partial^2\Psi}{\partial{x_{(1)a}} \partial{x_{(1)}^a}}
- \frac{1}{8}\frac{c^6}{m^3} \frac{\partial^4\Psi}{\partial{x_{(1)a}}^2 \partial{x_{(1)}^a}^2},
\label{Sch4}
\end{equation}
which is of fourth order. We can also obtain an equation of this order by plugging (\ref{Sch2}) into (\ref{Sch1}),
\begin{equation}
i \frac{\partial\Psi}{\partial t_{(0)}} = - \frac{c^2}{m} \frac{\partial^2\Psi}{\partial{x_{(0)a}} \partial{x_{(1)}^a}}
- \frac{1}{8}\frac{c^6}{m^3} \frac{\partial^4\Psi}{\partial{x_{(1)a}}^2 \partial{x_{(1)}^a}^2},
\label{Sch5}
\end{equation}
which is also of fourth order and with the same coefficients, but lacks the term with second derivatives in $x_{(1)}$. One can conclude then that the system described by action (\ref{S2}) is one for which the two quantization procedures yield different results. Nevertheless, one should notice that the wave function obtained by using the gauge fixing (\ref{GF}) in (\ref{solM1b}), which by construction is a solution of (\ref{Sch5}) with $t_{(0)}=t$, is also a solution of (\ref{Sch4}).
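As a sanity check, $H^*_{(2)}$ is indeed the Legendre transform of the gauge-fixed Lagrangian; in $d=1$ this can be confirmed numerically (a sketch with sample values; names ours):

```python
# Legendre-transform check of H*_(2) from the gauge-fixed Lagrangian, in d=1.
m, c = 1.2, 2.5
xdot0, xdot1 = 0.7, -0.4                           # sample velocities

L  = (m / c**2) * (xdot0 * xdot1 - c**2 * xdot0**2 / 2 + xdot0**4 / 8)
p0 = (m / c**2) * (xdot1 - c**2 * xdot0 + xdot0**3 / 2)   # dL/d(xdot0)
p1 = (m / c**2) * xdot0                                   # dL/d(xdot1)

H_legendre = p0 * xdot0 + p1 * xdot1 - L
H_formula  = (c**2 / m) * p0 * p1 + (c**4 / (2 * m)) * p1**2 \
             - (c**6 / (8 * m**3)) * p1**4
assert abs(H_legendre - H_formula) < 1e-10
```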
\section{Conclusions and outlook}
\label{conclusions}
In this paper we have constructed the most general point symmetry transformations of
the post-Galilean actions \cite{Gomis:2019sqv} that can be obtained from the Minkowskian action for a massive particle. The algebras obtained are
generalizations of the ordinary Schr\"odinger algebra~\cite{Niederer:1972zz,Hagen:1972pd}. Besides the generalized Galilean transformations, they contain dilatations, $D$, generalized dilatations $D^{(k)}$ and expansions $C$.
The algebras are different from extensions of Galilean conformal algebras with
dynamical exponent $z=2/N$, with $N$ positive integer, since these contain an $sl(2,\mathbb R)$
subalgebra \cite{henkel1997local,negro1997nonrelativistic}.
Using a weak quantization procedure,
we have introduced a Schr\"odinger equation for the post-Galilean particle that consists of $M+1$ partial differential equations, up to second order in derivatives, for a wave function living in a generalized space. As in the case of the ordinary Schr\"odinger equation, the wave function
supports a ray representation of the symmetry group, and we have calculated the projective
phase for each transformation.
The symmetries of the generalized Schr\"odinger equations in this paper are different from the symmetries of the higher-order Schr\"odinger equations \cite{Gomis:2011dw}.
If we consider the reduced space quantization the
corresponding Schr\"odinger equation is a single differential equation of fourth order.
The two procedures of quantization do not coincide in general.
Further investigation of the difference between the Schr\"odinger equations obtained from the weak and reduced space quantizations, and the generalization of this fact to the actions $S_{(M+1)}$, will be the subject of future work.
It will be interesting to study the relation of the higher-order Schr\"odinger
equation with the expansion up to order $v^2/c^2$ of the square root
of the Klein-Gordon equation, see for example \cite{sucher1963relativistic,fiziev1985relativistic},
\begin{equation}
i \frac{\partial\Psi}{\partial t} = \sqrt{- c^2 \frac{\partial^2}{\partial{x^{a}} \partial{x_a}}+m^2 c^4}\,\Psi,
\label{Squareroot}
\end{equation}
and whether it is possible to introduce interaction terms in the new Schr\"odinger equation that we have found.
Finally, another topic of interest for future research is the computation of quadratic invariants under stability subgroups of the generalized Schr\"odinger algebras and their use to construct associated space-times, as done for instance in \cite{Brugues:2006yd} to obtain pp-wave metrics from Newton-Hooke algebras.
\section*{Acknowledgements}
We would like to thank Eric Bergshoeff, Gary Gibbons, Axel Kleinschmidt, Patricio Salgado-Rebolledo and Paul Townsend for their comments on the paper.
We would also like to thank the anonymous reviewer of the first version of this paper for his/her insightful observations.
The work of CB has been partially supported by Generalitat de Catalunya through project 2017 SGR 872 and by
Spanish national project DOVELAR-IRI, RTI2018-096001-B-C32. JG has been supported in part by MINECO FPA2016-76005-C2-1-P and Consolider CPAN, and by the Spanish government (MINECO/FEDER) under project MDM-2014-0369 of ICCUB (Unidad de Excelencia Mar\'{i}a de Maeztu).
\bibliographystyle{JHEP}
\section{Introduction}
A knowledge graph (KG) is a multi-relational graph, whose nodes correspond to entities and directed edges indicate the specific relations between entities. For example, Fig.~\ref{fig:example}(a) shows a snapshot of the graph-structured relational triples in DBpedia. In KGs, each labeled edge is usually represented by a relational triple in the form of $(\mathtt{head}, \mathtt{relation}, \mathtt{tail})$\footnote{In the following, $(\mathtt{head}, \mathtt{relation}, \mathtt{tail})$ is abbreviated as $(h,r,t)$.}, meaning that the two entities $\mathtt{head}$ and $\mathtt{tail}$ hold a specific $\mathtt{relation}$. So, a typical KG can be defined as a triple $\mathcal{K}=(\mathcal{E},\mathcal{R},\mathcal{T})$, where $\mathcal{E}$ is the set of entities (i.e., nodes), $\mathcal{R}$ is the set of relations (i.e., edge labels),~and~$\mathcal{T}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}$ denotes the set of relational triples (i.e., labeled edges). Each entity or relation is usually denoted by a URI. For example, the URI of New Zealand in DBpedia is $\mathtt{dbr:New\_Zealand}$\footnote{\url{http://dbpedia.org/resource/New_Zealand}}. However, such discrete and symbolic representations of KGs fall short of supporting efficient knowledge inference~\cite{KGE_survey}. Thus, learning continuous and low-dimensional embedding representations for KGs has drawn much attention in recent years and facilitated many KG-related tasks, such as link prediction~\cite{TransE,ConvE,SimplE,TransR,RotatE,ComplEx,TransH,DistMult,CrossE}, entity alignment~\cite{KDCoE,MTransE,EA_Table,JAPE,BootEA,GCN-Align} and entity classification~\cite{Global_embed,TypeConstrain,RDF2Vec}.
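As an illustrative aside (not from the paper), the definition $\mathcal{K}=(\mathcal{E},\mathcal{R},\mathcal{T})$ maps directly onto a toy data structure; entity names are abbreviated from the facts of Fig.~\ref{fig:example}(a):

```python
# A toy KG K = (E, R, T) as plain Python sets; illustrative triples only.
triples = {
    ("From_Dusk_Till_Dawn", "starring", "Quentin_Tarantino"),
    ("From_Dusk_Till_Dawn", "starring", "Cheech_Marin"),
    ("From_Dusk_Till_Dawn", "writer", "Quentin_Tarantino"),
}
entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
relations = {r for _, r, _ in triples}
assert len(entities) == 3 and relations == {"starring", "writer"}
```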
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\textwidth]{example_three.pdf}
\caption{(a) A snapshot of the relational facts of ``From Dusk Till Dawn'' in DBpedia. Circles represent entities and directed edges have labels. (b) Illustration of the relation-level translation between entity embeddings, where circles represent entity embeddings, and bold gray arrows denote the translation vectors of relations. (c) Illustration of the proposed edge-centric translation, where the dotted arrows denote the contextualized representations, i.e., edge embeddings. For example, \textit{starring}$'_1$ and \textit{starring}$'_2$ are two contextualized representations of the relation \textit{starring}.}
\label{fig:example}
\end{figure}
KG embedding seeks to encode the entities and relations into vector spaces, and capture semantics by the geometric properties of embeddings. To model the relational structures of KGs, most embedding models in the literature interpret relations as linear or bilinear mapping functions operating on entity embeddings, such as the relation translation in TransE~\cite{TransE}, the relation matrix factorization in DistMult~\cite{DistMult}, and the relation rotation in RotatE~\cite{RotatE}. We refer to this kind of model as relation-level embedding. However, such relation-level models represent each relation with one embedding representation for all related head-tail entity pairs, which cannot properly reflect the complex relational structures of KGs. As shown in Fig.~\ref{fig:example}(a), different entity pairs may share the same relation while one entity pair may hold different relations. The relation-level embedding cannot distinguish the different contexts of relations, which would lead to indistinguishable embeddings and incorrect relation inference.
Specifically, we take the translational KG embedding model TransE~\cite{TransE} as an example to explicate the aforementioned issue. TransE interprets relations as translation vectors between entity embeddings. For example, given a relational triple $(h,r,t)$, TransE expects $\mathbf{h}+\mathbf{r}\approx\mathbf{t}$ to hold, where the boldfaced letters denote the embeddings of entities and relations. The relation embedding $\mathbf{r}$ serves as a translation vector from $\mathbf{h}$ to $\mathbf{t}$. However, such relation translation encounters issues when facing more complex relations. For example, consider the relational triples in Fig.~\ref{fig:example}(a): (From Dusk Till Dawn, \textit{starring}, Quentin Tarantino) and (From Dusk Till Dawn, \textit{starring}, Cheech Marin). Translational KG embeddings would have $ \textbf{Quentin Tarantino} \approx \textbf{Cheech Marin}$, as shown in Fig.~\ref{fig:example}(b). In other words, different entities involved in the same relation with the same head entity would be embedded very close to each other by the same relation translation. Such indistinguishable entity embeddings go against accurate embedding-based entity alignment: Quentin Tarantino and Cheech Marin would be mistaken for an aligned entity pair due to the high similarity of their embeddings. Besides, similar relation embeddings, such as $ \textbf{starring} \approx \textbf{writer}$, would lead to incorrect link predictions such as (From Dusk Till Dawn, \textit{writer}, Cheech Marin). This problem has been noticed in the link prediction scenario~\cite{TransD,TransR,TransH}. For link prediction, which predicts the missing entities of relational triples, these models distinguish entity embeddings with relation-specific projections. However, such projections divest KG embeddings of relational structures by injecting ambiguity into entity embeddings.
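The indistinguishability argument can be made concrete with a few lines of code (an illustrative sketch, not the authors' implementation; toy 2-dimensional embeddings):

```python
# Toy TransE scores: if two tails both satisfy h + r = t exactly for the
# same (h, r), their embeddings are forced to coincide.
def transe_score(h, r, t):          # L2 version of ||h + r - t||
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

h, r = [0.2, -0.5], [0.1, 0.4]
t1 = [hi + ri for hi, ri in zip(h, r)]   # "Quentin Tarantino"
t2 = [hi + ri for hi, ri in zip(h, r)]   # "Cheech Marin", also a perfect fit
assert transe_score(h, r, t1) == 0.0 and transe_score(h, r, t2) == 0.0
dist = sum((a - b) ** 2 for a, b in zip(t1, t2)) ** 0.5
assert dist == 0.0                       # the two tails become identical
```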
In this paper, we introduce an edge-centric translational embedding model TransEdge, which differentiates the representations of a relation between different entity-specific contexts. This idea is motivated by the graph structures of KGs. Let us see Fig.~\ref{fig:example}(a). One head-tail entity pair can hold different relations, i.e., one edge can have different labels. Also, different edges can have the same label, indicating that there are multiple head-tail entity pairs having the same relation. Thus, it is intuitive that entities should have explicit embeddings while relations should have different contextualized representations when translating between different head-tail entity pairs. Hence, we propose to contextualize relations as different edge embeddings. The context of a relation is specified by its head and tail entities. We study two different methods, i.e., context compression and context projection, for computing edge embeddings given the edge direction (head and tail entity embeddings) and edge label (relation embeddings). To capture the KG structures, we follow the idea of translational KG embeddings and build translations between entity embeddings with edge embeddings. This modeling is simple but has an appropriate geometric interpretation, as shown in Fig.~\ref{fig:example}(c). Our main contributions are listed as follows:
\begin{itemize}
\item[(1)] We propose a novel KG embedding model TransEdge. Different from existing models that learn one simple embedding per relation, TransEdge learns KG embeddings by contextualizing relation representations in terms of the specific head-tail entity pairs. We refer to such contextualized representations of a relation as edge embeddings and build edge translations between entity embeddings to capture the relational structures of KGs. TransEdge provides a novel perspective for KG embedding. (Section~\ref{sec:model})
\item[(2)] We evaluate TransEdge on two tasks: entity alignment between two KGs and link prediction in a single KG. Experimental results on five datasets show that TransEdge obtains the state-of-the-art results on entity alignment. It also achieves very competitive performance (even the best Hits@1) on link prediction with low computational complexity. These experiments verify the good generalization of TransEdge. To the best of our knowledge, TransEdge is the first KG embedding model that achieves the best Hits@1 performance on both embedding-based entity alignment and link prediction. (Section~\ref{sec:exp})
\end{itemize}
\section{Related Work}
\label{sec:rw}
In recent years, various KG embedding models have been proposed. The most popular task to evaluate KG embeddings is link prediction. Besides, embedding-based entity alignment has also drawn much attention recently. In this section, we discuss these two lines of related work.
\subsection{KG Embeddings Evaluated by Link Prediction}
We divide existing KG embedding models evaluated by link prediction into three categories, i.e., \emph{translational}, \emph{bilinear} and \emph{neural} models. TransE~\cite{TransE} introduces the translational KG embeddings. It interprets relations as translation vectors operating on entity embeddings. Given a relational triple $(h,r,t)$, TransE defines the following energy function to measure the error of relation translation: $f_{\text{\scriptsize TransE}}(h,r,t)=||\mathbf{h} + \mathbf{r} -\mathbf{t}||$, where $||\cdot||$ denotes either the $L_1$ or $L_2$ vector norm. To resolve the issues of TransE in modeling complex relations, some improved translational models have been put forward, including TransH~\cite{TransH}, TransR~\cite{TransR} and TransD~\cite{TransD}. Their key idea is to give entities relation-specific embeddings via transformations operating on entity embeddings, such as the hyperplane projection in TransH and the space projection in TransR and TransD. We argue that such transformations introduce ambiguity to entity embeddings, as they separate the original entity embedding into many dispersive relation-specific representations. For example, for each relation $r$, entity $h$ would hold a separate representation $\mathbf{h}_r$. These dispersive representations compromise the semantic integrity of KGs, as each relation is modeled separately in its relation-specific hyperplane or space, and the general entity embeddings $\mathbf{h}$ and $\mathbf{t}$ are never explicitly translated by relation vectors. Although our model can also be viewed as a kind of translational KG embedding, it is edge-centric and contextualizes relations with edge embeddings rather than projecting entities.
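As a toy illustration of the relation-translation principle above (not the paper's implementation; the vectors are made up), the TransE energy can be computed in a few lines of NumPy:

```python
import numpy as np

def transe_energy(h, r, t, norm=2):
    """TransE energy f(h, r, t) = ||h + r - t||; lower is better.

    norm=1 or norm=2 selects the L1 or L2 vector norm."""
    return np.linalg.norm(h + r - t, ord=norm)

# A triple that fits the translation h + r = t exactly has zero energy.
h = np.array([0.1, 0.2, 0.3])
r = np.array([0.4, 0.1, -0.2])
print(transe_energy(h, r, h + r))      # 0.0
print(transe_energy(h, r, h - r) > 0)  # True
```

Lower energy marks a more plausible triple, which is exactly what ranking-based evaluation exploits.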
Besides, there are some \emph{bilinear} models that exploit similarity-based functions to compute the energy of relational triples. DistMult~\cite{DistMult} and ComplEx~\cite{ComplEx} use the bilinear Hadamard product to compute energy. HolE~\cite{HolE} substitutes the Hadamard product with circular correlation. Analogy~\cite{Analogy} imposes analogical properties on embeddings. SimplE~\cite{SimplE} proposes an enhancement of Canonical Polyadic (CP) decomposition to compute energy. CrossE~\cite{CrossE} simulates the crossover interactions between entities and relations. RotatE~\cite{RotatE} defines each relation as a rotation from the head entity to the tail in the complex-valued embedding space. Recently, there have also been some \emph{neural} embedding models, including ProjE~\cite{ProjE}, ConvE~\cite{ConvE}, R-GCN~\cite{R-GCN} and ConvKB~\cite{ConvKB}. These bilinear and neural models achieve superior results on link prediction at the cost of much higher model complexity. Besides, many of them, such as HolE and ProjE, also exhibit the shortcomings identified above.
\subsection{KG Embeddings for Entity Alignment}
\label{sec:related_work_align}
Recently, several embedding-based entity alignment models have been proposed. MTransE~\cite{MTransE} captures two KG-specific vector spaces and jointly learns a transformation between them.
IPTransE \cite{IPTransE} employs PTransE \cite{PTransE} to embed two KGs into a unified vector space. It iteratively updates alignment information through a self-training technique. JAPE~\cite{JAPE} incorporates attribute embeddings for entity alignment. BootEA~\cite{BootEA} solves the entity alignment problem in a bootstrapping manner. KDCoE~\cite{KDCoE} co-trains description embeddings and structure embeddings to incorporate both the literal and structural information of KGs for entity alignment. GCN-Align~\cite{GCN-Align} employs graph convolutional networks to learn KG embeddings for entity alignment. AttrE~\cite{AttrE} regards literal values as ``virtual entities'' and uses TransE to embed the attribute triples for entity alignment. Note that, some of these models exploit additional resources in KGs for entity alignment, such as relation paths (IPTransE), textual descriptions (KDCoE) and literal values (AttrE). By contrast, the proposed TransEdge leverages the basic relational structures for KG embedding, without using additional resources.
\section{Edge-centric Knowledge Graph Embedding}
\label{sec:model}
\begin{figure}[!t]
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=0.999\textwidth]{rel_trans.pdf}
\caption{Illustration of the key idea of relation-contextualized KG embeddings. The white boxes denote the general embeddings of entities and relations, and the gray boxes denote the contextualized representation for this relation, i.e., the edge embedding. $\mathbf{h}_c$ and $\mathbf{t}_c$ are the interaction embeddings for entities. $\psi$ is a contextualization operator.}
\label{fig:rel_trans}
\end{figure}
TransEdge embeds the entities and relations of KGs in a $d$-dimensional vector space. Unlike the conventional relation-level models, for a relational triple, the head and tail entity embeddings in TransEdge hold an edge translation. Fig.~\ref{fig:rel_trans} illustrates the main idea. The contextualization operator $\psi$ takes as input the combined embeddings of the head and tail entities (edge direction) as well as the relation embedding (edge label) to compute edge embeddings.
\subsection{Formulation of Energy Function}
Like TransE, we define an energy function to measure the error of edge translation between entity embeddings. For simplicity, the energy of a relational triple $(h,r,t)$ in TransEdge is written as follows:
\begin{equation}\label{eq:score}
f(h,r,t)=||\mathbf{h} + \psi(\mathbf{h}_c,\mathbf{t}_c, \mathbf{r}) -\mathbf{t}||.
\end{equation}
The edge embedding $\psi(\mathbf{h}_c,\mathbf{t}_c, \mathbf{r})$ serves as a translation vector from the head entity embedding to the tail entity embedding. In TransEdge, we learn a general embedding for each entity, such as $\mathbf{h}$ for $h$. General embeddings capture the geometric positions and relational semantics of entities in the vector space. We also introduce interaction embeddings for entities, such as $\mathbf{h}_c$ for $h$, which encode their participation in the calculation of edge embeddings. Separating the interaction embeddings from the general ones avoids interference between these two different types of information.
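To make the formulation concrete, here is a minimal NumPy sketch of the energy function above, with the contextualization operator $\psi$ left abstract (a hypothetical illustration, not the paper's code). Note that the trivial choice of $\psi$ that ignores the entity context recovers plain TransE translation:

```python
import numpy as np

def transedge_energy(h, t, h_c, t_c, r, psi, norm=2):
    """TransEdge energy f(h, r, t) = ||h + psi(h_c, t_c, r) - t||.

    h, t: general embeddings; h_c, t_c: interaction embeddings;
    psi: a contextualization operator (compression or projection)."""
    return np.linalg.norm(h + psi(h_c, t_c, r) - t, ord=norm)

# With a trivial psi that ignores the entity context, TransEdge
# degenerates to TransE's plain relation translation.
psi_plain = lambda h_c, t_c, r: r
h = np.array([0.2, 0.1])
r = np.array([0.3, -0.1])
t = h + r
print(transedge_energy(h, t, h, t, r, psi_plain))  # 0.0
```

The interesting behavior comes from non-trivial choices of $\psi$, described next.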
\subsection{Contextualization Operation}
The calculation of edge embeddings $\psi(\mathbf{h}_c,\mathbf{t}_c,\mathbf{r})$ should involve the information of both the head and tail entities (edge direction), as well as the relations (edge label). We study two different methods shown in Fig.~\ref{fig:operation}, which are discussed in detail below.
\begin{figure}[!t]
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=0.9\textwidth]{operation.pdf}
\caption{Illustration of the proposed contextualization operations.}
\label{fig:operation}
\end{figure}
\subsubsection{Context Compression}
This method uses multilayer perceptrons (MLPs) to compress the embeddings of the edge direction and label. Specifically, given an MLP with one hidden layer and the input vector $\mathbf{v}^{(0)}$, the hidden and output representations are calculated with weight matrices $\mathbf{W}$ and bias vectors $\mathbf{b}$:
\begin{equation} \label{eq:nn}
\small
\mathbf{v}^{(1)} = \sigma \big(\mathbf{W}^{(1)}\,\mathbf{v}^{(0)} + \mathbf{b}^{(1)}\big), \,\,\,
\mathbf{v}^{(2)} = \sigma \big(\mathbf{W}^{(2)}\, \mathbf{v}^{(1)} + \mathbf{b}^{(2)}\big),
\end{equation}
where $\sigma()$ is an activation function such as $\tanh()$. Finally, $\textsc{mlp}(\mathbf{v}^{(0)})=\mathbf{v}^{(2)}$. As illustrated in Fig.~\ref{fig:operation}(a), given a relational triple $(h,r,t)$, we concatenate $\mathbf{h}_c$ and $\mathbf{r}$ and feed the result to an MLP to get a combined representation. $\mathbf{t}_c$ and $\mathbf{r}$ are encoded in the same way. Finally, we employ another MLP to combine the two intermediate representations. The three MLPs capture the non-linear combination of the representations of the edge direction and label. Let $\textsc{mlp}()$ denote an MLP. The edge embedding is calculated as follows:
\begin{equation}\label{eq:sm}
\psi(\mathbf{h}_c,\mathbf{t}_c,\mathbf{r}) = \textsc{mlp}_1(\textsc{mlp}_2([\mathbf{h}_c;\mathbf{r}])+\textsc{mlp}_3([\mathbf{r};\mathbf{t}_c])),
\end{equation}
where $[\mathbf{h}_c;\mathbf{r}] = \textsc{concat}(\mathbf{h}_c,\mathbf{r}) \in \mathbb{R}^{2d} $, which concatenates the given vectors.
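A minimal NumPy sketch of context compression follows. The toy dimension, random initialization, and hidden-layer sizes here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 4  # toy embedding dimension (an assumption for illustration)

def init_mlp(d_in, d_out):
    """Randomly initialized parameters of a one-hidden-layer MLP."""
    return {"W1": rng.normal(scale=0.1, size=(d_out, d_in)), "b1": np.zeros(d_out),
            "W2": rng.normal(scale=0.1, size=(d_out, d_out)), "b2": np.zeros(d_out)}

def mlp(p, v):
    """Two-layer computation v2 = tanh(W2 tanh(W1 v + b1) + b2)."""
    return np.tanh(p["W2"] @ np.tanh(p["W1"] @ v + p["b1"]) + p["b2"])

# Three MLPs as in the operator above: two encode [h_c; r] and [r; t_c],
# and a third combines the two intermediate representations.
mlp1, mlp2, mlp3 = init_mlp(d, d), init_mlp(2 * d, d), init_mlp(2 * d, d)

def psi_compression(h_c, t_c, r):
    return mlp(mlp1, mlp(mlp2, np.concatenate([h_c, r]))
                     + mlp(mlp3, np.concatenate([r, t_c])))

h_c, t_c, r = (rng.normal(size=d) for _ in range(3))
edge = psi_compression(h_c, t_c, r)
print(edge.shape)  # (4,)
```

The resulting edge embedding lives in the same $d$-dimensional space as the entity embeddings, so it can directly serve as the translation vector in the energy function.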
\subsubsection{Context Projection}
Projecting embeddings onto hyperplanes~\cite{HyTE,TransH} has shown promising effects on the processing of disparate feature representations. Here, we regard the edge direction and label representations as orthogonal features and project the label representation onto the hyperplane defined by the edge direction representations, as illustrated in Fig.~\ref{fig:operation}(b). Given two relational triples $(h,r,t_1)$ and $(h,r,t_2)$, $\mathbf{r}'$ and $\mathbf{r}''$ are the two edge embeddings obtained by projecting $\mathbf{r}$ onto the respective hyperplanes. Let $\mathbf{w}_{(h,t)}$ be the unit normal vector of such a hyperplane. The edge embedding for $(h,r,t)$ is calculated by vector projection as follows:
\begin{equation}\label{eq:bias}
\psi(\mathbf{h}_c,\mathbf{t}_c,\mathbf{r}) = \mathbf{r} - \mathbf{w}_{(h,t)}^\top\mathbf{r}\,\mathbf{w}_{(h,t)}.
\end{equation}
We use an MLP to compute the normal vector based on the concatenated interaction embeddings of the head and tail entities. Formally, $\mathbf{w}_{(h,t)}=\textsc{mlp}([\mathbf{h}_c; \mathbf{t}_c])$, s.t. $||\mathbf{w}_{(h,t)}||=1$.
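Context projection is easy to sketch directly. The stand-in `normal_fn` below replaces the paper's MLP with a simple deterministic function (an illustrative assumption) so that the key geometric property is visible: the projected edge embedding is orthogonal to the hyperplane's normal vector.

```python
import numpy as np

def psi_projection(h_c, t_c, r, normal_fn):
    """Edge embedding via context projection: psi = r - (w^T r) w,
    where w = normal_fn([h_c; t_c]) is normalized to unit length."""
    w = normal_fn(np.concatenate([h_c, t_c]))
    w = w / np.linalg.norm(w)   # enforce ||w_(h,t)|| = 1
    return r - (w @ r) * w      # component of r lying on the hyperplane

rng = np.random.default_rng(0)
d = 5
h_c, t_c, r = (rng.normal(size=d) for _ in range(3))
# Hypothetical stand-in for the MLP that computes the normal vector:
normal_fn = lambda x: x[:d] + x[d:]
edge = psi_projection(h_c, t_c, r, normal_fn)

# The projected embedding is orthogonal to the normal vector.
w = (h_c + t_c) / np.linalg.norm(h_c + t_c)
print(abs(edge @ w) < 1e-9)  # True
```

Because the projection only removes the component of $\mathbf{r}$ along the normal direction, the edge embedding never grows longer than $\mathbf{r}$ itself.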
\subsection{Loss Function}
Following the conventional training strategy of previous models~\cite{KGE_survey}, we train TransEdge based on the local-closed world assumption. In this case, we regard the observed relational triples in KGs as positive examples and the unobserved ones as negative samples (either false or missing triples). In our model, positive relational triples are expected to fulfill the relation-contextualized translation with low energy. Negative relational triples are supposed to hold higher energy, as they are presumed invalid. To this end, we minimize the following limit-based loss~\cite{BootEA}, which can create more distinguishable embedding structures than the conventional marginal ranking loss:
\begin{equation}\label{eq:rel_embed_obj}
\small
\mathcal{L} = \sum_{(h,r,t) \in \mathcal{T}}[f(h,r,t) - \gamma_1]_+ + \sum_{(h',r',t') \in \mathcal{T}^-}\alpha\,[\gamma_2 - f(h',r',t')]_+,
\end{equation}
where $[x]_+ = \max(0,x)$. $\gamma_1,\gamma_2$ are hyper-parameters that control the energy of triples, s.t. $\gamma_1 < \gamma_2$. $\alpha$ is a hyper-parameter that balances the positive and negative samples. $\mathcal{T}^-$ denotes the set of negative triples, which can be generated by heuristic strategies. Here, we choose truncated negative sampling~\cite{BootEA}, which generates negative triples by replacing either the head or the tail entity of a positive relational triple with a random neighbor of that entity.
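The limit-based loss above can be sketched as follows (hyper-parameter values here are illustrative; the paper tunes them per dataset). Unlike a marginal ranking loss, it imposes absolute limits: positives are pushed below $\gamma_1$ and negatives above $\gamma_2$.

```python
import numpy as np

def limit_based_loss(pos, neg, gamma1=0.2, gamma2=2.0, alpha=0.8):
    """Limit-based loss: positive energies pushed below gamma1,
    negative energies pushed above gamma2, weighted by alpha."""
    relu = lambda x: np.maximum(0.0, x)
    return relu(pos - gamma1).sum() + alpha * relu(gamma2 - neg).sum()

# Triples already satisfying both limits contribute zero loss.
print(limit_based_loss(np.array([0.1]), np.array([2.5])))      # 0.0
# A positive above gamma1 and a negative below gamma2 are both penalized.
print(limit_based_loss(np.array([0.5]), np.array([1.0])) > 0)  # True
```

Separating the two limits ($\gamma_1 < \gamma_2$) is what creates the gap between positive and negative triples in the embedding space.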
\subsection{Implementation for Entity Alignment}
Given a source KG $\mathcal{K}_1=(\mathcal{E}_1,\mathcal{R}_1,\mathcal{T}_1)$ and a target KG $\mathcal{K}_2=(\mathcal{E}_2,\mathcal{R}_2,\mathcal{T}_2)$, entity alignment seeks to find entities from different KGs that refer to the same real-world object. Embedding-based entity alignment helps overcome the semantic heterogeneity between different KGs and has received increasing attention recently.
For entity alignment, we let each entity pair in the seed alignment (i.e., training data) share the same embedding (called parameter sharing), to reconcile $\mathcal{K}_1$ and $\mathcal{K}_2$. In this way, the two KGs are merged into one and we can use TransEdge to learn entity embeddings from this ``combined KG''. For training, semi-supervised strategies, such as self-training and co-training, have been widely used for embedding-based entity alignment~\cite{KDCoE,BootEA,IPTransE}, because the size of the seed alignment is usually small. For example, as investigated in~\cite{MTransE}, the inter-lingual links in Wikipedia cover less than 15\% of entity alignment. To cope with this problem, we use the bootstrapping strategy \cite{BootEA} to iteratively select likely-aligned entity pairs, which we denote by $\mathcal{D}=\{(e_1, e_2)\in\mathcal{E}_1 \times\mathcal{E}_2 \,|\, \cos(\mathbf{e}_1, \mathbf{e}_2) > s\}$, where $s$ is a similarity threshold. As errors in the newly-found entity alignment are unavoidable, we do not make each newly-found entity pair share the same embedding. Instead, we minimize the following loss so that the newly-found entity alignment has a small embedding distance (i.e., high similarity):
\begin{equation}\label{eq:semi}
\mathcal{L}_{\text{semi}} = \sum_{(e_1,e_2) \in \mathcal{D}}||\mathbf{e}_1 - \mathbf{e}_2||.
\end{equation}
In the test phase, given an entity to be aligned in $\mathcal{K}_1$, we rank entities in $\mathcal{K}_2$ as its counterpart candidates in descending order based on the cosine similarity of entity embeddings. The right counterpart is expected to have a top rank.
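The test-phase ranking can be sketched with plain cosine similarity (the reported experiments additionally use CSLS as the similarity measure; the embeddings below are made up):

```python
import numpy as np

def rank_counterparts(query, candidates):
    """Rank candidate entities of K2 for a K1 entity by cosine similarity,
    in descending order (index 0 = most likely counterpart)."""
    q = query / np.linalg.norm(query)
    C = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return np.argsort(-(C @ q))

# Candidate 0 points almost the same way as the query; candidate 2 nearly so.
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(rank_counterparts(np.array([1.0, 0.05]), candidates).tolist())  # [0, 2, 1]
```

Hits@$k$ then simply checks whether the true counterpart appears among the first $k$ indices of this ranking.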
The parameters of TransEdge are initialized using the Xavier initializer~\cite{Xavier}. The embedding loss $\mathcal{L}$ on $\mathcal{T}_1 \cup \mathcal{T}_2$ and the semi-supervised training loss $\mathcal{L}_{\text{semi}}$ are jointly optimized using the stochastic gradient descent algorithm AdaGrad. We constrain the $L_2$ norm of KG embeddings to 1, which prevents trivially minimizing the loss by artificially increasing the embedding norms~\cite{TransE}. The variants of TransEdge that use context compression (CC) and context projection (CP) are denoted by TransEdge-CC and TransEdge-CP, respectively. For the ablation study, we also develop degraded variants of TransEdge without semi-supervised training, which are marked by the suffix (w/o semi).
\subsection{Implementation for Link Prediction}
Link prediction is the task of inferring the missing head or tail entities of incomplete relational triples. For example, given ($\rule{0.38cm}{0.15mm}$, \textit{capitalOf}, New Zealand), a link prediction model is expected to rank the right head entity Wellington in the first place. Link prediction is a key task for KG completion and has been widely used as an evaluation task by many previous KG embedding models.
The embeddings are learned by minimizing $\mathcal{L}$. The parameters are initialized using the Xavier initializer and the loss is also optimized using AdaGrad. In the test phase, for head prediction $(\rule{0.38cm}{0.15mm},r,t)$, we create a set of candidate triples by replacing $\rule{0.38cm}{0.15mm}$ with all possible entities. The candidate triples are then ranked in ascending order according to their energy calculated using Eq.~(\ref{eq:score}). The right candidate triple is expected to have a top rank. Tail prediction $(h,r,\rule{0.38cm}{0.15mm})$ is done in the same way.
\subsection{Complexity Analysis}
\label{subsect:complexity}
In general, TransEdge learns two embeddings for each entity. We provide a complexity comparison in Table~\ref{tab:complexity}, where $n_e$ and $n_r$ denote the numbers of entities and relations, respectively, and $d$ is the embedding dimension. As our model introduces additional parameters for embedding entities, its complexity is $O(2n_ed + n_rd)$, which is higher than that of TransE but still lower than that of TransD. Note that the parameter complexity of TransEdge grows linearly with the number of entities and the embedding dimension.
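As a quick sanity check of these parameter counts, the following sketch evaluates them on the FB15K-237 sizes reported in Section 4 (14,541 entities, 237 relations) at $d = 200$:

```python
def embedding_params(model, n_e, n_r, d):
    """Embedding parameter counts following the complexity comparison."""
    return {"TransE":    n_e * d + n_r * d,
            "TransH":    n_e * d + 2 * n_r * d,
            "TransR":    n_e * d + n_r * d * d,
            "TransD":    2 * n_e * d + 2 * n_r * d,
            "TransEdge": 2 * n_e * d + n_r * d}[model]

# FB15K-237 sizes (14,541 entities, 237 relations) at d = 200:
print(embedding_params("TransEdge", 14541, 237, 200))  # 5863800
print(embedding_params("TransEdge", 14541, 237, 200)
      < embedding_params("TransD", 14541, 237, 200))   # True
```

Since $n_e \gg n_r$ in typical KGs, the extra entity embeddings dominate the difference, while the single relation embedding keeps TransEdge below TransD.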
\begin{table}[!t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{5pt}
\caption{Complexity comparison of translational embedding models}
\label{tab:complexity}
\centering\small
\begin{tabular}{l|l}
\toprule
\ Model &\ \#Embeddings \\ \hline
\ TransE \cite{TransE} & \ $O(n_ed + n_rd)$ \\
\ TransH \cite{TransH} & \ $O(n_ed + 2n_rd)$ \\
\ TransR \cite{TransR} & \ $O(n_ed + n_rd^2)$ \\
\ TransD \cite{TransD} & \ $O(2n_ed + 2n_rd)$ \quad\\
\hline
\ TransEdge (this paper) \quad & \ $O(2n_ed + n_rd)$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Experiments}
\label{sec:exp}
We assess TransEdge on two popular embedding-based tasks: entity alignment between two KGs and link prediction in one KG. The source code of TransEdge is available online\footnote{\url{https://github.com/nju-websoft/TransEdge}}.
\subsection{Task 1: Embedding-based Entity Alignment}
\subsubsection{Datasets.}
To evaluate TransEdge on various scenarios of entity alignment, we choose the following datasets: (1) DBP15K~\cite{JAPE} is extracted from the multilingual DBpedia. It contains three cross-lingual entity alignment datasets: DBP\textsubscript{ZH-EN} (Chinese to English), DBP\textsubscript{JA-EN} (Japanese to English) and DBP\textsubscript{FR-EN} (French to English). Each dataset has 15 thousand aligned entity pairs. (2) DWY100K~\cite{BootEA} has two large-scale monolingual datasets, DBP-WD and DBP-YG, sampled from DBpedia, Wikidata and YAGO3. Each dataset has 100 thousand aligned entity pairs. For a fair comparison, we reuse their original dataset splits in evaluation.
\subsubsection{Competitive Models.}
For comparison, we choose the following state-of-the-art embedding-based entity alignment models: MTransE~\cite{MTransE}, IPTransE~\cite{IPTransE}, JAPE~\cite{JAPE}, BootEA~\cite{BootEA} and its non-bootstrapping version AlignE, as well as GCN-Align~\cite{GCN-Align}. We do not compare with some other models like KDCoE~\cite{KDCoE} and AttrE~\cite{AttrE}, since they require additional resources (e.g., textual descriptions and attribute values) that are not present in our problem setting or in those of the other competitors. Furthermore, the character-based literal embedding used in AttrE~\cite{AttrE} is unsuited to cross-lingual entity alignment, as the characters of different languages (such as Chinese and English) can be very heterogeneous. Our goal is to exploit the basic relational structures of KGs for entity alignment.
To further understand the benefits and limitations of KG embeddings for entity alignment, we extend several representative embedding models designed for link prediction as competitors, including three translational models, TransH~\cite{TransH}, TransR~\cite{TransR} and TransD~\cite{TransD}; two bilinear models, HolE~\cite{HolE} and SimplE~\cite{SimplE}; and two neural models, ProjE~\cite{ProjE} and ConvE~\cite{ConvE}. Note that ComplEx~\cite{ComplEx} is very similar to HolE~\cite{RotatE}, so we pick HolE as the representative. We do not include Analogy~\cite{Analogy} and ConvKB~\cite{ConvKB}, because we find that they do not perform well on the datasets. Similar to TransEdge, we merge the two KGs into one via parameter sharing and use these models to learn embeddings. We use the open-source KG embedding framework OpenKE~\cite{OpenKE} to implement TransH, TransR, TransD and HolE, while SimplE, ProjE and ConvE are implemented based on their released source code.
\subsubsection{Experimental Settings.}
We have tuned a series of hyper-parameters. For example, we select the learning rate among \{0.001, 0.005, 0.01, 0.02\} and the positive margin $\gamma_1$ among \{0.1, 0.2, $\cdots$, 0.5\}. The selected settings are as follows. For TransEdge-CC, $\gamma_1 = 0.3$, $\gamma_2 = 2.0$, $\alpha = 0.3$, $s = 0.75$, $d = 75$. For TransEdge-CP, $\gamma_1 = 0.2$, $\gamma_2 = 2.0$, $\alpha = 0.8$, $s = 0.7$, $d = 75$. The activation function is $\tanh()$ for MLPs. For DBP15K, we generate $20$ negative samples for each relational triple and the batch size is $2,000$. For DWY100K, we generate $25$ negative samples for each relational triple and the batch size is $20,000$. We adopt the $L_2$-norm in the energy function. The learning rate is $0.01$ and training is terminated by early stopping on the Hits@1 performance to avoid overfitting. We use CSLS~\cite{Word_Translation} as the similarity measure. We choose three widely-used metrics: Hits$@k$, mean rank (MR) and mean reciprocal rank (MRR). Higher Hits$@k$ and MRR values, and lower MR values, indicate better performance. Note that Hits@1 is equivalent to precision, and MRR is more robust than MR since it is less sensitive to a few poorly-ranked correct candidates.
\subsubsection{Entity Alignment Results.}
\begin{table}[!t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{5pt}
\centering
\caption{Entity alignment results on DBP15K}
\label{tab:ent_alignment}
\resizebox{1.0\textwidth}{!}{
\begin{threeparttable}\scriptsize
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{lcccccccccccc}
\toprule
& \multicolumn{4}{c}{DBP\textsubscript{ZH-EN}} & \multicolumn{4}{c}{DBP\textsubscript{JA-EN}} & \multicolumn{4}{c}{DBP\textsubscript{FR-EN}} \\
\cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-13}
& Hits@1 & Hits@10 & MRR & MR & Hits@1 & Hits@10 & MRR & MR & Hits@1 & Hits@10 & MRR & MR \\
\midrule
MTransE~\cite{MTransE}~$^\dag$ & 0.308 & 0.614 & 0.364 & 154 & 0.279 & 0.575 & 0.349 &159 & 0.244 & 0.556 & 0.335 &139 \\
IPTransE~\cite{IPTransE}~$^\ddag$ & 0.406 & 0.735 & 0.516& -- & 0.367 & 0.693 & 0.474& -- & 0.333 & 0.685 & 0.451& -- \\
JAPE~\cite{JAPE}~$^\dag$ & 0.412 & 0.745 & 0.490& 64 & 0.363 & 0.685 & 0.476& 99 & 0.324 & 0.667& 0.430& 92 \\
AlignE~\cite{BootEA} & 0.472 & 0.792 & 0.581& -- & 0.448 & 0.789 & 0.563& -- & 0.481 & 0.824 & 0.599& -- \\
BootEA~\cite{BootEA} & 0.629 & 0.848 & 0.703& -- & 0.622 & 0.854 & 0.701& -- & 0.653 & 0.874 & 0.731& -- \\
GCN-Align~\cite{GCN-Align} & 0.413 & 0.744 & -- & -- & 0.399 & 0.745 & -- & -- & 0.373 & 0.745 & --& -- \\
\midrule
TransH~\cite{TransH}~$^\triangle$ & 0.377 & 0.711 & 0.490& 52 & 0.339 & 0.681 & 0.462& 59 & 0.313 & 0.668 & 0.433& 47 \\
TransR~\cite{TransR}~$^\triangle$ & 0.259 & 0.529 & 0.349& 299 & 0.222 & 0.440 & 0.295& 315 & 0.059 & 0.225 & 0.116& 502 \\
TransD~\cite{TransD}~$^\triangle$ & 0.392 & 0.729 & 0.505& 48 & 0.356 & 0.695 & 0.468& 58 & 0.323 & 0.694 & 0.447& 43 \\
\midrule
HolE~\cite{HolE}~$^\triangle$ & 0.250 & 0.535 & 0.346& 488 & 0.256 & 0.517 & 0.343& 560 & 0.149 & 0.465 & 0.251& 1133 \\
SimplE~\cite{SimplE}~$^\diamondsuit$ &0.317 & 0.575 &0.405 & 453 &0.255 & 0.525& 0.346 & 409 &0.147&0.438&0.241& 397 \\
\midrule
ProjE~\cite{ProjE}~$^\diamondsuit$ & 0.290 & 0.527 & 0.374& 705 & 0.273 & 0.475 & 0.345& 919 & 0.283 & 0.527 & 0.368& 659 \\
ConvE~\cite{ConvE}~$^\diamondsuit$ & 0.169 & 0.329 & 0.224& 1123 & 0.192 & 0.343 & 0.246& 1081 & 0.240 & 0.459 & 0.316& 694 \\
\midrule
TransEdge-CC (w/o semi) & 0.622 & 0.868 & 0.711 & 65 & 0.601 & 0.863 & 0.696 & 56 & 0.617 & 0.891 & 0.716 & 38\\
TransEdge-CP (w/o semi) & 0.659 & 0.903 & 0.748 & 50 & 0.646 & 0.907 & 0.741 & 36 & 0.649 & 0.921 & 0.746 & 25\\
TransEdge-CC & 0.669 & 0.871 & 0.744 & 66 & 0.645 & 0.859 & 0.722 & 67 & 0.666 & 0.893 & 0.749 & 40 \\
TransEdge-CP & \textbf{0.735} & \textbf{0.919} & \textbf{0.801} & \textbf{32} & \textbf{0.719} & \textbf{0.932} & \textbf{0.795} & \textbf{25} & \textbf{0.710} & \textbf{0.941} & \textbf{0.796} & \textbf{12} \\
\bottomrule
\end{tabular}
$\dag$ Hits@$k$ and MR results are taken from~\cite{JAPE} while MRR results are taken from~\cite{BootEA}. $\ddag$ Results are taken from~\cite{BootEA}. $\triangle$ Results are produced by ourselves using OpenKE~\cite{OpenKE}. $\diamondsuit$ Results are produced by ourselves using their source code. $-$ denotes unreported results in their papers. Unmarked results are taken from their own papers. Best results are marked in boldface, and same in the following tables.
\end{threeparttable}}
\end{table}
The results of entity alignment are shown in Tables~\ref{tab:ent_alignment} and~\ref{tab:ent_alignment_100k}. We can see that TransEdge consistently achieves the best results on all metrics across the five datasets. For example, on DBP\textsubscript{ZH-EN}, TransEdge-CP (w/o semi) achieves an improvement of $0.187$ on Hits@$1$ against AlignE. Compared with the bootstrapping version BootEA, TransEdge-CP (w/o semi) still achieves a gain of $0.030$, while the improvement of TransEdge-CP reaches $0.106$. BootEA is a very competitive model due to its powerful bootstrapping strategy. However, our semi-supervised variants TransEdge-CC and TransEdge-CP significantly outperform BootEA on DBP15K, owing to the ability of TransEdge to preserve KG structures.
On DBP15K, both TransEdge-CC and TransEdge-CP show good performance. TransEdge-CC (w/o semi) still obtains better results than AlignE, and TransEdge-CC also outperforms BootEA. Furthermore, we find that TransEdge-CP achieves better results than TransEdge-CC. We think this is because the context projection has a good geometric interpretation, as shown in Fig.~\ref{fig:operation}(b), which helps capture the relational structures of KGs more faithfully for entity alignment. We can also see that the proposed semi-supervised training brings remarkable improvement. For example, on DBP\textsubscript{ZH-EN}, it increases the Hits@1 score of TransEdge-CP from $0.659$ (w/o semi) to $0.735$. These results indicate that both context compression and context projection can accurately compute the edge embeddings, and that the proposed semi-supervised training contributes to the performance improvement.
\begin{table}[!t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{5pt}
\centering
\caption{Entity alignment results on DWY100K}
\label{tab:ent_alignment_100k}
\resizebox{0.85\textwidth}{!}{
\begin{threeparttable}\scriptsize
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{4}{c}{DBP-WD} & \multicolumn{4}{c}{DBP-YG} \\
\cmidrule(lr){2-5} \cmidrule(lr){6-9}
& Hits@1 & Hits@10 & MRR & MR & Hits@1 & Hits@10 & MRR & MR \\
\midrule
MTransE~\cite{MTransE}~$^\ddag$ & 0.281 & 0.520 & 0.363 & -- & 0.252 & 0.493 & 0.334 & -- \\
IPTransE~\cite{IPTransE}~$^\ddag$ & 0.349 & 0.638 & 0.447 & --& 0.297 & 0.558 & 0.386 & -- \\
JAPE~\cite{JAPE}~$^\ddag$ & 0.318 & 0.589 & 0.411& -- & 0.236 & 0.484 & 0.320& --\\
AlignE~\cite{BootEA}~$^\ddag$ & 0.566 & 0.827 & 0.655& -- & 0.633 & 0.848 & 0.707& -- \\
BootEA~\cite{BootEA}~$^\ddag$ & 0.748 & 0.898 & 0.801& -- & 0.761 & 0.894 & 0.808& -- \\
GCN-Align~\cite{GCN-Align}~$^\nabla$ & 0.479 & 0.760 & 0.578& 1988 & 0.601 & 0.841 & 0.686& 299 \\
\midrule
TransH~\cite{TransH}~$^\triangle$ & 0.351 & 0.641 & 0.450 & 117 & 0.314 & 0.574 & 0.402& 90 \\
TransR~\cite{TransR}~$^\triangle$ & 0.013 & 0.062 & 0.031& 2773 & 0.010 & 0.052 & 0.026& 2852 \\
TransD~\cite{TransD}~$^\triangle$ & 0.362 & 0.651 & 0.456& 152 & 0.335 & 0.597 & 0.421 & 90\\
\midrule
HolE~\cite{HolE}~$^\triangle$& 0.223 & 0.452 & 0.289& 811 & 0.250 & 0.484 & 0.327 & 437 \\
SimplE~\cite{SimplE}~$^\diamondsuit$& 0.169 & 0.328 & 0.223 & 3278 & 0.131 & 0.282 & 0.183 & 3282\\
\midrule
ProjE~\cite{ProjE}~$^\diamondsuit$ & 0.312 & 0.504 & 0.382& 2518 & 0.366 & 0.573 & 0.436& 1672 \\
ConvE~\cite{ConvE}~$^\diamondsuit$ & 0.403 & 0.628 & 0.483& 1428 & 0.503 & 0.736 & 0.582& 837 \\
\midrule
TransEdge-CC (w/o semi) & 0.687 & 0.910 & 0.767 & 70 & 0.759 & 0.935 & 0.822 & 24 \\
TransEdge-CP (w/o semi) & 0.692 & 0.898 & 0.770 & 106 & 0.726 & 0.909 & 0.792 & 46 \\
TransEdge-CC & 0.732 & 0.926 & 0.803 & \textbf{65} & 0.784 & \textbf{0.948} & \textbf{0.844} & \textbf{22} \\
TransEdge-CP & \textbf{0.788} & \textbf{0.938} & \textbf{0.824} & 72 & \textbf{0.792} & 0.936 & 0.832 & 43\\
\bottomrule
\end{tabular}
$\nabla$: Results are produced using its code. Other marks mean the same in Table~\ref{tab:ent_alignment}.
\end{threeparttable}
}
\end{table}
We notice that, on DWY100K, the improvement of TransEdge is not as large as that on DBP15K. For example, on DBP-WD, TransEdge-CP only achieves an improvement of $0.040$ on Hits@$1$ against BootEA. We think this is because the two KGs in DBP-WD or DBP-YG have well-aligned relational structures and their entities are one-to-one aligned, whereas DBP15K contains many noisy entities that have no counterparts. Thus, DWY100K is relatively easy for entity alignment. On datasets with noisy entities, TransEdge shows a clear advantage over the others, which indicates its robustness.
It is interesting to see that some of the adapted models also demonstrate competitive performance on entity alignment. ConvE even outperforms some alignment-oriented embedding models such as MTransE, IPTransE and JAPE on the DWY100K datasets, which indicates the potential of deep learning techniques. We also notice that the performance of TransR is very unstable. It achieves promising results on DBP\textsubscript{ZH-EN} and DBP\textsubscript{JA-EN} but fails on the other three datasets. We take a closer look at the five datasets and discover that DBP\textsubscript{ZH-EN} and DBP\textsubscript{JA-EN} contain some relation alignment. When TransR performs relation-specific projections on entities, the relation alignment passes some alignment information to entities. This requirement of relation alignment limits the applicability of TransR to entity alignment. We can conclude that not all embedding models designed for link prediction are suitable for entity alignment.
\subsection{Task 2: Embedding-based Link Prediction}
\label{sect:link}
\subsubsection{Datasets.} We use two benchmark datasets FB15K-237~\cite{FB15k237} and WN18RR~\cite{ConvE} for link prediction. They are the improved versions of FB15K~\cite{TransE} and WN18~\cite{TransE}, respectively. As found in~\cite{ConvE,FB15k237}, FB15K and WN18 contain many symmetric triples that are easy to infer by learning some trivial patterns. Thus, the work in~\cite{ConvE,FB15k237} creates FB15K-237 and WN18RR by removing inverse relations from the testing sets. So, FB15K-237 and WN18RR are more difficult, and both of them have gradually become the most popular benchmark datasets for link prediction in recent studies~\cite{ConvE,ConvKB,RotatE,CrossE}. FB15K-237 contains $14,541$ entities, $237$ relations and $310,116$ relational triples. WN18RR has $40,943$ entities, $11$ relations and $93,003$ relational triples. For a fair comparison, we reuse the original training/validation/test splits of the two datasets in evaluation.
\subsubsection{Competitive Models.}
For comparison, we choose a wide range of embedding models for link prediction as the competitors, including five translational models, seven bilinear models and five neural models, as listed in Table \ref{tab:link_pred_fb15k-237}. For the sake of fairness and objectivity, we report their published results whenever available. For models that have not been evaluated on FB15K-237 or WN18RR, we use their released code to produce the results ourselves.
\subsubsection{Experimental Settings.}
We tuned the hyper-parameter values by a careful grid search. The selected settings are as follows. For FB15K-237, $\gamma_1 = 0.4$, $\gamma_2 = 0.9$, $\alpha = 0.4$, $d = 200$. The batch size is 200 and the learning rate is 0.005. We generate 10 negative samples for each triple. For WN18RR, $\gamma_1 = 0.2$, $\gamma_2 = 2.7$, $\alpha = 0.8$, $d = 500$. The batch size is $2,000$ and the learning rate is 0.01. We sample 30 negatives for each triple. The activation function is still $\tanh()$ for the MLPs, and we use the $L_2$-norm in our energy function. When evaluating the ranking lists, we use the filtered setting~\cite{TransE}, i.e., given a candidate triple list, we remove from the list all other positive triples that appear in the training/validation/test data. In the resulting filtered ranking list, the correct triple is expected to have a high rank. By convention, we report the average results of head prediction and tail prediction. As for embedding-based entity alignment, we use Hits@$k$, MR and MRR as evaluation metrics.
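To make the filtered evaluation protocol concrete, the following sketch (our own illustrative code, not the authors' implementation; the function names \texttt{filtered\_rank} and \texttt{metrics} are ours) computes a filtered rank and the Hits@$k$, MR and MRR metrics, assuming lower energy scores indicate better triples:

```python
def filtered_rank(scores, target, known_positives):
    """1-based rank of `target` after removing all *other* known positive
    candidates (the filtered setting of Bordes et al.); lower score = better."""
    competitors = [s for i, s in enumerate(scores)
                   if i == target or i not in known_positives]
    return 1 + sum(1 for s in competitors if s < scores[target])

def metrics(ranks, k=10):
    """Hits@k, mean rank (MR) and mean reciprocal rank (MRR) of a rank list."""
    n = len(ranks)
    return {"Hits@%d" % k: sum(r <= k for r in ranks) / n,
            "MR": sum(ranks) / n,
            "MRR": sum(1.0 / r for r in ranks) / n}
```

For instance, if candidate 0 is another known positive, it is excluded before ranking the target candidate, which is exactly the filtering step described above.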
\begin{table}[!t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{5pt}
\centering
\caption{Link prediction results on FB15K-237 and WN18RR}
\label{tab:link_pred_fb15k-237}
\resizebox{1.0\textwidth}{!}{
\scriptsize
\begin{threeparttable}
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{llcccccccc}
\toprule
& & \multicolumn{4}{c}{FB15K-237} & \multicolumn{4}{c}{WN18RR}\\
\cmidrule(lr){3-6} \cmidrule(lr){7-10}
Model& Type & Hits@1 & Hits@10 & MRR & MR & Hits@1 & Hits@10 & MRR & MR \\
\midrule
TransE~\cite{TransE}~$^\dag$& Trans. & -- & 0.436 & 0.269& 285 & -- & 0.453 & 0.412 & 5429 \\
TransH~\cite{TransH}~$^\dag$ & Trans. & -- & 0.453 & 0.281 & 292 & -- & 0.429 & 0.435 & 5102 \\
TransR~\cite{TransR}~$^{\ddag \nabla}$ & Trans. & -- & 0.429 & 0.162 & 337 & 0.017 & 0.257 & 0.094 & 3708 \\
TransD~\cite{TransD}~$^{\ddag \nabla}$ & Trans. & -- & 0.428 & 0.162 & 305 & 0.015 & 0.139 & 0.060 & 6644 \\
PTransE~\cite{PTransE}~$^\triangle$ & Trans. & 0.210 & 0.501 & 0.314 & 299& 0.272 & 0.424 & 0.337 & 5686 \\
\midrule
DistMult~\cite{DistMult}~$^\S$& Bilinear & 0.155 & 0.419 & 0.241 & 254 & 0.390 & 0.490 & 0.430 & 5110 \\
HolE~\cite{HolE}~$^{\natural \nabla}$& Bilinear & 0.133 & 0.391 & 0.222& -- & 0.284 & 0.346 & 0.308 & 4874 \\
ComplEx~\cite{ComplEx}~$^\S$& Bilinear & 0.158 & 0.428 & 0.247 & 339 & 0.410 & 0.510 & 0.440 &5261 \\
Analogy~\cite{Analogy}~$^{\sharp \nabla}$& Bilinear & 0.131 & 0.405 & 0.219 & -- & 0.389 & 0.441 & 0.407 & 3836 \\
ProjE~\cite{ProjE}& Neural & -- & 0.461 & 0.294 & 246 & -- & 0.474 & 0.453 & 4407 \\
ConvE~\cite{ConvE}& Neural & 0.239 & 0.491 & 0.316 & 246 & 0.390 & 0.480 & 0.460 & 5277 \\
R-GCN~\cite{R-GCN}& Neural & 0.153 & 0.414 & 0.248 & -- & -- & -- & -- & -- \\
ConvKB~\cite{ConvKB}& Neural & -- & 0.517 & \textbf{0.396} & 257 & -- & 0.525 & 0.248 & 2554 \\
CACL~\cite{CACL}& Neural & -- & 0.487 & 0.349 & 235 & -- & 0.543 & 0.472 & 3154 \\
SimplE~\cite{SimplE}~$^\square$& Bilinear & 0.225 & 0.461 & 0.230 & -- & -- & -- & -- & -- \\
CrossE~\cite{CrossE}~$^\diamondsuit$ & Bilinear & 0.211 & 0.474 & 0.299 &-- & 0.373 & 0.394 & 0.374 & 6091 \\
RotatE~\cite{RotatE} & Bilinear & 0.241 & \textbf{0.533} & 0.338 & \textbf{177} & 0.428 & \textbf{0.571} & \textbf{0.476} & 3340 \\
\midrule
TransEdge-CC & Trans. & 0.227 & 0.482 & 0.310 & 305 & 0.411 & 0.516 & 0.439 & \textbf{2452}\\
TransEdge-CP & Trans. & \textbf{0.243} & 0.512 & 0.333 & 219 & \textbf{0.433} & 0.487 & 0.451 & 4866 \\
\bottomrule
\end{tabular}
$\dag$: Results are taken from~\cite{CACL}. $\ddag$: Results of FB15K-237 are taken from~\cite{Re-eval}. $\nabla$: Results on WN18RR are produced using OpenKE~\cite{OpenKE}. $\triangle$: Results are produced using its source code. $\S$: Results are taken from~\cite{ConvE}. $\natural$: Results are taken from~\cite{R-GCN}. $\sharp$: Results are taken from~\cite{CrossE}. $\square$: Results are produced using the published source code. We do not include its WN18RR results because we do not find them promising. $\diamondsuit$: Results of WN18RR are produced using its source code.
\end{threeparttable}
}
\end{table}
\subsubsection{Link Prediction Results.}
Table~\ref{tab:link_pred_fb15k-237} gives the link prediction results on FB15K-237 and WN18RR. We can see that TransEdge significantly outperforms the translational models TransE, TransH, TransR and PTransE. This is because the proposed edge-centric translation can distinguish the different contexts of relations, while the relation translation of the aforementioned models usually leads to indistinguishable relation embeddings when modeling complex relational structures. When compared with the bilinear and neural models, especially with the very recent model RotatE~\cite{RotatE}, TransEdge-CP still achieves the best Hits@1 scores on both datasets. The best Hits@1 performance shows that TransEdge-CP can precisely capture the relational structures of KGs for link prediction, rather than assign similar, ambiguous ranks to all possible candidates. We can also see that TransEdge-CC obtains the best MR result on WN18RR. Considering that WN18RR has only 11 relations but $40,943$ entities, we think this is because the MLPs can fit such complex relational structures well. Although TransEdge falls behind ConvKB and RotatE on other metrics such as Hits@10 and MRR, its model complexity is lower than theirs. For example, the convolution operations of ConvE and ConvKB are more complicated than the matrix multiplications used in the MLPs of TransEdge. Besides, the real-valued Euclidean embedding space of TransEdge is simpler than the complex vector spaces of ComplEx and RotatE.
\begin{table}[!t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{5pt}
\centering
\caption{Entity alignment results on DBP15K with double relations}
\label{tab:ent_alignment_double}
\resizebox{1.0\textwidth}{!}{
\begin{threeparttable}\scriptsize
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{DBP\textsubscript{ZH-EN} (double)} & \multicolumn{2}{c}{DBP\textsubscript{JA-EN} (double)} & \multicolumn{2}{c}{DBP\textsubscript{FR-EN} (double)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& Hits@1 & Hits@1$\downarrow$ & Hits@1 & Hits@1$\downarrow$ & Hits@1 & Hits@1$\downarrow$ \\
\midrule
MTransE~\cite{MTransE} & 0.230 & 25.32\% & 0.232 & 16.85\% & 0.208 & 14.75\%\\
\midrule
TransEdge-CC (w/o semi) & 0.601 & 3.38\% & 0.578 & 3.82\% & 0.585 & 5.18\% \\
TransEdge-CP (w/o semi) & \textbf{0.652} & \textbf{1.06}\% & \textbf{0.623} & \textbf{3.56}\% & \textbf{0.641} & \textbf{1.23}\% \\
\bottomrule
\end{tabular}
\end{threeparttable}}
\end{table}
\subsection{Analysis on Complex Relational Structures in KGs}
\subsubsection{One Entity Pair with Multiple Relations.}
For further comparison, we evaluate TransEdge on KGs with double relations. We create a dummy relation $r'$ for each relation $r$ and add a dummy triple $(h,r',t)$ for each $(h,r,t)$. The dummy relations and triples do not change the relational structures of the KGs, but they exacerbate the effect of one entity pair being connected by multiple relations. We compare TransEdge (w/o semi) with the relation-level translational model MTransE~\cite{MTransE}. Due to space limitations, we report the Hits@1 results on DBP15K and the decrease rates (marked as Hits@1$\downarrow$) relative to their performance in Table~\ref{tab:ent_alignment}. The results are listed in Table~\ref{tab:ent_alignment_double}. We can see that the performance of TransEdge varies much less than that of MTransE. This indicates that complex relational structures can indeed hinder entity alignment performance, while TransEdge is superior at modeling such structures.
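The double-relation construction described above can be sketched as follows (an illustrative snippet of our own; triples are plain tuples and the dummy relation is marked with a trailing prime):

```python
def double_relations(triples):
    """For each relation r, create a dummy relation r' and duplicate every
    triple (h, r, t) as (h, r', t).  This keeps the relational structure
    intact while ensuring every connected entity pair shares >= 2 relations."""
    doubled = list(triples)
    for h, r, t in triples:
        doubled.append((h, r + "'", t))
    return doubled
```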
\subsubsection{Multiple Entity Pairs with One Relation.}
Figure~\ref{fig:viz} shows the 2D visualization of the embeddings of some entity pairs sharing the relation \textit{capital} in DBP-WD. We project these embeddings into two dimensions using PCA. We can see that the embeddings of TransEdge exhibit flexible and robust relational structures. The translation vectors of \textit{capital} point in different directions when involved in different contexts. For the embeddings of MTransE, the translation vectors are almost parallel. This means that, if several entity pairs are involved in triples with the same relation, they are embedded very similarly by the shared relational translation, which hinders entity alignment performance. This experiment bears out the intuition behind TransEdge illustrated by Fig.~\ref{fig:example}.
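The 2D projection used for this visualization can be reproduced in minimal form via PCA on the centered embedding matrix; this is our own sketch using an SVD, not the paper's visualization code:

```python
import numpy as np

def pca_2d(embeddings):
    """Project d-dimensional embeddings to 2D via PCA: center the matrix,
    take an SVD, and keep the top two principal directions."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                          # coordinates on top-2 PCs
```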
\begin{figure}[!t]
\centering
\setlength{\abovecaptionskip}{2pt}
\setlength{\belowcaptionskip}{0pt}
\includegraphics[width=0.85\textwidth]{viz.pdf}
\caption{2D embedding projection of some countries (or states) and their \textit{capital} cities. The green arrows denote the translation vectors between entities.}
\label{fig:viz}
\end{figure}
\begin{figure}[!t]
\centering
\setlength{\abovecaptionskip}{2pt}
\setlength{\belowcaptionskip}{0pt}
\includegraphics[width=0.9\textwidth]{LogMap.pdf}
\caption{Results of TransEdge, LogMap~\cite{LogMap} and their combination on DWY100K.}
\label{fig:logmap}
\end{figure}
\subsection{Comparison with Conventional Entity Alignment Method}
Conventional entity alignment methods usually exploit literal attributes like names and comments, or OWL logics, to identify similar entities, which is quite different from TransEdge. We further compare TransEdge-CP with LogMap~\cite{LogMap}, a popular and accessible conventional entity alignment method. We use its web front-end system\footnote{\url{http://krrwebtools.cs.ox.ac.uk/logmap/}} to obtain its performance on the monolingual datasets DBP-WD and DBP-YG. We also design a strategy that combines the entity alignment produced by TransEdge-CP and LogMap (i.e., the Hits@1 alignment for TransEdge) by voting based on the predicted similarity.
We report the conventional precision, recall and F1-score results in Fig.~\ref{fig:logmap}. Note that, for embedding-based entity alignment, recall and F1-score are equal to precision, because a candidate list is always produced for each input entity based on the embeddings. We can see that LogMap shows very competitive performance and outperforms TransEdge and the other embedding-based models. However, the combined results achieve the best performance. This shows that TransEdge is complementary to conventional entity alignment methods.
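One plausible reading of the combination strategy is a similarity-based vote over the two predicted alignments; the sketch below is our own interpretation (the exact voting rule used in the paper may differ):

```python
def combine_alignments(trans_edge, logmap):
    """Combine two predicted alignments, each a dict mapping a source
    entity to (target entity, similarity).  On agreement the pair is kept;
    on conflict the prediction with the higher similarity wins."""
    combined = dict(logmap)
    for src, (tgt, sim) in trans_edge.items():
        if src not in combined or combined[src][1] < sim:
            combined[src] = (tgt, sim)
    return combined
```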
\section{Conclusion and Future Work}
\label{sec:conc}
In this paper, we proposed a relation-contextualized KG embedding model. It represents relations with context-specific embeddings and builds edge translations between entities to preserve KG structures. We proposed context compression and projection to compute edge embeddings. Our experiments on standard datasets demonstrated its effectiveness on entity alignment and link prediction. For future work, we plan to study techniques like language models to represent multi-hop relation contexts. We also want to incorporate other proximity measures into the preserved KG structures, such as attribute similarity.\\
\noindent\textbf{Acknowledgments.} This work is funded by the National Natural Science Foundation of China (No. 61872172), and the Key R\&D Program of Jiangsu Science and Technology Department (No. BE2018131).
\bibliographystyle{splncs04}
\section{Introduction and main results}\label{sec:Introduction}
In recent breakthroughs, random walks on various families of random graphs were shown to exhibit the cutoff phenomenon, which is a sharp transition in the convergence to equilibrium \cite{BS:CutoffNonBack,BLPS:CutoffErdos,LS:CutoffRegular}.
Motivated by these results, we address the question of which families of (random) trees admit cutoff for the simple random walk. In particular, we investigate the simple random walk on Galton--Watson trees when the offspring distribution $\mu$ has mean $m \geq 1$ and finite variance $\sigma^2 \in (0,\infty)$. Moreover, we study the simple random walk on spherically symmetric trees. Our results on cutoff for these families of trees are summarized as follows.
\begin{theorem}\label{thm:CutoffExamples} The simple random walk (almost surely) does not exhibit cutoff if the underlying trees $(G_n)_{n\geq 1}$ come from one of the following three constructions:
\begin{itemize}
\item A supercritical Galton--Watson tree ($m>1$) conditioned on survival and truncated at generation $n$.
\item A family of Galton--Watson trees conditioned on having $n$ sites.
\item A spherically symmetric tree of bounded degree truncated at generation $n$.
\end{itemize}
\end{theorem}
We refer to Section \ref{sec:CutoffAtypical} for precise formal definitions and statements. Intuitively, Theorem \ref{thm:CutoffExamples} is due to the fact that Galton--Watson trees are typically \textit{short and fat}, as shown by Addario-Berry in \cite{AB:ShortAndFat}, while spherically symmetric trees are \textit{balanced} in the number of sites by construction. We provide criteria showing that trees are typically \textit{long}, \textit{thin} and \textit{imbalanced} when leading to cutoff, see Theorems \ref{thm:NoCutoff} and \ref{thm:CutoffMiclo}. Our arguments rely on a characterization of the cutoff phenomenon on trees that was recently proven by Basu, Hermon and Peres \cite{BHP:CutoffTrees}.
Moreover, we give an explicit construction for trees such that cutoff occurs, see Theorem \ref{thm:CutoffCompactification}. This allows us to give a simple example of trees for which the simple random walk exhibits cutoff, see Corollary \ref{cor:NewTreesWithCutoff}. Previously, a first example of trees with cutoff was obtained in \cite{PS:CutoffTree}. \\
We now give a brief introduction to the theory of mixing times and refer to \cite{LPW:markov-mixing} for a more comprehensive treatment.
Let $G=(V,E,o)$ be a finite, rooted graph. Let $(\eta_t)_{t \geq 0}$ denote the (continuous-time) simple random walk on $G$ in the variable speed model, i.e.\ the walker moves along every edge of the tree at rate $1$. For two probability measures $\mu,\nu$ on $V$, let their \textbf{total-variation distance} be
\begin{equation}\label{def:TVdistance}
\TV{\mu-\nu} := \max_{A \subseteq V} \left|\mu(A)-\nu(A)\right| \ .
\end{equation} The $\boldsymbol{\varepsilon}$\textbf{-mixing time} of $(\eta_t)_{t \geq 0}$ is now given, for $\varepsilon \in (0,1)$, by
\begin{equation}\label{def:MixingTime}
t_{\textup{mix}}(\varepsilon) := \inf \left\{ t \geq 0 \colon \max_{x \in V} \TV{\P(\eta_t \in \cdot | \eta_0=x) - \pi }\leq \varepsilon \right\}
\end{equation}
where $\pi$ is the uniform distribution on $V$. For a sequence of graphs $(G_n)_{n \geq 1}$, let $(t^n_{\textup{mix}}(\varepsilon))_{n \geq 1}$ denote the collection of mixing times of the random walks on $(G_n)$. We say that the family of random walks on $(G_n)$ exhibits \textbf{cutoff} if for any $\varepsilon \in (0,1)$
\begin{equation}\label{def:Cutoff}
\lim_{n \rightarrow \infty} \frac{t^n_{\textup{mix}}(1-\varepsilon)}{t^n_{\textup{mix}}(\varepsilon)} = 1 \ .
\end{equation}
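For small graphs, the total-variation distance and the $\varepsilon$-mixing time of the variable-speed walk can be computed directly from the spectral decomposition of the generator $L = A - D$; the following sketch (our own, using the uniform stationary distribution as above) scans a time grid for the first time the worst-case distance drops below $\varepsilon$:

```python
import numpy as np

def tv_mixing_time(adj, eps=0.25, t_grid=None):
    """First time on the grid at which the worst-case total-variation
    distance to the uniform distribution drops below eps, for the
    variable-speed continuous-time SRW with symmetric generator L = A - D."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = A - np.diag(A.sum(axis=1))               # generator of the walk
    w, Q = np.linalg.eigh(L)                     # spectral decomposition
    pi = np.full(n, 1.0 / n)                     # uniform stationary law
    if t_grid is None:
        t_grid = np.linspace(0.0, 10.0, 1001)
    for t in t_grid:
        P_t = Q @ np.diag(np.exp(t * w)) @ Q.T   # heat kernel e^{tL}
        d = max(0.5 * np.abs(P_t[x] - pi).sum() for x in range(n))
        if d <= eps:
            return float(t)
    return float("inf")
```

On a single edge the walk relaxes at rate $2$, so the $1/4$-mixing time is $\tfrac{1}{2}\ln 2 \approx 0.347$, which the grid search recovers up to its resolution.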
The cutoff phenomenon was first verified in \cite{DS:SpectrumCompleteGraph}, and obtained its name in the seminal paper of Aldous and Diaconis \cite{AD:Shuffling}. Ever since, there has been a lot of activity towards showing that specific examples of Markov chains exhibit cutoff, see for example \cite{BS:CutoffNonBack,BLPS:CutoffErdos,LS:CutoffRegular}. The ultimate goal is to produce a necessary and sufficient condition for cutoff. It turns out that
cutoff has deep connections to the spectral properties of the underlying random walk, see the results \cite{EHP:Hardy,LMW:SpectralGapTrees,M:SpectrumMarkovChains,M:EigenfunctionsOnTrees} of Miclo and others for a discussion of the spectrum of random walks on trees, and the work of Jonasson \cite{J:MixingInterchange} for a collection of results on the spectrum of random walks on random graphs. \\
It was conjectured by Peres that a necessary and sufficient condition for cutoff is that for some $\varepsilon \in (0,1)$ (or, equivalently, for all $\varepsilon \in (0,1)$)
\begin{equation}\label{eq:ProductCriterionIntro}
\lim_{n \rightarrow \infty} t_{\text{mix}}^n(\varepsilon) \lambda_n = \infty \ ,
\end{equation} where $\lambda_n$ denotes the spectral gap of the random walk on $G_n$, see also Section \ref{sec:PrelimSpectral} for a formal definition. However, while this product criterion is indeed necessary, Aldous showed that it is not sufficient, see Chapter 18 of \cite{LPW:markov-mixing}. Nevertheless, it is believed to be sufficient for a wide range of families of Markov chains. \\
\vspace{-0.5mm}For birth-and-death chains, it was shown by Ding, Lubetzky and Peres that the product criterion is sharp \cite{DLP:BirthDeath}. Moreover, Chen and Saloff-Coste provide simple conditions under which \eqref{eq:ProductCriterionIntro} holds \cite{CS:SpectrumBirthDeath,CS:CutoffTimes}. Using their results, we will be able to exclude cutoff on spherically symmetric trees in Section \ref{sec:Spherically}, following a projection argument due to Nestoridi and Nguyen for $d$--regular trees \cite{NN:SpectrumTree}. Recently, it was also verified for the simple random walk on trees by Basu, Hermon and Peres \cite{BHP:CutoffTrees} that the product criterion is sufficient for cutoff, see also Lemma \ref{lem:ProductCriterion} for a formal statement of their result. A first example of a family of trees with cutoff was found by Peres and Sousi \cite{PS:CutoffTree}. Note that although their results are stated for a discrete-time model of the simple random walk, they can easily be converted to the continuous-time setup using \cite{CS:DiscreteVSContinuous}. We will re-obtain the result of Peres and Sousi using Theorem \ref{thm:CutoffMiclo}, see Corollary \ref{cor:CutoffTree}, and refer to Corollary \ref{cor:NewTreesWithCutoff} for a simplified example of a family of trees with cutoff. \\
\vspace{-0.5mm}In the following, we will focus on random walks on finite trees and assume that the underlying graphs will always be a sequence of rooted trees.
Before we come to the main theorems, we introduce some notation. We let $\ell(v)$ denote the set of edges on the shortest path from a vertex $v \in V$ to the root $o$, and write $|v|:=|\ell(v)|$. For every edge $e\in E$, we let $e_{-},e_{+} \in V$ with $|e_{-}|<|e_{+}|$ denote the endpoints of the edge and set $|e|:=|e_{+}|$.
Further, let $T_v$ be the largest subtree of $G$ rooted at $v$ which consists only of sites with distance at least $|v|$ from $o$, and we use the conventions that $T_e:=T_{e_{+}}$ and $|T_e|:= |V(T_{e_+})|$ for all $e\in E$. We say that a vertex $x$ is a $\boldsymbol{\delta}$\textbf{-center of mass} if there are two trees $T$ and $\tilde{T}$ of $G$ with $V(T) \cap V(\tilde{T}) = \{x\}$ and $V(T) \cup V(\tilde{T}) = V(G)$ such that
\begin{equation}\label{def:CenterOfMass}
|V(T)|\geq \delta|V(G)| ,\qquad |V(\tilde{T})|\geq\delta |V(G)| \ .
\end{equation} The existence of a $\delta$-center of mass is guaranteed for all $\delta \in [0,\frac{1}{3}]$ by the following result. For its proof, we refer to the appendix.
\begin{proposition}\label{pro:CenterOfMass}
Every tree $G$ contains at least one vertex which is a $\frac{1}{3}$-center of mass.
\end{proposition}
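One standard route to such a statement is via the classical tree centroid, a vertex whose removal leaves components of size at most $n/2$; the sketch below (our own code, not the appendix proof) finds such a vertex:

```python
def centroid(adj):
    """Classical tree centroid: a vertex whose removal leaves components
    of size at most n/2.  `adj` maps each vertex to its list of neighbours;
    vertex 0 is used as the DFS root."""
    n = len(adj)
    parent, order, stack = {0: None}, [], [0]
    while stack:                              # iterative DFS preorder
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    size = {}
    for v in reversed(order):                 # subtree sizes, bottom-up
        size[v] = 1 + sum(size[u] for u in adj[v] if parent.get(u) == v)
    for v in order:
        comps = [size[u] for u in adj[v] if parent.get(u) == v]
        comps.append(n - size[v])             # the component above v
        if max(comps) <= n // 2:
            return v
```

Grouping the components around such a vertex into two subtrees, each part can be made to hold at least a third of the sites, matching the proposition.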
From now on, for a given family of trees $(G_n)$, we assume without loss of generality that the root $o_n$ is chosen to be a $\delta$-center of mass for some $\delta>0$ which does not depend on $n$. For a pair of real-valued, non-negative sequences $(f(n))_{n \geq 0}$ and $(g(n))_{n \geq 0}$, we write $f \asymp g$ if $f$ and $g$ have the same order of magnitude, and we write $f \ll g$ if $f(n)/g(n)$ converges to $0$ as $n$ goes to infinity. \\
\vspace{-0.5mm}We start with a criterion on trees which allows us to determine when cutoff does not occur for the simple random walk.
\begin{theorem}\label{thm:NoCutoff} For a sequence of trees $(G_n)$ suppose that
\begin{equation}\label{eq:NoCutoffCriterion}
\max_{e\in E(G_n)} |e| |T_e| \asymp \max_{v \in V(G_n)} \sum_{e \in \ell(v)} |T_e|\, .
\end{equation} Then the simple random walk on $(G_n)$ does not exhibit cutoff.
\end{theorem}
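The two quantities compared in criterion \eqref{eq:NoCutoffCriterion} can be computed for a concrete rooted tree as follows (our own illustrative code; each edge $e$ is identified with its endpoint $e_{+}$ farther from the root, so that $|e|$ is the depth of $e_{+}$):

```python
def cutoff_statistics(adj, root=0):
    """For a rooted tree, return the pair
    (max_e |e|*|T_e|,  max_v sum_{e in l(v)} |T_e|),
    i.e. the two sides of the (no-)cutoff criterion.
    `adj` maps each vertex to its list of neighbours."""
    parent, depth, order, stack = {root: None}, {root: 0}, [], [root]
    while stack:                              # iterative DFS preorder
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u], depth[u] = v, depth[v] + 1
                stack.append(u)
    size = {}
    for v in reversed(order):                 # |T_v|, bottom-up
        size[v] = 1 + sum(size[u] for u in adj[v] if parent.get(u) == v)
    max_edge = max(depth[v] * size[v] for v in order if v != root)
    path_sum = {root: 0}                      # sum of |T_e| over l(v)
    for v in order:
        if v != root:
            path_sum[v] = path_sum[parent[v]] + size[v]
    return max_edge, max(path_sum.values())
```

On a segment the two quantities have the same order, consistent with the absence of cutoff noted below the theorem.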
Using Theorem \ref{thm:NoCutoff}, one can directly check that for example the simple random walk on the segment of size $n$ or on the regular tree truncated at level $n$ does not exhibit cutoff. In order to state a converse theorem, which guarantees the existence of cutoff, we require the following definition. Fix a tree $G$ and a vertex $v \neq o$.
Let $v_0=o,v_1,\dots, v_k=v$ denote the sites in $\ell(v)$ for $k=|v|$. For each $v_i$, let $\bar{T}_i$ be the largest subtree which is attached to $v_i$ and does not intersect with $\ell(v)$ in any site other than $v_i$. A $\boldsymbol{v}$\textbf{-retraction} of $G$ is a tree $\tilde{G}$ rooted at some site $\tilde{o}$, which consists of a segment of size $|\ell(v)|$ with $\tilde{o}$ at one of its endpoints. In addition, we let a binary tree of size $|\bar{T}_i|$ emerge from each site in the segment at distance $i$ (if $|\bar{T}_i|$ is not a power of two, consider the binary tree with leaves missing at the last level).
\begin{figure} \label{fig:Retraction}
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node[shape=circle,scale=1,fill] (A) at (0,0){} ;
\node[shape=circle,scale=1,fill] (B) at (3,0){} ;
\node[shape=circle,scale=1,fill] (C) at (5.5,0){} ;
\node[shape=circle,scale=1,fill] (D) at (7.5,0){} ;
\draw[line width=1.7pt,RoyalBlue,dashed] (A) to (B);
\draw[line width=1.7pt,RoyalBlue,dashed] (B) to (C);
\draw[line width=1.7pt,RoyalBlue,dashed] (C) to (D);
\node[shape=circle,scale=1,fill] (AX) at (11,0){} ;
\node[shape=circle,scale=1,fill] (BX) at (14,0){} ;
\node[shape=circle,scale=1,fill] (CX) at (16.5,0){} ;
\node[shape=circle,scale=1,fill] (DX) at (18.5,0){} ;
\draw[line width=1.7pt,RoyalBlue,dashed] (AX) to (BX);
\draw[line width=1.7pt,RoyalBlue,dashed] (BX) to (CX);
\draw[line width=1.7pt,RoyalBlue,dashed] (CX) to (DX);
\node[shape=circle,scale=1,fill] (A1) at (0,-1.5){} ;
\node[shape=circle,scale=1,fill] (A2) at (0,-3){} ;
\node[shape=circle,scale=1,fill] (A3) at (-0.8,-4.5){} ;
\node[shape=circle,scale=1,fill] (A4) at (0,-4.5){} ;
\node[shape=circle,scale=1,fill] (A5) at (0.8,-4.5){} ;
\node[shape=circle,scale=1,fill] (B1) at (3-0.4,-1.5){} ;
\node[shape=circle,scale=1,fill] (B2) at (3.4,-1.5){} ;
\node[shape=circle,scale=1,fill] (B3) at (3-0.4,-3){} ;
\node[shape=circle,scale=1,fill] (B4) at (3.4,-3){} ;
\node[shape=circle,scale=1,fill] (C1) at (5.5,-1.5){} ;
\node[shape=circle,scale=1,fill] (C2) at (5.5,-3){} ;
\node[shape=circle,scale=1,fill] (D1) at (7.5,-1.5){} ;
\draw[thick] (A) to (A1);
\draw[thick] (A1) to (A2);
\draw[thick] (A2) to (A3);
\draw[thick] (A2) to (A4);
\draw[thick] (A2) to (A5);
\draw[thick] (B) to (B1);
\draw[thick] (B) to (B2);
\draw[thick] (B1) to (B3);
\draw[thick] (B2) to (B4);
\draw[thick] (C) to (C1);
\draw[thick] (C1) to (C2);
\draw[thick] (D) to (D1);
\draw (0,-0.55) to [closed, curve through = {(-0.25,-0.7)(-0.7,-5)(0,-5.1)(0.7,-5)(0.25,-0.7)}] (0,-0.55);
\draw (3,-0.55) to [closed, curve through = {(3-0.25,-0.7)(3-0.7,-3.5)(3,-3.6)(3.7,-3.5)(3.25,-0.7)}] (3,-0.55);
\draw (5.5,-0.55) to [closed, curve through = {(5.5-0.25,-0.7)(5.5-0.3,-3.5)(5.5,-3.55)(5.8,-3.5)(5.75,-0.7)}] (5.5,-0.55);
\draw (7.5,-0.55) to [closed, curve through = {(7.5-0.25,-0.7)(7.5-0.3,-2.2)(7.5,-2.25)(7.8,-2.2)(7.75,-0.7)}] (7.5,-0.55);
\node[scale=1] at (0.95,-0.5){$v_0=o$};
\node[scale=1] at (3.5,-0.5){$v_1$};
\node[scale=1] at (6,-0.5){$v_2$};
\node[scale=1] at (8.5,-0.5){$v_3=v$};
\node[scale=0.9] at (-1,-3.5){$\bar{T}_0$};
\node[scale=0.9] at (3,-4){$\bar{T}_1$};
\node[scale=0.9] at (5.5,-4){$\bar{T}_2$};
\node[scale=0.9] at (7.5,-2.7){$\bar{T}_3$};
\node[shape=circle,scale=1,fill] (AX1) at (11-0.4,-1.5){} ;
\node[shape=circle,scale=1,fill] (AX2) at (11.4,-1.5){} ;
\node[shape=circle,scale=1,fill] (AX3) at (11-0.8,-3){} ;
\node[shape=circle,scale=1,fill] (AX4) at (11,-3){} ;
\node[shape=circle,scale=1,fill] (AX5) at (11.8,-3){} ;
\node[shape=circle,scale=1,fill] (BX1) at (14-0.4,-1.5){} ;
\node[shape=circle,scale=1,fill] (BX2) at (14.4,-1.5){} ;
\node[shape=circle,scale=1,fill] (BX3) at (14-0.8,-3){} ;
\node[shape=circle,scale=1,fill] (BX4) at (14,-3){} ;
\node[shape=circle,scale=1,fill] (CX1) at (16.1,-1.5){} ;
\node[shape=circle,scale=1,fill] (CX2) at (16.9,-1.5){} ;
\node[shape=circle,scale=1,fill] (DX1) at (18.5,-1.5){} ;
\draw[thick] (AX) to (AX1);
\draw[thick] (AX) to (AX2);
\draw[thick] (AX1) to (AX3);
\draw[thick] (AX1) to (AX4);
\draw[thick] (AX2) to (AX5);
\draw[thick] (BX) to (BX1);
\draw[thick] (BX) to (BX2);
\draw[thick] (BX1) to (BX3);
\draw[thick] (BX1) to (BX4);
\draw[thick] (CX) to (CX1);
\draw[thick] (CX) to (CX2);
\draw[thick] (DX) to (DX1);
\draw (0,-0.55) to [closed, curve through = {(-0.25,-0.7)(-0.7,-5)(0,-5.1)(0.7,-5)(0.25,-0.7)}] (0,-0.55);
\draw (3,-0.55) to [closed, curve through = {(3-0.25,-0.7)(3-0.7,-3.5)(3,-3.6)(3.7,-3.5)(3.25,-0.7)}] (3,-0.55);
\draw (5.5,-0.55) to [closed, curve through = {(5.5-0.25,-0.7)(5.5-0.3,-3.5)(5.5,-3.55)(5.8,-3.5)(5.75,-0.7)}] (5.5,-0.55);
\draw (7.5,-0.55) to [closed, curve through = {(7.5-0.25,-0.7)(7.5-0.3,-2.2)(7.5,-2.25)(7.8,-2.2)(7.75,-0.7)}] (7.5,-0.55);
\node[scale=1] at (11.5,-0.5){$\tilde{o}$};
\node[scale=1] at (19,-0.5){$v$};
\end{tikzpicture}
\end{center}
\caption{\label{fig:RetractionCap} Example of a tree $G$ on the left-hand side and its corresponding $v$-retraction $\tilde{G}$ on the right-hand side. The edges in the segment $\ell(v)$ connecting the root and the site $v$ are depicted in dashed blue.}
\end{figure}
\begin{theorem}\label{thm:CutoffCompactification} For every $G_n$ in $(G_n)_{n \geq 1}$, suppose that we find some $v^{\ast}_n \in V(G_n)$ with
\begin{equation}\label{eq:MaximalPath}
\sum_{e \in \ell(v^{\ast}_n)} |T_e| \asymp \max_{v \in V(G_n)} \sum_{e \in \ell(v)} |T_e| \gg |V(G_n)| \, .
\end{equation} If we have that
\begin{equation}\label{eq:HyperbolicGrowth}
\max_{e\in \ell(v^{\ast}_n)} |e| |T_e| \ll \max_{v \in V(G_n)} \sum_{e \in \ell(v)} |T_e|
\end{equation} then for any sequence $(\tilde{G}_n)$ of $v^{\ast}_n$-retractions of $(G_n)$, we have that the simple random walk on $(\tilde{G}_n)$ exhibits cutoff.
\end{theorem}
We will see that taking retractions is necessary for cutoff in Theorem \ref{thm:CutoffCompactification} when we study random walks on spherically symmetric trees in Section \ref{sec:Spherically}, see Remark \ref{rem:NecessaryRetraction}. Note that instead of attaching binary trees, we may also allow attaching other classes of trees with sufficiently fast growth, see Remark \ref{rem:AlphaRetraction} for precise conditions on the attached trees.
We will now give a simple example of a family of trees on which the simple random walk exhibits cutoff.
This follows as an immediate consequence of Theorem \ref{thm:CutoffCompactification}.
\begin{corollary}\label{cor:NewTreesWithCutoff} Consider the family of trees $(G_n)_{n\geq 1}$, where $G_n$ consists of a segment of size $n$, rooted at one of its endpoints, and binary trees of size $\lfloor n/(i+1)^{2}\rfloor$ attached at distance $i$ from the root for all $i\in \{0,1,\dots,n\}$. Then the root is a $\delta$-center of mass for $\delta=\frac{1}{6}$ and the simple random walk on $(G_n)$ exhibits cutoff.
\end{corollary}
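As a quick numerical sanity check of the $\delta$-center-of-mass claim (our own bookkeeping; we count $n+1$ segment sites, which is immaterial asymptotically), one can compute the fraction of sites lying in the binary tree attached at the root:

```python
def corollary_tree_balance(n):
    """Fraction of sites in the binary tree attached at the root of G_n:
    a segment of size n with a binary tree of size floor(n/(i+1)^2)
    attached at distance i from the root, for i = 0, ..., n."""
    attached = [n // (i + 1) ** 2 for i in range(n + 1)]
    total = (n + 1) + sum(attached)    # segment sites + attached trees
    return attached[0] / total
```

Since $\sum_{i\ge 0}(i+1)^{-2} = \pi^2/6$, the root's binary tree carries roughly $6/(6+\pi^2) \approx 0.38$ of all sites, comfortably above $\delta = \frac{1}{6}$.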
Our last main result is a criterion which is particularly suited to verify cutoff for \textit{thin} and \textit{long} trees, see also the trees constructed in \cite{PS:CutoffTree}. For all $k \geq 0$, we set
\begin{equation}\label{V_m}
V_k=V_k(G):= \bigcup_{v \colon |v|= k} V(T_v) \ .
\end{equation}
\begin{theorem}\label{thm:CutoffMiclo} Recall \eqref{V_m} and suppose that a family of trees $(G_n)$ with maximum degrees $(\Delta_n)$ satisfies
\begin{equation}\label{eq:ComplimentAssumption}
\max_{k \geq 1} k | V_k(G_n) | \ll \frac{1}{\Delta_n}\max_{v \in V(G_n)} \sum_{e \in \ell(v)} |T_e| \ .
\end{equation} Then the simple random walk on $(G_n)$ exhibits cutoff.
\end{theorem}
\subsection{Outline of the paper} \label{sec:Outline}
This paper will be organized as follows. In Section \ref{sec:PrelimSpectral}, we discuss preliminary estimates on the spectral gap of the simple random walk on trees. We present two bounds on the spectral gap, a first bound using a characterization of the spectral gap via a discrete Hardy inequality, and a second bound using weighted paths which follows directly from the first characterization. In Section \ref{sec:PrelimMixing}, we present preliminary facts on upper and lower bounds on the mixing time using a representation as hitting times of large sets. Building on these results, we prove our main criteria for the occurrence of cutoff in Section \ref{sec:Proofs}. Section \ref{sec:CutoffAtypical} is dedicated to applying these criteria to the families of trees in Theorem \ref{thm:CutoffExamples}, and showing that the simple random walk does not exhibit cutoff. In particular, we verify the absence of cutoff for the simple random walk on spherically symmetric trees, supercritical Galton--Watson trees conditioned on survival, and families of trees which converge to the Brownian continuum random tree. The latter includes Galton--Watson trees conditioned to contain a certain number of sites. We conclude with an outlook on open problems.
\section{Some facts about the spectral gap on trees} \label{sec:PrelimSpectral}
In order to study cutoff for the simple random walk on trees, our key tool will be to give bounds on the spectral gap of the random walk $(\eta_t)_{t \geq 0}$. Let $\mathcal{L}$ denote the generator of the random walk on a tree $G=(V,E)$. We study pairs of eigenvalues and eigenfunctions for the random walk, i.e.\ we want to find $(\mu,f)$ for $\mu \in \mathbb{C}$ and $f \colon V \rightarrow \mathbb{C}$ such that
\begin{equation}
(\mathcal{L}f)(x) = \mu f(x)
\end{equation} holds for all $x \in V$. Note that since $(\eta_t)_{t \geq 0}$ is reversible, all eigenvalues of $\mathcal{L}$ are real-valued. Moreover, the function $f \equiv 1$ is always an eigenfunction with respect to the eigenvalue $\mu=0$. Our goal is to investigate the \textbf{spectral gap} $\lambda$ of the process, i.e.\ the absolute value of the second largest eigenvalue of $\mathcal{L}$, respectively the \textbf{relaxation time}
\begin{equation}\label{relax}
t_{\textup{rel}}:=\frac{1}{\lambda}.
\end{equation}
Recall the following variational characterization of $\lambda$, see Definition 2.1.3 in \cite{S:SaintFlour}.
\begin{lemma}\label{lem:Rayleigh} Let $\lambda$ be the spectral gap of the simple random walk $(\eta_t)_{t \geq 0}$ on the tree $G$. Then we have that
\begin{equation}
\lambda = \min_{f \colon V \rightarrow \mathbb{R}, \textup{Var}(f)\neq 0} \frac{\mathcal{E}(f)}{\textup{Var}(f)}
\end{equation} where we set
\begin{equation*}\label{def:DirichletVariance}
\mathcal{E}(f):=\frac{1}{|V|}\sum_{e\in E} \left( f(e_{+})-f(e_{-}) \right)^2, \quad \textup{Var}\left( f\right) := \frac{1}{|V|}\sum_{v \in V} f(v)^2 - \frac{1}{|V|^2}\Big( \sum_{v \in V} f(v)\Big)^2 .
\end{equation*}
\end{lemma} The quantities $\mathcal{E}(f)$ and $\textup{Var}\left( f\right)$ are the Dirichlet form and the variance of the function $f$, see Chapter 13 of \cite{LPW:markov-mixing} for a more general introduction.
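Numerically, the variational characterization can be checked against a direct eigenvalue computation: for the variable-speed walk the generator is $\mathcal{L} = A - D$, and the spectral gap is the absolute value of its second-largest eigenvalue. A minimal sketch (our own, for small graphs):

```python
import numpy as np

def spectral_gap(adj_matrix):
    """Spectral gap of the variable-speed SRW: the absolute value of the
    second-largest eigenvalue of the generator L = A - D, which by the
    Rayleigh characterization equals min_f E(f)/Var(f)."""
    A = np.asarray(adj_matrix, dtype=float)
    L = A - np.diag(A.sum(axis=1))
    w = np.linalg.eigvalsh(L)          # ascending; w[-1] = 0 (f == 1)
    return float(-w[-2])
```

For the segment on three sites the generator's nonzero eigenvalues are $-1$ and $-3$, so the spectral gap is $1$ and the relaxation time \eqref{relax} equals $1$.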
\subsection{A discrete Hardy inequality on trees} \label{sec:Hardy}
Using Lemma \ref{lem:Rayleigh}, we obtain the following characterization of the spectral gap in terms of a (discrete) Hardy inequality on trees, see also \cite{EHP:Hardy}.
\begin{lemma}\label{lem:EquivalentHardy} Recall that $o$ is a $\delta$-center of mass for some $\delta>0$, and let $T$ and $\tilde{T}$ be two trees which satisfy \eqref{def:CenterOfMass}. Let $A$ be the smallest constant such that we have
\begin{equation}\label{eq:HardyTree1}
\sum_{v\in V(T)} \Big(\sum_{e \in \ell(v)} g(e)\Big)^2 \leq A \sum_{e \in E(T)} g(e)^2
\end{equation} for all functions $g \colon E(T) \rightarrow \mathbb{R}$ with $g \not\equiv 0$ as well as
\begin{equation}\label{eq:HardyTree2}
\sum_{v\in V(\tilde T)} \Big(\sum_{e \in \ell(v)} \tilde{g}(e)\Big)^2 \leq A \sum_{e \in E(\tilde T)} \tilde{g}(e)^2
\end{equation} for all functions $\tilde{g} \colon E(\tilde{T}) \rightarrow \mathbb{R}$ with $\tilde{g} \not\equiv 0$. Then we have that the spectral gap $\lambda$ of the simple random walk on $G$ satisfies
\begin{equation}\label{eq:RelationAtoSpectralGap}
\lambda \in \left[ \frac{1}{A},\frac{1}{\delta A}\right] \ .
\end{equation}
\end{lemma}
\begin{proof} For any function $f\colon V(G) \rightarrow \mathbb{R}$, we set
\begin{equation}
f_T(v):= (f(v)-f(o)) 1_{v \in V(T)}, \qquad f_{\tilde{T}}(v):= (f(v)-f(o)) 1_{v \in V(\tilde{T})}
\end{equation} for all $v \in V(G)$. Using the definition of $\textup{Var}(f)$, we see that
\begin{equation}\label{eq:SplitDirichlet}
\frac{\mathcal{E}(f)}{\textup{Var}(f)} \geq \frac{\mathcal{E}(f_T)+ \mathcal{E}( f_{\tilde{T}})}{\bar{f}_T+ \bar{f}_{\tilde{T}}} \geq \min\left\{ \frac{\mathcal{E}(f_T)}{\bar{f}_T} , \frac{\mathcal{E}( f_{\tilde{T}})}{ \bar{f}_{\tilde{T}}} \right\}
\end{equation} where we set
\begin{equation}
\bar{f}_{T} := \frac{1}{|V(G)|} \sum_{v\in V(T) } (f_T(v))^2 ,\qquad \bar{f}_{\tilde T} := \frac{1}{|V(G)|} \sum_{v\in V(\tilde T) } (f_{\tilde{T}}(v))^2 \ .
\end{equation} We set $g_T(e):=f_T(e_{+})-f_T(e_{-})$ for $e\in E(T)$ and $g_{\tilde{T}}(e):=f_{\tilde{T}}(e_{+})-f_{\tilde{T}}(e_{-})$ for $e\in E({\tilde{T}})$. The inequalities \eqref{eq:HardyTree1} and \eqref{eq:HardyTree2} applied to $g_T$ and $g_{\tilde{T}}$ yield
$\mathcal{E}(f_T)\geq \frac{1}{A}\bar{f}_T $ and $\mathcal{E}( f_{\tilde{T}}) \geq \frac{1}{A}\bar{f}_{\tilde{T}}$. Now, we use Lemma \ref{lem:Rayleigh} and \eqref{eq:SplitDirichlet} to conclude the lower bound on $\lambda$ which is claimed in \eqref{eq:RelationAtoSpectralGap}. \\
For the corresponding upper bound on $\lambda$, assume without loss of generality that the constant $A$ is determined by \eqref{eq:HardyTree1}, and let $g^{\ast}$ be an extremal function, i.e.\ one attaining equality in \eqref{eq:HardyTree1}. We may take $g^{\ast}(e) \geq 0$ for all $e \in E(T)$, since replacing $g^{\ast}$ by $|g^{\ast}|$ can only increase the left-hand side of \eqref{eq:HardyTree1}. Define a function $h:V(G) \rightarrow \mathbb{R}$ by
\begin{equation}
h(v) := 1_{v\in V(T)}\sum_{e\in \ell(v)} g^{\ast}(e) \ .
\end{equation} Recall that $\pi$ is the uniform measure on $V(G)$ and use the Paley–Zygmund inequality to see that
\begin{equation*}
\textup{Var}(h) \geq (1- \pi(h>0))\frac{1}{|V(G)|} \sum_{v \in V(G)} h(v)^2
\end{equation*}
and hence, we must have
\begin{equation*}
\textup{Var}(h) \geq \pi(h=0) \mathcal{E}(h)A \geq \delta A\mathcal{E}(h) \ .
\end{equation*} Using the characterization of $\lambda$ in Lemma \ref{lem:Rayleigh}, this concludes the proof.
\end{proof}
We will use Lemma \ref{lem:EquivalentHardy} in Sections \ref{sec:BoundRelaxation} and \ref{sec:CriterionMiclo} in order to obtain upper and lower bounds on the spectral gap, respectively.
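Numerically, the optimal constant in a Hardy inequality such as \eqref{eq:HardyTree1} is a matrix norm: if $M$ is the $|V(T)| \times |E(T)|$ zero-one matrix with $M_{v,e}=1$ exactly when $e \in \ell(v)$, then $A = \|M\|_2^2$, the largest squared singular value. The sketch below (illustrative only, not part of the proof) checks the two-sided bound \eqref{eq:RelationAtoSpectralGap} for a path with $2m+1$ vertices rooted at its midpoint, where the two halves play the roles of $T$ and $\tilde{T}$; assuming the convention that each component carries at least a $\delta$-fraction of the sites, the midpoint is a $\delta$-center of mass with $\delta = m/(2m+1)$.

```python
import numpy as np

def hardy_constant(m):
    """A = ||M||^2 for a path of m edges hanging off the root:
    M[j, i] = 1 iff edge i lies on the root-path to vertex j, a lower-triangular ones matrix."""
    M = np.tril(np.ones((m, m)))
    return np.linalg.norm(M, 2) ** 2      # ord=2: largest singular value

def path_gap(n):
    """Spectral gap of the SRW on a path with n vertices (second smallest Laplacian eigenvalue)."""
    L = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    L[0, 0] = L[-1, -1] = 1.0
    return np.linalg.eigvalsh(L)[1]

m = 5                      # half-length; the path has 2m+1 vertices, rooted at the midpoint
A = hardy_constant(m)      # by symmetry, both halves give the same constant
lam = path_gap(2 * m + 1)
delta = m / (2 * m + 1)    # each half carries a delta-fraction of the vertices
```

The two-sided bound of Lemma \ref{lem:EquivalentHardy} then reads $1/A \leq \lambda \leq 1/(\delta A)$; for the centered path, the lower bound is in fact attained.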
\subsection{A bound on the relaxation time by weighted paths} \label{sec:WeightedPaths}
Next, we give an estimate which allows us to obtain upper bounds on the relaxation time for the simple random walk on trees. Note that this bound was already obtained in \cite{LMW:SpectralGapTrees} in a more general setup, including a corresponding lower bound of the same form within a factor of $2$. For the convenience of the reader, we provide a short proof for our special case of the simple random walk.
Recall \eqref{relax}.
\begin{proposition} \label{pro:UpperBoundRelaxationTime} Let $(a_e)_{e \in E(G)}$ be any family of positive edge weights for the tree $G$. For any choice of the $(a_e)$'s, we have that
\begin{equation}\label{eq:UpperBoundRelax}
t_{\textup{rel}} \leq \max_{e \in E(G)}a_e^{-1}\sum_{v \in T_e} \sum_{\tilde{e} \in \ell(v)} a_{\tilde{e}} \ .
\end{equation}
\end{proposition}
\begin{proof} Recall the trees $T,\tilde{T}$ for $G$ from \eqref{def:CenterOfMass}. By Lemma \ref{lem:Rayleigh} and Lemma \ref{lem:EquivalentHardy}, if we have
for some constant $C$ that
\begin{equation}\label{eq:forsomeC}
\sum_{v\in V(T)} \Big(\sum_{e \in \ell(v)} g(e)\Big)^2 \leq C \sum_{e \in E(T)} g(e)^2
\end{equation} for all functions $g \colon E(T) \rightarrow \mathbb{R}$, and a similar statement with respect to $\tilde{T}$, we conclude $t_{\textup{rel}} \leq C$. In the following, we only show \eqref{eq:forsomeC} with respect to the tree $T$. Fix positive edge weights $a_e$ and note that by the Cauchy-Schwarz inequality,
\begin{equation*}
\sum_{v\in V(T)} \Big(\sum_{e \in \ell(v)} g(e)\Big)^2 \leq \sum_{v\in V(T)} \Big( \sum_{\tilde{e} \in \ell(v)} a_{\tilde{e}}\Big) \sum_{e \in \ell(v)} a_e^{-1}g(e)^2 \ .
\end{equation*} Rearranging the sum according to the edges and assuming without loss of generality that $g$ is non-negative, we get
\begin{align*}
\sum_{v\in V(T)} \Big(\sum_{e \in \ell(v)} g(e)\Big)^2 \leq \sum_{e \in E(T)} \Big( \sum_{v \in T_{e}} \sum_{\tilde{e} \in \ell(v) } a_{\tilde{e}} \Big) a^{-1}_{e} g(e)^2 \, .
\end{align*}
Hence, for any choice of $(a_e)$,
we can take the right-hand side of \eqref{eq:UpperBoundRelax} as $C$ in \eqref{eq:forsomeC}.
\end{proof} Note that we can choose the positive weights $(a_e)$ in any way we like to obtain an upper bound. In the following, we present three special choices of the $(a_e)$'s and the respective bounds on the relaxation time. We start with the case where we set $a_e=\frac{1}{|e|}$ in Proposition \ref{pro:UpperBoundRelaxationTime}, which yields the following bound.
\begin{corollary} \label{cor:SimpleBound} We have that
\begin{equation}
t_{\textup{rel}} \leq (\log(\textup{diam}(G))+1) \max_{e\in E} |e| |T_e|
\end{equation} where $\textup{diam}(G)$ denotes the diameter of the tree $G$.
\end{corollary}
In Proposition \ref{pro:LowerBoundRelaxationTime}, we give a corresponding lower bound on the relaxation time, but without the additional factor of $\log(\textup{diam}(G))$. Next, consider $a_e=\frac{1}{f(|e|)}$ for some function $f \colon \mathbb{N} \rightarrow (0,\infty)$ whose reciprocals are summable. We obtain the following bound as a consequence of Proposition \ref{pro:UpperBoundRelaxationTime}.
\begin{corollary} \label{cor:SimpleBound2} Suppose that we have some function $f$ on the positive integers with
\begin{equation*}
\sum_{n \in \mathbb{N}} f(n)^{-1} = C < \infty \ .
\end{equation*} Then we have that
\begin{equation}
t_{\textup{rel}} \leq C \max_{e\in E} f(|e|) |T_e| \ .
\end{equation}
\end{corollary} Next, we obtain the following bound by choosing the weights $a_e=|T_e|$ for all $e\in E(G)$, i.e.\ proportional to the size of the subtree $T_e$ attached to $e$.
\begin{corollary} \label{cor:SimpleBound3} It holds that
\begin{equation}\label{eq:SimpleBound3}
t_{\textup{rel}} \leq \max_{v \in V} \sum_{e \in \ell(v)} |T_e| \ .
\end{equation}
\end{corollary}
This follows directly from Proposition \ref{pro:UpperBoundRelaxationTime} noting that
\begin{equation*}
t_{\textup{rel}} \leq \max_{e \in E} \frac{1}{|T_e|} \sum_{v \in T_{e}} \sum_{\tilde{e} \in \ell(v)} |T_{\tilde{e}}| \leq \max_{e \in E} \frac{1}{|T_e|} \sum_{v \in T_{e}} \max_{\tilde{v} \in V} \sum_{\tilde{e} \in \ell(\tilde{v})} |T_{\tilde{e}}| = \max_{v \in V} \sum_{e \in \ell(v)} |T_e| \ .
\end{equation*}
We will see in the upcoming section that the right-hand side of \eqref{eq:SimpleBound3} also gives an upper bound on the mixing time for the simple random walk.
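The three corollaries can be sanity-checked on a concrete example. The sketch below (illustrative only) computes, for a path with $2m+1$ vertices rooted at its midpoint, the exact relaxation time $1/\lambda$ together with the three upper bounds, using $f(n)=n^2$ (so that $C=\pi^2/6$) for Corollary \ref{cor:SimpleBound2}.

```python
import math
import numpy as np

m = 5                                  # half-length; path with 2m+1 vertices, rooted at the midpoint
n = 2 * m + 1

# Exact relaxation time of the SRW on the path: inverse second smallest Laplacian eigenvalue.
L = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0
t_rel = 1.0 / np.linalg.eigvalsh(L)[1]

# On each side of the root, the edge at depth d has |e| = d and |T_e| = m - d + 1.
depth = range(1, m + 1)
diam = 2 * m

bound1 = (math.log(diam) + 1) * max(d * (m - d + 1) for d in depth)      # Corollary cor:SimpleBound
C = math.pi ** 2 / 6                                                      # sum of 1/n^2
bound2 = C * max(d ** 2 * (m - d + 1) for d in depth)                     # cor:SimpleBound2, f(n)=n^2
bound3 = max(sum(m - d + 1 for d in range(1, v + 1)) for v in depth)      # Corollary cor:SimpleBound3
```

All three bounds dominate $t_{\textup{rel}}$, with Corollary \ref{cor:SimpleBound3} typically the tightest on this example.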
\section{Some facts about the mixing time on trees} \label{sec:PrelimMixing}
In this section, we discuss mixing time estimates for the simple random walk $(\eta_t)_{t\geq 0}$ on a tree $G=(V,E,o)$. The main result presented in this section is that the $\varepsilon$-mixing time can be bounded in terms of the hitting time of the root $o$. For sites $x,y \in V$, let
\begin{equation}
\tau_{\textup{hit}}(x) := \inf \left\{ t \geq 0 \colon \eta_t = x \right\}
\end{equation} be the \textbf{hitting time} of $x$ and let $\mathbb{E}_{y}[\tau_{\textup{hit}}(x)]$ denote the expected hitting time of $x$ when starting the random walk from $y$. The following proposition gives an upper bound on the $\varepsilon$-mixing time.
\begin{proposition} \label{pro:HittingtimeUpper}
Let $t_{\textup{mix}}(\varepsilon)$ be the $\varepsilon$-mixing time of $(\eta_t)_{t\geq 0}$. There is a universal constant $C>0$ such that we have for all $\varepsilon \in (0,\frac{1}{2})$
\begin{equation}\label{eq:hittingTimeEstimateUpper}
t_{\textup{mix}}(\varepsilon) \leq C \log(\varepsilon^{-1})\Big(1+ \max_{v \in V} \mathbb{E}_{v}[\tau_{\textup{hit}}(o)]\Big) \leq 2C \log(\varepsilon^{-1})\max_{v \in V} \sum_{e \in \ell(v)} |T_e| .
\end{equation}
In particular,
\begin{equation}\label{mixdiam}
t_{\textup{mix}}(\varepsilon) \leq 2C \log(\varepsilon^{-1})|V|\textup{diam}(G).
\end{equation}
\end{proposition}
\begin{proof} Note that the first inequality in \eqref{eq:hittingTimeEstimateUpper} is immediate from Theorem 5 in \cite{A:MixingHitting} together with Theorem 20.3 in \cite{LPW:markov-mixing} and Lemma 5.2 of \cite{PS:MixingHitting}. To see that the second inequality holds, we claim that for two adjacent sites $v,w \in V$ with $|v|<|w|$, we have
\begin{equation}\label{eq:HittingDegree}
\mathbb{E}_{w}[\tau_{\textup{hit}}(v)] \leq \sum_{x \in T_w} \deg(x) \ .
\end{equation} This follows from the fact that for the embedded discrete-time simple random walk
\begin{equation}\label{eq:DiscreteSRW}
\mathbb{E}_{w}[\tau_{\textup{hit}}(v)] = \sum_{x \in T_w} \deg(x) \ ,
\end{equation}
and a time-change argument, see Section 2.3 in \cite{B:Center}. Since for any tree $\tilde{G}$, we have
\begin{equation}\label{eq:SitesToEdges}
\sum_{x \in V(\tilde{G})} \deg(x) = 2|V(\tilde{G})|-2 \ ,
\end{equation} this yields
\begin{equation*}
\mathbb{E}_{v}[\tau_{\textup{hit}}(o)] \leq 2 \sum_{e \in \ell(v)} |T_e| \ ,
\end{equation*}
and the second inequality in \eqref{eq:hittingTimeEstimateUpper} follows.
\end{proof}
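The second inequality in \eqref{eq:hittingTimeEstimateUpper} rests on bounding expected hitting times of the root by subtree sizes along the root-path. For the embedded discrete-time walk this can be checked directly by solving the linear system for the hitting times; the six-vertex tree below is an illustrative example of ours, and the continuous-time walk is comparable up to the time change mentioned in the proof.

```python
import numpy as np

# A small rooted tree (vertex 0 is the root); edges listed as (parent, child).
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (3, 5)]
n = 6
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# Expected hitting times of the root for the embedded discrete-time SRW:
# solve (I - P) h = 1 on V \ {0}, where P is the uniform one-step kernel.
idx = {v: i for i, v in enumerate(range(1, n))}
A = np.eye(n - 1)
for v in range(1, n):
    for w in adj[v]:
        if w != 0:
            A[idx[v], idx[w]] -= 1.0 / len(adj[v])
h = np.linalg.solve(A, np.ones(n - 1))   # h[idx[v]] = E_v[tau_hit(0)]

def subtree_size(child):
    """|T_e| for the edge ending at `child`: number of vertices in the subtree below it."""
    return 1 + sum(subtree_size(c) for p, c in edges if p == child)

def bound(v):
    """2 * sum of |T_e| over the edges on the path from the root to v."""
    total, parent = 0, {c: p for p, c in edges}
    while v != 0:
        total += subtree_size(v)
        v = parent[v]
    return 2 * total

checks = [(h[idx[v]], bound(v)) for v in range(1, n)]
```

For every starting site, the computed hitting time stays below twice the sum of the subtree sizes, as in the last display of the proof.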
Note that the bound in Proposition \ref{pro:HittingtimeUpper} does not use the assumption that the root is a $\delta$-center of mass. In fact, we see that \eqref{eq:hittingTimeEstimateUpper} holds for an arbitrarily chosen root. Next, we state a lower bound on the mixing time, following the ideas of Lemma 5.4 in \cite{BHP:CutoffTrees}, which does indeed require that the root is a $\delta$-center of mass.
\begin{proposition} \label{pro:HittingtimeLower} Assume that the root $o$ is a $\delta$-center of mass for some $\delta>0$. Let $\Delta$ be the maximum degree in $G$. Then for all $\varepsilon\leq\delta$,
\begin{equation}\label{eq:hittingTimeEstimateLower}
t_{\textup{mix}}\left(\frac{\varepsilon}{2}\right) \geq \frac{\varepsilon}{2} \max_{v \in V}\mathbb{E}_v\left[\tau_{\textup{hit}}(o) \right] \geq \frac{\varepsilon}{2 \Delta} \max_{v \in V} \sum_{e \in \ell(v)} |T_e| \ .
\end{equation}
\end{proposition}
\begin{proof}
Let $v \in V$ with $v \neq o$ be fixed and recall that $\pi$ denotes the uniform distribution on $V(G)$. Moreover, recall the trees $T$ and $\tilde{T}$ in \eqref{def:CenterOfMass} from the definition of the $\delta$-center of mass and assume that $v \in V(T)$. We now claim that
\begin{equation}\label{eq:GeometricBound}
\P_v\left(\tau_{\textup{hit}}(o) \leq t_{\textup{mix}}\left({\varepsilon}/{2}\right) \right) \geq \P_v\Big( \eta_{t_{\textup{mix}}({\varepsilon}/{2})} \in V(\tilde{T})\Big) \geq \pi(V(\tilde{T})) - \frac{\varepsilon}{2} \geq \frac{\varepsilon}{2}\, .
\end{equation}
The first inequality in \eqref{eq:GeometricBound} follows since $\eta_{t_{\textup{mix}}({\varepsilon}/{2})} \in V(\tilde{T})$ implies that the root was visited before time $t_{\textup{mix}}\left({\varepsilon}/{2}\right)$, the second inequality uses the definition of the mixing time, and the third inequality follows since $\pi(V(\tilde{T}))\geq \delta \geq \varepsilon$.
In words, \eqref{eq:GeometricBound} says that with probability at least $\varepsilon/2$, the random walk hits the root by time $t_{\textup{mix}}(\varepsilon/2)$. Using the Markov property of $(\eta_t)_{t \geq 0}$, since \eqref{eq:GeometricBound} holds for any $v\in V(T)$,
we can iterate \eqref{eq:GeometricBound} to see that the probability of hitting the root for the first time by time
$k \cdot t_{\textup{mix}}(\varepsilon/2)$ is at least the probability that a
Geometric-$(\varepsilon/2)$-random variable takes a value of at most $k$. This yields
\begin{equation*}
\mathbb{E}_v\left[\tau_{\textup{hit}}(o) \right]
\leq \frac{2}{\varepsilon} t_{\textup{mix}}\left(\frac{\varepsilon}{2}\right)\, .
\end{equation*} Since $v\in T$ was arbitrary, we obtain the first inequality in \eqref{eq:hittingTimeEstimateLower}. For the second inequality, recall \eqref{eq:DiscreteSRW} and \eqref{eq:SitesToEdges}, and use a comparison with a discrete-time simple random walk which is sped up by a factor of $\Delta$.
\end{proof} So far, we have gathered techniques for giving upper and lower bounds on the mixing and relaxation times. The next seminal result, by Basu et al., gives a characterization of cutoff for the simple random walk on trees, see \cite{BHP:CutoffTrees}.
\begin{lemma}[c.f.\ Theorem 1 in \cite{BHP:CutoffTrees}] \label{lem:ProductCriterion} Fix $\varepsilon \in (0,1)$. Let $(G_n)_{n \geq 1}$ be a family of trees with $\varepsilon$-mixing times $(t^n_{\textup{mix}})$ and relaxation times $(t^n_{\textup{rel}})$ for the simple random walk on $(G_n)$, respectively. The simple random walk on $(G_n)$ exhibits cutoff if and only if
\begin{equation}\label{seminalcomp}
t^n_{\textup{rel}} \ll t^n_{\textup{mix}} \ .
\end{equation} In other words, we see cutoff whenever the product criterion \eqref{eq:ProductCriterionIntro} holds.
\end{lemma}
\begin{remark} \label{rem:StrategyCutoff}
In view of Corollary \ref{cor:SimpleBound3}, Proposition \ref{pro:HittingtimeLower} and Lemma \ref{lem:ProductCriterion}, we aim at verifying cutoff by giving weights $(a_e)$ in Proposition \ref{pro:UpperBoundRelaxationTime}, which lead to a strictly improved bound over the weights $a_e = |T_e|$.
\end{remark}
\section{Proof of the main criteria for cutoff} \label{sec:Proofs}
We now use the preliminary bounds from the previous two sections in order to establish our main criteria on cutoff for the simple random walk on trees, i.e.\ we prove Theorem \ref{thm:NoCutoff}, Theorem \ref{thm:CutoffCompactification}, and Theorem \ref{thm:CutoffMiclo}.
\subsection{A general lower bound on the relaxation time} \label{sec:BoundRelaxation}
We start with a lower bound on the relaxation time. Recall that for a family of graphs $(G_n)$, the root is chosen to be a $\delta$-center of mass for some fixed $\delta>0$.
\begin{proposition} \label{pro:LowerBoundRelaxationTime} The constant $A$ from Lemma \ref{lem:EquivalentHardy} satisfies
\begin{equation}\label{eq:LowerBoundRelax}
A \geq \max_{e \in E(G)} |e| |T_e| \ .
\end{equation} In particular, Lemma \ref{lem:EquivalentHardy} implies
that
\begin{equation}
t_{\textup{rel}}\geq \delta \max_{e \in E(G)} |e| |T_e|\, .
\end{equation}
\end{proposition}
\begin{proof} We show the lower bound on $A$ by considering an explicit function $g_{e^{\ast}}$ in the definition of $A$ in \eqref{eq:HardyTree1} and \eqref{eq:HardyTree2}. Fix some $e^{\ast}\in E(G)$ which maximizes the right-hand side of \eqref{eq:LowerBoundRelax}. We define $g_{e^{\ast}}: E(G) \rightarrow \mathbb{R}$ to be
\begin{equation*}
g_{e^{\ast}}(e) := \begin{cases}
\frac{1}{|e^{\ast}_+|} & \text{ if } e \in \ell(e^{\ast}_+) \\
0 & \text{ if } e \notin \ell(e^{\ast}_+)
\end{cases}
\end{equation*} and note that $g_{e^{\ast}}$ satisfies
\begin{equation*}
\sum_{v\in V(G)} \Big(\sum_{e \in \ell(v)} g_{e^{\ast}}(e)\Big)^2 \geq \sum_{v\in T_{e^{\ast}}} \Big(\sum_{e \in \ell(v)} g_{e^{\ast}}(e)\Big)^2 = |T_{e^{\ast}}|
\end{equation*} as well as
\begin{equation*}
\sum_{e \in E(T)} g_{e^{\ast}}(e)^2 = \frac{1}{|e^{\ast}_+|} = \frac{1}{|e^{\ast}|} \ .
\end{equation*} Using the definition of $A$ in Lemma \ref{lem:EquivalentHardy}, we
see that \eqref{eq:LowerBoundRelax} holds.
\end{proof}
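The lower bound of Proposition \ref{pro:LowerBoundRelaxationTime} can again be tested on the centered path (an illustrative sketch of ours, not part of the proof): for a path with $2m+1$ vertices rooted at the midpoint, each half carries a $\delta$-fraction $\delta = m/(2m+1)$ of the vertices, the edge at depth $d$ satisfies $|e|=d$ and $|T_e| = m-d+1$, and the relaxation time can be computed exactly from the Laplacian.

```python
import numpy as np

m = 5
n = 2 * m + 1

# Exact relaxation time of the SRW on the path (inverse second smallest Laplacian eigenvalue).
L = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0
t_rel = 1.0 / np.linalg.eigvalsh(L)[1]

delta = m / n                                          # each half is a delta-fraction of the sites
lower = delta * max(d * (m - d + 1) for d in range(1, m + 1))
```

The quantity `lower` is $\delta \max_{e} |e||T_e|$, which stays below the exact relaxation time, as the proposition asserts.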
\begin{proof}[Proof of Theorem \ref{thm:NoCutoff}] Recall that by Lemma \ref{lem:ProductCriterion}, there is no cutoff when the mixing time and the relaxation time are of the same order. Note that
Proposition \ref{pro:HittingtimeUpper} yields an upper bound on the mixing time, while Proposition \ref{pro:LowerBoundRelaxationTime} establishes a lower bound on the relaxation time. Both bounds are of the same order, due to assumption \eqref{eq:NoCutoffCriterion}, which finishes the proof.
\end{proof}
\subsection{Cutoff for \textit{v}-retractions of trees} \label{sec:CutoffRetraction}
In this section, we prove Theorem \ref{thm:CutoffCompactification}. We use Proposition \ref{pro:UpperBoundRelaxationTime} with a specific choice of edge weights $(a_e)$ improving the choice of weights leading to Corollary \ref{cor:SimpleBound3}.
\begin{lemma}\label{lem:CutoffWeights} Suppose that assumptions \eqref{eq:MaximalPath} and \eqref{eq:HyperbolicGrowth} of Theorem \ref{thm:CutoffCompactification} hold and recall that $(\tilde{G}_n)_{n \geq 1}$ denotes the sequence of $v_n^{\ast}$-retractions of $(G_n)_{n \geq 1}$. Then we have
\begin{equation}\label{eq:LemmaCutoffWeights}
\tilde{t}^n_{\textup{rel}} \ll \max_{v \in V(\tilde{G}_n)} \sum_{e \in \ell(v)} |T_e|
\end{equation} where $\tilde{t}^n_{\textup{rel}}$ is the relaxation time belonging to $\tilde{G}_n$.
\end{lemma}
\begin{proof} Let $k := |v_n^{\ast}|$ and let $(\bar{T}_i)_{i \in \{0,\dots,k\}}$ denote the trees used in the construction of the $v_n^{\ast}$-retraction, attached in $\tilde{G}_n$ along the segment $\ell(v_n^{\ast})$ and ordered according to their distance from the root. We consider the following choice of the weights $(a_e)_{e \in E(\tilde{G}_n)}$. For $e \in \ell(v_n^{\ast})$, we let $a_e:= |e|^{-1/2}$. For $e \in E(\bar{T}_i)$, we set
\begin{equation}
a_e:= \frac{ 1}{\sqrt{\max\{i,1\} } (|e|-i)^2} \ .
\end{equation}
In order to apply Proposition \ref{pro:UpperBoundRelaxationTime}, we will now give an upper bound for
\begin{equation}\label{eq:VariationBoundCutoffWeights}
A_e := a_e^{-1}\sum_{v \in T_e} \sum_{\tilde{e} \in \ell(v)} a_{\tilde{e}}
\end{equation}
for any possible choice of the edge $e \in E(\tilde{G}_n)$. We claim that without loss of generality, it suffices to consider $e\in \ell(v_n^{\ast})$. To see this, consider $e\in \bar{T}_{i}$ for $i\geq 1$ and let $e_i$ be the corresponding edge in $\ell(v_n^{\ast})$ with $|e_i|=i$. For every $v\in \bar{T}_i$, we have
\begin{equation}\label{eq:SitesInSubtrees}
\sum_{\tilde{e} \in \ell(v)} a_{\tilde{e}} = \sum_{j=1}^{i} j^{-\frac{1}{2}} + i^{-\frac{1}{2}} \sum_{l=i+1}^{|v|} \frac{1}{(l-i)^2} \in [\sqrt{i}, 4 \sqrt{i}] \ .
\end{equation}
Together with the fact that $\bar{T}_{i}$ has a binary structure, we get that
\begin{align*}
A_{e}
\leq 4 i(|e|-i)^2 |T_e|
\leq 4 i(|e|-i)^2 2^{-(|e|-i)} |T_{e_i}| \leq 5 i |T_{e_i}| \leq 5 A_{e_i}
\end{align*} holds. Similarly, for $e \in \bar{T}_0$, we see that $A_e \leq 5 |\bar{T}_0| \leq 5|V(G_n)|$, which is of smaller order than the right-hand side in \eqref{eq:LemmaCutoffWeights} by assumption \eqref{eq:MaximalPath}.
Hence, it suffices to bound $A_e$ for all edges $e$ within $\ell(v_n^{\ast})$ to conclude. For such an edge $e$, using \eqref{eq:SitesInSubtrees}, the right-hand side of \eqref{eq:VariationBoundCutoffWeights} satisfies
\begin{equation*}
A_e = \sqrt{|e|} \sum_{i=|e|}^{k}\sum_{v \in \bar{T}_i} \sum_{\tilde{e} \in \ell(v)} a_{\tilde{e}} \leq 4\sqrt{|e|} \sum_{i=|e|}^{k} \sqrt{i} |\bar{T}_i| \ .
\end{equation*}
It remains to show that
\begin{equation}\label{eq:InterpolationEstimate}
\sqrt{|e|} \sum_{i=|e|}^{k} \sqrt{i} |\bar{T}_i| \ll \sum_{e \in \ell(v_n^{\ast})} |T_e| \, .
\end{equation} Using the Cauchy-Schwarz inequality and \eqref{eq:HyperbolicGrowth}, we see that for all $e \in \ell(v_n^{\ast})$,
\begin{equation}\label{eq:CSInequality}
\Big(\sqrt{|e|} \sum_{i=|e|}^{k} \sqrt{i} |\bar{T}_i| \Big)^2 \leq \Big(|e|\sum_{i=|e|}^{k} |\bar{T}_i| \Big)\Big(\sum_{i=|e|}^{k} i |\bar{T}_i| \Big) \ll \Big(\sum_{i=1}^{k} i |\bar{T}_i| \Big)^2 \, .
\end{equation} Taking square roots of both sides of \eqref{eq:CSInequality} finishes the proof.
\end{proof}
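The elementary two-sided estimate \eqref{eq:SitesInSubtrees} on the accumulated weights along a root-path can be verified numerically; the sketch below (illustrative only, with our own helper names) sums the weights for a vertex at a given depth inside the tree attached at distance $i$.

```python
import math

def weight_sum(i, depth):
    """Sum of the weights a_e along the path to a vertex at depth `depth`:
    i segment edges with a_e = j**-0.5, followed by edges inside the attached tree
    with a_e = 1 / (sqrt(i) * (l - i)**2)."""
    seg = sum(j ** -0.5 for j in range(1, i + 1))
    inside = i ** -0.5 * sum((l - i) ** -2 for l in range(i + 1, depth + 1))
    return seg + inside

# Check membership in [sqrt(i), 4*sqrt(i)] for a range of attachment distances and depths.
checks = [(i, d, weight_sum(i, d)) for i in range(1, 50) for d in (i, i + 1, i + 30)]
```

The lower bound comes from the segment part alone ($\sum_{j\leq i} j^{-1/2} \geq \sqrt{i}$), while the tail inside the attached tree contributes at most $\pi^2/6 \cdot i^{-1/2}$.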
\begin{proof}[Proof of Theorem \ref{thm:CutoffCompactification}]
For the lower bound on the mixing time from Proposition \ref{pro:HittingtimeLower}, note that the leading order does not decrease when we replace $(G_n)$ by $(\tilde{G}_n)$ for $v_n^{\ast}$ from \eqref{eq:MaximalPath}, and that the root is still a $\tilde{\delta}$-center of mass for some $\tilde{\delta}>0$. Theorem \ref{thm:CutoffCompactification} follows together with the product criterion from Lemma \ref{lem:ProductCriterion}, and Lemma \ref{lem:CutoffWeights}.
\end{proof}
\begin{remark}\label{rem:AlphaRetraction} Note that instead of binary trees, we may consider other trees $\bar{T}_i$ rooted at $v_i$ and attached to the segment at distance $i$ from $\tilde{o}$ for which
\begin{equation}\label{eq:RetractionAssumptionRem}
\max_{v \in V(T_{e})} \sum_{\bar{e} \in \ell(v)} |T_{\bar{e}}| \leq \alpha |T_{e}|
\end{equation} holds for some constant $\alpha$. The above choice of binary trees satisfies \eqref{eq:RetractionAssumptionRem} with $\alpha = 2$. Depending on
\eqref{eq:HyperbolicGrowth}, we may also allow $\alpha$ to go to infinity with $n$ sufficiently slow.
\end{remark}
\subsection{A sufficient condition for cutoff on trees} \label{sec:CriterionMiclo}
In this section, we give an upper bound on the relaxation time, which will allow us to give a sufficient condition for cutoff on trees. For a finite rooted tree $G=(V,E,o)$, we let $\mathcal{S}:=\{ B \subseteq V \colon B \neq \emptyset, o \notin B\}$, and define for all $B \in \mathcal{S}$
\begin{equation}\label{def:alphaW}
\nu_{B}:= \inf \Big\{ \sum_{e\in E} f(e)^2 \colon \sum_{e \in \ell(v)} f(e) \geq 1 \text{ for all } v \in B \Big\} > 0 \ ,
\end{equation} where the infimum is taken over functions $f\colon E \rightarrow \mathbb{R}$. For the following bound on the relaxation time, we use the ideas of Evans et al.\ for proving a Hardy inequality on continuous weighted trees \cite{EHP:Hardy}. A corresponding Hardy inequality for discrete weighted trees was obtained by Miclo, see Proposition 16 in \cite{M:SpectrumMarkovChains}. We will now provide a similar result on the relaxation time of the simple random walk.
\begin{proposition}\label{pro:UpperBoundRelaxationTimeHardy}
Recall \eqref{V_m}. For any finite tree $G$, we have that
\begin{equation}\label{eq:EstimateHardy}
t_{\textup{rel}} \leq 32 \max_{B \in \mathcal{S}} \frac{|B|}{\nu_B} \leq 32 \max_{k \in \mathbb{N}} k |V_{k}| \, .
\end{equation}
\end{proposition}
\begin{proof} Recall the trees $T,\tilde{T}$ for $G$ from \eqref{def:CenterOfMass}. For the first inequality in \eqref{eq:EstimateHardy}, it suffices by Lemma \ref{lem:EquivalentHardy} to show that for every function $g \colon E(T) \rightarrow \mathbb{R}$ with $g \not\equiv0$, we have
\begin{equation}\label{eq:ForSomeMax}
\sum_{v\in V(T)} \Big(\sum_{e \in \ell(v)} g(e)\Big)^2 \leq 32\left(\max_{B \in \mathcal{S}} \frac{|B|}{\nu_B} \right) \sum_{e \in E(T)} g(e)^2 \ ,
\end{equation} and a similar statement with respect to $\tilde{T}$. We will now show \eqref{eq:ForSomeMax} only for $T$. Fix a non-negative function $g$ (otherwise consider $|g|$), and define for all $i\in \mathbb{Z}$ the set
\begin{equation*}
B_i := \Big\{ v \in V(T) \colon 2^{i} \leq \sum_{e \in \ell(v)}g(e) < 2^{i+1} \Big\} \ .
\end{equation*}
We let $\mathcal{I}:= \{ i \in \mathbb{Z} \colon B_i \neq \emptyset \}$ and note that $B_i \in \mathcal{S}$ when $i \in \mathcal{I}$. Observe that
\begin{equation*}
\sum_{v\in V(T)} \Big(\sum_{e \in \ell(v)} g(e)\Big)^2 \leq \sum_{i \in \mathcal{I}} 2^{2i+2} |B_i| \leq \left(\max_{B \in \mathcal{S}} \frac{|B|}{\nu_B} \right) \sum_{i \in \mathcal{I}} 2^{2i+2} \nu_{B_i}
\end{equation*} holds. Hence, it suffices now to prove that for all $i \in \mathcal{I}$
\begin{equation}\label{eq:BiEstimate}
\nu_{B_i} \leq 2^{-2i+2} \sum_{e\in E(T) \colon e_+ \in B_{i-1}\cup B_i}g(e)^2
\end{equation}
is satisfied in order to conclude \eqref{eq:ForSomeMax}. To see that \eqref{eq:BiEstimate} indeed holds, consider
\begin{equation*}
g_i(e) := 2^{-i+1}g(e)\mathds{1}_{\{e_+ \in B_{i-1} \cup B_i\}}
\end{equation*} for all $e\in E(T)$. Using the definition of the sets $(B_i)_{i \in \mathbb{Z}}$ and the fact that $g$ is non-negative, we have for all $v\in B_i$ \begin{equation*}
\sum_{e \in \ell(v)} g_i(e) = 2^{-i+1}\Big(\sum_{e \in \ell(v)} g(e) - \sum_{\tilde{e} \in \ell(v) \colon \tilde{e}_+ \notin B_{i-1}\cup B_i} g(\tilde{e})\Big)\geq 2^{-i+1}\big(2^{i}- 2^{i-1}\big) = 1\ .
\end{equation*} Plugging $g_i$ into the definition \eqref{def:alphaW} of $\nu_{B_i}$ yields \eqref{eq:BiEstimate}, and hence the first inequality in \eqref{eq:EstimateHardy}.
For the second inequality in \eqref{eq:EstimateHardy}, observe that for all $B\in \mathcal{S}$
\begin{equation}\label{eq:BChoice}
\frac{\nu_{B}}{|B|} \geq \frac{1}{|B|}\max_{v \in B} \inf\Big\{ \sum_{e \in E(G)} f(e)^2 \colon \sum_{e \in \ell(v)} f(e)=1 \Big\} = \frac{1}{|B|}\max_{v \in B} \frac{1}{|v|}
\end{equation} holds, where the infimum is attained when $f(e)=|v|^{-1} \mathds{1}_{\{e\in \ell(v)\}}$. We then optimize in \eqref{eq:BChoice} over the choice of $B\in \mathcal{S}$ to conclude.
\end{proof}
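Assuming, in line with the optimization in \eqref{eq:BChoice}, that $V_k$ denotes the set of sites at distance at least $k$ from the root, the second bound in \eqref{eq:EstimateHardy} is easy to evaluate. A quick illustrative check on the centered path (our own example, not part of the proof):

```python
import numpy as np

m = 5
n = 2 * m + 1

# Relaxation time of the SRW on a path with 2m+1 vertices, rooted at the midpoint.
L = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0
t_rel = 1.0 / np.linalg.eigvalsh(L)[1]

# |V_k| = number of sites at distance >= k from the root: 2(m - k + 1) for 1 <= k <= m.
miclo_bound = 32 * max(k * 2 * (m - k + 1) for k in range(1, m + 1))
```

The bound is loose on the path (the constant $32$ is generous), but it is of the correct order $m^2$.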
\begin{proof}[Proof of Theorem \ref{thm:CutoffMiclo}] Similarly to the proof of Theorem \ref{thm:NoCutoff}, recall that by Lemma \ref{lem:ProductCriterion}, we have cutoff if and only if the relaxation time is of strictly smaller order than the mixing time. Note that by Proposition \ref{pro:HittingtimeLower} we obtain a lower bound on the mixing time, while Proposition \ref{pro:UpperBoundRelaxationTimeHardy} gives us an upper bound on the relaxation time. We conclude by taking assumption \eqref{eq:ComplimentAssumption} into account.
\end{proof}
As a direct consequence of Theorem \ref{thm:CutoffMiclo}, we see that the simple random walk on the following family of trees constructed by Peres and Sousi in \cite{PS:CutoffTree} exhibits cutoff:
Fix $k$ and consider the tree $G_N$ for $N=N(k)$, constructed as follows: Set $n_i = 2^{2^{i}}$ and start with a segment of length $n_k$, rooted at one of its endpoints. At the root, we attach a binary tree of size $N:=n_k^3$. Then, at distance $n_i$ from the root, we attach a binary tree of size $N/{n_i}$ for all $i \in \{ k/2,\dots,k\}$. We reindex the sequence of trees $(G_N)$ to obtain a family of trees $(G_n)_{n \geq 1}$. A visualization of these trees can be found in \cite{PS:CutoffTree}. Note that in this construction, the root is always a $\delta$-center of mass for some $\delta>0$ which does not depend on $n$.
\begin{corollary}\label{cor:CutoffTree} The family of trees described by Peres and Sousi has a mixing time of order $Nk$ and a relaxation time of order $N$, and hence exhibits cutoff provided that $k \rightarrow \infty$ with $N \rightarrow \infty$.
\end{corollary}
\begin{proof} We first consider the $\varepsilon$-mixing time for some $\varepsilon>0$. In Proposition \ref{pro:HittingtimeLower}, we choose $v$ to be a leaf in the tree attached at distance $n_k$ from the root and get
\begin{equation*}
t_{\textup{mix}}(\varepsilon) \geq c\sum_{i = k/2}^{k-1} (n_{i+1}-n_{i})\sum_{j=i+1}^{k} \frac{N}{n_j} \geq \frac{c}{4} k N
\end{equation*} for some constant $c=c(\varepsilon)>0$, when summing in \eqref{eq:hittingTimeEstimateLower} only over the sites in the attached binary trees. Since $N/n_i \gg n_i$ for all $i \leq k$, a corresponding upper bound follows from Proposition \ref{pro:HittingtimeUpper}. While a lower bound on $t_{\textup{rel}}$ of order $N$ is immediate from Proposition \ref{pro:LowerBoundRelaxationTime}, we obtain a corresponding upper bound of the same order from Proposition \ref{pro:UpperBoundRelaxationTimeHardy}, by a straightforward count of the sites at distance at least $m$ from the root. We conclude that cutoff occurs by Lemma \ref{lem:ProductCriterion}.
\end{proof}
\section{Cutoff for SRW on trees is atypical} \label{sec:CutoffAtypical}
We now use the previous estimates and ask about cutoff for the simple random walk on the following three families of trees from Theorem \ref{thm:CutoffExamples}:
infinite spherically symmetric trees truncated at level $n$, supercritical Galton--Watson trees conditioned on survival and truncated at height $n$, and families of combinatorial trees converging to the Brownian continuum random tree (CRT). The latter includes critical Galton--Watson trees conditioned to have exactly $n$ sites. In all three cases, we will in the following verify that cutoff does not occur.
\subsection{Spherically symmetric trees} \label{sec:Spherically}
Let $G=(V,E,o)$ be a rooted tree. We say that $G$ is \textbf{spherically symmetric} if we have that $\deg(v)=\deg(v^{\prime})$ holds for all $v,v^{\prime} \in V$ with $|v|=|v^{\prime}|$. Examples of such trees are the integer lattice and regular trees. We write $\deg_k$ for the degree of the vertices at generation $k$ provided that $V_{k}\neq \emptyset$ holds (recall \eqref{V_m}).
\begin{proposition}\label{pro:NoCutoffSpherically} For a given infinite spherically symmetric tree $G$ of maximum degree $\Delta$, let $(G_n)_{n \geq 1}$ be the family of trees, induced by $G$ restricted to the sites $V \setminus V_n$ for all $n\in \mathbb{N}$. Then the simple random walk on $(G_n)_{n \geq 1}$ does not exhibit cutoff.
\end{proposition}
When $G=\mathbb{N}$ the claim is well-known, see \cite{LPW:markov-mixing}. Otherwise, for all $n\in \mathbb{N}$ large enough, we choose the root of $G_n$ to be the first branching point in $G$, i.e.\ the vertex closest to the root with degree at least $3$. In particular, note that the root of $G_n$ will be a $\frac{1}{4}$-center of mass for all $n$ sufficiently large. By Propositions \ref{pro:HittingtimeUpper} and \ref{pro:HittingtimeLower}, we see that the $\frac{1}{8}$-mixing time of $G_n$ satisfies
\begin{equation}\label{eq:HittingTimeSpherically}
t^n_{\textup{mix}}\left(\frac{1}{8}\right) \asymp \sum_{i=1}^{n-1} \sum_{j=i+1}^{n-1} \prod_{k=i}^{j-1} (\deg_k -1) \ .
\end{equation}
We will now show a lower bound on the relaxation time of the same order by using a comparison to birth-and-death chains. Our strategy will be similar to \cite{NN:SpectrumTree}, where Nestoridi and Nguyen study eigenvalues and eigenfunctions for the $d$-regular tree. More precisely, we will exploit the fact that certain eigenfunctions of birth-and-death chains can be converted to eigenfunctions of the simple random walk on spherically symmetric trees. Without loss of generality, we will assume that $\deg(o)>1$ holds since removing the sites before the first branching point in $G$ changes the relaxation time by at most a constant factor. This can be seen using the characterization of the spectral gap in Lemma \ref{lem:Rayleigh}.
Let $(X_t)_{t \geq 0}$ be the continuous-time birth-and-death chain on the segment $\{1,\dots,2n-1\}$ with nearest neighbor transition rates $(r(x,y))$ given by
\begin{equation}
r(x,y)=\begin{cases} \deg_{x-n}-1 & \text{ if } y=x+1>n+1 \\
\deg_{n-x}-1 & \text{ if } y=x-1<n-1 \\
1 & \text{ otherwise}
\end{cases}
\end{equation} for all adjacent sites $x,y \in \{1,\dots,2n-1\}$. We make the following observation on the spectral gap of $(X_t)_{t \geq 0}$.
\begin{lemma}\label{lem:EigenfunctionBirthDeath} Let $\tilde{\lambda}$ be the spectral gap of $(X_t)_{t \geq 0}$. There exists a corresponding eigenfunction $\tilde{f} \colon \{1,\dots,2n-1\} \rightarrow \mathbb{R}$ which satisfies \begin{equation}\label{eq:EigenfunctionsBirthDeath}
\tilde{f}(x)=-\tilde{f}(2n-x)
\end{equation} for all $x \in \{1,\dots,2n-1\}$.
\end{lemma}
\begin{proof} Note that there exists an eigenfunction $g$ corresponding to the spectral gap of an irreducible birth-and-death chain which is monotone, see Lemma 22.17 in \cite{LPW:markov-mixing}. By the symmetry of the transition rates, we see that the function $h$ given by $h(x)=g(2n-x)$ for all $x \in \{1,\dots,2n-1\}$ is also an eigenfunction for $(X_t)_{t \geq 0}$ with respect to $\tilde{\lambda}$. Since $g(1) \neq g(2n-1)$, we see that the function $\tilde{f}:=g-h$ is an eigenfunction corresponding to $\tilde{\lambda}$ which has the desired properties.
\end{proof}
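This antisymmetry is easy to verify numerically. The following sketch (in which the constant degree sequence $\deg_i=3$, i.e.\ binary branching, is an illustrative assumption and not taken from the text) builds the generator of the chain on $\{1,\dots,2n-1\}$ with the rates defined above and checks that the eigenfunction belonging to the spectral gap satisfies $\tilde{f}(x)=-\tilde{f}(2n-x)$.

```python
import numpy as np

def bd_generator(n, deg):
    """Generator of the birth-and-death chain on {1, ..., 2n-1}.

    deg[i] is the common degree of the sites at distance i from the
    root of the spherically symmetric tree (deg[0] = deg(o)); following
    the rates in the text, the chain moves away from the centre state n
    at rate deg - 1 and towards it at rate 1.
    """
    m = 2 * n - 1
    Q = np.zeros((m, m))
    for idx in range(m):
        x = idx + 1                                  # state in {1,...,2n-1}
        if x < m:                                    # rate r(x, x+1)
            Q[idx, idx + 1] = deg[x - n] - 1 if x + 1 > n + 1 else 1
        if x > 1:                                    # rate r(x, x-1)
            Q[idx, idx - 1] = deg[n - x] - 1 if x - 1 < n - 1 else 1
    Q -= np.diag(Q.sum(axis=1))
    return Q

n = 8
Q = bd_generator(n, deg=[3] * n)                     # binary branching
ev, V = np.linalg.eig(Q)
order = np.argsort(-ev.real)                         # eigenvalue 0 comes first
gap = -ev[order[1]].real                             # spectral gap
f = V[:, order[1]].real
f /= np.abs(f).max()
asym = np.abs(f + f[::-1]).max()                     # f(x) + f(2n-x), should vanish
print(gap, asym)
```

Since the generator is tridiagonal with positive off-diagonal entries, its eigenvalues are simple, so the computed eigenfunction is determined up to scaling and the antisymmetry is not an artifact of a choice of basis.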
With these observations, we have all tools to give the proof of Proposition \ref{pro:NoCutoffSpherically}.
\begin{proof}[Proof of Proposition \ref{pro:NoCutoffSpherically}]
Observe that we can extend the eigenfunction $\tilde{f}$ of $(X_t)_{t \geq 0}$ from Lemma \ref{lem:EigenfunctionBirthDeath} belonging to the spectral gap to an eigenfunction $F$ with eigenvalue $\tilde{\lambda}$ of the random walk on $G_n$. To see this, let $x_1,x_2$ be two sites adjacent to $o$, and consider the function $F \colon V \rightarrow \mathbb{R}$ given by
\begin{equation}
F(v)=\begin{cases} \tilde{f}(n-|v|) & \text{ if } v \in T_{x_1} \\
\tilde{f}(n+|v|) & \text{ if } v \in T_{x_2} \\
0 & \text{ otherwise. }
\end{cases}
\end{equation}
The fact that $F$ is an eigenfunction of the simple random walk on $G_n$ follows by a direct verification using the generator.
Hence, in order to give a lower bound on the relaxation time of the same order as in \eqref{eq:HittingTimeSpherically}, it suffices to bound the spectral gap $\tilde{\lambda}$ of $(X_t)_{t \geq 0}$. Note that the stationary distribution $\tilde{\pi}$ of $(X_t)_{t \geq 0}$ satisfies
\begin{equation}
\tilde{\pi}(x) =\frac{1}{Z} \prod^{n-2}_{k=\min\{ x,2n-x\}} (\deg_{n-k-1} - 1)
\end{equation} for all $x\in \{ 1,\dots,2n-1\}$, where $Z$ is a normalization constant. Recalling that $G$ has maximum degree $\Delta$ and using Theorem 4.2 in \cite{CS:SpectrumBirthDeath}, we see that
\begin{equation*}
\frac{1}{\tilde{\lambda}} \geq \frac{1}{16 \Delta} \sum_{i=1}^{n} \frac{1}{\tilde{\pi}(i)} \asymp \sum_{i=1}^{n} \Big( \sum_{j=1}^n \prod_{k=j}^{n-2} (\deg_{n-k-1} - 1) \Big) \prod_{k=i}^{n-2} (\deg_{n-k-1} - 1)^{-1} \ .
\end{equation*}
This gives the desired lower bound on the relaxation time of the tree $G_n$ which is of the same order as the upper bound on the mixing time in \eqref{eq:HittingTimeSpherically}. Hence, using Lemma \ref{lem:ProductCriterion}, we see that no cutoff occurs.
\end{proof}
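As a sanity check on the computation above, the stationary distribution of a birth-and-death chain can be built from the ratios $\tilde{\pi}(x+1)/\tilde{\pi}(x)=r(x,x+1)/r(x+1,x)$. The sketch below (again with an illustrative constant degree sequence, not taken from the text) carries out this product construction and verifies that the resulting vector satisfies $\tilde{\pi}Q=0$ for the generator $Q$.

```python
import numpy as np

n = 8
deg = [3] * n                      # illustrative constant degree sequence

def rate(x, y):
    """Transition rates of the birth-and-death chain on {1,...,2n-1}."""
    if y == x + 1:
        return deg[x - n] - 1 if y > n + 1 else 1
    if y == x - 1:
        return deg[n - x] - 1 if y < n - 1 else 1
    return 0

m = 2 * n - 1
Q = np.array([[rate(x, y) for y in range(1, m + 1)]
              for x in range(1, m + 1)], dtype=float)
Q -= np.diag(Q.sum(axis=1))

# product construction of the stationary distribution:
# pi(x+1) / pi(x) = r(x, x+1) / r(x+1, x)
pi = np.ones(m)
for x in range(1, m):
    pi[x] = pi[x - 1] * rate(x, x + 1) / rate(x + 1, x)
pi /= pi.sum()

err = np.abs(pi @ Q).max()         # stationarity: pi Q = 0
print(err)
```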
\begin{remark}\label{rem:NecessaryRetraction} Using Proposition \ref{pro:NoCutoffSpherically}, we can see that taking $v_n^*$-retractions in Theorem \ref{thm:CutoffCompactification} is necessary for cutoff. More precisely, consider the spherically symmetric tree $G$ with
\begin{equation}\deg_i = \begin{cases} 3 & \text{ if } i=2^{j}-1 \text{ for some } j \in \{0,1,\dots \} \\
2 & \text{ else}
\end{cases}
\end{equation} for all $i \in \mathbb{N}$. The corresponding trees $(G_n)_{n \geq 0}$ truncated at level $n$ satisfy \eqref{eq:HyperbolicGrowth}, but due to Proposition \ref{pro:NoCutoffSpherically}, the simple random walk on $(G_n)$ does not exhibit cutoff.
\end{remark}
\subsection{Galton--Watson trees conditioned on survival} \label{sec:GaltonWatsonSurvival}
In this section we consider a family of random trees $(G_n)_{n \geq 1}$ which we obtain by truncating a supercritical Galton--Watson tree conditioned on survival. More precisely, let
$\mu$ be an \textbf{offspring distribution} and assume that
\begin{equation}\label{eq:AssumptionsGWT1}
m:= \sum_{j=1}^{\infty}j \mu(j) >1 \qquad \sigma^2 :=\sum_{j=1}^{\infty}j^2 \mu(j) \in(0,\infty),
\end{equation} i.e.\ we have a supercritical Galton--Watson process whose offspring distribution has finite variance. In the following, we take the genealogical tree $G$ of a realization of the Galton--Watson process conditioned on survival. We then obtain the trees $(G_n)_{n \geq 1}$ by restricting $G$ to the sites $V \setminus V_n$ (recalling \eqref{V_m}). We keep the sequence of trees fixed and perform simple random walks on $(G_n)_{n \geq 1}$. We denote the law of $G$ by $P$; note that $P$ depends on $\mu$, but we suppress this dependence in the notation. We now have the following result on cutoff for the simple random walk on $(G_n)_{n \geq 1}$.
\begin{proposition}\label{pro:NoCutoffGWT1} We have that for $P$-almost all trees $G$, the family of simple random walks on $(G_n)_{n \geq 1}$ does not exhibit cutoff.
\end{proposition}
\begin{proof}
From the proof of Theorem 1.4(b) in \cite{J:MixingInterchange}, we know that the relaxation time is $P$-almost surely of order $m^n$ for all $n$ sufficiently large. Hence, by Lemma \ref{lem:ProductCriterion}, it suffices to give an upper bound on the mixing time of order $m^n$ for the random walk on $G_n$ in order to exclude cutoff. Providing an upper bound on the mixing time of order $m^n$ answers a question of Jonasson \cite{J:MixingInterchange}. Recall that for any supercritical Galton--Watson tree $G=(V,E)$ with offspring distribution $\mu$ satisfying \eqref{eq:AssumptionsGWT1},
\begin{equation}\label{eq:MartingaleGenerations}
\left( \frac{|\{v \in V(G) \colon |v|=n\}|}{m^{n}}\right)_{n \geq 1}
\end{equation} is a martingale which converges $P$-almost surely and in $L^2$, due to the Kesten--Stigum theorem.
For all $v\in V$ and $N\geq 0$ let $k(v,N)$ be the number of sites in the tree $T_v$ with distance at most $|v|+N$ from the root. From \eqref{eq:MartingaleGenerations}, we see that for all $v\in V$
\begin{equation}\label{def:MartingaleLimit}
\sup_{N \in \mathbb{N}} \frac{k(v,N)}{m^{N}} = Y_v
\end{equation} $P$-almost surely for some random variable $Y_v$. It is easy to show, applying Doob's inequality to the martingale in \eqref{eq:MartingaleGenerations}, that $Y_v$ has a finite second moment. \\
Recall that Proposition \ref{pro:HittingtimeUpper} does not require the root to be a $\delta$-center of mass, and hence
\begin{equation}\label{eq:MixingBoundGWT1}
t^n_{\textup{mix}}\left(\frac{1}{4}\right) \leq C\sum_{s =0}^{n-1} m^{n-s} \max_{|v|=s}Y_v
\end{equation} holds $P$-almost surely for some constant $C>0$. It remains to give a bound on the random variables $Y_v$. Note that the law of the random variables $Y_v$ does not depend on the number of sites in the generation $|v|$ in the Galton--Watson tree. Hence, writing $E$ for the expectation corresponding to $P$, we see that
\begin{align*}
P\Big(\max_{|v|=s}Y_v > \frac{m^s}{s^2} \Big) &\leq P\left(|\{v\colon |v|=s\}| > s^2m^s\right) + s^2m^s P\Big(Y_v > \frac{m^s}{s^2} \Big) \\
&\leq \frac{E\left[ | \{v \colon |v|= s \}|\right]}{s^2m^{s}} + s^2m^{s}\frac{E\left[ Y^{2}_o\right]}{s^{-4}m^{2s}}\leq c s^{-2}
\end{align*} is satisfied for all $s\in \mathbb{N}$ and some constant $c>0$. Together with \eqref{eq:MixingBoundGWT1} and the first Borel--Cantelli lemma, we see that $P$-almost surely
\begin{equation*}
t^n_{\textup{mix}}\left(\frac{1}{4}\right) \leq \tilde{c} m^{n}
\end{equation*} holds for some $\tilde{c}>0$ depending on $G$ and all $n$ sufficiently large. Note that a bound of the same form follows when conditioning the underlying Galton--Watson tree on survival, using its representation as a multi-type Galton--Watson process with one child in every generation having a size-biased offspring distribution, see Chapter 12 of \cite{LP:ProbOnTrees}. Together with the corresponding bound on the relaxation time for the random walk on $(G_n)$ of order $m^n$ from \cite{J:MixingInterchange}, this finishes the proof.
\end{proof}
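The martingale convergence used in the proof is easy to observe in simulation. The sketch below (which assumes, purely for illustration, a Poisson offspring distribution with mean $m=2$, and does not condition on survival) tracks $Z_n/m^n$ over many independent trees; since $E[Z_n]=m^n$, the empirical mean of the ratio stays close to one.

```python
import numpy as np

rng = np.random.default_rng(0)
m_mean, n_gen, trials = 2.0, 10, 1000    # illustrative parameters

ratios = []
for _ in range(trials):
    z = 1                                # generation sizes Z_0, Z_1, ...
    for _ in range(n_gen):
        # total offspring of the current generation
        z = rng.poisson(m_mean, size=z).sum() if z > 0 else 0
    ratios.append(z / m_mean ** n_gen)

ratios = np.array(ratios)
print(ratios.mean())                     # E[Z_n / m^n] = 1 by the martingale property
```

Individual trajectories either die out (ratio $0$) or stabilize near a positive random limit, which is exactly the almost-sure limit $Y_o$ appearing in \eqref{def:MartingaleLimit}.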
\begin{remark} For critical or subcritical Galton--Watson trees, i.e.\ when $m\leq 1$ holds in \eqref{eq:AssumptionsGWT1}, one can consider the family of random walks on $(G_n)$ which we obtain from the associated Kesten tree truncated at generation $n$, see \cite{K:KestenTree} for a formal definition of the tree and \cite{J:SurveyTrees} for a more comprehensive discussion. Note that the resulting tree consists of a segment of size $n$ with almost surely constant size trees attached. Using Theorem \ref{thm:NoCutoff}, one can show that there is $P$-almost surely no cutoff.
\end{remark}
\subsection{Combinatorial trees converging to the CRT} \label{sec:GaltonWatsonTotal}
In the previous two examples, we considered an infinite tree, which we truncated at generation $n$ in order to obtain the family of trees $(G_n)_{n \geq 1}$. We now take a different perspective and study the simple random walk on a sequence of random trees, where we assume that $G_n$ has exactly $n$ sites. \\
For each tree $G_n$, we assign a labeling of the $n$ sites chosen uniformly at random, and declare the vertex with label $1$ to be the root of $G_n$. Let $s \colon \{ 1, \dots, 2n-1\} \rightarrow V_n$ be the \textbf{contour function} of $G_n$, given as the walk on $V_n$ obtained by performing depth-first search on $G_n$, giving priority to sites with a smaller label, see also Section 2.6 in \cite{A:CRT3}. Intuitively, we obtain $s$ by embedding $G_n$ into the plane such that the shortest path distance is preserved and sites with a common ancestor are ordered increasingly according to their labels, see Figure \ref{fig:ContourCap}. We will write $|s(\cdot)|$ for the distance of $s(\cdot)$ to the root, and, with a slight abuse of notation, again call $|s|\colon \{ 1, \dots, 2n-1\} \rightarrow\mathbb{N}$ the contour function.
For $c>0$, consider the normalized contour function $\tilde{s}_n \colon [0,1] \rightarrow \mathbb{R}$ given by
\begin{equation}
\tilde{s}_n\left(\frac{i}{2n}\right):=c n^{-\frac{1}{2}}|s(i)|
\end{equation} for all $i\in \{1,\dots,2n-1\}$, $\tilde{s}_n(0)=\tilde{s}_n(1)=0$ and linear interpolation between these values.
We say that a family of random trees $(G_n)_{n \geq 1}$ \textbf{converges to the CRT for \textit{c}} if $(\tilde{s}_n)_{n \geq 1}$ converges in distribution (in the space $\mathcal{C}([0,1])$) to the Brownian excursion $(e(t))_{t \in [0,1]}$.
Note that the terminology CRT refers to the Brownian continuum random tree, which can be seen as the limit object of $(G_n)_{n \geq 1}$, see \cite{A:CRT1,A:CRT2, A:CRT3} for an introduction and equivalent definitions, and \cite{HM:ScalingUnlabelled,MM:ScalingBinary} for examples of trees converging to the CRT. \\
Arguably, the most prominent example of such a sequence of trees is given by independently chosen critical Galton--Watson trees
$(G_n)_{n\geq 1}$ conditioned on having exactly $n$ sites, where we assume that
\begin{equation}\label{eq:AssumptionsGWT2}
\sum_{j=1}^{\infty}j \mu(j) =1 \qquad \sum_{j=1}^{\infty}j^2 \mu(j) =:\sigma^2 \in(0,\infty) \qquad \gcd(j>0 \colon \mu(j)>0)=1
\end{equation} holds, i.e.\ the number of offspring has mean $1$, finite variance and is not supported on a sublattice of the integers. Note that the assumption of having $m=1$ is not a restriction as we can transform any given offspring distribution with positive mean into a critical offspring distribution without changing the law of the resulting graphs when conditioning on a given number of sites, see Section 2.1 in \cite{A:CRT2}. The following lemma is a classical result due to Aldous.
\begin{lemma}[c.f.\ Theorem 23 in \cite{A:CRT3}]\label{lem:AldousScaling} Under the assumptions in \eqref{eq:AssumptionsGWT2}, we have that $(G_n)_{n\geq 1}$ converges to the CRT for $c=\sigma/2$.
\end{lemma}
Note that for critical Galton--Watson trees, mixing and relaxation times were studied by Jonasson in \cite{J:MixingInterchange}, relying on estimates of the effective resistance between the root and the leaves. We now present an alternative proof of his results, which extends to more general families of combinatorial trees. In the following, we first choose the trees $(G_n)_{n \geq 1}$, keep them fixed and then perform simple random walks on $(G_n)_{n \geq 1}$. We denote the law of $(G_n)_{n \geq 1}$ by $P$.
\begin{proposition}\label{pro:NoCutoffGWT2} Let $(G_n)_{n \geq 1}$ be a family of random trees converging to the Brownian CRT for some $c$. Then we have that $P$-almost surely, the family of simple random walks on $(G_n)_{n \geq 1}$ does not exhibit cutoff.
\end{proposition}
\begin{figure} \label{fig:Contour}
\begin{center}
\begin{tikzpicture}[scale=0.7,
declare function={
func(\x)= (\x < 1) * (0) +
and(\x >= 1, \x < 4) * (\x-1) +
and(\x >= 4, \x < 5) * (6-\x+1) +
and(\x >= 5, \x < 6) * (\x-2-1) +
and(\x >= 6, \x < 7) * (8-\x+1) +
and(\x >= 7, \x < 9) * (\x-4-1) +
and(\x >= 9, \x < 13) * (12-\x+1) +
and(\x >= 13, \x < 15) * (\x-12-1) +
and(\x >= 15, \x < 17) * (16-\x+1) +
and(\x >= 17, \x < 20) * (\x-16-1) +
and(\x >= 20, \x < 23) * (22-\x+1) +
and(\x >= 23, \x < 25) * (\x-22-1) +
and(\x >= 25, \x < 27) * (26-\x+1) +
(\x >= 27) * (0) ;
}
]
\node[shape=circle,scale=0.8,draw] (B) at (3,0){$1$} ;
\node[shape=circle,scale=0.8,draw] (A1) at (0,1.5){$3$} ;
\node[shape=circle,scale=0.68,draw] (A2) at (0,3){$13$} ;
\node[shape=circle,scale=0.8,draw] (A3) at (-1.1,4.5){$2$} ;
\node[shape=circle,scale=0.8,draw] (A4) at (0,4.5){$9$} ;
\node[shape=circle,scale=0.68,draw] (A5) at (1.1,4.5){$10$} ;
\node[shape=circle,scale=0.8,draw] (B1) at (3-0.8,1.5){$4$} ;
\node[shape=circle,scale=0.8,draw] (B2) at (3.8,1.5){$7$} ;
\node[shape=circle,scale=0.68,draw] (B3) at (3-0.8,3){$12$} ;
\node[shape=circle,scale=0.8,draw] (B4) at (3.8,3){$5$} ;
\node[shape=circle,scale=0.68,draw] (C1) at (1.1,6){$11$} ;
\node[shape=circle,scale=0.8,draw] (C2) at (3.8,4.5){$8$} ;
\node[shape=circle,scale=0.68,draw] (C3) at (6,1.5){$14$} ;
\node[shape=circle,scale=0.8,draw] (C4) at (6,3){$6$} ;
\draw[thick] (A1) to (B);
\draw[thick] (A1) to (A2);
\draw[thick] (A2) to (A3);
\draw[thick] (A2) to (A4);
\draw[thick] (A2) to (A5);
\draw[thick] (B) to (B1);
\draw[thick] (B) to (B2);
\draw[thick] (B1) to (B3);
\draw[thick] (B2) to (B4);
\draw[thick] (A5) to (C1);
\draw[thick] (B4) to (C2);
\draw[thick] (B) to (C3);
\draw[thick] (C3) to (C4);
\node[scale=1] at (3.5,-0.5){$o$};
\node[scale=0.9] at (10.1,5.7){$|s(i)|$};
\node[scale=0.9] at (17,-0.5){$i$};
\begin{axis}[xshift=9.5cm,
domain=0:28,
xmin=0, xmax=28,
ymin=0, ymax=4.1,
samples=400,
axis y line=center,
axis x line=middle,
scale=1.1
]
\addplot [RoyalBlue,thick] {func(x)};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{\label{fig:ContourCap} Visualization of the linear interpolation of the contour function $|s| \colon \{1,\dots,27\} \rightarrow \mathbb{N}$ for the graph $G$ given on the left-hand side with $n=14$ sites. }
\end{figure}
Intuitively, the absence of cutoff is explained by the fact that the law of the random walk on $(G_n)_{n \geq 1}$ converges to the law of a Brownian motion on the CRT, indicating a smooth decay of the total-variation distance for the random walk at a time of order $n^{3/2}$.
In order to prove Proposition \ref{pro:NoCutoffGWT2}, we will show
that $P$-almost surely, \eqref{seminalcomp} is not satisfied. First, we will prove that, for all $n$ sufficiently large, the mixing time and the relaxation time are both of order $n^{3/2}$ with probability arbitrarily close to one.
We start with the following upper bound on the mixing time.
\begin{lemma}\label{lem:MixingTimeGWT} Let $(G_n)$ be as in Proposition \ref{pro:NoCutoffGWT2}. Then,
for all $\varepsilon>0$, there exists a constant $C_{\varepsilon}$ such that for $n$ sufficiently large,
\begin{equation}
P\left( t^n_{\textup{mix}}\left(\frac{1}{4}\right) \leq C_\varepsilon n^{\frac{3}{2}} \right) \geq 1 - \varepsilon\, .
\end{equation}
\end{lemma}
\begin{proof} Note that the tree $G_n$ satisfies $\textup{diam}(G_n) \leq 2 \max_{i \in \{1,\dots,2n-1\}}|s(i)|$. From Lemma \ref{lem:AldousScaling}, we see that for all $\varepsilon>0$, there is $c_1 = c_1(\varepsilon) > 0$ such that for $n$ large enough,
\begin{equation}
P\Big(\textup{diam}(G_n) \geq c_1 n^{\frac{1}{2}}\Big) < \varepsilon\, .
\end{equation}
Now we use \eqref{mixdiam} from Proposition \ref{pro:HittingtimeUpper} to conclude.
\end{proof}
It remains to show a corresponding lower bound of order $n^{3/2}$ for the relaxation time of the simple random walk on $G_n$, which requires some setup. We start by collecting some statements about the Brownian excursion, and then carry these observations over to trees using Lemma \ref{lem:AldousScaling}. More precisely, we first choose a new root $o_{\ast}$, which will be a $\delta$-center of mass. We then show that there exists a site $v$ at distance of order $\sqrt{n}$ from the new root with order $n$ many sites in the tree $T_v$. Note that a Brownian excursion $(e(t))_{t \in [0,1]}$ almost surely attains its extrema on every compact subset of $[0,1]$. We let $t_{\min}$ and $t_{\max}$ be such that
\begin{equation}\label{def:ExtremValuesBE}
e(t_{\min}) = \min_{t \in \left[ \frac{1}{4},\frac{3}{4}\right]} e(t), \qquad e(t_{\max}) = \max_{t \in \left[ \frac{1}{4},\frac{3}{4}\right]} e(t) \ .
\end{equation}
Consider, for $\theta> 0$ and $\delta > 0$, the events
\begin{equation}\label{eq:ExcursionArea} B_1 := \left\lbrace e(t_{\min}) > \theta \right\rbrace \cap \left\lbrace \max_{t \in [0,\delta]} e(t) \leq \frac{\theta}{2}\right\rbrace
\end{equation}
and
\begin{equation}\label{eq:ExcursionBoundary}
B_2 :=
\left\lbrace
e(t) \geq 2\theta \text{ for all } t \in [t_{\max}-\theta ,t_{\max}+\theta] \right\rbrace \ ,
\end{equation}
see Figure \ref{fig:BrownianExCap} for a visualization.
\begin{lemma}\label{lem:ExcursionArea} For every $\varepsilon>0$, there exists some $\theta>0$ and some $\delta > 0$
such that $\P(B_1 \cap B_2 ) \geq 1- \varepsilon/2$.
\end{lemma}
\begin{proof} Note that $e(t_{\max}) > e(t_{\min})>0$ holds almost surely by the construction of the Brownian excursion. Since
$(e(t))_{t \in [0,1]}$ is continuous, and $e(\frac{1}{4})$ and
$e(\frac{3}{4})$ have densities with respect to the Lebesgue measure on $[0, \infty)$, the probability of the event
$$
\left\lbrace e(t_{\min}) > \theta \right\rbrace \cap \left\lbrace
e(t) \geq 2\theta \text{ for all } t \in [t_{\max}-\theta ,t_{\max}+\theta] \right\rbrace
$$ goes to $1$ as $\theta \to 0$. Hence, we choose $\theta$ such that this probability is at least $1- \varepsilon/4$.
Moreover, since $e(0)=0$ almost surely and $(e(t))_{t \in [0,1]}$ has continuous paths, we have that
for every $\varepsilon,\theta>0$, there exists some $\delta>0$ such that the event
$$
\left\lbrace \max_{t \in [0,\delta]} e(t) \leq \frac{\theta}{2}\right\rbrace
$$
has probability at least $1- \varepsilon/4$. This implies that the event $B_1\cap B_2$ has probability at least $1- \varepsilon/2$.
\end{proof}
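The events $B_1$ and $B_2$ can also be explored by simulation. The sketch below approximates the Brownian excursion by the Vervaat transform of a simple random walk bridge (a standard construction, used here only as an illustration, with arbitrary sample sizes) and estimates the probability that the excursion stays above $\theta$ on $[1/4,3/4]$; computed on a fixed sample, the estimate is monotone in $\theta$, in line with the $\theta \to 0$ argument in the proof above.

```python
import numpy as np

rng = np.random.default_rng(1)

def excursion(n2):
    """Discrete excursion of length n2 (even): Vervaat transform of a
    simple random walk bridge, normalized by sqrt(n2)."""
    steps = np.array([1] * (n2 // 2) + [-1] * (n2 // 2))
    rng.shuffle(steps)
    bridge = np.concatenate([[0], np.cumsum(steps)])
    k = np.argmin(bridge)                 # rotate the bridge at its minimum
    rotated = np.concatenate([bridge[k:], bridge[1:k + 1]]) - bridge[k]
    return rotated / np.sqrt(n2)

n2, samples = 400, 500
mins = np.array([e[n2 // 4: 3 * n2 // 4 + 1].min()
                 for e in (excursion(n2) for _ in range(samples))])
# empirical estimate of P(min over [1/4, 3/4] > theta), same sample for each theta
p = {theta: (mins > theta).mean() for theta in (0.02, 0.1, 0.3)}
print(p)                                  # increases as theta decreases
```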
\begin{figure} \label{fig:BrownianEx}
\begin{center}
\begin{tikzpicture}[scale=0.9]
\pgfmathsetseed{1332}
\node (A1) at (0.01*rand,0.01*rand+1){};
\node (A2) at (0.01*rand,0.01*rand+1){};
\node (A3) at (0.01*rand,0.01*rand+1){};
\node (A4) at (0.01*rand,0.01*rand+1){};
\draw[RoyalBlue] (0,0)
-- ++(0.02,0.15)
-- ++(0.02,-0.05)
-- ++(0.02,0.1)
-- ++(0.02,0.1)
-- ++(0.02,-0.05)
-- ++(0.02,+0.15)
-- ++(0.02,-0.1)
-- ++(0.02,0.2)
-- ++(0.02,-0.15)
-- ++(0.02,0.05)
\foreach \x in {1,...,740}
{ -- ++(0.02,-rand*0.2)
}-- ++(0.02,-0.23)
-- ++(0.02,-0.15)
-- ++(0.02,0)
-- ++(0.02,-0.05)
-- ++(0.02,0)
-- ++(0.02,-0.05)
-- ++(0.02,-0.2)
-- ++(0.02,+0.05)
-- ++(0.02,-0.1);
\draw[thick] (0,0) -- (15.2,0);
\draw[thick,dashed] (15.2/4+0.32,0) -- (15.2/4+0.32,4);
\draw[thick,dashed] (3*15.2/4,0) -- (3*15.2/4,4);
\draw[thick] (15.2/4+0.32,0) -- (15.2/4+0.32,-0.3);
\draw[thick] (3*15.2/4,0) -- (3*15.2/4,-0.3);
\draw[thick] (15.2,0) -- (15.2,-0.3);
\draw[thick] (0,0) -- (0,-0.3);
\draw[thick] (0.15,0) -- (0.15,-0.3);
\draw[thick,dotted] (0.15,0) -- (0.15,2);
\draw[thick,dotted] (0,0) -- (0,2);
\draw (0,2) -- (0.5,2.5);
\draw (0.15,2) -- (0.5,2.5);
\node[scale=0.85] (Actilde) at (0.62,2.62){$\delta$} ;
\node[scale=3.42, rotate=270] (Abrac) at (7.17,2.7){$\}$} ;
\node[scale=0.85] (Amin35) at (7.17,2.3){$2\theta$} ;
\node[scale=1.7] (Abrac2) at (15.2/2,0.4){$\}$} ;
\node[scale=0.85] (Amin3) at (15.2/2+0.3,0.4){$\theta$} ;
\node[shape=circle,scale=0.15,fill] (Amin) at (10.77,1){$e_{\min}$} ;
\node[shape=circle,scale=0.15,fill] (Amin2) at (7.17,4.6){$e_{\max}$} ;
\draw[thick,dotted] (15.2/4+0.32,0.8) -- (3*15.2/4,0.8);
\draw[thick,dotted] (15.2/4+0.32,0.4) -- (0,0.4);
\node[scale=0.85] (Amin3) at (10.77,0.6){$e(t_{\min})$} ;
\node[scale=0.85] (Amin4) at (7.17,4.9){$e(t_{\max})$} ;
\draw[thick,dotted] (7.17-0.8,5) -- (7.17-0.8,3);
\draw[thick,dotted] (7.17+0.8,5) -- (7.17+0.8,3);
\draw[thick,dotted] (7.17-2.1,3) -- (7.17+0.99,3);
\node[scale=0.85] (A0) at (0,-0.49){$0$} ;
\node[scale=0.85] (A1) at (15.2/4+0.1,-0.35){$\frac{1}{4}$} ;
\node[scale=0.85] (A2) at (3*15.2/4-0.2,-0.35){$\frac{3}{4}$} ;
\node[scale=0.85] (A3) at (15.2,-0.49){$1$} ;
\node[scale=0.85] (Abrac2) at (15.2/8,0.2){$\mathbf{\}}$} ;
\node[scale=0.75] (Amin3) at (15.2/8+0.4,0.18){$\theta/2$} ;
\end{tikzpicture}
\end{center}
\caption{\label{fig:BrownianExCap} Visualization of the events $B_1$ and $B_2$ for the Brownian excursion. }
\end{figure}
We now use Lemma \ref{lem:ExcursionArea}
to show the following bound on the relaxation time.
\begin{lemma}\label{lem:RelaxationTimeGWT} For all $\varepsilon>0$, there exists some constant $c_{\varepsilon}>0$ such that for $n$ sufficiently large,
\begin{equation}
P\left( t^n_{\textup{rel}} \geq c_{\varepsilon} n^{\frac{3}{2}} \right) \geq 1 - \varepsilon\, .
\end{equation}
\end{lemma}
\begin{proof} Recall the contour function $s$, respectively $|s|$. Fix some $\varepsilon>0$ and let $\theta,\delta>0$ be the constants from Lemma \ref{lem:ExcursionArea}. We start the proof by first choosing a new root $o_{\ast}$ of $G_n$ as follows: Recall the constant $c>0$ from the convergence of $(G_n)$ to the CRT. We set
\begin{equation*}
o_{\ast}:= s(i_{\ast}) \qquad \text{ with } \qquad i_{\ast} = \max\left\{ i \leq \frac{1}{2}n \colon |s(i)| \leq c^{-1}\theta \sqrt{n} \right\} \ .
\end{equation*}
We claim that whenever the event $B_1^n$ given by
\begin{equation*}
B^n_1 := \left\lbrace |s(i)|> c^{-1}\theta \sqrt{n} \text{ for all } i \in \left[\frac{1}{2} n, \frac{3}{2} n\right]\right\rbrace \cap \left\lbrace \max_{j \leq 2\delta n} |s(j)|< c^{-1}\frac{\theta}{2} \sqrt{n} \right\rbrace
\end{equation*} occurs, then $o_{\ast}$ is a $\delta$-center of mass. To see this, observe that the subtree $T=T_{o_{\ast}}$, i.e.\ the largest subtree containing the new root $o_{\ast}$ and only sites of distance at least $|o_{\ast}|$ from the old root $o$, must have at least $n/2$ sites. This is due to the fact that the contour traverses, between times $n/2$ and $3n/2$, only edges which belong to $T$ and each edge is visited at most twice. To conclude that $o_{\ast}$ must be a $\delta$-center of mass, note that the tree $\tilde{T}$, induced by the vertices $(V \setminus V(T)) \cup \{o_{\ast}\}$, contains at least $\delta n$ many sites. This follows from the observation that all edges traversed by the contour until time $2\delta n$ belong to $\tilde{T}$ provided that $B_1^n$ occurs. Thus, $T$ and $\tilde{T}$ satisfy \eqref{def:CenterOfMass} and this proves the claim.\\
Moreover, suppose that in addition the event $B^n_2$ given by
\begin{equation*}
B^n_2 := \left\lbrace \exists i \in \left[\left(\frac{1}{2}+2\theta\right) n, \left(\frac{3}{2}-2\theta\right) n\right] \colon \min_{ j\in [i-2\theta n,i+2 \theta n]} |s(j)| \geq 2c^{-1}\theta \sqrt{n} \right\rbrace
\end{equation*} occurs. We claim that, in this case, with $i$ as in the event $B_2^n$, the vertex $v$ given by
\begin{equation*}
v:= s(j_{\ast}) \qquad \text{ with } \qquad j_{\ast} = \max\left\{ j < i-2\theta n \colon |s(j)| < 2c^{-1}\theta \sqrt{n} \right\}
\end{equation*} is at distance at least $c^{-1}\theta \sqrt{n}$ from $o_{\ast}$ and satisfies $|T_v| \geq 2\theta n$. Again, this claim follows from the definition of the contour $s$ and the events $B_1^n$ and $B_2^n$ since all edges visited by the contour during $[i-2\theta n,i+2\theta n]$ must belong to $T_v$. \\
Hence, whenever the events $B_1^n$ and $B_2^n$ both occur, Proposition \ref{pro:LowerBoundRelaxationTime} gives a lower bound of $2\delta c^{-1}\theta^2 n^{3/2}$ for the relaxation time. Notice that the events $B_1$ and $B_2$ for the Brownian excursion correspond to the events $B_1^n$ and $B_2^n$. More precisely, we have that
\begin{align*}
P\left(|s(i)|>c^{-1}\theta \sqrt{n} \text{ for all } i \in \left[ \frac{1}{2}n,\frac{3}{2}n\right] \right) &= P\left(\tilde{s}_n(y)>\theta \text{ for all } y \in \left[ \frac{1}{4},\frac{3}{4}\right] \right)
\end{align*}
converges to $\P\left(e(t_{\min})\geq \theta\right)$ by Lemma \ref{lem:AldousScaling}. A similar statement applies to all other events in the definitions of $B_1^n$ and $B_2^n$. Thus, for all $\varepsilon>0$, Lemma \ref{lem:AldousScaling} yields that there exists an $N=N(\varepsilon)$ such that for all $n\geq N$,
\begin{equation*}
P(B_1^n \cap B_2^n) \geq \P(B_1 \cap B_2) - \frac{\varepsilon}{2}\, .
\end{equation*}
We conclude with Lemma \ref{lem:ExcursionArea}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{pro:NoCutoffGWT2}] Recall from Lemma \ref{lem:ProductCriterion} that a necessary assumption for the simple random walk to satisfy cutoff is that the product criterion \eqref{eq:ProductCriterionIntro} holds.
In other words, it is enough to show that
\begin{equation}\label{theliminf}
Z_n : = \frac{t^{n}_{\textup{mix}}\left( \frac{1}{4}\right)}{t^n_{\textup{rel}}} \ \text{ satisfies } \
P\left( \liminf_{n \rightarrow \infty} Z_n < \infty\right) = 1\, .
\end{equation}
Note that Lemma \ref{lem:MixingTimeGWT} and Lemma \ref{lem:RelaxationTimeGWT} imply (by choosing $K_\varepsilon = \frac{C_{\varepsilon}}{c_{\varepsilon}}$) that
\begin{equation}\label{is finite}
\forall \varepsilon > 0, \ \exists K_\varepsilon < \infty \text{ such that }
P\left(Z_n \leq K_\varepsilon\right) \geq 1-2 \varepsilon\, .
\end{equation}
The following general argument shows that \eqref{is finite} implies \eqref{theliminf}.
Let $\tilde{C} > 0$ and $\tilde{c} \in (0,\tilde{C})$. Then for every $n \in \mathbb{N}$, we have that
\begin{align}\label{eq:Formerindpendence}
P\left(\liminf_{n \rightarrow \infty} Z_n \geq \tilde{C} \right) &\leq P\left( \exists N_0=N_0(\omega) < \infty \colon Z_m \geq \tilde{C} - \tilde{c} \text{ for all } m \geq N_0 \right) \nonumber \\
&\leq P\left( N_0 \geq n \right) + P\left( Z_n \geq \tilde{C} - \tilde{c} \right) \ .
\end{align}
Given $\varepsilon>0$, by choosing $n$ and $\tilde{C}$ large enough, due to \eqref{is finite}, both terms on the right-hand side of
\eqref{eq:Formerindpendence} are at most $2\varepsilon$. Since $\varepsilon$ was arbitrary, we conclude that \eqref{theliminf} holds.
\end{proof}
\section{Open questions}
In order to exclude cutoff for the simple random walk on spherically symmetric trees, we assumed that the maximum degree is bounded.
\begin{question} Can the assumption in Proposition \ref{pro:NoCutoffSpherically} on having a bounded maximum degree be relaxed?
\end{question}
In Section \ref{sec:CutoffAtypical}, we showed that we do not see cutoff for Galton--Watson trees, which are typically \textit{short} and \textit{fat}, see also \cite{AB:ShortAndFat}.
\begin{question} Does a family of trees exist which is \textit{short} and \textit{fat} in the sense of \cite{AB:ShortAndFat} for which the simple random walk exhibits cutoff?
\end{question}
Throughout this article, we used at several points the result of Basu, Hermon and Peres that cutoff occurs for the simple random walk on a family of trees if and only if \eqref{eq:ProductCriterionIntro} is satisfied.
\begin{question} For which families of graphs is the product criterion in \eqref{eq:ProductCriterionIntro} a necessary and sufficient condition such that the simple random walk exhibits cutoff?
\end{question}
\bibliographystyle{plain}
\section{Introduction} \label{sec:intro}
\IEEEPARstart{P}{hysically} unclonable functions (PUFs) or physical one-way functions (POWFs) have been suggested as a method to securely authenticate a networked device or remote user. Current state-of-the-art means of authentication rely on a classical secret key, or token, stored within read-only memory (ROM). PUFs are of particular interest since they often form the basis of hardware primitives necessary to replace shared secret keys with a non-reproducible physical object or device.
Classical CMOS-based PUFs are physical primitives that utilize process fabrication variance to create unique POWFs. Unlike non-volatile memory, where information can be stored and read digitally, information in CMOS-based PUFs is directly extracted from inherent lithographic variation, making static PUFs impossible to duplicate, even within the original manufacturing process \cite{herder14}. Other common forms of CMOS PUFs include arbiter PUFs \cite{Xu2015} that utilize delays to measure differences in transmission times of two competing pathways in order to generate a digital response, butterfly PUFs \cite{kumar2008,Guajardo2007} that examine output from a set of cross-coupled latches, and random-access memory (RAM) PUFs \cite{Holcomb2009} that are based on randomly distributed mismatches between two transistors where the repeatable start-up conditions of cells are treated as digital responses.
The operating scheme for all types of PUFs remains essentially identical: Given a set of specific inputs, referred to as the challenge, a PUF will generate a unique output response. These inputs and corresponding outputs are known as the challenge-response pairs (CRPs). The manufacturer or user of the PUF enrolls the device's unique information by generating and recording all of the viable CRPs. The user can then verify the identity of the integrated (or remote) PUF, at a later time, by challenging the suspect device and comparing the response to the expected response. In this work, our challenge will be a set of randomly selected voltages applied to the device, $\bar{C}$. The response will be the normalized distribution of laser intensity in the output modes that results from those voltage settings, $\bar{R}$. The details of this scheme will be described in Section~\ref{sec:the_qpp}.
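The enrollment-and-verification loop described above can be summarized in a toy sketch. All names, the stand-in response function, and the similarity threshold below are illustrative assumptions and not part of the actual device: the verifier records challenge--response pairs during enrollment and later accepts a device only if its response to a re-issued challenge is close to the recorded one.

```python
import numpy as np

rng = np.random.default_rng(7)

def device_response(device_key, voltages):
    """Toy stand-in for a PUF: a fixed random map from challenge
    voltages to a normalized output intensity distribution."""
    h = np.sin(np.outer(voltages, device_key)).sum(axis=1)
    p = np.exp(h)
    return p / p.sum()

def enroll(device_key, n_challenges=50, n_modes=8):
    """Record challenge-response pairs for later verification."""
    crps = []
    for _ in range(n_challenges):
        c = rng.uniform(0, 10, size=n_modes)       # random voltage settings
        crps.append((c, device_response(device_key, c)))
    return crps

def verify(device_key, crps, tol=1e-3):
    """Re-issue a stored challenge and compare responses."""
    c, r_expected = crps[rng.integers(len(crps))]
    r = device_response(device_key, c)
    return bool(np.abs(r - r_expected).sum() < tol)

genuine = rng.normal(size=8)                       # fabrication randomness
clone = rng.normal(size=8)                         # a different device
crps = enroll(genuine)
print(verify(genuine, crps), verify(clone, crps))  # genuine accepted, clone rejected
```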
PUFs based on optical measurements have been proposed with differing operating mechanisms, observing either the scattering of laser light from bulk inhomogeneous media \cite{pappu2002}, multi-mode fiber \cite{Mesaritakis2018}, or non-linear interaction in specialized integrated devices \cite{grubel2017}. One of the main reasons that electronic PUFs are commonly implemented in field-programmable gate arrays (FPGAs) and other protected IPs is their ease of integration into the many existing CMOS-process devices, alongside their low size, weight, and power requirements.
Optical PUFs often require non-trivial bulk optics and ancillary support such as micron-accurate positioning stages \cite{pappu2002} or bulk disordered materials \cite{Mesaritakis2018}. A more compact solution was conceived by Grubel et al. \cite{grubel2017} through the use of photonic integrated circuits (PICs); however, these PIC PUFs require a set of completely custom-designed devices for the sole purpose of use as a PUF. Here we propose that any large- and well-connected-enough array of linear interferometric devices can be used for both its designed purpose \textit{and} as an optical PUF. The ongoing development of large-scale PICs, and particularly large interferometric devices \cite{obrien2018,obrien2018sci,obrien2017,englund17,harris14}, along with the wide range of applications from general information processing \cite{englund17}, quantum key distribution \cite{obrien2017}, quantum optics \cite{obrien2018}, and even the development of deep-learning and optical neural networks \cite{shen17, carolan2017apparatus}, suggests that such linear PICs will become ubiquitous components in the future.
Analogous to the development of RAM-based PUFs, our circuit was not specifically designed to act as a PUF. We demonstrate that the large interferometric circuits now being developed by the authors and other groups \cite{obrien2018,obrien2018sci,obrien2017,englund17,harris14,shen17,carolan2017apparatus,harris18} have an additional application as a PUF.
In this work, we describe a linear optical interferometric PIC which acts as a PUF. We demonstrate how a small sub-circuit operates as a weak PUF but has the ability to further meet the criteria of a strong PUF. We show how the scaling of an integrated optical circuit intrinsically carries enough randomness, arising from multi-input interference under adjustments of Mach-Zehnder interferometers (MZIs), to act as a practical PUF.
The major advantage of using such a PUF is compatibility with existing CMOS fabrication, allowing easy adoption with existing technologies. The PIC PUF's tight integration with other PIC devices, such as those used in quantum communication, allows for improved security over add-on alternative PUF IPs. Our PUF's performance and large number of challenges, discussed in Section~\ref{sec:results}, give it excellent characteristics as a strong PUF. Finally, the ability to use \textit{any} large circuit of interferometers as a PUF, combined with the growing size and number of such circuits, suggests that our device could become a ubiquitous component in the near future.
\section{The Quantum Photonics Processor} \label{sec:the_qpp}
In this research we have employed the Quantum Photonics Processor (QPP)\footnote{Also known as the Programmable Nanophotonic Processor (PNP).}, developed in collaboration between the Air Force Research Lab (AFRL) and the photonics research group under the direction of Dirk Englund at MIT \cite{englund17,harris18}, as our prototype linear optical interferometric PUF. The optical PUF device is designed as a silicon-on-insulator (SOI) integrated optical chip fabricated in a CMOS foundry. The PIC device consists of 88 2x2 MZIs connected in a triangular nearest-neighbor configuration, as shown in Figure~\ref{diag1}.
\begin{figure}[tbp!]
\centerline{\includegraphics[width=.45\textwidth]{figures/smith1.pdf}}
\caption{{\bf Structure of Quantum Photonics Processor}, where each rectangular box represents a single 2x2 MZI. The figure shows two logically separated devices (orange), consisting of 10 MZIs each, and the pumping scheme within the larger QPP. The largest PUF on the QPP is pumped down the center channel and consists of 66 MZIs.}
\label{diag1}
\end{figure}
The devices are standard SOI, a thermo-optic material, and as such each MZI can be independently thermally tuned by an integrated resistive heating element. The 2x2 MZI is capable of applying the idealized $2\times2$ unitary transformation shown in Equation~\ref{eq:mzi_xfrfn}, expanded upon in Section~\ref{sec:results}.
\begin{equation} \label{eq:mzi_xfrfn}
U_{MZI}(\theta, \phi) = \dfrac{1}{2}
\begin{pmatrix}
e^{j\phi} & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & j \\
j & 1
\end{pmatrix}
\begin{pmatrix}
e^{j\theta} & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & j\\
j & 1
\end{pmatrix}
\end{equation}
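As a quick numerical check (a sketch in Python with NumPy; the function name \texttt{u\_mzi} is ours and not part of any QPP driver), the transfer matrix of Equation~\ref{eq:mzi_xfrfn} can be assembled directly from its two phase shifters and two ideal 50:50 couplers and verified to be unitary, as any lossless linear device must be:

```python
import numpy as np

def u_mzi(theta, phi):
    """Ideal MZI transfer matrix of Equation (eq:mzi_xfrfn): 50:50 coupler,
    internal phase shifter (theta), second coupler, output phase offset (phi).
    The rightmost factor acts on the input first."""
    coupler = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # ideal 50:50 splitter
    return (np.diag([np.exp(1j * phi), 1.0]) @ coupler
            @ np.diag([np.exp(1j * theta), 1.0]) @ coupler)

# A lossless linear device must be unitary for any phase settings.
U = u_mzi(0.7, 1.3)
assert np.allclose(U.conj().T @ U, np.eye(2))
```

The two $1/\sqrt{2}$ coupler prefactors reproduce the overall factor of $1/2$ in Equation~\ref{eq:mzi_xfrfn}.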
Each MZI consists of two integrated phase shifters: one between the beam splitters and a second on one output leg. The unitary transformation in Equation~\ref{eq:mzi_xfrfn} includes two variables, $\theta$ and $\phi$, which map to the internal phase setting and the output phase offset, respectively, for each MZI. In this work we employed only the internal phase shifters, driven by a computer-controlled voltage driver \cite{harris14}, and thus have two PUF devices, each consisting of 10 MZIs and each pumped by a Keysight laser (model 81606A) through a single waveguide. The fiber arrays and edge coupling of the QPP are shown in Figure~\ref{fig:qpp_image}. Each PUF device has 8 output ports, each connected via SMF-28 fiber to a standard PIN photodiode (Precision Micro Optics model DPRM-412). The subset devices used within the QPP are triangular, with a light-cone-like dispersion region, visualized by the orange regions in Figure~\ref{diag1}.
The first source of randomness in this device is the measured $\approx15.43\%$ variation between resistive heating elements due to fabrication variances. It should be noted that continual operation of the resistive heating elements will lead to electrode annealing, so a change in the output of the PUF could be observed over time. A far more significant source of randomness is the pair of directional couplers within each MZI. The couplers are designed for a nominal 50:50 splitting ratio, but fabrication defects stemming from variation in the etching process, sidewall roughness, and variation in the minute distances between waveguides lead to unpredictable splitting ratios near $50\%$. An additional source of unpredictability comes from a minor design flaw: since many of the MZIs share ground leads, positive-feedback ground loops are formed when a single MZI's voltage is set and the cascading MZIs' resistive elements return a complex set of voltages, determined by nearest-neighbor association. The effect of this ground-loop feedback is approximately $-45$ dB, as measured by M. Prabhu \cite{prabhu2018towards}, so these voltage errors are likely only a minor factor in the device's overall behavior. To minimize unwanted global thermo-optic effects, the device was held at a steady temperature, slightly above ambient, throughout testing. The effects of operating at differing temperatures have not yet been explored; however, the device can operate without damage over a wide range of steady temperatures, each of which affects the output characteristics. The effects of differing global temperature on the PUF are discussed further in Section~\ref{sec:results}.
The QPP is large enough to act as two distinct devices with identical structure. Two devices were programmed for comparison by pumping laser-light into two spatially separated regions of the QPP, such that light from one device cannot reach the other, either directly or through reflections (other than those coupled into the slab mode). In addition, the two devices are electrically separated, so that no ground-loop feedback effects exist between them.
\begin{figure*}[tbp!]
\includegraphics[keepaspectratio,width=\textwidth]{figures/smith2}
\caption{\textbf{Quantum Photonics Processor} shown with two 26-mode fiber arrays edge-coupled to the QPP. The fiber array on the left is the input of laser-light into the QPP, connected to a printed circuit board with connections to control the integrated resistive heating elements. The fiber array on the right leads to an array of PIN photodiodes to measure intensities on the outputs.}
\label{fig:qpp_image}
\end{figure*}
\section{PUF Metrics} \label{sec:puf_metrics}
We use the definitions of a weak and strong PUF given by C. Herder et al. \cite{herder14}. A weak PUF is described as:
\begin{inlineenum}
\item Having a number of CRPs linearly related to the number of components,
\item being robust against environmental effects, i.e. having stable CRPs,
\item having unpredictable responses to any stimulus, and
\item being extremely impractical to reproduce.
\end{inlineenum}
A strong PUF is characterized by all of the previous statements regarding weak PUFs, with the addition of:
\begin{inlineenum}
\item having enough CRPs such that their number is exponential in the number of challenge bits, and
\item that the readout will reveal only the response $\bar{R}=f(\bar{C})$ and nothing else.
\end{inlineenum}
One metric chosen to test the difference between CRPs is the Euclidean distance, the $\ell^2$-norm, of the $N$ outputs. To measure the Euclidean distance, the analog response of each detector is divided into equal-sized bins, each of which is larger than the estimated noise of the \textit{system}. For our test we chose a bin size of $0.5\%$ of the total power detected across the $m$ outputs, scaled by a cross-normalization factor between CRPs.
To reduce and correct errors in the testing of our PUF, the size of the voltage bin used in computation was increased to the $0.5\%$ stated above, from an initial $0.1\%$. The increase in bin size decreases the chance that noise present on a particular channel straddles the boundary between two values, at the cost of reduced resolution in the $\ell^2$ distance. An alternative way to reduce error is to increase the collection time, and thus the amount of sample-averaging that forms a single CRP. The drawback of relying on increased collection times is the added latency, which may hamper any fast electronics requiring the output of the PUF, and the possibility of allowing an adversary additional time to perform side-channel attacks.
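The binning and normalization described above can be sketched as follows (Python; the \texttt{quantize} helper, the simple sum-normalization, and the default bin size are illustrative assumptions rather than the exact laboratory pipeline):

```python
import numpy as np

def quantize(response, bin_frac=0.005):
    """Quantize an analog multi-detector response into bins of bin_frac
    (0.5%) of the total detected power. Normalizing by the total power
    makes the comparison insensitive to overall coupling drift."""
    response = np.asarray(response, dtype=float)
    return np.floor(response / response.sum() / bin_frac).astype(int)

def l2_distance(resp_i, resp_j):
    """Euclidean (l2) distance between two quantized response vectors."""
    return float(np.linalg.norm(quantize(resp_i) - quantize(resp_j)))
```

Because only relative intensities enter the bins, scaling every channel by a common coupling loss leaves the quantized vector unchanged.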
The second set of metrics used to quantify the results and operation of the PUF are the inter- and intra-device Hamming distances ($HD_{inter},HD_{intra}$) along with the inter- and intra-device Euclidean norms ($\ell^2_{inter},\ell^2_{intra}$). To analyze the results we modified the standard Hamming distances between a response $\bar{R}_i$ from challenge $\bar{C}_i$, and response $\bar{R}_j$ from challenge $\bar{C}_j$, to reduce the effects of noise. The loose Hamming distance ($LHD$) can be calculated between two noisy response vectors, $\bar{R}_i$ and $\bar{R}_j$, for all of the corresponding laser-light intensity measurements on the output modes $k$ as:
\begin{equation}
\label{modham}
LHD=\sum_{k}{f(\bar{R}_i,\bar{R}_j)_k},\qquad
f(\bar{R}_i,\bar{R}_j)_k=\begin{cases}
0, & |R_{i,k}-R_{j,k}|< L\\
1, & |R_{i,k}-R_{j,k}|\geq L,
\end{cases}
\end{equation}
where $L\in \mathbf{N}$ defines the degree of looseness, with $L=1$ recovering the normal Hamming distance. For the case of small PUFs, $L=2$ is sufficient. The $LHD$ is used to compensate for the experimental noise and rounding errors discussed below. In addition to the $LHD$, the $\ell^2$-norm is used, following the standard definition:
\begin{equation} \label{l2norm}
\|\bar{x}\|_2=\sqrt{\sum_i{x_i^2}}.
\end{equation}
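A minimal implementation of the $LHD$ of Equation~\ref{modham} alongside the $\ell^2$-norm of Equation~\ref{l2norm} might look like the following sketch (Python; the function names are our own):

```python
import numpy as np

def loose_hamming(resp_i, resp_j, L=2):
    """Loose Hamming distance: the number of output modes k whose quantized
    intensities differ by at least L; L = 1 is the ordinary Hamming distance."""
    diff = np.abs(np.asarray(resp_i) - np.asarray(resp_j))
    return int(np.sum(diff >= L))

def l2_norm(resp_i, resp_j):
    """Standard Euclidean norm of the response difference."""
    return float(np.linalg.norm(np.asarray(resp_i) - np.asarray(resp_j)))

# Differences below the looseness threshold are ignored:
assert loose_hamming([10, 20, 30], [11, 20, 35]) == 1       # only mode 3 counts at L = 2
assert loose_hamming([10, 20, 30], [11, 20, 35], L=1) == 2  # strict Hamming distance
```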
The major difference between these two metrics, for non-binary data, is that the Hamming distance counts the number of measurements that differ, while the Euclidean norm quantifies how significant the differences are. We can expand upon the Hamming distances to determine the \textit{uniqueness} of our device, as described by R. Maes et al. \cite{maes2010physically}. Uniqueness is a calculated estimate of the amount of entropy available from a PUF and can be applied to a similar population of PUFs with an identical architecture. The uniqueness, $\mathcal{U}$, can be calculated for some challenge $\bar{C}_i$ as:
\begin{equation} \label{eq:pufunique}
\mathcal{U}|_{C_i} = \left(\dfrac{2}{n(n-1)} \sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^{n} \dfrac{LHD(\bar{R}_i, \bar{R}_j)}{m}\right) \times 100\%\,.
\end{equation}
Analogously to Equation~\ref{modham}, $L=1$ gives the standard definition of uniqueness. Here, $n$ is the number of PUFs in a population, and $m$ represents the number of bits in the response from the PUF. An optimal uniqueness value for binary PUFs would be $50\%$, as this implies uncorrelated responses. Since our PUF is continuous via electronic control, we need to modify our interpretation of Equation~\ref{eq:pufunique}. Given that $LHD=0$, i.e. a complete collision, does not contribute to $\mathcal{U}|_{C_i}$, and that a partial collision contributes only the fraction that did not collide, $\mathcal{U}|_{C_i}$ counts non-colliding responses. Regardless of the looseness, this is equivalent to a target uniqueness between devices of $100\%$.
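Equation~\ref{eq:pufunique} can be sketched in a few lines (Python; the helper function and the quantized response format are illustrative assumptions):

```python
from itertools import combinations

def _lhd(resp_i, resp_j, L):
    """Loose Hamming distance between two equal-length response vectors."""
    return sum(abs(a - b) >= L for a, b in zip(resp_i, resp_j))

def uniqueness(responses, L=2):
    """Uniqueness (in percent) of one challenge applied to a population of
    n PUFs, each returning an m-element quantized response vector."""
    n, m = len(responses), len(responses[0])
    total = sum(_lhd(ri, rj, L) / m for ri, rj in combinations(responses, 2))
    return 100.0 * 2.0 / (n * (n - 1)) * total

# Fully distinct responses hit the 100% target; identical responses give 0%.
assert uniqueness([[0] * 8, [5] * 8]) == 100.0
assert uniqueness([[7] * 8, [7] * 8]) == 0.0
```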
\section{Results} \label{sec:results}
For this work we created several sets of data. First, using the two small sections from Figure~\ref{diag1}, we created 100,000 random CRPs mirrored on each device, and an additional single CRP was repeated 5,000 times on each device. Second, we tested the largest PUF that would fit on the device, again with random CRPs and a repeated CRP. All of the CRPs were randomly selected in each variable from a uniform distribution over the $V_{2\pi}$ voltage range required for a complete switching response of a typical MZI, as detailed by the sinusoidal response of Equation~\ref{eq:mzi_xfrfn}, which is easily rewritten in the sine/cosine form of Equation~\ref{eq:mzi_xfrfn_sincos}.
\begin{align} \label{eq:mzi_xfrfn_sincos}
U_{MZI}(\theta, \phi) =&\dfrac{1}{2}
\begin{pmatrix}
e^{j\phi}(e^{j\theta}-1) & je^{j\phi}(e^{j\theta}+1)\vspace{3pt}\\
j(e^{j\theta}+1) & -(e^{j\theta}-1)
\end{pmatrix}
\nonumber\\= &je^{\frac{j\theta}{2}}
\begin{pmatrix}
e^{j\phi}\text{sin}\big(\frac{\theta}{2}\big) & e^{j\phi}\text{cos}\big(\frac{\theta}{2}\big)\vspace{3pt}\\
\text{cos}\big(\frac{\theta}{2}\big) & -\text{sin}\big(\frac{\theta}{2}\big)
\end{pmatrix}
\end{align}
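The equivalence of the product form in Equation~\ref{eq:mzi_xfrfn} and the factored form in Equation~\ref{eq:mzi_xfrfn_sincos} is easy to confirm numerically; a sketch (Python with NumPy, our own function names):

```python
import numpy as np

def u_product(theta, phi):
    """Equation (eq:mzi_xfrfn): phase shifters interleaved with 50:50 couplers."""
    bs = np.array([[1, 1j], [1j, 1]])
    return 0.5 * (np.diag([np.exp(1j * phi), 1.0]) @ bs
                  @ np.diag([np.exp(1j * theta), 1.0]) @ bs)

def u_sincos(theta, phi):
    """Equation (eq:mzi_xfrfn_sincos): the same matrix in sine/cosine form."""
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    return 1j * np.exp(1j * theta / 2) * np.array(
        [[np.exp(1j * phi) * s, np.exp(1j * phi) * c],
         [c, -s]])

# The two forms agree for arbitrary phase settings.
rng = np.random.default_rng(0)
for theta, phi in rng.uniform(0, 2 * np.pi, size=(5, 2)):
    assert np.allclose(u_product(theta, phi), u_sincos(theta, phi))
```

The $\sin(\theta/2)$, $\cos(\theta/2)$ entries make explicit the sinusoidal switching response over which the $V_{2\pi}$ challenge range is defined.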
\subsection{A Small PUF}\label{sec:smallresults}
To complete the analysis of the responses of the small PUFs depicted in Figure~\ref{diag1}, eight output intensities were measured via a polled array of photodiodes for each device, with the results presented below.
\begin{figure}[tbp!]
\centering
\subfloat[Device 1]{{\includegraphics[width=.25\textwidth]{figures/smith3a.eps}}}%
\subfloat[Device 2]{{\includegraphics[width=.25\textwidth]{figures/smith3b.eps}}}%
\caption{{\bf Distinguishability of $\mathbf{ LHD_{intra}}$}, for both devices. $LHD_{intra}$ between the same repeated challenge (orange) and between a typical challenge and random challenges (blue) on the same device. }
\label{intra}
\end{figure}
Figure~\ref{intra} shows the repeatability (orange) of the same challenge applied 5,000 times to each device. The two devices show a relatively low $LHD \leq 4$. The difference between Figure~\ref{intra}a and Figure~\ref{intra}b is accounted for by the difference in noise level, with a higher total noise on the second device\footnote{This difference is likely caused by photodetector variation due to differing production batches, resulting in approximately 1.5 times the noise quoted on the datasheet for the PIN photodiodes previously mentioned.}. The second dataset in both figures (blue) shows the difference between a typical CRP and the 100,000 randomly selected CRPs. The two devices show strong repeatability through the $LHD$ metric, remaining within a narrow range, and additionally show strong distinguishability: the difference between a single CRP and a differing CRP set is easily identified. Ideally, $LHD_{intra}=0$ should hold for a fixed challenge, and $LHD_{intra}=8$ for differing challenges. The $\ell^2$-norm is necessary to provide an additional measure of the significance of differences before validating the identity of a PUF.
For applications of this PUF in authentication roles, the key quantity is the inter-chip response to the same challenge. Figure~\ref{inter} depicts the $LHD_{inter}$ metric between 100,000 randomly chosen CRPs as they apply to both devices. As discussed below, the number of challenges is too large to test all possible settings. For 100,000 challenges mirrored between the two devices, Equation~\ref{eq:pufunique} gives a total uniqueness of $85.28\%$ at $L=2$. $LHD_{inter}$ is strongly centered around $LHD_{inter}=8$: approximately $70\%$ of the response vectors have no corresponding measurement values in common ($R_{i,k}\neq R_{j,k}, \forall k$), and less than $10\%$ have more than two out of eight corresponding measurements in common. We found no complete collisions\footnote{In this context we take a collision to be complete when all output values are identical for two different given challenges, i.e. $\bar{R}_i$ and $\bar{R}_j$ where $R_{i,k} = R_{j,k}, \forall k$, and partial when two challenges result in some similar outputs; partial collisions are not fully indistinguishable, since a distinguisher can exist by which a value can be differentiated from a random oracle \cite{bellare1993random}.}, which would be required for a false positive identification. This is highly encouraging, as our devices are as identical as physically possible due to simultaneous fabrication (identical material, design, processing, environment, etc.) and are effectively `clones' of one another. Any attempt to physically recreate a device to be utilized as a PUF will inherently possess additional random variance and produce a clearly differentiated $LHD_{inter}$ versus the original device.
\begin{figure}[tbp!]
\centerline{\includegraphics[width=.32\textwidth]{figures/smith4.eps}}
\caption{{\bf $\mathbf{LHD_{inter}}$}, Distances between 100,000 randomly chosen challenges applied to two near-identical devices, with the comparison between the two resulting responses, showing no full collisions.}
\label{inter}
\end{figure}
The commonality of the measurements is shown in Figure~\ref{l2dist}, where the blue data shows the $\ell^2_{inter}$ distance between the two devices for each challenge. The smallest $\ell^2_{inter}$ distance found was 11, with a mean of 58, a median of 55, and a standard deviation of 23. The orange data shows the $\ell^2_{intra}$ distance between a typical response and all other responses to the same challenge on a single device. The data shown are typical, with limited overlap between histograms. Some CRPs appear to have more noise than others, and multiple datasets have shown no overlap at all between histograms; the least distinguishable dataset is shown as an example in Figure~\ref{l2dist}. The $\ell^2_{intra}$ data shown here have a mean of 6, a median of 5, and a standard deviation of 4.
A major source of error during measurement, unfortunately, is the instability of the input- and output-coupling, leading to variability in the total intensity over time. The coupling errors have two significant components: a high-frequency component caused by undamped environmental noise, and a slow drift caused by sagging of the positioning stages holding the edge-coupled fiber arrays. The signal-to-noise ratio (SNR) within our system is approximately $16$ dB, while the laser and detector SNR is estimated at approximately $50$ dB, based on the dark count of the photodetectors. The losses on the integrated chip are stable and do not vary with time. The PIN photodiodes used to measure the output signals have a constant background noise and gain, such that a loss of overall intensity represents a reduction in SNR. For our comparison, the loss of intensity can be counteracted by normalizing the total intensity and the bin sizes of each CRP to a fixed value prior to calculating the $\ell^2$-norm. The result of the comparison is then the difference between relative, not total, intensities on each channel. The main source of the intensity error is the physically edge-coupled fiber arrays, mounted on manual positioning stages rather than permanently affixed. A packaged device with permanently affixed fiber arrays is a requirement for any practical system and will nearly eliminate this source of error.
\begin{figure}[tbp!]
\centerline{\includegraphics[width=.5\textwidth]{figures/smith5.pdf}}
\caption{{\bf Euclidean distances}, showing the distance between the response to identical voltage settings on both devices ($\ell^2_{inter}$, blue) and the response of one device to the same repeated challenge ($\ell^2_{intra}$, orange, typical). The inset shows the region of overlap.}
\label{l2dist}
\end{figure}
\subsubsection{Range and Number of CRPs}\label{subsec:recurrence_count}
The significance of any individual setting within a CRP is not uniform across the device. Of the ten input variables in each small device, four can each affect only two of the output measurements in the current design. Physically, these are the last column of 4 MZIs, the base of the pyramid in Figure~\ref{diag1}, as opposed to the input `tip' of the pyramid, which affects all output channels. A concern with this architecture is the possibility that nearest-neighbor challenges will produce semi-predictable responses, and that there are many fewer distinguishable responses than there are challenges.
Therefore, an important question for any proposed PUF is how many uniquely identifiable CRPs are available and how the number of CRPs scales with the size of the PUF. Weak PUFs can have as few as one CRP, although this severely limits the number of applications \cite{herder14}. Given the input amplitudes and detector noise on the output modes, we are able to clearly distinguish a voltage change of $\approx 7$~mV on a single MZI. As each MZI in the system has a $V_{2\pi}\approx 7$~V range, we define 10 bits of resolution in voltage as $V_{2\pi}/1024$ for each MZI. Na\"ively we could estimate that, with 10 bits of resolution on ten independent MZIs, as shown in Figure~\ref{diag1}, there are $(2^{10})^{10}=2^{100}\approx 1.27\times10^{30}$ possible challenges on each device. However, this ignores a large number of partial collisions in the output space of the PUF and also ignores the structure of the device. Taking the largest set of MZIs in a light-cone pattern, at most 66 MZIs exist in our architecture. Assuming 10-bit resolution, we could again theorize a maximal upper bound of challenges for a set of 66 MZIs of $(2^{10})^{66}=2^{660}\approx 4.78\times10^{198}$.
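These na\"ive counts are quick to check (Python; the figures bound only the raw voltage-setting space, not the number of distinguishable responses):

```python
def raw_challenges(n_mzis, bits=10):
    """Naive challenge count: `bits` of voltage resolution on each of
    `n_mzis` independently tunable MZIs."""
    return (2 ** bits) ** n_mzis

assert raw_challenges(10) == 2 ** 100   # about 1.27e30 for the 10-MZI PUF
assert raw_challenges(66) == 2 ** 660   # about 4.78e198 for the 66-MZI light cone
```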
However, this calculation misses the point: determining the number of challenges within our architecture that produce \textit{differentiable} responses is a more difficult, and more productive, task. For our architecture, with its light-cone diffusion, we can better approximate a maximal upper bound by following the Catalan numbers \cite{koshy2008catalan}, $C_n$, from combinatorics and counting the number of \textit{distinguishable} settings by analyzing the MZI structure as a full rooted binary tree with $n+1$ leaves. We can use a rooted binary tree because we pump the PUF from a single input, and we can calculate an upper bound given by:
\begin{equation}
\label{catalan}
C_n=\frac{(2n)!}{(n+1)!n!}.
\end{equation}
Following the Catalan numbers, we calculate that for 10-bit resolution there are approximately $5.77\times 10^{39}$ combinations for an array of 66 MZIs, though not necessarily in our specific configuration of MZIs. This maximal upper limit is unfortunately still too large, because it includes configurations of 66 MZIs that are not possible within our architecture. Pure Catalan numbering cannot apply because of the limited number of columns in our device; the Catalan numbering scheme therefore counts light-cone configurations that are impossible. Consider, for example, an arrangement in which the first 11 MZIs pass light straight down in a line, with an additional 55 MZIs hanging off the end of our architecture.
\begin{algorithm}[htbp]
\caption{Algorithm to calculate the Catalan recurrence for planar trees, where $n$ indexes the height (columns) and $h$ the number of nodes (MZIs).}
\label{alg:catalanrecurr}
\begin{algorithmic}
\Require $n \geq 0$ and $h \geq 0$
\State $\texttt{recurs} \leftarrow{}$ two-dimensional array of tree counts, indexed as $\texttt{recurs}[n, h]$
\State $\texttt{recurs}[0, 0] \leftarrow 1$; $\texttt{recurs}[0, h] \leftarrow 0$ for $h > 0$
\Procedure{Recurrence}{$n$, $h$}
\If {$h < n$ or $h > 2^n-1$}
\State $\texttt{recurs}[n, h] \leftarrow 0$ \Comment{no tree of height $n$ has $h$ nodes}
\Else
\State $\texttt{recurs}[n, h] \leftarrow \sum\limits_{i=0}^{h-1} \texttt{recurs}[n-1, h-1-i]$
\State \qquad $\times \Big(2\sum\limits_{j=0}^{n-2} \texttt{recurs}[j, i] + \texttt{recurs}[n-1, i]\Big)$
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
To overcome the configuration limit set by the standard Catalan numbers, we must utilize a lesser-known combinatorics counting method for binary trees as described by F. Qi et al. \cite{qi2017integral}, the method of counting by integral representation of the Catalan numbers. The method of integral counting can be directly applied to the planar tree variation of counting, similar to the work by P. Flajolet et al. in \cite{flajolet1982average}.
Consider a single tree, $t_i(n, h)$, from a forest composed of a set of trees, $\mathcal{F} = \{t_0, t_1, \ldots, t_k \}$; this tree can represent any binary tree, with or without a shared child, of height $h$ with $n$ nodes. Simply, $\sum_h\,t_i(n,h)=C_n$, the $n$-th Catalan number. By analysis, the Catalan recurrence for a planar tree gives the recurrence formula for $t_i(n, h)$\footnote{This formula requires the definitions $t_i(0,0)=1$ and $t_i(0,h)=0$ for all $h \neq 0$.}:
\begin{align}\label{eq:recurrence}
t_i(n+1, h+1) &= 2 \sum\limits_{m=h+1}^{n} t_i(m,h) \sum\limits_{j=0}^{h-1} t_i(n-m, j)\nonumber\\
&+ \quad \sum\limits_{m=h+1}^{n-h-1} t_i(m, h)\,t_i(n-m, h)\, .
\end{align}
The formula in Equation~\ref{eq:recurrence} uses the double summation to count the number of ways to build a binary tree on $n+1$ vertices whose left sub-tree has height $h_0$ and whose right sub-tree has height $h < h_0$. The factor of 2 adds all trees whose right sub-tree has height $h^\prime_0$ and whose left sub-tree has height $h^\prime < h^\prime_0$. The final term of Equation~\ref{eq:recurrence} counts the planar trees on $n+1$ vertices whose left and right sub-trees are both of height $h$.
To enumerate the number of distinguishably different CRPs, Algorithm~\ref{alg:catalanrecurr} was used to implement Equation~\ref{eq:recurrence} with parameters $0 \leq n \leq 11$ and $0 \leq h \leq 66$. After running Algorithm~\ref{alg:catalanrecurr}, the total number of distinct CRPs possible with 10-bit resolution is calculated to be $\approx 6.85\times 10^{35}$. The numbers of possible configurations for full trees of MZIs arranged into different numbers of columns, at 10-bit resolution, are shown in Table~\ref{tab:configs}. It should come as no surprise that as the number of columns increases, the number of possible configurations increases, up to the point of maximal dispersion for sub-trees. The dependence of the number of configurations on architectural changes will not be discussed further in this work, but may pose an interesting topic for future research.
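The counting of Equation~\ref{eq:recurrence} can be implemented as a short memoized sketch (Python; our \texttt{trees(height, nodes)} takes its arguments in height-then-node order, with height counted in MZI columns, and summing over all heights recovers the Catalan numbers as a sanity check):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def trees(height, nodes):
    """Number of planar binary trees occupying exactly `height` levels
    with exactly `nodes` nodes."""
    if height == 0:
        return 1 if nodes == 0 else 0
    if nodes < height or nodes > 2 ** height - 1:
        return 0  # no tree of this height can have that many nodes
    total = 0
    for i in range(nodes):  # i = nodes in the non-taller subtree
        # The other subtree is either strictly shorter (x2 for its mirror
        # image) or also reaches height - 1.
        shorter_or_equal = (2 * sum(trees(j, i) for j in range(height - 1))
                            + trees(height - 1, i))
        total += trees(height - 1, nodes - 1 - i) * shorter_or_equal
    return total

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# Summing over all heights must recover the Catalan numbers.
assert all(sum(trees(h, n) for h in range(n + 1)) == catalan(n)
           for n in range(1, 12))
```

Restricting the height sum to at most 11 columns is what removes the impossible light-cone configurations counted by the raw Catalan bound.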
\begin{table}[htbp!]
\caption{Variation in number of configurations versus number of columns for light-cone configurations within our device.}
\label{tab:configs}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cc}
\hline
\multicolumn{1}{|c|}{\textbf{Number of (Columns, MZIs)}} & \multicolumn{1}{c|}{\textbf{Possible 10-bit Configurations}}\TBstrut\\\hline\hline
\multicolumn{1}{|c|}{(4, 10)} & \multicolumn{1}{c|}{$1.19 \times 10^{5}$} \TBstrut\\ \hline
\multicolumn{1}{|c|}{(5, 15)} & \multicolumn{1}{c|}{$2.40 \times 10^{7}$} \TBstrut\\ \hline
\multicolumn{1}{|c|}{(6, 21)} & \multicolumn{1}{c|}{$2.76 \times 10^{10}$} \TBstrut\\ \hline
\multicolumn{1}{|c|}{(7, 28)} & \multicolumn{1}{c|}{$1.61 \times 10^{14}$} \TBstrut\\ \hline
\multicolumn{1}{|c|}{(8, 36)} & \multicolumn{1}{c|}{$4.40 \times 10^{18}$} \TBstrut\\ \hline
\multicolumn{1}{|c|}{(9, 45)} & \multicolumn{1}{c|}{$5.43 \times 10^{23}$} \TBstrut\\ \hline
\multicolumn{1}{|c|}{(10, 55)} & \multicolumn{1}{c|}{$2.94 \times 10^{29}$} \TBstrut \\ \hline
\multicolumn{1}{|c|}{(11, 66)} & \multicolumn{1}{c|}{$6.85 \times 10^{35}$} \TBstrut \\ \hline
\end{tabular}}
\end{table}
\subsection{A Large PUF} \label{subsec:largepuf}
The experiment above was repeated with the largest available PUFs on the current design of our chip: a `pyramid' consisting of 66 MZIs, with one input mode and 22 outputs. Here the `top' and `bottom' individual PUFs are so large that they significantly overlap; in fact, they share 45 of their 66 MZIs. Despite this overlap, we find excellent distinguishability between the two PUFs. Not surprisingly, the distinguishability is improved over the small PUFs. However, there is a limit to the improvement with size: as the total intensity is distributed over an increasingly large number of outputs, the SNR will limit the maximum size for any given laser power and set of losses.
Figure~\ref{l2distbig} shows the calculated $\ell^2$-norms, similar to Figure~\ref{l2dist}. Note that the lack of an inset is the result of there being no overlap between the histograms. Increasing the width of the interferometer array is a route to increasing the total number of CRPs: our current chip has three such 66-MZI subsets, giving a total number of CRPs on our device of $\approx 2.05\times 10^{36}$. Simply adding another row of just 11 MZIs appears to add $\approx 6.85\times10^{35}$ distinct CRPs.
One question we asked regards the looseness of the Hamming distance, $LHD$: what is the ideal degree of looseness for the system? Recall that by `loose' we refer to a Hamming distance that overlooks small errors due to the noise and instability in our system. Naturally, the ideal parameter will vary with the nature of the noise in the distributions being compared. We calculated the Hamming distance with degrees of looseness varying from the strict definition of the Hamming distance ($L=1$) to $L=10$, and found the mean of the resulting probability distribution; the results are shown in Figure~\ref{loose}. Note that the significant overlap of the first-standard-deviation error bars at $L=1$ means the two PUFs look all but identical under the strict Hamming distance definition. Under the $\ell^2$-norm and the $LHD$ there is no measured overlap in Figure~\ref{l2distbig} or Figure~\ref{loose}, respectively.
\begin{figure}[tbp]
\centerline{\includegraphics[width=.5\textwidth]{figures/smith6.eps}}
\caption{{\bf Euclidean distances of the large PUFs}, showing the distance between the response to identical voltage settings on the large devices ($\ell^2_{inter}$, blue) and the response of one device to the same repeated challenge ($\ell^2_{intra}$, orange, typical). The two peaks in the $\ell^2_{intra}$ are most likely the result of unstable coupling combined with our normalization method, creating two distinct noise levels during data collection. }
\label{l2distbig}
\end{figure}
The mean of the $LHD$ asymptotically approaches zero for both test cases; however, the repeated challenge (orange) approaches it significantly faster than the random comparisons (blue) in Figure~\ref{loose}. The optimal looseness parameter was found to be between 5 and 6, at which point the two means are separated by 5.5 standard deviations of $LHD_{intra}$, implying clear differentiability. This $LHD$-based approach is general, and as such this form of analysis can be used for any PUF with noisy output.
\subsection{Attributes of an Optical CMOS-Compatible PUF}
The results shown in Section~\ref{subsec:recurrence_count} highlight the optical interferometric PUF's ability to scale exponentially, thus meeting the first criterion for a strong PUF given by C. Herder et al. \cite{herder14}. An additional facet of the design is quick reconfigurability to assess additional CRPs. Since each of the MZIs is independently tunable, we can observe the response of a tuned device and change parameters for subsequent CRPs. The ability to tune our device at will extends its applications and use-cases beyond the static processing of information to the processing of streaming information: it is possible to process information streamed through the device, or static information where a set of CRPs is dynamically changed depending on the information received.
Unfortunately, there are several negative attributes to using a system of interferometers as a PUF. As mentioned in Section~\ref{sec:puf_metrics}, nearest-neighbor challenges may give predictable results on smaller PUFs. In addition, the interferometric system is highly structured and fixed, such that calculating a sufficient number of CRPs could lead to the device being fully characterized. Indeed, the QPP was designed with such characterization in mind, as its original use case was applications and experimental testing of quantum optical networks \cite{mower2015high}. We point out that the device was not originally intended to act as a PUF and we are merely exploiting its attributes. Since the QPP is reconfigurable, it is possible to make one device clone the function of another; for use as a PUF we suggest operating the device in an uncalibrated mode. Custom-designed interferometric circuits with more complicated interconnections, including variable feedback loops, would be more resistant to characterization and would thus act as stronger PUFs.
\begin{figure}[tbp]
\centerline{\includegraphics[width=.5\textwidth]{figures/smith7.eps}}
\caption{{\bf Mean of the LHD with looseness}, showing the difference between the mean of the $LHD$ for different and repeated challenges at various values of the looseness parameter $L$.}
\label{loose}
\end{figure}
A second negative attribute of the current prototype is that the operating temperature must be stable to within $\pm 1^\circ$C. Allowing the temperature to vary may, however, be a route to increasing the number of CRPs. Whether two devices respond differently to temperature changes after CRP normalization is an open research question. If temperature variation is not desired, packaging the device may provide an easy method of stabilization. Alternatively, multiple sets of CRPs can be created for an array of temperatures prior to use. The observed variation with temperature is a direct result of using common SOI and CMOS fabrication. Silicon is a thermo-optic material and was chosen for its ease of integration into existing CMOS processes. However, the design for an interferometric optical PUF can be trivially transferred to an electro-optically controlled material such as lithium niobate to create a more stable standalone device or application-specific integrated circuit. It should be noted, however, that lithium niobate still has a small thermo-optic effect. Conversely, each challenge $\bar{C}_i$ could double as a bias setting for the device. Varying other parameters, such as global heating of the device, input wavelengths, and the number of pumped channels, would allow each challenge to be utilized as an individual, separate PUF. Here we have taken these parameters to be constants for simplicity, but allowing them to vary opens an enormous space of possibilities and significantly increases the number of CRPs theoretically available.
Finally, with the software drivers used in these experiments it takes approximately 3 seconds to completely set a challenge and measure a response of 1,000 physical measurements on the QPP. This has since been significantly improved with new driver optimizations. The fundamental limit on the speed of the challenge and response is set by the maximum speed of the thermal switching, estimated to be in the $\approx 100\,\text{kHz}$ range \cite{harris14}. This may appear slow, but we stress that the experimental setup was in no way designed to optimize the speed of measurements. The system currently runs on several standard Arduino-driven Teensy boards for ease of development. Hardware integration with an FPGA, and implementation in an electro-optic medium, would result in orders-of-magnitude speed-ups to gigahertz rates. If a design were optimized for use as a PUF with the proper controls mentioned above, we postulate that such reconfigurable optical PUFs would greatly enhance the security of future optical communications.
\section{Conclusion}
The PIC device shown in this work meets the criteria for a weak PUF given by \cite{herder14} and appears to also satisfy the definition of a strong PUF. The rapidly expanding research on large scale interferometric PICs, and the wide fields in which they are suggested for use, implies that such devices may become ubiquitous in the near future. This work shows that such large integrated devices carry with them useful amounts of unique randomness that can be used for tasks such as device identification, authentication, and other cryptographic tasks.
\section*{Acknowledgments}
The authors would like to acknowledge the group of D. Englund at MIT for assistance in the design and fabrication of the experimental optical chip. A. M. Smith would like to thank N. Stolten and F. H. Long of ARDEC for initial discussions prompting this line of study. H. S. Jacinto would like to thank AFRL for doctoral fellowship support. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views or endorsement of the U.S. Air Force Research Laboratory.
\bibliographystyle{IEEEtran}
\section*{Acknowledgment}
This project has received funding from the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement no. 654168.
The authors would like to acknowledge
Deutsches Elektronen-Synchrotron DESY and Helmholtz Gesellschaft Funding for all the support.
Support at SLAC was funded by the US-Japan collaboration
and Research at the University of Oregon was supported by the U.S. Department of Energy Office of Science, Office of High Energy Physics under Award Number DE-SC0017996.
The measurements leading to these results have been performed at the test beam facility at DESY Hamburg (Germany),
a member of the Helmholtz Association.
The authors would like to thank the technical team at the {\mbox{DESY II}}\xspace accelerator and the {\DESYII Test Beam Facility}\xspace for the smooth operation of the test beam
and the support during the test beam campaign.
\section{Introduction}\label{sec:intro}
Beam telescopes have become the workhorse for many campaigns at test beam facilities.
Typically, they consist of two stations or arms, each made up of a set of tracking sensors with a configurable spacing in between, where the Device-under-test (DUT) is usually placed.
They provide very precise reference tracks
and may also provide a common trigger, DAQ integration and even common reconstruction packages.
For over a decade, \textsc{EUDET}\xspace-style pixel beam telescopes~\cite{jansen2016} have been a success-story with a wide-ranging user community
and strongly supported by the EU-funded \textsc{EUDET}\xspace~\cite{eudet}, \textsc{AIDA}\xspace~\cite{aida} and \textsc{AIDA-2020}\xspace~\cite{aida2020} projects.
\textsc{EUDET}\xspace-style telescopes have six planes of {Mimosa26}\xspace~\cite{baudot2009,huguo2010} monolithic active pixel sensors providing high-precision beam tracking of around \SI{2}{\micro\metre} in a small active area of 2$\times$\SI{1}{\square\centi\metre}.
The telescope comes in a full package including a common trigger logic unit, the \textsc{EUDET}\xspace TLU~\cite{tlu}, a DAQ framework called \textsc{EUDAQ}\xspace~\cite{Ahlburg_2020} allowing an easy integration of the user DAQ, and a complete reconstruction software suite called the \textsc{EUTelescope}\xspace package~\cite{Bisanz:2020rfv}.
Seven copies of \textsc{EUDET}\xspace-style pixel beam telescopes have been made and are installed at CERN (PS and SPS), at the {\DESYII Test Beam Facility}\xspace~\cite{desytb2018}, ELSA in Bonn, and at ESTB~\cite{Fieguth:2011be} at SLAC.
A large number of different user setups have been integrated into \textsc{EUDAQ}\xspace and benefited from the \textsc{EUDET}\xspace TLU~\cite{tlu} and used \textsc{EUTelescope}\xspace for the data reconstruction.
The user community at the various test beam facilities has been very satisfied with the \textsc{EUDET}\xspace-style pixel beam telescopes
but there were two requests which could not be met:
having a telescope for large DUTs inside the {PCMAG}\xspace \SI{1}{T} solenoid~\cite{pcmag:yamamoto}
located at the {\DESYII Test Beam Facility}\xspace and a large-area tracking coverage for e.g.\ calorimeter setups.
The former request is hampered by the fact that the \textsc{EUDET}\xspace-style telescopes have each sensor plane housed in a \SI{1.5}{\centi\metre} thick aluminium case with a dedicated cooling pipe and an auxiliary board,
while both the former and the latter are limited by the small active area of the \textsc{EUDET}\xspace-style telescopes.
The design, construction and installation of a large active area, compact telescope at the {\DESYII Test Beam Facility}\xspace became one of the deliverables~\cite{lycoris-D15.2} of \textsc{AIDA-2020}\xspace.
This telescope, named {\textsc{Lycoris}}\xspace\footnote{Large Area YX COverage Readout Integrated Strip Telescope}, was from the beginning based on six planes
of silicon-strip sensors oriented in a small-angle-stereo configuration, to meet the desired coverage of at least 10$\times$\SI{10}{\square\centi\metre}.
Within \textsc{AIDA-2020}\xspace, there were strong activities on developing a successor of \textsc{EUDET}\xspace-TLU called the \textsc{AIDA}\xspace-TLU~\cite{Baesso_2019} and a successor of \textsc{EUDAQ}\xspace called \textsc{EUDAQ2}\xspace~\cite{Liu_2019}.
\textsc{AIDA}\xspace-TLU distributes a common clock to all DUTs and timestamps the triggers, thus it enables time-stamp based event reconstruction and removes the readout bottlenecks.
\textsc{EUDAQ2}\xspace is a significant upgrade of \textsc{EUDAQ}\xspace, with many extended protocols and synchronisation modes.
Both upgrades altogether enable new data taking modes beyond the classical ``trigger-busy'' approach of the \textsc{EUDET}\xspace-TLU, where the system trigger rate has been limited by the device with the slowest readout, see details in~\cite{Baesso_2019}.
All \textsc{EUDET}\xspace-style pixel telescopes have already been upgraded to the \textsc{AIDA}\xspace-TLU with \textsc{EUDAQ2}\xspace, or are in the process of doing so.
Hence it was decided for {\textsc{Lycoris}}\xspace to use \textsc{EUDAQ2}\xspace and the \textsc{AIDA}\xspace-TLU right from the beginning.
This paper is organized as follows: first the design requirements and an overview of the telescope design are given, followed
by a detailed description of the sensor plane, its components and the overall system (Section~\ref{sec:hardware}). Then
the DAQ and on-line software are described (Section~\ref{sec:software}), followed by the performance of the {\textsc{Lycoris}}\xspace telescope (Section~\ref{sec:performance}).
Finally, there is a description of how to integrate a DUT (Section~\ref{sec:dut}), followed by an overall
summary (Section~\ref{sec:conclusion}).
\section{Summary and Outlook}\label{sec:conclusion}
The {\textsc{Lycoris}}\xspace telescope has been designed and built to meet the user requirements for a large-area coverage silicon-strip telescope
for the {\DESYII Test Beam Facility}\xspace with a resolution of better than \SI{10}{\micro\meter}.
The key building blocks are the {\textsc{Lycoris}}\xspace modules using a so-called hybrid-less design with two {KPiX}\xspace readout ASICs bump-bonded directly
to the silicon sensor, which has a strip pitch of \SI{25}{\micro\meter} with every second strip being read out.
The entire telescope consists of six planes located in two cassettes with three modules each.
The performance of {\textsc{Lycoris}}\xspace has been investigated in several test beam campaigns at the {\DESYII Test Beam Facility}\xspace and the achieved resolution on the y-axis (orthogonal to the strip)
per plane is on average \SI{7.07}{\micro\meter}.
The results are consistent with the expected hit resolution of \SI{7.2}{\micro\meter} estimated using $d/\sqrt{12}$, with $d$ corresponding to \SI{25}{\micro\metre}.
This demonstrates the viability of the approach selected for {\textsc{Lycoris}}\xspace of only reading out every second strip and still maintaining the expected hit resolution.
The resolutions on the x-axis (parallel to the strips) for the up- and downstream cassettes are \SI{0.22}{\milli\meter} and \SI{0.24}{\milli\meter}, respectively,
exceeding the \SI{1}{\milli\metre} requirement by a factor of around four. The achieved resolutions are summarised in Table~\ref{tab:achieved}.
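The expected hit resolution quoted above is just the standard deviation of a uniform distribution over one pitch, $d/\sqrt{12}$, which can be checked in a couple of lines (in Python, for illustration):

```python
import math

d_um = 25.0                       # strip pitch d quoted in the text [um]
expected = d_um / math.sqrt(12)   # binary-readout resolution d/sqrt(12)
print(f"{expected:.2f} um")       # prints "7.22 um", consistent with the quoted 7.2 um
```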
\begin{table}[htbp]
\begin{center}
\begin{tabular}{l l r r }
& & Required & Achieved \\ \toprule
Resolution in bending plane & $\sigma_y$& $<$ \SI{10}{\micro\metre} & $<$ \SI{7.2}{\micro\metre} \\
Resolution orthogonal to the bending plane & $\sigma_x$&$<$ \SI{1}{\milli\metre} &$<$ \SI{0.24}{\milli\metre} \\
Area coverage & $A_{xy}$ & 10$\times$ \SI{10}{\centi\metre\squared} & 10$\times$\SI{10}{\centi\metre\squared} \\
Thickness of single station & d & \SI{3.3}{\centi\metre} & $<$ \SI{3.5}{\centi\metre} \\ \bottomrule
\end{tabular}
\caption{\label{tab:achieved} The key requirements for {\textsc{Lycoris}}\xspace and the achieved values; the covered area can be extended to 10$\times$\SI{20}{\centi\metre\squared} by installing three more sensors inside each station.}
\end{center}
\end{table}
{\textsc{Lycoris}}\xspace will now be rolled out to the users at the {\DESYII Test Beam Facility}\xspace and further wishes for improvements and enhancements are to be expected.
One obvious improvement is a possible second-generation cassette design that uses four sensor layers, two axial layers and one stereo (back-to-back) double layer in the center, which would yield three unbiased measurements on the y-axis.
\section{DAQ, Monitoring and Reconstruction Software}\label{sec:software}
In this section the DAQ and Monitoring software packages required to operate {\textsc{Lycoris}}\xspace are described,
together with the reconstruction software packages used in this paper.
These packages are all part of the complete software suite that is provided to the users, so they can easily analyze
their test beam data taken with {\textsc{Lycoris}}\xspace. The overall data flow from the {\textsc{Lycoris}}\xspace hardware to disk and the offline reconstruction is shown in
Figure~\ref{fig:daqsw:general}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.85\textwidth]{figures/software/daq-flow-chart.pdf}
\caption{Flow chart highlighting the data flow in the DAQ back-end system for {\textsc{Lycoris}}\xspace. The components in the red box have been discussed in Section~\ref{sec:hardware}, while
the components inside the blue and green boxes are discussed in Section~\ref{sec:software}.}
\label{fig:daqsw:general}
\end{center}
\end{figure}
The back-end DAQ software runs on a Control PC, polling all the connected devices.
It is responsible for collecting all the data and writing it to disk for the offline track reconstruction.
Additionally, it provides online monitoring to provide a quick feedback on the data-quality during data-taking.
The DAQ software for {\textsc{Lycoris}}\xspace consists of two separate layers: the layer communicating directly with the {\textsc{Lycoris}}\xspace hardware uses the \textsc{Rogue}\xspace DAQ framework, while
the interfacing of {\textsc{Lycoris}}\xspace with the \textsc{AIDA}\xspace TLU and the user DAQ is handled by \textsc{EUDAQ2}\xspace.
\subsection{\textsc{Rogue}\xspace DAQ}
The software employed to communicate with the DAQ board and {KPiX}\xspace chips is based on the \textsc{Rogue}\xspace platform~\cite{roguewebsite}. \textsc{Rogue}\xspace was created by SLAC to facilitate development of
software interfaces to hardware in DAQ systems.
It has been designed with easy-to-understand mechanisms to connect independent management and data processing modules together through a set of
well-defined interfaces. It has been developed using both \textsc{C++}\xspace and \textsc{Python}\xspace, with the interfacing between the \textsc{C++}\xspace and \textsc{Python}\xspace parts done using \textsc{Boost.Python}\xspace.
\textsc{Rogue}\xspace properly acquires and releases the global interpreter lock, enabling true multi-threading, and provides predefined handlers to access the data buffer pipelines and
hardware interfaces, which \textsc{EUDAQ2}\xspace uses to establish full control of the data taking with {KPiX}\xspace and {\textsc{Lycoris}}\xspace.
\subsection{\textsc{EUDAQ2}\xspace integration}
The integration of \textsc{Rogue}\xspace into \textsc{EUDAQ2}\xspace has been done using the \textsc{Python}\xspace interfaces of both packages. The {\textsc{Lycoris}}\xspace strip telescope has used two distinct versions of the {KPiX}\xspace DAQ system during its development history.
The initial version was only able to synchronize {\textsc{Lycoris}}\xspace to an external device by counting the incoming triggers,
while the final version can additionally synchronize the incoming triggers by timestamping them with a centrally provided clock.
{\textsc{Lycoris}}\xspace is using the new {KPiX}\xspace DAQ system which enables full synchronization with other devices such as the \textsc{EUDET}\xspace telescope via the \textsc{AIDA}\xspace TLU.
The {KPiX}\xspace \texttt{Producer}\xspace, which is a part of the \textsc{EUDAQ2}\xspace package, is a \textsc{Python}\xspace-based \texttt{Producer}\xspace to operate {\textsc{Lycoris}}\xspace using the imported {KPiX}\xspace DAQ libraries.
In the \textsc{EUDAQ2}\xspace framework, the \texttt{Producer}\xspace acts as the interface between the user DAQ and the \textsc{EUDAQ2}\xspace core components including the \texttt{Run Control}\xspace and the \texttt{Data Collector}\xspace.
The \textsc{EUDAQ2}\xspace \texttt{Run Control}\xspace is used to issue all commands to the connected \texttt{Producers}\xspace and the underlying DAQ systems including {\textsc{Lycoris}}\xspace.
The \textsc{EUDAQ2}\xspace \texttt{Data Collector}\xspace uses the {KPiX}\xspace binary data format libraries to store the incoming {\textsc{Lycoris}}\xspace data-stream and the \texttt{DataConverter}\xspace
module converts {KPiX}\xspace binary format to the common format used by \textsc{EUDAQ2}\xspace. This is necessary to make the online monitoring in \textsc{EUDAQ2}\xspace available.
Furthermore, the {KPiX}\xspace user module contains a customized GUI, which provides more detailed information like the data rate during a run,
and an analysis executable called \texttt{lycorisCliConverter}, which produces a set of basic plots, e.g.\ the ADC distribution of each channel,
and stores them into a \textsc{ROOT}\xspace
file. Figure~\ref{fig:daqsw:eudaq2flow} shows the interplay of the various DAQ software components.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.85\textwidth]{figures/software/eudaq2-flowchart_rev.pdf}
\caption{The interplay between the various \textsc{EUDAQ2}\xspace producers and the \textsc{EUDAQ2}\xspace \texttt{Data Collector}\xspace with the {\textsc{Lycoris}}\xspace DAQ system.}
\label{fig:daqsw:eudaq2flow}
\end{center}
\end{figure}
\subsection{Online Monitoring \& Slow Controls}
\subsubsection{Online Monitoring}
The online monitor for data taking with the {\textsc{Lycoris}}\xspace telescope has been developed using the \textsc{EUDAQ2}\xspace online monitor module \texttt{StdEventMonitor}\xspace.
It provides a list of 2D histograms showing spatial correlations of hit strips from two different sensor planes,
which can be used to control the data quality, beam alignment and the beam spot position.
Figure~\ref{fig:daqsw:onlinemon} is a screenshot of the online monitor during {\textsc{Lycoris}}\xspace data taking, where the shown example histogram shows a clear spatial correlation between the first and the third {\textsc{Lycoris}}\xspace telescope planes.
Online monitor data streams are converted by the \texttt{DataConverter}\xspace, including data format decoding, ADC to charge conversion and baseline noise subtraction to remove pedestals and the common-mode noise, see details in Section~\ref{sec:software:reco}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{figures/software/onlinemon-kpix1}
\caption{Online monitor showing spatial correlations of two {\textsc{Lycoris}}\xspace sensor planes
from one example data taken outside the {PCMAG}\xspace with the \textsc{EUDAQ2}\xspace and the \textsc{AIDA}\xspace TLU at the {\DESYII Test Beam Facility}\xspace,
where the x and y axes are in unit of the strip readout pitch (\SI{50}{\micro\metre}).
This shows a good alignment between the two sensor planes and a beam spot of \SI{20}{\milli\metre}$\times$\SI{20}{\milli\metre} (the same size as the collimator used)
located in the center of the sensor.
}
\label{fig:daqsw:onlinemon}
\end{center}
\end{figure}
\subsubsection{Slow Controls}
The {\textsc{Lycoris}}\xspace slow control system monitors the humidity and temperature inside the cassette units and the bias voltages for each sensor.
The humidity and temperature are monitored by an {I$^2$C}\xspace sensor located on the master cassette board, which is polled by the {KPiX}\xspace DAQ.
The HV modules of the WIENER MPOD system are controlled and monitored remotely through an SNMP~v2c (Simple Network Management Protocol) compliant protocol.
It is integrated into \textsc{EUDAQ2}\xspace with a \texttt{Producer}\xspace to poll the measured current and voltage, and with one dedicated \texttt{Data Collector}\xspace the collected data can
be stored directly in the format used by \textsc{EUDAQ2}\xspace.
Figure~\ref{fig:daqsw:sc} shows the bias current in \SI{}{\nano\ampere} of a sensor measured over two hours during data taking, providing fast feedback on the status of each sensor.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/software/onlinemonitor_hv.pdf}
\caption{The bias current in \SI{}{\nano\ampere} vs.\ time in minutes over two hours for sensor plane 0
from example data taken with the \textsc{EUDAQ2}\xspace and the \textsc{AIDA}\xspace TLU at the {\DESYII Test Beam Facility}\xspace.
}
\label{fig:daqsw:sc}
\end{center}
\end{figure}
\subsection{Event Reconstruction}\label{sec:software:reco}
Two software packages are used for the track reconstruction; the first package loops over the raw data provided by {\textsc{Lycoris}}\xspace and produces hit clusters for the tracking algorithms,
while the second one does the overall alignment, the track finding and fitting and produces both a fully aligned geometry and the fitted tracks.
Figure~\ref{fig:daqsw:reco} shows a simplified block diagram of data processing for track reconstruction with {\textsc{Lycoris}}\xspace.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/software/reco_flow_chart.pdf}
\caption{Flow chart of the tracking reconstruction algorithms.}
\label{fig:daqsw:reco}
\end{center}
\end{figure}
\subsubsection{Hit Clustering}\label{sec:software:hitclustering}
The Hit Clustering analysis package reads in the raw data, applies the necessary corrections and then groups strips into hit clusters.
The package is written in \textsc{C++}\xspace and built with \textsc{CMake}\xspace~\cite{cmake}. There are two main steps in this package,
the offline event builder and the PacMan clustering algorithm.
\paragraph{The Offline Event Builder}
The first step converts the raw binary data into events suitable for reconstruction.
The definition of an event depends on whether {\textsc{Lycoris}}\xspace is operated in external trigger mode or internal trigger mode (see Section~\ref{sec:hardware:kpix:trigger}).
In external trigger mode, one event is a snapshot of the full telescope, i.e.\ one event consists of data from the same bucket
of all 1024 channels of all {KPiX}\xspace chips in the same acquisition cycle.
For the internal trigger mode, every readout channel is triggered individually but one event can still be defined
by matching an internal trigger with an external trigger using timestamps.
Therefore, one event in this case may contain data from different {KPiX}\xspace readout channels at different memory buckets of all
the {KPiX}\xspace chips in the same acquisition cycle. As already explained in Section~\ref{sec:hardware:kpix:trigger}, {\textsc{Lycoris}}\xspace
is preferentially operated in the external trigger mode, and all the following steps target this trigger mode.
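The timestamp-based event definition for the internal trigger mode described above can be sketched as follows; the function name, data layout and matching window are our own illustrative assumptions, not taken from the {KPiX}\xspace DAQ code:

```python
def build_events(external_ts, internal_ts, window):
    """Pair each external trigger timestamp with the internally
    triggered channel timestamps that fall within +/- window of it,
    so that one event may contain data from different channels and
    memory buckets of the same acquisition cycle."""
    return [(ext, [t for t in internal_ts if abs(t - ext) <= window])
            for ext in external_ts]
```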
The next step is the baseline noise subtraction, which includes subtraction of the pedestals and of the common-mode noise.
The pedestal subtraction is performed separately for each memory bucket of each {KPiX}\xspace readout channel;
the pedestal is defined as the median of the charge response over a series of acquisition cycles in order to suppress the impact of outliers.
The common-mode noise subtraction is applied on an event-by-event basis for each individual {KPiX}\xspace to ensure
that data recorded in different memory buckets during different acquisition cycles are comparable.
It is calculated as the median of the pedestal-subtracted charge responses of one {KPiX}\xspace chip for one event.
Therefore, the calibration constants (see Section \ref{sec:hardware:kpix:calib}) have to be applied to convert the raw ADC
value to a charge in \si{\femto\coulomb} beforehand so that charge response is comparable from one channel to another when
calculating the common-mode noise.
At the end of the baseline noise subtraction, all {KPiX}\xspace readout channels that are not connected to a strip are removed,
so every remaining {KPiX}\xspace channel represents one readout strip on the sensor and is thus identified with that strip.
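The two-step baseline removal described above can be sketched compactly; this is an illustrative sketch with our own function name and array layout, not the actual {KPiX}\xspace analysis code:

```python
import numpy as np

def baseline_subtract(charges):
    """charges: (n_cycles, n_channels) array of calibrated charges [fC]
    for one KPiX chip and one memory bucket.

    Per-channel pedestal = median over acquisition cycles (robust to
    outliers), then per-event common mode = median over the channels
    of the chip, as described in the text."""
    pedestals = np.median(charges, axis=0)                   # per channel
    ped_sub = charges - pedestals
    common_mode = np.median(ped_sub, axis=1, keepdims=True)  # per event
    return ped_sub - common_mode
```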
Before grouping strips into clusters, we apply the following quality criteria to remove strips with faulty ADC readings:
\begin{itemize}
\item The calibration slope has to be non-zero;
\item The calibration needs to show a good linearity by requiring its Pearson correlation coefficient to be greater than $0.85$;
\item The Median Absolute Deviation (MAD) of the measured charge of each bucket of each strip over all cycles must be non-zero.
\end{itemize}
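The three quality cuts listed above can be sketched as a single predicate; the function name, argument layout and use of \texttt{numpy} are our own assumptions, with the thresholds as quoted in the text:

```python
import numpy as np

def strip_is_good(cal_injected, cal_measured, bucket_charges, r_min=0.85):
    """Quality cuts on one strip/bucket: non-zero calibration slope,
    calibration linearity via a Pearson correlation coefficient above
    r_min, and a non-zero MAD of the measured charge over all cycles."""
    slope = np.polyfit(cal_injected, cal_measured, 1)[0]
    pearson_r = np.corrcoef(cal_injected, cal_measured)[0, 1]
    mad = np.median(np.abs(bucket_charges - np.median(bucket_charges)))
    return bool(slope != 0.0 and pearson_r > r_min and mad != 0.0)
```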
The intrinsic noise level $N_i$ of one strip at one memory bucket is defined as the width of its noise-corrected charge response when no signal is present.
For one charge measurement $q_i$ of one strip at one memory bucket, its significance is calculated as $q_i/N_i$.
The results of the complete noise-removal procedure and the noise performance will be discussed in Section~\ref{sec:performance}.
\paragraph{PacMan Clustering Algorithm}
The next step in the chain is the clustering algorithm which follows right after the baseline noise removal.
A relatively loose selection is applied to all strips before the clustering by requiring $S/N > 2$ to ensure a sufficient cluster purity while retaining
suitable statistics at this stage.
The clustering algorithm then searches iteratively for the most significant strip as the \textit{cluster seed}
and groups its neighbor strips to form a \textit{cluster}.
For each cluster, the grouping process stops if the next-neighbor strip has a higher significance, in order to keep clusters isolated.
The attributes assigned to each generated cluster are the charge $Q$,
the significance (signal-to-noise ratio) and the cluster size (strip multiplicity).
The cluster charge $Q$ is defined as the sum of the charges of all associated strips.
Its noise $N$ is then calculated by adding the noise of all its associated strips in quadrature.
As a result, its significance can be calculated as
\begin{equation}
\frac{Q}{N} = \frac{\sum^{n}_{i}q_i}{\sqrt{\sum^{n}_{i}N_i^2}},
\end{equation}
where $n$ is the number of strips in the cluster, and $q_i$ and $N_i$ represent the signal charge and noise level for each associated strip.
The position of each cluster perpendicular to the strip direction (the y-axis) is obtained by charge weighting
\begin{equation}
y_{cluster} = \frac{\sum^{n}_{i}q_i\cdot y_i}{\sum^{n}_{i}q_i}.
\end{equation}
The results from the clustering procedure are called \textit{hits} throughout the remainder of this section.
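The cluster quantities defined above translate directly into code; the function name and tuple layout below are our own illustrative choices:

```python
import math

def cluster_properties(strips):
    """strips: list of (y_i, q_i, N_i) tuples for one cluster.
    Returns the cluster charge Q, significance Q/N and the
    charge-weighted position, as defined in the text."""
    q = sum(qi for _, qi, _ in strips)                 # Q = sum of q_i
    n = math.sqrt(sum(ni**2 for _, _, ni in strips))   # N added in quadrature
    y = sum(yi * qi for yi, qi, _ in strips) / q       # charge weighting
    return q, q / n, y
```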
\subsubsection{Alignment and Tracking}
The alignment and tracking algorithms are part of a highly modular \textsc{Python}\xspace package.
The three major algorithms of this package are the alignment code using Millepede~II~\cite{millepede2:2006},
the track-finding algorithm developed specifically for {\textsc{Lycoris}}\xspace
and the track (re)fitting using the General Broken Lines (GBL) package~\cite{claus2012}.
For the reconstruction, the following global coordinate system is used throughout the paper:
The z-axis is along the beam axis, orthogonal to the {\textsc{Lycoris}}\xspace sensor planes.
The x-axis is defined to be parallel to the silicon-strips on the axial sensor and the y-axis is perpendicular
to the silicon strips (See Figure~\ref{fig:daqsw:globalcoordinates}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.65\textwidth]{figures/software/Global_coordinates.pdf}
\caption{The global coordinate system used by the {\textsc{Lycoris}}\xspace Tracking and Alignment tool. The z-axis is along the beam axis,
orthogonal to the {\textsc{Lycoris}}\xspace sensor planes. The x-axis is defined to be parallel to the silicon-strips on the axial sensor
and the y-axis is perpendicular. The origin is located in the center of the telescope as indicated by the black dot.}
\label{fig:daqsw:globalcoordinates}
\end{center}
\end{figure}
\paragraph{Pre-selections}
There is a series of pre-selection cuts that are applied to the raw hits before the alignment and tracking algorithm.
First, an event filter removes abnormally noisy events, defined as those with at least 50 hits, before any track-finding algorithm is deployed.
Subsequently, hits from the three layers in each cassette need to be correlated
within a \SI{1}{\milli\meter} window based on an initial geometry description.
Hits that cannot be correlated to others, or hits with more than $5$ strips, are removed.
\paragraph{Alignment}
The alignment uses the Millepede~II algorithm which has already been used successfully by H1~\cite{Blobel:2002ax,Blobel:2006zz,Kleinwort:2006zz},
CMS~\cite{Flucke:2008zzb,Chatrchyan:2009sr}, BELLE-II~\cite{Bilka:2019tnt} and others.
Millepede~II performs a relative position alignment, meaning a 3-D reference frame is always needed as input; this is usually provided by fixing the first and the last plane of the whole tracking system.
It exploits the fact that least-squares fit problems with a very large number of parameters can be subdivided into two classes of parameters, global and local.
Millepede~II then solves this system to determine the alignment constants, where the solution for the global parameters is independent of the number of local parameters.
\paragraph{Track Finding}
Two track finding algorithms are designed to reconstruct stand-alone tracks with {\textsc{Lycoris}}\xspace hits.
The two algorithms can be used in any order and hits associated to one track candidate are not used for the subsequent algorithm to avoid double counting.
The track model is a straight line. In the case that a magnetic field is present, a coordinate transformation is applied to tracking parameters to account for the impact of the B-field based on the beam energy.
\begin{description}
\item[\textit{Striplet}\xspace finder:]
It constructs \textit{Striplets}\xspace in each cassette then matches them to form track candidates.
The seed of \textit{Striplet}\xspace is a doublet constructed by hits from the two stereo (\ang{2} stereo angle) layers in one cassette.
By intersecting the two hits a 2D position is obtained in the x-y plane that is later interpolated to the axial layer for a third hit search.
The third hit will be added to form the so-called \textit{Striplet}\xspace when its distance to the doublet interpolation is smaller than \SI{100}{\micro\metre}.
The four tracking parameters~\footnote{The offsets in X and Y, and the slopes in the x-z and the y-z planes.}
of one \textit{Striplet}\xspace are determined based on assumptions on slopes in both x-z and y-z planes for two reasons.
First the three 1D measurements cannot determine four tracking parameters; second the doublet intersection needs both slopes
for corrections due to the distance in z-axis between the two stereo layers.
Assumptions on slopes are given by the beam direction with a correction if a magnetic field is present.
Once all the \textit{Striplets}\xspace are found in both cassettes, \textit{Striplet}\xspace matching between cassettes starts.
First their slopes in the y-z plane are matched by requiring the slope difference to be smaller than \SI{0.01}{\radian}.
Then they are extrapolated to the center between two cassettes for position matching by requiring the deviation to be
smaller than \SI{10}{\milli\meter} in x and \SI{1}{\milli\meter} in y.
To increase the purity, the matched \textit{Striplets}\xspace only become a valid track candidate if they are the unique match to each other.
\item[Strip road search:]
It first forms track roads using four hits from the seed layers, then adds one or two hits from the test layers
to finalize the search. A track road needs four hits to provide the full spatial description of a track.
The default choice is to use the four stereo layers as the seed layers while the two axial layers are used as test layers to maximize the x-z plane description.
An event will be rejected if there are fewer than five layers with hits or any layer possesses more than eight hits.
Once all track roads are found, they are interpolated to the test layers and only the unique hits within a distance of \SI{200}{\micro\metre} are taken.
The distance from the test hits to the interpolation is then used to calculate a $\chi^2$ to estimate the goodness of this track candidate.
At the end, only the best track candidate is selected based on the number of associated hits and the goodness of fit $\chi^2/ndf$.
\end{description}
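The strip road search described above can be illustrated with a minimal, one-dimensional sketch. The container layout (`seed_hits`, `test_layers`), the unweighted $\chi^2$, and the single-coordinate straight-line model are simplifying assumptions for illustration; the real implementation determines four tracking parameters and ranks candidates by hit count and $\chi^2/ndf$:

```python
import numpy as np

# Hypothetical association window: 200 um, expressed in mm.
WINDOW_MM = 0.2

def road_search(seed_hits, test_layers, window=WINDOW_MM):
    """Fit a straight road to the seed hits, then attach unique test hits.

    seed_hits   -- list of (z, y) measurements from the four seed layers
    test_layers -- dict {z_layer: [y_hit, ...]} for the test layers
    Returns ((intercept, slope), attached_hits, chi2).
    """
    z = np.array([h[0] for h in seed_hits])
    y = np.array([h[1] for h in seed_hits])
    slope, intercept = np.polyfit(z, y, 1)       # track road: y = intercept + slope*z

    attached, chi2 = [], 0.0
    for z_t, hits in test_layers.items():
        y_pred = intercept + slope * z_t
        close = [y_h for y_h in hits if abs(y_h - y_pred) < window]
        if len(close) == 1:                      # only unique matches are taken
            attached.append((z_t, close[0]))
            chi2 += (close[0] - y_pred) ** 2     # unweighted distance-based chi2
    return (intercept, slope), attached, chi2
```

Candidates built from different seed-hit combinations would then be compared by the number of attached hits and their $\chi^2/ndf$, keeping only the best one.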
The combination of the high-purity/low-efficiency \textit{Striplet}\xspace finder and the low-purity/high-efficiency strip road search makes it possible to reach a high
overall track-finding efficiency, even in the presence of an inefficient telescope plane.
\paragraph{Track Fitting}
After all possible track candidates have been found, a re-fit of all valid track candidates using GBL is performed,
taking into account all measurements and scattering materials along the path of the track.
For a track with an initial trajectory from a pre-fit of the measurements
(internal seed) or from an external prediction (external seed), the description of
multiple scattering is added by offsets in a local system. Along the initial
trajectory, points are defined which can describe a measurement, a (thin)
scatterer (insensitive material), or both. Measurements are arbitrary functions of the local track
parameters at a point (e.g.\ 2D: position, 4D: direction+position). The re-fit
provides corrections to the local track parameters (in the local system) and the
corresponding covariance matrix at any of those points. Non-diagonal covariance
matrices of measurements will be diagonalized internally. Outliers can be
down-weighted by use of M-estimators.
A single measurement point can be omitted from the refit in order to calculate
unbiased residuals.
\section{Performance of the {\textsc{Lycoris}}\xspace telescope}\label{sec:performance}
This section presents all the results obtained using {\textsc{Lycoris}}\xspace starting from the bare sensor performance to the momentum resolution.
Firstly the test-beam setups used are described, which are the basis for most results through this section.
All performance studies shown use only data stored in the first memory bucket of each {KPiX}\xspace readout channel, as no good-quality data have been collected using the other three memory buckets.
The observed symptom for these buckets is that most of the {KPiX}\xspace channels show a bad ADC response or a bad baseline noise level.
The root cause of this problem is yet to be understood, but studies conducted after data-taking indicate that by increasing $T_{\mathrm{Pre-Charge}}$\xspace (see Section~\ref{sec:hardware:kpix})
by a factor of two, the data stored in the other three memory buckets can be recovered.
Unless mentioned explicitly, all the data shown are taken in the high-gain mode using the external trigger, which is the default mode for the {\textsc{Lycoris}}\xspace telescope.
\subsection{Test Beam Setup}
The system performance of the {\textsc{Lycoris}}\xspace telescope has been evaluated at the {\DESYII Test Beam Facility}\xspace in several test-beam campaigns. They fall into two categories:
one for performance measurements with an \textsc{EUDET}\xspace-style pixel telescope, {\textsc{AZALEA}}\xspace, outside the {PCMAG}\xspace solenoid; the other for validating the system performance inside the {PCMAG}\xspace solenoid with the {\textsc{AZALEA}}\xspace telescope as DUT.
\subsection{Operation at {\mbox{DESY II}}\xspace}\label{sec:performance:desyii}
The {KPiX}\xspace ASIC used in {\textsc{Lycoris}}\xspace requires an \textit{Acquisition Start Signal}\xspace, so to operate {\textsc{Lycoris}}\xspace at the {\DESYII Test Beam Facility}\xspace efficiently,
several particular properties of {\mbox{DESY II}}\xspace have to be taken into account in order to synchronize {\textsc{Lycoris}}\xspace with the accelerator.
{\mbox{DESY II}}\xspace~\cite{Hemmie:1983et,Hemmie:1985uw} is a synchrotron using normal-conducting magnets with a circumference of \SI{292.8}{\m} and a maximum energy of \SI{7.0}{\GeV}.
Its standard operation without any extraction stores a bunch of electrons or positrons for two magnet cycles
$\mathrm{T_{DESY\;Magnet\; Cycle}}$ of \SI{80}{\milli\s} which define the \SI{160}{\milli\s} {\mbox{DESY II}}\xspace cycle.
One bunch of about 10$^{10}$ electrons or positrons is injected on-axis at $E_{\rm min} = \SI{0.45}{\GeV}$ from the linear accelerator {\mbox{LINAC II}}\xspace (LINear ACcelerator) via
the accumulator storage ring {PIA}\xspace (Positron Intensity Accumulator) and is accelerated to $E_{\rm max} = \SI{6.3}{\GeV}$.
The beam is typically stored for two magnet cycles or one {\mbox{DESY II}}\xspace cycle and is then dumped about \SI{160}{\milli\s} after injection, just before the next injection from {PIA}\xspace.
The beam at the {\DESYII Test Beam Facility}\xspace is generated with a carbon fiber placed in the circulating beam, producing bremsstrahlung photons that are converted into electron-positron pairs at a secondary target.
A threshold on the particle energy $\mathrm{E}_{\rm cut}$ can be applied with a dipole magnet. Depending on the selected $\mathrm{E}_{\rm cut}$, beam is only available while
the {\mbox{DESY II}}\xspace beam energy exceeds the selected particle energy~\cite{desytb2018}, as shown in Figure~\ref{fig:installation:desyii}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{figures/installation/synchronization_desy.pdf}
\caption{\label{fig:installation:desyii} Synchronization of {\textsc{Lycoris}}\xspace with the {\mbox{DESY II}}\xspace accelerator cycle.}
\end{center}
\end{figure}
Therefore, to take data efficiently with the {KPiX}\xspace acquisition cycle, the \textit{Acquisition Start Signal}\xspace (as described in Section~\ref{sec:hardware:kpix}) needs to be timed such that data
taking commences once the {\mbox{DESY II}}\xspace beam energy exceeds $\mathrm{E}_{\rm cut}$.
The $\mathrm{E}_{\rm min}$ signal from {\mbox{DESY II}}\xspace is used as a reference to generate the \textit{Acquisition Start Signal}\xspace by applying a configurable delay timer $\mathrm{T_{Delay}}$,
where the fixed start-up time $\mathrm{T_{Startup}}$ of the {KPiX}\xspace ASICs needs to be subtracted in the calculation:
\begin{equation*}
\mathrm{T_{Delay}(E_{cut})} = \frac{\mathrm{T_{DESY\;Magnet\; Cycle}}}{2\pi} \left(\arcsin\left(\frac{\mathrm{2E_{cut}-E_{max}-E_{min}}}{\mathrm{E_{max}-E_{min}}}\right)+\frac{\pi}{2}\right) -\mathrm{T_{Startup}}
\end{equation*}
This ensures that {\textsc{Lycoris}}\xspace is always ready for data taking as soon as particles with energy above the selected $\mathrm{E}_{\rm cut}$ start to arrive.
The data acquisition period then lasts until the beam energy falls below the threshold again, upon which the digitization and readout of all data take place.
A sketch of the complete synchronization scheme is shown in Figure~\ref{fig:installation:desyii}.
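As a numerical cross-check of the timing scheme, the delay formula can be evaluated directly. This is a sketch: `T_STARTUP_MS` is a hypothetical placeholder for the fixed {KPiX}\xspace start-up time, which is not quoted here, and the numerator is written as $2E_{cut}-E_{max}-E_{min}$ so that the delay vanishes at $E_{cut}=E_{min}$ and reaches half a magnet cycle at $E_{cut}=E_{max}$:

```python
import math

# Parameters from the text (GeV, ms); T_STARTUP_MS is a hypothetical
# placeholder for the fixed KPiX start-up time, which is not quoted here.
E_MIN, E_MAX = 0.45, 6.3
T_MAGNET_CYCLE_MS = 80.0
T_STARTUP_MS = 0.0

def t_delay(e_cut):
    """Delay between the E_min reference signal and the Acquisition Start
    Signal, assuming a sinusoidal DESY II energy ramp from E_min to E_max
    over half a magnet cycle."""
    x = (2.0 * e_cut - E_MAX - E_MIN) / (E_MAX - E_MIN)
    x = max(-1.0, min(1.0, x))  # guard against rounding just outside [-1, 1]
    return (T_MAGNET_CYCLE_MS / (2.0 * math.pi)) * (math.asin(x) + math.pi / 2.0) - T_STARTUP_MS
```

For the \SI{4.4}{\GeV} threshold used later in this section, the sketch gives a delay of about \SI{25}{\milli\s} after the $\mathrm{E}_{\rm min}$ signal (before subtracting the start-up time).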
\subsubsection*{Setup outside of the {PCMAG}\xspace}
The performance results shown in this paper are based on the latest data collected outside the {PCMAG}\xspace in March 2020. The two {\textsc{Lycoris}}\xspace cassettes were installed next
to each other between the up- and downstream arms of the {\textsc{AZALEA}}\xspace telescope, at a close distance to the nearest {\textsc{AZALEA}}\xspace plane, see
Figure~\ref{fig:performance:tbT24} (left).
Data was taken with the beam energy threshold set to \SI{4.4}{\GeV}; the trigger coincidence was provided by two crossing scintillators
located closely in front of the first {\textsc{AZALEA}}\xspace plane; the beam collimator was set to $x \times y=$\SI{20}{\milli\metre}$\times$\SI{10}{\milli\metre},
and the distance from the beam collimator to the first scintillator was about \SI{220}{\centi\metre}.
The trigger rate in this configuration is high enough to fill all four memory buckets of each {KPiX}\xspace channel in every acquisition cycle.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.475\textwidth]{figures/performance/IMG_7538.jpg}
\includegraphics[width=0.475\textwidth]{figures/installation/IMG-4017.JPG}
\caption{\label{fig:performance:tbT24} Setup for {\textsc{Lycoris}}\xspace tracking performance measurements with the {\textsc{AZALEA}}\xspace telescope outside the {PCMAG}\xspace solenoid at {\DESYII Test Beam Facility}\xspace in March 2020 (left)
and inside the {PCMAG}\xspace with the {\textsc{AZALEA}}\xspace telescope as a DUT in the center (right).}
\end{center}
\end{figure}
\subsubsection*{Setup inside the {PCMAG}\xspace}
{\textsc{Lycoris}}\xspace was installed on its dedicated rail system with {\textsc{AZALEA}}\xspace telescope inside the {PCMAG}\xspace solenoid in TB24/1, see Figure~\ref{fig:performance:tbT24} (right).
In this configuration, {\textsc{AZALEA}}\xspace is used as a DUT for {\textsc{Lycoris}}\xspace and is installed in between the {\textsc{Lycoris}}\xspace up- and downstream cassettes.
Data was taken with the magnetic field set to \SI{0.9}{\tesla} and a beam energy threshold at \SI{4.4}{\GeV}.
Three crossing scintillators, placed right after the beam collimator and about \SI{200}{\centi\metre} away from the solenoid wall, provide the trigger coincidence.
As TB24/1 is a downstream area, its beam collimator is about \SI{1157}{\centi\metre} away from the primary beam collimator.
This results in a relatively low particle rate in TB24/1 with trigger coincidences varying from zero to four in one {KPiX}\xspace acquisition cycle.
\subsection{Sensor Performance}
Figure~\ref{fig:performance:ivcv} shows the measured electrical properties of all 29 bare sensors, obtained using a probe station.
The measured IV curves (left) show that the dark current level of all sensors is very good, at about 100 to \SI{150}{\nano\ampere}.
The backplane capacitance is measured through the bias ring with an LCR meter at \SI{10}{\kilo\hertz} AC frequency; the measured CV curves (see Figure~\ref{fig:performance:ivcv} (right))
determine the depletion voltage, which is around \SI{50}{\volt} for all sensors.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/IV_new.pdf}
\includegraphics[width=0.49\textwidth]{figures/performance/CV_new.pdf}
\caption{\label{fig:performance:ivcv}The IV (left) and CV (right) curves of all the bare sensors produced.
Six sensors (Sensors 59, 43, 40, 48, 47, and 46), forming an active area of $10\times$\SI{10}{\square\centi\metre}, are used for the studies in this paper.
}
\end{center}
\end{figure}
The assembly quality is controlled by measuring the IV curve of the sensor after every major assembly step (see Section~\ref{sec:hardware:moduleassembly}),
namely after bump-bonding, after gluing the flex onto the sensor surface, and after wire-bonding.
22 sensors were successfully and fully assembled, and all except two show the expected behavior.
Figure~\ref{fig:performance:ivassembly} gives one example from the IV curves measured for the Sensor 41 during assembly.
Overall, the dark current at each step stays within the same order of magnitude as for the bare sensor, confirming that no damage occurred to the sensor during the assembly process.
The behavior of the curves is also as expected: the large increase after bump-bonding comes from impurities added by the treatment of the sensor surface during the bump-bonding procedure;
the slight decrease in dark current from the ``Assembled'' curve (denoting the step after gluing the flex onto the sensor surface) to the curve after wire-bonding is attributed to
improved humidity control during sensor storage.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/S41_IV.pdf}
\caption{\label{fig:performance:ivassembly}The IV curves of Sensor 41 at various assembly steps, where ``Assembled'' refers to the assembly step after gluing the flex onto the sensor surface.}
\end{center}
\end{figure}
The measured bulk capacitance of the entire sensor, through the bias ring, is \SI{1.26}{\nano\farad} on average when fully depleted.
Given that the 3679 sense strips are identical in size and coupled to the backplane in the same way,
the backplane capacitance of each sense strip is \SI{1.26}{\nano\farad}$/3679=$\SI{0.34}{\pico\farad}.
The AC inter-strip capacitance was measured by the producer Hamamatsu with the sensor fully depleted (\SI{60}{\volt}); the result
is on average \SI{8}{\pico\farad} per strip.
A MIP creates about 3.5 to \SI{4}{\fC} in \SI{320}{\micro\metre} of silicon.
According to the sensor design, $20\%$ of the signal charge generated at an intermediate strip goes
to the backplane and the rest is shared by the neighboring readout strips~\cite{TNelson}.
Assuming that the signal loss to the backplane is negligible for readout strips and that the probability of hitting a floating strip or a readout strip
is equal, the expected signal charge is on average 90\% of the MIP charge, i.e.\ 3.1 to \SI{3.6}{\fC}.
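The averaging step above can be written out explicitly; this is just the arithmetic from the text, with the $20\%$ backplane loss applied to floating-strip hits only:

```python
# A MIP deposits 3.5-4 fC in 320 um of silicon.  A readout-strip hit keeps
# its full charge, a floating-strip hit loses 20% to the backplane; with
# equal hit probability on either strip type the average readout fraction
# is (1.0 + 0.8) / 2 = 0.9.
mip_charge_fc = (3.5, 4.0)
readout_fraction = (1.0 + 0.8) / 2
expected_fc = tuple(q * readout_fraction for q in mip_charge_fc)
# expected_fc corresponds to the 3.1 to 3.6 fC range quoted in the text
```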
\subsection{{KPiX}\xspace Calibration Performance}\label{sec:performance:calib}
As described in Section~\ref{sec:hardware:kpix:calib}, ADC response of every readout channel is calibrated with an 8-bit DAC.
The calibration data used for the studies shown in this paper were taken with the most accurate configuration of the {KPiX}\xspace calibration system,
i.e.\ every available DAC value is measured, corresponding to 26 measurements ranging from 0 to \SI{40}{\femto\coulomb} for each channel.
The measurement interval is \SI{1}{\femto\coulomb} from 0 to \SI{10}{\femto\coulomb} and \SI{2}{\femto\coulomb} ($\sim 12500$ electrons) from 10 to \SI{40}{\femto\coulomb}.
The slope required to convert the ADC response to \si{\femto\coulomb} (or number of electrons) can be determined by performing a
linear fit on the ADC response against the DAC input value for each channel, see one example in Figure~\ref{fig:performance:calib_1kpix} (left).
The kinks at single charge-injection points reflect natural fluctuations at the single-measurement level, which are not reproducible and are expected.
Most important is that the slope is reproducible, i.e.\ a stable charge response, which has been confirmed with various calibration data samples.
Figure~\ref{fig:performance:calib_1kpix} (right) shows the distribution of the slopes of all the $1024$ channels from one {KPiX}\xspace.
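The slope extraction is a plain least-squares line fit. A minimal sketch with synthetic data follows; the injection points mimic the 26-value scheme described above, while the gain and offset of the synthetic channel are made-up values:

```python
import numpy as np

def calibration_slope(charge_fc, adc):
    """Linear fit of the ADC response vs injected charge for one channel;
    the returned slope converts fC to ADC counts."""
    slope, offset = np.polyfit(charge_fc, adc, 1)
    return slope, offset

# Injection points as in the text: 1 fC steps from 0 to 10 fC,
# then 2 fC steps from 12 to 40 fC -- 26 points in total.
charges = np.concatenate([np.arange(0.0, 11.0, 1.0), np.arange(12.0, 41.0, 2.0)])
assert len(charges) == 26
```

Inverting the fitted slope converts a measured ADC response back into a charge in \si{\femto\coulomb} (or a number of electrons).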
\begin{figure}[htbp]
\includegraphics[width=0.49\textwidth]{figures/performance/calib_c0_k0_b0_r0.pdf}
\includegraphics[width=0.49\textwidth]{figures/performance/slopes_normalG_highG_k02.pdf}
\caption{\label{fig:performance:calib_1kpix} The ADC calibration curve of one example channel from one {KPiX}\xspace, a linear fit is performed in red (left).
The slope distribution of the ADC response to the charge injected for all 1024 channels of a {KPiX}\xspace, the blue curve refers to calibration data in normal-gain mode while
the violet one is using the high-gain mode; the mean value, its RMS and RMS error (RMSE) are given for each curve in the legend.}
\end{figure}
The two distributions shown were taken using the normal-gain and high-gain modes of {KPiX}\xspace respectively.
The ratio of the mean values of the two curves is around $1:4$, as expected from the design value of the coupling capacitors shown in Figure~\ref{fig:hardware:kpixchannel}.
The small RMS error of the slope distributions in Figure~\ref{fig:performance:calib_1kpix} (right) shows a good uniformity of all channels across a single {KPiX}\xspace chip.
Figure~\ref{fig:performance:calib_all-kpix} shows a profile histogram of the slope distribution in the high-gain mode for all twelve {KPiX}\xspace ASICs used by {\textsc{Lycoris}}\xspace,
of which eleven show a very uniform behavior.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/performance/profile_highGain-0to12_kpix.pdf}
\caption{\label{fig:performance:calib_all-kpix} A profile histogram of the slope distribution of all twelve {KPiX}\xspace ASICs in the high-gain mode.}
\end{center}
\end{figure}
\subsection{Module Performance}
\subsubsection{Noise Performance}
In the external trigger mode, {KPiX}\xspace records the charge response of each channel when an external trigger arrives.
As a result of the short opening window per trigger, and additionally due to the low particle rate at the test beam,
only one or two of the 1890 readout strips record a signal in each externally triggered event.
As such, the majority of the {KPiX}\xspace channels record random noise.
In this case, the charge distribution of one entire {KPiX}\xspace chip is expected to follow a Gaussian distribution centered at zero, with outliers coming from signal contributions.
The standard deviation of the Gaussian charge distribution defines the chip's noise level,
and it is robustly estimated using the median absolute deviation (MAD): $\sigma = b\cdot \mathrm{MAD}$ with $b = 1.4826$ for Gaussian distributions.
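The robust width estimate can be sketched in a few lines; nothing beyond the text is assumed (the scaling factor $b=1.4826$ applies to Gaussian distributions):

```python
import numpy as np

B_GAUSS = 1.4826  # sigma = b * MAD for a Gaussian distribution

def mad_sigma(charges):
    """Robust noise estimate: the median absolute deviation of a charge
    distribution, scaled to a Gaussian standard deviation.  Signal
    outliers barely affect the median, unlike a plain standard deviation."""
    charges = np.asarray(charges, dtype=float)
    mad = np.median(np.abs(charges - np.median(charges)))
    return B_GAUSS * mad
```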
As stated in Section~\ref{sec:software:reco}, the baseline noise subtraction involves two steps: the pedestal subtraction and
the common-mode noise subtraction.
Figure~\ref{fig:performance:noise_ped} (left) shows the recorded charge distribution after the pedestal subtraction for one
example {KPiX}\xspace chip, and its pedestal distribution against the {KPiX}\xspace channel number is shown using a candle plot in Figure~\ref{fig:performance:noise_ped} (right).
The mean of the charge distribution is close to zero after the pedestal subtraction, demonstrating that the pedestal subtraction corrects the offset efficiently.
However, the clearly asymmetric shape reveals a significant contribution from the event-by-event common-mode noise.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm]{figures/performance/Q_fC_k2_b0.pdf} %
\includegraphics[height=5cm]{figures/performance/Run_20200316_172906_pedestal_v_channel_onlyMean_k6_b0.pdf}%
\caption{\label{fig:performance:noise_ped} Charge distribution of one entire example {KPiX}\xspace chip after the pedestal subtraction (left)
and its pedestal distribution over its channels in a candle plot (right).}
\end{center}
\end{figure}
After applying the common-mode subtraction, the charge distribution of the example chip shows the expected Gaussian shape, with its
mean value pushed closer to $0$ by two orders of magnitude, see Figure~\ref{fig:performance:noise} (left).
In addition, its width is reduced by more than a factor of two, because the common-mode noise subtraction also compensates for a
time-dependent pedestal drift, which is found to be sufficiently uniform across each single {KPiX}\xspace chip.
This pedestal drift is caused by leakage of the charge stored in the memory buckets during the time between storage and digitization.
The amount of drift therefore varies with how long the data is stored in the memory bucket before digitization, i.e.\ with when the data was stored during the acquisition period. As a consequence, the drift varies over time but is common to the channels of a single {KPiX}\xspace chip, and thus contributes, together with the common-mode noise, to the asymmetric shape of the distribution in Figure~\ref{fig:performance:noise_ped} (left).
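The two-step baseline correction discussed above can be sketched as follows. The array layout is a simplifying assumption; the event-wise median over the channels of one chip is a robust estimate of the common mode plus the shared pedestal drift, since only a few of the 1024 channels carry signal in any event:

```python
import numpy as np

def subtract_baseline(raw, pedestals):
    """Per-channel pedestal subtraction followed by per-event common-mode
    subtraction for one chip.

    raw       -- array of shape (n_events, n_channels) of measured charges
    pedestals -- array of shape (n_channels,) of per-channel pedestals
    """
    ped_sub = raw - pedestals                           # step 1: pedestal
    common = np.median(ped_sub, axis=1, keepdims=True)  # robust per-event offset
    return ped_sub - common                             # step 2: common mode
```

Any offset that is purely common to the channels of a chip, whether common-mode noise or pedestal drift, is removed exactly by the second step, which is why the corrected distribution is both narrower and centered at zero.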
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm]{figures/performance/QTrue_k2_b0.pdf}
\includegraphics[height=5cm]{figures/performance/Q_true_k2_c855_b0.pdf} %
\caption{\label{fig:performance:noise} Charge distribution after common-mode subtraction from one entire example {KPiX}\xspace chip (left)
and from one example {KPiX}\xspace channel that is connected to a strip (right).} %
\end{center}
\end{figure}
Figure~\ref{fig:performance:noise} (right) shows the Gaussian charge distribution read out from one example strip, i.e.\ one {KPiX}\xspace channel
connected to a strip. Its width of \SI{0.29}{\femto\coulomb} defines its noise level $N$; with the measured charge value $Q$, the significance is
calculated as $Q/N$, which is the input for the subsequent clustering step.
\subsubsection{Cluster reconstruction}
Clusters are the reconstructed objects (see Section~\ref{sec:software:hitclustering}) that serve as the
input hits to the alignment and tracking algorithms for track reconstruction.
Figure~\ref{fig:performance:cluster} (left) shows the charge distribution of all the clusters found after clustering.
A large noise peak is present at low cluster charges, which can be suppressed by a cut on the cluster significance.
A signal amplitude of about \SI{3.1}{\femto\coulomb} is expected,
and the measured noise level is found to be $\sim$\SI{0.25}{\femto\coulomb} on average;
hence a signal-to-noise ratio ($S/N$) of about $12:1$ is expected.
As only every second strip is read out and 20\% of the signal charge is expected to be lost to the backplane for hits on floating strips, the signal-to-noise ratio for such hits is reduced to $4.8:1$.
Therefore, a criterion of $S/N\ge4$ is applied to the clusters, which serve as the input hits to the tracking algorithm.
The cluster charge distribution after this selection is shown in Figure~\ref{fig:performance:cluster} (right) demonstrating a good trade-off
between hit statistics and hit purity.
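The $S/N$ bookkeeping above amounts to the following arithmetic; note that the $4.8:1$ quoted in the text corresponds to a \SI{3.0}{\femto\coulomb} signal, while with \SI{3.1}{\femto\coulomb} the sketch gives $\approx 5$:

```python
# S/N estimate sketched from the text: a readout-strip hit keeps the full
# signal, while a floating-strip hit loses 20% to the backplane and splits
# the remainder between its two neighbouring readout strips.
signal_fc = 3.1    # expected signal amplitude
noise_fc = 0.25    # measured average noise level
sn_readout = signal_fc / noise_fc                # ~12:1 for readout-strip hits
sn_floating = signal_fc * 0.8 / 2.0 / noise_fc   # ~5:1 per strip for floating-strip hits
```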
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/cluster_charge_all_sens2_b0.pdf}
\includegraphics[width=0.49\textwidth]{figures/performance/cluster_charge_CUTS_sens2_b0.pdf}
\caption{\label{fig:performance:cluster} Charge distributions of the reconstructed clusters from one example sensor before
(left) and after (right) the selection of $S/N\ge4$. }
\end{center}
\end{figure}
\subsection{{\textsc{Lycoris}}\xspace Performance}
\subsubsection{Tracking with \textsc{EUDET}\xspace telescope}
In {\textsc{Lycoris}}\xspace performance studies, the \textsc{EUDET}\xspace telescope {\textsc{AZALEA}}\xspace was used to provide a reference track with which {\textsc{Lycoris}}\xspace hits can be associated to form a combined track.
Two track finding algorithms are used. They are similar to the ones used for {\textsc{Lycoris}}\xspace stand-alone track finding
(see Section~\ref{sec:software}) but specialized for the pixel telescope.
The two algorithms can be called in any order, and hits already used to form a track candidate are not considered by the subsequent algorithm.
\begin{description}
\item[Triplet finder:]
In each arm of {\textsc{AZALEA}}\xspace, the first and last layers are used to form doublets along the beam direction by applying a loose selection on the slopes in the x-z and y-z planes.
To become a valid triplet, a doublet is interpolated to the middle layer to search for a matching third hit within a short distance.
All valid triplets inherit the tracking parameters from their doublets and are later extrapolated to the center between the two {\textsc{AZALEA}}\xspace arms for matching.
When a triplet from one arm matches a triplet from the other arm, requiring a small deviation in both x and y as well as in the track
angles in x and y, a valid triplet track is found. As for the \textit{Striplet}\xspace finder, double uniqueness is required, meaning that triplets
from the same doublet but with different third hits are rejected.
\item[Road search:]
The road search first forms track roads along the beam direction using the first and last layers of both arms.
If there are at least two other uniquely matched hits found within a short distance to the track road, i.e.\ at least four
hits can be associated with this track, then a valid track candidate has been reconstructed.
\end{description}
For the combined track finding, the road search is used to associate the {\textsc{Lycoris}}\xspace hits to the track road defined by the {\textsc{AZALEA}}\xspace track candidate.
One {\textsc{AZALEA}}\xspace track can be associated with up to six {\textsc{Lycoris}}\xspace hits, each being a unique match within a short distance.
\subsubsection{Track Hit Performance}
Track hits are the hits associated with a reconstructed track.
The track-hit performance shown here uses the {\textsc{Lycoris}}\xspace stand-alone tracking algorithm described in Section~\ref{sec:software}.
Figure~\ref{fig:performance:ypos-landau} (left) shows the hit position in the local $y$ coordinate, giving a clear 1D profile of the beam spot at the {\DESYII Test Beam Facility}\xspace.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm]{figures/performance/y_all_standalone_run897_SoN4_rev.pdf}
\includegraphics[height=5cm]{figures/performance/Charge_l11_fitted_run897_SoNG4_SizeL5_ChargeL20_standalone_noUniqueSroad.pdf} %
\caption{\label{fig:performance:ypos-landau} The hit position along the local y-axis of all six {\textsc{Lycoris}}\xspace layers for
data taken with a beam collimator of $x\times y=$\SI{20}{\milli\metre}$\times$\SI{10}{\milli\metre} (left), where the pink lines indicate the projection of the beam collimator size.
The normalized charge distribution of all hits on one example sensor plane associated with stand-alone {\textsc{Lycoris}}\xspace tracks (right); a Landau-convolved-Gaussian
fit yields a Landau most probable value of \SI{3.0}{\femto\coulomb}.
}
\end{center}
\end{figure}
\paragraph{Signal Amplitude}
Figure~\ref{fig:performance:ypos-landau} (right) shows the charge distribution of hits from one example sensor plane associated with stand-alone {\textsc{Lycoris}}\xspace tracks.
A Landau-convolved-Gaussian fit yields a Landau most probable value of \SI{2.9}{\femto\coulomb}, which is the measured signal amplitude for this sensor.
The measured signal amplitude is consistent with the expectation for a MIP in this \SI{320}{\micro\metre} thick sensor read out on every other strip.
\paragraph{Signal-to-Noise Ratio}
The signal-to-noise ratio is calculated for each hit cluster that is associated to a track, and its distribution for
one of the {\textsc{Lycoris}}\xspace sensors is shown in Figure~\ref{fig:performance:snr}.
The cut-off on the left is due to the quality selection of the clusters used as input to the tracking algorithm,
while the majority of clusters are centered around the mean value of $14.4$, giving a
good estimate of the signal-to-noise ratio.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/performance/Significance_l11_standalone_SoN4.pdf} %
\caption{\label{fig:performance:snr} The signal-to-noise ratio distribution of the hits on track from one sensor.}
\end{center}
\end{figure}%
\paragraph{Charge Sharing}%
The charge sharing has been studied with two methods in this paper.
First, the spatial distribution of the charge collected by two neighboring readout strips is studied with the $\eta$ observable~\cite{etaSiDet:1983}:
\begin{equation}
\eta = \frac{Q_R}{Q_L+Q_R},
\end{equation}
where $Q_L$ and $Q_R$ are the charges collected by the two neighboring strips L and R, respectively.
A group of {\textsc{Lycoris}}\xspace hits with particles traversing between strips L and R is selected by demanding that the associated {\textsc{AZALEA}}\xspace track projection lies
between strips L and R. Figure~\ref{fig:performance:eta} shows an example $dN/d\eta$ distribution, where contributions from different hit cluster
sizes are overlaid.
The $\eta$ distribution for single-strip hits (cluster size 1) shows almost no charge sharing between two readout strips. This is expected, as the
diffusion is about a quarter of the readout pitch (half of the sense pitch).
Charge sharing between a readout strip and the floating strip is shown by the $\eta$ distribution for hits composed of two strips (cluster size 2).
Contributions from $\delta$ electrons are described by the curve for hits consisting of more than two strips.
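The $\eta$ observable itself is a one-liner; the example values in the test are made up:

```python
def eta(q_left, q_right):
    """Charge-sharing observable: eta -> 0 when the charge is fully on the
    left strip, 1 when fully on the right, and 0.5 for equal sharing."""
    return q_right / (q_left + q_right)
```

For a single-strip hit all the charge sits on one strip, so its entries pile up at $\eta=0$ or $\eta=1$, as seen in the cluster-size-1 curve of Figure~\ref{fig:performance:eta}.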
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/Eta_run897_SoNG4.pdf} %
\caption{\label{fig:performance:eta} $dN/d\eta$ of hits from all six {\textsc{Lycoris}}\xspace layers.}
\end{center}
\end{figure}%
Secondly, the charge sharing is studied by comparing the signal amplitudes of hits from floating strips to those from readout strips.
A {\textsc{Lycoris}}\xspace hit is classified as generated by a readout strip if its associated {\textsc{AZALEA}}\xspace track projection lies within a one-pitch
(\SI{25}{\micro\metre}) window centered on a readout strip.
Figure~\ref{fig:performance:landau_sub} shows the charge distributions of hits generated by floating strips (left) and by readout strips (right).
The ratio of the amplitudes of the floating-strip hits to the readout-strip hits is about $0.85$.
This corresponds to about $15\%$ charge loss to the backplane for floating strips, assuming no charge loss for readout-strip hits, a
number roughly consistent with the design value of $20\%$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/charge_floating.pdf}
\includegraphics[width=0.49\textwidth]{figures/performance/charge_readout.pdf}
\caption{\label{fig:performance:landau_sub} The charge distributions of {\textsc{Lycoris}}\xspace hits generated from floating strips (left) and
from readout strips (right) and the classification on each hit uses its associated {\textsc{AZALEA}}\xspace track projection.}
\end{center}
\end{figure}
\paragraph{Cluster Size}%
Figure~\ref{fig:performance:csize} (left) shows the cluster size of hits on {\textsc{Lycoris}}\xspace stand-alone tracks from one example sensor at bias voltages of
\SI{70}{\volt}, \SI{110}{\volt}, and \SI{150}{\volt}.
A slight increase in the number of single-strip clusters and a slight drop in the number of two-strip clusters can be observed
with increasing bias voltage. This also demonstrates that the default operation bias voltage of \SI{70}{\volt} is sufficient.
Figure~\ref{fig:performance:csize} (right) shows the hit cluster size for all six {\textsc{Lycoris}}\xspace sensor planes at the default bias voltage of \SI{70}{\volt}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/standalone_csize_sall_bias70V_to_150V_SoNG4.pdf}
\includegraphics[width=0.49\textwidth]{figures/performance/standalone_csize_s10_to_s15_run897_SoNG4.pdf}
\caption{\label{fig:performance:csize} The hit cluster size for {\textsc{Lycoris}}\xspace standalone tracks for one example sensor at three bias
voltages (left) and the hit cluster size for all {\textsc{Lycoris}}\xspace planes at the default \SI{70}{\volt} bias voltage (right).}
\end{center}
\end{figure}
\paragraph{Hit Efficiency}
The hit efficiency of the sensor planes has been derived using two separate approaches: using stand-alone tracks from {\textsc{Lycoris}}\xspace, and using tracks from
the {\textsc{AZALEA}}\xspace telescope as probe tracks.
Assuming a similar single-hit efficiency $e$ for each of the six sensor planes, the probability to get six hits on-track is $P(6) = e^6$,
while the probability for five hits on-track is $P(5)=6\, e^5(1-e)$.
The ratio of six hits to five hits on-track is
\begin{equation}
f = \frac{P(6)}{P(6)+P(5)} = \frac{e}{6-5\cdot e},
\end{equation}
hence the hit efficiency is $e = 6f/(5f+1)$.
Figure~\ref{fig:performance:nhits} shows the hit multiplicity for {\textsc{Lycoris}}\xspace stand-alone reconstructed tracks using the default hit selection of this paper,
from which the measured fraction is $f=0.86$, resulting in a single-hit efficiency of approximately $97.3\%$.
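The relation between the six-hit fraction and the per-plane efficiency can be checked numerically; this is a direct transcription of the formulas above:

```python
def six_hit_fraction(e):
    """f = P(6) / (P(6) + P(5)) with P(6) = e^6 and P(5) = 6 e^5 (1 - e),
    which simplifies algebraically to e / (6 - 5 e)."""
    p6 = e ** 6
    p5 = 6.0 * e ** 5 * (1.0 - e)
    return p6 / (p6 + p5)

def single_hit_efficiency(f):
    """Inverse relation: e = 6 f / (5 f + 1)."""
    return 6.0 * f / (5.0 * f + 1.0)
```

With the measured fraction $f=0.86$ this yields $e\approx 0.97$, consistent with the efficiency quoted above.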
The fake rate of the reconstructed tracks has been studied by applying the track reconstruction algorithm onto a pure noise dataset, i.e.\
data collected by random trigger without beam present.
From a sample of $25000$ events, only $29$ five-hit tracks are found, all with a very poor $\chi^2$ value, validating the excellent
noise rejection of the tracking algorithm developed for {\textsc{Lycoris}}\xspace.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/hit_multiplicity_UK_thesis.pdf} %
\caption{\label{fig:performance:nhits} Hit multiplicity for {\textsc{Lycoris}}\xspace stand-alone reconstructed tracks.}
\end{center}
\end{figure}
An independent way to derive the single-plane tracking efficiency is to use tracks formed by {\textsc{AZALEA}}\xspace as probe tracks to derive the efficiency of each
{\textsc{Lycoris}}\xspace plane independently, removing any potential bias introduced by the stand-alone method.
The {\textsc{AZALEA}}\xspace tracks are reconstructed using its standard reconstruction algorithms in the \textsc{EUTelescope}\xspace package and for the {\textsc{Lycoris}}\xspace hits to be matched,
the default hit selection is used. The single-plane efficiencies found for each plane are shown in Table~\ref{tab:performance:sigma_y}
and are very much compatible with the stand-alone method, also validating the previous assumption of very similar plane efficiencies.
\subsubsection{Tracking performance}
\paragraph{Spatial Resolution}
The spatial resolution is defined as the width of a Gaussian fit of the distribution of the residuals, i.e. the spatial distance from the {\textsc{Lycoris}}\xspace measurement to the projection of
the reference track provided by the {\textsc{AZALEA}}\xspace telescope. A single hit measured by {\textsc{Lycoris}}\xspace carries position information along one axis only; the other axis can only be recovered through the
\textit{Striplet}\xspace object. Therefore, the two spatial resolutions are given separately.
The single point resolution for {\textsc{Lycoris}}\xspace is the resolution on the local y-axis (perpendicular to the strip direction) $\sigma_y$.
It is given by the y-axis residuals, measured by comparing every hit on a {\textsc{Lycoris}}\xspace plane with respect to the intersection of the reference {\textsc{AZALEA}}\xspace track on this plane.
{\textsc{AZALEA}}\xspace tracks can be reconstructed using {\textsc{Lycoris}}\xspace hits as input or not, respectively resulting in a biased or an unbiased measurement of the single point resolution.
Figure~\ref{fig:performance:residual-dy} shows the biased (left) and the unbiased (right) y-axis residuals for one example {\textsc{Lycoris}}\xspace plane, each of which fits to a Gaussian well.
The fitted results are both well centered at zero, showing a good alignment performance.
The width of the two Gaussian fits gives a biased resolution $\sigma_y^{biased}=$\SI{6.92}{\micro\metre} and an unbiased resolution $\sigma_y^{unbiased}=$\SI{7.74}{\micro\metre}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/bias_residual_l10.pdf}
\includegraphics[width=0.49\textwidth]{figures/performance/unbias_residual_l10.pdf} %
\caption{\label{fig:performance:residual-dy} The biased (left) and the unbiased (right) residuals on the local y-axis (perpendicular to the strip direction) in \si{\milli\meter} for one example {\textsc{Lycoris}}\xspace plane.}
\end{center}
\end{figure}
The final single point resolution of each {\textsc{Lycoris}}\xspace sensor is estimated using the geometric mean~\cite{resolution_2005} of the biased and the unbiased resolutions:
\begin{equation}
\sigma_y = \sqrt{\sigma_y^{\mathrm{unbiased}} \cdot \sigma_y^{\mathrm{biased}}}.
\end{equation}
Table~\ref{tab:performance:sigma_y} shows the unbiased, biased and the geometric mean of the single point resolution for all six sensor planes.
The final single point resolution ranges from \SI{6.92}{\micro\metre} to \SI{7.32}{\micro\metre}, significantly better than the required \SI{10}{\micro\metre}.
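As a quick cross-check of the geometric-mean estimator, a minimal sketch using the upstream~0 values listed in Table~\ref{tab:performance:sigma_y}:

```python
import math

# Geometric mean of the biased and unbiased single point resolutions,
# sigma_y = sqrt(sigma_biased * sigma_unbiased), as defined in the text.
def geometric_mean(sigma_biased, sigma_unbiased):
    return math.sqrt(sigma_biased * sigma_unbiased)

# Upstream 0 values in micrometres, taken from the table.
print(geometric_mean(6.92, 7.74))  # ~7.32 um, as listed in the table
```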
\begin{table}[htbp]
\centering
\begin{tabular}{c c c c c}
Sensor ID & Efficiency & $\sigma_y^{\mathrm{biased}}$ [\si{\micro\metre}] & $\sigma_y^{\mathrm{unbiased}}$ [\si{\micro\metre}] & $\sigma_y$ [\si{\micro\metre}] \\ \toprule
upstream 0 & 95.0\% &$6.92\pm0.06$ &$7.74\pm0.08$ &$7.32\pm0.07$ \\
upstream 1 & 97.0\% &$6.53\pm0.05$ &$7.33\pm0.07$ &$6.92\pm0.06$ \\
upstream 2 & 96.2\% &$6.81\pm0.06$ &$7.67\pm0.08$ &$7.22\pm0.07$ \\
downstream 0 & 96.7\% &$6.64\pm0.05$ &$7.41\pm0.07$ &$7.01\pm0.08$ \\
downstream 1 & 96.6\% &$6.69\pm0.05$ &$7.45\pm0.07$ &$7.06\pm0.06$ \\
downstream 2 & 96.7\% &$6.64\pm0.06$ &$7.59\pm0.08$ &$7.21\pm0.07$ \\
\bottomrule
\end{tabular}
\caption{The single plane hit efficiency and the three (biased, unbiased and their geometric mean) y-axis resolutions for all six sensor planes.}
\label{tab:performance:sigma_y}
\end{table}
A single {\textsc{Lycoris}}\xspace plane measurement is not capable of retrieving information on the local x-axis (along the strip direction),
while the \textit{Striplet}\xspace object reconstructed from the three planes in one cassette provides 2D information thanks to the \SI{2}{\degree} stereo angle arrangement.
Figure~\ref{fig:performance:residual-dx} shows the x-axis residuals of the up- and downstream cassettes.
Both residual distributions are fitted to a Gaussian that is well centered at zero.
The width of the Gaussian fit gives an x-axis resolution of $\sigma_x=$\SI{0.224}{\milli\metre} for the upstream cassette and $\sigma_x=$\SI{0.238}{\milli\metre} for the downstream cassette,
exceeding the design requirement of \SI{1}{\milli\metre} by a factor of more than four.
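These values are consistent with a simple geometric expectation (a back-of-envelope estimate, not from the source): with a stereo angle $\theta$, the resolution along the strips scales roughly as $\sigma_y/\sin\theta$.

```python
import math

# Rough expectation for the along-strip resolution from the stereo angle:
# sigma_x ~ sigma_y / sin(theta). This is a simplified estimate which
# assumes the full single-point resolution enters; the real Striplet
# reconstruction differs in detail.
sigma_y_mm = 7.07e-3          # average single point resolution, in mm
theta = math.radians(2.0)     # stereo angle of the sensor arrangement
sigma_x_mm = sigma_y_mm / math.sin(theta)
print(f"{sigma_x_mm:.3f} mm")  # ~0.2 mm, same scale as the measured 0.224-0.238 mm
```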
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/performance/residual_x_0.pdf} %
\includegraphics[width=0.49\textwidth]{figures/performance/residual_x_1.pdf}
\caption{\label{fig:performance:residual-dx}Residuals on the local x-axis (along the strip direction) in \si{\milli\meter} of \textit{Striplet}\xspace from up- (left) and downstream (right) cassettes.}
\end{center}
\end{figure}
\paragraph{Momentum Resolution} %
The momentum resolution is defined as the error propagated on the curvature measurement of {\textsc{Lycoris}}\xspace stand-alone tracks.
Data collected inside the {PCMAG}\xspace with the two {\textsc{Lycoris}}\xspace cassettes well separated is the ideal configuration for momentum measurements.
As a result of a very low data rate inside the {PCMAG}\xspace and a lack of test beam time, only $251$ valid tracks could be reconstructed
from the data collected by {\textsc{Lycoris}}\xspace inside the {PCMAG}\xspace.
Therefore, a simulation was implemented to estimate the momentum resolution for this configuration, with the single point resolution taken as the measured average of \SI{7}{\micro\metre} reported in this paper.
The simulation takes its geometry from the test beam setup of {\textsc{Lycoris}}\xspace installed inside the {PCMAG}\xspace, where the distance from the first to the last
{\textsc{Lycoris}}\xspace plane is \SI{750}{\milli\metre}, and the distance between sensor planes in each cassette is taken from the technical drawings.
The simulated momentum resolution is \SI{3.60E-3}{\per\GeV} for a magnetic field of \SI{1}{\tesla}.
This is well within the momentum resolution of \SI{5.0E-3}{\per\GeV} desired by users for measurements inside the {PCMAG}\xspace.
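For orientation, the textbook Gluckstern formula for the curvature error of $N$ equally spaced measurement planes can be sketched as below. This is only an idealized point-resolution-only bound under assumed inputs (equidistant planes, no multiple scattering, no material), so it comes out markedly better than the simulated \SI{3.60E-3}{\per\GeV} for the real clustered two-cassette geometry; it is not the simulation used in this paper.

```python
import math

# Gluckstern estimate of the transverse-momentum resolution,
#   sigma(1/pT) = sigma_x / (0.3 * B * L^2) * sqrt(720 / (N + 4)),
# valid for N equally spaced measurements; units: m, T, 1/GeV.
# Assumed inputs: sigma = 7 um, B = 1 T, L = 0.75 m, N = 6 planes.
def gluckstern(sigma_m, b_tesla, lever_m, n_planes):
    return sigma_m / (0.3 * b_tesla * lever_m**2) * math.sqrt(720.0 / (n_planes + 4))

print(gluckstern(7e-6, 1.0, 0.75, 6))  # idealized lower bound, in 1/GeV
```

The gap between this idealized number and the simulated one reflects how far the clustered layout (two tight stations instead of equidistant planes) and scattering material are from the formula's assumptions.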
\subsection{Summary}
The {\textsc{Lycoris}}\xspace strip telescope has been successfully tested at the {\DESYII Test Beam Facility}\xspace. It has
demonstrated a single hit efficiency of approximately $97.3\%$. The single-point resolution orthogonal to the strip ($\sigma_y$)
is on average \SI{7.07}{\micro\meter} and the resolutions parallel to the strip ($\sigma_x$) for the up- and downstream cassettes
are \SI{0.22}{\milli\meter} and \SI{0.24}{\milli\meter} respectively.
\section{The {\textsc{Lycoris}}\xspace Telescope System}\label{sec:hardware}
The design of {\textsc{Lycoris}}\xspace was driven by two different use-cases: providing a large-area coverage for test beam setups and
providing precision tracking inside the {PCMAG}\xspace solenoid, where the {\textsc{Lycoris}}\xspace arms have to fit in the gap between the
DUT and the inner wall of the {PCMAG}\xspace field cage. It was quickly concluded that the second use-case imposes much more stringent
constraints and hence the requirements were derived from this use-case.
\subsection{Overall Requirements}\label{sec:hardware:overallreq}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/hardware/sketch-pcmag-lycoris_LCTPCuser.pdf}
\includegraphics[width=0.49\textwidth]{figures/hardware/momentum_distr_felix.png}
\caption{\label{fig:hardware:pcmag} Picture of the {PCMAG}\xspace (left)~\cite{Behnke:2020krd} and the measured momentum spread (right)~\cite{Mueller_2016}.}
\end{center}
\end{figure}
For tests in the {PCMAG}\xspace solenoid, the DUT (e.g.\ a Time Projection Chamber (TPC)) will be installed inside the {PCMAG}\xspace with a
small gap on either side, where the {\textsc{Lycoris}}\xspace telescope will be installed.
It will then be used to provide a precise measurement of the particle trajectory inside the magnet.
As a result of interactions of the electrons with the magnet wall ($20\%$ radiation length $X_0$), the particles have a large momentum spread when entering
the magnet volume (see Figure~\ref{fig:hardware:pcmag}) compared to the momentum spread of the electron test beam itself.
Effects such as poor momentum resolution due to potential electric field inhomogeneities within a
TPC volume could be studied and corrected with precision information from {\textsc{Lycoris}}\xspace.
Simulation studies have been performed to determine the general requirements on such a system.
A \textsc{Geant4}\xspace~\cite{Allison:2006ve,Allison:2016lfl} simulation accurately describing the current infrastructure including the material
and dimensions of the {PCMAG}\xspace and a TPC field cage (a detector taken as an example of one potential DUT) has been performed.
These studies have shown that a spatial resolution of better than \SI{10}{\micro\metre} is required in the bending plane of a homogeneous \SI{1}{\tesla} magnetic field,
in order to provide a comparable momentum measurement to a TPC with resolution of $\sim$\SI{5.0E-3}{\per\GeV}.
The overall active area was required to be at least 10$\times$\SI{10}{\centi\metre\squared} %
in order to cover 96\% of beam particles at \SI{6}{\GeV} according to the simulation results shown in Figure~\ref{fig:hardware:G4simu}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/hardware/Simu_6GeV_beam-spread_2ndSi_rev.pdf}
\includegraphics[width=0.49\textwidth]{figures/hardware/Simu_6GeV_beam-spread_4thSi_rev.pdf}
\caption{\label{fig:hardware:G4simu} Simulated spread of \SI{6}{\GeV} beam at the silicon sensor plane before the TPC field cage (left) and at the last sensor plane (right).
The coordinate origin of the simulation is set to the geometrical center of the {PCMAG}\xspace.
The beam spot is centered at $z=$ \SI{2}{\metre} with a radius of \SI{0.5}{\milli\metre} and a spread of \SI{2}{\milli\radian},
in order to describe well the beam conditions with a \SI{1}{\milli\metre} diameter collimator at the {\DESYII Test Beam Facility}\xspace.}
\end{center}
\end{figure}
Two technology candidates -- silicon-strip sensors and silicon-pixel sensors -- have been considered.
While the level of 3D information for a pixel system was superior, the available pixel sensors could not meet the area requirements at an affordable cost.
Thus silicon-strip sensors with a sufficiently small pitch were chosen for this project.
To provide the required 3D point resolution information, six silicon-strip sensors with a small-angle stereo configuration are used.
The limiting factor guiding these choices is the available gap, which is as little as \SI{3.5}{\centi\metre} and therefore provides a rather small lever arm.
From these studies, it was decided to build a telescope with six layers in two stations of three sensors each
and foresee configurations with 10$\times$\SI{10}{\centi\metre\squared} and 10$\times$\SI{20}{\centi\metre\squared} coverage,
requiring six or twelve sensors in total.
The key design requirements are summarized in Table~\ref{tab:requirements}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{l l r} \toprule
Resolution in bending plane & $\sigma_y$& $<$ \SI{10}{\micro\metre} \\
Resolution orthogonal to the bending plane & $\sigma_x$&$<$ \SI{1}{\milli\metre} \\
Area coverage & $A_{xy}$ & 10$\times$ \SI{10}{\centi\metre\squared} \\
Thickness of single station & d & $<$ \SI{3.5}{\centi\metre} \\ \bottomrule
\end{tabular}
\caption{\label{tab:requirements} The key design requirements for developing the {\textsc{Lycoris}}\xspace telescope.}
\end{center}
\end{table}
Moreover, this design also meets the need to operate the telescope outside of the {PCMAG}\xspace, where several user groups
have indicated their interest to make use of the large area coverage provided by {\textsc{Lycoris}}\xspace, e.g. from the \textsc{CALICE} Collaboration.
\subsection{Overall Telescope Design}
The resolution requirements described above imply a strip pitch of about \SI{25}{\micro\metre} or smaller.
Most silicon-strip sensors used in particle physics or nuclear physics experiments do not provide
the required spatial resolution and/or the area coverage requirements for this telescope.
For example, silicon-strip sensors developed for the HL-LHC mainly use a pitch of $\approx$~\SI{75}{\micro\metre}~\cite{Collaboration:2017mtb,Collaboration:2272264}.
The silicon-strip sensor designed for the main tracker of the {SiD}\xspace detector~\cite{Aihara:2009ad,Behnke:2013lya} for the ILC~\cite{Behnke:2013xla}
was found to have the needed properties and was chosen for {\textsc{Lycoris}}\xspace.
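The pitch figure follows from the binary-readout resolution of a strip sensor, $\sigma = p/\sqrt{12}$; a quick check of this standard formula (not specific to this paper):

```python
import math

# Binary (digital) readout resolution of a strip sensor: sigma = pitch / sqrt(12).
# A 25 um pitch gives ~7.2 um, meeting the 10 um requirement with margin.
def binary_resolution(pitch_um):
    return pitch_um / math.sqrt(12)

print(binary_resolution(25.0))  # ~7.2 um
```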
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{figures/hardware/lycoris_schematic_drawing.pdf}
\caption{\label{fig:hardware:lycorisoverall} The overall design of the {\textsc{Lycoris}}\xspace telescope.
The cassettes are indicated by the green dashes and host up to six sensor modules and two cassette boards.
The High Voltage(HV)/Low Voltage(LV) connections are indicated in red, while the connections for the DAQ are indicated in dark grey,
the Ethernet connections using TCP/IP in blue and the connections using the Kapton-flex in orange.}
\end{center}
\end{figure}
The overall architecture of {\textsc{Lycoris}}\xspace is shown in Figure~\ref{fig:hardware:lycorisoverall}. The two separate stations
are called ``cassettes'' and provide the mechanical support for up to six sensor planes in total. Three sensor units -- the modules -- are
controlled by one cassette board, which acts as the interface to the power supply crate (Wiener MPOD)
and the DAQ using the SLAC {KPiX}\xspace DAQ board. The cassette board can be daisy-chained to a secondary cassette board
to control six sensor planes. The DAQ board supports up to four cassette boards
and interfaces with a control PC via Ethernet and with the \textsc{AIDA}\xspace-TLU~\cite{Baesso_2019} via an HDMI connector.
Triggering is provided by a coincidence from a set of scintillators~\footnote{Usually two scintillators are placed before and two after the telescope
in the test beam; however, when the telescope is located inside a magnet, the scintillators are placed outside the magnet, directly in the beam path.} read out by photo-multiplier tubes (PMTs).
The individual telescope components are described in detail in Section~\ref{sec:hardware:module}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.30cm]{figures/installation/PC-MAG2.pdf}
\caption{\label{fig:installation:pcmag}CAD drawing of the mechanical support to integrate {\textsc{Lycoris}}\xspace telescope inside the {PCMAG}\xspace, where the rails marked in light-blue are used by the two Lycoris cassettes.
}
\end{center}
\end{figure}
The most stringent mechanical requirements are due to operating {\textsc{Lycoris}}\xspace inside the {PCMAG}\xspace solenoid.
The available gap between the current TPC and the inner wall of the {PCMAG}\xspace allows a cassette thickness of at most \SI{3.5}{\centi\metre}. The {PCMAG}\xspace provides a rail
system to install a DUT, so it can be moved in and out with high accuracy.
The cassettes are mounted on a separate rail system, so they can slide in and out independently of the DUT itself. This simplifies both
installation and operation. A CAD drawing of the actual setup is shown in Figure~\ref{fig:installation:pcmag}.
\subsection{The {\textsc{Lycoris}}\xspace Silicon Sensor Module\label{sec:hardware:module}}
The central building block of {\textsc{Lycoris}}\xspace is the silicon sensor module shown in Figure~\ref{fig:hardware:sidsensor}
which is composed of one {SiD}\xspace tracker sensor, two {KPiX}\xspace ASICs~\cite{kpix} and a Kapton-flex cable.
As the {KPiX}\xspace ASICs are bump-bonded directly on top of the silicon-sensor, there is no need for a hybrid PCB.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/hardware/IMG_20180122_113215353.jpg}
\caption{\label{fig:hardware:sidsensor} The {\textsc{Lycoris}}\xspace module composed of one {SiD}\xspace tracker sensor, two {KPiX}\xspace ASICs and a Kapton-flex readout cable.}
\end{center}
\end{figure}
\subsubsection{The {SiD}\xspace Tracker Sensor}\label{sec:hardware:sensor}
The sensor designed for the {SiD}\xspace tracker, called SiD-OUTER-SSSD 6972, is an n-type silicon micro-strip sensor
with \SI{25}{\micro\metre} pitch (read out on every other strip) manufactured by Hamamatsu~\cite{hamamatsu-www}. It provides a spatial resolution of
$\sim$\SI{7.2}{\micro\metre}~\footnote{This theoretical number is calculated using the simple approach of single-strip signals with binary readout.}.
The sensor is designed with two aluminum metallization layers, as shown in the schematic cross-section in Figure~\ref{fig:hardware:sensor},
which enables the hybrid-less design.
Table~\ref{tab:hardware:sensor} gives the detailed properties of this sensor.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/hardware/Sensor_layering.pdf}
\caption{\label{fig:hardware:sensor} Schematic cross-section of the {SiD}\xspace tracker sensor.}
\end{center}
\end{figure}
The second metallization layer routes signals of each strip to a bump pad that is bonded directly to the {KPiX}\xspace readout ASIC.
This layer is also used for powering the {KPiX}\xspace ASIC and transmitting other digital signals such as Command, Clock and Trigger.
This approach to a hybrid-less strip sensor design has been tested for the first-time in this project.
\begin{table}[htbp]
\centering
\begin{tabular}{l l}
& SiD-OUTER-SSSD 6972\\
\toprule
Sensor size & $93.53\times$\SI{93.53}{\square\milli\metre}\\
Thickness & 320$\pm$\SI{15}{\micro\metre} \\
Crystal Orientation & $\langle 100 \rangle$ \\
Wafer type & n-type\\
Strip implant & p+ type\\
Strip p+ width & \SI{8}{\micro\metre}\\
Strip Al width & \SI{9}{\micro\metre}\\
Readout Al width & \SI{4}{\micro\metre}\\
Strip bias resistance & 10--\SI{50}{\mega\ohm} per strip \\
Strip readout coupling & AC \\
Strip Pitch (readout) & \SI{25}{\micro\metre} (\SI{50}{\micro\metre})\\
Strip Multiplicity (readout) & 3679 (1840)\\
\bottomrule
\end{tabular}
\caption{The {SiD}\xspace tracker sensor (SiD-OUTER-SSSD 6972) properties.}
\label{tab:hardware:sensor}
\end{table}
\subsubsection{The {KPiX}\xspace ASIC}\label{sec:hardware:kpix}
The {KPiX}\xspace readout chip was designed by SLAC as a multi-purpose read-out ASIC for the {SiD}\xspace detector concept for the ILC. It was manufactured
using a \SI{250}{\nano\metre} mixed-mode CMOS process from TSMC~\cite{tsmc-www}.
The chip consists of a 1024 channel fully digital readout with a 13-bit ADC and a timing generator controlling the data acquisition.
Each channel consists of a charge amplifier, a shaper and a discriminator. The charge amplifier can be operated either in a high-gain
or low-gain mode and additionally offers a dynamically switchable gain range.
If the default range of \SI{400}{\femto\coulomb} is exceeded, which is detected by the range control circuitry, a \SI{10}{\pico\farad} capacitor is automatically added to
extend the range to \SI{10}{\pico\coulomb}.
For a bare {KPiX}\xspace, the noise floor is about 1000 electrons in the low-gain mode and 700 electrons in the high-gain mode.
For a fully assembled sensor, a noise floor of 5\% of a Minimum Ionizing Particle (MIP) signal in the \SI{320}{\micro\metre} thick silicon tracker sensor was achieved.
The dynamic range switching then extends the available range to 2500 MIPs.
Figure~\ref{fig:hardware:kpixchannel} shows the block diagram of a single {KPiX}\xspace readout channel.
Each channel has four sample and hold capacitors and four timing registers to permit four separate measurements
of the signal amplitude and the time of threshold crossing. This results in four memory buckets for each channel.
In normal operation, the charge amplifier is synchronously reset after each bunch-crossing.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{figures/hardware/kpix-single-channel.pdf}
\caption{\label{fig:hardware:kpixchannel} Simplified block diagram of a single {KPiX}\xspace readout channel (out of 1024 channels in total).
Shown are the charge-amplifier in light green with its gain control and dynamic range-control circuitry (shown in pink) and the shaper (green).
The main digital blocks are indicated in light and dark blue with the memory buckets shown in orange.}
\end{center}
\end{figure}
Each channel has a calibration system, leakage compensation, ``DC'' reset and polarity inversion.
{KPiX}\xspace supports both AC and DC-coupled signals and in DC mode the leakage is compensated by a servo circuit.
The total amount of leakage is determined without any signal being present during the data taking period.
The ``DC'' reset enables asynchronous operation, e.g.\ for cosmic-ray runs.
{KPiX}\xspace can switch the input polarity for use with e.g.\ GEM detectors~\cite{White:2010zza,Yu:2011ij}.
The noise floor is about 1000 electrons in low-gain mode (\SI{0.15}{\femto\coulomb}) and the maximum signal charge is \SI{10}{\pico\coulomb} with the full
dynamic range corresponding to 17 bits.
{KPiX}\xspace uses two separate power supply lines for its analog (AVDD) and digital (DVDD) circuits,
requiring nominally \SI{2.5}{\volt} and \SI{2.0}{\volt} respectively. {KPiX}\xspace has been designed to achieve an average power
consumption lower than \SI{20}{\micro\watt} per channel for applications at the ILC by means of power-pulsing:
the analog front end is powered only when beam is present, i.e.\ during the signal acquisition period.
This power-pulsing cycle is called the {KPiX}\xspace data acquisition cycle.
It starts with an external \textit{Acquisition Start Signal} followed by five different operational states and a shut-off idle phase.
The sequence of the five states and the idle phase is shown in Figure~\ref{fig:hardware:kpixtiming}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{figures/hardware/kpix-cycle.png}
\caption{\label{fig:hardware:kpixtiming}The five operational states of the {KPiX}\xspace data acquisition cycle, followed by the shut-off idle phase between beam bunches (relative durations of the states not to scale).}
\end{center}
\end{figure}
An externally provided DAQ board is able to adjust the period of the {KPiX}\xspace clock separately for each state, in \SI{5}{\nano\second} increments with a minimum period of
\SI{10}{\nano\second}. This allows a wider acquisition and pre-charge window to be achieved, while at the same time reducing the time spent on digitization and readout.
Table~\ref{tab:hardware:kpix-timing} shows the typical clock periods to operate {KPiX}\xspace at each state, including the five data acquisition operational states and the shut-off idle phase.
These unique properties of the {KPiX}\xspace ASIC make it ideal for the ILC environment. In addition to its use in the {SiD}\xspace
tracking system, as discussed in Section~\ref{sec:hardware:sensor}, {KPiX}\xspace is also the proposed readout strategy for the {SiD}\xspace Silicon-tungsten sampling electromagnetic
calorimeter~\cite{Steinhebel:2017qze,Barkeloo_2019}, as the many readout channels read out between beam bunch crossings through
a power-pulsing cycle allow for a highly granular and compact calorimeter design with a minimal cooling system.
\begin{table}[htbp]
\centering
\begin{tabular}{lr}
Clock & Typical clock period (ns) \\ \toprule
$T_{\mathrm{Acquisition}}$\xspace & 320\\
$T_{\mathrm{Digitization}}$\xspace & 50\\
$T_{\mathrm{Pre-Charge}}$\xspace & 6000\\
$T_{\mathrm{Readout}}$\xspace & 200\\
$T_{\mathrm{Idle}}$\xspace & 200\\
\bottomrule
\end{tabular}
\caption{The {KPiX}\xspace clock domains and the typical clock periods to operate {KPiX}\xspace at each state.}
\label{tab:hardware:kpix-timing}
\end{table}
The five operational states of the {KPiX}\xspace data acquisition cycle (shown in Figure~\ref{fig:hardware:kpixtiming}) are described in the following.
\begin{description}
\item[Start-up:] {KPiX}\xspace powers up after receiving the \textit{Acquisition Start Signal}.
The start-up phase duration is configurable via the \textit{TimeBunchClkDelay} setting, and during this phase {KPiX}\xspace uses the acquisition clock $T_{\mathrm{Acquisition}}$\xspace. {KPiX}\xspace operation during this
phase includes the ramp-up of the analog and digital currents to their operating values and the resetting of the internal {KPiX}\xspace registers and the Gray counter.
\begin{equation*}
t_{\mathrm{Start-up}} = T_{\mathrm{Acquisition}} \cdot \mathit{TimeBunchClkDelay}
\end{equation*}
\item[Acquisition:]
The acquisition phase is the period when the chip records data and the clock used is the acquisition clock $T_{\mathrm{Acquisition}}$\xspace.
During this phase, the chip is able to store up to four events per channel in the analog buffers and to timestamp each event using a 13-bit
counter called BunchClockCount (BCC), which increments every eight clock pulses and is cleared every cycle.
The length of this phase, $t_{\mathrm{Acquisition}}$, is configurable by setting the maximum BCC ($n_{BCC,max.}$) and is thus calculated as
\begin{equation*}
t_{\mathrm{Acquisition}} = n_{BCC,max.} \cdot 1\;BCC
= n_{BCC,max.} \cdot 8\; T_{\mathrm{Acquisition}} \,\,.
\end{equation*}
There is a small configurable pause between readout and digitization which is used to prepare the analog buffers for the upcoming digitization.
\item[Digitization:]
The analog information stored in the four storage capacitors of each of the 1024 channels is now digitized in
four cycles, digitizing all 1024 channels in parallel, one bucket per cycle. The digitization is handled by a Wilkinson ADC with a 13-bit resolution for each channel.
A current mirrored into each of the 1024 channels runs down the charge in the storage capacitor during each cycle.
The duration of the so-called pre-charge period is $2\cdot$ $T_{\mathrm{Pre-Charge}}$\xspace per individual storage capacitor, or $8\cdot$$T_{\mathrm{Pre-Charge}}$\xspace in total. During the pre-charge,
the analog bus connecting the storage capacitors to the Wilkinson converters is charged to a known level, since it can still contain charge from a previous event;
this has to be done before connecting the next storage capacitor. In addition, the read-out bus is pre-charged high, as the read-out can only pull the bus low.
A ramp-threshold discriminator then detects the transition through zero, and the content of a common Gray counter is then stored
in the corresponding memory bucket together with the corresponding timestamp determined by BCC and the range identifier bits.
The total time required for the digitization of a single bucket, excluding the time for the pre-charge is
\begin{equation*}
t_{\mathrm{Single-Digitization}}= (8192+18)\cdot T_{\mathrm{Digitization}}
\end{equation*}
with the total digitization time being given by
\begin{equation*}
t_{\mathrm{Digitization}}= 4 \cdot (8192+18)\cdot T_{\mathrm{Digitization}} \,\,.
\end{equation*}
\item[Readout:]
After digitization is complete, all data is sequentially read out using the $T_{\mathrm{Readout}}$\xspace clock.
A {KPiX}\xspace~{\textit{word}} is 13~bits wide, matching the resolution of the ADC and of the BCC.
The readout is organized in rows of 32 channels, which corresponds to 416 bits equivalent to $416\cdot$$T_{\mathrm{Readout}}$\xspace.
Reading out each shift register is preceded by the parallel loading
of all the shift registers which takes $300\cdot$$T_{\mathrm{Readout}}$\xspace.
In total the readout of one row takes $716 \cdot$ $T_{\mathrm{Readout}}$\xspace.
There are nine words per channel, and {KPiX}\xspace has 32 rows of 32 channels.
Overall the entire readout of {KPiX}\xspace takes $32\cdot9\cdot716\cdot$$T_{\mathrm{Readout}}$\xspace$= 206208 \cdot$$T_{\mathrm{Readout}}$\xspace.
\item[Power-down:]
After the readout has completed, the current is pulsed down while maintaining the supply voltages.
\end{description}
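Putting the counts above together with the typical clock periods of Table~\ref{tab:hardware:kpix-timing}, the length of each state can be sketched as follows (the maximum BCC value is an assumed example setting, not from the source):

```python
# Rough duration of the KPiX acquisition-cycle states, in nanoseconds,
# using the typical clock periods from the table and the counts in the text.
T_ACQ, T_DIG, T_PRE, T_READ = 320, 50, 6000, 200  # ns

n_bcc_max = 8191                       # assumed example setting (13-bit counter)
t_acquisition = n_bcc_max * 8 * T_ACQ  # 1 BCC tick = 8 acquisition clocks
t_precharge = 8 * T_PRE                # 2*T_pre per capacitor, 4 capacitors
t_digitization = 4 * (8192 + 18) * T_DIG
t_readout = 32 * 9 * 716 * T_READ      # 32 rows x 9 words x 716 clocks per row

for name, t in [("acquisition", t_acquisition), ("pre-charge", t_precharge),
                ("digitization", t_digitization), ("readout", t_readout)]:
    print(f"{name:>12s}: {t / 1e6:8.3f} ms")
```

With these numbers the readout dominates the cycle at roughly \SI{41}{\milli\second}, illustrating why the clock period of each state is adjustable separately.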
Additional details on the various {KPiX}\xspace trigger modes, the calibration circuitry and the interconnects are given below.
\paragraph{{KPiX}\xspace Trigger Modes}\label{sec:hardware:kpix:trigger}
{KPiX}\xspace can take data in two different operation modes, corresponding to two different triggering schemes:
\begin{description}
\item[Self triggering:] If the charge generated within a channel rises above a user-defined
threshold, it is recorded by {KPiX}\xspace.
\item[External triggering:] When an external trigger arrives, {KPiX}\xspace records the generated charge
from all channels simultaneously.
\end{description}
In both modes a maximum of four triggers per channel can be stored by {KPiX}\xspace.
Given the preferred mode of beam telescope operation and the usual need for an external trigger in DUT setups,
the external trigger mode is the preferred running mode of {KPiX}\xspace for the {\textsc{Lycoris}}\xspace telescope.
\paragraph{Calibration}\label{sec:hardware:kpix:calib}
In order to calibrate the ADC response, each of the 1024 readout channels is equipped with a calibration module which injects
a series of user-defined calibration amplitudes through an 8-bit DAC.
The ADC has a linear response to the different calibration values, and the fitted slope is consequently used in the following studies to convert the raw ADC value to
a charge in \si{\femto\coulomb}. The calibration data-taking is operated in the external triggering mode with a software-controlled start signal.
All four memory buckets of every channel can be calibrated individually to correct for potential pedestal and offset differences.
The first calibration pulse can be selected for a high calibration range, so that the system can be tested up to the full-scale range of \SI{10}{\pico\coulomb}.
For negative-polarity signals, an inverter is inserted after the charge amplifier so the polarity of the calibration signal is reversed as well.
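The ADC-to-charge conversion described above amounts to a straight-line fit of ADC counts against injected charge; a minimal illustration with synthetic data (the gain and pedestal values are invented for the example and are not KPiX measurements):

```python
# Illustrative linear calibration: fit ADC counts vs injected charge (fC)
# and use the fitted slope to convert raw ADC values back to charge.
# The gain (20 counts/fC) and pedestal (35 counts) are made-up example numbers.
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

charges_fc = [10.0, 50.0, 100.0, 200.0, 400.0]  # injected calibration charges
adc = [20.0 * q + 35.0 for q in charges_fc]     # synthetic linear ADC response
slope, pedestal = fit_line(charges_fc, adc)
print(slope, pedestal)                           # recovers the assumed 20.0 and 35.0
charge_of = lambda a: (a - pedestal) / slope     # ADC counts -> charge in fC
```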
\paragraph{Interconnects}
{KPiX}\xspace has been designed to be bump-bonded to various Silicon sensors. The eutectic Sn-Pb bumps on a $200 \times $\SI{500}{\micro\metre} lattice are placed in wells on each {KPiX}\xspace pad.
Figure~\ref{fig:hardware:kpix:bumpbond-map} (left) shows a schematic diagram of the {KPiX}\xspace bump bonding array,
where the center bump matrix (blue/red) provides connections to the {KPiX}\xspace readout channels while the two outermost columns (black/green)
route the AVDD/DVDD power, the digital input signals including the clock, control, external trigger and the outgoing digital data stream.
Two {KPiX}\xspace chips are necessary to read out the 1840 strips of one {SiD}\xspace tracker sensor, but only 920 (blue) out of the 1024 readout channels of a single {KPiX}\xspace are connected
to a strip while the others (red) are bonded to the sensor bump pads and are left floating.
The bump pads on each side of {KPiX}\xspace are designed with the same functionality.
The {SiD}\xspace tracker sensor is designed to connect only one side (green) of the {KPiX}\xspace power/signal bump-pads, thereby the other side (black) is left open.
Figure~\ref{fig:hardware:kpix:bumpbond-map} (right) shows a microscope photo of the pads on the sensor surface for bump-bonding and wire-bonding,
where the peripheral bump-bonding pads providing {KPiX}\xspace power and signals are routed to the wire-bonding pads via the second metallization layer.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4cm]{figures/hardware/KPIX_bump_bond_connections.pdf}
\includegraphics[height=4cm]{figures/hardware/bump-pads-on-sensor_new.png}
\caption{\label{fig:hardware:kpix:bumpbond-map} A schematic diagram of {KPiX}\xspace bump-bonding array (left) with the 920 connected channels in (blue).
The two outermost columns (black/green) on each side route the AVDD/DVDD power to {KPiX}\xspace, as well as exchange digital signals.
A microscope photo (right) of a section of the pad grid on the sensor surface.
The bump-bonding pads (blue) connect to the used Power/Signal column (black on left sketch) of one {KPiX}\xspace and some of its neighboring pads.
The wire-bonding pads (yellow) connect the Power/Signal column through the sensor's second metallization layer to outside (via Kapton-Flex).
}
\end{center}
\end{figure}
\subsubsection{The Kapton-Flex Cable}
The Kapton-Flex cable shown in Figure~\ref{fig:hardware:flex-cable} is designed to route both HV and LV power
and digital traces between the sensor module and the DAQ system.
Both the HV and LV lines are filtered again on the Kapton-Flex to further reduce the noise from the power supplies.
It feeds the sensor bias voltage through wire-bonding from the HV bonding pad to the sensor bias ring,
and the bias return is made by connecting the HV return to the sensor back plane using conductive silver epoxy paste.
For the two {KPiX}\xspace chips on the same sensor, their low voltage traces (AVDD and DVDD) as well as their grounds, are tied together.
For the same {KPiX}\xspace, the AGND and DGND are tied together through a \SI{0}{\ohm} resistor.
The bias voltage is AC-coupled to the {KPiX}\xspace AVDD through a bypass capacitor; {KPiX}\xspace measures
with respect to its analogue voltage instead of its analogue ground to avoid any mismeasurement caused by grounding differences.
The clock and the trigger signals are Low Voltage Differential Signals (LVDS), whose termination resistors are placed on this Kapton-Flex cable.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/hardware/kapton-flex_new.jpg}
\caption{\label{fig:hardware:flex-cable}The Kapton-Flex cable used on the {\textsc{Lycoris}}\xspace module.}
\end{center}
\end{figure}
\subsubsection{Module Assembly}\label{sec:hardware:moduleassembly}
Assembling a {\textsc{Lycoris}}\xspace module is a four-step process:
\begin{enumerate}
\item The bump-bonding between the two flipped {KPiX}\xspace chips and the silicon sensor;
\item Gluing the Kapton-Flex cable onto the silicon sensor;
\item Connecting the pads on Kapton-Flex cable with the corresponding pads on the silicon sensor using wire-bonding;
\item Gluing the HV return pad on the Kapton-Flex cable to the sensor back plane with silver epoxy.
\end{enumerate}
Both bump-bonding and wire-bonding are well-established industrial processes with excellent track records in the assembly of
sensor modules for the LHC experiments.
The bump-bonding was performed at Fraunhofer IZM; the bumps had been pre-applied to the {KPiX}\xspace by its manufacturer, and the under-bump metallization had likewise been pre-applied to the sensor by its manufacturer. The wire-bonding was performed in-house at DESY.
Placing a Kapton-Flex cable accurately on the active sensor surface using glue, however, is a highly customized process.
Therefore, a dedicated gluing tool was designed to glue the Kapton-Flex to the sensor (see Figure~\ref{fig:hardware:gluetool}).
The non-conductive epoxy adhesive \texttt{Araldite 2011}, which requires a 12-hour cure, was chosen based on ATLAS experience~\cite{Abdesselam:2006wt}, and
the gluing procedure has been validated to ensure it meets the following requirements:
\begin{itemize}
\item No damage to the sensor when applying an even pressure on the glue area to obtain an optimal cure;
\item The cable is positioned precisely to the area given by the edge of the three bonding areas and stays steady during the cure;
\item No excess glue spills over the wire-bonding pads, which connect directly to the second metallization layer.
\end{itemize}
Among the 29 sensors produced, 24 have been successfully assembled; two of these are not yet fully assembled, awaiting future upgrades of the flex cable. The remaining five sensors could not be assembled, for different reasons: one sensor was ground down to validate the bump-bonding quality, two could not be wire-bonded due to aged bonding pads on the flex cable, and the last two were damaged by excessive pressure during the gluing process.
The quality of the 24 assembled sensor modules is further controlled by IV measurements. All sensors show the typical behaviour presented in Section~\ref{sec:performance:calib}, except for two sensors that can no longer be depleted~\footnote{The cause has not been identified despite several inspections; one plausible explanation is micro-scale damage to the sensor bulk introduced during the gluing step.}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{figures/hardware/Gluing_tool_explanation.png}
\caption{\label{fig:hardware:gluetool}The custom gluing tool used to assemble the {\textsc{Lycoris}}\xspace modules.}
\end{center}
\end{figure}
\subsection{The Cassette}
The {\textsc{Lycoris}}\xspace cassette, which holds up to two stacks of three sensors, consists of an aluminum frame and two cassette boards and is
covered by two carbon fiber windows as shown in Figure~\ref{fig:hardware:cassette}.
The cassette is \SI{3.3}{\centi\metre} tall, \SI{12.1}{\centi\metre} wide and \SI{32.1}{\centi\metre} deep.
Each individual sensor module is glued to a frame which is designed together with the cassette so that the frame can be
installed in three different orientations ($+2$, $-2$ and $0$ degrees).
The frame is made of Torlon, which is non-conductive but comparable in strength to aluminum, so it can be milled using standard techniques.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.5cm]{figures/hardware/Cassette-CAD_exploded_view.png}
\includegraphics[height=4.5cm]{figures/hardware/Cassette_with_sizes.pdf}
\caption{\label{fig:hardware:cassette}
The Cassette: An exploded view of one {\textsc{Lycoris}}\xspace cassette holding two stacks of three sensors and two cassette boards (left) and a photo of the cassette frame with
dimensions (\SI{3.3}{\centi\metre} tall, \SI{12.1}{\centi\metre} wide and \SI{32.1}{\centi\metre} deep) (right).}
\end{center}
\end{figure}
\subsubsection{The Cassette Board}
The cassette board serves as a transition board between the DAQ board and the modules. It is designed to drive up to three sensor modules in a cassette.
It distributes Low Voltage (LV) power to the connected {KPiX}\xspace chips through linear regulators with an input voltage of \SI{3.0}{\volt},
connects each High Voltage (HV) power line from the power supply to each connected sensor,
and distributes all digital signals between the {KPiX}\xspace chips and the DAQ board.
Each output line of the linear regulator is filtered separately for noise.
The analogue power AVDD (\SI{2.5}{\volt}) is regulated separately for the two {KPiX}\xspace chips on the same sensor,
while the digital power DVDD (\SI{2.0}{\volt}) is shared by all {KPiX}\xspace ASICs from a single regulator.
Two versions of the cassette board exist and can be daisy-chained via a custom Kapton-Flex cable.
One is installed at the front of the cassette and acts as the primary board while the other is installed at the other end of the cassette and serves as the secondary.
They drive the connected sensor modules in the same way, but the primary cassette board has two extra functions.
First, its outer side as shown in Figure~\ref{fig:hardware:boards} (left)
hosts the interfaces with the DAQ board and the power supply for the sensors.
Second, it has one {I$^2$C}\xspace temperature and humidity sensor equipped on its inner side as shown in Figure~\ref{fig:hardware:boards}
(middle) to monitor the environment inside the cassette.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm]{figures/hardware/master_cassette_board_outerside_with_caption.jpg}
\includegraphics[height=6cm]{figures/hardware/master_cassette_board_innerside_with_caption.jpg}
\includegraphics[height=6cm]{figures/hardware/DAQBoard_with_caption.jpg}
\caption{\label{fig:hardware:boards}The primary cassette board (outer side - left and inner side - middle) and the DAQ board used by {\textsc{Lycoris}}\xspace (right).}
\end{center}
\end{figure}
\subsection{Trigger and DAQ Hardware}
The DAQ board used by {\textsc{Lycoris}}\xspace is shown in Figure~\ref{fig:hardware:boards} (right).
It requires \SI{12}{\volt} / \SI{0.5}{\ampere} for operation.
It connects to the user control PC through an SFP port, allowing flexibility in the physical interface used.
Currently, an RJ-45 module is used for easy connection to standard 1000Base-T networking equipment, but an
upgrade to optical Ethernet or a custom optical interface remains possible in the future if needed.
The board is equipped with a Xilinx Kintex7 FPGA to provide I/O interfaces and event building.
It has been designed with the following features:
\begin{itemize}
\item All signals exchanged with the connected cassette boards pass through isolation buffers, so that the DAQ board and
all the connected {KPiX}\xspace chips are electrically isolated;
\item Multiple input connections to provide acquisition start signals and external triggers for the {KPiX}\xspace data taking: two LEMO ports, two BNC ports and one HDMI port;
\item An Ethernet interface using SFP+ plugins to connect to a PC running the DAQ software.
\end{itemize}
{\textsc{Lycoris}}\xspace is designed to synchronize with other devices using the \textsc{AIDA-2020}\xspace Trigger Logic Unit (TLU).
This has been implemented by correlating the timestamps of triggers recorded by {\textsc{Lycoris}}\xspace and by the \textsc{AIDA-2020}\xspace TLU.
The DAQ board is able to record a global timestamp on each incoming trigger using a 64-bit counter which is incremented with
every clock pulse of the \SI{200}{\mega\hertz} FPGA system clock and is reset by an external start signal at the beginning of each data run.
In order to correlate the {\textsc{Lycoris}}\xspace trigger timestamps to the TLU trigger timestamps,
the DAQ board can use the \SI{40}{\mega\hertz} TLU clock to generate its system clock
and the 64-bit counter can be reset by the start signal $T_0$ sent out by the TLU.
Both the TLU clock and the $T_0$ signal are received through the HDMI input.
Besides the clock and the trigger signals, the TLU is also able to send out a so-called shutter signal to indicate the
presence of the beam. This signal is generated with a configurable delay from the $E_{min}$ signal of the {\mbox{DESY II}}\xspace
accelerator (see Section~\ref{sec:performance:desyii} for further details)
and serves as the {KPiX}\xspace \textit{Acquisition Start Signal}\xspace.
The latency between the trigger timestamps registered by {\textsc{Lycoris}}\xspace and by the TLU is
important for the offline analysis in order to synchronize events between
different devices. This latency has been measured and verified to be very
stable for a given port of a given TLU unit. The variation among ports of
different TLU units is very small, resulting in a typical latency of \SI{105}{\nano\second}~\footnote{If the trigger arrives at the very edge of a clock cycle, the latency is extended by one additional clock cycle (\SI{25}{\nano\second}).}.
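The timestamp bookkeeping described above can be illustrated with a short sketch (Python, illustrative only). The \SI{200}{\mega\hertz} clock, the typical \SI{105}{\nano\second} latency and the one-cycle (\SI{25}{\nano\second}) jitter are taken from the text; the function names and the matching tolerance are assumptions made for the example.

```python
# Illustrative sketch of the offline trigger matching: Lycoris timestamps are
# 64-bit counts of the 200 MHz system clock (5 ns per tick).  The ~105 ns
# fixed latency w.r.t. the TLU and the 25 ns one-cycle jitter are quoted in
# the text; the names and the tolerance below are invented.

TICK_NS = 1e9 / 200e6   # 5 ns per FPGA system-clock cycle
LATENCY_NS = 105.0      # typical measured Lycoris-TLU latency

def lycoris_to_ns(counter):
    """Convert a 64-bit counter value to nanoseconds since T0."""
    return counter * TICK_NS

def match_triggers(lycoris_counts, tlu_times_ns, tol_ns=30.0):
    """Pair each Lycoris trigger with the closest TLU trigger after
    correcting for the fixed latency; tol_ns absorbs the one-cycle
    (25 ns) extension mentioned in the footnote."""
    pairs = []
    for c in lycoris_counts:
        t = lycoris_to_ns(c) - LATENCY_NS
        best = min(tlu_times_ns, key=lambda x: abs(x - t))
        if abs(best - t) <= tol_ns:
            pairs.append((c, best))
    return pairs
```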
\subsubsection{DAQ Board FPGA Firmware}
The FPGA firmware serves as a bridge between the DAQ software running on
commodity PC hardware, and the interfaces required by the {KPiX}\xspace ASICs and the
TLU timing system. The firmware allows for configuration and acquisition control
of up to 24 attached {KPiX}\xspace chips. Configuration registers allow for acquisition
to be driven by the TLU interface or a combination of DAQ software commands and
TTL inputs. The latter configuration is typically used for taking baseline and
calibration data. Figure~\ref{fig:hardware:firmware} shows the block diagram of
the FPGA firmware, illustrating the logic behind the digital traces between the DAQ
board and the connected {KPiX}\xspace chips.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/hardware/KPIX-Firmware.pdf}
\caption{\label{fig:hardware:firmware}Block diagram of the DAQ firmware design.}
\end{center}
\end{figure}%
The interface to the DAQ software consists of an Ethernet/IP/UDP stack, with an
additional Reliable UDP (rUDP) layer on top to ensure reliable transmission of
UDP packets. This stack alone accounts for over half of the firmware
logic utilization of the design. The firmware uses an AXI-Lite bus internally
for handling register transactions from software.
The {KPiX}\xspace ASIC digital interface consists of four signals: CLK, CMD, RSP, TRIG.
Register read and write commands are sent serially on the CMD line. Register
read responses arrive from the {KPiX}\xspace on the RSP line. Requests to start a new
acquisition cycle are also sent on the CMD line, and the acquired data arrives
serially on the RSP line once the acquisition is complete. Register access is
not allowed during a {KPiX}\xspace acquisition. The firmware logic handles the
serialization and deserialization of CMD and RSP, respectively. It also
translates register access requests and acquisition start commands into the
serial format expected by {KPiX}\xspace. During acquisition data readout, it reformats
the serial data stream into 64-bit ``samples'', with each sample representing the
ADC and timestamp data for a single channel/bucket in the system.
The firmware allows for the clock speed to change dynamically during each phase
of the {KPiX}\xspace acquisition cycle. To accomplish this, the firmware runs a local
copy of the {KPiX}\xspace digital logic, so that it can snoop on the current state of the
{KPiX}\xspace. All {KPiX}\xspace chips in the system run in lockstep with each other, so the state
of the local {KPiX}\xspace is the same as the chips on the sensors. The clock period to
use during each phase is set via configuration registers prior to the start of
each run.
For each acquisition, the Event Builder combines the 64-bit samples from each of the (up to) 24 attached {KPiX}\xspace ASICs into a single large data frame.
Each {KPiX}\xspace has 1024 channels, and each channel has four buckets, so a data frame from an acquisition may contain up to 1024$\times$4$\times$24 samples. The
data frame also contains a 4 $\times$ 64-bit word header containing timestamp and run count metadata.
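The maximum frame size follows directly from the numbers above; a minimal sketch (Python, illustrative) works them out:

```python
# Worked numbers for the maximum event-builder frame, using only figures
# quoted in the text: up to 24 KPiX chips, 1024 channels x 4 buckets each,
# one 64-bit sample per channel/bucket, plus a 4 x 64-bit word header.

CHANNELS = 1024
BUCKETS = 4
MAX_KPIX = 24
HEADER_WORDS = 4
WORD_BYTES = 8  # 64 bits

max_samples = CHANNELS * BUCKETS * MAX_KPIX
max_frame_bytes = (HEADER_WORDS + max_samples) * WORD_BYTES
```

A full frame thus holds 98304 samples and occupies about 768\,KiB.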
\subsection{Power Supplies}
The power supply system used is a commercially available MPOD Mini system from Wiener which is mounted into a 19-inch rack.
The \texttt{WIENER LV MPV8016I} provides all the LV lines (DAQ, AVDD and DVDD), while the HV is provided by a \texttt{WIENER/ISEQ EHS F205x\_106HV}.
This system can be remote-controlled via Ethernet using the SNMP protocol, thus providing online monitoring of the current and the voltage of each power line.
The LV lines from the power supply are common to all sensors in a cassette.
They connect to a set of linear regulators on the cassette board and are then distributed to the sensors.
Two LV lines serve each cassette board, cleanly separating the analog and digital power provided to the {KPiX}\xspace chips.
Each sensor has its own dedicated HV line, which allows the depletion voltage to be fine-tuned for each sensor individually.
\section{Possibilities for User DUT integration}\label{sec:dut}
The design of {\textsc{Lycoris}}\xspace was driven by the requirements of operating it inside the {PCMAG}\xspace at the {\DESYII Test Beam Facility}\xspace;
however, much care was taken to keep the design as flexible as possible, so that {\textsc{Lycoris}}\xspace can also be operated at other test beam facilities world-wide, if there is a user need.
Provided there is suitable mechanical support for the cassettes, the only requirement on the facility is a suitable signal from which an \textit{Acquisition Start Signal}\xspace can be derived, in order to start up {\textsc{Lycoris}}\xspace.
The {\textsc{Lycoris}}\xspace telescope can receive an external clock and timestamp its events using this clock as a time base.
It can therefore synchronize to any device that is able to issue or receive a common clock and timestamp its events based on this clock.
Integrating a user DUT can thus be realized in the following two ways:
\begin{itemize}
\item one is to synchronize the user DUT with {\textsc{Lycoris}}\xspace through an external trigger source, with which {\textsc{Lycoris}}\xspace is able to synchronize;
\item the other is to synchronize the user DUT with {\textsc{Lycoris}}\xspace directly.
\end{itemize}
Both methods mentioned above can be used to operate {\textsc{Lycoris}}\xspace in either self-triggered or externally triggered mode.
The {\textsc{Lycoris}}\xspace telescope has been designed from the beginning with a user DUT interface, that is fully compatible with the \textsc{AIDA}\xspace TLU and \textsc{EUDAQ2}\xspace, which simplifies the
integration of a user DUT. By using this interface, the data taking sequencing is managed by \textsc{EUDAQ2}\xspace and the synchronization is realized with the \textsc{AIDA}\xspace TLU.
Therefore, the user DUT needs to be integrated with the \textsc{AIDA}\xspace TLU, which includes receiving the TLU triggers and synchronizing to the TLU events
by receiving the trigger ID or timestamping the triggers using the TLU common clock.
It is also possible to integrate a DUT differently instead of using the \textsc{AIDA}\xspace TLU compatible interface.
This can be done by synchronizing with {\textsc{Lycoris}}\xspace by providing a common clock signal through the NIM or the BNC port on the {\textsc{Lycoris}}\xspace DAQ board.
Furthermore, a common external trigger can be fed through one of these ports as well.
This of course requires additional engineering efforts and extra tests to verify the synchronization status and thus will vary case by case.
Therefore, in this paper only basic guidelines are given for such an integration scheme:
\begin{itemize}
\item {\textsc{Lycoris}}\xspace needs to receive a common clock signal with a clock counter start signal $T_0$, and the clock has to be an integer fraction or multiple of \SI{200}{\mega\hertz};
\item If external triggers are to be used, it is recommended, though not strictly necessary, to veto all triggers outside the {KPiX}\xspace active data-taking window to simplify the synchronization.
\end{itemize}
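The first guideline can be captured in a small sketch (Python, illustrative; the helper name is invented):

```python
# Sketch of the clocking guideline: a DUT-provided clock must be an integer
# fraction or an integer multiple of the 200 MHz Lycoris system clock.
# The helper name and usage are purely illustrative.

BASE_HZ = 200_000_000  # Lycoris FPGA system clock

def clock_compatible(f_hz):
    """True if f_hz is an integer fraction or multiple of 200 MHz."""
    if f_hz <= 0:
        return False
    if f_hz >= BASE_HZ:
        return f_hz % BASE_HZ == 0
    return BASE_HZ % f_hz == 0
```

For example, the \SI{40}{\mega\hertz} TLU clock passes this check, since $200/40=5$.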
\section{Introduction}\label{S:1}
The classical result of Aharoni states that every separable
metric space (in particular every separable Banach space) can be bi-Lipschitz
embedded (the definition is given below) into $c_0$.
The natural problem of embeddings of metric spaces into $c_0(\Gamma)$,
for an arbitrary set $\Gamma$, has been
treated by several authors, in particular Pelant and Swift. The
characterizations that they obtained,
and which play a crucial role in our argument, are described below.
Our main interest, motivated by
some problems posed in \cite{GLZ},
lies in the case of embeddings of Banach spaces into $c_0(\Gamma)$.
We now state the main results of this paper. We first define the following
cardinal numbers inductively.
We put $\lambda_0=\omega_0$, and, assuming for $n\in\mathbb{N}_0$, $\lambda_n$ has
been defined, we put $\lambda_{n+1}=2^{\lambda_n}$.
Then we let
\begin{equation}\label{def-lam}
\lambda=\lim_{n\to\infty} \lambda_n.
\end{equation}
It is clear that,
assuming the generalized continuum hypothesis (GCH), $\lambda=\aleph_\omega$.
\begin{thmA}
If $X$ is a Banach space with density
$\text{dens}(X)\ge\lambda$, which admits a coarse (or uniform)
embedding into any $c_0(\Gamma)$, then $X$ fails to have nontrivial
cotype, i.e. $X$ contains $\ell_\infty^n$ $C$-uniformly for some
$C>1$ (equivalently, every $C>1$).
\end{thmA}
Our method of proof gives a much stronger result for Banach
spaces with a symmetric basis. Namely, under the assumptions
of Theorem A, such spaces are linearly isomorphic to $c_0(\Gamma)$
(Theorem \ref{sym-case}).
Theorem A will follow from the following combinatorial result
which is of independent interest.
\begin{thmB} Assume that $\Lambda$ is a set whose cardinality is at least
$\lambda$, $n\in\mathbb{N}$, and
$\sigma:[\Lambda]^n\to \mathcal C$ is a map into an arbitrary set $\mathcal C$.
Then (at least) one of the following conditions holds:
\begin{enumerate}
\item There is a sequence $(F_j)_{j=1}^\infty$ of pairwise
disjoint elements of $[\Lambda]^n$, so that
$\sigma(F_i)=\sigma(F_j)$, for all $i,j\in\mathbb{N}$.
\item There is an $F\in[\Lambda]^{n-1}$ so that
$\sigma\big(\big\{ F\cup\{\gamma\}: \gamma\in\Lambda\setminus F\big\}\big)$ is infinite.
\end{enumerate}
\end{thmB}
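To put Theorem B into perspective, note that its case $n=1$ already holds for every infinite set $\Lambda$; the following elementary observation (a sketch, using only the notation above) records this, so the substance of the theorem lies in the cases $n\ge 2$.

```latex
% Elementary case n=1 of Theorem B (holds for every infinite \Lambda):
For $n=1$ one has $[\Lambda]^{0}=\{\emptyset\}$, so if condition (2) fails,
the image $\sigma\big(\big\{\{\gamma\}:\gamma\in\Lambda\big\}\big)$ is finite.
Since $\Lambda$ is infinite, the pigeonhole principle yields a value $c$ with
$\sigma(\{\gamma\})=c$ for infinitely many $\gamma\in\Lambda$; these singletons
are pairwise disjoint, so condition (1) holds.
```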
The above Theorem B was previously deduced in \cite{PHK} from a combinatorial result
of Baumgartner, provided $\Lambda$ is a weakly compact cardinal number
(whose existence is not provable in ZFC, as such cardinals are inaccessible; see \cite{Je}, pp.~52, 325).
The authors of \cite{PHK} pose the question of whether assuming that $\Lambda$ is
uncountable is sufficient in Theorem B.
Theorem B is used in order to obtain a scattered compact set $K$
of height $\omega_0$, such that $C(K)$
does not uniformly embed into $c_0(\Gamma)$. It is easy to check that
our version of Theorem B provides a ZFC example of such a $C(K)$ space.
It is further shown in \cite{PHK} that the space $C[0,\omega_1]$
does not uniformly embed into any $c_0(\Gamma)$.
Let us point out that a special case of Theorem A
was obtained by Pelant and Rodl \cite{PR}, namely it was shown there that
$\ell_p(\lambda), 1\le p<\infty$,
spaces (which are well known to have nontrivial cotype)
do not uniformly embed into any $c_0(\Gamma)$.
The paper is organized as follows. In Section \ref{S:2} we recall
Pelant's \cite{Pe,PHK} and Swift's \cite{Sw} conditions for
Lipschitz, uniform, and coarse embeddability into $c_0(\Gamma)$.
In Section \ref{S:3} we provide a proof for Theorem B.
Finally, in Section \ref{S:4}, we provide a proof
of Theorem A as well as the symmetric version of the result.
All set theoretic concepts and results used in our note can be found in
\cite{Je}, whereas for facts concerning nonseparable Banach spaces
\cite{HMVZ} can be consulted.
\section[Pelant's and Swift's criteria]{Pelant's and Swift's criteria for Lipschitz, uniform, and coarse embeddability into $c_0(\Gamma)$}\label{S:2}
In this section we recall some of the notions and results by
Pelant \cite{Pe,PHK} and Swift \cite{Sw} about embeddings into $c_0(\Gamma)$.
For a metric space $(M,d)$ a {\em cover} is a set $\mathcal U$ of subsets of $M$
such that $M=\bigcup_{U\in\mathcal U} U$.
A cover $\mathcal U$ of $M$ is called {\em uniform} if
there is an $r>0$ so that for all $x\in M$ there is a $U\in\mathcal U$,
so that $B_r(x)=\{x'\in M: d(x',x)<r\}\subset U$.
It is called {\em uniformly bounded} if the diameters of the $U\in\mathcal U$
are uniformly bounded, and it is called {\em point finite} if every $x\in M$
lies in only finitely many $U\in\mathcal U$.
A cover $\mathcal V$ of $M$ is a {\em refinement } of a cover $\mathcal U$,
if for every $V\in\mathcal V$
there is a $U\in\mathcal U$, for which $V\subset U$.
\begin{defin}{\rm\cite{PHK}} A metric space $(M,d)$ is said to have the {\em Uniform Stone Property} if every uniform cover $\mathcal U$ of $M$ has a point finite uniform refinement.
\end{defin}
\begin{defin}{\rm\cite{Sw}} A metric space $(M,d)$ is said to have the {\em Coarse Stone Property} if every bounded cover is the refinement of a point finite uniformly bounded cover.
\end{defin}
\begin{defin} Let $(M_1,d_1)$ and $(M_2,d_2)$ be two metric spaces. For a map $f:M_1\to M_2$ we define the {\em modulus of uniform continuity} $w_f:[0,\infty)\to [0,\infty]$, and the
{\em modulus of expansion} $\rho_f:[0,\infty)\to [0,\infty]$ as follows
\begin{align*}
w_f(t)&=\sup\big\{ d_2\big(f(x),f(y)\big): x,y\in M_1,\, d_1(x,y)\le t\big\} \text{ and } \\
\rho_f(t)&=\inf\big\{ d_2\big(f(x),f(y)\big): x,y\in M_1,\, d_1(x,y)\ge t\big\}.
\end{align*}
The map $f$ is called {\em uniformly continuous} if
$\lim_{t\to 0} w_f(t)=0$, and it is called a {\em uniform embedding}
if moreover $\rho_f(t)>0$ for every $t>0$. It is called {\em coarse} if
$w_f(t)<\infty$, for all $0<t<\infty$ and it is called a {\em coarse embedding},
if $\lim_{t\to\infty}\rho_f(t)=\infty$.
The map $f$ is called {\em Lipschitz continuous} if
$$\text{{\rm Lip}}(f)=\sup_{x\not=y} \frac{d_2(f(x),f(y))}{d_1(x,y)} <\infty,$$
and a bi-Lipschitz embedding, if $f$ is injective and $\text{{\rm Lip}}(f^{-1})$ is also finite.
\end{defin}
The following theorem collects results from \cite{PHK} (for (i)$\iff$(ii)) and \cite{Sw} (for (ii)$\iff$(iii)$\iff$(iv)$\iff$(v)).
\begin{thm} For a Banach space $X$ the following properties are equivalent.
\begin{enumerate}
\item[(i)] $X$ has the uniform Stone Property.
\item[(ii)] $X$ is uniformly embeddable into $c_0(\Gamma)$, for some
set $\Gamma$.
\item[(iii)] $X$ has the coarse Stone Property.
\item[(iv)] $X$ is coarsely embeddable into $c_0(\Gamma)$, for some
set $\Gamma$.
\item[(v)] $X$ is bi-Lipschitzly embeddable into $c_0(\Gamma)$, for some
set $\Gamma$.
\end{enumerate}
\end{thm}
It is easy to see, and was noted in \cite{PHK,Sw}, that the uniform
Stone property and the coarse Stone property are inherited by subspaces.
The equivalence (i)$\iff$(ii) was used in \cite{PHK} to show that
$C[0,\omega_1]$ does not uniformly embed in any $c_0(\Gamma)$. It was also used
to prove that certain other $C(K)$-spaces do not uniformly embed into
$c_0(\Gamma)$:
Let $\Lambda$ be any set and denote for $n\in\mathbb{N}$ by $[\Lambda]^{\le n}$ and $[\Lambda]^n$ the subsets of $\Lambda$ which have cardinality at most $n$ and
exactly $n$, respectively. Endow $[\Lambda]^{\le n}$ with the restriction of the product topology on $\{0,1\}^\Lambda$ (by identifying each set with its characteristic function).
Then define $K_\Lambda$ to be the one-point Alexandroff compactification of the topological sum of the spaces $[\Lambda]^{\le n}$, $n\in\mathbb{N}$.
It was shown in \cite{PHK} that if $\Lambda$ satisfies Theorem B
then
$C(K_\Lambda)$ is not uniformly Stone and thus does not embed uniformly into
any $c_0(\Gamma)$.
\section{A combinatorial argument}\label{S:3}
We start by introducing property $P(\alpha)$ for a cardinal
$\alpha$ as follows.
\begin{align}
&\text{For every $n\in\mathbb{N}$ and any map $\sigma:[\alpha]^n\to \mathcal C$, $\mathcal C$ being an arbitrary set,} \tag{$P(\alpha)$}\\
&\text{(at least) one of the following two conditions holds:} \notag \\
&\text{There is a sequence $(F_j)$ of pairwise disjoint elements of $[\alpha]^n$, with $\sigma(F_i)\!=\!\sigma(F_j)$,} \label{E:3.1}\\
&\text{for any $i,j\in\mathbb{N}$.}\notag\\
&\text{There is an $F\in [\alpha]^{n-1}$, so that $\sigma\big(\big\{ F\cup\{\gamma\}: \gamma\in \alpha\setminus F\big\}\big)$ is infinite.} \label{E:3.2}
\end{align}
As remarked in Section \ref{S:1}, if $\kappa$ is an uncountable weakly
compact cardinal number, then $P(\kappa )$ holds. But the existence of
weakly compact
cardinal numbers requires further set theoretic axioms, beyond ZFC \cite{Je}.
In
\cite[Question 3]{PHK} the authors ask if $P(\omega_1)$ is true.
\begin{thm}\label{T:3.1}For $\lambda$ defined by \eqref{def-lam},
$P(\lambda)$ holds. \end{thm}
For our proof of Theorem \ref{T:3.1} it will be more convenient to reformulate it into a statement about $n$-tuples, instead of sets of cardinality $n$.
We will first introduce some notation.
Let $n\in\mathbb{N}$ and $\Gamma_1$, $\Gamma_2,\ldots \Gamma_n$ be sets of infinite cardinality, and put $\Gamma=\prod_{i=1}^n \Gamma_i$. For $a\in \Gamma$ and $1\le i\le n$
we denote the $i$-the coordinate of $a$ by $a(i)$. We say that two points $a$ and $b$ in $\Gamma$ are {\em diagonal}, if $a(i)\not=b(i)$, for all $i\in\{1,2,\ldots,n\}$.
Let $a\in \Gamma$.
For $i\in\{1,2,\ldots,n\}$ we call the set
$$H(a,i)=\big\{(b_1,b_2,\ldots, b_{i-1}, a(i),b_{i+1}, \ldots ,b_{n}): b_j\in \Gamma_j,\text{ for $j\in\{1,2,\ldots,n\}\setminus\{i\}$}\big\},$$
the {\em hyperplane through the point $a$ orthogonal to direction $i$}.
We call the set
$$L(a,i)=\big\{(a(1),\ldots,a(i-1),b_i,a(i+1), \ldots, a(n)): b_i\in \Gamma_i\big\},$$
the {\em line through the point $a$ in direction $i$}.
For a cardinal number $\beta$, we define recursively the following sequence of cardinal numbers
$\big(\exp_+(\beta,n):n\in\mathbb{N}_0\big)$: $\exp_+(\beta,0)=\beta$, and, assuming $\exp_+(\beta,n)$ has been defined for some $n\in\mathbb{N}_0$, we put
$$\exp_+(\beta,n+1)= \big(2^{\exp_+(\beta,n)^+}\big)^+.$$
Here $\gamma^+$ denotes the {\em successor cardinal}, for a cardinal $\gamma$, \textit{i.e.,}\ the smallest cardinal number $\gamma'$ with $\gamma'>\gamma$.
Note that since $\exp_+(\gamma,1)\le 2^{2^{2^\gamma}}$, it follows for the above defined cardinal number $\lambda$, that
$$\lambda =\lim_{n\to\infty} \exp_+(\omega_0,n).$$
Secondly, successor cardinals are regular \cite{Je}, and thus every set of cardinality $\gamma$, with $\gamma$ being a successor cardinal, can
be partitioned for $n\in\mathbb{N}$ into $n$ disjoint sets
$\Gamma_1$, $\Gamma_2,\ldots, \Gamma_n$, all of them
having also cardinality $\gamma $, and the
map
$\Gamma_1\times \Gamma_2\times\ldots\times \Gamma_n\to
\big[ \bigcup_{i=1}^n \Gamma_i\big]^n$, $(a_1,a_2,\ldots, a_n)
\mapsto \{a_1,a_2,\ldots a_n\}$, is injective.
We therefore deduce that the following statement implies Theorem \ref{T:3.1}.
\begin{thm}\label{T:3.2} Let $n\in\mathbb{N}$ and assume that the sets
$\Gamma_1$, $\Gamma_2,\ldots \Gamma_n$ have cardinality at
least $\exp_+(\omega_1, n^2)$. For any function
$$\sigma: \Gamma:=\prod_{i=1}^n\Gamma_i\to \mathcal C,$$
where $\mathcal C$ is an arbitrary set,
at least one of the following two conditions holds
\begin{align}
\label{E:3.2.1} &\text{There is a sequence $(a^{(j)})_{j=1}^\infty$, of pairwise diagonal elements in $\Gamma$,
so that}\\
&\text{$\sigma(a^{(i)})=\sigma(a^{(j)})$, for any $i,j\in\mathbb{N}$.}\notag\\
\label{E:3.2.2} &\text{There is a line $L\subset \Gamma$,
for which $\sigma(L)$ is infinite.}
\end{align}
\end{thm}
We will make the following observation before proving Theorem \ref{T:3.2}.
\begin{lem}\label{L:3.3}
Let $n\in\mathbb{N}$ and $\Gamma_1$, $\Gamma_2,\ldots \Gamma_n$ be non empty sets.
Let
$$\sigma: \Gamma:=\prod_{i=1}^n\Gamma_i\to \mathcal C,$$
be a function that fails both conditions \eqref{E:3.2.1} and \eqref{E:3.2.2}.
Then there is a set $\tilde{\mathcal{C}}$ and a function
$$\tilde\sigma: \Gamma\to\tilde{\mathcal{C}},$$
that fails both \eqref{E:3.2.1} and \eqref{E:3.2.2} and moreover has
the property that
\begin{align}\label{E:3.3.1} \text{for every $c\in\mathcal C$ there is a hyperplane $H_c\subset \Gamma$ so that $\{b\in \Gamma: \tilde\sigma(b)=c\}\subset H_c$.}\end{align}
\end{lem}
\begin{proof} We may assume without loss of generality that $\sigma$ is
surjective. Since \eqref{E:3.2.1} is not satisfied for each $c\in\mathcal C$
there exists an $m(c)\in\mathbb{N}$
and a (finite) sequence
$(a^{(c,j)})_{j=1}^{m(c)}\subset \sigma^{-1}(\{c\})$,
which is pairwise diagonal, and maximal, with this property.
Hence
$$\sigma^{-1}(\{c\})\subset\bigcup_{j=1}^{m(c)} \bigcup_{i=1}^n H(a^{(c,j)},i).$$
Indeed, from the maximality of $(a^{(c,j)})_{j=1}^{m(c)}$, it follows that each $b\in\sigma^{-1}(\{c\})$ must have at least one coordinate in common with at least
one element of $(a^{(c,j)})_{j=1}^{m(c)}$.
We define
$$\tilde{\mathcal C}=\bigcup_{c\in \mathcal C} \{c\}\times\{1,2,\ldots, m(c)\}\times\{1,2,\ldots, n\},$$
and
\begin{align*}
&\tilde\sigma:\Gamma\to \tilde{\mathcal C},\quad b\mapsto (c,j,i),\text{ where }\\
&c=\sigma(b),\quad j=\min \Big\{ j': b\in \bigcup_{i'=1}^n H(a^{(c,j')},i')\Big\}, \text{ and } i= \min\{ i': b\in H(a^{(c,j)},i)\}.
\end{align*}
It is clear that $\tilde\sigma$ satisfies \eqref{E:3.3.1}. Since for every $c\in\mathcal C$,
$$\{b\in \Gamma: \sigma(b)=c\}=\bigcup_{j=1}^{m(c)}\bigcup_{i=1}^n \{ b\in \Gamma: \tilde\sigma(b)=(c,j,i)\},$$
$\tilde\sigma$ does not satisfy \eqref{E:3.2.1}.
In order to verify that \eqref{E:3.2.2} is not satisfied, assume
$L\subset \Gamma$ is a line, and let $\{c_1,c_2,\ldots,c_p\}$ be
the image
of $L$ under $\sigma$. By construction,
$$\tilde\sigma(L)\subset \{ (c_k,j,i): k\le p, \, j\le m(c_k), \, i\le n\},$$
which is finite.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:3.2}] We assume that
$\sigma: \Gamma=\Gamma_1\times\Gamma_2\times\ldots \times\Gamma_n\to \mathcal C$
is a map which fails both
\eqref{E:3.2.1} and \eqref{E:3.2.2}. By Lemma \ref{L:3.3} we may also assume
that $\sigma$ satisfies \eqref{E:3.3.1}. For each $a\in \Gamma$ we
fix
an $i(a)\in \{1,2,\ldots, n\}$ so that $\sigma^{-1}\big(\{\sigma(a)\} \big)\subset H(a,i(a))$. It is important to note that, since \eqref{E:3.2.2} is not satisfied, it follows that each
line $L$, whose direction is some $j\in\{1,2,\ldots,n\} $, can
only have finitely many elements $b$ for which $i(b)=j$. Indeed, if $i(b)=j$
then $b$ is uniquely determined by the value $\sigma(b) $.
To continue with the proof, the following {\em Reduction Lemma} will be essential.
\begin{lem}\label{L:3.4} Let $\beta$ be an uncountable regular
cardinal. Assume that $\tilde\Gamma_1\subset \Gamma_1$,
$\tilde\Gamma_2\subset \Gamma_2,\ldots,\tilde\Gamma_n\subset \Gamma_n$
are such that
$|\tilde\Gamma_i|\ge \exp_+(\beta,n)$, for all $i\in\{1,\dots,n\}$.
Then, for any $i\in\{1,2,\ldots,n \}$
there exist a number $K_i\in\mathbb{N}$ and subsets
$\Gamma_1'\subset \tilde\Gamma_1, \Gamma_2'\subset\tilde\Gamma_2,\ldots,
\Gamma_n'\subset\tilde\Gamma_n,$
with $|\Gamma'_j|\ge \beta$, so that
\begin{align}\label{E:3.4.1}
\forall (a_1,a_2,\ldots, a_{i-1},a_{i+1},&\ldots a_n)\in \prod_{j=1, j\not=i}^n\Gamma_j'\quad\\
&\big|\{a\in \Gamma_i': i(a_1,a_2,\ldots a_{i-1},a,a_{i+1},\ldots a_n)=i\}\big|\le K_i.\notag
\end{align}
\end{lem}
\begin{proof} We assume without loss of generality that $i=n$. Abbreviate $\beta_j=\exp_+(\beta,j)$, for $j=1,2\ldots n$. We
choose subsets $\tilde\Gamma_j^{(0)}\subset \tilde \Gamma_j$, for which $\big|\tilde\Gamma^{(0)}_j\big|=\beta_{n+1-j}$.
Since the $\beta_j$'s are regular, it follows for each $j=1,2\ldots ,n-1$ that
\begin{align*}
\big|\tilde\Gamma^{(0)}_j\big|&=\beta_{n+1-j}\\
&>2^{\beta_{n-j}}\\
&=2^{|\tilde\Gamma_{j+1}^{(0)}\times \tilde\Gamma_{j+2}^{(0)}\times\ldots\times \tilde\Gamma^{(0)}_n|}\\
&=\big|\{ f: \tilde\Gamma_{j+1}^{(0)}\times \tilde\Gamma_{j+2}^{(0)}\times\ldots\times \tilde\Gamma^{(0)}_n\to \mathbb{N}\}\big|.
\end{align*}
Abbreviate, for $j=1, \ldots , n-1$,
$$\mathcal F_{j}=\{ f: \tilde\Gamma_{j}^{(0)}\times \tilde\Gamma_{j+1}^{(0)}\times\ldots\times \tilde\Gamma^{(0)}_{n-1}\to \mathbb{N}\} $$
and consider the function
$$\phi_1: \prod_{j=1}^{n-1} \tilde\Gamma_j^{(0)}\to\mathbb{N},\quad (a_1,a_2,\ldots, a_{n-1})\mapsto\big|\{ a\in \tilde\Gamma_n^{(0)}: i(a_1,a_2,\ldots,a_{n-1},a)\!=\! n\}\big|.$$
For fixed $a_1\in \tilde\Gamma_1^{(0)}$,
$\phi_1(a_1,\cdot)\in \mathcal F_2$, and the cardinality of $\mathcal F_2$ is by
the above estimates smaller than the cardinality of
$\tilde\Gamma_1^{(0)}$, which is regular.
Therefore we can find a function $\phi_2\in \mathcal F_2$ and a subset $\Gamma'_1\subset \tilde\Gamma_1^{(0)}$ of cardinality $\beta_n$ so that
$\phi_1(a_1,\cdot)=\phi_2$ for all $a_1\in\Gamma'_1$. We continue the process and find $\Gamma'_{j}\subset \tilde\Gamma_{j}^{(0)}$, for $j=1,2,\ldots, n-2$, of cardinality
$\beta_{n+1-j}$, and functions $\phi_j\in \mathcal F_j$, for $j=1,2,\ldots ,n-1$, so that for all $(a_1,a_2,\ldots, a_{n-2})\in \prod_{j=1}^{n-2}\Gamma'_j$ and $a_{n-1}\in \tilde\Gamma_{n-1}^{(0)}$, we have
\begin{align}
\phi_1(a_1,a_2,\ldots, a_{n-1})=\phi_2(a_2,\ldots, a_{n-1})=\ldots= \phi_{n-1}(a_{n-1}) .
\end{align}
Then, since $\phi_{n-1}$ is $\mathbb{N}$-valued, we can finally choose a $K_n\in\mathbb{N}$ and a subset $\Gamma'_{n-1}\subset\tilde\Gamma_{n-1}^{(0)}$, of cardinality at least $\beta$, so that $\phi_{n-1}(a_{n-1})\le K_{n}$ for all $a_{n-1}\in\Gamma'_{n-1}$, which finishes our argument.
\end{proof}
\medskip\noindent{\em Continuation of the proof of Theorem \ref{T:3.2}}.
We apply Lemma \ref{L:3.4} successively to all $i\in\{1,2\ldots n\}$,
and the cardinals
$\beta^{(i)}= \exp_+(\omega_1, n(n-i))$.
We obtain numbers $K_1,K_2,\ldots K_n$ in $\mathbb{N}$ and infinite sets
$\Lambda_j\subset \Gamma_j$, for $j\le n$, so that for all $i=1,2,\ldots, n$ and all $(a_1,\ldots,a_n)\in\prod_{j=1}^n \Lambda_j$
$$\big|\{ a\in\Lambda_i: i(a_1,a_2,\ldots, a_{i-1},a, a_{i+1},\ldots a_n)=i\}\big|\le K_i.$$
In order to deduce a contradiction choose for each $j=1,\ldots n$ a subset $A_j$ of $\Lambda_j$ of cardinality $l_j=(n+1)K_j$.
Then it follows that
\begin{align*}
\prod_{j=1}^n l_j&= \Big|\prod_{j=1}^n A_j\Big|\\
&=\sum_{i=1}^n \sum_{ a\in\prod_{j=1,j\not=i}^n A_j} \big|\{a\in A_i: i(a_1,a_2,\ldots, a_{i-1},a, a_{i+1},\ldots a_n)=i\}\big|\\
&\le \sum_{i=1}^n K_i\prod_{j=1, j\not=i}^n l_j\le\frac{n}{n+1}\prod_{j=1}^nl_j
\end{align*}
which is a contradiction and finishes the proof of the Theorem.
\end{proof}
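As a sanity check (not part of the proof), the final counting step can be verified numerically: with $l_j=(n+1)K_j$ the sum $\sum_{i=1}^n K_i\prod_{j\ne i}l_j$ equals exactly $\tfrac{n}{n+1}\prod_j l_j$, strictly smaller than $\big|\prod_j A_j\big|=\prod_j l_j$. A minimal Python sketch:

```python
from math import prod
import random

def counting_step(K):
    """With |A_j| = l_j = (n+1)*K_j, compare the size of the product set
    prod_j l_j with the bound sum_i K_i * prod_{j != i} l_j obtained by
    partitioning the product according to the value of i(.)."""
    n = len(K)
    l = [(n + 1) * k for k in K]
    product = prod(l)
    bound = sum(K[i] * prod(l[j] for j in range(n) if j != i) for i in range(n))
    return product, bound

random.seed(1)
for _ in range(200):
    n = random.randint(1, 7)
    product, bound = counting_step([random.randint(1, 20) for _ in range(n)])
    # K_i * prod_{j != i} l_j = product/(n+1) for each i, so the bound
    # equals (n/(n+1)) * product and is strictly below the product:
    assert bound * (n + 1) == n * product
    assert bound < product
```

The strict inequality `bound < product` is exactly the contradiction reached at the end of the proof.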
We can now state the ZFC version of Theorem 4.1 in \cite{PHK}, in which it was shown that for a weakly compact cardinal
$\kappa_0$ the space $C(K_{\kappa_0})$, where $K_{\kappa_0}$ was defined at the end of Section \ref{S:2},
cannot be uniformly
(or coarsely) embedded into any $c_0(\Gamma)$, regardless of the cardinality of $\Gamma$.
Since the only property of $\kappa_0$, which is needed in \cite{PHK}, is the fact that $(P(\kappa_0))$ holds, we deduce
\begin{cor}\label{C:2.8}
$C(K_\lambda) $ does not coarsely (or uniformly) embed into $c_0(\Gamma)$, for any set $\Gamma$.
\end{cor}
\section{Proof of Theorem A}\label{S:4}
In this section we use our combinatorial Theorem B from Section \ref{S:3} to show Theorem A.
Recall that a long Schauder basis of a Banach space $X$ is a transfinite
sequence $\{e_\gamma\}_{\gamma=0}^\Gamma$ such that for every $x\in X$ there exists
a unique transfinite sequence of scalars $\{a_\gamma\}_{\gamma=0}^\Gamma$
such that $x=\sum_{\gamma=0}^\Gamma a_\gamma e_\gamma$.
Similarly, a long Schauder basic sequence in a Banach space $X$ is a transfinite
sequence $\{e_\gamma\}_{\gamma=0}^\Gamma$ which is a long Schauder basis
of its closed linear span.
Recall that $w^*-\text{dens}(X^*)$
is the smallest cardinality of a $w^*$-dense subset of $X^*$.
Analogously to the classical Mazur construction of a Schauder basic sequence
in a separable Banach space we have the following result, proved e.g.
in \cite[p.135]{HMVZ} (the fact that the basis is normalized, i.e.
$\|e_\gamma\|=1$,
is not a part of the statement in \cite{HMVZ}, but it is
easy to get it by normalizing the existing basis).
\begin{thm}
Let $X$ be a Banach space with $\Gamma=w^*-\text{dens} X^*>\omega_0$.
Then $X$ contains a monotone normalized
long Schauder basic sequence of length $\Gamma$.
\end{thm}
\begin{proof}[Proof of Theorem A]
Using the Hahn-Banach theorem it is easy to see
that $w^*-\text{dens} X^*\le \text{dens} X$.
On the other hand, since every $x\in X$ is uniquely determined by its
values on a $w^*$-dense subset of $X^*$, it is clear that
\[
\text{dens} X\le\text{card} X\le 2^{w^*-\text{dens} X^*}
\]
It follows that for $\lambda$ defined in \eqref{def-lam} we get
that $\lambda=w^*-\text{dens} X^*$ holds if and only if
$\lambda=\text{dens} X$.
In order to prove Theorem A
we may assume without loss of generality that
$X$ has a long normalized and monotone Schauder basis $(e_\mu)_{\mu<\lambda}$, of length $\lambda$, i.e. $\Gamma=\lambda$.
Set
\[
D_n=\{F\subset\lambda: |F|=n\},\;\;n\in\mathbb{N}
\]
Suppose that $F=\{\gamma_1,\dots,\gamma_n\}$, where $\gamma_1<\dots<\gamma_n$
are elements of $[0,\lambda)$ arranged in increasing order.
Consider the corresponding
finite set $M_F=\{\sum_{i=1}^n\varepsilon_i e_{\gamma_i}: \varepsilon_i\in\{-1,1\}\}$,
containing $2^n$ distinct vectors of $X$,
and put a linear order $\prec$ on this set according to the arrangement
of the signs $\varepsilon_i$, setting
\[
\sum_{i=1}^n\varepsilon_i e_{\gamma_i}\prec
\sum_{i=1}^n\tilde{\varepsilon}_i e_{\gamma_i}
\]
if and only if, for the minimal $i$ such that $\varepsilon_i\ne\tilde{\varepsilon}_i$,
we have $\varepsilon_i<\tilde{\varepsilon}_i$.
In order to prove Theorem A it suffices to show that if $M=\bigcup_{F\in D_n, n\in\mathbb{N}} M_F\subset X$
has the coarse Stone property then $X$ fails to have nontrivial cotype.
To this end,
starting with $\mathcal U=\{ B_2(x): x\in M\}$ we find a uniformly bounded cover
$\mathcal V$ which is point finite and such that $\mathcal U$ refines $\mathcal V$, \textit{i.e.,}\
for all $x\in M$ there is a $V_x\in \mathcal V$ with $B_2(x)\subset V_x$.
Let $r>0$ be such that each $V\in \mathcal V$ is a subset of a ball of radius $r$.
Let $\mathcal C$ be the set consisting of all finite tuples $(V^1,\dots, V^m)$,
where $V^j\in\mathcal V$.
We now define the function $\sigma:D_n\to\mathcal C$ as follows.
If $F\in D_n, F=\{\gamma_1,\dots,\gamma_n\}$ where $\gamma_1<\dots<\gamma_n$,
we let
\begin{equation}\label{sig-def}
\sigma(F)=(V_{y_1},\dots,V_{y_{2^n}}),
\end{equation}
where $y_1\prec\dots\prec y_{2^n}$ are the elements of $M_F$
arranged in the increasing order.
Applying Theorem B to the function $\sigma$, for a fixed $n\in\mathbb{N}$,
yields one of two possibilities.
Either
there is an $F=\{\gamma_1,\dots,\gamma_{n-1}\}$, where
$\gamma_1<\dots<\gamma_{n-1}$, so that
$\sigma\big(\big\{ F\cup\{\tau\}: \tau\in \lambda\setminus F\big\}\big)$
is infinite. In this case, pick an infinite sequence of distinct $\{\tau_j\}_{j=1}^\infty$
witnessing the desired property.
By passing to a subsequence, we may assume without loss of generality that
either there exists $k, 1\le k\le n-1$, so that for all $j\in\mathbb{N}$,
$\gamma_k<\tau_j<\gamma_{k+1}$, or $\tau_j<\gamma_1$ for all $j\in\mathbb{N}$,
or $\gamma_{n-1}<\tau_j$ for all $j\in\mathbb{N}$.
For simplicity of notation, assume the last case,
i.e. $\gamma_1<\dots<\gamma_{n-1}<\tau_j$ holds for all $j\in\mathbb{N}$.
Denoting $F^j=\{\gamma_1,\dots,\gamma_{n-1},\tau_j\}$, we
conclude that there exists a fixed selection of signs
$\varepsilon_1,\dots,\varepsilon_n$ such that the set
\[
B=\Big\{V_y: y=\sum_{i=1}^{n-1}\varepsilon_i e_{\gamma_i}+\varepsilon_n e_{\tau_j}, j\in\mathbb{N}\Big\}
\]
is infinite. Indeed, otherwise the set of values
$\{\sigma(\{\gamma_1,\dots,\gamma_{n-1},\tau_j\}), j\in\mathbb{N}\}$,
which are determined by the definition \eqref{sig-def}, would have only a finite
set of options for each coordinate, and would therefore have to be finite.
This is a contradiction with the point finiteness of the system $\mathcal V$,
because
\[
\sum_{i=1}^{n-1}\varepsilon_i e_{\gamma_i}\in V_y,\;\;\text{for all}\; V_y\in B.
\]
It remains to consider the other case when
there is a sequence $(F_j)$ of pairwise disjoint elements
of $[\lambda]^n$, with $\sigma(F_i)\!=\!\sigma(F_j)$,
for any $i,j\in\mathbb{N}$. In fact, it suffices to choose just a pair
of such disjoint elements (written in an increasing order of ordinals)
$F=\{\gamma_1,\dots,\gamma_{n}\}$, $G=\{\beta_1,\dots,\beta_{n}\}$,
such that $\sigma(F)=\sigma(G)$.
This means, in particular, that for every
fixed selection of signs
$\varepsilon_1,\dots,\varepsilon_n$,
\[
V_{\sum_{i=1}^{n}\varepsilon_i e_{\gamma_i}}=
V_{\sum_{i=1}^{n}\varepsilon_i e_{\beta_i}}
\]
By our assumption, the elements of $\mathcal V$ are contained in a ball
of radius $r$, hence
\begin{equation}\label{all-sign}
\|\sum_{i=1}^{n}\varepsilon_i e_{\gamma_i}-\sum_{i=1}^{n}\varepsilon_i e_{\beta_i}\|\le2r
\end{equation}
holds for any selection of signs
$\varepsilon_1,\dots,\varepsilon_n$.
Let $u_j=e_{\gamma_j}-e_{\beta_j}$, $j\in\{1,\dots,n\}$. Because
$\{e_\gamma\}$ is a monotone normalized long Schauder basis, we have
the trivial estimate $1\le\|u_j\|\le2$.
The equation \eqref{all-sign} means that
\begin{equation}\label{all-sign-2}
1\le \|\sum_{i=1}^{n}\varepsilon_i u_i\|\le2r
\end{equation}
holds for any selection of signs
$\varepsilon_1,\dots,\varepsilon_n$. Since the norm is convex, it follows that for the unit ball $B_E$ of $E=\text{span}(u_i:i\le n)$,
$$\Big\{ \sum_{j=1}^n a_j u_j: |a_j|\le \frac1{2r}\Big\}\subset B_E \subset\Big\{\sum_{j=1}^n a_j u_j :|a_j|\le 2\Big\},$$
which means that $(u_j)_{j=1}^n$ is $4r$-equivalent to the unit vector basis of $\ell_\infty^n$.
\end{proof}
In fact, our proof gives a much stronger condition than just failing cotype,
because our copies of $\ell_\infty^k$ are formed by vectors of the type
$e_\alpha-e_\beta$. This fact can be used to obtain
much stronger structural results for spaces with special bases.
Recall that a long Schauder basis $\{e_\gamma\}_{\gamma=1}^\Lambda$
is said to be symmetric if
\[
\|\sum_{i=1}^n a_i e_{\gamma_i}\|=
\|\sum_{i=1}^n a_i e_{\beta_i}\|
\]
for any selection of $a_i\in\mathbb{R}$, and any pair of sets
$\{\gamma_i\}_{i=1}^n\subset[1,\Lambda)$,
$\{\beta_i\}_{i=1}^n\subset[1,\Lambda)$. It is well-known (c.f.
\cite[ Prop. II.22.2]{Si}), that each symmetric basis is
automatically unconditional, i.e. there exists $K>0$ such that
\[
\frac1K\|\sum_{i=1}^n |a_i| e_{\gamma_i}\|\le
\|\sum_{i=1}^n a_i e_{\gamma_i}\|\le K
\|\sum_{i=1}^n |a_i| e_{\gamma_i}\|.
\]
In particular,
\[
\frac1K\|\sum_{i\in A} a_i e_{\gamma_i}\|\le
\|\sum_{i\in B} a_i e_{\gamma_i}\|
\]
whenever $A\subset B$.
\begin{thm}\label{sym-case}
Let $X$ be a Banach space of density $\Lambda\ge\lambda$, with a symmetric basis
$\{e_\gamma\}_{\gamma=1}^\Lambda$, which coarsely (or uniformly)
embeds into some $c_0(\Gamma)$.
Then $X$ is linearly isomorphic
with $c_0(\Lambda)$.
\end{thm}
\begin{proof}
By the proof of the above results, if $X$ embeds into $c_0(\Gamma)$,
there exists a $C>0$ such that for each $k\in\mathbb{N}$ there are vectors
$\{v_i\}_{i=1}^k$ of the form $v_i=e_{\gamma_i}-e_{\beta_i}$ satisfying
the conditions
\begin{equation}\label{V-f}
\frac12\max_j|a_j|\le
\Big \|\sum_{i=1}^k a_i v_i\Big\|
\le C\max_j|a_j|.
\end{equation}
Using the fact that the basis $\{e_\gamma\}$ is unconditional,
(and symmetric) we obtain by an easy manipulation that there exist some
constants $A,B>0$ such that
\begin{equation}\label{sssi}
A\Big\|\sum_{i=1}^k a_i e_{\gamma_i}\Big\|\le
\Big\|\sum_{i=1}^k a_i v_i\Big\|\le B\Big\|\sum_{i=1}^k a_i e_{\gamma_i}\Big\|
\end{equation}
Combining \eqref{V-f} and \eqref{sssi} we finally obtain
that for some $D\ge 1$, and any $k\in\mathbb{N}$,
\begin{equation}\label{V-f-2}
\frac1D\max_j|a_j|\le
\|\sum_{i=1}^k a_i e_{\gamma_i}\|\le
D\max_j|a_j|
\end{equation}
for all $\{\gamma_1,\dots,\gamma_k\}\subset[1,\Lambda)$,
which proves our claim.
\end{proof}
\section{Final comments and open problems}\label{S:5}
Let us mention in this final section some problems of interest.
First of all, we do not know whether or not Theorem A remains true if $\lambda$ is replaced by a smaller cardinal number.
\begin{prob}\label{Prob:1} Assume that $X$ is a Banach space with $\text{\rm dens}(X)\ge \omega_1$, and assume that $X$ coarsely embeds into $c_0(\Gamma)$ for some cardinal number $\Gamma$.
Does $X$ have trivial cotype? If moreover $X$ has a symmetric basis, must it be isomorphic to $c_0(\omega_1)$?
\end{prob}
Of course Problem \ref{Prob:1} would have a positive answer if the following is true.
\begin{prob}\label{Prob:2} Is Theorem B true for $\omega_1$?
\end{prob}
Connected to Problems \ref{Prob:1} and \ref{Prob:2} is the following
\begin{prob}\label{Prob:3} Does $\ell_\infty$ coarsely embed into $c_0(\kappa)$ for some uncountable cardinal number $\kappa$?
\end{prob}
Another line of interesting problems concerns the isomorphic properties of nonseparable Banach spaces which coarsely embed into some $c_0(\Gamma)$.
\begin{prob}\label{Prob:4} Does a nonseparable Banach space which coarsely embeds into some $c_0(\Gamma)$, $\Gamma$ being uncountable, contain copies of $c_0$, or even of $c_0(\omega_1)$?
\end{prob}
\section{Introduction}
\label{intro}
Hypernuclei can be produced in manifold ways: by photon-, pion-, antikaon-, proton-, antiproton- and nucleus-nucleus interactions \cite{Bando:1990yi}.
All these types of reactions, except the $\bar p A$ one, are rather well studied both experimentally and theoretically.
But up to now, there are only a few theoretical studies of hypernuclear production in antiproton-nucleus interactions
\cite{Bando:1989nx,Cugnon:1990aw,Gaitanos:2011fy,Gaitanos:2014bla,Feng:2015jma,Gaitanos:2016glr}, and all of them address the incoherent production mechanism.
Incoherent hypernucleus production in central collisions is initiated by production of an antikaon in the in-medium $\bar pN$ annihilation
followed by the strangeness exchange process of the type $\bar K N \to Y\pi$. Such reactions are the ideal tool to investigate simultaneously
the production of single- and multi-strangeness systems, as discussed e.g. in the cited works. However, a different approach is required
if spectroscopic studies of the final hypernuclei are the aim. For that purpose, the proper method are peripheral reactions by which hypernuclei
in bound discrete quantum states are obtained. A comprehensive overview of the status of such studies can be found in the recent review of ref. \cite{Gal:2016boi}.
The coherent hypernuclear production in proton- and pion-induced reactions has been investigated in refs. \cite{Shyam:2005mq} and \cite{Bender:2009cj},
respectively, and in photo-induced reactions in ref. \cite{Shyam:2007fm}. Since such coherent reactions are of a perturbative character, they can be described
quantum mechanically by distorted wave methods.
In this paper, we consider $\bar p + {}^AZ \to {}^A_\Lambda(Z-1) + \bar{\Lambda}$ annihilation reactions on a target $^AZ$
leading to the production of a particle-stable hypernucleus $^A_\Lambda(Z-1)$.
The full process is sketched in Fig.~\ref{fig:hyperProd}.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{hyperProd.eps}
\caption{\label{fig:hyperProd} The Feynman graph of the ${}^AZ(\bar p,\bar\Lambda){}^A_\Lambda(Z-1)$ process.
Dashed line represents the propagator of the exchange meson.
The gray ellipsoids correspond to the wave functions
of the initial ground state nucleus $^AZ$ and final hypernucleus \mbox{$^A_\Lambda(Z-1)$}.}
\end{center}
\end{figure}
Such reactions could be realized in the foreseeable future at the upcoming \={P}ANDA experiment at FAIR.
Our interest is twofold, namely, first, on the reaction mechanism and, second, on the production dynamics.
The strong coupling of antibaryons to the various annihilation channels requires a proper account of initial (ISI) and final (FSI) state interactions.
For that part we take advantage of our previous study of $\bar p A$ elastic scattering in ref. \cite{Larionov:2016xeb}.
The production proceeds through the elementary $\bar p + p \to \bar \Lambda +\Lambda$ vertex. Since the initial proton and the final $\Lambda$
are constrained to be bound to a nucleus we need to account for the binding potentials. They are described in a relativistic mean-field approach with scalar
and vector fields, similar to the descriptions of hypernuclear bound states in refs. \cite{Bender:2009cj,Glendening:1992du,Keil:1999hk}.
Not much is known, in fact, on the basic $\bar p + p \to \bar \Lambda +\Lambda$ reaction amplitude. Here, we use a meson exchange model.
On the antibaryon side a $\bar u$-quark must be changed into a $\bar s$ quark while on the baryon side a $u$-quark has to be transformed into a $s$-quark.
That can be viewed as the propagation of positively charged mesons of a $[u \bar s]$ quark structure with strangeness $S=1$ from baryon to antibaryon
(or of $[\bar u s]$ mesons with strangeness $S=-1$ in the opposite direction).
Obvious candidates for such a process are the pseudo-scalar ($0^-$) kaon $K$ and vector ($1^-$) $K^*$ mesons.
However, we have to expect that the correlated $\pi K$ exchange in the scalar $0^+$-channel may also play an important role.
The $0^+,S=1$-channel is represented by the $K^*_0(800)$ or $\kappa$ mesons \cite{Agashe:2014kda} which may be considered as the $S=1$ members
of the (hypothetical) scalar meson octet to which also the $\sigma/f_0(600)$ and the $\delta/a_0(980)$ mesons belong.
Like other $0^+$-mesons, the $\kappa/K^*_0$ meson is characterized by a rather broad spectral distribution with uncertain mass and width.
In the present context, the $\kappa$ meson contributes of course through $t$-channel exchange processes. In that sense, we consider the $\kappa$ exchange
as an economical way to take into account the correlated $\pi K$ channel. The $\kappa$ exchange channel is of particular interest for multi-strangeness
baryonic matter in heavy ion collisions and in neutron stars. The $\kappa$ exchange is also an indispensable part of baryon-baryon interaction approaches
utilizing the $SU(3)$-flavour group structure. The Nijmegen group was probably the first one to introduce that channel explicitly \cite{Timmermans:1988hm,Timmermans:1992fu}
into their treatment of baryon-baryon scattering while in the Juelich model that channel is treated dynamically as a $\pi K$-correlation \cite{Haidenbauer:2005zh}.
We note that in the present context the unnatural parity $K$-exchange is strongly suppressed for transitions involving bound proton and $\Lambda$ states,
because it is a purely relativistic effect proceeding through the lower wave function components of the Dirac spinors.
Thus, coherent hypernucleus production reactions are perfect tools for addressing specifically the exchange of the natural parity $K^*$ and $\kappa$ mesons.
Any independent source of information allowing one to probe hyperon-nucleon and hyperon-nucleus interactions is highly desirable.
In this respect, hypernuclear reaction physics may provide important clues.
The paper is structured as follows. In section \ref{model} we introduce the Lagrangians describing our covariant annihilation model
and the relativistic mean-field approach for bound baryon states. Both the elementary $\bar p p \to \bar\Lambda \Lambda$
as well as the hypernucleus production $\bar p + {}^AZ \to {}^A_\Lambda(Z-1) + \bar{\Lambda}$ amplitudes
are studied without and with scalar meson exchange. ISI and FSI of the antibaryons in the nucleus
are taken into account in the eikonal approximation. In section \ref{results} the theoretical approach is applied to reactions on an
${}^{40}\mbox{Ar}$ target populating discrete bound states in the $^{40}_{~\Lambda}\mbox{Cl}$ hypernucleus.
Angular distributions and total hypernucleus production cross sections are discussed.
Special attention is paid to the effects introduced by the scalar interaction channel.
In section \ref{SumConcl} we summarize our results and present our conclusions.
\section{The Model}
\label{model}
\subsection{Strangeness production in Antiproton Annihilation Reactions}
The theoretical models for the process $\bar p p \to \bar\Lambda \Lambda$ are divided into two groups: the $t$-channel
strange meson exchange models \cite{Tabakin:1985yv,Kohno:1986mg,LaFrance:1988if,Timmermans:1988hm,Timmermans:1992fu,Haidenbauer:1991kt}
and the quark-gluon models \cite{Rubinstein:1985bb,Furui:1987cc,Burkardt:1988pk,Alberg:1988qu}.
The quark-gluon models are based on the one-gluon ($^3S_1$) or vacuum-type ($^3P_0$) $\bar u u \to \bar s s$ transitions.
Of course, generally, the amplitude of the process $\bar p p \to \bar\Lambda \Lambda$ may be a superposition of the $t$-channel
meson exchanges and the pure quark-gluon transitions. Moreover, based on the existing data, there is currently no clear preference
for one type of model over the other.
Thus, we will use here a relatively simple, although well established, $t$-channel meson-exchange framework.
We will introduce the $K$, $K^*$ and $\kappa$ exchanges by using the following
interaction Lagrangians \cite{Cheoun:1996kn,Tsushima:1998jz,Han:1999ck}:
\begin{eqnarray}
{\cal L}_{KN\Lambda} &=& -ig_{KN\Lambda} \bar N \gamma^5 \Lambda K + \mbox{h.c.}~, \label{Lag_KNL}\\
{\cal L}_{K^*N\Lambda} &=& \bar N (G_v\gamma^\mu - \frac{G_t}{m_N+m_\Lambda}\sigma^{\mu\nu}\partial_\nu^{K^*})\Lambda K^*_\mu
+ \mbox{h.c.}~, \label{Lag_KsNL}\\
{\cal L}_{\kappa N\Lambda} &=& -g_{\kappa N\Lambda} \bar N \Lambda \kappa + \mbox{h.c.}~. \label{Lag_kappaNL}
\end{eqnarray}
The invariant matrix elements for the process $\bar p p \to \bar\Lambda \Lambda$ with the plane wave incoming and outgoing
states can be evaluated by applying standard Feynman rules:
\begin{eqnarray}
iM_K &=& -g^2_{KN\Lambda} F^2_K(q^2) \sqrt{\Omega}\,\bar u_{-p_1,-\lambda_1} \gamma^5 u_{-p_3,-\lambda_3}
\frac{i}{q^2-m_K^2} \bar u_{p_4\lambda_4} \gamma^5 u_{p_2\lambda_2}~, \label{M_K}\\
iM_{K^*} &=& -F^2_{K^*}(q^2) \sqrt{\Omega}\,\bar u_{-p_1,-\lambda_1} \Gamma^\mu(-q) u_{-p_3,-\lambda_3}
iG_{\mu\nu}(q) \bar u_{p_4\lambda_4} \Gamma^\nu(q) u_{p_2\lambda_2}~, \label{M_Ks}\\
iM_{\kappa} &=& g^2_{\kappa N\Lambda} F^2_{\kappa}(q^2) \sqrt{\Omega}\,\bar u_{-p_1,-\lambda_1} u_{-p_3,-\lambda_3}
\frac{i}{q^2-m_{\kappa}^2+im_{\kappa}\Gamma_{\kappa}} \bar u_{p_4\lambda_4} u_{p_2\lambda_2}~, \label{M_kappa}
\end{eqnarray}
where $p_i$ is the four-momentum and $\lambda_i=\pm1/2$ is the spin magnetic quantum number of a
particle $i=1,2,3,4$ (see Fig.~\ref{fig:hyperProd} for the notation),
$q=p_3-p_1$ is the four-momentum transfer. In Eq.(\ref{M_Ks}),
\begin{equation}
G_{\mu\nu}(q) = \frac{-g_{\mu\nu} + q_\mu q_\nu/m_{K^*}^2}{q^2-m_{K^*}^2+im_{K^*}\Gamma_{K^*}} \label{G_mu_nu}
\end{equation}
is the $K^*$ meson propagator. The $K^*N\Lambda$ vertex function is defined as
\begin{equation}
\Gamma^\mu(q)=iG_v\gamma^\mu + \frac{G_t}{m_N+m_\Lambda} \sigma^{\mu\nu} q_\nu~. \label{Gamma^mu}
\end{equation}
The vertex form factors are chosen in the monopole form:
\begin{equation}
F_j(q^2) = \frac{\Lambda_j^2-m_j^2}{\Lambda_j^2-q^2}~,~~~j=K,K^*,\kappa~. \label{FFs}
\end{equation}
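As a numerical illustration (not part of the formalism), the monopole form factor of Eq.(\ref{FFs}) can be sketched in a few lines; the mass and cutoff values below are the charged-kaon mass and the $\Lambda_K$ of set 1 in Table~\ref{tab:par}:

```python
def monopole_ff(q2, m, cutoff):
    """Monopole vertex form factor F_j(q^2) = (L^2 - m^2)/(L^2 - q^2);
    masses and cutoffs in GeV, q2 in GeV^2."""
    return (cutoff**2 - m**2) / (cutoff**2 - q2)

m_K, cut_K = 0.4937, 2.0   # charged-kaon mass; Lambda_K of set 1
# On the meson mass shell, q^2 = m^2, the form factor equals unity
# by construction:
assert abs(monopole_ff(m_K**2, m_K, cut_K) - 1.0) < 1e-12
# In the space-like region q^2 < 0 probed by the t-channel exchange
# it monotonically suppresses the vertex:
assert 0.0 < monopole_ff(-1.0, m_K, cut_K) < monopole_ff(-0.5, m_K, cut_K) < 1.0
```

The normalization $F_j(m_j^2)=1$ is the defining property of the monopole parametrization; larger cutoffs $\Lambda_j$ give a weaker suppression at space-like $q^2$.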
Similar to refs. \cite{Sopkovich,Shyam:2014dia,Shyam:2015hqa} we included in Eqs.(\ref{M_K})-(\ref{M_kappa}) the factor $\sqrt{\Omega}$
to describe ISI and FSI where absorption of the flux into other annihilation channels is especially
important. For simplicity, we assume the attenuation factor $\Omega$ to be energy independent.
With $\Omega=1$, Eqs.(\ref{M_K})-(\ref{M_kappa}) correspond to the Born approximation.
The Dirac spinors are normalized according to ref. \cite{BLP}:
$\bar u_{p \lambda} u_{p \lambda} = 2m_{N(\Lambda)}$, $\bar u_{-p,-\lambda} u_{-p,-\lambda} = -2m_{N(\Lambda)}$.
The angular differential cross section in the center-of-mass (c.m.) frame is given by the standard expression:
\begin{equation}
\frac{d\sigma_{\bar p p \to \bar \Lambda \Lambda}}{d\Omega}
= \frac{p_{\bar \Lambda \Lambda}}{256\pi^2sp_{\bar p p}}\,
\sum_{\lambda_1,\lambda_2,\lambda_3,\lambda_4} |M_K+M_{K^*}+M_{\kappa}|^2~, \label{dsigmadOmega}
\end{equation}
where $s=(p_1+p_2)^2$ is the c.m. energy squared,
$p_{\bar p p}=(s/4-m_p^2)^{1/2}$ and $p_{\bar \Lambda \Lambda}=(s/4-m_\Lambda^2)^{1/2}$ are the c.m. momenta of the
initial and final particles, respectively.
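For orientation, the kinematical quantities entering Eq.(\ref{dsigmadOmega}) are easily evaluated; the sketch below (an illustration with PDG masses, not part of the model) computes $s$ and the c.m. momenta for a given antiproton beam momentum and recovers the reaction threshold $p_{\rm lab}\simeq1.435$ GeV/c:

```python
from math import sqrt

m_p, m_L = 0.93827, 1.11568   # proton and Lambda masses in GeV

def cm_kinematics(p_lab):
    """s = (p1+p2)^2 and the c.m. momenta p_{pbar p}, p_{Lbar L} for an
    antiproton of lab momentum p_lab (GeV/c) hitting a proton at rest."""
    E_lab = sqrt(p_lab**2 + m_p**2)
    s = 2.0 * m_p**2 + 2.0 * m_p * E_lab
    p_pp = sqrt(s / 4.0 - m_p**2)
    p_LL = sqrt(max(s / 4.0 - m_L**2, 0.0))   # vanishes below threshold
    return s, p_pp, p_LL

# Threshold: sqrt(s) = 2*m_L, i.e. E_lab = (2*m_L**2 - m_p**2)/m_p
E_thr = (2.0 * m_L**2 - m_p**2) / m_p
p_thr = sqrt(E_thr**2 - m_p**2)
assert abs(p_thr - 1.435) < 0.002
# At p_lab = 2.060 GeV/c, used in Fig. 3, the final c.m. momentum is nonzero:
assert cm_kinematics(2.060)[2] > 0.0
```

The small final-state momentum just above threshold is what makes the cross section in Fig.~\ref{fig:sigma_Lbar_L} rise steeply from $p_{\rm lab}\simeq1.435$ GeV/c.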
Note that the interference terms of the kaon exchange amplitude with the
$K^*$ and $\kappa$ exchange amplitudes are equal to zero after summation over spin states
since, in the Born approximation, the unnatural and natural parity exchange amplitudes
do not interfere for unpolarized beam and target (cf. \cite{Tabakin:1985yv}).
The choice of coupling constants is based on $SU(3)$ relations \cite{deSwart:1963pdg}:
\begin{eqnarray}
g_{KN\Lambda} &=& -g_{\pi NN} \frac{3-2\alpha_{PS}}{\sqrt{3}}~, \label{g_KNL}\\
G_{v,t} &=& -G_{v,t}^\rho \frac{3-2\alpha_{E,M}}{\sqrt{3}}~, \label{G_vt}\\
g_{\kappa N\Lambda} &=& -g_{\sigma NN} \frac{3-2\alpha_S}{3-4\alpha_S}~, \label{g_kappaNL}
\end{eqnarray}
where $\alpha$'s are the $D$-type coupling ratios.
The $\pi NN$ coupling constant is very well known, $g_{\pi NN}=13.4$ \cite{Dumbrajs:1983jd}.
The vector $\rho NN$ coupling constant is also fixed, $G_{v}^\rho=2.66$, however, the tensor $\rho NN$ coupling constant
is quite uncertain, $G_{t}^\rho=10.9\div20.6$ \cite{Cheoun:1996kn}.
The $\sigma NN$ coupling constant can be estimated either
from the Bonn model \cite{Machleidt:1987hj} or from the Walecka-type models (cf. \cite{Lalazissis:1996rd}).
In both cases one obtains $g_{\sigma NN} \simeq 10$.
The $\alpha$'s for the octets of light pseudoscalar and vector mesons
are reasonably well determined \cite{Cheoun:1996kn,Han:1999ck}:
$\alpha_{PS} \simeq 0.6$, $\alpha_{E} \simeq 0$, $\alpha_{M} \simeq 3/4$. However, there is no
phenomenological information on $\alpha_S$.
Thus, the coupling constants $G_{t}$ and $g_{\kappa N\Lambda}$,
the cutoff parameters $\Lambda_K, \Lambda_{K^*}$ and $\Lambda_\kappa$, and the attenuation factor $\Omega$
remain to be determined from comparison with experimental data. We adjusted these parameters to describe
the beam momentum dependence of the total $\bar p p \to \bar\Lambda \Lambda$ cross section. The two sets
of parameters, (1) without $\kappa$ meson and (2) with $\kappa$ meson, are listed in Table~\ref{tab:par}.
In the calculations we used the mass $m_\kappa=682$ MeV and the width $\Gamma_\kappa=547$ MeV \cite{Olive:2016xmw}.
\begin{table}[htb]
\caption{\label{tab:par}
Parameters of the $\bar p p \to \bar\Lambda \Lambda$ amplitude.
The value of $g_{KN\Lambda}$ slightly differs from
$-13.3$ as given by Eq.(\ref{g_KNL}) and is taken
from the $K^+N$ scattering analysis of ref. \cite{Buettgen:1990yw}.
The cutoff parameters $\Lambda_K$, $\Lambda_{K^*}$ and $\Lambda_\kappa$ are in GeV. In the last column, the attenuation factors are shown.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Set~~~& $g_{KN\Lambda}$~~~~~& $G_{v}$~~~~~& $G_{t}$~~~~~& $g_{\kappa N\Lambda}$~~~~~& $\Lambda_K$~~~~~& $\Lambda_{K^*}$~~~~~&\
$\Lambda_\kappa$~~~~~& $\Omega$~~~~~\\
\hline
1 & -13.981 & -4.6 & -8.5 & --- & 2.0 & 1.6 & --- & 0.015 \\
2 & -13.981 & -4.6 & -9.0 & -7.5 & 1.8 & 2.0 & 1.8 & 0.005 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.4]{sigma_Lbar_L_old.eps}
\includegraphics[scale = 0.4]{sigma_Lbar_L.eps}
\end{center}
\caption{\label{fig:sigma_Lbar_L} Total cross section of the process $\bar p p \to \bar\Lambda \Lambda$
as a function of the beam momentum calculated without (Set 1) and with (Set 2) inclusion of the $\kappa$ meson.
Experimental data are from ref. \cite{Bald87}.}
\end{figure}
As we see from Fig.~\ref{fig:sigma_Lbar_L}, in the calculation with set 1 the peak of the total $\bar p p \to \bar\Lambda \Lambda$
cross section at $p_{\rm lab} \simeq 2$ GeV/c is saturated by the $K$ exchange.
In contrast, in the case of set 2 the peak is saturated mostly by the $\kappa$ exchange. The $K^*$ exchange contribution grows monotonically
with beam momentum and becomes dominant at $p_{\rm lab} > 3\div4$ GeV/c.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.4]{dsigdOmega_old.eps}
\includegraphics[scale = 0.4]{dsigdOmega.eps}
\end{center}
\caption{\label{fig:dsigdOmega} Angular differential cross section of the process $\bar p p \to \bar\Lambda \Lambda$
in the c.m. frame at $p_{\rm lab}=2.060$ GeV/c calculated without (Set 1) and with (Set 2) inclusion of the $\kappa$ meson.
Experimental data are from ref. \cite{Jayet:1978yq}.}
\end{figure}
The effect of the different couplings is better visible in the angular differential cross section displayed in
Fig.~\ref{fig:dsigdOmega}. The kaon exchange contribution to $d\sigma_{\bar p p \to \bar \Lambda \Lambda}/d\Omega$
becomes small at $\Theta=0$ due to the presence of $\gamma^5$ in the matrix element (\ref{M_K}) which interchanges
the upper and lower components of the Dirac spinor\footnote{For a particle at rest the lower component is zero.
Thus, for example, in elastic $NN$ scattering the parity-changing pion exchange contribution vanishes at $\Theta=0$.}.
As a result, at forward c.m. angles the cross section is dominated by $K^*$ and/or $\kappa$ exchange.
Moreover, the latter provides a steeper rise of the differential cross section towards $\Theta=0$, improving
the agreement with experiment.
In the case of the bound proton and $\Lambda$, we include their wave functions in the field operators of the Lagrangians
(\ref{Lag_KNL})-(\ref{Lag_kappaNL}) and calculate the $S$-matrix in second-order perturbation theory using Wick's theorem.
After some standard algebra (cf. ref. \cite{BLP}) this leads to the following expression for the $S$-matrix:
\begin{equation}
S=\frac{2\pi\delta(E_1+E_2-E_3-E_4)}{(2E_1V 2E_3V)^{1/2}} i{\cal M}~, \label{S}
\end{equation}
where $E_i,~i=1,2,3,4$ are particle energies (see Fig.~\ref{fig:hyperProd} for notation) and $V$ is the normalization volume.
The matrix element ${\cal M}$ in Eq.(\ref{S}) is expressed as a sum of the $K,~K^*$ and $\kappa$ exchange contributions:
\begin{equation}
{\cal M} = {\cal M}_K + {\cal M}_{K^*} + {\cal M}_{\kappa}~, \label{calM}
\end{equation}
where
\begin{eqnarray}
i{\cal M}_K &=& -g^2_{KN\Lambda} F^2_K(q^2) \sqrt{\Omega}\,\bar u_{-p_1,-\lambda_1} \gamma^5 u_{-p_3,-\lambda_3}
\frac{i}{q^2-m_K^2}
\int d^3r e^{-i\bm{q}\bm{r}} \bar \psi_4(\bm{r}) \gamma^5 \psi_2(\bm{r})~, \label{calM_K}\\
i{\cal M}_{K^*} &=& -F^2_{K^*}(q^2) \sqrt{\Omega}\,\bar u_{-p_1,-\lambda_1} \Gamma^\mu(-q) u_{-p_3,-\lambda_3}
iG_{\mu\nu}(q)
\int d^3r e^{-i\bm{q}\bm{r}} \bar \psi_4(\bm{r}) \Gamma^\nu(q) \psi_2(\bm{r})~, \label{calM_Ks}\\
i{\cal M}_{\kappa} &=& g^2_{\kappa N\Lambda} F^2_{\kappa}(q^2) \sqrt{\Omega}\,\bar u_{-p_1,-\lambda_1} u_{-p_3,-\lambda_3}
\frac{i}{q^2-m_{\kappa}^2+im_{\kappa}\Gamma_{\kappa}}
\int d^3r e^{-i\bm{q}\bm{r}} \bar \psi_4(\bm{r}) \psi_2(\bm{r})~. \label{calM_kappa}
\end{eqnarray}
Here, $\psi_2(\bm{r})$ and $\psi_4(\bm{r})$ are the wave functions of the bound proton and $\Lambda$, respectively.
They satisfy the normalization conditions:
\begin{equation}
\int d^3r \psi_i^\dag(\bm{r}) \psi_i(\bm{r}) = 1~,~~~i=2,4~. \label{normCond}
\end{equation}
The differential cross section in the rest frame of the target nucleus is defined
as follows:
\begin{equation}
d\sigma = \frac{2\pi\delta^{(4)}(p_1+p_A-p_3-p_B)}{2p_{\rm lab}} \overline{|{\cal M}|^2}
\frac{d^3p_3}{(2\pi)^32E_3} d^3p_B, \label{dSigma}
\end{equation}
where $p_A$ and $p_B$ are the four momenta of the initial nucleus ($A$) and final hypernucleus ($B$).
The $\delta$ function in Eq.(\ref{dSigma}) takes into account the recoil of the hypernucleus.
The averaged modulus squared of the matrix element in Eq.(\ref{dSigma}), i.e. transition probability, is defined as
\begin{equation}
\overline{|{\cal M}|^2} \equiv \frac{1}{2} \sum_{m,m_\Lambda,\lambda_1,\lambda_3} |{\cal M}|^2~, \label{calM2}
\end{equation}
where $m$ and $m_\Lambda$ are the spin magnetic quantum numbers of the occupied proton state from the valence
shell and of the $\Lambda$ hyperon, respectively, and the factor of $1/2$ expresses the averaging over $\lambda_1$.
The matrix elements (\ref{calM_K})-(\ref{calM_kappa}) are obtained in the impulse approximation (IA).
A more realistic calculation should take into account the distortion of the incoming $\bar p$ and outgoing $\bar\Lambda$
waves, mostly due to the strong absorption of the antibaryons in the nucleus. In the eikonal approximation the incoming
$\bar p$ wave is multiplied by the factor
\begin{equation}
F_{\bar p}(\bm{r}) =
\exp\left(-\frac{1}{2}\sigma_{\bar pN}(1-i\alpha_{\bar pN}) \int\limits_{-\infty}^0 d\xi
\rho(\bm{r}+\frac{\bm{p}_{\bar p}}{p_{\bar p}}\xi)\right)~, \label{F_barp}
\end{equation}
and the outgoing $\bar\Lambda$ wave is multiplied by
\begin{equation}
F_{\bar\Lambda}(\bm{r}) =
\exp\left(-\frac{1}{2}\sigma_{\bar \Lambda N}(1-i\alpha_{\bar \Lambda N}) \int\limits_0^{+\infty} d\xi
\rho(\bm{r}+\frac{\bm{p}_{\bar\Lambda}}{p_{\bar\Lambda}}\xi)\right)~,
\label{F_barL}
\end{equation}
where $\rho(\bm{r})$ is the nucleon density, $\sigma_{jN}$ is the total $jN$ cross section,
$\alpha_{jN}=\mbox{Re}f_{jN}(0)/\mbox{Im}f_{jN}(0)$ is the ratio of the real-to-imaginary part of the forward $jN$ amplitude
($j=\bar p, \bar\Lambda$).
Equations (\ref{F_barp}),(\ref{F_barL}) can be obtained by applying the eikonal approximation to solve the Schr\"odinger equation
for the scattering of a particle in the external potential (cf. ref. \cite{LL}) which is then replaced by the optical potential
in the low-density approximation. Since the factors $F_{\bar p}(\bm{r})$, $F_{\bar\Lambda}(\bm{r})$ vary weakly over
distances $\sim m_K^{-1}$, the $S$-matrix can be calculated in the local approximation, which amounts to multiplying the integrands
in the matrix elements (\ref{calM_K})-(\ref{calM_kappa}) by $F_{\bar p}(\bm{r}) F_{\bar\Lambda}(\bm{r})$. (Similar expressions can
also be found, e.g., in refs. \cite{Bando:1990yi,Frankfurt:1994nn}.) In numerical calculations we applied the momentum-dependent total $\bar pN$ cross
section and the ratio $\alpha_{\bar p N}$ as described in ref. \cite{Larionov:2016xeb}. We have assumed that
$\sigma_{\bar\Lambda N} = \sigma_{\bar p N}$ at the same beam momenta, which is supported by experimental data on the total $\bar\Lambda p$
cross section at $p_{\rm lab}=4\div14$ GeV/c \cite{Eisele:1976fe}. For simplicity we have set $\alpha_{\bar \Lambda N}=0$.
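As an illustration of the absorption factors (\ref{F_barp}), (\ref{F_barL}), the eikonal integral can be evaluated numerically along a straight-line trajectory through a Woods-Saxon nucleon density. The sketch below is not the code used in this work; the density parameters, the grid, and the value of the cross section are illustrative assumptions only.

```python
import numpy as np

def ws_density(r, rho0=0.17, R=4.1, a=0.55):
    """Illustrative Woods-Saxon nucleon density in fm^-3 (parameters assumed)."""
    return rho0 / (1.0 + np.exp((r - R) / a))

def eikonal_factor(b, z, sigma, alpha=0.0, outgoing=False):
    """Eikonal factor F(r) at impact parameter b and longitudinal position z (fm).

    sigma is the total cross section in fm^2 (1 fm^2 = 10 mb). The incoming
    antiproton accumulates absorption along the path behind the point (z - s),
    the outgoing antihyperon along the path ahead of it (z + s),
    cf. Eqs. (F_barp), (F_barL).
    """
    s = np.linspace(0.0, 30.0, 3001)                 # path-length grid in fm
    zpath = z + s if outgoing else z - s
    r = np.sqrt(b**2 + zpath**2)
    integral = np.sum(ws_density(r)) * (s[1] - s[0])  # simple Riemann sum
    return np.exp(-0.5 * sigma * (1.0 - 1j * alpha) * integral)

# A central trajectory (b = 0) reaching the nuclear centre is strongly attenuated
print(abs(eikonal_factor(b=0.0, z=0.0, sigma=5.0)))
```

For $\sigma \sim 50$ mb the incoming wave reaching the centre of a medium-heavy nucleus is suppressed to roughly the 10--20\% level, illustrating why absorption reduces the coherent production cross sections so strongly.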
Note that the factor $\sqrt{\Omega}$ in Eqs.(\ref{calM_K})-(\ref{calM_kappa}) expresses the modification of the elementary
$\bar p p \to \bar\Lambda \Lambda$ amplitude due to ISI and FSI in the colliding system.
In turn, the factor $F_{\bar p}(\bm{r}) F_{\bar\Lambda}(\bm{r})$ takes into account the modification of the
$^AZ(\bar p,\bar\Lambda)\,^A_\Lambda(Z-1)$ amplitude due to sequential elastic rescattering of the incoming $\bar p$ and outgoing
$\bar \Lambda$ on different nucleons.
\subsection{Nuclear Structure Aspects}
Consistent with the covariant formulation of the production vertices, the nucleon and hyperon single-particle bound-state wave functions are determined
as solutions of a static Dirac equation with scalar and vector potentials, similar to refs. \cite{Glendening:1992du,Keil:1999hk,Bender:2009cj}.
The baryon Dirac-spinors are obtained from the fermion wave equation (cf. \cite{BLP}):
\begin{equation}\label{statDirac}
\left( -i \bvec{\alpha}\cdot \bvec{\nabla} + \beta m^*_B(r) + V_B(r)+q_BV_C(r)- \varepsilon \right) \psi_B(\bm{r}) =0,
\end{equation}
where $m^*_B(r)=m_B+S_B(r)$ is the effective (Dirac) mass.
Both the scalar ($S_B$) and nuclear vector ($V_B$) potentials are in general superpositions of
the classical meson fields $U_{BM}$, weighted by the strong interaction coupling constants appropriate for the given baryon.
Here, $M=\sigma(I=0,J^{P}=0^+),~\omega(0,1^-),~\delta(1,0^+),~\rho(1,1^-)$ stands for the meson mediating the interaction in the respective channel.
For the nucleons the scalar and vector potentials are defined as
\begin{eqnarray}
S_N(r) &=& U_{N\sigma}(r) + U_{N\delta}(r) \tau^3~, \label{S_N}\\
V_N(r) &=& U_{N\omega}(r) + U_{N\rho}(r) \tau^3~, \label{V_N}
\end{eqnarray}
where $\tau^3=+1(-1)$ for the neutron (proton).
For charged particles with charge $q_B$ also the static Coulomb potential ($V_C$) contributes \cite{Keil:1999hk}.
The meson fields are parameterized by Woods-Saxon (WS) form factors:
\be
U_{NM}(r) = \frac{U_{NM}^{(0)}}{e^\frac{r-R_M}{a_M}+1}~. \label{U_WS}
\ee
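For concreteness, the WS parameterization (\ref{U_WS}) with $R_M=r_{0,M}A^{1/3}$ can be evaluated directly. The minimal sketch below uses the scalar-isoscalar ($\sigma$) parameters of Table~\ref{tab:SelfE} for $A=40$:

```python
import numpy as np

def ws_potential(r, U0, r0, a, A=40):
    """Woods-Saxon potential U_NM(r) of Eq. (U_WS) in MeV, with R_M = r0 * A^(1/3)."""
    R = r0 * A ** (1.0 / 3.0)
    return U0 / (np.exp((r - R) / a) + 1.0)

# Scalar-isoscalar (sigma) channel, parameters from Table tab:SelfE
U_sigma = lambda r: ws_potential(r, U0=-402.0, r0=1.0806, a=0.553)
print(U_sigma(0.0))                            # nearly the full -402 MeV depth at the centre
print(U_sigma(1.0806 * 40 ** (1.0 / 3.0)))     # exactly half depth at r = R
```

The potential is essentially flat inside the nucleus and falls off over the surface-thickness scale $a_M$, as expected for a mean-field form factor.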
Assuming spherically symmetric potentials, the eigenfunctions of the Dirac equation are characterized by the radial, orbital and total angular momentum quantum numbers,
$n,~l,~j$, respectively, together with the magnetic quantum number $m \equiv j_z$. The spinors are given by the upper and lower Pauli-type components
\begin{equation}
\psi_{nljm}(\bm{r})=
\left(
\begin{array}{l}
f_{nlj}(r) \mathcal{Y}_{jm}^l(\Theta,\phi)\\
i g_{nlj}(r) \mathcal{Y}_{jm}^{l^\prime}(\Theta,\phi)
\end{array}
\right)~, \label{psi_nljm}
\end{equation}
where $l^\prime=2j-l$, and $\mathcal{Y}_{jm}^l(\Theta,\phi)$ denotes the spherical spin-orbit spinor \cite{VMKh}.
During the calculation, the strength factors $U_{NM}^{(0)}$ and the geometrical parameters $R_M, a_M$
are treated as global variational parameters.
They are determined self-consistently by the constraint of reproducing nuclear binding energies and nuclear root-mean-square radii.
The fitted potentials correspond to the full self-energies, including rearrangement contributions.
Nuclear binding energies are calculated by projecting out the rearrangement self-energy contributions \cite{Lenske:1995wyj}.
In the spirit of the relativistic mean-field (RMF) approach, the volume integrals of the rearrangement-corrected potentials
are related to the density-averaged meson-baryon coupling constants as follows:
\be
g^2_{MNN} = (-1)^{J+1} m_M^2\, \frac{\int d^3r U_{NM}(r)}{\int d^3r \rho_M(r)}~, \label{eq:Coupling}
\ee
where $m_M$ is the meson mass. The source densities of the meson fields are determined as the expectation values of
the nucleon field, $\psi(\bm{r})$, operator products:
$\rho_\sigma(r)=\langle\bar\psi(\bm{r})\psi(\bm{r})\rangle$,
$\rho_\omega(r)=\langle\psi^\dag(\bm{r})\psi(\bm{r})\rangle$,
$\rho_\delta(r)=\langle\bar\psi(\bm{r}) \tau^3 \psi(\bm{r})\rangle$,
$\rho_\rho(r)=\langle\psi^\dag(\bm{r}) \tau^3 \psi(\bm{r})\rangle$.
The hyperon self-energies are defined correspondingly. In that case the vertex coupling is replaced by the product $g^2_{MNN} \to g_{MYY}g_{MNN}$.
As in \cite{Keil:1999hk} we define the scaling factor $R_{YM}=g_{MYY}/g_{MNN}$, which allows one to write the hyperon potentials in leading order
as $U_{YM}(r)=R_{YM}U_{NM}(r)$.
Since the $\Lambda$ hyperon is an uncharged isoscalar particle, its scalar and vector potentials contain only isoscalar components,
i.e. $S_{\Lambda}(r)=U_{\Lambda\sigma}(r)$ and $V_{\Lambda}(r)=U_{\Lambda\omega}(r)$.
\section{Application to Hypernucleus Production on an ${}^{40}\mbox{Ar}$ Target}
\label{results}
As a representative case we consider the reaction
$
\bar p + {}^{40}\mbox{Ar} \to \bar\Lambda + {}^{40}_{~\Lambda}\mbox{Cl}
$.
The choice of the ${}^{40}\mbox{Ar}$ target is motivated by the future \={P}ANDA experiment at FAIR where noble gases will be
used as targets\footnote{For lighter nuclei, such as ${}^{20}\mbox{Ne}$, the recoil corrections should be taken into account
in more detail, cf. \cite{Bando:1990yi}.}.
The WS parameters of the scalar and nuclear vector potentials in that mass region are displayed in Table~\ref{tab:SelfE} where also the derived coupling constants
for standard values of the meson masses are shown. It is seen that the self-consistently derived values of the $\sigma NN$ and $\omega NN$ coupling constants
agree almost perfectly with the values used in other RMF approaches, e.g. the widely used NL3 parameter set \cite{Lalazissis:1996rd}.
However, here we include also the otherwise often neglected scalar-isovector interaction channel, represented by the $\delta/a_0(980)$ meson, which is important
to keep track of the mass evolution far off beta-stability.
Since the signs of the scalar-isovector and vector-isovector fields are opposite, these two fields largely compensate each other. Thus, the $\rho NN$ coupling constant
is larger than that of NL3 (see also the dedicated study of nuclear matter properties in the RMF models with and without $\delta$ meson in ref. \cite{Liu:2001iz}).
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Self-Energy & $U_{NM}^{(0)}$ [MeV]& $r_{0,M}$ [fm] & $a_M$ [fm]& Meson & Mass [MeV]& $g^2_{MNN}/4\pi$ \\ \hline
scalar-isoscalar & -402.0& 1.0806 & 0.553 & $\sigma$ & 550 & 8.1179 \\
vector-isoscalar & 328.0& 1.0700 & 0.520 & $\omega$ & 783 & 12.8052 \\
scalar-isovector & $-80.0 \alpha$& 1.1800 & 0.500 & $\delta$ & 980 & 6.3037\\
vector-isovector & $ 90.0 \alpha$& 1.1500 & 0.520 & $\rho$ & 775 & 4.1794\\
\hline
\end{tabular}
\caption{\label{tab:SelfE} Nucleon mean-field potentials and meson-nucleon coupling constants.
The potential radii in Eq.(\ref{U_WS}) are expressed as $R_M=r_{0,M}A^{1/3}$. The coupling constants are defined by Eq.\protect{(\ref{eq:Coupling})}.
Note that the isovector potentials include the isospin asymmetry factor of the nucleus $\alpha=(N-Z)/A$.}
\end{center}
\end{table}
The calculated binding energy of the ${}^{40}\mbox{Ar}$ nucleus is $B=343.58$~MeV, which compares very well with the value from the AME compilation
\cite{Audi:2014eak}, $B_{exp}=343.81$~MeV. The r.m.s. radii of the proton and neutron density distributions are, respectively,
$\sqrt{\langle r^2 \rangle_p}=3.30$ (3.33) fm, and $\sqrt{\langle r^2 \rangle_n}=3.41$ (3.43) fm, where the phenomenological values from
the Skyrme-Hartree-Fock systematics are given in brackets. Without going into details we mention that after a very modest, $\sim 0.01\%$,
modification of the radius parameter $r_{0,\sigma}$ the binding energies of the neighboring isotopes, i.e. ${}^{39}\mbox{Cl}$ and ${}^{39}\mbox{Ar}$,
are reproduced, thus describing properly also the proton and neutron separation energies in ${}^{40}\mbox{Ar}$.
Under the assumption that the nuclear potentials do not change after a sudden removal of the valence proton, the $\Lambda$-hyperon scalar and vector potentials
in the ${}^{40}_{~\Lambda}\mbox{Cl}$ nucleus were obtained by multiplying the scalar and vector nucleon potentials in the ${}^{40}\mbox{Ar}$ nucleus by the factors
$R_{\Lambda\sigma}=0.525$ and $R_{\Lambda\omega}=0.550$, respectively. This leads to a good agreement of the $\Lambda$ energy levels with the empirical
systematics and with the previous relativistic mean-field calculations \cite{Keil:1999hk}, as seen from Table~\ref{tab:Lambda_bind}.
\begin{table}[htb]
\caption{\label{tab:Lambda_bind}
Binding energies of the $\Lambda$ states in the ${}^{40}_{~\Lambda}\mbox{Cl}$ nucleus.
Empirical $\Lambda$ binding energies (spin-orbit splitting not resolved) for ${}^{40}_{~\Lambda}\mbox{Ca}$
from ref. \cite{Keil:1999hk} are given in brackets.}
\begin{center}
\begin{tabular}{|l|l|}
\hline
$\Lambda$ state~~~& $B_\Lambda$ [MeV]~~~~~\\
\hline
$1s_{1/2}$ & 18.55 ($18.7\pm1.1$) \\
$1p_{3/2}$ & 10.20 ($9.9\pm1.1$) \\
$1p_{1/2}$ & 9.26 ($9.9\pm1.1$) \\
$1d_{5/2}$ & 2.14 ($1.5\pm1.1$) \\
$2s_{1/2}$ & 1.44 \\
$1d_{3/2}$ & 0.84 ($1.5\pm1.1$) \\
\hline
\end{tabular}
\end{center}
\end{table}
In order to ensure that after the reaction the residual core nucleus carries as little excitation energy as possible,
we consider only strangeness creation processes on protons of the ${}^{40}\mbox{Ar}$ $1d_{3/2}$ valence shell.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.4]{dsigdO_sum_set1.eps}
\includegraphics[scale = 0.4]{dsigdO_sum_set2.eps}
\end{center}
\caption{\label{fig:dsigdO_sum} Angular differential cross section of the reaction ${}^{40}\mbox{Ar}(\bar p,\bar \Lambda){}^{40}_{~\Lambda}\mbox{Cl}$
at $p_{\rm lab}=2$ GeV/c. Lines show the calculations for $\Lambda$ in various states, as indicated. Left and right panels display calculations
without (Set 1) and with (Set 2) $\kappa$ exchange.}
\end{figure}
The differential hypernuclear production cross sections with the $\Lambda$ occupying various shells are compared in Fig.~\ref{fig:dsigdO_sum}.
Irrespective of spin-orbit effects, the cross sections are overall larger for larger hyperon orbital angular momentum $l_\Lambda$.
This is a consequence of the interplay of several effects:
\begin{itemize}
\item The momentum transfer at $\Theta=0$ is small ($\sim 0.3$ GeV/c) implying a suppression of the $p \to \Lambda$ transitions with large orbital momentum transfer.
\item The number of spin states of the $\Lambda$ contributing to the transition probability of Eq.(\ref{calM2}), i.e. $2(2l_\Lambda+1)$, obviously grows with $l_\Lambda$.
\item For $\Lambda$ states with larger $l_\Lambda$ the hyperon probability distribution is increasingly shifted to larger radii.
Hence, the absorption effects are diminished with increasing $l_\Lambda$.
\end{itemize}
The inclusion of $\kappa$ exchange leads to a significant enhancement of the cross sections at small polar angles for all states of the produced hypernucleus,
as is also expected from Fig.~\ref{fig:dsigdOmega}.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.4]{dsigdO_1d2.5_set1.eps}
\includegraphics[scale = 0.4]{dsigdO_1d2.5_set2.eps}
\end{center}
\caption{\label{fig:dsigdO_1d2.5} Angular differential cross section of the reaction ${}^{40}\mbox{Ar}(\bar p,\bar \Lambda){}^{40}_{~\Lambda}\mbox{Cl}$
at $p_{\rm lab}=2$ GeV/c with $1d_{5/2}$ $\Lambda$ state.
As indicated, the IA calculation, the full calculation (with absorption), and the separate meson contributions
to the full calculation are shown by different lines.
The left and right panels display the results without (Set 1) and with (Set 2) $\kappa$ meson, respectively.}
\end{figure}
The largest cross section is obtained for the ${}^{40}_{~\Lambda}\mbox{Cl}$ hypernucleus with $\Lambda$ in the $1d_{5/2}$ state.
The differential angular distribution for this case is analyzed in more detail in Fig.~\ref{fig:dsigdO_1d2.5}.
From the comparison of the full and IA calculations we observe that the absorption of $\bar p$ and $\bar \Lambda$ has a quite significant effect:
it reduces the cross section drastically, amounting at forward angles
to about two orders of magnitude, and smears out the diffractive structures.
Similar effects of the absorption are present also for the other $\Lambda$ states (not shown).
A deeper insight into the production mechanism is obtained by decomposing the total reaction amplitude into different meson exchange parts.
From the partial meson exchange contributions, shown in Fig.~\ref{fig:dsigdO_1d2.5}, it is remarkable that for Set 1 the kaon contribution is small
and the spectrum is dominated by $K^*$, even at large angles, while, at first sight, from Figs.~\ref{fig:sigma_Lbar_L},\ref{fig:dsigdOmega}
one would expect the opposite. For example, in $\bar p A$ collisions at $p_{\rm lab}=2$ GeV/c the $\bar \Lambda$ produced at $\Theta_{\rm lab}=30\degree$
carries away a momentum transfer of $\sim 1$ GeV/c. This corresponds approximately to $\Theta=90\degree$ in the c.m. frame if translated
into the $\bar p p \to \bar\Lambda \Lambda$ reaction in free space. Thus, one would expect (see the left panel of Fig.~\ref{fig:dsigdOmega}) the kaon exchange
to be a factor of five larger than the $K^*$ exchange.
However, in the case of the nuclear target the $K^*$ exchange contribution is larger than that of $K$ exchange even at $\Theta_{\rm lab}=30\degree$.
This surprising result can be understood from the fact that the momentum transfer to the $\bar\Lambda$ is provided by the nucleus as a whole while
the hyperon remains almost at rest. Pseudoscalar meson exchange is suppressed in this case since it proceeds through the lower components
of the proton and $\Lambda$ Dirac spinors, which are suppressed by factors $\sim 1/m_BR$, where $R$ is the nuclear radius.
In contrast, in the case of the free space $\bar p p \to \bar\Lambda \Lambda$ process at $\Theta=90\degree$
the $\Lambda$ is produced with momentum $\sim 1$ GeV/c and, thus, the upper and lower components of its Dirac spinor
are of comparable magnitude which favors the pseudoscalar meson exchange.
The situation is very different in the case of Set 2. Here, $\kappa$ plays the dominant role both for the free scattering
$\bar p p \to \bar\Lambda \Lambda$ and for the hypernucleus production since scalar exchange is not suppressed in recoilless kinematics.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.4]{Ar40_spectrum.eps}
\end{center}
\caption{\label{fig:Ar40_spectrum} The $\Lambda$ binding energy spectrum of the ${}^{40}_{~\Lambda}\mbox{Cl}$ hypernuclei
coherently produced in $\bar p\,{}^{40}\mbox{Ar}$ collisions at $p_{\rm lab}=2$ GeV/c.
The smooth curves are obtained by multiplying the angle-integrated cross sections for hypernucleus production
in the $1s_{1/2}$, $1p_{3/2}$ and $1d_{5/2}$ states by Gaussians of width FWHM=1.5 MeV, which is
a typical experimental energy resolution.}
\end{figure}
As we see from Fig.~\ref{fig:Ar40_spectrum}, the cross section of coherent hypernucleus production in the different states is much larger
when the $\kappa$ exchange is included. This is a pure quantum coherence effect, since the total
$\bar p p \to \bar\Lambda \Lambda$ cross sections differ by only $\sim 30\%$ at $p_{\rm lab}=2$ GeV/c
(Fig.~\ref{fig:sigma_Lbar_L}) while the hypernuclear production cross sections differ by almost one order of magnitude between Set 1 and Set 2.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.3]{Ar40_1d2.5L_1d1.5ph_md.eps}
\includegraphics[scale = 0.3]{Ar40_1d1.5L_1d1.5ph_md.eps}
\includegraphics[scale = 0.3]{Ar40_1p1.5L_1d1.5ph_md.eps}
\includegraphics[scale = 0.3]{Ar40_1p0.5L_1d1.5ph_md.eps}
\includegraphics[scale = 0.3]{Ar40_1s0.5L_1d1.5ph_md.eps}
\includegraphics[scale = 0.3]{Ar40_2s0.5L_1d1.5ph_md.eps}
\end{center}
\caption{\label{fig:md} Beam momentum dependence of the ${}^{40}_{~\Lambda}\mbox{Cl}$ hypernucleus production cross section
in $\bar{p}\,{}^{40}\mbox{Ar}$ collisions. The thick and thin solid lines show, respectively, the results with (Set 2)
and without (Set 1) $\kappa$ exchange. Different panels display calculations for different $\Lambda$ states,
as indicated.}
\end{figure}
A robust signal of the $\kappa$ exchange is visible in the momentum dependence of the hypernucleus
production cross section, as seen in Fig.~\ref{fig:md}.
In calculations without $\kappa$ the cross section is dominated by $K^*$ exchange, which leads to a cross section growing with increasing beam energy.
With Set 2, the $\kappa$ meson dominates at moderate beam momenta $\sim 1.5\div 3$ GeV/c (right panel of Fig.~\ref{fig:sigma_Lbar_L}).
Its contribution is seen as a characteristic shoulder in the $p_{\rm lab}$ dependence of the hypernuclear production cross section
and even as the appearance of a maximum for the $1d_{5/2}$ $\Lambda$ state.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.3]{Ar40_1d2.5L_1d1.5ph_tstAbs_md.eps}
\end{center}
\caption{\label{fig:tstAbs_md} Same as in Fig.~\ref{fig:md} for $1d_{5/2}$ $\Lambda$ state,
but for calculations without antihyperon absorption and using IA, as indicated.
In the case of IA, shown are the cross sections multiplied by 0.16.}
\end{figure}
In Fig.~\ref{fig:tstAbs_md} we present the results for
the beam momentum dependence of the ${}^{40}_{~\Lambda}\mbox{Cl}(1d_{5/2})$ hypernucleus production cross section obtained
by neglecting the absorption of $\bar \Lambda$ and by using the IA. In the Set 2 calculation without
$\bar\Lambda$ absorption, the maximum in the $p_{\rm lab}$ dependence shifts to smaller beam momenta and becomes
sharper. The same effect is obtained by additionally removing the $\bar p$ absorption (IA calculation). Thus, removing
the initial/final state absorption makes the difference between the Set 1 and Set 2 calculations even stronger.
Therefore this difference is a clean manifestation of the $\kappa$ exchange and not an artifact of a particular approximation
for the ISI/FSI effects.
\section{Summary and conclusions}
\label{SumConcl}
In the present work, the coherent hypernucleus production in $\bar pA$ collisions was investigated.
The production dynamics of the elementary and the in-medium annihilation amplitudes were described in a covariant meson exchange approach.
ISI and FSI of the scattered baryons have been taken into account by eikonal theory.
Baryon bound states were obtained by a variational approach using a RMF-model.
The approach was applied to hypernuclear production in coherent reactions on the medium-heavy ${}^{40}\mbox{Ar}$ target nucleus in the momentum range
$p_{\rm lab}\sim 1.5\div20$~GeV/c. It has been found that the total hypernucleus production cross sections populating a fixed quantum state generally
grow with increasing beam momentum from several nb to a few tens of nb, with a certain sensitivity to the $\Lambda$ bound state. The dynamics of the
${}^{40}\mbox{Ar}(\bar p,\bar\Lambda){}^{40}_{~\Lambda}\mbox{Cl}$ reaction on the $1d_{3/2}$ valence-shell proton favors the production of $\Lambda$
states with $j=l+1/2$, with the cross sections increasing with $l$.
We have demonstrated that the pseudoscalar ($K$) exchange is strongly suppressed for the reactions replacing a bound proton by a bound $\Lambda$ hyperon.
Thus, the production mechanism is governed by the exchange of natural parity vector and scalar strange mesons. Including only $K^*$ exchange produces
smooth and structureless cross sections increasing steadily with beam momentum for all possible bound $\Lambda$ states. However, if the exchange of
the scalar $\kappa$ meson is taken into account, we find that at beam momenta in the range of $p_{\rm lab}=4\div6$~GeV/c a rather sudden transition
from increase to saturation occurs or, as in the case of the $1d_{5/2}$ $\Lambda$ state, a maximum emerges. These results strongly suggest
that coherent hypernuclear production in $\bar p A$ annihilation reactions could be a suitable tool to test in considerable detail the dynamics
of the production process, down to the possibility of identifying contributions from the scalar $\pi K$ correlation as described by the $\kappa$ meson.
As mentioned before, the planned \={P}ANDA@FAIR experiment would be a suitable facility for such studies, but experiments could also be performed at J-PARC
if the occasionally discussed antiproton option is realized.
The theoretical methods sketched above are of general character. With the appropriate choice of parameters
they can be applied to any kind of coherent hyperon production process on nuclei,
in particular, to the process $(\bar p, \Lambda)$ with the capture of $\bar \Lambda$ in the residual nucleus, which requires large momentum
transfer to the struck proton. It is clear that the shell model description of the nuclear ground state is absolutely necessary for the description
of such processes.
However, we would also like to mention that a wide class of hard semiexclusive processes,
such as $(p,pp)$, $(\bar p, \bar p p)$, $(\bar p, \bar \Lambda \Lambda)$, related to the color transparency studies,
might be sensitive to the shell model treatment of the nuclear ground state \cite{Frankfurt:1994nn} and can be studied
theoretically with similar methods.
\begin{acknowledgments}
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. Le439/9.
Stimulating discussions with M. Bleicher and M. Strikman are gratefully acknowledged.
\end{acknowledgments}
\section{Introduction}
The motivation for the logics in this paper comes from mathematical morphology as used in image processing and from the extension of this body of techniques
to the case of sets of pixels having the structure of a graph.
A black-and-white image can be identified with a subset of $\mathbb{Z}^2$, the black pixels being those in the subset.
Mathematical morphology uses what it calls a {\em structuring element} as a `probe' or a `lens' through which to view an image. By use of the appropriate structuring element
certain features of an image may be removed or rendered less prominent whereas other features can be accentuated. The mathematical basis of this approach is that
a relation $R$ on a set $U$ provides functions which map subsets of $U$ to subsets of $U$. In mathematical morphology these functions are known as {\em dilation} and {\em erosion},
defined as follows where $X \subseteq U$.
$$\begin{array}{lrcl}
\text{Dilation: } & X \dilate R & = & \{u \in U \mid \text{ for some $x$} \; (x \mathrel{R} u \text{ and } x \in X)\},\\[1ex]
\text{Erosion: } & R \erode X & = & \{u \in U \mid \text{ for all $x$} \; (u \mathrel{R} x \text{ implies } x \in X)\}.
\end{array}$$
From these operations the {\em opening} and {\em closing} of $X$ by $R$ are defined respectively by $(R \erode X) \dilate R$ and
$R \erode (X \dilate R)$. Given an appropriate choice of $R$ and when $X$ consists of the black pixels in an image,
the opening can be used to remove unwanted black pixels from the image. Dually, the closing can be used to add pixels to an image, for example to fill in white cracks between black parts of an image.
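In the set-based case these four operations are direct to implement. The following sketch (illustrative only, with a relation represented as a set of pairs) mirrors the definitions above:

```python
def dilate(X, R):
    """X (+) R = {u : exists x in X with x R u}."""
    return {u for (x, u) in R if x in X}

def erode(R, X, U):
    """R (-) X = {u in U : for all x, u R x implies x in X}."""
    return {u for u in U if all(x in X for (v, x) in R if v == u)}

def opening(R, X, U):
    """(R (-) X) (+) R: removes features the structuring element cannot contain."""
    return dilate(erode(R, X, U), R)

def closing(R, X, U):
    """R (-) (X (+) R): fills in gaps smaller than the structuring element."""
    return erode(R, dilate(X, R), U)

# 1-D toy image: pixels 0..9, structuring element = reflexive adjacency relation
U = set(range(10))
R = {(u, v) for u in U for v in U if abs(u - v) <= 1}
X = {2, 3, 4, 7}            # a three-pixel blob plus an isolated pixel
print(opening(R, X, U))     # the isolated pixel 7 is removed: {2, 3, 4}
print(closing(R, X, U))     # the white gap at 5, 6 is filled: {2, 3, 4, 5, 6, 7}
```

The example shows the typical use described above: opening deletes isolated black pixels, closing fills white cracks between black regions.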
The most common formulation of mathematical morphology is in terms of structuring elements which usually consist of small patterns of pixels, however it is
well-known~\cite[p.60]{NajmanTalbot2010} that every structuring element gives rise to a relation on the set of all pixels.
A recent development in mathematical morphology has been the extension of the techniques to graphs~\cite{CoustyNajmanCVIU2013}.
One possibility is to start with $\mathbb{Z}^2$ as a grid of pixels but with the graph structure that arises by taking pixels as nodes and putting edges between adjacent pixels. A subgraph then specifies a set of pixels but not necessarily containing all the adjacencies (edges) from the underlying graph.
Extending the operations of dilation and erosion to the case of graphs requires an appropriate notion of a relation on a graph.
While there are several possible notions of what should be meant by a relation on a graph, the definition which appears in~\cite{Stell2015} can be justified in view
of the bijective correspondence between these relations and union-preserving functions on the lattice of subgraphs. Relations on graphs are most conveniently developed in the more general situation of hypergraphs -- in which edges can be incident with arbitrary non-empty sets of nodes. These relations are relations on the set of all edges
and nodes which satisfy a stability condition.
\begin{definition}
A {\em hypergraph} is a set $U$ together with an incidence relation $H \subseteq U \times U$ such that $H$ is reflexive and whenever
$x \mathrel{H} y$ and $y \mathrel{H} z$ then $x = y $ or $y = z$.
A hypergraph is thus a special case of a set $U$ equipped with a pre-order, $H$. A subset $X \subseteq U$ of a hypergraph $(U,H)$ is a {\em sub hypergraph} or more briefly a {\em subgraph} when for all $x \in X$, if $x \mathrel{H} u$ then
$u \in X$.
\end{definition}
Given a hypergraph, $(U, H)$ every $u \in U$ is either an {\em edge} or a {\em node}. The nodes are those elements, $u$, for which $u \mathrel{H} v$ implies $u = v$,
and the edges are those $u$ for which there exists a $v$ such that $u \mathrel{H} v$ and $u \neq v$.
If $u$ is an edge, and $v$ is a node, and $u \mathrel{H} v$ then we say that $u$ and $v$ are {\em incident}.
A graph is the special case of a hypergraph in which each edge is incident with either one or two nodes.
The relations we consider on hypergraphs are
those $R \subseteq U \times U$ which satisfy the stability condition $H;R;H \subseteq R$.
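Stability with respect to the incidence pre-order is easy to check computationally. In the small sketch below (element names are hypothetical), a hypergraph has one edge $e$ incident with nodes $a$ and $b$; a raw relation is made stable by pre- and post-composing with $H$:

```python
def compose(R, S):
    """Relational composition R ; S."""
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}

def is_stable(R, H):
    """Check the stability condition H ; R ; H <= R."""
    return compose(compose(H, R), H) <= R

# A hypergraph with one edge e incident with the nodes a and b
U = {"e", "a", "b"}
H = {(u, u) for u in U} | {("e", "a"), ("e", "b")}

R0 = {("a", "b")}                  # not stable: H;R0;H also relates e to b
print(is_stable(R0, H))            # False
Rc = compose(compose(H, R0), H)    # stable closure of R0
print(is_stable(Rc, H))            # True
```

Since $H$ is a pre-order, $H;H=H$, so $H;R_0;H$ is always stable; the closure here adds the pair $(e,b)$ forced by the incidence of $e$ with $a$.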
Connections between modal logic and mathematical morphology have already been studied by~\cite{BlochJANCL2002} in the set-based case.
While not following exactly the approach in~\cite{BlochJANCL2002}, the general idea is that
a frame $(U,R)$ in Kripke semantics provides a set $U$ of pixels, subsets of which are black pixels in particular images,
and an accessibility relation $R$ generated by a structuring element.
The modalities $\dia$, $\bdia$ and $\Box$ function semantically as
operators taking subsets to subsets, with $\dia$
associated to $X \mapsto X \dilate \breve{R}$ (where $\breve{R}$ is the ordinary converse of $R$), $\bdia$ associated to $X \mapsto X \dilate R$, and $\Box$ associated to $X \mapsto R \erode X$. A propositional variable $p$ can be understood as a set of pixels;
the opening and closing of this image are then $\bdia \Box p$ and $\Box \bdia p$ respectively,
and properties of the morphological operations and statements about specific images and parts of images can be expressed in the logic.
Set-based mathematical morphology is naturally associated to a logic which is classical (since the set of subsets of a set is
a Boolean algebra) and which is modal (since each structuring element provides an accessibility relation).
The generalization of this to mathematical morphology on hypergraphs results in a logic which is bi-intuitionistic (since the set
of subgraphs of a hypergraph forms a bi-Heyting algebra) and which is modal too.
These considerations led to the development of a bi-intuitionistic tense logic, $\mathbf{BiSKt}$, in~\cite{Stell2016}.
The study of intuitionistic modal and tense logic has been carried out in several works, e.g.,~\cite{Esakia2006,Ewald1986,Ono1977,Wolter1999,Sotirov1980,Hasimoto2001}. The bi-intuitionistic tense logic studied in~\cite{Stell2016} can be regarded both as a tense expansion of bi-intuitionistic logic and as an expansion of the intuitionistic modal logic studied in~\cite{Ono1977,Wolter1999} with the `past-tense' operator and the coimplication. A Hilbert system for bi-intuitionistic logic was first given by Rauszer~\cite{Rauszer1974a}. When we restrict attention to the intuitionistic modal fragment of bi-intuitionistic stable tense logic, our Kripke semantics coincides with the Kripke semantics for intuitionistic modal logic given in~\cite{Wolter1999}. The finite model property of intuitionistic modal logic was studied in~\cite{Ono1977,Sotirov1980,Hasimoto2001}. As far as the authors know, no general results on strong completeness or the finite model property are yet known for bi-intuitionistic tense logic. This paper gives a first step in this direction (see also the discussion in Section \ref{sec:Related}).
This paper is organized as follows. Section \ref{sec:semantics} introduces the notion of stable relations on preorders and then presents our syntax and Kripke semantics for bi-intuitionistic stable tense logic. Section \ref{sec:axiomatisation} provides a Hilbert system for the smallest bi-intuitionistic stable tense logic and shows that it is sound with respect to Kripke semantics. We note that our axiomatisation of the bi-intuitionistic fragment is much simpler than Rauszer's axiomatisation~\cite{Rauszer1974a}. Section \ref{sec:completeness} establishes the strong completeness of the bi-intuitionistic stable tense logic with respect to Kripke semantics, and Section \ref{sec:extension} extends this argument to several extensions of $\mathbf{BiSKt}$. Finally, Section \ref{sec:fmp} follows Hasimoto's technique for intuitionistic modal logic to show the finite model property of some extensions of $\mathbf{BiSKt}$, which implies the decidability of those extensions, provided the logics are finitely axiomatizable. Section \ref{sec:Related} comments on related literature and future work.
\section{Kripke Semantics for Bi-intuitionistic Stable Tense Logic}
\label{sec:semantics}
\subsection{Stable Relations on Preorders}
\begin{definition}
\label{defn-stable}
Let $H$ be a preorder on a set $U$. We say that $X \subseteq U$ is an {\em $H$-set} if $X$ is closed under $H$-successors, i.e., $uHv$ and $u \in X$ jointly imply $v \in X$ for all elements $u$, $v \in U$.
Given a preorder $(U,H)$, a binary relation $R \subseteq U \times U$ is {\em stable} if it satisfies $H;R;H \subseteq R$.
\end{definition}
\noindent It is easy to see that a relation $R$ on $U$ is stable if and only if $R;H \subseteq R$ and $H;R \subseteq R$. Given any binary relation $R$ on $U$, $\breve{R}$ is defined as the ordinary converse of $R$. Even if $R$ is a stable relation on $U$, its converse $\breve{R}$ may fail to be stable.
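Since $H$ and $R$ here are just finite sets of ordered pairs, stability is mechanically checkable. A minimal Python sketch of ours (relational composition plus the test $H;R;H \subseteq R$, run on the preorder and relation of Example \ref{ex:stable} (1) below):

```python
def compose(R, S):
    """Relational composition R;S = {(x, z) : there is y with xRy and ySz}."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def is_stable(H, R):
    """The stability condition H;R;H <= R."""
    return compose(compose(H, R), H) <= R

U = {0, 1, 2, 3}
H = {(0, 1), (2, 3)} | {(u, u) for u in U}   # a preorder on U
R = {(0, 3), (1, 3)}                          # a stable relation on (U, H)

assert is_stable(H, R)
assert not is_stable(H, {(1, 3)})  # dropping (0,3) breaks H;R;H <= R
```

The second assertion shows why stability forces $(0,3)$ into $R$: composing $(0,1) \in H$ with $(1,3)$ already yields it.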
\begin{definition}[\cite{Stell2016}]
The {\em left converse} $\leftconv R$ of a stable relation $R$ is $H;{\breve{R}};H$.
\end{definition}
It is easy to verify the stability of the left converse $\leftconv R$.
\begin{example}
\label{ex:stable}
\begin{enumerate}
\item A preorder $(U_{1},H_{1})$ is defined as follows: $U_{1}$ = $\setof{0,1,2,3}$ and $H_{1}$ = $\setof{(0,1), (2,3)} \cup \inset{(u,u)}{u \in U_{1}}$. Take a stable relation $R_{1}$ on $(U_{1},H_{1})$ defined by $R_{1}$ = $\setof{(0,3),(1,3)}$ (see the first graph below, where the double solid lines are for $R_{1}$ and the single solid lines are for $H_{1}$ but the reflexive $H_{1}$-arrows are disregarded). Then $\breve{R_{1}}$ = $\setof{(3,0), (3,1)}$ is not stable, but $\leftconv R_{1}$ = $\setof{(3,0),(3,1),(2,0),(2,1)}$ is stable (see the dotted double lines in the second graph below).
\item A preorder $(U_{2},H_{2})$ is defined as follows: $U_{2}$ = $\setof{a,b,c}$ and $H_{2}$ = $\setof{(a,b)} \cup \inset{(u,u)}{u \in U_{2}}$. Take a stable relation $R_{2}$ on $(U_{2},H_{2})$ defined by $R_{2}$ = $\setof{(a,c),(b,c)}$ (see the third graph below where we follow the same convention as in item (1)). Then $\breve{R_{2}}$ = $\setof{(c,a), (c,b)}$ is already stable and so $\breve{R_{2}}$ = $\leftconv R_{2}$ (see the dotted double lines in the fourth graph below).
\[
\xymatrix{
*++[o][F-]{1} \ar@{=>}[r]^{R_{1}} & *++[o][F-]{3} \\
*++[o][F-]{0} \ar[u]^{H_{1}} \ar@{=>}[ur]^{R_{1}} & *++[o][F-]{2} \ar[u]^{H_{1}} \\
}\qquad
\xymatrix{
*++[o][F-]{1} & *++[o][F-]{3} \ar@{:>}[l] \ar@{:>}[dl] \\
*++[o][F-]{0} \ar[u]^{H_{1}} & *++[o][F-]{2} \ar[u]^{H_{1}} \ar@{:>}[l] \ar@{:>}[ul] \\
}
\qquad
\xymatrix{
*++[o][F-]{b} \ar@{=>}[r]^{R_{2}} & *++[o][F-]{c} \\
*++[o][F-]{a} \ar[u]^{H_{2}} \ar@{=>}[ur]^{R_{2}} & \\
}
\qquad
\xymatrix{
*++[o][F-]{b} & *++[o][F-]{c} \ar@{:>}[l] \ar@{:>}[dl] \\
*++[o][F-]{a} \ar[u]^{H_{2}} & \\
}
\]
\end{enumerate}
\end{example}
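The left converse can be computed directly from its definition; the sketch below (ours, with invented helper names) reproduces the relations of item (1) and confirms both claims made there:

```python
def compose(R, S):
    """Relational composition R;S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def left_converse(H, R):
    """The left converse of R: H ; converse(R) ; H."""
    return compose(compose(H, converse(R)), H)

U1 = {0, 1, 2, 3}
H1 = {(0, 1), (2, 3)} | {(u, u) for u in U1}
R1 = {(0, 3), (1, 3)}

lc = left_converse(H1, R1)
assert lc == {(3, 0), (3, 1), (2, 0), (2, 1)}   # as stated in item (1)
# the plain converse is not stable ...
assert not compose(compose(H1, converse(R1)), H1) <= converse(R1)
# ... but the left converse is
assert compose(compose(H1, lc), H1) <= lc
```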
\subsection{Syntax and Kripke Semantics for Bi-intuitionistic Stable Tense Logic}
Let $\mathsf{Prop}$ be a countable set of propositional variables. Our syntax $\mathcal{L}(\mathsf{Mod})$ for bi-intuitionistic stable tense logic consists of all logical connectives of bi-intuitionistic logic, i.e., two constant symbols $\bot$ and $\top$, disjunction $\lor$, conjunction $\land$, implication $\to$, coimplication $\coimp$, and a finite set $\mathsf{Mod} \subseteq \setof{\bdia,\Box, \dia, \bbox}$ of modal operators. The set of all formulas in $\mathcal{L}(\mathsf{Mod})$ is defined in a standard way. For example, when $\mathsf{Mod}$ = $\setof{\bdia,\Box, \dia, \bbox}$, the set $\mathsf{Form}_{\mathcal{L}(\mathsf{Mod})}$ of all formulas of the syntax $\mathcal{L}(\mathsf{Mod})$ is defined inductively as follows:
\[
\varphi ::= \top \,|\, \bot \,|\, p \,|\, \varphi \land \varphi \,|\,\varphi \lor \varphi \,|\, \varphi \to \varphi \,|\, \varphi \coimp \varphi \,|\, \bdia \varphi \,|\, \Box \varphi \,|\, \dia \varphi \,|\, \bbox \varphi \quad (p \in \mathsf{Prop}).
\]
In this section, we mostly assume that $\mathsf{Mod}$ = $\setof{\bdia,\Box, \dia, \bbox}$.
We define the following abbreviations:
\[
\neg \varphi := \varphi \to \bot, \quad \coneg \varphi := \top \coimp \varphi, \quad \varphi \leftrightarrow \psi := (\varphi \to \psi) \land (\psi \to \varphi).
\]
\begin{definition}
\label{defn-semantics}
$F$ = $(U,H,R)$ is an {\em $H$-frame} if $U$ is a nonempty set, $H$ is a preorder on $U$, and $R$ is a {\em stable} binary relation on $U$. A {\em valuation} on an $H$-frame $F$ = $(U,H,R)$ is a mapping $V$ from $\mathsf{Prop}$ to the set of all $H$-sets on $U$. $M$ = $(F,V)$ is an {\em $H$-model} if $F$ = $(U,H,R)$ is an $H$-frame and $V$ is a valuation.
Given an $H$-model $M$ = $(U,H,R,V)$, a state $u \in U$ and a formula $\varphi$, the satisfaction relation $M,u \models \varphi$ is defined inductively as follows:
\[
\begin{array}{lll}
M,u \models p &\iff& u \in V(p), \\
M,u \models \top, && \\
M,u \not\models \bot, & & \\
M,u \models \varphi \lor \psi &\iff& M,u \models \varphi \text{ or } M,u \models \psi, \\
M,u \models \varphi \land \psi &\iff& M,u \models \varphi \text{ and } M,u \models \psi, \\
M,u \models \varphi \to \psi &\iff& \text{ For all $v \in U$ } ((uHv \text{ and } M,v \models \varphi) \text{ imply } M,v \models \psi),\\
M,u \models \varphi \coimp \psi &\iff& \text{ For some $v \in U$ }(vHu \text{ and } M,v\models \varphi \text{ and }M,v \not\models \psi), \\
M,u \models \bdia \varphi &\iff& \text{ For some $v \in U$ } (vRu \text{ and } M,v \models \varphi),\\
M,u \models \Box \varphi &\iff& \text{ For all $v \in U$ } (uRv \text{ implies } M,v \models \varphi),\\
M,u \models \dia \varphi &\iff& \text{ For some $v \in U$ } ((v,u) \in \leftconv R \text{ and } M,v \models \varphi), \\
M,u \models \bbox \varphi &\iff& \text{ For all $v \in U$ } ((u,v) \in \leftconv R \text{ implies } M,v \models \varphi).\\
\end{array}
\]
The {\em truth set} $\den{\varphi}_{M}$ of a formula $\varphi$ in an $H$-model $M$ is defined by $\den{\varphi}_{M}$ := $\inset{u \in U}{M,u \models \varphi}$. If the underlying model $M$ in $\den{\varphi}_{M}$ is clear from the context, we drop the subscript and simply write $\den{\varphi}$. We write $M \models \varphi$ (read: `$\varphi$ is valid in $M$') to mean that $\den{\varphi}_{M}$ = $U$, i.e., $M,u\models \varphi$ for all states $u \in U$. For a set $\Gamma$ of formulas, $M \models \Gamma$ means that $M \models \gamma$ for all $\gamma \in \Gamma$.
Given any $H$-frame $F$ = $(U,H,R)$, we say that a formula $\varphi$ is {\em valid} in $F$ (written: $F \models \varphi$) if $(F,V) \models \varphi$ for every valuation $V$, i.e., $\den{\varphi}_{(F,V)}$ = $U$ for every valuation $V$.
\end{definition}
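The satisfaction clauses can be transcribed into a small model checker. The sketch below is our illustration (the tuple encoding of formulas is invented); it computes truth sets on the $H$-model built from Example \ref{ex:stable} (1) with $V(p) = \setof{1,2,3}$, the model used later in the proof of Proposition \ref{prop:undefinable}:

```python
def truth_set(phi, U, H, R, V):
    """Truth set [[phi]] in the H-model (U, H, R, V), clause by clause."""
    ev = lambda psi: truth_set(psi, U, H, R, V)
    op = phi[0]
    if op == 'top':   return set(U)
    if op == 'bot':   return set()
    if op == 'var':   return set(V[phi[1]])
    if op == 'and':   return ev(phi[1]) & ev(phi[2])
    if op == 'or':    return ev(phi[1]) | ev(phi[2])
    if op == 'imp':   # for all H-successors: phi implies psi
        A, B = ev(phi[1]), ev(phi[2])
        return {u for u in U
                if all(v in B for v in U if (u, v) in H and v in A)}
    if op == 'coimp':  # some H-predecessor satisfies phi but not psi
        A, B = ev(phi[1]), ev(phi[2])
        return {u for u in U
                if any((v, u) in H and v in A and v not in B for v in U)}
    if op == 'bdia':  # some R-predecessor satisfies phi
        A = ev(phi[1])
        return {u for u in U if any((v, u) in R and v in A for v in U)}
    if op == 'box':   # every R-successor satisfies phi
        A = ev(phi[1])
        return {u for u in U if all(v in A for v in U if (u, v) in R)}
    raise ValueError(phi)

U1 = {0, 1, 2, 3}
H1 = {(0, 1), (2, 3)} | {(u, u) for u in U1}
R1 = {(0, 3), (1, 3)}
V1 = {'p': {1, 2, 3}}

bdia_p = truth_set(('bdia', ('var', 'p')), U1, H1, R1, V1)
assert bdia_p == {3}                    # in particular, bdia p fails at 2
# the computed truth set is an H-set, as the proposition below asserts
assert all(v in bdia_p for (u, v) in H1 if u in bdia_p)
```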
As for the abbreviated symbols, we may derive the following satisfaction conditions:
\[
\begin{array}{lll}
M,u \models \neg \varphi &\iff& \text{ For all $v \in U$ } (uHv \text{ implies } M,v \not\models \varphi),\\
M,u \models \coneg \varphi &\iff& \text{ For some $v \in U$ }(vHu \text{ and } M,v \not\models \varphi),\\
M,u \models \varphi \leftrightarrow \psi &\iff& \text{ For all $v \in U$ } (uHv \text{ implies } (M,v \models \varphi \iff M,v \models \psi) ). \\
\end{array}
\]
\begin{proposition}[\cite{Stell2016}]
Given any $H$-model $M$, the truth set $\den{\varphi}_{M}$ is an $H$-set.
\end{proposition}
\begin{proof}
By induction on $\varphi$. When $\varphi$ is of the form $\bdia \psi$, $\Box \psi$, $\dia \psi$ or $\bbox \psi$, we need to use $R;H \subseteq R$, $H;R \subseteq R$, $\leftconv R;H \subseteq \leftconv R$, $H;\leftconv R \subseteq \leftconv R$, respectively. Note that these properties hold since $R$ and $\leftconv R$ are stable.
\end{proof}
\begin{definition}
Given a set $\Gamma \cup \setof{\varphi}$ of formulas, $\varphi$ is a {\em semantic consequence} of $\Gamma$ (notation: $\Gamma \models \varphi$) if, whenever $M,u \models \gamma$ for all $\gamma \in \Gamma$, $M,u \models \varphi$ holds, for all $H$-models $M$ = $(U,H,R,V)$ and all states $u \in U$. When $\Gamma$ is a singleton $\setof{\psi}$, we simply write $\psi \models \varphi$ instead of $\setof{\psi} \models \varphi$. When both $\varphi \models \psi$ and $\psi \models \varphi$ hold, we write $\varphi \eqv \psi$ and say that they are semantically equivalent. When $\Gamma$ is empty, we simply write $\models \varphi$ instead of $\emptyset \models \varphi$.
\end{definition}
The following proposition is easy to verify (cf.~\cite[Lemma 11]{Stell2016}).
\begin{proposition}[\cite{Stell2016}]
\label{prop:sem-basic}
\begin{enumerate}
\item $\Gamma \models \varphi \to \psi$ iff $\Gamma \cup \setof{\varphi} \models \psi$.
\item $F \models \varphi \land \psi \to \gamma$ iff
$F \models \varphi \to (\psi \to \gamma)$. Therefore, $\varphi \land \psi \models \gamma$ iff $\varphi \models \psi \to \gamma$.
\item $F \models (\varphi \coimp \psi) \to \gamma$ iff $F \models \varphi \to (\psi \lor \gamma)$. Therefore, $\varphi \coimp \psi \models \gamma$ iff $\varphi \models \psi \lor \gamma$.
\item $F \models \bdia \varphi \to \psi$ iff $F \models \varphi \to \Box \psi$. Therefore, $\bdia \varphi \models \psi$ iff $\varphi \models \Box \psi$.
\item $F \models \dia \varphi \to \psi$ iff $F \models\varphi \to \bbox \psi$. Therefore, $\dia \varphi \models \psi$ iff $\varphi \models \bbox \psi$.
\item $\dia \varphi \eqv \coneg \Box \neg \varphi$.
\item $\bbox \varphi \eqv \neg \bdia \coneg \varphi$.
\end{enumerate}
\end{proposition}
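Items (4) and (5) are instances of a Galois connection between the diamond and box operations on truth sets, which can be confirmed by exhaustive search on a small frame. A brute-force sketch of ours (the two-element frame and all helper names are invented for illustration) checks $\bdia P \subseteq Q \iff P \subseteq \Box Q$ over all $H$-sets $P, Q$ and all stable relations $R$:

```python
from itertools import chain, combinations, product

def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def subsets(xs):
    xs = sorted(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def h_sets(U, H):
    """All H-sets: subsets closed under H-successors."""
    return [set(s) for s in subsets(U) if all(v in s for (u, v) in H if u in s)]

def bdia(X, U, R):
    return {u for u in U if any((v, u) in R and v in X for v in U)}

def box(X, U, R):
    return {u for u in U if all(v in X for (u2, v) in R if u2 == u)}

U = {0, 1}
H = {(0, 0), (1, 1), (0, 1)}
pairs = [(u, v) for u in U for v in U]
stable = [set(R) for R in subsets(pairs)
          if compose(compose(H, set(R)), H) <= set(R)]

# bdia is left adjoint to Box: bdia(P) <= Q iff P <= Box(Q)
for R in stable:
    for P, Q in product(h_sets(U, H), repeat=2):
        assert (bdia(P, U, R) <= Q) == (P <= box(Q, U, R))
```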
The last two items suggest that $\dia$ and $\bbox$ are definable in the syntax $\mathcal{L}(\bdia,\Box)$. In this sense, we may drop the modal operators $\dia$ and $\bbox$ from the syntax $\mathcal{L}(\bdia,\Box, \dia, \bbox)$. Conversely, we may also ask whether $\bdia$ and $\Box$ are definable in $\mathcal{L}(\bbox)$ and $\mathcal{L}(\dia)$, respectively. We give negative answers to these two questions. For this purpose, we introduce the appropriate notion of {\em bounded morphism} for the syntax $\mathcal{L}(\dia,\bbox)$.
\begin{definition}
Let $M_{i}$ := $(U_{i},H_{i},R_{i},V_{i})$ ($i$ = 1 or 2) be an $H$-model. We say that a mapping $f: U_{1} \to U_{2}$ is an {\em $\mathcal{L}(\dia,\bbox)$-bounded morphism} if it satisfies the following conditions:
\begin{description}
\item[(Atom)] $u \in V_{1}(p)$ $\iff$ $f(u) \in V_{2}(p)$ for all $p \in \mathsf{Prop}$,
\item[($H$-forth)] $uH_{1}v$ implies $f(u)H_{2}f(v)$,
\item[($H$-back)] $f(u)H_{2}v'$ implies that $uH_{1}v$ and $f(v)$ = $v'$ for some $v \in U_{1}$,
\item[($\breve{H}$-back)] $f(u)\breve{H_{2}}v'$ implies that $u\breve{H_{1}}v$ and $f(v)$ = $v'$ for some $v \in U_{1}$,
\item[($\leftconv R$-forth)] $(u,v) \in \leftconv R_{1}$ implies $(f(u),f(v)) \in \leftconv R_{2}$,
\item[($\leftconv R$-back)] $(f(u),v') \in \leftconv R_{2}$ implies that $(u,v) \in \leftconv R_{1}$ and $f(v)$ = $v'$ for some $v \in U_{1}$,
\item[($\breve{\leftconv R}$-back)] $(v',f(u)) \in \leftconv R_{2}$ implies that $(v,u) \in \leftconv R_{1}$ and $f(v)$ = $v'$ for some $v \in U_{1}$.
\end{description}
\end{definition}
By induction on $\varphi$, we can prove the following.
\begin{proposition}
\label{prop:bbox-dia-bmor}
Let $M_{i}$ := $(U_{i},H_{i},R_{i},V_{i})$ ($i$ = 1 or 2) be an $H$-model and $f:U_{1} \to U_{2}$ an {\em $\mathcal{L}(\dia,\bbox)$-bounded morphism}. For any formula $\varphi$ in $\mathcal{L}(\dia,\bbox)$ and any state $u \in U_{1}$, the following equivalence holds:
\[
\text{ $M_{1},u \models \varphi$ $\iff$ $M_{2},f(u) \models \varphi$. }
\]
\end{proposition}
\begin{proposition}\label{prop:undefinable}
\begin{enumerate}
\item The modal operator $\bdia$ is not definable in $\mathcal{L}(\bbox)$.
\item The modal operator $\Box$ is not definable in $\mathcal{L}(\dia)$.
\end{enumerate}
\end{proposition}
\begin{proof}
\noindent (1) Suppose for contradiction that $\bdia p$ is definable by a formula $\varphi$ in $\mathcal{L}(\bbox)$.
Let us define an $H$-model $M_{1}$ = $(U_{1},H_{1},R_{1},V_{1})$ as a pair of an $H$-frame in Example \ref{ex:stable} (1) and a valuation $V_{1}$ defined by $V_{1}(p)$ = $\setof{1,2,3}$, which is an $H$-set. We observe that $M_{1},2 \not\models \bdia p$, i.e., $M_{1},2 \not\models \varphi$, since $2$ has no $R_{1}$-predecessor. Let us define an $H$-model $M_{2}$ = $(U_{2},H_{2},R_{2},V_{2})$ as a pair of an $H$-frame in Example \ref{ex:stable} (2) and a valuation $V_{2}$ defined by $V_{2}(p)$ = $\setof{b,c}$, which is an $H$-set. Then we have $M_{2},c \models \bdia p$, hence $M_{2},c \models \varphi$. Consider the mapping $f:U_{1} \to U_{2}$ defined by $f(0)$ = $a$, $f(1)$ = $b$ and $f(2)$ = $f(3)$ = $c$. We can verify that $f$ is an $\mathcal{L}(\dia,\bbox)$-bounded morphism.
Since $\varphi$ is also a formula in $\mathcal{L}(\dia,\bbox)$, $M_{1},2 \not\models \varphi$ implies $M_{2},f(2) \not\models \varphi$
by Proposition \ref{prop:bbox-dia-bmor}.
But this is a contradiction with $M_{2},c \models \varphi$.
\noindent (2) Suppose for contradiction that $\Box p$ is definable by a formula $\varphi$ in $\mathcal{L}(\dia)$. Define $H$-models $N_{1}$ and $N_{2}$ as follows (see the first and the third graphs below, where all reflexive $H$-arrows are omitted).
\begin{itemize}
\item $N_{1}$ = $(U_{1},H_{1},R_{1},V_{1})$ is defined by: $U_{1}$ = $\setof{0,1,2,3}$, $H_{1}$ = $\setof{(1,0),(3,2)} \cup \inset{(u,u)}{u \in U_{1}}$, $R_{1}$ = $\setof{(1,2),(1,3)}$ and $V_{1}(p)$ = $\setof{0,1,2}$, which is an $H$-set. We have $N_{1},0 \models \Box p$, hence $N_{1},0 \models \varphi$, since $0$ has no $R_{1}$-successor. Moreover, $\leftconv R_{1}$ = $\setof{(2,1),(3,1),(2,0),(3,0)}$ (see the dotted double lines in the second graph below).
\item $N_{2}$ = $(U_{2},H_{2},R_{2},V_{2})$ is defined as follows: $U_{2}$ = $\setof{a,b,c}$, $H_{2}$ = $\setof{(c,b)} \cup \inset{(u,u)}{u \in U_{2}}$, $R_{2}$ = $\setof{(a,b),(a,c)}$, and $V_{2}(p)$ = $\setof{a,b}$, which is an $H$-set. We note that $N_{2},a \not\models \Box p$ (since $aR_{2}c$ but $c \notin V_{2}(p)$), hence $N_{2},a \not\models \varphi$. Moreover, $\leftconv R_{2}$ = $\setof{(b,a),(c,a)}$ (see the dotted double lines in the fourth graph below).
\end{itemize}
\[
\xymatrix{
*++[o][F-]{1} \ar@{=>}[r]^{R_{1}} \ar@{=>}[dr]^{R_{1}} \ar[d]^{H_{1}} & *++[o][F-]{3} \ar[d]^{H_{1}}\\
*++[o][F-]{0} & *++[o][F-]{2} \\
}
\quad
\xymatrix{
*++[o][F-]{1} \ar[d]^{H_{1}} & *++[o][F-]{3} \ar[d]^{H_{1}} \ar@{:>}[l] \ar@{:>}[ld] \\
*++[o][F-]{0} & *++[o][F-]{2} \ar@{:>}[l] \ar@{:>}[lu] \\
}
\quad
\xymatrix{
*++[o][F-]{a} \ar@{=>}[dr]^{R_{2}}\ar@{=>}[r]^{R_{2}} & *++[o][F-]{c} \ar[d]^{H_{2}} \\
& *++[o][F-]{b} \\
}
\quad
\xymatrix{
*++[o][F-]{a} & *++[o][F-]{c} \ar[d]^{H_{2}} \ar@{:>}[l] \\
& *++[o][F-]{b} \ar@{:>}[lu] \\
}
\]
Consider the mapping $f:U_{1} \to U_{2}$ defined by $f(0)$ = $f(1)$ = $a$, $f(2)$ = $b$, $f(3)$ = $c$. Then $f$ is an $\mathcal{L}(\dia,\bbox)$-bounded morphism. By an argument similar to that in (1), a contradiction follows.
\end{proof}
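The seven bounded-morphism conditions are finitely checkable on these models. The following sketch (ours; the helper names are invented) verifies that the map $f$ used in part (2) of the proof satisfies all of them:

```python
def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def left_converse(H, R):
    return compose(compose(H, converse(R)), H)

# the models N1, N2 and the map f from part (2) of the proof
U1 = {0, 1, 2, 3}
H1 = {(1, 0), (3, 2)} | {(u, u) for u in U1}
R1 = {(1, 2), (1, 3)}
V1 = {'p': {0, 1, 2}}
U2 = {'a', 'b', 'c'}
H2 = {('c', 'b')} | {(u, u) for u in U2}
R2 = {('a', 'b'), ('a', 'c')}
V2 = {'p': {'a', 'b'}}
f = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}

L1, L2 = left_converse(H1, R1), left_converse(H2, R2)

def forth(S1, S2):
    return all((f[u], f[v]) in S2 for (u, v) in S1)

def back(S1, S2):
    # whenever (f(u), v') in S2 there is v with (u, v) in S1 and f(v) = v'
    return all(any((u, v) in S1 and f[v] == v2 for v in U1)
               for u in U1 for (u2, v2) in S2 if u2 == f[u])

assert all((u in V1['p']) == (f[u] in V2['p']) for u in U1)  # (Atom)
assert forth(H1, H2)                                          # (H-forth)
assert back(H1, H2)                                           # (H-back)
assert back(converse(H1), converse(H2))                       # (H-converse-back)
assert forth(L1, L2)                                          # (left-conv-forth)
assert back(L1, L2)                                           # (left-conv-back)
assert back(converse(L1), converse(L2))                       # (its converse-back)
```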
\begin{definition}
We say that a set $\Gamma$ of formulas {\em defines} a class $\mathbb{F}$ of $H$-frames if for all $H$-frames $F$, $F \in \mathbb{F}$ iff $F \models \varphi$ for all formulas $\varphi \in \Gamma$. When $\Gamma$ is a singleton $\setof{\varphi}$, we simply say that $\varphi$ defines a class $\mathbb{F}$.
\end{definition}
The following frame definability results are already established in~\cite[Theorem 10]{Stell2016}.
\begin{proposition}[\cite{Stell2016}]
\label{prop:definable}
Let $F$ = $(U,H,R)$ be an $H$-frame. Let $S_{i} \in \setof{R,\leftconv R}$ for $i$ = $1, \ldots, m$ and for each $i$ let
\[
\mathsf{B}_{i} =
\begin{cases}
\Box &\text{ if $S_{i}$ = $R$ } \\
\bbox &\text{ if $S_{i}$ = $\leftconv R$ } \\
\end{cases}
\text{ and let }
\mathsf{D}_{i} =
\begin{cases}
\bdia &\text{ if $S_{i}$ = $R$ } \\
\dia &\text{ if $S_{i}$ = $\leftconv R$ } \\
\end{cases}
\]
Let $0 \leqslant k \leqslant m$ (where the composition of a sequence of length 0 is understood as $H$). Then the following are equivalent: (1) $S_{1};\cdots;S_{k} \subseteq S_{k+1};\cdots;S_{m}$; (2) $F \models \mathsf{D}_{k} \cdots \mathsf{D}_{1}p \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}p$; (3) $F \models \mathsf{B}_{k+1} \cdots \mathsf{B}_{m}p \to \mathsf{B}_{1} \cdots \mathsf{B}_{k}p$.
\end{proposition}
Table \ref{table:def} illustrates the content of Proposition \ref{prop:definable}.
\begin{table}
\begin{center}
\begin{tabular}{@{}lllll}
\hline
\raisebox{-1.25ex}{\rule{0ex}{4ex}}Condition & Inclusion &
\begin{tabular}{@{}c}Diamond\\Form\end{tabular} &
\begin{tabular}{@{}c}Box\\Form\end{tabular} &
\begin{tabular}{@{}c}Mixed\\Form\end{tabular} \\
\hline &&&&\\
reflexive & $H \subseteq R$ & $ p \to
\bD p$ & $\wB p \to p$ & \\[1ex]
\begin{tabular}{@{}l}
converse\\ reflexive
\end{tabular} & $H \subseteq \leftconv R$ & $p \to \wD p
$ & $ \bB p \to p$ & \\[2ex]
pathetic & $R \subseteq H$ & $\bD p
\to p$ & $p \to \wB p$ &
\\[1ex]
\begin{tabular}{@{}l}
converse\\ pathetic
\end{tabular} & $\leftconv R \subseteq H$ & $\wD p
\to p$ & $p \to \bB p$ &
\\[2ex]
functional & $\leftconv R ; R \subseteq H$ & $\bD \wD p
\to p $ & $p \to \bB \wB p$ & $\wD p \to \wB p$ \\[1ex]
injective & $R ; \leftconv R \subseteq H$ & $\wD \bD p
\to p $ & $p \to \wB \bB p$ & $\bD p \to \bB p$ \\[1ex]
surjective & $H \subseteq \leftconv R ; R$ & $p \to
\bD \wD p $ & $ \bB \wB p \to p$ & $\bB p \to \bD p$ \\[1ex]
total & $H \subseteq R ; \leftconv R$ & $p \to
\wD \bD p $ & $ \wB \bB p \to p$ & $\wB p \to \wD p$ \\[1ex]
\begin{tabular}{@{}l}
weakly \\ symmetric
\end{tabular} & $R \subseteq \leftconv R$ & $ \bD p
\to \wD p $ & $\bB p \to \wB p$ & $p \to \wB \wD p$ \\[2ex]
\begin{tabular}{@{}l}
strongly \\ symmetric
\end{tabular} & $\leftconv R \subseteq R$ & $ \wD p
\to \bD p $ & $\wB p \to \bB p$ & $\wD \wB p \to p$ \\[2ex]
transitive & $R ; R \subseteq R$ & $\bD \bD p
\to \bD p$ & $\wB p \to \wB \wB p$ & \\[1ex]
\begin{tabular}{@{}l}
converse\\ transitive
\end{tabular} & $\leftconv R ; \leftconv R \subseteq \leftconv R$ & $\wD \wD p \to
\wD p$ & $\bB p \to \bB \bB p$ & \\[2ex]
dense & $R \subseteq R ; R $ & $\bD p \to
\bD \bD p$ & $\wB \wB p \to \wB p$ & \\[1ex]
\begin{tabular}{@{}l}
converse\\ dense
\end{tabular} & $\leftconv R \subseteq \leftconv R ; \leftconv R$ & $\wD p \to \wD
\wD p$ & $\bB \bB p \to \bB p$ & \\[2ex]
Euclidean & $\leftconv R ; R \subseteq R$ & $\bD \wD p
\to \bD p$ & $\wB p \to \bB \wB p$ & $\wD \wB p \to \wB p$ \\[1ex]
\begin{tabular}{@{}l}
weak\\ Euclidean
\end{tabular} & $\leftconv R ; R \subseteq \leftconv R$ & $\bD \wD p \to
\wD p$ & $\bB p \to \bB \wB p$ & $\wD p \to \wB \wD p$ \\[2ex]
\begin{tabular}{@{}l}
converse\\ Euclidean
\end{tabular} & $R ; \leftconv R \subseteq R$ & $\wD \bD p \to \bD p$ &
$\wB p \to \wB \bB p$ & \\[2ex]
\begin{tabular}{@{}l}
weak converse\\ Euclidean
\end{tabular} & $ R ; \leftconv R \subseteq \leftconv R$ & $\wD \bD p \to \wD
p$ & $\bB p \to \wB \bB p$ & \\[2ex]
confluent & $\leftconv R ; R \subseteq R ; \leftconv R$
& $\bD \wD p
\to \wD \bD p$
& $\wB \bB p \to \bB \wB p$
& $\wD \wB p \to \wB \wD p$\\[1ex]
divergent & $R ; \leftconv R \subseteq \leftconv R ; R $
& $\wD \bD p
\to \bD \wD p$
& $\bB \wB p \to \wB \bB p$
& $\bD \bB p \to \bB \bD p$\\[1ex]
\hline
\end{tabular}
\end{center}
\caption{\label{CorrTable}Modal correspondents arising from inclusions from~\cite{Stell2016}}
\label{table:def}
\end{table}
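Each row of the table can be checked by brute force on a small frame. The sketch below (ours; the two-element frame and helper names are invented for illustration) enumerates all stable relations over a fixed preorder and confirms the `reflexive' row, i.e., $H \subseteq R$ holds iff $p \to \bdia p$ is valid on the frame:

```python
from itertools import chain, combinations

def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def subsets(xs):
    xs = sorted(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def h_sets(U, H):
    """All H-sets: subsets closed under H-successors."""
    return [set(s) for s in subsets(U) if all(v in s for (u, v) in H if u in s)]

def bdia(X, U, R):
    """Truth set of the black diamond applied to X."""
    return {u for u in U if any((v, u) in R and v in X for v in U)}

U = {0, 1}
H = {(0, 0), (1, 1), (0, 1)}
pairs = [(u, v) for u in U for v in U]
stable = [set(R) for R in subsets(pairs)
          if compose(compose(H, set(R)), H) <= set(R)]

for R in stable:
    # since H is reflexive, p -> bdia p is valid on (U, H, R)
    # exactly when X <= bdia(X) for every H-set X
    valid = all(X <= bdia(X, U, R) for X in h_sets(U, H))
    assert valid == (H <= R)   # the `reflexive' row of the table
```

The same loop, with the truth-set operator swapped, checks any other row of the table on frames of this size.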
\section{Hilbert System of Bi-intuitionistic Stable Tense Logic}
\label{sec:axiomatisation}
In view of the last two items of Proposition \ref{prop:sem-basic}, we employ $\mathcal{L}(\bdia,\Box)$ as our syntax in what follows in this paper and we simply write $\mathcal{L}$ to mean $\mathcal{L}(\bdia,\Box)$ if no confusion arises.
\begin{definition}
We say that a set $\Lambda$ of formulas is a {\em bi-intuitionistic stable tense logic} (for short, $bist$-logic) if $\Lambda$ contains all the axioms of Table \ref{table:hil-biskt} and is closed under all the rules of Table \ref{table:hil-biskt}. Given a $bist$-logic $\Lambda$ and a set $\Gamma \cup \setof{\varphi}$ of formulas, we say that $\varphi$ is {\em $\Lambda$-provable} from $\Gamma$ (notation: $\Gamma \vdash_{\Lambda} \varphi$) if there is a finite set $\Gamma' \subseteq \Gamma$ such that $\bigwedge \Gamma' \to \varphi \in \Lambda$, where $\bigwedge \Delta$ is the conjunction of all elements of $\Delta$ and $\bigwedge \Delta$ := $\top$ when $\Delta$ is empty. Moreover, when $\Gamma$ = $\emptyset$, we simply write $\vdash_{\Lambda} \varphi$ instead of $\emptyset \vdash_{\Lambda} \varphi$, which is equivalent to $\varphi \in \Lambda$.
We define $\mathbf{BiSKt}$ as the smallest $bist$-logic $\bigcap \inset{\Lambda}{\text{$\Lambda$ is a $bist$-logic}}$. Given a set $\Sigma$ of formulas, the smallest $bist$-logic $\mathbf{BiSKt}\Sigma$ containing $\Sigma$ is defined by: $\mathbf{BiSKt}\Sigma := \bigcap \inset{\Lambda}{\text{$\Lambda$ is a $bist$-logic and $\Sigma \subseteq \Lambda$}}$.
\end{definition}
\noindent Table \ref{table:hil-biskt} provides a Hilbert-style axiomatisation of $\mathbf{BiSKt}$. In what follows in this paper, we assume that the reader is familiar with theorems and derived inference rules in intuitionistic logic.
\begin{table}[htbp]
\caption{Hilbert-style axiomatisation of $\mathsf{H}\mathbf{BiSKt}$
}
\label{table:hil-biskt}
\begin{center}
\begin{tabular}{|llll|}
\hline
\multicolumn{4}{|l|}{Axioms and Rules for Intuitionistic Logic}\\
\hline
$(\texttt{A0})$ & \multicolumn{3}{l|}{ $p \to (q \to p)$ } \\
$(\texttt{A1})$ & \multicolumn{3}{l|}{ $(p \to (q \to r)) \to ((p \to q) \to (p \to r))$ } \\
$(\texttt{A2})$ & $p \to (p \lor q)$ & $(\texttt{A3})$ & $q \to (p \lor q)$ \\
$(\texttt{A4})$ & $(p \to r) \to ((q \to r)\to(p \lor q \to r))$ & $(\texttt{A5})$ & $(p \land q) \to p$ \\
$(\texttt{A6})$ & $(p \land q )\to q$ & $(\texttt{A7})$ & $(p \to (q \to p \land q))$ \\
$(\texttt{A8})$ & $\bot \to p$ & $(\texttt{A9})$ & $p \to \top$ \\
$(\texttt{MP})$ & \multicolumn{3}{l|}{ From $\varphi$ and $\varphi \to \psi$, infer $\psi$ } \\
$(\texttt{US})$ & \multicolumn{3}{l|}{ From $\varphi$, infer a substitution instance $\varphi'$ of $\varphi$ } \\
\hline
\multicolumn{4}{|l|}{Additional Axioms and Rules for Bi-intuitionistic Logic}\\
\hline
$(\texttt{A10})$ & $p \to (q \lor (p \coimp q))$ & $(\texttt{A11})$ & $((q \lor r) \coimp q) \to r$ \\
$(\texttt{Mon}\coimp)$ & \multicolumn{3}{l|}{ From $\delta_{1} \to \delta_{2}$, infer $(\delta_{1} \coimp \psi) \to (\delta_{2} \coimp \psi)$} \\
\hline
\multicolumn{4}{|l|}{Additional Axioms and Rules for Tense Operators}\\
\hline
$(\texttt{A12})$ & $p \to \Box \bdia p$ & $(\texttt{A13})$ & $\bdia \Box p \to p$ \\
$(\texttt{Mon}\Box)$ & From $\varphi \to \psi$, infer $\Box \varphi \to \Box \psi$ & $(\texttt{Mon}\bdia)$ & From $\varphi \to \psi$, infer $\bdia \varphi \to \bdia \psi$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proposition}
\label{prop:prf-basic}
Let $\Lambda$ be a $bist$-logic.
\begin{multienumerate}
\mitemxx{$\vdash_{\Lambda} (\varphi \coimp \psi) \to \gamma$ $\iff$ $\vdash_{\Lambda} \varphi \to (\psi \lor \gamma)$.}{$\vdash_{\Lambda} \bdia \varphi \to \psi$ $\iff$ $\vdash_{\Lambda} \varphi \to \Box \psi$. }
\mitemxx{$\vdash_{\Lambda} (\Box \varphi \land \Box \psi) \leftrightarrow \Box (\varphi \land \psi)$}{$\vdash_{\Lambda} \top \leftrightarrow \Box \top$.}
\mitemxx{$\vdash_{\Lambda} (\bdia \varphi \lor \bdia \psi) \leftrightarrow \bdia (\varphi \lor \psi)$}{ $\vdash_{\Lambda} \bot \leftrightarrow \bdia \bot$}
\end{multienumerate}
\end{proposition}
\begin{proof}
Here we prove only (1) and (2), since items (3) to (6) are consequences of the adjunction (residuation) `$\bdia \dashv \Box$', i.e., the left adjoint ($\bdia$) preserves colimits (finite disjunctions) and the right adjoint ($\Box$) preserves limits (finite conjunctions).
\begin{enumerate}
\item $(\Rightarrow)$ Assume that $\vdash_{\Lambda} (\varphi \coimp \psi) \to \gamma$. We obtain $\vdash_{\Lambda} (\psi \lor (\varphi \coimp \psi)) \to (\psi \lor \gamma)$. By axiom $(\texttt{A10})$, we get $\vdash_{\Lambda} \varphi \to (\psi \lor \gamma)$. ($\Leftarrow$) Assume that $\vdash_{\Lambda} \varphi \to (\psi \lor \gamma)$. By rule $(\texttt{Mon}\coimp)$, $\vdash_{\Lambda} (\varphi \coimp \psi) \to ((\psi \lor \gamma) \coimp \psi)$. It follows from $(\texttt{A10})$ that $\vdash_{\Lambda} (\varphi \coimp \psi) \to \gamma$, as desired.
\item $(\Rightarrow)$ Assume that $\vdash_{\Lambda} \bdia \varphi \to \psi$. By rule $(\texttt{Mon}\Box)$, $\vdash_{\Lambda} \Box \bdia \varphi \to \Box \psi$. It follows from $(\texttt{A12})$ that $\vdash_{\Lambda} \varphi \to \Box \psi$. ($\Leftarrow$) Suppose that $\vdash_{\Lambda} \varphi \to \Box \psi$. By rule $(\texttt{Mon}\bdia)$, $\vdash_{\Lambda} \bdia \varphi \to \bdia \Box \psi$. By axiom $(\texttt{A13})$, we obtain $\vdash_{\Lambda} \bdia \varphi \to \psi$.
\end{enumerate}
\end{proof}
\begin{theorem}[Soundness]
\label{thm:sound}
Given any formula $\varphi$, $\vdash_{\mathbf{BiSKt}} \varphi$ implies $\models \varphi$.
\end{theorem}
\begin{proof}
We demonstrate the soundness of axioms and rules other than those of intuitionistic logic.
By Proposition \ref{prop:sem-basic}, we have the following equivalences:
\begin{align}
\tag{$\star$}\label{eq:ad-coimp-dis}
\models (\varphi \coimp \psi) \to \gamma &\iff\, \models \varphi \to (\psi \lor \gamma)\\
\tag{$\dagger$}\label{eq:ad-bdia-box}
\models \bdia \varphi \to \psi &\iff\, \models \varphi \to \Box \psi
\end{align}
\begin{itemize}
\item[($\texttt{A10}$)] By $\models (p \coimp q) \to (p \coimp q)$ and (\ref{eq:ad-coimp-dis}), $\models p \to q\lor (p \coimp q)$ holds, as desired.
\item[($\texttt{A11}$)] By $\models (q \lor r) \to (q \lor r)$ and (\ref{eq:ad-coimp-dis}), we obtain $\models ((q \lor r) \coimp q) \to r$.
\item[($\texttt{Mon}\coimp$)] Assume $\models \delta_{1} \to \delta_{2}$. To establish $\models (\delta_{1} \coimp \psi) \to (\delta_{2} \coimp \psi)$, it suffices to show $\models \delta_{1}\to (\psi \lor (\delta_{2} \coimp \psi))$. From our assumption and the validity of $(\texttt{A10})$, our goal follows.
\item[($\texttt{A12}$)] By $\models \bdia p \to \bdia p$ and (\ref{eq:ad-bdia-box}), we obtain $\models p \to \Box \bdia p$.
\item[($\texttt{A13}$)] By $\models \Box p \to \Box p$ and (\ref{eq:ad-bdia-box}), we obtain $\models \bdia \Box p \to p$.
\item[($\texttt{Mon}\Box$)] Suppose $\models \varphi \to \psi$. To demonstrate $\models \Box \varphi \to \Box \psi$, it suffices to show $\models \bdia \Box \varphi \to \psi$ by (\ref{eq:ad-bdia-box}). By $\models \bdia \Box \varphi \to \varphi$ (due to the validity of $(\texttt{A13})$) and the supposition $\models \varphi \to \psi$, we get our goal.
\item[($\texttt{Mon}\bdia$)] Suppose $\models \varphi \to \psi$. To conclude $\models \bdia \varphi \to \bdia \psi$, it suffices to show $\models \varphi \to \Box \bdia \psi$ by (\ref{eq:ad-bdia-box}). By the supposition $\models \varphi \to \psi$ and $\models \psi \to \Box \bdia \psi$ (due to the validity of $(\texttt{A12})$), we get our goal.
\end{itemize}
\end{proof}
\section{Kripke Completeness of Bi-intuitionistic Stable Tense Logic}
\label{sec:completeness}
Given a finite set $\Delta$ of formulas, $\bigvee \Delta$ is defined as the disjunction of all formulas in $\Delta$, where $\bigvee \emptyset$ is understood as $\bot$.
\begin{definition}
Let $\Lambda$ be a $bist$-logic. A pair $(\Gamma,\Delta)$ of sets of formulas is {\em $\Lambda$-provable} if $\Gamma \vdash_{\Lambda} \bigvee \Delta'$ for some finite $\Delta' \subseteq \Delta$. We say that a pair $(\Gamma,\Delta)$ is {\em $\Lambda$-unprovable} if it is not $\Lambda$-provable. A pair $(\Gamma,\Delta)$ is {\em complete} if $\Gamma \cup \Delta$ = $\mathsf{Form}_{\mathcal{L}}$, i.e., $\varphi \in \Gamma$ or $\varphi \in \Delta$ for all formulas $\varphi$.
\end{definition}
We remark that a pair $(\Gamma,\Delta)$ is $\Lambda$-unprovable iff $\not\vdash_{\Lambda} \bigwedge \Gamma' \to \bigvee \Delta'$ for all finite $\Gamma'\subseteq \Gamma$ and all finite $\Delta' \subseteq \Delta$. The following lemma holds because $\Lambda$ `contains' intuitionistic logic.
\begin{lem}
\label{lem:dist-lat}
Let $\Lambda$ be a $bist$-logic and $(\Gamma,\Delta)$ a complete and $\Lambda$-unprovable pair.
Then,
\begin{multienumerate}
\mitemxx{$($$\Gamma \vdash_{\Lambda} \varphi$ implies $\varphi \in \Gamma$$)$ for all formulas $\varphi$}{$\Lambda \subseteq \Gamma$}
\mitemxx{If $\varphi \in \Gamma$ and $\varphi \to \psi \in \Gamma$ then $\psi \in \Gamma$}{$\bot \notin \Gamma$}
\mitemxx{$\top \in \Gamma$}{$\varphi \land \psi \in \Gamma$ iff $\varphi \in \Gamma$ and $\psi \in \Gamma$}
\mitemx{$\varphi \lor \psi \in \Gamma$ iff $\varphi \in \Gamma$ or $\psi \in \Gamma$}
\end{multienumerate}
\end{lem}
\begin{lem}
\label{lem:extension}
Let $\Lambda$ be a $bist$-logic.
Given a $\Lambda$-unprovable pair $(\Gamma,\Delta)$, there exists a complete and $\Lambda$-unprovable pair $(\Gamma^{+},\Delta^{+})$ such that $\Gamma \subseteq \Gamma^{+}$ and $\Delta \subseteq \Delta^{+}$.
\end{lem}
\begin{definition}
Let $\Lambda$ be a $bist$-logic. The $\Lambda$-canonical $H$-model $M^{\Lambda}$ = $(U^{\Lambda},H^{\Lambda},R^{\Lambda},V^{\Lambda})$ is defined as follows.
\begin{itemize}
\item $U^{\Lambda}$ := $\inset{(\Gamma,\Delta)}{\text{ $(\Gamma,\Delta)$ is a complete and $\Lambda$-unprovable pair}}$.
\item $(\Gamma_{1},\Delta_{1})H^{\Lambda}(\Gamma_{2},\Delta_{2})$ iff $\Gamma_{1} \subseteq \Gamma_{2}$.
\item $(\Gamma_{1},\Delta_{1})R^{\Lambda}(\Gamma_{2},\Delta_{2})$ iff ($\Box \varphi \in \Gamma_{1}$ implies $\varphi \in \Gamma_{2}$) for all formulas $\varphi$.
\item $(\Gamma,\Delta)\in V^{\Lambda}(p)$ iff $p \in \Gamma$.
\end{itemize}
\end{definition}
\noindent It is clear that $H^{\Lambda}$ is not only a pre-order but in fact a partial order. Moreover, $(\Gamma_{1},\Delta_{1})H^{\Lambda}(\Gamma_{2},\Delta_{2})$ implies $\Delta_{2} \subseteq \Delta_{1}$ by completeness.
\begin{lem}
\label{lem:access}
Let $(\Gamma_{i},\Delta_{i}) \in U^{\Lambda}$ $($$i$ = 1 or 2$)$. The following are all equivalent:
\begin{enumerate}
\item $(\Gamma_{1},\Delta_{1})R^{\Lambda}(\Gamma_{2},\Delta_{2})$,
\label{item:r1}
\item $($$\varphi \in \Delta_{2}$ implies $\Box \varphi \in \Delta_{1}$$)$ for all formulas $\varphi$,
\label{item:r2}
\item $($$\varphi \in \Gamma_{1}$ implies $\bdia \varphi \in \Gamma_{2}$$)$ for all formulas $\varphi$,
\label{item:r3}
\item $($$\bdia \varphi \in \Delta_{2}$ implies $\varphi \in \Delta_{1}$$)$ for all formulas $\varphi$.
\label{item:r4}
\end{enumerate}
\end{lem}
\begin{lem}
\label{lem:stable}
$R^{\Lambda}$ is stable in the $\Lambda$-canonical $H$-model $M^{\Lambda}$.
\end{lem}
\begin{proof}
It suffices to show that (i) $H^{\Lambda};R^{\Lambda} \subseteq R^{\Lambda}$ and (ii) $R^{\Lambda};H^{\Lambda}\subseteq R^{\Lambda}$. For (i), let us assume that $(\Gamma_{1}, \Delta_{1})H^{\Lambda}(\Gamma_{2}, \Delta_{2})R^{\Lambda} (\Gamma_{3}, \Delta_{3})$. To show $(\Gamma_{1}, \Delta_{1})R^{\Lambda}(\Gamma_{3}, \Delta_{3})$, fix any formula $\varphi$ such that $\Box \varphi \in \Gamma_{1}$. Our goal is to show that $\varphi \in \Gamma_{3}$. By $(\Gamma_{1}, \Delta_{1})H^{\Lambda}(\Gamma_{2}, \Delta_{2})$, we get $\Box \varphi \in \Gamma_{2}$. It follows from $(\Gamma_{2}, \Delta_{2})R^{\Lambda} (\Gamma_{3}, \Delta_{3})$ that $\varphi \in \Gamma_{3}$, as required. For (ii), let us assume that $(\Gamma_{1}, \Delta_{1})R^{\Lambda}(\Gamma_{2}, \Delta_{2})H^{\Lambda} (\Gamma_{3}, \Delta_{3})$. To show $(\Gamma_{1}, \Delta_{1})R^{\Lambda}(\Gamma_{3}, \Delta_{3})$, we use Lemma \ref{lem:access} (4) to fix any formula $\bdia \varphi \in \Delta_{3}$. We show that $\varphi \in \Delta_{1}$. Since $\Delta_{3} \subseteq \Delta_{2}$, we have $\bdia \varphi \in \Delta_{2}$. By $(\Gamma_{1}, \Delta_{1})R^{\Lambda}(\Gamma_{2}, \Delta_{2})$, Lemma \ref{lem:access} (4) enables us to conclude $\varphi \in \Delta_{1}$.
\end{proof}
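On a finite frame, the stability conditions (i) and (ii) can be checked mechanically. The following Python sketch is illustrative only: relations are encoded as sets of pairs, and the three-point frame is a hypothetical example, not drawn from the paper.

```python
# Illustrative check of stability (H;R ⊆ R and R;H ⊆ R) on a
# hypothetical finite frame; not part of the paper's formal development.

def compose(S, T):
    """Relational composition S;T = {(x, z) : exists y with x S y and y T z}."""
    return {(x, z) for (x, y1) in S for (y2, z) in T if y1 == y2}

def is_stable(H, R):
    """R is stable with respect to H when H;R ⊆ R and R;H ⊆ R."""
    return compose(H, R) <= R and compose(R, H) <= R

# A three-point frame: H is a pre-order (here a partial order), R is stable.
U = {0, 1, 2}
H = {(u, u) for u in U} | {(0, 1)}   # reflexive, plus 0 ≤ 1
R = {(0, 2), (1, 2), (2, 2)}         # closed under H on both sides

assert is_stable(H, R)
assert not is_stable(H, {(1, 2)})    # fails: (0,2) ∈ H;R but (0,2) ∉ R
```

The second assertion shows how stability can fail: composing $(0,1) \in H$ with $(1,2) \in R$ produces a pair outside $R$.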
\begin{lem}
\label{lem:unprov}
Let $(\Gamma,\Delta)$ be a complete $\Lambda$-unprovable pair.
\begin{enumerate}
\item If $\psi \to \gamma \notin \Gamma$, then $(\setof{\psi}\cup \Gamma,\setof{\gamma})$ is $\Lambda$-unprovable.
\item If $\psi \coimp \gamma \in \Gamma$, then $(\setof{\psi},\setof{\gamma}\cup \Delta)$ is $\Lambda$-unprovable.
\item If $\Box \psi \notin \Gamma$, then $(\inset{\gamma}{\Box \gamma \in \Gamma},\setof{\psi})$ is $\Lambda$-unprovable.
\item If $\bdia \psi \in \Gamma$, then $(\setof{\psi}, \inset{\delta}{\bdia \delta \in \Delta})$ is $\Lambda$-unprovable.
\end{enumerate}
\end{lem}
\begin{proof}
Here we prove only items (2) and (4). For (2), assume that $\psi \coimp \gamma \in \Gamma$. Suppose for contradiction that $(\setof{\psi},\setof{\gamma}\cup \Delta)$ is $\Lambda$-provable. We can find a finite set $\Delta' \subseteq \Delta$ such that $\vdash_{\Lambda} \psi \to (\gamma \lor \bigvee \Delta')$. It follows from Proposition \ref{prop:prf-basic} (1) that $\vdash_{\Lambda} (\psi \coimp \gamma) \to \bigvee \Delta'$. Since $\psi \coimp \gamma \in \Gamma$, $\bigvee \Delta' \in \Gamma$ holds by Lemma \ref{lem:dist-lat}. This contradicts the $\Lambda$-unprovability of $(\Gamma,\Delta)$. For (4), suppose that $\bdia \psi \in \Gamma$. Assume for contradiction that $(\setof{\psi}, \inset{\delta}{\bdia \delta \in \Delta})$ is $\Lambda$-provable.
There exist formulas $\delta_{1}$, \ldots, $\delta_{n}$ such that each $\bdia \delta_{i} \in \Delta$ and $\vdash_{\Lambda} \psi \to \bigvee_{1 \leqslant i \leqslant n} \delta_{i}$. By rule $(\texttt{Mon}\bdia)$ (monotonicity of $\bdia$) and commutativity of $\bdia$ over finite disjunctions (due to Proposition \ref{prop:prf-basic} (5) and (6)), it holds that $\vdash_{\Lambda} \bdia \psi \to \bigvee_{1 \leqslant i \leqslant n} \bdia \delta_{i}$. By $\bdia \psi \in \Gamma$ and Lemma \ref{lem:dist-lat}, we obtain $\bigvee_{1 \leqslant i \leqslant n} \bdia \delta_{i} \in \Gamma$, which implies $\bdia \delta_{i} \in \Gamma$ for some index $i$, again by Lemma \ref{lem:dist-lat}. Fix such an index $i$. Together with $\bdia \delta_{i} \in \Delta$, we establish that $(\Gamma,\Delta)$ is $\Lambda$-provable, a contradiction.
\end{proof}
\begin{lem}[Truth Lemma]
\label{lem:truth}
Let $\Lambda$ be a $bist$-logic. Then, for any formula $\varphi$ and any complete $\Lambda$-unprovable pair $(\Gamma,\Delta)$, the following equivalence holds: $\varphi \in \Gamma$ $\iff$ $M^{\Lambda}, (\Gamma,\Delta) \models \varphi$.
\end{lem}
\begin{proof}
By induction on $\varphi$. When $\varphi$ is a propositional variable from $\mathsf{Prop}$, it is immediate from the definition of $V^{\Lambda}$. When $\varphi$ is of the form $\psi \land \gamma$ or $\psi \lor \gamma$, we can establish the equivalence by Lemma \ref{lem:dist-lat} and induction hypothesis. In what follows, we deal with the remaining cases, in particular the cases where $\varphi$ is of the form of $\psi \coimp \gamma$ or $\bdia \psi$.
\noindent \textbf{($\varphi$ is of the form $\psi \coimp \gamma$)} First we show the right-to-left direction and so assume that $M^{\Lambda},(\Gamma,\Delta) \models \psi \coimp \gamma$. Then there exists a pair $(\Sigma,\Theta) \in U^{\Lambda}$ such that $(\Sigma,\Theta)H^{\Lambda}(\Gamma,\Delta)$ (so $\Sigma \subseteq \Gamma$) and $M^{\Lambda},(\Sigma,\Theta) \models \psi$ but $M^{\Lambda},(\Sigma,\Theta) \not\models \gamma$. By induction hypothesis, $\psi \in \Sigma$ and $\gamma \notin \Sigma$. Our goal is to show that $\psi \coimp \gamma \in \Gamma$. We have $\vdash_{\Lambda} \psi \to (\psi \coimp \gamma) \lor \gamma$ by axiom $(\texttt{A10})$. Since $\psi \in \Sigma$, we obtain $(\psi \coimp \gamma) \lor \gamma \in \Sigma$ hence $\psi \coimp \gamma\in \Sigma$ or $\gamma \in \Sigma$ by Lemma \ref{lem:dist-lat}.
By $\gamma \notin \Sigma$, $\psi \coimp \gamma\in \Sigma$ holds. It follows from $(\Sigma,\Theta)H^{\Lambda}(\Gamma,\Delta)$, i.e., $\Sigma \subseteq \Gamma$, that $\psi \coimp \gamma\in \Gamma$, as required. Second we show the left-to-right direction. Suppose that $\psi \coimp \gamma \in \Gamma$. By Lemma \ref{lem:unprov} (2), $(\setof{\psi},\setof{\gamma}\cup \Delta)$ is $\Lambda$-unprovable. By Lemma \ref{lem:extension}, we can find $(\Sigma,\Theta) \in U^{\Lambda}$ such that $\psi \in \Sigma$ and $\setof{\gamma}\cup \Delta \subseteq \Theta$.
It follows from $\Delta \subseteq \Theta$ that $(\Sigma,\Theta)H^{\Lambda}(\Gamma,\Delta)$. By induction hypothesis, we also obtain $M^{\Lambda},(\Sigma,\Theta)\models\psi$ but $M^{\Lambda},(\Sigma,\Theta)\not\models\gamma$. Therefore, $M^{\Lambda},(\Gamma,\Delta) \models \psi \coimp \gamma$, as desired.
\noindent \textbf{($\varphi$ is of the form $\bdia \psi$)} First we show the right-to-left direction and so assume that $M^{\Lambda},(\Gamma,\Delta) \models \bdia \psi$. We can find a pair $(\Sigma,\Theta) \in U^{\Lambda}$ such that $(\Sigma,\Theta) R^{\Lambda} (\Gamma,\Delta)$ and $M^{\Lambda}, (\Sigma,\Theta) \models \psi$. By induction hypothesis, we get $\psi \in \Sigma$. By Lemma \ref{lem:access}, $\bdia \psi \in \Gamma$, as desired. Second we show the left-to-right direction. Suppose that $\bdia \psi \in \Gamma$. By Lemma \ref{lem:unprov}, $(\setof{\psi}, \inset{\delta}{\bdia \delta \in \Delta})$ is $\Lambda$-unprovable. By Lemma \ref{lem:extension}, we can find a pair $(\Sigma,\Theta) \in U^{\Lambda}$ such that $\psi \in \Sigma$ and $\inset{\delta}{\bdia \delta \in \Delta} \subseteq \Theta$. By Lemma \ref{lem:access}, we obtain $(\Sigma,\Theta)R^{\Lambda}(\Gamma,\Delta)$, and by induction hypothesis, $M^{\Lambda},(\Sigma,\Theta) \models \psi$. Therefore, $M^{\Lambda}, (\Gamma,\Delta)\models \bdia\psi$.
\end{proof}
\begin{theorem}[Strong Completeness of $\mathbf{BiSKt}$]
\label{thm:complete}
Given any set $\Gamma \cup \setof{\varphi}$ of formulas,
\[
\text{ $\Gamma \models \varphi$ implies $\Gamma \vdash_{\mathbf{BiSKt}} \varphi$. }
\]
\end{theorem}
\begin{proof}
Put $\Lambda$ := $\mathbf{BiSKt}$. Fix any set $\Gamma \cup \setof{\varphi}$ of formulas. We prove the contrapositive implication and so assume that $\Gamma \not\vdash_{\Lambda} \varphi$. It follows that $(\Gamma,\setof{\varphi})$ is $\Lambda$-unprovable. By Lemma \ref{lem:extension}, we can find a complete and $\Lambda$-unprovable pair $(\Sigma,\Theta) \in U^{\Lambda}$ such that $\Gamma \subseteq \Sigma$ and $\varphi \in \Theta$. By Lemma \ref{lem:truth} (Truth Lemma), $M^{\Lambda},(\Sigma,\Theta) \models \gamma$ for all $\gamma \in \Gamma$ and $M^{\Lambda},(\Sigma,\Theta) \not\models \varphi$. Since $M^{\Lambda}$ is an $H$-model by Lemma \ref{lem:stable}, we can conclude $\Gamma \not\models \varphi$, as desired.
\end{proof}
\section{Kripke Completeness of Extensions of BiSKt}
\label{sec:extension}
This section establishes that $\mathbf{BiSKt}$ extended with {\em any} set of formulas from Table \ref{table:def} enjoys strong completeness.
\begin{definition}
Let $\mathbb{F}$ be a frame class. We say that $\varphi$ is a {\em semantic $\mathbb{F}$-consequence from} $\Gamma$ (written: $\Gamma \models_{\mathbb{F}} \varphi$) if, whenever $(F,V),u \models \gamma$ for all $\gamma \in \Gamma$, it holds that $(F,V),u\models \varphi$, for any $H$-frame $F$ = $(U,H,R) \in \mathbb{F}$, any valuation $V$ on $U$ and any state $u \in U$.
\end{definition}
\noindent When $\Gamma$ is empty in the notation `$\Gamma \models_{\mathbb{F}} \varphi$', we simply write $\models_{\mathbb{F}} \varphi$, which is equivalent to the statement that $F \models \varphi$ for all $F \in \mathbb{F}$.
Given a set $\Sigma$ of formulas, recall that $\mathbf{BiSKt}\Sigma$ is the smallest $bist$-logic containing $\Sigma$. By Proposition \ref{prop:definable}, we obtain the following soundness result.
\begin{theorem}
\label{thm:sound-extension}
Let $\Sigma$ be a possibly infinite set of formulas of the form $\mathsf{D}_{k} \cdots \mathsf{D}_{1}p \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}p$ and $\mathbb{F}_{\Sigma}$ be the class of $H$-frames defined by $\Sigma$.
Then $\mathbf{BiSKt}\Sigma$ is sound for the class $\mathbb{F}_{\Sigma}$, i.e., $\vdash_{\mathbf{BiSKt}\Sigma }\varphi$ implies $\models_{\mathbb{F}_{\Sigma}} \varphi$, for all formulas $\varphi$.
\end{theorem}
In what follows, we show that, for any set $\Sigma$ of formulas of the form $\mathsf{D}_{k} \cdots \mathsf{D}_{1}p \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}p$, $\mathbf{BiSKt}\Sigma$ is strongly complete for the class of $H$-frames defined by $\Sigma$.
\begin{proposition}
\label{prop:coneg-dia}
Let $\Lambda$ be a $bist$-logic.
\begin{multienumerate}
\mitemxx{$\vdash_{\Lambda} \varphi \to \neg \psi$ iff $\vdash_{\Lambda} \psi \to \neg \varphi$.}{$\vdash_{\Lambda} \coneg \varphi \to \psi$ iff $\vdash_{\Lambda} \coneg \psi \to \varphi$. }
\mitemxx{$\vdash_{\Lambda} \dia \varphi \to \psi$ iff $\vdash_{\Lambda} \varphi \to \bbox \psi$.}{$\vdash_{\Lambda} \dia \bot \leftrightarrow \bot$.}
\mitemx{$\vdash_{\Lambda} \dia(\varphi \lor \psi) \leftrightarrow (\dia \varphi \lor \dia \psi)$. }
\end{multienumerate}
\end{proposition}
\begin{proof}
(4) and (5) follow from (3) similarly as in the proof of Proposition \ref{prop:prf-basic}, i.e., `left adjoints ($\dia$) preserve colimits (finite disjunctions).' Recall that $\dia$ := $\coneg \Box \neg$ and $\bbox$ := $\neg \bdia \coneg$.
(1) is easy to show. So, we focus on items (2) and (3).
For (2), let us first recall that $\coneg \varphi$ := $\top \coimp \varphi$.
By Proposition \ref{prop:prf-basic}, we proceed as follows: $\vdash_{\Lambda} \coneg \varphi \to \psi$ iff $\vdash_{\Lambda} \top \to (\varphi \lor \psi)$ iff $\vdash_{\Lambda} \top \to (\psi \lor \varphi)$ iff $\vdash_{\Lambda} \coneg \psi \to \varphi$. This proves (2). For (3), we proceed as follows: $\vdash_{\Lambda} \coneg \Box \neg \varphi \to \psi$ iff
$\vdash_{\Lambda} \coneg \psi \to \Box \neg \varphi$ (by item (2)) iff
$\vdash_{\Lambda} \bdia \coneg \psi \to \neg \varphi$ (by Proposition \ref{prop:prf-basic})
iff $\vdash_{\Lambda} \varphi \to \neg \bdia \coneg \psi$ (by item (1)).
\end{proof}
\begin{lem}
\label{lem:leftconv}
Let $\Lambda$ be a $bist$-logic and $(\Gamma,\Delta), (\Sigma,\Theta) \in U^{\Lambda}$. Then,
\[
\begin{array}{lll}
(\Gamma,\Delta) H^{\Lambda};\breve{R^{\Lambda}};H^{\Lambda}(\Sigma,\Theta) &\iff& \inset{\varphi }{\coneg \Box \neg \varphi \in \Theta} \subseteq \Delta.\\
\end{array}
\]
Therefore, $(\Gamma,\Delta) \leftconv R^{\Lambda} (\Sigma,\Theta)$ iff $($$\dia \varphi \in \Theta$ implies $\varphi \in \Delta$$)$ for all formulas $\varphi$.
\end{lem}
\begin{lem}
\label{lem:definable}
Let $\Lambda$ be a $bist$-logic and $S_{i}^{\Lambda} \in \setof{R^{\Lambda},\leftconv R^{\Lambda}}$ for $1 \leqslant i \leqslant m$ and for each $i$ let $\mathsf{D}_{i}$ be $\bdia$ if $S_{i}^{\Lambda}$ = $R^{\Lambda}$; $\dia$ if $S_{i}^{\Lambda}$ = $\leftconv R^{\Lambda}$. For all pairs $(\Gamma,\Delta),(\Sigma,\Theta)\in U^{\Lambda}$, we have the following equivalence: $(\Gamma,\Delta) S_{1}^{\Lambda}; \cdots ;S_{m}^{\Lambda}(\Sigma,\Theta)$ $\iff$ $\inset{\varphi}{\mathsf{D}_{m} \cdots \mathsf{D}_{1} \varphi \in \Theta } \subseteq \Delta$.
\end{lem}
\begin{proof}
By induction on $m$. (\textbf{Basis}) When $m$ = 0, we need to show the equivalence: $(\Gamma,\Delta) H^{\Lambda}(\Sigma,\Theta)$ iff $\Theta \subseteq \Delta$. This is immediate. (\textbf{Inductive Step}) Let $m$ = $k+1$. We show the equivalence:
\[
\begin{array}{lll}
(\Gamma,\Delta) S_{1}^{\Lambda}; \cdots ;S_{k+1}^{\Lambda}(\Sigma,\Theta) &\iff& \inset{\varphi}{\mathsf{D}_{k+1} \cdots \mathsf{D}_{1} \varphi \in \Theta } \subseteq \Delta.
\end{array}
\]
By Lemmas \ref{lem:access} and \ref{lem:leftconv}, the left-to-right direction is easy to establish, so we focus on the converse direction. Assume $\inset{\varphi}{\mathsf{D}_{k+1} \cdots \mathsf{D}_{1} \varphi \in \Theta } \subseteq \Delta$. We show that there exists a pair $(\Gamma_{1},\Delta_{1}) \in U^{\Lambda}$ such that $(\Gamma,\Delta) S_{1}^{\Lambda}(\Gamma_{1},\Delta_{1})$ and $(\Gamma_{1},\Delta_{1})S_{2}^{\Lambda}; \cdots ;S_{k+1}^{\Lambda}(\Sigma,\Theta)$. It suffices to show that
\[
(\inset{\mathsf{D}_{1} \gamma}{\gamma \in \Gamma},\inset{\varphi}{\mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \varphi \in \Theta })
\]
is $\Lambda$-unprovable; this suffices, because Lemmas \ref{lem:access} and \ref{lem:leftconv} together with the induction hypothesis then yield the desired goal. Suppose otherwise. Then we can find formulas $\gamma_{1},\ldots,\gamma_{a} \in \Gamma$ and $\varphi_{1}, \ldots, \varphi_{b}$ such that $\mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \varphi_{j} \in \Theta$ for all indices $j$ and $\vdash_{\Lambda} {\bigwedge}_{i \in I} \mathsf{D}_{1} \gamma_{i} \to {\bigvee}_{j \in J} \varphi_{j}$ where $I$ = $\setof{1,\dots,a}$ and $J$ = $\setof{1,\dots,b}$.
Now by monotonicity of $\mathsf{D}_{1}$ (due to Propositions \ref{prop:prf-basic} and \ref{prop:coneg-dia}), $\vdash_{\Lambda} \mathsf{D}_{1}{\bigwedge}_{i \in I} \gamma_{i} \to {\bigvee}_{j \in J} \varphi_{j}$.
By monotonicity of `$\mathsf{D}_{k+1} \cdots \mathsf{D}_{2}$' (by Propositions \ref{prop:prf-basic} and \ref{prop:coneg-dia}), $\vdash_{\Lambda} \mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \mathsf{D}_{1}{\bigwedge}_{i \in I} \gamma_{i} \to \mathsf{D}_{k+1} \cdots \mathsf{D}_{2} {\bigvee}_{j \in J} \varphi_{j}$. By commutativity of `$\mathsf{D}_{k+1} \cdots \mathsf{D}_{2}$' over finite disjunctions (again due to Propositions \ref{prop:prf-basic} and \ref{prop:coneg-dia}),
\[
\vdash_{\Lambda} \mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \mathsf{D}_{1}{\bigwedge}_{i \in I} \gamma_{i} \to {\bigvee}_{j \in J} \mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \varphi_{j}.
\]
Since $\mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \varphi_{j} \in \Theta$ for all $j \in J$, we get ${\bigvee}_{j \in J} \mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \varphi_{j} \in \Theta$. By the implication established above, we obtain $\mathsf{D}_{k+1} \cdots \mathsf{D}_{2} \mathsf{D}_{1}{\bigwedge}_{i \in I} \gamma_{i} \in \Theta$. By our initial assumption of $\inset{\varphi}{\mathsf{D}_{k+1} \cdots \mathsf{D}_{1} \varphi \in \Theta } \subseteq \Delta$, we obtain ${\bigwedge}_{i \in I} \gamma_{i} \in \Delta$. On the other hand, since all the $\gamma_{i}$s belong to $\Gamma$, we have ${\bigwedge}_{i \in I} \gamma_{i} \in \Gamma$, contradicting the $\Lambda$-unprovability of $(\Gamma,\Delta)$.
\end{proof}
\begin{theorem}
\label{thm:complete-extension}
Let $\Sigma$ be a possibly infinite set of formulas of the form $\mathsf{D}_{k} \cdots \mathsf{D}_{1}p \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}p$ $($where $\mathsf{D}_{i} \in \setof{\bdia,\dia}$$)$ and $\mathbb{F}_{\Sigma}$ be the class of $H$-frames defined by $\Sigma$. Then $\mathbf{BiSKt}\Sigma$ is strongly complete for the class $\mathbb{F}_{\Sigma}$, i.e., if $\Gamma \models_{\mathbb{F}_{\Sigma}} \varphi$ then $\Gamma \vdash_{\mathbf{BiSKt}\Sigma }\varphi$, for all sets $\Gamma \cup \setof{\varphi}$ of formulas.
\end{theorem}
\begin{proof}
Let us put $\Lambda$ := $\mathbf{BiSKt}\Sigma$. Suppose that $\Gamma \not\vdash_{\Lambda}\varphi$. Our argument for $\Gamma \not\models_{\mathbb{F}_{\Sigma}} \varphi$ is almost the same as in the proof of Theorem \ref{thm:complete}, but we need to check that the frame part $F^{\Lambda}$ = $(U^{\Lambda},H^{\Lambda},R^{\Lambda})$ of the $\Lambda$-canonical model $M^{\Lambda}$ belongs to the class $\mathbb{F}_{\Sigma}$. In the notation of Lemma \ref{lem:definable}, it suffices to show that $S_{1}^{\Lambda};\cdots;S_{k}^{\Lambda} \subseteq S_{k+1}^{\Lambda};\cdots;S_{m}^{\Lambda}$ for any formula $\mathsf{D}_{k} \cdots \mathsf{D}_{1}p \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}p$ from $\Sigma$. Suppose that $(\Gamma,\Delta)S_{1}^{\Lambda};\cdots;S_{k}^{\Lambda} (\Gamma',\Delta')$. To show
$(\Gamma,\Delta) S_{k+1}^{\Lambda};\cdots;S_{m}^{\Lambda} (\Gamma',\Delta')$, we assume that $\mathsf{D}_{m}\cdots\mathsf{D}_{k+1} \varphi \in \Delta'$ by Lemma \ref{lem:definable}. Our goal is to establish $\varphi \in \Delta$. Since $\mathsf{D}_{k} \cdots \mathsf{D}_{1}p \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}p \in \Lambda$, it holds that $\vdash_{\Lambda} \mathsf{D}_{k} \cdots \mathsf{D}_{1}\varphi \to \mathsf{D}_{m} \cdots \mathsf{D}_{k+1}\varphi$. It follows from our assumption that $\mathsf{D}_{k} \cdots \mathsf{D}_{1} \varphi \in \Delta'$. By the supposition $(\Gamma,\Delta)S_{1}^{\Lambda};\cdots;S_{k}^{\Lambda} (\Gamma',\Delta')$, Lemma \ref{lem:definable} allows us to conclude $\varphi \in \Delta$. Therefore, $F^{\Lambda} \in \mathbb{F}_{\Sigma}$, as required.
\end{proof}
\begin{corollary}
\label{cor:strong-comp-table}
Let $\Sigma$ be a set of formulas from Table \ref{table:def} and $\mathbb{F}_{\Sigma}$ be the class of $H$-frames defined by $\Sigma$. Then $\mathbf{BiSKt}\Sigma$ is strongly complete for the class $\mathbb{F}_{\Sigma}$.
\end{corollary}
\section{Finite Model Property for Bi-intuitionistic Stable Tense Logics}
\label{sec:fmp}
A $bist$-logic $\Lambda$ has the {\em finite model property} if for every non-theorem $\varphi \notin \Lambda$, there is a finite frame $F$ such that $F \models \Lambda$ but $F \not\models \varphi$.
We say that a $bist$-logic $\Lambda$ is {\em finitely axiomatizable} if $\Lambda$ = $\mathbf{BiSKt}\Sigma$ for some finite set $\Sigma$ of formulas. It is well-known that if $\Lambda$ is finitely axiomatizable and has the finite model property then it is decidable. In this section, we show that some strongly complete extensions of $\mathbf{BiSKt}$ enjoy the finite model property and are therefore decidable. We employ the filtration method of~\cite{Hasimoto2001} for intuitionistic modal logics to establish the finite model property for some $bist$-logics as well.
Let $M$ = $(U,H,R,V)$ be an $H$-model and $\Delta$ a subformula closed set of formulas. We define an equivalence relation $\sim_{\Delta}$ by: $x \sim_{\Delta} y$ $\iff$ $(M,x \models \varphi \text{ iff } M,y \models \varphi)$ for all $\varphi \in \Delta$.
When $x \sim_{\Delta} y$ holds, we say that $x$ and $y$ are $\Delta$-equivalent. We use $[x]$ to mean an equivalence class $\inset{y \in U}{x \sim_{\Delta} y}$ of $x \in U$.
\begin{definition}[Filtration]
We say that a model $M_{\Delta}$ = $(U_{\Delta}, H_{\Delta}, R_{\Delta}, V_{\Delta})$ is a {\em filtration} of an $H$-model $M$ = $(U,H,R,V)$ through a subformula closed set $\Delta$ of formulas if the following conditions are satisfied.
\begin{enumerate}
\item $U_{\Delta}$ = $\inset{[x]}{x \in U}$.
\item For all $x,y \in U$, if $xHy$ then $[x]H_{\Delta}[y]$.
\item For all $x,y \in U$ and $\varphi \in \Delta$, if $[x]H_{\Delta}[y]$ and $M,x \models \varphi$ then $M,y \models \varphi$.
\item For all $x,y \in U$, if $xRy$ then $[x]R_{\Delta}[y]$.
\item For all $x,y \in U$ and $\Box \varphi \in \Delta$, if $[x]R_{\Delta}[y]$ and $M,x \models \Box \varphi$ then $M,y \models \varphi$.
\item For all $x,y \in U$ and $\bdia \varphi \in \Delta$, if $[x]R_{\Delta}[y]$ and $M,x \models \varphi$ then $M,y \models \bdia \varphi$.
\item $V_{\Delta}(p)$ = $\inset{[x]}{x \in V(p)}$ for all $p \in \Delta$.
\end{enumerate}
\end{definition}
When $\Delta$ is finite, we note that $U_{\Delta}$ is also finite.
\begin{proposition}
\label{prop:fil-lem}
Let $M_{\Delta}$ = $(U_{\Delta}, H_{\Delta}, R_{\Delta}, V_{\Delta})$ be a filtration of an $H$-model $M$ = $(U,H,R,V)$ through a subformula closed set $\Delta$ of formulas. Then for every $x \in U$ and every $\varphi \in \Delta$, the following equivalence holds: $M,x \models \varphi$ $\iff$ $M_{\Delta},[x] \models \varphi$.
\end{proposition}
\begin{proof}
The proof is by induction on $\varphi$. We only show the case where $\varphi$ is of the form $\psi \coimp \gamma$. Note that $\psi,\gamma \in \Delta$.
For the left-to-right direction, assume $M,x \models \psi \coimp \gamma$, i.e., there exists $y \in U$ such that $yHx$ and $M,y \models \psi$ and $M,y \not\models \gamma$.
Fix such $y$. By the condition (2) of filtration, $[y]H_{\Delta}[x]$.
It follows from induction hypothesis that $M_{\Delta},[y] \models \psi$ and $M_{\Delta},[y] \not\models \gamma$.
Therefore, $M_{\Delta},[x]\models \psi \coimp \gamma$.
For the right-to-left direction, assume $M_{\Delta},[x] \models \psi \coimp \gamma$.
So, we can find an equivalence class $[y]$ such that $[y]H_{\Delta}[x]$ and
$M_{\Delta},[y] \models \psi$ and $M_{\Delta},[y] \not\models \gamma$.
By induction hypothesis, $M,y \models \psi$ and $M,y \not\models \gamma$.
Since $yHy$, we obtain $M,y \models \psi \coimp \gamma$.
By $[y]H_{\Delta}[x]$ and the condition (3) of filtration, $M,x \models \psi \coimp \gamma$ holds.
\end{proof}
While our definition of filtration does not guarantee us the {\em existence} of a filtration, the following definition and proposition provide an example of filtration. We remark that the filtration in Definition \ref{dfn:fin-fil} is called the {\em finest filtration} and is shown to be the smallest filtration in~\cite{Hasimoto2001}.
\begin{definition}
\label{dfn:fin-fil}
Given an $H$-frame $F$ = $(U,H,R)$ and a subformula closed set $\Delta$, $\underline{H}_{\Delta}$ and $\underline{R}_{\Delta}$ are defined by:
\[
\begin{array}{lll}
[x] \underline{H}_{\Delta} [y] & \iff & x'Hy' \text{ for some $x' \in [x]$ and some $y' \in [y]$, } \\
[x] \underline{R}_{\Delta} [y] & \iff & x'Ry' \text{ for some $x' \in [x]$ and some $y' \in [y]$. } \\
\end{array}
\]
Put $\underline{R}_{\Delta}^{s}$ := $\underline{H}_{\Delta}^{+};\underline{R}_{\Delta};\underline{H}_{\Delta}^{+}$ where $X^{+}$ is the transitive closure of a binary relation $X$ on a set.
\end{definition}
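On a finite model, Definition \ref{dfn:fin-fil} is directly computable. The following Python sketch is illustrative only; the equivalence-class map \texttt{cls} is a hypothetical input, e.g., obtained by grouping states that satisfy the same formulas of a finite $\Delta$.

```python
# Illustrative computation of the finest filtration relations and the
# stability closure H⁺;R;H⁺ (hypothetical toy data, for illustration only).

def compose(S, T):
    """Relational composition S;T."""
    return {(x, z) for (x, y1) in S for (y2, z) in T if y1 == y2}

def transitive_closure(S):
    """Least transitive relation containing S."""
    T = set(S)
    while True:
        new = compose(T, T) - T
        if not new:
            return T
        T |= new

def finest_filtration(H, R, cls):
    """Project H and R along cls and take the stability closure of R."""
    H_f = {(cls[x], cls[y]) for (x, y) in H}
    R_f = {(cls[x], cls[y]) for (x, y) in R}
    H_plus = transitive_closure(H_f)
    R_s = compose(compose(H_plus, R_f), H_plus)   # H⁺ ; R ; H⁺
    return H_plus, R_s
```

For instance, with states $0 \sim_\Delta 1$ collapsed to one class, the projected relation $\underline{R}_{\Delta}$ is stabilized by the closure, as the note after the definition requires.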
As noted in~\cite{Hasimoto2001}, we need to take the `stability-closure' of $\underline{R}_{\Delta}$ to define $\underline{R}_{\Delta}^{s}$, because we cannot always assure that $\underline{R}_{\Delta}$ is stable.
\begin{proposition}
\label{prop::fin-fil}
Let $M$ = $(U,H,R,V)$ be an $H$-model and $\Delta$ a subformula closed set. Then $M_{\Delta}^{s}$ := $(U_{\Delta},\underline{H}^{+}_{\Delta},\underline{R}_{\Delta}^{s},V_{\Delta})$ is an $H$-model and a filtration of $M$ through $\Delta$.
\end{proposition}
\begin{proof}
We show that all conditions for filtration are satisfied. We check condition (6) alone, since the others are already shown in~\cite{Hasimoto2001} for intuitionistic modal logics. \\
\noindent (6) Assume $[x]\underline{R}^{s}_{\Delta}[y]$ and $M,x \models \varphi$. We show that $M,y \models \bdia \varphi$.
By $[x]\underline{R}^{s}_{\Delta}[y]$, there exist $[x'], [y'] \in U_{\Delta}$ such that $[x]\underline{H}_{\Delta}^{+}[x']$ and $[x']\underline{R}_{\Delta}[y']$ and $[y']\underline{H}_{\Delta}^{+}[y]$. It follows from condition (3) above, our assumption and $[x]\underline{H}_{\Delta}^{+}[x']$ that $M,x' \models \varphi$. By $[x']\underline{R}_{\Delta}[y']$, there exist $x'' \in [x']$ and $y'' \in [y']$ such that $x''Ry''$. We obtain $M,x'' \models \varphi$ by $x'' \in [x']$. So $M,y'' \models \bdia \varphi$. By $y'' \in [y']$, $M,y' \models \bdia \varphi$. By condition (3) above and $[y']\underline{H}_{\Delta}^{+}[y]$, we can conclude $M,y \models \bdia \varphi$.
\end{proof}
\begin{proposition}
\label{prop:preserve-finitest}
Let $F$ = $(U,H,R)$ be an $H$-frame. Let $S_{i} \in \setof{R,\leftconv R}$ for $1 \leqslant i \leqslant m$.
\begin{enumerate}
\item If $(x,y) \in \leftconv {R}$ then $([x],[y]) \in \leftconv \underline{R}_{\Delta}^{s}$.
\item If $F$ satisfies $H \subseteq S_{1};\cdots;S_{m}$ then $(U_{\Delta},\underline{H}^{+}_{\Delta},\underline{R}_{\Delta}^{s})$ also satisfies the corresponding property.
\item If $F$ satisfies $R \subseteq S_{1};\cdots;S_{m}$ then $(U_{\Delta},\underline{H}^{+}_{\Delta},\underline{R}_{\Delta}^{s})$ also satisfies the corresponding property.
\end{enumerate}
\end{proposition}
\begin{proof}
We focus on items (1) and (3) alone. For (1), assume that $(x,y) \in \leftconv R$. This means that $xHx'$, $y'Rx'$ and $y'Hy$ for some $x',y' \in U$.
It follows that $[y']\underline{H}_{\Delta}[y']\underline{R}_{\Delta}[x']\underline{H}_{\Delta}[x']$ hence $[y']\underline{R}_{\Delta}^{s}[x']$. By assumption, we also obtain $[x]\underline{H}_{\Delta}^{+}[x']$ and $[y']\underline{H}_{\Delta}^{+}[y]$.
We can now conclude $([x],[y]) \in \leftconv \underline{R}_{\Delta}^{s}$.
For (3), assume that $R \subseteq S_{1};\cdots;S_{m}$. We show that $\underline{R}_{\Delta}^{s} \subseteq (S_{1})_{\Delta}; \cdots ;(S_{m})_{\Delta}$ and so suppose that $[x] \underline{R}_{\Delta}^{s} [y]$.
This means that $[x] \underline{H}_{\Delta}^{+} [x'] \underline{R}_{\Delta} [y'] \underline{H}_{\Delta}^{+} [y]$ for some $[x']$, $[y'] \in U_{\Delta}$. By definition, there exist $a \in [x']$ and $b \in [y']$ such that $a R b$.
By assumption, $(a,b)\in S_{1};\cdots;S_{m}$. By item (1) and the condition (4) of filtration for $\underline{R}_{\Delta}^{s}$, we obtain $([x'],[y'])\in (S_{1})_{\Delta};\cdots;(S_{m})_{\Delta}$. Since $(S_{i})_{\Delta}$ is stable, it follows from $[x] \underline{H}_{\Delta}^{+} [x']$ and $[y'] \underline{H}_{\Delta}^{+} [x]$ that $([x],[y])\in (S_{1})_{\Delta};\cdots;(S_{m})_{\Delta}$, as desired.
\end{proof}
\begin{theorem}
\label{thm:fmp-fin}
Let $\Sigma$ be a possibly empty {\em finite} set of formulas of the form $p \to \mathsf{D}_{1} \cdots \mathsf{D}_{m}p$ or $\bdia p \to \mathsf{D}_{1} \cdots \mathsf{D}_{m}p$ $($where $\mathsf{D}_{i} \in \setof{\bdia,\dia}$$)$. Then $\mathbf{BiSKt}\Sigma$ enjoys the finite model property. Therefore, $\mathbf{BiSKt}\Sigma$ is decidable.
\end{theorem}
\begin{proof}
It suffices to show the former part, i.e., the finite model property.
Let $\varphi \notin \mathbf{BiSKt}\Sigma$, i.e., $\varphi$ is a non-theorem of $\mathbf{BiSKt}\Sigma$.
By Theorem \ref{thm:complete-extension}, there is an $H$-model $M$ = $(U,H,R,V)$ such that $M \not\models \varphi$ and $F$ = $(U,H,R) \models \Sigma$. By Proposition \ref{prop:definable}, $F$ satisfies the properties corresponding to all the elements of $\Sigma$.
Put $\Delta$ as the set of all subformulas of $\varphi$. By Proposition \ref{prop:fil-lem}, we obtain $M_{\Delta}^{s} \not\models \varphi$ hence $F_{\Delta}^{s} \not\models \varphi$ where $F_{\Delta}^{s}$ is the frame part of $M_{\Delta}^{s}$. Moreover, Proposition \ref{prop:preserve-finitest} implies $F_{\Delta}^{s} \models \Sigma$, which implies $F_{\Delta}^{s} \models \mathbf{BiSKt}\Sigma$.
\end{proof}
If $\Sigma$ is a set of formulas from Table \ref{table:def} which satisfies the syntactic condition in the statement of Theorem \ref{thm:fmp-fin}, then the theorem implies that $\mathbf{BiSKt}\Sigma$ is decidable. In particular:
\begin{corollary}[\cite{Stell2016}]
\label{cor:dec-biskt}
$\mathbf{BiSKt}$ is decidable.
\end{corollary}
When $\Sigma$ contains the formula $\bdia \bdia p \to \bdia p$ which defines the transitivity of $R$ (recall Table \ref{table:def}), we have the following partial results on the decidability of $\mathbf{BiSKt}\Sigma$.
\begin{proposition}
\label{prop:tra-fil}
Let $M$ = $(U,H,R,V)$ be an $H$-model and $\Delta$ a subformula closed set. If $R$ is transitive, then $M_{\Delta}^{s+}$ := $(U_{\Delta},\underline{H}^{+}_{\Delta},(\underline{R}_{\Delta}^{s})^{+},V_{\Delta})$ is an $H$-model and a filtration of $M$ through $\Delta$, where $(\underline{R}_{\Delta}^{s})^{+}$ is the transitive closure of $\underline{R}_{\Delta}^{s}$.
\end{proposition}
\begin{theorem}
\label{thm:fmp-K4-S4}
$\mathbf{BiSKt}\setof{\bdia \bdia p \to \bdia p}$ and
$\mathbf{BiSKt}\setof{\bdia \bdia p \to \bdia p, p \to \bdia p}$ enjoy the finite model property.
Therefore, they are decidable.
\end{theorem}
\begin{proof}
Put $\Lambda$ as one of the $bist$-logics in the statement. We only show the finite model property.
Let $\varphi \notin \Lambda$, i.e., $\varphi$ is a non-theorem of $\Lambda$.
By Theorem \ref{thm:complete-extension}, there is an $H$-model $M$ = $(U,H,R,V)$ such that $M \not\models \varphi$ and the frame $F$ = $(U,H,R)$ validates the axioms of $\Lambda$. By Proposition \ref{prop:definable}, $F$ satisfies the properties corresponding to the axioms $\bdia \bdia p \to \bdia p$ and, where present, $p \to \bdia p$ of $\Lambda$. Put $\Delta$ as the set of all subformulas of $\varphi$. By Propositions \ref{prop:fil-lem} and \ref{prop:tra-fil}, we obtain $M_{\Delta}^{s+} \not\models \varphi$ hence $F_{\Delta}^{s+} \not\models \varphi$ where $F_{\Delta}^{s+}$ is the frame part of $M_{\Delta}^{s+}$. We note that $(\underline{R}_{\Delta}^{s})^{+}$ is clearly transitive. If $R$ is reflexive, so is $(\underline{R}_{\Delta}^{s})^{+}$. This implies $F_{\Delta}^{s+} \models \Lambda$.
\end{proof}
\section{Related Literature and Further Work}
\label{sec:Related}
There is some closely related work in the existing literature which we only became aware of after completing the work reported above.
We are grateful to the reviewers for drawing this to our attention.
Kripke semantics in which there is an accessibility relation together with an ordering on the set of worlds occurs, for instance, in~\cite{GhilardiMeloni1997,CelaniJanasa1997}.
In particular, Celani and Jansana~\cite{CelaniJanasa1997} show that the semantics for positive modal logic which they
give using this ordering has more convenient properties than earlier work by Dunn.
The stability condition used in our work
is essentially the bimodule axiom in~\cite[p7]{GhilardiMeloni1997}, with their $\prec$ being our $\breve{H}$ and their accessibility relation, $\mid$,
being $\conv{H} \comp R \comp \conv{H}$. Thus the (ordinary) converse of the accessibility relation in~\cite{GhilardiMeloni1997} corresponds to
our left converse $\leftconv{R}$ so that the adjoint pair of modalities in~\cite{GhilardiMeloni1997} would then be our $\dia$ and $\bbox$.
However, in our system these two do not suffice to define $\bdia$ and $\Box$ as shown by our Proposition~\ref{prop:undefinable},
so the connection with our results needs further work before it can be made clear.
There is a similar situation with~\cite{GehrkeNagahashiVenema2005} where there are four independent accessibility relations and a partial order $\leqslant$. In this context our $R$ would be the relation
denoted in~\cite{GehrkeNagahashiVenema2005} by $R_\Box$ and our $\conv{H} \comp R \comp \conv{H}$ would be $R_{\dia}$.
Thus the modalities in~\cite{GehrkeNagahashiVenema2005} appear to be our $\dia$ and $\bbox$ and it is not immediately clear how our
$\bdia$ and $\Box$ fit into that framework.
Another difference between our work and these papers is that~\cite{CelaniJanasa1997} and~\cite{GehrkeNagahashiVenema2005} use a language without implication.
In fact, it is stated in~\cite[p101]{GehrkeNagahashiVenema2005} that intuitionistic implication does fit into their framework but they do not include details
of how this is done.
Although Kripke semantics for intuitionistic modal logic generally take the form of a single set $U$ equipped with two relations $H$ and $R$ on $U$,
the approach of Ewald~\cite{Ewald1986} is somewhat different.
Ewald uses a family of relations indexed by a poset $(\Gamma, \leqslant)$, so for each $\gamma \in \Gamma$ there is a set $T_\gamma$ and a relation
$u_\gamma \subseteq T_\gamma \times T_\gamma$. These are required to satisfy the condition that if $\gamma_1 \leqslant \gamma_2$ then
$T_{\gamma_1} \subseteq T_{\gamma_2}$ and $u_{\gamma_1} \subseteq u_{\gamma_2}$. We can however define
$U = \{(t,\gamma) \mid \gamma \in \Gamma \text{ and } t \in T_\gamma\}$ to get a single set, and then define relations $H$ and $R$ on $U$ as follows.
We put $(t_1, \gamma_1) \mathrel{H} (t_2, \gamma_2)$ iff $t_1 = t_2$ and $\gamma_1 \leqslant \gamma_2$,
and we put $(t_1, \gamma_1) \mathrel{R} (t_2, \gamma_2)$ iff $\gamma_1 = \gamma_2$ and $t_1 \mathrel{u_{\gamma_1}} t_2$.
This $R$ will not necessarily be stable in our sense,
but it does satisfy the weaker condition that for any $X \subseteq U$ which is an $H$-set as in Definition~\ref{defn-stable},
we have $X \oplus R = X \oplus (H \comp R \comp H)$
where $\oplus$ denotes the dilation defined in the Introduction. Although this is a weaker condition than stability, the models used by
Ewald satisfy the additional constraint that for any $H$-set $X$ we have $X \oplus \breve{R} = X \oplus (H \comp \breve{R} \comp H)$.
Applying the same semantics as in our Definition~\ref{defn-semantics},
but for a language without $\coimp$,
we then have the same semantics as in~\cite{Ewald1986}
by taking the tense modalities $F, P, G, H$ to be $\dia, \bdia, \Box$ and $\bbox$ respectively.
Extending Ewald's approach to the language with $\coimp$ is then immediate but it can be shown that,
due to the weakening of stability,
the formula $\dia p \leftrightarrow \coneg \Box \neg p$ is no longer valid in all frames.
It is interesting to note that $\dia p \to \coneg \Box \neg p$ is valid in this weaker setting.
Bi-intuitionistic tense logic is studied proof-theoretically by Gor{\'e} et al.\ in~\cite{GoreBiIntAiML2010}.
This includes a discussion which obtains Ewald's semantics by identifying the two separate accessibility relations that are
used in~\cite{GoreBiIntAiML2010}.
The use of two relations makes the work of Gor{\'e} et al.\ more general than ours, a point already made in~\cite{Stell2016}; however,
we contend that
the connection between $\dia$ and $\Box$ that appears explicitly in~\cite{Stell2016}, and implicitly as just noted in~\cite{Ewald1986},
is sufficiently interesting to merit further study.
The correspondence results mentioned here and published in~\cite{Stell2016} also appear to be closely related to earlier work. Sahlqvist theorems for positive tense logic and for intuitionistic modal logic are proved in~\cite{GehrkeNagahashiVenema2005} and~\cite{GhilardiMeloni1997} respectively. A Sahlqvist theorem for bi-intuitionistic modal mu-calculus was established by Conradie et al.~\cite{ConradieFomatatiPalmigianoSourabh2015}. Further work is needed to determine whether these results can be generalized in a straightforward way to our setting.
In this paper we have focussed on the logical theory rather than the applications that originally motivated this work.
Within artificial intelligence the topic of spatial reasoning is of considerable practical importance~\cite{CohnRenzHbk2008}.
Spatial regions can be modelled as subgraphs, which can also be identified with collections of pixels in images to make the connection with the
mathematical morphology in the Introduction. Investigating the ability of the logics presented here to express spatial relations, in the sense
of~\cite{CohnRenzHbk2008}, between subgraphs is another topic for future work.
\footnote{The authors would like to thank the anonymous reviewers for their helpful and constructive comments that greatly contributed to improving the final version of the paper. The work of the first author was partially supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B) Grant Number 15K21025 and JSPS Core-to-Core Program (A. Advanced Research Networks).}
\bibliographystyle{eptcs}
\section{Kleinian groups -- discrete isometry groups of negatively curved symmetric spaces}\label{ch:1}
\subsection{Basics of negatively curved symmetric spaces} \label{sec:rk1}
In this section we review basic properties of negatively curved symmetric spaces; later on, we will compare and contrast their properties with the ones of {\em higher rank} symmetric spaces of noncompact type. We refer the reader to \cite{Mostow} and \cite{Parker} for a detailed discussion
of negatively curved symmetric spaces.
Recall that negatively curved symmetric spaces $X$ (also known as {\em rank 1 symmetric spaces of noncompact type}) come in four families:
$${\mathbb H}^n, {\mathbf C}{\mathbb H}^n, {\mathbf H} {\mathbb H}^n, {\mathbf O} {\mathbb H}^2,$$
i.e., real hyperbolic spaces, complex hyperbolic spaces, quaternionic hyperbolic spaces and the octonionic hyperbolic plane. We will normalize their Riemannian metrics so that the maximum of the sectional curvature is $-1$.
The identity component of the isometry group of $X$ will be denoted $G$.
The basic fact of the geometry of negatively curved symmetric spaces is that two geodesic segments in $X$
are $G$-congruent if and only if they have the same length. (This property fails in higher rank.)
The {\em visual boundary} of a symmetric space $X$ consists of the equivalence classes of geodesic rays in $X$, equipped with a suitable topology. Here two rays are equivalent if and only if they are within finite distance from each other. The visual boundary of $X$ is denoted $S=\partial_{\infty} X$. The elements of $\partial_{\infty} X$ are called {\em ideal boundary points} of $X$. A ray representing a point $\xi\in S$ is said to be {\em asymptotic} to $\xi$. Given any point $x\in X$, there is a unique ray emanating from $x$ and asymptotic to $\xi$; this ray is denoted $x\xi$. Thus, after we fix a basepoint $o\in X$, we can identify $S$ with the unit tangent sphere $U_oX$ in $T_oX$: Each ray $o\xi$ corresponds to its (unit) velocity vector at $o$. This identification endows $S$ with a natural smooth structure
and a Riemannian metric depending on $o$.
The sphere $S$ is a homogeneous $G$-space and the point stabilizers are the minimal parabolic subgroups $B< G$.
An important feature of negatively curved symmetric spaces (which also fails in higher rank) is:
\begin{lemma}
For any two asymptotic rays $r_i: [0, \infty)\to X, i=1, 2$, there exist $t_1, t_2\in {\mathbb R}_+$ such that
the rays $r_i: [t_i, \infty)\to X$ are {\em strongly asymptotic:}
$$
\lim_{t\to\infty} d(r_1(t_1+ t), r_2(t_2+t))=0.
$$
\end{lemma}
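For instance, in the upper half-plane model of ${\mathbb H}^2$, consider the asymptotic rays $r_1(t)= ie^{t}$ and $r_2(t)= 1+ie^{t}$, both converging to $\infty\in\partial_{\infty}{\mathbb H}^2$. The horizontal segment between $x_1+iy$ and $x_2+iy$ has hyperbolic length $|x_1-x_2|/y$, hence
$$
d(r_1(t), r_2(t))\le e^{-t}\to 0,
$$
i.e.\ these two rays are strongly asymptotic already with $t_1=t_2=0$.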
\medskip
Attaching the visual boundary to $X$ provides the {\em visual compactification} of the symmetric space $X$:
$$
\overline X= X\sqcup S,
$$
where a sequence of points $x_n\in X$ converges to an ideal point $\xi\in S$ if and only if the sequence of geodesic segments $ox_n$ converges to the geodesic ray $o\xi$ representing $\xi$.
Alternatively, one can describe this compactification as the {\em horofunction compactification} of $X$. This compactification is defined for a much larger class of metric spaces and we will use it later in section \ref{sec:finsler geometry} in the context of {\em Finsler metrics} on symmetric spaces. See Appendix \ref{sec:horoboundary}. We now return to negatively curved symmetric spaces.
\medskip
{\em Visibility property.} Any two distinct ideal boundary points $\xi, \hat\xi\in S$ are connected by a (unique) geodesic
line $l: {\mathbb R}\to X$:
$$
\lim_{t\to\infty} l(t)=\xi, \quad \lim_{t\to-\infty} l(t)=\hat\xi.
$$
This property again fails in higher rank.
\medskip
{\bf Isometries.} The isometries $g$ of $X$ are classified according to their (convex) displacement functions
$$
d_g(x)= d(x, gx).
$$
\begin{itemize}
\item Hyperbolic isometries: $\inf_X d_g > 0$.
In this case, the infimum is attained on a $g$-invariant geodesic $a_g\subset X$, called the {\em axis} of $g$. The ideal endpoints of $a_g$ are fixed by $g$.
\item Parabolic isometries: $\inf_X d_g=0$ and is not attained. Each parabolic isometry has a unique fixed ideal boundary point $\xi\in \partial_{\infty} X$,
and
$$
\xi= \lim_{i\to\infty} x_i
$$
for every sequence $x_i\in X$ such that $\lim_{i\to\infty} d_g(x_i)=0$. Horospheres centered at $\xi$ are invariant under $g$.
\item Elliptic isometries: $\inf_X d_g=0$ and is attained, i.e. $g$ fixes a point in $X$.
\end{itemize}
In particular, there are no isometries $g$ for which $\inf_X d_g > 0$ and the infimum is not attained.
This again fails in higher rank.
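For instance, in the upper half-plane model of ${\mathbb H}^2$, where $G=PSL(2,{\mathbb R})$ acts by fractional linear transformations, the three types are realized as follows: $z\mapsto \lambda z$ with $\lambda>1$ is hyperbolic, with axis the imaginary axis, $\min_X d_g=\log\lambda$, and fixed ideal points $0$ and $\infty$; $z\mapsto z+1$ is parabolic, fixing only $\infty$ and preserving each horocycle $\{\mathrm{Im}\, z=c\}$; and $z\mapsto -1/z$ is elliptic, fixing the point $i\in {\mathbb H}^2$.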
\medskip
There are two fundamental facts about symmetric spaces of negative curvature which will
guide our discussion: The {\em Morse Lemma} and the {\em Convergence Property}.
\begin{theorem}
[Morse Lemma \cite{Mostow, Gromov_hypgps}]
Quasigeodesics in $X$ are uniformly close to geodesics. More precisely, given constants $L,A$, there exists a number $D=D(L,A)$ such that each $(L, A)$ quasigeodesic $q$ in $X$ is within Hausdorff distance $D$ from a geodesic.
\end{theorem}
While the Morse Lemma fails in higher rank symmetric spaces $X$
(because it fails in the Euclidean plane),
an important result that we prove in \cite{morse} is that {\em uniformly regular} quasigeodesics are uniformly close to {\em diamonds} in $X$, see Theorem \ref{thm:HRMorse} below.
\subsection{The rank 1 convergence property}
\label{sec:rank1conv}
Given two points $\alpha, \omega\in S$ we define the {\em quasiconstant map}
$$
\alpha_\omega: \overline X - \{\omega\}\to \{\alpha\}
$$
which is undefined at $\omega$. A sequence in a locally compact topological space is {\em divergent} if it has no accumulation points in the space. We will use this in the context of sequences in Lie groups. We say that a divergent
sequence $g_k\in G$ is {\em contracting} if it {\em converges to a quasiconstant map}
$$
g_k\to \alpha_\omega,
$$
i.e.
$$
g_k|_{\overline {X} - \{\omega\}}\to \alpha$$
uniformly on compacts. The point $\alpha$ is the {\em limit point} (or the {\em attractor}) for the sequence $(g_k)$ and $\omega$ is the {\em exceptional point} (or the {\em repeller}) of the sequence.
\begin{rem}
If $(g_k)$ converges to $\alpha_\omega$, then $(g_k^{-1})$ converges to $\omega_\alpha$.
\end{rem}
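For example, for $X={\mathbb H}^2$ the sequence $g_k: z\mapsto e^{2k}z$ in $PSL(2,{\mathbb R})$ is contracting: on compacts in $\overline{{\mathbb H}^2} - \{0\}$ it converges to the quasiconstant map $\infty_0$, so $\alpha=\infty$ is the attractor and $\omega=0$ is the exceptional point, while the inverse sequence $g_k^{-1}: z\mapsto e^{-2k}z$ converges to $0_\infty$, as predicted by the remark.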
\begin{theorem}
[Convergence Property]
\label{thm:conv}
Every divergent sequence $(g_k)$ in $G$ contains a contracting subsequence.
\end{theorem}
While the naive generalization of the convergence property fails for the ideal boundaries of higher rank symmetric spaces and for flag manifolds, nevertheless, a certain version of the convergence property continues to hold, see section \ref{sec:conprop}.
The convergence at infinity of sequences in $X$ yields a notion of convergence at infinity for divergent
sequences in $G$:
\begin{definition}
A sequence $(g_k)$ in $G$ {\em converges} to a point $\alpha\in S$, $g_k\to \alpha$, if for some (equivalently, every) $x\in X$,
$$
\lim_{k\to\infty} g_kx=\alpha.
$$
\end{definition}
For instance, each contracting sequence converges to its attractor.
The convergence property implies the following equivalent dynamical characterization of the convergence $g_k\to \alpha$
in terms of contraction. In particular, it yields a characterization in terms of the dynamics at infinity.
\begin{lemma}\label{lem:flag-con-1}
For each $\alpha\in S$ and sequence $(g_k)$ in $G$ the following are equivalent:
1. $g_k\to \alpha$.
2. Every subsequence of $(g_k)$ contains a {contracting} subsequence which converges to $\alpha_\omega$ for some $\omega$.
3. There exists a bounded sequence $(b_k)$ in $G$ such that the sequence $g_k b_k$
is contracting and converges to
$\alpha_\omega$ for some $\omega$.
In addition, in parts 2 and 3, it suffices to verify the convergence to $\alpha_\omega$ on $S - \{\omega\}$.
\end{lemma}
\subsection{Discrete subgroups}
\begin{definition}
A subgroup $\Gamma < G$ is called {\em discrete} if it is a discrete subset of $G$.
\end{definition}
\begin{definition} [Limit set]
The {\em limit set} $\Lambda(\Gamma)$ of a discrete subgroup $\Gamma < G$ is the
accumulation set in $\partial_{\infty} X$ of one
$\Gamma$-orbit $\Gamma x$ in $X$.
\end{definition}
All orbits $\Gamma x\subset X$ have the same accumulation set in $\partial_{\infty} X$.
This fact is an immediate application of the property that if $(x_i), (y_i)$ are two sequences in $X$ within bounded distance from each other and $x_i\to \xi\in S$, then $y_i\to \xi$ as well.
\begin{definition}
A discrete subgroup $\Gamma< G$ is {\em elementary} if $\Lambda(\Gamma)$ consists of at most two points.
\end{definition}
Every discrete subgroup $\Gamma< G$ enjoys the convergence property,
i.e.\ every divergent sequence $(\gamma_k)$ in $\Gamma$ contains a contracting subsequence,
compare Theorem~\ref{thm:conv}.
The convergence dynamics leads to a definition of the limit set in terms of the dynamics at infinity
and implies a dynamical decomposition of the $\Gamma$-action into discontinuous and chaotic part:
\begin{lemma} For each discrete subgroup $\Gamma< G$ we have:
1. $\Lambda(\Gamma)$ is the set of values $\alpha$ of attractors of contracting sequences of elements of $\Gamma$.
2. $\Lambda(\Gamma)$ is the set of exceptional points of contracting sequences of elements of $\Gamma$.
3. Unless $\Gamma$ is elementary, its action on $\Lambda(\Gamma)$ is {\em minimal}: Every $\Gamma$-orbit in $\Lambda$ is dense.
4. The domain $\Omega(\Gamma):= S - \Lambda(\Gamma)$ equals the {\em wandering set} of the action $\Gamma\curvearrowright S$.
5. The action $\Gamma\curvearrowright X\cup \Omega(\Gamma)$ is properly discontinuous.
\end{lemma}
The subset $\Omega(\Gamma) = S - \Lambda(\Gamma)$ is called the {\em domain of discontinuity} of the
subgroup $\Gamma< G$.
\subsection{Conical convergence}\label{sec:conical_convergence}
The notion of conical convergence plays a central role in describing geometric finiteness (both in rank 1 and in higher rank),
so we define it below in several different ways.
\begin{definition}
A sequence $x_k\in X$ converges to a point $\xi\in S$ {\em conically}
$$
x_k \stackrel{con}{\longrightarrow} \xi
$$
if $x_k\to \xi$ and for every ray $x\xi$ there is $R<\infty$ such that $x_k\in N_R(x\xi)$, the $R$-neighborhood of $x\xi$. A sequence $g_k\in G$ converges to a point $\xi\in S$ {\em conically}
$$
g_k \stackrel{con}{\longrightarrow} \xi,
$$
if for some (equivalently, every) $x\in X$, the sequence $x_k=g_k x$ converges to $\xi$ conically. \end{definition}
The name {\em conical} in this definition
comes from the fact that, e.g.\ in the upper half-space model of ${\mathbb H}^n$, tubular neighborhoods of rays
resemble cones.
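Concretely, in the upper half-plane model of ${\mathbb H}^2$ the set of points within hyperbolic distance $R$ from the imaginary axis is the Euclidean cone $\{|\mathrm{Re}\, z|\le c(R)\, \mathrm{Im}\, z\}$ with apex at $0$. Hence a sequence of the form $x_k=e^{-k}(a_k+i)$ with $|a_k|\le C$ converges conically to the ideal point $0$, along the ray from $i$ to $0$.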
\medskip
As in the case of convergence $g_k\to \xi$ one can also characterize conical convergence
in terms of the dynamics of $(g_k)$ on $S$.
\begin{lem}\label{lem:eq-con}
Suppose that $g_k\to \xi\in S$, $g_k\in G$. Then $g_k \stackrel{con}{\longrightarrow} \xi$
if and only if either one of the two equivalent properties holds:
1. For some, equivalently, every complete geodesic $l\subset X$ asymptotic to $\xi$,
the sequence $g^{-1}_k l$ is relatively compact in the space of all geodesics in $X$.
2. For some, equivalently, every $\hat\xi\in S - \{\xi\}$ the sequence $g^{-1}_k(\xi, \hat\xi)$ is
relatively compact in
$$
(S\times S)^{opp}= S \times S - \mathrm{Diag}(S\times S).
$$
\end{lem}
\subsection{The expansion property}\label{sec:expansion}
The notion of conical convergence $g_k \stackrel{con}{\longrightarrow} \xi$ is closely related to the concept of {\em expansivity}. We will use the expansivity concepts discussed in Appendix \ref{sec:expanding_actions}
for actions on $S$ equipped with an arbitrary Riemannian metric. The choice of the metric will not be important.
\begin{prop}\label{prop:con-exp}
Suppose that $g_k\to \xi\in S$, $g_k\in G$. Then the conical convergence
$$
g_k \stackrel{con}{\longrightarrow} \xi
$$
implies that the sequence $(g_k^{-1})$ has diverging
infinitesimal
expansion at $\xi$.
\end{prop}
\subsection{Conical limit points}
We return to discussing discrete subgroups $\Gamma< G$.
\begin{definition}
[Conical limit points of $\Gamma$] A limit point $\xi$ of $\Gamma$ is {\em conical} if there exists
a sequence $\gamma_k\in \Gamma$ which converges to $\xi$ conically. The set of conical limit points of $\Gamma$ is denoted
$\Lambda_c(\Gamma)$.
\end{definition}
In view of Lemma \ref{lem:eq-con},
one has the following characterization of conical limit points in terms of the dynamics at infinity:
\begin{lem}
Suppose that the limit set of $\Gamma$ consists of at least two points. Then the following are equivalent:
1. $\xi\in \Lambda(\Gamma)$ is a conical limit point of $\Gamma$.
2. There exists a sequence $\gamma_k\to\xi$ in $\Gamma$ such that
for some
$\hat\xi\in \Lambda(\Gamma) - \{\xi\}$
the sequence of pairs
$\gamma_k^{-1}(\xi,\hat\xi)$ converges to a pair of distinct points.
\end{lem}
\medskip
The situation when {\em all limit points are conical}
can also be characterized in terms of the action on the space $T\Lambda$ of triples of distinct points:
\begin{theorem}
[See e.g. \cite{Bowditch_config}] \label{thm:traction}
Suppose that $\Gamma$ is nonelementary. Then all limit points are conical iff the action of $\Gamma$ on the triple space $T\Lambda$ is cocompact.
\end{theorem}
The triple space is an intrinsic replacement for the convex hull of the limit set in the symmetric space,
and the theorem provides one of the characterizations of {\em convex cocompactness}
to be discussed in the next section,
compare Theorem~\ref{thm:coco-GF2} below.
\subsection{Geometrically finite groups}\label{sec:GFG}
The notion of geometric finiteness played a critical role in the development of the theory of Kleinian groups. It was
originally introduced by Ahlfors, who defined geometric finiteness in terms of fundamental polyhedra. Subsequent equivalent definitions were established by Marden, Beardon and Maskit, Thurston, Sullivan and others.
In this section we will give a list of equivalent definitions of convex-cocompactness in the rank 1 setting (equivalently, geometric finiteness without parabolic elements). In what follows, we will only consider discrete subgroups $\Gamma < G$ of rank 1 Lie groups which contain no parabolic elements: These definitions require modifications if one allows parabolic elements, we refer the reader to \cite{Bowditch93, Bowditch_gf, Ratcliffe} for more details.
\subsubsection{Finitely sided fundamental domains}
\begin{definition}
[L.~Ahlfors]
$\Gamma<G$ is CC0 if for some point $o\in X$ not fixed by any nontrivial element of $\Gamma$ the associated
{\em Dirichlet fundamental domain} $D_o$ of $\Gamma$,
$$
D_o=\{x\in X: \forall \gamma\in \Gamma, d(x, o)\le d(x, \gamma o)\},
$$
is {\em finite-sided}. The latter means that only finitely many ``half-spaces''
$$
{\mathrm Bis}(\gamma o, o)= \{x\in X: d(x, o)\ge d(x, \gamma o)\}, \gamma\in \Gamma,
$$
have nonempty intersection with $D_o$.
\end{definition}
This definition was proposed by L.~Ahlfors in \cite{Ahlfors}; it was historically the first definition of geometric finiteness
and the main one, until Thurston's work \cite{Thurston}.
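To see how this works in the simplest case, let $\Gamma=\langle \gamma \rangle$ with $\gamma: z\mapsto \lambda z$, $\lambda>1$, acting on the upper half-plane ${\mathbb H}^2$, and take $o=i$. The reflection $z\mapsto \lambda^{n}/\bar z$ fixes the semicircle $|z|=\lambda^{n/2}$ and swaps $o$ and $\gamma^n o = \lambda^n i$, so ${\mathrm Bis}(\gamma^n o, o)$ is the geodesic $|z|=\lambda^{n/2}$. Hence
$$
D_o=\{z\in {\mathbb H}^2: \lambda^{-1/2}\le |z| \le \lambda^{1/2}\},
$$
and only the two half-spaces corresponding to $\gamma^{\pm 1}$ meet $D_o$: the Dirichlet domain is finite-sided and $\Gamma$ is CC0.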
\subsubsection{Convex cocompactness}
\begin{definition}
[Convex cocompact subgroups]\label{def:cc1}
$\Gamma<G$ is CC1 ({\em convex cocompact}) if there exists a nonempty $\Gamma$-invariant closed convex subset $C\subset X$ such that $C/\Gamma$ is compact.
\end{definition}
This definition explains the terminology ``convex cocompact'' since it is stated in terms of cocompactness of the
$\Gamma$-action on a certain convex subset of $X$.
There is a unique smallest nonempty $\Gamma$-invariant closed convex subset
if $|\Lambda(\Gamma)|\ge 2$,
namely the {\em convex hull} $C_\Gamma$ of $\Lambda(\Gamma)$, which is the closed convex hull of the union of all
geodesics connecting limit points of $\Gamma$, see e.g. \cite{Bowditch_gf}.\footnote{A convex subset $C\subset X$
as in Definition~\ref{def:cc1} contains $\Gamma$-orbits.
Hence $\Lambda(\Gamma)\subseteq\partial_{\infty} C$, and therefore $C_\Gamma\subseteq C$.}
Hence, to verify CC1, one needs to test only $C_\Gamma$:
\begin{lemma}
Assume that $|\Lambda(\Gamma)|\ge 2$. Then $\Gamma$ is convex cocompact iff $C_\Gamma/\Gamma$ is compact.
\end{lemma}
\medskip
Definitions CC0 and CC1 do not appear to be particularly useful in higher rank; below we present definitions which,
except for CC8,
do generalize to higher rank (after suitable modifications).
\subsubsection{Beardon--Maskit condition: Dynamics on the limit set}
The next definition is motivated by the work of Beardon and Maskit \cite{Beardon-Maskit}
who characterized the discrete subgroups of $PSL(2,{\mathbb C})$ satisfying Ahlfors' CC0 condition
in terms of their dynamics on the limit set.
\begin{definition}[A.~Beardon, B.~Maskit]
$\Gamma<G$ is CC2 if each limit point of $\Gamma$ is conical.
\end{definition}
Theorem \ref{thm:traction} can be reformulated as:
\begin{thm}\label{thm:coco-GF2}
A nonelementary group $\Gamma$ is CC2 iff $\Gamma$ acts cocompactly on $T\Lambda(\Gamma)$.
\end{thm}
\begin{rem}
In the presence of parabolics one requires that each limit point is either conical or a ``bounded'' parabolic fixed point (A. Beardon, B. Maskit, B. Bowditch, see \cite{Beardon-Maskit}, \cite{Bowditch_gf}; cf. also \cite{Bishop}).
\end{rem}
\medskip
Note that the condition CC2 {\em a priori} does not even imply finite generation of $\Gamma$.
\subsubsection{Asymptotically embedded groups}
Recall that each word hyperbolic group $\Gamma$ has a {\em Gromov boundary}
$\partial_{\infty} \Gamma$, which is a metrizable compact space on which $\Gamma$ acts via homeomorphisms. (One constructs this boundary by looking at equivalence classes of geodesic rays in the Cayley graph of $\Gamma$ or via horofunctions,
see \cite{CP}.)
\begin{definition} [Asymptotically embedded]
$\Gamma< G$ is CC3 ({\em asymptotically embedded}) if it is Gromov-hyperbolic and
$\partial_{\infty} \Gamma$ is equivariantly homeomorphic to $\Lambda(\Gamma)$.
\end{definition}
Equivalently:
\begin{definition} [Boundary embedded]
$\Gamma< G$ is {\em boundary embedded} if it is Gromov-hyperbolic and
there exists an equivariant topological embedding $\beta: \partial_{\infty} \Gamma \to \partial_{\infty} X$.
\end{definition}
The equivalence of CC3 and {\em boundary embedded} is easy to see using again the convergence property;
it is also easy to see that $\beta(\partial_{\infty} \Gamma)=\Lambda(\Gamma)$.
\subsubsection{Coarse geometric definitions}
\medskip
The next definition involves the {\em coarse geometry} of discrete subgroups:
\begin{definition}
$\Gamma< G$ is CC4 if it is finitely generated and {\em undistorted} in $G$.
\end{definition}
Here $\Gamma< G$ is {\em undistorted} if the word metric on $\Gamma$ is comparable to the extrinsic metric coming from $G$. Equivalently, one (equivalently, each) orbit map $\Gamma \to \Gamma x\subset X$ is a QI (quasiisometric) embedding of $\Gamma$ into $X$.
\medskip
A minor variation on this definition (which will become major in higher rank) is:
\begin{definition}
A discrete subgroup $\Gamma< G$ is {\em Morse}, or {\em satisfies the Morse property}, if $\Gamma$ is
word hyperbolic and each discrete geodesic in $\Gamma$ maps (via the orbit map) to a
discrete path in $X$ uniformly close to a geodesic.
\end{definition}
Note that this definition does not a priori assume undistortion of $\Gamma$ in $G$.
\medskip
The implication CC4 $\Rightarrow$ Morse follows immediately from the Morse Lemma. For the converse implication one observes that images of discrete geodesics in $\Gamma$ under the orbit map are contained in uniform neighborhoods of geodesics in $X$ and have bounded backtracking.
\subsubsection{Quasiconvexity and coarse retractions}
A subset $Y\subset X$ is called {\em quasiconvex} if there exists a constant $R$ such that for any pair of points $x, y\in Y$ the (unique) geodesic $xy$ between $x$ and $y$ is contained in $N_R(Y)$,
the $R$-neighborhood of $Y$ in $X$. Each convex subset is, of course, also quasiconvex. While the opposite implication is false, it follows from the work of M.~Anderson that each quasiconvex subset $Y\subset X$ is within finite Hausdorff distance from its convex hull in $X$ (see \cite[Proposition 2.3.4]{Bowditch_gf}).
\begin{definition}
[Quasiconvex subgroups]
$\Gamma < G$ satisfies CC5 if it is {\em quasiconvex}, i.e., one (equivalently, every) orbit $\Gamma x\subset X$ is a quasiconvex subset.
\end{definition}
For each nonempty closed convex subset $C\subset X$ the nearest point projection $X\to C$ is $1$-Lipschitz, i.e., is distance non-increasing. Similarly, if $Y$ is a quasiconvex subset of a geodesic Gromov-hyperbolic space, then there exists an $(L,A)$ coarse Lipschitz retraction $X\to Y$, which can be viewed as a coarsification of the nearest point projection. (A nearest point in $Y$ may not exist; instead, one projects $x\in X$ to a point $y\in Y$ such that $d(x,y)\le d(x,y')+1$ for all $y'\in Y$.) Here a map $f: X\to Y$ between metric spaces is $(L,A)$ coarse Lipschitz if
$$
d(f(x_1), f(x_2))\le L d(x_1, x_2) + A, \forall x_1, x_2\in X.
$$
A subset $Z\subset X$ is called a
{\em coarse retract} if there exists a coarse Lipschitz retraction $X\to Z$.
\begin{definition}
[Coarse retract]
\label{defn:retract}
A finitely generated subgroup $\Gamma < G$ is a {\em coarse retract} if for one (equivalently, every) $x\in X$ there exists a coarse Lipschitz map $r: X\to \Gamma$ such that the composition
$$
\gamma \mapsto \gamma x \stackrel{r}{\mapsto} \gamma'\in \Gamma,
$$
is within finite distance from the identity map. Here we equip $\Gamma$ with a word metric.
\end{definition}
\begin{rem}
This definition makes sense, of course, not only for negatively curved symmetric spaces but for all nonpositively curved symmetric spaces $X$, where $G$ is the identity component of the isometry group of $X$.
\end{rem}
In view of the Morse Lemma and the coarse Lipschitz property of nearest point projections to quasiconvex subsets of $X$, one obtains:
\begin{thm}\label{thm:CC4=CC5}
A finitely generated discrete subgroup $\Gamma< G$ is undistorted iff it is quasiconvex iff it is a coarse retract.
\end{thm}
\subsubsection{Expanding actions}
We refer the reader to Appendix \ref{sec:expanding_actions} for definitions of
metric expansion and infinitesimal expansion.
\begin{definition}
[Expanding subgroups, D.~Sullivan, \cite{Sullivan}] A discrete subgroup $\Gamma< G$ is CC6 ({\em expanding})
if for each $\xi\in \Lambda(\Gamma)$ there exists $\gamma\in \Gamma$ which is metrically expanding on $S$ at $\xi$.
\end{definition}
Below are two variations on the expansion axiom:
\begin{theorem}\label{thm:CC2=CC6}
The following are equivalent:
1. $\Gamma$ is {\em infinitesimally expanding} at $\Lambda(\Gamma)$: For each $\xi\in \Lambda(\Gamma)$ there exists $\gamma\in \Gamma$
which is infinitesimally expanding at $\xi\in S$.
2. $\Gamma< G$ is CC6 (expanding).
3. $\Gamma$ is nonelementary and the action of $\Gamma$ is metrically
expanding on $\Lambda(\Gamma)$ (i.e., it suffices to check the expansion of distances only between limit points).
4. The group $\Gamma$ is CC2.
\end{theorem}
\par\medskip\noindent{\it Proof. } It is clear that $1\Rightarrow 2 \Rightarrow 3$. The implication $3\Rightarrow 4$ is proven in Theorem \ref{thm:conical} in the Appendix \ref{app:congru}. Lastly, the implication $4\Rightarrow 1$ follows from {\em extrinsic} conicality of the limit points of $\Gamma$ (Lemma \ref{lem:eq-con}) and Proposition \ref{prop:con-exp}. \qed
\medskip
The advantage of CC6 and its variations is that they make sense for general topological/smooth dynamical systems and, hence, are easy to extend to higher rank.
\subsubsection{Natural compactification of locally symmetric space}
Our next definition is formulated in terms of existence of a natural compactification of the locally symmetric space
$X/\Gamma$:
\begin{definition}
[A.~Marden, \cite{Marden}] $\Gamma$ is CC7 if the space $(X\cup \Omega(\Gamma))/\Gamma$ is compact.
\end{definition}
This definition first appeared in Marden's paper \cite{Marden}, where he proved its equivalence to CC0 in the case
of $X={\mathbb H}^3$.
\begin{figure}[tbh]
\centerline{\epsfxsize=4.5in \epsfbox{fig1.pdf}}
\caption{Quotient space of a geometrically finite group.}
\label{figure1.fig}
\end{figure}
\subsubsection{Finiteness of volume}
The last definition states geometric finiteness in terms of the volume of the quotient space:
\begin{definition}
[W.~Thurston; B.~Bowditch] A discrete subgroup $\Gamma< G$ is CC8 if either $|\Lambda(\Gamma)|\le 1$ or $|\Lambda(\Gamma)|\ge 2$ and:
1. The orders of the torsion elements of $\Gamma$ are bounded.
2. For some (every) $\epsilon>0$ the quotient $N_\epsilon(C_\Gamma)/\Gamma$ has finite volume.
\end{definition}
Here $C_\Gamma$ is, as before, the closed convex hull of the limit set of $\Gamma$ and $N_\epsilon$ is the $\epsilon$-neighborhood of $C_\Gamma$ in $X$.
\begin{rem}
This definition is mostly due to W.~Thurston \cite{Thurston}, who stated it for isometry groups of the hyperbolic 3-space
without the extra conditions on torsion elements. The latter assumption was added by B.~Bowditch in the general setting. The restriction on orders of torsion elements is essential, unless $X$ is the (real) hyperbolic space of dimension $\le 3$ (E.~Hamilton, \cite{Hamilton}).
\end{rem}
\subsubsection{An equivalence theorem}
The following is a combination of work of many people:
\begin{theorem}\label{thm:main1}
For discrete isometry groups of rank 1 symmetric spaces (without parabolic elements),
all the conditions CC1--CC8 are equivalent.
\end{theorem}
\par\medskip\noindent{\it Proof. }
The equivalence of conditions CC1, CC2, CC7, CC8 is in Bowditch's paper \cite{Bowditch_gf}; note that Bowditch proved this result for discrete isometry groups of negatively pinched Riemannian manifolds, not just symmetric spaces of negative curvature. The equivalence of CC2 and CC6 is Theorem \ref{thm:CC2=CC6}. The equivalence of
CC4 and CC5 is Theorem \ref{thm:CC4=CC5}. If $\Gamma$ is CC5, then the convex hull of $\Gamma x\subset X$ is Hausdorff-close to $\Gamma x$; hence, $\Gamma$ is CC1. If $\Gamma$ is CC1 then, taking $x\in C$ (as in the definition of CC1) and taking into account compactness of $C/\Gamma$, we conclude that $\Gamma$ is CC5.
Assume that $\Gamma$ is asymptotically embedded (CC3). Then $\Gamma$ is Gromov-hyperbolic and every $\xi\in \partial_{\infty} \Gamma$ is a conical limit point, see \cite{Tukia1994}. Hence, $\Gamma$ is CC2. Assume that $\Gamma$ is convex cocompact (CC1) and acts cocompactly on the closed convex subset $C=C_\Gamma\subset X$, the convex hull of the limit set of $\Gamma$. Then $C$ is a Gromov-hyperbolic geodesic metric space quasiisometric to $\Gamma$. Hence $\Gamma$ is Gromov-hyperbolic; the ideal boundary of $\Gamma$ is naturally homeomorphic to the ideal boundary of $C$, i.e.\ the limit set of
$\Gamma$. Hence, $\Gamma$ is asymptotically embedded. \qed
{
\begin{rem}
The equivalence of CC0 and CC1 in the case of the real hyperbolic spaces is proven in \cite{Bowditch93} and \cite[Theorem 12.4.5]{Ratcliffe}. Their proofs rely upon convexity of Dirichlet domains. While Dirichlet domains
for general negatively curved symmetric spaces are not convex, they are {\em quasiconvex} which can be used to extend
the arguments of \cite{Bowditch93} and \cite[Theorem 12.4.5]{Ratcliffe} to this more general setting. \end{rem}
}
\subsection{Consequences of geometric finiteness}\label{sec:corGF}
The first consequence of geometric finiteness is immediate:
\begin{corollary}
[C1] For a convex cocompact subgroup $\Gamma< G$, the quotient $\Omega(\Gamma)/\Gamma$ is compact.
\end{corollary}
The next theorem, known as the {\em structural stability property}, was first proven by D.~Sullivan
\cite{Sullivan} using methods of symbolic dynamics, and later by C.~Yue \cite{Yue} using Anosov flows.
\begin{thm}
[C2] Convex cocompactness implies structural stability: If $\Gamma< G$ is convex cocompact then any homomorphism $\rho: \Gamma\to G$ close to $id: \Gamma\hookrightarrow G$ is injective and $\rho(\Gamma)< G$ is a
convex cocompact subgroup which is topologically conjugate to $\Gamma$ on the limit set: There exists a $\rho$-equivariant homeomorphism
$$
h_\rho: \Lambda(\Gamma)\to \Lambda(\rho(\Gamma)).
$$
\end{thm}
Moreover:
\begin{thm}
[C3] In the context of C2:
a. There exists a $\rho$-equivariant topological conjugation $f_\rho: \overline {X}\to \overline {X}$,
which is smooth away from the limit set.
b. If a sequence of representations $\rho_i$ converges to the identity representation, then the maps $f_{\rho_i}$ can be chosen so that
$$
\lim_{i\to\infty} f_{\rho_i} =id.$$
Here convergence is uniform on $\overline {X}$ and $C^\infty$-uniform on compacts
in the complement to the limit set.
\end{thm}
This stronger stability theorem is a result of combined efforts of many people, see \cite{Bowditch-stab, Izeki}.\footnote{Bowditch and Izeki only consider the case of the real-hyperbolic space but the proofs go through for other negatively curved symmetric spaces as well.}
\begin{thm}
[C4] Convex cocompactness is {\em semidecidable}.
\end{thm}
Recall that an algorithmic problem is {\em semidecidable} if there is an algorithm which answers
YES in finite time if and only if the answer is positive (and runs forever if the answer is negative).
Since we are dealing with computations over the reals, one has to specify the computability model: Here and below we use the BSS (Blum--Shub--Smale) model, also known as the Real RAM model. See \cite{BCSS} for the details.
\medskip
There are two ways to interpret the semidecidability of convex cocompactness.
\begin{thm}
\cite{morse}. Suppose that $\Gamma$ is a word hyperbolic group defined in terms of a finite presentation. Then there is an algorithm which, given a representation $\rho: \Gamma\to G$ (defined in terms of the images of the generators) will terminate with the positive answer if and only if $\rho$ has finite kernel and the image $\rho(\Gamma)$ is convex cocompact.
\end{thm}
The first written proof of this theorem seems to be in \cite{morse} (in the context of Morse actions of hyperbolic groups on higher rank symmetric spaces), although some special cases of this theorem might have been known earlier.
One drawback of the above semidecidability theorem is that we are required to know in advance which hyperbolic group is being represented. The following theorem is limited (for various reasons) to hyperbolic 3-space, but does not require a priori knowledge of the algebraic structure of $\Gamma$;
the algorithm appears to have been first discussed (and implemented) by R.~Riley, see \cite{Riley}; see also a paper by J.~Gilman \cite{Gilman2} and one by J.~Manning \cite{Manning}.
\begin{thm}
Geometric finiteness\footnote{Here we allow parabolic elements.} is semidecidable for subgroups of
the isometry group of hyperbolic 3-space $G=Isom({\mathbb H}^3)$.
\end{thm}
\par\medskip\noindent{\it Proof. } The proof is in the form of a {\em ``Poincar\'e'' algorithm} for constructing a finite sided Dirichlet domain for discrete subgroups of $G$.
The input for the (semi)algorithm is a tuple $(g_1,...,g_n)$ of elements of $G$. It attempts to construct a finite sided Dirichlet fundamental domain of the group $\Gamma$ generated by $g_1,...,g_n$ by computing, inductively, intersections $I_k$ in ${\mathbb H}^3$ of half-spaces bounded by bisectors of pairs $o, w_i(o)$, where the $w_i$ are reduced words in $g_1^{\pm 1},...,g_n^{\pm 1}$,
$$
I_k= \bigcap_{i=1}^k {\mathrm Bis}(o, w_i o),
$$
where
$$
{\mathrm Bis}(o, w o)= \{x\in {\mathbb H}^3: d(o, x)\le d(x, w o)\}.
$$
See Figure \ref{figure2.fig}. (There is a separate issue of making sure that $o\in {\mathbb H}^3$ is not fixed by a nontrivial element of $\Gamma$; we will not address this problem here.) The sequence $(w_i)$ is chosen to exhaust the free group on the generating set $g_1,...,g_n$. After constructing $I_k$ (by solving a system of linear inequalities in the Lorentzian space ${\mathbb R}^{3,1}$), the algorithm
checks if the conditions of Poincar\'e's fundamental domain theorem (see \cite{Maskit, Ratcliffe}) are satisfied by $I_k$. If they are satisfied for some $k$, then $\Gamma$ admits a finite sided Dirichlet domain, namely, $I_k$. If $\Gamma$ is geometrically finite, then this algorithm terminates (for any choice of base point). If $\Gamma$ is not geometrically finite, this algorithm will run forever. \qed
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig2.pdf}
\caption{Constructing a Dirichlet fundamental domain.}
\label{figure2.fig}
\end{figure}
\medskip
Note that if $\Gamma$ is geometrically finite, one can read off a finite presentation of $\Gamma$ from
the fundamental domain.
\begin{rem}
Note that for 2-generator subgroups of $PSL(2, {\mathbb R})$ there are (more efficient) alternatives to Riley's algorithm, due to J.~Gilman and B.~Maskit, see \cite{Gilman1, Gilman-Maskit} and also \cite{Gilman2} for comparison of the computational complexities.
\end{rem}
\medskip
As an aside, we discuss the question of decidability of discreteness for finitely generated subgroups of connected
Lie groups. For instance, one can ask if
a representation $F_n\to G$ (where $G$ is a connected algebraic Lie group) has discrete image.
{
First of all, one has to eliminate certain classes of representations, otherwise the discreteness problem
is undecidable already for $n=1$ and $G=U(1)$, cf. \cite{Kapovich2015}.
\begin{definition}
Let ${\mathcal E}\subset Hom(F_n, G)$ denote the subset consisting of representations $\rho$ such that
$\rho(F_n)$ contains a nontrivial normal nilpotent subgroup.
\end{definition}
For instance, in the case $G=PSL(2,{\mathbb C})$, a representation $\rho$ belongs to ${\mathcal E}$ if and only if the group
$\Gamma= \rho(F_n)$ either has a fixed point in ${\mathbb C} P^1$ or preserves a 2-point subset of ${\mathbb C} P^1$. Algebraically (for subgroups of $PSL(2,{\mathbb C})$), this is equivalent to the condition that $\Gamma$ is solvable.
Secondly, one has to specify the {\em computability model}; as before, we use the BSS (Blum--Shub--Smale) computability model. Restricting to representations in $Hom(F_n, G) \setminus {\mathcal E}$, one obtains the following
folklore theorem, see Gilman's papers \cite{Gilman1, Gilman2} in the case $G=PSL(2,{\mathbb C})$:
\begin{thm}
For a connected algebraic Lie group $G$, it is semidecidable whether a representation
$\rho\in Hom(F_n, G) \setminus {\mathcal E}$ is nondiscrete.
\end{thm}
\par\medskip\noindent{\it Proof. } The key to the proof is a theorem of Zassenhaus, see e.g. \cite{Raghunathan, Kapovich00}, where
we regard $G$ as a real algebraic subgroup of $GL(k, {\mathbb C})$ for some $k$, which we equip with the standard operator norm:
\begin{theorem}
[H. Zassenhaus] There exists a (computable) number $\epsilon$ such that the neighborhood $U= G\cap B(1, \epsilon)$ (called a {\em Zassenhaus neighborhood} of $1$ in $G$) satisfies the following property: Whenever $\Gamma< G$ is
a subgroup, the subgroup $\Gamma_U< \Gamma$ generated by $\Gamma\cap U$ is either nondiscrete or nilpotent.
\end{theorem}
Suppose that $\Gamma< G$ is nondiscrete and let $\overline{\Gamma}^0$ denote the identity component of $\overline{\Gamma}$, the closure of $\Gamma$ in $G$ (with respect to the standard matrix topology). Then
$\overline{\Gamma}^0$ is a normal subgroup of $\overline{\Gamma}$ of positive dimension. Therefore, the intersection
$\overline{\Gamma}^0\cap U$ is nondiscrete and, hence, the subgroup $\Gamma_U$ is nondiscrete as well.
There are two cases which may occur:
1. The subgroup $N=\overline{\Gamma}^0$ is nilpotent. Since $N$ is a Lie subgroup of $G$, there exists a neighborhood $V\subset U$ of $1\in G$ such that $\Gamma \cap V$ is contained in $N$. In particular, $\Gamma$ contains a nontrivial normal nilpotent subgroup, namely $\Gamma\cap N$. This cannot happen if $\Gamma=\rho(F_n)$,
$\rho\in Hom(F_n, G)\setminus {\mathcal E}$.
2. The subgroup $N=\overline{\Gamma}^0$ is not nilpotent. Note that, by the Lie--Kolchin theorem,
every connected solvable (in particular, every connected nilpotent) subgroup of $GL(k, {\mathbb C})$ is conjugate to a subgroup of the group of upper triangular matrices.
In particular, a connected Lie subgroup of $G$ is nilpotent if and only if it is (at most) $(k-1)$-step nilpotent.
Thus, in our case, there exist elements $g_1,...,g_{k}\in N\cap U$
such that the $k$-fold iterated commutator
$$
[... [[g_1, g_2], g_3],...,g_k]
$$
is not equal to $1\in G$. By continuity of the $k$-fold commutator map, there exist
$\gamma_1,..., \gamma_{k}\in \Gamma\cap U$ such that
$$
[... [[\gamma_1, \gamma_2], \gamma_3],..., \gamma_k] \ne 1.
$$
We now describe our (semi)algorithm: We enumerate $k$-tuples of elements
$(x_1,..., x_k)\in F_n\times ... \times F_n$ and, given $\rho\in Hom(F_n, G)\setminus {\mathcal E}$,
look for the tuples such that
$$
\gamma_i=\rho(x_i), i=1,...,k
$$
satisfy $\gamma_i\in U, i=1,...,k$ and
$$
[... [[\gamma_1, \gamma_2], \gamma_3],..., \gamma_k] \ne 1.
$$
If $\Gamma=\rho(F_n)$ is nondiscrete then we eventually find such a tuple thereby verifying nondiscreteness of $\Gamma$. \qed
}
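The search loop of this semialgorithm can be illustrated on a toy instance: a 2-generated nondiscrete subgroup of $PSL(2,{\mathbb R})$ with one elliptic generator of irrational rotation angle. The generators, the word-length bound, and the constant `EPS` below are our ad hoc choices; in particular, `EPS` is only a stand-in for a genuine computable Zassenhaus constant.

```python
import numpy as np

# Toy run of the nondiscreteness semi-algorithm in G = PSL(2, R).
theta = 2 * np.pi / 5 + 0.01          # irrational rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
h = np.diag([2.0, 0.5])               # hyperbolic element
gens = {0: R, 1: np.linalg.inv(R), 2: h, 3: np.linalg.inv(h)}
inverse = {0: 1, 1: 0, 2: 3, 3: 2}
EPS = 0.25                            # ad hoc stand-in for a Zassenhaus constant

def in_U(m):  # neighborhood U = B(1, EPS) in the operator norm
    return np.linalg.norm(m - np.eye(2), 2) < EPS

def nontrivial(m):  # nontrivial in PSL(2, R), i.e. m != +-I
    return min(np.linalg.norm(m - np.eye(2)),
               np.linalg.norm(m + np.eye(2))) > 1e-6

# enumerate reduced words of length <= 7, keep those landing in U
small = []
words = [((i,), gens[i]) for i in range(4)]
for _ in range(6):
    words = [(w + (i,), m @ gens[i])
             for w, m in words for i in range(4) if i != inverse[w[-1]]]
    small += [m for _, m in words if in_U(m)]

# for k = 2 a single noncommuting pair in U certifies nondiscreteness
found = any(nontrivial(a @ b @ np.linalg.inv(a) @ np.linalg.inv(b))
            for a in small for b in small)
assert found  # e.g. R^5 and h R^5 h^{-1} both lie in U and do not commute
```

An honest implementation would require the actual computable Zassenhaus constant and exact rather than floating-point arithmetic; the sketch only illustrates the structure of the enumeration.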
\medskip
In the case $G=PSL(2,{\mathbb R})$, a finitely generated subgroup of $G$ is discrete if and only if it is geometrically finite. Therefore, one can use Riley's algorithm in combination with the nondiscreteness algorithm to determine whether an $n$-generated nonsolvable subgroup of $PSL(2,{\mathbb R})$ is discrete. Hence, discreteness is decidable in $PSL(2,{\mathbb R})$. On the other hand:
\begin{thm}
\cite{Kapovich2015}. Being discrete is undecidable for nonsolvable 2-generated subgroups in $PSL(2,{\mathbb C})$.
\end{thm}
\bigskip
\section{Geometry of symmetric spaces of noncompact type}\label{sec:2}
\subsection{Basic geometry}\label{sec:Basicgeometry}
We refer to \cite{Eberlein, BGS} and \cite{Helgason} for a detailed treatment of symmetric spaces.
From now on, $X$ is a symmetric space of noncompact type: It is a nonpositively curved symmetric space without euclidean factor, $G=Isom_o(X)$ is the identity component of the full isometry group of $X$. We will use the notation $xy$ for (oriented) geodesic segments in $X$ connecting $x$ to $y$. Recall that each symmetric space admits a {\em Cartan involution} $s_x$ about every point $x\in X$; such $s_x$ fixes $x$ and acts as $-id$ on the tangent space $T_xX$.
Then $X\cong G/K$, where $K<G$ is a maximal compact subgroup (the stabilizer in $G$ of a basepoint $o$ in $X$); $G$ is a {\em semisimple} real Lie group; the Riemannian metric on $X$ is essentially uniquely determined by $G$ (up to rescaling for each simple factor of $G$). An important example is
$$
G=PSL(n,{\mathbb R}), K=PSO(n);
$$
the symmetric space $X=G/K$ can in this case be identified with the projectivized space of positive definite bilinear forms on ${\mathbb R}^n$.
\begin{rem}
For our examples we will frequently use $SL(n)$ instead of $PSL(n)$. The difference is immaterial for our purposes: the group $SL(n)$ acts on the associated symmetric space with finite kernel, namely its center.
\end{rem}
A symmetric space $X$ is {\em reducible} if it metrically splits as a product $X_1\times X_2$.
Each symmetric space $X$ of noncompact type admits a canonical (up to permutation of factors) product decomposition
$$
X= X_1\times \ldots \times X_n
$$
into irreducible symmetric spaces.
\medskip
{\bf Classification of isometries.} For each $g\in G$, as in rank 1, we consider its convex displacement function
$$
d_g(x)=d(x, gx).
$$
\begin{definition}
The isometries of $X$ are classified as follows:
1. An isometry $g$ of $X$ is {\em axial} or {\em hyperbolic} if $\inf_{x\in X}d_g>0$ and the infimum is realized.
In this case, there exists a $g$-invariant geodesic in $X$, an {\em axis of $g$},
along which $g$ translates.\footnote{In general, this axis is not unique, but all axes of $g$ are parallel.}
The union of axes is the minimum set of the convex function $d_g$.
2. $g$ is {\em mixed} if $\inf_{x\in X}d_g>0$ but the infimum is not realized.
3. $g$ is {\em parabolic} if $\inf_{x\in X}d_g=0$ but the infimum is not realized.
4. $g$ is {\em elliptic} if $\inf_{x\in X}d_g=0$ and the infimum is realized. Equivalently, $g$ has a fixed point in $X$.
\end{definition}
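To make the trichotomy concrete, here is a minimal sketch (the matrices are our illustrative choices) in the familiar case $G=PSL(2,{\mathbb R})$ acting on ${\mathbb H}^2$, together with a mixed isometry in a product:

```latex
% In PSL(2,R) acting on H^2 (rank 1, so mixed isometries do not occur):
\begin{pmatrix} e^{t/2} & 0 \\ 0 & e^{-t/2} \end{pmatrix}\ (t\ne 0):
  \text{ axial, } \inf d_g = t \text{ realized along the axis } \{x=0\};
\qquad
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}:
  \text{ parabolic, } \inf d_g = 0 \text{ not realized};
\qquad
\begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}:
  \text{ elliptic, fixes } i.
% In a product X = H^2 x H^2, the isometry g = (g_1, g_2) with g_1 parabolic
% and g_2 axial of translation length t is mixed:
% d_g = (d_{g_1}^2 + d_{g_2}^2)^{1/2}, so inf d_g = t > 0 is approached
% but never attained.
```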
\medskip
An axial isometry $g$ is a {\em transvection} if it preserves parallel vector fields along one and hence any
axis. Equivalently, $g$ is a product of two different Cartan involutions.
A parabolic isometry $g$ is {\em unipotent}
if the closure of its conjugacy class contains the neutral element,
i.e.\ if there is a sequence $h_k\in G$ such that
$$
\lim_{k\to\infty} h_k g h_k^{-1}=e\in G.
$$
\medskip
\begin{definition}
A {\em flat} in $X$ is a (totally geodesic) isometrically embedded euclidean subspace in $X$. A {\em maximal flat} is a flat in $X$ which is not properly contained in a larger flat.
\end{definition}
\begin{ff}
All maximal flats in $X$ are $G$-congruent.
\end{ff}
\begin{defn}
$r=\mathop{\hbox{rank}}(X)$ is the dimension of a maximal flat.
\end{defn}
Note that $r=\mathop{\hbox{rank}}_{{\mathbb R}}(G)$.
\begin{ff}
[Cartan decomposition]
$$G=K A_+ K ,$$
where $A < G$ is a {\em Cartan subgroup} (equivalently, a maximal abelian group of transvections, equivalently, a maximal ${\mathbb R}$-split torus), and $A_+\subset A$ is a certain sharp closed convex cone with tip at $e$ (a subsemigroup).
\end{ff}
More precisely,
the unique maximal flat $F\subset X$ preserved by $A$ contains the fixed point $o\in X$ of $K$.
The cone $V=A_+o\subset F$ is a {\em euclidean Weyl chamber}
with tip at $o$.
The Cartan decomposition corresponds to the fact that every $K$-orbit in $X$ intersects $V$ in precisely one point.
\begin{example}
\label{ex:cartandeco}
For $G=SL(n,{\mathbb R})$ and $K=SO(n)$,
the Cartan subgroup $A< G$ can be chosen as the subgroup of diagonal matrices with positive entries,
and $A_+\subset A$ as the subset of diagonal matrices $a=diag(a_1,...,a_n)$
with decreasing diagonal entries:
$$a_1\ge a_2 \ge \ldots \ge a_n>0.$$
The Cartan decomposition in this case is also known as the {\em singular value decomposition} of a matrix: $g= uav$ with $u, v\in SO(n)$ and $a\in A_+$. The diagonal entries $a_1,...,a_n$ of the matrix $a$ are known as the {\em singular values} of the matrix $g$.
\end{example}
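Numerically, the Cartan decomposition is just the SVD; a minimal sketch with numpy (the sample matrix is our choice), adjusting signs so that the orthogonal factors land in $SO(3)$ rather than $O(3)$:

```python
import numpy as np

# Cartan (= singular value) decomposition g = u a v in SL(3, R)
g = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 1.0]])       # det g = 1

u, s, vt = np.linalg.svd(g)           # g = u @ diag(s) @ vt, s decreasing
if np.linalg.det(u) < 0:              # move u, vt from O(3) into SO(3)
    u[:, -1] *= -1
    vt[-1, :] *= -1
a = np.diag(s)                        # a in A_+: a_1 >= a_2 >= a_3 > 0

assert np.allclose(u @ a @ vt, g)                     # Cartan decomposition
assert s[0] >= s[1] >= s[2] > 0                       # singular values
assert np.allclose(u @ u.T, np.eye(3))                # u in SO(3)
assert abs(np.linalg.det(u) - 1) < 1e-9
```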
The $G$-stabilizer $G_F$ of a maximal flat $F\subset X$ acts on $F$ (in general unfaithfully); the image of the restriction homomorphism
$$
G_F\to \operatorname{Isom}(F)
$$
is a semidirect product
$$
W_{aff}= {\mathbb R}^r \rtimes W,
$$
where ${\mathbb R}^r\cong A$ is the full group of translations of $F$ and $W$ is a certain finite reflection group of isometries of $F$, called {\em the Weyl group} of $X$ (and of $G$). In view of the $G$-congruence of maximal flats, the action
$W\curvearrowright F$ is independent of the choices.
\begin{rem}
The subgroup ${\mathbb R}^r$ lifts to $G_F$ as the group of transvections (in $X$) along $F$.
In contrast, the subgroup $W$ does not, in general, lift to $G_F$.
\end{rem}
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig3.pdf}
\caption{Spherical and euclidean Weyl chambers.}
\label{figure3.fig}
\end{figure}
We pick a maximal flat $F\subset X$ through the base point $o$ and regard it as the {\em model} flat $F_{mod}$.
We will assume that $W$ fixes $o$ and denote by $\Delta=\Delta_+=\Delta_{mod}\subset F_{mod}$
a certain fundamental domain of $W$,
the {\em model euclidean Weyl chamber}
(see Figure \ref{figure3.fig}).
It is a complete cone
over a spherical simplex $\sigma_{mod}\subset \partial_{\infty} F_{mod}$, the {\em model spherical Weyl chamber}.
The tip of the cone $\Delta$ is the {origin} $o\in F_{mod}$.
The cone $A_+\subset A$ is the subsemigroup of transvections preserving the flat $F_{mod}$ and
mapping $\Delta$ into itself,
i.e.\ acting on $F_{mod}$ via translations
$$
x\mapsto x+v, \quad v\in \Delta.
$$
A {\em model Weyl sector} $V_{\tau_{mod}}$ is a face of $\Delta_{mod}$
which is the complete cone over a face $\tau_{mod}$ of $\si_{mod}$.
Euclidean Weyl chambers and Weyl sectors in $X$ are the isometric copies of the model euclidean Weyl chamber and the model Weyl sectors under $G$-congruences.
We will frequently identify $\si_{mod}$ with a spherical simplex in the unit sphere in $F_{mod}$ centered at $o$, the intersection of this unit sphere with $\Delta$.
\medskip
{\em The opposition involution} $\iota: \Delta\to \Delta$ (also known as {\em Chevalley} or {\em duality} involution) is defined as the composition
$$
\iota= w_0\circ (-id),
$$
where $w_0$ is the longest element of $W\curvearrowright F_{mod}$, the one sending the positive chamber in the model flat to the opposite one, and $-id$ is the antipodal map of $F_{mod}$ fixing $o$.
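For type $A_{n-1}$ (e.g.\ $G=SL(n,{\mathbb R})$) the opposition involution can be written down explicitly; a short sketch using the standard coordinates on the model flat:

```latex
% F_mod = { v in R^n : v_1 + ... + v_n = 0 },  Delta = { v_1 >= ... >= v_n },
% and w_0 is the coordinate-reversing permutation, hence
\iota(v_1,\dots,v_n) = (-v_n,\dots,-v_1).
% For n = 2 one gets iota = id. For SL(n) this reflects the fact that the
% singular values of g^{-1} are the reciprocals of those of g, in reverse order.
```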
For each pointed maximal flat $(F,x)$ in $X$
there are finitely many euclidean Weyl chambers $V\subset F$ with tip $x$,
and they tessellate $F$.
\begin{thm}
The following are equivalent:
1. The symmetric space $X$ is irreducible.
2. The action $W\curvearrowright {\mathbb R}^r$ is irreducible.
3. $G$ is a simple Lie group.
\end{thm}
\medskip
In the irreducible case, the Weyl groups $W$ are classified into $A$, $B=C$, $D$ (classical types) and $G_2, F_4, E_6, E_7, E_8$ (exceptional types). For instance, $SL_n$ has type $A_{n-1}$, $W\cong S_n$, the permutation group on $n$ symbols.
The group $Sp_n$ has type $C_n$ and its Weyl group is isomorphic to the semidirect product
${\mathbb Z}_2^n \rtimes S_n$ where $S_n$ acts on ${\mathbb Z}_2^n$ by permuting its basis elements.
\medskip
{\em Walls} in $F_{mod}$ are the fixed hyperplanes of reflections in $W_{aff}$. Walls in $X$ are the images of walls in $F_{mod}$ under elements of $G$.
\medskip
{\em Regular} (geodesic) segments in $X$ are the segments not contained in any wall. {\em Singular} segments are the segments contained in walls.
Equivalently: A geodesic segment $xy$ is regular iff it is contained in a unique maximal flat.
\medskip
Each oriented segment $xy$ in $X$ defines a vector $v$ in $\Delta$,
$$
v=d_\Delta(x,y),
$$
the {\em $\Delta$-valued distance from $x$ to $y$}.\footnote{The map $\mu: xy\mapsto v$ is also known as the {\em Cartan projection}, while the map $g\mapsto d_\Delta(x, gx)$ is sometimes called the {\em Lyapunov projection}.}
Namely, since $G$ acts transitively on pointed maximal flats in $X$, we can map $F$ to the model flat
$F_{mod}$ and $x$ to the point $o\in F_{mod}$ via some $g\in G$.
Now, project the oriented segment $g(xy)$ to the vector $v$ in $\Delta$ using the action of $W$.
The vector $v$
is the {\em complete
$G$-congruence invariant} of the pair $(x,y)$: Given two pairs $(x,y), (x',y')$, there exists $g\in G$ sending $(x,y)$ to $(x',y')$ iff
$d_\Delta(x,y)= d_\Delta(x',y')$.
In the case of rank 1 spaces, $\Delta\cong[0,\infty)$ and $d_\Delta$ is the usual distance function.
We refer to \cite{KLM} for the description of a complete set of {\em generalized triangle inequalities} for the chamber-valued distance function. The simplest of these inequalities has the form:
$$
d_\Delta(x,z)\le_{\Delta^*} d_\Delta(x,y) + d_\Delta(y,z),
$$
where $\Delta^*$ is the cone dual to $\Delta$, also known as the {\em root cone}:
$$
\Delta^*= \{u: \< u, x\> \ge 0 \;\forall x\in \Delta \}.
$$
We also refer the reader to \cite{Parreau} for discussion of ``nonpositive curvature'' properties of $d_\Delta$.
\begin{remark}\label{rem:ineq}
1. Here, given a convex cone $C$ with tip $0$ in a vector space $V$, we define the partial order $\le_C$ on $V$ by:
$$
u\le_C v \iff v-u\in C.
$$
2. In general, $d_\Delta$ is not symmetric, but it satisfies the identity
$$
d_\Delta(y,x) = \iota d_\Delta(x,y).
$$
\end{remark}
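In type $A$ the inequality $d_\Delta(x,z)\le_{\Delta^*} d_\Delta(x,y) + d_\Delta(y,z)$ amounts to the classical log-majorization of the singular values of a product of matrices, which can be tested numerically; a small sketch for $SL(3,{\mathbb R})$ (the random matrices are our choice):

```python
import numpy as np

def d_Delta(g):
    # Delta-valued distance d_Delta(o, g.o) for SL(3, R):
    # the decreasing vector of logarithms of the singular values of g.
    return np.log(np.linalg.svd(g, compute_uv=False))

def unit_det(m):
    # rescale a real matrix to determinant 1
    d = np.linalg.det(m)
    return np.sign(d) * m / abs(d) ** (1 / 3)

rng = np.random.default_rng(0)
A = unit_det(rng.standard_normal((3, 3)))
B = unit_det(rng.standard_normal((3, 3)))

# w = d_Delta(x,y) + d_Delta(y,z) - d_Delta(x,z) for x = o, y = A.o, z = AB.o
w = d_Delta(A) + d_Delta(B) - d_Delta(A @ B)
partial = np.cumsum(w)

assert abs(partial[-1]) < 1e-8    # total sum is 0: determinants multiply
assert np.all(partial >= -1e-8)   # nonneg. partial sums: w lies in Delta^*
```

Membership of $w$ in the root cone $\Delta^*$ is, in these coordinates, exactly the dominance order: all partial sums of $w$ are nonnegative.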
\begin{rem} The theory of regular/singular segments has a relative analogue, relative to a face $\tau_{mod}$ of $\sigma_{mod}$;
we will not cover the relative version in this paper. However, the relativization is important for the notion of $\tau_{mod}$-Morse maps and group actions, which correspond to $P$-Anosov subgroups
in the sense of \cite{Labourie,GW} for parabolic subgroups $P<G$.
The discrete subgroups theory described in this survey is the one of $B$-Anosov subgroups, where $B < G$ is a minimal parabolic subgroup.
We refer the reader to \cite{morse} for the definition of $\tau_{mod}$-regularity.
\end{rem}
\begin{example}
Consider the case of the symmetric space associated with the group $G=PGL(n,{\mathbb R})$,
i.e. $X$ consists of positive definite
$n\times n$ matrices with unit determinant. Assume that $o\in X$ corresponds to the identity matrix.
Then, up to scaling,
$$
d_\Delta(o,y)= \frac{1}{2} \bigl(\log(\lambda_1), \log(\lambda_2), \ldots, \log(\lambda_n)\bigr), $$
where $\lambda_1\ge \lambda_2\ge \ldots\ge \lambda_n$ are the eigenvalues of the matrix $y$ counted with multiplicity.
The segment $oy$ is regular if and only if $\lambda_i> \lambda_{i+1}$ for all $i=1,...,n-1$.
\end{example}
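A minimal numerical companion to this example, assuming the identification $y=gg^T$ for $g\in SL(n,{\mathbb R})$ (the sample matrix $g$ is our illustrative choice):

```python
import numpy as np

# y = g g^T is a positive definite matrix of determinant 1
g = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
y = g @ g.T
lam = np.sort(np.linalg.eigvalsh(y))[::-1]   # lambda_1 >= ... >= lambda_3
v = 0.5 * np.log(lam)                        # d_Delta(o, y), up to scaling

assert abs(v.sum()) < 1e-9                   # v lies in F_mod (sum zero)
assert np.all(np.diff(v) < 0)                # strictly decreasing: oy regular
# consistency: the entries of v are the log singular values of g
assert np.allclose(v, np.log(np.linalg.svd(g, compute_uv=False)))
```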
\subsection{Finsler geometry}\label{sec:finsler geometry}
Each symmetric space comes with a nonpositively curved Riemannian metric and the corresponding Riemannian distance function. Nevertheless, it turns out that many asymptotic aspects of $X$ (and of its quotients, locally symmetric spaces) are better captured by
suitable $G$-invariant {\em polyhedral Finsler metrics} on $X$.
Pick a regular vector $\bar\theta\in \sigma_{mod}$ (where we regard $\si_{mod}$ as a simplex in the unit sphere in $F_{mod}$), and define the linear functional $\varphi$ on $F_{mod}$ dual to the vector $\bar\theta$.
For simplicity, we assume $\bar\theta$ to be $\iota$-invariant.
(See \cite{bordif} for the general treatment.)
\begin{rem}
There are several natural choices of the vectors $\bar\theta$ and, thus, of the dual
linear functionals $\varphi$ and of the Finsler metrics defined below. For instance,
one can take $\varphi$ to be the sum of all positive roots $\alpha\in R$ (positive with respect to the chamber
$\Delta$). This linear functional will be {\em regular}, i.e.\ given by the inner product with a regular vector $\bar\theta$
in $\Delta$, and moreover $\iota$-invariant. While the metric $d_{\bar\theta}$ depends on the choice of $\bar\theta$, the compactification $\overline{X}^{Fins}$ of $X$ is independent of $\bar\theta$, see \cite{bordif}.
For concreteness, the reader can assume that $\varphi$ is the sum of positive roots.
\end{rem}
Given $\varphi$ (equivalently, $\bar\theta$), we define in \cite{bordif}
a Finsler distance function $d_{\bar\theta}$ on $X$ as follows.
First, we define a polyhedral Finsler norm on the vector space
$F_{mod}={\mathbb R}^r$ by
$$
||v||_{\bar\theta}:= \varphi(d_\Delta(0, v)).
$$
The unit ball $B_{mod}$ for this norm is the intersection of half-spaces
$$
\{x: (w^* \varphi)(x)\le 1\}, \quad w\in W.
$$
Since this norm is $W$-invariant, it extends to a $G$-invariant Finsler metric on $X$ by defining the norm $||v||$ for a vector $v\in T_x X$ using the formula
$$
||v||= ||dg(v)||_{\bar\theta}
$$
where $g\in G$ is chosen so that $g(x)=o$ and $dg(v)\in T_oF_{mod}$. This norm on tangent spaces is
a Finsler metric on the entire symmetric space $X$,
and one has the Finsler distance function
$$
d_{\bar\theta}(x,y):= \inf \int_{0}^{1} ||c'(t)|| dt,
$$
where the infimum is taken over all smooth paths $c: [0,1]\to X$, $c(0)=x, c(1)=y$.
This distance function is also given by the explicit formula
$$
d_{\bar\theta}(x,y)= \varphi(d_\Delta(x,y)), \quad x, y\in X
$$
which is the definition that we are going to use.
Due to our assumption that $\bar\theta$ is $\iota$-invariant,
the distance $d_{\bar\theta}$ is symmetric
and hence a metric in the usual sense.
We will refer to any such distance $d_{\bar\theta}$ as a {\em regular polyhedral Finsler metric} on $X$.
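A numerical sketch of this metric for $SL(3,{\mathbb R})$, taking $\varphi$ to be the sum of positive roots as suggested in the remark above (so $\varphi(v)=\sum_i (n+1-2i)v_i$ in the standard coordinates); the random matrices are our choice:

```python
import numpy as np

def d_Delta(g):
    # Delta-valued distance d_Delta(o, g.o): decreasing log singular values
    return np.log(np.linalg.svd(g, compute_uv=False))

def finsler(g, n=3):
    # d_theta(o, g.o) = phi(d_Delta(o, g.o)), phi = sum of positive roots
    # of sl(n): phi(v) = sum_i (n + 1 - 2i) v_i   (i = 1, ..., n)
    coeff = np.array([n + 1 - 2 * i for i in range(1, n + 1)], dtype=float)
    return float(coeff @ d_Delta(g))

def unit_det(m):
    # rescale to determinant 1, so d_Delta lands in F_mod (sum zero)
    d = np.linalg.det(m)
    return np.sign(d) * m / abs(d) ** (1 / 3)

rng = np.random.default_rng(1)
A = unit_det(rng.standard_normal((3, 3)))
B = unit_det(rng.standard_normal((3, 3)))

# symmetry d(x,y) = d(y,x): a consequence of the iota-invariance of phi
assert np.isclose(finsler(A), finsler(np.linalg.inv(A)))
# triangle inequality: d(o, AB.o) <= d(o, A.o) + d(A.o, AB.o)
assert finsler(A @ B) <= finsler(A) + finsler(B) + 1e-9
```

The triangle inequality here follows from the generalized triangle inequality for $d_\Delta$ together with the fact that $\varphi$ is nonnegative on the root cone $\Delta^*$.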
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig4.pdf}
\caption{Polyhedral Finsler norm.}
\label{figure4.fig}
\end{figure}
\medskip
{Regular polyhedral} Finsler metrics on $X$ are used in \cite{bordif} to construct a {\em Finsler compactification} $\overline{X}^{Fins}$ by adding to $X$ {\em Finsler horofunctions} in the manner similar to compactifying $X$ by adding to it Riemannian Busemann functions. We will discuss this in more detail in \S \ref{Finsler-com}.
\subsection{Boundary theory} \label{sec:Boundary theory}
As in the rank 1 case, the {\em visual boundary} $\partial_{\infty} X$ of a symmetric space $X$ is defined as the set of asymptotic equivalence classes of geodesic rays in $X$: Two rays are asymptotic iff they are within finite distance from each other. There are two useful $G$-invariant topologies on $\partial_{\infty} X$. The first one is the visual topology, the topology of the unit tangent sphere in
$X$: Choosing a reference point $x\in X$, we consider the set $Ray_x$ of geodesic rays emanating from $x$; each geodesic ray in $X$ is asymptotic to one and only one ray of this form.
The set $Ray_x$ is identified with the unit tangent sphere $U_xX\subset T_xX$ by sending each ray $\rho$ to its velocity vector at $x$.
However, $\partial_{\infty} X$ also carries the structure of a (spherical) simplicial complex, defined via ideal boundary simplices of maximal flats in $X$. For each maximal flat $F$, the visual boundary $\partial_{\infty} F$ is identified with the unit sphere in $F$, and hence the $W$-action defines a Coxeter simplicial complex on $\partial_{\infty} F$.
\begin{ff}
For any two maximal flats $F, F'$ the intersection $\partial_{\infty} F\cap \partial_{\infty} F'$ is a (convex) subcomplex of both
$\partial_{\infty} F$ and $\partial_{\infty} F'$.
\end{ff}
This proves that the tilings of the visual boundaries of the maximal flats are compatible.
The topology of this simplicial complex is called {\em Tits topology}.
It is induced by the {\em Tits metric}, which restricts to the angular metric on the visual boundary spheres of
maximal flats.
The simplicial complex is a {\em Tits building}, the {\em Tits boundary} of $X$, denoted $\partial_{Tits} X$.
Its dimension equals $\mathop{\hbox{rank}}(X)-1$.
The identity map
$$
\partial_{Tits} X\to \partial_{\infty} X
$$
is a continuous bijection, but never a homeomorphism,
i.e. the Tits topology is strictly finer than the visual topology.
{\em Apartments} in $\partial_{Tits} X$ are visual boundaries of maximal flats.
Facets (i.e.\ top-dimensional simplices) of the apartments
are called {\em chambers}.
Given a point $x\in X$ and a chamber $\sigma$ in $\partial_{Tits} X$,
we let $V(x,\sigma)$ denote the {\em euclidean Weyl chamber} in $X$, which is the union of geodesic rays $x\xi$, $\xi\in \sigma$. Similarly, for a
face $\tau$ of the simplicial complex $\partial_{Tits} X$, we let $V(x,\tau)$ denote the {\em Weyl sector} equal to the union of rays $x\xi$, $\xi\in \tau$. A point $\xi\in \partial_{\infty} X$ is {\em regular} if it belongs to the interior of a chamber $\sigma\subset \partial_{Tits} X$; equivalently, for some (every) $x\in X$ the geodesic ray $x\xi$ is regular.
\begin{ff}
Any two ideal points, and likewise any two chambers, belong to a common apartment.
\end{ff}
Every $G$-orbit in $\partial_{\infty} X$ intersects every chamber exactly once,
and we have the {\em type map}
$$\theta: \partial_{\infty} X\to \partial_{\infty} X/G\cong\sigma_{mod} .$$
For a maximal flat $F\subset X$,
the $G$-orbits in $\partial_{\infty} X$ intersect $\partial_{\infty} F$ in Weyl orbits,
and the restriction $\theta|_{\partial_{\infty} F}:\partial_{\infty} F\to\si_{mod}$ divides out the action of the Weyl group (of $F$ resp.\ $\partial_{\infty} F$).
\medskip
{\bf Example:} (a) Rank 1 case: $\partial_{Tits} X$ is a discrete space.
(b) $SL(n, {\mathbb R})$ case: $\partial_{Tits} X$ is
the incidence complex of ${\mathbb R} P^{n-1}$. Chambers are complete flags:
$$
V_\bullet=(V_1\subset \ldots\subset V_{n-1}\subset {\mathbb R}^n),
$$
where $\dim(V_i)=i$; other faces are partial flags.
The incidence relation: a partial flag $V_\bullet'$ is a face of a full flag $V_\bullet$
iff the full flag is a refinement of the partial flag.
For instance, if $n=3$, then full flags are pairs
$$
V_\bullet=(V_1\subset V_2),
$$
and partial flags are lines $V_1'$ or planes $V_2'$; they yield the vertices of the incidence graph. Then $V_1'$ is a vertex of
$V_\bullet$ iff $V_1'=V_1$; $V_2'$ is a vertex of $V_\bullet$ iff $V_2'=V_2$. Thus, two vertices $V_1, V_2$ are connected by an edge iff $V_1\subset V_2$ (the line is contained in the plane).
\begin{remark} $\mathop{\hbox{rank}}(X)\geq2$ iff $\partial_{Tits} X$ is connected.
\end{remark}
The {\em Furstenberg boundary} $\partial_{F\ddot u} X$ of $X$ is the space of {\em chambers} in $\partial_{Tits} X$.
The $G$-action on $\partial_{F\ddot u} X$ is transitive
and the stabilizers in $G$ of the chambers are the {\em minimal parabolic subgroups} $B<G$.
Hence
$$\partial_{F\ddot u} X\cong G/B.$$
The topology on $\partial_{F\ddot u} X$ induced by the visual topology coincides with its manifold topology
as a homogeneous space.
From the smooth viewpoint,
$\partial_{F\ddot u} X$ is a compact smooth homogeneous $G$-space,
and from the algebraic geometry viewpoint a homogeneous $G$-space with an underlying projective variety.
For instance,
in the case $G=SL(n)$,
the Furstenberg boundary is the full flag manifold,
and a minimal parabolic subgroup $B$ is given by the upper-triangular matrices, which is the stabilizer of the full flag\footnote{Here
and in what follows, $\< S\>$ denotes the linear span of a subset $S$ of a vector space.}
$$
\<e_1\>\subset \<e_1, e_2\>\subset \ldots \subset \<e_1,\ldots, e_{n-1}\>.
$$
More generally,
for a face $\tau_{mod}\subseteq\si_{mod}$, we define the
{\em generalized partial flag manifold} $\operatorname{Flag_{\tau_{mod}}}$ as the space of simplices $\tau\subset\partial_{Tits} X$ of type
$\theta(\tau)=\tau_{mod}$.
The $G$-action on $\operatorname{Flag_{\tau_{mod}}}$ is again transitive.
The stabilizers of the simplices $\tau$ are the {\em parabolic subgroups} $P_{\tau}<G$ of type $\tau_{mod}$.
They form a conjugacy class and, denoting by $P_{\tau_{mod}}$ a representative,
we can write
$$\operatorname{Flag_{\tau_{mod}}}\cong G/P_{\tau_{mod}}.$$
Note that $\operatorname{Flag_{\si_{mod}}}=\partial_{F\ddot u} X$.
For a simplex $\tau\in\operatorname{Flag_{\tau_{mod}}}$, we define its {\em star} $$\operatorname{st}(\tau)\subset\partial_{F\ddot u} X$$
as the set of chambers of the Tits building $\partial_{Tits} X$ containing $\tau$:
\begin{equation}\label{eq:star}
\operatorname{st}(\tau):= \{\sigma\in \partial_{F\ddot u} X: \tau\subset \sigma\}.
\end{equation}
\begin{definition}
Ideal boundary points $\xi_\pm\in\partial_{\infty} X$ are {\em antipodal} if they are connected by a geodesic in $X$.
Two chambers $\sigma_\pm$ are antipodal if they contain antipodal regular points. Equivalently, they are swapped by a Cartan involution in $X$.
\end{definition}
{\bf Notation.} $\sigma^{opp}\subset\partial_{F\ddot u} X$ denotes the set of chambers antipodal to $\sigma$.
\begin{remark} 1. $\sigma^{opp}$ is an open subset of $\partial_{F\ddot u} X$, called {\em open (maximal) Schubert cell of $\sigma$}.
2. Antipodal implies distinct but not vice versa!
3. The complement $\partial_{F\ddot u} X - \sigma^{opp}$ is a union of proper Schubert cycles in the projective variety $\partial_{F\ddot u} X\cong G/B$,
and hence a proper algebraic subvariety.
\end{remark}
\begin{example}
1. In the $SL(n)$ case, two full flags $V_\bullet, W_\bullet$
are antipodal iff they are {\em transversal}: $V_i$ is transversal to $W_{n-i}$ for each $i$.
2. In the rank 1 case, antipodal is equivalent to distinct.
The Tits boundary of a Gromov-hyperbolic space is a zero-dimensional building.
\end{example}
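For concreteness, transversality of full flags can be tested numerically: two full flags $V_\bullet, W_\bullet$ in ${\mathbb R}^n$ are transversal iff $V_i + W_{n-i} = {\mathbb R}^n$ for every $i$. The following sketch (our own illustration, not from the survey; the helper name is ours) checks this rank condition with numpy.

```python
import numpy as np

def is_transversal(V_flag, W_flag, n):
    """Full flags in R^n given as lists of basis vectors (V_i is spanned
    by the first i vectors).  Transversal means V_i + W_{n-i} = R^n
    for every i = 1, ..., n-1."""
    for i in range(1, n):
        M = np.vstack(V_flag[:i] + W_flag[:n - i])
        if np.linalg.matrix_rank(M) < n:
            return False
    return True

n = 3
e = list(np.eye(n))
V = [e[0], e[1], e[2]]      # the flag <e1> inside <e1,e2>
W = [e[2], e[1], e[0]]      # the opposite flag <e3> inside <e3,e2>

print(is_transversal(V, W, n))  # True: the flags are antipodal
print(is_transversal(V, V, n))  # False: a flag is never antipodal to itself
```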
\subsection{Quantified regularity}
Fix an $\iota$-invariant nonempty compact convex
subset $\Theta\subset \si_{mod}^o$, where $\sigma_{mod}^o=\operatorname{int}(\si_{mod})$ is the interior of $\si_{mod}$.
Define $V_\Theta\subset \Delta$, the $\Theta$-cone, as the cone with tip at the origin $o$ over the subset $\Theta$,
$$
V_\Theta= {\mathbb R}_{\ge 0} \cdot \Theta.
$$
We define {\em $\Theta$-regular segments} in $X$ as segments whose $\Delta$-length is in $V_\Theta$.
More generally, given $x\in X$ and a euclidean Weyl chamber $V(x,\sigma)$, we define the {\em $\Theta$-cone}
$$
V_\Theta(x,\sigma)= \{y\in V(x,\sigma): d_\Delta(x, y)\in V_\Theta\}.
$$
\begin{figure}[tbh]
\includegraphics[width=70mm]{fig5.pdf}
\caption{Cone $V_\Theta=V_\Theta(0,\si_{mod})$.}
\label{figure5.fig}
\end{figure}
\begin{remark}
Due to the $\iota$-invariance of $\Theta$,
the notion of $\Theta$-regularity is independent of the orientation of the segments.
\end{remark}
\medskip
For a negatively curved symmetric space $X$, a sequence $x_i\in X$ is divergent if and only if the sequence of distances $d(o,x_i)$ from a basepoint $o\in X$ diverges. Things become more complicated in higher rank symmetric spaces, since the ``right'' notion of distance in $X$ is not a number but a vector in $\Delta$. This opens several possibilities for diverging to infinity and leads to several notions of (asymptotic) regularity for sequences. In this survey we restrict to the simplest ones.
\begin{definition}
[Regular sequence in $X$]
\label{def:regseq}
A sequence $(x_i)$ in $X$ is
1. {\em regular} if the sequence of vectors $v_i=d_\Delta(o, x_i)$ diverges away from the boundary of $\Delta$.
2. {\em $\Theta$-regular} if it diverges to infinity and the sequence $(v_i)$ accumulates at $\Theta$.
3. {\em uniformly regular} if it is $\Theta$-regular for some $\Theta$. Equivalently,
the accumulation set of the sequence $(v_i)$ is contained in $\si_{mod}^o$. Equivalently, there exists $\Theta'$ (a compact convex subset of $\si_{mod}^\circ$)
such that for all but finitely many values of $i$ the vector $v_i$ belongs to $V_{\Theta'}$.
\end{definition}
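As a toy illustration of part 1 of this definition (our own sketch, not from the survey): in a rank 2 model chamber $\Delta = \{(x,y): 0\le y\le x\}\subset{\mathbb R}^2$, a sequence of vectors is regular iff its euclidean distances to the two walls of $\Delta$ diverge.

```python
import numpy as np

# Model Weyl chamber of a rank 2 space: Delta = {(x, y): 0 <= y <= x},
# bounded by the walls {y = 0} and {y = x}.
def dist_to_walls(v):
    x, y = v
    return min(y, (x - y) / np.sqrt(2.0))  # euclidean distance to the nearer wall

regular_seq  = [(2.0 * i, 1.0 * i) for i in range(1, 6)]  # along an interior ray
singular_seq = [(1.0 * i, 0.0)     for i in range(1, 6)]  # along the wall y = 0

print([round(dist_to_walls(v), 3) for v in regular_seq])  # diverging: regular
print([dist_to_walls(v) for v in singular_seq])           # all 0.0: not regular
```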
Analogously, we can define regularity for sequences of isometries $g_i$ of $X$:
\begin{definition}
[Regular sequence in $G$]
\label{def:regseqgp}
A sequence $(g_i)$ in $G$ is {\em regular} (resp. {\em uniformly regular}, resp. {\em $\Theta$-regular})
if for some (equivalently, every) $x\in X$ the orbit sequence $x_i=g_i x$ has this property.
\end{definition}
Thus, a divergent sequence in $X$ is uniformly regular iff all its subsequential limits in $\partial_{\infty} X$ are regular points. We will see later how to characterize regular sequences $(g_i)$ in $G$ in terms of their action
on the flag manifold $G/B$.
\medskip
\begin{remark} 1. Our notion of regularity for sequences is different from the notion
introduced by Kaimanovich in \cite{Kaimanovich}, where a sequence in $X$ is called {\em regular}
if it diverges at most sublinearly from a geodesic ray.
2. (Uniform) regularity of a sequence in $X$ is independent of the choice of base point.
3. If $(x_i)$ and $(y_i)$ are sequences in $X$ within uniformly bounded distance from each other,
$\sup_i d(x_i, y_i)<\infty$, then $(x_i)$ is (uniformly) regular iff $(y_i)$ is.
\end{remark}
\begin{example}
\label{ex:regseq}
Suppose that $G=SL(n, {\mathbb R})$ or $SL(n,{\mathbb C})$.
Then for each $g\in G$ we have its vector of singular values
$$
a(g)=(a_1(g)\ge \ldots\ge a_n(g))
$$
where the $a_j$'s are the diagonal entries of the diagonal matrix $a$ in the Cartan resp.\ singular value decomposition,
cf.\ Example~\ref{ex:cartandeco}.
A sequence $(g_i)$ in $G$ is regular iff
$$
\lim_{i\to\infty} \frac{a_l(g_i)}{a_{l+1}(g_i)}=\infty
\quad \hbox{ for }l=1,...,n-1.
$$
\end{example}
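The singular value criterion above is easy to test numerically. In the following sketch (illustrative only; the function name is ours) we compute singular values with numpy and compare the consecutive ratios for a regular and a non-regular family in $SL(3,{\mathbb R})$.

```python
import numpy as np

def sv_ratios(g):
    """Consecutive ratios a_l(g)/a_{l+1}(g) of the singular values;
    a sequence (g_k) is regular iff all these ratios tend to infinity."""
    a = np.linalg.svd(g, compute_uv=False)  # singular values, decreasing
    return a[:-1] / a[1:]

# A fixed rotation, to make the examples non-diagonal.
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# Regular family in SL(3,R): g_k = R diag(k, 1, 1/k), det = 1.
for k in [10.0, 100.0, 1000.0]:
    print(k, sv_ratios(R @ np.diag([k, 1.0, 1.0 / k])))  # both ratios equal k

# Divergent but non-regular: the first ratio a_1/a_2 stays at 1.
print(sv_ratios(np.diag([100.0, 100.0, 1.0e-4])))
```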
\begin{rem}
The singular values of a matrix depend on the choice of a euclidean/hermitian scalar product on ${\mathbb R}^n$ or ${\mathbb C}^n$ (this amounts to choosing a base point in the symmetric space of $G$), but the regularity of a sequence is independent of this scalar product.
\end{rem}
In line with the notion of regular sequences in $X$ (which are maps ${\mathbb N}\to X$), one defines regular maps from other metric spaces into $X$. The most relevant for us is the following notion:
\begin{definition}
[Regular quasiisometric embedding]
\label{def:BTregular}
An $(L,A)$-quasiisometric embedding $f: Y\to X$ from a metric space $Y$ to a symmetric space $X$
is {\em $(\Theta,B)$-regular} if for all $y_1, y_2\in Y$ satisfying $d(y_1,y_2)\ge B$,
the segment $f(y_1) f(y_2)$ is $\Theta$-regular in $X$. A map $f: Y\to X$ is a {\em uniformly regular quasiisometric embedding} if it is a $(\Theta,B)$-regular quasiisometric embedding for some $B$ and $\Theta$.
\end{definition}
The most important cases in which we will use this definition are when $Y$ is a finitely generated group (equipped with a word metric) or a (possibly infinite) geodesic segment. We will discuss regular quasiisometric embeddings and regular quasigeodesics in more detail in the next section.
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig6.pdf}
\caption{The path $q$ in this figure is a Finsler geodesic; it is also a regular quasigeodesic in the model flat. The path $r$ is a Finsler geodesic but not a regular quasigeodesic.}
\label{figure9.fig}
\end{figure}
\begin{example}[Regular and non-regular quasigeodesics]
Consider the case of quasiisometric embeddings into ${\mathbb R}^2=F_{mod}$, the model maximal flat of $SL(3)$. We assume that the $x$-axis in ${\mathbb R}^2$ is a wall. Then a piecewise linear function $f: {\mathbb R}\to {\mathbb R}$ yields a Finsler geodesic $q: x\mapsto (x, f(x))$, which is also a uniformly regular quasigeodesic in $F_{mod}$, provided that the slopes of linear segments in the graph of $f$ lie in the interval $[\epsilon, \sqrt{3} - \epsilon]$ for some $\epsilon>0$. In contrast, the graph of the function $g(x)=|x|$ is not a regular quasigeodesic.
The reason is that for each $x>0$ the segment connecting the points $(-x, x), (x, x)$ in the graph of $g$ is horizontal and, hence, singular. The graph $r$ of the function
$$
h(x)= \begin{cases}
0 & \hbox{if~~} x<0\\
\sqrt{3}x & \hbox{if~~} x\ge 0
\end{cases}
$$
is a Finsler geodesic which is not a regular quasigeodesic.
\end{example}
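The failure of regularity for the graph of $g(x)=|x|$ can be seen directly from chord slopes: the chord between $(-x,x)$ and $(x,x)$ has slope $0$, i.e.\ it points along the wall. A minimal numerical sketch (ours, not from the survey):

```python
# Chord slopes on graphs in the model flat R^2 of SL(3); the x-axis is a
# wall (slope 0), and the adjacent wall direction has slope sqrt(3).
def chord_slope(g, x1, x2):
    return (g(x2) - g(x1)) / (x2 - x1)

g = abs
print(chord_slope(g, -5.0, 5.0))  # 0.0: horizontal chord, singular direction
print(chord_slope(g, 1.0, 5.0))   # 1.0: lies in (0, sqrt(3)), regular
```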
\medskip
One of the geometric tools for studying regular quasiisometric embeddings is the notion of {\em diamonds}, which we define now.
Diamonds can be regarded as the ``right'' generalization of geodesic segments
when dealing with the metric $d_\Delta$ and with regular polyhedral Finsler metrics on $X$.
\begin{definition}[Diamonds \cite{morse,anolec}]
\label{def:diamo}
For a regular segment $xy$,
the {\em diamond} $\diamondsuit_{x,y}\subset X$
is the intersection of the two
euclidean Weyl chambers with tips $x$ and $y$ containing $xy$.
\end{definition}
Diamonds are contained in maximal flats.
\begin{figure}[tbh]
\includegraphics[width=90mm]{figure06c.pdf}
\caption{A diamond in the model flat.}
\label{figure6.fig}
\end{figure}
\begin{example}
1. If $X$ is the product of rank 1 spaces
$$
X=X_1\times \ldots\times X_n
$$
then each diamond is the product of geodesic segments $s_i\subset X_i$.
2. If $\mathop{\hbox{rank}}(X)=2$, then each diamond is a parallelogram, its faces are contained in walls.
\end{example}
Similarly, for $\Theta$-regular segments $xy$,
one defines the {\em $\Theta$-diamond} $\diamondsuit^{\Theta}_{x, y}\subset \diamondsuit_{x,y}$ by
$$
\diamondsuit^{\Theta}_{x,y}= V_\Theta(x,\sigma)\cap V_\Theta(y,\hat\sigma),
$$
where $\sigma, \hat\sigma$ are the (antipodal) chambers
such that $xy$ is contained in both $\Theta$-cones $V_\Theta(x,\sigma)$ and $V_\Theta(y,\hat\sigma)$.
As before, $xy\subset \diamondsuit^{\Theta}_{x,y}$. See Figure \ref{figure7.fig}.
\begin{figure}[tbh]
\includegraphics[width=90mm]{figure07a.pdf}
\caption{$\Theta$-diamond in the model flat.}
\label{figure7.fig}
\end{figure}
\begin{prop}
[Finsler description of diamonds \cite{bordif}]
$\diamondsuit_{x,y}$ is the union of all Finsler geodesics\footnote{With respect to a fixed regular polyhedral Finsler metric on $X$.}
connecting $x$ to $y$.
\end{prop}
It is quite clear that the diamond is filled out by Finsler geodesics connecting its tips.
The less obvious part is to show that all Finsler geodesics connecting its tips are contained in the diamond.
Diamonds are enlargements of the Riemannian geodesic segments connecting their tips and,
in view of the proposition,
may be regarded as their natural Finsler replacements,
reflecting the nonuniqueness of Finsler geodesics for polyhedral Finsler metrics.
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig9.pdf}
\caption{Diamond as the union of Finsler geodesics.}
\label{figure8.fig}
\end{figure}
\medskip
\subsection{Morse quasigeodesics and the Higher Rank Morse Lemma}
\medskip
We will now discuss a higher rank version of the Morse Lemma for quasigeodesics in rank 1 spaces.
The Morse Lemma in rank 1 states that quasigeodesic segments are uniformly Hausdorff close to geodesic segments.
This is no longer true in higher rank, because it already fails in the euclidean plane:
\begin{example}
[Failure of the naive version of the Morse Lemma] Take an $L$-Lipschitz function $f: {\mathbb R}\to {\mathbb R}$. Then $x\mapsto (x, f(x))$ is a quasigeodesic in ${\mathbb R}^2$, which, in general, is not close to any geodesic. For instance, take $f(x)=|x|$. Further examples can be obtained by using suitable maps $r\mapsto (r,\theta(r))$ in polar coordinates.
\end{example}
We next define a class of quasigeodesics in $X$
which satisfy a higher rank version of the conclusion of the rank 1 Morse Lemma,
where geodesic segments are replaced with ``diamonds'',
see the previous section and Definition~\ref{def:diamo}.
That is,
we require the quasigeodesics and their subsegments to be uniformly close to $\Theta$-diamonds with tips at their endpoints:
\begin{definition}
[Morse quasigeodesics and maps \cite{morse}]
\label{def:mrs}
Let $\Theta\subset \si_{mod}^o$ be a nonempty, $\iota$-invariant, compact convex subset,
and let $B, L, A, D, S>0$.
1. A map $q: I\to X$ from an interval $I$
is a {\em $(\Theta, B,L,A,D)$-Morse quasigeodesic}
if it is an $(L,A)$-quasigeodesic
and if, for every subinterval $[s,t]\subset I$ of length $t -s > B$,
the image $q([s,t])$ is contained in the $D$-neighborhood of
the $\Theta$-diamond $\diamondsuit^{\Theta}_{x,y}$ with tips $x=q(s), y=q(t)$.
2. A map $q: I\to X$ is a $(\Theta, B,L,A,D,S)$-{\em local Morse quasigeodesic}
if its restrictions to subintervals of length $\leq S$ are $(\Theta, B,L,A,D)$-Morse quasigeodesics.
3. A map $f: Y\to X$ from a metric space $Y$ is {\em Morse} if
it sends uniform quasigeodesics in $Y$ to uniform Morse quasigeodesics in $X$.
\end{definition}
Here, a family of quasigeodesics (respectively, Morse quasigeodesics) is called {\em uniform}
if the quasiisometry (respectively, Morse) constants are fixed across the family.
Note that
Morse quasigeodesics are {\em uniformly regular} in the sense of the previous section,
cf.\ Definition~\ref{def:BTregular}.\footnote{More precisely,
$(\Theta, B,L,A,D)$-Morse quasigeodesics are $(\Theta',B')$-regular
with $\Theta'\subset \si_{mod}^o$ and $B'>0$ depending only on
$X$ and the Morse data $(\Theta, B,L,A,D)$. }
Similarly, each Morse map is a uniformly regular quasiisometric embedding, provided that $Y$ is a {\em quasigeodesic metric space}, i.e.\ any two points in $Y$ can be connected by a uniform quasigeodesic in $Y$.
It is a nontrivial fact that, conversely, uniform regularity already forces quasigeodesics to be Morse:
\begin{theorem}[Higher Rank Morse Lemma \cite{mlem}]
\label{thm:HRMorse}
Uniformly regular uniform quasigeodesics in $X$ are uniformly Morse.
\end{theorem}
In other words,
uniformly regular quasigeodesics in $X$ are Morse,
with Morse data depending on the quasiisometry constants and the uniform regularity data (and on $X$).
\begin{remark}\label{rem:addend}
1. The Morse Lemma holds as well for quasirays (diamonds are replaced with euclidean Weyl chambers)
and quasilines (diamonds are replaced with maximal flats\footnote{Or
with unions of opposite Weyl cones in these maximal flats.}).
2. We also proved a version of our theorems with a weaker regularity assumption
(relative to a face $\tau_{mod}$ of $\sigma_{mod}$).
In this setting, diamonds are replaced by certain convex subsets of parallel sets,
namely, intersections of opposite ``Weyl cones.''
\end{remark}
For maps, one obtains accordingly:
\begin{corollary}
Uniformly regular uniform quasiisometric embeddings $Y\to X$ from geodesic metric spaces $Y$ are uniformly Morse.
\end{corollary}
The closeness to diamonds in the Morse condition can be nicely reformulated
in Finsler terms:
\begin{prop}
Uniform Morse quasigeodesics are uniformly Hausdorff close to Finsler geodesics.\footnote{Clearly,
these Finsler geodesics are then uniformly regular.}
\end{prop}
The Morse Lemma for quasigeodesics then becomes:
\begin{corollary}
\label{cor:mlemfins}
Uniformly regular uniform quasigeodesics in $X$ are uniformly Hausdorff close to Finsler geodesics.
\end{corollary}
There is a basic restriction on the coarse geometry of domains of uniformly regular quasiisometric embeddings
into symmetric spaces:
\begin{theorem}
[Hyperbolicity of regular subsets \cite{mlem}]
If $Y$ is a geodesic metric space which admits a uniformly regular quasiisometric embedding to $X$,
then $Y$ is Gromov-hyperbolic.
\end{theorem}
\begin{rem}
The Morse Lemma for quasirays implies that
uniformly regular quasirays converge at infinity to Weyl chambers in a suitable sense.
For uniformly regular quasiisometric embeddings from Gromov hyperbolic geodesic metric spaces,
this leads to the existence of natural boundary maps.
We will make this precise at the end of the next section,
see Theorem~\ref{thm:bdmp},
after introducing the notion of flag convergence of regular sequences in $X$ to chambers at infinity.
\end{rem}
It is a fundamental property of Morse quasigeodesics that they satisfy
the following {\em local-to-global principle}:
\begin{theorem}
[Local-to-global principle for Morse quasigeodesics \cite{morse}]
\label{thm:L2G}
If a coarse Lipschitz path in $X$
is locally a uniform Morse quasigeodesic
on a sufficiently large scale compared to the Morse data,
then it is globally a Morse quasigeodesic (for different Morse data).
More precisely:
For Morse data $(\Theta, B,L,A,D)$
and another convex compact subset $\Theta'\subset\si_{mod}^o$ with $\Theta\subset\operatorname{int}(\Theta')$
there exist constants $S,B',L', A', D'>0$
(depending also on $X$)
such that every $(\Theta,B,L,A,D,S)$-local Morse quasigeodesic
is a $(\Theta',B',L',A',D')$-Morse quasigeodesic.
\end{theorem}
This theorem parallels the local-to-global principle for quasigeodesics in Gromov-hyperbolic spaces,
see e.g.\ \cite[Thm.\ 1.4 in ch.\ 3]{CDP}.
It is derived from a basic local-to-global principle for straight paths
which we will explain later,
see Theorem~\ref{thm:straight-paths} below.
\subsection{Flag convergence}
\label{sec:flagcv}
We introduce the following notion of convergence at infinity for regular sequences in $X$ to chambers in $\partial_{F\ddot u} X$:
\begin{definition}[Flag convergence \cite{coco13, morse, anolec}]
\label{defn:flagconvergenceX}
A regular sequence $(x_n)$ in $X$ is said to
{\em flag converge} to
a chamber
$\sigma\in \partial_{F\ddot u} X$,
if for some base point $o\in X$ and some sequence $(\sigma_n)$ in $\partial_{F\ddot u} X$ with
\begin{equation*}
\sup_nd\bigl(x_n, V(o,\sigma_n)\bigr) < \infty
\end{equation*}
it holds that $\sigma_n\to\sigma$ (in the manifold topology of $\partial_{F\ddot u} X$).
\end{definition}
This convergence is independent of the choices of $o$ and $(\sigma_n)$, see \cite{coco13, anolec}.
For {\em uniformly} regular sequences,
flag convergence can be described in terms of the visual compactification:
A uniformly regular sequence in $X$ flag converges to $\sigma\in \partial_{F\ddot u} X$
iff its accumulation set in the visual compactification $\overline{X}$ is contained in $\sigma$.
Flag convergence is induced by a natural topology on $X\sqcup \partial_{F\ddot u} X$,
making it a {\em partial compactification} of $X$,
see \cite[\S 3.8]{mlem}.
If $\mathop{\hbox{rank}}(X)=1$, then $\partial_{F\ddot u} X$ is the visual boundary of $X$ and the topology on
$X\sqcup \partial_{F\ddot u} X$ is the visual topology described in \S \ref{sec:rk1}, making $X\sqcup \partial_{F\ddot u} X$ homeomorphic to a closed ball. The situation in higher rank is more complex,
since then $\partial_{F\ddot u} X$ is not even a subset of the visual boundary $\partial_{\infty} X$.\footnote{However,
$\partial_{F\ddot u} X$ is a subset of the {\em Finsler} boundary, see \S~\ref{Finsler-com} below.}
The topology on $X\sqcup \partial_{F\ddot u} X$ is obtained as follows.
Fix a basepoint $o\in X$
and define the {\em shadow} $Sh(B(y,R))$ of an open metric ball $B(y,R)\subset X$ in $X\sqcup \partial_{F\ddot u} X$ as
$$
\{x\in X: ox \hbox{~~is regular and~~} \diamondsuit_{o,x}\cap B(y,R)\ne \emptyset\} \cup \{\sigma\in \partial_{F\ddot u} X: V(o, \sigma)\cap B(y,R)\ne \emptyset\}.
$$
Then a basis of the {\em shadow topology} on $X\sqcup \partial_{F\ddot u} X$ at $\sigma\in \partial_{F\ddot u} X$ consists of all sets $Sh(B(y,R))$ with $R>0$ and $y\in V(o, \sigma)$. We retain the metric topology on $X$.
\begin{prop}
1. The shadow topology is independent of the basepoint $o\in X$.
2. The shadow topology is second countable and Hausdorff.
3. The shadow topology restricts on $\partial_{F\ddot u} X\cong G/B$ to the manifold topology.
4. In rank 1, the shadow topology coincides with the visual topology.
\end{prop}
A regular sequence in $X$ flag converges to a chamber $\sigma\in \partial_{F\ddot u} X$
iff it converges to $\sigma$ in the shadow topology, see \cite{mlem}.
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig10.pdf}
\caption{Shadow topology and flag convergence: The sequence of chambers $\sigma_i$ flag converges to the chamber $\sigma$.}
\label{figure18.fig}
\end{figure}
We extend the notion of flag convergence to sequences in $G$:
\begin{definition}[Flag convergence in $G$]
\label{defn:flagconvergenceG}
A regular sequence $(g_n)$ in $G$ {\em flag converges} to $\sigma\in \partial_{F\ddot u} X$
if for some (equivalently, every) $x\in X$ the sequence $(g_nx)$ flag converges to $\sigma$.
\end{definition}
Proposition \ref{prop:equiv-flag} in section \ref{sec:conprop} will provide equivalent conditions for flag convergence of sequences in $G$.
\medskip
Now, with the shadow topology at our disposal,
we can formulate the following {\em Addendum to the Higher Rank Morse Lemma}
regarding boundary maps for regular quasiisometric embeddings:
\begin{theorem} [Existence of boundary map \cite{mlem}]
\label{thm:bdmp}
Each uniformly regular quasiisometric embedding $f: Y\to X$
from a $\delta$-hyperbolic geodesic metric space $Y$
continuously extends to a map $$Y\sqcup \partial_{\infty} Y\to X\sqcup \partial_{F\ddot u} X.$$
The boundary extension $\partial_{\infty} f: \partial_{\infty} Y\to \partial_{F\ddot u} X$ is antipodal,
i.e.\ maps distinct ideal points to opposite chambers.
\end{theorem}
\begin{remark}
1. The appearance of $\partial_{F\ddot u} X$ in the extension map comes from the fact that the restrictions $q=f\circ r$ of $f$ to geodesic rays $r=y\xi$ in $Y$ are Morse quasirays and hence are uniformly close to (subsets of) euclidean Weyl chambers $V(f(y),\sigma)\subset X$, see part 1 of Remark \ref{rem:addend}.
The quasiray $q$ then accumulates at the chamber $\sigma=:\partial_{\infty} f(\xi)$.
2. The antipodality of the boundary map follows from the Higher Rank Morse Lemma
for quasilines,
see also part 1 of Remark \ref{rem:addend}.
\end{remark}
\subsection{Finsler compactifications}\label{Finsler-com}
We apply the horofunction compactification construction (see Appendix \ref{sec:horoboundary})
to the symmetric space $X$ and the Finsler distance
function $d_{\bar\theta}$. The resulting compactification
$$
\overline{X}^{\bar\theta}=\overline X^{Fins}=X\sqcup\partial_{\infty}^{Fins}X
$$
is independent of $\bar\theta$ (as long as it is regular), in the sense that the identity map $X\to X$ extends to a homeomorphism of the compactifications
$$
\overline{X}^{\bar\theta_1} \to \overline{X}^{\bar\theta_2}
$$
for any two regular elements $\bar\theta_i\in \si_{mod}$.
\medskip
In the case of the horofunction compactification of symmetric spaces (equipped with their standard Riemannian metrics), horofunctions were identified with asymptotic equivalence classes of geodesic rays in $X$. A similar identification can be done in the Finsler setting, but rays are replaced with {\em Weyl sectors} and the equivalence relation is a bit more complicated.
Two Weyl sectors $V(x,\tau)$, $V(x', \tau')$ are equivalent if and only if:
1. $\tau=\tau'$.
2. For every $\epsilon>0$ there exist $y\in V(x,\tau)$ and $y'\in V(x',\tau')$ such that the Hausdorff distance between
$V(y,\tau)$ and $V(y', \tau')$ is $<\epsilon$.
\medskip
Note that if $\mathop{\hbox{rank}}(X)=1$, Weyl sectors are geodesic rays in $X$ and two sectors are equivalent iff the rays are asymptotic. The connection between equivalence classes of sectors and Finsler horofunctions comes from the following theorem, which allows an identification of elements of $\partial_{\infty}^{Fins}X$ with equivalence classes $[V(x,\tau)]$ of Weyl sectors. In this theorem we let $d^{\bar\theta}_{x}$ be the function sending $y\in X$ to $d_{\bar\theta}(x,y)$.
\begin{thm}
[Weyl sector representation of points at infinity in Finsler compactification \cite{bordif}]
1. Let $x_i\in V(p, \tau)$ be a sequence diverging away from the boundary faces of the sector $V(p,\tau)$. Then
the sequence of Finsler distance functions $d^{\bar\theta}_{x_i} - d_{\bar\theta}(x_i, p)$ converges to a horofunction which will be denoted
$b_{p,\tau}^{\bar\theta}$. The limit horofunction is independent of the sequence $(x_i)$.
2. Every horofunction $b\in \partial_{\infty}^{Fins}X$ is equivalent (i.e.\ differs by a constant) to a horofunction
$b_{p,\tau}^{\bar\theta}$.
3. Two Finsler horofunctions $b_{p,\tau}^{\bar\theta}$, $b_{p',\tau'}^{\bar\theta}$ are equivalent if and only if the
sectors $V(p, \tau)$, $V(p', \tau')$ are equivalent.
4. The identification
$$
\left[b_{p,\tau}^{\bar\theta}\right] \leftrightarrow [V(p, \tau)]$$
is $G$-equivariant, where $G$ acts on horofunctions by precomposition.
\end{thm}
This identification determines the following stratification of $\partial_{\infty}^{Fins}X$. The {\em small strata} are the sets
$$
S_{\tau}= \{[V(p, \tau)]: p\in X\},
$$
where $\tau$'s are simplices in $\partial_{Tits} X$. The {\em big strata} $S_{\tau_{mod}}$ are the unions
$$
S_{\tau_{mod}}= \bigcup_{\tau\in \theta^{-1}(\tau_{mod})} S_\tau.
$$
The group $G$ acts on each big stratum transitively. This $G$-invariant stratification extends to $\overline{X}^{Fins}$ by declaring the entire $X$ to be a single big stratum, $S_{\emptyset}$. The smallest big stratum is $S_{\si_{mod}}$, which is $G$-equivariantly homeomorphic to $\partial_{F\ddot u} X$. This stratum is
the unique closed $G$-orbit in $\overline X^{Fins}$;
$$S_{\si_{mod}}\cong \partial_{F\ddot u} X\cong G/B.$$
On the opposite extreme, the orbit $S_{\emptyset}=X$ is open and dense in $\overline{X}^{Fins}$.
The strata $S_{\tau_{mod}}$ for $\tau_{mod}\neq\emptyset$,
are {\em blow-ups} of the corresponding flag manifolds $\operatorname{Flag_{\tau_{mod}}}= G/P_{\tau_{mod}}$,
where $P_{\tau_{mod}}$ are representatives of conjugacy classes of parabolic subgroups of $G$,
parameterized by faces $\tau_{mod}$ of $\si_{mod}$. More precisely,
there are $G$-equivariant fibrations
$$S_{\tau_{mod}}\longrightarrow\operatorname{Flag_{\tau_{mod}}}$$
with contractible fibers.
The fiber $S_{\tau}\subset S_{\tau_{mod}}$ over $\tau\in\operatorname{Flag_{\tau_{mod}}}$
can be interpreted geometrically
as the space of {\em strong asymptote classes of Weyl sectors} $V(x,\tau)$ asymptotic to $\tau$,
cf.\ \cite[\S 3]{bordif}.
In particular, it is a symmetric space of rank
$$\dim\si_{mod}-\dim\tau_{mod}<\mathop{\hbox{rank}} X=1+\dim\si_{mod}.$$
The topological boundary $\partial S_{\tau}=\overline S_{\tau}-S_{\tau}$ of $S_\tau$
is the union of small strata,
namely of the $S_{\nu}$ for the simplices $\nu$ strictly ``refining'' $\tau$
in the sense that $\nu\supsetneq\tau$.
\begin{thm}[\cite{bordif}]
$\overline X^{Fins}$ is $K$-equivariantly homeomorphic to the closed unit ball in $X$ with respect to the {\em dual Finsler metric} $d^*_{\bar\theta}$ on $X$. This compactification is $G$-equivariantly homeomorphic to the {\em maximal Satake compactification} $\overline{X}^S_{max}$ of the symmetric space $X$, see \cite{Borel-Ji} for the definition.
\end{thm}
\begin{cor}
$\overline X^{Fins}$ is a real-analytic manifold with corners on which $G$ acts real-analytically.
\end{cor}
\begin{example} We now describe the (regular polyhedral)
Finsler compactification of the model flat $F_{mod}$ for $SL(3,{\mathbb R})/SO(3)$, in which case $\bar\theta$ is
the midpoint of the edge $\si_{mod}$.
Let $\sigma_1,\ldots, \sigma_6$ denote the spherical chambers of $F_{mod}$ listed in the cyclic order. Let $\zeta_i\in \sigma_i$ denote the midpoint of $\sigma_i$. Let $\tau_{i}$ denote the common vertex of $\sigma_{i}, \sigma_{i+1}$.
Each chamber $\sigma_i$ determines a vertex $v_i$ of the Finsler compactification $\overline{F_{mod}}^{Fins}$ of $F_{mod}$; each $\tau_i$ determines an edge $e_i$ of $\overline{F_{mod}}^{Fins}$. In terms of Finsler horofunctions: Each vertex $v_i$ corresponds to the horofunction whose restriction to $F_{mod}$ is
$$
b_{\zeta_i}(x)= - x\cdot \zeta_i.
$$
Each edge $e_i$ corresponds to the 1-parameter (modulo additive constants) family of Finsler horofunctions
$$
\max \left( b_{\zeta_i} + s, b_{\zeta_{i+1}} + t\right), \quad s, t\in {\mathbb R}.
$$
Using the normalization $s+t=0$ (we are assuming that all Finsler horofunctions are normalized to vanish at the origin), we can write this family as
$$
b_{i,s}=\max \left( b_{\zeta_i} + s, b_{\zeta_{i+1}} - s\right) - |s|, \quad s\in {\mathbb R}.
$$
As $s\to+ \infty$, $b_{i,s}$ converges (uniformly on compacts in $F_{mod}$) to $b_{\zeta_i}$, while as $s\to-\infty$, $b_{i,s}$ converges to $b_{\zeta_{i+1}}$, representing the two vertices of the edge $e_i$. We thus obtain a description of the stratified space $\overline{F_{mod}}^{Fins}$ as a hexagon, dual to the unit ball $B_{mod}$ of the regular polyhedral Finsler norm on $F_{mod}$.
Regarding the small strata of $\overline{X}^{Fins}$: they are points (corresponding to the spherical chambers, i.e.\ elements of $\partial_{F\ddot u} X$), open 2-dimensional disks carrying the natural geometry of hyperbolic planes, and $X$ itself. Note that there are two types of open 2-disks, corresponding to the two types of vertices of the spherical building $\partial_{\infty} X$. Given two opposite vertices $\tau, \hat\tau$ of $\partial_{\infty} X$, the parallel set $P(\tau, \hat\tau)$ (the union of all geodesics asymptotic to $\tau$ and $\hat\tau$) splits as ${\mathbb H}^2\times {\mathbb R}$. The Finsler compactification of this parallel set contains ${\mathbb H}^2\times \{\pm \infty\}$, two open disk strata of $\overline{X}^{Fins}$ of different types. See Figure \ref{figure10.fig}.
\end{example}
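The degeneration of the edge horofunctions $b_{i,s}$ to the vertex horofunctions as $s\to\pm\infty$ can be checked directly: for $s\gg 0$ one has $b_{i,s}=b_{\zeta_i}$ exactly on any fixed compact set. A small numerical sketch (ours; we place the chamber midpoints $\zeta_1, \zeta_2$ at angles $30^\circ$ and $90^\circ$, $60^\circ$ apart as in the $SL(3)$ model flat):

```python
import numpy as np

# Midpoints of two adjacent spherical chambers of the SL(3) model flat.
zeta = [np.array([np.cos(a), np.sin(a)]) for a in np.radians([30.0, 90.0])]

def b(z, x):                     # vertex horofunction b_zeta(x) = -<x, zeta>
    return -float(x @ z)

def b_edge(s, x):                # edge family b_{1,s} = max(b_1+s, b_2-s) - |s|
    return max(b(zeta[0], x) + s, b(zeta[1], x) - s) - abs(s)

x = np.array([2.0, -1.0])
for s in [1.0, 10.0, 100.0]:
    print(s, b_edge(s, x))
print(b(zeta[0], x))             # b_edge(s, x) stabilizes to this value
```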
We refer the reader to \cite{bordif} for more details and to \cite{JS} for the description of compactifications of finite dimensional vector spaces equipped with polyhedral norms.
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig11.pdf}
\caption{Finsler compactification of the model flat.}
\label{figure10.fig}
\end{figure}
\begin{definition}
We say that a subset of $\partial_{\infty}^{Fins}X$ is {\em saturated} if it is a union of small strata.
\end{definition}
It is worth noting that
the stabilizers of points in the Finsler compactification
are {\em pairwise different} closed subgroups of $G$.
The stabilizers of the points at infinity in $S_{\tau}$ are contained in the parabolic subgroup $P_{\tau}$, where $P_\tau$ is the stabilizer in $G$ of the simplex $\tau$.
We conclude this section with the following theorem which provides a satisfying metric interpretation of the shadow topology:
\begin{thm}[Prop. 5.41 in \cite{bordif}]
The subspace topology on $X\sqcup \partial_{F\ddot u} X$ induced from $\overline X^{Fins}$ is equivalent to the shadow topology on
$X\sqcup \partial_{F\ddot u} X$.
\end{thm}
\subsection{The higher rank convergence property}
\label{sec:conprop}
We consider the action of $G$ on the full flag manifold $G/B=\partial_{F\ddot u} X$. The
usual convergence property,
compare section~\ref{sec:rank1conv},
fails in this context:
in higher rank,
a divergent sequence $(g_k)$
never converges
to a constant map
on the complement of a point in $\partial_{F\ddot u} X$.
However, as we noted earlier, in higher rank {\em distinct} should be replaced with {\em antipodal}.
Given two chambers $\alpha, \omega \in \partial_{F\ddot u} X$ we define the {\em quasiprojective map}
$$
\alpha_\omega: \omega^{opp} \to \{\alpha\},
$$
left undefined on the set $\partial_{F\ddot u} X - \omega^{opp}$ consisting of the chambers which are not antipodal to $\omega$. The chamber $\alpha$ is called the {\em attractor} and $\omega$ the {\em repeller}. We say that a sequence $(g_k)$ in $G$ {\em converges} to the quasiprojective map $\alpha_\omega$ if the maps $g_k$ converge to the constant map $\alpha$ uniformly on compact subsets of $\omega^{opp}$.
\begin{theorem}
[The higher rank convergence property \cite{coco13, morse,anolec}]
\label{thm:regconv}
Each regular sequence $(g_k)$ in $G$ contains a subsequence $(g_{k_i})$ which converges to the map $\alpha_\omega$ for some $\alpha, \omega\in \partial_{F\ddot u} X$. Conversely, if a sequence $(g_k)$ has such a limit $\alpha_\omega$, then it is regular.
\end{theorem}
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig12.pdf}
\caption{The convergence property.}
\label{figure11.fig}
\end{figure}
\begin{remark}
The complement $G/B - \omega^{opp}$ is the exceptional set for this convergence (the locus where locally uniform convergence fails).
\end{remark}
This theorem gives a {\em dynamical characterization} of regular sequences in $G$:
\begin{cor}
\label{cor:regconveq}
A sequence $(g_k)$ in $G$ is regular iff every subsequence $(g_{k_i})$ contains a further subsequence
$(g_{k_{i_j}})$ which converges to some quasiprojective map $\alpha_\omega$.
\end{cor}
As in the rank 1 case:
$$
g_k\to \alpha_\omega \iff g_k^{-1}\to \omega_\alpha.
$$
\begin{rem}
1. More generally, one defines $\tau_{mod}$-regularity of a sequence relative to a face $\tau_{mod}\subseteq \sigma_{mod}$. Each $\tau_{mod}$ determines a (partial) flag manifold
$$\operatorname{Flag_{\tau_{mod}}}=G/P_{\tau_{mod}}.$$
Then the convergence property (for arbitrary sequences $(g_k)$ in $G$) reads as:
Each sequence $(g_k)$ in $G$ contains a subsequence $(g_{k_i})$ which is either bounded in $G$ or is
$\tau_{mod}$-regular for some face $\tau_{mod}$.
The latter is equivalent to convergence (uniform on compacts) of $(g_{k_i})$ to a quasiprojective map
$$
\alpha_{\omega}: \omega^{opp}\subset \operatorname{Flag_{\tau_{mod}}} \to \alpha\in \operatorname{Flag_{\tau_{mod}}}.$$
Here $\omega$ is a face of $\partial_{Tits} X$ of the type opposite to $\tau_{mod}$.
We refer to \cite{morse,anolec} for details.
2. An equivalent notion of convergence of sequences in $G$ had been introduced earlier by Benoist in \cite{Benoist},
see in particular part (5) of his Lemma 3.5.
\end{rem}
\begin{example}
Consider the case $G=SL(n, {\mathbb R})$ and a sequence of diagonal matrices $g_k= Diag(a_{1,k},\ldots, a_{n,k}) \in G$
with $a_{1,k}\geq\ldots\geq a_{n,k}>0$.
Recall from Example~\ref{ex:regseq}
that regularity of the sequence $(g_k)$ amounts to the conditions
$$
\lim_{k\to\infty} \frac{a_{i,k}}{a_{i+1,k}}=\infty \quad\hbox{ for } i=1,\ldots,n-1.
$$
The attractive flag for the sequence $(g_k)$ is
$$
\alpha= \left(\<e_1\> \subset \< e_1, e_2\> \subset \ldots \subset \< e_1,...,e_{n-1}\>\right),
$$
and the repelling flag is
$$
\omega = \left(\<e_n\> \subset \< e_n, e_{n-1}\> \subset \ldots \subset \< e_n,...,e_{2}\>\right).
$$
\end{example}
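A numerical sketch of this example for $n=3$ (the specific sequence $g_k=\mathrm{Diag}(k^2,k,1)$ is my own choice): the consecutive singular-value ratios both equal $k\to\infty$, so the sequence is regular, and the image of a generic line converges to $\langle e_1\rangle$, the first subspace of the attractive flag $\alpha$.

```python
import numpy as np

def g(k):
    # regular sequence: ratios a_{1,k}/a_{2,k} = a_{2,k}/a_{3,k} = k -> infinity
    return np.diag([float(k) ** 2, float(k), 1.0])

v = np.array([1.0, 1.0, 1.0])   # a generic line [v] in RP^2
for k in [10, 100, 1000]:
    w = g(k) @ v
    w /= np.linalg.norm(w)
    print(k, w)                  # direction tends to (1, 0, 0), i.e. <e_1>
```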
It is useful to reformulate the definition of flag convergence of a regular sequence $(g_n)$ to a chamber $\alpha\in \partial_{F\ddot u} X$ in terms of the dynamics on the flag manifold $\partial_{F\ddot u} X$:
\begin{prop}
[Flag convergence criteria, \cite{coco15}]
\label{prop:equiv-flag}
The following are equivalent for a regular sequence $(g_n)$ in $G$:
1. The sequence $(g_n)$ flag converges to $\alpha\in \partial_{F\ddot u} X$.
2. Every subsequence in $(g_n)$ contains a further subsequence which converges to a quasiprojective map
$\alpha_\omega: \partial_{F\ddot u} X\to \partial_{F\ddot u} X$.
3. There exists a bounded sequence $b_n\in G$ such that the sequence $(g_n b_n)$ converges to a quasiprojective map $\alpha_\omega: \partial_{F\ddot u} X\to \partial_{F\ddot u} X$.
\end{prop}
We equip the smooth compact manifold $G/B$ with an auxiliary Riemannian metric
(not necessarily $K$-invariant).
This allows us to define expansion properties for elements $g\in G$ at chambers $\sigma\in G/B$ in the same way as in the rank 1 situation; see section \ref{sec:expansion}.
We can now introduce the stronger notion of conical convergence of regular sequences in $G$ to chambers in $\partial_{F\ddot u} X$, which first appeared in \cite{Albuquerque}.
\begin{definition} [Conical convergence in $X$ to chambers at infinity]
\label{def:coniconv}
Let $\sigma\in\partial_{F\ddot u} X$ be a chamber.
A regular sequence $(x_k)$ in $X$ converges to $\sigma$ {\em conically} if there exists a constant $R$ and a euclidean Weyl chamber $V=V(x,\sigma)\subset X$ such that the sequence $(x_k)$ is contained in the
$R$-neighbor\-hood of $V$. A sequence $(g_k)$ in $G$ converges to $\sigma\in \partial_{F\ddot u} X$ conically if for some (equivalently, every)
$x\in X$ the orbit sequence $(g_kx)$ converges conically to $\sigma$.
\end{definition}
Thus, conical convergence implies flag convergence, but the converse is false, as in rank 1. We conclude with alternative formulations of the conical convergence $g_k\to \sigma$.
\begin{prop}[Conical convergence criteria, \cite{morse,anolec}]
\label{prop:equiv_conicality}
a. Suppose that a regular sequence $g_k\in G$ flag converges to $\sigma\in \partial_{F\ddot u} X$. Then the following are equivalent:
1. $(g_k)$ converges conically to $\sigma$.
2. For some (equivalently, every) point $x\in X$ and maximal flat $F\subset X$ whose visual boundary contains $\sigma$,
the sequence of maximal flats $g_k^{-1}(F)$ is precompact in the space of all maximal flats in $X$.
3. For some (equivalently, every) chamber $\hat\sigma\in \sigma^{opp}$, the sequence $g_k^{-1}(\sigma, \hat\sigma)$ is precompact in the space of antipodal pairs of chambers in $\partial_{F\ddot u} X$.
b. Conical convergence $g_k\to \sigma$
implies that the sequence $(g_k^{-1})$ has diverging infinitesimal expansion\footnote{See
\S \ref{sec:expanding_actions} for the definition.}
at $\sigma$.
\end{prop}
\bigskip
\section{Discrete subgroups: Geometric and dynamical conditions} \label{sec:3}
As before, $G$ denotes the identity component of the isometry group of a symmetric space $X$ of noncompact type,
and $\Gamma<G$ denotes a discrete subgroup.
We first define certain classes of discrete subgroups $\Gamma<G$ within which we will be working
throughout most of this paper,
namely of discrete subgroups which exhibit {\em rank 1 behavior} relative to
(conjugacy classes of) parabolic subgroups $P< G$.
We then discuss and compare various geometric and dynamical conditions for such subgroups.
As we noted earlier,
in this survey we describe the theory for simplicity only in the (regular) case relative to minimal parabolic subgroups $P=B$.
In the general ($\tau_{mod}$-regular) case, almost all the results go through with suitable modifications;
for the details we refer the reader to either one of the papers \cite{coco15, morse, anolec}.
Among the major differences in the $\tau_{mod}$-regular case are that one has to replace limit sets in the full flag manifold $G/B$ with limit sets in partial flag manifolds $G/P$, Weyl cones over chambers are replaced with suitable Weyl cones over
stars of $\tau_{mod}$-type simplices, the expansion property occurs in the
partial flag manifolds, various notions of regularity have to be modified and the Bruhat order on the Weyl group is replaced with orders on its coset spaces.
\subsection{Regularity and limit sets}
\begin{definition}
The
{\em visual limit set}
$$\Lambda(\Gamma)\subset\partial_{\infty} X$$ of $\Gamma$ is the set of accumulation points of a $\Gamma$-orbit
$\Gamma x\subset X$
in the visual compactification $\overline X=X\sqcup\partial_{\infty} X$.
The elements of $\Lambda(\Gamma)$ are the
{\em visual limit points}. Similarly, the {\em Finsler limit set}
$$\Lambda^{Fins}_{x}(\Gamma)\subset \partial_{\infty}^{Fins}X$$
of $\Gamma$ is the accumulation set of the orbit
$\Gamma x$ in the Finsler compactification $\overline{X}^{Fins}=X\sqcup\partial_{\infty}^{Fins}X$.
\end{definition}
\begin{rem}
While the visual limit set is independent of the orbit $\Gamma x$, the Finsler limit set depends on it.
\end{rem}
\smallskip
We define regularity of subgroups as an asymptotic geometric condition on their orbits:
\begin{definition}
[Regular subgroups \cite{coco13}]
A discrete subgroup $\Gamma< G$ is {\em regular}
(resp.\ {\em uniformly regular})
if each divergent sequence of elements in $\Gamma$ is regular (resp.\ uniformly regular)
(cf.\ Defs.~\ref{def:regseq} and~\ref{def:regseqgp}).
\end{definition}
Regularity can be read off the location of limit sets:
\begin{remark}
1. Uniform regularity of $\Gamma$ is equivalent to the property that the visual limit set
consists only of regular ideal boundary points.
2. Regularity of $\Gamma$ is equivalent to the property that the Finsler limit set $\Lambda^{Fins}_x(\Gamma)$
of some (equivalently, any) orbit $\Gamma x$
is contained in the stratum $\partial_{F\ddot u} X\subset \partial_{\infty}^{Fins} X$.
\end{remark}
\begin{rem}
The notion of regularity for subgroups $\Gamma< Isom(X)$ equally
makes sense when $X$ is a euclidean building.
The definition
remains the same
since for euclidean buildings one also has a $\Delta$-valued ``distance'' function $d_\Delta: X\times X\to \Delta$,
where $\Delta$ is the model euclidean Weyl chamber of $X$,
see Appendix~\ref{sec:modsp}.
Most of the results mentioned in this survey go through without much change in the case when $X$ is a locally
compact euclidean building.
\end{rem}
\begin{definition}
[Chamber limit set \cite{coco13}]
The {\em chamber limit set}
$$
\Lambda_{ch}(\Gamma)\subset \partial_{F\ddot u} X$$
consists of the chambers $\sigma\in \partial_{F\ddot u} X$ for which there exists a sequence $(\gamma_k)$ in $\Gamma$ flag converging to $\sigma$,
$\gamma_k\to\sigma$ (see Definitions~\ref{defn:flagconvergenceX} and~\ref{defn:flagconvergenceG}).
The subgroup
$\Gamma< G$ is ($\si_{mod}$-){\em elementary}
if $\Lambda_{ch}(\Gamma)$ consists of at most
two points.
\end{definition}
\begin{rem}
1. More generally, in \cite[section 6.4]{coco15} we define the notion of $\tau_{mod}$-limit sets $\La_{\tau_{mod}}(\Gamma)\subset\operatorname{Flag_{\tau_{mod}}}=G/P_{\tau_{mod}}$ for discrete subgroups $\Gamma< G$.
(One has $\Lambda_{ch}=\La_{\si_{mod}}$.)
2. Benoist introduced in \cite[\S 3.6]{Benoist}
a notion of limit set $\Lambda_{\Gamma}$ for Zariski dense subgroups $\Gamma$ of reductive algebraic groups over local fields which in the case of real semisimple Lie groups
is equivalent to our concept of chamber limit set $\Lambda_{ch}$.\footnote{Benoist's limit set $\Lambda_{\Gamma}$ is contained in a partial flag manifold $Y_{\Gamma}$ which in the case of real Lie groups is the full flag manifold $G/B$, see the beginning of \S 3 of his paper.
In this case, $\Lambda_{\Gamma}$ consists of the limit points of the sequences in $\Gamma$ contracting on $G/B$, cf.\ his Definitions 3.5 and 3.6.} What we call the $\tau_{mod}$-limit set $\La_{\tau_{mod}}$ for other face types $\tau_{mod}\subsetneq\si_{mod}$ is mentioned in his Remark 3.6(3), and his work implies that, in the Zariski dense case,
$\La_{\tau_{mod}}$ is the image of $\Lambda_{ch}$ under the natural projection $\partial_{F\ddot u} X=\operatorname{Flag_{\si_{mod}}}\to\operatorname{Flag_{\tau_{mod}}}$ of flag manifolds.
\end{rem}
\begin{example}\label{ex:product-case}
Consider $X=X_1\times X_2$, the product of two real hyperbolic spaces,
$g=(g_1, g_2)$ an infinite order isometry of $X$, where $g_1, g_2$ are isometries of $X_1, X_2$. Then the cyclic subgroup $\Gamma=\<g\>$ is regular if and only if neither $g_1$ nor $g_2$ is elliptic. The subgroup $\Gamma$ is uniformly regular if and only if both $g_1, g_2$ are hyperbolic
isometries of $X_1, X_2$ or both are parabolic isometries.
A cyclic group generated by an element of {\em mixed type} is {\em not} uniformly regular.
The Furstenberg boundary of $X$ is the product $\partial_{\infty} X_1\times \partial_{\infty} X_2$.
If $\Gamma$, as above, is regular and $\lambda_i^+,\lambda_i^-$ are the fixed points of $g_i$
in $\partial_{\infty} X_i$,\footnote{I.e.\ $\lambda_i^+$ and $\lambda_i^-$ are the attractive and repulsive fixed points if $g_i$ is hyperbolic,
and $\lambda_i^+= \lambda_i^-$ is the unique fixed point if $g_i$ is parabolic.}
then $\Lambda_{ch}(\Gamma)= \{(\lambda_1^-, \lambda_2^-), (\lambda_1^+, \lambda_2^+)\}$.
In particular, in the mixed case
if, say, $g_1$ is hyperbolic and $g_2$ is parabolic with the unique fixed point $\lambda_2^+=\lambda_2^-=:\lambda_2$, then $\Lambda_{ch}(\Gamma)= \{(\lambda_1^-, \lambda_2), (\lambda_1^+, \lambda_2)\}$.
Note that if $\Gamma$ is uniformly regular then the limit set $\Lambda_{ch}(\Gamma)$ is antipodal; as the mixed case shows, it need not be antipodal if $\Gamma$ is merely regular.
The limit chambers are conical limit points if $g$ is uniformly regular of type hyperbolic-hyperbolic,
and otherwise they are not.
\end{example}
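The distinction between regular and uniformly regular in this example can be checked numerically. In the sketch below (the matrices are my own illustration), we use that for $g\in SL(2,{\mathbb R})$ the displacement $d_{{\mathbb H}^2}(o,g\cdot o)$ equals $2\log\sigma_{\max}(g)$, twice the top log singular value. For a mixed-type element, both coordinates of the $\Delta$-valued distance diverge (regularity), but their ratio tends to $0$, so the direction degenerates to the boundary of $\Delta$ (failure of uniform regularity):

```python
import numpy as np

def disp(g):
    # d(o, g.o) in H^2 for g in SL(2,R): twice the log of the top singular value
    return 2.0 * np.log(np.linalg.svd(g, compute_uv=False)[0])

g1 = np.diag([2.0, 0.5])                  # hyperbolic: disp(g1^k) = 2k log 2
g2 = np.array([[1.0, 1.0], [0.0, 1.0]])   # parabolic:  disp(g2^k) ~ 2 log k

for k in [10, 100, 1000]:
    d1 = disp(np.linalg.matrix_power(g1, k))
    d2 = disp(np.linalg.matrix_power(g2, k))
    print(k, d1, d2, d2 / d1)   # d1, d2 -> infinity, while d2/d1 -> 0
```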
The next proposition gives alternative descriptions of chamber limit sets for regular and
uniformly regular subgroups in terms of Finsler and visual compactifications:
\begin{prop}[{\cite{bordif,coco13}}]
1. If $\Gamma< G$ is regular then $\Lambda_{ch}(\Gamma)\subset \partial_{F\ddot u} X$ is the Finsler limit set of $\Gamma$.
(In this case, it is independent of the $\Gamma$-orbit.)
2. If $\Gamma< G$ is uniformly regular then $\Lambda_{ch}(\Gamma)\subset \partial_{F\ddot u} X$ is the set of
chambers $\sigma\in \partial_{F\ddot u} X$
which contain visual limit points.
(These are then contained in their interiors.)
\end{prop}
Let us mention in this context the following structural result for the visual limit set:
\begin{thm}
[{\cite[part of Thm.\ 6.4]{Benoist}}] For every Zariski dense discrete
subgroup $\Gamma< G$ there exists
an $\iota$-invariant closed convex subset $\ell(\Gamma)\subset \si_{mod}$
with nonempty interior,
such that for each chamber
$\sigma\in\partial_{F\ddot u} X$ satisfying $\operatorname{int}(\sigma)\cap \Lambda(\Gamma)\neq\emptyset$ it holds that $\theta(\sigma\cap \Lambda(\Gamma))=\ell(\Gamma)$.
\end{thm}
Thus, in the case of uniformly regular Zariski dense
subgroups $\Gamma< G$, the visual limit set $\Lambda(\Gamma)$ is a
$\Gamma$-equivariant product bundle over $\Lambda_{ch}(\Gamma)$ with fiber $\cong \ell(\Gamma)$.
\medskip
In general, verifying (uniform) regularity of a subgroup is not an easy task.
See e.g.\ \cite[Thm 3.51]{anolec}
and Theorem \ref{thm:Anosov} of this paper
for results of this kind.
For Zariski dense subgroups the verification of regularity becomes easier.
The next result provides a sufficient condition:
\begin{thm}
Let $\rho: \Gamma\to G$ be a representation whose image is Zariski dense in $G$.
Suppose that $Z$ is a compact metrizable space, $\Gamma\curvearrowright Z$ is a discrete convergence group action (with finite kernel),
and $f: Z\to \partial_{F\ddot u} X$ is a $\rho$-equivariant topological embedding.
Then $\rho$ has finite kernel and $\rho(\Gamma)$ is regular.
\end{thm}
\par\medskip\noindent{\it Proof. }
In view of the Zariski density of $\rho(\Gamma)$, also $f(Z)$ is Zariski dense in $\partial_{F\ddot u} X$.
Consequently,
the assumption that $\Gamma$ acts on $Z$ with finite kernel implies that $\rho$ has finite kernel.
We assume that $\rho(\Gamma)$ is not regular.
We will be using certain notions and a proposition from
\cite[\S 9.1.2]{bordif}. Given a simplex $\tau\in\operatorname{Flag_{\tau_{mod}}}$, the subvariety
$$\operatorname{st}_{F\ddot u}(\tau) \subset\partial_{F\ddot u} X$$
is the set of chambers $\sigma$ containing $\tau$ as a face. Similarly, for $\tau_-\in \Flag_{\iota\tau_{mod}}$,
$$C_{F\ddot u}(\tau_-)\subset\partial_{F\ddot u} X$$
is the Zariski open and dense subset equal to the union
$$
\bigcup_{\tau\in C(\tau_-)} \operatorname{st}_{F\ddot u}(\tau).
$$
Suppose that for some sequence $\gamma_i\to\infty$ in $\Gamma$,
the sequence $g_i=\rho(\gamma_i)\in G$ is not regular. Hence the sequence of vectors $d_\Delta(o, g_i(o))$ contains a subsequence lying in a tubular neighborhood of the boundary of $\Delta$. Then, after extraction, since $\Delta$ has only finitely many faces,
the sequence $(g_i)$ is {\em $\tau_{mod}$-pure} for some proper face $\tau_{mod}$ of $\si_{mod}$. This means that there exists a constant $D$ such that for each $i$ the vectors $v_i:=d_\Delta(o, g_i(o))\in \Delta$
belong to the $D$-neighborhood of some proper face $V_{\tau_{mod}}$ of $\Delta$ (the Weyl sector over the face
$\tau_{mod}$). Therefore, according to \cite[Prop.\ 9.4]{bordif},
after further extraction, there exists a pair of simplices
$\tau_+ \in \operatorname{Flag_{\tau_{mod}}}, \tau_-\in \Flag_{\iota\tau_{mod}}$
such that the sequence $(g_i)$ converges on the Zariski open and dense subset
$C_{F\ddot u}(\tau_-)\subset \partial_{F\ddot u} X$ to a nonconstant {\em algebraic} map $\phi: C_{F\ddot u}(\tau_-) \to \operatorname{st}_{F\ddot u}(\tau_+)$.
Since $\phi$ is a nonconstant algebraic map, it cannot be constant on any Zariski dense subset of its domain.
On the other hand, by the convergence property on $Z$, after extraction, $(g_i)$
converges to a constant map on $f(Z) -\{f(z)\}$
for some (exceptional) point $z\in Z$.
A contradiction. \qed
\subsection{Generalized convergence subgroups}
It is useful to reformulate the concepts of (chamber) limit set and regularity for discrete subgroups
purely in terms of their dynamics on $\partial_{F\ddot u} X$.
\begin{definition}
[Convergence subgroups \cite{coco15}]
\label{def:simod-convergence}
A discrete subgroup $\Gamma < G$ is a {\em $\si_{mod}$-convergence\footnote{We add the prefix $\si_{mod}$
in order to distinguish from the notion of abstract {\em convergence group} in topological dynamics,
cf.\ Appendix~\ref{app:congru}.} subgroup}
if for every divergent sequence of elements $\gamma_k\in \Gamma$,
every subsequence of $(\gamma_k)$ contains a further subsequence
which converges to a quasiprojective map $\alpha_\omega: \partial_{F\ddot u} X\to \partial_{F\ddot u} X$,
cf.\ section~\ref{sec:conprop}.
\end{definition}
Corollary~\ref{cor:regconveq} yields:
\begin{thm}[{\cite{morse,anolec}}]
\label{thm:regconvsbgp}
A discrete subgroup $\Gamma< G$ is regular iff it is a $\si_{mod}$-convergence subgroup.
\end{thm}
Furthermore,
the chamber limit set $\Lambda_{ch}(\Gamma)$
is the set of chambers
$\alpha\in \partial_{F\ddot u} X$ for which there exists a sequence $\gamma_k\in \Gamma$ such that $\gamma_k\to \alpha_\omega$ for some $\omega \in \partial_{F\ddot u} X$.
\medskip
We note that in \cite{coco15} we formulate a more abstract notion of generalized convergence actions of groups on topological spaces in terms of {\em accumulation of sequences $g_k\in G$ at subsets}, which covers $\si_{mod}$-convergence groups and their limit sets above. This more abstract notion explains why {\em balanced thickenings}
(see Definition \ref{defn:thickenings} and \eqref{eq:thickening})
of limit sets appear naturally in the study of regular subgroups of $G$. We also note that the convergence type behavior in the sense of accumulation
has been studied earlier by Karlsson, Papasoglu and Swenson
in the general context of nonpositive curvature, see \cite[Thm.\ 1]{Karlsson} and \cite[Thm.\ 4]{PapaSwen}.
\subsection{Rank 1 discrete subgroups $\Gamma<G$}
Currently, it appears that regularity (or even uniform regularity) alone is not enough to fully capture ``rank 1'' behavior of discrete subgroups $\Gamma< G$. We introduce one extra condition on the chamber limit set:
\begin{definition}
A discrete subgroup $\Gamma< G$ is {\em antipodal} (A), if its limit chambers are pairwise antipodal.
\end{definition}
We now can define a class of discrete subgroups which exhibit {\em rank 1 behavior:}
\begin{definition}
A discrete subgroup $\Gamma< G$ is {\em regular antipodal} (RA) or a {\em rank 1 discrete subgroup} of $G$
if it is regular and antipodal.
\end{definition}
The higher rank convergence property (Definition \ref{def:simod-convergence}) then implies:
\begin{corollary}
If $\Gamma$ is RA,
then the action $\Gamma\curvearrowright\Lambda_{ch}(\Gamma)$
is an abstract convergence action.\footnote{See Appendix
\ref{app:congru}.}
\end{corollary}
\par\medskip\noindent{\it Proof. } Suppose that $(\gamma_k)$ is a sequence of distinct elements in $\Gamma$.
In view of the regularity of $\Gamma$, after extraction, $\gamma_k\to \alpha_\omega$ uniformly on compacts in $\omega^{opp}$
for some limit chambers $\alpha,\omega\in\Lambda_{ch}(\Gamma)$.
Due to antipodality, $\Lambda_{ch}(\Gamma)- \{\omega\}\subset \omega^{opp}$.
Therefore, $\gamma_k$ converges to $\alpha$ uniformly on compacts in
$\Lambda_{ch}(\Gamma)- \{\omega\}$. \qed
\medskip
Thus, restricting to the chamber limit set of an RA subgroup
brings us back to the familiar rank 1 setting!
\subsection{RCA subgroups}\label{sec:conical_convergence2}
We now begin discussing various geometric and dynamical conditions for regular discrete subgroups.
The first one concerns the asymptotic geometry of orbits,
namely whether limit chambers can be reached along orbits in a ``straight'' way:
\begin{definition}[Conical]
A limit chamber $\sigma\in \Lambda_{ch}(\Gamma)$ is {\em conical}
if there exists a sequence
$\gamma_k\in\Gamma$ such that $\gamma_k\to \sigma$ conically,
cf.\ Definition~\ref{def:coniconv}.
A discrete subgroup $\Gamma < G$ is {\em conical} if all its limit chambers are conical.
\end{definition}
\begin{theorem}
[Extrinsic conicality is equivalent to intrinsic conicality \cite{morse,anolec}]
For nonelementary RA subgroups $\Gamma<G$,
conicality is equivalent to intrinsic conicality in terms of the action $\Gamma\curvearrowright\Lambda_{ch}(\Gamma)$,
as defined in Appendix \ref{app:congru}.
\end{theorem}
\begin{corollary}
A nonelementary RA subgroup $\Gamma<G$ is conical iff the action $\Gamma \curvearrowright T\Lambda_{ch}$
on triples of distinct limit chambers
is cocompact.
\end{corollary}
We now arrive at the first definition of geometric finiteness in higher rank, generalizing the Beardon--Maskit definition:
\begin{definition}[RCA subgroups \cite{coco13, coco15}]
A discrete subgroup $\Gamma< G$ is {\em RCA} if it is regular, conical and antipodal.
\end{definition}
\begin{remark}
An analogous definition and theory exist in the $\tau_{mod}$-regular case.
One replaces the $\Gamma$-action on $\partial_{F\ddot u} X$ with the action on the partial flag manifold $\operatorname{Flag_{\tau_{mod}}}$.
\end{remark}
Note that, a priori, it is not even clear that RCA groups are finitely generated.
However, as a consequence of Bowditch's theorem \cite{Bowditch_char}
about the dynamical characterization of word hyperbolic groups,
one obtains:
\begin{corollary}
Each nonelementary RCA subgroup $\Gamma$ is word hyperbolic
and its Gromov boundary $\partial_{\infty} \Gamma$ is equivariantly homeomorphic to $\Lambda_{ch}(\Gamma)$.
\end{corollary}
For RCA groups regularity is equivalent to uniform regularity:
\begin{theorem}
[RCA implies uniform regularity \cite{morse,anolec}]
If $\Gamma< G$ is nonelementary RCA then it is uniformly regular.
\end{theorem}
\subsection{Expansion at infinity}
The RCA condition discussed above is in terms
of the asymptotics of the group action on the symmetric space $X$.
Our next definition is in terms of the dynamics at infinity, more precisely, of the action
on the Furstenberg boundary $\partial_{F\ddot u} X$.
\begin{definition}[CEA subgroup \cite{coco15,anolec}]
A discrete subgroup $\Gamma < G$ is {\em CEA} (convergence, expanding, antipodal) if:
1. $\Gamma< G$ is a $\si_{mod}$-convergence subgroup.
2. The action $\Gamma\curvearrowright\partial_{F\ddot u} X$ is expanding at $\Lambda_{ch}(\Gamma)$
in the sense of Appendix~\ref{sec:expanding_actions}.
3. The chamber limit set $\Lambda_{ch}(\Gamma)$ is antipodal.
\end{definition}
We recall that the convergence condition is equivalent to regularity, see Theorem~\ref{thm:regconvsbgp}.\footnote{We
impose the convergence instead of the regularity condition
in order to make the notion purely dynamical.}
For nonelementary subgroups $\Gamma<G$,
the second condition is satisfied
if the restricted action $\Gamma\curvearrowright\Lambda_{ch}(\Gamma)$
is expanding,
compare Theorem~\ref{thm:conical} in Appendix~\ref{app:congru}.
Expansivity is useful for proving cocompactness of the $\Gamma$-action on domains of discontinuity, see \cite{coco13}.
We will come back to this later.
\subsection{Asymptotically embedded subgroups and Anosov representations}
The next condition we give is in terms of boundary maps into $\partial_{F\ddot u} X$.
It requires the discrete subgroups $\Gamma<G$ to be intrinsically word hyperbolic,
unlike our earlier conditions where hyperbolicity was a consequence.
We will consider boundary maps of the following kind:
\begin{definition}
A map into $\partial_{F\ddot u} X$ is {\em antipodal}
if it sends distinct points to antipodal chambers.
\end{definition}
\begin{remark}
1. Antipodal maps are injective.
2. An antipodal continuous map is the same thing as a {\em transversal map} in the sense of \cite{GW}.
\end{remark}
\begin{definition}[Asymptotically embedded subgroups \cite{coco13, anolec}]
A discrete subgroup $\Gamma<G$ is {\em asymptotically embedded}
if it is RA, intrinsically word hyperbolic, and if there exists a $\Gamma$-equivariant homeomorphism
$$\beta: \partial_{\infty} \Gamma \stackrel{\cong}{\longrightarrow} \Lambda_{ch}(\Gamma)\subset\partial_{F\ddot u} X.$$
\end{definition}
Note that the boundary map $\beta$ is necessarily antipodal in this case.
Furthermore,
any orbit map $\Gamma\to X$
continuously extends by $\beta$
to a map
$$\Gamma\sqcup\partial_{\infty}\Gamma\to X\sqcup \partial_{F\ddot u} X$$
from the visual (Gromov) compactification of $\Gamma$ into the partial compactification $X\sqcup \partial_{F\ddot u} X$
equipped with the topology of flag convergence\footnote{Compare \S \ref{sec:flagcv}.},
see \cite[Prop 3.20]{anolec}.
\medskip
Below we present two related notions,
{\em boundary embedded subgroups} and {\em Anosov subgroups}.
Instead of requiring an identification of the Gromov boundary with the chamber limit set, we can at first require only the existence of an
equivariant antipodal embedding into $\partial_{F\ddot u} X$:
\begin{definition}[Boundary embedded subgroups \cite{coco13, anolec}]
A discrete subgroup $\Gamma< G$ is {\em boundary embedded}
if it is intrinsically word hyperbolic and there exists an equivariant antipodal continuous map
$$
\beta': \partial_{\infty} \Gamma \to \partial_{F\ddot u} X,
$$
called a {\em boundary embedding} for $\Gamma$.
\end{definition}
\begin{remark} 1. {\em Boundary embedded} is the topological part of the definition of {\em Anosov} subgroups
in \cite{Labourie,GW}.
The {\em dynamical} part is omitted in this definition.
2. Boundary embeddings are in general {\em not unique}.
This is so by trivial reasons if $|\partial_{\infty}\Gamma|=2$,
but it also happens when $|\partial_{\infty}\Gamma|\geq3$, see e.g.\ \cite[Example 6.20]{morse}.
\end{remark}
There is the following {\em dichotomy} for the relation of boundary embeddings with the chamber limit set:
\begin{thm}[Boundary embedding dichotomy {\cite[Thm.\ 3.11]{anolec}}]
Suppose that $\Gamma< G$ is a boundary embedded regular subgroup.
Then for each boundary embedding $\beta': \partial_{\infty} \Gamma \to \partial_{F\ddot u} X$ we have the following dichotomy:
1. Either $\beta'(\partial_{\infty} \Gamma)= \Lambda_{ch}(\Gamma)$, or
2. $\beta'(\partial_{\infty} \Gamma)\cap \Lambda_{ch}(\Gamma)=\emptyset$ and
$$
\Lambda_{ch}(\Gamma)\subset \bigcap_{\sigma\in \beta'(\partial_{\infty} \Gamma)} (\partial_{F\ddot u} X - \sigma^{opp}).
$$
In the first case, $\Gamma$ is asymptotically embedded,
while the second alternative implies that $\Lambda_{ch}(\Gamma)$ is contained in a proper subvariety of $\partial_{F\ddot u} X$.
\end{thm}
The second alternative cannot occur in the Zariski dense case. Therefore:
\begin{cor}
If $\Gamma< G$ is regular, Zariski dense and $\beta': \partial_{\infty} \Gamma \to \partial_{F\ddot u} X$ is a boundary embedding,
then
$\beta'(\partial_{\infty} \Gamma)=\Lambda_{ch}(\Gamma)$. In particular, $\Gamma< G$ is asymptotically embedded.
\end{cor}
We note that the last part of this corollary was already proven in \cite{GW}.
The next theorem shows that one does not need Zariski density in order to conclude that
$\Gamma< G$ is asymptotically embedded.
\begin{thm}[{\cite{morse} and \cite[Thm.\ 3.15]{anolec}}]
\label{thm:bae}
A regular subgroup $\Gamma < G$ is boundary embedded
iff it is asymptotically embedded.
\end{thm}
\begin{remark}
1. In general, there may exist several boundary embeddings for $\Gamma$,
and only one of them yields the asymptotic embedding.
2. Theorem \ref{thm:bae} is one of the few results in the theory
which hold only in the regular case (as opposed to the $\tau_{mod}$-regular case).
\end{remark}
Now we give our versions, see \cite{morse,anolec},
of the definition of Anosov subgroups,
which were originally defined
in \cite{Labourie,GW}
using expansion properties of geodesic flows.
Our definitions do not use geodesic flows of word hyperbolic groups
but replace them by a simpler coarse geometric object,
namely by the space of discrete geodesics with respect to a word metric.
These were the first such definitions which are close to Anosov in spirit
but do not use flows.
We fix a Riemannian metric on $\partial_{F\ddot u} X$.
Moreover, we will assume in the next definition that the word hyperbolic group $\Gamma$ is equipped with a fixed word metric.
\begin{definition}
[Anosov subgroups, {\cite[\S 6.5]{morse}}]
1. A subgroup $\Gamma< G$ is {\em Anosov} if it is boundary embedded
with boundary embedding $\beta'$ and, in addition,
for each normalized\footnote{$r(0)=1\in \Gamma$} discrete geodesic ray $r: k\mapsto \gamma_k\in \Gamma$ asymptotic to $\xi\in \partial_{\infty} \Gamma$
the sequence $(\gamma_k^{-1})$ is uniformly exponentially infinitesimally expanding at $\beta'(\xi)\in\partial_{F\ddot u} X$.
More precisely, there are constants $C,A>0$
depending only on the subgroup $\Gamma< G$,
the word metric on $\Gamma$
and the Riemannian metric on $\partial_{F\ddot u} X$
such that
$$
\epsilon(\gamma_k^{-1}, \beta'(\xi)) \ge A e^{Ck}
$$
for $k\geq0$.
Here, $\epsilon$ is the infinitesimal expansion factor
defined in Appendix~\ref{sec:expanding_actions}.
2. A subgroup $\Gamma< G$ is {\em non-uniformly Anosov}
if it is boundary embedded with boundary embedding $\beta'$
and, in addition,
for each discrete geodesic ray $r: k\mapsto \gamma_k\in \Gamma$ asymptotic to $\xi\in \partial_{\infty} \Gamma$, the sequence
$(\gamma_k^{-1})$ contains a subsequence with diverging infinitesimal expansion at $\beta'(\xi)\in\partial_{F\ddot u} X$,
$$
\sup_{k\in{\mathbb N}}\epsilon(\gamma_k^{-1}, \beta'(\xi)) =\infty.
$$
\end{definition}
Note that due to the stability of quasigeodesics in word hyperbolic groups,
the definition is independent of the word metric on $\Gamma$.
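For a cyclic example, the exponential expansion in part 1 can be computed directly. The sketch below (my own illustration, not from the text) takes $G=SL(2,{\mathbb R})$, $\gamma_k=g^k$ with $g=\mathrm{Diag}(\lambda,\lambda^{-1})$, $\lambda>1$, and $\beta'(\xi)=\langle e_1\rangle$, which is $t=0$ in the affine chart $t\mapsto[1:t]$ of the flag manifold ${\mathbb RP}^1$. The derivative of $\gamma_k^{-1}$ at this point is $\lambda^{2k}=e^{Ck}$ with $C=2\log\lambda$:

```python
import numpy as np

lam = 2.0
g_inv = np.diag([1.0 / lam, lam])   # inverse of g = Diag(lam, 1/lam)

def act(M, t):
    # action of M in the affine chart t -> [1 : t] of RP^1
    x, y = M @ np.array([1.0, t])
    return y / x

h = 1e-6
for k in [1, 2, 3, 4]:
    M = np.linalg.matrix_power(g_inv, k)
    eps = (act(M, h) - act(M, 0.0)) / h    # expansion factor at beta'(xi)
    print(k, eps, lam ** (2 * k))          # matches lam^{2k} = e^{(2 log lam) k}
```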
\begin{thm}[\cite{morse,anolec}]\label{thm:Anosov}
Suppose that $\Gamma<G$ is intrinsically word hyperbolic
and not virtually cyclic.
Then:
1. $\Gamma< G$ is non-uniformly Anosov iff $\Gamma< G$ is Anosov iff $\Gamma< G$ is asymptotically embedded.
2. If these conditions are satisfied,
then the boundary maps for the Anosov and asymptotic embeddedness conditions
coincide.
\end{thm}
\begin{rem}
Note that the original Anosov condition
in \cite{Labourie,GW}
involves the space ${\mathcal G}$
of (equivalence classes) of all {\em parameterized} geodesics in $\Gamma$, equipped with a suitable topology.
This space admits two commuting actions:
a left action of $\Gamma$ and a right action of ${\mathbb R}$ (shifting geodesics). Let
$(\partial_{F\ddot u} X\times\partial_{F\ddot u} X)^{opp}$ denote the subset of $\partial_{F\ddot u} X\times\partial_{F\ddot u} X$ consisting of pairs of opposite chambers. We regard the product space
$$
{\mathcal B}:= {\mathcal G} \times (\partial_{F\ddot u} X\times\partial_{F\ddot u} X)^{opp}
$$
as a trivial bundle over ${\mathcal G}$; then the boundary map $\beta: \partial_{\infty} \Gamma\to\partial_{F\ddot u} X$
defines a section of this bundle which projects to a section $s_\beta$ of the quotient bundle
$$
\Gamma\backslash {\mathcal B} \to \Gamma\backslash {\mathcal G}.
$$
The commuting actions of
$\Gamma$ and ${\mathbb R}$ lift to commuting actions on
${\mathcal B}$, where ${\mathbb R}$ acts trivially on the second factor, while $\Gamma$ acts on the second factor via the restriction of the natural product action on $\partial_{F\ddot u} X$.
The original Anosov axiom amounts to an expansion/contraction condition (along $s_\beta$)
for the {\em right} ${\mathbb R}$-action on $\Gamma\backslash {\mathcal B}$. The basic dynamical duality principle suggests that this condition can be reinterpreted as expansion/contraction property for the {\em left} $\Gamma$-action on ${\mathcal B}/{\mathbb R}$:
This is what our interpretation of the Anosov property amounts to (after a careful rewriting of the definitions involved),
see \cite[\S 6.5]{morse} for a detailed discussion.
\end{rem}
\subsection{URU subgroups}
The next set of definitions is in terms of
extrinsic {\em coarse geometric properties}.
A finitely generated subgroup $\Gamma<G$
is said to be {\em undistorted}
if for some (every) point $x\in X$
the orbit map
$$
\gamma\mapsto \gamma x\in X
$$
is a quasiisometric embedding $\Gamma\to X$, where $\Gamma$ is equipped
with a word metric.
Equivalently,
the inclusion $\Gamma\subset G$ is a quasiisometric embedding.
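For a cyclic group generated by a regular hyperbolic element, undistortedness is immediate: $d(o,g^k o)$ grows linearly in $|k|$. A numerical sketch (the matrix is my own illustration), using that the distance $d(o,g\cdot o)$ in $X=SL(n,{\mathbb R})/SO(n)$ is the Euclidean norm of the vector of log singular values of $g$:

```python
import numpy as np

def dist(g):
    # d(o, g.o) in X = SL(n,R)/SO(n): Euclidean norm of the log singular values
    s = np.linalg.svd(g, compute_uv=False)
    return float(np.linalg.norm(np.log(s)))

g = np.diag([4.0, 1.0, 0.25])   # det = 1: a regular hyperbolic element of SL(3,R)
for k in [1, 5, 10, 20]:
    dk = dist(np.linalg.matrix_power(g, k))
    print(k, dk, dk / k)         # dk/k is constant: the orbit map is a QI embedding
```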
\begin{definition}[URU subgroups, {\cite{mlem,bordif}}]
A finitely generated discrete subgroup $\Gamma < G$ is {\em URU}
if it is uniformly regular and undistorted.
\end{definition}
\begin{remark}
There are regular undistorted subgroups which are not uniformly regular.
Take, for instance,
the cyclic groups in Example \ref{ex:product-case}
where $g_1$ is hyperbolic and $g_2$ is parabolic.
There are also finitely generated nonabelian free subgroups of this kind in $PSL(2,{\mathbb R})\times PSL(2,{\mathbb R})$;
moreover, some such subgroups are not even $P$-Anosov for any $P$,
see \cite[Appendix A]{GGKW1}.
Similar examples also exist among closed surface subgroups of
$PSL(2,{\mathbb C})\times PSL(2,{\mathbb C})$ and closed surface subgroups of $\operatorname{Isom}(T\times T)$, where $T$ is a simplicial tree, see \cite{KL-undistorted}.
\end{remark}
The next condition imposes a ``Morse'' property on the images of discrete geodesics in $\Gamma$.
A priori stronger than URU,
it is equivalent to it because of the Higher Rank Morse Lemma
and can be viewed as describing the extrinsic coarse geometry of URU subgroups.
\begin{dfn}[Morse subgroups, {\cite{morse}}]
\label{def:morssbgp}
A finitely generated discrete subgroup $\Gamma<G$ is {\em Morse}
if some (equivalently, every) orbit map $\Gamma\to X$ is Morse,
cf.\ Definition~\ref{def:mrs}.
\end{dfn}
The next condition is motivated by the Finsler geometric interpretation of the Morse Lemma,
see Corollary~\ref{cor:mlemfins},
that uniformly regular quasigeodesics in $X$ are Finsler quasiconvex.
Here,
a subset $A\subset X$ is called {\em Finsler quasiconvex}
if there exists a constant $R>0$
such that for any pair of points $x_1,x_2\in A$
there {\em exists} a Finsler geodesic\footnote{I.e. a geodesic with respect to a fixed regular polyhedral Finsler metric $d_{\bar\theta}$ on $X$.}
from $x_1$ to $x_2$ contained in the $R$-neighborhood of $A$.
\begin{definition}[Finsler quasiconvex subgroups, {\cite{bordif}}]
A subgroup $\Gamma<G$ is {\em Finsler quasiconvex}
if some (equivalently, every) $\Gamma$-orbit $\Gamma x\subset X$ is Finsler quasiconvex.
\end{definition}
This notion mimics the notion of quasiconvexity for discrete subgroups of rank 1 Lie groups
discussed in section \ref{sec:GFG}.
The key difference with rank 1 is that, when $\mathop{\hbox{rank}}(X)\geq2$,
Finsler geodesics connecting pairs of points in $X$ are no longer unique.
\subsection{Equivalence of conditions}
We can now put together a theorem which states
the equivalence of various geometric and dynamical notions of geometric finiteness for
discrete isometry groups of symmetric spaces exhibiting rank 1 behavior.
This theorem is a combination of results
of \cite{morse} and \cite{mlem}. It will be augmented by two more equivalent notions in section \ref{sec:bordif}
(Corollary~\ref{cor:S-coco}).
\begin{theorem}[Equivalence]
\label{thm:main}
For discrete subgroups $\Gamma< G$ the following conditions are equivalent
in the nonelementary\footnote{Here, ``nonelementary'' means $|\partial_{\infty}\Gamma|\geq3$ in the Anosov conditions 5 and 6,
which assume word hyperbolicity but no regularity,
and means $|\Lambda_{ch}(\Gamma)|\geq3$ in all other cases.}
case:
1. $\Gamma< G$ is RCA.
2. $\Gamma < G$ is CEA.
3. $\Gamma< G$ is asymptotically embedded.
4. $\Gamma< G$ is boundary embedded.\footnote{This, unlike the other equivalences,
is limited to $\si_{mod}$-regular subgroups $\Gamma <G$.}
5. $\Gamma< G$ is Anosov.
6. $\Gamma< G$ is non-uniformly Anosov.
7. $\Gamma < G$ is Morse.
8. $\Gamma < G$ is URU.
9. $\Gamma<G$ is uniformly regular and Finsler quasiconvex.
\end{theorem}
The most difficult step in the proof of this theorem is from URU to Morse:
It follows from the Morse Lemma for uniformly regular quasigeodesics
and the companion results on hyperbolicity and boundary maps in \cite{mlem}.
\begin{rem}
The {\em nonelementary} assumption in the theorem can most likely be dropped.
The reason for including it
is that it is currently unknown if there are RCA (or CEA) subgroups $\Gamma< G$ with $\Lambda_{ch}(\Gamma)$ consisting of a single point.
\end{rem}
\begin{rem}[Relation with the paper \cite{GGKW1}]
A weaker form of the equivalence of the conditions {\em Anosov} and {\em URU}
was established in \cite[Thm.\ 1.3]{GGKW1}
after \cite{morse,mlem} had been available.
There are two major differences:
First,
the discussion in \cite{GGKW1} is restricted to word hyperbolic subgroups,
while URU only assumes finite generation.
Second,
the URU condition is replaced in \cite{GGKW1} with the (a priori) stronger ``CLI'' (coarse linear increase) condition,
see \cite[Thm 1.3(iv)]{GGKW1}.
The difference in character between the URU and CLI conditions is, roughly,
like the difference between asymptotic linear growth and quasiisometric embedding
for Lipschitz maps ${\mathbb N}\to{\mathbb R}_+$;
the latter implies the former but not conversely.
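For illustration (our example, not taken from the cited papers): a $1$-Lipschitz map $f:{\mathbb N}\to{\mathbb R}_+$ with asymptotically linear growth need not be a quasiisometric embedding. Let $f(0)=0$ and let $f$ increase by $1$ at each step, except on the intervals $[2^k, 2^k+k]$, where it is constant. Then
$$
f(n) \;\geq\; n - \sum_{k:\,2^k\leq n} k \;\geq\; n - (1+\log_2 n)^2,
$$
so $f$ grows linearly, yet $f(2^k+k)-f(2^k)=0$ while the arguments are $k$ apart, with $k$ unbounded; hence no estimate of the form $|f(m)-f(n)|\geq L^{-1}|m-n|-A$ can hold.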
More precisely,
to describe the difference between URU and CLI
for a finitely generated subgroup $\Gamma<G$,
fix a word metric on $\Gamma$ and a point $x\in X$.
Then consider for all discrete geodesic rays $r:{\mathbb N}\to\Gamma$ normalized by $r(0)=e$
their $\Delta$-distance projections
\begin{equation*}
\bar r:=d_{\Delta}(x,rx):{\mathbb N}\to\Delta .
\end{equation*}
The subgroup $\Gamma<G$ is URU iff
the paths $\bar r$ are drifting away from $\partial\Delta$
at a {\em uniform linear rate}, in a coarse sense.\footnote{In the general $\tau_{mod}$-regular case,
$\partial\Delta$ is replaced with $\partial_{\tau_{mod}}\Delta=V(0,\partial_{\tau_{mod}}\si_{mod})$,
the union of the walls of $\partial\Delta$ not containing the sector $V(0,\tau_{mod})$.}
This is equivalent to the {\em uniform linear growth} of $\alpha\circ \bar r$ for all {\em simple} roots $\alpha$.
On the other hand,
$\Gamma<G$ is CLI
iff the $\alpha\circ \bar r$ {\em uniformly increase}, again in a coarse sense,
for all {\em positive} roots $\alpha$.\footnote{In the $\tau_{mod}$-regular case,
one takes the simple, respectively, positive roots
which do not vanish on $V(0,\tau_{mod})$,
equivalently, which are nonnegative on the symmetrized cone $W_{\tau_{mod}}\Delta\subset F_{mod}$.
Here $W_{\tau_{mod}}<W$ denotes the stabilizer of $\tau_{mod}$.}
There is the following geometric interpretation of the CLI condition from the Morse viewpoint:
CLI is equivalent to the $\bar r$ being {\em uniform Morse quasigeodesic rays}.
For arbitrary Lipschitz paths ${\mathbb N}\to\Delta$,
the linear drift condition is strictly weaker than the Morse condition.
In particular, URU follows from CLI and, on the face of it, appears weaker.
However,
it is not hard to see that the paths $\bar q:=d_{\Delta}(q(0),q):[0,T]\to\Delta$
coming from uniform Morse quasigeodesics $q:[0,T]\to X$ are themselves uniform Morse quasigeodesics in $\Delta$.
In particular, CLI is a consequence of Morse.
Thus,
\cite[Thm.\ 1.3]{GGKW1} and in particular the implication {\em Anosov$\Rightarrow$CLI}
follow from \cite{morse,mlem}
(that the latter implication also follows
was stated as unclear in \cite[\S 1.3 of version 5]{GGKW1}).
On the other hand,
the implication {\em URU$\Rightarrow$Anosov},
which is based on our Higher Rank Morse Lemma \cite{mlem},
does not follow from \cite{GGKW1}.
\end{rem}
\subsection{Consequences}
In this section, we briefly discuss some properties shared by the groups
satisfying (one of) the conditions listed in Theorem~\ref{thm:main}.
In some cases, it is more natural to talk about representations rather than subgroups.
We call a representation $\Gamma\to G$ of a word hyperbolic group $\Gamma$ a
{\em Morse representation} if some (every) orbit map $\Gamma\to X$ is Morse,
see \cite{morse}.
\bigskip
{\bf 1. Local-to-global principle.}
The local-to-global principle for Morse quasigeodesics (Theorem~\ref{thm:L2G})
implies one for Morse representations:
\begin{thm}
[Local-to-global principle for Morse representations \cite{morse}]
\label{thm:ltgpmrp}
Suppose that $\rho: \Gamma\to G$ is a representation of a word hyperbolic group $\Gamma$
such that some orbit map $\Gamma\to X$, restricted to a sufficiently large ball in the Cayley graph
(say, centered at $1$),
is {\em locally Morse} of sufficiently good quality.
Then $\rho$ is a Morse representation.
More precisely, given Morse data $(\Theta, B, L, A, D)$ and the scale $S$ determined by them via Theorem~\ref{thm:L2G}
(for some $\Theta'$),
if the orbit map
$\Gamma\to X$
sends discrete geodesic segments of length $\leq S$ in the Cayley graph
(say, passing through $1$)
to $(\Theta, B, L, A, D)$-Morse quasigeodesic segments in $X$,
then $\rho$ is a Morse representation.
\end{thm}
{\bf 2. Structural stability.}
Structural stability was first established for convex-cocompact subgroups of rank 1 Lie groups by Sullivan,
see section \ref{sec:corGF}. The following theorem is a generalization of Sullivan's result:
\begin{thm}[Structural stability of Morse representations]
\label{thm:strstb}
For a word hyperbolic group $\Gamma$,
the space of Morse representations $\rho:\Gamma\hookrightarrow G$
is an open subset of $\operatorname{Hom}(\Gamma,G)$.
On this subset,
the boundary map
$\beta_{\rho}: \partial_{\infty} \Gamma \to \Lambda_{ch}(\rho(\Gamma))$
depends continuously on the Morse representation $\rho$.
\end{thm}
Here, the representation space $\operatorname{Hom}(\Gamma, G)$
is equipped with the topology of pointwise convergence,
i.e. $\lim_{n\to\infty} \rho_n=\rho$ iff for every element $\gamma$ of $\Gamma$
$$
\lim_{n\to\infty} \rho_n(\gamma)=\rho(\gamma).
$$
\begin{remark} Structural stability of Anosov representations has first been proven in \cite{Labourie}
for fundamental groups of closed negatively curved manifolds,
and in \cite{GW} for general word hyperbolic groups.
Our proof in \cite{morse}
derives structural stability,
from the Morse viewpoint,
as a direct consequence of the local-to-global principle for Morse maps (see Theorem \ref{thm:L2G}).
\end{remark}
\medskip
{\bf 3. Semidecidability.}\footnote{See section \ref{sec:corGF} for the definition of semidecidability.}
It is {\em semidecidable} whether a representation $\rho: \Gamma\to G$
of a word hyperbolic group $\Gamma$ is Morse.
\medskip
The proof of semidecidability given in \cite{morse} is also based on the local-to-global principle for Morse maps: The algorithm explores finite subsets $F$ of the Cayley graph of $\Gamma$ and ranges of Morse data $(\Theta, B,L,A,D)$ to determine if an orbit map $\Gamma\to X$ is $(\Theta, B,L,A,D)$-Morse on $F$.
\medskip
{\bf 4. Cocompactness.} Each Morse subgroup $\Gamma< G$ acts properly discontinuously and cocompactly on various domains associated with the action $\Gamma\curvearrowright X$. These domains are contained in the flag manifold $\partial_{F\ddot u} X= G/B$ and in the Finsler compactification of $X$. We refer to sections \ref{sec:dd} and \ref{sec:bordif} for the precise statements.
\subsection{Examples: Morse-Schottky subgroups}
Let $\Gamma$ be a free group on $k$ generators, denoted $\alpha_1, \alpha_2$,..., $\alpha_k$.
Realize each $\alpha_i$ as a {\em regular} hyperbolic isometry $g_i$ of $X$,
i.e.\ $g_i$ preserves a regular geodesic line and translates along it.
Assume furthermore that the isometries $g_i$ are in {\em general position} with respect to each other,
in the sense that the subset of $\partial_{F\ddot u} X$ consisting of the $2k$ attractive and repulsive chambers
$\sigma_1^\pm,...,\sigma_k^\pm$ of the isometries $g_1,...,g_k$ is antipodal.
The following theorem was proven in \cite[Thm. 7.40]{morse} for $k=2$ (2-generated free groups), but the same proof goes through for arbitrary $k\in{\mathbb N}$.
\begin{thm}\label{thm:MS-actions}
There exists $N_0$ such that for all $N\ge N_0$
$$\rho: \alpha_i\mapsto g_i^N, \quad i=1,...,k,$$
defines a (faithful) Morse representation $\rho:\Gamma\to G$.
\end{thm}
\begin{rem}
(i) Regarding earlier work on the construction of free subgroups of Lie groups,
note that Tits, when proving the Tits alternative using his ping-pong argument,
only shows the {\em injectivity} of certain representations of free groups,
although his proof clearly also implies the {\em discreteness} of their images.
Benoist \cite{Benoist} improved on Tits' result and obtained control on the {\em asymptotic geometry}.
In particular,
he produced discrete free subgroups which, in our terminology, are {\em uniformly regular}.
Our construction \cite[Theorem 7.40]{morse}
is the first to control the {\em coarse geometry}.
We prove that the resulting free subgroups are {\em Morse},
which amounts to describing the extrinsic coarse geometry of their orbits
(see Definitions~\ref{def:mrs}(i) and~\ref{def:morssbgp}).
In particular, they are {\em undistorted}.
Whereas the arguments of Tits and Benoist use the dynamics at infinity,
our approach is different.
We work inside the symmetric space and
build up representations of free groups
using a version of the local-to-global principle for Morse representations,
see Theorem~\ref{thm:straight-paths} below.
(ii) In \cite[Theorem~7.40]{morse} we prove a more general version of Theorem~\ref{thm:MS-actions}
which allows for $\tau_{mod}$-regular generators.
\end{rem}
Our proof of Theorem~\ref{thm:MS-actions}
is based on the notion of {\em straight paths} in symmetric spaces.\footnote{In fact,
the notion of straight paths and Theorem~\ref{thm:straight-paths} below
are the key technical tools for proving our local-to-global principles
Theorems~\ref{thm:L2G} and ~\ref{thm:ltgpmrp} for Morse quasigeodesics and Morse subgroups.}
This concept is a higher rank analogue of a piecewise geodesic path in a rank 1 symmetric space, where each edge is sufficiently long and the vertex angles are close to $\pi$;
such paths are known to be uniformly quasigeodesic. In higher rank, the angle condition has to be suitably modified,
in order to make sure that the path bends ``transversally to the flat directions''.
Below is the precise definition.
Let
$$
x_0 x_1 \ldots x_n$$
be a piecewise geodesic path in $X$ with vertices $x_i$.
We call such a path {\em $s$-spaced} if
$$
d(x_{i-1}, x_i)\ge s
$$
for all $i$.
In order to define {\em straightness},
we consider the chain of diamonds
$$
D_i= \diamondsuit_{x_{i-1} x_{i}}
$$
associated to our path
and define {\em $\bar\zeta$-angles}
$$
\angle^{\bar\zeta}(D_{i-1}, D_i)
$$
between consecutive diamonds as follows.
As an auxiliary datum, we fix an $\iota$-invariant regular type $\bar\zeta\in \operatorname{int}(\si_{mod})$.
For every regular segment $xy$, respectively, for the associated diamond
$$
\diamondsuit_{x y}= V(x,\sigma)\cap V(y, \hat\sigma)
$$
we define the tangent vector $v_{xy} \in T_x V(x,\sigma)$ as the unique unit vector of type $\bar\zeta$,
i.e. such that the geodesic ray from
$x$ in the direction $v_{xy}$ is asymptotic to the point $\zeta\in \sigma$ of type $\bar\zeta$,
$\theta(\zeta)=\bar\zeta$.
Then define the $\bar\zeta$-angle
between two diamonds $\diamondsuit_{xy}, \diamondsuit_{xz}$ at $x$
as the Riemannian angle
$$
\angle^{\bar\zeta}(\diamondsuit_{xy}, \diamondsuit_{xz}) :=\angle(v_{xy}, v_{xz}).
$$
\begin{definition}
Let $\epsilon>0$ and $\Theta \subset \operatorname{int}(\si_{mod})$ be compact convex.
The piecewise geodesic path $x_0 x_1 \ldots x_n$ is called $(\Theta,\epsilon)$-{\em straight} if for all $i$ the segments
$x_{i-1} x_i$ are $\Theta$-regular
and
$$
\angle^{\bar\zeta}(D_{i-1}, D_i)\ge \pi- \epsilon.
$$
\end{definition}
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig13.pdf}
\caption{A string of diamonds.}
\label{figure12.fig}
\end{figure}
Now we can formulate:
\begin{thm}
[Local-to-global principle for straight paths \cite{morse}]
\label{thm:straight-paths}
Each sufficiently spaced and sufficiently straight
piecewise geodesic path in $X$
is a uniform Morse quasigeodesic.
More precisely,
given
compact convex subsets $\Theta, \Theta'\subset \operatorname{int}(\si_{mod})$ with
$\Theta\subset \operatorname{int}(\Theta')$,
there exist numbers $\epsilon, s, B, L, A, D>0$,
depending also on $\bar\zeta$,
such that each $s$-spaced and $(\Theta,\epsilon)$-straight piecewise geodesic path
is $(\Theta',B,L,A,D)$-Morse.
\end{thm}
Now we return to the setup of Theorem~\ref{thm:MS-actions}
and apply this local-to-global result to construct Morse representations of free groups.
We let $T$ denote the Cayley tree of $\Gamma$ associated with the generating set $\{\alpha_1,...,\alpha_k\}$;
its vertex set is identified with $\Gamma$.
For a point $x\in X$,
we extend the orbit map $o_x:\Gamma\to\Gamma x\subset X, \gamma\mapsto\rho(\gamma)x$,
to a piecewise geodesic map $f_x: T\to X$
of the Cayley tree
by sending its edges to geodesic segments in $X$.
We would be done if we could arrange $f_x$ to be straight in the sense that
it maps lines in $T$ to $s$-spaced and $(\Theta,\epsilon)$-straight paths in $X$ for good data $s,\Theta$ and $\epsilon$,
because these image paths would be uniformly Morse by Theorem~\ref{thm:straight-paths},
which means that the representation $\rho$ would be Morse.
However, this is impossible to arrange if $k\geq2$,
due to the ``lack of space'' in the unit tangent spheres:
The $2k$ image edges connecting the orbit point $x$
to the adjacent orbit points $\rho(\beta)x$, $\beta\in\{\alpha_1^{\pm1},\ldots,\alpha_k^{\pm1}\}$,
cannot have pairwise $\bar\zeta$-angles close to $\pi$,
equivalently,
the $2k$ directions $v_{x\rho(\beta)x}$ at $x$ cannot have pairwise Riemannian angles close to $\pi$.\footnote{In
euclidean buildings,
it is easy to construct straight piecewise geodesic trees.}
We circumvent this difficulty by looking at {\em midpoint paths}:
Let $l\subset T$ be a line passing through the sequence of consecutive vertices
$\ldots,\gamma_{-1},\gamma_0,\gamma_1,\ldots$.
Its $f_x$-image is the biinfinite piecewise geodesic path
$$ \ldots x_{-1}x_0x_1\ldots$$
with vertices at the orbit points $x_i=\rho(\gamma_i)x$.
Let $m_i$ denote the midpoint of the segment $x_{i-1}x_i$
and consider the {\em midpoint path}
$$ \ldots m_{-1}m_0m_1\ldots$$
We are again done if we can show that these midpoint paths for all lines $l\subset T$ are uniformly
well spaced and straight.
This approach works
and it is how our proof of Theorem \ref{thm:MS-actions} proceeds:
The point $x\in X$ can be chosen arbitrarily.
We show that for a suitable compact convex subset $\Theta\subset \operatorname{int}(\si_{mod})$ and arbitrary $\epsilon,s>0$
the midpoint paths for all lines $l\subset T$
are $s$-spaced and $(\Theta,\epsilon)$-straight, provided that $N$ is sufficiently large.
The $s$-spacedness for large $N$ easily follows from our genericity assumption
that the chambers $\sigma^\pm_1,...,\sigma^\pm_k$ are pairwise antipodal.
Due to $\Gamma$-equivariance, the $(\Theta,\epsilon)$-straightness condition
can be verified locally by looking at special short midpoint paths:
For every triple of generators
$$
\alpha, \beta, \gamma\in \{\alpha_1^{\pm 1},...,\alpha_k^{\pm 1}\}
$$
with $\alpha\ne \beta, \beta\gamma\ne 1$ consider the quadruple
$$
(\gamma_0,\gamma_1,\gamma_2,\gamma_3):=(\alpha, 1,\beta, \beta\gamma)
$$
of elements in $\Gamma$.
Then each such $\gamma_0\gamma_1\gamma_2\gamma_3$ is a geodesic path in $T$,
and it suffices to check $(\Theta,\epsilon)$-straightness for the associated midpoint paths $m_0m_1m_2$.
The latter is deduced from smallness of $\bar\zeta$-angles,
$$
\angle^{\bar\zeta}(\diamondsuit_{m_1m_0}, \diamondsuit_{m_1x}) =\angle(v_{m_1m_0}, v_{m_1x}) <\frac{\epsilon}{2},
$$
and sufficient spacing (to ensure the regularity).
The smallness of angles and the regularity are verified
by a direct geometric argument using the regularity of the elements $g_i$ and their general position.
We refer the reader to \cite[sect. 7.6]{morse} for the details.
\begin{figure}[tbh]
\includegraphics[width=90mm]{figure13a.pdf}
\caption{Special short midpoint paths.}
\label{figure13a.fig}
\end{figure}
\medskip
\subsection{Further examples}
Other examples of Morse, equivalently, Anosov subgroups are provided by Hitchin representations, see \cite{Labourie},
which are the origin of the notion of Anosov representations. Similarly, one obtains the {\em complex version} of these examples: Start with the (unique up to isomorphism) irreducible representation
$$
\rho_n: SL(2,{\mathbb C})\to G=SL(n,{\mathbb C}).
$$
Then for each discrete subgroup $\Gamma< SL(2,{\mathbb C})$ its image $\rho_n(\Gamma)=\Gamma_n< G$ is an RA subgroup.
If, in addition, $\Gamma$ is convex-cocompact, then $\Gamma_n$ is RCA, equivalently, Anosov.
Due to structural stability,
any representation $\rho$ sufficiently close to $\rho_n$ is also Anosov.
In the case when ${\mathbb H}^3/\Gamma$ is noncompact one obtains many ``interesting'' deformations of $\rho_n$, cf. \cite[Thm. 8.44]{Kapovich00} and \cite{HP}.
Note that, unlike in the case of Hitchin representations, the connected component of $\rho_n: \Gamma\to SL(n,{\mathbb C})$ also contains representations which are not Anosov
(and some which are not discrete and faithful), since this is already the case for $SL(2,{\mathbb C})$-representations.
\medskip
Weakening the regularity condition to $\tau_{mod}$-regularity and, accordingly, Anosov actions to $\tau_{mod}$-Anosov actions, one obtains more classes, e.g.
groups of projective transformations acting properly discontinuously and cocompactly on bounded strictly convex solids in the affine space ${\mathbb R}^n$. Such groups are $\tau_{mod}$-Anosov subgroups of $PGL(n+1,{\mathbb R})$, where $\tau_{mod}$ is the edge of $\sigma_{mod}$ corresponding to the partial flag {\em line $\subset$ hyperplane}, see \cite[Prop. 6.1]{GW}.
\bigskip
\section{Discrete subgroups: Domains of proper discontinuity}\label{sec:4}
Note that, so far, we were only looking at limit sets and ignoring domains of discontinuity. The first successful, but limited, treatment of domains of discontinuity was given in \cite{GW}:
It was proven there that each Anosov subgroup $\Gamma< G$ admits a (possibly empty!)
domain of {proper} discontinuity in a certain bundle $G/AN$ over the full flag manifold $\partial_{F\ddot u} X\cong G/B$.
These domains were obtained by using a certain embedding of $G$ into a larger Lie group. We will now describe a more comprehensive
{\em and intrinsic}
treatment of domains of discontinuity, following \cite{coco13,coco15}.
There are two key points in this treatment:
1. Domains of {\em proper} discontinuity are not unique and therefore {\em not canonical} (unlike in the rank 1 case).
There are several natural choices which depend on a certain auxiliary combinatorial datum.
2. Mumford's GIT (Geometric Invariant Theory) in algebraic geometry serves as a guiding principle.
\subsection{Digression: Mumford's GIT}
We begin with the basic {\em topological dynamics} framework of GIT, to be found not in Mumford's book \cite{Mumford}, but e.g.\ in Newstead's lectures \cite{Newstead} and Dolgachev's book \cite{Dolgachev}.
\medskip
{\bf GIT Mantra:} Let $H$ be a topological group (say, a discrete group or an algebraic group), $Z$ a compact Hausdorff space and $H\times Z\to Z$ a continuous action of $H$ on $Z$. We would like to form a quotient $Z//H$ which is again compact and Hausdorff. In order to do so, we have to partition $Z$ ($H$-invariantly) into {\em semistable} and {\em unstable} points:
$$
Z= Z_{sst} \sqcup Z_{u}
$$
so that $Z_{u}$ is closed. Note that $Z_{sst}$ could be empty. This partition is further refined as follows:
1. $Z_u$ is filtered as an increasing union of closed subsets
$$
Z_0\subset Z_1\subset \ldots \subset Z_{u},
$$
where $Z_0$ is the set of {\em maximally unstable points}.
2. $Z_{sst}$ contains an open subset $Z_{st}$ of {\em stable} points,
on which the $H$-action is {\em proper}.
(There is also a subset of {\em nice semistable points}, but we will ignore this.)
The set of maximally unstable points is, typically, canonical and depends only on the action $H\curvearrowright Z$, while the rest of the unstable filtration (including the choice of $Z_{u}$ itself) depends on an auxiliary datum.
In the algebro-geometric context,
this datum consists of a {\em positive algebraic line bundle} $L\to Z$ (defined up to its tensor power\footnote{In
the sense that $L$ and $L^{\otimes n}$, $n>0$, lead to the same sets of stable/semistable points.}),
while in our geometric group theory context, it will be a {\em thickening}, see section~\ref{sec:thick} below.
One can think of this partition as: {\em good} (stable), {\em bad} (unstable) and {\em ugly} (semistable but not stable).
The construction works best when the partition is {\em neat}\footnote{Note that this is our terminology, it appears that algebraic geometers do not have one.}
in the sense that {\em stable= semistable}, i.e., ugly = $\emptyset$.
In order to form the GIT quotient $Z//H$ do the following:
a. Remove the {\em bad} ($Z_{u}$).
b. Keep the {\em good} ($Z_{st}$) and take the usual topological quotient $Z_{st}/H$ (with the quotient topology); it will be Hausdorff. The set $Z_{st}$ will be {\bf a} domain of
{proper} discontinuity (in the framework of discrete groups), or {\em domain of properness} in general.
In the {\em neat} case, you are done. If not:
c. Deal with the {\em ugly}: For the semistable points use the {\em extended orbit equivalence relation}:
$$
z\sim z'\iff \overline{H z}\cap \overline{H z'}\ne \emptyset
$$
where the closure is taken in $Z_{sst}$. For stable points this amounts to the usual orbit equivalence:
$$
H z= H z'.
$$
Equip the quotient with the quotient topology. Now, if the stars are aligned in your favor, then the resulting quotient is both compact and Hausdorff.
\begin{remark} One can (and should!) vary the auxiliary datum and watch how the quotient space transforms. In the context of symplectic geometry ({\em symplectic reduction}), one sees the variation of the symplectic structure; generically, one has the {\em neat case} and some degeneration occurs when {\em semistable, non-stable points} appear.
This is called {\em wall-crossing}.
\end{remark}
What we managed to do in \cite{coco13} is to adapt this mantra to the Morse group actions on various flag manifolds $\Gamma\curvearrowright Z=G/P$. Note that:
1. \cite{coco13} dealt only with the regular case, but \cite{coco15} covers the general case of $\tau_{mod}$-regular Morse subgroups.
2. \cite{coco13,coco15} succeeded only in the {\em neat} case: We do not have a theory dealing with the {\em ugly}.
\medskip
\begin{bexample}
\label{bex:newst}
(cf. Newstead's Example 1.1): Consider the action of $\Gamma=\<\gamma\>\cong{\mathbb Z}$ on the real projective plane, which, in an affine patch, is given by:
$$
\gamma(x,y)= (\lambda x, \lambda^{-1} y), \quad\lambda > 1.
$$
The domain of discontinuity is the projective plane minus the three fixed points
$[1:0:0]$, $[0:1:0]$ and $[0:0:1]$,
which are the only points with infinite stabilizer.
However, the action on this domain is not proper and the quotient is non-Hausdorff.
The maximally unstable set consists only of the two points
$$[1:0:0], [0:1:0].$$
The projective plane minus the $x$- and $y$-axes belongs to the stable part $Z_{st}$
(for any choice of a ``line bundle'', or a ``thickening'', in our terminology).
There, the action is proper.
In order to obtain a larger domain of {\em proper} discontinuity,
one must add a suitable {\em part} of the coordinate axes minus the origin $[0:0:1]$.
Now, we describe three different such enlargements and the corresponding quotients (two of which will be homeomorphic),
resulting from suitable choices of auxiliary data:
Left, Right (both {\em neat}) and the Center ({\em non-neat}).
{\bf Left:} Make the entire $x$-axis unstable and include the $y$-axis (minus the origin) into the set of stable points.
The action of $\Gamma$ will be properly discontinuous, cocompact with the quotient $Z_{st}/\Gamma\cong T^2$.
{\bf Right:} Do the same, but make the $y$-axis unstable and add the $x$-axis (minus the origin) to the stable set.
The quotient is again $T^2$.
Both left and right partitions of the projective plane will result from what we call {\em balanced thickenings};
these will be introduced in Definition~\ref{defn:thickenings} and equation~\eqref{eq:thickening}.
{\bf Center:} Declare the coordinate axes (including the origin) to be {\em semistable but not stable}.
Then the action on $Z_{sst}$ is not proper (of course!), but the GIT quotient $Z//\Gamma$ is compact and Hausdorff:
It results from $T^2$ by collapsing the union of two parallel essential simple loops to a point.
\end{bexample}
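The failure of properness on the full domain of discontinuity can also be seen numerically in the affine patch. The following sketch is our illustration (the choice $\lambda=2$ and the helper names are assumptions, not from the text): it exhibits points $q_n$ converging to a point on the $y$-axis whose images $\gamma^n(q_n)$ converge to a point on the $x$-axis.

```python
lam = 2.0  # any fixed lambda > 1 gives the same dynamics

def gamma_pow(p, n):
    """n-th power of the generator gamma(x, y) = (lam*x, y/lam) in the affine patch."""
    x, y = p
    return (lam**n * x, lam**(-n) * y)

# q_n = (lam^{-n}, 1) converges to (0, 1) on the y-axis, while
# gamma^n(q_n) = (1, lam^{-n}) converges to (1, 0) on the x-axis:
for n in (1, 5, 20):
    q = (lam**-n, 1.0)
    print(n, q, gamma_pow(q, n))
```

Consequently any compact neighborhood $K$ of $\{(0,1),(1,0)\}$ meets infinitely many of its translates $\gamma^n K$, which is exactly the failure of properness.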
Note that in Mumford's setting, the degree of instability is determined by a real (actually, rational) number, the {\em slope}, the value of the Hilbert-Mumford {\em numerical function}. The unstable filtration is given by the standard order on the real numbers: Positive slope means unstable, and the more positive the slope, the more unstable the point.
In the KLP setting, the real numbers are replaced with the {\em Weyl group $W$} and its (partial) Bruhat order:
The smaller the value $w\in W$, the more unstable a point is.
The value $w$ will measure ``how far'' an element of $G/B$ is from the chamber limit set $\Lambda_{ch}(\Gamma)$. What corresponds to ``zero'' in this order is not at all clear (and depends on the choice of a thickening).
Loosely speaking, we have to ``cut $W$ in half'' in order to define an analogue of zero.
\begin{rem}
This description captures only the $\Gamma$-action on $G/B$; in full generality, we also need Bruhat orders on quotients of $W$ by its parabolic subgroups, but we will ignore this here.
\end{rem}
\subsection{Relative position, Bruhat order and thickenings}
\label{sec:thick}
Given two chambers $\sigma, \sigma'\in G/B$ we define their $W$-valued {\em distance}
or the {\em position of $\sigma$ relative to $\sigma'$}
$$
\delta(\sigma, \sigma') = w\in W
$$
as follows.
Let $\kappa: a_{mod}\to a\subset \partial_{Tits} X$ be a chart\footnote{See Appendix \ref{sec:modsp}.}
whose image is an apartment $a$ in $\partial_{Tits} X$
containing $\sigma, \sigma'$ and such that $\kappa$ sends the model chamber $\si_{mod}\subset a_{mod}$ to the chamber $\sigma'$.
Then $w\in W=Aut(a_{mod})$ is defined as the unique element sending $\si_{mod}$ to $\kappa^{-1}(\sigma)$.
Since transition maps
between different charts are restrictions of elements of $W$, it follows that
$w$ is independent of the choice of $\kappa$.\footnote{Note that, by the convexity of apartments,
$a$ must contain the convex hull of $\sigma'\cup\sigma$.
Since $\kappa^{-1}$ is determined on the chamber $\sigma'$,
it follows that it is determined on this convex hull, and in particular also on $\sigma$.}
The relative position is $G$-invariant: $\delta(g\sigma, g\sigma')= \delta(\sigma, \sigma')$ for all $g\in G$.
In general, it is nonsymmetric:
$$\delta(\sigma', \sigma)= \delta(\sigma, \sigma')^{-1} .$$
We also define the {\em complementary position}
$$\hbox{c-}\delta(\sigma,\sigma'):= w_0 \delta(\sigma,\sigma'),$$
where $w_0$ is the longest element of $W$, see \S \ref{sec:Basicgeometry}. In other words,
if $a\subset \partial_{Tits} X$ is an apartment containing $\sigma, \sigma'$ and
$\widehat{\sigma'}\subset a$ is the chamber opposite to $\sigma'$ then
$$
\hbox{c-}\delta(\sigma,\sigma') = \delta(\sigma, \widehat{\sigma'}).
$$
Since we wish to use $\delta$ as a ``distance'' on $G/B$, we need a (partial) order on $W$ which allows us to compare distances. This order is the {\em Bruhat order}, which we discuss next.
We first introduce the Bruhat order combinatorially.
(A very detailed discussion of the Bruhat order with many examples can be found in \cite[ch.\ 2]{BB}.)
Afterwards we discuss a geometric way of defining it as the {\em folding order},
which is how we use it in our papers.
\medskip
{\bf Bruhat order.} We fix a standard generating system $S$ for $W$
(its elements are called {\em simple reflections}
and they are the reflections in the faces of the positive fundamental chamber $\si_{mod}$),
which defines the {\em word length} $\ell$ on $W$.
A partial order on $W$, called the {\em (strong) Bruhat order}, is induced by the following convention (and transitivity):
If $v=ur$, where $r$ is a reflection in $W$ (a conjugate of one of the generators) and $\ell(u)< \ell(v)$, then $u<v$.
In this case one writes
$$
u \stackrel{r}{\longrightarrow} v.
$$
In particular, $1$ is the smallest element of $W$ and $w_0$ is the largest.
Equivalently, this order is given by the condition that $u\le v$ iff a subword (consisting of not necessarily consecutive letters) of a reduced word for $v$ equals a reduced word for $u$.
We note that left multiplication with $w_0$ {\em reverses} the Bruhat order.
\begin{example}
Consider $W=S_n$, the permutation group on $n$ letters. As usual, we identify permutations $\pi$ with the
strings $\pi(1)\ldots \pi(n)$, inserting commas between adjacent symbols $\pi(i), \pi(i+1)$ only when needed for clarity.
The (standard) simple reflections in $W$ are the transpositions $s_1=(1,2), s_2=(2,3),..., s_{n-1}= (n-1,n)$.
In the examples below we will always equip $S_n$ with this generating set. The reflections in $W$ are the
transpositions $(i,j), i < j$. For $r=(i,j)$, the notation $\pi \stackrel{r}{\longrightarrow} \pi'$ means that one moves from $\pi$ to $\pi'$ by transposing $\pi(i), \pi(j)$ in the string $\pi(1)\ldots \pi(n)$, where $\pi(i)<\pi(j)$. In the poset diagrams of $S_3$ and $S_4$ below we connect nodes
$u$ and $v$ whenever $u \stackrel{r}{\longrightarrow} v$.
\end{example}
\begin{figure}[tbh]
\includegraphics[width=90mm]{A2a.pdf}
\caption{The poset diagram of the Bruhat order for $W=S_3$. The larger permutations are higher in the figure; $w_1\ge w_2$ iff the corresponding nodes of the poset diagram are connected by a descending edge path. The circled nodes of the diagram constitute the unique balanced thickening.}
\label{figure13.fig}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=90mm]{A11.pdf}
\caption{The poset diagram of the Bruhat order for $W={\mathbb Z}_2\times {\mathbb Z}_2$ with the generators $a, b$ and $w_0=ab$.
The larger permutations are higher in the figure. The circled nodes of the diagram constitute one of the two balanced thickenings.}
\label{A11.fig}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=90mm]{B2.pdf}
\caption{The poset diagram of the Bruhat order for $W=B_2$. The larger permutations are higher in the figure.
The circled nodes of the diagram constitute one of the two balanced thickenings.}
\label{B2.fig}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=90mm]{G2.pdf}
\caption{The poset diagram of the Bruhat order for $W=G_2$ which is generated by two simple reflections $a, b$.
The action of $w_0=(ab)^3$ is by the 180 degree rotation of the diagram. The larger permutations are higher in the figure. The subset $\{1, a, b, ab, ba\}$ is contained
in every fat thickening. The circled nodes of the diagram constitute one of the two balanced thickenings.}
\label{G2.fig}
\end{figure}
\begin{figure}[tbh]
\centerline{\epsfxsize=4.5in \epsfbox{A3.pdf}}
\caption{The poset diagram of the Bruhat order for $W=S_4$, see \cite[page 31]{BB}. The larger permutations are higher in the figure. The involution $w_0$ (marked by the 2-sided arrows) acts on this diagram by reversing the order of the labels. For instance, $3241 \stackrel{w_0}{\longleftrightarrow} 1423$.
Each balanced thickening contains exactly one vertex of each pair $(1432, 2341), (2413, 3142), (3214, 4123)$ since the members of each pair are swapped by $w_0$.
It follows that all balanced thickenings contain the vertices $1234, 1243, 1324, 2134, 1342, 2143, 3124$ (marked in solid black). We describe the balanced thickenings by what other vertices they contain.
(i) Including in addition both vertices $1423, 2314$ results in a ``metric'' balanced thickening
(see \cite[\S 4.4]{coco13} or \cite[\S 3.4.1]{coco15} for the definition).
There are 8 such thickenings.
(ii) There are two ``nonmetric'' balanced thickenings, each determined by whether the vertex $3241$ or $4132$ is chosen. For instance, choosing $3241$ forces the thickening to contain the vertices $2341, 3142, 3214$ and $2314$ (these vertices are marked in grey). In total, there are 10 balanced thickenings.}
\label{S4.fig}
\end{figure}
\medskip
{\bf Geometric interpretation of the Bruhat order as the folding order,}
see \cite[\S 4.2+3]{coco13}.
Fix a reference chamber $\si_{mod}\subset a_{mod}$.
It will represent the identity element $1\in W$.
For each chamber $\bar\sigma\subset a_{mod}$ we have a unique $w\in W$ such that $\bar\sigma=w\si_{mod}$.
Thus, we will identify $W$ with the set of chambers in the model apartment $a_{mod}$. Given a reflection $s_H\in W$ whose fixed hyperplane (wall) $H$ separates a chamber $\bar\sigma$ from $\sigma_{mod}$, we set
$$
s_H\bar\sigma< \bar\sigma.
$$
Now extend this order by transitivity to the entire set of chambers in $a_{mod}$. The result is the Bruhat order.
It is useful to further redefine the Bruhat order non-recursively in terms of ``foldings'' of the model apartment onto itself.
By a {\em folding map} $a_{mod}\to a_{mod}$,
we mean a type preserving continuous map which sends chambers isometrically onto chambers.
In particular, such maps are 1-Lipschitz.
Intuitively, a folding map fixing the reference chamber $\si_{mod}$ moves the other chambers in $a_{mod}$ ``closer'' to $\si_{mod}$.
The simplest examples of folding maps fixing $\si_{mod}$ are obtained as follows:
A wall $m\subset a_{mod}$ splits $a_{mod}$ into two (simplicial) hemispheres,
the inner hemisphere $h^+$ containing $\sigma_{mod}$
and the outer hemisphere $h^-$.
This decomposition gives rise to the folding map
which fixes $h^+$ and reflects $h^-$ onto it.
We call a composition of such folding maps at walls $m_i$
a {\em special folding}.
The above geometric interpretation of the Bruhat order can thus be rephrased:
Two chambers $\bar\sigma_1,\bar\sigma_2\subset a_{mod}$ satisfy $\bar\sigma_1\leq\bar\sigma_2$
iff there exists a special folding moving $\bar\sigma_2$ to $\bar\sigma_1$.
In general, not all foldings are special.
Nevertheless, there is no need to recognize whether or not a folding is special.
Indeed, one can show that it makes no difference to the order
whether one uses all foldings fixing $\si_{mod}$ or only the special ones:
\begin{thm}[{\cite[Cor 4.5]{coco13}}]
For chambers $\bar\sigma_1,\bar\sigma_2\subset a_{mod}$
it holds that $\bar\sigma_1\leq\bar\sigma_2$
iff there exists a folding map $a_{mod}\to a_{mod}$ fixing $\si_{mod}$ and sending $\bar\sigma_2\mapsto\bar\sigma_1$.
\end{thm}
\medskip
{\bf Thickenings.} We will use special subsets $\mathop{\hbox{Th}}\nolimits\subset W$, called ``thickenings of $1\in W$ inside $W$''.
They are defined as unions of sublevels of the Bruhat order,
i.e.\ they satisfy the property:
$$ v\in\mathop{\hbox{Th}}\nolimits\;\hbox{~~and~~}\; u<v\Rightarrow u\in\mathop{\hbox{Th}}\nolimits .$$
In particular, $1\in \mathop{\hbox{Th}}\nolimits$ for every {nonempty thickening} $\mathop{\hbox{Th}}\nolimits$. One can think of {thickenings} as being starlike with respect to $1\in W$ (and the Bruhat order defining intervals). Simple examples of thickenings are given by the ``closed balls''
$$
B(1, r)=\{w\in W: w\le r\},
$$
where one can think of $r\in W$ as the ``radius'' of the ball. General thickenings are unions of such ``balls''.
\begin{rem}
In the theory of posets, thickenings are called {\em (lower) ideals} and thickenings of the form $\{w\le r\}$ are called {\em principal ideals}.
\end{rem}
A thickening is {\em proper} if it is nonempty and is different from the entire $W$.
The latter condition is equivalent to the requirement that the longest element
$w_0$ is not in the thickening.
\begin{definition}\label{defn:thickenings}
1. A thickening $\mathop{\hbox{Th}}\nolimits\subset W$ is {\em slim} if $\mathop{\hbox{Th}}\nolimits\cap w_0 \mathop{\hbox{Th}}\nolimits=\emptyset$.
2. A thickening $\mathop{\hbox{Th}}\nolimits\subset W$ is {\em fat} if $\mathop{\hbox{Th}}\nolimits\cup w_0 \mathop{\hbox{Th}}\nolimits=W$.
3. A thickening $\mathop{\hbox{Th}}\nolimits\subset W$ is {\em balanced} if it is both slim and fat.
\end{definition}
Thus, for each $w\in W$,
a slim (fat, balanced) thickening contains at most (at least, exactly) one
of the pair of complementary elements $w$ and $w_0w$.
In particular, a balanced thickening consists of precisely half of the elements of $W$.
\begin{theorem}[{\cite{coco13, coco15}}]
Each finite Weyl group $W$ admits at least one balanced thickening.
\end{theorem}
\medskip
{\bf Examples.} Among the rank 2 Weyl groups, $A_2$ admits exactly one balanced thickening,
whereas $B_2, G_2$ and $A_1\times A_1$ admit exactly two!
\begin{example}
Consider $W= {\mathbb Z}_2 \times {\mathbb Z}_2$, the Coxeter group of the type $A_1\times A_1$, see Figure \ref{A11.fig}.
We let $a, b$ denote the generators of the direct factors of $W$; these are the simple reflections and the only reflections in $W$ (since $W$ is abelian). The poset graph of $W$ is completely described by the inequalities $w_0=ab > a > 1$ and $w_0> b>1$.
The action of the involution $w_0$ swaps the nodes $a, b$ as well as the nodes $1, w_0$. Therefore, the thickenings $B(1, a)$, $B(1, b)$ are balanced. The only other two proper thickenings are $\{1\}$ and $W - \{w_0\}$; these are respectively slim and fat. In particular, the group $W= {\mathbb Z}_2 \times {\mathbb Z}_2$ has exactly two balanced thickenings.
\end{example}
\begin{example}
Consider $W=S_3$, the Coxeter group of the type $A_2$. We refer the reader to
Figure \ref{figure13.fig} for the description of the poset graph of $W$.
We will describe all proper thickenings in $W$. Every nonempty thickening contains $1\in W$. Each thickening different from $\{1\}$ also contains at least one of the transpositions $(12)$ and $(23)$.
The slim thickenings consisting of two elements are $B(1, (12))$ and $B(1, (23))$.
Since the involution $w_0$ swaps the upper and lower halves of the poset diagram, the thickening $I:= \{(123), (132), (213)\}$ is balanced.
A thickening containing either $(231)$ or $(312)$
also contains $I$.
Hence $I$ is the only balanced thickening.
The fat thickenings consisting of four elements are $B(1, (231))$ and $B(1, (312))$.
\end{example}
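The case analysis of this example can also be checked by brute force. The Python sketch below (our illustration; all names are ours) builds the Bruhat order of $S_n$ from its definition in the text, $u<ur$ for a reflection $r$ with $\ell(u)<\ell(ur)$ together with transitivity, enumerates all lower ideals, and tests the slim/fat conditions, with $w_0$ acting by left multiplication, i.e.\ complementing each entry of the string:

```python
from itertools import chain, combinations, permutations

def length(p):
    """Word length = number of inversions of the string."""
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

def bruhat_pairs(n):
    """All pairs u <= v: generated by u < u*r (r a transposition, acting by
    swapping two positions, increasing the length) and transitivity."""
    perms = list(permutations(range(1, n + 1)))
    leq = {(p, p) for p in perms}
    for u in perms:
        for i, j in combinations(range(n), 2):
            v = list(u)
            v[i], v[j] = v[j], v[i]
            v = tuple(v)
            if length(v) > length(u):
                leq.add((u, v))
    changed = True
    while changed:  # transitive closure
        changed = False
        for (a, b) in list(leq):
            for (c, d) in list(leq):
                if b == c and (a, d) not in leq:
                    leq.add((a, d))
                    changed = True
    return perms, leq

def balanced_thickenings(n):
    perms, leq = bruhat_pairs(n)
    w0 = lambda p: tuple(n + 1 - x for x in p)  # left multiplication by w_0
    out = []
    for k in range(len(perms) + 1):
        for bits in combinations(perms, k):
            T = set(bits)
            ideal = all(u in T for v in T for u in perms if (u, v) in leq)
            slim = all(w0(v) not in T for v in T)
            fat = all(v in T or w0(v) in T for v in perms)
            if ideal and slim and fat:
                out.append(frozenset(T))
    return out

print(balanced_thickenings(3))  # the unique balanced thickening {123, 132, 213}
```

Running it for $n=3$ returns exactly one ideal, $\{123, 132, 213\}$, as asserted in the example above.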
\begin{example}\label{ex:B2}
Consider the Coxeter group $W$ of the type $B_2$. We refer the reader to
Figure \ref{B2.fig} for the poset graph of $W$. The action of $w_0=(ab)^2$ swaps $ab$ and $ba$, $a$ and $bab$, $b$ and $aba$. Since balanced thickenings consist of exactly four elements, they can contain neither $aba$ nor $bab$.
Also, a balanced thickening has to contain either $ab$ or $ba$ but not both. From this, we conclude that
the only two balanced thickenings in $W$ are $B(1, ab)$ and $B(1, ba)$.
\end{example}
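This count, too, can be verified mechanically. In the sketch below (a computational illustration under our choice of model) we realize $B_2$ as the group of symmetries of the square generated by the matrices $a$ (swapping the coordinates of ${\mathbb R}^2$) and $b$ (negating the second coordinate), so that $(ab)^4=1$; reflections are the elements of determinant $-1$, and word lengths are Cayley graph distances:

```python
from collections import deque
from itertools import combinations

# Simple reflections of B_2 as 2x2 matrices: a swaps coordinates,
# b negates the second coordinate; then ab is a rotation of order 4.
A, B, I = ((0, 1), (1, 0)), ((1, 0), (0, -1)), ((1, 0), (0, 1))

def mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def word_lengths():
    """Word length = distance from 1 in the Cayley graph of {a, b}."""
    dist, queue = {I: 0}, deque([I])
    while queue:
        g = queue.popleft()
        for s in (A, B):
            h = mul(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

def balanced_thickenings():
    dist = word_lengths()
    elems = list(dist)
    # reflections = elements of determinant -1
    refl = [g for g in elems if g[0][0] * g[1][1] - g[0][1] * g[1][0] == -1]
    leq = {(g, g) for g in elems}
    for u in elems:                      # u < ur whenever the length grows
        for r in refl:
            v = mul(u, r)
            if dist[v] > dist[u]:
                leq.add((u, v))
    changed = True
    while changed:                       # transitive closure
        changed = False
        for (x, y) in list(leq):
            for (y2, z) in list(leq):
                if y == y2 and (x, z) not in leq:
                    leq.add((x, z))
                    changed = True
    w0 = mul(mul(A, B), mul(A, B))       # longest element (ab)^2 = -identity
    out = []
    for k in range(len(elems) + 1):
        for bits in combinations(elems, k):
            T = set(bits)
            if (all(u in T for v in T for u in elems if (u, v) in leq)
                    and all(mul(w0, v) not in T for v in T)
                    and all(v in T or mul(w0, v) in T for v in elems)):
                out.append(T)
    return out

print(len(balanced_thickenings()))  # 2
```

The enumeration finds exactly the two four-element ideals $B(1, ab)$ and $B(1, ba)$, confirming the example.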
\begin{example}
Consider the Coxeter group $W$ of the type $G_2$, see
Figure \ref{G2.fig} for the poset graph of $W$. The only two balanced thickenings in $W$ are $B(1, aba)$ and
$B(1, bab)$.
\end{example}
\begin{rem}
A subset $R\subset W$ determines the thickening
$$
\mathop{\hbox{Th}}\nolimits_R:=\bigcup_{r\in R} B(1, r).
$$
Every thickening has this form. For instance, in Example \ref{ex:B2} we can take $R=\{ab, ba\}$ and hence obtain the fat unbalanced thickening
$$
\mathop{\hbox{Th}}\nolimits_R= \{1, a, b, ab, ba\}.
$$
In order to simplify the notation we omit the symbol $R$ in the notation $\mathop{\hbox{Th}}\nolimits_R$ from now on: $\mathop{\hbox{Th}}\nolimits$ will always denote {\em a certain thickening}.
\end{rem}
\begin{rem}
In { our work \cite{morse, coco15, bordif}}, we need the folding order (and thickenings)
more generally also for 2-sided quotients $W_P\backslash W/W_Q$, where $W_P$ and $W_Q$ are Coxeter subgroups generated by subsets of the set of simple reflections.
However, {for the sake of simplicity}, we will not discuss these here.
\end{rem}
\begin{question}
Is there a reasonable combinatorial classification of balanced thickenings for a given finite Coxeter group
$W$? What are the asymptotics of the numbers of balanced thickenings in the Weyl groups of the types
$$A_n, \quad B_n, \quad D_n, \quad \underbrace{A_1\times \ldots \times A_1}_{n\; \hbox{\scriptsize times}}$$
as $n\to\infty$?
\end{question}
\medskip
We now turn to discussing flag manifolds. For each chamber $\sigma\in\partial_{F\ddot u} X\cong G/B$ and $r\in W$ we define the ``combinatorial $\delta$-sphere'' of the ``combinatorial radius'' $r$,
$$
S(\sigma,r)=\{\sigma'\in G/B: \delta(\sigma', \sigma)= r\},
$$
also known as a {\em Schubert cell} in $G/B$, and the ``combinatorial $\delta$-ball''
$$
B(\sigma, r)=\{\sigma'\in G/B: \delta(\sigma', \sigma) \le r\},
$$
{also known as a {\em Schubert cycle}. The following is a basic fact of the theory of Lie groups that
plays a critical role in our analysis of discontinuity domains.
It expresses that the Bruhat order on $W$ corresponds to the inclusion order on Schubert cycles in $G/B$
(with respect to a fixed reference chamber $\sigma$, respectively, minimal parabolic subgroup $B$):
\begin{theorem}\label{thm:basic}
The distance $\delta$ is lower semicontinuous with respect to the manifold topology.
Moreover,
\begin{equation}
\overline{S(\sigma, r)}= B(\sigma, r),
\end{equation}
where the closure is taken in the manifold topology of $G/B$. \end{theorem}
Consequently, $S(\sigma, r')\subseteq\overline{S(\sigma, r)}$ iff $r'\leq r$,
and otherwise $S(\sigma, r')\cap\overline{S(\sigma, r)}=\emptyset$.
In the case of complex Lie groups, this theorem goes back to work of Chevalley in the 1950s \cite{Chevalley}, see
also \cite{BB}; for the proofs in the general case (including reductive groups over local fields), see \cite{Borel-Tits} as well as \cite{Mitchell, Mitchell2008}. The most general case dealing with subsets of partial flag manifolds is established in \cite{coco15}.}
\medskip
We next use the thickenings of the neutral element inside $W$
to produce corresponding thickenings of (sets of) chambers inside $\partial_{F\ddot u} X\cong G/B$.
Let $\mathop{\hbox{Th}}\nolimits\subset W$ be a thickening. Given a chamber $\sigma\in \partial_{F\ddot u} X$, define its {\em thickening} inside $\partial_{F\ddot u} X$ by
\begin{equation}\label{eq:FTH}
\mathop{\hbox{Th}}\nolimits(\sigma)= \{\sigma'\in \partial_{F\ddot u} X: \delta(\sigma', \sigma)\in \mathop{\hbox{Th}}\nolimits\}.
\end{equation}
For a subset $\Lambda\subset \partial_{F\ddot u} X$ we define its {\em $\mathop{\hbox{Th}}\nolimits$-neighborhood} or {\em thickening} as
\begin{equation}\label{eq:thickening}
\mathop{\hbox{Th}}\nolimits(\Lambda):= \bigcup_{\lambda\in \Lambda} \mathop{\hbox{Th}}\nolimits(\lambda)= \{\sigma'\in \partial_{F\ddot u} X : \exists \lambda\in \Lambda, \delta(\sigma', \lambda)\in \mathop{\hbox{Th}}\nolimits\}.
\end{equation}
It is clear ({from the $G$-invariance of $\delta$}) that thickenings are $G$-invariant:
$$
\mathop{\hbox{Th}}\nolimits(g \Lambda)= g \mathop{\hbox{Th}}\nolimits(\Lambda), \quad g\in G.
$$
Thus, if $\Gamma<G$ is a subgroup preserving $\Lambda$, it also preserves $\mathop{\hbox{Th}}\nolimits(\Lambda)$.
Our motivation for introducing the notion of slimness is the observation
that the slimness of a thickening in $W$
is equivalent to the disjointness of the corresponding thickenings
of any two antipodal chambers in $\partial_{F\ddot u} X$:
\begin{lem}
[{\cite{coco15}}]
\label{lem:sldisj}
Let $\mathop{\hbox{Th}}\nolimits\subset W$ be a slim thickening.
Then for any two antipodal chambers $\sigma,\hat\sigma\in\partial_{F\ddot u} X$
it holds that
$$\mathop{\hbox{Th}}\nolimits(\sigma)\cap\mathop{\hbox{Th}}\nolimits(\hat\sigma)=\emptyset .$$
\end{lem}
\par\medskip\noindent{\it Proof. }
This follows from the definition of slimness and the triangle inequality\footnote{The inequality
can be regarded as a triangle inequality in $G/B$ for the $W$-valued combinatorial side lengths
of the triangle with vertices $\sigma,\sigma'$ and $\hat\sigma$.}
$$ \delta(\sigma',\hat\sigma)\geq \hbox{c-}\delta(\sigma',\sigma)$$
for chambers $\sigma'\in\partial_{F\ddot u} X$.
Indeed, suppose that $\sigma'\in\mathop{\hbox{Th}}\nolimits(\sigma)\cap\mathop{\hbox{Th}}\nolimits(\hat\sigma)$.
Then $\delta(\sigma',\hat\sigma), \delta(\sigma',\sigma)\in\mathop{\hbox{Th}}\nolimits$.
Due to the inequality,
also $\hbox{c-}\delta(\sigma',\sigma)\in\mathop{\hbox{Th}}\nolimits$,
equivalently, $\delta(\sigma',\sigma)\in w_0\mathop{\hbox{Th}}\nolimits$.
It follows that $\mathop{\hbox{Th}}\nolimits\cap w_0\mathop{\hbox{Th}}\nolimits\neq\emptyset$, contradicting slimness.
To verify the inequality,
consider the apartment $a\subset\partial_{Tits} X$ containing $\sigma,\hat\sigma$
and a folding retraction $r:\partial_{Tits} X\to a$,
i.e.\ a type preserving continuous map which fixes $a$ pointwise.
Such a retraction is given e.g.\ by the natural projection
$\partial_{Tits} X\to\partial_{Tits} X/B_{\sigma}\cong a$
where $B_{\sigma}$ denotes the minimal parabolic subgroup fixing $\sigma$.
Then $ \delta(\sigma',\hat\sigma) \geq \delta(r\sigma',\hat\sigma) = \hbox{c-}\delta(r\sigma',\sigma)=\hbox{c-}\delta(\sigma',\sigma)$.
\qed
\medskip
The importance of slimness comes therefore from the following fact.
Suppose $\mathop{\hbox{Th}}\nolimits\subset W$ is a slim thickening and $\Lambda\subset \partial_{F\ddot u} X$ is an {\em antipodal subset},
i.e.\ a subset where any two distinct elements are antipodal.
Then for every $\sigma\in \mathop{\hbox{Th}}\nolimits(\Lambda)$ there exists a unique $\lambda=\lambda_\sigma\in \Lambda$ such that $\delta(\sigma, \lambda)\in\mathop{\hbox{Th}}\nolimits$.
Thus, we obtain a natural projection
$$
\pi: \mathop{\hbox{Th}}\nolimits(\Lambda)\to \Lambda, \quad \sigma\mapsto \lambda_\sigma.
$$
As an exercise, let us prove continuity (in the subspace topology induced from $\partial_{F\ddot u} X$)
of this projection, provided that $\Lambda$ is closed:
Let $\sigma_i\to \sigma$ in $\mathop{\hbox{Th}}\nolimits(\Lambda)$, where $\sigma_i\in \mathop{\hbox{Th}}\nolimits(\lambda_{\sigma_i}), \sigma\in \mathop{\hbox{Th}}\nolimits(\lambda_{\sigma})$.
After extraction, $\lambda_{\sigma_i}\to \lambda\in \Lambda$, since $\Lambda$ is compact.
By the semicontinuity of
$\delta$ and the fact that $\mathop{\hbox{Th}}\nolimits$ is a thickening, we obtain that
$\delta(\sigma, \lambda)\in \mathop{\hbox{Th}}\nolimits$ and hence $\lambda=\lambda_{\sigma}$,
establishing continuity.
One verifies further that the projection $\pi$ is a fiber bundle over $\Lambda$ with fibers homeomorphic to $\mathop{\hbox{Th}}\nolimits(\lambda)$, $\lambda\in \Lambda$ (see \cite{coco15}).
\medskip
The fatness of a thickening, in turn, does not imply that the union
$\mathop{\hbox{Th}}\nolimits(\sigma)\cup \mathop{\hbox{Th}}\nolimits(\hat\sigma)$ is the entire $\partial_{F\ddot u} X$, but it does imply that $\mathop{\hbox{Th}}\nolimits(\sigma)\cup \mathop{\hbox{Th}}\nolimits(\hat\sigma)$ covers the entire chamber set of the unique apartment $a\subset \partial_{Tits} X$ containing $\sigma, \hat\sigma$.
The importance of the notion of fatness is less immediate.
The proof of Theorem~\ref{thm:proper} below shows how
it is used in establishing proper discontinuity of discrete group actions on certain domains in flag manifolds.
\medskip
Here is how one can think of thickenings $\mathop{\hbox{Th}}\nolimits(\lambda)\subset \partial_{F\ddot u} X$ of points in the Furstenberg boundary.
First of all, if $\mathop{\hbox{Th}}\nolimits=\mathop{\hbox{Th}}\nolimits_r, r\in W$, then one can think of
$\mathop{\hbox{Th}}\nolimits_r(\lambda)$ as the ``combinatorial $r$-neighborhood'' of $\lambda$ in $ \partial_{F\ddot u} X$,
as it consists of all $\sigma\in \partial_{F\ddot u} X$ which are within $\delta$-distance $\le r$ from $\lambda$.
However, caution is needed here, since for any proper thickening $\mathop{\hbox{Th}}\nolimits\subset W$,
the corresponding thickening $\mathop{\hbox{Th}}\nolimits(\lambda)\subset \partial_{F\ddot u} X$ is nowhere dense in the visual topology
of $\partial_{F\ddot u} X$. Therefore, a better way to think of thickenings of subsets in $\partial_{F\ddot u} X$ is as follows.
The choice of $\mathop{\hbox{Th}}\nolimits$ describes the ``degree of nongenericity'' of the relative position of chambers $\sigma\in \partial_{F\ddot u} X$ with respect to $\lambda$. For instance, for $\mathop{\hbox{Th}}\nolimits=\mathop{\hbox{Th}}\nolimits_r$ ($r\in W$),
the larger the element $r$, the more generic the relative position we allow for points in
$B(\lambda, r)\subset \partial_{F\ddot u} X$. The most generic relative position is achieved for points
in the open Schubert cell $S(\lambda, w_0)=\lambda^{opp}$, consisting of all chambers $\sigma$ in $\partial_{F\ddot u} X$ opposite of
$\lambda$. The closure of this open Schubert cell is the entire Furstenberg boundary, the combinatorial ball $B(\lambda, w_0)$.
One implication of Theorem \ref{thm:basic} is that taking limits of sequences
$\sigma_i$ as $i\to \infty$, can only result in the decrease of genericity (the limit of a sequence
can only be ``more special'' with respect to $\lambda$, not ``less special'').
\medskip
The next lemma implies the important fact that thickenings of compact subsets are compact.
\begin{lemma}
\label{lem:closth}
For every thickening $\mathop{\hbox{Th}}\nolimits\subset W$ and every subset $\Lambda\subset \partial_{F\ddot u} X$, we have
$$
\overline{\mathop{\hbox{Th}}\nolimits(\Lambda)}= \mathop{\hbox{Th}}\nolimits(\overline{\Lambda}).
$$
\end{lemma}
\par\medskip\noindent{\it Proof. } Suppose that $(\lambda_i)$ is a sequence in $\Lambda$ and $\sigma_i\in \mathop{\hbox{Th}}\nolimits(\lambda_i)$
such that $\sigma_i\to\sigma$ in $\partial_{F\ddot u} X$.
After extraction, $\lambda_i\to\lambda\in \overline{\Lambda}$.
In view of semicontinuity (Theorem \ref{thm:basic}),
we have
$\delta(\sigma, \lambda)\leq\delta(\sigma_i, \lambda_i)$
for large $i$.
Since $\delta(\sigma_i, \lambda_i)\in \mathop{\hbox{Th}}\nolimits$,
it follows that $\delta(\sigma, \lambda)\in\mathop{\hbox{Th}}\nolimits$.
This shows that $\overline{\mathop{\hbox{Th}}\nolimits(\Lambda)}\subset \mathop{\hbox{Th}}\nolimits(\overline{\Lambda})$.
Conversely, consider a sequence $\lambda_i\in \Lambda$ converging to $\lambda\in\overline{\Lambda}$ and let $\sigma\in \mathop{\hbox{Th}}\nolimits(\lambda)$.
Then, since $G$ acts transitively on $\partial_{F\ddot u} X= G/B$, there exists a sequence $g_i\to1$ in $G$ such that $\lambda_i=g_i\lambda$.
Then $\sigma$ is the limit of the $\sigma_i:= g_i\sigma\in\mathop{\hbox{Th}}\nolimits(\lambda_i)\subset \mathop{\hbox{Th}}\nolimits(\Lambda)$
and, therefore, belongs to $\overline{\mathop{\hbox{Th}}\nolimits(\Lambda)}$. \qed
\begin{example}
\label{ex:ellth}
Consider $G=SL(3,{\mathbb R})$ and the unique balanced
thickening $$\mathop{\hbox{Th}}\nolimits=B(1, (12)) \cup B(1, (23)).$$
Let $\lambda$ be a chamber
in the Tits building $\partial_{Tits} X$ of the symmetric space $SL(3,{\mathbb R})/SO(3)$, which is the incidence graph of the real projective plane.
We will think of $\lambda$ as a flag $(p,l)$ (where $p$ is a point and $l$ is a line in the projective plane).
The thickening $\mathop{\hbox{Th}}\nolimits(\lambda)$ consists of all chambers in $\partial_{Tits} X$ which share a vertex with $\lambda$. In other words,
the condition that a flag $(p',l')$ belongs to $\mathop{\hbox{Th}}\nolimits(\lambda)$ means that either
$p'\in l$ or $p\in l'$.
Clearly, $\mathop{\hbox{Th}}\nolimits(\lambda)$ is a closed subset of the flag manifold, homeomorphic to the wedge of two projective lines. It equals the closure of the union of two
``combinatorial spheres'' $S(\lambda, (12))$ and $S(\lambda, (23))$; this union consists of chambers sharing a vertex with $\lambda$ but different from $\lambda$.
Next, take an ellipse $E\subset {\mathbb R}^2\subset {\mathbb R} P^2$. The (projectivized) tangent bundle $PE$
of $E$ defines a lift $\tilde{E}$ of $E$ to the flag manifold $\operatorname{Flag}({\mathbb R}^3)$, the full flag manifold of ${\mathbb R}^3$.
It consists of the {\em tangent flags} $(p,l)$, $p\in E$, $l$ is the tangent line to $E$ at $p$; clearly, $\tilde E$ is homeomorphic to $E$.
Now, let $\mathop{\hbox{Th}}\nolimits$ be the unique balanced thickening in the group $W=S_3$.
Then the corresponding thickening $\mathop{\hbox{Th}}\nolimits(\tilde E)$
consists of the flags $(q, m)$, where either $q\in E$ (and $m$ is any line through $q$) or $m$ is a line tangent to $E$ and $q$ is any point on $m$.
Topologically speaking, $\mathop{\hbox{Th}}\nolimits(\tilde E)$ is the trivial bundle over $\tilde E$ with fibers homeomorphic to $S^1\vee S^1$,
and $\tilde E$ is (the image) of its distinguished section
with values in the singular points of the fibers.
The projection $\pi: \mathop{\hbox{Th}}\nolimits(\tilde E)\to \tilde E$ sends
each flag $(p,m)$ with $p\in E$ to the corresponding tangent flag $(p,l)$,
and it sends each flag $(q,l)$, where $l$ is tangent to $E$, to the corresponding tangent flag $(p,l)$ at the point of tangency $p$.
\end{example}
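In coordinates, membership in this balanced thickening reduces to two incidence tests. The following Python sketch (a numerical illustration with sample data chosen by us) represents points by homogeneous coordinate vectors and lines by covectors, so that incidence is the vanishing of the pairing:

```python
def incident(point, line, tol=1e-9):
    """A point lies on a line iff the pairing of the homogeneous
    coordinates of the point and of the line vanishes."""
    return abs(sum(a * b for a, b in zip(point, line))) < tol

def is_flag(p, l):
    """A flag (p, l) requires the point p to lie on the line l."""
    return incident(p, l)

def in_thickening(q, m, p, l):
    """(q, m) lies in Th((p, l)) iff the two flags share a vertex:
    q lies on l, or p lies on m."""
    return incident(q, l) or incident(p, m)

# Reference flag lambda = (p, l): the point [1:0:0] on the line z = 0.
p, l = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
assert is_flag(p, l)
# A flag sharing the line l with lambda lies in the thickening:
print(in_thickening((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), p, l))   # True
# An opposite flag shares no vertex with lambda:
print(in_thickening((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), p, l))   # False
```

The second test exhibits a flag in the open Schubert cell $\lambda^{opp}$, i.e.\ outside $\mathop{\hbox{Th}}\nolimits(\lambda)$.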
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig19.pdf}
\caption{The balanced thickening of $\tilde E$}
\label{figure14.fig}
\end{figure}
\subsection{Domains of proper discontinuity, cocompactness and nonemptiness}
\label{sec:dd}
We start this section
by reviewing some basic notions from topological dynamics.
We consider topological actions $\Gamma\curvearrowright Z$ of discrete groups on metrizable locally compact topological spaces.
Recall that the $\Gamma$-action on an invariant open subset $\Omega\subset Z$
is {\em properly discontinuous} if for each compact $K\subset \Omega$ it holds that
$$\gamma K\cap K=\emptyset$$
for all but finitely many $\gamma\in\Gamma$.
A weaker condition is the {\em discontinuity} of the action. A point $z\in Z$ is said to be {\em wandering} for the $\Gamma$-action
if there exists a neighborhood $U$ of $z$ such that $$\gamma U\cap U=\emptyset$$ for all but finitely many $\gamma\in \Gamma$.
An action is called {\em discontinuous} if each point is wandering.
The {\em domain of discontinuity} $\Omega_{disc}\subset Z$ for the action $\Gamma\curvearrowright Z$ is the set of wandering points.
This set is clearly open and invariant; in general, however, the action on the domain of discontinuity is {\em not proper}.
\begin{example}\label{ex:cyclic}
(Compare Example~\ref{bex:newst})
$\gamma\in SL(3,{\mathbb R})$, $\gamma=Diag(\lambda, 1, \lambda^{-1})$, $\lambda>1$.
Then $\gamma$ has on ${\mathbb R} P^2$ the three fixed points $e_1=[1:0:0], e_2=[0:1:0], e_3=[0:0:1]$.
The point $e_1$ is attractive, $e_3$ is repulsive and $e_2$ is hyperbolic for the action of $\gamma$ on ${\mathbb R} P^2$. Denoting by $L_{ij}$ the projective line through $e_i, e_j$, $i<j$, we obtain that the domain of discontinuity for the action of $\Gamma=\<\gamma\>$ on ${\mathbb R} P^2$ is the complement of its fixed-point set $\{e_1, e_2, e_3\}$.
However, the action of $\Gamma$ on this domain is not proper. In order to get a maximal domain of proper discontinuity,
one removes from ${\mathbb R} P^2-\{e_1, e_2, e_3\}$ either the entire line $L_{12}$ or $L_{23}$.
\end{example}
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig20.pdf}
\caption{Dynamics of a cyclic subgroup $\Gamma=\<\gamma\>$ on the projective plane. The points $\xi, \xi'$ are dynamically related.}
\label{figure15.fig}
\end{figure}
\medskip
Our arguments for proving proper discontinuity will be based on the fact that
it is equivalent to the {\em absence of dynamical relations} between points
relative to the $\Gamma$-action.
\begin{defn}
Two points $\xi, \xi'\in Z$ are {\em $\Gamma$-dynamically related},
$\xi\stackrel{\Gamma}{\sim}\xi'$,
if for each pair of neighborhoods $U, U'$ of $\xi, \xi'$ respectively, there are infinitely many elements $\gamma\in \Gamma$ such that
$$
\gamma U\cap U'\ne \emptyset.
$$
\end{defn}
Note that a point is wandering iff it is not dynamically related to itself.
It is straightforward that, since the space $Z$ is Hausdorff and first countable,
the $\Gamma$-dynamical relation $\xi\stackrel{\Gamma}{\sim}\xi'$ can be reformulated as follows:
There exist a sequence of distinct elements $\gamma_n\in \Gamma$ and a sequence $\xi_n\to \xi$ in $Z$ such that $\gamma_n\xi_n\to \xi'$. In this case we write, more precisely,
$$\xi\stackrel{(\gamma_n)}{\sim}\xi'.$$
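For the cyclic group of Example~\ref{ex:cyclic} such a sequence can be exhibited numerically. In the sketch below (an illustration with the sample eigenvalue $\lambda=2$ and sample points chosen by us) we take $\xi=[0:1:1]\in L_{23}$ and tilt it to $\xi_n=[\lambda^{-n}:1:1]\to\xi$, so that $\gamma^n\xi_n$ converges to a point $\xi'=[1:1:0]$ of the line $L_{12}$, witnessing $\xi\stackrel{(\gamma^n)}{\sim}\xi'$:

```python
def normalize(v):
    """Representative of a projective point scaled so that the
    largest coordinate in absolute value is 1."""
    m = max(abs(x) for x in v)
    return tuple(x / m for x in v)

def gamma_pow(n, v, lam=2.0):
    """Apply gamma^n = Diag(lam^n, 1, lam^(-n)) in homogeneous coordinates."""
    return normalize((lam ** n * v[0], v[1], lam ** (-n) * v[2]))

xi = (0.0, 1.0, 1.0)                      # a non-fixed point of L_23
xi_n = lambda n: (2.0 ** (-n), 1.0, 1.0)  # xi_n -> xi as n -> infinity
# gamma^n xi_n = [1 : 1 : 2^(-n)] -> [1 : 1 : 0], a point of L_12:
print(gamma_pow(30, xi_n(30)))
```

Note that $\gamma^n\xi$ itself converges to the fixed point $e_2$; only the tilted sequence $\xi_n$ reveals the dynamical relation with a point of $L_{12}$.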
\begin{lemma}
The $\Gamma$-action on an open invariant subset $\Omega\subset Z$
is properly discontinuous iff no two points of $\Omega$ are $\Gamma$-dynamically related.
\end{lemma}
\par\medskip\noindent{\it Proof. }
Suppose first that the action $\Gamma\curvearrowright\Omega$ is not properly discontinuous.
Then there exists a compact $C\subset \Omega$
and elements $\gamma_n\to\infty$ in $\Gamma$ such that $\gamma_n C\cap C\ne \emptyset$.
Hence there are points $x_n\in C$ such that also $\gamma_nx_n\in C$.
By compactness,
after extraction, we have convergence $x_n\to x\in C$ and $\gamma_nx_n\to x'\in C$
and it follows that $x,x'$ are $\Gamma$-dynamically related.
Conversely, suppose that the points $x, x'\in \Omega$ are dynamically related.
Taking $U, U'$ to be relatively compact neighborhoods of $x, x'$ respectively, we obtain that
$\gamma U\cap U'\ne \emptyset$
for infinitely many $\gamma\in \Gamma$.
For these $\gamma$ and the compact $C=\overline{U}\cup \overline{U'}$ it holds that $\gamma C\cap C\ne \emptyset$.
\qed
\medskip
We now take up the discussion of the topological dynamics of
discrete group actions on flag manifolds.
We will restrict to the case of actions on the full flag manifold $\partial_{F\ddot u} X\cong G/B$.
Regarding proper discontinuity,
the connection between dynamical relation
and the Bruhat order and thickenings
comes in the form of the following key lemma
which provides a relative position inequality for dynamically related points.
Roughly speaking,
it says that they cannot both be far, in the sense of the combinatorial distance $\delta$, from the chamber limit set $\Lambda_{ch}(\Gamma)$.
The lemma, in turn, is derived from the {\em higher rank convergence property} for the action $\Gamma\curvearrowright G/B$
discussed in section~\ref{sec:conprop}.
\begin{klemma}[\cite{coco13, coco15}]
Suppose that $\Gamma< G$ is regular and $\xi, \xi'\in G/B$ are $\Gamma$-dynamically related.
Then there exist (not necessarily distinct) limit chambers $\lambda, \lambda'\in \Lambda_{ch}(\Gamma)$ such that
$$
\delta(\xi',\lambda') \le {\hbox{c-}}\delta(\xi,\lambda).
$$
\end{klemma}
\par\medskip\noindent{\it Proof. } Suppose first that we have a dynamical relation $\xi\stackrel{(\gamma_n)}{\sim}\xi'$, $\gamma_n\in \Gamma$.
Then, by the definition of dynamical relation, there exists a sequence $(\xi_n)$ in $G/B$ such that
$\xi_n\to\xi$ and $\gamma_n\xi_n\to\xi'$.
The regularity of the subgroup $\Gamma<G$ translates via Theorem~\ref{thm:regconv}
into the higher rank convergence property for the action $\Gamma\curvearrowright G/B$.
Hence,
after extraction, there exists a pair of limit chambers $\sigma_\pm\in\Lambda_{ch}(\Gamma)$ such that
$\gamma_n$ converges to $\sigma_+$ uniformly on compacts in the open Schubert cell $\sigma_-^{opp}$.
Let $a\subset\partial_{\infty} X$ be an apartment containing $\sigma_-$ and $\xi$.
Nearby apartments $a_n$ containing $\xi_n$
can be obtained by using small isometries $h_n\to 1$ in $G$,
with $\xi_n=h_n\xi$ and putting $a_n=h_na$.
Let $\hat\sigma_-\subset a$ be the chamber opposite to $\sigma_-$,
and let $\sigma_n=h_n\hat\sigma_-\subset a_n$.
Then $\sigma_n\to\hat\sigma_-$.
Since $\hat\sigma_-\in \sigma_-^{opp}$,
the locally uniform convergence of $\gamma_n$ to $\sigma_+$
implies that $\gamma_n\sigma_n\to\sigma_+$.
We obtain
\begin{equation*}
\delta(\xi',\sigma_+)\le \delta(\gamma_n\xi_n,\gamma_n\sigma_n)
=\delta(\xi_n,\sigma_n)
=\delta(h_n\xi,h_n\hat\sigma_-)
=\delta(\xi,\hat\sigma_-)={\hbox{c-}}\delta(\xi,\sigma_-)
\end{equation*}
for large $n$,
where the first inequality follows from
the semicontinuity of $\delta$, see Theorem \ref{thm:basic}.
Putting $\lambda=\sigma_-$ and $\lambda'=\sigma_+$ yields the assertion.
\qed
\medskip
As a consequence,
no dynamical relations occur in domains which are far enough from the chamber limit set
in the combinatorial sense of relative position,
i.e.\ which avoid a sufficiently large thickening of it.
(Recall that this means that the points in these domains have sufficiently generic position with respect to all limit chambers.)
We obtain:
\begin{thm}
[Proper discontinuity \cite{coco13}]
\label{thm:proper}
Suppose that $\Gamma< G$ is regular and $\mathop{\hbox{Th}}\nolimits\subset W$ is a fat thickening.
Then no two points in the domain
$$\Omega_{Th}(\Gamma):=G/B-\mathop{\hbox{Th}}\nolimits(\Lambda_{ch}(\Gamma))$$ are $\Gamma$-dynamically related.\footnote{We recall that
$\mathop{\hbox{Th}}\nolimits(\Lambda_{ch}(\Gamma))$ is compact, because $\Lambda_{ch}(\Gamma)$ is, cf.\ Lemma~\ref{lem:closth}.}
In other words, the action $\Gamma\curvearrowright\Omega_{Th}(\Gamma)$ is properly discontinuous.
\end{thm}
\par\medskip\noindent{\it Proof. } Suppose that $\xi, \xi'\in G/B$ are dynamically related.
Then, by the lemma, there exist $\lambda, \lambda'\in\Lambda_{ch}(\Gamma)$ such that
$$
\delta(\xi',\lambda') \le \hbox{c-}\delta(\xi,\lambda).
$$
By the definition of fat thickening, for the relative position $w:=\delta(\xi,\lambda)$ either
\begin{enumerate}
\item $w\in \mathop{\hbox{Th}}\nolimits$, or
\item $w_0 w\in \mathop{\hbox{Th}}\nolimits$.
\end{enumerate}
In the former case, $\xi\in \mathop{\hbox{Th}}\nolimits(\Lambda_{ch})$.
In the latter case, $\hbox{c-}\delta(\xi,\lambda)\in \mathop{\hbox{Th}}\nolimits$, which,
{\em by the definition of a thickening}, implies that $\delta(\xi',\lambda')\in \mathop{\hbox{Th}}\nolimits$,
and thus $\xi'\in \mathop{\hbox{Th}}\nolimits(\Lambda_{ch})$.
Hence, $\xi, \xi'$ cannot be both in $\Omega_{Th}(\Gamma)$. \qed
\medskip
Note that we are not assuming here that the chamber limit set $\Lambda_{ch}$ is antipodal.
Antipodality is used, in conjunction with {\em slimness} of $\mathop{\hbox{Th}}\nolimits$ and the {\em expansion} axiom, to ensure the {\em cocompactness} of $\Gamma$-actions.
It is an important fact that,
for a slim thickening $\mathop{\hbox{Th}}\nolimits$ of an antipodal set $\Lambda$, the natural projection $\mathop{\hbox{Th}}\nolimits(\Lambda)\to \Lambda$ sending $\xi\in \mathop{\hbox{Th}}\nolimits(\Lambda)$ to the unique $\lambda\in \Lambda$ with $\delta(\xi,\lambda)\in \mathop{\hbox{Th}}\nolimits$, is a topological fibration,
compare Lemma~\ref{lem:sldisj} and the discussion afterwards.
We use this fact for chamber limit sets of RA (regular and antipodal) subgroups.
For RCA subgroups,\footnote{Recall that,
in addition to being regular and antipodal, RCA subgroups are also expanding at $\Lambda_{ch}(\Gamma)$,
compare \S \ref{sec:conical_convergence2}.
The RCA property is equivalent to the Anosov property, cf.\ Theorem~\ref{thm:main}.}
we have the following counterpart to the above proper discontinuity result:
\begin{thm}[Cocompactness \cite{coco13}]\label{thm:CC}
Suppose that $\Gamma<G$ is an RCA subgroup
and $\mathop{\hbox{Th}}\nolimits\subset W$ is a slim thickening.
Then the action $\Gamma\curvearrowright\Omega_{Th}(\Gamma)$ is cocompact.
\end{thm}
If one works with balanced thickenings which, by definition, are both fat and slim,
one can conclude both proper discontinuity and cocompactness for suitable classes of discrete subgroups:
\begin{cor}\label{cor:flPDCC}
If $\Gamma<G$ is an RCA subgroup
and $\mathop{\hbox{Th}}\nolimits\subset W$ is a balanced thickening,
then the action $\Gamma\curvearrowright\Omega_{Th}(\Gamma)$ is properly discontinuous and cocompact.
\end{cor}
We obtain such results more generally
for $\tau_{mod}$-RCA subgroups acting on partial flag manifolds $G/P_{\tau_{mod}}$,
see \cite{coco15}.
\begin{example}
Let $\Gamma< PO(2,1)$ be a cocompact Fuchsian group.
Then $\Gamma<PO(2,1)< PGL(3,{\mathbb R})$ is Morse and preserves the Klein model ${\mathbb H}^2$ of the hyperbolic plane in ${\mathbb R} P^2$.
The hyperbolic plane ${\mathbb H}^2\subset {\mathbb R} P^2$ is bounded by an ellipse $E$, cf. Example~\ref{ex:ellth}. The group $\Gamma$
acts properly discontinuously on ${\mathbb H}^2$ and ergodically on the complement, since the latter is $\Gamma$-invariantly isomorphic to the space of unparameterized geodesics in the hyperbolic plane. In particular, $\Gamma$ does not act properly discontinuously on the complement of
$E$ in the projective plane. However, $\Gamma$ acts properly discontinuously and cocompactly on the complement $\operatorname{Flag}({\mathbb R}^3) - \mathop{\hbox{Th}}\nolimits(\tilde{E})$, where $\tilde{E}=\Lambda_{ch}(\Gamma)$
is the lift of $E$ to the flag manifold of $PGL(3,{\mathbb R})$ and $\mathop{\hbox{Th}}\nolimits$ is the unique balanced thickening, see Example~\ref{ex:ellth}.
\end{example}
Given the above results,
the question arises whether and when $\Omega_{Th}(\Gamma)$ is nonempty.
Note that in rank 1, the domain of discontinuity in $G/B=\partial_{\infty} X$ is empty in the case of lattices $\Gamma<G$.
In contrast,
it turns out that in higher rank
our domains $\Omega_{Th}(\Gamma)$
for RA subgroups $\Gamma$ and balanced thickenings $\mathop{\hbox{Th}}\nolimits$
have a tendency to be nonempty.
Intuitively,
the reason is that the emptiness of such a domain
would imply the existence of certain ball packings at infinity,
e.g.\ of a packing of $\partial_{F\ddot u} X$ by the combinatorial ``balls'' $\mathop{\hbox{Th}}\nolimits(\lambda)$ for $\lambda\in\Lambda_{ch}(\Gamma)$,
and such packings do not exist for many Weyl groups.
We show:
\begin{thm}[Nonemptiness \cite{coco13,coco15}]
\label{thm:nonempt}
Suppose that $X$ has at least one de Rham factor not of type $A_1$, $B_2$ or $G_2$.
Then for each RA subgroup $\Gamma< G$,
there exists a balanced thickening $\mathop{\hbox{Th}}\nolimits\subset W$ for which $\Omega_{Th}(\Gamma)$ is nonempty.
\end{thm}
\begin{rem}
For some Lie groups $G$ of type $B_2$, we can still prove nonemptiness of $\Omega_{Th}(\Gamma)$
for some balanced thickenings $\mathop{\hbox{Th}}\nolimits$
(independent of the discrete group $\Gamma<G$).
This includes $O(n,2)$ with $n$ odd.
See \cite{coco15}.
\end{rem}
Now we can explain the analogy with GIT: For any balanced thickening $\mathop{\hbox{Th}}\nolimits$,
the domain $\Omega_{Th}(\Gamma)$ serves as the set of stable points for the $\Gamma$-action on $G/B$,
while the thickening of the limit set $\mathop{\hbox{Th}}\nolimits(\Lambda_{ch}(\Gamma))$ plays the role of the set of unstable points,
and the limit set $\Lambda_{ch}(\Gamma)$ itself of the set of maximally unstable points.
\begin{rem}
Comparison of our discontinuity and cocompactness results with those of \cite{GW}:
1. Our treatment of domains of discontinuity is intrinsic,
while in \cite{GW}
first a theory for $P$-Anosov subgroups of $Aut(F)$ is developed
(where the $F$'s are certain bilinear and hermitian forms)
and then general semisimple Lie groups are embedded into groups $O(p,q)$.
2. Due to the intrinsic nature of our construction, we gain much better control of the nature of domains of proper discontinuity which allows us to get them in $G/B$ (and other flag manifolds) instead of $G/AN$ as in \cite{GW}, for general semisimple Lie groups. (Note, however, that in the case of ``classical'' Lie groups, \cite{GW} also obtain a domain of cocompactness and proper discontinuity inside $G/B$.)
3. While for some Lie groups of types $A_2,B_2$
the outcomes of the two constructions are the same, it appears that our construction is more general. For instance, we expect that discontinuity domains constructed via the
two non-metric balanced thickenings for $SL(4,{\mathbb R})$,
see Figure \ref{S4.fig}, cannot be obtained via the construction in \cite{GW}.
4. Theorem~\ref{thm:nonempt} is both weaker and stronger than the
nonemptiness results in \cite[Thms.\ 1.11, 1.12 and 9.10]{GW}.
It is stronger in the sense that it applies to hyperbolic groups $\Gamma$ without assumptions on their cohomological dimension,
unlike the results in \cite{GW} which require small cohomological dimension.
On the other hand, it is weaker in the sense that it addresses only the $\si_{mod}$-regular case.
We also note that some examples of Anosov subgroups for which some discontinuity domains are empty are given in \cite[Remark 8.5]{GW}.
\end{rem}
\subsection{Example: Thickenings in visual boundaries of products of rank one spaces}
In this section we work out in detail the case when $W={\mathbb Z}_2^n$. We identify ${\mathbb Z}_2$ with the multiplicative group $\{-1, 1\}$. Elements of $W$ are identified with $n$-tuples of $\pm 1$'s. The model flat is ${\mathbb R}^n$ and the generators of $W$ act via reflections in the coordinate hyperplanes (walls). We choose the fundamental chamber $\Delta$ to be the orthant
given by the inequalities $x_i\ge 0$, $i=1,\ldots,n$ (it is clearly a fundamental domain for the action of $W$ on ${\mathbb R}^n$).
The {\em central direction} in $\Delta$ is given by the vector
$$
\bar\zeta= (1,\ldots, 1).
$$
The longest element $w_0=(-1,\ldots, -1)$ acts as $-\operatorname{id}$.
The Bruhat order is given by
$$
w=(\epsilon_1,\ldots,\epsilon_n)\le w'= (\epsilon_1',\ldots,\epsilon_n') \quad \iff \quad \epsilon_i\ge \epsilon'_i \;\forall i.
$$
Examples of thickenings are given by strict and nonstrict linear inequalities as follows. Let $a=(a_1,\ldots,a_n)$ be a vector with (strictly) positive entries. The subsets
$$
{\mathop{\hbox{Th}}\nolimits}_a=\{w\in W: a\cdot w> 0\}, \quad \overline{\mathop{\hbox{Th}}\nolimits}_a=\{w\in W: a\cdot w\ge 0\}
$$
are {\em metric thickenings}. The former thickening is slim while the latter is fat. A thickening $\mathop{\hbox{Th}}\nolimits_a$ is balanced iff $a$ does not satisfy an equation
$$
\sum_{i\in I} a_i = \sum_{j\notin I} a_j,
$$
for any subset $I\subset \{1,\ldots,n\}$. Hence, for ``generic'' values of $a$, $\mathop{\hbox{Th}}\nolimits_a$ is balanced.
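These combinatorial conditions are easy to verify by brute force for small $n$. The following Python sketch (a minimal illustration; the helper names are ours) enumerates $W=\{-1,+1\}^n$, builds the metric thickenings $\mathop{\hbox{Th}}\nolimits_a$ and $\overline{\mathop{\hbox{Th}}\nolimits}_a$, and checks that $\mathop{\hbox{Th}}\nolimits_a$ is a Bruhat-order ideal and slim, that $\overline{\mathop{\hbox{Th}}\nolimits}_a$ is fat, and that balancedness holds exactly when $a$ lies on no wall $\sum_{i\in I} a_i = \sum_{j\notin I} a_j$:

```python
from itertools import product

def thickenings(a):
    """Strict and nonstrict metric thickenings of W = {-1,+1}^n for weights a."""
    W = list(product((-1, 1), repeat=len(a)))
    dot = lambda w: sum(ai * wi for ai, wi in zip(a, w))
    Th = {w for w in W if dot(w) > 0}        # slim thickening Th_a
    Th_bar = {w for w in W if dot(w) >= 0}   # fat thickening \bar{Th}_a
    return W, Th, Th_bar

def properties(a):
    """Check ideal/fat/slim/balanced for the metric thickenings of a."""
    W, Th, Th_bar = thickenings(a)
    neg = lambda w: tuple(-e for e in w)     # w0 = (-1,...,-1) acts as -id
    # Bruhat order from the text: u <= w iff u_i >= w_i for all i
    leq = lambda u, w: all(eu >= ew for eu, ew in zip(u, w))
    ideal = all(u in Th for w in Th for u in W if leq(u, w))
    fat = all(w in Th_bar or neg(w) in Th_bar for w in W)   # fatness of \bar{Th}_a
    slim = all(neg(w) not in Th for w in Th)                # slimness of Th_a
    balanced = Th == Th_bar                  # no w with a.w = 0, i.e. a off all walls
    return ideal, fat, slim, balanced

print(properties((5, 4, 3, 1)))  # (True, True, True, True): generic a, balanced
print(properties((2, 1, 1)))     # (True, True, True, False): a lies on the wall 2 = 1+1
```

Since the entries of $w$ are $\pm1$, the wall condition $\sum_{i\in I} a_i = \sum_{j\notin I} a_j$ is precisely $a\cdot w=0$ for $w$ with $\epsilon_i=+1$ iff $i\in I$, which is why balancedness reduces to the set equality above.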
Consider now a rank one symmetric space $Y$ (e.g.\ $Y={\mathbb H}^2$) and $S=\partial_{\infty} Y$.
Let $X=Y^n$, the $n$-fold product of $Y$.
Then $Z:=\partial_{F\ddot u} X=S\times\cdots\times S$, the $n$-fold product of $S$.
Moreover, let $D\subset Z$ denote the diagonal
$$D=\{(s, \ldots,s): s\in S\}. $$
We will think of elements of $Z$ as configurations of points in $S$.
The relative position of two configurations $z=(s_i)$ and $z'=(s'_i)$ equals $\delta(z',z)=(\epsilon_i)$
iff $$ s'_i=s_i\iff\epsilon_i=+1 ,$$
i.e.\ $\delta$ records the entries $i$ where $z'$ agrees with $z$.
Consequently,
$\delta(z',z)\leq(\epsilon_i)$
iff $s'_i=s_i$ whenever $\epsilon_i=+1$.
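In code, the relative position of two configurations and the comparison against a position bound can be written as follows (a small illustrative sketch; the function names are ours):

```python
def delta(z_prime, z):
    """Relative position of two configurations z', z in Z = S x ... x S:
    entry +1 exactly at the indices where the configurations agree."""
    return tuple(1 if s1 == s2 else -1 for s1, s2 in zip(z_prime, z))

def position_at_most(z_prime, z, eps):
    """delta(z', z) <= eps in the Bruhat order above, i.e. the configurations
    agree at every index i with eps_i = +1."""
    return all(d >= e for d, e in zip(delta(z_prime, z), eps))

print(delta(("p", "q", "r"), ("p", "x", "r")))                          # (1, -1, 1)
print(position_at_most(("p", "q", "r"), ("p", "x", "r"), (1, -1, -1)))  # True
```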
The vector $a$ assigns {\em weights} $a_i$ to the $i$-th members of the configuration. Each weighted configuration $z=(s_1,\ldots, s_n)$ thus gives rise to a finite measure $\mu$ on $S$,
$$
\mu_z= \sum_{i=1}^n a_i \delta_{s_i},
$$
where $\delta_s$ is the probability measure on $S$ supported at the point $s$ (masses add when points $s_i$ ``collide''). The total mass of $\mu_z$ equals
$$
M=a_1+\ldots+a_n.
$$
A weighted configuration $z$
is called {\em stable} if $\mu_z(s)< M/2$ for all points $s\in S$,
and {\em semistable} if $\mu_z(s)\le M/2$ for all $s\in S$.
In the balanced case, these notions agree: ``stable=semistable.'' It is then immediate that
$$
z\in \mathop{\hbox{Th}}\nolimits_a(D) \iff \mu_z \hbox{~~is not semistable}
$$
and
$$
z\in \overline{\mathop{\hbox{Th}}\nolimits}_a(D) \iff \mu_z \hbox{~~is not stable}.
$$
The sets of stable and semistable weighted configurations are denoted $Z_{st}$ and $Z_{sst}$.
They of course depend on $a$.
For instance, if $a_i> M/2$ for some $i$,
then $Z_{sst}=\emptyset$.
On the other hand,
if $a_i< M/2$ for all $i$,
then $Z_{st}\neq\emptyset$;
e.g.\ all configurations of pairwise distinct points $s_i$ are stable.
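The stability criteria reduce to inspecting the atoms of $\mu_z$, which is immediate to implement; the following sketch (illustrative helper names, not from the text) classifies weighted configurations and, via the equivalences above, detects membership in $\mathop{\hbox{Th}}\nolimits_a(D)$:

```python
from collections import defaultdict

def mu(z, a):
    """The measure mu_z = sum_i a_i delta_{s_i}: total mass at each point of S
    (masses add when points of the configuration collide)."""
    m = defaultdict(float)
    for s, weight in zip(z, a):
        m[s] += weight
    return m

def classify(z, a):
    """Return (stable, semistable): mu_z(s) < M/2, resp. <= M/2, for all s.
    By the equivalences above, z lies in Th_a(D) iff z is not semistable."""
    M = sum(a)
    masses = mu(z, a).values()
    return (all(m < M / 2 for m in masses),
            all(m <= M / 2 for m in masses))

a = (1, 1, 1)                         # no wall equation holds, so stable = semistable
print(classify(("p", "q", "r"), a))   # (True, True): three distinct points
print(classify(("p", "p", "r"), a))   # (False, False): mass 2 > 3/2 at the point p
```

For a weight vector on a wall, e.g.\ $a=(1,1,2)$, a configuration of distinct points has an atom of mass exactly $M/2$ and is semistable but not stable, illustrating why the two notions only coincide in the balanced case.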
\medskip
Assume now that $H$ is the isometry group of $Y$ acting diagonally on $X$ and, hence, on $Z$.
The latter action preserves the diagonal $D$,
which we can regard as the chamber limit set of the Lie subgroup $H<G:=\operatorname{Isom}(X)\cong H^n$,
$D=\Lambda_{ch}(H)$.
Mumford's GIT defines the {\em Mumford quotient}
$$
Z//_a H= Z_{sst}//H.
$$
In the balanced case, we simply have
$$
Z//_a H= Z_{sst}//H= Z_{st}/H.
$$
A nice exercise is to prove directly that $Z_{sst}//H$ is compact and Hausdorff in this case. For instance, if $H=PSL(2,{\mathbb R})$, $Y={\mathbb H}^2$, $n=3$ and $a=(1,1,1)$ then $Z//_a H$ consists of exactly two points represented by configurations of three distinct points on the circle with different cyclic orders. Continuing with $Y={\mathbb H}^2$ and letting $n=4$, one verifies that for $a=(2,1,1,1)$ the Mumford quotient is homeomorphic to $S^1$, while for $a=(5,4,3,1)$ the
Mumford quotient is homeomorphic to the disjoint union of two circles. Taking $n=5$, one obtains that for
$a=(1,1,1,1,1)$ the Mumford quotient is the genus 4 oriented surface, while for $a= (5,4,1,1,1)$ the quotient is the disjoint union of two 2-spheres. Thus, we see that quotients can be non-homeomorphic for distinct choices of $a$.
We refer the reader to \cite[Theorem 2]{KM} for proofs of these descriptions of Mumford quotients using their identification with polygon spaces.
\begin{rem}
The hyperplanes $\sum_{i\in I} a_i = \sum_{j\notin I} a_j$ (called {\em walls}),
where $I$ runs through the subsets of $\{1,\ldots,n\}$,
partition the space
$$
A=\{(a_1,\ldots,a_n): a_i>0\}
$$
into open convex subsets called {\em chambers} (they are not fundamental domains for the $S_n$-action!).
The topology of $Z//_a H$ does not change as long as $a$ varies in a single chamber;
permuting the chambers does not change the topology either;
however, {\em crossing through a wall} amounts to a certain Morse surgery on the manifold $Z//_a H$.
This can be seen by identifying the quotients $Z//_a H$ with certain spaces of polygons with fixed side lengths:
In the case when $H=PSL(2,{\mathbb R})$, these are polygons in the euclidean plane, see \cite{KM}.
It was conjectured by Kevin Walker (in his undergraduate thesis written in 1986 under Bill Thurston; Walker was working with euclidean polygons) that, for $n\ge 5$, if $a, a'$ belong to chambers in distinct $S_n$-orbits, then the Mumford quotients are not homeomorphic. This conjecture was proven 20 years later in ``most'' cases by Farber, Hausmann and Sch\"utz \cite{FHS} and in full generality by Sch\"utz \cite{Schutz}. Similar results hold when the circle is replaced by a $k$-sphere; in fact, different quotients are distinguished by their intersection cohomology rings, see \cite{Schutz1, Schutz2}.
\end{rem}
Now, suppose that $\Gamma< H$ is a uniform lattice.
Then $\Lambda_{ch}(\Gamma)=D$.
The subgroup $\Gamma$, diagonally embedded in $G$, is uniformly regular in $G$: $\Gamma$ preserves the diagonally embedded copy of $Y$ in $X$,
and any geodesic segment in it has $\Delta$-length contained in the diagonal of $\Delta\cong[0,\infty)^n$.
We conclude that $\Gamma$ is $\Theta$-regular with $\Theta$ consisting of a single point,
namely the center of the model spherical chamber of $X$, represented by the unit vector
$$
\frac{1}{\sqrt{n}}(1,\ldots, 1).
$$
The group $\Gamma$ is quasiisometrically embedded in $H$ and hence in $G$. Thus, $\Gamma<G$ is URU.
Given a balanced metric thickening $\mathop{\hbox{Th}}\nolimits=\mathop{\hbox{Th}}\nolimits_a$, the domain of discontinuity $\Omega_{Th}(\Gamma)$ equals the set $Z_{st}$
of stable weighted $n$-point configurations in $S$ (stability is, of course, defined with respect to $a$).
We now specialize to the case when
$H=PSL(2,{\mathbb R})$ or $PSL(2,{\mathbb C})$
and $\Gamma$ is torsion-free.
Then the group $H$
acts freely and properly on $Z_{st}$, and we have a principal $H$-bundle
$$
H\to Z_{st}\to Z_{st}/H= Z//_a H.
$$
Dividing $Z_{st}$ by $\Gamma$ instead of $H$, we obtain a fiber bundle
$$
F \to Z_{st}/\Gamma\to Z_{st}/H,
$$
with fiber $F=\Gamma\backslash H$, the oriented orthonormal frame bundle over the manifold $Y/\Gamma$. In particular, by taking non-homeomorphic Mumford quotients $Z_{st}/H$,
we may obtain non-homeomorphic quotients $\Omega_{Th}/\Gamma=Z_{st}/\Gamma$.
For instance,
taking $H=PSL(2,{\mathbb R})$ and $n=4$, we obtain three distinct topological types of quotients: the empty quotient,
a connected nonempty quotient (a bundle over the circle with fiber $F$, the unit tangent bundle of a hyperbolic surface) and a disconnected quotient
(an $F$-bundle over $S^1 \sqcup S^1$).
\subsection {Finsler bordifications of locally symmetric spaces} \label{sec:bordif}
For a regular subgroup $\Gamma< G$ and a thickening $\mathop{\hbox{Th}}\nolimits\subset W$, we define the {\em Finsler thickening}
of the chamber limit set $\Lambda_{ch}(\Gamma)$ as follows.
First,
recall the definition \eqref{eq:FTH} of the thickening $\mathop{\hbox{Th}}\nolimits(\sigma)\subset \partial_{F\ddot u} X$ of a chamber $\sigma\in \partial_{F\ddot u} X$
inside the Furstenberg boundary,
and the definition \eqref{eq:star} of the star $\operatorname{st}(\tau)$ of a simplex $\tau$.
We then introduce the {\em Finsler thickening} of the chamber $\sigma$ as the union of small strata
$$\mathop{\hbox{Th}}\nolimits^{Fins}(\sigma)= \bigcup\bigl\{ S_{\tau}: \operatorname{st}(\tau)\subset \mathop{\hbox{Th}}\nolimits(\sigma)\bigr\} \subset \partial_{\infty}^{Fins}X .$$
Finsler thickenings of antipodal chambers are disjoint if $\mathop{\hbox{Th}}\nolimits$ is slim.
We obtain the {\em Finsler thickening} of the chamber limit set $\Lambda_{ch}(\Gamma)$
by taking the union of the Finsler thickenings of all limit chambers,
$$
\mathop{\hbox{Th}}\nolimits^{Fins}(\Lambda_{ch}(\Gamma))=\bigcup_{\sigma\in \Lambda_{ch}(\Gamma)} \mathop{\hbox{Th}}\nolimits^{Fins}(\sigma)\subset \partial_{\infty}^{Fins}X.
$$
This subset is closed, $\Gamma$-invariant and {\em saturated}, i.e. a union of small strata $S_{\tau}$.
We consider the domain at infinity
$$\Omega_{Th}^{Fins}(\Gamma)= \partial_{\infty}^{Fins} X - \mathop{\hbox{Th}}\nolimits^{Fins}(\Lambda_{ch}(\Gamma))$$
and the domain
$$ X\sqcup \Omega_{Th}^{Fins}(\Gamma) = \overline{X}^{Fins}- \mathop{\hbox{Th}}\nolimits^{Fins}(\Lambda_{ch}(\Gamma)).$$
Recall from section~\ref{Finsler-com}
that the Furstenberg boundary sits inside the Finsler boundary (as a big stratum),
$\partial_{F\ddot u} X\subset \partial_{\infty}^{Fins}X$,
and note that our domains in the latter extend the domains in the former,
$$ \Omega_{Th}^{Fins}(\Gamma) \cap\partial_{F\ddot u} X = \Omega_{Th}(\Gamma),$$
because
$\mathop{\hbox{Th}}\nolimits^{Fins}(\sigma)\cap\partial_{F\ddot u} X=\mathop{\hbox{Th}}\nolimits(\sigma)$.
Theorems~\ref{thm:finsPD}, \ref{thm:Fcocom}
and Corollary~\ref{cor:finsPDCC} below are
{\em Finsler extensions}
of Theorems \ref{thm:proper}, \ref{thm:CC}
and Corollary \ref{cor:flPDCC} about discrete group actions on the Furstenberg boundary $\partial_{F\ddot u} X\cong G/B$.
\begin{thm}[Finsler domains of proper discontinuity {\cite[Theorem 9.13]{bordif}}]
\label{thm:finsPD}
Suppose that $\Gamma< G$ is regular
and $\mathop{\hbox{Th}}\nolimits\subset W$ is a fat thickening.
Then the action
$$ \Gamma\curvearrowright X\sqcup \Omega_{Th}^{Fins}(\Gamma) $$
is properly discontinuous.
\end{thm}
We note that our construction of domains provides, more generally,
domains of proper discontinuity for the action
of {\em arbitrary} discrete subgroups $\Gamma< G$ on $\overline{X}^{Fins}$,
not only for subgroups which are $\tau_{mod}$-regular
for some $\tau_{mod}$
(see Theorems 9.16 and 9.18 of \cite{bordif}).
These more general domains involve complements to unions of Finsler thickenings of $\tau_{mod}$-limit sets
of the subgroups $\Gamma$ with $\tau_{mod}$ running through all the faces of $\si_{mod}$.
\begin{thm}[Nonemptiness {\cite[Prop.\ 9.20]{bordif}}]\label{thm:finsNE}
Suppose that $\Gamma< G$ is an RA subgroup,
$\mathop{\hbox{Th}}\nolimits\subset W$ is a slim thickening and $\mathop{\hbox{rank}}(X)\geq2$.
Then $\Omega_{Th}^{Fins}(\Gamma)$ is nonempty.
\end{thm}
Note that, unlike Theorem \ref{thm:nonempt}, this result does not exclude products of
symmetric spaces of type $B_2$ and $G_2$. It is also not limited to $\si_{mod}$-regular subgroups, but holds for all
$\tau_{mod}$-regular antipodal subgroups.
In order to address cocompactness,
we convert the action $\Gamma\curvearrowright \overline{X}^{Fins}$ to a topological convergence group action via the following $\Gamma$-invariant collapsing procedure:
Form a quotient of $\overline{X}^{Fins}$
by simultaneously collapsing the thickenings $\mathop{\hbox{Th}}\nolimits^{Fins}(\sigma)$ for all $\sigma\in \Lambda_{ch}(\Gamma)$
to points. Let $Z$ denote the resulting quotient space,
and $\Lambda$ the projection of
$\mathop{\hbox{Th}}\nolimits^{Fins}(\Lambda_{ch}(\Gamma))$ to $Z$. Then $\Lambda$ is equivariantly homeomorphic to $\Lambda_{ch}(\Gamma)$.
\begin{thm}[{\cite[Corollary 11.7, Lemma 11.9]{bordif}}]\label{thm:ca}
If $\Gamma<G$ is an RA subgroup and $\mathop{\hbox{Th}}\nolimits\subset W$ is a balanced thickening,
then the (obviously compact) quotient space $Z$ is metrizable
and $$\Gamma \curvearrowright Z$$ is a convergence group action with limit set $\Lambda$.
\end{thm}
The last theorem is yet another indication of the ``rank 1 nature'' of RA subgroups $\Gamma< G$.
It is used in \cite{bordif} to prove:
\begin{thm}[Finsler cocompactness {\cite[Theorem 11.11]{bordif}}]
\label{thm:Fcocom}
Suppose that $\Gamma<G$ is an RCA subgroup and $\mathop{\hbox{Th}}\nolimits\subset W$ is a slim thickening.
Then the action $\Gamma\curvearrowright X\sqcup \Omega_{Th}^{Fins}(\Gamma)$
is cocompact.
\end{thm}
Combining Theorems \ref{thm:finsPD} and \ref{thm:Fcocom} we obtain:
\begin{cor}\label{cor:finsPDCC}
If $\Gamma<G$ is an RCA subgroup
and $\mathop{\hbox{Th}}\nolimits\subset W$ is a balanced thickening,
then the action $\Gamma\curvearrowright X\sqcup \Omega_{Th}^{Fins}(\Gamma)$ is properly discontinuous and cocompact.
\end{cor}
Note that in this result in the $\si_{mod}$-regular case one does not need antipodality of the limit set to conclude proper discontinuity and cocompactness, provided that $\mathop{\hbox{Th}}\nolimits$ is a {\em metric thickening} associated with a {\em nearly root element} $\bar\theta$ (see the May 2015 version of the preprint \cite{bordif} for the details).
The RCA assumption is, however, needed in the general $\tau_{mod}$-regular case.
We apply our construction of domains to obtain bordifications and compactifications of locally symmetric spaces:
\begin{cor}
\label{cor:lcsm}
1. For each regular subgroup $\Gamma< G$, the locally symmetric orbifold
$X/\Gamma$ admits a real-analytic bordification as an orbifold with corners\footnote{{See Appendix \ref{sec:corners} for the precise definition.}}
$$
\left(X\sqcup \Omega_{Th}^{Fins}(\Gamma)\right)/\Gamma,
$$
provided that $\mathop{\hbox{Th}}\nolimits\subset W$ is fat. When this quotient is treated as an orbifold with boundary, the boundary of this
orbifold is $(\Omega_{Th}^{Fins}(\Gamma))/\Gamma$.
2. If $\Gamma$ is RCA and $\mathop{\hbox{Th}}\nolimits$ is balanced, then this bordification of $X/\Gamma$ is a compact orbifold with corners. \end{cor}
\begin{rem}
This corollary implies the {\em topological tameness} of the orbifold $X/\Gamma$. However, topological tameness is a weaker property than the existence of a compactification given by the corollary. For instance, considering finitely generated discrete subgroups $\Gamma< PSL(2,{\mathbb C})$, all quotient spaces ${\mathbb H}^3/\Gamma$ of such groups are topologically tame, but for many groups $\Gamma$ the bordification
$$
({\mathbb H}^3\cup \Omega(\Gamma))/\Gamma
$$
is not compact. The latter happens, for instance, for singly degenerate groups.
\end{rem}
We show furthermore a converse to the cocompactness part of Theorem \ref{thm:Fcocom},
implying that Anosov subgroups are characterized among uniformly regular subgroups by
the cocompactness of their action on complements to balanced Finsler thickenings.
More generally, we consider the following property
of admitting cocompact domains of proper discontinuity in the Finsler compactification:
\begin{dfn}
[$S$-cocompact {\cite[Def.\ 12.4]{bordif}}]
We say that a discrete subgroup $\Gamma<G$ is {\em $S$-cocompact}
if there exists a $\Gamma$-invariant saturated open subset $\Omega_{\infty}\subset\partial_{\infty}^{Fins}X$
such that the action
$\Gamma\curvearrowright X\sqcup\Omega_{\infty}$
is properly discontinuous and cocompact.
\end{dfn}
\begin{remark}
{The terminology $S$-cocompact comes from ``saturated,'' although the letter S also evokes ``Satake'' and ``stratified,'' which appear naturally in this context.}
\end{remark}
A useful implication of $S$-cocompactness is given by:
\begin{thm}[Cocompactness implies retract {\cite[Thm. 12.5]{bordif}}]
\label{thm:S->retract}
$S$-cocompact discrete subgroups $\Gamma<G$ are coarse retracts\footnote{cf.\ Definition \ref{defn:retract}}.
In particular, they are undistorted.
\end{thm}
Combining this theorem with the fact that URU subgroups are Anosov, we obtain:
\begin{thm}[Cocompactness implies Anosov {\cite[Thm.\ 1.9]{bordif}}]
\label{thm:URS->A}
$S$-co\-com\-pact uniformly regular subgroups $\Gamma<G$ are Anosov.
\end{thm}
We conclude:
\begin{cor}
[Dynamical characterizations of Anosov subgroups II: actions on Finsler compactifications
{\cite[Cor.\ 1.10]{bordif}}]\label{cor:S-coco}
For uniformly regular subgroups $\Gamma<G$,
the following properties are equivalent:
(i) Anosov
(ii) $S$-cocompact
(iii) coarse retract
\end{cor}
Combining Corollary \ref{cor:S-coco} with Theorem \ref{thm:main}, we obtain
a higher rank analogue of the Equivalence Theorem for convex cocompact groups of isometries of rank 1 symmetric spaces, see Theorem \ref{thm:main1}, with the conditions CC0, CC1 and CC8 excluded as inappropriate in higher rank.
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig21.pdf}
\caption{The action of a cyclic subgroup $\Gamma=\<\gamma\>$ on the Finsler compactification $\overline{F}^{Fins}_{mod}$
of the model flat and the quotient space of $\overline{F}^{Fins}_{mod}- \mathop{\hbox{Th}}\nolimits^{Fins}(\Lambda_{ch})$ by the $\Gamma$-action.}
\label{figure16.fig}
\end{figure}
\begin{example}
We now work out an example illustrating these results.
Consider an infinite cyclic subgroup $\Gamma=\<\gamma\><PGL(3,{\mathbb R})$
generated by a regular hyperbolic isometry $\gamma$.
For simplicity, we only describe the action on the Finsler compactification
of the unique invariant maximal flat $F\subset X$.
The Finsler compactification $\overline F^{Fins}$ is a hexagon with vertices $v_1,\ldots,v_6$ and edges $e_1,\ldots,e_6$.
The vertex set equals the Furstenberg boundary,
$\partial_{F\ddot u} F=\{v_1,\ldots,v_6\}$.
We label the vertices so that $v_1$ and $v_4$ correspond to the repulsive and attractive chambers $\sigma_-,\sigma_+\in\partial_{F\ddot u} F$.
The vertices are fixed by $\gamma$, but $\gamma$ has nontrivial dynamics on the edges:
The interior points of each edge $e_i=[v_i, v_{i+1}]$ are moved by $\gamma$ towards one of the two endpoints of $e_i$,
namely to the one which corresponds to the chamber in $\partial_{F\ddot u} F$
whose position relative to the attractive chamber $\sigma_+$
is smaller in the Bruhat order.
This is in stark contrast with the action of $\gamma$ on the visual boundary of $\partial_{\infty} F$ (with respect to the flat metric),
which is fixed pointwise.
The chamber limit set $\Lambda_{ch}(\Gamma)\subset\partial_{F\ddot u} X$
is the 2-point set $\{\sigma_-, \sigma_+\}=\{v_1, v_4\}\subset\partial_{F\ddot u} F$.
The balanced thickening of $\Lambda_{ch}(\Gamma)$ inside $\partial_{\infty}^{Fins} F$ is the union (of closed edges)
$$
\mathop{\hbox{Th}}\nolimits^{Fins}(\sigma_-) \cup \mathop{\hbox{Th}}\nolimits^{Fins}(\sigma_+)= \left( e_3 \cup e_4\right) \cup \left( e_1 \cup e_6\right).
$$
The intersection
$$
\Omega=\Omega(\Gamma)= \Omega_{Th}^{Fins} (\Gamma) \cap \partial_{\infty}^{Fins}F
$$
is the union of the interiors of the edges $e_2$ and $e_5$.
The rectangle $\Phi$ in Figure \ref{figure16.fig} is a (compact) fundamental domain for the action of $\Gamma$ on $F\cup \Omega$.
The quotient $\Omega/\Gamma$ is homeomorphic to the cylinder $S^1 \times [-1,1]$.
Now, let us collapse each thickening
$\mathop{\hbox{Th}}\nolimits^{Fins}(\sigma_-), \mathop{\hbox{Th}}\nolimits^{Fins}(\sigma_+)$ to a point. The result is a convergence action of $\Gamma$ on the quotient space $Q$,
homeomorphic to the closed 2-disk $D^2$. {Note that collapsing is natural here since, before the collapse,
the mapping $\gamma$ has too many fixed points in $\partial_{\infty}^{Fins}F$, namely all vertices $v_1,\ldots,v_6$, while
an infinite cyclic group acting as a discrete convergence group can have at most two fixed points \cite{Tukia1994}.
After the collapse only two fixed points are left, namely the projections (still denoted $\sigma_+, \sigma_-$) of $v_1$ and $v_4$.
On the quotient space $Q$ we recover the familiar attractive-repulsive dynamics of hyperbolic isometries $\gamma$ of ${\mathbb H}^2$ acting on the visual compactification of ${\mathbb H}^2$: The point $\sigma_+$ is the attractive point and the point $\sigma_-$ is the repulsive point for the action of $\gamma$:
$$
\lim_{n\to\infty} \gamma^n= \sigma_+,
$$
uniformly on compacts in $Q - \{\sigma_-\}$, and
$$
\lim_{n\to\infty} \gamma^{-n}= \sigma_-,
$$
uniformly on compacts in $Q - \{\sigma_+\}$.}
\end{example}
\begin{example}
[A product example] We continue with Example \ref{ex:product-case}
of a cyclic isometry subgroup $\Gamma$ of the product $X=X_1\times X_2$ of two hyperbolic spaces.
The Finsler compactification of $X$ is naturally homeomorphic to $\overline{X}_1\times \overline{X}_2$.
Assume that $g=(g_1, g_2)$ where $g_1$ is hyperbolic (with the fixed points $\lambda_1^+, \lambda_1^-$) and $g_2$ is parabolic (with the fixed point $\lambda_2$). As we noted in Example \ref{ex:product-case}, the group $\Gamma=\<g\> < \operatorname{Isom}(X)$ is regular but not uniformly regular. Therefore, it is not Anosov. On the other hand, it is $S$-cocompact. Namely, it acts properly discontinuously and cocompactly on
$$
(\overline{X}_1 - \{\lambda_1^-, \lambda_1^+\} ) \times \overline{X_2}.
$$
In particular,
$\Gamma$ is a coarse retract, and hence undistorted. Thus, uniform regularity cannot be weakened to regularity
in Theorems \ref{thm:main} (item 8), \ref{thm:URS->A} and Corollary~\ref{cor:S-coco}.
\end{example}
\begin{figure}[tbh]
\includegraphics[width=90mm]{fig22.pdf}
\caption{Collapsing thickened limit set for the action of a cyclic subgroup $\Gamma=\<\gamma\>$ on the Finsler compactification of the model flat.}
\label{figure17.fig}
\end{figure}
\begin{rem}[Relation with the work \cite{GW,GGKW2}]
(i)
We note that
the existence of an orbifold with boundary compactification of locally symmetric quotients by
Anosov subgroups of some special classes of simple Lie groups (namely, $Sp(2n,{\mathbb R}), SU(n,n), SO(n,n)$)
appeared in \cite{GW}.
(ii)
The main results of this section
dealing with Finsler bordifications of locally symmetric spaces
(Theorems \ref{thm:finsPD}, \ref{thm:Fcocom}
and Corollaries~\ref{cor:finsPDCC}, \ref{cor:lcsm})
are contained in the second version of \cite{bordif}.
After that work had been completed, the e-print \cite{GGKW2} was posted,
also addressing the compactification of locally symmetric spaces.
Theorem~1.2 there provides compactifications of $X/\Gamma$ as orbifolds with corners
via the maximal Satake compactification of $X$
for $\tau_{mod}$-Anosov subgroups $\Gamma<G$ of special face types $\tau_{mod}$.
However,
Theorem~1.1 of \cite{GGKW2} dealing with the general case
still lacks a complete proof.
It remains unclear
whether the compactifications constructed there
are orbifolds with corners.
Namely,
the approach uses ``generalized'' Satake compactifications,
but the proof of Lemma A9, establishing that the latter are manifolds with corners,
lacks details.
Note also that in the first version of \cite{GGKW2}
there was a {basic} mistake in the cocompactness argument.
It was corrected in the third version
using methods from \cite{coco13}.
\end{rem}
\bigskip
\section{Future directions}\label{sec:5}
{\bf Regular antipodal subgroups.} One can think of the class RA of {\em regular antipodal} (or URA: uniformly regular antipodal) subgroups of $G$ as {\em discrete subgroups exhibiting rank 1 behavior}: We saw several examples of this at work. Dropping conicality, we obtain (ignoring the issue of parabolic elements) an analogue of (rank 1) Kleinian groups without any geometric finiteness assumptions. Quite likely, this class by itself deserves some attention. More generally, one can define {\em uniformly rank $k$} discrete subgroups $\Gamma< G$ as those whose visual limit sets $\Lambda(\Gamma)\subset \partial_{\infty} X$ are disjoint from the codimension $k$ skeleton of the Tits boundary of $X$.
A {\em limit simplex} of $\Gamma$ is a simplex $\tau\subset\partial_{\infty} X$ which contains a limit point.
\medskip
{\em Geometric finiteness} by no means should be limited to subclasses of uniformly regular
(and, more generally, uniformly rank $k$)
discrete subgroups. Here is a {\em wish list} for a good
notion of geometric finiteness in higher rank:
\medskip
\noindent {\bf A. Conjectural properties of geometrically finite subgroups:}
1. Geometrically finite groups should be stable under small deformations, provided that we have the {\em right algebraic restrictions} (yet to be determined) on the deformations of representations. In rank 1, such restrictions amount to a certain control on the deformations of maximal parabolic subgroups, see \cite{Bowditch-stab}.
2. The locally symmetric quotient spaces $X/\Gamma$ of geometrically finite groups should admit geometrically natural compactifications,
by attaching quotients of domains of discontinuity at infinity and compactifying ``cusps'', incorporating (as a special case) the Borel--Serre compactifications (see e.g. \cite{Borel-Ji})
of locally-symmetric spaces of finite volume. In particular, such groups should be finitely presented.
3. Algebraically speaking, geometrically finite groups should be (suitably relativized) semihyperbolic groups.
Note that semihyperbolic groups (cf. \cite[III.$\Gamma$.4]{BH}) represent a coarsification of the notion of CAT(0) groups. Any notion of relative semihyperbolicity should include relatively hyperbolic groups as a special case.
We refer the reader to \cite{KR} for one possible definition of relatively semihyperbolic groups.
4. Geometric finiteness should be semidecidable.
5. Geometric finiteness should be stable under embeddings of the ambient Lie groups:
If $\Gamma< G_1$ is geometrically finite and $\phi: G_1\to G_2$ is an embedding of semisimple (or reductive) Lie groups, then $\phi(\Gamma)< G_2$ is again geometrically finite. (Note that this fails, in general, for Anosov subgroups.)
6. Geometric finiteness should be stable under taking finite index subgroups: If $\Gamma_1< \Gamma_2$ is a finite index subgroup, then $\Gamma_2< G$ is geometrically finite iff $\Gamma_1< G$ is geometrically finite.
\bigskip
\noindent {\bf B. A conjectural class of geometrically finite subgroups should include:}
1. Direct products of geometrically finite groups: If $\Gamma_i< G_i$ are geometrically finite, $i=1, 2$, then
$\Gamma_1\times \Gamma_2< G_1 \times G_2$ is also geometrically finite.
2. All geometrically finite groups in rank one (allowing parabolics).
3. Standard representations of Coxeter groups \cite[Sect. V.4]{Bourbaki}.
4. Groups of projective transformations acting properly discontinuously on bounded convex domains in the affine space such that the quotient has finite volume with respect to the Hilbert metric.
5. Lattices in semisimple algebraic Lie groups (of any rank).
\medskip
At this point, the following two definitions of geometric finiteness, in the setting of discrete groups containing no parabolic elements, appear to be most promising:
\begin{definition}
$\Gamma$ is geometrically finite if it is $S$-cocompact.
\end{definition}
For instance, every convex cocompact subgroup $\Gamma< G$ in the sense of \cite{convcoco, Quint} is $S$-co\-com\-pact.
Every Anosov subgroup $\Gamma< G$ is $S$-cocompact as well.
\begin{definition}
$\Gamma<G$ is geometrically finite if it is an equivariant coarse retract (cf.\ Definition~\ref{defn:retract}).
\end{definition}
Note that, according to Theorem \ref{thm:S->retract}, $S$-cocompactness implies
being an equivariant coarse retract.
In particular, either definition implies semihyperbolicity of $\Gamma$.
Moreover, the definition using retractions is clearly stable under embeddings of Lie groups as mentioned above.
\section{Appendix. Horofunction compactification}
\label{sec:horoboundary}
Let $(Y,d)$ be a locally compact geodesic metric space. For each $y\in Y$ define the 1-Lipschitz function
$d_y= d(y, \cdot)$ on $Y$. This leads to the embedding $\kappa: Y\to C(Y)=C(Y,{\mathbb R})$, $y\mapsto d_y$. We let ${\mathbb R}\subset C(Y)$ denote the linear subspace of constant functions. Composing the embedding $\kappa$ with the projection $C(Y)\to C(Y)/{\mathbb R}$ we obtain the {\em Kuratowski embedding} of $Y$,
$$
Y\hookrightarrow C(Y)/{\mathbb R}.
$$
Then $\overline{Y}$, the closure of $Y$ in ${C(Y)}/{\mathbb R}$, is the {\em horofunction compactification} of $Y$.
Functions representing points in $\partial_{\infty} Y= \overline{Y}- Y$ are the {\em horofunctions} on $Y$. In other words, horofunctions on $Y$ are limits (uniform on compacts in $Y$) of sequences of normalized distance functions $d_{y_i} - d_{y_i}(o)$, where $y_i\in Y$ are divergent sequences in $Y$. Each geodesic ray $r(t)$ in $Y$ determines a horofunction in $Y$ called a {\em Busemann function} $b_r$, which is the subsequential limit
$$
\lim_{i\to\infty} d_{r(i)} - d_{r(i)}(o).
$$
If $Y$ is a CAT(0) space, then each limit as above exists (without passing to a subsequence). Furthermore, each horofunction is a Busemann function. This yields a topological identification of the visual compactification of $Y$ and its horofunction compactification. Level sets of Busemann functions are called {\em horospheres} in $Y$. The point $r(\infty)\in \partial_{\infty} Y$ is the {\em center} of the horosphere $\{b_r=c\}$. We refer the reader to \cite{Gromov_hypmfs, Ballmann} for further details and to \cite{bordif} for the detailed treatment of this construction in the case of nonsymmetric metrics.
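As a simple illustration (a standard computation, added here for concreteness), in euclidean space every Busemann function is affine:

```latex
% Busemann function of the ray r(t) = o + tv, |v| = 1, in Y = R^n:
b_r(x) \;=\; \lim_{i\to\infty} \big( |x - o - iv| - i \big)
       \;=\; -\langle x - o,\, v\rangle .
% Hence the horospheres \{b_r = c\} are the hyperplanes orthogonal to v,
% all centered at the point r(\infty) of the visual boundary.
```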
\section{Appendix. Expanding and relatively expanding actions}
\label{sec:expanding_actions}
Let $(Z,d)$ be a compact metric space. A map $f: Z\to Z$ is said to be {\em metrically expanding} at a point
$z\in Z$ if there exist a neighborhood $U$ of $z$ in $Z$ and a number $c > 1$ (an {\em expansion factor})
such that for all $z', z''\in U$,
$$
d(f(z'), f(z''))\ge c d(z', z'').
$$
A sequence of maps $f_n: Z\to Z$ is said to have {\em metrically diverging expansion} at $z\in Z$ if there exists a
system of neighborhoods $U_n$ of $z$ and expansion factors $c_n\to \infty$ such that each $f_n|_{U_n}$ expands
with the expansion factor $c_n$.
A topological action $\Gamma\curvearrowright Z$ is said to be {\em expanding} at a $\Gamma$-invariant subset
$E\subset Z$ if for each $z\in E$ there exists $\gamma\in \Gamma$ which is metrically expanding at $z$.
The expansion concepts have infinitesimal versions in the case of diffeomorphisms and smooth group actions on Riemannian manifolds. Suppose that $M$ is a Riemannian manifold and $f: M\to M$ is a diffeomorphism. The {\em infinitesimal expansion factor} of $f$ at a point $x\in M$ is the number
$$
\epsilon(f, x)= \inf_{u\in U_xM} |df(u)|
$$
where $U_xM$ is the unit sphere in $T_xM$.
A smooth map $f: M\to M$ is said to be {\em infinitesimally expanding} at $x$ if $\epsilon(f,x)>1$. It is easily seen that a smooth map is infinitesimally expanding at $x$ iff it is metrically expanding at $x$. A sequence of smooth maps $f_n: M\to M$ is said to have {\em diverging infinitesimal expansion} at $x\in M$ if
$$
\lim_{n\to\infty} \epsilon(f_n, x)=\infty.
$$
{A group of diffeomorphisms $\Gamma< Diff(M)$ is {\em infinitesimally expanding} at a
subset $Z\subset M$ if for every $z\in Z$ there exists $\gamma\in \Gamma$ which is infinitesimally expanding at $z$.}
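A standard illustration of these notions (our example, not taken from the surrounding text) is the doubling map on the circle:

```latex
% Doubling map on M = S^1 = R / 2\pi Z:
f(\theta) = 2\theta \ (\mathrm{mod}\ 2\pi), \qquad |df_\theta(u)| = 2|u| ,
% so \epsilon(f, \theta) = 2 > 1 at every point: f is infinitesimally
% (equivalently, metrically) expanding everywhere. Its iterates
f_n = f^{\circ n}, \qquad \epsilon(f_n, \theta) = 2^n \longrightarrow \infty ,
% have diverging infinitesimal expansion at every \theta \in S^1.
```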
\medskip
More generally, one defines {\em relatively expanding actions} of groups on metric spaces. Suppose again that
$(Z,d)$ is a compact metric space, $\Gamma\curvearrowright Z$ is a topological group action preserving a compact subset $E\subset Z$.
Suppose, furthermore, that $\pi: E\to\Lambda$ is a continuous map, which is equivariant with respect to actions
$\Gamma\curvearrowright E, \Gamma\curvearrowright \Lambda$. We let $E_\lambda=\pi^{-1}(\lambda)$ for $\lambda\in \Lambda$. The action
of $\Gamma$ is said to be {\em relatively expanding} at $E$ with respect to $\pi$
(or {\em expanding relative to} $\pi: E\to \Lambda$)
if:
For each $\lambda\in \Lambda$ there exist a neighborhood $U_\lambda$ of $E_\lambda$ in $Z$, a number $c>1$ and an element $\gamma\in \Gamma$ such that for all $E_{\lambda'}\subset U_\lambda$ and $z\in U_\lambda$,
$$
d(\gamma(z), \gamma E_{\lambda'})\ge c d(z, E_{\lambda'}).
$$
Here the distance $d(z, W)$ from a point $z\in Z$ to a subset $W\subset Z$ is
$$
d(z, W):= \inf_{w\in W} d(z,w).
$$
Such relatively expanding actions frequently appear with the sets $E_\lambda$ being {\em stable sets} of the action, when the dynamics of $\gamma$ inside $E_{\lambda}$ is complicated (say, non-expanding), but is still relatively expanding with respect to $\pi$. One can think of this setting as {\em actions expanding transversally to the fibers of $\pi$}.
\begin{lemma}
[\cite{coco13, coco15}]\label{lem:exp_coco}
If $\Gamma\curvearrowright Z$ is expanding relative to $\pi: E\to\Lambda$ then $(Z- E)/\Gamma$ is compact (not necessarily Hausdorff, of course).
\end{lemma}
The idea of the proof is that, if $V$ is a sufficiently small neighborhood of $E$ in $Z$, then $V$ cannot contain the entire $\Gamma$-orbit as some points of the orbit will be repulsed away from $E$ (into the complement of $V$) by an expanding element $\gamma\in\Gamma$.
{
\begin{example}
Below are two examples of expanding actions with non-Hausdorff quotients:
1. $Z=S^1$, ${\mathbb Z}\cong \Gamma < Isom(S^1)$, $\Lambda = E=\emptyset$. Then the action of $\Gamma$ is expanding
relative to $\pi: E\to\Lambda$, but every $\Gamma$-orbit is dense in $S^1$. In particular, $S^1/\Gamma$ is infinite with trivial topology.
2. A more interesting example is given by a cocompact Fuchsian subgroup $\Gamma < PSL(2, {\mathbb R})$ and its product action
on $Z=S^1\times S^1$. We let $E=\Lambda$ be the diagonal in $S^1\times S^1$ with the identity map $\pi: E\to \Lambda$.
The action $\Gamma\curvearrowright Z$ is expanding relative to $\pi$; this can be seen, for instance, by observing that the action
$\Gamma\curvearrowright S^1$ is infinitesimally expanding. On the other hand, $(Z - E)/\Gamma$ is non-Hausdorff since the action
$\Gamma\curvearrowright Z$ is ergodic and, hence, almost every orbit is dense.
\end{example}
}
\section{Appendix. Abstract convergence actions and groups}\label{app:congru}
Let $Z$ be a compact metric space which consists of at least three points. We define the space $TZ$ to be the subset of $Z^3$ consisting of triples of pairwise distinct points in $Z$. Every topological action $\Gamma\curvearrowright Z$ induces a topological action $\Gamma \curvearrowright TZ$.
\begin{definition}[Convergence action]
An action $\Gamma\curvearrowright Z$ is called a {\em convergence action} and the image of $\Gamma$ in $\operatorname{Homeo}(Z)$ is said to be a {\em convergence group} if one of the following equivalent conditions holds:
(i) The action $\Gamma \curvearrowright TZ$ is properly discontinuous.
(ii) For every sequence $\gamma_n\to\infty$ in $\Gamma$
there exist points $z_{\pm}\in Z$ and a subsequence of
$(\gamma_n)$ which converges to the constant map $\equiv z_+$
uniformly on compacts in $Z-\{z_-\}$. The points $z_+$ and $z_-$ are called the {\em limit point} (or the {\em attractor}) and the {\em exceptional point} (or the {\em repeller}) of this subsequence.\footnote{Of course, it might happen that $z_-=z_+$.}
A convergence action $\Gamma\curvearrowright Z$ is said to be {\em uniform}
if the action $\Gamma\curvearrowright TZ$ is cocompact.
\end{definition}
A proof of the equivalence of conditions (i) and (ii) can be found in
\cite{Bowditch_config}.
The main example of convergence actions comes from the following fact:
Every discrete group $\Gamma$ of isometries of a proper Gromov hyperbolic geodesic metric space $X$ acts as a convergence group on the Gromov boundary $\partial_{\infty} X$ of $X$.
Furthermore, every word hyperbolic group $\Gamma$ acts on its Gromov boundary $\partial_{\infty} \Gamma$ as a uniform convergence group. See e.g. \cite{Tukia1994}.
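The convergence property is easy to see in the most basic rank 1 example (a standard computation, included here for illustration): a cyclic group generated by a loxodromic Möbius transformation.

```latex
% \Gamma = \langle\gamma\rangle < PSL(2,C) acting on
% Z = \hat{C} = \partial_\infty H^3, with
\gamma(z) = \lambda z, \qquad |\lambda| > 1 .
% On every compact K \subset \hat{C} - \{0\} one has
\gamma^n|_K \longrightarrow \text{const} \equiv \infty \quad (n\to +\infty),
% so z_+ = \infty is the attractor and z_- = 0 the repeller of (\gamma^n);
% for (\gamma^{-n}) the roles of 0 and \infty are interchanged.
% Here \Lambda(\Gamma) = \{0, \infty\} and the action is elementary.
```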
Bowditch proved that, vice versa,
this dynamical behavior characterizes the natural actions of
word hyperbolic groups
on their boundaries:
\begin{thm}[{\cite[Thm.\ 0.1]{Bowditch_char}}]
\label{thm:charhypbow}
Let $\Gamma \curvearrowright Z$ be a uniform convergence action
on a nonempty perfect\footnote{Recall that a topological space is called perfect if it has no isolated points.}
compact metrizable space. Then $\Gamma$ is word hyperbolic and $Z$ is equivariantly homeomorphic
to $\partial_{\infty}\Gamma$.
\end{thm}
For every convergence action $\Gamma\curvearrowright Z$ one defines the {\em limit set}, the {\em conical limit set} and the domain
of discontinuity (which is the same as the domain of proper discontinuity).
\begin{definition}
A sequence $(\gamma_n)$ in $\Gamma$ is said to {\em converge} to a point $z_+\in Z$, written $\gamma_n\to z_+$,
if every subsequence of $(\gamma_n)$ contains a further subsequence which converges to $z_+$ uniformly on compacts in $Z- \{z_-\}$, for some $z_-\in Z$ (depending on the subsequence).
\end{definition}
\begin{definition}
[See Section 8 of \cite{Bowditch_char}]
A sequence $(\gamma_n)$ in $\Gamma$ which converges to a point $z\in Z$ is said to {\em converge conically} to $z$ if for every point
$\hat{z}\in Z - \{z\}$, the sequence of pairs $\gamma_n^{-1}(z, \hat{z})$ is relatively compact in $Z^2 -Diag(Z^2)$.
\end{definition}
\begin{definition}
The {\em limit set} $\Lambda(\Gamma)\subset Z$ of a convergence action $\Gamma\curvearrowright Z$ is the subset consisting of limits
$z$ of sequences $\gamma_k\to z$, $\gamma_k\in \Gamma$. The {\em conical limit set} $\Lambda_c(\Gamma)\subset Z$
is the subset consisting of conical limits of sequences $\gamma_k\in \Gamma$.
\end{definition}
Both $\Lambda(\Gamma)$ and $\Lambda_c(\Gamma)$ are $\Gamma$-invariant; the limit set $\Lambda(\Gamma)$ is closed, while the conical limit set $\Lambda_c(\Gamma)$, in general, is not closed. The {\em domain of discontinuity} of the action $\Gamma\curvearrowright Z$ is the complement
$Z - \Lambda(\Gamma)$ and is denoted $\Omega(\Gamma)$. The action of $\Gamma$ on $\Omega(\Gamma)$ is properly discontinuous. An action $\Gamma\curvearrowright Z$ is called {\em elementary} if $\Lambda(\Gamma)$ contains at most two points and {\em nonelementary} otherwise. The limit set of every nonelementary convergence action is perfect.
In the case when $\Gamma$ is a regular antipodal subgroup of the isometry group $G$ of a symmetric space $X$ and
$Z=\Lambda_{ch}(\Gamma)\subset \partial_{F\ddot u} X$, we refer to conical limit points for the convergence action $\Gamma\curvearrowright Z$ as {\em intrinsically conical} in order to distinguish this notion of conicality from the {\em extrinsic notion} described in Sections \ref{sec:conical_convergence} and \ref{sec:conical_convergence2}.
\begin{thm}[{\cite[Thm.\ 8.1]{Bowditch_char}}, \cite{Tukia}]
\label{thm:bowditch-conical}
A convergence action $\Gamma \curvearrowright Z$ on a perfect compact metric space $Z$
is uniform if and only if $Z=\Lambda_c(\Gamma)$, i.e., every point of $Z$ is a conical limit point of $\Gamma$.
\end{thm}
We now fix a metric $d$ on the metrizable space $Z$.
\begin{definition}
A convergence action $\Gamma\curvearrowright (Z,d)$ is {\em expanding} if it is expanding at $\Lambda=\Lambda(\Gamma)$ in the sense of Section \ref{sec:expanding_actions}.
\end{definition}
The following theorem is proven in \cite{morse} using a different method:
\begin{thm}\label{thm:conical}
Each nonelementary expanding convergence action $\Gamma\curvearrowright Z$ restricts to a uniform action on $\Lambda$. In particular, if $Z$ is perfect then for every expanding convergence action $\Gamma\curvearrowright Z$, all limit points of $\Gamma$ are conical.
\end{thm}
\par\medskip\noindent{\it Proof. } Our argument will use, and will illustrate, a generalization of the concept of {\em thickening} discussed earlier in the context of group actions on higher rank symmetric spaces and flag manifolds. For each $\lambda\in \Lambda=\Lambda(\Gamma)$ define its {\em thickening} $Th(\lambda)\subset \Lambda^3$,
$$
Th(\lambda)= \left(\{\lambda\} \times \{\lambda\} \times \Lambda\right) \cup \left(\{\lambda\} \times \Lambda \times \{\lambda\}\right) \cup \left(\Lambda \times \{\lambda\} \times \{\lambda\}\right).
$$
Clearly, $Th(\gamma \lambda)= \gamma Th(\lambda), \lambda\in \Lambda, \gamma\in \Gamma$.
Note that the subsets $Th(\lambda)$ are pairwise disjoint (i.e. the thickening is {\em slim}).
Then
$$
Th(\Lambda):= \bigcup_{\lambda\in \Lambda} Th(\lambda)
$$
is the {\em large diagonal} in $\Lambda^3$. Of course,
$$
T\Lambda= \Lambda^3 - Th(\Lambda).
$$
We have the $\Gamma$-equivariant fibration
$$
\pi: Th(\Lambda)\to \Lambda, \quad Th(\lambda)\to \{\lambda\}.
$$
We equip $\Lambda^3$ with the following product metric induced from the metric $d$ on $Z$:
$$
d^2((z_1, z_2, z_3), (w_1, w_2, w_3))= d^2(z_1, w_1) + d^2(z_2, w_2) + d^2(z_3, w_3).
$$
The fact that the action $\Gamma \curvearrowright \Lambda$ is expanding translates to the statement that the action $\Gamma\curvearrowright \Lambda^3$ is expanding relative to $\pi: Th(\Lambda)\to \Lambda$. Indeed, for $z=(z_1, z_2, z_3)$, $z_i\in \Lambda$, and $\lambda'\in \Lambda$,
$$
d(z, Th(\lambda'))= \min\left( \sqrt{d^2 (z_1, \lambda') + d^2(z_2, \lambda')}, \sqrt{d^2 (z_1, \lambda') + d^2(z_3, \lambda')},
\sqrt{d^2 (z_2, \lambda') + d^2(z_3, \lambda')} \right).
$$
The expansion condition for the action of $\gamma\in \Gamma$ on a neighborhood $U\subset \Lambda$ of $\lambda\in \Lambda$ (containing the points $z_1, z_2, z_3$ and $\lambda'$) implies that
$$
d(\gamma z_i, \gamma \lambda') \ge c d(z_i, \lambda'), \quad c>1, i=1, 2, 3.
$$
From this we conclude that
$$
d(\gamma z, Th(\gamma \lambda')) \ge c d(z, Th(\lambda')),
$$
which means relative expansion. Therefore, according to Lemma \ref{lem:exp_coco}, applied to the action $\Gamma\curvearrowright (\Lambda^3, Th(\Lambda))$, the action $\Gamma\curvearrowright T\Lambda= \Lambda^3 - Th(\Lambda)$ is cocompact.
In other words, $\Gamma\curvearrowright \Lambda$ is a uniform convergence action. \qed
\section{Appendix. Model spaces: Symmetric spaces and buildings}
\label{sec:modsp}
In these lectures we use two classes of buildings: Spherical and euclidean\footnote{Although the latter are only mentioned in passing in Sections 2--5.}. Spherical buildings were introduced by Tits in order to generalize {\em incidence geometry} from classical groups to general semisimple Lie groups; they emerged as an important geometric tool for studying the geometry of symmetric spaces and Lie groups. Similarly, euclidean buildings were introduced by Bruhat and Tits as a tool for studying algebraic groups over fields with nonarchimedean valuations, e.g. the $p$-adic numbers ${\mathbb Q}_p$. A way to think about buildings is as hybrids of simplicial complexes and manifolds equipped with geometric structures: From manifolds they acquire an atlas, from simplicial complexes they acquire certain discrete features. We first define {\em model spaces} which include both symmetric spaces and buildings and then add more axioms to specialize to buildings.
Below are basic axioms of buildings. As in the case of geometric structures, one starts with a {\em model space} and
a group acting on this space. The model space in our setting is a {\em model apartment} $a_{mod}$, which is either a unit sphere (in the case of spherical buildings) or a euclidean space (for euclidean buildings). One also fixes a {\em model Coxeter group} $W$ acting isometrically on $a_{mod}$; this group is generated by reflections in hyperplanes in $a_{mod}$. In the case of a spherical apartment, $W$ is required to be finite; in the euclidean case $W$ is required to have finite linear part. For euclidean apartments, in general, no discreteness of $W$ is assumed. In order to avoid notational confusion, we will frequently use the notation $W_{aff}$ for the Coxeter group in the euclidean case. The pair $(a_{mod}, W)$ is called a {\em Coxeter complex}.
\begin{example}
For instance, if $a_{mod}=A$ is an affine space and $W$ is a finite reflection group of isometries of $A$, then one takes $W_{aff}= T \rtimes W$, where $T$ is the full group of translations of $A$.
\end{example}
Hyperplanes fixed by reflections in $W$ are called {\em walls} in $a_{mod}$. A (closed) half-apartment in $a_{mod}$ is a half-space bounded by a wall. In the spherical case, $a_{mod}$ is divided into fundamental domains of $W$, called {\em chambers}. The {\em model chamber} $\sigma_{mod}$ is the quotient $a_{mod}/W$; it can be identified with one of the chambers in $a_{mod}$. The quotient projection $\theta: a_{mod}\to \sigma_{mod}$ is the {\em type map}. In the spherical case one uses the notation $\angle$ for the angular distance on $a_{mod}$.
Now we can state the main axioms of model spaces:
\medskip
{\bf Axiom 1.} A model space is a metric space which is either a CAT(1) space (spherical case) or a CAT(0) space (euclidean case).
{\bf Axiom 2.} A model space is a metric space $X$ equipped with an atlas where charts are isometric embeddings $a_{mod}\to X$, such that transition maps are restrictions of elements of the model Coxeter group.
\medskip
Note that images of charts (called {\em apartments} in the case of buildings) are not required to be open in $X$ (unlike the case of geometric structures on manifolds). Using these axioms one can transport various (invariant) notions from the model apartment to the model space; in particular, one defines {\em walls} in $X$ as images of walls in the model apartment.
\medskip
{\bf Axiom 3.} For any two points $x, y$ of a model space there is a chart whose image contains both $x$ and $y$.
Note that the model apartment clearly satisfies the first three axioms. One frequently adds one more general axiom in order to distinguish model apartments from ``more interesting'' model spaces:
\medskip
{\bf Axiom 4.} A model space $X$ is {\em thick} if any wall in $X$ equals the intersection of three half-apartments.
\begin{example}
1. A metric tree is a model space modeled on $(A,W)$, where $A$ is the line and $W$ is the full group of euclidean isometries of $A$.
2. Each symmetric space $X$ is a model space. Let $G$ denote the connected component of the identity in the isometry group of $X$. The model apartment is a maximal flat $F$ in $X$ and the model Coxeter group acting on $F$ is the image $W_{aff}$ in $Isom(F)$ of the stabilizer $G_F$ of $F$ in $G$.
\end{example}
In order to differentiate between symmetric spaces and buildings, one introduces one more {\em angle discreteness} axiom. This axiom is void in the spherical case and we, therefore, restrict now to the case of euclidean model spaces with the Coxeter group $W_{aff}=T\rtimes W$, where $W$ is a finite Coxeter group. Let $\sigma_{mod}$ denote the model chamber for the action of $W$ on the unit sphere in the affine space $A=a_{mod}$. We let $\Delta\subset A$ denote a model euclidean Weyl chamber, the cone over $\sigma_{mod}$. We then have the $\Delta$-distance function
$$
d_\Delta(x,y)\in \Delta
$$
defined for all points $x,y\in X$: Pick a chart $\phi: A\to X$; $\phi(x')=x, \phi(y')=y$, then consider the vector $v$ in $A$ represented by the directed segment $x'y'$ and project $v$ to $\Delta=A/W$. The result is $d_\Delta(x,y)$.
For each nondegenerate geodesic segment $xy$ in $X$, we define its
$\sigma_{mod}$-{\em direction} $\theta(xy)$ as the unit vector in the direction of $d_\Delta(x,y)$.
For each $x\in X$ one has the {\em space of directions} $\Sigma_x X$ of $X$ at $x$, which is the space of germs of nondegenerate geodesic segments emanating from $x$. On this space we have the (metric) notion of {\em angle} denoted $\angle$. We also have the {\em type map} $\theta_x: \Sigma_x X\to \sigma_{mod}$, sending each $xy$ to
its direction $\theta(xy)$.
\medskip
{\bf Axiom 5: Angle discreteness}. Let $v_1, v_2$ be elements of $\Sigma_xX$. Then we require that $\angle(v_1,v_2)$
belongs to the finite set of angles
$$
\angle(\theta(v_1), w \theta(v_2)), \quad w\in W.
$$
A Riemannian manifold (of dimension $\ge 2$) cannot satisfy this axiom.
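On the other hand (a direct check, added for illustration), the axiom does hold in a metric tree:

```latex
% Metric tree, modeled on a_{mod} = A = R with linear part W = \{\pm 1\}:
% the unit sphere of A consists of two points and \sigma_{mod} is a single
% point, so the admissible set of angles is
\angle(\theta(v_1), w\,\theta(v_2)) \in \{0, \pi\}, \qquad w \in W .
% Indeed, two germs of nondegenerate segments at a point of a tree either
% initially coincide (angle 0) or branch off immediately (angle \pi).
```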
\begin{definition}
A (thick) spherical building is a model space, modeled on a spherical Coxeter complex and satisfying Axioms 1--4. A (thick) euclidean building is a model space modeled on a euclidean Coxeter complex and satisfying Axioms 1--5.
\end{definition}
For instance, metric trees and their products are examples of euclidean buildings. A building is said to be {\em discrete} if the model Coxeter group is discrete. Below is an example of a discrete euclidean building $X$ on which $PSL(3,{\mathbb Q}_p)$ acts isometrically. We will only describe the underlying simplicial complex and not the rest of the structure. Vertices of $X$ are equivalence classes of ${\mathbb Z}_p$-lattices $\Lambda$ in ${\mathbb Q}_p^3$, where two lattices are equivalent iff they differ by a ${\mathbb Q}_p$-scaling. Edges of $X$ are represented by pairs of lattices:
$$
\Lambda\subset \Lambda', \quad |\Lambda':\Lambda|=p.
$$
A 2-simplex in $X$ is a chain of proper inclusions of lattices
$$
\Lambda_0\supset \Lambda_1 \supset \Lambda_2 \supset p\Lambda_0
$$
where each inclusion is necessarily of index $p$ (note that $\Lambda_0, p\Lambda_0$ represent the same vertex).
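For concreteness, here is an explicit 2-simplex of this building (our worked example):

```latex
% An explicit chain of Z_p-lattices in Q_p^3 representing a 2-simplex:
\Lambda_0 = {\mathbb Z}_p \oplus {\mathbb Z}_p \oplus {\mathbb Z}_p
\;\supset\;
\Lambda_1 = {\mathbb Z}_p \oplus {\mathbb Z}_p \oplus p{\mathbb Z}_p
\;\supset\;
\Lambda_2 = {\mathbb Z}_p \oplus p{\mathbb Z}_p \oplus p{\mathbb Z}_p
\;\supset\; p\Lambda_0 ,
% where each successive quotient is isomorphic to F_p, so each inclusion
% has index p; the three vertices of the simplex are the classes
% [\Lambda_0], [\Lambda_1], [\Lambda_2].
```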
It turns out that for each point $x$ in a building (spherical or euclidean), the space of directions $\Sigma_xX$ has a natural structure of a spherical building. In the case of a vertex in a discrete building, the building $\Sigma_xX$ (treated as a simplicial complex) is identified with the link of $x$.
Buildings enter naturally into the theory of symmetric spaces via two {\em asymptotic} constructions:
1. The visual boundary of a symmetric space $X$ (also, of a euclidean building) is a spherical building, see e.g. \cite{Eberlein}.
2. Every {\em asymptotic cone} of a symmetric space $X$
is again a euclidean building modeled on the same Coxeter complex, see \cite{qirigid}.
\medskip
{\bf The $SL(3,{\mathbb R})$ example.} Below we supplement our earlier discussion of visual boundaries with
the detailed example of the symmetric space $X$ of the group $G=SL(3,{\mathbb R})$. Our treatment follows \cite[Appendix 5]{BGS}. This symmetric space $X$ is identified with the space $P(3,{\mathbb R})$ of conformal structures on ${\mathbb R}^3$, more precisely, positive definite bilinear forms $b$ on ${\mathbb R}^3$ up to scalar. After fixing the standard euclidean bilinear form $q_0$ on ${\mathbb R}^3$, such structures can be identified with symmetric positive-definite $3\times 3$ matrices $A$ with unit determinant:
$$
b(u, v)= u^T A v.
$$
The group $G$ acts on bilinear forms $b$
by change of variables:
$$
g^*b=b', \quad b'(u, v)= b(g(u), g(v)),
$$
in terms of matrices, the action is given by
$$
g^* A= g^T A g, \quad g\in SL(3,{\mathbb R}).
$$
In matrix terms, the Riemannian metric on $X$ at the identity matrix is given by
$$
( a, b)= tr(ab),
$$
where $a, b$ are symmetric traceless $3\times 3$ matrices.
A maximal flat $F$ in $X$ is given by diagonal matrices with positive diagonal entries $x_1, x_2, x_3$
and unit determinant (the corresponding quadratic forms have principal axes equal to the coordinate lines in ${\mathbb R}^3$).
The isometry of $F$ to the euclidean plane is given by
$$
Diag(x_1, x_2, x_3)\mapsto (\log(x_1), \log(x_2), \log(x_3))\in {\mathbb R}^3_0=
\{(y_1,y_2,y_3)\in{\mathbb R}^3: y_1+ y_2+y_3=0\}.
$$
The action of $W=S_3$ on $F$ is by permuting the diagonal entries. The walls are given by the equations
$$
x_i=x_j, 1\le i\ne j\le 3.
$$
The positive chamber in $F$
is defined by the inequalities
$$
x_1\ge x_2 \ge x_3>0 $$
and in ${\mathbb R}^3_0$ by the inequalities
$$
y_1\ge y_2 \ge y_3.
$$
(We will think of the positive chamber $\Delta$ as sitting in ${\mathbb R}^3_0$ since the metric of the symmetric space equals the euclidean metric in this setting.)
If $q\in X$ is a general quadratic form, then the segment $q_0 q$ is regular iff the three eigenvalues of the matrix of $q$ are pairwise distinct, i.e., iff the three principal axes of the ellipsoid of $q$ have pairwise distinct lengths. The $\Delta$-distance $d_\Delta(q_0, q)$ is the vector
$$
(\log(\lambda_1), \log(\lambda_2), \log(\lambda_3)),
$$
where $\lambda_i$'s are the eigenvalues of the matrix $A$ of $q$, arranged in descending order. In terms of a matrix $g\in SL(3,{\mathbb R})$,
$$
d_\Delta(q_0, g(q_0))= (\log(\mu_1), \log(\mu_2), \log(\mu_3))
$$
where $\mu_i$'s are the singular values of the matrix $g$, again arranged in descending order. A sequence $(g_i)$ is regular iff
$$
\lim_{i\to\infty} \frac{\mu_1(g_i)}{\mu_2(g_i)}= \lim_{i\to\infty} \frac{\mu_2(g_i)}{\mu_3(g_i)}=\infty.
$$
\medskip
We now describe the visual boundary of $X$, which we will identify with the space of asymptotic classes of geodesic rays $\rho(t), t\ge 0$, emanating from $q_0$. The initial velocity of such a ray is a symmetric traceless matrix with eigenvalues $a\ge b\ge c$, $a+b+c=0$, $a^2+b^2+c^2=1$; the asymptotic behavior of the ray is determined by the parameter
$$
r= \frac{b-c}{a-b}, \quad r\in [0,\infty].
$$
Let $v_a, v_b, v_c\in {\mathbb R}^3$ denote the unit eigenvectors corresponding to the eigenvalues $a, b, c$. Define the strip
$$
s_\rho= {\mathbb R} v_a + [-r,r]v_b.
$$
There are three possibilities:
1. $r=0$, i.e., $b=c$, the strip degenerates to the line ${\mathbb R} v_a$. We associate with the ray $\rho$ the line $\<v_a\>$. Note that the vectors $v_b, v_c$ are not uniquely defined. This corresponds to the fact that there is no unique maximal flat through the ray $\rho$.
2. $r=\infty$, i.e., $a=b$, the strip is the plane $\<v_a, v_b\>$ spanned by $v_a, v_b$. We associate with the ray $\rho$ the plane $\<v_a, v_b\>$. Again, the vectors $v_a, v_b$ are not uniquely defined (only the vector $v_c$ and the plane $\<v_a, v_b\>= v_c^\perp$ are well-defined). This again corresponds to nonuniqueness of a maximal flat through the ray $\rho$.
3. $r\in (0,\infty)$, equivalently, the direction of $\rho$ is regular. The corresponding flag is
$$
(L\subset P)= (\<v_a\>\subset \<v_a, v_b\>).$$
Note that in this case $a>b>c$ and the one-dimensional eigenspaces of $A$ are unique.
If we fix $v_a, v_b$, the set of resulting geodesic rays corresponds to the set of all strips in $\<v_a, v_b\>$ interpolating between the line $\<v_a\>$ and the plane $\<v_a, v_b\>$. This is the Weyl chamber corresponding to the flag $L\subset P$.
If we equip the set of strips with the Gromov--Hausdorff topology, then the resulting set is homeomorphic to the set of rays emanating from $q_0$, i.e., to the visual boundary of $P(3,{\mathbb R})$. The subspaces corresponding to the sets of singular directions are homeomorphic to the spaces of lines and, respectively, planes in ${\mathbb R}^3$. Thus we see that the Tits boundary of $P(3,{\mathbb R})$ is naturally homeomorphic to the incidence graph of the projective plane.
{
\section{Appendix. Manifolds with corners} \label{sec:corners}
For simplicity we consider here only the concept of manifolds with corners and
good orbifolds with corners since only they appear in the context of this paper.
We refer to \cite{Joyce} for the definition of orbifolds with corners in general.
The concept of manifolds with corners generalizes the notion of manifolds with boundary.
Recall that the latter are defined via atlases with values in the euclidean spaces and half-spaces.
Let $I$ denote the closed interval $[0,1]$ and $I^n$ the $n$-dimensional cube. The cube $I^n$ is a {\em stratified space}
where the (open) $k$-dimensional stratum $S_k(I^n)$ is the union of the open $k$-dimensional faces of $I^n$, i.e.
$k$-dimensional subcubes which are products of $k$ copies of the open interval $(0,1)$ and
singletons from the set $\{0, 1\}$. Dimensions of the strata range from $0$ to $n$. We let $H(I^n)$ denote
the pseudogroup of homeomorphisms between open subsets of $I^n$ which preserve the stratification of $I^n$, i.e.
map points of $S_k(I^n)$ to points of $S_k(I^n)$ for every $k=0,...,n$.
\begin{definition}
An $n$-dimensional topological manifold with corners is a 2nd countable Hausdorff topological space $X$ equipped with
a certain {\em atlas}, which is a maximal system of homeomorphisms (``charts'') $\phi_\alpha: U_\alpha\to V_\alpha$, from open subsets $U_\alpha\subset I^n$ to open subsets $V_\alpha\subset X$. It is required that the (partially defined) transition maps
$$
g_{\alpha,\beta}= \phi_\beta^{-1}\circ \phi_\alpha
$$
preserve the stratification of $I^n$: $g_{\alpha,\beta}\in H(I^n)$ for all $\phi_\beta, \phi_\alpha$.
Thus, every manifold with corners is stratified as
$$
S_0(X)\sqcup S_1(X) \sqcup \ldots \sqcup S_n(X),
$$
where the {\em strata} $S_k(X)$ consist of the points $x\in X$ which are mapped to
$S_k(I^n)$ under the maps $\phi_\alpha^{-1}$.
\end{definition}
\begin{example}
If $X$ is a manifold with corners and $Y\subset X$ is an open subset,
then one obtains a pull-back structure of the manifold with corners from $X$ to $Y$, where the charts for $Y$
are suitable restrictions of the charts for $X$. Similarly, one defines the pull-back of the manifold with corners structure
via a local homeomorphism $f: Y\to X$, where $Y$ is Hausdorff and second countable and $X$ is a manifold with corners.
\end{example}
We note that, in particular, every manifold with corners is automatically a manifold with boundary,
where $S_n(X)=\operatorname{int}(X)$ and the union of the rest of the strata is the boundary of $X$. Conversely, every $n$-manifold with boundary $X$ has a natural structure of a manifold with corners where all strata of dimension $< n-1$ are empty.
Unlike the topological boundary, the strata $S_k(X)$ are not uniquely determined by the topology of $X$.
\begin{example}
The cube $X=I^n$ has a natural manifold with corners structure which is the maximal atlas containing the identity map $I^n\to I^n$ and $S_k(X)=S_k(I^n)$, $k=0,\ldots,n$. On the other hand, $I^n$ is homeomorphic to the closed ball $B^n$
which we treat as a manifold with boundary. Hence, $S_k(B^n)=\emptyset$ for $k<n-1$.
\end{example}
As in the case of manifolds, one defines manifolds with corners in other categories, e.g. smooth (with smooth transition maps) or real-analytic (with real-analytic transition maps). Here we recall that a smooth (respectively, real-analytic)
map of an open subset of $I^n\subset {\mathbb R}^n$ to ${\mathbb R}^k$ is a map which admits a smooth (respectively, real-analytic) extension to an open subset of ${\mathbb R}^n$. A homeomorphism $f: X\to Y$ between $n$-dimensional manifolds with corners is
an {\em isomorphism} if for every pair of charts $\phi_\alpha: U_\alpha\subset I^n \to X$,
$\phi_\beta: U_\beta\subset I^n\to Y$ the composition $\phi_\beta^{-1} \circ f \circ \phi_\alpha$ belongs to the pseudogroup
$H(I^n)$. Similarly, one defines an isomorphism between smooth (respectively, real analytic) manifolds with corners
as an isomorphism of the underlying topological manifolds with corners which gives rise to local diffeomorphisms
(respectively, real-analytic diffeomorphisms) $\phi_\beta^{-1} \circ f \circ \phi_\alpha$. We let $Aut(X)$ denote the group
of automorphisms of the topological (respectively, smooth, or real-analytic) manifold with corners $X$.
\begin{definition}
Let $X$ be a real analytic manifold with corners and $\Gamma < Aut(X)$ be a subgroup which acts properly discontinuously on
$X$. Then the quotient $X/\Gamma$ is called a
{\em good real-analytic orbifold with corners}. Here $X/\Gamma$ is a topological space equipped with the collection of
{\em orbi-charts} which are quotient maps of open subsets $U_\alpha\subset I^n$ by finite groups of real-analytic automorphisms of $U_\alpha$ obtained by restrictions of subgroups of $\Gamma$.
\end{definition}
We note that (analogously to an ordinary orbifold) the orbi-charts of a good orbifold with corners satisfy certain compatibility conditions which are used to define an orbifold with corners in full generality, see \cite{Joyce}.
}
\section{Introduction}\label{sec:intro}
For the past century, chemists and physicists have advocated fundamentally different perspectives on materials: while chemists have adopted an {intuitive} ``local'' viewpoint of hybridization, ionic chemical bonding and finite-range interactions, physicists have
{described} materials through band structures or Fermi surfaces in a nonlocal, momentum-space picture. These two descriptions seem disjoint, especially with the advent of {topological insulators (TIs)}, {which are} exclusively understood in terms of the nontrivial topology of
Bloch Hamiltonians throughout the Brillouin zone (BZ) in momentum space. Despite the apparent success that the field has had in predicting \emph{some} [mostly time-reversal {(TR) invariant}] {TIs}, {conventional band theory} is ill-suited to a \emph{natural} treatment of {TIs}. {Given the paucity of known TIs (fewer than 400 materials out of the 200,000 existing in crystal structure databases!), one may ask whether topological materials are truly so rare, or if this reflects a failing of the conventional theory.}
By their very nature, the topological properties of energy bands are properties \emph{global} in momentum space. The duality between real and momentum (direct and reciprocal) space suggests that properties of bands which are nonlocal in momentum space will manifest locally in real space. In this paper, we unify the real and momentum space descriptions of solids, and in doing so provide a new, powerful, complete and predictive band theory.
Our procedure provides a complete understanding of the structure of bands in a material \emph{and} links its topological properties to the chemical orbitals at the Fermi level. It is therefore a theory of Topological Quantum Chemistry.
Developing a complete theory of topological bands requires an extremely large body of work made up of several ingredients. First, we {compile} \emph{all} the possible ways energy bands in a solid can be connected throughout the BZ to obtain \emph{all} {realizable} band structures in \emph{all} non-magnetic space-groups. Crystal symmetries place strong constraints on the allowed connections of bands. At high symmetry points, $\mathbf{k}_i$, in the BZ, Bloch functions are classified by irreducible representations (irreps) of
the symmetry group of {$\mathbf{k}_i$},
which also determine the degeneracy.
Away from these high symmetry points, fewer symmetry constraints exist, and some degeneracies are lifted. This is the heart of the $\mathbf{k}\cdot\mathbf{p}$ approach\cite{Kittel87} to band structure, which gives a good description near high-symmetry $\mathbf{k}$-points. However, {the global band structure requires patching} together the different $\mathbf{k}\cdot\mathbf{p}$ {theories} at various high symmetry points. Group theory places constraints -- ``compatibility relations'' -- on how this can be done. Each solution to these compatibility relations gives groups of bands with different connectivities, corresponding to different physically-realizable phases of matter (trivial or topological). We solve all compatibility relations {for all} 230 space groups {(SGs)} by mapping connectivity in band theory to the graph-theoretic problem of constructing multipartite graphs. Classifying the allowed connectivities of energy bands becomes a combinatorial problem of graph enumeration: we present a fully tractable, algorithmic solution.
Second, we develop the tools to compute how the real-space orbitals in a material determine the symmetry character of the electronic bands. Given only the Wyckoff positions and the orbital symmetry ($s,p,d$) of the elements/orbitals in a material, we derive the symmetry character of all energy bands at \emph{all} points in the BZ. We do this by extending the notion of a band representation {(BR)}, first introduced in Refs.~\onlinecite{Zak1982,Bacry1988}, to the physically relevant case of materials with spin-orbit coupling (SOC) and/or TR symmetry. {A BR consists of all} bands linked to localized orbitals respecting the crystal symmetry (and possibly TR). The set of BRs is strictly smaller than the set of groups of bands obtained from our graph theory\cite{Bacry1993}. {We identify a special subset of ``elementary'' BRs (EBRs)\cite{Zak1982,Evarestov1997}, {elaborated upon in the Supplementary Material (SM)}, which are the smallest sets of bands
{derived from} local atomic-like Wannier functions.}{\cite{Marzari2012}}
We work out all the {(10300)} different EBRs for all the SGs, Wyckoff positions, and orbitals, which we will present in a separate data paper.{\cite{GroupTheoryPaper}}
If the number of electrons is a fraction of the number of connected bands (connectivity) forming an EBR, then the system is a symmetry-enforced semimetal. The EBR method allows us to easily identify candidate semimetallic materials. As an amusing fact, we find that the largest possible number of connected bands in an EBR is {24} and hence the smallest possible fraction of filled bands in a semimetal is 1/{24}. If, however, our graph analysis {reveals an instance where the number of connected bands} is \emph{smaller} than the total number of bands in the EBR, we conclude that a $\mathbf{k}$-space description exists but a Wannier one does not\cite{Soluyanov2011,Soluyanov2012,Read2016}, i.e. the {disconnected} bands are topological. TIs are then those materials with bands that are \emph{not} in our list of elementary components but are in our graph enumeration. We thus reformulate the momentum-space approach to topological indices as a real-space obstruction to the existence of atomic-like Wannier functions. In tandem with our graph-theoretic analysis of band connectivity, we enumerate \emph{all} the ways the transition to a topological phase can occur. Hence, we are able to classify \emph{all} topological crystalline insulators. This leads to previously unrecognized large classes of {TIs}. We show the power of our approach by predicting hundreds of new {TIs} and semimetals.
\section{Graph Theory and Band Structure}\label{sec:graphs}
To construct a band theory which accounts for the global momentum space structure of energy bands, we piece together groups of bands from distinct points in the BZ. Consider a $D$-dimensional crystalline material invariant under a SG $G$, containing elements of the form $\{R|\mathbf{d}\}$, where $R$ is a rotation or rotoinversion, and $\mathbf{d}$ is a translation. A Bravais lattice of translations is generated by a set of linearly independent translations $\{E|\mathbf{t}_i\}$, $i=1 \ldots D$, where $E$ represents the identity.
Each $\mathbf{k}$ vector in the reciprocal lattice is left invariant by its little group, $G_\mathbf{k}\subset G$, and the Bloch wavefunctions $|u_n(\mathbf{k})\rangle$ transform under a sum of irreps of $G_\mathbf{k}$; bands at high-symmetry $\mathbf{k}$-vectors will have (non-accidental) degeneracies equal to the dimension of these representations {(reps)}. For spinless or spin-orbit-free systems these are ordinary linear reps; for spin-orbit-coupled systems they are double-valued reps. Away from high symmetry points, degeneracies are reduced and bands disperse according to conventional $\mathbf{k}\cdot\mathbf{p}$ theory.
Consider two different high-symmetry $\mathbf{k}$ vectors, $\mathbf{k}_1$ and $\mathbf{k}_2$, and a line $\mathbf{k}_t=\mathbf{k}_1+t(\mathbf{k}_2-\mathbf{k}_1)$, $t\in [0,1]$ connecting them.
{To determine} how the irreps at $\mathbf{k}_1$ connect to the irreps at $\mathbf{k}_2$ to form bands along this line, first note that the little group, $G_{\mathbf{k}_t}$, at any $\mathbf{k}_t$ on the line is a subgroup of both $G_{\mathbf{k}_1}$ and $G_{\mathbf{k}_2}$. Thus, an irrep, $\rho$, of $G_{\mathbf{k}_1}$ at $\mathbf{k}_1$ will
split (subduce, or restrict) {along the line} to a (direct) sum of irreps $\bigoplus_i \tau_i$ of $G_{\mathbf{k}_t}$; symbolically (here and throughout we use ``$\approx$'' to denote equivalence of representations)
\begin{equation}
\rho\downarrow G_{\mathbf{k}_t}\approx\bigoplus_i\tau_i.\label{eq:comprelexample}
\end{equation}
\noindent These restrictions are referred to as \emph{compatibility relations}. Heuristically, they are found by taking the representation $\rho$ and ``forgetting'' about the symmetry elements of $G_{\mathbf{k}_1}$ which are not in $G_{\mathbf{k}_t}$. As energy bands in a crystal do not discontinuously end, the representations, $\sigma$, of $G_{\mathbf{k}_2}$ at $\mathbf{k}_2$ must also satisfy
\begin{equation}
\sigma\downarrow G_{\mathbf{k}_t}\approx\bigoplus_i\tau_i.
\end{equation}
Each representation, $\tau_i$, corresponds to a (group of) band(s) along the line $\mathbf{k}_t$; bands coming from $\mathbf{k}_1$ in each irrep $\tau_i$ must join with a group of bands transforming in the same irrep coming from $\mathbf{k}_2$. We refer to each set of such pairings as a solution to the compatibility relations.
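To make the compatibility relations concrete, the multiplicities in $\rho\downarrow G_{\mathbf{k}_t}\approx\bigoplus_i\tau_i$ can be computed from characters. The following Python sketch is a toy illustration with a hand-picked point group, not the tabulated crystallographic data of this paper: it restricts the two-dimensional irrep $E$ of $C_{3v}$ (restricted character $(2,0)$ on the elements $\{e,\sigma\}$) to a mirror subgroup $C_s$.

```python
def subduction_multiplicities(chi_restr, subgroup_irreps, order):
    """Multiplicity of each subgroup irrep tau in rho↓H via character
    orthogonality: m_tau = (1/|H|) * sum_h chi_rho(h) * chi_tau(h)^*."""
    return {
        name: round(sum(a * complex(b).conjugate()
                        for a, b in zip(chi_restr, chi)).real / order)
        for name, chi in subgroup_irreps.items()
    }

# toy example: restrict the 2D irrep E of C3v to Cs = {e, sigma}.
# On (e, sigma) the restricted character is (2, 0); Cs has irreps A', A''.
cs_irreps = {"A'": [1, 1], "A''": [1, -1]}
mults = subduction_multiplicities([2, 0], cs_irreps, order=2)
assert mults == {"A'": 1, "A''": 1}   # E ↓ Cs ≈ A' ⊕ A''
```

The resulting decomposition $E\downarrow C_s\approx A'\oplus A''$ is exactly the kind of compatibility relation that constrains how bands may connect along the line $\mathbf{k}_t$.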
{Compatibility relations apply to} \emph{each and every} connection line/plane between each pair of $\mathbf{k}$ points in the Brillouin zone, leading to strong but factorially redundant restrictions on how bands may connect in a crystal. To construct the nonredundant solutions to the compatibility relations, we map the question to a problem in graph theory.
Each {irrep} at the different symmetry distinct $\mathbf{k}$ vectors labels a node in a graph. In our previous example, {the nodes would be} labelled by $\rho,\sigma, \{\tau_1,\tau_2,\dots \}$ for the $\mathbf{k}_1$, $\mathbf{k}_2$, and $\mathbf{k}_t$ high symmetry points and line, respectively.
{We draw the edges of the graph by the following} rules: \textbf{1.} Irreps at the same $\mathbf{k}$ vector can never be connected by edges -- our graph is multi-partite. \textbf{2.} Nodes corresponding to irreps at $\mathbf{k}_a$ and $\mathbf{k}_b$ can be connected only if $G_{\mathbf{k}_a}\subseteq G_{\mathbf{k}_b}$ or $G_{\mathbf{k}_b}\subseteq G_{\mathbf{k}_a}$ (i.e. $\mathbf{k}$-vectors are compatible). \textbf{3.} Edges must be consistent with the compatibility relations. For instance, Eq.~(\ref{eq:comprelexample}) {corresponds to} an edge from the node labelled by $\rho$ to each node labelled by $\tau_i$. We refer to such a graph as a \emph{connectivity graph}.
We developed an algorithm (described in the SM) that outputs \emph{all} distinct connectivity graphs for \emph{all} SGs -- a gargantuan task.
{The factorial complexity is handled by} several subroutines, which ensure that the minimal set of paths in momentum space is considered. Additional filters remove redundant or isomorphic solutions to the compatibility relations. The tools of graph theory then allow us to partition the nodes of the graph (the little-group irreps) into distinct connected components (subgraphs). Each component corresponds to a connected, isolated group of bands that can describe a set of valence bands in \emph{some} insulating system or protected semimetal, depending on the filling. In particular, such a list consists of \emph{all} (both topologically trivial and nontrivial) valence band groups. The familiar example of graphene with SOC is given in Fig.~\ref{fig:graphenechart} and the SM. We now define and classify topologically nontrival bands in terms of \emph{localized} Wannier functions.
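Schematically, the final partition step can be sketched as follows; the irrep labels are hypothetical placeholders, and this toy union-find stands in for the full algorithm described in the SM:

```python
def connected_band_groups(nodes, edges):
    """Partition the irrep nodes of a connectivity graph into connected
    components; each component is an isolated group of bands."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)
    components = {}
    for v in nodes:
        components.setdefault(find(v), []).append(v)
    return sorted(sorted(c) for c in components.values())

# hypothetical labels: irreps rho (at k1), sigma (at k2) and tau1, tau2
# (on the line between them); edges encode allowed pairings of irreps
nodes = ["rho", "sigma", "tau1", "tau2"]
fully_connected = [("rho", "tau1"), ("tau1", "sigma"),
                   ("rho", "tau2"), ("tau2", "sigma")]
assert connected_band_groups(nodes, fully_connected) == \
    [["rho", "sigma", "tau1", "tau2"]]
```

A single component corresponds to a fully connected band group; multiple components correspond to groups of bands that can be gapped from one another.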
\section{Topologically (non)trivial bands}\label{sec:ebrs}
Consider a group of connected bands in the spectrum of a crystal Hamiltonian separated by a gap from all others. Using existing machinery, {to determine whether} this group is topologically nontrivial requires discovering topological invariants (indices or Wilson loops) from the analytic structure of the Bloch eigenfunctions. We now prove that the \emph{algebraic} global structure of the energy spectrum itself (including connectivities) contains a complete classification of topological materials. We define:
\begin{itemize}
\item\textbf{Definition 1.} An insulator (filled group of bands) is {\bf topologically nontrivial} if it cannot be continued to {\bf any} atomic limit without either closing a gap or breaking a symmetry.
\end{itemize}
To every isolated group of energy bands, we associate a set of Wannier functions -- orbitals obtained by Fourier transforming linear combinations of the Bloch wavefunctions.
{In an atomic limit, the Wannier functions are exponentially localized, respect the symmetries of the crystal (and possibly TR) and coincide in most cases (however, see Section~\ref{sec:chemistry}) with the exponentially localized atomic orbitals at infinite (atomic limit) separation.}
Under the action of the crystal symmetries, different atomic sites are distributed into orbits, belonging to Wyckoff Positions (WPs); we denote the points in a Wyckoff orbit in a unit cell as $\{\mathbf{q}_i\}$. In analogy with the
symmetry group of a $\mathbf{k}$-vector, to each site $\mathbf{q}_i$ there is a finite subgroup, $G_{\mathbf{q}_i}$, of the full SG, $G$, which leaves $\mathbf{q}_i$ invariant, called the \emph{site-symmetry group.} For example, the A, B sites in graphene {belong to WP} $2b$ (the multiplicity $2$ refers to the number of symmetry-related sites in the unit cell); its site-symmetry group is isomorphic to $C_{3v}$. Wannier functions at each site $\mathbf{q}_i$ transform under some rep, $\rho_i$, of $G_{\mathbf{q}_i}$. Crucially, through the mathematical procedure of induction, the real-space transformation properties of these localized Wannier functions determine the little group reps of the bands at every point in the BZ: the action of the SG on the full lattice (rather than just the unit cell) of Wannier functions gives an infinite-dimensional rep. Its Fourier transform gives the $\mathbf{k}$ dependent matrix rep of all symmetry elements. This restricts to reps of the little group of each $\mathbf{k}$-vector. Following Zak \cite{Zak1982}, we refer to this as a band representation (BR), $\rho_{iG}$, induced\cite{Fulton2004} in the space-group $G$ by the rep $\rho_i$ of $G_{\mathbf{q}_i}$:
\begin{equation}
\rho_{iG}=\rho_i\uparrow G.
\end{equation}
The above is true with or without TR symmetry. {BRs} which also respect time-reversal symmetry in real space are \emph{physical} band representations {(PBRs)}.
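The induction procedure $\rho_i\uparrow G$ can be illustrated on a finite abelian toy group, where conjugation is trivial and the induced character has a closed form. The sketch below uses $H=\{0,2\}<\mathbb{Z}_4$ and is purely illustrative; it is not a crystallographic site-symmetry computation.

```python
def induce_abelian(chi_H, H, n):
    """Character of eta↑G for the cyclic group G = Z_n and subgroup H:
    conjugation is trivial, so chi_ind(g) = [G:H]*chi_H(g) on H, else 0."""
    index = n // len(H)
    return [index * chi_H[H.index(g)] if g in H else 0 for g in range(n)]

def multiplicity(chi_a, chi_b, n):
    """<chi_a, chi_b> = (1/|G|) sum_g chi_a(g) chi_b(g)^*."""
    s = sum(a * complex(b).conjugate() for a, b in zip(chi_a, chi_b))
    return round((s / n).real)

# induce the "sign" rep eta of H = {0, 2} < Z_4; the Z_4 irreps
# are chi_k(j) = i**(k*j)
chi_ind = induce_abelian([1, -1], [0, 2], 4)
assert chi_ind == [2, 0, -2, 0]
z4_irreps = {k: [(1j) ** (k * j) for j in range(4)] for k in range(4)}
mults = {k: multiplicity(chi_ind, chi, 4) for k, chi in z4_irreps.items()}
assert mults == {0: 0, 1: 1, 2: 0, 3: 1}   # eta↑Z4 ≈ chi_1 ⊕ chi_3
```

Frobenius reciprocity is visible here: $\chi_1$ and $\chi_3$ are exactly the $\mathbb{Z}_4$ irreps whose restriction to $H$ contains $\eta$.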
By inducing {BRs}, we enumerate all groups of bands with exponentially localized and symmetric Wannier functions. Each such group forms a BR, and every band representation is a sum of EBRs. {We have identified the 10300 EBRs: }
\begin{itemize}
\item\textbf{Proposition 1.} A band representation $\rho_G$ is elementary if and only if it can be induced from an irreducible representation $\rho$ of a {maximal (as a subgroup of the space group; see the SM)} site-symmetry group $G_\mathbf{q}$ which does not appear in a list of exceptions{\cite{Bacry1993,Michel2001} given in the SM}.
\end{itemize}
We prove a similar statement for physically elementary band representations (PEBRs) and work out their extensive list of exceptions.\cite{GroupTheoryPaper} In analogy with irreps, a (P)EBR cannot be written as a direct sum of other (P)BRs. A BR which is not elementary is {a ``composite'' BR (CBR)}. Crucially, \emph{any} topologically trivial group of bands is equivalent to some {(P)BR}, a conclusion of Definition~1. Conversely, any topologically nontrivial group of bands cannot be equivalent to any of the enumerated {(P)BRs} (a caveat is discussed in Section~\ref{sec:chemistry}).
We thus conclude that the Wannier functions of a topologically trivial group of bands are smoothly continuable into {an} atomic limit, exponentially localized, and transform under a BR. In a topological material, the Wannier functions for the valence (group of) bands either \textbf{1.} fail to be exponentially localizable, or \textbf{2.} break the crystal symmetry. An example of \textbf{1} is the Chern insulator: a nonvanishing Chern number is an obstruction to exponentially localized Wannier functions.{\cite{Brouder2007}} The Kane-Mele model of graphene\cite{Kane04} (Section~\ref{sec:classify}), is an example of \textbf{2}: in the ${Z}_2$ nontrivial phase, exponentially localized Wannier functions for the valence bands necessarily break TR symmetry\cite{Soluyanov2011} (when the valence and conduction bands are taken together, atomic-like Wannier functions can be formed). Hence, in order to transition to the atomic limit a gap must close.
{We have tabulated all the (P)EBRs induced from every maximal {WP (i.e.~a WP with maximal site-symmetry group)} for all $230$ SGs, with and without SOC and/or TR symmetry. This data allows us to enumerate all topologically trivial band structures.
In an accompanying publication\cite{GroupTheoryPaper} we describe the myriad group-theoretic data (subduction tables, etc.) \emph{for each of the $5646$ EBRs and $4757$ PEBRs that we find}. {To generate this data, we generalized the well-known induction algorithm based on Frobenius reciprocity and presented in Refs.~\onlinecite{Evarestov1997} and \onlinecite{Bilbao3} to the case of double-valued representations. We present the full details of the computational methods in Ref.~\onlinecite{GroupTheoryPaper}.} Additionally, the data can be accessed through programs hosted on the Bilbao Crystallographic Server\cite{progbandrep}. We also give a summary table of all EBRs and PEBRs in Sec.~VII of the Supplementary Material.}
{This allows us to give, for the first time, a classification of TIs that is both \emph{descriptive} and \emph{predictive}. {Rather than providing a classification (with a topological index, for instance) of the topological phases in each space group, divorced from predictive power, we instead formulate a procedure to determine whether a band structure is topologically trivial or topologically nontrivial, and enumerate the possible non-trivial band structures for each space group.}
Given the band structure of a material, we can compare isolated energy bands to our tabulated list of (P)EBRs. Any group of bands that transforms as a (P)EBR is topologically trivial. Those groups of bands that remain are guaranteed to be topologically nontrivial.
Conversely, knowing just the valence orbitals and crystal structure of a material, we can immediately determine under which -- if any -- EBRs the bands near the Fermi level transform. If graph theory reveals that these EBRs can be disconnected, we deduce that this phase is topological.} {While a disconnected EBR itself serves as a topological index, standard techniques can be used to diagnose which (if any) of the more standard K-theoretic topological indices\cite{Freed2013} are nontrivial.} In the subsequent sections, we outline this recipe, and present hundreds of new predicted topological materials.
{Our identification of topological crystalline phases also goes beyond recently proposed classifications based on symmetry eigenvalues of occupied bands in momentum space, first proposed in Ref.~\onlinecite{kanegraphs}, and applied in a modified form to three dimensional systems in Ref.~\onlinecite{ashvincomplete}. A shortcoming common to both methods is that while they produce a list of topological indices for each space group, they provide no insight into how to find or engineer materials in any nontrivial class. Second, in contrast to the claims of Ref.~\onlinecite{ashvincomplete},
by focusing only on eigenvalues in momentum space rather than the real-space structure of Wannier functions (or equivalently, the analytic structure of Bloch functions in momentum space), essential information about the topological properties of certain band structures is lost. For instance, the topological phases of SG $P6mm$ ($183$) presented in the Supplementary Material fall outside the scope of Ref.~\onlinecite{ashvincomplete}. Viewed in this light, our classification based on band representations generalizes the notion of a topological eigenvalue invariant in such a way as to capture these missing cases.
}
\section{Classification of {TIs}}\label{sec:classify}
Combining the notion of a BR with the connectivity graphs, we identify several broad classes of TIs, {distinguished by the number of relevant EBRs at the Fermi level when transitioning from a topologically trivial to a nontrivial phase.}
{If, in a trivial phase, the Fermi level sits in a single (P)EBR,} the material is necessarily a semimetal. If tuning an external control parameter (strain, SOC, etc.) opens a gap at the Fermi level, the material \emph{necessarily} becomes topological. {This is because} if a {(P)}EBR splits into a disconnected valence and conduction band, then neither the valence nor the conduction band can form a BR: only both together form a (P)EBR.
{This situation occurs exactly when an EBR}
can be consistently realized in a disconnected way in the BZ.
This is precisely the sort of task suited for the graph-theoretic machinery of Section~\ref{sec:graphs}!
The archetypal example for this behavior is the quantum spin Hall transition in the Kane-Mele model of graphene with both next nearest neighbor ``Haldane''\cite{haldanemodel} and Rashba SOC, {which we illustrate schematically in Fig.~\ref{fig:graphenechart}}.
This model hosts two {spinful} $p$-orbitals per hexagonal unit cell, for a total of four bands forming an EBR. In the Rashba SOC regime, all four bands are connected and the material is a gapless semimetal.
Turning up Haldane SOC opens a band gap.
By the preceding analysis, this gap \emph{must} be topological. {In Ref.~\onlinecite{GroupTheoryPaper}, we give all (hundreds of) cases {-- each specified by an orbital, Wyckoff position, and SG -- } where this can occur. We give the full details in the SM.}
{The second class of TIs is defined by the presence of more than one relevant EBR at the Fermi level.}
The trivial phase of such a material {can be} an insulator, with EBRs above and below the Fermi level.
Without loss of generality, we consider one EBR in the conduction band and one in the valence band; generically, any transition involving more than two EBRs can be resolved into a sequence of pairwise transitions.
A topological phase transition occurs when a gap closes and reopens after a band inversion, such that neither the valence bands nor the conduction bands form a BR.
In the trivial phase, the little group irreps of the filled bands at each $\mathbf{k}$ point are those of the valence band EBR. After the topological phase transition, the little group irreps at each $\mathbf{k}$ point are \emph{not} consistent with an EBR. This mechanism {describes} the paradigmatic $3D$ {TI} Bi$_2$Se$_3$.{\cite{Zhang09,Xia09}} Without, or with very small, SOC, Bi$_2$Se$_3$ is a trivial insulator; its valence and conduction bands transform in two distinct EBRs. {Increasing} SOC pushes the bands together; at a critical strength, the gap between the valence and conduction bands closes at the $\Gamma$ {point of the BZ}. Above this critical value, a gap reopens, with certain states (labelled by irreps of $G_\Gamma$) exchanged between the valence and conduction bands. {Ultimately}, neither the valence bands nor the conduction bands transform as EBRs, and the insulator is topological as per Def.~1\cite{Winkler2016}. If, on the other hand, a full gap does not reopen after band inversion, then we are left with a symmetry-protected semimetal \`a la Cd$_3$As$_2$\cite{LiuJiangZhouEtAl2014} or Na$_3$Bi\cite{LiuZhouZhangEtAl2014}.
{When the phase transition is driven by SOC, we label the classes by $(n,m)$, where $n$ is the number of EBRs at the Fermi level in the trivial phase (without SOC) and $m$ is the number in the topological phases (after SOC is turned on). These phases are indicated in Table~\ref{table:titable}, along with material examples.}
To summarize the theoretical results: in Section~\ref{sec:graphs} we showed that the constraints placed by group theory in \emph{momentum space} on the allowed connectivity of bands can be solved via a mapping to graphs. We constructed all possible allowed isolated band groupings for all $230$ SGs. We then showed that our group-theoretic analysis in \emph{real space} determines -- through the notion of BRs -- which of these isolated band groups are described by localized symmetric Wannier functions (topologically trivial insulators). {It follows that any} other group of valence bands in an insulator {necessarily constitutes a TI}. Importantly, our theory also shows that there exist different classes of topologically trivial insulators, which cannot be adiabatically continued to one another. {We now link this important fact to orbital hybridization.}
\section{Chemical Bonding, Hybridization, and Non-Equivalent Atomic Limits}\label{sec:chemistry}
{Given a} topologically trivial crystal, it is tempting -- but wrong -- to assume that the electronic Wannier functions, like the constituent atomic orbitals, are localized at the atomic positions. Basic chemistry {informs us} that orbitals from different atomic sites can hybridize to form bonding and antibonding ``molecular'' orbitals, centered away from any individual atom\cite{hoffmann1987chemistry}.
{In a crystal formed of these tightly bound molecular (rather than atomic) units, the valence and conduction BRs are induced from the (generically maximal) WPs of the molecular orbitals, rather than from the atomic orbitals; consequently, the valence and conduction band Wannier functions are localized at the molecular orbital WPs, away from the atoms.}
Thus, orbital hybridization, when viewed in the solid-state, represents the {required} transition between two symmetry distinct atomic limit phases. In both phases localized, symmetric Wannier functions exist; the distinction lies in where the orbitals sit in the atomic limit\cite{Zakphase,ksv}. In the first atomic limit, the orbitals lie on the atomic sites. In the second, however, the orbitals do not coincide with the atoms. This phase has been called topological\cite{sshistop,bernevigbook}, but we {refer to} it as an ``obstructed'' atomic limit, {since} it is also described by localized Wannier states\cite{Kivelson1982, Read2016}. The prototypical example is the $1D$ Su-Schrieffer-Heeger (Rice-Mele) chain, whose two phases are distinguished by their hybridization pattern, an electric dipole moment that is $0$ ($\frac{1}{2}$) in the trivial (nontrivial) phase. Pumping between two different atomic limits always leads to a nontrivial cycle with observable transport quantities.\cite{RiceMele,Atala13,Nakajima16}
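The two atomic limits of the Su-Schrieffer-Heeger chain can be distinguished numerically by the Zak (Berry) phase of the occupied band, computed as a discretized Wilson loop. The sketch below uses our own conventions (hopping labels, discretization) and is illustrative rather than taken from the cited works:

```python
import numpy as np

def ssh_zak_phase(t_intra, t_inter, nk=400):
    """Berry (Zak) phase of the occupied SSH band from a discretized
    Wilson loop; ~0 in the trivial and ~pi in the obstructed atomic limit."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    occ = []
    for k in ks:
        off = t_intra + t_inter * np.exp(-1j * k)
        h = np.array([[0.0, off], [np.conj(off), 0.0]])
        _, v = np.linalg.eigh(h)
        occ.append(v[:, 0])                   # lower (occupied) band
    loop = 1.0 + 0.0j
    for i in range(nk):                       # gauge-invariant Wilson loop
        loop *= np.vdot(occ[i], occ[(i + 1) % nk])
    return abs(np.angle(loop))

# dimerized limits: dominant intracell bonds -> trivial (phase 0);
# dominant intercell bonds -> obstructed atomic limit (phase pi)
assert ssh_zak_phase(1.0, 0.3) < 1e-2
assert abs(ssh_zak_phase(0.3, 1.0) - np.pi) < 1e-2
```

Because the loop of overlaps is gauge invariant, the arbitrary eigenvector phases returned at each $k$ drop out, and the quantized values $0$ and $\pi$ correspond to Wannier centers on the atomic and "obstructed" (bond-centered) positions, respectively.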
A similar phenomenon is observed in the newly discovered quadrupole insulators.\cite{hughesbernevig2016} {Signatures of the obstructed atomic limit also appear in the real-space entanglement structure of the insulating ground state in finite systems\cite{Fang2013}, and these signatures persist to systems containing only a single molecule.\cite{Tubman2014}}
{We now describe orbital hybridization with EBRs.}
{The valence and conduction bands} are each described by an EBR. {Taken together, the two EBR's comprise a} {CBR}. In both the first and the second atomic limit, as well as at the critical point between them, the {CBR} does not change, although the individual EBR's of the valence and conduction bands do. We denote the {CBR} in the first atomic limit as $\sigma_v\uparrow G\oplus \sigma_c\uparrow G$, where $\sigma_v$ and $\sigma_c$ are irreps of the site symmetry group, $G_a$, of the atomic sites. In the second atomic limit, the {CBR} is $\rho_v\uparrow G \oplus \rho_c \uparrow G$, where $\rho_v$ and $\rho_c$ are irreps of the site symmetry group, $G_m$, of the
molecular sites.
{As we show in the SM, a symmetry-preserving transition} can only happen when the two site symmetry groups, $G_{a}$ and $G_m$, have a common subgroup $G_0$ with a representation $\eta$ such that
\begin{equation}
\eta\uparrow G_a\approx\sigma_v\oplus\sigma_c,\; \eta\uparrow G_m\approx\rho_v\oplus\rho_c.
\end{equation}
{This equation indicates} that there is a line joining the atomic sites, and that Wannier functions localized along the line give the same set of bands as those localized at either endpoint. Because of this, the Wannier functions {for the valence band} can move from the atomic to the molecular sites while preserving all symmetries {upon passing through a critical point}.
{In the SM, we illustrate this using the example of $sp$ orbital hybridization.} {Note, furthermore, that we can define an analogous notion of an obstructed atomic limit for systems lacking translational symmetry, using properties of the point group only. In this way, the preceding discussion generalizes straightforwardly to finite-sized crystals, molecules, and even quasicrystals. In the latter case, obstructions to the naive atomic limit are precisely covalent bonds.} We conclude that hybridization, and thus chemical bonding, can be treated as a phase transition.
\section{Algorithmic materials search}\label{sec:matsearch}
{We demonstrate the power of our theory by proposing two algorithms that use databases -- such as the Inorganic Crystal Structure Database (ICSD)\cite{ICSD} -- {which tabulate the occupied} {WPs} for each element in a chemical compound, along with simple energy estimates, in order to identify many new classes of {TIs} and semimetals.
We distinguish cases by the number of EBRs at the Fermi level, i.e., between the two classes defined in Section~\ref{sec:classify} and summarized in Table~\ref{table:titable}.
We further distinguish between cases with and without SOC, which can be treated separately by existing ab-initio methods. The SOC strength can be viewed as a control parameter driving the topological phase transition. {For EBRs without SOC, we count bands on a per-spin basis: all states are doubly degenerate.} }
Further new topological semimetals, such as a series at filling $-1/8$, can be obtained by our method.
\subsection{Single PEBRs at the Fermi Level}
{As described in Section~\ref{sec:classify} and Table~\ref{table:titable}, a $(1,1)$ type TI occurs when a single PEBR is realized as a sum of two band groups disconnected in momentum space.} {Here, we utilize the fact that
$(1,1)$ TIs can be realized by} band representations induced from one-dimensional site-symmetry group irreps that are EBRs but not PEBRs. In this case, the Wannier functions for the valence band involve a single orbital per site, and hence do not respect the twofold time-reversal degeneracy \emph{in real space}: since their Wannier states break time-reversal symmetry, the material is necessarily a TI.
{An example is furnished by lead suboxide Pb$_2$O\cite{pb20ref}, a material whose topological properties were until now unexplored. As discussed in the Supplementary Material, this is a non-symmorphic cubic crystal in the space group $Pn\bar{3}m$ ($224$). Although metallic, this material features a topologically disconnected PEBR far below the Fermi level, shown in Fig.~\ref{fig:2}\textbf{a}. However, we can consider the application of $z$-axis uniaxial strain, under which the crystal symmetry is lowered to the tetragonal subgroup $P4_2/nnm$ (134) of the original space group. There are fewer symmetry constraints imposed on the band structure in this space group, and in particular degeneracies protected by threefold rotational symmetry will be broken. This allows for a gap to open at the Fermi level, leading us to predict that strained Pb$_2$O will be a topological insulator with a small Fermi pocket, as shown in Fig.~\ref{fig:2}\textbf{b}. As an aside, we note that this analysis shows that by using group-subgroup relations, we can make predictions about the topological character of strained systems when the unstrained band structure is known.}
Cu$_3$SbS$_4$, {a candidate TI\cite{cu3sbs4ref,hasancuti}, is also an example of a $(1,1)$ type material}. In Fig.~\ref{fig:2}\textbf{c} we show the calculated band structure. The states near the Fermi level form a single EBR coming from the Cu $d$-orbital electrons. {With SOC, the EBR is gapped,} leading to topologically nontrivial valence and conduction bands. A trivial spurious EBR also lies energetically within the topological gap (shown in black in the inset to Fig.~\ref{fig:2}\textbf{c}), causing the effective transport gap to be smaller than the topological gap (similar to HgTe). {There are $35$ additional materials in this Cu$_2$ABX$_4$ class of materials.}
We use these considerations to {design a systematic method to} search for $(1,1)$ type TIs. First, we identify all disconnected PEBRs {from} our data paper Ref.~\onlinecite{GraphDataPaper}, which {is organized by WP and site-symmetry irrep;} in the SM we indicate which orbital types give rise to these representations. Next, we cross-reference with the ICSD\cite{ICSD}, which yields a list of candidate materials. This list can be further reduced {by restricting to semimetals (since an insulator would not yield a topological gap near the Fermi level after turning on SOC).}
Finally, an electron counting analysis, using only the atomic-limit orbital energies, will determine whether or not the topologically relevant {BRs} will lie near the Fermi level.
{Carrying out this procedure led us to identify the Cu$_2$ABX$_4$ material class introduced in the previous paragraph.}
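The three filtering steps just described amount to a simple database query. The sketch below is purely illustrative: the dictionary keys, the toy `DISCONNECTED_PEBRS` set, and the example entries are hypothetical stand-ins for the actual BANDREP and ICSD data, not real records, and the electron-counting step is folded into a precomputed flag.

```python
# Hypothetical (SG, Wyckoff position, site-symmetry irrep) keys standing in
# for the tabulated disconnected PEBRs; not real BANDREP entries.
DISCONNECTED_PEBRS = {("224", "2a", "eg")}

def candidate_11_tis(materials):
    """Filter a toy materials list through the three steps of the search."""
    hits = []
    for m in materials:
        # Step 1: an occupied WP must carry a disconnected PEBR.
        occupied = {(m["sg"], wp, irrep) for wp, irrep in m["orbitals"]}
        if not occupied & DISCONNECTED_PEBRS:
            continue
        # Steps 2-3: restrict to semimetals (an insulator cannot open a
        # topological gap at the Fermi level when SOC is turned on); here the
        # atomic-limit electron counting is assumed already done.
        if not m["semimetal_without_soc"]:
            continue
        hits.append(m["formula"])
    return hits

toy_db = [
    {"formula": "A2B", "sg": "224", "orbitals": [("2a", "eg")],
     "semimetal_without_soc": True},
    {"formula": "CD", "sg": "225", "orbitals": [("4b", "t2g")],
     "semimetal_without_soc": True},
]
print(candidate_11_tis(toy_db))  # ['A2B']
```

The same skeleton, with a different set of flagged band representations, applies to the $(1,2)$ search of the next subsection.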
We also provide a list\cite{suppmatt} of materials guaranteed to be semimetals by the minimal-insulating-filling criterion\cite{Watanabe16} (a sufficient, but far from necessary, condition for a semimetal).
\subsection{Multiple PEBRs}
{As described in Section~\ref{sec:classify}, a CBR can be disconnected}
in such a way that neither the valence nor the conduction bands form PEBRs. When these are double-valued (spinful) BRs, we classify them by whether the BR is composite or elementary without SOC. In the
first case, a single, connected, {SOC-free} EBR decomposes after turning on SOC into a sum of two PEBRs that are disconnected in a topologically nontrivial way. These constitute $(1,2)$ type TIs. This class includes materials composed of layered Bi$^{-1}$ square nets and related structures, which we discuss further in the SM. The relevant states at the Fermi level are the Bi $p_x$ and $p_y$ orbitals, which induce a single EBR when {SOC} is neglected. These materials are filling-enforced semimetals with a symmetry-protected nodal (degenerate) line at the Fermi level.
{SOC fully gaps the nodal line, causing the EBR to disconnect. Consequently, this is a topological gap}.
Fig.~\ref{fig:3} depicts band structures for the representative {TIs} SrZnSb$_2$ and ZrSnTe\cite{129tis,zrsnte}.
A similar materials search program to that described in the previous subsection can be implemented for the $(1,2)$ type of Table~\ref{table:titable}.
{SOC-}free EBRs which decompose into {CBRs} with SOC can be quickly identified {from Ref.~\onlinecite{GroupTheoryPaper}.} With these in hand, a list of materials with the proper orbitals and occupied {WPs} can be compiled from the ICSD\cite{ICSD}. Finally, a{n SOC-}free electron counting analysis will reveal which of these materials are semimetallic without SOC. These are prime candidates to become {TIs} when SOC is included.
{This systematic search has led us} to identify {over 300} $(1,2)$ materials in {SG $P4/nmm$ ($129$)\cite{129tis,zrsnte,schoop2015}, as well as 58 new candidate TIs in SG \textit{Pnma} ($62$)}. {SG \textit{Pnma} $(62)$ arises from an in-plane distortion of SG $P4/nmm$ ($129$), and again demonstrates that our group-theoretic approach allows us to predict the topological character of materials upon structural distortion.}
{We present these materials in the SM.}
Finally, {we discuss the known TIs} Bi$_2$Se$_3$ and KHgSb. In these materials, without SOC the band representation at the Fermi level is gapped and composite.
With infinitesimal SOC, the band representation remains gapped and composite.
{A topological phase transition occurs only when SOC is strong enough to drive a band inversion that exchanges states with distinct little group reps at $\Gamma$ between the conduction and valence bands.}
{Because such a transition depends on the strength of SOC -- unlike in the $(1,1)$ and $(1,2)$ cases, where the transition is at infinitesimal SOC -- Bi$_2$Se$_3$ and KHgSb are described by our method but could not be unequivocally predicted.}
\subsection{Partially filled bands and semimetals}
Although not the main focus of this work, we also note that our method allows for the prediction of metals and semimetals. Going beyond standard methods of counting multiplicities of occupied WPs\cite{hoffmann1987chemistry}, we can predict symmetry-protected semimetals by looking for partially-filled, connected EBRs induced from high-dimensional site-symmetry representations. In this way we have found the A$_{15}$B$_4$ family\cite{batteryref} of sixteen-fold connected metals in SG $I\bar{4}3d$ (220) with A$=$Cu,Li,Na and B$=$Si,Ge,Sn,Pb. Through charge-transfer, the sixteen bands in this PEBR are $7/8$ filled; this exotic filling holds promise for realizing novel physics when interactions are included. The band structure for Li$_{15}$Ge$_4$ is shown in Fig.~\ref{fig:2}\textbf{d}. There is a symmetry-protected threefold degeneracy near the Fermi level at the $P$ point\cite{Bradlyn2016}. Finally, we calculate that Cu$_3$TeO$_6$ in SG $Ia\bar{3}$ (206)\cite{24foldref} has a half-filled, connected \emph{twenty-four}-band PEBR at the Fermi level when interactions are neglected. Our EBR theory reveals that this is the highest symmetry-enforced band connectivity in any SG.
{\subsection{Systematic materials search: Summary}
While the materials presented above represent the proof-of-principle for our material search strategy, a full, systematic search of the entire materials database based on our criteria reveals a myriad of new topological insulator and semimetal candidates. While we defer a full discussion of our new materials predictions to a forthcoming work, we shall here list some of our more promising candidate materials, as identified through our systematic search and verified with ab-initio DFT calculations. In SG P$\bar{3}m1$ ($164$) we predict that CeAl$_2$Ge$_2$, CeISi, BiTe, and Nb$_3$Cl$_8$ will be topological insulators. In SG $P4/mmm$ ($123$) we have LiBiS$_2$ and AgSbTe$_2$. A particularly promising family of topological materials is given by TiAsTe, ZrSbTe, HfSbTe, Hf$_3$Ni$_4$Ge$_4$, Sr$_3$Li$_4$Sb$_4$, Ba$_2$Bi$_3$, and Ba$_3$Al$_2$Ge$_2$ in SG $Immm$ ($71$). Additionally, we find NaAu$_2$ in SG $Fd\bar{3}m$ ($227$), LaPd$_2$O$_4$ in SG $I4_1/a$ ($88$), BaGe$_2$Ru$_2$ in SG $Fddd$ ($70$), Ni$_2$Ta$_2$Te$_4$ in SG $Pbam$ ($53$), Ag$_2$Ca$_4$Si$_6$ in SG $Fmmm$ ($69$), Ta$_2$Se$_8$I in SG $I422$ ($97$), SnP in SG $I4mm$ ($107$), and Tl$_4$CuTe$_3$ in SG $I4/mcm$ ($140$), each of which we predict hosts novel topological bands.}
\section{Conclusion}\label{sec:conclusion}
We have combined group theory, chemistry, and graph theory to provide the framework of Topological Quantum Chemistry.
{We provide a complete description of the Wannier-Bloch duality between real and momentum space, in the process}
linking the extended (physics) versus local (chemistry) approaches to electronic states. Our theory is descriptive and predictive: we can algorithmically search for and predict new {TIs} and semimetals. In a series of accompanying Data papers, we present the group-theoretic data and graph-theoretic algorithms necessary to deduce the conclusions of Sections~\ref{sec:graphs} and \ref{sec:ebrs}, and to implement the materials search described in Section~\ref{sec:matsearch}. By taking the ideas presented in this paper to their logical conclusion, we arrive at a new paradigm, {which applies not only to TIs, but to semimetals and to band theory in general.}
The synthesis of symmetry and topology, of localized orbitals and Bloch wavefunctions, allows for a full understanding of noninteracting solids, which we have only begun to explore in this work. Furthermore, our emphasis on the symmetry of localized orbitals opens a promising avenue to incorporate magnetic groups or interactions into the theory of topological materials.
{\bf Data Availability:} All data supporting the conclusions of this work is hosted on the Bilbao Crystallographic Server (\url{http://cryst.ehu.es}). All information about EBRs, PEBRs, and their connectivity graphs can be accessed via the BANDREP application\cite{progbandrep}. The algorithms used to generate this data, as well as a guide to the use of all relevant programs, can be found in the accompanying data papers, Refs.~\onlinecite{GroupTheoryPaper,GraphDataPaper}.
{\bf Acknowledgements:} BB would like to thank Ivo Souza, Richard Martin, and Ida Momennejad for fruitful discussions. MGV would like to thank Gonzalo Lopez-Garmendia for help with computational work. BB, JC, ZW, and BAB acknowledge the hospitality of the Donostia International Physics Center, where parts of this work were carried out. JC also acknowledges the hospitality of the Kavli Institute for Theoretical Physics, and BAB also acknowledges the hospitality and support of the \'{E}cole Normale Sup\'{e}rieure and Laboratoire de Physique Th\'{e}orique et Hautes Energies. The work of MVG was supported by FIS2016-75862-P and FIS2013-48286-C2-1-P national projects of the Spanish MINECO. The work of LE and MIA was supported by the Government of the Basque Country (project IT779-13) and the Spanish Ministry of Economy and Competitiveness and FEDER funds (project MAT2015-66441-P). ZW and BAB, as well as part of the development of the initial theory and further ab-initio work, were supported by the Department of Energy de-sc0016239, Simons Investigator Award, the Packard Foundation, and the Schmidt Fund for Innovative Research. The development of the practical part of the theory, tables, some of the code development, and ab-initio work was funded by NSF EAGER Grant No. DMR-1643312, ONR - N00014-14-1-0330, and NSF-MRSEC DMR-1420541.
{\bf Author Contributions:} BB, LE, JC, MGV, and ZW contributed equally to this work. BB, JC, ZW and BAB provided the theoretical analysis, with input from CF. JC developed specific models to test the theory. LE and MIA performed the computerized group-theoretic computations. BB, LE and MGV devised and developed the graph algorithms, as well as the EBR connectivities; LE and MGV performed the computerized graph theory computations. ZW discovered the new materials presented in this paper with input from CF, and performed all first-principles calculations.
{\bf Author Information:} Reprints and permissions information is available at www.nature.com/reprints. The authors declare no competing financial interests. Correspondence and requests for materials should be addressed to bernevig@princeton.edu.
\section{Band Representations and Wannier Functions}\label{sec:bandreps}
Here we expand upon the notion of {band representations} as discussed in the main text. First introduced by Zak\cite{Zak1982}, a band representation for a space group $G$ is a set of energy bands {$E_n(\mathbf{k})$} {spanned} by a given collection of (exponentially) localized Wannier orbitals. To be consistent with the crystal symmetries, the localization centers of these Wannier orbitals {(Wannier centers)} must form an orbit under the action of $G$: given a Wannier function centered at a point $\mathbf{q}_1$ {in the unit cell}, there is a Wannier function centered at $g\mathbf{q}_1$ for all $g\in G$, {including} all positions related to these by lattice translations. The set $\{\mathbf{q}_\alpha\}$ of these Wannier centers forms an orbit {under the space group action and hence can be labelled by a} {\emph{Wyckoff position}} of the \emph{space} group $G$. Note that every system representable with a tight-binding model has such a real-space [direct space] description.
Before we show how to construct a band representation from a set of localization centers {of the Wannier orbitals}, we will carefully define our terminology. Note that we use the {conventional origin choice (origin choice $2$)} for all space groups as given by the Bilbao Crystallographic Server\cite{Bilbao1,Bilbao2,Bilbao3}. {Our terminology follows that of Refs.~\onlinecite{Bacry1988,Evarestov1997,Cracknell,Bilbao1,Bilbao2,Bilbao3}. For basic facts about the theory of finite groups, we refer the reader to Refs.~\onlinecite{Serre,Fulton2004}}.
\begin{defn}
A {\bf symmetry site} $\mathbf{q}$ is any point in the unit cell of a crystal. The set of symmetry operations $g\in G$ that leave $\mathbf{q}$ fixed (absolutely, not up to lattice translations) is called the {\bf stabilizer group}, or {\bf site-symmetry group} $G_\mathbf{q}\subset G$. By definition, a site-symmetry group is isomorphic to a crystallographic point group. A site-symmetry group is called {\bf non-maximal}
if there exists a finite group $H$ such that $G_\mathbf{q}\subset H\subset G$; otherwise, it is {\bf maximal}.
\end{defn}
Note that the translation part of $g\in G_\mathbf{q}$ may include lattice translations, {so long as it keeps {the point} $\mathbf{q}$ fixed. Nonetheless, $G_\mathbf{q}$ must be isomorphic to a \emph{point} group.}
\begin{defn}
The orbit $\{\mathbf{q}_\alpha=g_\alpha \mathbf{q} | {g_\alpha \notin G_\mathbf{q}} \}, \alpha=1,\dots,n$ of a symmetry site $\mathbf{q}$ modulo lattice translations {is classified by} a {\bf Wyckoff position} of multiplicity $n$. Note that we define the multiplicity with respect to the primitive, rather than the conventional cell. The stabilizer groups $G_{\mathbf{q}_\alpha}$ are all isomorphic and conjugate to the stabilizer group $G_\mathbf{q}\equiv G_{\mathbf{q}_1}$. We say that a Wyckoff position is maximal if the stabilizer group $G_\mathbf{q}$ is maximal.
\label{def:Wyckoff}
\end{defn}
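Definition~\ref{def:Wyckoff} translates directly into a small orbit computation: the multiplicity of a Wyckoff position is the size of the orbit of a representative point modulo lattice translations. The following is an illustrative sketch only, assuming a symmorphic two-dimensional example with point group $4mm$ acting about the origin; the generators and test points are our own choices, not data from the text.

```python
import numpy as np

# Generators of the point group 4mm acting on fractional coordinates:
# a fourfold rotation and a mirror. Lattice translations are handled by
# reducing coordinates modulo 1. (Illustrative symmorphic 2D example.)
GENS = [np.array([[0, -1], [1, 0]]), np.array([[0, 1], [1, 0]])]

def wyckoff_multiplicity(q, gens=GENS, tol=1e-9):
    """Size of the orbit of q modulo lattice translations."""
    pts = [np.mod(np.asarray(q, dtype=float), 1.0)]
    frontier = list(pts)
    while frontier:
        p = frontier.pop()
        for R in gens:
            new = np.mod(R @ p, 1.0)
            if not any(np.allclose(new, x, atol=tol) for x in pts):
                pts.append(new)
                frontier.append(new)
    return len(pts)

print(wyckoff_multiplicity([0.0, 0.0]))  # maximal position at the origin: 1
print(wyckoff_multiplicity([0.3, 0.0]))  # edge position (x, 0): 4
print(wyckoff_multiplicity([0.3, 0.1]))  # general position: 8
```

The general position has multiplicity equal to the order of the point group, as stated below; points left invariant by more symmetries have proportionally smaller orbits.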
As an example, if $\mathbf{q}$ is {a general} point in the unit cell {with trivial stabilizer group, $G_\mathbf{q}=\{E|000\}$}, then $\mathbf{q}$ belongs to the ``general'' Wyckoff position with multiplicity equal to the order of the point group of the space group. This is not a maximal position in general, but it {is} \emph{a} position.
Let us now return to the problem of constructing a band representation. Without loss of generality, consider the case of Wannier functions localized on symmetry sites $\{\mathbf{q}_\alpha| \alpha = 1,... , n\}$ classified by a single Wyckoff position {of multiplicity $n$}. Then the $n_q$ functions localized on the site $\mathbf{q}\equiv\mathbf{q}_1$ transform under some representation $\rho$ of the site-symmetry group $G_\mathbf{q}$, with dimension $n_q$. For the time being we do not specify whether or not $\rho$ is irreducible; we will show later that we need only concern ourselves with the irreducible representations (irreps). Crystal symmetry dictates that there are $n_q$ orbitals localized on the other equivalent symmetry sites $\mathbf{q}_\alpha$ {in the orbit}, and that these transform under the {conjugate} representation defined by
\begin{equation}
{\rho_\alpha(h)=\rho(g_\alpha^{-1} h g_\alpha)}
\end{equation}
for $h\in G_{\mathbf{q}_\alpha}$.
One can see that $g_\alpha^{-1} h g_\alpha \in G_\mathbf{q}$ because $h\in G_{\mathbf{q}_\alpha} \Rightarrow h \mathbf{q}_\alpha = \mathbf{q}_\alpha \Rightarrow h g_\alpha \mathbf{q} = g_\alpha \mathbf{q}$.
We can thus index our Wannier functions as {$W_{i\alpha}(\mathbf{r}-\mathbf{t}_\mu)$}, where $i=1\dots n_q$ indexes the functions localized on symmetry site $\mathbf{q}_\alpha+\mathbf{t}_\mu$ and $\mathbf{t}_\mu$ is a lattice vector.
Finally, the elements $g_\alpha,\;\alpha\neq 1$ act by permuting the different symmetry sites {$\mathbf{q}_\alpha$}. Taking all of these facts together allows us to define the {\bf induced representation}\cite{Fulton2004} $\rho_G\equiv\rho\uparrow G\equiv \mathrm{Ind}_{G_\mathbf{q}}^{G}\rho$ of the space group $G$ induced from the representation $\rho$ of $G_\mathbf{q}$. This representation is {$n_q\times n\times N$ dimensional (assuming periodic boundary conditions), where $N\rightarrow\infty$ is the number of (primitive) unit cells in the system}, and the
representation matrices have a block structure, with {$n^2\times N^2$} blocks of $n_q\times n_q$ dimensional submatrices; a group element $g$ whose matrix representative has a nonvanishing $(\alpha\beta)$ block maps $\mathbf{q}_\beta$ to $\mathbf{q}_\alpha$.
For our purposes, it is most convenient to work with the Fourier transforms
\begin{equation}
a_{i\alpha}(\mathbf{k},\mathbf{r})=\sum_{\mu}e^{i\mathbf{k}\cdot \mathbf{t}_\mu}W_{i\alpha}(\mathbf{r}-\mathbf{t}_\mu),
\end{equation}
{where the sum is over the lattice vectors and $\alpha = 1,..., n$. In this way we can exchange our infinite $N^2\times n^2\times n_q^2$ matrices for finite-dimensional $n^2\times n_q^2$ matrix-valued functions of $\mathbf{k}$, {which takes $N$ values in the first Brillouin zone (BZ)}}. {Any translationally-invariant, quadratic Hamiltonian acting in the Hilbert space of these Wannier functions commutes with these matrices.}
The concrete formula for the induced representation matrices $\rho_G(g)$ can then be defined as follows:
\begin{defn}\label{def:br}
The {\bf band representation} $\rho_G$ induced from the {$n_q-$dimensional} representation $\rho$ of the site-symmetry group $G_\mathbf{q}$ of a particular point $\mathbf{q}$, {whose orbit belongs to the Wyckoff position $\{\mathbf{q}_\alpha \equiv g_\alpha \mathbf{q} | g_\alpha \notin G_\mathbf{q}\, {\mathrm{for}\, \alpha\neq 1} \}$ of multiplicity $n$,} is defined for all $h\in G$ by the action
\begin{equation}
(\rho_G(h)a)_{i\alpha}(\mathbf{k},\mathbf{r})=e^{-i(h\mathbf{k})\cdot\mathbf{t}_{\beta\alpha}}{\sum_{i'=1}^{n_\mathbf{q}}}\rho_{i'i}(g_{\beta}^{-1}\{E|-\mathbf{t}_{\beta\alpha}\}hg_\alpha)a_{i'\beta}(h\mathbf{k},\mathbf{r}),
\label{eq:inducedrep}
\end{equation}
{here $\alpha,\beta,i,i'$ are matrix indices, where for each choice of $\alpha$ the index $\beta$ is determined by the {unique}} coset of $G$ that contains $hg_\alpha$:
\begin{equation}
hg_\alpha =\{E|\mathbf{t}_{\beta\alpha}\}g_{\beta}g
\label{eq:defbeta}
\end{equation}
for some $g\in G_\mathbf{q}$
and Bravais lattice vector $\mathbf{t}_{\beta\alpha}$.
By moving $g_\alpha$ to the right-hand-side of Eq~(\ref{eq:defbeta}), it is evident that
$h\mathbf{q_\alpha} = \{E|\mathbf{t}_{\beta\alpha}\}g_{\beta}g g_\alpha^{-1} \mathbf{q}_\alpha = \{E|\mathbf{t}_{\beta\alpha}\}g_{\beta}g\mathbf{q} = \{E|\mathbf{t}_{\beta\alpha}\}g_{\beta}\mathbf{q} = \{E|\mathbf{t}_{\beta\alpha}\}\mathbf{q}_\beta$. The second and fourth equalities follow from the definition of $\mathbf{q}_{\alpha,\beta}$ and the third equality follows from $g\in G_\mathbf{q}$. Thus,
\begin{equation}
\mathbf{t}_{\beta\alpha}=h\mathbf{q}_\alpha-\mathbf{q}_{\beta}. \label{eq:Rab}
\end{equation}
If $\rho$ is an $n_q$-dimensional representation of $G_\mathbf{q}$, and if the Wyckoff multiplicity of the position $\{\mathbf{q}_\alpha\}$ is $n$, then there are $n\times n_q$ {energy bands} in the band representation $\rho_G$.
\end{defn}
(This is a special case of the general induction procedure; a similar formula can be used for inducing the representation of any group from one of its subgroups.) {Notice that we did not need to specify a particular Hamiltonian. Our discussion applies to any Hamiltonian that respects the crystal symmetry and acts on a local Hilbert space of Wannier functions.}
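Definition~\ref{def:br} can be made concrete in the simplest nontrivial setting. The sketch below is our own illustration, not an example from the text: a 1D lattice with inversion $I$ about the origin, the general Wyckoff position with orbit $\{x_0,\,1-x_0\}$ (coset representatives $E$ and $\{I|1\}$, arbitrary $x_0=0.3$), and the trivial one-dimensional site representation ($n_q=1$, $n=2$). It builds the $2\times2$ matrix of $\rho_G(\{I|0\})$ at momentum $k$ from Eqs.~(\ref{eq:inducedrep}) and (\ref{eq:Rab}), and checks $\rho_G(I)^2=\rho_G(E)$ together with the inversion eigenvalues at the zone boundary.

```python
import numpy as np

# General Wyckoff position of a 1D lattice with inversion about the origin:
x0 = 0.3
q = [x0, 1.0 - x0]

def rho_I(k):
    """2x2 matrix of rho_G({I|0}) at momentum k, following Eq. (inducedrep)."""
    M = np.zeros((2, 2), dtype=complex)
    for a in range(2):
        hq = -q[a]                        # {I|0} acts as x -> -x
        for b in range(2):
            t = hq - q[b]                 # t_{ba} = h q_a - q_b, Eq. (Rab)
            if abs(t - round(t)) < 1e-9:  # must be a lattice vector
                # Phase e^{-i (hk) t_{ba}}, with hk = -k for inversion
                M[b, a] = np.exp(-1j * (-k) * round(t))
    return M

# I^2 = E, so the matrices at k and Ik = -k must compose to the identity:
for k in (0.0, 0.7, np.pi):
    assert np.allclose(rho_I(k) @ rho_I(-k), np.eye(2))

# Little-group content at the zone boundary k = pi: parities {-1, +1}
evals = sorted(np.linalg.eigvalsh(rho_I(np.pi)))
print([round(float(v)) for v in evals])  # [-1, 1]
```

Even this trivial site representation yields one even-parity and one odd-parity state at each inversion-invariant momentum, a direct, Hamiltonian-independent consequence of the induction formula.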
All of the above holds for either spinless or spin-orbit coupled systems. When spin-orbit coupling is negligible, we consider the single-valued (or spinless) linear representations of the site symmetry group{; we double everything to account for the trivial spin degeneracy}. For spin-orbit coupled systems, we must use the double-valued (spinor) representations.
Note that a band representation is formally infinite dimensional since it depends on the momentum $\mathbf{k}$ ({it has as many dimensions as the number of unit cells in the crystal}), while the irreducible representations of the space groups are indexed by discrete sets of $\mathbf{k}$ vectors. As such, every band representation is formally reducible, {as it decomposes as} an infinite direct sum (over $\mathbf{k}$ points) of space group irreps at each $\mathbf{k}$ point. However, it is the band representations, rather than the space group irreps, that tell us about the global band structure topology in a crystal.
Hence, we will be interested in the decomposition of band representations into sums of other band representations.
{First, we must specify how to tell if two band representations are equivalent. Given two band representations $\rho_G$ and $\sigma_G$, the necessary condition for {their} equivalence used up to now\cite{Bacry1988,RegRep} is that they restrict to the same little group representations at all points in the BZ. However, for the study of topological phases, we need a stronger form of equivalence. We define a form of \emph{homotopy equivalence} that makes explicit the smoothness properties needed for discussing topological phase transitions. Namely, we say
\begin{defn}\label{def:equiv}
Two band representations $\rho_G^\mathbf{k}$ and $\sigma_G^\mathbf{k}$ are {\bf equivalent} iff there exists a {unitary} matrix-valued function $S(\mathbf{k},t,g)$ smooth in $\mathbf{k}$ and continuous in $t$ such that for all $g\in G$
\begin{enumerate}
\item $S(\mathbf{k},t,g)$ is a band representation for all $t\in[0,1]$,
\item $S(\mathbf{k},0,g)=\rho_G^\mathbf{k}(g)$, and
\item $S(\mathbf{k},1,g)=\sigma_G^\mathbf{k}(g)$.
\end{enumerate}
\end{defn}
{Note that since $S$ is continuous in $t$, any property of a band representation evolves continuously under the equivalence $S$. In particular, the Wilson loop {(i.~e.~Berry phase)} matrices\cite{Zakphase,Aris2014,Aris2016} computed from the bands in the representation $\rho_G^{\mathbf{k}}$ are homotopic to the Wilson loop matrices computed in the representation $\sigma_G^{\mathbf{k}}$. As such, two equivalent band representations cannot be distinguished by any quantized Wilson loop invariant.} {\color{black} This is a constructive formulation of the type of equivalence noted in Refs.~\onlinecite{Michel1992} and \onlinecite{RegRep}.}
We can also give a more constructive view of equivalence. {Consider} two symmetry sites, $\mathbf{q}, \mathbf{q}'$, which have {distinct} site symmetry groups, $G_{\mathbf{q}}$ and $G_{\mathbf{q}'}$, respectively, with nonempty intersection, $G_0=G_{\mathbf{q}}\cap G_{\mathbf{q}'}$.
Since $G_0$ is the intersection of two distinct stabilizer groups, it is the stabilizer group of some {lower-symmetry} site $\mathbf{q}_0$.
This symmetry site will have a variable parameter that interpolates between the symmetry sites $\mathbf{q}$ and $\mathbf{q}'$; this allows us to easily identify $\mathbf{q}_0$ from a table of Wyckoff positions{, which we have done for all the Wyckoff positions of all the $230$ space groups\cite{}.} If $G_0$ is an index $m_q$ subgroup in $G_\mathbf{q}$, then the associated Wyckoff position {with stabilizer group $G_0$} has multiplicity $m_q$ times that of $\mathbf{q}$. Furthermore, $G_\mathbf{q}$ has a coset decomposition in terms of $m_q$ cosets of $G_0$; analogous statements hold when we view $G_0$ as an index $m_q'$ subgroup of $G_{\mathbf{q}'}$. We can use this coset decomposition to induce representations of $G_\mathbf{q}$ and $G_{\mathbf{q}'}$ from representations of $G_0$, in much the same way as outlined above. Given a representation $\rho$ of $G_0$, the band representations $(\rho\uparrow G_{\mathbf{q}})\uparrow G$ and $(\rho\uparrow G_{\mathbf{q}'})\uparrow G$ are equivalent. The existence of a homotopy $S$ implementing this equivalence is guaranteed by the fact that the symmetry site $\mathbf{q}_0$ can be continuously moved from $\mathbf{q}$ to $\mathbf{q}'$ without violating the crystal symmetries.
Using Definition \ref{def:equiv} of equivalence, we define
\begin{defn}\label{def:composite}
A band representation is called {\bf composite} if it {is equivalent to} the direct sum of other band representations. A band representation that is not composite is called {\bf elementary}.
\end{defn}
Using the fact that induction {of representations}, $\uparrow$, commutes with direct sums, and that induction factors through subgroup inclusion\cite{Bacry1993}, we deduce that elementary band representations are induced from irreducible representations of maximal site symmetry groups. These conditions are necessary; however, they are not sufficient. It may still be the case that a band representation induced from a maximal site symmetry irrep is equivalent [in the sense of Def.~(\ref{def:equiv})] to a composite band representation. We catalogue all such exceptions in Table~\ref{table:sbr} for the space groups {(first found in Ref.~\onlinecite{Bacry1988})}, and in Table~\ref{table:dbr} for the double space groups. {The full description of how this data was obtained will be presented in the accompanying Data paper\cite{GroupTheoryPaper}, and the data itself is accessible through the BANDREP program on the Bilbao Crystallographic Server\cite{progbandrep}. We have
\begin{prop}
A band representation $\rho_G$ is elementary if and only if it can be induced from an irreducible representation $\rho$ of a {maximal} site-symmetry group $G_\mathbf{q}$ which does not appear in the first column of Table~\ref{table:sbr} or \ref{table:dbr}.
\end{prop}
}
Up to this point, we have not commented on time-reversal symmetry. We may include antiunitary time-reversal symmetry as an element in any site-symmetry group, {as} it acts locally in real [direct] space, i.~e.~it commutes with all space group elements. For spinless systems, time-reversal squares to $+1$, while for spinful systems it squares to $-1$. We call site-symmetry representations which are compatible with the action of time-reversal \emph{physical} representations. Note that all physically irreducible site-symmetry representations are even-dimensional for spinful systems by Kramers's theorem. The entire discussion thus far holds mutatis mutandis for physical band representations, physical equivalence, and physically elementary band representations, by generalizing Defs.~\ref{def:br}, \ref{def:equiv}, and \ref{def:composite} to the TR-symmetric case. Taking time-reversal symmetry into account, we find that only the band representations below the double line in Table~\ref{table:sbr} fail to be physically elementary. The rest of the entries in Table~\ref{table:sbr}, as well as all the entries in Table~\ref{table:dbr}, are composite without TR symmetry, but physically elementary. Moreover, {we find} there are additional physical exceptions for spinless systems with time-reversal symmetry, catalogued in Table~\ref{table:zakwaswrong}. {The machinery we used to obtain these tables will be explained in detail in Ref.~\onlinecite{GroupTheoryPaper}; they represent an exhaustive search of all induced representations for all space groups. Summarizing, we have with TR that
\begin{prop}
A spinless (i.e. single-valued) band representation $\rho_G$ is physically elementary if and only if it can be induced from a physically irreducible representation $\rho$ of a {maximal} site-symmetry group $G_\mathbf{q}$ which does not appear in {the first column of} Table~\ref{table:sbr} {below the double line, and if it does not appear in Table~\ref{table:zakwaswrong}}.
A spinful (i.e. double-valued) band representation $\rho_G$ is physically elementary if and only if it can be induced from a physically irreducible representation $\rho$ of a {maximal} site-symmetry group $G_\mathbf{q}$.
\label{prop:physelementary}
\end{prop}
}
\section{Connectivity Graphs}
In this Appendix, we review the necessary background for constructing the connectivity graphs associated with elementary band representations. After reviewing compatibility relations in more detail than presented in the main text, we outline our algorithm for computing the allowed connectivities of elementary band representations using the notion of spectral graph partitioning. This allows us to develop an approach to the classification and search for TIs significantly more general than those found in recent proposals\cite{kanegraphs,globaltop}. {A more complete account of the machinery and related data -- which takes more than 100 pages -- will be given in Ref.~\onlinecite{GraphDataPaper}, but the following represents a good introduction to our method and its results.}
\subsection{Compatibility Relations}\label{ex:sg216comprel}
Recall that in the textbook theory of energy bands{\cite{Cracknell}}, global band topology {-- the various interconnections between different bands throughout the BZ --} is inferred from the representations of the little groups $G_\mathbf{k}$ through the use of so-called ``compatibility relations''. Specifically, irreducible little group representations at high-symmetry $\mathbf{k}$-points, lines, and planes in the BZ are reducible along high-symmetry lines, planes, and volumes (the general $\mathbf{k}$-point) respectively; the compatibility relations determine which representations can be consistently connected along these subspaces. Since the little groups along high-symmetry surfaces are subgroups of the little groups of their boundaries, the compatibility relations can be determined by starting with the little group representation on a high-symmetry surface, and restricting (subducing) to the representations of the higher dimensional surfaces that it bounds.
As an example, let us examine space group $P\bar{4}3m$ (215). This is a symmorphic space group with primitive cubic Bravais lattice, and point group $T_d$. The group $T_d$ is generated by a threefold rotation, {$C_{3,111}$, a fourfold roto-inversion, $IC_{4z}\equiv S_4^-$, and a mirror reflection, $m_{1\bar{1}0}$}. Consider the high-symmetry point $\Gamma=(0,0,0)$ in the BZ, and the line $\Lambda=(k,k,k)$ emanating from it. The point group of the little group $G_\Gamma$ of $\Gamma$, known as the {\bf little co-group} $\bar{G}_\Gamma$, is isomorphic to the point group of the space group, while the little co-group $\bar{G}_\Lambda$ of $\Lambda$ is generated by $C_{3,111}$ and $m_{1\bar{1}0}$, and thus isomorphic to the group $C_{3v}$. As such, irreps of $G_\Gamma$ restrict (or subduce) to representations of the little group $G_\Lambda$ of the line $\Lambda$. The compatibility relations enumerate all such restrictions. For example, let us first consider the little group $G_\Gamma$.
We note that since the space group $P\bar{4}3m$ (215) is symmorphic, the representations of the little groups $G_\mathbf{k}$ are trivially determined by the representations of the little co-groups $\bar{G}_\mathbf{k}$.
Here and throughout, we will simplify notation by giving representation matrices and character tables for the little co-groups where appropriate.
The four-dimensional double-valued $\bar{\Gamma}_8$ representation of $\bar{G}_\Gamma\approx T_d$ (this is the spin-$3/2$ representation) is specified by
\begin{equation*}
\bar{\Gamma}_8(C_{3, 111})=
\frac{\sqrt{2}}{4}e^{-i\pi/4}
\left(\begin{array}{cccc}
-i & -\sqrt{3} & i\sqrt{3} & 1\\
-i\sqrt{3} & -1 & -i & -\sqrt{3}\\
-i\sqrt{3} & 1 & -i & \sqrt{3}\\
-i & \sqrt{3} & i\sqrt{3} & -1
\end{array}\right),\;\;
\bar{\Gamma}_8(IC_{4z}) = \left(
\begin{array}{cccc}
-\sqrt[4]{-1} & 0 & 0 & 0 \\
0 & -(-1)^{3/4} & 0 & 0 \\
0 & 0 & \sqrt[4]{-1} & 0 \\
0 & 0 & 0 & (-1)^{3/4} \\
\end{array}
\right)
\end{equation*}
\begin{equation}\label{eq:td4drep}
\bar{\Gamma}_8(m_{1\bar{1}0})= \left(
\begin{array}{cccc}
0 & 0 & 0 & (-1)^{1/4} \\
0 & 0 & -(-1)^{3/4}& 0 \\
0 & -(-1)^{1/4} & 0 & 0 \\
(-1)^{3/4} & 0 & 0 & 0 \\
\end{array}
\right)
\end{equation}
The matrix for $C_{3,111}$ has eigenvalues $-1,-1,e^{i\pi/3},e^{-i\pi/3}$, while the matrix for $m_{1\bar{1}0}$ has eigenvalues $-i,-i,i,i$.
Next, we note that there are three double-valued representations of $G_\Lambda$, conventionally labelled $\bar{\Lambda}_4$, $\bar{\Lambda}_5$, and $\bar{\Lambda}_6$. The matrix representative of $\{C_{3,111}|000\}$ in each of these representations is given by
\begin{equation}
\bar{\Lambda}_4(\{C_{3,111}|000\})=\bar{\Lambda}_5(\{C_{3,111}|000\})=-1,\; \bar{\Lambda}_6(\{C_{3,111}|000\})=\left(\begin{array}{cc} e^{-i\pi/3} &0 \\ 0 & e^{i\pi/3}\end{array}\right),
\end{equation}
and the matrices for $\{m_{1\bar{1}0}|000\}$ are
\begin{equation}
\bar{\Lambda}_4(\{m_{1\bar{1}0}|000\})=-i,\;\bar{\Lambda}_5(\{m_{1\bar{1}0}|000\})=i,\; \bar{\Lambda}_6(\{m_{1\bar{1}0}|000\})=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0\end{array}\right)
\end{equation}
By comparing eigenvalues, we deduce that the $\bar{\Gamma}_8$ representation of $G_\Gamma$ must restrict to the $\bar{\Lambda}_4\oplus\bar{\Lambda}_5\oplus\bar{\Lambda}_6$ representation of $G_\Lambda$. The compatibility relation for the $\bar{\Gamma}_8$ representation at $\Gamma\rightarrow\Lambda$ is thus,
\begin{equation}
\bar{\Gamma}_8\downarrow G_\Lambda\approx\bar{\Lambda}_4\oplus\bar{\Lambda}_5\oplus\bar{\Lambda}_6\label{eq:sg216comprel}
\end{equation}
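This eigenvalue bookkeeping is easy to verify numerically. The following sketch (in Python with NumPy; the matrices are transcribed from the representation matrices above, and the script is ours for illustration, not part of any published code) checks that the $\bar{\Gamma}_8$ representatives are unitary, square and cube to $-E$ as required of a double-valued representation, and have exactly the eigenvalues of $\bar{\Lambda}_4\oplus\bar{\Lambda}_5\oplus\bar{\Lambda}_6$:

```python
import numpy as np

def multiset_close(a, b, tol=1e-8):
    # compare two collections of complex numbers as (unordered) multisets
    a = list(a)
    for x in b:
        j = min(range(len(a)), key=lambda i: abs(a[i] - x))
        if abs(a[j] - x) > tol:
            return False
        a.pop(j)
    return True

# Matrix representatives of the 4D double-valued Gamma_8 irrep of T_d,
# transcribed from the equations above.
r3 = np.sqrt(3)
c = (np.sqrt(2) / 4) * np.exp(-1j * np.pi / 4)
C3 = c * np.array([[-1j,      -r3,  1j * r3,   1],
                   [-1j * r3, -1,   -1j,      -r3],
                   [-1j * r3,  1,   -1j,       r3],
                   [-1j,       r3,  1j * r3,  -1]])
w = np.exp(1j * np.pi / 4)  # (-1)^{1/4}
Mir = np.zeros((4, 4), dtype=complex)
Mir[0, 3], Mir[1, 2], Mir[2, 1], Mir[3, 0] = w, -w**3, -w, w**3

# Sanity: the rotation is unitary, and double-valuedness gives C3^3 = m^2 = -E.
assert np.allclose(C3.conj().T @ C3, np.eye(4))
assert np.allclose(np.linalg.matrix_power(C3, 3), -np.eye(4))
assert np.allclose(Mir @ Mir, -np.eye(4))

# The eigenvalues reproduce those of Lambda_4 + Lambda_5 + Lambda_6.
assert multiset_close(np.linalg.eigvals(C3),
                      [-1, -1, np.exp(1j * np.pi / 3), np.exp(-1j * np.pi / 3)])
assert multiset_close(np.linalg.eigvals(Mir), [-1j, -1j, 1j, 1j])
print("Gamma_8 restricts to Lambda_4 + Lambda_5 + Lambda_6 on the line Lambda")
```

Running the script confirms the subduction rule without any character-table lookup.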
\subsection{Graph Theory Review}\label{sec:graphreview}
{Compatibility relations like these must be satisfied at each and every high-symmetry point, line, and plane throughout the BZ. In particular (as discussed in the main text), in order to connect the little group representations of pairs of high-symmetry $\mathbf{k}$-points, and so form global energy bands, we must ensure that the compatibility relations are satisfied along the lines and planes joining the two points. In general, there will be many ways to form global energy bands consistent with the compatibility relations, each yielding a physically distinct realizable band structure. Our goal is to systematically classify all these valid band structures.}
Since the compatibility relations are a purely group-theoretic device with meaning independent of any choice of Hamiltonian, {we can accomplish this task by introducing a} more refined graph-theoretic picture of band connectivity, as presented in the main text.
To begin, we introduce some graph-theoretic terminology.
\begin{defn}\label{def:partition}
A {\bf partition} of a graph is a subset, $V_0$, of nodes such that no two nodes in $V_0$ are connected by an edge.
\end{defn}
In our construction, each partition will correspond to a high-symmetry $\mathbf{k}$-point, and irreps of the little group of each $\mathbf{k}$-point will be represented as nodes, as shown in Fig.~\ref{fig:graphschematic}.
\begin{defn}\label{def:degree}
The {\bf degree of a node} $v_0$ in a graph is the number of edges that end on $v_0$.
\end{defn}
These definitions allow us to formalize the notion of a connectivity graph as introduced in the main text. In particular,
\begin{defn}\label{def:compgraph}
Given a collection of little group representations, $\mathcal{M}$, (i.e. bands) forming a (physical) band representation for a space group $G$, we construct the {\bf connectivity graph} $C_\mathcal{M}$ as follows:
{we associate a node, $p^a_{\mathbf{k}_i}\in C_\mathcal{M}$, in the graph to each representation $\rho_{\mathbf{k}_i}^a\in\mathcal{M}$ of the little group $G_{\mathbf{k}_i}$ of every high-symmetry manifold (point, line, plane, and volume), $\mathbf{k}_i$.}
If an irrep occurs multiple times in $\mathcal{M}$, there is a separate node for each occurrence.
{The degree of each node, $p^a_{\mathbf{k}_i}$, is $P_{\mathbf{k}_i}\cdot \mathrm{dim}(\rho^a_{\mathbf{k}_i})$, where $P_{\mathbf{k}_i}$ is the number of high-symmetry manifolds connected to the point $\mathbf{k}_i$}: $\mathrm{dim}(\rho^a_{\mathbf{k}_i})$ edges lead to each of these other $\mathbf{k}$-manifolds in the graph, one for each energy band.
When the manifold corresponding to $\mathbf{k}_i$ is contained within the manifold corresponding to $\mathbf{k}_j$, as in a high-symmetry point that lies on a high-symmetry line, their little groups satisfy $G_{\mathbf{k}_j} \subset G_{\mathbf{k}_i}$. {For each node $p_{\mathbf{k}_i}^a$, we compute
\begin{equation}
\rho_{\mathbf{k}_i}^a\downarrow G_{\mathbf{k}_j}\approx\bigoplus_{b}\rho_{\mathbf{k}_j}^b.
\end{equation}
We then connect each node $p_{\mathbf{k}_j}^b$ to the node $p_{\mathbf{k}_i}^a$ with $\mathrm{dim}(\rho^b_{\mathbf{k}_j})$ edges.
}
\end{defn}
{We give an illustration of these concepts in Fig.~\ref{fig:graphschematic}.}
\begin{figure}[t]
\includegraphics[height=3in]{graphschematic.pdf}
\caption{{Subgraph of a connectivity graph corresponding to the compatibility relations along $\Gamma$ and $\Lambda$ for $P\bar{4}3m$ (215) as discussed in Sec.~\ref{ex:sg216comprel}. There are two partitions in the graph labelled by $\Gamma$ and $\Lambda$. In the $\Gamma$ partition there are two nodes indicated by black circles, labelled $\bar{\Gamma}_8^1$ and $\bar{\Gamma}_8^2$, each corresponding to a copy of the $\bar{\Gamma}_8$ little group representation. Similarly, in the $\Lambda$ partition, there are two nodes corresponding to copies of the $\bar{\Lambda}_4$ little group representation and indicated by red circles; two nodes corresponding to the $\bar{\Lambda}_5$ representation and indicated by blue circles; and two nodes corresponding to the $\bar{\Lambda}_6$ representation and indicated by green circles. The nodes are connected by edges (represented by black lines) consistent with the compatibility relation Eq.~(\ref{eq:sg216comprel}). Because there are only two partitions in this subgraph, $P_\Gamma=P_\Lambda=1$ (cf.~Def.~\ref{def:compgraph}) for all nodes. The degree of each node in the $\Gamma$ partition is $4=P\cdot\mathrm{dim}(\bar{\Gamma}_8)$. Similarly, since $\mathrm{dim}(\bar{\Lambda}_6)=2$, the degree of the nodes $\bar{\Lambda}_6^1$ and $\bar{\Lambda}_6^2$ is $2$.
The remaining nodes in the $\Lambda$ partition have degree $1$, since they correspond to $1D$ representations. Note, for example, that if the $\Lambda$ line were also connected to another high symmetry $\mathbf{k}$-point (labelled $L$, for instance), then $P_\Lambda=2$, and the degree of each node in the $\Lambda$ partition would double.}}\label{fig:graphschematic}
\end{figure}
The advantage of this graph-theoretic approach to topological phase transitions is that it is algorithmically tractable. Using the {460} tables of compatibility relations {which we have generated and will publish in the accompanying Ref.~\onlinecite{GroupTheoryPaper}}, we have algorithmically constructed all connectivity graphs consistent with the {$5646$} allowed elementary band representations, {as well as with the $4757$ independent physically elementary band representations}. {While this may naively seem to be a hopeless task, we have developed several algorithms, {outlined in Section~\ref{sec:algorithms}, and in more detail in the accompanying Ref.~\onlinecite{graphdatapaper}}, which reduce the problem to analyzing a computationally tractable $\sim 10000$ graphs per band representation.} To decompose these into connected groups of valence and conduction bands, we use standard results of spectral graph theory\cite{GraphThy}. In particular, recall that
\begin{defn}
The adjacency matrix, $A$, of a graph with $m$ nodes is an $m\times m$ matrix, where the $(ij)$'th entry is the number of edges connecting node $i$ to node $j$.
\end{defn}
In addition,
\begin{defn}\label{def:degreemat}
The degree matrix, $D$, of a graph is a diagonal matrix whose $(ii)$'th entry is the degree of the node $i$.
\end{defn}
We can then form the Laplacian matrix
\begin{equation}
L\equiv D-A
\end{equation}
We make use of the following fact about the spectrum of $L$:
\begin{prop}
For each connected component of a graph, there is an eigenvector of the Laplacian with eigenvalue zero. Furthermore, the entries of this vector are $1$ on all nodes in the connected component, and $0$ on all others.
\end{prop}
The proof of this statement follows directly from the observation that the sum of entries in any row of the Laplacian matrix is by definition zero, coupled with the observation that if $L_{ij}\neq 0$, then nodes $i$ and $j$ lie in the same connected component\cite{GraphThy}. {We give an example of this method applied to graphene below in Section~\ref{subsec:graphenegraph}}.
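As a toy illustration of this spectral method, the following sketch (our own, with hypothetical node labels) builds one valid edge assignment for the $\Gamma$--$\Lambda$ subgraph of Fig.~\ref{fig:graphschematic} and counts connected components as the number of zero eigenvalues of $L$:

```python
import numpy as np

# One valid edge assignment for the Gamma--Lambda subgraph.
# Nodes 0,1: the two copies of Gamma_8; nodes 2-7: Lambda_4^1, Lambda_4^2,
# Lambda_5^1, Lambda_5^2, Lambda_6^1, Lambda_6^2.
edges = [(0, 2), (0, 4), (0, 6), (0, 6),   # Gamma_8^1 -- Lambda_4^1, Lambda_5^1, Lambda_6^1 (2 edges)
         (1, 3), (1, 5), (1, 7), (1, 7)]   # Gamma_8^2 -- Lambda_4^2, Lambda_5^2, Lambda_6^2 (2 edges)

n = 8
A = np.zeros((n, n), dtype=int)
for i, j in edges:                 # adjacency matrix; parallel edges accumulate
    A[i, j] += 1
    A[j, i] += 1
D = np.diag(A.sum(axis=1))         # degree matrix: deg(Gamma_8 nodes) = 4, deg(Lambda_6 nodes) = 2
L = D - A                          # graph Laplacian

# The number of (numerically) zero eigenvalues of L equals the number of
# connected components; eigh may mix the degenerate indicator vectors, so we
# count eigenvalues rather than inspect the vectors directly.
n_components = int(np.sum(np.abs(np.linalg.eigh(L)[0]) < 1e-9))
print(n_components)  # -> 2: this assignment splits the bands into two groups
```

This particular assignment is disconnected; when such a disconnection survives the analysis over the full BZ, it signals a topological grouping of bands.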
\subsection{Connectivity Graphs}\label{sec:algorithms}
We apply this graph-theoretic machinery to the connectivity graphs (defined in Sec.~II of the main text) associated to elementary band representations. {We start with all the little group representations at high-symmetry points and lines contained in a given elementary band representation (EBR)}. Because the representation is elementary, we know that the connectivity graphs will have either one connected component, or will decompose into a set of topological band groups, {as explained in the main text}. All connectivity graphs with more than one connected component, if they exist, will then correspond to {topological phases.}
In order to construct the Laplacian matrix, we separate the task into two steps. We first construct all possible adjacency matrices, and then we subtract the degree matrix from each of them. Since the adjacency matrices have a block structure, with nonzero blocks determined by the compatibility relations, we first build each block submatrix separately.
We start by identifying the maximal $\mathbf{k}$-vectors in the BZ. In analogy to maximal Wyckoff positions, these are the $\mathbf{k}$-vectors whose little co-groups are maximal subgroups of the point group of the space group. A valid submatrix will then be created based on {our derived} compatibility and site-symmetry tables\cite{GroupTheoryPaper}. The rows represent the maximal $\mathbf{k}$-vectors and the columns represent the connecting (non-maximal) lines and/or planes. The entries in the submatrix fulfill the following rules: we can only allow one nonzero entry per column, and the sum of the entries in each row equals the dimension of the corresponding little-group representation. Given a single valid submatrix, {all others can be obtained by permuting the columns.}
{With these submatrices, we} build up the full adjacency matrix row by row. In doing so, we must ensure that we account for all possible connections along non-maximal lines and planes. Additionally, we would like to avoid overcounting configurations that differ only by a relabelling of representations along non-maximal $\mathbf{k}$-vectors. {We have developed two main tools to do this. First, although Def.~\ref{def:compgraph} for the connectivity graphs makes use of all high-symmetry manifolds in the BZ, many of them provide redundant information. We thus consider for each space group only the minimal set of paths in $\mathbf{k}$-space necessary. We derived these for each space group by searching first for the paths in the BZ connecting all maximal $\mathbf{k}$-vectors along the highest symmetry surfaces possible, and then pruning connections which add no additional symmetry constraints. For non-symmorphic space groups, it is also necessary to consider paths connecting maximal $\mathbf{k}$-vectors in different unit cells of the reciprocal lattice, to account for the monodromy of representations\cite{Herring1942}. Second, we select from the set of valid submatrices in each block of the adjacency matrix only those that yield non-isomorphic connectivity graphs. A detailed discussion of the algorithm we used is given in Ref.~\onlinecite{graphdatapaper}. The topologically distinct connectivity graphs for each elementary band representation can be accessed through the BANDREP program on the Bilbao Crystallographic Server\cite{progbandrep}.}
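A minimal sketch of the submatrix rules, applied to the $\Gamma$--$\Lambda$ block of $P\bar{4}3m$ (215) with two copies of $\bar{\Gamma}_8$ (the labels and bookkeeping here are ours for illustration; the production algorithm is described in Ref.~\onlinecite{graphdatapaper}):

```python
from itertools import product

# Gamma--Lambda block of SG 215: two copies of the 4D Gamma_8 irrep at Gamma
# (rows) and six Lambda-irrep copies on the connecting line (columns).
cols = [("L4", 1), ("L4", 1), ("L5", 1), ("L5", 1), ("L6", 2), ("L6", 2)]
rows = ["G8_1", "G8_2"]
# Compatibility: Gamma_8 -> L4 + L5 + L6, so each row must absorb exactly one
# copy of each Lambda irrep.
required = {"L4": 1, "L5": 1, "L6": 1}

valid = []
for assign in product(range(len(rows)), repeat=len(cols)):   # one nonzero entry per column
    counts = [{"L4": 0, "L5": 0, "L6": 0} for _ in rows]
    for col, row in enumerate(assign):
        counts[row][cols[col][0]] += 1
    if all(c == required for c in counts):
        valid.append(assign)

print(len(valid))  # -> 8 valid submatrices
```

The eight valid submatrices correspond to the $2^3$ independent choices of which copy of $\bar{\Lambda}_4$, $\bar{\Lambda}_5$, and $\bar{\Lambda}_6$ attaches to the first copy of $\bar{\Gamma}_8$; each has row sums $1+1+2=4=\mathrm{dim}(\bar{\Gamma}_8)$, and all are related to one another by column permutations.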
\section{Example: Graphene}
In this Appendix, we illustrate the application of our representation- and graph-theoretic methods through the example of graphene with spin-orbit coupling -- the primordial topological insulator. We begin in Subsection~\ref{subsec:graphenegrps} by reviewing the crystal structure and symmetries of the honeycomb lattice. Next, in Subsection~\ref{subsec:graphenebr} we construct explicitly the elementary band representation realized by spin-orbit coupled $p_z$ orbitals in a hexagonal lattice, and hence deduce the full symmetry content of graphene \emph{irrespective of any microscopic model}. In Subsection~\ref{subsec:graphenegraph} we apply our connectivity graph machinery to this band representation, allowing us to catalogue the different allowed topological phases of graphene. Finally, in Subsection~\ref{subsec:grapheneham}, we show how these cases can be physically realized, and comment on the relationship of our approach to older work. {This gives a much-needed practical example of how to use our formalism.} Because we are interested in spin-orbit coupled systems, for the remainder of this section we will employ primarily double-valued point and space group representations unless otherwise specified.
\subsection{Space group symmetries}\label{subsec:graphenegrps}
The $2D$ honeycomb lattice of graphene has as its symmetry group the wallpaper group $p6mm$ {(No. $17$, the most symmetric triangular wallpaper group)}. This is a symmorphic group with primitive lattice basis vectors,
\begin{align}
\mathbf{e}_1&=\frac{\sqrt{3}}{2}\hat{\mathbf{x}}+\frac{1}{2}\hat{\mathbf{y}} \\
\mathbf{e}_2&=\frac{\sqrt{3}}{2}\hat{\mathbf{x}}-\frac{1}{2}\hat{\mathbf{y}},
\end{align}
which are pictured in Fig.~\ref{fig:basisvectors}. Note that the Bilbao Crystallographic Server\cite{Bilbao1,Bilbao2,Bilbao3} uses
\begin{equation}
\mathbf{e}_1'=\mathbf{e}_2,\; \mathbf{e}_2'=\mathbf{e}_1-\mathbf{e}_2
\end{equation}
as an alternative choice of primitive lattice vectors.
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=1.5in]{graphenebasisvectors-crop.pdf}
\label{fig:basisvectors}
}
\hspace{.5in}
\subfloat[]{
\includegraphics[width=1.5in]{GrapheneWyckoff-crop.pdf}
\label{fig:Wyckoff}
}
\caption{{Lattice basis vectors (a) and Wyckoff positions (b) of the hexagonal lattice. The (maximal) $1a$, $2b$ and $3c$ Wyckoff positions are indicated by {a black dot, blue squares, and red stars, respectively}. The {non-maximal} $6d$ and $6e$ positions are indicated by {purple} crosses and {green} diamonds, respectively. The multiplicity is determined by the index of the stabilizer group in the point group $C_{6v}$ ($6mm$).}}
\end{figure}
The point group is $C_{6v}$, and is generated by
\begin{align}
C_{3z}:& (\mathbf{e}_1,\mathbf{e}_2)\rightarrow(-\mathbf{e}_2,\mathbf{e}_1-\mathbf{e}_2)\label{eq:grpaction1}\\
C_{2z}:& (\mathbf{e}_1,\mathbf{e}_2)\rightarrow(-\mathbf{e}_1,-\mathbf{e}_2)\\
m_{1\bar{1}}:&(\mathbf{e}_1,\mathbf{e}_2)\rightarrow(\mathbf{e}_2,\mathbf{e}_1),\label{eq:grpaction2}
\end{align}
where the subscript $1\bar{1}$ denotes that the mirror line has normal vector $\mathbf{e}_1-\mathbf{e}_2$. Although this set of generators is overcomplete ({a minimal set of generators is $\{C_{6z},m_{1\bar{1}}\}$}), it is convenient for our purposes. The three-dimensional space group with the symmetries catalogued above is space group $P6mm$ (183), which differs only in the addition of a third translation vector; we recover the $2D$ symmetry group by taking the length of this lattice vector to infinity. We note, however, that when we consider the $2D$ wallpaper group as embedded in three-dimensional space, we have some freedom when it comes to imposing extra symmetries such as inversion $I$. For this particular wallpaper group, we see that the combination $m_z=IC_{2z}$ fixes every point in the $2D$ lattice, but acts on the spin degree of freedom as a rotation by $\pi$ about the $z$ axis. As such, imposing inversion symmetry on graphene is tantamount to imposing the conservation of $S_z$. This will become important when we consider spin-orbit coupling. In general, however, we view spin conservation as non-essential, and restrict ourselves in most cases to the symmetries of $P6mm$ (183).
The honeycomb lattice has three maximal Wyckoff positions, {as shown in Fig.~\ref{fig:Wyckoff}}. In graphene, the carbon atoms sit at the $2b$ position, at the sites {$\{\mathbf{q}^b_1,\mathbf{q}^b_2\}=\{(\frac{1}{3}\frac{1}{3}),(\bar{\frac{1}{3}}\bar{\frac{1}{3}})\}$}. {Here} and throughout $\bar{x}=-x$. The stabilizer group $G_{\mathbf{q}^b_1}$ is isomorphic to the group $C_{3v}$; it is generated by the elements $\{m_{1\bar{1}}|00\}$ and $\{C_{3z}|01\}$. It is an index two subgroup of the point group $C_{6v}$, and the quotient group $C_{6v}/C_{3v}$ is generated by the coset {that contains $C_{2z}$} (regardless of whether we are using point groups or double point groups, this quotient group is isomorphic to the abelian group with two elements, since $C_{2z}^2=\bar{E}\in C_{3v}$).
In the BZ, we take for our primitive reciprocal-lattice basis vectors
\begin{align}
\mathbf{g}_1&=2\pi\left(\frac{\sqrt{3}}{3}\hat{\mathbf{x}}+\hat{\mathbf{y}}\right) \\
\mathbf{g}_2&=2\pi\left(\frac{\sqrt{3}}{3}\hat{\mathbf{x}}-\hat{\mathbf{y}}\right),
\label{eq:reciprocal}
\end{align}
which are shown in Fig.~\ref{fig:reciprocal}.
We will be primarily interested in the little group representations at three high symmetry points in the BZ.
The first is the $\Gamma$ point, with coordinates $(00)$. The little co-group $\bar{G}_{\Gamma}$ is, as always, the point group {$C_{6v}$.} Next, there are the three time-reversal invariant $M$ points (that is, points $\mathbf{k}$ such that $-\mathbf{k}\equiv\mathbf{k}$ modulo a reciprocal lattice vector), which we denote $M$, $M'$ and $M''$. These have coordinates $(\frac{1}{2}0)$, $(\frac{1}{2}\frac{1}{2})$ and $(0\frac{1}{2})$ respectively. For the remainder of this appendix we need only concern ourselves with the first of these, and so we will refer to it unambiguously as ``the'' $M$ point; the others are related to it by $C_{3z}$ symmetry. It has little co-group {$\bar{G}_M$, which is isomorphic to $C_{2v}$ and generated by $C_{2z}$ and $C_{3z}m_{1\bar{1}}$.} Finally, there are the $K$ and $K'$ points -- the focus of most topological investigations in graphene.
We will focus here primarily on the $K$ point which has coordinates $(\frac{1}{3}\frac{2}{3})$; the $K'$ point can be obtained by a $\pi/3$ rotation.
The little co-group {$\bar{G}_{K}$ is isomorphic to $C_{3v}$ and} is generated by $C_{3z}$ and $C_{2z}m_{1\bar{1}}$.
The high symmetry points are shown in Fig.~\ref{fig:reciprocal}. In Tables \ref{table:c6v}, \ref{table:c3v}, and \ref{table:c2v} we give the character tables for the irreducible representations of the little co-groups $\bar{G}_\Gamma,\bar{G}_K,$ and $\bar{G}_M$ respectively. We indicate double-valued (spinor) representations with a bar over the representation label. As mentioned previously, these character tables fully determine the representations of the corresponding little groups, since $P6mm$ (183) is symmorphic.
\begin{table}[h]
\begin{tabular}{c|ccccccc}
Rep & $E$ & $C_{3z}$ & $C_{2z}$ & $C_{6z}$ & $m_{1\bar{1}}$ & $C_{6z}m_{1\bar{1}}$ & $\bar{E}$ \\
\hline
$\Gamma_1$ &1 &1 &1 &1 &1 &1 &1\Tstrut \\
$\Gamma_2$ &1 &1 &1 &1 &-1 &-1 &1 \\
$\Gamma_3$ &1 &1 &-1 &-1 &-1 &1 &1 \\
$\Gamma_4$ &1 &1 &-1 &-1 &1 &-1 &1 \\
$\Gamma_5$ &2 &-1 &2 &-1 &0 &0 &2 \\
$\Gamma_6$ &2 &-1 &-2 &1 &0 &0 &2 \\
$\bar{\Gamma}_7$ &2 &-2 &0 &0 &0 &0 &-2 \\
$\bar{\Gamma}_8$ &2 &1 &0 &-$\sqrt{3}$ &0 &0 &-2 \\
$\bar{\Gamma}_9$ &2 &1 &0 &$\sqrt{3}$ &0 &0 &-2 \\
\end{tabular}
\caption{The character table for the little co-group $\bar{G}_\Gamma\approx C_{6v}$ of the $\Gamma$ point. The irreps $\Gamma_1$-$\Gamma_6$ are all single valued, while $\bar{\Gamma}_7,\bar{\Gamma}_8,$ and $\bar{\Gamma}_9$ are double valued. $\bar{\Gamma}_9$ is the spin-$\frac{1}{2}$ representation, $\bar{\Gamma}_7$ is the $|S=3/2,m_z=\pm 3/2\rangle$ representation, and $\bar{\Gamma}_8$ is the $|S=5/2,m_z=\pm 5/2\rangle$ representation, {all distinguishable by the action of $C_{6z}$}.}
\label{table:c6v}
\end{table}
\begin{table}[h]
\begin{tabular}{c|cccc}
Rep & $E$ & $C_{3z}$ & $C_{2z}m_{1\bar{1}}$ & $\bar{E}$ \\
\hline
$K_1$ & 1 & 1 & 1 & 1 \Tstrut\\
$K_2$ & 1 & 1 & -1 & 1\\
$K_3$ & 2 & -1 & 0 & 2 \\
$\bar{K}_4$ & 1 & -1 & -i & -1 \\
$\bar{K}_5$ & 1 & -1 & i & -1 \\
$\bar{K}_6$ & 2 & 1 & 0 & -2
\end{tabular}
\caption{Character table for the little co-group $\bar{G}_K\approx C_{3v}$ of the $K$ point. There are three single-valued representations $K_1$--$K_3$, and three double valued representations $\bar{K}_4$--$\bar{K}_6$. The one-dimensional representations $\bar{K}_4$ and $\bar{K}_5$ are complex conjugates of each other. The two dimensional $\bar{K}_{6}$ representation is the spin-$\frac{1}{2}$ representation, while the one-dimensional $\bar{K}_4$ and $\bar{K}_5$ representations act in the space spanned by $|S=3/2, m_z=3/2\rangle\pm i|S=3/2,m_z=-3/2\rangle$ respectively.}\label{table:c3v}
\end{table}
\begin{table}[h]
\begin{tabular}{c|ccccc}
Rep & $E$ & $C_{2z}$ & $m$ & $C_{2z}m$ &$\bar{E}$ \\
\hline
$M_1$ & 1 & 1 & 1 & 1 & 1\Tstrut \\
$M_2$ & 1 & 1 & -1 & -1 & 1 \\
$M_3$ & 1 & -1 & -1 & 1 & 1 \\
$M_4$ & 1 & -1 & 1 & -1 & 1 \\
$\bar{M}_5$ & 2 & 0 & 0 & 0 & -2
\end{tabular}
\caption{Character table for the little co-group $\bar{G}_M\approx C_{2v}$ of the $M$ point, for both single and double-valued representations. The single-valued representations $M_1$--$M_4$ are all one dimensional. The unique double-valued representation, $\bar{M}_5$, is the two-dimensional spin-$\frac{1}{2}$ representation. In terms of the Pauli matrices, it is given concretely as $\bar{M}_5(C_{2z})=i\sigma_z,\bar{M}_5(m)=i\sigma_y$.}
\label{table:c2v}
\end{table}
\subsection{$p_z$ Orbitals and the Elementary Band Representation}\label{subsec:graphenebr}
In graphene, the relevant orbitals near the Fermi level are the two spin species of the $p_z$ orbitals at the $2b$ Wyckoff position. Let us focus on the orbitals $\{|p_z\uparrow\rangle_1,|p_z\downarrow\rangle_1\}$ at the $\mathbf{q}^b_1$ site. These transform according to an irreducible double-valued (spinor) representation $\rho$ of the site symmetry group $G_{\mathbf{q}^b_1}=C_{3v}$: in the space of these orbitals, $\{C_{3z}|01\}$ acts as a rotation about the $z$-axis in spin space, and $m_{1\bar{1}}$ acts as a spin-flip. Furthermore, time-reversal symmetry $T$ acts as a spin flip times complex conjugation. Symbolically,
\begin{equation}
\rho(\{C_{3z}|01\})=e^{i\pi/3 s_z},\; \rho(m_{1\bar{1}})=is_x,\; \rho(T)=is_y\mathcal{K},
\end{equation}
where $\{s_0,s_x,s_y,s_z\}$ are Pauli matrices that act in the space of spin $\uparrow\downarrow$ ($s_0$ is the identity matrix), and $\mathcal{K}$ is complex conjugation. Similarly, the $p_z$ orbitals at the $\mathbf{q}^b_2$ site transform in an equivalent representation obtained by conjugation by $C_{2z}$.
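These matrices can be checked directly; the short sketch below (ours, assuming NumPy) verifies double-valuedness, Kramers degeneracy $T^2=-1$, and the $C_{3v}$ relation that the mirror conjugates the rotation into its inverse:

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

rho_C3 = np.diag(np.exp(1j * np.pi / 3 * np.array([1, -1])))  # e^{i pi s_z/3}
rho_m = 1j * sx                                               # mirror acts as a spin flip
U_T = 1j * sy                                                 # T = i s_y K (K = complex conjugation)

# Double-valuedness: a 2*pi rotation is -E, so C3^3 = -1 and m^2 = -1.
assert np.allclose(np.linalg.matrix_power(rho_C3, 3), -s0)
assert np.allclose(rho_m @ rho_m, -s0)

# Kramers: T^2 = (U_T K)^2 = U_T * conj(U_T) = -1.
assert np.allclose(U_T @ U_T.conj(), -s0)

# C_3v multiplication: the mirror conjugates the rotation into its inverse.
assert np.allclose(rho_m @ rho_C3 @ np.linalg.inv(rho_m), np.linalg.inv(rho_C3))
print("rho is a consistent double-valued representation with T^2 = -1")
```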
Because $\rho$ is a physically irreducible representation of the site-symmetry group of a maximal Wyckoff position (and does not appear in Table~\ref{table:dbr}), it induces a physically elementary band representation, $\rho^\mathbf{k}_G=\rho\uparrow G$. It has four bands, coming from the four orbitals per unit cell. To construct the matrices $\rho^\mathbf{k}_G$ we directly examine the action of the point group elements on the spin and location of orbitals. Let us focus first on $\{C_{3z}|00\}$. Its action on orbitals at the $\mathbf{q}^b_1$ site can be deduced from above; for orbitals at the $\mathbf{q}^b_2$ site, it acts as a rotation in spin space, and takes $\{C_{3z}|00\}\mathbf{q}^b_2=\mathbf{q}^b_2+\mathbf{e}_2$. Introducing a set of Pauli matrices $\{\sigma_0,\sigma_x,\sigma_y,\sigma_z\}$ which act in the sublattice basis ($\sigma_0$ is the identity matrix), this allows us to write
\begin{equation}
\rho^\mathbf{k}_G(\{C_{3z}|00\})=e^{i\pi/3 s_z}\otimes e^{i(\mathbf{k}\cdot\mathbf{e}_2)\sigma_z},\label{eq:brC3}
\end{equation}
where $\otimes$ is the usual tensor product. Similarly, $m_{1\bar{1}}$ acts as a spin flip at the $\mathbf{q}^b_2$ site, and also leaves this point invariant. Hence\footnote{Note that this choice for the representative of $m_{1\bar{1}}$ differs by a unitary transformation from the more conventional basis where $m_{1\bar{1}}$ is represented by $i\sigma_y$, a $\pi$ spin rotation about the $\mathbf{e}_1-\mathbf{e}_2=\hat{\mathbf{y}}$ axis. This basis choice is necessary for the Hamiltonian Eq.~(\ref{eq:graphenerashbaham}) to match the well known expression of Kane and Mele\cite{Kane04}.}
\begin{equation}
\rho^\mathbf{k}_G(\{m_{1\bar{1}}|00\})=is_x\otimes\sigma_0.\label{eq:brM}
\end{equation}
Next, since time-reversal acts independently of position, we know immediately that
\begin{equation}
\rho^\mathbf{k}_G(T)=is_y\otimes\sigma_0\mathcal{K}.\label{eq:brT}
\end{equation}
Lastly, we need to examine $\{C_{2z}|00\}$. This interchanges the two sublattices (orbitals), and so acts as $\sigma_x$ in sublattice space. In spin space, it acts as a rotation by $\pi${, and commutes with $T$, the time-reversal operator}. Thus we deduce
\begin{equation}
\rho^\mathbf{k}_G(\{C_{2z}|00\})=is_z\otimes\sigma_x.\label{eq:brC2}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=1.2in]{GrapheneReciprocal-crop.pdf}
\caption{Reciprocal lattice basis vectors and high symmetry points of the hexagonal lattice.}
\label{fig:reciprocal}
\end{figure}
Representation matrices in hand, it is now a simple matter of comparison with Tables \ref{table:c6v}, \ref{table:c3v}, \ref{table:c2v} to determine the little group representations at each high symmetry point. First, at the $\Gamma$ point all point group elements are in the little co-group, and we see that the matrices $\rho^\mathbf{k}_G$ restrict to
\begin{equation}
\rho_G^\Gamma(\{C_{3z}|00\})=e^{i\pi/3s_z}\otimes\sigma_0,\; \rho^\Gamma_G(\{m_{1\bar{1}}|00\})=is_x\otimes\sigma_0,\; \rho_G^\Gamma(\{C_{2z}|00\})=is_z\otimes\sigma_x,\;\rho_G^\Gamma(T)=is_y\otimes\sigma_0\mathcal{K}.
\end{equation}
Comparing the trace of each of these unitary matrices to the characters in Table~\ref{table:c6v}, we see that
\begin{equation}
\left(\rho\uparrow G\right)\downarrow G_\Gamma\approx\bar{\Gamma}_8\oplus\bar{\Gamma}_9.
\end{equation}
[Although time reversal is not mentioned in Table~\ref{table:c6v} or \ref{table:c2v}, the $\bar{\Gamma}_8$ and $\bar{\Gamma}_9$ representations of $G_\Gamma$ and the $\bar{M}_5$ representation of $G_M$ satisfy Kramers's theorem, consistent with our choice of time-reversal matrix Eq.~(\ref{eq:brT}).] Next, the little co-group at $K$ is generated by $C_{3z}$ and $C_{2z}m_{1\bar{1}}$, which in this band representation are given by
\begin{equation}
\rho_G^K(\{C_{3z}|00\})=e^{i\pi/3s_z}\otimes e^{2\pi i /3\sigma_z},\; \rho^K_G(C_{2z}m_{1\bar{1}})=-is_y\otimes\sigma_x. \label{eq:Kptinducedreps}
\end{equation}
Upon taking traces and comparing with Table~\ref{table:c3v} we deduce
\begin{equation}
\left(\rho\uparrow G\right)\downarrow G_K\approx\bar{K}_4\oplus\bar{K}_5\oplus\bar{K}_6.
\end{equation}
Finally, using the fact that there is a unique double-valued representation $\bar{M}_5$ allowed at the $M$ point, we deduce by simple dimension counting that
\begin{equation}
\left(\rho\uparrow G\right)\downarrow G_M\approx\bar{M}_5\oplus\bar{M}_5.
\end{equation}
Thus, we have deduced, \emph{from symmetry alone}, the little group representations of the energy bands induced by the $p_z$ orbitals in graphene. For future convenience, we summarize this in Table~\ref{table:graphenebrreps}. In the spirit of our programme described in the main text, the next step is to analyze how these energy bands are permitted to connect throughout the BZ. {We will accomplish this with the aid of the graph theory method outlined in Section~\ref{sec:graphreview}.}
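The decompositions in Table~\ref{table:graphenebrreps} can also be cross-checked numerically by comparing traces of the explicit band representation matrices with the summed characters of the character tables. A short sketch (ours, assuming NumPy; the matrix conventions follow the equations above):

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

# Band representation matrices evaluated at Gamma (k.e_2 = 0) and K (k.e_2 = 2 pi/3).
expC3 = np.diag(np.exp(1j * np.pi / 3 * np.array([1, -1])))   # e^{i pi s_z/3}
rhoG_C3 = np.kron(expC3, s0)
rhoG_m = np.kron(1j * sx, s0)
rhoG_C2 = np.kron(1j * sz, sx)
rhoK_C3 = np.kron(expC3, np.diag(np.exp(2j * np.pi / 3 * np.array([1, -1]))))
rhoK_C2m = rhoG_C2 @ rhoG_m    # C_2z m_{1,-1}; should match -i s_y (x) sigma_x
assert np.allclose(rhoK_C2m, np.kron(-1j * sy, sx))

# Character sums read off the tables: at Gamma, G8+G9 has chi(C3) = 1+1 = 2 and
# chi(m) = chi(C2) = 0; at K, K4+K5+K6 has chi(C3) = -1-1+1 = -1 and
# chi(C2 m) = -i + i + 0 = 0.
assert np.isclose(np.trace(rhoG_C3), 2)
assert np.isclose(np.trace(rhoG_m), 0)
assert np.isclose(np.trace(rhoG_C2), 0)
assert np.isclose(np.trace(rhoK_C3), -1)
assert np.isclose(np.trace(rhoK_C2m), 0)
print("traces consistent with the little group decompositions")
```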
\begin{table}[t]
\begin{tabular}{c|c|c|c}
BR & $\Gamma$ & $K$ & $M$ \\
\hline
$\rho\uparrow G$ & $\bar{\Gamma}_8\oplus\bar{\Gamma}_9$ & $\bar{K}_4\oplus\bar{K}_5\oplus\bar{K}_6$ & $\bar{M}_5\oplus\bar{M}_5$\Tstrut
\end{tabular}
\caption{Little group representations for the energy bands induced from $p_z$ orbitals in graphene.}\label{table:graphenebrreps}
\end{table}
\subsection{Graph Analysis}\label{subsec:graphenegraph}
To construct the connectivity graphs, and hence determine the allowed topological phases of graphene, we shall follow the procedure outlined in the main text and in Section~\ref{sec:graphreview}. To do this, we first need to examine the compatibility relations along the lines joining $\Gamma,K,$ and $M$ in the BZ. Once we have determined the compatibility relations for the little group representations occurring in the band representation $\rho\uparrow G$ of $p_z$ orbitals, we will explicitly construct the distinct connectivity graphs for the system. In particular, we show that there is a fully connected semi-metallic phase {protected at half-filling}, as well as a disconnected topological insulating phase.
Let us begin with the line $\Sigma=k\mathbf{g}_1,\;{k\in[0,\frac{1}{2}]}$, which {links} $\Gamma$ and $M$. The little co-group $\bar{G}_\Sigma$ of this line is the abelian group $C_s$ generated by $C_{3z}m_{1\bar{1}}${\cite{PointGroupTables}}. It has two double-valued representations denoted by $\bar{\Sigma}_3$ and $\bar{\Sigma}_4$, distinguished by whether {the group generator} is represented by $\pm i$, respectively. Consider the little group representations appearing in the $\rho\uparrow G$ band representation in Table~\ref{table:graphenebrreps}. {From Table~\ref{table:c6v}, we see that in both the $\bar{\Gamma}_8$ and $\bar{\Gamma}_9$ representations at $\Gamma$, the character $\chi(C_{3z}m_{1\bar{1}})=0$ (it is in the same conjugacy class as $m_{1\bar{1}}$ in the table). From this we deduce that each of these representations restricts to the direct sum $\bar{\Sigma}_3\oplus\bar{\Sigma}_4$ on the line $\Sigma$. A similar analysis shows that the representation $\bar{M}_5$ at the $M$ point also subduces to $\bar{\Sigma}_3\oplus\bar{\Sigma}_4$}. We summarize this in the compatibility relations (\ref{eq:sigmacomprels}):
\begin{align}
\bar{\Gamma}_8\downarrow G_\Sigma&=\bar{\Sigma}_3\oplus\bar{\Sigma}_4 \nonumber \\
\bar{\Gamma}_9\downarrow G_\Sigma&=\bar{\Sigma}_3\oplus\bar{\Sigma}_4 \nonumber \\
\bar{M}_5\downarrow G_\Sigma&=\bar{\Sigma}_3\oplus\bar{\Sigma}_4\label{eq:sigmacomprels}.
\end{align}
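These decompositions follow from character orthogonality. As an illustrative check (a Python sketch of our own; the dictionary keys are shorthand labels, and only the characters quoted above enter), the decomposition of $\bar{\Gamma}_8$ along $\Sigma$ can be computed as:

```python
import numpy as np

# Double-valued irreps of the little co-group C_s of Sigma, generated by
# M = C_{3z} m_{1,-1}: Sigma_3 represents M by +i, Sigma_4 by -i.
irreps = {"Sigma3": {"E": 1, "M": 1j}, "Sigma4": {"E": 1, "M": -1j}}

# Characters of Gamma_8 restricted to the line: chi(E) = 2 and, as noted
# above, chi(M) = 0 since M is conjugate to the mirror m_{1,-1}.
chi_gamma8 = {"E": 2, "M": 0}

def multiplicity(chi, irrep):
    # Character orthogonality on the double group {E, M, Ebar, Ebar.M}:
    # chi(Ebar g) = -chi(g) for double-valued reps, so the sum over the
    # four elements is twice the sum over {E, M}, divided by the order 4.
    return sum(chi[g] * np.conj(irrep[g]) for g in ("E", "M")) / 2

mults = {name: multiplicity(chi_gamma8, rep).real for name, rep in irreps.items()}
print(mults)  # both multiplicities are 1: Gamma8 -> Sigma3 + Sigma4 on Sigma
```

The same computation applied to $\bar{\Gamma}_9$ and $\bar{M}_5$ reproduces the remaining lines of Eq.~(\ref{eq:sigmacomprels}).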
Next, we look at the line $T=(\frac{1}{2}+k)\mathbf{g}_1+2k\mathbf{g}_2,{k\in[-\frac{1}{6},0]}$, which connects the $K$ and $M$ points. The little co-group $\bar{G}_T$ of this line is also isomorphic to $C_s$, this time generated by the mirror $C_{6z}m_{1\bar1}$. As above, we denote its two irreps by $\bar{T}_3$ and $\bar{T}_4$. {By examining the characters of the little group representations $\bar{K}_4$, $\bar{K}_5$, and $\bar{K}_6$ in Table~\ref{table:c3v}, we deduce that}
\begin{align}
\bar{K}_4\downarrow G_T&=\bar{T}_3 \nonumber\\
\bar{K}_5\downarrow G_T&=\bar{T}_4 \nonumber\\
\bar{K}_6\downarrow G_T &=\bar{T}_3\oplus\bar{T}_4.\label{eq:KTcomprels}
\end{align}
The restriction of the representation $\bar{M}_5$ into representations of $C_s$ was computed {in Eq.~(\ref{eq:sigmacomprels})}, and so
\begin{equation}
\bar{M}_5\downarrow G_T=\bar{T}_3\oplus\bar{T}_4\label{eq:MTcomprels}.
\end{equation}
Finally, we examine the line $\Lambda=k\mathbf{g}_1+2k\mathbf{g}_2, {k\in[0,\frac{1}{3}]}$, which connects the points $\Gamma$ and $K$. Like the previous cases, the little co-group of this line is $C_s$, this time generated by $C_{2z}m_{1\bar{1}}$, {and the compatibility relations are }
\begin{align}
\bar{K}_4\downarrow G_\Lambda&=\bar{\Lambda}_3 \nonumber\\
\bar{K}_5\downarrow G_\Lambda&=\bar{\Lambda}_4 \nonumber\\
\bar{K}_6\downarrow G_\Lambda &=\bar{\Lambda}_3\oplus\bar{\Lambda}_4 \nonumber \\
\bar{\Gamma}_8\downarrow G_\Lambda&=\bar{\Lambda}_3\oplus\bar{\Lambda}_4 \nonumber \\
\bar{\Gamma}_9\downarrow G_\Lambda&=\bar{\Lambda}_3\oplus\bar{\Lambda}_4.\label{eq:LDcomprels}
\end{align}
The only remaining $\mathbf{k}$-surface in the BZ is the general position, denoted $GP=k_1\mathbf{g}_1+k_2\mathbf{g}_2$. However, it has a trivial little group with only one one-dimensional double-valued irreducible representation $\bar{GP}_2$. {Because of this, the compatibility relations are trivial -- all representations restrict to copies of $\bar{GP}_2$, and group theory places no restrictions on the connectivity. Thus this surface} does not add anything new to the connectivity analysis, and we omit it here.
We can now construct the degree, adjacency, and Laplacian matrices, {defined in Section~\ref{sec:graphreview},} consistent with these compatibility relations. In each of these matrices, the rows and columns (i.e. the nodes in the connectivity graph) are labelled by the different irreps occurring in the band representation. In this example, we have the following nodes: $\bar{\Gamma}_8,\bar{\Gamma}_9,\bar{\Sigma}_3^1,\bar{\Sigma}_3^2,\bar{\Sigma}_4^1,\bar{\Sigma}_4^2,\bar{\Lambda}_3^1,\bar{\Lambda}_3^2,\bar{\Lambda}_4^1,\bar{\Lambda}_4^2,\bar{K}_4,\bar{K}_5,\bar{K}_6,\bar{T}_3^1,\bar{T}_3^2,\bar{T}_4^1,\bar{T}_4^2,\bar{M}_5^1,\bar{M}_5^2$. {We reiterate here that, as per Def.~\ref{def:compgraph}, representations at high-symmetry points \emph{and} lines correspond to nodes in our graph.} Note that if a representation occurs more than once {(as is the case along the lines $\Sigma,T$, and $\Lambda$, as well as at the point $M$)}, there is a distinct node for each copy that appears, which we label here with a superscript.
We begin first with the degree matrix $D$, {as defined in Def.~\ref{def:degreemat}}. Let us denote by $d(\sigma)$ the degree of the node labelled by representation $\sigma$ in the connectivity graph. We know from Def.~\ref{def:compgraph} that $d(\sigma)$ is given by $\mathrm{dim}(\sigma)$ times the number of distinct compatibility tables in which $\sigma$ appears. {For example, $\mathrm{dim}(\bar{\Gamma}_8)=2$, and the $\Gamma$ point connects to $P=2$ high-symmetry lines (in the notation of Def.~\ref{def:compgraph}), $\Sigma$ and $\Lambda$.} Following this prescription, the entry $d(\bar{\Gamma}_8)=2\times2=4$. Carrying out this procedure for all little group representations, we find the nonzero (diagonal) entries of $D$, which we summarize in Table~\ref{table:183degmat}.
\begin{table}[h]
\begin{tabular}{c|ccccccccccccccccccc}
& $\bar{\Gamma}_8$&$\bar{\Gamma}_9$&$\bar{\Sigma}_3^1$&$\bar{\Sigma}_3^2$&$\bar{\Sigma}_4^1$&$\bar{\Sigma}_4^2$&$\bar{\Lambda}_3^1$&$\bar{\Lambda}_3^2$&$\bar{\Lambda}_4^1$&$\bar{\Lambda}_4^2$&$\bar{K}_4$&$\bar{K}_5$&$\bar{K}_6$&$\bar{T}_3^1$&$\bar{T}_3^2$&$\bar{T}_4^1$&$\bar{T}_4^2$&$\bar{M}_5^1$&$\bar{M}_5^2$\\
\hline
$d(\rho)$ & $4$ & $4$ & $2$ & $2$ &$2$ & $2$ & $2$ &$2$ &$2$ &$2$&$2$ &$2$ &$4$&$2$ &$2$ &$2$&$2$&$4$&$4$\Tstrut
\end{tabular}
\caption{Nonzero entries in the degree matrix for the $\bar{\rho}_6^{2b}\uparrow G$ band representation in $P6mm$ (183)}\label{table:183degmat}
\end{table}
Next, we construct the allowed adjacency matrices for this band representation. The adjacency matrices all have a sparse block structure -- blocks connecting different $\mathbf{k}$-points are nonzero only if the points are compatible. We see then that the only nonzero blocks are the $\Gamma-\Sigma$, $\Gamma-\Lambda$, $K-\Lambda$, $K-T$, $M-T$, and $M-\Sigma$ blocks. Furthermore, up to relabellings of identical representations, i.e. $\bar{M}_5^1\leftrightarrow\bar{M}_5^2,\,\bar{\Lambda}_3^1\leftrightarrow\bar{\Lambda}_3^2$, etc., there are only four distinct adjacency matrices {(we elaborate on these details in Ref.~\onlinecite{graphdatapaper})}. {These fall into two groups which differ by the exchange $\bar{\Gamma}_8\leftrightarrow\bar{\Gamma}_9$, since $\bar{\Gamma}_8$ and $\bar{\Gamma}_9$ have identical compatibility relations along both $\Sigma$ and $\Lambda$. For brevity, we write here only the two independent matrices from which the remaining two can be obtained by {the exchange $\bar{\Gamma}_8\leftrightarrow\bar{\Gamma}_9$}.} They are
\begin{equation}
A_1=\begin{blockarray}{cccccccccccccccccccc}
\bar{\Gamma}_8&\bar{\Gamma}_9&\bar{\Sigma}_3^1&\bar{\Sigma}_3^2&\bar{\Sigma}_4^1&\bar{\Sigma}_4^2&\bar{\Lambda}_3^1&\bar{\Lambda}_3^2&\bar{\Lambda}_4^1&\bar{\Lambda}_4^2&\bar{K}_4&\bar{K}_5&\bar{K}_6&\bar{T}_3^1&\bar{T}_3^2&\bar{T}_4^1&\bar{T}_4^2&\bar{M}_5^1&\bar{M}_5^2&\\
\begin{block}{(cc|cccc|cccc|ccc|cccc|cc)c}
0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Gamma}_8 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Gamma}_9 \\
\cline{1-19}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{\Sigma}_3^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{\Sigma}_3^2 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{\Sigma}_4^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{\Sigma}_4^2\\
\cline{1-19}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_3^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_3^2 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_4^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_4^2 \\
\cline{1-19}
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \bar{K}_4 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & \bar{K}_5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & \bar{K}_6 \\
\cline{1-19}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{T}_3^1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{T}_3^2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{T}_4^1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{T}_4^2 \\
\cline{1-19}
0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & \bar{M}_5^1 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & \bar{M}_5^2 \\
\end{block}
\end{blockarray}
\end{equation}
and
\begin{equation}
A_2=\begin{blockarray}{cccccccccccccccccccc}
\bar{\Gamma}_8&\bar{\Gamma}_9&\bar{\Sigma}_3^1&\bar{\Sigma}_3^2&\bar{\Sigma}_4^1&\bar{\Sigma}_4^2&\bar{\Lambda}_3^1&\bar{\Lambda}_3^2&\bar{\Lambda}_4^1&\bar{\Lambda}_4^2&\bar{K}_4&\bar{K}_5&\bar{K}_6&\bar{T}_3^1&\bar{T}_3^2&\bar{T}_4^1&\bar{T}_4^2&\bar{M}_5^1&\bar{M}_5^2&\\
\begin{block}{(cc|cccc|cccc|ccc|cccc|cc)c}
0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Gamma}_8 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Gamma}_9 \\
\cline{1-19}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{\Sigma}_3^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{\Sigma}_3^2 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{\Sigma}_4^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{\Sigma}_4^2\\
\cline{1-19}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_3^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_3^2 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_4^1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & \bar{\Lambda}_4^2 \\
\cline{1-19}
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \bar{K}_4 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & \bar{K}_5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & \bar{K}_6 \\
\cline{1-19}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{T}_3^1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{T}_3^2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \bar{T}_4^1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & \bar{T}_4^2 \\
\cline{1-19}
0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & \bar{M}_5^1 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & \bar{M}_5^2 \\
\end{block}
\end{blockarray}
\end{equation}
These matrices differ only in their $K-\Lambda$ and $K-T$ blocks. As a consistency check, we verify that the sum of elements in the row or column labelled by $\sigma$ is equal to $d(\sigma)$ from Table~\ref{table:183degmat}; {thus, the degree matrix $D$ satisfies $D_{ij}=\delta_{ij}\sum_\ell A_{i\ell}$}. We show each of these graphs pictorially in Figure~\ref{fig:183graphs}. Although the graph method does not impose any constraints on the energies of the irreducible representations, we are free to interpret {and visualize} the vertical positioning of the nodes of the graph as the energy of the respective energy bands. Doing so gives Fig.~\ref{fig:183graphs} the alternative interpretation as a plot of the band structure!
We can now construct the Laplacian matrices $L_1=D-A_1$ and $L_2=D-A_2$ associated to these two graphs. To save space we will not write these out explicitly. We find that the null space of the matrix $L_1$ is spanned by the unique vector
\begin{equation}
\psi_1=\left(\begin{array}{ccccccccccccccccccc}1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1\end{array}\right)^T
\label{eq:connectedvector}
\end{equation}
indicating that the graph described by the matrix $A_1$ has a single connected component {consisting of all the nodes in the graph}. On the other hand, we find that the null space of $L_2$ is spanned by
\begin{align}
\psi_2^1&=\left(\begin{array}{ccccccccccccccccccc}1&0&1&0&1&0&1&0&1&0&1&1&0&1&0&1&0&1&0\end{array}\right)^T\label{eq:discvector1}\\
\psi_2^2&=\left(\begin{array}{ccccccccccccccccccc}0&1&0&1&0&1&0&1&0&1&0&0&1&0&1&0&1&0&1\end{array}\right)^T
\label{eq:discvector2}
\end{align}
indicating that the graph described by the matrix $A_2$ has two connected components. Consulting our ordering of representations in Table~\ref{table:183degmat}, we see that the first connected component contains the little group representations $\bar{\Gamma}_8,\bar{\Sigma}_3^1,\bar{\Sigma}_4^1,\bar{\Lambda}_3^1,\bar{\Lambda}_4^1,\bar{K}_4,\bar{K}_5,\bar{T}_3^1,\bar{T}_4^1$ and $\bar{M}_5^1$, while the other connected component contains the remainder $\bar{\Gamma}_9,\bar{\Sigma}_3^2,\bar{\Sigma}_4^2,\bar{\Lambda}_3^2,\bar{\Lambda}_4^2,\bar{K}_6,\bar{T}_3^2,\bar{T}_4^2$ and $\bar{M}_5^2$. (Interchanging $\bar{\Gamma}_8$ and $\bar{\Gamma}_9$ also results in a valid disconnected graph.) Since each of these connected components comes from splitting an elementary band representation, they each describe a topological group of bands, and hence a topological insulator.
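This component counting can be verified numerically. Below is a minimal Python sketch (the integer node indices and short labels are ours) that builds the two adjacency matrices from edge lists read off the matrices above, forms the Laplacian, and counts its zero modes:

```python
import numpy as np

names = ["G8", "G9", "S3_1", "S3_2", "S4_1", "S4_2", "L3_1", "L3_2", "L4_1",
         "L4_2", "K4", "K5", "K6", "T3_1", "T3_2", "T4_1", "T4_2", "M5_1", "M5_2"]

# Edges common to A1 and A2 (Gamma-Sigma, Gamma-Lambda, M-Sigma, M-T blocks)
common = [(0, 2), (0, 4), (0, 6), (0, 8), (1, 3), (1, 5), (1, 7), (1, 9),
          (2, 17), (4, 17), (3, 18), (5, 18),
          (13, 17), (15, 17), (14, 18), (16, 18)]
# K-Lambda and K-T blocks, where the two matrices differ
k_edges_1 = [(10, 6), (10, 13), (11, 9), (11, 16), (12, 7), (12, 8), (12, 14), (12, 15)]
k_edges_2 = [(10, 6), (10, 13), (11, 8), (11, 15), (12, 7), (12, 9), (12, 14), (12, 16)]

def n_components(edges):
    n = len(names)
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    D = np.diag(A.sum(axis=1))  # degree matrix, D_ij = delta_ij sum_l A_il
    L = D - A                   # graph Laplacian
    # number of connected components = dimension of the null space of L
    return int(np.sum(np.abs(np.linalg.eigvalsh(L)) < 1e-9))

print(n_components(common + k_edges_1))  # 1: fully connected, semimetallic
print(n_components(common + k_edges_2))  # 2: disconnected, insulating
```

The degree matrix computed here from the row sums of $A$ also reproduces Table~\ref{table:183degmat}, as required by the consistency check above.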
This is consistent with the results of Ref.~\onlinecite{Soluyanov2011}, which found that the Wannier functions in the valence bands in the topological phase of graphene were of the form $|p_z \uparrow+\downarrow \rangle$ localized on the $A$ sites, and $|p_z \uparrow-\downarrow \rangle$ localized on the $B$ sites (the two points in the unit cell of the $2b$ Wyckoff orbit); by examining the action of $C_{3z}$ on these orbitals we conclude immediately that they do not, {by themselves}, form a representation space (carrier space) of the site-symmetry group $G_{\mathbf{q}^b_1}$. {In fact, no set of spin-$1/2$ Wannier functions formed from $s$ or $p$ orbitals and with one orbital per site can respect the spatial symmetries, since $C_{3v}$ has only two-dimensional double-valued representations with $m_z=\pm1/2$ (even when time-reversal symmetry is neglected), as per Table~\ref{table:c3v}.} That graphene is a strong -- rather than merely a crystalline -- topological insulator is revealed by the additional fact that these Wannier functions are not time-reversal invariant.
\subsection{Hamiltonian Analysis}\label{subsec:grapheneham}
We justify the preceding analysis concretely by considering a tight-binding model of $p_z$ (or $s$) orbitals centered on the $2b$ sites with the most general Rashba- and Haldane-type SOC interactions. In particular, we will show how different classes of spin-orbit coupling terms can drive a transition between the two phases indicated in Figs.~\ref{fig:183connected} and \ref{fig:183disconnected}. We will use the basis of spin and sublattice (orbital) Pauli matrices {(including the identity matrices)} $s_i\otimes\sigma_j{, \; i,j=0,1,2,3}$ introduced previously in Sec.~\ref{subsec:graphenebr}.
In this basis, we can expand any Bloch Hamiltonian in terms of sixteen Hermitian basis elements. We call a term in the Hamiltonian an SOC term if it does not act as the identity in spin space. If it commutes with $s_z$, it is of ``Haldane'' type. All other spin-orbit coupling is of Rashba type, because any term that breaks spin conservation in this basis is also not invariant under $C_{2z}I$ when inversion is taken to act in three dimensions; this is true for any two-dimensional system embedded in three-dimensional space, cf.~Ref.~\cite{Kane04}. The most general Haldane-type SOC term is
\begin{equation}
H_{HSOC}(\mathbf{k})=d_0(\mathbf{k})s_z\otimes\sigma_0+d_x(\mathbf{k})s_z\otimes\sigma_x+d_y(\mathbf{k})s_z\otimes\sigma_y+d_z(\mathbf{k})s_z\otimes\sigma_z.
\end{equation}
Looking at the $\Gamma$ point first, $C_{2z}$ symmetry forces $d_y(0)=d_z(0)=0$, while mirror symmetry forces $d_0(0)=d_x(0)=0$. Thus, Haldane spin-orbit coupling does not affect the band structure at the $\Gamma$ point. At the $K$ point, however, Eq.~(\ref{eq:Kptinducedreps}) shows that mirror symmetry forces $d_0(K)=d_x(K)=0$, and that $C_{3}$ symmetry forces $d_y(K)=0$. Thus, at the $K$ point, Haldane spin-orbit coupling takes the general form
\begin{equation}
H_{HSOC}(K)=\lambda_H s_z\otimes\sigma_z.\label{eq:genHaldane}
\end{equation}
We perform the same analysis for Rashba spin-orbit coupling. We start with the most general spin-non-conserving Hamiltonian,
\begin{equation}
H_{R}(\mathbf{k})=\sum_{i=x,y}\sum_{j=0,x,y,z}d_{ij}(\mathbf{k})s_i\otimes\sigma_j.
\label{eq:genRashba}
\end{equation}
At the $\Gamma$ point, $H_R(0)=0$, since $C_{3z}$ fails to commute with every term in Eq~(\ref{eq:genRashba}) {(this is perhaps well-known for the standard Rashba term $(\mathbf{k}\times\vec{\sigma})_z$, however here we have shown it is true for \emph{any} spin-nonconserving term that respects the crystal symmetries)}. At the $K$ point, however, $C_{3z}$ symmetry allows $d_{xy}(K)\neq 0$ and $d_{yx}(K)\neq 0$. Furthermore, mirror symmetry forces {$d_{xy}(K)=-d_{yx}(K)$}. Hence,
\begin{equation}
H_{R}(K)=\lambda_R(s_x\otimes\sigma_y-s_y\otimes\sigma_x).\label{eq:graphenerashbaham}
\end{equation}
{The terms in Eqs.~(\ref{eq:genHaldane}) and (\ref{eq:genRashba}) exhaust the space of possible SOC terms.}
We now analyze the effect of the spin-orbit coupling $H_H=H_{HSOC}+H_R$ on the band structure.
{We have shown already that symmetry prohibits the spin-orbit coupling from altering the band structure at $\Gamma$.}
At $K$, the eigenvalues of $H_H(K)$ are $\delta E_{\pm}=-\lambda_H\pm2\lambda_R$ and $\delta E_{0}=\lambda_H$; {the latter is two-fold degenerate, corresponding to the $\bar{K}_6$ little group representation, while the former correspond to the $\bar{K}_4$ and $\bar{K}_5$ representations.} The associated eigenvectors are
\begin{equation}
\psi_{\pm}=\frac{1}{\sqrt{2}}\left(|\uparrow \mathbf{q}^b_2\rangle\mp|\downarrow\mathbf{q}^b_1\rangle\right), \psi_{01}=|\uparrow\mathbf{q}^b_1\rangle,\psi_{02}=|\downarrow\mathbf{q}^b_2 \rangle
\label{eq:2bbasis}
\end{equation}
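As a numerical sanity check of these eigenvalues (a Python sketch; the coupling values $\lambda_H=0.3$, $\lambda_R=0.5$ are arbitrary illustrative choices), one can diagonalize $H_H(K)$ directly:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

lam_H, lam_R = 0.3, 0.5  # arbitrary illustrative couplings

# H_H(K) = lam_H s_z (x) sigma_z + lam_R (s_x (x) sigma_y - s_y (x) sigma_x),
# with the first kron factor acting on spin and the second on sublattice
H_K = lam_H * np.kron(sz, sz) + lam_R * (np.kron(sx, sy) - np.kron(sy, sx))

evals = np.linalg.eigvalsh(H_K)
# ascending order: {-lam_H - 2 lam_R, lam_H (twice), -lam_H + 2 lam_R}
print(evals)  # [-1.3, 0.3, 0.3, 0.7]
```

The two-fold degenerate level at $\lambda_H$ and the split pair at $-\lambda_H\pm2\lambda_R$ match $\delta E_0$ and $\delta E_\pm$ above.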
First consider the case {$\lambda_R \pm \lambda_H >0$, }
so that $\delta E_+>\delta E_0>\delta E_-$.
{(We also could have chosen $\lambda_R\pm \lambda_H <0$, in which case $\delta E_+ < \delta E_0 < \delta E_-$.)}
The Dirac cone at $K$ {that exists} without spin orbit coupling thus splits into a twofold degenerate state sandwiched between two non-degenerate states.
Since $H_H$ does not change the band structure at $\Gamma$, {we will without loss of generality let the $\bar{\Gamma}_9$ representation sit higher in energy than $\bar{\Gamma}_8$ and the $\bar{K}_5$ representation sit above $\bar{K}_4$. Then the induced band representation is four-fold connected, as shown in Fig~\ref{fig:183connected}. Re-ordering the bands at $\Gamma$ or swapping $\bar{K}_4$ and $\bar{K}_5$ does not change this connectivity (as noted previously in Section~\ref{subsec:graphenegraph}).
By inspecting Table~\ref{table:graphenebrreps}, we confirm that the bands transform under the four-fold connected elementary band rep $\bar{\rho}^{2b}_6\uparrow G$, induced from $G_{\mathbf{q}^b_1}$, as we asserted at the beginning of this section.
Connecting to the graph theory analysis in Sec~\ref{subsec:graphenegraph}, the four-fold band connectivity corresponds to the case of the single null vector in Eq~(\ref{eq:connectedvector}). This is an enforced semimetal phase.
}
In the opposite regime,
{where ${\rm sgn}(\lambda_R-\lambda_H) = - {\rm sgn}(\lambda_R + \lambda_H)$}, the situation is more interesting. In this case, at the $K$ point, the twofold degenerate $\psi_{0}$ states sit higher {or lower} in energy than the $\psi_\pm$ states. This ordering implies that bands are now only twofold connected, as shown in Fig~\ref{fig:183disconnected}.
{If we assume the $\bar{\Gamma}_9$ representation of $G_\Gamma$ sits higher in energy than $\bar{\Gamma}_8$ and $\lambda_R -\lambda_H<0$ ($\lambda_R + \lambda_H > 0$), then the conduction bands at the $\Gamma$ point transform under the $\bar{\Gamma}_9$ irrep of $G_\Gamma$ (since $H_{HSOC}(0)=0$), while conduction bands at the $K$ point transform under the $\bar{K}_6$ representation of $G_K$.
Additionally, the valence bands transform under the $\bar{\Gamma}_8$ irrep at the $\Gamma$ point and the $\bar{K}_4\oplus\bar{K}_5$ irrep at the $K$ point.}
{Thus, while it is true that all four bands together still transform under the $\bar{\rho}^{2b}_6\uparrow G$ band representation induced from $G_{\mathbf{q}^b_1}$, for these parameters, the valence bands and the conduction bands alone do not form a band representation}.
This possibility was introduced in the graph theory analysis in Sec~\ref{subsec:graphenegraph} and the two disconnected pieces correspond to the two null vectors in Eqs~(\ref{eq:discvector1}) and (\ref{eq:discvector2}).
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=2.3in]{183conn.pdf}
\label{fig:183connected}
}
\hspace{.5in}
\subfloat[]{
\includegraphics[width=2.3in]{183disconn.pdf}
\label{fig:183disconnected}
}
\caption{Band structures corresponding to the connectivity graphs for $P6mm$ (183), with little group representations along points and lines labelled as shown. (a) shows the graph corresponding to the adjacency matrix $A_1$, while (b) shows the graph corresponding to adjacency matrix $A_2$.
}\label{fig:183graphs}
\end{figure}
\section{Hybridization and Topology in the 1D chain}
To elucidate the connection between hybridization, bonding, and topological phases, we will here examine a simple{, well-known} model {in a new light}: a 1D inversion-symmetric chain with one (spinless) $s$ orbital and one (spinless) $p_x$ orbital per site. While formally equivalent to the Su-Schrieffer-Heeger\cite{ssh1979} and Rice-Mele\cite{RiceMele} models, in this formulation the connection between topology and chemistry is manifest.
We begin by defining our lattice, a schematic diagram of which is shown in Fig.~\ref{fig:1dlattice}.
\begin{figure}[t]
\subfloat[]{
\includegraphics[width=0.7\textwidth]{1d-chain-diag.pdf}\label{fig:1dlattice}
}\quad
\subfloat[]{
\includegraphics[width=0.3\textwidth]{sporbs.pdf}\label{fig:sporbs}
}
\caption{
The 1D inversion-symmetric chain. (a) shows a schematic diagram of the 1D inversion symmetric lattice, space group $\mathfrak{p} \bar{1}$. The lattice sites are shown in black circles, and the Bravais lattice translation vector is labelled by $\vec{t}$. The green dashed square outlines a single unit cell of the lattice. The lattice site itself serves as our choice of inversion center, and is the $1a$ Wyckoff position. The blue square at the edge of the unit cell is the $1b$ Wyckoff position. The two red stars indicate the points in the orbit of the non-maximal $2c$ Wyckoff position. (b) is a schematic representation of the $sp$-hybridized orbitals relevant to the transition between the trivial and topological phases. They are obtained as symmetric and antisymmetric linear combinations of $s$ and $p$ orbitals.
\end{figure}
Lattice sites in the figure are represented by black circles. We take as our origin the lattice site within the green dashed {rectangle}, which denotes one unit cell. With this choice of origin, the space group $G$ is generated by
\begin{equation}
G=\langle \{E|\vec{t}\},\{I|0\}, T\rangle,
\end{equation}
where $T$ is spinless time reversal {(although time-reversal is not necessary for the following discussion, we include it here to limit the number of allowed terms in the Hamiltonian)}. There are three distinct Wyckoff positions in the unit cell of this crystal. The first, labelled $1a$, has coordinates $\mathbf{q}^a_1=0$, located at the inversion center. The site symmetry group $G_{\mathbf{q}^a_1}$ is generated by $\{I|0\}$ and $T$. This is a maximal Wyckoff position, since the site symmetry group is isomorphic to the point group of $\mathfrak{p} \bar{1}$.
Similarly, the second maximal position is labelled $1b$, and has coordinates $\mathbf{q}^b_1=\frac{1}{2}$ in units of the lattice constant. The site-symmetry group $G_{\mathbf{q}^b_1}$ is also isomorphic to the point group $\bar{1}$, but now generated by $\{I|1\}$ and $T$.
Finally, there is also the non-maximal Wyckoff position $2c$, with coordinates $\{\mathbf{q}^c_1,\mathbf{q}^c_2\}=\{x,-x\}$. The stabilizer group of either of these sites contains only time-reversal {and the identity element}.
In reciprocal space, the BZ of the crystal is an interval generated by the reciprocal lattice translation $\vec{g}$ satisfying $\vec{g}\cdot\vec{t}=2\pi$. In units of $\vec{g}$, there are two inversion symmetric points in the BZ: $\Gamma=0$ and $X=\frac{1}{2}{(\equiv\pi)}$.
Now, we enumerate the elementary band representations for this space group. First, we note that the full point group -- and hence the site-symmetry groups $G_{\mathbf{q}_1^a}$ and $G_{\mathbf{q}_1^b}$ -- have two one-dimensional irreducible representations $\rho_\pm$, in which the inversion element is represented by $+1$ and $-1$, respectively; in both cases time-reversal is represented by complex conjugation. We can carry out the induction procedure for each maximal Wyckoff position. Because the generating element of $G_{\mathbf{q}^a_1}$ contains no translation, the inversion matrix is momentum-independent in band representations induced from this site, and so the inversion eigenvalues at $\Gamma$ and $X$ are identical. Conversely, because the generating element of $G_{\mathbf{q}^b_1}$ contains a lattice translation, the inversion eigenvalues at $\Gamma$ and $X$ have opposite signs in band representations induced from this site. We summarize the results in Table~\ref{table:1dbrs}.
\begin{table}[t]
\begin{tabular}{c|c|c|c}
Position & Rep & $\Gamma$ & $X$ \\
\hline
a & $\rho^a_+\uparrow G$ & $+$ & $+$ \Tstrut\\
& $\rho^a_-\uparrow G$ & $-$ & $-$ \\
\hline
\hline
b & $\rho^b_+\uparrow G$ & $+$ & $-$ \Tstrut\\
& $\rho^b_-\uparrow G$ & $-$ & $+$
\end{tabular}
\caption{Elementary band representations for the 1D space group $\mathfrak{p} \bar{1}$. The first column indicates the Wyckoff position, and the second column the band representation induced from this site. The third column gives the eigenvalue of inversion at the $\Gamma$ point in the BZ. The last column gives the eigenvalue of inversion at the $X$ point.}\label{table:1dbrs}
\end{table}
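The sign flip at $X$ for the $b$ rows of Table~\ref{table:1dbrs} can be made explicit with a short calculation (a sketch in our conventions, with Wannier states $|n+\tfrac{1}{2}\rangle$ on the $1b$ orbit and Bloch sums $|k\rangle=\sum_n e^{ikn}|n+\tfrac{1}{2}\rangle$, in units of the lattice constant). Since $\{I|0\}$ maps the site $n+\tfrac{1}{2}$ to $-n-\tfrac{1}{2}=(-n-1)+\tfrac{1}{2}$,

```latex
\{I|0\}\,|k\rangle
  =\sum_n e^{ikn}\,\rho_\pm(I)\,\big|(-n-1)+\tfrac{1}{2}\big\rangle
  =\rho_\pm(I)\,e^{-ik}\,|{-k}\rangle ,
```

so at $\Gamma$ ($k=0$) the inversion eigenvalue is $\pm1$, while at $X$ ($k=\pi$) the extra factor $e^{-i\pi}=-1$ flips it to $\mp1$. For the $1a$ site there is no translation and hence no $k$-dependent factor, giving the identical eigenvalues in the $a$ rows.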
From the table, we note that the composite band representations $(\rho_+^a\oplus\rho_-^a)\uparrow G$ and $(\rho_+^b\oplus\rho_-^b)\uparrow G$ have the same representation content in momentum space. In fact, these composite band representations are equivalent: The intersection $G_{\mathbf{q}^a_1}\cap G_{\mathbf{q}^b_1}=G_{\mathbf{q}^c_1}$ contains only the identity and time-reversal. The unique irrep of this group induces the representations $\rho^a_+\oplus \rho^a_-$ at the $1a$ position, and $\rho^b_+\oplus\rho^b_-$ at the $1b$ position (this follows from the fact that these are the unique two-dimensional site-symmetry representations with vanishing character for inversion, the so-called \emph{regular representations}\cite{Serre}). Hence this is an equivalence of band representations, which we write as
\begin{equation}
(\rho^a_+\oplus\rho^a_-)\uparrow G \approx (\rho^b_+\oplus\rho^b_-)\uparrow G.\label{eq:1dequiv}
\end{equation}
Now let us consider the band structure induced by a single $s$ and single $p_x$ orbital at the $1a$ site of the lattice. Because $s$ orbitals transform in the $\rho_+^a$ representation, while $p$ orbitals transform in the $\rho_-^a$ representation, the full two-band band structure transforms in the $(\rho^a_+\oplus\rho^a_-)\uparrow G$ band representation. Keeping in mind the equivalence Eq.~(\ref{eq:1dequiv}), this means that Wannier functions for the two bands taken together can lie anywhere along the line between the $1a$ and $1b$ position. If the energetics are such that the system is gapped {(which is generically the case in 1D)}, then the exponentially localized valence band {(which is fully occupied at a filling of one electron per unit cell)} Wannier functions will lie \emph{either} on the $1a$ or $1b$ site.
To see this concretely, let us construct an explicit tight binding model consistent with these symmetries. The most general nearest-neighbor Bloch Hamiltonian induced from an $s$ and a $p$ orbital on the $1a$ site takes the form
\begin{equation}
H(k)=-\left[\epsilon+(t_{ss}+t_{pp})\cos (ka)\right]\sigma_z-2t_{sp}\sin (ka)\sigma_y,\label{eq:sshham}
\end{equation}
where $a$ is the lattice constant, $\epsilon$ is the onsite-energy difference between $s$ and $p$ orbitals, $t_{ss}$ is $s-s$ hopping, $t_{pp}$ is $p-p$ hopping, and $t_{sp}$ is the interorbital hopping {(had we chosen to break time-reversal symmetry, there would be an additional allowed $s-p$ hopping term, which does not affect the fundamental physics)}. This Hamiltonian maps to the Su-Schrieffer-Heeger model\cite{ssh1979} under a rotation $\sigma_z\rightarrow\sigma_x$. In this basis, the inversion matrix is represented as
\begin{equation}
\rho^\mathbf{k}(\{I|0\})=\sigma_z. \label{eq:invmatrix}
\end{equation}
There are two simple limits of this model which we will analyze. First, consider the case where $t_{ss}=t_{sp}=t_{pp}=0$. In this limit, the spectrum is flat and (for $\epsilon\neq0$) gapped, given by
\begin{equation}
E_\pm(k)=\pm\epsilon.
\end{equation}
This limit corresponds to decoupled atoms, and so we expect the valence band Wannier functions to be localized on the $1a$ sites. We can verify this by computing the Wannier center polarization from the Zak phase\cite{Zakphase,ksv}. Indeed, the occupied band eigenfunction
\begin{equation}
\psi_0(k)=\left(\begin{array}{c}
1 \\
0
\end{array}\right)
\end{equation}
is $k$-independent and periodic, so the Zak phase is 0. From this we deduce that the Wannier functions are localized at the origin of the unit cell, which here is the $1a$ site. We find the same result for the conduction band. In this phase, then, the Wannier functions are nearly identical to the original atomic orbitals. To see this another way, comparing with Eq.~(\ref{eq:invmatrix}) shows that the valence band transforms in the $\rho_+^a\uparrow G$ band representation, and so the occupied Wannier functions are $s$-like orbitals localized at the $1a$ site.
Now let us consider the opposite limit $\epsilon=0, t_{ss}=t_{pp}=t_{sp}=t$. We find that the spectrum is also flat, with
\begin{equation}
E_{\pm}=\pm2t.
\end{equation}
However, the valence band eigenfunctions are now nontrivial. Since the Hamiltonian is a sum of Pauli matrices, we can write immediately that
\begin{equation}
\psi_-(k)=e^{ika/2}\left(
\begin{array}{c}
\cos\frac{ka}{2} \\
i\sin\frac{ka}{2}
\end{array}\right),
\end{equation}
where we have included a prefactor to make the wavefunction periodic. We find for the Zak phase
\begin{equation}
\phi=i\int_{-\pi/a}^{\pi/a} dk\, \psi_-^\dag \partial_k\psi_- =\pi,
\end{equation}
from which we deduce that the valence band Wannier functions are localized a half-translation from the origin of the unit cell, which here is at the $1b$ site. We draw the same conclusion by examining the conduction band Wannier functions. Additionally, comparing with Eq.~(\ref{eq:invmatrix}), we see that the occupied band Wannier functions transform in the $\rho_+^{b}\uparrow G$ band representation, and so are $s$-like orbitals centered on the $1b$ site. This is {what is routinely called the ``topological phase''} of the $1D$ chain. {However, based on our Definition 1 in the main text, it represents just another atomic limit; it can be described by symmetric, localized Wannier functions. Between the two atomic limits, there is a phase transition, and hence an ``edge mode,'' as was first pointed out by Shockley\cite{Shockley1939}. The nonzero polarization we have computed is the bulk signature of this edge mode}. We show this for generic parameter values in Figs.~\ref{fig:sshtrivedge}-\ref{fig:sshbulk}. This proves that edge modes can occur at the interface between distinct atomic limits; in such cases, however, they can be pushed into the bulk spectrum by an appropriate edge potential.
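Both Zak phases computed above can be checked numerically with a gauge-invariant Wilson loop built from Eq.~(\ref{eq:sshham}) (a Python sketch of our own; the grid size is arbitrary and we set $a=1$):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bloch(k, eps, tss, tsp, tpp):
    # Bloch Hamiltonian of Eq. (sshham), lattice constant a = 1
    return -(eps + (tss + tpp) * np.cos(k)) * sz - 2 * tsp * np.sin(k) * sy

def zak_phase(eps, tss, tsp, tpp, nk=400):
    ks = 2 * np.pi * np.arange(nk) / nk
    # valence-band eigenvector at each k (eigh sorts eigenvalues ascending)
    u = [np.linalg.eigh(h_bloch(k, eps, tss, tsp, tpp))[1][:, 0] for k in ks]
    w = 1.0 + 0j
    for j in range(nk):  # discrete Wilson loop; random eigh phases cancel
        w *= np.vdot(u[j], u[(j + 1) % nk])
    return -np.angle(w)

print(zak_phase(1.0, 0.0, 0.0, 0.0))       # atomic limit: phase 0
print(abs(zak_phase(0.0, 0.5, 0.5, 0.5)))  # covalent limit: phase pi
```

The product of link overlaps is gauge invariant, so no smooth gauge choice is needed; the two limits reproduce the Zak phases $0$ and $\pi$ derived analytically above.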
\begin{figure}[t]
\subfloat[]{
\includegraphics[height=2.3in]{ssh-trivial-edge.pdf}\label{fig:sshtrivedge}
}\quad
\subfloat[]{
\includegraphics[height=2.3in]{ssh-top-edge.pdf}\label{fig:sshtopedge}
}
\subfloat[]{
\includegraphics[height=2in]{ssh-top-bulk.pdf}\label{fig:sshbulk}
}\quad
\caption{Spectra for the $1D$ inversion symmetric chain for finite and infinite systems. {Because we included only nearest-neighbor hopping and neglected a constant on-site energy in the Hamiltonian~(\ref{eq:sshham}), there is an additional inessential particle-hole symmetry in the spectrum.} (a) shows the spectrum for a chain of $100$ sites in the trivial phase with $\epsilon=0.9$ and $t_{ss}=t_{sp}=t_{pp}=0.2$; note that the spectrum is fully gapped. (b) shows the spectrum for a chain of $100$ sites in the topological phase with $t_{ss}=t_{sp}=t_{pp}=0.45$ and $\epsilon=0.4$. There is a pair of topological edge states, one localized on each edge of the chain. (c) shows the bulk spectrum, which is identical in the two cases.}
\end{figure}
As they are localized at the center of the unit cell, the Wannier functions in the topological phase clearly do not derive from the original atomic orbitals. How, then, are we to understand them? The answer comes from chemistry. Given an $s$ and $p$ atomic orbital with nearly the same energy (i.e. $\epsilon\approx 0$), it is convenient to form $sp$ \emph{hybrid orbitals}, {as shown in Fig.~\ref{fig:sporbs}}. When the hopping amplitudes $t$ are also large compared to $\epsilon$, chemical theory tells us we should work in the basis of molecular bonding and antibonding orbitals formed by taking linear combinations of $sp$ orbitals on adjacent atoms. These molecular orbitals are also inversion symmetric, and therefore lie exactly halfway between the atoms, at the $1b$ site\cite{Anderson1984,hoffmann1987chemistry}! We thus see that the topological phase transition between trivial and nontrivial phases of the chain is a \emph{chemical} transition between weak and strong covalent bonding, where the formation of molecular bonding orbitals leads to a quantized charge polarization and edge states.
\section{Survey of material predictions}
{With our theory now developed, we move on to apply our method to find new topological materials. The strategy for this search has already been presented in the main text. Here we summarize our early findings. For topological insulators, we have identified new, broad classes of materials. The first, {Cu$_2$ABX$_4$}, with A$=$Ge,Sn,Sb, B$=$Zn,Cd,Hg,Cu, and X$=$S,Se,Te, was introduced in the main text. There are a total of $36$ materials in this structure type. These compounds are analyzed in detail in Subsection~\ref{subsec:cu2abx4}.
In Subsection~\ref{subsec:bisquare}, we examine a large class of layered materials with square nets of As, Bi, and Sb. {These fall into two space groups. First, in $P4/nmm$ (129) there is the class WHM with W$=$Ti,Zr,Hf, or a rare earth metal, H$=$Si,Ge,Sn,Pb, and M$=$O,S,Se,Te, as well as the class ACuX$_2$ with A a rare earth metal and X$=$P,As,Sb,Bi. A small number of materials in these families have been shown to be topological before\cite{129tis,zrsnte}; however, here we will present a general group-theoretic argument for why they all \emph{must} be topological generically. These arguments additionally allow us to identify 58 new topological insulator candidates in the distorted $Pnma$ (62): LaSbTe\cite{latesbref}, SrZnSb$_2$, and AAgX$_2$ with A a rare earth metal and X$=$P,As,Sb,Bi.}
In Subsection~\ref{subsec:metal} we present realizations of \emph{sixteen-fold} connected metals, where crystal symmetries force sixteen bands to be connected throughout the BZ. These metals can realize exotic filling fractions ($7/8$ in our example) which may allow for interesting phenomena when interactions are included.
{Using our method, we were also able to identify several other interesting classes of compounds, a detailed analysis of which we defer to future work so as not to overburden the reader. First, we have identified three new Dirac semimetals IrTe$_2$\cite{irte2ref,Pascut2014}, NiTe$_2$, and HfTe$_2$ in $P\bar{3}m1$ (164) (the symmetry group of buckled graphene). While similar families of Dirac semimetal materials have been analyzed recently by others\cite{ptse2dirac}, here we have used our powerful connectivity theory to find candidate materials with Dirac points at or very near the Fermi level, as shown in Fig.~\ref{fig:IrTe2}. Also in this space group, we identify CNb$_2$\cite{Cnb2ref} as a promising topological insulator candidate. We show its band structure in Fig.~\ref{fig:CNb2}. Additionally, we have identified topological bands below the Fermi level in Pb$_2$O\cite{pb20ref} in $Pn\bar{3}m$ (224), shown in Fig.~\ref{fig:Pb20}. Furthermore, we predict that under uniaxial strain in the $z$-direction, the structure distorts to $P4_2/nnm$ (134), and a topological gap opens near the Fermi level. This is shown in Fig.~\ref{fig:Pb20strain}. Lastly, we find a candidate for a \emph{24-fold connected symmetry-protected semimetal}, Cu$_3$TeO$_6$\cite{24foldref}, in $Ia\bar{3}$ (206). In this material, a twenty-four band EBR is half-filled at the Fermi level, realizing the most interconnected EBR allowed by symmetry. We show the band structure in Fig.~\ref{fig:24fold}.} Additional candidates for exotic metals can be found in Table~\ref{table:semimetals}.
\begin{figure}[t]
\subfloat[]{
\includegraphics[height=1.6in]{IrTe2soc-bands.pdf}\label{fig:IrTe2}
}\quad
\subfloat[]{
\includegraphics[height=1.6in]{CNb2-bands.pdf}\label{fig:CNb2}
}\quad
\subfloat[]{
\includegraphics[height=1.6in]{Pb2O-bands.pdf}\label{fig:Pb20}
}\quad
\subfloat[]{
\includegraphics[height=1.6in]{Pb2O_pz-bands.pdf}\label{fig:Pb20strain}
}\quad
\subfloat[]{
\includegraphics[height=1.6in]{Cu3TeO6-zoom-bands.pdf}\label{fig:24fold}
}
\caption{Band structures for new topological insulators and semimetals. (a) shows the band structure for IrTe$_2$ in $P\bar{3}m1$ (164). The red circle highlights the type-II Dirac point near the Fermi level. (b) shows the band structure for the narrow-gap weak topological insulator CNb$_2$ in the same space group, with the topologically nontrivial valence bands shown in red. (c) gives the band structure for unstrained Pb$_2$O in $Pn\bar{3}m$ (224). The isolated group of bands near $-3.5$~eV shown in red does not form a BR, and hence is topological. (d) gives the band structure of Pb$_2$O under uniaxial strain, which opens a topological gap near the Fermi level. Finally, (e) gives the band structure for Cu$_3$TeO$_6$ in $Ia\bar{3}$ (206). The twenty-four bands at the Fermi level in this material are half filled, and form the highest-dimensional PEBR allowed for any of the $230$ space groups.
}
\end{figure}
\subsection{Cu$_2$ABX$_4$}\label{subsec:cu2abx4}
The Cu$_2$ABX$_4$ materials all belong to the symmorphic tetragonal space group $I\bar{4}2m$ (121). This group is body-centered, and so we take for a basis of lattice vectors
\begin{equation}
\mathbf{e}_1=\frac{1}{2}(-a\hat{\mathbf{x}}+a\hat{\mathbf{y}}+c\hat{\mathbf{z}}), \; \mathbf{e}_2=\frac{1}{2}(a\hat{\mathbf{x}}-a\hat{\mathbf{y}}+c\hat{\mathbf{z}}),\; \mathbf{e}_3=\frac{1}{2}(a\hat{\mathbf{x}}+a\hat{\mathbf{y}}-c\hat{\mathbf{z}}).\label{eq:bctvecs}
\end{equation}
In addition to these translations, the space group is generated by a fourfold roto-inversion $IC_{4z}\equiv S_{4}^-$ about the $z$-axis, and the rotation $C_{2x}$ about the $x$-axis. There are four maximal Wyckoff positions, labelled $2a,2b$, $4c,$ and $4d$ (divided by $2$ for the primitive cell description given in Eq.~(\ref{eq:bctvecs})). The coordinate triplets of the symmetry equivalent points in the unit cell, with respect to Eq.~(\ref{eq:bctvecs}), are given by:
\begin{align}
\mathbf{q}^{2a}&=(0,0,0),\\
\mathbf{q}^{2b}&=(\frac{1}{2},\frac{1}{2},0),\\
\{\mathbf{q}^{4c}_1,\mathbf{q}^{4c}_2\}&=\{(\frac{1}{2},0,\frac{1}{2}),(0,\frac{1}{2},\frac{1}{2})\},\\
\{\mathbf{q}^{4d}_1,\mathbf{q}^{4d}_2\}&=\{(\frac{3}{4},\frac{1}{4},\frac{1}{2}),(\frac{1}{4},\frac{3}{4},\frac{1}{2})\},
\end{align}
with stabilizer groups
\begin{align}
G_{\mathbf{q}^{2a}_1}&\approx G_{\mathbf{q}^{2b}_1}\approx D_{2d} \\
G_{\mathbf{q}^{4c}_1}&\approx D_2 \\
G_{\mathbf{q}^{4d}_1}&\approx S_4.
\end{align}
We note that the $4c$ position with site-symmetry group $D_2$ is exceptional as per Table~\ref{table:dbr}, although this will not play a role here. We will also need to consider the non-maximal $8i$ Wyckoff position, with {coordinates}
\begin{equation}
\{\mathbf{q}^{8i}_j\}=\{(x+z,x+z,2x),(z-x,z-x,-2x),(-x-z,x-z,0),(x-z,-x-z,0)\}
\end{equation}
and stabilizer group $C_s${, generated by a single mirror}. As this group is a proper subgroup of the stabilizers $G_{\mathbf{q}^{2a}_1}$ and $G_{\mathbf{q}^{2b}_1}$, composite band representations induced from the $8i$ position can be labelled by sums of elementary band representations from either the $2a$ or $2b$ positions. We show the crystal structure for these compounds in Fig.~\ref{fig:cu2abx4}. Furthermore, the character table for the group $D_{2d}$ is shown in Table~\ref{table:D2d}. We will also need the representations of $G_{\mathbf{q}_1^{4d}}\approx S_4$. Since this is an abelian group generated by the single element $IC_{4z}$, all of its representations are one-dimensional, and specified by the character $\chi(IC_{4z})$. We list these in Table~\ref{table:s4} below.
\begin{table}[h]
\begin{tabular}{c|c|c|c|c|c|c}
Rep & $E$ & $C_{2z}$ & $IC_{4z}$ & $C_{2x}$ & $m_{110}$ & $\bar{E}$ \\
\hline
$\rho^{2a}_1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \Tstrut\\
$\rho^{2a}_2$ & $1$ & $1$ & $-1$ & $1$ & $-1$ & $1$ \\
$\rho^{2a}_3$ & $1$ & $1$ & $-1$ & $-1$ & $1$ & $1$ \\
$\rho^{2a}_4$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $1$ \\
$\rho^{2a}_5$ & $2$ & $-2$ & $0$ & $0$ & $0$ & $2$ \\
$\bar{\rho}^{2a}_6$ & 2 & 0 & $-\sqrt{2}$ & $0$ & $0$ & $-2$ \\
$\bar{\rho}^{2a}_7$ & 2 & 0 & $\sqrt{2}$ & $0$ & $0$ & $-2$
\end{tabular}
\caption{Character table for the point group $D_{2d}$, which is the stabilizer group of both the $2a$ and $2b$ positions in $I\bar{4}2m$ (121).}\label{table:D2d}
\end{table}
\begin{table}[h]
\begin{tabular}{L|L}
\mathrm{Rep} & IC_{4z} \\
\hline
\rho^{4d}_1 & 1 \Tstrut\\
\rho^{4d}_2 & -1 \\
\rho^{4d}_3 & i \\
\rho^{4d}_4 & -i \\
\bar{\rho}^{4d}_5 & e^{3\pi i /4} \\
\bar{\rho}^{4d}_6 & e^{7\pi i /4} \\
\bar{\rho}^{4d}_7 & e^{5\pi i /4} \\
\bar{\rho}^{4d}_8 & e^{i\pi/4}
\end{tabular}
\caption{Character table for the point group $S_4$, which is the stabilizer group of the $4d$ position in $I\bar{4}2m$ (121).}\label{table:s4}
\end{table}
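The characters in this table admit a simple consistency check: since $(IC_{4z})^4=E$ for the single-valued representations and $(IC_{4z})^4=\bar{E}$ (represented by $-1$) for the double-valued ones, the listed characters must be fourth roots of $+1$ and $-1$ respectively; moreover, time reversal pairs the spinor representations into complex-conjugate partners ($\bar{\rho}_5$ with $\bar{\rho}_7$, and $\bar{\rho}_6$ with $\bar{\rho}_8$). A minimal sketch of this check:

```python
import numpy as np

# chi(IC_4z) for the representations of S_4, as listed in the table.
single = {'rho1': 1, 'rho2': -1, 'rho3': 1j, 'rho4': -1j}
double = {'rho5': np.exp(3j * np.pi / 4), 'rho6': np.exp(7j * np.pi / 4),
          'rho7': np.exp(5j * np.pi / 4), 'rho8': np.exp(1j * np.pi / 4)}

# (IC_4z)^4 = E for vector reps; (IC_4z)^4 = Ebar (represented by -1) for spinor reps.
assert all(np.isclose(c ** 4, 1) for c in single.values())
assert all(np.isclose(c ** 4, -1) for c in double.values())

# Time reversal pairs the spinor reps into complex-conjugate partners.
assert np.isclose(np.conj(double['rho5']), double['rho7'])
assert np.isclose(np.conj(double['rho6']), double['rho8'])
```

The conjugate pairing is the same one that appears below, where $\bar{\rho}^{4d}_7\uparrow G$ is identified as the real-space time-reversal partner of $\bar{\rho}^{4d}_5\uparrow G$.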
\begin{figure}[t]
\centering
\includegraphics[height=2.5in]{cu2abx4.pdf}
\caption{Crystal structure of the Cu$_2$ABX$_4$ class of compounds, with A$=$Ge,Sn,Sb, B$=$Zn,Cd,Hg,Cu, and X$=$S,Se,Te. The blue circles represent the Cu atoms at the $4d$ position, which contribute at the Fermi level.}\label{fig:cu2abx4}
\end{figure}
In the particular compounds of interest, the Cu atoms sit at the $4d$ position, the A atoms sit at $2b$, the B atoms at $2a$, and the X atoms at $8i$. By consulting Section~\ref{sec:data}, we see that the elementary band representations induced from the {one-dimensional representations of the stabilizer group of the} $4d$ position respect time-reversal symmetry in momentum space. Because of this, we know from Section~\ref{sec:bandreps} that the physically elementary band representations induced from this site can be disconnected, and hence topological. Furthermore, ab-initio calculations reveal that in this material class, the relevant states near the Fermi level come from $d$ orbitals at the $4d$ position, and $p$ orbitals at the $8i$ position. Our discussion in the main text thus flags this group of materials as prime candidates for topological insulators.
We focus below on three cases {out of this large class of $36$ materials}. First, there is Cu$_2$GeZnS$_4$. Ab initio calculations reveal this to be a large-gap (trivial) insulator without spin-orbit coupling, and hence it will remain so for weak SOC. Indeed, we find that the {eighty-four} valence bands nearest the Fermi level transform according to the physical composite band representation $(6\bar{\rho}^{2b}_6\oplus 5\bar{\rho}^{2b}_7\oplus 3\bar{\rho}^{4d}_5\oplus 2\bar{\rho}^{4d}_6\oplus3\bar{\rho}^{4d}_7\oplus 2\bar{\rho}^{4d}_8)\uparrow G$, while the {lowest lying} conduction band transforms according to the physically elementary $\bar{\rho}^{2b}_7\uparrow G$ band representation. In this case, all band representations induced from the $1D$ representations of $G_{\mathbf{q}^{4d}_1}$ are ``occupied'', and hence the material is topologically trivial. We show the band structure for this material in Fig.~\ref{fig:Cu2GeZnS4}.
\begin{figure}[t]
\includegraphics[height=2.5in]{Cu2GeZnS4.pdf}
\caption{Band structure for the topologically trivial insulator Cu$_2$GeZnS$_4$ with spin orbit coupling included. The valence and conduction bands each form separate physical band representations, and so the $0.5$~eV band gap is topologically trivial.}\label{fig:Cu2GeZnS4}
\end{figure}
Instead, let us consider Cu$_2$SbCuS$_4$\cite{cu3sbs4ref}. {Without SOC, this material is a zero-gap semimetal}. Furthermore, when SOC is included, the $\bar{\rho}^{4d}_5\uparrow G$ band representation and the $\bar{\rho}^{2b}_7\uparrow G$ representation are exchanged between the valence and conduction band as compared with Cu$_2$GeZnS$_4$. As such, the conduction band of Cu$_2$SbCuS$_4$ consists of the $\bar{\rho}^{4d}_5\uparrow G$ band representation, induced from the \emph{one-dimensional} $\bar{\rho}^{4d}_5$ site symmetry representation. Thus, this band representation is elementary, but \emph{not} physically elementary. We conclude that this material is a topological insulator. We show the band structure for this material in the left panel of Fig.~\ref{fig:cu2sbcus4}. To confirm {our group-theoretic result}, we have computed the Wilson loop spectrum, shown in the right panel of Fig.~\ref{fig:cu2sbcus4}. The spectrum clearly winds nontrivially throughout the BZ, indicating that Cu$_2$SbCuS$_4$ is a strong topological insulator. Note also that the real-space time-reversal partner $\bar{\rho}^{4d}_7\uparrow G$ band representation is $0.2$~eV below the Fermi level, although the gap in the material is only $0.03$~eV. The two time-reversed partner EBRs are shown in red in the inset of Fig.~\ref{fig:cu2sbcus4}. Hence a novel feature of this material is that the ``topological gap'' is much larger than the transport gap.
\begin{figure}[t]
\includegraphics[height=2.5in]{Cu2SbCuS4-1.pdf}
\caption{Band structure and Wilson loop for the topologically nontrivial compound Cu$_2$SbCuS$_4$. The conduction band here does not form a physically elementary BR induced from a one-dimensional site-symmetry representation. The left panel shows the band structure, with inset showing a zoomed in view of the gap at $\Gamma$. The right panel shows the calculated Wilson loop spectrum. The winding of the Wilson loop shows that this material is a strong topological insulator.}\label{fig:cu2sbcus4}
\end{figure}
Finally, we consider Cu$_2$SnHgSe$_4$. We show its band structure and Wilson loop in Fig.~\ref{fig:cu2snhgse4}. It is also a zero-gap semiconductor without SOC which becomes a strong topological insulator when SOC is turned on. This particular strong TI, however, cannot be identified from its group-theoretic properties alone: its valence and conduction bands have the same little group representations at every $\mathbf{k}$ point as true physical band representations; nevertheless, they are topologically nontrivial. To see this, we can calculate their Wilson loop (Berry phase, holonomy), and from it determine any nontrivial topological indices\cite{Freed2013,Shiozaki2017}. {Our discovery of this new TI further highlights the power of our materials search.}
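The Wilson loop diagnostic invoked here can be illustrated on a minimal toy model. The sketch below uses the two-band Qi-Wu-Zhang Chern insulator (our choice of illustration, not the material Hamiltonian of the text) and computes the Wannier charge center $\theta(k_y)$ as a discrete Berry phase along $k_x$ at fixed $k_y$; its winding by $2\pi$ as $k_y$ sweeps the BZ signals a topological band, independently of any symmetry labels.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h(kx, ky, m=-1.0):
    # Qi-Wu-Zhang toy Chern insulator; |Chern| = 1 of the lower band for -2 < m < 0.
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def wannier_center(ky, nk=101):
    # Discrete Wilson loop: Berry phase of the occupied band along kx at fixed ky.
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    occ = [np.linalg.eigh(h(kx, ky))[1][:, 0] for kx in ks]
    loop = 1.0 + 0.0j
    for j in range(nk):
        loop *= np.vdot(occ[j], occ[(j + 1) % nk])
    return -np.angle(loop)

kys = np.linspace(-np.pi, np.pi, 81)
theta = np.unwrap([wannier_center(ky) for ky in kys])
winding = theta[-1] - theta[0]  # |winding| = 2*pi for |Chern| = 1
```

The discrete loop of overlaps is gauge invariant, so no smooth gauge for the ab-initio-style eigenvectors is needed; the same construction underlies the Wilson loop spectra shown in the figures.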
\begin{figure}[t]
\includegraphics[height=2.5in]{Cu2SnHgSe4.pdf}
\caption{Band structure and Wilson loop for the topologically nontrivial compound Cu$_2$SnHgSe$_4$. The left panel shows the band structure, with the inset showing a zoomed in view of the (rather small) gap at the $\Gamma$ point. The right panel shows the Wilson loop calculated at $k_z=0$ and $k_z=\pi$; the winding of the Wilson loops reveals that this compound is a strong topological insulator.}\label{fig:cu2snhgse4}
\end{figure}
\subsection{Square net topological insulators}\label{subsec:bisquare}
Next, we look at topological insulators of the type $(1,2)$ as defined in the main text. These materials are enforced semimetals with a single partially filled elementary band representation without SOC, which then splits into a topologically disconnected composite band representation when spin-orbit coupling is included. We consider square nets of As, Sb, Sn, and Bi which form layered compounds in $P4/nmm$ (129) and $Pnma$ (62) (upon small distortion of the squares). We find approximately $400$ candidate materials of these types, {discovered by targeting our method towards the specific cases of orbitals which can create topological bands}. In each of these classes, the relevant states near the Fermi level come from the $p$-orbitals of the square-net atoms. The maximal positions within the square net layer are still those shown in Fig.~\ref{fig:checker}. Representative crystal structures for these compounds are shown in Figure~\ref{fig:squarenetstructs}.
\begin{figure}
\includegraphics[width=0.2\textwidth]{squarewyckoff.pdf}
\caption{Maximal Wyckoff positions in the square net. The blue star indicates the $a$ position at the 2D lattice sites, the red diamond indicates the $b$ position at the center of the square cell, and the black circles denote the $c$ Wyckoff position at the middle of the edges.}\label{fig:checker}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{squarenetstructs.pdf}
\caption{Crystal structures for the Bi-square net class of topological insulators. The first and second structures show CaMnBi$_2$ and ZrSnTe in space group $P4/nmm$ (129). In CaMnBi$_2$ the Bi$2$ atoms form the square net, while in ZrSnTe it is the Sn atoms. The third structure shows SrZnSb$_2$ in $Pnma$ (62). Here it is the atoms labelled Sb$2$ which make up the slightly distorted square net.}\label{fig:squarenetstructs}
\end{figure}
To analyze these materials, we begin without SOC. Viewing the square net in isolation, we find that the Fermi level sits between the $p_z$ orbital bonding and anti-bonding states, as shown for Bi in Figure~\ref{fig:bielectrons}. However, charge transfer of {two electrons per unit cell} from {the adjacent non-square net layers shown in Fig.~\ref{fig:squarenetstructs} for} each of these materials {fills} the $p_z$ antibonding states, {putting them} below the Fermi level; {at the Fermi level, the $\{p_x,p_y\}$ bonding states are filled, while the antibonding states are empty. However, in these materials, the $\{p_x,p_y\}$ bonding and antibonding states form a single connected four-band (per spin) PEBR. Thus,} the band structure of each quasi-2D layer has at the Fermi level a single half-filled elementary band representation without SOC, coming from the four $\{p_x,p_y\}$ orbitals per unit cell. This band representation is induced from the two-dimensional representation of the site-symmetry group $D_{2d}$, as indicated in Table~\ref{table:orbtab1}; recall that the character table for this group was given in Table~\ref{table:D2d}. This site-symmetry representation is spanned by $p_x\pm ip_y$ orbitals. The band structure for this band representation in a square net of $Bi^{1-}$ ions is shown in Figure~\ref{fig:binosoc}. Note that at half-filling, there is a linear band crossing along the $\Gamma-M$ line, which is the cross-section of a line-node (line-degeneracy) protected by mirror symmetry.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{bisquareelectrons.pdf}
\caption{Crystal field splitting of levels in the Bi square net. For undoped bismuth with three electrons per atom, the Fermi level sits at the blue dotted-dashed line. Note that the four $\{p_x,p_y\}$ states transform in a single elementary band representation.}\label{fig:bielectrons}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[]{
\includegraphics[width=0.3\textwidth]{squarenetnosoc.pdf}\label{fig:binosoc}
}
\subfloat[]{
\includegraphics[width=0.3\textwidth]{squarenetsoc.pdf}\label{fig:bisoc}
}
\caption{Representative band structure for the bands in the Bi square net induced from $\{p_x,p_y\}$ orbitals. (a) shows the band structure without SOC, showing band crossings at the Fermi level. These crossings are gapped by infinitesimal SOC, yielding a topologically nontrivial insulator, as shown in (b).}
\end{figure}
This line node is key to the topological nontriviality of these materials when SOC is included. From Table~\ref{table:orbtab2}, we see that with SOC the $\{p_x,p_y\}$ orbitals decompose into the reducible $\bar{\rho}_6\oplus\bar{\rho}_7$ representation of $D_{2d}$, and hence induce a physically composite band representation. Note that the $\bar{\rho}_6$ representation is spanned by $\{|p_x+ip_y,\uparrow\rangle,|p_x-ip_y,\downarrow\rangle\}$ states, while $\bar{\rho}_7$ is spanned by the $\{|p_x+ip_y,\downarrow\rangle,|p_x-ip_y,\uparrow\rangle\}$. Thus, for these two {physically elementary} band representations {($\bar{\rho}_6\uparrow G$ and $\bar{\rho}_7\uparrow G$)} to separate in energy {and give a trivial insulator}, SOC must be large enough to completely separate the initially degenerate spin-up and spin-down states in Fig.~\ref{fig:bisoc}.
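This decomposition can be verified directly from the characters of Table~\ref{table:D2d}: adding spin multiplies the characters of $\rho_5$ elementwise by those of the spin-$1/2$ representation, and the result must match $\chi(\bar{\rho}_6)+\chi(\bar{\rho}_7)$. A minimal check (the spin-$1/2$ characters used here are the standard ones, an assumption not spelled out in the table):

```python
import numpy as np

# Columns: E, C2z, IC4z, C2x, m110, Ebar (same ordering as the D2d character table above).
rho5  = np.array([2, -2, 0, 0, 0, 2])           # {p_x, p_y} orbital representation
spin  = np.array([2, 0, np.sqrt(2), 0, 0, -2])  # spin-1/2 characters (assumed standard)
rho6b = np.array([2, 0, -np.sqrt(2), 0, 0, -2])
rho7b = np.array([2, 0, np.sqrt(2), 0, 0, -2])

# Tensoring with spin multiplies characters elementwise; the product must decompose
# as the sum of the two double-valued representations.
assert np.allclose(rho5 * spin, rho6b + rho7b)
```

Since $\rho_5$ has vanishing character on every element except $E$, $C_{2z}$, and $\bar{E}$, the check is insensitive to sign conventions for the spinor characters on rotations and mirrors.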
\begin{table}[h]
\begin{tabular}{c|c|c|c|c|c|c}
$ D_{2d}$ & $\Gamma$ & $M$ & $X$ & $\Sigma$ ($\Gamma$-$M$)& $\Delta$ ($\Gamma$-$X$) & d\\
\hline
$\rho_{5 }\uparrow G$ & $\Gamma_{5}^+\oplus \Gamma_{5}^-$ & $M_3\oplus M_4$ & $X_1\oplus X_2$ & $\Sigma_1\oplus \Sigma_2\oplus \Sigma_3\oplus \Sigma_4$ & $\Delta_1\oplus \Delta_2\oplus \Delta_3\oplus \Delta_4$ &4\Tstrut \\
\hline
$\bar{\rho}_{6}\uparrow G$ & $\bar{\Gamma}_{6}\oplus \bar{\Gamma}_{9}$ & $\bar{M}_{5}$ & $\bar{X}_{3}\oplus \bar{X}_{4}$ & $2\bar{\Sigma}_{5}$ & $2\bar{\Delta}_{5}$ &4 \Tstrut\\
$\bar{\rho}_{7}\uparrow G$ & $\bar{\Gamma}_{7}\oplus \bar{\Gamma}_{8}$ & $\bar{M}_{5}$ & $\bar{X}_{3}\oplus \bar{X}_{4}$ & $2\bar{\Sigma}_{5}$ & $2\bar{\Delta}_{5}$ &4 \\
\end{tabular}
\caption{Band representations induced by $p$-orbitals in a square net, both with and without spin-orbit coupling. Note that the double-valued band representations are distinguished by the little-group representations they subduce at $\Gamma$. The dimension $d$ of each band representation is shown in the last column of the table.}\label{table:squarenetbrs}
\end{table}
However, even arbitrarily small spin-orbit coupling will gap the aforementioned line node seen along $\Gamma-M$. In contrast to the trivial gap, this gap is topological -- neither the valence nor the conduction band transforms as an elementary band representation. To see this concretely, let us examine the case of $P4/nmm$ (129). In Table~\ref{table:squarenetbrs} we give the little group representations at each high-symmetry point arising from the band representations induced by $\{p_x,p_y\}$ orbitals; $\rho_5$ is the two-dimensional SOC-free representation of $D_{2d}$, while $\bar{\rho}_6$ and $\bar{\rho}_7$ are the two relevant two-dimensional double-valued representations. The key is that without spin-orbit coupling, the $\Gamma_5^+$ and $M_3$ little group representations at $\Gamma$ and $M$ respectively ``lie'' in the valence band, while the $\Gamma_5^-$ and $M_4$ representations ``lie'' in the conduction band. Next, we note that for arbitrarily small spin-orbit coupling, the spin representations decompose as
\begin{align}
\Gamma_5^+&\rightarrow\bar{\Gamma}_6\oplus\bar{\Gamma}_7\\
\Gamma_5^-&\rightarrow\bar{\Gamma}_8\oplus\bar{\Gamma}_9\\
M_3&\rightarrow\bar{M}_5 \\
M_4&\rightarrow\bar{M}_5.
\end{align}
We thus see that with weak spin-orbit coupling, the valence band contains the $\bar{\Gamma}_6$ and $\bar{\Gamma}_7$ little group representations at $\Gamma$. However, comparing with Table~\ref{table:squarenetbrs}, we see that this is not possible if the valence band is a physically elementary band representation. {While this particular energy ordering was determined from ab-initio calculations, we see that the same analysis holds whenever there is one occupied and one unoccupied little group representation at $\Gamma$ without SOC; this is generically true at half-filling.} We thus deduce that for small spin-orbit coupling, these materials are topological insulators.
The ubiquity of the square net structure {in nature} allows us to identify hundreds of topological insulators in this class. In space group $P4/nmm$ (129) we find materials in the class of ABX$_2$, with A a rare earth metal, B$=$Cu,Ag and X$=$Bi,As,Sb,P, for a total of 48 candidate materials. Furthermore, the recently discovered topological phase in tetragonal bismuth falls into this class of square-net topological insulators\cite{SSBismuth} [albeit in $I4/mmm$ (139)]. Additionally, in $P4/nmm$ (129) we find square-net compounds of the type ABX with A$=$Ti,Zr,Hf, or a rare earth metal, B$=$Si,Ge,Sn,Pb, and X$=$O,S,Se,Te. In total, this yields $328$ candidate materials in this space group.
{
\subsubsection{Distorted Square Nets}
Although our analysis has focused primarily on the idealized square net, we can show that topological behavior is insensitive to lattice distortions. We can see this most clearly by examining crystal structures with \emph{distorted} square nets. In particular, we focus on $Pnma$ (62), which is obtained from the idealized square net in $P4/nmm$ (129) after an in-plane $C_4$ symmetry-breaking distortion, shown schematically in Fig.~\ref{fig:squarenetstructs}. We} find the 58 new candidate topological insulators LaSbTe, SrZnSb$_2$, and AAgX$_2$, for A a rare-earth metal and X$=$P,As,Sb,Bi. Representative band structures are shown in Fig.~\ref{fig:bisquares}, where the topological gap can be clearly seen. We expect all these materials to share a qualitatively similar topological band structure.
\begin{figure}[t]
\centering
\subfloat[]{
\includegraphics[height=1.8in]{SrZnSb2.pdf}\label{SrZnSb2}
}
\subfloat[]{
\includegraphics[height=1.8in]{LaSbTe-bands-sg62.pdf}\label{LaSbTe}
}
\caption{Representative band structures for new topologically nontrivial insulators in the distorted Bi-square-net structure type. (a) shows the band structure of the 3D weak topological insulator SrZnSb$_2$, while (b) shows the band structure of the 3D weak topological insulator LaSbTe.}\label{fig:bisquares}
\end{figure}
We note empirically that the magnitude of this distortion appears to be inversely correlated with the strength of spin-orbit coupling of the atoms in the square net. We conjecture that this is because SOC alone lifts the electronic degeneracy that would otherwise drive the distortion through the Jahn-Teller effect.}
\subsection{Sixteen-fold connected metals}\label{subsec:metal}
Space group $I\bar{4}3d$ (220) supports a \emph{sixteen-band} physically elementary band representation. In any topologically trivial phase, all sixteen of these bands need to be connected. {We believe this set of high-connectivity bands, far exceeding the minimum connectivity of Refs.~\onlinecite{Watanabe15,Watanabe16}, can lead to robust protected semimetals with large conductivities, strong correlations, Mott physics, and other exotic properties.} We find examples of this band representation, partially filled at the Fermi level, in the series of compounds A$_{15}$B$_4$, with A$=$Cu,Li,Na and B$=$Si,Ge,Sn,Pb. {It is amusing to note that these materials, which we identified with group theory, are also known as promising candidates for the next generation of batteries\cite{batteryref}. Thus, batteries seem to be symmetry-protected (semi-)metals.} The crystal structure for these compounds in a conventional unit cell is shown in Figure~\ref{fig:a15b4struct}. Note that there are two formula units per primitive unit cell.
\begin{figure}[t]
\includegraphics[width=1.5in]{a15b4.pdf}
\caption{Crystal structure of the A$_{15}$B$_4$ class of materials in $I\bar{4}3d$ (220). One conventional unit cell is shown. The small green circles indicate the location of the $A$ atoms at the $12a$ and $48e$ Wyckoff positions. The larger purple circles indicate the B atoms at the $16c$ Wyckoff position.}\label{fig:a15b4struct}
\end{figure}
To analyze these materials, we first review the basic facts about space group $I\bar{4}3d$ (220). This is a non-symmorphic, body centered cubic space group. We take as primitive basis vectors for the BCC lattice
\begin{equation}
\mathbf{e}_1=\frac{a}{2}(-\hat{\mathbf{x}}+\hat{\mathbf{y}}+\hat{\mathbf{z}}),\;
\mathbf{e}_2=\frac{a}{2}(\hat{\mathbf{x}}-\hat{\mathbf{y}}+\hat{\mathbf{z}}),\;
\mathbf{e}_3=\frac{a}{2}(\hat{\mathbf{x}}+\hat{\mathbf{y}}-\hat{\mathbf{z}}), \label{eq:bccvecs}
\end{equation}
which we recognize as Eq.~(\ref{eq:bctvecs}) with the lattice constants $a=c$. In addition to the translations, space group $I\bar{4}3d$ (220) is generated by the cubic threefold rotation $\{C_{3,111}|000\}$ about the $[111]$ axis, the four-fold roto-inversion $\{IC_{4,011}|\frac{1}{2} 00\}$ about the $\hat{\mathbf{x}}=\mathbf{e}_2+\mathbf{e}_3$ axis, and the mirror $\{m_{1\bar{1}0}|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$ that sends $\mathbf{\hat{x}}\leftrightarrow\mathbf{\hat{y}}$.
There are three maximal Wyckoff positions in this space group, denoted $12a,12b$ and $16c$, with multiplicity $6,6$ and $8$ respectively in the primitive unit cell description. Also note that there is the non-maximal $48e$ Wyckoff position, with multiplicity $24$. The A atoms sit at the $12a$ and $48e$ positions, while the B atoms sit at the $16c$ position. Since the electrons near the Fermi energy come from the B atoms, we will here be interested only in the $16c$ position. It has representative coordinate $\mathbf{q}_1^{16c}=(x,x,x)$ in terms of the lattice vectors Eq.~(\ref{eq:bccvecs}); it is clear that the stabilizer group is $G_{\mathbf{q}^{16c}_1}\approx C_3$, generated by the threefold rotation $\{C_{3,111}|000\}$. The coordinate triplets of its symmetry equivalent points in the primitive unit cell are obtained by the repeated action of $\{IC_{4,011}|\frac{1}{2} 00\}$ and $\{m_{1\bar{1}0}|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$. Because the stabilizer group $C_3$ is abelian, its double-valued representations are all one-dimensional, and specified by the character $\chi(\{C_{3,111}|000\})$. The three possible double-valued representations are
\begin{align}
\bar{\rho}^{16c}_4(\{C_{3,111}|000\})&=-1, \\
\bar{\rho}^{16c}_5(\{C_{3,111}|000\})&=e^{-i\pi/3}, \\
\bar{\rho}^{16c}_6(\{C_{3,111}|000\})&=e^{i\pi/3}.
\end{align}
Consulting Section~\ref{sec:data}, we see that in the physically elementary band representation $(\bar{\rho}^{16c}_5\oplus\bar{\rho}^{16c}_6)\uparrow G$, Kramers's theorem forces connection between bands coming from the $\bar{\rho}^{16c}_5\uparrow G$ and $\bar{\rho}^{16c}_6\uparrow G$ (non-physically) elementary band representations. As such, in any trivial phase, this band representation is sixteen-fold connected.
In the A$_{15}$B$_4$ class of materials, the B atoms sit at the $16c$ Wyckoff position. In the particular examples of Cu$_{15}$Si$_4$, Li$_{15}$Ge$_4$, Li$_{15}$Si$_4$, Na$_{15}$Sn$_4$, and Na$_{15}$Pb$_4$, the relevant states at the Fermi level are precisely the B atom $p$-states, of which there are $48$ per unit cell. Due to charge transfer with the A atoms, there are $46$ electrons filling these states. $32$ out of those $46$ electrons go into filled valence bands, leaving $14$ electrons to fill a band of connectivity $16$. From Table~\ref{table:orbtab2}, we see that these yield bands transforming in the $(2\bar{\rho}^{16c}_4\oplus2\bar{\rho}^{16c}_5\oplus2\bar{\rho}^{16c}_6)\uparrow G$ composite band representation. In the materials listed above, ab-initio calculations reveal that the band representation closest to the Fermi level is precisely the sixteen-branched $(\bar{\rho}^{16c}_5\oplus\bar{\rho}^{16c}_6)\uparrow G$ band representation, which by electron counting is $14/16=7/8$ filled. These materials are thus truly {protected metals}.
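The $7/8$ filling quoted above follows from simple bookkeeping, sketched here with the counts taken from the text:

```python
from fractions import Fraction

# Electron counting for the A15B4 compounds.
p_states   = 8 * 3 * 2               # 8 B atoms per primitive cell x 3 p orbitals x 2 spins
electrons  = 46                      # p-electrons after charge transfer from the A atoms
in_valence = 32                      # electrons in the fully filled lower valence bands
remaining  = electrons - in_valence  # electrons left for the 16-fold connected PEBR
filling    = Fraction(remaining, 16)

assert p_states == 48 and remaining == 14
assert filling == Fraction(7, 8)
```

Since no crystal symmetry can split the sixteen connected branches, this fractional filling cannot be relieved by a band gap, which is what protects the metallic state.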
\begin{figure}[t]
\centering
\subfloat[]{
\includegraphics[width=0.3\textwidth]{cu15si4.pdf}\label{fig:cu15si4}
}
\subfloat[]{
\includegraphics[width=0.3\textwidth]{li15ge4.pdf}\label{fig:li15ge4}
}
\subfloat[]{
\includegraphics[width=0.3\textwidth]{na15pb4.pdf}\label{fig:na15pb4}
}
\caption{Band structures with spin-orbit coupling for the sixteen-fold connected metals in the A$_{15}$B$_4$ structure group. (a) shows the band structure for Cu$_{15}$Si$_4$; the Cu $d$-orbitals can be seen far below the Fermi level. (b) shows the band structure for Li$_{15}$Ge$_4$. Finally, (c) shows the band structure for Na$_{15}$Pb$_4$.}\label{fig:a15b4}
\end{figure}
We show band structures for these highly-connected metals in Fig.~\ref{fig:a15b4}. These band structures show clearly that the interconnections between these bands are mediated by the exotic degeneracies of Ref.~\onlinecite{Bradlyn2016}. In particular, we see that at the $P$ point these materials host a threefold degenerate fermion, while at the $H$ point they have eightfold degenerate excitations. In fact, {our} site-symmetry tables of Ref.~\onlinecite{GroupTheoryPaper} reveal that the Kramers-enforced connection between the $\bar{\rho}^{16c}_5\uparrow G$ and $\bar{\rho}^{16c}_6\uparrow G$ band representations occurs precisely at an eightfold degeneracy point at $H$.
}
\vspace{-4ex}
\subsection{Twenty-fourfold connected metals}
\vspace{-3ex}
An exhaustive search of the dimensions of all 10,403 EBRs and PEBRs shows that the greatest number of bands that are forced to be connected by symmetry in a topologically trivial phase is $24$. An example of this occurs in $Ia\bar{3}$ (206). In this group the $24d$ maximal Wyckoff position has multiplicity twelve in the primitive cell, and site-symmetry group isomorphic to $C_2$. In spin-orbit coupled systems with TR symmetry, the two-dimensional physically irreducible $\bar{\Gamma}_3\oplus\bar{\Gamma}_4$ representation of this site-symmetry group thus induces a twenty-four band PEBR. In any topologically trivial phase, all twenty-four of these bands \emph{must} be interconnected. In Fig.~\ref{fig:24fold} we show the band structure of Cu$_3$TeO$_6$, which, according to our calculations, has this PEBR half-filled at the Fermi level. Although interaction effects cause this material to be a Mott insulator\cite{24foldmott}, we expect that other materials with this PEBR near the Fermi level may exhibit exotic fillings due to charge-transfer effects.
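The band counts quoted in this section all follow from one rule: the number of bands in a (P)EBR is the primitive-cell multiplicity of the Wyckoff position times the dimension of the inducing (co)representation. Note that body centering halves the conventional-cell multiplicity, so the $24d$ position contributes $12$ sites per primitive cell. A minimal check (Python; the multiplicities are the only inputs):

```python
# Band count of a (physically) elementary band representation:
#   n_bands = (primitive-cell multiplicity of the Wyckoff position)
#             x (dimension of the inducing site-symmetry (co)irrep)
def ebr_bands(multiplicity: int, irrep_dim: int) -> int:
    return multiplicity * irrep_dim

# 16c position discussed above: 8 sites per primitive cell, with the two
# Kramers-paired 1d irreps rho5 and rho6 inducing together -> 16 bands.
assert ebr_bands(8, 1 + 1) == 16
# 24d in Ia-3 (206): 12 sites per primitive cell (body centering halves the
# conventional multiplicity of 24), 2d physical irrep -> 24 bands.
assert ebr_bands(12, 2) == 24
```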
\vspace{-2ex}
\section{Supplementary data}\label{sec:data}
\vspace{-4.5ex}
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
Site symmetry group & Reducing group & Intersection group & Rep dimension & Space Group Number \\
{ ($G_\mathbf{q}$)} & { ($G_{\mathbf{q}'}$)}& {($G_0$)} & & \\
\hline
$D_3$ & $C_{3i}$& $C_3$& $2$ & $163, 165, 167, 228, 230$ \Tstrut\\
& $T_h$&$C_3$ &$2$ & $223$ \\
& $O$ &$C_3$ &$2$ &$211$ \\
& $T$ &$C_3$ &$2$ & $208, 210, 228$ \\
& $C_{3h}$ &$C_3$ & $2$ & $188, 190, 192, 193$\\
\hline
$D_6$ &$C_{6h}$ &$C_{6}$ &$2$ & $192$\Tstrut \\
\hline
$D_4$ &$O$ & $C_{4}$& $2$ & $207,211,222$ \Tstrut\\
&$C_{4h}$ & $C_{4}$&$2$ & $124,140$ \\
\hline
\hline
$D_{2d}$ &$D_{4h}$ &$C_{2v}$ &$2$ & $229$ \Tstrut\\
&$T_h$ &$C_{2v}$ &$2$ & $226$ \\
& $T_d$ &$C_{2v}$ &$2$ & $215,217,224$ \\
& $D_{2h}$ &$C_{2v}$ &$2$ &$131, 132, 139, 140, 223$
\end{tabular}
\caption{Maximal Wyckoff positions which yield \emph{composite} band representations for the single groups and thus do not need to be considered in a search for elementary band reps; computed by Bacry, Michel, and Zak\cite{Bacry1988}. {Point group symbols are given in Schoenflies notation.\cite{Cracknell}} The first column gives the maximal site-symmetry group, $G_\mathbf{q}$, which induces the composite representation. The second column gives the site-symmetry group, $G_{\mathbf{q}'}$, into whose band representations this composite representation can be reduced. The third column gives the intersection group, $G_0=G_{\mathbf{q}}\cap G_{\mathbf{q}'}$. The fourth column gives the dimension of the irrep which induces the composite band rep. The fifth column indicates the space groups for which this occurs.
{With (spinless) time-reversal, only the groups below the double line yield composite physical band representations (and do not need to be considered in a search for physically elementary band reps).}
}\label{table:sbr}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
Site symmetry group & Reducing group& Intersection group & Rep dimension & Space Group Number \\
{($G_\mathbf{q}$)} & {($G_{\mathbf{q}'}$)}& {($G_0$)} & & \\
\hline
$T_d$ & $D_{3d}$ &$C_{3v}$ & $4$ & $224, 227^*$ \Tstrut\\
& $O_h$ & $C_{3v}$ & $4$ & $225$\\
\hline
$D_3$ & $T_h$& $C_3$ & $2$ & $223$\Tstrut\\
&$O$ &$C_3$ & $2$ & $211$ \\
&$T$ &$C_3$ & $2$ &$208,210,228$ \\
& $C_{3h}$ &$C_3$ &$2$ &$188^*, 190^*, 192^*, 193^*$\\
& $C_{3i}$ &$C_3$ &$2$& $163^*, 165^*, 167^*, 228^*, 230^*$\\
\hline
$D_{3h}$ & $D_{3d}$ &$C_{3v}$ & $2$ & $193^*,194^*$ \Tstrut\\
\hline
$D_6$ &$C_{6h}$ &$C_6$ &$2$ & $192^*$\Tstrut\\
\hline
$D_4$ & $O$ &$C_4$ &$2$ & $207,211, 222$\\
& $C_{4h}$ &$C_4$ &$2$ &$124^*,140^*$\\
\hline
$C_{2v}$ &$C_{6v}$ &$C_s$ &$2$ & $183$\Tstrut\\
&$C_{3v}$ &$C_s$ & $2$ & $183$\\
&$C_{2h}$ &$C_s$ & $2$ & $51^*, 63^*, 67^*, 74^*, 138^*$\\
& $C_{4v}$ &$C_s$ &$2$&$99, 107$ \\
& $D_{2d}$ & $C_s$ & $2$ & $115,137$ \\
\hline
$D_{2}$ &$T$& $C_2$&$2$ & $195,197, 201, 208, 209, 218$\Tstrut\\
& $D_6$ &$C_2$ &$2$&$177,192$\\
& $D_3$ &$C_2$ & $2$ & $177, 192, 208, 211, 214, 230$\\
& $S_{4}$ &$C_2$ &$2$&$112^*, 116^*, 120^*, 121, 126^*, 130^*, 133^*, 138^*, 142^*, 218^*, 230^*$ \\
& $D_{2d}$ &$C_2$ &$2$& $111, 121, 132, 134, 224$ \\
&$C_{2h}$ &$C_2$ &$2$ & $49^*, 66^*, 67^*, 69^*, 72^*, 124^*, 128^*, 132, 134, 135^*, 138^*, 192^*$ \\
& $D_4$ &$C_2$ & $2$ & $89, 97, 124, 126, 211$ \\
& $D_{3d}$ & $C_2$ & $2$ & $224$ \\
& $O$ & $C_2$ & $2$ & $209$
\end{tabular}
\caption{Maximal Wyckoff positions which yield band representations that {we find to be} equivalent to \emph{composite} band representations for the double groups.
The first column gives the maximal site-symmetry group, $G_\mathbf{q}$, which induces the composite representation. {Point group symbols are given using Schoenflies notation\cite{Cracknell} (e.g., $C_s$ is the point group generated by reflection).} The second column gives the site-symmetry group, {$G_{\mathbf{q}'}$}, into whose band representations this composite representation can be reduced. The third column gives the intersection group, $G_0=G_{\mathbf{q}}\cap G_{\mathbf{q}'}$. The fourth column gives the dimension of the irrep which induces the composite band rep. The fifth column indicates the space groups for which this occurs.
An asterisk ($*$) indicates that while the band rep {is disconnected in momentum space when time-reversal symmetry is ignored, there are extra connectivity constraints imposed by Kramers's theorem when TR is present.} This can occur when the {representation $\sigma$ of $G_0$ induces two one-dimensional representations of $G_\mathbf{q}'$} that are not momentum-space time reversal invariant in isolation.}
\label{table:dbr}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{ccc}
Site symmetry group ($G_\mathbf{q}$) & Reducing group ($G_\mathbf{q}'$) & Space Group Number \\
\hline
$S_4$ & $C_{2h}$ & $84,87,135,136$ \Tstrut\\
& $D_2$ & $112,116,120,121,126,130,133,138,142,218,230$ \\
& $D_4$ & $222$ \\
& $D_{2d}$ & $217$ \\
& $T$ & $219,228$ \\
\end{tabular}
\caption{Additional exceptional band representations with time-reversal. In all cases, the exceptional representation is the physically irreducible two-dimensional representation of $S_4$. For the space groups listed in this table, this band representation decomposes through $G_0=C_2$ into a composite band representation induced from the reducing group $G_\mathbf{q}'$. The first column gives the site-symmetry group, the second column gives the reducing group, and the third column gives the associated space groups for which the exception occurs.}\label{table:zakwaswrong}
\end{table}
\begin{table}[H]
\centering
{
\begin{tabular}{L|L|L|L|L}
\mathrm{PG} & \mathrm{PG Symbol}& s & p & d \\
\hline
C_1 & 1&\Gamma_1 & 3\Gamma_1 & 5\Gamma_1 \Tstrut\\
C_i & \bar{1}& \Gamma_1^+ & 3\Gamma_1^- & 5\Gamma_1^+\\
\hline
C_2 &2& \Gamma_1 & \Gamma_1\oplus2\Gamma_2 & 3\Gamma_1\oplus2\Gamma_2 \Tstrut\\
C_s &m& \Gamma_1 & 2\Gamma_1 \oplus\Gamma_2 & 3\Gamma_1 \oplus2\Gamma_2 \\
C_{2h} &2/m& \Gamma_1^+ & \Gamma_1^-\oplus2\Gamma_2^- & 3\Gamma_1^+\oplus2\Gamma_2^+\\
\hline
D_2 &222& \Gamma_1 & \Gamma_2\oplus\Gamma_3\oplus\Gamma_4 & 2\Gamma_1\oplus\Gamma_2\oplus\Gamma_3\oplus\Gamma_4 \Tstrut\\
C_{2v} &mm2& \Gamma_1 & \Gamma_1\oplus\Gamma_3\oplus\Gamma_4 & 2\Gamma_1\oplus\Gamma_2\oplus\Gamma_3\oplus\Gamma_4\\
D_{2h} &mmm& \Gamma_1^+ & \Gamma_2^-\oplus\Gamma_3^-\oplus\Gamma_4^- & 2\Gamma_1^+\oplus\Gamma_2^+\oplus\Gamma_3^+\oplus\Gamma_4^+ \\
\hline
C_4 &4& \Gamma_1 & \Gamma_1\oplus\Gamma_3\oplus\Gamma_4 & \Gamma_1\oplus2\Gamma_2\oplus\Gamma_3\oplus\Gamma_4 \Tstrut\\
S_4 &\bar{4}& \Gamma_1 & \Gamma_2\oplus\Gamma_3\oplus\Gamma_4 & \Gamma_1\oplus2\Gamma_2\oplus\Gamma_3\oplus\Gamma_4 \\
C_{4h} &4/m& \Gamma_1^+ & \Gamma_1^-\oplus\Gamma_3^-\oplus\Gamma_4^- & \Gamma_1^+\oplus2\Gamma_2^+\oplus\Gamma_3^+\oplus\Gamma_4^+ \\
\hline
D_4 &422& \Gamma_1 & \Gamma_3\oplus\Gamma_5 & \Gamma_1\oplus\Gamma_2\oplus\Gamma_4\oplus\Gamma_5\Tstrut \\
C_{4v} &4mm& \Gamma_1 & \Gamma_1\oplus\Gamma_5 & \Gamma_1\oplus\Gamma_2\oplus\Gamma_3\oplus\Gamma_5 \\
D_{2d} &\bar{4}2m& \Gamma_1 & \Gamma_3\oplus\Gamma_5 & \Gamma_1\oplus\Gamma_2\oplus\Gamma_3\oplus\Gamma_5 \\
D_{4h} &4/mmm & \Gamma_1^+ & \Gamma_3^-\oplus\Gamma_5^- & \Gamma_1^+\oplus\Gamma_2^+\oplus\Gamma_4^+\oplus\Gamma_5^+ \\
\hline
C_3 &3& \Gamma_1 & \Gamma_1\oplus\Gamma_2\oplus\Gamma_3 & \Gamma_1\oplus2\Gamma_2\oplus2\Gamma_3 \Tstrut\\
C_{3i} &\bar{3}& \Gamma_1^+ & \Gamma_1^-\oplus\Gamma_2^-\oplus\Gamma_3^- & \Gamma_1^+\oplus2\Gamma_2^+\oplus2\Gamma_3^+ \\
\hline
D_3 &32& \Gamma_1 & \Gamma_2\oplus\Gamma_3 & \Gamma_1\oplus2\Gamma_3 \Tstrut\\
C_{3v} &3m& \Gamma_1 & \Gamma_1 \oplus\Gamma_3 & \Gamma_1\oplus2\Gamma_3 \\
D_{3d} &\bar{3}m& \Gamma_1^+ & \Gamma_2^-\oplus\Gamma_3^- & \Gamma_1^+\oplus2\Gamma_3^+ \\
\hline
C_6 &6& \Gamma_1 & \Gamma_1\oplus\Gamma_3\oplus\Gamma_5 & \Gamma_1\oplus\Gamma_3\oplus\Gamma_4\oplus\Gamma_5\oplus\Gamma_6 \Tstrut\\
C_{3h} &\bar{6}& \Gamma_1 & \Gamma_2\oplus\Gamma_3\oplus\Gamma_5 & \Gamma_1\oplus\Gamma_3\oplus\Gamma_4\oplus\Gamma_5\oplus\Gamma_6 \\
C_{6h} &6/m& \Gamma_1^+ & \Gamma_1^-\oplus\Gamma_3^-\oplus\Gamma_5^- & \Gamma_1^+\oplus\Gamma_3^+\oplus\Gamma_4^+\oplus\Gamma_5^+\oplus\Gamma_6^+ \\
\hline
D_6 &622& \Gamma_1 & \Gamma_2\oplus\Gamma_6 & \Gamma_1\oplus\Gamma_5\oplus\Gamma_6 \Tstrut\\
C_{6v} &6mm& \Gamma_1 & \Gamma_1\oplus\Gamma_6 & \Gamma_1\oplus\Gamma_5\oplus\Gamma_6 \\
D_{3h} &\bar{6}2m& \Gamma_1 & \Gamma_3\oplus\Gamma_5 & \Gamma_1\oplus\Gamma_5\oplus\Gamma_6 \\
D_{6h} &6/mmm& \Gamma_1^+ & \Gamma_2^-\oplus\Gamma_6^- & \Gamma_1^+\oplus\Gamma_5^+\oplus\Gamma_6^+ \\
\hline
T &23& \Gamma_1 & \Gamma_4 & \Gamma_2\oplus\Gamma_3\oplus\Gamma_4 \Tstrut\\
T_h &m\bar{3}& \Gamma_1^+ & \Gamma_4^- & \Gamma_2^+\oplus\Gamma_3^+\oplus\Gamma_4^+\\
\hline
O &432& \Gamma_1 & \Gamma_4 & \Gamma_3\oplus\Gamma_5 \Tstrut\\
T_d &\bar{4}3m& \Gamma_1 & \Gamma_4 & \Gamma_3\oplus\Gamma_4 \\
O_h &m\bar{3}m& \Gamma_1^+ & \Gamma_4^- & \Gamma_3^+\oplus\Gamma_5^+ \\
\end{tabular}}
\caption{Decompositions of the representations spanned by spinless $s$, $p$, and $d$ orbitals into point group representations\cite{PointGroupTables}. The first column gives the point group symbol in Schoenflies notation, listed in the conventional order, and the second column gives the point group symbol in Hermann-Mauguin notation. $s$ orbitals transform in the point group representation listed in the third column, $p$ orbitals transform in the representation listed in the fourth column, and $d$ orbitals transform in the representation listed in the last column. The representation labels correspond to the labelling of little group representations at the $\Gamma$ point; the notation matches the Bilbao Crystallographic Server~\cite{ssc,cdml,Bilbao1}.}\label{table:orbtab1}
\end{table}
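Rows of Table~\ref{table:orbtab1} can be reproduced from character orthogonality alone. The sketch below (Python; the irrep characters of $D_3$ are hard-coded, with $\Gamma_1,\Gamma_2,\Gamma_3$ the labels used in the table) recovers the $D_3$ row: $p\rightarrow\Gamma_2\oplus\Gamma_3$ and $d\rightarrow\Gamma_1\oplus2\Gamma_3$:

```python
import math

# Conjugacy classes of D3: (label, class size, rotation angle)
classes = [("E", 1, 0.0), ("C3", 2, 2 * math.pi / 3), ("C2", 3, math.pi)]
order = 6

def chi_l(l, theta):
    """Character of the spinless angular-momentum-l rep under rotation by theta."""
    if abs(theta) < 1e-12:
        return 2 * l + 1
    return math.sin((l + 0.5) * theta) / math.sin(theta / 2)

# Irrep characters of D3 in class order (E, C3, C2); Gamma_1..Gamma_3 labels
irreps = {"G1": [1, 1, 1], "G2": [1, 1, -1], "G3": [2, -1, 0]}

def decompose(l):
    """Multiplicity of each irrep in the l-orbital rep via character inner products."""
    out = {}
    for name, chars in irreps.items():
        n = sum(k * chi_l(l, th) * c
                for (lbl, k, th), c in zip(classes, chars)) / order
        out[name] = round(n)
    return out

assert decompose(1) == {"G1": 0, "G2": 1, "G3": 1}  # p -> Gamma_2 + Gamma_3
assert decompose(2) == {"G1": 1, "G2": 0, "G3": 2}  # d -> Gamma_1 + 2 Gamma_3
```

The same inner-product formula, with the appropriate class data, reproduces any other row of the table.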
\begin{table}[H]
\centering
\begin{tabular}{L|L|L|L|L}
\mathrm{PG}&\mathrm{PG Symbol} & s & p & d \\
\hline
C_1 &1& 2\bar{\Gamma}_2 & 6\bar{\Gamma}_2 & 10\bar{\Gamma}_2\Tstrut \\
C_i &\bar{1}& 2\bar{\Gamma}_3 & 6\bar{\Gamma}_2 & 10\bar{\Gamma}_3 \\
\hline
C_2 &2& \bar{\Gamma}_3\oplus\bar{\Gamma}_4 & 3\bar{\Gamma}_3\oplus3\bar{\Gamma}_4 & 5\bar{\Gamma}_3 \oplus 5\bar{\Gamma}_4 \Tstrut\\
C_s &m& \bar{\Gamma}_3\oplus\bar{\Gamma}_4 & 3\bar{\Gamma}_3\oplus3\bar{\Gamma}_4 & 5\bar{\Gamma}_3 \oplus 5\bar{\Gamma}_4 \\
C_{2h} &2/m& \bar{\Gamma}_3\oplus\bar{\Gamma}_4 & 3\bar{\Gamma}_5\oplus3\bar{\Gamma}_6 & 5\bar{\Gamma}_3 \oplus 5\bar{\Gamma}_4 \\
\hline
D_2 &222& \bar{\Gamma}_{5} & 3\bar{\Gamma}_5 & 5\bar{\Gamma}_5\Tstrut \\
C_{2v} &mm2& \bar{\Gamma}_{5} & 3\bar{\Gamma}_5 & 5\bar{\Gamma}_5 \\
D_{2h} &mmm& \bar{\Gamma}_{5} & 3\bar{\Gamma}_6 & 5\bar{\Gamma}_5 \\
\hline
C_4 &4& \bar{\Gamma}_6 \oplus \bar{\Gamma}_8 & 2\bar{\Gamma}_6\oplus2\bar{\Gamma}_8\oplus\bar{\Gamma}_5\oplus\bar{\Gamma}_7 & 2\bar{\Gamma}_6\oplus2\bar{\Gamma}_8\oplus3\bar{\Gamma}_5\oplus3\bar{\Gamma}_7\Tstrut\\
S_4 &\bar{4}& \bar{\Gamma}_6 \oplus \bar{\Gamma}_8 & \bar{\Gamma}_6\oplus\bar{\Gamma}_8\oplus2\bar{\Gamma}_5\oplus2\bar{\Gamma}_7 & 2\bar{\Gamma}_6\oplus2\bar{\Gamma}_8\oplus3\bar{\Gamma}_5\oplus3\bar{\Gamma}_7\\
C_{4h} &4/m& \bar{\Gamma}_6 \oplus \bar{\Gamma}_8 & 2\bar{\Gamma}_{10}\oplus2\bar{\Gamma}_{12}\oplus\bar{\Gamma}_{9}\oplus\bar{\Gamma}_{11} & 2\bar{\Gamma}_6\oplus2\bar{\Gamma}_8\oplus3\bar{\Gamma}_5\oplus3\bar{\Gamma}_7\\
\hline
D_4 &422& \bar{\Gamma}_7 & 2\bar{\Gamma}_7\oplus\bar{\Gamma}_6 & 2\bar{\Gamma}_7\oplus3\bar{\Gamma}_6 \Tstrut\\
C_{4v} &4mm& \bar{\Gamma}_7 & 2\bar{\Gamma}_7\oplus\bar{\Gamma}_6 & 2\bar{\Gamma}_7\oplus3\bar{\Gamma}_6 \\
D_{2d} &\bar{4}2m& \bar{\Gamma}_7 & 2\bar{\Gamma}_6\oplus\bar{\Gamma}_7 & 2\bar{\Gamma}_7\oplus3\bar{\Gamma}_6 \\
D_{4h} &4/mmm& \bar{\Gamma}_7 & 2\bar{\Gamma}_8\oplus\bar{\Gamma}_9 & 2\bar{\Gamma}_7\oplus3\bar{\Gamma}_6 \\
\hline
C_3 &3& \bar{\Gamma}_5 \oplus \bar{\Gamma}_6 & 2\bar{\Gamma}_5 \oplus 2\bar{\Gamma}_6 \oplus2\bar{\Gamma}_4 & 3\bar{\Gamma}_5 \oplus 3\bar{\Gamma}_6 \oplus4\bar{\Gamma}_4\Tstrut\\
C_{3i} &\bar{3}& \bar{\Gamma}_5 \oplus \bar{\Gamma}_6 & 2\bar{\Gamma}_7 \oplus 2\bar{\Gamma}_8 \oplus2\bar{\Gamma}_9 & 3\bar{\Gamma}_5 \oplus 3\bar{\Gamma}_6 \oplus4\bar{\Gamma}_4\\
\hline
D_3 &32& \bar{\Gamma}_6 & 2\bar{\Gamma}_6\oplus\bar{\Gamma}_4\oplus\bar{\Gamma}_5 & 3\bar{\Gamma}_6 \oplus 2\bar{\Gamma}_4 \oplus 2\bar{\Gamma}_5\Tstrut \\
C_{3v} &3m& \bar{\Gamma}_6 & 2\bar{\Gamma}_6\oplus\bar{\Gamma}_4\oplus\bar{\Gamma}_5 & 3\bar{\Gamma}_6 \oplus 2\bar{\Gamma}_4 \oplus 2\bar{\Gamma}_5 \\
D_{3d} &\bar{3}m &\bar{\Gamma}_8 & 2\bar{\Gamma}_9 \oplus \bar{\Gamma}_6 \oplus \bar{\Gamma}_7 & 3\bar{\Gamma}_8 \oplus 2\bar{\Gamma}_4 \oplus 2\bar{\Gamma}_5 \\
\hline
C_6 &6& \bar{\Gamma}_{10}\oplus\bar{\Gamma}_{11} & 2\bar{\Gamma}_{10}\oplus2\bar{\Gamma}_{11}\oplus\bar{\Gamma}_7\oplus\bar{\Gamma}_8 & 2\bar{\Gamma}_{10}\oplus2\bar{\Gamma}_{11}\oplus2\bar{\Gamma}_7\oplus2\bar{\Gamma}_8\oplus\bar{\Gamma}_9\oplus\bar{\Gamma}_{12}\Tstrut \\
C_{3h} &\bar{6}& \bar{\Gamma}_{10}\oplus\bar{\Gamma}_{11} & 2\bar{\Gamma}_{10}\oplus2\bar{\Gamma}_{11}\oplus\bar{\Gamma}_7\oplus\bar{\Gamma}_8 & 2\bar{\Gamma}_{10}\oplus2\bar{\Gamma}_{11}\oplus2\bar{\Gamma}_7\oplus2\bar{\Gamma}_8\oplus\bar{\Gamma}_9\oplus\bar{\Gamma}_{12} \\
C_{6h} &6/m& \bar{\Gamma}_{10}\oplus\bar{\Gamma}_{11} & 2\bar{\Gamma}_{16}\oplus2\bar{\Gamma}_{17}\oplus\bar{\Gamma}_{13}\oplus\bar{\Gamma}_{14} & 2\bar{\Gamma}_{10}\oplus2\bar{\Gamma}_{11}\oplus2\bar{\Gamma}_7\oplus2\bar{\Gamma}_8\oplus\bar{\Gamma}_9\oplus\bar{\Gamma}_{12} \\
\hline
D_6 &622& \bar{\Gamma}_9 & 2\bar{\Gamma}_9 \oplus \bar{\Gamma}_7 & 2\bar{\Gamma}_9\oplus2\bar{\Gamma}_7\oplus\bar{\Gamma}_8 \Tstrut\\
C_{6v} &6mm& \bar{\Gamma}_9 & 2\bar{\Gamma}_9 \oplus \bar{\Gamma}_7 & 2\bar{\Gamma}_9\oplus2\bar{\Gamma}_7\oplus\bar{\Gamma}_8 \\
D_{3h} &\bar{6}2m& \bar{\Gamma}_9 & 2\bar{\Gamma}_9 \oplus \bar{\Gamma}_7 & 2\bar{\Gamma}_9\oplus2\bar{\Gamma}_7\oplus\bar{\Gamma}_8 \\
D_{6h} &6/mmm& \bar{\Gamma}_9 & 2\bar{\Gamma}_{12} \oplus \bar{\Gamma}_{10} & 2\bar{\Gamma}_9\oplus2\bar{\Gamma}_7\oplus\bar{\Gamma}_8 \\
\hline
T &23& \bar{\Gamma}_5 & \bar{\Gamma}_5\oplus\bar{\Gamma}_6\oplus\bar{\Gamma}_7 & \bar{\Gamma}_5\oplus2\bar{\Gamma}_6\oplus2\bar{\Gamma}_7 \Tstrut\\
T_h &m\bar{3}& \bar{\Gamma}_5 & \bar{\Gamma}_8\oplus\bar{\Gamma}_9\oplus\bar{\Gamma}_{10} & \bar{\Gamma}_5\oplus2\bar{\Gamma}_6\oplus2\bar{\Gamma}_7 \\
\hline
O &432& \bar{\Gamma}_6 & \bar{\Gamma}_8\oplus\bar{\Gamma}_6 & 2\bar{\Gamma}_8\oplus\bar{\Gamma}_7 \Tstrut\\
T_d &\bar{4}3m& \bar{\Gamma}_6 & \bar{\Gamma}_8\oplus\bar{\Gamma}_6 & 2\bar{\Gamma}_8\oplus\bar{\Gamma}_7 \\
O_h &m\bar{3}m& \bar{\Gamma}_6 & \bar{\Gamma}_8\oplus\bar{\Gamma}_{11} & 2\bar{\Gamma}_{10}\oplus\bar{\Gamma}_7 \\
\end{tabular}
\caption{Decompositions of the representations spanned by spinful $s$, $p$, and $d$ orbitals (assuming spin-1/2 electrons) into point group representations\cite{PointGroupTables}. The first column gives the point group symbol in Schoenflies notation, listed in the conventional order, and the second column gives the point group symbol in Hermann-Mauguin notation. $s$ orbitals transform according to the point group representation listed in the third column, $p$ orbitals transform according to the representation listed in the fourth column, and $d$ orbitals transform according to the representation listed in the last column. The representation labels correspond to the labelling of little group representations at the $\Gamma$ point; the notation matches the Bilbao Crystallographic Server~\cite{ssc,cdml,Bilbao1}.}\label{table:orbtab2}
\end{table}
\begin{table}[H]
\small
\centering
\begin{tabular}{lclclclclc}\toprule
{SG} &{Mat.} &{SG} &{Mat.} &{SG} &{Mat.} &{SG} &{Mat.} &{SG} &{Mat.} \\
\hline\hline
\\
2 $P\bar{1} $ & IrTe$_2$ & 92 $P4_{1}2_{1}2 $ & La$_5$Si$_4$ & 146 $R3 $ & SnAu$_5$ & 178 $P6_{1}22 $ & Ir$_3$Zr$_5$ & 221 $Pm{\bar 3}m$ & LaIn$_3$ \\
4 $P2_{1} $ & Ge$_2$LaPt$_2$ & 100 $P4bm $ & La$_5$S$_7$ & 147 $P{\bar 3} $ & NW$_2$ & 180 $P6_{2}22 $ & Ge$_2$Ta & 223 $Pm{\bar 3}n$ & IrTi$_3$\\
13 $P2/c $ & AuCrTe$_4$ & 103 $P4cc $ & TaTe$_4$ & 148 $R{\bar 3} $ & Ir$_3$Te$_8$ & 182 $P6_{3}22 $ & Ni$_3$N & 224 $Pn{\bar 3}m$ & AgO$_2$\\
14 $P2_{1}/c$ & AgF$_4$Na$_2$ & 109 $I4_{1}md $ & LaPtSi & 149 $P312 $ & TiO$_3$ & 185 $P6_{3}cm $ & IrMg$_3$ & 225 $Fm{\bar 3}m$ & BiLa\\
26 $Pmc2_{1} $ & In$_4$LaPd$_2$ & 113 $P{\bar 4}2_{1}m$ & Na$_5$Sn & 150 $P321 $ & Li$_7$Pb$_2$ & 186 $P6_{3}mc $ & Au$_3$Sr$_7$ & 226 $Fm{\bar 3}c$ & NaZn$_{13}$\\
34 $Pnn2 $ & CoTe$_2$ & 120 $I{\bar4}c2 $ & K(SnAu$_2$)$_2$ & 152 $P3_{1}21 $ & Ga$_3$Ni$_{13}$Ge$_6$ & 187 $P{\bar 6}m2 $ & LiZnGe & 227 $Fd{\bar 3}m$ & RbBi$_2$\\
36 $Cmc2_{1}$ & AsNi & 122 $I{\bar4}2d $ & FeAgS$_2$ & 155 $R32 $ & Ni$_3$S$_2$ & 188 $P{\bar 6}c2 $ & LiScI$_3$ & 230 $Ia{\bar 3}d$ & Ga$_4$Ni$_3$\\
39 $Aem2 $ & LaS & 123 $P4/mmm $ & InSePd$_5$ & 157 $P31m $ & AuCd & 189 $P{\bar 6}2m $ & GaAg$_2$ & & \\
43 $Fdd2 $ & Ge$_5$Y$_3$ & 128 $P4/mnc $ & CSc$_3$ & 159 $P31c $ & IrLi$_2$Si$_3$ & 190 $P{\bar 6}2c $ & HfSnRh & & \\
52 $Pnna $ & Bi$_3$Sr$_2$ & 129 $P4/nmm $ & LaTe$_2$ & 160 $R3m $ & As$_3$Sn$_4$ & 191 $P6/mmm $ & Ga$_2$La & & \\
55 $Pbam $ & Al$_3$Pt$_5$ & 130 $P4/ncc $ & Ge$_3$La$_5$ & 161 $R3c $ & Li$_2$ReO$_3$ & 193 $P6_{3}/mcm $ & Sr$_5$Sb$_3$ & & \\
58 $Pnnm $ & AlAu$_2$ & 131 $P4_{2}/mmc $ & La(BC)$_2$ & 162 $P{\bar 3}1m $ & Ag$_5$(PbO$_3$)$_2$ & 194 $P6_{3}/mmc $ & Ge$_3$Li$_2$Zn & & \\
59 $Pmmn $ & Ag$_3$Sn & 136 $P4_{2}/mnm $ & ReO$_2$ & 164 $P{\bar 3}m1 $ & Ag$_2$F & 198 $P2_{1}3 $ & NiAsS & & \\
61 $Pbca $ & AgF$_2$ & 138 $P4_{2}/ncm $ & Ge$_7$La$_{11}$Mg$_2$ & 165 $P{\bar 3}c1 $ & Ca$_5$CuPb$_3$ & 200 $Pm{\bar 3} $ & Au$_6$In$_5$Na$_2$ & & \\
62 $Pnma $ & AgSr & 139 $I4/mmm $ & LiTlPd$_2$ & 166 $R{\bar 3}m $ & Zr$_2$Te$_2$P & 205 $Pa{\bar 3} $ & PdN$_2$ & & \\
63 $Cmcm $ & BiZr & 140 $I4/mcm $ & Te$_3$Tl$_5$ & 167 $R{\bar 3}c $ & Ir$_3$Mg$_{13}$ & 206 $Ia{\bar 3} $ & Mg$_3$Bi$_2$ & & \\
64 $Cmce $ & Al$_3$Ge$_4$La$_2$ & 141 $I4_{1}/amd $ & NiTi$_2$ & 173 $P6_{3} $ & AlCaSi & 212 $P4_{3}32 $ & BaSi$_2$ & & \\
65 $Cmce $ & Al$_3$Ge$_4$La$_2$ & 142 $I4_{1}/acd $ & IrSn$_4$ & 174 $P{\bar 6} $ & Li$_2$Ni$_{12}$P$_7$ & 213 $P4_{1}32 $ & Ni$_2$W$_3$N & & \\
74 $Imma $ & La$_3$Pd$_4$Si & 143 $P3 $ & TiNi & 175 $P6/m $ & Rb$_4$SnTe$_4$ & 214 $I4_{1}32 $ & La$_3$SbI$_3$ & & \\
84 $P4_{2}/m$ & AlNi$_4$Zr$_5$ & 144 $P3_{1} $ & IrGe$_4$ & 176 $P6_{3}/m $ & V$_3$S$_4$ & 215 $P{\bar 4}3m $ & Li$_8$Al$_3$Si$_5$ & & \\
\hline
\end{tabular}
\caption{Excerpt of semimetal candidates, with electron filling \emph{smaller} than the number of bands in the smallest PEBR. This criterion ensures that all materials shown are partially filled (semi-)metals with SOC. A complete list will be presented in a future work.}\label{table:semimetals}
\end{table}
\section{Table of EBRs and PEBRs}
Here we give the table of elementary and physically elementary band representations induced from the maximal Wyckoff positions in all 230 space groups in a condensed form. The column labeled ``SG'' gives the space group number. ``MWP'' gives the standard name of the maximal Wyckoff position, and ``WM'' gives its multiplicity in the primitive cell. ``PG'' is the point group number of the site-symmetry group, and ``Irrep'' gives the name of the site-symmetry group representation from which each band representation is induced. The representations are labelled using the notation of Stokes, Cordes, and Campbell\cite{ssc}. The column ``Dim'' denotes the dimension of the site-symmetry group irrep. The column ``KR'' denotes whether the band representation is also a physical band representation. Those with a ``$1$'' in this column are PEBRs as-is; those with a ``$2$'' join with copies of themselves when TR symmetry is included. Finally, EBRs labelled by ``$f$'' (for first) pair with their conjugate BR labelled by ``$s$'' (and listed directly below) when TR symmetry is added.
The column labelled ``Bands'' gives the total number of bands in the physical band representation (to obtain the number of bands in the EBR without TR, divide this number by $1$ if the entry in KR is $1$, and by $2$ otherwise). The column ``Re'' indicates whether the given band representation can be made time-reversal invariant in momentum space: a $1$ in this column indicates that TR symmetry is satisfied at each $\mathbf{k}$ point, while a $2$ indicates that the given band representation must be connected in momentum space with its TR conjugate. In particular, those band representations induced from $1d$ site-symmetry representations and with a $1$ in the ``Re'' column are prime candidates for topological insulators, as discussed in Section~IV.~A of the main text. Finally, the columns ``E'' and ``PE'' indicate whether the given band representation is an exception (in the language of Sec.~\ref{sec:bandreps} and Tables~\ref{table:sbr}, \ref{table:dbr}, and \ref{table:zakwaswrong}), without and with TR symmetry, respectively. An ``e'' in either of these columns indicates elementary, while a ``c'' indicates composite. This full set of data can be accessed in uncondensed form through the BANDREP program on the Bilbao Crystallographic Server\cite{progbandrep}.
\setlength{\LTcapwidth}{\linewidth}
\input{ebr_3.tex}
\section*{Supplemental Material}
Considering the exchange of vector mesons, the potential between a pair of heavy and antiheavy hadrons at threshold takes the following form:
\begin{equation}
V\sim-F\beta_1\beta_2g_V^2\frac{2m_1m_2}{m_{\rm ex}^2},
\label{eq:potential}
\end{equation}
where $m_1$, $m_2$, and $m_{\rm ex}$ are the masses of the two heavy hadrons and the exchanged particle, respectively, $\beta_1$ and $\beta_2$ are the coupling constants of the two heavy hadrons to vector mesons, $g_V$ is a coupling parameter for the light vector mesons, and $F$ is a group theory factor accounting for the light-flavor SU(3) structure. The values of $F$ are listed in Table~\ref{tab:potentials} for all combinations of a pair of heavy and antiheavy ground state hadrons. $\beta_1$ and $\beta_2$ are positive in our convention, so a positive $F$ means an attractive interaction. For systems that can form states with both positive and negative $C$ parities, for instance $D\bar D^*\pm\bar D D^*$ or $\Sigma_c \bar \Sigma_c^*\pm \bar \Sigma_c \Sigma_c^*$, the potentials at threshold are the same within the mechanism considered here. The potentials presented here may also be used, via resonance saturation, to model the constant contact terms in nonrelativistic effective field theory studies of the heavy-antiheavy hadron interactions.
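To make the sign convention of Eq.~\eqref{eq:potential} concrete, the sketch below evaluates the combined $\rho$ and $\omega$ exchange potential for the isospin-$\frac12$ $\bar D\Sigma_c$ system, using the $F$ factors from Table~\ref{tab:potentials}. The overall coupling $\beta_1\beta_2 g_V^2$ is lumped into an arbitrary positive constant and the hadron masses are approximate, so only the sign of the result (attraction) is meaningful here:

```python
# Approximate masses in GeV; the coupling prefactor g2 stands in for the
# (positive, unspecified) product beta1*beta2*g_V^2 of Eq. (potential).
m_D, m_Sigmac = 1.867, 2.454        # anti-D meson and Sigma_c baryon
m_rho, m_omega = 0.775, 0.783       # exchanged light vector mesons

def V(F, m1, m2, m_ex, g2=1.0):
    """Threshold potential of Eq. (potential); F > 0 gives V < 0 (attraction)."""
    return -F * g2 * 2 * m1 * m2 / m_ex**2

# I = 1/2 anti-D Sigma_c channel: F = 2 for rho exchange, F = -1 for omega.
total = V(2, m_D, m_Sigmac, m_rho) + V(-1, m_D, m_Sigmac, m_omega)
assert total < 0   # net attraction: the rho term outweighs the omega term
```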
\begin{table}[h]
\caption{Potentials at threshold of heavy-antiheavy hadron pairs with only light vector-meson exchanges, see Eq.~\eqref{eq:potential}. Positive $F$ means attractive. For the systems with $F=0$, the sub-leading exchanges of vector-charmonia also lead to an attractive potential at threshold. } \label{tab:potentials}
\begin{ruledtabular}
\begin{tabular}{cccc}
System & $I$ & exchanged particle & $F$\\
\hline
$D^{(*)}\bar D^{(*)}$& 0 &$\rho,\omega$ & $\frac32,\frac12$\\
& 1 &$\rho,\omega$ & $-\frac12,\frac12$\\
$D_s^{(*)}\bar D^{(*)}$& $\frac12$ &$-$ & $0$\\
$D^{(*)}_s\bar D^{(*)}_s $& 0&$\phi$ & $1$\\
\hline
$\bar D^{(*)}\Lambda_c$& $\frac12$ &$\omega$ & $-1$\\
$\bar D_s^{(*)}\Lambda_c$& $0$ &$-$ & $0$\\
$\bar D^{(*)}\Xi_c$& $1$ &$\rho,\omega$ & $-\frac12,-\frac12$\\
& $0$ &$\rho,\omega$ & $\frac32,-\frac12$\\
$\bar D_s^{(*)}\Xi_c$& $\frac12$ &$\phi$ & $-1$\\
\hline
$\bar D^{(*)}\Sigma_c^{(*)}$& $\frac32$ &$\rho,\omega$ & $-1,-1$\\
& $\frac12$ &$\rho,\omega$ & $2,-1$\\
$\bar D_s^{(*)}\Sigma_c^{(*)}$& $1$ &$-$ & $0$\\
$\bar D^{(*)}\Xi_c^{'(*)}$& $1$ &$\rho,\omega$ & $-\frac12,-\frac12$\\
& $0$ &$\rho,\omega$ & $\frac32,-\frac12$\\
$\bar D_s^{(*)}\Xi_c^{'(*)}$& $\frac12$ &$\phi$ & $-1$\\
$\bar D^{(*)}\Omega_c^{(*)}$& $\frac12$ &$-$ & $0$\\
$\bar D_s^{(*)}\Omega_c^{(*)}$& $0$ &$\phi$ & $-2$\\
\hline
$ \Lambda_c\bar\Lambda_c$& $0$ &$\omega$ & $2$\\
$\Lambda_c\bar \Xi_c$& $\frac12$ &$\omega$ & $1$\\
$\Xi_c\bar \Xi_c$& $1$ &$\rho,\omega,\phi$ & $-\frac12,\frac12,1$\\
& $0$ &$\rho,\omega,\phi$ & $\frac32,\frac12,1$\\
\hline
$\Lambda_c\bar\Sigma_c^{(*)}$& $1$ &$\omega$ & $2$\\
$\Lambda_c\bar\Xi_c^{'(*)}$&$\frac12$ &$\omega$ & $1$\\
$\Lambda_c\bar\Omega_c^{(*)}$ &$0$ &$-$ & $0$\\
$\Xi_c \bar\Sigma_c^{(*)}$ &$\frac32$ &$\rho,\omega$ & $-1,1$\\
&$\frac12$ &$\rho,\omega$ & $2,1$\\
$\Xi_c \bar\Xi_c^{'(*)}$ &$1$ &$\rho,\omega,\phi$ & $-\frac12,\frac12,1$\\
& $0$ &$\rho,\omega,\phi$ & $\frac32,\frac12,1$\\
$\Xi_c \bar\Omega_c^{(*)}$ &$\frac12$ &$\phi$ & $2$\\
\hline
$\Sigma_c^{(*)}\bar\Sigma_c^{(*)}$ & $2$ &$\rho,\omega$ & $-2,2$\\
& $1$ &$\rho,\omega$ & $2,2$\\
& $0$ &$\rho,\omega$ & $4,2$\\
$\Sigma_c^{(*)}\bar\Xi^{'(*)}_c$ &$\frac32$ &$\rho,\omega$ & $-1,1$\\
& $\frac12$ &$\rho,\omega$ & $2,1$\\
$\Sigma_c^{(*)}\bar\Omega^{(*)}_c$ &$0$ &$-$ & $0$\\
$\Xi_c^{'(*)} \bar\Xi_c^{'(*)}$&$1$ &$\rho,\omega,\phi$ & $-\frac12,\frac12,1$\\
&$0$ &$\rho,\omega,\phi$ & $\frac32,\frac12,1$\\
$\Xi^{'(*)}_c \bar\Omega_c^{(*)}$&$\frac12$ &$\phi$ & $2$\\
$\Omega_c ^{(*)}\bar\Omega_c^{(*)}$ &$0$ &$\phi$ & $4$
\end{tabular}
\end{ruledtabular}
\end{table}
\end{document}